Spotisis: Analysis of My Spotify Streaming History

You guys might be wondering how Spotify did the 2020 Wrapped for every user. After reading this you will find it easy; anybody can do it with their own Spotify account. I’ll be showing you details about my Spotify streaming history, and I’ll be comparing my music with that of one of my friends, Sreelekshmy. Spotify lets you download your data: all you have to do is go to the privacy settings in your dashboard and click “Request data.” It will be available in 3–4 days, even though they tell you it might take 30 days. I’ll be using the JSON file “Streaming History0.json” they provided me for this project. You can find the source code at the GitHub link provided at the end of the article. So let’s get started.
These are the things I’m going to analyze
Timeline of my streaming history
Day preference
Favourite artist
Favourite songs
Diversity
Spirit of songs
Part A
The first song I heard on Spotify was Old Town Road (Jessie James Decker Version), which was the first object in my JSON. Even though I heard my first song last November, I didn’t use Spotify frequently at first. So the first thing I’m going to share is my minutes streamed per day. You can see a spike starting to rise from March 25, 2020. You guys know the reason :D. On August 28, 2020, the graph says I streamed about 260 minutes.
Streaming history
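The author’s full source code is on GitHub, but as a rough illustration, here is a minimal Python sketch of how such a per-day timeline can be computed with pandas. The field names (endTime, artistName, trackName, msPlayed) follow Spotify’s StreamingHistory0.json export; treat the file path as a placeholder.

import json
import pandas as pd

# Load the export Spotify provides after a data request.
with open("StreamingHistory0.json", encoding="utf-8") as f:
    df = pd.DataFrame(json.load(f))

df["endTime"] = pd.to_datetime(df["endTime"])
df["minutesPlayed"] = df["msPlayed"] / 60000  # ms -> minutes

# Minutes streamed per calendar day, i.e. the timeline described above.
per_day = df.set_index("endTime")["minutesPlayed"].resample("D").sum()
print(per_day.idxmax(), per_day.max())  # the busiest streaming day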
I use Spotify mostly when I’m working; it makes me do my work faster. You can see that from the pie chart below.
Favourite Artist — everybody who knows me knows I’m a big fan of A.R. Rahman and One Direction. I have played One Direction songs 352 times and A.R. Rahman songs 254 times. But when it comes to the uniqueness of songs, I have played 61 different songs by A.R. Rahman compared to 41 by One Direction. In the chart, a bigger circle denotes greater uniqueness of an artist’s songs.
The bigger circle indicates the uniqueness of songs of that artist
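Continuing the pandas sketch above (again an illustration, not the author’s published code), the per-artist play counts and unique-track counts can be derived like this:

plays = df.groupby("artistName").size().sort_values(ascending=False)
unique_tracks = df.groupby("artistName")["trackName"].nunique()
print(plays.head())                           # total plays per artist
print(unique_tracks.loc[plays.head().index])  # distinct songs per artist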
Favourite Song-
In the graph, you can see one song staying way ahead of the others: “The Nights.” This is my all-time favourite. The first time I heard this song was during my first year at college; I may have heard it more than a thousand times in my entire life.
Spotify provides each song with certain attributes; if you’re a musician you might be familiar with some of them. The attributes that Spotify provides for a song are as follows:
Danceability — A description of how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
Energy — Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy.
Instrumentalness — Predicts whether a track contains no vocals. “Ooh” and “aah” sounds are treated as instrumental in this context. The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content.
Liveness — Detects the presence of an audience in the recording.
Loudness — The overall loudness of a track in decibels (dB). Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
Speechiness — Speechiness detects the presence of spoken words in a track.
Valence — A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track.
Tempo — The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
Mode — Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.
Key — The estimated overall key of the track.
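These per-track attributes come from Spotify’s audio-features API rather than the streaming-history file. One hypothetical way to fetch them, assuming the spotipy client and placeholder credentials:

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID",           # placeholder
    client_secret="YOUR_CLIENT_SECRET"))  # placeholder

# Look up a track, then pull its audio features.
item = sp.search(q="The Nights", type="track", limit=1)["tracks"]["items"][0]
features = sp.audio_features(item["id"])[0]
print({k: features[k] for k in ("danceability", "energy", "valence", "tempo")})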
I’ll be comparing five features of my top 5 songs: danceability, instrumentalness, speechiness, energy, and loudness.
Song Diversity
Do I listen to positive songs?
Spotify also provides an attribute called valence. The valence scale runs from 0 to 1, with 1 being the most positiveness conveyed by the track.
When I plotted the histogram of my top 50 songs, it showed that I listen to less positive songs. When I plotted a Venn diagram, it showed that 28 of the songs are low-spirited (valence < 0.5).
Histogram of the valence of my top 50 songs
Venn diagram
Part B
In this part, I’ll be comparing my top 50 songs with the playlist of one of my friends, Sreelekshmy. I listen to more energetic songs, but she listens to songs with a more positive mood than I do, and the danceability of my songs is higher than hers. I have also compared the tempo and other audio features of our playlists.

Source: https://medium.com/analytics-vidhya/spotisis-analysis-of-my-spotify-streaming-history-50dc1dbbb6c | Author: Appu Aravind | Published: 2020-12-07 | Tags: Python, Spotify, Plotly, Data Visualization
3 Things I’ve Learned After 3 Weeks on the Partner Program

I decided to dive into Medium’s Partner Program three weeks ago. I’ve talked about my long-term plan to build a creative life. A crucial piece of that was to start earning a side income. I’m working on the transition away from my current day job, and I knew that I needed to start building the foundation.
I’ve been incredibly fortunate to be curated early in my process. That’s helped kickstart this whole project immensely. It’s both given me a massive boost of encouragement and far exceeded what I thought I’d achieve this soon.
I was expecting to earn a few cents this month. Instead, I broke past ten dollars. Even in my wildest dreams, I wasn’t hoping to make double-digits in my first month on Medium.
It’s not just about the money, but the money is a validation that I’m on the right path. I’ve wanted to be a writer for as far as I can remember, and now I’m being paid for it.
I’ve made a few minor mistakes so far, nothing major. I’ve also learned three essential lessons.
1. I rediscovered my integrity
I try to live with integrity. It’s something I value highly in my day job. I consistently push to do the right thing, not just the quick fix. Sometimes that causes problems for me when I’m too unwilling to bend.
I wasn’t aware of how this carried over to my beliefs as a writer. I quickly discovered that I hold the same values when it comes to my writing. I’ve passed on publishing this article twice already. I started putting it together around my 2-week mark, while I was waiting for some stories to come out in publications where I’d submitted.
I’ve decided to take the high ground
I was getting a bit uneasy about not having something released. I quickly wrote the original draft for this article. Most of it was just simple comments about curation, getting a signal boost from a publication, that sort of thing. There wasn’t anything new there. There was very little of myself in that writing. So I didn’t publish it.
I hit the same feeling yesterday and nearly published this article then. It was late on Monday, and I didn’t have anything to release. It wasn’t bad. But it was nowhere near the work I’d recently written. For the second time, I held back on publishing this story.

Source: https://medium.com/the-partnered-pen/3-things-ive-learned-after-3-weeks-on-the-partner-program-eb2ebb362eb3 | Author: Andrew Dacey | Published: 2019-12-03 | Tags: Authenticity, Writing, Lessons Learned, Creativity, Integrity
Silhouette Method — Better than Elbow Method to find Optimal Clusters
Deep dive analysis of Silhouette Method to find optimal clusters in k-Means clustering
Image by Mediamodifier from Pixabay
Hyperparameters are model configuration properties that define the model and remain constant during its training. The design of the model can be changed by tuning the hyperparameters. For K-Means clustering, there are 3 main hyperparameters to set up to define the best configuration of the model:
Initial values of clusters
Distance measures
Number of clusters
Initial values of the clusters greatly impact the clustering model, and there are various algorithms to initialize them. Distance measures determine how far each point in a cluster is from the cluster center, and different distance measures yield different clusters.
The number of clusters (k) is the most important hyperparameter in K-Means clustering. If we already know beforehand the number of clusters to group the data into, then there is no need to tune the value of k. For example, k=10 for the MNIST digit classification dataset.
If there is no idea about the optimal value of k, then there are various methods to find the optimal/best value of k. In this article we will cover two such methods:
Elbow Method
Silhouette Method
Elbow Method:
The Elbow Method is an empirical method to find the optimal number of clusters for a dataset. In this method, we pick a range of candidate values of k, then apply K-Means clustering using each value of k. We find the average distance of each point in a cluster to its centroid, represent it in a plot, and pick the value of k where the average distance falls suddenly.
(Image by Author), Elbow Method to find optimal k
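As a rough sketch of this procedure (using scikit-learn and toy blob data standing in for the article’s dataset; inertia, the sum of squared distances to the closest centroid, serves as the distance measure here, with the silhouette score that the article advocates printed alongside for comparison):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Toy 2-D data with 4 natural clusters (an assumption for illustration).
X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

for k in range(2, 9):
    km = KMeans(n_clusters=k, random_state=42).fit(X)
    # km.inertia_ is the sum of squared distances to the closest centroid.
    print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))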
With an increase in the number of clusters (k), the average distance decreases. To find the optimal number of clusters (k), observe the plot and find the value of k at which there is a sharp, steep fall in the distance. This will be the optimal point of k, where an elbow occurs.
In the above plot there is a sharp fall of the average distance at k=2, 3, and 4, which makes it confusing to pick the best value of k. In the plot below, observe the clusters formed for k=2, 3, and 4 with their average distances.
(Image by Author), Scatter plot of clusters formed at k=2, 3, and 4
This data is 2-D, so it's easy to visualize and pick the best value of k, which is k=4. For higher-dimensional data, we can employ the Silhouette Method to find the best k, which is a better alternative to the Elbow Method.

Source: https://towardsdatascience.com/silhouette-method-better-than-elbow-method-to-find-optimal-clusters-378d62ff6891 | Author: Satyam Kumar | Published: 2020-10-18 | Tags: Machine Learning, Artificial Intelligence, Education, Data Science, Clustering
How Creative People Work
I asked a bunch of writers, journalists, and other creatives how they manage their creative systems.
Recently, I’ve been doing two things that have really helped my writing:
1. For the past year, I’ve been using a spreadsheet to track my writing, input, and a few other factors. I’ve found it really illuminating to see how streaks of writing daily can benefit me, when I really need to take a break to recharge, how reading/watching in new genres has helped me break out of ruts, and how exercise, unfortunately, is very highly correlated with me writing more, and drinking alcohol is not. This is not about productivity as much as finding a balance that lets me enjoy the act of creating, and make it special and distinct from my other work.
Here’s the spreadsheet I use, which is a version of something I saw from writer Leigh Stein (if you haven’t read her excellent satirical novel “Self Care” yet, buy it here!). Feel free to make a copy and give it a try yourself, or adjust the columns, or create something totally new.
2. Since October, I’ve been using a simple word count template (created by author Hannah Orenstein) to track the progress on my novel draft on Instagram. This works for me because as a short piece writer, I was used to creating, submitting, and publishing on a much faster pace, and I missed the engagement with other people as I went. So now I get people weighing in, cheering me on, and giving me little bits of encouragement to keep going. Follow me on IG here to see them, and if you DM me there I’ll share the template with you!
These two systems work for me. What hasn’t worked for me? Trello. A paper journal. My Notes app. Voice dictation. Many other things. But as you’ll see below, those tools DO work for a lot of other writers! I asked people to share their creative working systems as well as something they wanted to shout out this year. Check them out, and see if any of them might be something to try in 2021:

Source: https://kunkeltron.medium.com/how-creative-people-work-712fb2bbca9b | Author: Caitlin Kunkel | Published: 2020-12-16 | Tags: Creative Process, Writing Life, Writing, Writing Tips, Creativity
Ansible For AWS — Manage Your Cloud Infrastructure Easily

Ansible for AWS — Edureka
Companies have invested a large amount of time and money developing and installing software to improve their operations. The introduction of cloud computing let businesses access software over the internet as a service, which proved to be more efficient and safe. Integrating an IT automation tool like Ansible that can easily provision and manage a cloud infrastructure like AWS is like hitting the jackpot. And that’s what we’re going to talk about in this Ansible for AWS article.
Agenda:
Why Companies Migrate To The Cloud?
Ansible Features
Why Use Ansible For AWS
Demo: Automate the provisioning of an EC2 Instance using Ansible
Why Companies Migrate To The Cloud?
As mentioned earlier, Cloud Computing lets companies access servers and software over the internet. To make it clear, Cloud Computing is like plugging into a central power grid instead of generating your own power. The cloud has become the new normal, and this ends up saving a lot of time and money. Let’s have a look at a few reasons why companies migrate to the cloud.
1. Flexibility:
Business growth is never static. Cloud-based services are suitable for growing and fluctuating business demands. A feature to scale up and scale down your deployment based on the requirement makes it very flexible.
2. Disaster Recovery:
Every business should invest in disaster recovery, and large companies end up investing a lot in it. Startups and low-budget companies often lack the money and the required skills, and so are unable to build a properly functional disaster recovery setup. The cloud provides disaster recovery solutions that let customers develop robust and cost-effective plans.
3. Automatic Software Updates:
As you already know, the cloud is a service provided over the internet, and hence all the servers are out of your reach, or rather, not your headache. Suppliers take care of them, which includes updating when required and running regular security check-ups. This again ends up saving a lot of time and money.
4. Reduced Costs:
Establishing a data center from scratch can get expensive, and running and maintaining it adds to the expense. You need the right technology, the right hardware, and staff with the right knowledge and experience, which just sounds like a lot of work to me, and there are a million ways it could go wrong. Migrating to the cloud gives you this plus point.
5. Scalability:
The traditional way of planning for unexpected growth is to purchase and keep additional servers, storage, and licenses. It may take years before you actually use them. Cloud platforms allow you to scale these resources up as and when needed. This dynamic scaling is perfect for unpredictable growth.
6. Data Security:
Most of the time, it’s better to keep your data in the cloud than to store it on a physical device like a laptop or hard disk. There are high chances of these physical devices getting stolen or shattered. The cloud allows you to remotely remove the data or transfer it to another server, making sure that the data remains intact and safe.
7. Increased Collaboration:
Using cloud platforms allows the team to access, edit and share documents anytime, anywhere. They are able to work together, hence increasing efficiency. This also provides real-time and transparent updates.
Ansible Features
Ansible has some unique features, and when those features are combined with Amazon Web Services, they leave a mark. Let’s have a look at these incredible features:
Ansible is based on an agentless architecture, unlike Chef and Puppet
Ansible accesses its hosts through SSH, which makes communication between servers and hosts feel like a snap
No custom security infrastructure is needed
Configuring playbooks and modules is super easy, as they follow the YAML format
It has a wide range of modules for its customers
It allows complete configuration management, orchestration, and deployment capability
Ansible Vault keeps secrets safe
Why Use Ansible For AWS?
Now that we’ve gone through the benefits of using a Cloud Platform like AWS and unique features of Ansible, let’s have a look at the magic created by integrating these two legends.
1. Cloud As Group Of Services
Cloud is not just a group of servers in someone else’s data center but much more than that. You’ll realize that once you’ve deployed your services on it. There are many services available that let you rapidly deploy and scale your applications. Ansible automation helps you manage your AWS environment like a group of services rather than a group of servers.
2. Ansible Modules Supporting AWS
Ansible is used to define, deploy and manage a wide variety of services. Even the most complicated AWS environments can be provisioned very easily using a playbook. The best feature is that you create a server-host connection, then run the playbook on just one system and provision multiple other systems, with the option to scale up and down as per requirement.
Ansible has hundreds of modules supporting AWS and some of them include:
Autoscaling groups
CloudFormation
CloudTrail
CloudWatch
DynamoDB
ElastiCache
Elastic Cloud Compute (EC2)
Identity Access Manager (IAM)
Lambda
Relational Database Service (RDS)
Route53
Security Groups
Simple Storage Service (S3)
Virtual Private Cloud (VPC)
And many more
3. Dynamic Inventory
In a development environment, hosts keep spinning up and shutting down to meet diverse business requirements. In such a case, static inventory might not be sufficient; such situations call for Dynamic Inventory. This lets you map hosts based on groups provided by inventory scripts, unlike normal inventory, which forces you to map hosts manually, which is very tedious.
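As a hedged illustration (the exact plugin and keys depend on your Ansible version), a dynamic inventory for EC2 can be configured with the aws_ec2 inventory plugin in a file whose name ends in aws_ec2.yml, for example demo.aws_ec2.yml:

# demo.aws_ec2.yml -- illustrative dynamic inventory config (assumed values)
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # Build host groups from each instance's Name tag, e.g. tag_web
  - key: tags.Name
    prefix: tag

Running ansible-inventory -i demo.aws_ec2.yml --graph would then list hosts discovered from your AWS account instead of a hand-maintained static file.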
4. Safe Automation
Assume that you have a team of 5 people and each of them has two subordinates under them who are not completely skilled. You wouldn’t want to give them complete access to the entire deployment process. That’s when you realize the need for restricting the authorization.
Ansible Tower delivers this feature to restrict authorizations. Basically, you choose who can do what, which makes moderation easier. Also, Ansible Tower encrypts credentials and other sensitive data, and you give subordinates access only to relevant resources while restricting their access to irrelevant ones.
Demo: Automate The Provisioning Of An EC2 Instance Using Ansible
In this Demo section, I’m going to demonstrate how Ansible supports AWS by showing how to automate the starting and provisioning of an EC2 instance. Let’s get started.
Step 1:
Install Ansible on your server node and make an SSH connection between your server and the client nodes on AWS. In this case, I have created two EC2 instances: one server on which Ansible is installed, and the other the client.
Step 2:
Now make sure you have all the requirements installed. According to the documentation, the requirements are as follows:
Install python using the following command:
$ sudo apt install python
Install boto using the following command:
$ sudo apt install python-pip
$ pip install boto
Boto is a Python interface for using Amazon Web Services. You can verify the installation by importing it:
$ python
$ import boto
$ exit()
Step 3:
You have to configure your AWS. Use the following command for the same:
$ aws configure
And add your AWS access key id, secret key and default region (which is optional).
Step 4:

Write a playbook to start and provision an EC2 instance.
$ sudo vi /etc/ansible/launch.yml
Mention the below lines:
---
- name: Create an ec2 instance
  hosts: web
  gather_facts: false
  vars:
    region: us-east-1
    instance_type: t2.micro
    ami: ami-05ea7729e394412c8
    keypair: priyajdm
  tasks:
    - name: Create an ec2 instance
      ec2:
        aws_access_key: '********************'
        aws_secret_key: '****************************************'
        key_name: "{{ keypair }}"
        group: launch-wizard-26
        instance_type: "{{ instance_type }}"
        image: "{{ ami }}"
        wait: true
        region: "{{ region }}"
        count: 1
        vpc_subnet_id: subnet-02f498e16fd56c277
        assign_public_ip: yes
      register: ec2
It’s a good practice to know what the code does before actually executing it. Let me explain this playbook for better understanding.
Name: It can be literally anything. A good practice is to keep a name that gives a basic description of the task it performs.
Host: Mentions the name of the host list against which the playbook needs to be executed. In my case it’s web.
gather_facts: This parameter tells Ansible to gather all the relevant facts, variables and other data for future reference. In our case, we’ve set it to false because we have no use for collecting facts (IP address, hostname, etc.).
vars: This section defines and initializes all the variables that we’ll be using in this playbook. We have four variables here:
region defines the region in which the EC2 instance needs to come up
instance_type defines the type of instance we’re trying to bring up. In our case, we are using t2.micro
ami defines the AMI of the instance we’re trying to bring up
keypair defines the key pair used to connect to the instance
ec2: This is a module provided by Ansible used to start or terminate an EC2 instance.
This module has certain parameters that we’ll be using to specify other functionalities of the EC2 instance that we’re trying to start.
We start by mentioning the AWS access key id and secret key using the parameters aws_access_key and aws_secret_key.
key_name: pass the variable that defines the keypair being used here
group: mention the name of the security group. This defines the security rules of the EC2 instance we’re trying to bring up
instance_type: pass the variable that defines the type of instance we’re using here
image: pass the variable that defines the AMI of the image we’re trying to start
wait: This has a boolean value of either true or false. If true, it waits for the instance to reach the desired state before returning
region: pass the variable that defines the region in which the EC2 instance needs to be created
count: This parameter specifies the number of instances that need to be created. In this case, I’ve mentioned only one, but this depends on your requirements
vpc_subnet_id: pass the subnet id in which you wish to create the instance
assign_public_ip: This parameter has a boolean value. If true, like in our case, a public IP will be assigned to the instance when provisioned within a VPC
Step 5:
Now that you’ve understood every line in the playbook, let’s go ahead and execute it. Use the following command:
$ ansible-playbook /etc/ansible/launch.yml
Once you’ve executed the playbook, you’ll see an instance is created.
And TADA! You’ve successfully automated the provisioning of an EC2 instance. In the same way, you can also write a playbook to stop the EC2 instance.
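For example, here is a minimal sketch of such a stop playbook. This is not from the original article; the instance id is a placeholder, and the same ec2 module parameters for credentials and region apply:

---
- name: Stop an ec2 instance
  hosts: web
  gather_facts: false
  tasks:
    - name: Stop the instance
      ec2:
        aws_access_key: '********************'
        aws_secret_key: '****************************************'
        region: us-east-1
        instance_ids:
          - i-0123456789abcdef0   # placeholder instance id
        state: stopped
        wait: true

Setting state: absent instead of stopped would terminate the instance rather than stop it.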
This brings us to the end of Ansible For AWS article. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, Python, Ethical Hacking, then you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of DevOps.

Source: https://medium.com/edureka/ansible-for-aws-provision-ec2-instance-9308b49daed9 | Author: Saurabh Kulshrestha | Published: 2020-09-09 | Tags: Amazon Web Services, Cloud Computing, DevOps, Ansible, AWS
On Becoming A Writer

When Jeff Nobbs (my partner) and I returned to San Francisco last September after several months on the road, I fell into a deep panic because I didn’t know what the heck I was going to do next with my life. Naturally, the most frequently asked question of us at that time was: “So, what’s next for you guys?”
This anxiety was further magnified because Jeff always had a good answer to the question — he had many options awaiting him, and by October, he’d picked one and pursued it. Yes, yes, I know comparison is the thief of joy, but I felt like I’d regressed to my post-college years, totally clueless as to what my life calling was, what I was good at, what I enjoyed and where I might add value. I was simultaneously paralyzed by the infinite optionality put forth by modern society. At one point, I’d listed out 30+ career paths I would be interested in exploring further, one of which was a “Wholeness Mentor” — like wtf is that? I don’t even know.
For the next couple of months, I worked on a project here, a project there, and explored, however briefly, what it would be like to start a business with my best friend. It was fun while it lasted, but I came to realize that while I liked the idea of being an entrepreneur, I didn’t actually like being one. The zealous city of San Francisco and its people had primed me into thinking everyone and anyone could be an entrepreneur, including myself, but when it came down to it, I, quite simply, didn’t enjoy the day-to-day of being one, of always having to be “on” and having to answer to everyone and their mothers. I’d deluded myself into believing I wanted to be an entrepreneur, but I quickly realized: 1) I’m not that motivated by problem-solving, and 2) I’m not that ambitious.
These are lamentable things to admit to yourself, especially living in a city that often feels like it’s populated by the world’s valedictorians, over-achievers and people who won “Best All-Around” in their high school yearbook, where every single person wants to change the world for the better. To admit that I didn’t care much about making the world a better place, without eroding my sense of self-worth, was tough, really tough.

Source: https://medium.com/sumofourparts/on-becoming-a-writer-6da293335a3d | Author: Renee Chen | Published: 2019-04-08 | Tags: Life Lessons, Writing, Self Improvement, Life, Creativity
How to Support Secured Connections inside Micronaut’s GraalVM

Micronaut has great out-of-the-box support for GraalVM. I tried a simple task: creating a Micronaut function, handled by Amazon API Gateway, that connects to Amazon RDS PostgreSQL. The following command generates the skeleton of a function handling the API Gateway proxy using GraalVM:
mn create-app graalvm-function --features aws-api-gateway-graal
See Custom GraalVM Native Runtimes for more information about API Gateway and GraalVM functions.
Next, we create a domain class, controller and repository using Micronaut Data JDBC. Feel free to copy from the Pet Clinic example. How to use Micronaut Data JDBC goes beyond this post, but there is one important part of the documentation: Going Native with GraalVM. Please follow the Configuration for Postgres section.
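For orientation, a typical application.yml datasource block for PostgreSQL might look like the following. The host, database name and credentials are placeholders, not values from this project:

# application.yml -- illustrative datasource config (placeholder values)
datasources:
  default:
    url: jdbc:postgresql://your-rds-host:5432/yourdb?sslmode=require
    username: youruser
    password: yourpassword
    driverClassName: org.postgresql.Driver
    dialect: POSTGRES  # used by Micronaut Data JDBC

The sslmode=require part forces the kind of secured connection that triggers the SunEC issue described below.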
Now we come to the part which is not included in any guide yet. If you deploy the function with the deploy.sh command and test it, either from the AWS Lambda console or by setting up an API Gateway endpoint and issuing an HTTP request, then you will get the following warning and exception:
WARNING: The sunec native library, required by the SunEC provider, could not be loaded. This library is usually shipped as part of the JDK and can be found under <JAVA_HOME>/jre/lib/<platform>/libsunec.so. It is loaded at run time via System.loadLibrary("sunec"), the first time services from SunEC are accessed. To use this provider's services the java.library.path system property needs to be set accordingly to point to a location that contains libsunec.so. Note that if java.library.path is not set it defaults to the current working directory. com.amazonaws.serverless.exceptions.ContainerInitializationException: Error starting Micronaut container: Bean definition [javax.sql.DataSource] could not be loaded: Error instantiating bean of type [javax.sql.DataSource]: sun.security.ec.ECKeyPairGenerator.isCurveSupported([B)Z [symbol: Java_sun_security_ec_ECKeyPairGenerator_isCurveSupported or Java_sun_security_ec_ECKeyPairGenerator_isCurveSupported___3B]
The reason is that the function sets up a secured connection to PostgreSQL inside the VPC using the SunEC library. The library is provided by the libsunec.so file inside the GraalVM distribution. Two steps need to be accomplished.
First, copy the libsunec.so file into the deployment archive function.zip. In the Dockerfile located in the root of the project, replace the line RUN zip -j function.zip bootstrap server with the following two lines:
RUN cp /usr/lib/graalvm/jre/lib/amd64/libsunec.so libsunec.so
RUN zip -j function.zip bootstrap server libsunec.so
Second, replace the last line of the bootstrap script with the following, to let the application find the libsunec.so library file:
./server -Djava.library.path=$(pwd)

Source: https://medium.com/agorapulse-stories/how-to-support-secured-connections-inside-micronauts-graalvm-730800d2ed03 | Author: Vladimír Oraný | Published: 2020-01-21 | Tags: Graalvm, Java, AWS, Micronaut, Tech
At the Center of All Beauty
“The secret to contentment is low overhead.” —Fenton Johnson
Photo: Pedro Robredo — Personal files of Fenton Johnson
Award-winning writer Fenton Johnson explains that this advice in his new book, At the Center of All Beauty, Solitude and the Creative Life, is a variation on Marianne Moore’s version, “The cure for loneliness is solitude.” His book is balm, validation, even celebration of all “solitaries,” his description for those of us who, like him, actively cherish and thrive in solitude. We solitaries draw our creative and artistic juices from within ourselves, from the boundless field of empty space with which we love to surround ourselves, an open corral in which to let our muses romp and run wild, unfettered, uninfluenced by external stimuli or others.
Solitaries are not only recluses or hermits: we love and delight in our own company, and we are often avid social beings. I was reminded of the results of my Myers-Briggs test (taken in the ’80s, before it was dumbed down to psychobabble). I was mildly surprised to place on the spectrum toward introversion, believing myself outgoing and friendly. But the moderator explained that what this metric meant was that I felt drained after some time among people. I needed to recharge my mental and physical energies all by my lonesome. I realized how true that had been all my life. As often as I indulge in my social communities, my many urban tribes, my intimate friendship circles, I need to retreat to the silence of my own little nest.
“Silence and solitude set the imagination free to roam, which may be why capitalism devotes itself so assiduously to creating crowds and noise.”
Johnson fans will already know what he reveals early on, that he enjoyed the love of his life, a man who died of AIDS in Paris years ago. Since then, Johnson has cherished his rich literary life and as a writing professor been gratified to nurture aspiring writers and to be an icon for the worldwide writing community.
At the Center of All Beauty encompasses an engaging weaving of Johnson’s own life as a life-long solitude-lover with the lives of nearly a dozen artists in various disciplines, including Henry David Thoreau, Emily Dickinson, Paul Cézanne, Walt Whitman, Eudora Welty, Nina Simone, Zora Neale Hurston, Rod McKuen, Rabindranath Tagore, and Bill Cunningham. Johnson illuminates the cultural prejudice toward loners and the societal pressure, sometimes counterproductive, to engage in coupling. Note that some of his subjects — Cézanne, Simone, Hurston, Tagore — did couple in marriage, yet still fit the profile of solitary, each fiercely protective of her/his interior space. Of Whitman and Cézanne, Johnson writes, “Each preferred the world of their vast and fertile imagination over the confining world of fact.”
Describing what his subjects have in common, Johnson, a Zen Buddhist devotee, says, “each lost the self to find the self,” referring to Thoreau in the woods, Cézanne to his painting, Welty in her art, Tagore in his music and poetry, Simone in her music.
“Perhaps . . . what defines my solitaries — a reluctance to sacrifice openness to all for openness to one.”
While his book is a torch-bearer for solitaries, it is not a diatribe against coupling/marriage. It merely shines a bright light on the prevalent blind faith in those cultural assumptions, “the avalanche of messages telling us that marriage is our most noble means of self-sacrifice.” Like his subjects, he says, many solitaries “sacrifice ourselves . . . not for our individual wealth but for the common wealth.” Dickinson, for example, had an offer of marriage that she turned down, one can argue to humanity’s benefit, given the body of lofty work she left us. She demonstrates, as Johnson proposes, that “solitude, not marriage, is the more selfless choice.”
What I personally loved about At the Center of All Beauty was learning more about the author. Wikipedia describes Johnson as the last of nine children in a whiskey-making family. (I wrote him to say I am the fifth of ten in a pasta-making family.) Like me, Johnson was raised in a Roman Catholic family, in his case right next to the Trappist Monastery Gethsemani in Kentucky. I envy his having known Thomas Merton, the Catholic convert whose writings on mysticism are still influential. Johnson acquaints us with his blood family, the rural setting of his youth, the southern foods at their table, much of which they cultivated. He gives us the sense that his parents, notwithstanding a large brood, had solitary lives, as well as an open-door policy, including lots of social activity with the neighborly monks.
In his defense of solitaries, Johnson delves skillfully into the hidden subtext of our notions of love being fused to coupling. Commenting on the lifelong bond between Cézanne and the writer Émile Zola, he writes, “That such ecstatic friendship has fallen from our lives and art is due in part to our obsession with labels (gay, straight, married, single), and partly due to our elevation of church-designed, government-sanctioned marriage as the apogee of human relationship. Somewhere, in part in service to capitalism, the notion took hold that to be worthy of celebration, love must be certified by government or church edict, when my experience has that love does not submit itself to logic or reason, calendar or clock — that one may love differently perhaps, but as intensely in a moment as across a lifetime.”
As Johnson trains our eye on the artist’s work one hears his religious breeding: “In Cézanne’s painting the sacred becomes flesh and dwells among us.” We hear his mystical vision as he notes how solitaries are some of the most agile at transcending the artifice of time: “Long before quantum physics, Cézanne understood that all moments are present to this moment.”
Perhaps only the bona fide solitary can know the opiate-high of the zone, the flow, the gratifying choice of aloneness, where time is irrelevant or as Johnson quotes Albert Einstein: “This distinction between past, present, and future is an illusion, however tenacious.” He writes, “Only the marathon runner, high on endorphins, or the heroin addict, or the besotted lover in the presence of the beloved . . . can understand . . . what it means to live outside time — to live, in fact, not in the past or future but in the mystic eternal now.”
Tipping the balance of prejudice toward pro-friendship is crucial, Johnson says, because “. . . the very survival of the species depends on our transcending ties based on blood and marriage . . . the ties of blood which perpetuate and reinforce conflict — recognizing instead the bonds of love, with friendship, not marriage, as the tie that binds.”
“ . . . to understand only biological offspring as our children is to shortchange the great human impulse toward magnanimity, toward altruism.”
Revealing that he practices “celibacy not as negation . . . but as joyous turning inward,” Johnson gives us a pearl to that end from famous solitary Dickinson who poeticized herself as an “Inebriate of air, debauchee of dew.” Johnson lyrically describes the Belle of Amherst as the “most promiscuous of celibates.”
In these times of enforced solitude, what better book to shelter in place with, than this one, which squarely places you At the Center of All Beauty. | https://medium.com/nomudnolotus-writer/at-the-center-of-all-beauty-3561a083d732 | ['Camille Cusumano'] | 2020-06-09 01:24:31.005000+00:00 | ['Books And Authors', 'Nonfiction', 'Solitude', 'Writer', 'Creativity'] |
1,708 | How To Do Less and Still Grow Your Business | Then one day I realized I couldn’t hold a conversation without popping my ears and twitching my right eye; interesting, I thought. Where the hell is this coming from, and what is my body telling me?
I then realized I had been unsatisfied, controlling, and continuously swinging between two different kinds of guilt (not doing enough and doing too much at the same time) for the longest time.
I needed a way out: all this doing wasn’t even supporting my dreams, I was utterly miserable, and I was still struggling to make ends meet.
Going back to my values
I went back to a very simple question: why was I doing what I was doing, and did my behaviors align with my values? It was nice to reinforce my whys and realize that I was doing work that made me feel extremely satisfied and sparked my creativity. It was also hard to face that the way I was acting towards my family, self, and business wasn’t aligning with my values.
When did it become OK to put my self-care and the quality time spent with the people I loved the most on the back burner? I justified my busyness because I had so many things to tick off daily, but could I shorten the mental list I was relying on? To do so, I had to apply…
The Law of the vital few
In layman’s terms, I had to figure out which of my activities were actually beneficial for my business. To give you an example, I struggled with running FB ads; for some reason, I found them quite pricey and somewhat tricky to set up. I second-guessed my choice every time I clicked send, and I didn’t know how to read the analytics. It turned out that although I was investing a decent amount in my ads, I wasn’t receiving much in return, only hours of headaches and tons of insecurity. I then stopped running ads and…nothing changed. It showed me that I was investing much of my precious time in an activity I clearly wasn’t good at, and that didn’t spark any joy (more on it in the next point).
Writing content and doing research, instead, make me tremendously happy, mainly because I know I’m able to reach and support a broader audience. Writing an article wasn’t always the most profitable task, but I knew that it was taking me in the right direction, as I was reaching the lives and the hearts of my target audience. I just had to wait for the love to come around, and in the meantime, I had enough articles posted to create a program or two, plenty of webinars and workshops, and never-ending lead magnets. Content creation was my thing.
Outsource
To go back to the example of the Facebook ads, I’m not saying they don’t work (apparently, 85% of clicks on FB come from friends and paid ads, as it is almost impossible to grow organically on a platform that is clogged with information), but that I didn’t know how to make them work for me. That’s when I decided to outsource, and I also hired an amazing girl to create my IG posts. She would spend 10 minutes on a post, whereas it would take me hours to come up with a decent design.
Outsourcing is an incredible tool, and I applied it to different areas of my life; for example, I hire a cleaner to come and tidy up our home twice per month, so the only thing I have to do is keep the kitchen clean until her next visit. I know that I come from a privileged place where I have the opportunity to hire a cleaner, but I have also been babysitting for other people to get the same favor in return. Also, I don’t spend a dollar on clothes, cosmetics, and shoes unless I really need to.
Find what floats your boat and start asking for the help you need.
Focus on your mental health
As long as I concentrated on doing, instead of being, I had literally no chance to grow. As long as stress and anxiety were taking up massive real estate in my brain, I couldn’t see the light at the end of the tunnel. I used to focus on short-lived gains, which led me to feel overwhelmed in the long term. I used to say yes to everything, for fear of missing out. I used to prioritize everything else but me, with the mentality of “I’d rather do it myself as I can’t be bothered asking.”
I had to get to a miserable place of exhaustion before sitting down and focusing on patching up my broken cup, instead of filling it mindlessly.
Also, journaling helped. I would love to say that I turned to meditation and life became magic in less than a week; that would give you a neat tool to utilize and put in place. Unfortunately, meditation didn’t help me as much as it had in the past, but journaling came to the rescue: it was a moment of mindfulness in my weekly routine where I would sit, regroup, and recharge.
Fix your energetic leak, recharge your batteries, and find the way that works for you. As much as I would love to be able to work 80 hours a week and shine, my body and mind can get easily overwhelmed by 25 hours a week of intense focus. Find out who you are and grow from there. | https://medium.com/an-idea/how-to-do-less-and-still-grow-your-business-6c1586ca2198 | ['Claudia Vidor'] | 2020-12-07 06:38:28.121000+00:00 | ['Money', 'Self Improvement', 'Startup', 'Business', 'Productivity'] |
1,709 | How Did Ancient People Deal With Boredom? | Photo by Joshua Rawson-Harris on Unsplash
People around the world have been subjected to a number of changes in their lives over the last few months.
If we’re lucky, we’re still healthy and employed. But that doesn’t make it easy — we’re stuck apart from our friends, subject to new anxieties about our health and our economic futures, waiting for faraway officials to tell us when and how we will be able to resume our “normal” lives.
Perhaps the most common feeling for many of us right now is boredom. It’s why people are engaging in increasingly elaborate internet video challenges or baking complex breads. Frankly, it’s part of why I’m writing this right now.
But we’re facing this time of boredom with more resources than anyone before us — we have the ability to browse the internet, watch or listen to pretty much anything ever made, and order most goods on Amazon. In short, during the most boring months of our lives we still have more ways to entertain ourselves than some of the most privileged people of centuries past.
Though the English word “boredom” did not come into use until Charles Dickens used it in Bleak House in 1852, boredom has been a well known phenomenon for most of human history. Indeed, imagining the tedium and repetition involved in pretty much any way of making a living in ancient or medieval times would be enough to terrify most of us in the overstimulated modern world.
Imagine spending every day churning butter, or tending a field of crops, or weaving fabric — all by hand. Few people moved beyond their immediate surroundings; one historian estimates that 80% of medieval Europeans, for example, never traveled more than 20 miles from their homes.
Most people in the past did the same (often physically grueling) tasks day after day in the company of the same small group of people. Their options for entertainment were slim — most people likely could not read in these societies, and the same myths and stories were repeated over and over. So how did they experience boredom, how did they cope, and what can we learn from them about our predicament?
Despite the fact that, from our vantage point, boredom must have been endemic in the pre-modern world, the concept shows up unevenly in records from ancient societies. The ancient Greeks rarely referred to boredom, and didn’t really have a word that maps onto our concept of boredom.
The words that they used to describe states like boredom also meant things like “distraction” or “disgust” or “irritation.” Aristophanes describes one character’s boredom at having to wait for the Athenian Assembly to begin through action rather than description — he says, “I groan, I yawn, I stretch, I fart, I don’t know what to do.” You may have been in this situation recently — unable to precisely name your mental state but restless nonetheless.
A Roman bust of Aristotle (public domain)
Perhaps boredom was such a ubiquitous part of ancient Greek life that it wasn’t necessary to name it precisely. In the same way that we don’t think about the fact that we are breathing all the time, maybe ancient Greeks didn’t identify their constant boredom, just its symptoms.
There may have been cultural attitudes involved, as well. Some of Athens’ most prominent philosophers praised their fellow citizens’ ability to enjoy their leisure. Aristotle, for example, saw leisure as absolutely crucial for creating a learned and politically engaged populace.
In his Politics, he wrote, “we should be able, not only to work well, but to use leisure well; for, as I must repeat once again, the first principle of all action is leisure.” Though it’s unclear how widespread these attitudes were, perhaps Greek cultural ideas about the beneficial effects of leisure made them more comfortable with the conditions that could lead to boredom.
Romans, a culture that valued hard work and discipline perhaps more than the Greeks, discussed boredom more frequently, and in ways that map onto our modern experience. Seneca, the Stoic philosopher and playwright who lived between 4 BCE and 65 CE, describes the mix of restlessness and inertia that many of us feel these days:
Thence comes that feeling which makes men loathe their own leisure and complain that they themselves have nothing to be busy with. For their unhappy sloth fosters envy, and, because they could not succeed themselves, they wish every one else to be ruined; then from this aversion to the progress of others and despair of their own their mind becomes incensed against Fortune, and complains of the times, and retreats into corners and broods over its trouble until it becomes weary and sick of itself. For it is the nature of the human mind to be active and prone to movement.
Seneca emphasizes the ways in which boredom (he used the word taedium) can curdle into more harmful emotions like bitterness and anger if it’s not dealt with properly. Seneca’s remedy for boredom is pretty simple: he says that we need to find something to do. if we have nothing to do around the house, he encourages his readers to engage in public affairs. So Seneca would advocate what a lot of us are already doing — baking bread, repainting the bedroom, and perhaps gearing up to get involved in the upcoming election.
“Months of boredom punctuated by moments of terror” became a common description of warfare during the First World War. Rome was known for its military exploits more than pretty much anything else. Its massive armies spent long periods of time away from home, and many soldiers found themselves with long periods of time during which there was nothing to do.
Ennius, a Roman playwright, has a soldier character say, “When there is a lazy beginning the mind doesn’t know what it wants… The mind wanders indecisively; we only live sort of a life.” Perhaps this half-living seems very familiar to you right now; boredom can often make the days seem endless and pointless.
Plutarch, a Greek historian writing during the Roman period, warns of the problematic effects of boredom on soldiers’ morale. He wrote about Eumenes, a general whose troops were trapped in a siege. Eumenes prescribed his men exercise — he had them walk around a large house over and over again, speeding up with each lap.
One of the desert fathers, early monks who battled the sin of boredom (public domain)
During the early Christian era, boredom came to seem worse than just a condition of life — it could be a sin.
The monastic life, like many religious vocations, was deliberately constructed to include long, low-stimulation periods of silence and contemplation. Monks, like us, had their freedom and social interaction severely limited; this deprivation was supposed to allow them to focus on getting closer to spiritual truths. So where was the line between virtuous simplicity and boredom?
Early medieval monks spoke of the vice of acedia, a Greek word originally meaning “indifference” or “lack of care.” Evagrius, an early desert monk, spoke of acedia as the “demon of noontide” that would come to ruin monks’ moral and spiritual commitment. Acedia was labeled one of the deadly sins (the eighth; it was later folded in with sloth). Evagrius’ disciple John Cassian described a monk suffering from acedia this way:
He looks about anxiously this way and that, and sighs that none of the brethren come to see him, and often goes in and out of his cell, and frequently gazes up at the sun, as if it was too slow in setting, and so a kind of unreasonable confusion of mind takes possession of him like some foul darkness.
It’s this lack of focus that made boredom dangerous for monks. Becoming unmoored in the long, hot hours of the middle of the day made them vulnerable to temptation. Perhaps they would wonder why they had chosen the religious life at all; perhaps they would just lose focus on the spiritual life they were supposed to be living.
Life in boredom can seem purposeless, as I’m sure you know. So how should we fight acedia? Some monastic rules prescribed communal reprimand — one’s fellow monks should scold those falling victim to boredom and help them to focus on spiritual contemplation.
So you’re bored — I am too! So were most people throughout history! But there are some time-tested solutions. Ancient authors had a number of remedies for the boredom that they faced. Some encouraged bored people to lean into it — to enjoy the leisure they had.
Others encouraged the bored to get busy, finding a project or taking up exercise. Others said to lean on their community, allowing those around them to jolt them out of their stupor. Whatever you choose, take refuge in the fact that your boredom is alleviated by modern technology and — most importantly — temporary. | https://medium.com/lessons-from-history/how-did-ancient-people-deal-with-boredom-dfeae9f9aa74 | ['Historical Insights'] | 2020-04-29 20:20:10.394000+00:00 | ['Self Improvement', 'Boredom', 'History', 'Society', 'Coronavirus'] |
1,710 | Mental Health Awareness is More Than Just a Meme | I speak candidly from my own experience with PTSD and clinical depression, which were exacerbated by six years of late-stage neurological Lyme disease and the hormonal and emotional upheaval brought on during menopause.
I was in a very dark place for years. I could barely muster the energy to get out of bed, and I was miserable every minute of every day from constant, debilitating pain. Without sleep or physical healing, my emotional and mental faculties were overwhelmed, and I became an overly-sensitive, anxious, irritable, melancholy mess.
Instead of earning compassion and kindness from those around me, I earned disdain and alienation. Part of me doesn’t blame others. Clearly, I was not a fun person to be around. The truth is — I was drowning, and even I didn’t understand how far I’d fallen into the pit of depression until I no longer recognized myself.
Sadly, some of the worst offenders when it came to recognizing my mental illness were people in the medical or therapeutic fields.
These also happen to be the very people who post enthusiastically online about how woke they are when it comes to mental illness. They make sure everyone knows they are ‘warriors’ who donate money and support the mental health community.
What I’ve seen from many of these mental health and medical professionals is outdated knowledge gleaned from college courses they took thirty years ago. On stage, they can pontificate about theory and collect awards for their service and valor. They recognize people with extreme mental illness who need heavy-duty medications or hospitalization or constant supervision to survive.
However, mental illness in real life isn’t always like it is in the movies. We aren’t all banging our heads against walls and running naked through the halls in the Cuckoo’s Nest.
In everyday life, many people, including therapists and clinicians, fail to recognize that millions of people — their friends and family members included — are living and working and struggling with varying degrees of mental illness.
Just because we are functioning at high levels and look normal on the outside doesn’t mean we aren’t silently struggling every minute of every day.
We’re not just having bad days.
We’re not just a Debbie Downer.
We’re fighting demons.
And sometimes the demons are winning. | https://medium.com/narrative/mental-health-awareness-is-more-than-just-a-meme-78d9a93dadbd | ['Lizzie Finn'] | 2020-10-10 15:08:00.624000+00:00 | ['Life Lessons', 'Mental Illness', 'Mental Health', 'Depression', 'Health'] |
1,711 | How Much Does It Cost To Make a Mobile App Like UberEats and Deliveroo | How Much Does It Cost To Make a Mobile App Like UberEats and Deliveroo
Are you in the food industry and planning to take your venture to the next level with on-demand food delivery apps? If yes, then your very next questions will be: how much does it cost to develop a food delivery app? How do you set tasks for the developers, and which functionalities should you consider first? Let’s follow the app clone of top food delivery apps, including UberEats and Deliveroo, to understand their background and what exactly makes them successful in a competitive market.
Before jumping into the features and functionalities of the app, it is worth understanding the background of food delivery apps and their scope.
Why On-Demand Food Delivery Apps Have Become So Demanding?
Have you noticed that food delivery companies like UberEats and Deliveroo have become dominant names in the food industry today, largely because of the convenience they offer? The impact on Americans is clear: 60% of US consumers order delivery or takeout at least once a week.
In this fast-paced life, food delivery apps are a true blessing. On-demand food delivery applications have created a platform where customers and restaurant chains can easily meet each other’s needs. Here are a few reasons behind the growing popularity of online food ordering apps, and why they are an ideal choice for startups.
Raise the Business: Restaurants are finding that online food delivery apps are a far more straightforward and convenient way to receive delivery orders than taking them over the phone. Moreover, 60% of restaurant operators say that offering delivery has lifted their sales.
Improve Customer Relationship Management: 43% of restaurant professionals say they believe third-party apps help build a direct relationship between a restaurant/bar/pub and its customers. Seamless online food ordering solutions have modernized customer relationship management and can offer all the required services, right from food ordering to quick delivery at the customer’s home.
Enhance Restaurant Business Promotions: On-demand applications integrate with multiple social media platforms like Facebook, Twitter, and Instagram, providing a great platform for seamless business promotion. The various types of online promotion help attract a large number of people to the app.
Expand Customer Base: According to a survey, customers who place an online order with a restaurant visit it 67% more frequently than those who don’t use the app. Moreover, working with a third-party delivery service has been found to raise restaurant sales volume by 10 to 20%.
Additional key facts and some insights portraying how powerful online food ordering apps have become:
63% of customers agree that ordering food from an online food app is more convenient than dining out with the family.
According to studies, UberEats grew by 230% last year, with its average customer spending more than $220 annually.
New food delivery platforms are sticky: once users sign up, an average of 77% of customers never or rarely leave the platform or switch to another app.
Digital ordering and delivery have grown 300% faster than dine-in traffic since 2014.
It is estimated that mobile orders will make up close to 11% of all QSR sales by 2020.
GrubHub, UberEats, DoorDash, and Postmates have become the most powerful food delivery apps in the industry, acquiring the largest share of the market.
Now the central question is: why are food delivery apps like UberEats, Postmates, DoorDash, and Deliveroo booming?
What Makes UberEats and Deliveroo Leading Food Delivery Apps?
There is no doubt that people always go for the option that adds convenience and comfort to their life. This mantra applies to everything, from live movie streaming apps to music listening apps, and from the games that we play to the food that we eat. There are multiple factors behind the boom of food delivery apps like Deliveroo and UberEats.
So, before learning the process of developing a successful online food delivery app, it is worth understanding the blend of technology that UberEats has used to add a great level of comfort:
For Payment integration: UberEats uses Braintree and Stripe, along with e-wallets and cash payments.
Database Storage: UberEats uses AWS and Google cloud services for a better cloud environment, storage, and backup.
GEO tracking: UberEats uses the Google Location API on Android and the Core Location framework on iOS, which gives it a seamless location-tracking experience (see the distance/ETA sketch after this list).
Navigation: UberEats uses the Android Maps API and MapKit on Apple devices.
Listing or Menu: To display the menu list based on location, the developer can use a popular places API like Foursquare.
Analytics: To review the performance and analytics of the business, UberEats uses Google Analytics or Mixpanel.
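To make the geo-tracking point concrete, here is a minimal, hypothetical Python sketch of the kind of calculation a courier-matching service runs once the device’s location API returns coordinates: the great-circle (haversine) distance between courier and restaurant, and a rough ETA. The function names and the average-speed figure are illustrative assumptions, not UberEats’ actual implementation.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two (lat, lon) points
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rough_eta_minutes(distance_km, avg_speed_kmh=20.0):
    # Naive ETA assuming a constant average courier speed (an assumption)
    return 60.0 * distance_km / avg_speed_kmh

# Sample points: courier in lower Manhattan, restaurant near Washington Square
d = haversine_km(40.7128, -74.0060, 40.7306, -73.9866)
print(f"{d:.2f} km, ~{rough_eta_minutes(d):.0f} min")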
In a Nutshell: The UberEats clone may sound excellent for any food delivery business, but the performance of the app depends upon the mobile app development company you hire for the development. So before collaborating with any mobile app developer, it is worth discussing your app plans and needs.
How To Develop A Food Delivery App Like UberEATs and Deliveroo?
There is no doubt that developing food delivery apps like UberEats, Postmates, DoorDash, or Deliveroo requires a huge investment. The upfront cost of app development from scratch can start at $25,000 per project, which is quite expensive for startups. So, what if you don’t have that kind of budget? Should you drop the plan to hire an app developer?
Well, everything has a solution. Today even some small businesses are coming up with something similar to UberEats at a fraction of the cost. Many app development companies make use of existing APIs to shorten the app development process and reduce the cost. All you need is to hire the right app development team, one that can understand your business requirements and build the app within a limited budget.
Here’s exactly what you need to focus on while developing a food delivery app like UberEats:
Understanding the Key Components of UberEats
Key Features to Build an App Like UberEATs
App Technology Used For Android/iOS/Cross-Platform
Monetizing Strategy to Make Profit From the App
Let’s dig deeper into each point for a better understanding:
1. Understanding the Key Elements of UberEATs
Today, UberEats has become the fastest-growing food delivery app: it has partnered with more than 50,000 restaurants and provides an enormous number of food options. Moreover, with a $2.8 trillion addressable market, it made up 22% of the company’s total bookings in 2019. Now the central question is: how do they manage everything, from heavy traffic on the app to orders, couriers, and restaurant partners?
Well, the food delivery apps like UberEATs have 3 major elements:
For Customers: This version helps customers choose from an extensive list of restaurants and menus based on their location. All food deliveries are made at the customer’s chosen time.
For Courier partners: This version is for the drivers who have signed up or registered as part of the delivery network. Once an order is placed, it is allocated to a driver based on location, and both the restaurant and the customer are notified with the approximate delivery time.
For Restaurant partners: Restaurants receive detailed order information; they are responsible for updating the order status and sending notifications to the customers and drivers. They have access to the list of the current orders placed each day. (A minimal sketch of this order flow follows below.)
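To illustrate how the three app versions coordinate, below is a minimal, hypothetical Python sketch of an order lifecycle in which the restaurant (or courier) updates the status and every subscribed party is notified. All names and the set of statuses are illustrative assumptions; a production system would run this behind a backend API with persistent storage and push notifications.

from dataclasses import dataclass, field

# Simplified order lifecycle (assumed transitions)
NEXT = {
    "placed": {"accepted", "cancelled"},
    "accepted": {"preparing"},
    "preparing": {"picked_up"},
    "picked_up": {"delivered"},
}

@dataclass
class Order:
    order_id: str
    status: str = "placed"
    listeners: list = field(default_factory=list)  # customer/courier callbacks

    def subscribe(self, callback):
        self.listeners.append(callback)

    def update_status(self, new_status):
        # Called by the restaurant or courier app; notifies all subscribers
        if new_status not in NEXT.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        for notify in self.listeners:
            notify(self.order_id, new_status)

order = Order("ORD-42")
order.subscribe(lambda oid, s: print(f"[customer] {oid} is now {s}"))
order.subscribe(lambda oid, s: print(f"[courier] {oid} is now {s}"))
for step in ("accepted", "preparing", "picked_up", "delivered"):
    order.update_status(step)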
2. Key Features To Develop An App Like UberEATs
When it comes to developing a mobile app, its features and functionality can make a great difference in how well it is received in the market. Moreover, features and functionalities can easily eat up your budget, so it is important to set a budget first and choose accordingly which features you need to integrate into the application.
Since the UberEats clone is layered with different app versions, we have categorized the major features required for the customer and courier app versions, along with the estimated development hours.
If you are a startup planning to enter the food industry on the lowest budget, you can consider developing an app with MVP (basic) features. Here’s the breakup of a basic food delivery app:
If you are looking for a food app with advanced features, the development cost of the customer app version starts from $20,000+ and can climb much higher depending on scope.
You can check the breakup below:
And here are the necessary features for the courier app version, which starts from $10,000 and can go up to $35,000+ depending on the choice of features and the complexity of the app.
3. App Technology Used For Android/iOS/Cross-Platform
The tech side of the app depends upon the operating system you choose to launch on. For a native Android app, it is worth hiring a mobile app developer expert in Java, with Node.js commonly used on the backend, whereas Objective-C and Swift are excellent options for iOS app development. If you are targeting a large number of customers across multiple platforms, a cross-platform app built with Flutter or React Native can be the optimum choice.
4. How To Make Profit From Your Food Delivery App?
No matter how amazingly you have designed your app, there is no point in building a beautiful product or hiring the best software development company if there is no way to make a profit from it. So here are the main app monetization methods used by UberEats:
Commission from Restaurants: This is one of the most common and efficient ways to earn from the app. UberEats takes a 15% to 40% commission on the orders it fulfills.
Advertising Income From Restaurant Partners: UberEats charges its restaurant partners a promotion fee to increase their visibility in the app’s listings. As the number of restaurants grows, it becomes important for them to invest in marketing to keep their name visible to customers on the app.
Delivery Fee From Customers: Charging a delivery fee plus surcharges during peak hours, including lunch and dinner time, is another worthwhile way to make a profit from the app (see the surge-fee sketch after this list).
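To make the peak-hour pricing idea concrete, here is a minimal, hypothetical sketch of a delivery-fee calculation with a surge multiplier during lunch and dinner windows. The fee amounts, time windows, and multiplier are invented for illustration; they are not UberEats’ actual pricing rules.

from datetime import time

BASE_FEE = 1.99          # assumed flat base fee in USD
PER_KM = 0.50            # assumed per-kilometre charge
PEAK_WINDOWS = [(time(12, 0), time(14, 0)), (time(18, 0), time(21, 0))]
SURGE_MULTIPLIER = 1.5   # assumed peak-hour multiplier

def delivery_fee(distance_km, order_time):
    fee = BASE_FEE + PER_KM * distance_km
    if any(start <= order_time <= end for start, end in PEAK_WINDOWS):
        fee *= SURGE_MULTIPLIER
    return round(fee, 2)

print(delivery_fee(3.2, time(13, 30)))  # lunch rush -> surged fee
print(delivery_fee(3.2, time(16, 0)))   # off-peak -> base pricing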
5. Hire a Development Team
To build an app like UberEats, you need a professional IT team that understands the app clone and is able to develop an app that helps you launch a successful food delivery service.
Basically, the mobile app development team should include:
Business Analyst
Project Manager
Backend/Frontend Developer
UX/UI designer
Quality Analyst
There are two ways to hire a team of app developers: build an in-house development team or engage an offshore app development company. On one side, communicating with an in-house mobile app development team is easier, whereas outsourcing to a mobile app development company can be the cheaper option. For reference, you can hire a mobile app developer for $15 to $50 per hour in India, versus around $150 per hour in the US and $100 per hour in the UK.
Cost to Build a Food Delivery App Like UberEats
Estimating the overall price of a food delivery app is a complicated task, as a number of factors affect the development cost. Right from the features, functionality, complexity, and size of the app, to the development team, its location, and the technologies used, a plethora of factors influence the cost of app development.
The average cost to develop a food delivery app starts from $15,000 to $25,000+ with basic features, but the price can rise steeply depending on the choice of features and functionality. The more complex the app structure is, the higher the development cost will be. If you want to estimate the total price of app development, follow this simple formula:
Developer’s per-hour cost × Total development hours = Total cost of the app development
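The formula is trivial to turn into code. Below is a small sketch that applies it to a hypothetical feature breakdown; the hour counts and the $40/hour blended rate are made-up examples, not figures from the estimate tables above.

# Hypothetical feature estimates (in hours) for a basic customer app
features = {
    "login & onboarding": 40,
    "restaurant listing & search": 80,
    "cart & checkout": 60,
    "order tracking": 50,
    "push notifications": 30,
}
hourly_rate = 40  # assumed blended rate in USD per hour

total_hours = sum(features.values())
total_cost = total_hours * hourly_rate
print(f"{total_hours} h x ${hourly_rate}/h = ${total_cost:,}")  # 260 h -> $10,400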
Conclusion
To wrap up this blog, it is worth mentioning that mobile food ordering has become a fast-growing trend in the food industry. As the technology behind food delivery app systems continues to mature, the value of apps like UberEats and Deliveroo is set to soar higher than expected. If you want to be a part of this growing market, now is the best time to get started with an online food delivery app.
To help you understand the app clone, we have tried to cover every major aspect of the food delivery app, but if you still find yourself stuck anywhere, it is recommended to get in touch with experts to discuss your app plan and get the right solution. As for the cost of app development, we have given rough estimations based on various surveys; we suggest getting real cost estimates from a mobile or software development company after discussing your app idea and business needs. | https://medium.com/flutter-community/how-much-does-it-cost-to-make-an-mobile-app-like-ubereats-and-deliveroo-df3a5cd52733 | ['Sophia Martin'] | 2020-08-17 06:12:23.078000+00:00 | ['Mobile App Development', 'Mobile Apps', 'Technology', 'Startup', 'Business'] |
1,712 | Data visualization using Pandas | Data visualization using Pandas
This article will help you to use in-built Pandas methods for visualizing data and drawing insights.
How to import the packages?
The Numpy and Pandas packages are imported. Along with this, the magic command ‘%matplotlib inline’ is used to make sure that the plots are displayed in the notebook.
>>> import numpy as np
>>> import pandas as pd
>>> %matplotlib inline
For the purpose of understanding, a dataset containing random values is taken.
>>> df1 = pd.read_csv('dataset2.csv')
>>> df1.head()
How to create a histogram plot?
A histogram plot can be generated by using the method ‘hist’ on a column of a dataframe. The number of bins can also be specified.
>>> df1['b'].hist(bins=15)
In case you want to view the plots in the seaborn style then import the package of seaborn, set the style and run the code again.
>>> import seaborn as sns
>>> sns.set_style('whitegrid') >>> df1['b'].hist(bins=15)
How to deal with general plot kinds?
You can also call the ‘plot’ method on the dataframe and specify the kind of plot needed.
>>> df1['c'].plot(kind='hist',bins=10)
You can also call the hist method directly through the plot method.
>>> df1['c'].plot.hist()
How to create area plots?
It plots the filled area under each of the dataframe’s columns. The alpha keyword can be specified to tweak the transparency.
>>> df1.plot.area(alpha=0.7)
How to create bar plots?
The ‘bar’ method plots the bar plot.
>>> df1.plot.bar()
In case you do not want the bars to be plotted separately and instead be stacked on top of each other, pass the value True to the stacked keyword.
>>> df1.plot.bar(stacked=True)
How to create line plots?
A line plot can be drawn by calling the line method and passing the x and y values.
>>> df1.plot.line(y='d')
The figure size and line style can also be changed, such as the line width via the lw keyword.
>>> df1.plot.line(y='a',figsize=(5,3),lw=2)
How to create scatter plots?
The scatter method is called and the x and y values are passed.
>>> df1.plot.scatter(x='b',y='d')
If you want to color the points by another column, pass that column’s name as the ‘c’ value. You can also change the color map of the plot using the cmap keyword.
>>> df1.plot.scatter(x='b',y='d',c='a',cmap='rainbow')
How to create box plots?
The box method can be called to create box plots.
>>> df1.plot.box()
How to create hex plots?
This is just like the scatter plot, but the data points are binned into hexagonal cells.
>>> df1.plot.hexbin(x='b',y='d',gridsize=15)
How to create KDE plot?
KDE stands for kernel density estimation. The kde method can be called to plot it.
>>> df1['d'].plot.kde()
>>> df1.plot.kde()
For more detailed information on Pandas data visualization, check the official documentation here. Refer to the notebook for code here. | https://medium.com/nerd-for-tech/data-visualization-using-pandas-cfcde72807b1 | ['Jayashree Domala'] | 2020-12-27 02:51:56.204000+00:00 | ['Pandas', 'Python', 'Data Science', 'Data Visualization', 'Exploratory Data Analysis'] | Title Data visualization using PandasContent Data visualization using Pandas article help use inbuilt Pandas method visualizing data drawing insight import package Numpy Pandas package imported Along magic function ‘matplotlib inline’ mentioned make sure plot displayed notebook import numpy np import panda pd matplotlib inline purpose understanding dataset taken random value df1 pdreadcsvdataset2csv df1head create histogram plot histogram plot generated using method ‘hist’ column dataframe number bin also specified df1bhistbins15 case want view plot seaborn style import package seaborn set style run code import seaborn sn snssetstylewhitegrid df1bhistbins15 deal general plot kind also call ‘plot’ method dataframe mention kind plot needed df1cplotkindhistbins10 also call hist method directly plot method df1cplothist create area plot plot area dataframe column alpha keyword specified tweak transparency df1plotareaalpha07 create bar plot ‘bar’ method plot bar plot df1plotbar case want bar plotted separately instead stack pas value True stacked keyword df1plotbarstackedTrue create line plot line plot drawn calling line method pas x value df1plotlineyd figure size line style also changed like linewidth df1plotlineyafigsize53lw2 create scatter plot scatter method called x value passed df1plotscatterxbyd want compare another column mention column name ‘c’ value also change color plot using cmap keyword df1plotscatterxbydcacmaprainbow create box plot box method called create box plot df1plotbox create hex plot like scatter plot data point represented hex cell df1plothexbinxbydgridsize15 create KDE plot KDE stand kernel density estimation kde method called plot df1dplotkde df1plotkde detailed information Pandas data visualization check official documentation Refer notebook code hereTags Pandas Python Data Science Data Visualization Exploratory Data Analysis |
1,713 | Dear APEX Community Members | Updates — Technology and enterprise pilots
We are on track to have 4–5 enterprise pilots this year that will use APEX Network in production mode. Currently we cannot yet disclose the names of the enterprise pilot users but there are three piloting enterprises from two different industries that are well into the process and already beginning to see value:
One previously announced budget Chinese airline (similar to Southwest in the US)
One top 5 Chinese car brand (Tesla competitor)
One high-end Chinese airline
We are expecting the first pilot experiments on the official APEX Network mainnet as soon as 1–2 months after launch. We are also in the process of experimenting with a hybrid model where user data is stored on a private or alliance chain version of APEX Network, but cross-enterprise transactions occur on the main network.
Pilots are actively pushed by the partnership development team if they assess the enterprise to be a good fit. At least 70% of our enterprise base are aware of our blockchain technology offerings.
Federated Learning, of which an early introduction was given by the team, is one of the latest blockchain technology features we are developing, and it will eventually be available for use both on the main network as well as private/hybrid chains. As previously stated, the goal is ultimately to have it run on the public chain. Indeed, FL at scale only makes sense at the public chain level, though to ease adoption and reduce hesitance I’m sure certain enterprises would like to test it at the private level first. | https://medium.com/apex-network/dear-apex-community-members-b3378f2b075a | ['Jimmy Hu'] | 2020-03-16 17:42:25.664000+00:00 | ['Big Data', 'Technology', 'Blockchain', 'AI'] | Title Dear APEX Community MembersContent Updates — Technology enterprise pilot track 4–5 enterprise pilot year use APEX Network production mode Currently cannot yet disclose name enterprise pilot user three piloting enterprise two different industry well process already beginning see value One previously announced budget Chinese airline similar Southwest US One top 5 Chinese car brand Tesla competitor One highend Chinese airline expecting first experimentation pilot official APEX Network mainnet soon 1–2 month launch also process experimenting hybrid model user data stored private alliance chain version APEX Network crossenterprise transaction occur main network Pilots actively pushed partnership development team ass enterprise good fit least 70 enterprise base aware blockchain technology offering Federated Learning early introduction given team one latest blockchain technology feature developing eventually available use main network well privatehybrid chain previously stated goal ultimately run public chain Indeed FL scale make sense public chain level though ease adoption reduce hesitance I’m sure certain enterprise would like test private level firstTags Big Data Technology Blockchain AI |
1,714 | 6 Tips to Stay Motivated on Your Side Projects | 1. Define the MVP
This is my most important rule when it comes to staying motivated on side projects: the MVP. MVP stands for Minimum Viable Product. It is the minimum set of features required to make the product functional enough for users to use it.
Photo by Halacious on Unsplash.
Why do you need to do this?
This is extremely important when working on a side project because it is the first major goal or the major goal you are trying to hit. It is what keeps you on track during your project and keeps you motivated to finish since you have a clear idea of what needs to be done.
Write down all the features you want to be implemented for the first iteration of your project. Ask yourself the following questions:
What is my application trying to accomplish?
Is this a must-have in order for my application to work? Or this a nice-to-have?
Once you release the MVP, you can iterate on top of it, introducing more features that would enhance the user experience. Some side projects never see the light of day because tasks keep making their way into the backlog, fighting an endless battle of scope creep.
For my projects, I will make a v1.0.0 release of the application I am trying to build and iterate through the weeks (v0.1.0, v0.2.0, etc.) until I hit that prized v1.0.0 version that I consider the MVP. | https://medium.com/better-programming/6-tips-to-stay-motivated-on-your-side-projects-903432041644 | ['Eric Chi'] | 2020-10-02 16:11:39.389000+00:00 | ['Programming', 'Technology', 'Motivation', 'Software Development', 'Software Engineering'] | Title 6 Tips Stay Motivated Side ProjectsContent 1 Define MVP important rule come staying motivated side project MVP MVP stand Minimum Viable Product minimum amount feature required achieve functionality user use Photo Halacious Unsplash need extremely important working side project first major goal major goal trying hit keep track project keep motivated finish since clear idea need done Write feature want implemented first iteration project Ask following question application trying accomplish musthave order application work nicetohave release MVP iterate top introducing feature would enhance user experience side project never see light day task keep making way backlog fighting endless battle scope creep project make v100 release application trying build iterate week v010 v020 etc hit prized v100 version consider MVPTags Programming Technology Motivation Software Development Software Engineering |
1,715 | I Strived to Be an Instagram Influencer, Now I Write | More content, more brand.
I once hosted an Instagram Live morning show for my commercial real estate business. Each Friday, while the rest of the office enjoyed a jeans day, I showed up to work in a blue suit and tie, reserved a conference room, and recorded myself regurgitating real estate news.
“Nationwide Insurance moved into their new 400,000 square foot headquarters, and office rent prices rose to all-time highs”
This was before COVID, office space was actually leasing then.
Optimistically, I wanted to be the Jimmy Fallon of commercial real estate, but sounded more like a professor addressing students cramming for a final.
I posted ten episodes, grew a small audience, then decided, I hate this.
Why do it at all?
I was looking for something to spark sales and stumbled upon the world of content marketing and the influencers who champion such a strategy, notably Gary Vaynerchuk.
If you don’t know him, his message goes like this: Unless you want to sit Shiva while your business peels, you better post daily on Instagram, Facebook, Snapchat, Twitter, LinkedIn, and TikTok.
Gary says people don’t post because they’re worried mean girls from high school will DM poop emojis. A pleasant way of saying, insecure.
Of course he’s right. I’m no Jimmy Fallon, but I wasn’t about to give up. I ditched the morning show and started documenting my work life. I filmed myself cold calling, took pictures of clients on building tours, and even created a “day in the life” TikTok.
I posted about twenty behind-the-scenes clips, grew a small audience, and decided, I hate this.
After twenty pieces of content, things became repetitive. Plus, just because I approached work like Hearts of Darkness didn’t mean my coworkers or clients felt as much. I received many stink eyes and several “what do you think you’re doing?”
Clearly, I’m not Jimmy Fallon or Francis Ford Coppola. | https://medium.com/swlh/i-strived-to-be-an-instagram-influencer-now-i-write-5becc1996149 | ['Cal Axe'] | 2020-10-22 18:44:14.728000+00:00 | ['Marketing', 'Content Marketing', 'Writing', 'Business', 'Influencer Marketing'] | Title Strived Instagram Influencer WriteContent content brand hosted Instagram Live morning show commercial real estate business Friday rest office enjoyed jean day showed work blue suit tie reserved conference room recorded regurgitating real estate news “Nationwide Insurance moved new 400000 square foot headquarters office rent price rose alltime highs” COVID office space actually leasing Optimistically wanted Jimmy Fallon commercial real estate sounded like professor addressing student cramming final posted ten episode grew small audience decided hate looking something spark sale stumbled upon world content marketing influencers champion strategy notably Gary Vaynerchuk don’t know message go like Unless want sit Shiva business peel better post daily Instagram Facebook Snapchat Twitter LinkedIn TikTok Gary say people don’t post they’re worried mean girl high school DM poop emojis pleasant way saying insecure course he’s right I’m Jimmy Fallon wasn’t give ditched morning show started documenting work life filmed cold calling took picture client building tour even created “day life” TikTok posted twenty behindthescenes clip grew small audience decided hate twenty piece content thing became repetitive Plus approached work like Hearts Darkness didn’t mean coworkers client felt much received many stink eye several “what think you’re doing” Clearly I’m Jimmy Fallon Francis Ford CoppolaTags Marketing Content Marketing Writing Business Influencer Marketing |
1,716 | 5 Things You’ll Never Hear From a Successful Entrepreneur | Photo by Humphrey Muleba on Unsplash
Entrepreneurs come in all shapes and sizes, from a great diversity of backgrounds and with a great diversity of different philosophies and approaches. This is evident in the spread of company cultures and growth trajectories among startups. Being so, if you ask 10 different entrepreneurs what the most important factors for entrepreneurial success are, you’ll probably get 10 different answers.
Nevertheless, there are fundamental qualities that almost every successful entrepreneur shares in common. They are passionate, imaginative, and undaunted by the inevitable challenges of starting a business from scratch. That’s why you’ll never hear a successful entrepreneur say one of these five things:
1. I don’t want to hear it. The most successful entrepreneurs are open to new ideas and inspirations no matter where they come from. They’re willing to listen to customer complaints and incorporate that feedback into later models. They’re open to talk with mentors and peers about different approaches and different ways of doing things. They’re eager to hear from their team to discover new perspectives about the challenges faced by the business.
Listening to others’ thoughts and opinions, even if you don’t agree with them, is essential for achieving any kind of meaningful growth. Our individual perspectives are limited, no matter how much we’d like to think otherwise. Entrepreneurs who are open-minded enough to hear others out tend to be far more successful than those who aren’t.
2. That’s impossible. Possibility is relative. What might be impossible to one group of people in one set of circumstances might be entirely possible to another. When someone says “that’s impossible,” what they often mean is “I’m not capable of doing this right now.”
Successful entrepreneurs don’t view the world with this type of artificial limitation. Instead of seeing how a challenge can be overcome by their current abilities and current resources, they think of how it can be overcome by any possible set of abilities or resources. For example, if something is “impossible” in the moment, the successful entrepreneur might imagine that it’s not impossible with the addition of two new team members and an extra week added to the timeline. Alternative solutions drive innovation, and successful entrepreneurs are always willing to experiment to get the results they want.
3. It’s good enough. Some people stroll through their entire careers with a “good enough” mentality — they put in just enough effort to see a favorable result, and make decisions based on minimum criteria for success. There’s nothing inherently wrong with this; for most people in most careers, good enough really does mean good enough.
But in entrepreneurship, competition is much fiercer and you’re in far greater control of your own destiny. Too many competitors and volatile factors are bearing down on you for you to settle for anything. When you first launch your core business, or your core product, your mind will be racing with ways you can improve upon it. Even after years and multiple generations, you’ll still be driven to experiment and find ways to improve. This constant denial of satisfaction can be maddening, but it’s what drives these entrepreneurs to success.
4. I’m too busy. Most people don’t know the meaning of “busy” until they get started as an entrepreneur. You’ll be wearing so many hats, taking on so many different responsibilities, and making so many decisions each day you won’t know what to do with yourself. But at the same time, you’ll be exhilarated to be in such a position.
To successful entrepreneurs, the position of business owner isn’t a burden; it’s a thrill. It’s not a job; it’s a passion. There will be moments where you feel overwhelmed, of course, but if you’re truly committed to what you do, you’ll never be “too busy” for that extra conversation or that one additional responsibility.
5. I give up. There will be times when you question whether you have what it takes to be a successful entrepreneur, and times when you question whether all your sacrifices were worth it. There will be challenges you face that will threaten to collapse your entire business. This is normal; it is part of the process, and the successful entrepreneurs of the world are the ones who encountered these moments and decided to keep going. The minute you give up, on your business or entrepreneurship in general, your journey is over, and there’s no going back.
Entrepreneurial success starts with the right frame of mind. You have to have an innate drive and a passion for what you do, and you can’t let the unavoidable complexities and trials of business ownership get in the way of your ultimate vision. Take inspiration from these taboo phrases and set your own course for entrepreneurship; just don’t let your doubts get the better of you.
For more content like this, be sure to check out my podcast, The Entrepreneur Cast! | https://jaysondemers.medium.com/5-things-youll-never-hear-from-a-successful-entrepreneur-58f8de808ed3 | ['Jayson Demers'] | 2020-08-27 20:17:14.364000+00:00 | ['Startup', 'Startup Lessons', 'Business', 'Entrepreneurship', 'Entrepreneur'] | Title 5 Things You’ll Never Hear Successful EntrepreneurContent Photo Humphrey Muleba Unsplash Entrepreneurs come shape size great diversity background great diversity different philosophy approach evident spread company culture growth trajectory among startup ask 10 different entrepreneur important factor entrepreneurial success you’ll probably get 10 different answer Nevertheless fundamental quality almost every successful entrepreneur share common passionate imaginative undaunted inevitable challenge starting business scratch That’s you’ll never hear successful entrepreneur say one five thing 1 don’t want hear successful entrepreneur open new idea inspiration matter come They’re willing listen customer complaint incorporate feedback later model They’re open talk mentor peer different approach different way thing They’re eager hear team discover new perspective challenge faced business Listening others’ thought opinion even don’t agree essential achieving kind meaningful growth individual perspective limited matter much we’d like think otherwise Entrepreneurs openminded enough hear others tend far successful aren’t 2 That’s impossible Possibility relative might impossible one group people one set circumstance might entirely possible another someone say “that’s impossible” often mean “I’m capable right now” Successful entrepreneur don’t view world type artificial limitation Instead seeing challenge overcome current ability current resource think overcome possible set ability resource example something “impossible” moment successful entrepreneur might imagine it’s impossible addition two new team member extra week added timeline Alternative solution drive innovation successful entrepreneur always willing experiment get result want 3 It’s good enough people stroll entire career “good enough” mentality — put enough effort see favorable result make decision based minimum criterion success There’s nothing inherently wrong people career good enough really mean good enough entrepreneurship competition much fiercer you’re far greater control destiny many competitor volatile factor bearing settle anything first launch core business core product mind racing way improve upon Even year multiple generation you’ll still driven experiment find way improve constant denial satisfaction maddening it’s drive entrepreneur success 4 I’m busy people don’t know meaning “busy” get started entrepreneur You’ll wearing many hat taking many different responsibility making many decision day won’t know time you’ll exhilarated position successful entrepreneur position business owner isn’t burden it’s thrill It’s job it’s passion moment feel overwhelmed course you’re truly committed you’ll never “too busy” extra conversation one additional responsibility 5 give time question whether take successful entrepreneur time question whether sacrifice worth challenge face threaten collapse entire business normal part process successful entrepreneur world one encountered moment decided keep going minute give business entrepreneurship general journey there’s going back Entrepreneurial success start right frame mind innate drive passion can’t let unavoidable complexity trial business ownership get way ultimate 
vision Take inspiration taboo phrase set course entrepreneurship don’t let doubt get better content like sure check podcast Entrepreneur CastTags Startup Startup Lessons Business Entrepreneurship Entrepreneur |
1,717 | Powering iOS, Android and web experiences with a backend-for-frontend | All of our seller reporting data is stored in a brand new “data warehouse” service. This service is populated with data through an event-driven system (specifically, SQS) from an upstream “source of truth” service we call “transactions”, as our sellers make sales through the iZettle Food & Drink POS (an iPad point of sale system).
We started building the web experience first, alongside the data warehouse. At first, things were fine, but as the service grew and we started to build out our iOS and Android experiences, we started to hit some problems relating to what’s included in each of the service endpoints, and how they evolve. The mobile apps would end up receiving data that they don’t need, which is problematic when sellers are frequently on mobile internet.
Further, the release cadences of the website and our mobile apps are necessarily different per platform. The website can be delivered quickly, with changes taking only a few minutes to go out to production, but our mobile apps have a two week release train for new code, because of how app stores work. Deployed versions of mobile apps must also be supported for longer, causing frustration when we’d like to change or deprecate an endpoint that was previously only used by the web.
Ultimately, with our data warehouse, we want to strive for two things:
The time from selling something, to the data appearing correctly in a seller’s reports, should be as low as we can reasonably make it with a complicated distributed system
Sellers should be able to request data over time periods that are helpful to them, including over the previous year, and have it appear on their chosen platform as quickly as we can reasonably get it to them
We knew that, unless we changed something about how we get data to our clients, our problems would only get bigger as we added more endpoints, features, and clients.
Working towards a solution
As with all the problems we have to solve, we did some investigation and came up with a few possible solutions:
Have frontend developers own the presentation of data within the existing data warehouse service, requiring them to learn Go
Investigate and implement a brand new GraphQL service
Investigate and implement a “Backend for Frontend” service, potentially in a language more suited to frontend skills
The team has a lot of micro-service and REST-ful service experience. The cost, for our team, of setting up new services, is relatively small.
Most of the iZettle Food & Drink backend is written in Go. It’s a good language, with great performance characteristics (especially when dealing with large amounts of data), but it’s difficult for frontend team members to learn, contribute to, and context switch between when delivering their other work, which is written largely in Typescript, Kotlin, and Swift.
Other sections of the business use Kotlin to build their backend services already, and I personally have a lot of Kotlin experience, so we felt like that was a good bet.
Our team has little experience with GraphQL and less appetite to own and maintain a brand new paradigm in our area for getting data to clients, when a more typical REST service will get the job done and let us ship quicker. It did look promising, and it is used elsewhere in the business, but we decided it wasn’t the right fit for us at the time.
Given all the above, we decided to try making a “Backend for Frontend” service, written in Kotlin, to focus on getting data quickly and efficiently to our clients with great native experiences. It would be owned and maintained by the frontend team, with help from backend team members to make sure everything was up to standard. 💪
Building the BFF reporting service
We picked Ktor as a service framework because I had some experience from other projects, and it can be as lightweight as you want it to be.
To make spinning up this new kind of service nice and simple, and inspired by my colleagues’ work on a Go-based service chassis internally called izettlefx, I put together a Ktor-based service chassis, exemplified by one of the tests:
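A minimal sketch of that style of test (the /health route, test name, and assertions here are illustrative stand-ins, not the real chassis API):

import io.ktor.application.call
import io.ktor.http.ContentType
import io.ktor.http.HttpMethod
import io.ktor.http.HttpStatusCode
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import io.ktor.server.testing.handleRequest
import io.ktor.server.testing.withTestApplication
import kotlin.test.Test
import kotlin.test.assertEquals

class ChassisSmokeTest {
    @Test
    fun `health endpoint responds with OK`() = withTestApplication({
        // The real chassis installs logging, metrics, and error handling here;
        // for this sketch we only register a plain route
        routing {
            get("/health") {
                call.respondText("OK", ContentType.Text.Plain)
            }
        }
    }) {
        // Drive a request through the in-memory test engine, no network needed
        handleRequest(HttpMethod.Get, "/health").apply {
            assertEquals(HttpStatusCode.OK, response.status())
            assertEquals("OK", response.content)
        }
    }
}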
I’m really impressed with how easy Ktor makes unit testing services. With a small amount of glue code, we have separation between our “request contexts” and “response handlers”, meaning we can thoroughly test our code at multiple levels. withTestApplication in Ktor (shown above) lowers the cost of doing a more integration-style test enough that we have many of them for the service chassis, and we can be much more confident that the integration of the features is working as intended.
In the BFF reporting service itself, request handlers often aggregate data from upstream services — combining seller information like their business details and seller data from several different endpoints exposed by the data warehouse.
We built the service in a way that frontend team members are comfortable with. We use ReactiveX a lot, so it made sense for us to use this tech for wrangling upstream requests to build our mobile landing page. Relatively complex upstream request combinations become simple:
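A sketch of what that looks like, assuming RxJava 2 with the rxkotlin extensions; the requester and type names here are hypothetical stand-ins for our internal ones:

import io.reactivex.Single
import io.reactivex.rxkotlin.Singles

// Hypothetical placeholder types standing in for the real domain models
data class SellerInfo(val businessName: String)
data class SalesSummary(val totalSales: Long)
data class PaymentTypes(val countByType: Map<String, Long>)
data class LandingPage(
    val info: SellerInfo,
    val sales: SalesSummary,
    val payments: PaymentTypes
)

class LandingPageHandler(
    private val sellerInfo: (String) -> Single<SellerInfo>,
    private val salesSummary: (String) -> Single<SalesSummary>,
    private val paymentTypes: (String) -> Single<PaymentTypes>
) {
    // All three upstream requests are subscribed together and their
    // results are zipped into one payload once they have all completed
    fun landingPage(sellerId: String): Single<LandingPage> =
        Singles.zip(
            sellerInfo(sellerId),
            salesSummary(sellerId),
            paymentTypes(sellerId)
        ) { info, sales, payments -> LandingPage(info, sales, payments) }
}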
Finally, requesters built to fetch upstream data from services can be reused for different endpoints and platforms — the requester that provides payment type information to our web-based Sales Report, in component-form, is reused to provide the same information in the mobile sales reports.
Conclusion
Ktor has been great to work with. It has a really solid set of building blocks with which we built a small service framework. The new service is very much focused on solving a particular set of problems, without being a kitchen sink. When we need another piece, we add to the service framework incrementally.
Trust within the team is really important — everybody took the time to listen and discuss solutions when we started to have problems building new experiences with the existing data warehouse service. Folks helped throughout with backend best practices, and the service is deployed, monitored, and managed just like all our other services, with frontend and backend team members contributing to its ongoing development.
Our data warehouse service stays lean, and the team builds fast endpoints for extracting seller data as we want to. The BFF service worries about the aggregation and presentation, and clients get the data they need in a format that suits the platform. 🚀 | https://medium.com/izettle-engineering/powering-ios-android-and-web-experiences-with-a-backend-for-frontend-e198d55a21cc | ['Skye Welch'] | 2020-09-11 11:13:12.056000+00:00 | ['Reporting', 'Kotlin', 'Engineering', 'Software Engineering', 'Food And Drink'] | Title Powering iOS Android web experience backendforfrontendContent seller reporting data stored brand new “data warehouse” service service populated data eventdriven system specifically SQS upstream “source truth” service call “transactions” seller make sale iZettle Food Drink POS iPad point sale system started building web experience first alongside data warehouse first thing fine service grew started build iOS Android experience started hit problem relating what’s included service endpoint evolve mobile apps would end receiving data don’t need problematic seller frequently mobile internet release cadence website mobile apps necessarily different per platform website delivered quickly change taking minute go production mobile apps two week release train new code app store work Deployed version mobile apps must also supported longer causing frustration we’d like change deprecate endpoint previously used web Ultimately data warehouse want strive two thing time selling something data appearing correctly seller’s report low reasonably make complicated distributed system Sellers able request data time period helpful including previous year appear chosen platform quickly reasonably get knew unless changed something get data client problem would get bigger added endpoint feature client Working towards solution problem solve investigation came possible solution frontend developer presentation data within existing data warehouse service requiring learn Go Investigate implement brand new GraphQL service Investigate implement “Backend Frontend” service potentially language suited frontend skill team lot microservice RESTful service experience cost team setting new service relatively small iZettle Food Drink backend written Go It’s good language great performance characteristic especially dealing large amount data it’s difficult frontend team member learn contribute context switch delivering work written largely Typescript Kotlin Swift section business use Kotlin build backend service already personally lot Kotlin experience felt like good bet team little experience GraphQL le appetite maintain brand new paradigm area getting data client typical REST service get job done let u ship quicker look promising used elsewhere business decided wasn’t right fit u time Given decided try making “Backend Frontend” service written Kotlin focus getting data quickly efficiently client great native experience would owned maintained frontend team help backend team member make sure everything standard 💪 Building BFF reporting service picked Ktor service framework experience project lightweight want make spinning new kind service nice simple inspired colleagues’ work Gobased service chassis internally called izettlefx put together Ktorbased service chassis exemplified one test I’m really impressed easy Ktor make unit testing service small amount glue code separation “request contexts” “response handlers” meaning thoroughly test code multiple level withTestApplication Ktor shown lower cost integrationstyle test 
enough many service chassis much confident integration feature working intended BFF reporting service request handler often aggregate data upstream services— combining seller information like business detail seller data several different endpoint exposed data warehouse built service way frontend team member comfortable use ReactiveX lot made sense u use tech wrangling upstream request make mobile landing page Relatively complex upstream request combination become simple Finally requester built fetch upstream data service reused different endpoint platform — requester provides payment type information webbased Sales Report componentform reused provide information mobile sale report Conclusion Ktor great work really solid set building block built small service framework new service much focused solving particular set problem without kitchen sink need another piece add service framework incrementally Trust within team really important — everybody took time listen discus solution started problem building new experience existing data warehouse service Folks helped throughout backend best practice service deployed monitored managed like service frontend backend team member contributing ongoing development data warehouse service stay lean team build fast endpoint extracting seller data want BFF service worry aggregation presentation client get data need format suit platform 🚀Tags Reporting Kotlin Engineering Software Engineering Food Drink |
1,718 | The Four Words That Made Her a Billionaire | This is The Story… of a woman who changed a nation… by running an illegal business out of her 258 square foot apartment.
And now… onto The Story
The knock on the door made her jump.
She wasn’t expecting anyone, and her heart began to race.
As she tiptoed over to the peephole, she peered through.
It was the police. Again.
She glanced around at the inside of her tiny apartment. Every available surface was covered with the pictures and files of her customers.
It was all illegal.
She cracked the door and asked if anything was wrong.
They calmly informed her that she would have to come with them to answer some questions.
She sighed, then slid out the door, locked it, and followed the police to the car.
Thirty years earlier, World War II was raging. In Japan, everyone was part of the war effort.
In the early mornings, the sound of wooden practice swords echoed through every town.
The swords were being swung by young children. Children as young as nine were learning to fight and kill.
The older teenagers were all at work in factories. Just like in America, they were told that their patriotic duty was to work long days to make supplies, guns, bullets, and bombs.
Propaganda was all over the papers, the radio, and on every street corner. Everyone had to be ready to fight and sacrifice. That meant everything and everyone would work tirelessly until the war was won.
One young girl couldn’t figure out why there was such an obsession with death and destruction.
When she was six years old, her father died. The loss crushed her, and it wasn’t long until she hated the war effort. What she hated even more was that everyone around her seemed to like preparing for it.
Her father was an admired, well-respected school headmaster, and her family had been dependent on her father’s income. His untimely death instantly threw them into poverty.
Now the little girl’s world was dark.
What would her family do without her father? How would they get the money to eat and survive?
Her mother told her not to worry, but it was hard not to.
After that, it wasn’t long until the war effort fell apart. The girl tried to make money and help her family however she could, but it was almost impossible. The economy was in shambles.
And then Hiroshima and Nagasaki happened.
After that, it was all over.
The little girl survived through the complete societal collapse. They were told that the enemy would arrive and kill them all, but that day never came. She didn’t know how anyone continued to function under that pressure, but her mother did.
Despite the madness during and after the war, her mother continued to work as a midwife. She worked every single day. Each day, the young girl would wonder if her mother would return, or simply disappear like so many of the other adults. But she always returned. Her mother would continue to mourn the loss of her father and would never remarry.
The girl watched her mother’s iron will and became determined. She respected her mother deeply, and like so many children, wanted to please her. When she graduated high school, her mother was beaming with pride. She had single-handedly raised a child in the middle of one of the most catastrophic wars in history. She had managed to keep her alive… and help her graduate from high school.
When a man asked her to marry him, her mother again beamed with pride. The young woman accepted. There were no other options she could see, and worried a “no” might crush her mother.
Her marriage worked for awhile, but soon she grew restless. She didn’t love her husband. To be blunt, she didn’t even like him. The only thing she was allowed to do was a narrow range of housewife tasks… nothing else. She longed for a challenge, but it was forbidden.
Divorce looked like the only way out.
She knew that her mother would never allow it. And neither would the rest of society. In those days in Japan, women didn’t initiate divorces. But a man could get a divorce anytime he wanted with a simple three line letter.
The price for being a divorced woman in Japan was steep. It would be almost impossible to get a job, her family would be shamed, and she wouldn’t be welcome at most social engagements.
Besides, could she really put her mother through anything else?
Despite her fear, she trusted her gut, and pursued the divorce. Her husband was shocked, but accepted. Just like she suspected, her mother was shocked too. After the divorce, the feeling of freedom was real. It was intoxicating and the young woman reveled in it. But soon she found that she wasn’t doing anything except lounging around at home. Her mother had sacrificed to give her a chance at a better life… and now she had freedom! But she was wasting the opportunity.
Finally, she summoned the courage to go out into the world on her own. She was broke, needed a job, and wanted to see if she could make any of her big dreams happen.
But something was still holding her back.
All throughout her years growing up, she was taught that women in Japan worked the “boring jobs.” The boring jobs tended to be the soul-crushing, repetitive work that no one wanted to do.
The young girl wanted to do something exciting. She wanted to be a part of something bigger than herself, a job that let her explore her talents. But when she went out to look, there were no opportunities in Japan. She’d been trained to hate the enemy, but now news was trickling into the country and there were whispers. Not all of their enemies were bad. Some of them were decent people. And besides, there were rumors that the Americans were pouring money into rebuilding Europe.
The call to adventure beckoned, and the young woman planned a trip to Europe. Her mother begged her to reconsider, but the girl had to go out and see the world for herself.
After working the boring jobs long enough, she had scrounged enough money for her Europe trip. Soon the day came and she left.
The stories she heard about these people weren’t true. Yes, they were strange, but Europe was fascinating, and so was England. During her travels, she came into contact with hundreds of new ideas.
As she looked around, she realized she was swimming in opportunity. The standard business practices here were nothing like those in Japan. In Europe, the jobs they considered “boring” were fascinating. And the young girl found that there was a business called a temp agency that would allow her to go from job to job. She couldn’t believe it. She was going to get paid to work and learn at jobs at the cutting edge of all kinds of different industries. What she really couldn’t believe was that all the Europeans hated these jobs. In Japan, people were expected to have the same job for life. Why didn’t these Europeans realize how lucky they were?
It wasn’t long before she was the star of the temp staffing agency, and all kinds of temporary job offers came her way.
She accepted all the ones that sounded interesting, and got to try out a variety of different industries.
Once she had some money saved up, she moved from England to Australia. Once again, she experienced a radically different work environment than Japan.
After awhile, something was pulling her back to Japan. She knew exactly what kind of business she was going to start.
Inspired by her time abroad, she was confident that her ideas would resonate with other Japanese people. Back in Tokyo, she rented a 258-square-foot apartment and set up a part-time work agency.
Technically, it was an illegal business. In Japan, it was expected that you worked at a company… for life. The idea of temporary employment terrified the government.
But she didn’t care. She had seen the future abroad, and knew it was only a matter of time until Japan modernized.
But the cultural change was slow, and so was her business. Other Japanese women weren’t enthusiastic about the concept of being a temporary employee. Disappointed but still hopeful, she began teaching nighttime English classes to pay the bills and keep her dream going.
After five long years in that tiny apartment, she was finally able to move her business into its first office space. Before she moved to Europe, Japanese women had been trapped in an ill-fated cycle. Most of them quit their jobs after marrying because they weren’t comfortable continuing their careers past a certain age. This young woman’s company clearly addressed this issue. She provided Japanese women with the opportunity to become temps, rather than fighting for the limited number of specialized career paths that they had to stay on for their whole life.
In those early days, she only hired female workers. It was the 1980s, and she noticed that the company’s sales were slowing down. Many of her employees were uncomfortable going out and seeking new business leads. They were worried they’d be fined or arrested for spreading the idea of temporary work.
Temporary work was still against the law despite her lobbying efforts for change.
The woman was frustrated. Stagnation was unhealthy — she’d learned that after her divorce. But some of the women at her firm simply refused to budge. How could they carefully grow the business? She did not want to get herself or her employees arrested after all.
Determined to continue making progress, the woman decided to begin hiring men. Soon she had a company culture where the women and men were in perfect balance.
Despite her success, lifetime employment continued to be the norm in Japan. The government continued to advertise that under the law, temping by private companies was illegal.
On one particular day, the knock came to her door. When she went to the peephole, it was the police.
She glanced around at the inside of her tiny apartment. Every available surface was covered with the pictures and files of her customers.
It was all illegal.
She cracked the door and asked if anything was wrong.
They calmly informed her that she would have to come with them to answer some questions.
She sighed, then slid out the door, locked it, and followed the police to the car.
She knew this day would come, and as the police walked her to the car, she laughed to herself.
When she went into the police station to plead her case, she somehow managed to talk her way out of it.
After that, she was frequently summoned by the police, questioned, and then let go. Each time she got released, she grew bolder.
She had seen the future. Her entire country might believe that what she was doing was wrong, but she knew she was right. And she knew that one day, there would be a tidal wave of those who woke up and agreed with her.
Sometimes she would lay awake in bed and wonder when she’d be thrown in jail for good, but fortunately that day never came.
Eventually, after years of work, lobbying, and arguing with the government, she won.
The law was changed. Temporary employment became legal in Japan.
Little did the woman know, but she had positioned herself perfectly for a macro-economic tidal wave of opportunity.
It was the 1990s, when Japan entered what is known as the “Lost Decade.” Businesses went bust, and every single business needed one thing.
Temp workers.
The woman’s business, Temp Holdings was now large enough to give them exactly what they needed.
It wasn’t long before Temp Holdings went public in 2008, and soon expanded around the globe.
The little girl who craved freedom, had sought it out in the world. She found it, saw the future, and brought it back and shared it with her culture.
The woman who paved the path was none other than Yoshiko Shinohara.
Yoshiko became Japan’s first self-made woman billionaire.
She says that there is one personal trait above all others that helped her become the first female billionaire in Japan. In our modern day, when everyone wants a complicated formula, Yoshiko’s four words of how she did it are a reminder that it doesn’t have to be complicated.
Yoshiko says:
“I hate to lose.”
She trusted in her desire for freedom, and it led her on a path directly to it.
Not only did Yoshiko Shinohara blaze a trail for others to follow, but her business has helped millions of women explore what it’s like to be more free and independent. She saw the future, and realized that eventually it would arrive.
She didn’t wait until she had a glamorous office, or until the government gave her the thumbs up.
It’s easy to complain that things aren’t fair.
It’s hard to start trying to fix them from your 258 square foot apartment, and struggle alone for five years. It’s even harder to have to risk jail time to do it!
As Emerson famously said:
“If you are right, you are a majority of one.”
Yoshiko’s story is a reminder that if you know you’re right, place a bet on your idea and yourself. You might be a majority of one.
That’s her story. What’s yours going to be? | https://medium.com/the-mission/the-four-words-that-made-her-a-billionaire-351b539fd5fa | [] | 2018-04-17 18:47:12.752000+00:00 | ['Storytelling', 'Business', 'History', 'Podcast', 'Entrepreneurship'] | Title Four Words Made BillionaireContent Story… woman changed nation… running illegal business 258 square foot apartment now… onto Story knock door made jump wasn’t expecting anyone heart began race tiptoed peephole peered police glanced around inside tiny apartment Every available surface covered picture file customer illegal cracked door asked anything wrong calmly informed would come answer question sighed slid door locked followed police car Thirty year earlier World War II raging Japan everyone part war effort early morning sound wooden practice sword echoed every town sword swung young child Children young nine learning fight kill older teenager work factory like America told patriotic duty work long day make supply gun bullet bomb Propaganda paper radio every street corner Everyone ready fight sacrifice meant everything everyone would work tirelessly war One young girl couldn’t figure obsession death destruction six year old father died loss crushed wasn’t long hated war effort hated even everyone around seemed like preparing father admired wellrespected school headmaster family dependent father’s income untimely death instantly threw poverty little girl’s world dark would family without father would get money eat survive mother told worry hard wasn’t long war effort fell apart girl tried make money help family however could almost impossible economy shamble Hiroshima Nagasaki happened little girl survived complete societal collapse told enemy would arrive kill day never came didn’t know anyone continued function pressure mother Despite madness war mother continued work midwife worked every single day day young girl would wonder mother would return simply disappear like many adult always returned mother would continue mourn loss father would never remarry girl watched mother’s iron became determined respected mother deeply like many child wanted please graduated high school mother beaming pride singlehandedly raised child middle one catastrophic war history managed keep alive… help graduate high school man asked marry mother beamed pride young woman accepted option could see worried “no” might crush mother marriage worked awhile soon grew restless didn’t love husband blunt didn’t even like thing allowed narrow range housewife tasks… nothing else longed challenge forbidden Divorce looked like way knew mother would never allow neither would rest society day Japan woman didn’t initiate divorce man could get divorce anytime wanted simple three line letter price divorced woman Japan steep would almost impossible get job family would shamed wouldn’t welcome social engagement Besides could really put mother anything else Despite fear trusted gut pursued divorce husband shocked accepted like suspected mother shocked divorce feeling freedom real intoxicating young woman revealed soon found wasn’t anything except lounging around home mother sacrificed give chance better life… freedom wasting opportunity Finally summoned courage go world broke needed job wanted see could make big dream happen something still holding back throughout year growing taught woman Japan worked “boring jobs” boring job tended soulcrushing repetitive work one wanted young girl wanted something exciting wanted part something bigger job let explore talent went look opportunity 
Japan She’d trained hate enemy news trickling country whisper enemy bad decent people besides rumor Americans pouring money rebuilding Europe call adventure beckoned young woman planned trip Europe mother begged reconsider girl go see world working boring job long enough scrounged enough money Europe trip Soon day came left story heard people weren’t true Yes strange Europe fascinating England travel came contact hundred new idea looked around realized swimming opportunity standard business practice nothing like Japan Europe job considered “boring” fascinating young girl found business called temp agency would allow go job job couldn’t believe going get paid work learn job cutting edge kind different industry really couldn’t believe European’s hated job Japan people expected job life didn’t Europeans realize lucky wasn’t long star temp staffing agency kind temporary job offer came way accepted one sounded interesting got try variety different industry money saved moved England Australia experienced radically different work environment Japan awhile something pulling back Japan knew exactly kind business going start Inspired time abroad confident idea would resonate Japanese people Back Tokyo rented 258squarefoot apartment setup parttime work agency Technically illegal business Japan expected worked company… life idea temporary employment terrified government didn’t care seen future abroad knew matter time Japan modernized cultural change slow business Japanese woman weren’t enthusiastic concept temporary employee Disappointed still hopeful began teaching nighttime English class pay bill keep dream going five long year tiny apartment finally able move business first office space moved Europe Japanese woman trapped illfated cycle quit job marrying weren’t comfortable continuing career past certain age young woman’s company clearly addressed issue provided Japanese woman opportunity become temp rather fighting limited number specialized career path stay whole life early day hired female worker 1980s noticed company’s sale slowing Many employee uncomfortable going seeking new business lead worried they’d fined arrested spreading idea temporary work Temporary work still law despite lobbying effort change woman frustrated Stagnation unhealthy — she’d learned divorce woman firm simply refused budge could carefully grow business want get employee arrested Determined continue making progress woman decided begin hiring men Soon company culture woman men perfect balance Despite success lifetime employment continued norm Japan government continued advertise law temping private company illegal one particular day knock came door went peephole police glanced around inside tiny apartment Every available surface covered picture file customer illegal cracked door asked anything wrong calmly informed would come answer question sighed slid door locked followed police car knew day would come police walked car laughed went police station plead case somehow managed talk way frequently summoned police questioned let go time got released grew bolder seen future entire country might believe wrong knew right knew one day would tidal wave woke agreed Sometimes would lay awake bed wonder she’d thrown jail good fortunately day never came Eventually year work lobbying arguing government law changed Temporary employment became legal Japan Little woman know positioned perfectly macroeconomic tidal wave opportunity 1990’s Japan entered known “Lost Decade” Businesses went bust every single business needed one thing Temp 
worker woman’s business Temp Holdings large enough give exactly needed wasn’t long Temp Holdings went public 2008 soon expanded around globe little girl craved freedom sought world found saw future brought back shared culture woman paved path none Yoshiko Shinohara Yoshiko became Japan’s first selfmade woman billionaire say one personal trait others helped become first female billionaire Japan modern day everyone want complicated formula Yoshiko’s four word reminder doesn’t complicated Yoshiko say “I hate lose” trusted desire freedom led path directly Yoshiko Shinohara blaze trail others follow business helped million woman explore it’s like free independent saw future realized eventually would arrive didn’t wait glamorous office government gave thumb It’s easy complain thing aren’t fair It’s hard start trying fix 258 square foot apartment struggle alone five year It’s even harder risk jail time Emerson famously said “If right majority one” Yoshiko’s story reminder know you’re right place bet idea might majority one That’s story What’s going beTags Storytelling Business History Podcast Entrepreneurship |
1,719 | Configuring Web Server in Docker Inside a Cloud Instance | Configuring Web Server in Docker Inside a Cloud Instance
How to configure a Web Server in a Docker Container, which is launched in a Cloud Instance.
Hello Geeks! I hope you are here to learn about web servers and Docker, so let’s get started with the blog…
In this blog, we will explain how we can launch a container inside a cloud instance and configure a web server inside that same container.
The tasks we are going to complete:
Launch and start a docker container in EC2 instance
Configuring HTTPD Server on Docker Container
Setting up Python Interpreter and running Python Code on Docker Container.
Here is basic information about the technology we will be using in the blog.
Container Technology: Container technology is a method of packaging an application so that it can be run with isolated dependencies. Containers have fundamentally altered the development of software today due to the way they compartmentalize a computer system.
Docker: Docker is a platform as service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated and bundle their own software, libraries, and configuration files; they can communicate through well-defined channels.
Apache HTTP WebServer: The Apache HTTP Server, colloquially called Apache, is a free and open-source cross-platform web server software, released under Apache License 2.0. Apache is developed and maintained by an open community of developers under the Apache Software Foundation's auspices.
Launch and start a docker container in EC2 instance
First, we launched our EC2 instance and connected to it using PuTTY. Now we are inside the system. As the system is new, we don't have any software or programs installed. We have to install Docker inside our OS first. To install Docker, we need to use yum, and since we don't have any Docker repository configuration file, we have to configure yum first.
We went to the path /etc/yum.repos.d/.
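For illustration, a repo file of the kind being configured here might look like the following sketch. The file name and repository URL are assumptions based on Docker's standard CentOS repository, not details taken from the article:

# /etc/yum.repos.d/docker.repo (assumed name and contents)
[docker-ce-stable]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=0

With the repo in place, the remaining steps from the task list above might look roughly like this (again a sketch; package names vary by distribution):

dnf install docker-ce -y --nobest          # install Docker from the repo above
systemctl start docker                     # start the Docker daemon
docker run -it --name webserver centos:7   # task 1: launch a container
yum install httpd python3 -y               # tasks 2 and 3, run inside the container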
And we used the dnf command to download docker software into our system. | https://medium.com/swlh/configuring-web-server-in-docker-inside-cloud-d46fbf60ccf5 | [] | 2020-11-27 15:57:12.390000+00:00 | ['Docker', 'Centos', 'Python', 'Apache Httpd', 'AWS'] |
1,720 | Rough Seas | image by Ulkar — purchased by the author
Friend ships can be sailed
Through rough seas and quiet storms
Scarred — never broken
First of all, you and Siri weren't stupid. You were a couple of kids doing what kids do — learning. I'm so glad you both escaped further harm, and that you built such a bond with one another. | https://medium.com/survivors/first-of-all-you-and-siri-werent-stupid-db927cef145c | ['Toni Tails'] | 2020-09-10 07:47:16.835000+00:00 | ['Poetry', 'Relationships', 'Mental Health', 'Creativity', 'Life'] |
1,721 | Web scraping com Python | Python RegEx
Well organized and easy to understand Web building tutorials with lots of examples of how to use HTML, CSS, JavaScript… | https://medium.com/dados/web-scraping-com-python-45531a6138c9 | ['Wesley Watanabe'] | 2019-07-10 12:12:04.304000+00:00 | ['Data Science', 'Web Scraping', 'Python', 'Analysis', 'Data'] |
1,722 | Kannada-MNIST:A new handwritten digits dataset in ML town | Class-wise mean images of the 10 handwritten digits in the Kannada MNIST dataset
TLDR:
I am disseminating 2 datasets:
Kannada-MNIST dataset: 28x28 grayscale images: 60k Train | 10k Test
Dig-MNIST: 28x28 grayscale images: 10240 (1024x10) {See pic below}
Putting the ‘Dig’ in Dig-MNIST
The Kannada-MNIST dataset is meant to be a drop-in replacement for the MNIST dataset 🙏, albeit for the numeral symbols in the Kannada language.
Also, I am disseminating an additional dataset of 10k handwritten digits in the same language (written predominantly by non-native users of the language), called Dig-MNIST, that can be used as an additional test set.
Resource-list:
GitHub 👉: https://github.com/vinayprabhu/Kannada_MNIST
Kaggle 👉: https://www.kaggle.com/higgstachyon/kannada-mnist
ArXiv 👉 : https://arxiv.org/pdf/1908.01242.pdf
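For a quick start, loading the data might look like the sketch below. It assumes the Kaggle release uses MNIST-style CSVs (a label column followed by 784 pixel columns); verify against the actual files before relying on it:

import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name from the Kaggle page
y_train = train["label"].values   # digit labels 0-9
X_train = train.drop("label", axis=1).values.reshape(-1, 28, 28) / 255.0
print(X_train.shape)              # expect (60000, 28, 28) per the counts above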
If you use Kannada-MNIST in a peer reviewed paper, we would appreciate referencing it as:
Prabhu, Vinay Uday. “Kannada-MNIST: A new handwritten digits dataset for the Kannada language.” arXiv preprint arXiv:1908.01242 (2019).
Bibtex entry:
@article{prabhu2019kannada,
  title={Kannada-MNIST: A new handwritten digits dataset for the Kannada language},
  author={Prabhu, Vinay Uday},
  journal={arXiv preprint arXiv:1908.01242},
  year={2019}
}
Introduction:
Kannada is the official and administrative language of the state of Karnataka in India with nearly 60 million speakers worldwide. Also, as per articles 344(1) and 351 of the Indian Constitution, Kannada holds the status of being one of the 22 scheduled languages of India. The language is written using the official Kannada script, which is an abugida of the Brahmic family and traces its origins to the Kadamba script (325–550 AD).
Kannada stone inscriptions: Source: https://karnatakaitihasaacademy.org/karnataka-epigraphy/inscriptions/
Distinct glyphs are used to represent the numerals 0–9 in the language, and they appear distinct from the modern Hindu-Arabic numerals in vogue in much of the world today. Unlike some other archaic numeral systems, these numerals are very much used in day-to-day affairs in Karnataka, as evinced by the prevalence of these glyphs on the license plates of vehicles captured in the pic below:
A vehicle license plate with Kannada numeral glyphs
MNIST-ized renderings of the variations of the glyphs across the modern Kannada fonts
This figure here captures the MNIST-ized renderings of the variations of the glyphs across the following modern fonts: Kedage, Malige-i, Malige-n, Malige-b, Kedage-n, Malige-t, Kedage-t, Kedage-i, Lohit-Kannada, Sampige and Hubballi-Regular.
Dataset curation:
Kannada-MNIST:
65 volunteers were recruited in Bangalore, India, who were native speakers of the language as well as day-to-day users of the numeral script. Each volunteer filled out an A3 sheet containing a 32 × 40 grid. This yielded filled-out A3 sheets containing 128 instances of each number, which we posit is large enough to capture most of the natural intra-volunteer variations of the glyph shapes. All of the sheets thus collected were scanned at 600 dots-per-inch resolution using the Konica Accurio-Press-C6085 scanner, which yielded 65 png images of 4963 × 3509 pixels each.
Volunteers helping curate the Kannada-MNIST dataset
Dig-MNIST:
8 volunteers aged 20 to 40 were recruited to generate a 32 × 40 grid of Kannada numerals (akin to 2.1), all written with a black ink Z-Grip Series | Zebra Pen on a commercial Mead Cambridge Quad Writing Pad, 8–1/2" x 11", Quad Ruled, White, 80 Sheets/Pad book. We then scan the sheet(s) using a Dell — S3845cdn scanner with the following settings:
• Output color: Grayscale
• Original type: Text
• Lighten/Darken: Darken+3
• Size: Auto-detect
The reduced size of the sheets used for writing the digits (US-letter vis-a-vis A3) resulted in smaller scan (.tif) images that were all approximately 1600×2000.
Comparisons with MNIST:
1: Mean pixel-intensities distribution:
2: Morphological properties:
3: PCA-analysis:
4: UMAP visualizations:
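(The plots for these four comparisons appear as figures in the original post and are not reproduced here.) For readers who want to recreate the PCA and UMAP views, a minimal sketch follows. It assumes X is an (n_samples, 784) array of flattened images with labels y, and uses the umap-learn package:

from sklearn.decomposition import PCA
import umap  # pip install umap-learn

pca_2d = PCA(n_components=2).fit_transform(X)         # linear 2-D projection
umap_2d = umap.UMAP(n_components=2).fit_transform(X)  # nonlinear 2-D embedding
# Scatter-plotting either projection, coloured by y, reproduces the style of
# the original figures for MNIST and Kannada-MNIST side by side.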
Some classification bench-marking:
I used a standard MNIST-CNN architecture to get some basic accuracy benchmarks (See fig below)
The CNN architecture used for the benchmarks
(a) Train on Kannada-MNIST train and test on Kannada-MNIST test
(b) Train on Kannada-MNIST train and test on Dig-MNIST
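The exact layer configuration is shown in the paper's figure, which is not reproduced in this excerpt. As a stand-in, a standard MNIST-style CNN in Keras might look like the sketch below (an assumption, not necessarily the benchmarked network):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 Kannada digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# (a) fit on the Kannada-MNIST train split, evaluate on its test split
# (b) evaluate the same trained model on Dig-MNIST as a harder, out-of-domain test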
Open challenges to the machine learning community
We propose the following open challenges to the machine learning community at large.
1. To characterize the nature of catastrophic forgetting when a CNN pre-trained on MNIST is retrained with Kannada-MNIST. This is particularly interesting given the observation that the typographical glyphs for 3 and 7 in Kannada-MNIST hold uncanny resemblance to the glyph for 2 in MNIST.
2. Get a model trained on purely synthetic data generated using the fonts (as in [1]), with augmentation, to achieve high accuracy on the Kannada-MNIST and Dig-MNIST datasets.
3. Replicate the procedure described in the paper across different languages/scripts, especially the Indic scripts.
4. With regards to the Dig-MNIST dataset, we saw that some of the volunteers had transgressed the borders of the grid, and hence some of the images either have only a partial slice of the glyph/stroke or have an appearance where it can be argued that they could potentially belong to either of two different classes. With regards to these images, it would be worthwhile to see if we can design a classifier that will allocate proportionate softmax masses to the candidate classes.
5. The main reason behind sharing the raw scan images was to foster research into auto-segmentation algorithms that will parse the individual digit images from the grid, which might in turn lead to higher quality images in upgraded versions of the dataset.
6. Achieve MNIST-level accuracy by training on the Kannada-MNIST dataset and testing on the Dig-MNIST dataset without resorting to image pre-processing.
[1] Prabhu, Vinay Uday, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, and John Whaley. “Fonts-2-Handwriting: A Seed-Augment-Train framework for universal digit classification.” arXiv preprint arXiv:1905.08633 (2019). [ https://arxiv.org/abs/1905.08633 ] | https://towardsdatascience.com/a-new-handwritten-digits-dataset-in-ml-town-kannada-mnist-69df0f2d1456 | ['Vinay Prabhu'] | 2019-08-12 07:02:31.581000+00:00 | ['Computer Science', 'Data Science', 'Artificial Intelligence', 'Machine Learning', 'Computer Vision'] |
1,723 | Effects of Internalized Homophobia | Internalized homophobia is something most LGBTQ+ people have found to be a battle at different points in their lives. It mostly stems from the homophobic, heterosexist, discriminatory culture that we have been taught by society as we grew up. Most of us were taught negative ideas about being homosexual. It was considered to be something wrong, bad and immoral, and in my community, it is considered a taboo to even think about it. It is mentioned in whispers and immediately termed as demonic. Constantly hearing such demeaning and negative depictions of LGBTQ+ individuals can lead to us internalizing these ideas and words. We carry them inside our hearts and minds and start viewing ourselves through the lenses of a homophobic society. Sometimes we don’t even realize that we are doing this to ourselves.
Internalized homophobia is a serious issue because it can affect the LGBTQ+ individual’s mental and physical health. Like every other issue affecting the world, internalized homophobia should not be left behind. People are still holding on to the ideologies of a homophobic society, forgetting that it is up to us to give them a real and positive view of the LGBTQ+ community. Some people within the LGBTQ+ community have ended up feeling contempt for themselves and other LGBTQ+ individuals. Others suffer from low esteem, negative body image and feel unworthy.
There is also the shame they feel for being queer and not living by societal expectations. As this gets worse, they end up alienating themselves from other LGBTQ+ people because they don't want to be associated, lest someone question their sexuality. Others choose to numb the pain by getting lost in substance abuse, practicing unsafe sex, getting into trouble with the law, or being emotionally unavailable. In severe cases, some have denied their sexual orientation and lead a life that makes them miserable because they fear they will be cast out if they admit the truth. They attempt to pass as heterosexual in hopes of gaining social approval or being 'cured.' They will constantly monitor their behaviours and mannerisms for fear of being discovered and try as much as possible to align their beliefs and ideas with what society states is 'right.' In the worst-case scenario, some end up committing suicide.
This just goes to show the tremendous impact internalized homophobia has on our mental health and the influence on our thoughts, feelings and mannerisms. A recent study revealed our daily lives have also been affected by internalized homophobia. It is linked to several negative outcomes in romantic relationships and non-romantic relationships in LGB individuals. In most cases, they end up struggling with long-term and committed relationships. They can be afraid of being in stable relationships and will subconsciously self-sabotage. In most scenarios, it also affects relationship quality and most of them find themselves in abusive and unfaithful relationships. It also comes with the burden of numbing discrimination and inequality. Because most of them cannot manage to deal with these negative global attitudes, they end up conforming to the dominant heterosexual culture while suppressing their own individual expressions.
On a positive note, recent findings have been useful for counselors interested in interventions and treatment approaches that help individuals cope with internalized homophobia and relationship problems. It is important to change our personal views of ourselves. Therapy is an important tool in combating internalized or outward homophobia. It can help break down negative internal thoughts. Openly talking about this will help us understand that there is nothing wrong with us; we need to be comfortable with who we are and embrace and celebrate our sexuality. It will make us understand that internalized homophobia is something we can work through, and that we should not be afraid of opening up and admitting we need support. There is a lot of peace in self-acceptance, and we all deserve to be ourselves.
Personally, I wish someone had told me this earlier on in my life; things would have certainly been easier.
Lastly, society around us should step up to come up with a mindful approach. They say that it takes a whole village to raise a child. The world is a global village, and everyone has a responsibility to create an inclusive environment for all forms of sexuality and provide a warm environment for everyone. | https://medium.com/matthews-place/effects-of-internalized-homophobia-11606f39204a | [] | 2020-07-02 20:34:05.684000+00:00 | ['Internalized Homophobia', 'Mental Health', 'Homophobia', 'LGBTQ', 'Health'] |
1,724 | Estimating AI Project Costs & Timescales: 4 Rules of Thumb | But, first a reminder of some of the basics of IT project estimating. Estimating AI project costs is an evolution and adaptation from that.
The Fundamentals of Reliable IT Project Estimating
IT work has an often-deserved reputation for massive over-runs and overspends, especially large, multi-year projects. However, over the last couple of decades, the reality has changed. IT projects still run late, but huge, eye-catching failures are far less common.
There have been several reasons for this change. One is the prevalence of more flexible development processes and shorter project cycles. Another is the adoption of project quality metrics and continuous improvement. Carnegie Mellon’s CMM work was instrumental in the second and is still invaluable reading for IT newcomers.
There is still heated debate about IT processes and project management methods. However, a request for IT project quotes generates broadly comparable estimates from most vendors, given the same information. This will include similar breakdowns of how the overall work will split into phases or activities (requirements and so on).
This is relevant for AI projects because we’re currently where general IT estimating was many years ago: inconsistent, often unreliable. A big cause is that AI development processes vary, as do opinions on what quality means for AI deliverables. So discussions about estimates are typically driven by personal experience or project constraints rather than something more objective.
Creating consistency in how you build AI will result in a more objective basis for estimating future AI projects. This will also allow you to improve and optimise your AI work, and hold your own in vendor discussions.
Estimating AI Projects: Rule of Thumb 1
Don’t Plan Big Projects Without Credible Benchmarks, Ideally Your Own
No matter what you’re promised or the figures indicate, you should embark on big AI projects judiciously. If you’re experienced in AI work, you probably have some benchmarks on what’s involved. These are the best way of validating large predicted AI project costs. Even so, it’s worth asking for other objective data to support estimates, ideally ones you can validate. Client references are particularly helpful.
If you’re experiencing your first taste of AI, the risks of starting with a large project are greater. I’d invariably suggest starting with a short, relatively low-cost exercise.
With that, your teams and partners will have experience of AI in your organisation. This will be a real experience of developing AI solutions that use your data and connect to your IT systems. This is a more realistic basis for future estimates of AI project costs and timescales than generic or industry figures. After a second or third small project, you’ll have a more reliable starting point for estimating more ambitious AI work.
When starting out with AI, useful small projects can be as short as 2–3 months, rather than quarters or years. Realistic results are possible with tens or very small hundreds of thousands of pounds/dollars/euros; costs in the hundreds of thousands or millions create much greater pressure on results. And a core team of single-digit size can be more effective and productive than one of dozens.
If people advising you say otherwise, then you may want to look again at the choice of scope or ambition. And of course, there may be organisational or commercial factors influencing views.
Estimating AI Projects: Rule of Thumb 2
Accept Some Phases are “Experimental” in Nature, But Keep Them On a Leash
A sometimes misunderstood feature of AI work is that it’s normal for days’ or weeks’ work to be “wasted”. This isn’t necessarily a sign of team inexperience or lack of competence — although of course productivity improves with practice. It’s a reflection of underlying AI techniques, which require experimentation and a degree of trial and error.
A challenge in estimating AI project costs & timescales is they can become — perhaps inadvertently — “blank cheques”, especially for AI vendors. The trick is finding the right balance between enough latitude for effective results, and work that’s allowed to “drift”.
Most AI courses, including non-technical ones, refer to the concept of EDA (or some equivalent). This is Exploratory Data Analysis, a crucial early part of most AI projects.
Tips for Handling Estimates for “Experimental” Work
One common approach is to separate estimates for AI project costs and timescales into two parts. A firm estimate is possible for EDA, with a more tentative one for the rest of the project. The second estimate is only confirmed at the end of EDA. By then, there’s a better understanding of the data, business problem and wider IT environment.
Another option is to use IT development techniques like timeboxing to limit the duration of the experimental work. This, in turn, constrains the rest of the project to what was discovered in the timebox. This touches on a discussion of development methods like Agile. It’s not appropriate to get into the details of that here, other than to observe that Agile isn’t an excuse for poor estimating.
How you approach the “experimental” aspects of an AI project is less important than recognising their characteristics. For good AI project estimating the trick is to limit the duration of experimental phases, without excessively compromising their usefulness.
Ideally, by using such phases well, your team discovers unavoidable errors or dead-ends after days or weeks, not months.
Estimating AI Projects: Rule of Thumb 3
Decide How You’ll Balance the Tension Between “Good” and “Good Enough”
A key role of data scientists is exploring different ways of solving business problems, evaluating alternatives to identify the best. The role of the business members of an AI team, by contrast, is to maintain focus on the business results of AI work. From that perspective, AI is a means to an end, to be reached as quickly and cost-effectively as possible.
This can lead to two viewpoints on what makes a task “complete”, or needing more effort. For example, if an algorithm hits 95% “accuracy”, data scientists may consider this poor, but business users may believe otherwise.
The right answer, of course, depends on what the algorithm is being used for. This includes understanding the value of getting an answer 95% “right”, and the cost/risk of 5% inaccuracy. This is where terms like Precision, Recall and F1 score become relevant and lead to discussions such as the relative importance of false positives vs false negatives.
What Does This Balance Look Like in Practice?
To understand this a little more, consider an example from medical diagnosis, say cancer detection. The meaning of “5% inaccuracy” is ambiguous. One possibility is that 5% of the patients screened believe they don’t have cancer, but actually do. Conversely, it may mean 5% of patients flagged as having cancer are actually all clear.
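To make the two readings of "5% inaccuracy" concrete, here is an illustrative sketch with invented numbers: two screening models that are both 95% accurate yet fail in opposite ways.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = np.array([1] * 100 + [0] * 900)   # 100 real cases in 1000 patients
model_a = np.array([1] * 50 + [0] * 950)   # misses 50 cases (false negatives)
model_b = np.array([1] * 150 + [0] * 850)  # catches all cases, 50 false alarms

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name,
          accuracy_score(y_true, pred),   # 0.95 for both models
          precision_score(y_true, pred),  # A: 1.00, B: ~0.67
          recall_score(y_true, pred))     # A: 0.50, B: 1.00

Which failure mode is acceptable is exactly the business-value question discussed next.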
The decision to pursue a more accurate algorithm or stop at 95% accuracy needs understanding of such differences. Ideally, there’s also an awareness of the associated business value.
For more accuracy, further work tuning the model may be the answer;
If 95% is approaching the limit of how well the algorithm can perform, other algorithms may be required;
A third option might be revisiting options to source or prepare better data;
The right answer may be to accept 95% accuracy, even if better is possible.
This is just one example. There can be several such potential conflicts on the “right” answer during AI work. Each will impact AI project estimates.
A role of AI project leadership is to balance these two mindsets. It's tricky enough during the project. Even with facts, there may be compelling arguments for both viewpoints.
It’s difficult — arguably impossible — to accurately allow for such possibilities before they arise. So when estimating AI project costs and durations at the outset, allowance needs to be made for this. This includes agreeing within the team how to handle such decisions.
Estimating AI Projects: Rule of Thumb 4
Have Realistic Expectations & Contingencies for Data “Gruntwork”
The fourth rule of thumb arises because AI results usually require substantial data preparation work. This work is often under-emphasised, and sometimes overlooked, in project estimates.
Getting enough appropriate data to “feed” an AI solution can be a black hole in AI work, especially for the unwary. Data preparation can suck up inordinate amounts of effort, and sometimes may not even be part of vendor quotes.
One challenge is knowing how much data gruntwork is really needed for your circumstances. The other is making sure it’s been included appropriately in estimates.
Look especially carefully at vendor estimates for issues here. Data preparation can sometimes be missing from quotes completely, buried in proposal assumptions. Another approach is to allow a token few days, knowing full well it will expand immediately.
Estimating Data Preparation Work
To cover such possibilities, your team needs to understand what relevant data is available. This includes how complete, accurate and reliable it is, and what may and may not be done with it (e.g. legal, regulatory, ethical).
Without this understanding, it’s unlikely anyone can reliably predict effort and costs of preparing data. This understanding includes what data there really is, what state it’s in (completeness, accuracy etc), and the work to prepare and clean it. It’s another tricky part of estimating AI project costs and timescales.
If your organisation has done AI or analytics work in the area you’re considering, this information may be available. It may not be in formal documents, but hopefully, there will be people with relevant knowledge to help size the data preparation work.
If you don’t have the knowledge available to understand data preparation, you’ll need a scoping exercise, called Exploratory Data Analysis (EDA). If vendors do this, you should expect clarity on data preparation activities and estimates as a deliverable.
Estimating AI Project Costs & Timescales: Wrap-up
AI Processes & Benchmarks are Key to Long Term Estimating Accuracy
Estimating AI project costs and timescales is harder than for regular IT work. In part, this is due to the lack of AI industry processes and benchmarks. The result is more reliance than ideal on the experience, judgement and opinions of your AI experts and vendors.
If AI is something you’ll do often, consider creating your own approach to AI work over your first few projects. You’ll be capturing AI project cost and effort data for your own organisation. This will be more reliable for you than industry figures, and provides benchmarks for continuous improvement. It isn’t necessarily a difficult exercise and doesn’t add much overhead. In fact, it should pay for itself quickly through savings in future work.
Estimating Smaller Projects is Easier, Especially Early On
In the meantime, starting small is usually the way to go when getting started with AI projects.
Regardless of project size, there are some obvious areas where estimates may be excessive or over-optimistic. This is especially the case if you’re relying on vendors to deliver your work.
Consider Estimating Each Phase As You Go
Effective use of EDA can be very helpful, along with good project management practices — especially iterative approaches like Agile. You’ll also need to balance the instincts and priorities of different team roles, especially during “experimental” phases of work.
This article was first published on www.aiprescience.com. | https://towardsdatascience.com/estimating-ai-project-costs-timescales-4-rules-of-thumb-707ccf49a768 | ['Was Rahman'] | 2020-04-30 20:55:23.945000+00:00 | ['Machine Learning', 'Project Management', 'Artificial Intelligence', 'AI', 'Data Science'] |
1,725 | Q & A with Marian Dörk on the UN’s Habitat III conference and the role of the data visualization for sustainable urban futures | Photo: Matthew Tobiasz
Marian Dörk is a Research Professor for information visualization at the Urban Futures Institute for Applied Research and the Urban Complexity Lab at University of Applied Sciences Potsdam, Germany.
This October, the UN will hold its biggest ever summit on the future of cities. Why have cities become such a hot topic at this point in history?
Marian: By 2050, more than 70% of the world’s population will be living in cities. Cities also produce most of the world’s GDP and greenhouse gas emissions, yet they are the key to a more sustainable future. The future of humanity lies in cities. Ecological and social crises affect both urban and rural areas, but cities are a laboratory where we can better understand the complex causes of today’s grand challenges and find holistic approaches to addressing them.
Why are you going to Quito and what you hope to achieve there?
Marian: We’re going to Quito to demonstrate that data visualization can be an important partner for designing the future of cities. For that, we’ve teamed up with Future Earth and the International Council for Science to build Habitat X Change. This will be an event and exhibition space, where people from diverse backgrounds with a common interest in science, visualization, and sustainability of cities can take part in an exciting program of talks, workshops, and panel discussions at the intersection of these topics. Furthermore, we are sharing the results from an open call for city visualizations that teams of scientists, developers, and designers have submitted in the run-up to Habitat III. We will also exhibit a working prototype of a visualization framework (with the working title “vis tent”) that blends physical city models with digital data visualizations of three cities.
So Habitat X Change is a collaboration between science and visualization — could you explain the basis for the collaboration? Is it a common partnership?
Marian: All partners in Habitat X Change share the recognition that complex challenges such as climate change are difficult to communicate. In order to inform decision-making at various levels, especially in cities, we need more research and design to develop new ways to bridge the gap between knowledge and practice. Data visualization (a scientific field itself) has in recent years become popular in the media, in particular to communicate scientific findings or when stories are complex and daunting to convey in text alone. The next challenge is for visualization to step up its role as a natural ally in communicating science to decision makers in business and civil society.
The “Vis Tent” — could you talk a little more about this? What is it and how can city stakeholders engage with it, what can they learn from it?
Marian: The “Vis Tent” — we’re still looking for a better name for it — is a hybrid visualization framework for cities that we’re building at FH Potsdam with some support from the mapmakers at HERE. The visualization brings together the traditional city model in physical form with projections of different urban data patterns. The physical model differentiates between water, land, streets, and buildings, while the projection represents various dynamic aspects of the city such as air quality, traffic, and population density. This allows one, for example, to easily see where and at which times of the day and week traffic is particularly busy. By incorporating multiple data dimensions one can also analyze how certain dimensions may correlate — such as traffic and air quality. For Habitat III we are preparing visualizations of three cities: Bogotá, Cape Town, and Singapore. We have just recently launched cf. city flows, a comparative visualization of bike sharing, which has demonstrated the potential of juxtaposing and contrasting multiple cities. Visitors will be able to see these three cities next to each other and examine urban data patterns.
How about the open call for visualizations? Is there a way to share the winning entries and their unique insights with policymakers and city stakeholders attending Habitat III?
Marian: Yes, we will have a dedicated event at Habitat X Change, during which we present the winning entries and give an overview of the latest trends in visualizing cities. Throughout Habitat III we will also exhibit a broad range of city visualizations submitted by visualization groups from around the world. These have been reviewed by an international programme committee of researchers in data visualization, urban sciences and communication. We are planning Q&As with the people behind the winning entries either live during the presentation or afterwards on our blog.
What happens after Quito?
Marian: Quito is just one step on the road towards a much more integrated approach to visualization and science. Our core activities at FH Potsdam involve research and teaching on urban futures, so we will continue our work on visualization and other related topics in cooperation with city administrators, but also with partners from industry and civil society. The results of the visualization call will be shared on a web platform that is planned to be a continuously evolving resource for those interested in the science and visualization of cities. We are planning international workshops and summer schools on visualizing cities with scientists and stakeholders in the months and years to come. | https://medium.com/sustainable-urban-futures/q-a-with-marian-d%C3%B6rk-on-the-uns-habitat-iii-conference-and-the-role-of-the-data-visualization-557bcd9ec437 | ['Habitat X Change'] | 2016-09-12 16:11:00.755000+00:00 | ['Cities', 'Design', 'Sustainability', 'United Nations', 'Data Visualization'] |
1,726 | The Mysterious Person Who Earns $49,090/Month on Medium | The Mysterious Person Who Earns $49,090/Month on Medium
My detective “hypothesis” has never failed me
Photo by timJ on Unsplash
It’s funny how you clicked on this story because of the title. That’s proof that stories with dollar signs in titles and clickbait are actually manipulative. But this isn’t clickbait. My 237 IQ brain is overheating and It’s undergoing an analysis of Medium’s recent changes.
This isn’t a story with complaints like the 63 articles I’ve read all month of October 2020. This is a story of curiosity and concern, questions raising suspicions that something is going on with Medium. This is working out well for me because after figuring this out, I knew I could be the next one to make the $49,090 this month or next month. Oh, I’m sorry — I didn’t get the dollar figures correctly because It’s too stressful to check. My point is, we all know there’s some mysterious being who makes $4$$$$ in a month, and $6000 and something on a single story.
I thought someone who made that amount had a story trending. I mean, the 6 trending stories on the homepage, or popular on Medium across the topics. No. Their story isn’t either of those. We also know this person, but at the same time, we don’t know this person. We’ve been seeing this person’s name, say once a day or once in 5 days depending on your niche. But, we’ve just never noticed this person, they’re on lowkey mode. I have a suspect and it’s not who is on your mind. It took me one glance at the profile page to suspect.
They’re not an editor for any mighty publication with 4 million followers, they own a publication, and they have a specific niche. It’s not a self-improvement niche that everyone’s squeezing into. It’s a niche no one pays attention to. They don’t write listicles. Never! They don’t write self-help stuff, probably because they seem clueless about self-help, awareness, life lessons, and others like the rest of some writers including myself. They’re not a jerk writing stories about “3 things to do to change your life forever” Nope! Never! They just don’t.
They don’t write one-sentence paragraphs that everyone wants to adapt to, because most top writers do it. Their paragraphs are moderately long with little or no formatting. I mean, they don’t use block quotes and pull quotes 135 places in a single story. They’ve never published in a Medium-owned publication, just theirs. They’ve also not highlighted, clapped, responded, or sent a private note to correct a writer on his/her typos. Their titles are the most boring ever, it’s not clickbaity in any way, it’s unique, just like a newspaper headline — exactly what Medium wants.
Oh! I almost forgot. They don’t write the hale and hearty “how to make money on Medium” or “how much I earned in 3 months” or “how to make $78nslwo96 blogging on Medium” or “the mysterious person who earns $49,090/month on Medium” or “54 ways to earn $1 on Medium” or “How to get curated in 87 topics.” In other blatantly truthful words, they do not lie to their readers, just like Nicole Akers said. Why? Because honest writers don’t lie to their readers.
Hold on, there’s more.
They don’t link-bomb their stories, Ryan Fan has an insightful story on why link-bombing won’t save you. You know, the embed thing that connects your other stories at the end. They barely… barely ever do that. Also, they don’t do the “get my newsletter” thingy at the end of each story they write. Their newsletter info is sweetly tucked in their profile bio. It always glows. Everyone sees it after reading their story, you know after each story, Medium has a wonderful feature or design, whichever — it contains the author’s profile photo, the author’s bio, and most of all, the follow button. The publication’s underneath too. It even glows in the dark.
Are you wondering who this is? Honestly, you know who this person is, you’ve just never noticed. It’s funny how someone could be so unnoticed with so much quality content, and even worse, this person’s stories have an average amount of claps, not 703k. My detective brain said it’s because people sometimes forget to clap when they read a very interesting and captivating story. Psychology believes people don’t appreciate their most loved gifts, but rather appreciate their least the most. Just saying. They don’t care about claps, dwell on it, or beg for it on Facebook groups.
With these hypotheses, I’ve finally decided that this person isn’t me. It certainly isn’t you. Except it is — but that’s only if you’re innocent of these suspicions. If you read this, you’re not, because the mysterious person has better things to do than read such a satirical/ode(ish) piece about themselves. | https://medium.com/wreader/i-know-who-makes-the-mysterious-49-090-on-medium-2c62c1a1d91c | ['Winifred J. Akpobi'] | 2020-10-30 18:30:01.392000+00:00 | ['Sarcasm', 'Writing', 'Humor', 'Creativity', 'Satire'] |
1,727 | Tutorial: Get alerted when feature flags change, via AWS Lambda and Webhooks | The Problem
At Optimizely, we’re always looking for ways to eat our own dogfood. Once we added feature flags to Optimizely Full Stack, we started adopting flags to remotely configure our own application. In the last few months, we’ve found this has been especially helpful for our end-to-end testing. Being able to toggle features on and off for each language we test makes running these tests much faster.
Our SDK team is using a Full Stack project to configure our E2E tests. We do this by setting up our changes as feature flags in Optimizely, and creating an audience for each SDK we manage (e.g. when we developed the workflow for feature flags, we set this up as feature=”feature_management”, audience=”node_sdk”). This allows us to turn off features for certain SDKs or all SDKs during testing. Below is a snapshot of the feature layout.
We’ve seen good results from using feature flags to manage these tests. But recently, we hit a problem. With many different engineers working on E2E tests all at once, what happens when someone else turns off a feature that you may also be working on?
In order to track this better, I wanted to get a Slack notification to our developer channel telling us what changed in our datafile (the configuration file capturing the state of all our experiments and feature flags).
I created an AWS Lambda function with an API Gateway endpoint to accomplish this. The API Gateway endpoint is used to register a webhook with Optimizely. When the datafile is changed, the webhook is fired. The lambda function reads the new datafile from the CDN and the previous version from DynamoDB if it exists (if not, it creates the DB table and stores the current version of the datafile before publishing a “no difference” message to your Slack channel). The function compares these two JSON datafiles and tries to send a human-readable diff. Below is an example diff for the feature above after someone toggled it from on to off.
There’s room for readability improvements. But, from the above I can tell that the rollout for feature_management has changed and that the feature has been disabled since featureEnabled was set to false and status went from Running to Paused.
Getting started was easy. It was trivial to set up the Lambda and the webhooks. The hardest part was figuring out the permissions for the lambda.
*Since the datafile is not changed often and the lambda only publishes to a Slack channel, this implementation is a low-cost solution. You could use this type of setup to notify your developer channel of project file changes, or even QA for staging and debugging before release.
For example, the developers may have gated features and initially had them set to false. When it’s time to flip on the features that need to be tested, your QA team could be notified in a channel and begin testing. This could actually be dogfooded company wide not just for QA.
*Using the webhook, you could also post notifications to your servers via a service such as AWS Simple Notification Service. This would work nicely within AWS. Your Elastic Beanstalk instance spins up and subscribes to an SNS topic. When a notification comes in, your server instance would just read the latest file from DynamoDB or directly from the SNS message. But, that’s a blog for another time.
Implementation
This document assumes you already have an AWS account and know how to create an Optimizely project. Once you log into the AWS account console you can easily find the lambda section. The four areas we will touch in the AWS console are the lambda function, DynamoDB, the IAM console, and CloudWatch for logs.
Above is a basic breakdown of what the webhook lambda will look like. This is the Design view of the lambda. When you click on the individual components, the views below it change for the appropriate module (such as the lambda showing the code). In the example below we can talk to the API Gateway, access logs, and connect with DynamoDB.
The first thing we need to do is create the webhook forwarder. To do this, you can use a lambda template. Go to the AWS Lambda Management Console and create a function using the blueprint microservice-http-endpoint. The microservice-http-endpoint blueprint gives you a setup for API Gateway and DynamoDB using Node.js.
Name your lambda and create a new role with “Simple Microservice permissions.” Also, create and name your new API endpoint. Don’t worry about the code right now. We’ll just create the lambda to start.
Edit your DynamoDB permissions by accessing the IAM console and adding “Create table” permission.
*It may be a good idea to create the DB beforehand and just use that DB instead of allowing the lambda to create it. I decided to have the lambda create the DynamoDB table.
Ok, now we can look at the code. The template code provided by AWS shows you how to update, delete, and insert into a DynamoDB instance using various HTTP methods (POST, GET, PUT). We only care about making POST calls, so let’s start there.
First, change the example code so that you only switch on POST (and, of course, the default case). Then, let’s just print out the webhook project id and url from the Optimizely webhook payload.
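A minimal Python sketch of that change (the article’s actual lambda is Node.js; the 'httpMethod' field follows API Gateway proxy events, and the payload fields follow Optimizely’s documented webhook):

import json

def handler(event, context):
    # API Gateway proxy events report the HTTP verb in 'httpMethod'
    if event.get("httpMethod") != "POST":
        return {"statusCode": 405, "body": "Method not allowed"}
    payload = json.loads(event.get("body") or "{}")
    # Print the project id and datafile URL from the webhook payload
    print(payload.get("project_id"), payload.get("data", {}).get("cdn_url"))
    return {"statusCode": 200, "body": "ok"}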
Next, create a test using the API Gateway test template (upper right hand corner next to the Test button) and replace the following as your body property of the payload:
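Optimizely’s documented datafile webhook payload has roughly this shape (all values below are placeholders):

{
  "project_id": 1234,
  "timestamp": 1463602412,
  "event": "project.datafile_updated",
  "data": {
    "revision": 1,
    "origin_url": "https://optimizely.s3.amazonaws.com/json/1234.json",
    "cdn_url": "https://cdn.optimizely.com/json/1234.json"
  }
}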
Validate that your endpoint is working correctly. Now, let’s register our webhook. Rather than run through the whole process, this help article here covers how to register your webhook with Optimizely. Your webhook URL is your API Gateway Invoke URL that is available by clicking in the API Gateway Button in the Design view of your lambda.
Now, you can test this by actually updating your project and looking at your logs. You should see your URL there. Next, you need to add your secret key and test for that in the payload. So, create an environment variable and add your secret key there. You will see environment variables below your coding view. Below is a snippet of code from our lambda showing the testing for the secret key:
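The snippet itself is Node.js in the original; as a minimal Python sketch of the same check (the header and environment variable names here are assumptions):

import hmac
import os

def is_authorized(event):
    # The secret registered with Optimizely, stored as an environment variable
    expected = os.environ.get("OPTIMIZELY_SECRET", "")
    received = (event.get("headers") or {}).get("X-Optimizely-Secret", "")
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, received)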
If the secret key and the header don’t match, don’t honor the request. Keep in mind that if you are servicing multiple projects then you need a way to either have multiple secret keys or not use secret keys. One way to handle multiple keys would be to store projectId:secret-key value pairs in multiple environment variables or in a single environment variable.
This document includes the gist of the lambda so you can look through all the code at one time.
So, we have our URL and we can tell if an incoming event is legitimate; now let’s process the request.
When a request comes in we will:
1. Load the new datafile from the webhook payload.
2. Look to see if the datafile exists in DynamoDB.
3. If it does exist, use it to find the latest differences; otherwise, store the current copy in DynamoDB and report that nothing has changed yet.
4. Finally, we do a diff, update the datafile in DynamoDB, and publish the difference to the Slack channel.
First, the way I implemented the diff of the datafile might not be the best. It uses recursion, since we know the datafile is relatively small. It tries to print messages that tell what actually changed. You may want to replace that with an npm JSON-diff package or tweak the existing code to your needs.
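To illustrate the recursive approach described above, a Python sketch of the idea (not the article’s actual JavaScript):

def diff(old, new, path=""):
    # Recursively walk both JSON structures and collect human-readable changes
    messages = []
    if isinstance(old, dict) and isinstance(new, dict):
        for key in sorted(set(old) | set(new)):
            messages += diff(old.get(key), new.get(key), f"{path}/{key}")
    elif isinstance(old, list) and isinstance(new, list):
        for i, (o, n) in enumerate(zip(old, new)):
            messages += diff(o, n, f"{path}[{i}]")
        if len(old) != len(new):
            messages.append(f"{path}: length {len(old)} -> {len(new)}")
    elif old != new:
        messages.append(f"{path}: {old!r} -> {new!r}")
    return messages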
I’m not going to go through each function. But, I would like to discuss how we setup the webhook publish to Slack. If you are logged into Slack it’s really easy. You can just open up a browser and go to https://api.slack.com/incoming-webhooks. Or, you can go directly to https://my.slack.com/services/new/incoming-webhook/. Enter your channel and create a Slack webhook. In the sendWebhook call in our lambda, you will add that webhook to the appropriate area. Notice that the send portion is just the last part of the url.
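For illustration, posting the diff with Python’s standard library looks roughly like this (the article’s actual sendWebhook is Node.js, and the URL below is the placeholder Slack generates for you):

import json
import urllib.request

def send_to_slack(text):
    url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    req = urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack replies with "ok" on success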
Conclusion
Setting up a webhook to notify the team of project configuration changes helped me to know when we were potentially stepping on each others’ toes. The lambda webhook can also be used to notify your servers, as well as send a Slack message. Using lambda functions for webhooks is a powerful tool in project management.
I hope that this document makes it easy to understand and create a webhook lambda to digest through Slack. Finally, you can find all of my index.js code here on GitHub. You can simply replace your code with the provided index.js, add your YOUR_KEY_HERE from Slack, and you’re ready to go. Don’t forget to add your secret key as an environment variable. Happy Coding! | https://medium.com/product-experimentation/tutorial-get-alerted-when-feature-flags-change-via-aws-lambda-and-webhooks-2b2ba1cb2447 | ['Tom Zurkan'] | 2018-07-31 01:16:45.775000+00:00 | ['Feature Flags', 'Optimizely', 'Experiment', 'AWS', 'Startup'] |
1,728 | 10 Efficient Ways to Use Python Lists | Copy List by Value
There are many ways to copy a list, but using an assignment operator isn’t one of them. Let’s confirm this:
>>> a = [1, 2, 3, 4, 5]
>>> b = a
>>> id(a)
4345924656
>>> id(b)
4345924656
The assignment just creates a second reference to the list a. This implies both names now point to the same list in memory, and any change made through one would affect the other.
Following are some possible ways to create a standalone “shallow” copy of a Python list, ranked from the most efficient to the least in terms of speed:
b = [*a]
b = a * 1
b = a[:]
b = a.copy() (Python 3 — shallow copy)
b = [x for x in a]
b = copy.copy(a) (Python 2)
While the speeds of these approaches are comparable, sometimes doing a deepcopy (which is obviously the slowest and most memory-hungry approach) is unavoidable.
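If you want to verify the speed ranking on your own machine, a quick sketch with the standard timeit module (the list size and iteration count are arbitrary):

import timeit

setup = "a = list(range(1000))"
for stmt in ("b = [*a]", "b = a * 1", "b = a[:]", "b = a.copy()", "b = [x for x in a]"):
    # Time 100,000 shallow copies of a 1,000-element list
    print(stmt, timeit.timeit(stmt, setup=setup, number=100_000))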
Unlike a deep copy, a shallow copy doesn’t do a clone of the nested objects. Instead, it just copies the reference of the nested objects. Let’s look at the following example to validate this:
>>> a = [[0,1],[2,3]]
>>> b = [*a]
>>> a[1][0] = 5 #Output of b: [[0, 1], [5, 3]]
Updating the nested list element a[1][0] = 5 changes the list b as well. In such scenarios where we aren’t using a 1D list, the following ways work best for doing a deep copy of all the list elements: | https://medium.com/better-programming/10-efficient-ways-to-use-python-lists-f6e7e666708 | ['Anupam Chugh'] | 2020-04-13 16:08:22.391000+00:00 | ['Software Engineering', 'Software Development', 'Python', 'Data Science', 'Programming'] |
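Picking up the example that was cut off at the colon above: the standard copy module’s deepcopy clones the nested objects too, so the copy is fully independent. A minimal sketch:

>>> import copy
>>> a = [[0, 1], [2, 3]]
>>> b = copy.deepcopy(a)
>>> a[1][0] = 5
>>> b
[[0, 1], [2, 3]]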
1,729 | The Not-So-Smelly Truths of Washing Like the French | It’s easy to fall for the City of Love.
Just don’t expect it to always smell like a fresh bouquet of roses.
This is especially true in the summertime, on crowded metros when the smell of body odor mixed with cigarette smoke can knock you over faster than you can say “Mon Dieu, d’où vient cette odeur??” (translation: where on God’s green earth is that rank smell coming from)
As Americans, it’s hard for us to understand (naive teenagers aside) how anyone can stand body odor.
Who showers better?
An AOL Health poll conducted in 2009 revealed that 65 percent of Americans shower or bathe every day, while 4 percent shower more than once every day.
Contrast this with the French, who, while not too far off from the American average, still manage to make showering less of a thing.
In a recent poll conducted in France, only 57% of respondents reported showering daily.
From the local climate to energy costs to advertising, there are numerous factors as to why one culture showers more than another.
The Atlantic Journalist James Hamblin recently reported that we waste nearly two years of our lives washing:
“12,167 hours... That’s how much life you use, if you spend 20 minutes per day washing and moisturizing your skin and hair (and you live to be 100, as we all surely will). That adds up to nearly two entire years of washing every waking hour.”
If you consider that the average American shower, according to Home Water Works, takes 17.2 gallons of water, you begin to better understand the French perspective.
As one American teacher who regularly teaches study abroad in France pointed out in a New York Times Opinion Piece:
“Having spent a number of summers teaching… [I’ve] witnessed enough conflicts between students and the [French] families they live with to know that cleanliness is an endless source of cultural misunderstanding. [American] students feel that one can’t shower enough and that clothing should never be worn more than once, whereas their French lodgers worry about water bills and can’t understand why anyone would want to shower every day.”
While the occasional bout of body odor might be disagreeable in France, it’s preferred to not being able to afford your energy bill at the end of the month.
Should we shower every day?
Sweat, believe it or not, is actually odorless. It is pumped out of your body by two different types of irrigation systems: eccrine glands and apocrine glands.
Eccrine glands are found all over your body. As your body temperature rises, these glands act as an internal water cooler, releasing fluids to cool off your body. If ever you start sweating in areas you didn’t know you could sweat, you can thank the eccrine glands.
In general, eccrine glands do not contribute to body odor. It’s your other irrigation system, the apocrine glands, that make the big stink.
Apocrine glands are found on the parts of your body where you naturally have hair (namely your armpits and groin). If it helps, think of them as the trickling streams running through forests of body hair.
These trickling streams flow freely with a milky fluid when your body is under stress (emotional, mental, and physical). Again, this milky fluid by itself is largely odorless. The foul-smelling reaction occurs when the sweat comes in contact with the bacteria on the skin.
As you can tell from the forest of hairs analogy, I’m not a doctor nor do I play one on Medium. That being said, I’m willing to bet that if you only shower once every two days, like 24% of the French people you meet, there is going to be more bacteria on your body.
Which, contrary to what your nose might tell your brain, is not necessarily a bad thing.
Our skin is designed to maintain a layer of oils, bacteria, and other microorganisms.
When we hop in the shower or bath, we wipe our skin clean of these positive protective influences.
The side effects, as reported by Harvard Medical School, can be far more damaging than an occasional bout with body odor:
Skin may become dry, irritated, or itchy.
Dry, cracked skin allows bacteria and allergens to more easily creep in, allowing skin infections and allergic reactions to occur.
Antibacterial soaps can actually kill off normal bacteria, upsetting the balance of microorganisms on the skin.
Frequent baths or showers throughout a lifetime may reduce the ability of the immune system to do its job.
Perhaps instead of recoiling at the smell of body odor on the French metro and elsewhere, we could recognize the possible factors leading that person to smell the way they do: including saving the planet, being healthier, or affording rent at the beginning of next month.
We don’t have to fully embrace them (especially if they’re sweaty), but we should learn to appreciate them. | https://medium.com/mindtrip/an-american-nose-in-paris-9a9dc3ae84af | ['Dave Smurthwaite'] | 2020-02-11 07:35:41.315000+00:00 | ['Travel', 'Wellness', 'France', 'Health', 'World'] |
1,730 | Are You an Abusive Person? | Most of us don’t want to be an abusive or toxic person. We don’t want to hear someone we care about say that we hurt them. We don’t want to end up estranged from those we care about.
No one wants to read an email, letter or blog post detailing how just being themselves has hurt and driven away someone they love.
Being an abusive person will mean you hurt the people that are closest to you. Imagine destroying the love someone has for you by being an abusive nightmare. Who wants to face up to that?
Being the toxic person in a relationship or family is like playing pass the parcel with a hot brick. No one wants to be left holding it. Still, abusive people exist, so it’s got to be someone. You hopefully already know that physical abuse is bad but people frequently hurt others without getting physical.
If you value your relationships you should be worried if you regularly display any of these behaviors:
You don’t care how your behavior affects others.
If certain political topics make people around you uncomfortable you still bring them up relentlessly. Why should you pander to people when it comes to what you think? If you’re asked not to make derogatory comments on someone’s weight, you go out of your way to do it. Someone asks you not to swear? Fuck that.
If someone politely asks you to modify your behavior and you flat out refuse, that should tell you something.
You don’t understand the word “inappropriate”.
Most of us worry that we’ve done something inappropriate from time to time. A red flag for abusive behavior is that people are constantly telling you that you’re being inappropriate.
They constantly tell you and you constantly don’t care, so you never change.
This is a sign that you like to do things your way and anyone who disagrees with you is wrong. This attitude could quite easily lead you to be abusive towards people.
You blame others for your bad behavior.
You raised your voice because someone annoyed you. You caused a scene at that graduation dinner because no one talked about what interests you. You insulted your daughter’s boyfriend because his dress sense irritated you. You threw away your wife’s photo albums because she doesn’t keep the kitchen tidy enough.
Do you get into these kinds of arguments with the people close to you?
This is a double whammy of abusive behavior. You’re not only upsetting someone but when challenged you insist they brought it on themselves.
This can start out as you just wanting to avoid the unpleasant truth that you’ve upset someone, but it can very quickly become controlling behavior. If everyone around you does as they’re told, everything will be OK. If they don’t, then they sort of asked for you to get angry.
Treating someone badly in order to punish them for something you think they did wrong is abusive. There is nothing healthy about taking punitive action against someone you are supposed to care about. You’re basically telling the other person that if they slip up in your eyes they get hurt in some way. This kind of attitude will make you toxic to be around, it doesn’t matter if you’re making excuses or you really believe your justification. The effect on your loved one will still be damaging.
If this point applies to you then it’s a worrying one. It’s a sign that you have very poor communication skills and struggle to manage your emotions. The potential for you to be or become abusive is quite high.
When someone tells you how they feel, you feel confused.
This one is quite simple. If you can’t understand other people’s feelings it’s a sign that you’re very fixated on your own.
Do you mind not turning every conversation around to the election? I’m getting tired of it.
Why not? I enjoy it.
Making rude remarks about my weight hurts my feelings.
Well it’s all true. If you can’t handle the truth that’s your problem.
You know I’m vegetarian, I don’t want to eat that.
Being vegetarian is a stupid fad, I cooked meat, eat it or go without.
That’s a terrible thing to say.
You’re too sensitive, you’re just being silly.
If other people’s feelings do not compute for you then that’s a big red flag.
Do you think how you feel about something should dominate another person’s life?
If so, you’re likely causing offense, upset and resentment on a very regular basis.
You see emotional expression as weakness.
Toxic and abusive people don’t like it when people express how they feel. They see this as weak rather than as an attempt to create peace and harmony in the relationship.
If you carry on long enough with this attitude you alienate your loved ones. They won’t bother trying to connect with you on an emotional level. Your relationships with them will feel strained and distant. Bear in mind, they’ll still form emotional connections with people, just not with you.
Showing emotional vulnerability, as well as giving and receiving emotional support is integral to a close relationship. However, no one said it was easy.
If you want happy, close relationships you can’t opt out of vital aspects of them.
You think your abusive background was fine.
“It never did me any harm.”
Whenever someone utters this phrase they make it plainly obvious that whatever they’re talking about did in fact do them harm.
This attitude is a really strong indication that people around you are going to end up suffering because of you.
Firstly this phrase is almost always a response to someone explaining that they are upset by something. This phrase then totally shuts them down. That’s not conducive to a good relationship.
Secondly this train of thought is usually followed up with justification by comparison.
You didn’t give the kids lunch.
I used to go days at a time without food, it never did me any harm.
I can’t believe you ignored me for the whole day.
My father once didn’t talk to me for a week.
This kind of poor communication turns an attempt to discuss and resolve something into a competition you always win.
You’re not really saying that you think it’s fine to not give the kids their lunch. You’re saying you refuse to discuss the matter and you’ll do what you want.
~
Good relationships are the cornerstone of a happy life. If you recognize yourself in some of the points here then you need to take action. Don’t be that person that harms loved ones and ultimately pushes them away.
Maybe you aren’t good at communication or understanding emotions. No one is good at everything and all of us can learn.
If you are on the receiving end of some of the behaviors listed here then you need to recognize this could be abuse. If the person displaying these behaviors doesn’t agree to change then you need to step away from them in order to protect yourself. | https://medium.com/swlh/are-you-an-abusive-person-3a448dc0d02f | ['Stef Hill'] | 2020-02-27 12:04:16.509000+00:00 | ['Self-awareness', 'Relationships', 'Mental Health', 'Self', 'Abuse'] |
1,731 | Why I’m Saying Goodbye to Caffeine | Why I’m Saying Goodbye to Caffeine
I’ve quit alcohol and tobacco, but can I kick caffeine too?
Photo by Sebastián León Prado on Unsplash
This year I’m aiming for a goal that I never expected to set: Going completely caffeine free.
How Did I Get to This Point?
I’ve been a proud caffeine drinker for most of my life. It started with soda in high school, but in recent years coffee has joined the mix too.
My caffeine consumption really took off three years ago when I quit drinking alcohol. Before getting sober, I spent my evenings with beer bottles practically attached to my palms. After quitting, it helped to have non-alcoholic drinks on hand to replace the habit. Unfortunately, most of those drinks were caffeinated.
Although caffeine is nowhere near as bad as alcohol (and not even in the same ballpark), I certainly can’t deny that I had started drinking unhealthy amounts of it. As of last year, I was up to about 4 cups of coffee and 4 sodas per day. That’s around 500 mg of caffeine — not an insane amount, but 100 mg more than the Mayo Clinic’s recommended maximum.
I could also tell that I had at least a minor addiction to caffeine, because I felt like I needed it to start my mornings, and would get headaches in the afternoon when I skipped it.
Despite that, this time last year, I still hadn’t even considered quitting. I thought that caffeine was just such a minor problem compared to alcohol that it was ridiculous for me to think twice about it. I also had a third, more serious addiction to contend with: cigarettes.
Last year, instead of worrying at all about caffeine, I was entirely focused on quitting cigarettes. I went on and off of nicotine patches for months, and finally quit all forms of nicotine last September.
In the end, it was quitting smoking that actually led me to this year’s goal of quitting caffeine.
After quitting smoking, my sleep habits became a total mess. I was staying up late, sleeping only a few hours some nights and 10 hours or more others. I started waking up multiple times a night, a problem that I haven’t had since my drinking days.
At first I wrote all this off as nicotine withdrawal symptoms, but after it lasted for a couple of months, I looked into it more seriously. I learned that caffeine was the most likely culprit.
Nicotine causes the body to process caffeine more quickly, so smokers end up feeling less of the effects of caffeine than non-smokers.
When I had quit smoking, I had actually started drinking a little more coffee than usual. To add to that, my body was no longer processing caffeine at an accelerated rate, so each coffee was having a stronger effect than I was used to. That’s certainly enough to throw off my sleep.
I immediately decided to cut down on my caffeine, and I started seeing improvements in my sleep from the very first day.
From Cutting Down to Quitting
I cut down on caffeine to address the sleep problems I experienced after quitting smoking, and — sure enough — it fixed them. So why didn’t I stop there? Why did I decide to quit completely?
I had always thought of caffeine as a benign addiction, but while reading up on how it was affecting my sleep, I discovered that it actually has much more harmful side effects than I ever realized.
My starting place for learning about caffeine was simply reading anecdotes of people who had quit caffeine on Reddit. Although anecdotes aren’t necessarily scientifically reliable, my experiences with quitting drinking and smoking have taught me that reading about others’ addictions can be an important learning tool.
To my surprise, one of the most commonly reported symptoms of caffeine use is high levels of anxiety. Anxiety is something that I’d struggled with for most of my life, and it never once occurred to me that caffeine could in any way be related. Over years of going to therapy, I don’t remember a therapist ever asking me how much caffeine I drank.
Reddit users described experiencing anxiety for years, then having it disappear (or at least decrease) after they quit caffeine. Some said the changes took a few months to take effect; others reported noticing a difference in just days.
Reading through these stories, I was hopeful, but also a bit skeptical. A forum for people quitting caffeine was bound to be a biased source, and I wondered if the placebo effect was playing a role as well.
I looked for actual research studies to back up the claims I was reading on Reddit. In short, the studies agree: caffeine causes anxiety.
Does that mean quitting caffeine will eliminate all anxiety? Of course not, but it could potentially help.
After reading anecdotes about the improved anxiety levels, and finding a few research studies to back it up, I decided I owed it to myself to at least try a life without caffeine.
Although I don’t expect my anxiety to disappear entirely, even a small reduction would make this experiment worth it.
Quitting and Withdrawal
I’m now on day four without any caffeine. Almost everything I read recommended weaning off caffeine instead of quitting cold-turkey, so that’s exactly what I did.
I had already been down to about three cups of coffee a day (and no soda). I noticed that the new year was right around the corner, so I timed my reduction so that I’d hit my first zero-cup day on January 1st. For two days I drank two cups of coffee a day, then for two days I drank one cup a day.
Even by the time I was down to one cup of coffee a day, I was already feeling a bit of confusion. My mind was so used to operating on constant caffeine, that having just one cup in the morning was really throwing me off.
On January 1st, my first day without caffeine, the confusion got much worse, and I had a very bad headache as well. These both got even worse on the second day, but mostly cleared up by day three.
Today, day four, the withdrawal feelings are essentially gone. Now that it’s over, I can say it wasn’t a fun experience, but it wasn’t too hard at all compared to quitting alcohol or nicotine.
From what I’ve read, if I had done a more gradual weaning schedule, I could have avoided withdrawal side effects entirely. I think I just got a little too carried away with the idea of quitting right on New Year’s Day.
As far as cravings, I’ve been feeling mild ones all day. Again though, it’s really nothing at all compared to quitting alcohol and nicotine. Those experiences definitely left me over-prepared for this one.
My goal is to last at least this entire year without caffeine, but ideally I’d like to just stay off of it for good. I think my most likely cause of failure will be just not taking it seriously enough.
Caffeine isn’t nearly as detrimental as my other vices had been, and so I don’t feel as strongly about quitting. It feels like the stakes just aren’t high as far as caffeine is concerned.
But, with that said, I really do think that the benefit to my anxiety could be huge, and I’m trying to keep that in mind. Even if caffeine isn’t the worst habit I’ve ever had, it was certainly not doing me any good.
The Benefits
So, after four days without caffeine, have I noticed any reduction in anxiety?
Maybe this is just the placebo talking, but yeah, I really have. I actually feel great today, in a way that I haven’t in months. I know it’s too soon to tell for sure whether this is the lack of caffeine or just a coincidence, but things are off to a great start. | https://medium.com/the-ascent/why-im-saying-goodbye-to-caffeine-dcd7013e22e6 | ['Benya Clark'] | 2020-01-09 13:11:01.345000+00:00 | ['Addiction', 'Mental Health', 'Anxiety', 'Health', 'Lifestyle'] |
1,732 | How to Outsmart a Plague | The Skeptics — “You’re not going to get sick and nobody you know is going to get sick.”
Trump is one of many downplaying the size of the outbreak based on current infection numbers.
People are pointing to the still-quite-low total numbers of cases of infection as a reason to write off the virus as non-newsworthy.
However, what they’re failing to account for is COVID-19’s high level of contagion, the difficulty of detecting it, the lack of effective testing, and the absence of a vaccine. All of these factors increase the overall danger of this pandemic, especially now that it is being reported in most of the world’s countries.
And, counterintuitively, the fact that it doesn’t have the nightmare-inducing mortality rate of other diseases of recent memory actually means it is more effective at spreading to massive numbers of people. If it doesn’t kill you or confine you to your bed, you’re out in the world and much more able to share the bug with your fellow citizens.
Notice that coronavirus has already jumped to the second most lethal on this list overall, and the outbreak is only just beginning. (source)
What’s more, in comparing coronavirus to the common seasonal flu, which itself kills tens of thousands of Americans each year, coronavirus is estimated to be 1.5–2.3 times more infectious and 10–50 times more lethal. Please take a moment to let that sink in.
As of March 11. 2020, these are the countries reporting cases of coronavirus. (source — CDC website)
This thing is just getting started, and the window to stop the spread is narrowing very rapidly, if it is even still open at all.
Harvard epidemiologist Marc Lipsitch thinks there’s no way to stop this thing from spreading far more significantly, and he predicts that, without strong countermeasures, “between [20–60%] of the world’s adult population could end up infected with coronavirus.” That’s a number in the billions. Even with a lower-end mortality rate estimate of 1%, that means tens of millions of deaths, at minimum. If this thing isn’t slowed down, and if just about everyone gets this bug, you will almost certainly know people who will die.
Essentially, left unchecked, we could see the number of cases multiply by a factor of 10 every two weeks. That adds up to a very large number very quickly.
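To make that arithmetic concrete (these numbers are purely illustrative): start with 1,000 cases and let them grow tenfold every two weeks. That gives 10,000 cases after two weeks, 100,000 after four, 1,000,000 after six, and 10,000,000 after eight. In other words, unchecked growth at that rate turns a rounding error into a national crisis in about two months.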
Maybe that seems far-fetched. You might point, as Trump did in the tweet above, to the fact that we’re still at only about .001% infection rate for the global population, and even less in the United States. But one must understand, here, how exponential growth works. This video provides an incredibly helpful visualization of this phenomenon.
One bit of hope comes from the possibility that the coming warmer weather might help slow or delay the outbreak, but even that is highly uncertain. It all comes down to the growth rate, as in how quickly and unrestrictedly the virus is allowed to spread.
We’re in a situation of when, not if, this deadly disease spreads, and to what extent. At this point, many more people will die regardless of what we do, but the video above should hopefully make it crystal clear just how incredibly important efforts to contain and slow the virus are. | https://medium.com/basic-income/how-to-outsmart-a-plague-8c55c442eae4 | ['Conrad Shaw'] | 2020-03-21 15:25:42.289000+00:00 | ['Economics', 'Disease', 'Health', 'Basic Income', 'Coronavirus'] |
1,733 | The Cardio of Audio | STRUCTURED vs UNSTRUCTURED DATA
Structured data usually lives in an RDBMS or another database where you can easily search records, see the numbers, and compare them. For example, a record can have a name, an ID, a date of birth, a salary, an address, etc. The data is arranged in a structured, tabular format, and it's simple to work with.
Unstructured data comprises audio, text, images, etc. Around 80% of enterprise data is stored in an unstructured format. It is not easy to work with because we can't directly use the data stored in an image or an audio file. In this article, we will be focusing mainly on audio data.
AUDIO DATA
The human brain is continuously perceiving the audio around us. We hear birds chirping, traffic racing by, the wind blowing, and people speaking. We have devices to store all this data in various formats like MP3, WAV, WMA, etc. Now, what else can we do with this data?
For working with unstructured data like this, deep learning techniques are your best bet.
First, we see what audio looks like.
Audio is represented in the form of waves, where the amplitude of the wave varies over time.
AUDIO SAMPLING
It is important to understand sampling because sounds are continuous analog signals, and when we convert them into a digital signal, that signal is composed of discrete data points taken from the analog signal. This process is called sampling, and the rate at which sampling is done is called the sample rate. It is measured in Hz (Hertz). Audio with a 48kHz sample rate means the audio was sampled at 48,000 data points per second. During sampling, a little bit of the information is lost.
LibROSA
LibROSA is a popular Python package used for music and audio analysis. It has the building blocks needed to create a music information retrieval system.
To install the package with pip, you can run this command in your terminal:
pip install librosa
Load an Audio File
We will load a 23-second audio file of a dog barking.
import librosa

data, sample_rate = librosa.load("dog bark.wav")
The load method of librosa takes the path of the audio file and returns a tuple containing the sampled audio data and the sample rate. The default sample rate is 22050 Hz. You can also specify a custom sample rate as an argument. To keep the file's original sample rate, we use sr=None:
data, sample_rate = librosa.load("dog bark.wav", sr=None)
Let us see what is in the data-
print(data.shape, data)
print(sample_rate)
Output:
(1049502,) [ 0.00019836 -0.00036621 0.00016785 ... 0.00099182 0.00161743 0.00135803]
44100
The data is a NumPy array with 1,049,502 data points, and the original sample rate is 44100 Hz. Scaling down the sample rate reduces the amount of data, so we can perform operations faster, but scaling down too much will also result in some information loss.
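As a quick sanity check (this snippet is my addition, not from the original code, and it reuses the data, sample_rate, and librosa import from above), the clip duration follows directly from the array length and the sample rate, and reloading at a lower rate shows the data shrinking:

# duration = samples / samples-per-second: 1049502 / 44100 ≈ 23.8 seconds
print(len(data) / sample_rate)

# reloading at a lower sample rate resamples the audio to fewer data points
data_8k, sr_8k = librosa.load("dog bark.wav", sr=8000)
print(data_8k.shape)  # roughly 1049502 * (8000 / 44100) points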
Displaying the Audio Data
Librosa has a display module that plots you a graph of the data-
import librosa.display

librosa.display.waveplot(data, sr=sample_rate)  # pass sr so the time axis matches the 44100 Hz data
Output:
This is what the barking of a dog looks like. Now, with the sampled data and the sample rate, we can extract features from the audio.
Feature extractions for Machine Learning
There are various methods and techniques to extract audio features. These are-
Time-domain Features
Zero-Crossing Rate -
If you look at the audio plot above, you can see the sampled data lies between -1 and 1. The zero-crossing rate is the rate at which these values change sign, i.e., cross from a negative value to a positive value or back. This is used heavily in speech recognition and music information retrieval.
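A minimal sketch of computing this with librosa (reusing the data array loaded earlier; the frame settings are librosa defaults):

zcr = librosa.feature.zero_crossing_rate(data)
print(zcr.shape)   # (1, number_of_frames): one rate per analysis frame
print(zcr.mean())  # average fraction of sign changes per frame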
Spectral Features
Spectral Centroid -
It indicates the "brightness" of a given sound; it represents the spectral center of gravity. Suppose you are trying to balance a pencil on your finger: the spectral centroid would be the frequency where your finger touches the pencil when it's balanced.
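In librosa, this is one call (again reusing data and sample_rate from the load example above):

centroid = librosa.feature.spectral_centroid(y=data, sr=sample_rate)
print(centroid.shape)  # (1, number_of_frames): centroid frequency in Hz per frame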
Spectral Rolloff -
Spectral roll-off is the frequency in Hz below which a predefined percentage (roll_percent) of the total spectral energy lies; this percentage is 85% by default in the librosa library.
This feature is useful for distinguishing voiced signals from unvoiced ones. It is also good for approximating the minimum or maximum frequency by setting roll_percent close to 0 or close to 1 (e.g., 0.01 or 0.99).
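A short sketch, with roll_percent spelled out explicitly (0.85 is just the librosa default restated):

# values near 1.0 (e.g., 0.99) or near 0 (e.g., 0.01) approximate the max/min frequency
rolloff = librosa.feature.spectral_rolloff(y=data, sr=sample_rate, roll_percent=0.85)
print(rolloff.shape)  # (1, number_of_frames): roll-off frequency in Hz per frame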
Perceptual Features
MFCC — Mel-Frequency Cepstral Coefficients -
Each individual voice sounds different because it is filtered by our vocal tract, including the tongue, teeth, etc. The shape of the tract decides how the voice sounds, and by determining that shape accurately we can identify the sound it will produce. The job of the MFCCs is to determine the shape of the vocal tract and represent it as a power spectrum.
MFCCs are the most used feature in audio and speech recognition. They were introduced in 1980 and have been the state of the art ever since.
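Extracting them with librosa looks like this (reusing data and sample_rate from above; n_mfcc=13 is a common choice, not a requirement):

mfccs = librosa.feature.mfcc(y=data, sr=sample_rate, n_mfcc=13)
print(mfccs.shape)  # (13, number_of_frames): one row per coefficient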
CONCLUSION
The presence of unstructured data on the internet is huge. It's not an easy task to analyze unstructured data, as we have to perform a lot of transformations on the data to extract features. Audio features fall into three categories: time-domain, spectral, and perceptual features. | https://towardsdatascience.com/the-cardio-of-audio-cbe310d94b48 | ['Rinu Gour'] | 2019-11-23 15:18:58.858000+00:00 | ['Big Data', 'Python', 'Data', 'Data Science', 'Machine Learning'] |
1,734 | The Shaw Alphabet and Other Quixotic Solutions I Love | From Wikimedia Commons
Some, like me, have a Quixotic love for rational solutions. A rational love for rational solutions is a love for rational solutions that work, but some love a rational solution that would work if only people were not so damn stubborn and set in their ways.
Let’s call such people “Panzas.” Panzas are able to see that a solution, though fascinating, is not really accepted, but they love it anyway.
My name is Michael. I am a Panza. Here I tell of some of my loves, which I’ll list in groups:
Those that have not yet taken off (thinking positively here);
Those that have found a niche (some on the way down, some on the way up, some just holding steady, but all secure within the niche);
Those that are starting to achieve lift-off;
Those, still beloved by Panzas, that seem to be losing ground; and
A success story: from the outer edges, when it was known only to (and loved only by) Panzas, but now is mainstream
You will surely have your own nominees, but these are ones I’ve followed.
Have not yet taken off
Panzas live in hope (our namesake, you will recall, did indeed finally have his hopes realized and received his promised island to rule), but in some cases hope struggles for breath. Here are some of those for which hopes are somewhat dim.
The Shaw alphabet was created per a bequest in George Bernard Shaw's will to develop a phonetic alphabet, distinct from the Roman alphabet, of at least 40 characters. A competition was held, overseen by Pitman (of Pitman shorthand), and the final result was developed based on the designs of the four contest winners. One book, The Shaw Alphabet Edition of Androcles and the Lion, was written with parallel text in the Roman alphabet and the Shaw alphabet. The Shaw alphabet is more compact, taking up about 1/3 the room of the Roman transliteration of the same text. The book was published in 1962 and not since. I bought a handful of copies and sent them to friends, whom I then plagued with letters written using the Shaw alphabet. Of late, the title of the establishing book has been ignored in favor of the "Shavian" alphabet, presumably to parade one's education.
It’s a lovely alphabet, but phonetic alphabets don’t work well when regional pronunciations vary so widely. Shaw did say that the text of Androcles and the Lion should be written using the British Received Pronunciation as spoken by King George VI. Since I come from a region where “pen” and “pin” are pronounced the same, I fear I sadly mangled the spelling. But I loved the idea (which some pronounce “idear”).
The HK G11 assault rifle is another lovely idea that unfortunately was not adopted. It is a robust weapon that uses caseless ammo, which means that a soldier can carry many more rounds of G11 ammo than regular ammo — because without brass shells, ammo weighs, round for round, much less. Moreover, with caseless ammo, no shells are ejected. Ammo is loaded from magazine into firing chamber with a rotating mechanism, which is fast: 3-round bursts fire at the rate of 2000 rounds per minute (thus a 3-round burst takes 90 milliseconds), with recoil felt after the burst is fired. A magazine holds 45 rounds, and the rifle carries one magazine in firing position and two alongside, ready for quick loading: 135 rounds ready to go, in effect.
The rifle initially had some problems with heat causing cook-off of the caseless ammo during sustained fire (brass shells on conventional cartridges provide some insulation), which was fixed, and a later problem of the propellant block being too fragile for field conditions. The big problem, I suspect, was that the rifle was unconventional. The military tends to be extremely conservative and resistant to change (cf. British tactics in the Great War: having troops run across open fields to charge machine-gun emplacements). [Update: another reason for the G11’s not being adopted: a change in the geopolitical situation.]
The Dvorak keyboard layout impressed me so much that for my children I ordered Smith-Corona portable typewriters that had the Dvorak layout — the true Dvorak layout, in which "?" does not require using the shift key. (The committee developing the ANSI-standard Dvorak layout were told that they had to stick with the existing keycaps, which have "?" as an up-shift of "/".) The eldest did indeed learn and use her Dvorak typewriter and found it a great advantage in college because none of her fellow students tried to borrow her typewriter. But even she converted to QWERTY in time. So it goes. Still, people continue to work to improve the QWERTY layout, and one — the Workman keyboard layout — seems quite interesting (though the "?" requires using the shift key, for reasons unclear to me — possibly the existing-keycaps curse). The big benefit of the Dvorak keyboard is increased typing comfort, with less finger travel and a better balanced workload for the hands.
The Fitaly keyboard layout is optimized for one-finger typing (as on a touchscreen) or for typing using a stylus. I used the Fitaly layout for some years when I had a Palm Pilot. It worked extremely well. For a touchscreen keyboard in, say, a public library, that is used constantly by first-time users, the QWERTY layout is better because it’s familiar, but for a keyboard layout used repeatedly by the same person, the Fitaly is much better because, once it’s learned, it results in much faster and more accurate entry. I loved it. With practice, many people could type 50wpm with the Fitaly keyboard, with the best reaching 84wpm using the Fitaly layout on a Treo Thumboard. I so wish the Fitaly were available for the iPhone, and I really don’t see why it isn’t, except it has the usual curse: it’s different.
Esperanto, a language constructed to be easy to learn to serve as a common second language for all (with no political overtones from being a national language), is still active, but I think most will agree that it has not sustained the momentum it had until interrupted by the Great War. Still, it's around (and easily learned on-line), and it has proven to work, both for communication among speakers whose only common language is Esperanto and as a first foreign language, since learning Esperanto as one's first foreign language greatly facilitates learning a subsequent foreign language.
For example, one study in Finland (where German is commonly taught in school) showed that students who studied a year of Esperanto followed by two years of German knew more German and were more fluent in German than students who studied German for three years. And it’s a fun language. This post about Ithkuil — another fascinating constructed language that looks considerably less fun, serving as a testbed for a philosophical test of language capabilities— begins by explaining the five reasons Esperanto works so well. So why is Esperanto not commonly taught in schools that teach students a foreign language? We Panzas want to know.
Esperanto as an introductory foreign language perhaps belongs in the next category, but its real goal was to be a universal second language.
Forth is a programming language that includes, among other commands in the language, compiler commands, so that within your program you can define new commands and use them like any other command in the language. Indeed, except for a core of hard-coded commands, most of Forth is written in Forth. You use your new commands along with existing commands to define additional commands, until finally you define a command that is the program — executing that command does the job.
Forth uses a push-down stack to hold data, and arithmetic operations use Reverse Polish Notation: (3+5)*4 in a Forth command is 3 5 + 4 *. A handful of commands allow you to manipulate the stack.
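To make the stack mechanics concrete, here is a small Python sketch of my own (real Forth does this natively, so this is only an illustration) that evaluates the postfix expression above with a push-down stack:

def eval_rpn(tokens):
    # evaluate a Reverse Polish Notation expression using a push-down stack
    stack = []
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # operands are pushed as they arrive
    return stack.pop()

print(eval_rpn("3 5 + 4 *".split()))  # 32.0, i.e. (3 + 5) * 4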
Forth is fast (since it works in a computer’s native language, which consists of addresses) and large Forth programs take less memory than assembly language equivalents, so its main use nowadays (insofar as it is used) is embedded programming for microprocessors. It is very easy to implement and often is the first high-level language brought up on a new microprocessor. And for an individual programmer using the language, it grows (through commands the programmer adds) to be very powerful in addressing the applications the programmer most frequently develops (because the added commands are tailored to those applications). Iterative development is natural: define a command, execute it on the spot, and revise as needed.
The drawback is the name explosion (since command definitions are typically brief, you end up with a lot of command names), which makes Forth not so good for team projects, now the norm. But back in the day, it was great even though (strangely) not widely accepted. Partly, I think, that is because Forth was developed in the field, by programmers, not within a supporting institutional/academic structure.
In a niche
Italic handwriting (aka chancery cursive) is much better than printing or traditional cursive (since italic letter shapes hold together when written at speed, and since italic handwriting is beautiful). When I was teaching at a private elementary day-school, I introduced italic handwriting into the curriculum, and it was a hit. When parents saw the beauty of their children’s handwriting, they felt that their tuition money was showing some results — plus the students loved it. Italic handwriting takes a little practice, but a fountain pen with an italic (or stub) nib does most of the work for you. Italic handwriting is still definitely around and is taught in some schools, but not like back in the day (say, in Elizabethan times — the earlier Elizabethan times).
Traditional shaving, using true lather (made from shaving soap with a shaving brush and water) and a razor that has only one blade (a double-edge safety razor or a straight razor), has fallen from popular favor but still has a cadre of adherents, who like it because their shaves are better than when using canned foam and a multiblade cartridge (or — shudder! — an electric razor) and better in two ways: a better result (smoother, easier on the skin) and a better experience (a shave that’s actually enjoyable) — well, three ways if you include that I spend less than $5 per year on blades.
Starting each day with a pleasurable ritual that improves your mood and appearance and makes you feel squared away and shipshape has a cumulative positive effect. Why don’t all men shave this way? I suspect the answer is “clever (and well-funded) marketing.” The marketing effort costs many millions of dollars, but that’s fine with Gillette and its cohorts since the men buying the products pay the marketing costs.
It is true that a traditional shave with true lather and a double-edge safety razor takes a little more time — at first, enough so that novices shave in the evening, when they don’t have to rush (rushing a shave is a Bad Idea). But with experience a complete shave (wetting brush under hot water, loading brush with soap (10 seconds so far), lathering one’s face, doing a three-pass shave (with the grain, across the grain, against the grain — except for in-grown-prone areas, where the third pass is across the grain in the other direction), lathering before each pass and rinsing after, drying the face and splashing on aftershave) takes a total of 5 minutes. That’s perhaps 2 minutes longer than a shave done with canned foam and a cartridge razor, but in return for the 2 minutes you begin the day doing something you enjoy that improves your mood and thus the character of the day.
Recumbent bikes are definitely around, but not so popular as one would expect. They make so much sense — the sight line is easier on your neck, for instance, and you can exert more force on the pedals than merely your body weight. [Full disclosure: Not only did I have a recumbent bike, I also had a Moulton bike.] Recumbent bikes did not fall into a niche; they started in a niche and in a niche they remain, but perhaps they will break out.
Crokinole solves a particular special problem: to find an active indoor game easily enjoyed by players of varying skill (such as one might find at a party), is fun to watch, and moves quickly so that many get a chance to play. Ping-pong (table tennis, if you’re a purist) takes too much room and doesn’t accommodate well players of different skill levels. Also, ping-pong games take too long to complete, so that waiting your turn is a drag.
Crokinole takes but a table-top — a card table is fine. Even a novice can enjoy playing and the mix of luck and skill makes novice/expert matches still fun. It seems to me ideal for parties, which gives you an idea of what sort of parties include me. Somehow the sliding of the crokinole pieces across that smooth maple board is so satisfying, with luck and skill nicely balanced. And it’s proved to have legs: it’s been played since 1876, when it was invented by Eckhardt Wettlaufer in Ontario, Canada. Crokinole is in fact popular (in parts of Canada), but any Panza can see that it should be more popular everywhere.
Beginning (perhaps) to burgeon
Go/Weichi/Baduk (Japanese/Chinese/Korean names), like Crokinole, solves the problem of having fun when the players differ in ability and experience, with Crokinole being a game of physical activity and Go a strategic board game like chess — but also not like chess. For players who differ in ability and experience, chess doesn’t work well at all, particularly if the difference is great, since in chess the handicap changes the character of the game.
In Go, the handicap is integral to the game and doesn’t alter the nature of play — and the handicap can be finely tuned. When two people play repeatedly with each other, very soon every game is tense and close-fought with a narrow victory (for one) or loss (for the other). Here’s how: if one player wins 3 games in a row, the opponent’s advantage is adjusted by one stone at the start, so winning over the opponent becomes slightly more difficult. If a player loses 3 in a row, the handicap is adjusted one stone in his favor, so winning becomes slightly easier. Very soon there are no more 3-game winning streaks for either player, and every game is a hard-fought toss-up.
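A toy Python sketch of that adjustment rule (my own illustration of the scheme just described; the naming and sign convention are arbitrary, not a Go-rules standard):

def adjust_handicap(handicap_stones, last_three_winners):
    # if the same player won the last 3 games, shift the handicap one stone
    # in the loser's favor; handicap_stones counts the weaker player's
    # extra stones, so it never drops below zero
    if len(set(last_three_winners)) == 1:
        if last_three_winners[0] == "stronger":
            handicap_stones += 1  # give the weaker player one more stone
        else:
            handicap_stones = max(0, handicap_stones - 1)
    return handicap_stones

print(adjust_handicap(2, ["stronger", "stronger", "stronger"]))  # 3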
A regular game on a 19x19 board might take 40 minutes to an hour, but the game play feels much the same if the board is reduced to 13x13 or even 9x9, with the game taking less time as the board shrinks. So if you want to play a game over lunch, the 13x13 board makes sense. If you want to play a quick game, 9x9 is the answer.
Add in the aesthetics of the game — the board and stones and sounds and tactile pleasure — and a Panza doesn’t understand why the game is not as popular in the West as it is in the East.
That may be changing. Go got a big boost with the publicity from DeepMind's AI AlphaGo Zero, which taught itself to play by starting with only the rules and then playing game after game against itself until, after 3 days of self-play, it was better than AlphaGo I, which had beaten the human world champion Lee Sedol 4 times in a 5-game match. Always popular in Japan, Korea, and China — well, not always, but for 2500 years — the game now is increasingly popular in North America.
Watch the movie The Surrounding Game to get an idea why. After my first game as an undergraduate, I had no idea at all what went on, and I thought, “Never again. Why do people play this game?” Then I started playing in graduate school, saw why, and wanted to go to Japan to delve deeper into it. (I didn’t.)
The whole-food plant-based diet and time-restricted eating are both gaining popularity as people become more aware of studies that demonstrate the positive health effects, try them, and discover they work. Still, many people are reluctant. They conjure up imaginary difficulties ("What about protein?"), and they draw back from feeling awkward, ignorant, and confused (feelings that often arise from plunging into something totally new), not realizing those feelings can be enjoyed.
Two books by Michael Greger MD FACLM, How Not to Die and How Not to Diet, have helped people see the benefits, and the documentary The Game Changers shows elite athletes thriving on the diet. Evidence from nutritional studies is also convincing — to take one example, from the New England Journal of Medicine, "Effects of Intermittent Fasting on Health, Aging, and Disease":
Preclinical studies consistently show the robust disease-modifying efficacy of intermittent fasting in animal models on a wide range of chronic disorders, including obesity, diabetes, cardiovascular disease, cancers, and neurodegenerative brain diseases. Periodic flipping of the metabolic switch not only provides the ketones that are necessary to fuel cells during the fasting period but also elicits highly orchestrated systemic and cellular responses that carry over into the fed state to bolster mental and physical performance, as well as disease resistance.
Still, neither whole-food plant-based diets nor time-restricted eating have yet been widely adopted in (say) the U.S., as shown in obesity trends in men and in women. But at least more people are talking about it, and some will edge into it.
Losing ground
This is the unburgeoning category: skills and solutions that seem to be dwindling and withering.
Home cooking and kitchen skills seem to be fading due to over-scheduled lives, resulting in many losing the knowledge and practice of daily cooking — not fancy feasts, just getting good and tasty food on the table efficiently and enjoyably. The slack is being taken up by prepared meals, pizza, fast food, and other foods that are CRAP (calorie-rich and processed). (See charts at links above.)
Human interaction is falling victim to busyness and automation. David at Raptitude has a good column describing the descent in which he notes
Human interaction probably isn’t in danger of extinction, but it is quietly losing great swaths of its natural habitat. Technology is making real interaction less necessary at work, home, and everywhere in between, which must mean there’s simply less of it in the world than there was a decade ago.
Read the column for more. The trend is dire: humans are social animals and need human interaction for their health — mental, physical, and spiritual. On-line interactions do not deliver the benefits. The solution is clear: interacting in person with others. Note that Crokinole and Go do involve interacting with others (unless you play Go against an AI).
Popular now beyond Panzas
Science-fiction was once very, very niche. It's been around for centuries — Gulliver's Travels, Jules Verne, and so on — because it's a good solution to making pointed criticisms of one's society even when strong social forces don't want criticism. As Gulliver's Travels demonstrates, science-fiction lets a satirist criticize politics and society at a safe remove, by presenting some distant and fantastical society that (coincidentally) reflects and puts in high relief the foibles of the writer's own society. Nowadays science-fiction is taken for granted. Philip K. Dick and Kurt Vonnegut (both strong social critics) once were fringe; now they're mainstream. Science-fiction movies are big box office. Panzas can relax. They get it now. | https://medium.com/age-of-awareness/the-shaw-alphabet-and-other-quixotic-solutions-i-love-8e6a28ebe28f | ['Michael Ham'] | 2020-02-03 21:10:57.711000+00:00 | ['Health', 'Artificial Intelligence', 'Games', 'Language'] |
1,735 | Self-employed vs. Employed: Pros & Cons | This is not going to be some deeply controversial rant about why one way of making a salary is better or worse than the other.
This is not a verbose way to toot my own horn and make you spend your time and energy reading about how great I am.
This is not an article filled with cat pictures (sorry).
Because the question of “Which is better?” is an unfair question. The “right” answer is completely subjective.
I am currently self-employed and I love it, though of course there are drawbacks.
I previously worked for 10 years in corporate America and it was also both good and bad.
There are “dream jobs,” of course, but the reality is that even your dream job has drawbacks and days that suck sometimes.
Both self-employment and traditional employment have advantages and disadvantages, it is truly about what is best FOR YOU as an individual and for your family.
As Quora user Kelven Swords points out:
Pros:
YOU make the decisions, no one else… and you thus reap the rewards.
YOU control the finances, no one else… and you thus reap the profits.
YOU determine who is on staff, no one else… and you thus control the social structure.
Cons:
You make the decisions… thus have no one else to blame for your errors.
You control the finances… thus have no one else to blame for any wasted money.
You determine who is on staff… thus you have no one else to blame for any parasitic staff members who poison the well.
Let’s take it a step further than what Kelven has described above.
There are obvious advantages to working for yourself.
You can set your own working hours.
You choose who to work with…and who NOT to work with.
You have significantly more control over processes, contracts, clients, work, time, and everything else.
You can work in your pajamas — and even sleep in!
You get to build great relationships with your clients because you’re steering the ship and choosing how to cultivate those relationships.
There are some obvious disadvantages, as well.
You have no one else to rely on.
You do not have a manager setting tasks or deadlines, so all deadlines are self-imposed, which can be difficult for some to manage and stick to.
Time management becomes extremely important, which is hard for many.
No company insurance or other benefits.
No sick time, paid vacation time, or maternity leave.
Less stability in terms of income.
You will find yourself working far more than 40 hours most weeks.
You do not have coworkers and it can be sometimes lonely and isolating.
You are probably not an expert in every single thing a business needs: processes, sales, closing sales, marketing, website building and maintenance, creative stuff, contracts, organization, admin work, etc.
Higher potential for burnout/overworking.
Doing your taxes is harder.
When it comes to working for a company, you are getting some very specific advantages, in terms of a stable, dependable income, medical and other benefits, having people to ask when you need help, and being told what you should be doing.
Something people rarely think about when dreaming of being self-employed is the lack of structure and organization.
You have to create your own schedule, keep yourself on task, make sure work gets done, track deadlines, invoices, payments, all business expenses, and create a structure to your day.
It is incredibly easy to lose track of time or lose focus and end up spending half your day on social media when no one is watching!
There are many tools out there to help you get organized and create a structure for your day. Some are free and some cost money — which you need to keep track of so that you can make sure to deduct it on your taxes as a business expense.
Taxes are different and a bit more difficult when you work for yourself, and you have to save some of your income to pay it, and it WILL be a difficult check to write.
If you have personal assets, you’ll need to consider if it makes more sense for you to be a sole proprietor, LLC, S-Corp, or several other options, each with their own benefits and drawbacks. There is much research involved in starting your own business!
For Me
Being my own boss has been fun, challenging, interesting, and lonely. I love being a writer and being able to choose what I write and who I work with, and I created a business model which works well for me.
I also continuously refine and evolve my business offerings, update my own website, look for clients, maintain my social media accounts, and blog regularly. All of which is part of running my business, but is ultimately unpaid work.
I love my business and what I do, but I also enjoyed my work as a Business Development Director in the recruitment industry. I had a great boss, cool coworkers, a stable and dependable paycheck, and a set end time to my workday, none of which I now have.
However, I have the freedom to do the work I want, charge the rates I want, and am much more flexible with my schedule. I can go to the gym in the middle of the day, run errands whenever I want, work in the middle of the night if I am so inclined, and pet my cat all day.
For You
It’s about what works best for you. Don’t put pressure on yourself to be one way or the other or let people tell you one is “better” or more “right” for you than the other.
Make plans, do research, interview people, and figure out what is best for you and make sure you have a clear idea of both the advantages and disadvantages so you are well informed! | https://jyssicaschwartz.medium.com/self-employed-vs-employed-pros-cons-d97b4bdc4f70 | ['Jyssica Schwartz'] | 2020-02-04 16:23:57.033000+00:00 | ['Life Lessons', 'Freelancing', 'Entrepreneurship', 'Writing', 'Business'] |
1,736 | Joining a Professional Association as a Freelancer | Personal benefits
Most professional associations require members to keep continuing professional development (CPD) records and therefore offer plenty of training opportunities.
The greatest benefit of belonging to an association is, no doubt, networking. Working for ourselves, we don’t usually get to meet colleagues during the day, so networking events are excellent for meeting colleagues in the flesh. Getting to know and interacting with colleagues is essential and may even lead to fruitful business relationships.
Being a member of a professional association is also a great opportunity for you to get involved. Volunteer, join a committee, take on tasks such as editing their member magazine or helping with their website. This will get your name out there, put you in touch with colleagues and generally allow you to do something for the profession, which will benefit us all. | https://medium.com/the-lucky-freelancer/joining-a-professional-association-as-a-freelancer-aa74b82d4922 | ['Kahli Bree Adams'] | 2020-07-06 23:43:10.031000+00:00 | ['Freelancing', 'Entrepreneurship', 'Business', 'Startup', 'Small Business'] |
1,737 | A Microbial-Based Explanation for Cooling Human Body Temperatures | A Microbial-Based Explanation for Cooling Human Body Temperatures
Could changes to our gut microbial landscapes be responsible for cooling human body temperatures?
The value of 98.6° F (37° C) for the standard human body temperature was first proposed by German physician Carl Reinhold August Wunderlich in 1851. This reference point is likely inaccurate, however, as recent studies have shown that human body temperatures generally run lower than the accepted norm.
37 degrees Celsius has been the traditionally accepted value for normal body temperature, photo by orelphoto on Adobe Stock
Nevertheless, average human body temperatures have decreased since the Industrial Revolution, according to the results of a recent Stanford study, which concluded that the average American's body temperature is about 0.58° F (0.32° C) lower for women and 1.06° F (0.6° C) lower for men than it was in the 19th century. On average, human body temperatures have fallen by 0.05° F (0.03° C) per decade. The researchers attributed these changes to a reduction in metabolic expenditure, reduced inflammation, and lowered incidence of infectious diseases in modern times.
In their study, the Stanford team examined three different datasets, which included records from the Union Army Veterans of the Civil War from 1860 to 1940, the National Health and Nutrition Examination Survey I from 1971 to 1975, and the Stanford Translational Research Integrated Database Environment from 2007 to 2017.
The team combed through all 677,423 temperature measurements, accounting for variables such as age, height, weight, and potential differences in temperature measurement accuracy, to arrive at their conclusions.
Microbes provide warmth to their hosts
I would like to propose that our reduced body temperature measurements may be the result of loss of microbial diversity and rampant antibiotic use in the Western world. Indeed, a small study of healthy volunteers from Pakistan reported higher mean body temperatures than those encountered in developed countries where exposure to antimicrobial products is greater.
Heat provision is an under-appreciated contribution of microbiota to hosts. Microbes produce heat as a byproduct when breaking down dietary substrates and creating cell materials. Previous reports have estimated bacterial specific rates of heat production at around 168 mW/gram. From these findings, we can extrapolate that an estimated 70% of human body heat production in a resting state is the result of gut bacterial metabolism. | https://medium.com/medical-myths-and-models/a-microbial-based-explanation-for-cooling-human-body-temperatures-4746a3a9868 | ['Nita Jain'] | 2020-02-26 08:26:33.452000+00:00 | ['Health', 'Ideas', 'Microbiome', 'Science', 'Education'] |
1,738 | Azure — Deploying Vue App With Java Backend on AKS | Azure — Deploying Vue App With Java Backend on AKS
A step-by-step guide with an example project
AKS is Microsoft Azure's managed Kubernetes solution that lets you run and manage containerized applications in the cloud. Since this is a managed Kubernetes service, Microsoft takes care of a lot of things for us, such as security, maintenance, scalability, and monitoring. This lets us quickly deploy our applications into the Kubernetes cluster without worrying about the underlying details of building it.
In this post, we are going to deploy a Vue application with a Java backend. First, we dockerize our app, push that image to the Azure container registry, and run that app on Azure AKS. We will see how to build the Kubernetes cluster on Azure AKS, access the cluster from outside, configure kubectl to work with the AKS cluster, and more.
Example Project
Prerequisites
Install Azure CLI and Configure
Dockerize the Project
Pushing Docker Image To Container Registry
Creating AKS Cluster
Configure Kubectl With AKS Cluster
Deploy Kubernetes Objects On Azure AKS Cluster
Access the WebApp from the browser
Summary
Conclusion
Example Project
This is a simple project which demonstrates developing and running a Vue application with Java. We have a simple app in which we can add users, count and display them at the side, and retrieve them whenever we want.
Example Project
If you want to practice on your own, here is a GitHub link to this project. You can clone it and run it on your machine as well. | https://medium.com/bb-tutorials-and-thoughts/azure-deploying-vue-app-with-java-backend-on-aks-a938eaed0cf4 | ['Bhargav Bachina'] | 2020-12-20 06:10:05.345000+00:00 | ['Azure', 'Cloud Computing', 'JavaScript', 'Kubernetes', 'Web Development'] |
1,739 | If You’re Not in a Developer Community Then You’re Missing Out | Help Along the Way
This one is fairly obvious, but it’s also extremely understated. No matter how much you offer to the developer community, it will always give you more in return.
(My favorite online community right now is IndieHackers.)
When I first started to show interest in code, I met one of the developers at the company I work for through a mutual friend. I expressed my interest in learning code, and he immediately offered some advice.
He suggested a few resources and learning platforms to get my feet wet. I maintained a connection with him throughout my learning journey, and as soon as I felt comfortable with the basics of what I was learning (at the time it was HTML and CSS), he offered for me to take some of his smaller freelance jobs.
This was huge!
Even today as I have moved on to new technologies and am learning different things, my friend continues to offer suggestions and guidance. As of late, I have been learning Laravel and I reached out about a bug I kept running into. We jumped on a call and he helped me fix the bug I was stuck on.
The value he’s been able to offer me has been tremendous in my journey to learn code. I can honestly say that I would not be as far as I am today without his help.
And that is just one person! The developer community is full of people just like my friend who are eager to help and offer guidance.
I know that people like him and others whom I have met through online communities like IndieHackers will continue to be a source of help as I learn and grow as a developer. | https://medium.com/better-programming/if-youre-not-in-a-developer-community-then-you-re-missing-out-50471426a37e | ['Jesse Nieman'] | 2020-08-14 14:01:32.554000+00:00 | ['Programming', 'JavaScript', 'Python', 'Community', 'Startup'] |
1,740 | To smarter emails & efficient service, presenting the AI case | I am sure you've all been hearing about artificial intelligence. You see it on new devices, the cool Alexa that you can speak with, a phone support system that's instantly capable of listening & responding to your voice, and of course the media and investment community have been going crazy looking for the next big thing. Us being us, we thought we'd try and make some sense of it in the marketing and sales function.
I went about looking for the latest innovations in some of the most established tactics of digital marketing and here's what I found:
· Email marketing
Email marketing has been an ongoing topic for a while, but we haven't seen the true value it can provide until the rise of AI in the email marketing analytics space. A notable service we found was Nova. By utilising AI to scrape through a person's online identity, it generates a personalised paragraph that sales representatives can add to their sales proposition. How does it work? Dump a batch of email addresses in, as well as the text of your pitch. Nova then screens the contacts and pulls information from sources published publicly online and on social media accounts to create a personalised pitch. That's great, right?
· Customer service
Now, when you think of customer service, do you picture a bot serving you? Or the real question these days is, do you prefer it? A study of 5,000 consumers worldwide, conducted by LivePerson, showed that more than 50% of consumers preferred a human representative, and found only 38% of those surveyed had positive perceptions of this technology. They also found that some factors, such as country and industry, had an effect on the receptiveness of consumers to these technologies.
Additionally, the nature of customer conversations across industries is inherently different. The fast food industry only really needs to engage in simple conversations with its customers. Domino's, for example, implemented a chatbot feature, "DRU," for its customers to easily choose their pizza base, toppings, dressing and sides, then order it. This was objectively efficient, and even impressive. Was it a success? YES. Chief executive Don Meij even stated, after realising the benefits of AI, that they are beginning to shift the philosophy of the company from "mobile first" to "AI first". New initiatives are expected to come out such as drone deliveries, a Facebook chat that helps consumers find vouchers and coupons, and soon enough, DRU Manager, which helps Domino's store owners automate rosters and order stock.
However, chatbots won't be so easily implemented for customers asking about, say, life insurance. Again, the nature of the questions and conversations is important. In saying this, it's also important to ensure that the tone and intonation of the chatbot is reflective of the brand. Amazon's Alexa is a good example of this. She was friendly the majority of the time, but there were a few times the chatbot was perceived as judgemental. Another testament to DRU's success was that it conveyed the Domino's brand well, and built a closer connection with the customer than the point-and-click interface.
As I continue on my journey to explore the applications of artificial intelligence and how it can help make a real difference, I will be coming back soon with more interesting technologies we've tested and enjoyed. | https://medium.com/drizzlin/to-smarter-emails-efficient-service-presenting-the-ai-case-b4810cc4fe0 | ['Andrea Virrey'] | 2017-12-15 12:24:37.842000+00:00 | ['Customer Service', 'AI', 'Artificial Intelligence', 'Chatbots', 'Digital Marketing'] |
1,741 | Data Journalism Crash Course #5: Advanced Data Search — Google | Image by the author
Google is not limited to the search engine, and the search engine is not limited to simple search. Many users do not take advantage of the full potential of Google applications, and for journalists, it is particularly important to know how to use them well.
Search filters
In a simple Google search, you can filter results by country, language, publication date, and city. It is also possible to choose between displaying all available results or just previously visited pages, unvisited pages, or literal results. The literal option displays pages with terms exactly as they were typed, with all the words together and in the same order.
Advanced search and search operators
The Google search engine has search operators that can refine your search efficiently and practically. Here are some examples:
Search for exact word or phrase
Use quotation marks: “Every idea needs a Medium” shows only results that contain exactly this sentence, not results that contain the four words used at different points on the page.
Delete a word
Add a negative sign (-): idea -medium shows results that contain "idea" but excludes pages mentioning Medium.
Search on a single website or domain
Search by site: medium.com only shows results that are on pages hosted on the Medium site.
Search files
Search by filetype: xls shows only results contained in spreadsheets in XLS format.
Related Sites
Searching for related: medium.com shows websites similar to Medium itself, in addition to some that cover subjects related to the ones it addresses.
Related doesn’t just work with websites. You can also search for terms and find other sites that talk about a topic that interests you.
Terms within the body of the text
Forget the title of the article: with this search, no matter the main theme of the site, Google will focus the search on the "inside" of the content.
To do this, type intext: and, after the colon, type the keywords, as in the following example:
intext: big data for beginners
Terms in page titles
Searching for intitle: medium shows only results that contain the term in their titles.
This is a simple way to quickly find out about content that already exists on the internet before producing your own or to analyze how competing content is on the same subject as your article.
See all the Google search tricks here. Some can be done using Google’s advanced search, without the need to memorize operators or consult the help page.
Search for images
Google Image Search — Image by the author
Google’s image search can filter results by image dimensions, color, type (drawing or photo, for example), and publication date. When selecting a result, the user has the option to visit the image source website and obtain more information about it.
You can also use an image as a search term. When uploading an image to Google Images, the result can be returned with a suggestion for the image name, pages containing that image, similar images, and the same image with other dimensions. The only condition is that the image already exists on a website. It is possible to add a keyword next to the image sent to make the search easier. Search operators also work on this search.
To upload an image to Google Images, simply click on the photo camera icon inside the search box and upload an image from your computer or paste the address of a photo online.
Google Trends — Image by Google
Google Trends allows you to assess the popularity of a term over time and compare it to the popularity of other terms, and you can also filter results by country, date, category, and Google product (Images or YouTube, for example).
Besides, Google Trends highlights the top terms searched on Google over a given period.
Google Public Data — Image by Google
The little-known Google Public Data presents, in the form of graphs, important statistics from the databases of large institutions, such as the World Bank and Eurostat. This information can be compared and analyzed using the various filters offered by the application.
Google Crisis Response — Image by Google
An initiative that gathers maps created from data on disaster situations, making information more accessible.
Google Earth — Image by Google
Google Earth is one of Google’s most famous tools, with animated satellite imagery in three dimensions. This functionality ends up creating a more interactive experience than Google Maps, especially in the paid version, Google Earth Pro.
Satellites take around 14 days to take pictures of the entire planet.
Google Street View — Image by Google
Google Street View offers panoramic views of 360 degrees horizontally and 290 degrees vertically. This feature of Google Maps and Google Earth has been available since August 2007.
Street View is often used in "before and after" situations when a location undergoes a significant change.
Since 2012, Google has also used Trekker, a kind of high-tech backpack with a 360-degree camera. With this equipment, it is possible to capture images where cars cannot go, such as inside museums, theme parks that are difficult to access, and even cemeteries.
Through a Trekker loan program, in 2013, volunteers were able to start contributing images to Street View. Cameras were loaned out so that each person could collect 360-degree images of the places they know well. Currently, anyone can help Google Street View, whether contributing regular photos or panoramic photos taken with a smartphone.
With the use of Trekker, Google was able to capture places it had not previously been able to map: cemeteries around the world, museums in Latin America, the Grand Canyon, the Amazon rainforest, deserts, and even the bottom of the sea. Through Street View it is also possible to "visit" famous movie sets, the White House, a submarine, and even CERN. The feature allows anyone to experience famous places on the planet without leaving home.
IF YOU WANT TO KNOW MORE | https://medium.com/datadriveninvestor/data-journalism-crash-course-5-advanced-data-search-google-3e2a40a2ac52 | ['Deborah M.'] | 2020-10-30 13:38:02.101000+00:00 | ['Data Journalism', 'Journalism', 'Google', 'Data Science', 'Technology'] |
1,742 | Why Productivity is Killing Us All | Why Productivity is Killing Us All
And How Eastern Medicine & Philosophy Can Save Us
How often do we rest? I mean TRULY rest. Without input, stimulation, without guilt that we should be doing this or that? No phone or computer in sight; no emails to be sent, appointments to schedule? In fact, how many of us are running around filling every second of potential down time with shit just so we can stay busy and productive?
I know I’m guilty of this.
As an acupuncturist and practitioner of Chinese Medicine- I do my best to practice what I preach. I know for a fact that stress is a top causative factor of illness and disease and that our modern lifestyles are literally killing us.
As an entrepreneur who has been raised by baby boomers within the paradigm of the American dream and the “you only get ahead if you work your ass off” mentality; I struggle, A LOT. I feel like I’m fighting a constant battle against the deeply-ingrained ideals of a culture driven by productivity and capitalism. It doesn’t leave much space for rest without guilt.
Several years ago when I began studying Eastern Medicine and Philosophy it quickly became apparent to me how vast the dichotomy truly was between Eastern and Western ideals and lifestyles. I use the term “Western” loosely because after spending time living in Europe, I can truly say that the United States is on another level when it comes to stress, productivity, burn-out, and quality of life.
I see this with my patients daily. Throughout the years of looking for root causes and how to address them with acupuncture & herbal medicine, I’ve learned one thing is certain- being too productive is killing us. I get all kinds of complaints and symptoms walking in my office door. I often get the off-the-wall, random mystery symptoms and the patients who have been to every doctor and specialist and who have been on every medication and still feel like they’re not getting anywhere. It’s provided an interesting vantage point to assess some of the common threads and cultural health trends we face in modern times.
The conclusion I've come to: I'm never just battling back pain, migraines, autoimmune disease, and the like, but rather am always treating the side effects of stress, over-work, and the chronic state of inflammation that comes with it. This is not to say that there aren't other contributing factors to these issues and illnesses, but there is ALWAYS a stress component.
On a physiological level, we know what stress does to the body. The sympathetic nervous response (or “fight or flight”) is a mechanism that is intended to be engaged for a short period of time so we can literally survive a life-threatening situation. All of our other systems like our respiration, digestion, and bladder function are inhibited to create the optimal chance for survival.
The problem is that our modern lifestyles keep our sympathetic nervous system constantly engaged in a way that we are not built for. When this mechanism is activated over a longer period of time it starts to wreak havoc on the body. These other systems stop functioning optimally and our threshold for stress-response is much lower. This means that our bodies start registering even minor non-threatening occurrences as threats, and we end up in a chronic state of fight-or-flight and inflammation.
Photo by Sage Friedman on Unsplash
So where does Eastern Medicine and Philosophy fit into this? And how can it create value for us here in the West?
One of the core tenets of Chinese Medicine is the concept of yin and yang. It's a term we've all vaguely heard of but that a lot of us don't truly understand or perceive as relevant to our lives. I beg to differ; I think that this simple yet profound concept is a key to leading a life of longevity and vitality in modern times. Let me explain.
The concept of yin and yang states that there are two opposing yet interdependent energies in all of life. These lie within us and around us. The way I relate these to our modern lifestyles is that yang can be defined as the exertion or output of energy while yin is the input, restoration, or reception of energy.
At any given time, both yin and yang are present. They operate in a constant state of balance with one another. We are all always dancing between input and output; action and passivity. The problem for most of us is that we are always exerting WAY more than we are restoring. This idea of “filling your cup” and the fact that “self-care” is a buzzword are indicative of this.
My question for you is this: how can we exert energy at an optimal state if we aren't refilling our tanks enough? Much like the gas tanks in our cars get low and we must refill them constantly, our bodies work in a similar fashion. This has to happen on a regular basis in order for us to maintain balance and create longevity and vitality in our health and lives. Just like a car, optimal function and increased longevity occur with proper maintenance and regular re-fueling.
This can be applied in the workplace for companies as well. Employee satisfaction, retention, and productivity can directly be tied back to creating a culture that values input and restoration as much as it does action. This is why creating initiatives that allow employees to disconnect and recharge (i.e. disabling company email at off-work times, providing incentives for taking vacation days, etc.) is crucial to optimal productivity. If a company is constantly demanding output and productivity from its employees without providing sufficient opportunities and space for the input and restoration to occur, how can optimal results be achieved? We have seen this play out recently in Japan when Microsoft piloted a four-day work week for its employees and saw a 40% increase in productivity. This would no doubt positively impact a company's bottom line in a significant way.
We are addicted to being busy. And it has to stop.
If there’s one thing Eastern medicine and philosophy has taught me it is that the times of rest are just as much part of the action as the action itself. How can we create, innovate, and be productive if we’re never recharging? If we’re never creating space and we’re just constantly going and exerting our energy, how can we show up in the world in an optimal way? We can’t.
It's time to start placing equal value on restoration and preservation, before we kill ourselves. | https://allysonschurtz.medium.com/why-productivity-is-killing-us-all-dfd9e1c38372 | ['Allyson Schurtz L.Ac'] | 2019-12-04 02:19:15.744000+00:00 | ['Productivity', 'Corporate Wellness', 'Workplace', 'Personal Development', 'Health'] |
1,743 | How to Design RESTful Web Services with Dropwizard | Creating a new Dropwizard application
Now that we have understood what Dropwizard is and a few of its internal libraries, let us create a Dropwizard application to understand these concepts even further. Dropwizard dependencies are exposed as Maven dependencies. In this section, we will develop a Maven project and add the relevant dependencies.
Step 1: Creating a Maven Project
Open your favorite IDE and create a new Maven project with the archetype selected as maven-archetype-quickstart. Add the following dependency:
<dependency>
<groupId>io.dropwizard</groupId>
<artifactId>dropwizard-core</artifactId>
<version>2.0.0-rc9</version>
</dependency>
This single dependency ensures that all related components are downloaded and our application is ready to run.
Step 2: Creating a Dropwizard Configuration file
Each Dropwizard application has its own subclass of the Configuration class, which specifies environment-specific parameters. These parameters are specified in a YAML configuration file, which is deserialized to an instance of your application's configuration class and validated. Create the following configuration class.
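A minimal sketch of such a configuration class follows; the class name DropwizardDemoConfiguration and the package com.example.helloworld are illustrative assumptions, not prescribed names:

package com.example.helloworld;

import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import javax.validation.constraints.NotEmpty;

public class DropwizardDemoConfiguration extends Configuration {

    // These fields are deserialized from the YAML file and validated at startup
    @NotEmpty
    private String firstName;

    @NotEmpty
    private String lastName;

    @JsonProperty
    public String getFirstName() {
        return firstName;
    }

    @JsonProperty
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @JsonProperty
    public String getLastName() {
        return lastName;
    }

    @JsonProperty
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}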
Custom Configuration file
The above-mentioned configuration class extracts the firstName and lastName parameters from the supplied YAML file. We will create the YAML file at a later stage of the application; it will have the following contents:
firstName: "John"
lastName: "Doe"
Step 3: Creating a Jersey Resource
Jersey resources are the meat-and-potatoes of a Dropwizard application. Each resource class is associated with a URI template. For our application, we need a resource which returns new Person instances from the URI /hello, so our resource class looks like this:
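Below is a sketch of such a resource class, consistent with the annotations described after it; the constructor wiring of default values is an assumption based on how the defaults are used:

package com.example.helloworld;

import com.codahale.metrics.annotation.Timed;
import java.util.Optional;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/hello")
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {

    private final String defaultFirstName;
    private final String defaultLastName;

    // Defaults come from the YAML-backed configuration class
    public PersonResource(String defaultFirstName, String defaultLastName) {
        this.defaultFirstName = defaultFirstName;
        this.defaultLastName = defaultLastName;
    }

    @GET
    @Timed
    public Person getPerson(@QueryParam("firstName") Optional<String> firstName,
                            @QueryParam("lastName") Optional<String> lastName) {
        // Use the query parameters when supplied; otherwise fall back to the configured defaults
        return new Person(firstName.orElse(defaultFirstName),
                lastName.orElse(defaultLastName));
    }
}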
Jersey resource
We are performing the following activities in the above class:
@Path annotation indicates that this class is a JAX-RS resource
@Produces indicates that this resource produces JSON data
@GET indicates that resource can be accessed over HTTP GET
@Timed lets Dropwizard automatically record the duration and rate of invocations as a Metrics Timer
The method arguments indicate that if data is supplied in the request query parameters, then the same will be used. Otherwise, default values configured in the YAML file will be used
The response returned by the getPerson() method is a Person instance and will be mapped by Jackson. Following is the Person class representation:
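A minimal sketch of the Person class; Jackson serializes it to JSON via the annotated getters:

package com.example.helloworld;

import com.fasterxml.jackson.annotation.JsonProperty;

public class Person {

    private String firstName;
    private String lastName;

    // Jackson requires a no-argument constructor for deserialization
    public Person() {
    }

    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @JsonProperty
    public String getFirstName() {
        return firstName;
    }

    @JsonProperty
    public String getLastName() {
        return lastName;
    }
}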
Person class
Step 4: Creating a HealthCheck
Dropwizard strongly recommends providing health checks for a Dropwizard application. In fact, if health checks are not configured, it warns the user at application startup. We have created the following health check for this application:
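A minimal sketch of a health check, assuming the illustrative class name ApplicationHealthCheck; a real check would verify a dependency rather than always reporting healthy:

package com.example.helloworld;

import com.codahale.metrics.health.HealthCheck;

public class ApplicationHealthCheck extends HealthCheck {

    @Override
    protected Result check() throws Exception {
        // A real implementation would test an external dependency here,
        // for example pinging a database or a downstream API.
        return Result.healthy();
    }
}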
Dropwizard health check
Step 5: Creating an Application Class
Combined with the Configuration subclass defined earlier, Dropwizard's Application subclass forms the core of a Dropwizard application. The Application class pulls together the various bundles and commands which provide the basic functionality of the application. Following is the application class:
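Here is a sketch of the application class, wiring together the configuration, health check, and resource from the earlier sketches (all class names remain illustrative assumptions):

package com.example.helloworld;

import io.dropwizard.Application;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class DropwizardDemoApplication extends Application<DropwizardDemoConfiguration> {

    public static void main(String[] args) throws Exception {
        new DropwizardDemoApplication().run(args);
    }

    @Override
    public void initialize(Bootstrap<DropwizardDemoConfiguration> bootstrap) {
        // Bundles and commands would be registered here
    }

    @Override
    public void run(DropwizardDemoConfiguration configuration, Environment environment) {
        // Register the health check so Dropwizard stops warning at startup
        environment.healthChecks().register("application", new ApplicationHealthCheck());

        // Register the Jersey resource with default values taken from the YAML configuration
        environment.jersey().register(
                new PersonResource(configuration.getFirstName(), configuration.getLastName()));
    }
}

Once the executable JAR is built in the next step, the server can typically be started with java -jar target/<artifact>.jar server config.yml, where config.yml is the YAML file shown earlier (both file names here are assumptions).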
Dropwizard application class
Step 6: Building the executable jar
We are now done with our Dropwizard application development. We have created the barebones components and added a JAX-RS resource with an endpoint. Let us add the following maven shade plugin in order to build the executable JAR file:
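The configuration below is a sketch modeled on the shade setup commonly used with Dropwizard; the plugin version and the mainClass value (which matches the illustrative application class above) are assumptions:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.1</version>
    <configuration>
        <createDependencyReducedPom>true</createDependencyReducedPom>
        <filters>
            <filter>
                <!-- Exclude signature files that would otherwise invalidate the shaded JAR -->
                <artifact>*:*</artifact>
                <excludes>
                    <exclude>META-INF/*.SF</exclude>
                    <exclude>META-INF/*.DSA</exclude>
                    <exclude>META-INF/*.RSA</exclude>
                </excludes>
            </filter>
        </filters>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                        <mainClass>com.example.helloworld.DropwizardDemoApplication</mainClass>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>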
Note the mainClass parameter in the configuration. It must be the application's main class. | https://medium.com/swlh/how-to-design-restful-web-services-with-dropwizard-d5681a127cba | ['Somnath Musib'] | 2019-12-02 09:01:02.012000+00:00 | ['Coding', 'Programming', 'Java', 'Software Engineering', 'Technology'] |
1,744 | AI In The Shipping Industry | Artificial Intelligence has been re-shaping the world as we know it. Not in the way that we saw in the famous movie The Terminator (Skynet is not going to take over your phone), but it has transformed our everyday lives by improving processes that we perform regularly. The same holds true for the shipping industry. AI has enabled the implementation of IoT devices that gather information that the AI can learn from, improve upon, and use to make automated decisions. Automation may be the biggest and most beneficial way AI is used in shipping because it enables anomaly detection, reduction in waste, improved quality control, and decreased shipping times. The combination of IoT devices and AI has brought unparalleled improvements to quality control, fuel consumption, and safety.
One example of how AI is aiding the shipping industry is how it improves quality control. With the installation of cameras and IoT devices in cargo containers, shipping companies can monitor the environment of goods to prevent damage and spoiling. This is very prevalent in the food transport industry. AI monitors data received from cameras and IoT devices about data points such as temperature, humidity, shock (falls), light exposure, and even vibration. This provides a richer bank of information on the quality of goods not only at the beginning and end of transport but during it. Through automated alerts and solution suggestions, shipping companies can adjust containers to remedy problems in real time and prevent the waste or degradation of goods.
Artificial Intelligence is also allowing shipping companies to manage their fuel consumption and utilize the most efficient shipping routes. The accurate monitoring of how fuel is being used leads to a reduction in fuel spend and brings environmental benefits by decreasing emissions. AI is also being used to improve the physical routes that shipping companies take. Based on historical data on weather patterns like water and wind currents, traffic through certain areas and ports; shipping companies can plan routes that leverage all of this information to reduce fuel consumption and reduce trip times.
Lastly, AI can be used to improve safety by monitoring ship systems and the environment around the ship. An example is a ship image recognition system being developed by tech company SenseTime and Japanese shipping firm Mitsui OSK Lines (MOL). The system identifies ships in the surrounding area and monitors shipping lanes. This makes transport within bays, from entering to departing ports, much safer, not only for your ship but for the surrounding ships as well.
The advent of cheaper and faster computing power, combined with the improvements in the accuracy and efficiency of AI models has brought about a lot of improvements to the shipping industry. Reduction in waste, efficient fuel consumption, and improved safety features are just a few of the ways the shipping industry is becoming faster, safer, and more efficient in the present and future.
About the Author: Josh Miramant, CEO Blue Orange Digital, Presskit and Bio
Image Credit: Intel
| https://medium.com/datadriveninvestor/ai-in-the-shipping-industry-6b35a0cadb3f | ['Blue Orange Digital'] | 2020-09-29 15:00:45.243000+00:00 | ['Supply Chain Solutions', 'Supply Chain', 'AI', 'Artificial Intelligence', 'Ai In Supplychain'] |
1,745 | It’s Time to Rethink What It Means to Be Healthy | There are many metrics that are currently used to assess human health that aren’t based in sound science, and yet they persist. The body mass index (BMI) is one of them. Even the U.S. Centers for Disease Control and Prevention (CDC) says that the BMI “can be used to screen for weight categories that may lead to health problems but it is not diagnostic of the body fatness or health of an individual.”
Writer Annaliese Griffin spoke to health experts and came up with five new metrics to assess your health that have nothing to do with measurements like weight or calories and all to do with reframing your relationship with health. They include questions like: How much green stuff are you eating? What did your body do for you today? And are you sleeping enough?
Use the coming new year as an opportunity to embrace better health habits that are based in science and also take into account your well-being and mental health. Read how below. | https://elemental.medium.com/its-time-to-rethink-what-it-means-to-be-healthy-4835432ff8c3 | ['Alexandra Sifferlin'] | 2020-12-29 06:32:39.979000+00:00 | ['Nutrition', 'Body', 'Health', 'Life', 'Science'] |
1,746 | Grammar Mistakes That Medium’s Copy Editors Really Don’t Want You to Make | Common errors
David: When a word or phrase is written by a majority of online users the wrong way: Like “everyday.” Instead of “I drink a ton of coffee every day,” they write “I drink a ton of coffee everyday.” I hate that.
Sam: A small error I see frequently is “that” instead of “who,” such as “we need a president that shows empathy.”
Tana: Using “which” when it should be “that,” like “a mindset which contributes to more incarceration” versus “a mindset that contributes to more incarceration.”
Sam: Unnecessarily capitalized words — especially when referring generally to the president of the United States.
Tana: Agreed! Unnecessarily capitalized words, particularly position or job titles, like “President of the HOA” or “Chief Security Officer for the company.”
Tiffany: I agree with Tana and Sam that unnecessary capitals are getting to me lately. Also, why are people trying to capitalize the internet!?
Tana: Comma splices, like “half of the users are police, the other half are private citizens.” Should be a semicolon.
Iris: Errors like $1 million dollars. [Ed.: This is redundant. It’s either 1 million dollars or $1 million.]
David: Hyphenated adverbs ending in -ly when modifying another word. A no-no in my book. [Ed.: AP Stylebook agrees.]
Common typos
Iris: The infamous “missing L in public” typo.
Sloane: The equally infamous its/it’s, there/their/they’re, than/then typo.
Pet peeves
Iris: Impact vs. affect. This year, “impact” as a verb is showing up a lot. Impact makes a great noun, but it can be a problematic verb or just a bit too much, particularly when the less intense “affect” can take its place. When I read impact in a context like “how the pandemic impacted the workplace,” I might even replace it with a stronger verb: “how the pandemic dismantled/altered/forever changed the workplace.” Here’s an example I just read in my local news outlet where impact shows up as an appropriately impactful noun: “the pandemic’s impacts on learning.” That said, I will sometimes leave impact as a verb when the context is especially intense, like so many things 2020.
Sloane: Language redundancies like “my own” + noun. Phrases like “my own mind” or “ my own thoughts” are redundant; “my mind” or “my thoughts” works just fine.
Iris: Definitely an online thing, but super-long in-line links bug me, especially when they run onto a second or even third line. Or when stuff like quotation marks or trailing punctuation marks are linked when they don't need to be. Tidy linking makes me happy.
Sloane: Me too! Long in-line linked copy — ugh, no. Just link the relevant word or short phrase, but not the word “here” or “said.”
Iris: Overuse of “not only/but also.”
Tana: Unnecessary commas: 1) “That a crime goes unsolved is not due to lack of effort by law enforcement, but lack of evidence.” 2) “Biases could be used to implicate someone in a crime, or in any variety of other legal but uncomfortable situations.” 3) “Thessen didn’t connect the dots at the time, but realizes now that this new bylaw was an act of subterfuge.”
Iris: Overuse of “from X to Y to Z.” Putting commas in there really gets my goat (from X, to Y, to Z).
Sloane: Overuse of the em dash kills me; however, when used well, it’s a delight. This piece by Peter Rubin is a good em-dash explainer.
Technically not wrong but try to avoid
Iris: Looooong run-on sentences for cute effect. This was a huge writing trend for a while. It’s okay and sometimes preferable in small doses, but I’ve seen a few pieces where it’s nearly every sentence. That’s asking a lot from a reader.
Sloane: Similarly, overly hyphenated word phrases are also an older trend that is still hanging on.
Tana: I’m not sure if this annoys anyone besides me, but it’s 100% a cross-platform thing [Ed.: Meaning other media publications are okay with this usage]: tucking in an unnecessary “the” before a profession and a name. Examples: 1) “The writer Maya Angelou says…” 2) “I spoke with the biologist Tedros Adhanom.” 3) “According to the scientist Jennifer Doudna.”
Bonus things we love
Sam: Love when linking is nice and tidy — not having the entire sentence linked.
Iris: I will love a writer forever when they demonstrate a solid grasp of semicolons and en dashes.
David: I love it when a writer caps the first word of a complete sentence following a colon, which is correct usage.
Sam: Also, I love when names are all spelled correctly! Like when the writer has clearly checked those.
David: I love seeing "minuscule" spelled correctly. | https://medium.com/creators-hub/grammar-mistakes-that-mediums-copy-editors-really-don-t-want-you-to-make-6b9eee0c7e3e | ['Sloane Miller'] | 2020-12-02 19:18:25.122000+00:00 | ['Writing', 'Editing', 'Copywriting', 'Creativity', 'Writing Tips'] |
1,747 | Dancing in the Dark | Dancing in the Dark
What happens if the Prefect Cloud API goes down?
“What happens if your API goes down?” This is an understandably common question from Prefect’s enterprise customers, who depend on Prefect Cloud to automate mission-critical workflows.
I always explain that because of Prefect’s unique Hybrid Model, an API outage is not nearly as disruptive as what they probably expect from a SaaS service, and in some cases its effects can be entirely mitigated.
Last Friday, my claim was put to the ultimate test: Cloudflare experienced a DNS issue causing many websites and services to become inaccessible for a short period, including the Prefect Cloud API. We predictably saw a large spike in outstanding task runs that became “zombies” and lost their connection to the backend (more on this later). Despite this unpleasant situation, the moment the issue was resolved, work continued as scheduled and all affected workflows were easily (and in most cases automatically) resumed.
People who naively glance at our Hybrid Model might conclude that it is purely about separation of concerns (execution environment vs. platform environment), but as with most things at Prefect, it is the product of careful consideration to ensure that even in a worst-case event we are still proactively working on users’ behalf and providing value. In particular, thanks to its innovative design, even if our API is down:
your business-critical data is not lost or affected
your work is still being scheduled
a record of all outstanding jobs is maintained and curated (including sending notifications)
work will resume when API access is restored
Why do we have such confidence in our approach? Because it was designed to put resilience first. This is another example of how we designed Prefect as an insurance product — most useful when things go wrong. Through careful design of each component, we created a system that delivers value and recovers resiliently from failure even when a substantial portion of the global internet is down.
Scheduling
The Prefect Cloud scheduler service is an always-on, horizontally scalable service that is constantly parsing all flow schedules. Its job is simple: to create new flow runs and place them in a Scheduled state (with the appropriate future start time) for every flow that needs scheduling.
Once a run is placed in a Scheduled state, it is added to a work queue and stays there until a Prefect Agent picks it up at the appropriate time via an API query. This design ensures that scheduled work is never lost; if no Agents can communicate via the API, then at worst some runs begin late.
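As a rough illustration of that pull-based design (the names here are hypothetical; this is a sketch, not Prefect's actual agent code), an agent is essentially a polling loop that runs inside your own infrastructure:

```python
import time

def deploy_to_local_infrastructure(flow_run):
    """Stand-in for launching the run on your own infrastructure (e.g. a container)."""
    print(f"deploying flow run {flow_run.id}")

def agent_loop(api_client, poll_interval_seconds=10):
    """Illustrative agent: poll the backend for ready work and deploy it locally."""
    while True:
        # The agent initiates every call, so an API outage only delays pickup;
        # nothing ever needs to reach into your infrastructure from outside.
        for flow_run in api_client.get_ready_flow_runs():
            api_client.set_state(flow_run.id, "Submitted")
            deploy_to_local_infrastructure(flow_run)
        time.sleep(poll_interval_seconds)
```

This pull-based design is why late runs, rather than lost runs, are the worst-case outcome.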
We are currently working on a feature that will allow users to send notifications both when flow runs are late and when agents stop communicating with the API, ensuring they are alerted that something might be awry. (Note: because of this design, these alerts will be triggered even if the API is down!)
Impacts to in-flight work
Prefect flows have configurable executors that manage all dependency resolution of the tasks within a given flow (this is critical to the scale that Prefect enables). In normal operation, this is sufficient to ensure all work is visited and completed. However, in extreme circumstances, it is possible for task and flow runs to end in a half-completed state. For example, Kubernetes preemption events can shut down work without warning. Similarly, an API outage means that tasks cannot confirm their final state with the backend, and consequently the flow run cannot complete.
Prefect Cloud has multiple services running behind the scenes that monitor for these types of situations. Two of the most visible are:
Zombie Killer Service: this service looks for task runs that are in a Running state but haven't sent a heartbeat in the last 2 minutes; when found, the service either places the run into a Failed state or a Retrying state (if the task has configured retries). If no activity occurs on the retrying tasks, the Retrying states eventually make their way into the work queue for agents to pick up. Advanced users will be able to configure zombie behavior separately from task-level retries.
Lazarus Service: this service looks for distressed flow runs and task runs that don't appear to be making any progress. When found, this service places them into the work queue for Agents to retry. If Lazarus visits the same flow 3 times in a row, it will conclude that it is fundamentally broken and automatically mark it as Failed, triggering any configured Cloud hooks to fire and send the appropriate notification. As a concrete example, the most common Lazarus event is a flow run that has been Submitted by an Agent but has not entered a Running state after some time, for example if the Agent is unable to deploy it into an execution cluster.
These services (along with many others) guarantee that in the extreme event wherein work stops communicating with the API, items are re-added back to the work queue for completion once API communication is possible again. Critically, a record of the event becomes both easily discoverable and apparent in your UI dashboard, from which you can choose to manually restart or inspect further.
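To make the heartbeat rule concrete, here is a minimal sketch of the kind of check the Zombie Killer performs (field names are hypothetical; this is not Prefect's actual service code):

```python
from datetime import datetime, timedelta

HEARTBEAT_TIMEOUT = timedelta(minutes=2)

def reap_zombies(running_task_runs, now=None):
    """Fail or retry any Running task run whose heartbeat has gone stale."""
    now = now or datetime.utcnow()
    for run in running_task_runs:
        if now - run.last_heartbeat > HEARTBEAT_TIMEOUT:
            # Retrying states re-enter the work queue; Failed states trigger hooks.
            run.state = "Retrying" if run.retries_remaining > 0 else "Failed"
```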
Data Availability
Last but not least is the issue of data — more often than not, enterprises are concerned about their ability to access data during an outage. Independent of the outage we're discussing here, Prefect's Hybrid Model ensures that no proprietary data is ever stored in Prefect Cloud's database. This means that your ability to access your business-critical data is completely unaffected by the Prefect Cloud API's availability.
We say that the Hybrid Model provides “cloud convenience with on-prem security,” and indeed, it ensures that your code remains fully on-premise and in your control. This means that in an absolute worst case event, you could call flow.run() yourself to guarantee your data is updated.
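For instance, with Prefect Core installed, any flow defined in your codebase can be executed directly on your own machine, touching no external API (a minimal sketch; the tasks here are stand-ins for your real ones):

```python
from prefect import Flow, task

@task
def extract():
    return [1, 2, 3]

@task
def load(records):
    print(f"loaded {len(records)} records")

with Flow("local-fallback") as flow:
    load(extract())

# Runs entirely on-premise; no call to the Prefect Cloud API is made.
flow.run()
```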
Our work continues!
All aspects of what I’ve described above are in a continual cycle of improvement, as we constantly seek to strengthen our guarantees. Prefect’s ultimate mission is to eliminate negative engineering by ensuring that data professionals can confidently and efficiently automate their data applications with the most user-friendly toolkit around.
Our design goal is to be minimally invasive when things go right and maximally helpful when they go wrong; what better proof than a global internet failure? | https://medium.com/the-prefect-blog/dancing-in-the-dark-b4cc0e240ba7 | ['Christopher White'] | 2020-07-20 18:37:51.305000+00:00 | ['API', 'Site Reliability', 'Python', 'Data Engineering', 'Workflow'] | Title Dancing DarkContent Dancing Dark happens Prefect Cloud API go “What happens API go down” understandably common question Prefect’s enterprise customer depend Prefect Cloud automate missioncritical workflow always explain Prefect’s unique Hybrid Model API outage nearly disruptive probably expect SaaS service case effect entirely mitigated Last Friday claim put ultimate test Cloudflare experienced DNS issue causing many website service become inaccessible short period including Prefect Cloud API predictably saw large spike outstanding task run became “zombies” lost connection backend later Despite unpleasant situation moment issue resolved work continued scheduled affected workflow easily case automatically resumed People naively glance Hybrid Model might conclude purely separation concern execution environment v platform environment thing Prefect product careful consideration ensure even worstcase event still proactively working users’ behalf providing value particular thanks innovative design even API business critical data lost affected work still scheduled record outstanding job maintained curated including sending notification work resume API access restored confidence approach designed put resilience first another example designed Prefect insurance product — useful thing go wrong careful design component created system delivers value recovers resiliently failure even substantial portion global internet Scheduling Prefect Cloud scheduler service alwayson horizontally scalable service constantly parsing flow schedule job simple create new flow run place Scheduled state appropriate future start time every flow need scheduling run placed Scheduled state added work queue stay Prefect Agent pick appropriate time via API query design ensures scheduled work never lost —if Agents communicate via API worst run begin late currently working feature allows user send notification late flow run agent stop communicating API ensuring alerted something might awry Note design alert triggered even API Impacts inflight work Prefect flow configurable executor manage dependency resolution task within given flow critical scale Prefect enables normal operation sufficient ensure work visited completed However extreme circumstance possible task flow run end halfcompleted state example Kubernetes preemption event shut work without warning Similarly API outage mean task cannot confirm final state backend consequently flow run complete Prefect Cloud multiple service running behind scene monitor type situation Two visible Zombie Killer Service service look task run Running state haven’t sent heartbeat last 2 minute found service either place run Failed state Retrying state task configured retries activity occurs retrying task Retrying state eventually make way work queue agent pick Advanced user able configure zombie behavior separately tasklevel retries service look task run state haven’t sent heartbeat last 2 minute found service either place run state state task configured retries activity occurs retrying task state eventually make way work queue agent pick Advanced user able configure zombie behavior separately tasklevel retries Lazarus Service service 
look distressed flow run task run don’t appear making progress found service place work queue Agents retry Lazarus visit flow 3 time row conclude fundamentally broken automatically mark Failed triggering configured Cloud hook fire send appropriate notification concrete example common Lazarus event flow run Submitted Agent entered Running state time example Agent unable deploy execution cluster service along many others guarantee extreme event wherein work stop communicating API item readded back work queue completion API communication possible Critically record event becomes easily discoverable apparent UI dashboard choose manually restart inspect Data Availability Last least issue data — often enterprise concerned ability access data outage Independent outage we’re discussing Prefect’s Hybrid Model ensures proprietary data ever stored Prefect Cloud’s database mean ability access business critical data completely unaffected Prefect Cloud API’s availability say Hybrid Model provides “cloud convenience onprem security” indeed ensures code remains fully onpremise control mean absolute worst case event could call flowrun guarantee data updated work continues aspect I’ve described continual cycle improvement constantly seek strengthen guarantee Prefect’s ultimate mission eliminate negative engineering ensuring data professional confidently efficiently automate data application userfriendly toolkit around design goal minimally invasive thing go right maximally helpful go wrong better proof global internet failureTags API Site Reliability Python Data Engineering Workflow |
1,748 | Teens, Brains, and Tetrahydrocannabinol | My academic studies have taken me on a tour of addiction and mental health. While I believe THC has beneficial properties, I also believe we must be mindful of the risks. The younger the user, the higher the risks. Here is a small exploration of teens and THC, adapted from my academic paper.
Teens and THC
From their parents' medicine cabinets to the person selling marijuana on the sly near schools, youth find avenues to use; they desire to change their current reality.
Here’s a quick video explaining how THC Affects the brain:
With the use of marijuana, the adolescent brain, which is still developing, tends to fall short in its ability to make sound decisions. THC use tends to further degrade the decisions of an already underdeveloped brain.
Tetrahydrocannabinol (THC), the chemical responsible for most of marijuana’s psychological effects, affects brain cells throughout the brain, including cells in circuits related to learning and memory, coordination, and addiction (SAMSHA, 2018).
The prefrontal cortex (PFC), where decisions and ideas are weighed through risk assessment, is still in the development process. During this sensitive time of brain growth, healthy decision-making is put on hold.
The brain's limbic system, which develops first, creates memory and emotional responses. As a result, instant gratification and quick choices that disregard consequences take over, rather than the focused, logical, planning part of the brain.
Unfortunately, one of the reasons teens make emotionally triggered choices has to do with an undeveloped prefrontal cortex. For the teens who find access to drugs, the choice derives from the limbic system; soon, ease of access transfers into action (Hart & Ksir, 2014).
Teens who use before their brain's full growth tend to stunt the maturing of reason and logic. The slide into trying other illicit drugs happens rather quickly.
For instance, the use of 'blunts' (cigars that are hollowed out, filled with marijuana and crack, and then smoked) increases the dangers of the drugs. The developing brain receives a double dose of drugs, increasing the damage taking place across the blood-brain barrier.
The validation of the shift from one drug to another, Melberg, Jones, Bretteville-Jensen suggest, “Our findings demonstrate, first of all, that there is a gateway effect and the hazard of taking up hard drugs increases substantially after the initiation of cannabis” (p.586).
Another concerning drug comparable to marijuana is on the rise. A synthetic drug called “Spice,” which is “sold over the counter in many states — particularly in gas stations, convenience stores, and head shops — has synthetic chemical components of marijuana sprayed onto shredded plant material that is then smoked” (Wadely, 2014, p.2). The drug is dangerous.
Thankfully, it is on NIDA's list of drugs whose use decreased during 2014. It is usually used by those around 12th grade rather than the younger grades.
Unfortunately, “Other drugs, which use remained unchanged in 2014 include Ritalin and Adderall — both stimulants used in the treatment of ADHD — as well as LSD, inhalants, powder cocaine, tranquilizers, sedatives, and anabolic steroids. However, most of these drugs are now well below their recent peak levels of use according to the investigators” (Wadely, 2014, p.4). | https://medium.com/publishous/teens-brains-and-tetrahydrocannabinol-d29244cb7e7b | ['Pamela J. Nikodem'] | 2020-02-02 16:36:01.152000+00:00 | ['Growth', 'Life Lessons', 'Health', 'Mental Health', 'Addiction'] | Title Teens Brains TetrahydrocannabinolContent academic study taken tour addiction mental health believe THC beneficial property also believe must mindful risk younger user higher risk small exploration teen THC adapted academic paper Teens THC parent’s medicine cabinet person selling marijuana sly near school youth find avenue use desire change current reality Here’s quick video explaining THC Affects brain use marijuana adolescent brain still developing tends fall short ability make sound decision use THC tends decrease wise choice already underdeveloped brain Tetrahydrocannabinol THC chemical responsible marijuana’s psychological effect affect brain cell throughout brain including cell circuit related learning memory coordination addiction SAMSHA 2018 prefrontal cortex PFC still development process decision idea weighed based risk assessment sensitive time brain growth healthy decision halted brain’s limbic system develops first creates memory emotional response instant gratification quick choice disregarding consequence take rather focused thinking logical term planning part brain Unfortunately one reason teen make emotionally triggered choice undeveloped prefrontal cortex teen find access drug accepted choice derives decision limbic system soon ease access transfer action Hart Ksir 2014 Teens use prior brain’s full growth tend stunt maturity reason logic slope try illicit drug happens rather quickly instance use ‘blunts’ cigar hollowed filled marijuana crack smoked increase danger drug developing brain receives double dose drug increasing damage taking place across bloodbrain barrier validation shift one drug another Melberg Jones BrettevilleJensen suggest “Our finding demonstrate first gateway effect hazard taking hard drug increase substantially initiation cannabis” p586 Another concerning drug comparable marijuana rise synthetic drug called “Spice” “sold counter many state — particularly gas station convenience store head shop — synthetic chemical component marijuana sprayed onto shredded plant material smoked” Wadely 2014 p2 drug dangerous Thankfully NIDA’s list drug decreased use 2014 year around 12th grade rather younger grade usually use drug Unfortunately “Other drug use remained unchanged 2014 include Ritalin Adderall — stimulant used treatment ADHD — well LSD inhalant powder cocaine tranquilizer sedative anabolic steroid However drug well recent peak level use according investigators” Wadely 2014 p4Tags Growth Life Lessons Health Mental Health Addiction |
1,749 | What Would Democrats Do? | Terry H. Schwadron
Oct. 11, 2018
Even as President Trump was whipping up a crowd of about 9,000 at an Iowa rally with cries that Democrats are an "angry mob," intent on "policies of anger, division and destruction," actual Democrats in Washington are thinking through what will happen if they do flip the currently Republican-majority House in November.
There was something very odd about the juxtaposition. "You don't hand matches to an arsonist and you don't give power to an angry left-wing mob, and that's what the Democrats are," Trump said. In an op-ed for USA Today, the president argued, for example, that the plan would threaten seniors and represented "radical socialism."
You’ve got to squirm at the constantly more pointed language marking our political races.
Meanwhile, TheHill.com went about a very sober journalistic task: mapping the agenda for a potential Democratic-majority future. Their reporters asked those who would become House committee chairs what they actually want to do.
The most amazing thing you learn is that Democrats might actually take a ten-minute break from fund-raising to pay attention instead to matters of governing. That said, the ambitious agenda of legislation addressing long-neglected items of governance and budget also includes a healthy dose of closer questioning of Trump officials.
After eight years in the minority, Democrats want a wide variety of bills, from shoring up ObamaCare and Dodd-Frank financial rules to protecting "Dreamers" and the integrity of elections. Of course, the likelihood is that even if the House turns blue, the Senate will hold its Republican majority, meaning that none of these bills has a hope of becoming law.
Nevertheless, it is good to see a different view of why we even have a federal government. Under Trump and Republican leadership, the clear outline has been to shrink social services, build up the military, offer tax cuts built for a smaller version of government and to vastly reduce regulation to let corporations and entrepreneurs flourish.
Of course, Democrats also are vowing to be aggressive in investigating the actions of the Trump administration — an oversight role Democrats contend was virtually abandoned by Republicans.
Here's a summary of TheHill.com's findings by committee:
Appropriations. Rep. Nita Lowey, D-NY, would focus efforts on increasing support for social service programs, including a labor-health spending bill with a $1 billion increase over 2018 levels for medical research, maternity care, home-heating subsidies, nutrition and education programs, and funding to fight the opioid crisis. Lowey said an aim would be to reinstate a system of passing the various spending bills separately.
Armed Services. Rep. Adam Smith, D-WA, would seek to revisit particular initiatives, including the spending to overhaul nuclear weapons.
Smith said deployment of special forces would be a primary interest, particularly operations in Africa and other hotspots. And he would revisit the ban on transgender people in the military.
Budget. Rep. John Yarmuth, D-KY, wants to expand the scope of the panel to include overarching assessments of how specific issues, in the broadest terms, impact the federal budget. That means new attention on the impact of tax cuts, immigration, health care and climate change.
Energy and Commerce. The priority for Rep. Frank Pallone Jr., D-NJ, is health care and shoring up what has been cut in ObamaCare, a target clearly opposed by the president and Republicans. He proposes action on reducing drug costs, eliminating income caps on tax credits, and limiting rising Medicare payments for drugs. The committee also would take a bigger oversight role toward energy issues.
Financial Services. Rep. Maxine Waters, D-CA, a fierce Trump critic, is in line to take on a consumerist agenda that has been set aside by Republicans, including a lot of oversight investigation. Democrats also want to bolster Dodd-Frank restrictions on Wall Street as well as the Consumer Financial Protection Bureau (CFPB) and the federal flood insurance program.
Homeland Security. Rep. Bennie Thompson, D-MS, wants the committee to conduct deep dives into election security, Trump's travel ban, the administration's uneven response to Hurricane Maria and the screening methods adopted by the Transportation Security Administration. He would lead questioning of enforcement on the southern border, as well as policies allowing family separations and the Wall.
Intelligence. It is easy to see Rep. Adam Schiff, D-CA, going hard after the special counsel investigation, possibly opening an examination of Russia's potential financial ties to Trump's global business empire.
Judiciary. Rep. Jerrold Nadler, D-NY, outlines a busy program of questioning policies affecting immigration, guns, voting rights and, of course, impeachment. Nadler has lashed out at the administration for refusing to defend certain ObamaCare insurance protections from outside lawsuits; for separating immigrant families at the southern border; for backing the National Rifle Association in opposition to tougher gun laws; and for defending states that have adopted tougher voting restrictions.
Natural Resources. Rep. Raúl Grijalva, D-AZ, hopes to revisit the Democrats' upset over environmental issues through serious questioning of policies and actions by Interior Secretary Ryan Zinke on climate change, oil drilling, shrinking national monuments and selling off mineral rights. He would seek to strengthen the Endangered Species Act and the National Environmental Policy Act.
Oversight and Government Reform. Rep. Elijah Cummings, D-MD, would lead investigations of all sorts — voting rights, elimination of pre-existing conditions in health care, attacks on the FBI, the media and other institutions.
As a forecast of what might come, Cummings and Oversight Democrats have submitted more than 50 subpoena requests for administrative documents on topics ranging from Trump’s efforts to dismantle ObamaCare and officials’ use of chartered flights to the president’s travel ban and the use of private email in the White House. Republicans have denied every request.
Transportation and Infrastructure. Rep. Peter DeFazio, D-OR, wants the infrastructure package that Trump promised but never delivered.
Ways and Means. Rep. Richard Neal, D-MA, wants to revisit the tax cuts, including hearings on what the cuts never delivered to the middle class. Democrats also want to reinstate the state and local tax deduction, known as SALT, that was capped in the GOP tax law. He also targets shoring up retirement savings, protecting multi-employer pension plans and infrastructure.
Does any of this sound “radical”?
##
www.terryschwadron.wordpress.com | https://terryschwadron.medium.com/what-would-democrats-do-969e9b88675f | ['Terry Schwadron'] | 2018-10-11 11:28:39.816000+00:00 | ['Democrats', 'Health', 'Politics', 'Environment', 'Congress'] | Title Would Democrats DoContent Terry H Schwadron Oct 11 2018 Even President Trump whipping 9000 Iowa rally cry Democrats “angry mob” intent “policies anger division destruction” actual Democrats Washington thinking happen actually flip currently Republicanmajority House November something odd juxtaposition “You don’t hand match arsonist don’t give power angry leftwing mob that’s Democrats are” Trump said oped USA Today president argued example plan would threaten senior represented “radical socialism” You’ve got squirm constantly pointed language marking political race Meanwhile TheHillcom went sober journalistic task agenda potential Democraticmajority future reporter actually asked would incoming House committee chair actually want amazing thing learn Democrats might actually take tenminute break fundraising pay attention instead matter governing said ambitious agenda seeking legislation address item governance budget gone unaddressed includes healthy dose closer questioning Trump official eight year minority Democrats want wide variety bill shoring ObamaCare DoddFrank financial rule protecting “Dreamers” integrity election course likelihood even House turn blue Senate hold REpubican majority meaning none bill hope becoming law Nevertheless good see different view even federal government Trump Republican leadership clear outline shrink social service build military offer tax cut built smaller version government vastly reduce regulation let corporation entrepreneur flourish course Democrats also vowing aggressive investigating action Trump administration — oversight role Democrats contend virtually abandoned Republicans Here’s summary thehlllcom’s finding committee Appropriations Rep Nita Lowey DNY would focus effort increasing support social service program including laborhealth spending bill 1 billion increase 2018 level medical research maternity care homeheating subsidy nutrition education program funding fight opioid crisis Lowey said aim would reinstate system passing various spending bill separately Armed ServicesRep Adam Smith DWA Wash would seek revisit particular initiative including spending overhaul nuclear weapon Smith said deployment special force would primary interest particularly operation Africa hotspot would revisita ban transgender people military Budget Rep John Yarmuth Ky want expand scope panel include overarching assessment specific issue broadest term impact federal budget mean new attention impact tax cut immigration health care climate change Energy Commerce priority Rep Frank Pallone Jr DNJ health care shoring cut ObamaCare target clearly opposed president Republicans proposes action reducing drug cost elimination income cap tax credit limitation rising Medicare payment drug committee also would take bigger oversight role towards energy issue Financial Services Rep Maxine Waters DCA fierce Trump critic line take consumerist agenda set aside Republicans including lot oversight investigation Democrats also want bolster DoddFrank restriction Wall Street well Consumer Financial Protection Bureau CFPB federal flood insurance program Homeland Security Rep Bennie Thompson DMSwants committee conduct deep dive election security Trump’s travel ban administration’s uneven response Hurricane Maria screening method adopted Transportation Security 
Administration would lead questioning enforcement southern border well policy allowing family separation Wall Intelligence easy see Rep Adam Schiff DCA going hard special counsel investigation possibly openingexamination Russia’s potential financial tie Trump’s global business empire Judiciary Rep Jerrold Nadler DNY outline busy program questioning policy affectingimmigration gun voting right course impeachment Nadler lashed administration refusing defend certain ObamaCare insurance protection outside lawsuit separating immigrant family southern border backing National Rifle Association opposition tougher gun law defending state adopted tougher voting restriction Natural ResourcesRep Raúl Grijalva DAriz hope revisit Democrats’ upset environmental issue serious questioning policy action Interior Secretary Ryan Zinke climate change oil drilling shrinking national monument selling mineral right would seek strengthen Endangered Species Act National Environmental Policy Act Oversight Government Reform Rep Elijah Cummings DMD would lead investigation sort — voting right elimination preexisting condition health care attack FBI medium institution forecast might come Cummings Oversight Democrats submitted 50 subpoena request administrative document topic ranging Trump’s effort dismantle ObamaCare officials’ use chartered flight president’s travel ban use private email White House Republicans denied every request Transportation Infrastructure Rep Peter DeFazioDOR want infrastructure package Trump promised never delivered Ways MeansRep Richard Neal DMA want revisit tax cut including hearing cut never delivered middle class also want reinstall state local tax deduction known SALT eliminated GOP tax law also target shoring retirement saving protecting multiemployer pension plan infrastructure sound “radical” wwwterryschwadronwordpresscomTags Democrats Health Politics Environment Congress |
1,750 | Donald Trump Is Smarter Than We Ever Gave Him Credit For | Donald Trump Is Smarter Than We Ever Gave Him Credit For
Ladies and gentlemen, we’ve been played.
Photo by Charles Deluvio on Unsplash
It was all an act.
Back in February, when everyone was trying to warn him about an impending pandemic, Donald Trump under-reacted. He didn't seem to get it. He dismissed the experts. He called it a Democratic hoax. In response, Democrats mocked him as a fool. Now we're finding out the truth.
The entire time, Trump knew how deadly the virus was. He’s on record acknowledging its potential to become deadlier than any virus we’d seen in a century. And yet, he continued to shrug it off as “Kung Flu,” and played politics with masks and ventilators.
He knew. And instead of trying to save lives, he conspired with his administration to sabotage cities and states with Democratic majorities. He did this in hopes that it would weaken bastions of liberal progressivism and turn the election in his favor.
Three books show us exactly who Donald Trump is.
You don’t even have to read these entire books to see the real Donald Trump, the one who his inner circle knows when the cameras cut off.
They show different sides of the president, but they all agree.
We should’ve been much more afraid of this man. | https://medium.com/the-apeiron-blog/donald-trump-is-smarter-than-we-ever-gave-him-credit-for-996c493f6492 | ['Jessica Wildfire'] | 2020-09-10 16:01:01.860000+00:00 | ['Books', 'Politics', 'Society', 'News', 'Culture'] | Title Donald Trump Smarter Ever Gave Credit ForContent Donald Trump Smarter Ever Gave Credit Ladies gentleman we’ve played Photo Charles Deluvio Unsplash act Back February everyone trying warn impending pandemic Donald Trump underreacted didn’t seem get dismissed expert called democratic hoax response democrat mocked fool we’re finding truth entire time Trump knew deadly virus He’s record acknowledging potential become deadlier virus we’d seen century yet continued shrug “Kung Flu” played politics mask ventilator knew instead trying save life conspired administration sabotage city state democratic majority hope would weaken bastion liberal progressivism turn election favor Three book show u exactly Donald Trump don’t even read entire book see real Donald Trump one inner circle know camera cut show different side president agree should’ve much afraid manTags Books Politics Society News Culture |
1,751 | How to Become Friends With Your Anxiety | Photographer: Michelle (Fisher) Bulla — Model: Allison Crady
Tension in your head, soreness along your upper back, your body slowly curves inward, and your mind races with unkind thoughts. You feel out of control, uncomfortable, and disoriented. You have anxiety.
I’m pretty sure I said something awkward, and I can’t get it out of my head — They must think I’m a psycho. Who the hell am I? Why do I say things like that? Will this ever stop?
I have a lot of anxiety, and it sucks. Sitting with discomfort and uncertainty was not part of my life training. Our emotional pain is real.
Becoming friends with anxiety means slowing down, acknowledging your pain, and listening to your body with self-compassion.
“You can’t stop the waves, but you can learn to surf.”
— Jon Kabat-Zinn
We can hear ideas over and over, then one day someone says it a certain way, and it clicks. By sharing my anxiety journey — from trying to “fix it” to somatic healing to self-compassion — I hope something clicks.
1. Acknowledging Anxiety
We all have anxiety as part of our human journey, a reminder of our shared humanity.
We have to start by acknowledging and welcoming its presence.
Pushing past anxiety is like telling yourself that it’s not okay to feel or be your whole self: numbing and unhelpful.
We like to push away anxiety with vices. I once drew a cartoon of myself watching TV with a thought bubble, “Haha, I don’t have to deal with my emotions.”
Pushing aside unpleasant emotions does not heal them.
2. Sitting with Anxiety
Most situations are not nearly as bad as we make them out to be. Our stories and judgments cause us pain and anxiety.
Anxiety activates our core emotions and fears, the need for control, survival, and love.
I found a few guided anxiety meditations that have helped — a 5-minute meditation and a 10-minute meditation. After grounding, we can separate the situation from our response and see the situation more clearly.
Being with your anxiety feels difficult, and we don’t want to start. It feels much more appealing to run to comfort or to try to “fix it.”
I try to logic my way through anxiety — Okay, my thoughts are racing. I seem to be feeling unsafe or inadequate. I’ll repeat a mantra, take deep breaths, do meditation or yoga, and it will go away.
We need to combine a logical approach with the felt experience.
Anxiety manifests as trapped energy in our bodies. We need to slow down and release the anxious energy. Easier said than done.
I get frustrated. I’m doing all the right things, so why isn’t this working? Why can’t I be calm already? I’ve been taking deep breaths for a while now.
It takes time and practice to build faith in your ability to feel and heal through anxiety. Our technology culture gives us instant gratification, so we expect everything to happen quickly.
Our bodies function in a slower, more flowing way. Our bodies require more listening, acceptance, and intentional breathing to feel balanced.
3. Listening to Anxiety
Our anxiety alerts us to important information. When we slow down and listen, we can ask ourselves what we need to feel better about the situation.
Talk to the anxious energy in your body — “What more is there for me to know?”
My body often reminds me to slow down and take deeper breaths. I remind myself that I am well-resourced to handle this situation.
I felt anxious about writing this blog. Watching my thoughts, I realized I felt scared of sharing my darkness and feeling not good enough.
By listening to my body, I learned that I need to let myself be vulnerable and embrace the creative process. I can also take steps to make the process more enjoyable and create safety for my artist child.
4. Building Confidence
Anxiety comes from feeling out of control. We feel like a situation, or our emotions, have become unmanageable, and that scares us a lot.
Anxiety is intelligent, blocked energy in our bodies. We can talk to the parts of our bodies where we feel tense, and we can move the blocked energy through our bodies.
My somatic life coach walked me through an exercise: notice the places in your body where you feel tension when you are anxious. Then, identify what confidence feels like in your body and where.
With awareness, you can gently move the energy back into a place of confidence.
I learned that my body’s wisdom knows how to handle every situation that I encounter. We need to become very relaxed and present to tap into our inner wisdom.
Using somatic processing, along with a logical approach, has helped me feel more in tune with my body and embrace anxiety.
5. Befriending Anxiety
Anxiety is part of us, and we need to value the ways it helps us. We need to care for our bodies, tell a new story about anxiety, and find strategies.
For many of us, self-compassion does not come naturally.
We grew up in critical, hyper-masculine environments that have made us judgmental of ourselves.
In her self-compassion talk, Dr. Kristin Neff shares an exercise: imagine a friend was having a hard time. How would you respond? Then imagine yourself having a hard time. Compare the responses.
Most of us are harsher with ourselves, using a judging tone. We are more willing to be kind and supportive of our friends in their hard times.
By understanding our anxiety, caring for our bodies, and listening to our needs, we can develop self-compassion, making us healthier, more resilient, more likable, and more balanced human beings. | https://medium.com/an-injustice/how-to-become-friends-with-your-anxiety-bd756e4526b6 | ['Allison Crady'] | 2020-12-16 01:18:43.420000+00:00 | ['Anxiety', 'Mental Health', 'Creativity', 'Compassion', 'Feminism'] | Title Become Friends AnxietyContent Photographer Michelle Fisher Bulla — Model Allison Crady Tension head soreness along upper back body slowly curve inward mind race unkind thought feel control uncomfortable disoriented anxiety I’m pretty sure said something awkward can’t get head — must think I’m psycho hell say thing like ever stop lot anxiety suck Sitting discomfort uncertainty part life training emotional pain real Becoming friend anxiety mean slowing acknowledging pain listening body selfcompassion “You can’t stop wave learn surf” — Jon KabatZinn hear idea one day someone say certain way click sharing anxiety journey — trying “fix it” somatic healing selfcompassion — hope something click 1 Acknowledging Anxiety anxiety part human journey reminder shared humanity start acknowledging welcoming presence Pushing past anxiety like telling it’s okay feel whole self numbing unhelpful like push away anxiety vice drew cartoon watching TV thought bubble “Haha don’t deal emotions” Pushing aside unpleasant emotion heal 2 Sitting Anxiety situation nearly bad make story judgment cause u pain anxiety Anxiety activates core emotion fear need control survival love found guided anxiety meditation helped — 5minute meditation 10minute meditation grounding separate situation response see situation clearly anxiety feel difficult don’t want start feel much appealing run comfort try “fix it” try logic way anxiety — Okay thought racing seem feeling unsafe inadequate I’ll repeat mantra take deep breath meditation yoga go away need combine logical approach felt experience Anxiety manifest trapped energy body need slow release anxious energy Easier said done get frustrated I’m right thing isn’t working can’t calm already I’ve taking deep breath take time practice build faith ability feel heal anxiety technology culture give u instant gratification expect everything happen quickly body function slower flowing way body require listening acceptance intentional breathing feel balanced 3 Listening Anxiety anxiety alert u important information slow listen ask need feel better situation Talk anxious energy body — “What know” body often reminds slow take deeper breath remind wellresourced handle situation felt anxious writing blog Watching thought realized felt scared sharing darkness feeling good enough listening body learned need let vulnerable embrace creative process also take step make process enjoyable create safety artist child 4 Building Confidence Anxiety come feeling control feel like situation emotion become unmanageable scare u lot Anxiety intelligent blocked energy body talk part body feel tense move blocked energy body somatic life coach walked exercise notice place body feel tension anxious identify confidence feel like body awareness gently move energy back place confidence learned body’s wisdom know handle every situation encounter need become relaxed present tap inner wisdom Using somatic processing along logical approach helped feel tune body embrace anxiety 5 Befriending Anxiety Anxiety part u need value way help u need care body tell new story anxiety find strategy many u selfcompassion come 
naturally grew critical hypermasculine environment made u judgmental selfcompassion talk Dr Kristen Neff share exercise imagine friend hard time would respond imagine hard time Compare response u harsher using judging tone willing kind supportive friend hard time understanding anxiety caring body listening need develop selfcompassion making u healthier resilient likable balanced human beingsTags Anxiety Mental Health Creativity Compassion Feminism |
1,752 | Why Flutter is the Future Trend in Mobile App Development? | Make Your Business Successful with Flutter Mobile App
Why Flutter is the Future Trend in Mobile App Development?
In this blog, you can get an overview of Flutter and see why Flutter is more efficient for startups.
Among startups, there is confusion about which cross-platform mobile app development framework will be more efficient for rapid growth in the competitive market. By choosing the wrong mobile application platform, many startups fail.
So, the quick solution is Flutter.
With the right choice of technology, startups can survive in the competitive world for a long time with greater efficiency. So in this blog, we will discuss the reasons why Flutter is the right choice for cross-platform mobile application development.
Brief Introduction of Flutter:
I know that digging into the technology might sound boring, but believe me, Flutter is easy to understand and genuinely interesting.
Flutter uses a single codebase for applications on both Android and iOS. It is a free, open-source, high-performance cross-platform framework. It was launched in 2018 by Google, which makes it more trustworthy, and it gives developers faster app development.
Flutter's hot reload feature saves time: you can change the codebase and see the result instantly. Developers can build the app without compromising performance, and Flutter apps can be highly customized and attractive.
Key points of the blog:
1. Why Flutter is the Best Platform?
2. Why Flutter is the Development Trend
3. What is the Scope of Flutter?
4. Amazing Apps Using the Flutter Framework
5. Conclusion
Why Flutter is the Best Platform?
React Native, Angular JS, and Xamarin are other mobile frameworks available besides Flutter. So when the decision comes, many developers and owners wonder why Flutter is the best platform for mobile app development.
Refer to the image below for a clear comparison:
Flutter is developed and supported by Google, so its long-term maintenance is stronger than that of the other frameworks.
Look at some benefits of building your mobile app in Flutter.
Cost-effective
It is cost-effective, so for startups it is the best option for mobile app development.
Fewer developers
There is no need to hire separate developers for Android and iOS because Flutter requires fewer developers. With one small team of Flutter developers, you can build a cross-platform mobile app speedily.
Faster code development
With Flutter, you can develop your app and write code faster than with other frameworks. This increases developers' efficiency and saves your business time.
Go beyond mobile
Flutter has the potential to go beyond mobile, which opens the door to more growth for your business.
Before choosing any technology, it is necessary to research the pros and cons of each framework. Thus, knowing these benefits of Flutter, you can decide whether a Flutter mobile app is the more efficient choice for your startup's next app.
Why Flutter is the Development Trend:
Let's quickly look at some reasons why Flutter is such a strong trend in mobile app development.
By editing one codebase, we can easily adjust the UI for both iOS and Android apps.
Hot reload means less time spent in app development: you can make instant changes without losing the present application state.
Flutter's performance is very close to that of a native app.
What is the Scope of Flutter?
The future scope of Flutter is as long as Google's.
Flutter has beaten React Native in the market. Let's have a look at the scope of Flutter in mobile app development. Flutter uses fewer resources than the alternatives: with less money and investment, and fewer developers, you can build the app. Flutter is easy to learn and increasingly popular among developers as well as in the market.
Flutter enables excellent, pixel-perfect design. Because it is built on Dart, developers can read, replace, remove, and change operations in an easier way. Constantly updated Dart libraries and higher code quality make Flutter apps more precise, accurate, and less bulky.
Amazing Apps using Flutter Framework:
Google Ads users can view their campaigns on their smartphones. The app provides campaign details, alert notifications, and suggestions, and even allows calling a Google expert. You can add, edit, and remove the keywords of a particular campaign, and more. So this Flutter app helps you manage all of your campaign activity from anywhere, without a desktop.
Alibaba is the world's largest e-commerce company, connecting dealers around the world. The Alibaba app is a wholesale marketplace app: a global trade app that lets users buy products from suppliers across the world on their phones.
Birch Finance is an app for credit card rewards that allows users to manage their existing cards. It provides various ways to earn and redeem rewards.
Coach Yourself is aimed at the German-language market. It is a meditation app that helps users with personal development.
This app provides news, videos, and lotteries daily, covering New York, Chicago, London, and more tour locations.
Watermaniac is a healthcare app that lets users track the amount of water they drink. Using the app, users can set reminders and alerts about drinking water; it is a highly customizable app that helps users set and achieve a daily water goal.
Conclusion: | https://medium.com/devtechtoday/why-is-fluter-the-future-trend-in-mobile-app-development-26596c84296b | ['Binal Prajapati'] | 2020-03-19 12:26:46.629000+00:00 | ['Mobile App Development', 'Technology', 'Startup', 'Business', 'Flutter'] | Title Flutter Future Trend Mobile App DevelopmentContent Make Business Successful Flutter Mobile App Flutter Future Trend Mobile App Development Blog Get Overview flutter Flutter Efficient Startups startup confusion regarding crossplatform mobile app development efficient future rapid growth competitive market choosing wrong mobile application platform many startup fail Quick Solution Flutter right choice technology startup easily survive competitive world long time efficiency blog discus reason flutter right choice crossplatform mobile application development Brief Introduction Flutter know understand technology making boring believe flutter easy understand interesting Flutter single code base application Android iOS free opensource crossplatform app high performance launched 2018 Google trustworthy faster app development developer Flutter hot reload feature save time change codebase instantly developer build app without compromising performance Flutter customize attractive app ever Pick point Blog Flutter Best Platform Flutter Development Trend Scope Flutter Amazing Apps using Flutter Framework Conclusion Flutter Best Platform React Native Angular Js Xamarin mobile framework available flutter decision come many developer owner think flutter best platform mobile app development Refer image clear comparison Flutter developed supported google long term maintenance framework Look benefit mobile app flutter Costeffective costeffective startup best option mobile app development Fewer developer requirement hire separate developer Android iOS Flutter required fewer developer one small team Flutter developer build crossplatform mobile app speedily Faster code development Flutter develop app faster code framework increase developers’ efficiency save time business Go beyond mobile Flutter potential ability go beyond mobile lead growth business choosing technology research pro con necessary know framework Thus knowing benefit Flutter make decision Flutter mobile app efficient next mobile app development startup Flutter Development Trend Quickly look reason Flutter development trend mobile app development editing code IOS Android apps easily adjust UI spending time inapp development save time develop app instant change without losing present application state Flutter similar native app performance Scope Flutter future scope Flutter long Google Flutter beat React Native market Let’s look Scope Flutter mobile app development resource le used Flutter compared others le money investment fewer developer build app Flutter easy learn popular among developer also market Flutter excellent pixelperfect design system Dart developer implement Flutter reading replace remove change operation easier way Constantly updated Dart library quality code Flutter creates precise accurate le bulky app Amazing Apps using Flutter Framework Google ad user view campaign smartphone provides detail campaign alert notification suggestion also allow calling Google expert add edit remove keywords particular campaign Flutter app also help manage activity app without desktop anywhere Alibaba world’s largest ecommerce company connects dealer around world Alibaba app wholesale marketplace app global trade app provides user buy product supplier across world mobile app Birch Finance app 
credit card reward allows user manage card existing provides various way redeem earn reward Coach Germanlanguage market meditation app help user personal development app provides news video lottery daily app New York Chicago London tour location Watermaniac healthcare app provides user track amount water drink using app user set reminder alert regarding drinking water customized app help user set achieve daily goal water ConclusionTags Mobile App Development Technology Startup Business Flutter |
1,753 | Confidence intervals for permutation importance | Confidence intervals for permutation importance
A new theoretical perspective on an old measure of feature importance
Feature importance helps us find the features that matter.
Introduction
In this post, we explain how a new theoretical perspective on the popular permutation feature importance technique allows us to quantify its uncertainty with confidence intervals and avoid potential pitfalls in its use.
First, let's motivate the "why" of using this technique in the first place. Let's imagine you just got hired onto the data science team at a major international retailer. Prior to your arrival, this team built a complex model to forecast weekly sales at each of your dozens of locations around the globe. The model takes into account a multitude of factors: geographic data (like local population density and demographics), seasonality data, weather forecast data, information about individual stores (like total square footage), and even the number of likes your company's tweets have been getting recently. Let's assume, too, that this model works wonders, giving the business team insight into future sales patterns weeks in advance. There is just one problem. Can you guess what it is?
Nobody knows why the sales forecast model works so well.
Why is this a problem? A number of reasons. The business folks relying on the model’s predictions have no idea how reliable they would be if, say, Twitter experienced an outage and tweet likes decreased one week. On the data science team, you have little sense of what factors are most useful to the model, so you’re flying blind when it comes to identifying new signals with which to bolster your model’s performance. And let’s not forget other stakeholders. If a decision based on this model’s forecast were to lead to bad results for the company, the board will want to know a lot more about this model than “it just works,” especially as AI continues to grow more regulated.
So what can we do? A great first step is to get some measure of feature importance. This means assigning a numerical score of importance to each of the factors that your model uses. These numerical scores represent how important these features are to your model’s ability to make quality predictions.
Many modeling techniques come with built-in feature importance measurements. Perhaps you can use the information-gain-based importance measure that comes by default with your xgboost model? Not so fast! As your teammates will point out, there is no guarantee that these feature importances will describe your complex ensemble, and besides, gain-based importance measures are biased [1].
So what can we do instead? We can use “randomized ablation” (aka “permutation”) feature importance measurements. Christoph Molnar offers a clear and concise description of this technique in his Interpretable ML Book [2]:
The concept is really straightforward: We measure the importance of a feature by calculating the increase in the model’s prediction error after permuting the feature. A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature for the prediction.
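To make this concrete, here is a minimal sketch of the procedure for a single feature, assuming a fitted regressor with a scikit-learn-style predict method, a NumPy feature matrix X, targets y, and squared-error loss:

```python
import numpy as np

def randomized_ablation_importance(model, X, y, feature_idx, seed=0):
    """Increase in mean squared error after permuting one feature column."""
    rng = np.random.default_rng(seed)
    baseline_error = np.mean((model.predict(X) - y) ** 2)
    X_ablated = X.copy()
    rng.shuffle(X_ablated[:, feature_idx])  # ablate the feature by permuting it
    ablated_error = np.mean((model.predict(X_ablated) - y) ** 2)
    return ablated_error - baseline_error
```

A large positive score means the model leaned on that feature; a score near zero means permuting it changed nothing.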
Background
Where did this technique come from? Randomized ablation feature importance is certainly not new. Indeed, its inception dates back to at least 2001, when a variant of this technique was introduced as the “noising” of variables to better understand how random forest models use them [3]. Recently, however, this technique has seen a resurgence in use and variation. For example, an implementation of this technique will be included in the upcoming version 0.22 of the popular Scikit-learn library [4]. For a more theoretical example, consider the recently-introduced framework of “model class reliance,” which has termed a variant of the randomized ablation feature importance “model reliance” and used it as a core building block [5].
A new theoretical perspective
While working with this technique at Fiddler Labs, we have sought to develop a clear sense of what it means, theoretically, to permute a column of your features, run that through your model, and see how much the model’s error increases. This has led us to use the theoretical lens of randomized ablation, hence our new name for what is commonly called permutation feature importance.
In a recent preprint released on arXiv, we develop a clear theoretical formulation of this technique as it relates to the classic statistical learning problem statement. We find that the notion of measuring error after permuting features (or, more formally, ablating them through randomization) actually fits in quite nicely with the mathematics of risk minimization in supervised learning [6]. If you are familiar with this body of theory, we hope this connection will be as helpful to your intuition as it has been to ours.
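In notation of our own choosing (a paraphrase, not the preprint's exact statement), the idea is that the importance of feature j is the increase in expected risk when that feature is ablated by randomization:

$$ I_j = \mathbb{E}\left[ L\left( f(\tilde{X}_j, X_{-j}),\, Y \right) \right] - \mathbb{E}\left[ L\left( f(X),\, Y \right) \right] $$

where f is the model, L the loss, X_{-j} the remaining features, and \tilde{X}_j an independent copy of feature j (which an in-sample permutation approximates).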
Additionally, our reformulation provides two ways of constructing confidence intervals around the randomization ablation feature importance scores, a technique that practitioners can use to avoid potential pitfalls in the application of randomized ablation feature importance. To the best of our knowledge, current formulations and implementations of this technique do not include these confidence measurements.
Confidence intervals on feature importance
Consider what might happen if we were to re-run randomized ablation feature importance with a different randomized ablation (e.g. by using a different random seed), or if we run it on two different random subsets of a very large dataset (e.g. to avoid using a full dataset that would exceed our machine’s memory capacity). Our feature importances might change! Ideally, we would want to use a large dataset and average over many ablations to mitigate the randomness inherent in the algorithm, but in practice, we may not have enough data or compute power to do so.
There are two sources of uncertainty in the randomized ablation feature importance scores: the data points we use, and the random ablation values (i.e. permutation) we use. By running the algorithm multiple times and examining the run-to-run variance, we can construct a confidence interval (CI) that measures the uncertainty stemming from the ablation used. Similarly, by looking point-by-point at the loss increases caused by ablation (instead of just averaging loss over our dataset), we can construct a CI that measures the uncertainty stemming from our finite dataset.
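Sketching the first (run-to-run) interval with the single-feature scorer from earlier, under a normal approximation across K independent ablations (the paper's exact estimators may differ in detail):

```python
import numpy as np

def ablation_confidence_interval(model, X, y, feature_idx, K=30, z=1.96):
    """Approximate 95% CI for one feature's importance over K ablations."""
    scores = np.array([
        randomized_ablation_importance(model, X, y, feature_idx, seed=k)
        for k in range(K)
    ])
    half_width = z * scores.std(ddof=1) / np.sqrt(K)
    return scores.mean() - half_width, scores.mean() + half_width
```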
Example: forecasting the price of a home
To demonstrate the use of randomized ablation feature importance values with CIs, let’s apply the technique to a real model. To this end, I used the Ames Housing Dataset [7] to build a complex model that estimates the sale price of houses. The full code for this example is available in a Jupyter notebook here.
To show the importance of confidence intervals, we run randomized ablation feature importance using only 100 points and just K=3 repetitions. This gives us the following top-10 features by score, with a 95% confidence interval indicated by the black error bars:
Randomized ablation feature importance for 100 points after 3 repetitions.
As we can see from our error bars, it is uncertain which feature is actually the third most important over these 100 points. Re-running randomized ablation feature importance with K=30 repetitions, we arrive at much tighter error bounds, and we find with confidence that a house’s neighborhood actually edges out its total basement square footage in importance to our model:
Randomized ablation feature importance for the same 100 points after 30 repetitions.
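As a side note, the scikit-learn implementation mentioned earlier exposes the repetition count directly, so a comparison like the one above can be sketched in a few lines (model, X_sample, and y_sample are placeholder names, not the notebook’s):

from sklearn.inspection import permutation_importance

for k in (3, 30):
    result = permutation_importance(model, X_sample, y_sample,
                                    n_repeats=k, random_state=0)
    # result.importances has shape (n_features, n_repeats); the spread
    # across repeats is what the error bars above visualize.
    print(k, result.importances_mean, result.importances_std)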
However, it turns out that a larger source of uncertainty in these feature importance scores actually stems from the small size of the dataset used, rather than the small number of ablation repetitions. This fact is uncovered by using the other CI methodology presented in our paper, which captures uncertainty resulting from both the ablation and the size of the dataset. Running this other CI technique on another 100 points of our dataset (with just one repetition), we observe the following wide CIs:
Randomized ablation feature importance for 100 points with point-by-point CIs.
By increasing the number of points to 500 instead of 100, the intervals tighten significantly, and we become fairly confident that neighborhood is the third most important feature to our model overall (not just in our limited dataset).
Randomized ablation feature importance for 500 points with point-by-point CIs.
Conclusion
Feature importance techniques are a powerful and easy way to gain valuable insight about your machine learning models. The randomized ablation feature importance technique, often referred to as “permutation” importance, offers a straightforward and broadly-applicable technique for computing feature importances. We also showed here how, through a new way of theorizing and formulating the “true” value of randomized ablation feature importance, we are able to construct confidence intervals around our feature importance measurements. These confidence intervals are a useful tool for avoiding pitfalls in practice, especially when datasets are not large.
If you liked this post, you can find more like it on Fiddler’s blog, and if you want a deeper dive into CIs for randomized ablation feature importance, be sure to check out the full paper. Don’t worry, it’s only four pages long!
References
[1] Parr et al. Beware Default Random Forest Importances (2018). https://explained.ai/rf-importance/
[2] Molnar, Christoph. Interpretable Machine Learning (2019). https://christophm.github.io/interpretable-ml-book/feature-importance.html
[3] Breiman, Leo. Random Forests (2001). https://www.stat.berkeley.edu/%7Ebreiman/randomforest2001.pdf
[4] Scikit-learn Contributors. Permutation feature importance (2019). https://scikit-learn.org/dev/modules/permutation_importance.html
[5] Fisher et al. Model Class Reliance (2019). https://arxiv.org/abs/1801.01489
[6] Merrick, Luke. Randomized Ablation Feature Importance (2019). https://arxiv.org/abs/1910.00174
[7] De Cock, Dean. Ames, Iowa: Alternative to the Boston Housing Data as an End of Semester Regression Project (2011). http://jse.amstat.org/v19n3/decock.pdf | https://towardsdatascience.com/confidence-intervals-for-permutation-importance-2d025bc740c5 | ['Luke Merrick'] | 2019-10-08 18:07:06.602000+00:00 | ['Feature Importance', 'AI', 'Artificial Intelligence', 'Explainable Ai', 'Machine Learning']
1,754 | APEX Supernode Candidate Selections | Supernode Candidates, ongoing efforts & reimbursement continuation
It should be noted that each and every Community Supernode Candidate will be expected to maintain a certain level of activity and ongoing support for the project, as their efforts up to this point form a large part of the basis for why they have been selected. This is not solely for the benefit of APEX Network, but will be a requirement to maintain support and garner the votes necessary from the community of CPX holders to retain active production status in the future.
Every Supernode Candidate will receive a special “Supernode Candidate” tag in the main and tech support chat.
Supernode Candidates will receive continued reimbursement for their hardware rental costs until further notice. Reimbursement would naturally halt before or close to the start of staking on the mainnet. | https://medium.com/apex-network/apex-supernode-candidate-selections-858f6db85dbf | ['Apex Team'] | 2020-01-15 11:01:17.594000+00:00 | ['AI', 'Technology', 'Blockchain', 'Big Data']
1,755 | Everything You Know About Productivity is a Lie | We live in a world of empty slogans and meaningless mantras. But, of all the empty phrases in our culture, none are as damaging as productivity.
Productivity sounds nice. It sounds technical, efficient, and powerful. Most of us have bought into the idea that to be successful you have to be productive. You probably end most of your days looking at your to-do list and scolding yourself for being so unproductive.
Productivity used to be a term of art in the discipline of economics. Now, it is a multi-billion-dollar cash cow for the self-help industry. What does productivity mean?
The most common answer is the equally vapid phrase, “getting shit done.” But what shit are you getting done?
In the knowledge economy, productivity is meaningless. It is a leftover from the first industrial revolution. It might have mattered how many widgets you could crank out on the assembly line in 1950. But measuring the amount of thought you put into an article, design, or line of code is impossible. Productivity is an unmoored metric in the knowledge economy. It doesn’t measure anything worth tracking.
Productivity in Economics
In economics, productivity is the ratio between the output volume and the input volumes. It is a measure of efficiency. Economists look at productivity to see how efficiently countries and corporations are using capital and labor. In economic terms, the more productive you are, the more value you can create for the economy.
Productivity only makes sense as a ratio. The problem is that it has become increasingly difficult in the knowledge economy to measure the inputs that go into creating an output. Traditionally, economists look at hours worked. Payroll records are the most common way to assess the amount of labor used to create a given product or service.
This works well for most physical products. But it fails miserably with the kind of work lawyers, writers, designers, consultants, coders, and engineers do. If thinking is a major part of your job, traditional measures of productivity do not accurately capture your inputs. They also likely fail to measure the value of your output accurately.
It might take me one hour to type out an article. If that is all I produce that day, have I been productive? If I sell the article for $400, does that mean I was more productive than if I only sell it for $100? What if I have been thinking about the article and writing it in my head for months? What if I wrote the entire thing from concept to final edits in an hour? How does productivity measure my efficiency?
There are too many variables in the creation of knowledge work for economic productivity to accurately calculate its economic efficiency.
Productivity is even more nebulous when you leave economics and venture into the world of self-help.
Productivity in Our Self-Help Culture
Not surprisingly, the definition of productivity in the realm of self-help is not as precise as the one used by economists. Most self-help gurus think of productivity only in terms of output. Input is rarely even an afterthought. It is all about how much did you get done that can be shipped out today?
There are thousands of self-help productivity books, and ten times that many productivity gurus. Our hustle culture loves to promote the idea of getting shit done.
But, as a knowledge worker, what does it even mean to get shit done? If you are a painter, are you more productive if you finish twenty miniature portraits in a week than if you take three months to paint a mural in a public park?
If you write one line of code that solves the one bug that has kept the product from shipping, are you less productive or more productive than all the coders who spent weeks writing the rest of the code?
If you are a writer, is writing 10,000 words more productive than writing 1,000 words? Does it make you ten times more productive?
I know many freelance writers that make less writing 10,000 words than I make writing 500 words because I work with businesses, and I charge high rates for my labor. Self-help productivity might leave me feeling like a loser and the 10,000-word writer making pennies per word deeply confused. They might wonder, if they got so much shit done, why are they still broke?
Again, the notion of productivity also fails to capture all the time I spend thinking about my work. I write in the shower every day. I outline articles and write complete introductions. I end up publishing or selling many of those articles. Many of them never go anywhere because, upon deeper reflection, I realize they are garbage.
How does productivity measure that?
Today while playing cards with my kids, I had an interesting idea pop into my head. I jotted it down and will return to it later. Does that mean playing cards was productive? Or, am I only as good as the number of projects I finish in a day?
Like other knowledge workers, I draw upon my life experience and the media influences I am exposed to every day in creating my work. There is no way to measure the inputs I require to produce something. Additionally, not everything I produce has an absolute economic value. I have sold articles to magazines, written content and sales copy for clients, and independently published articles and books.
Some of the pieces I have published as an indie have earned far more over several years than they would have made if I had sold them to a traditional publisher like a magazine or if I had written them for a client. But, for most things I write, I earn the most money writing for private clients. Sometimes I have published articles independently and only made a couple of bucks.
However, occasionally an article I have published myself or placed with a magazine or website under my own name goes on to bring new clients to me. It is impossible to see a finished article and book and know what its economic value will be.
If you are a knowledge worker, you should stop focusing on productivity and focus on something else instead.
Process Over Productivity
How you work is a better measure of your progress than your pile of completed tasks. If you want to maximize your work time, you need to create a process for doing your work. If you focus on process over tasks or goals, you will be happier and more successful.
As a knowledge worker, you usually cannot control how well your idea or finished product is received. You can control your process.
I do not write every day. Every day I spend time doing some combination of these activities:
Writing
Reading
Thinking
Outlining
Editing
Observing the world around me
Throwing away bad ideas
I do plan ahead to make sure I can meet client deadlines. I have a writing process. I don’t worry about how many words I write a day or how many projects I finish in a day. Instead, I focus on my process. Over the past eight years, I have learned that I will make the money I want if I do certain things consistently.
When I fail to follow my process, my business flails. I don’t have to-do lists. My only goal is to be better today than I was yesterday. I don’t panic about efficiency or stress about how much shit I got done.
I can measure my process. It takes into account everything I need to create my best work. It also takes into account my crazy life. Some days that means I only read and think. Some days I produce 500 words, and other days I produce 5,000 words. Some days I don’t write any words. It doesn’t matter as long as I am following my process.
I do care about how many projects I finish and ship. But, writing and other creative endeavors are about much more than the end result. The process determines the quality and quantity of work I create. A daily tally of checkmarks doesn’t make me a more prolific writer.
Knowledge workers of the world, it’s time to unite and kill our senseless obsession with productivity. | https://medium.com/escape-motivation/everything-you-know-about-productivity-is-a-lie-4e2a703f806e | ['Jason Mcbride'] | 2020-08-01 23:13:50.550000+00:00 | ['Life Lessons', 'Business', 'Productivity', 'Freelancing', 'Writing']
1,756 | Inside Out: Repository Pattern for Data Layer | Inside Out: Repository Pattern for Data Layer
A perfect place to put your Domain logic for Data Models outside the Entity definition
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. — Edward Hieatt and Rob Mee in Patterns of Enterprise Application Architecture by Martin Fowler
Before We Begin
The Object-Relational Impedance Mismatch*
In industry, relational databases (e.g., OracleDB, MySQL, PostgreSQL) are the most widely used persistent data storage for applications. On the other hand, object-oriented programming is the dominant programming paradigm. So it is often the case that your application will be written in a language that supports object-oriented patterns, while your database will be relational in nature. The object-oriented paradigm evolved out of programming techniques from the software engineering domain, whereas the relational model rests on proven mathematical foundations and found its application in data storage before making its way into software engineering. As one might expect, the two do not map cleanly onto each other, leaving several areas of subtle difference and gaps in communication. This is termed the Object-Relational Impedance Mismatch. Lucky for us, years of software development have produced more than enough solutions to this problem.
Database Connectors and Transaction Script Pattern*
As is obvious, every enterprise application has to find a way to query and mutate data stored inside a database based upon application behaviour. The basic and naïve technique is to open a TCP connection to the database and operate on it directly. Every database vendor offers SDK libraries in almost every popular programming language for connecting to the database port exposed over the network (e.g., JDBC), and every SQL database has to support the standard query specification defined by SQL. Note that although every database vendor must abide by the SQL spec, they can, if they want, extend it to make the best use of their database and gain an edge in product comparisons. So, in order to perform any application behaviour that requires a database operation, the application has to open one or more connections to the database, write SQL as per the behaviour requirements, and execute it inside the database itself. The SDK offers a cursor into the current execution context inside the database, which the application can use to perform core database operations (e.g., COMMIT, ROLLBACK). As is evident, this comes with its own set of problems. Because the SQL query is stored inside the application, it is strongly coupled with the application code. Not only that, any change in requirements will force the engineer to rethink the implementation on the database side as well.
Stored Procedures and Triggers*
Extending the above discussion a bit more, Stored Procedures and Triggers are quite common non-standard features that SQL databases offer. The purpose of a Stored Procedure is to define the set of instructions that must be performed for one single operation expected from the application. It could be the addition of a Foreign Key to an entity in a different table in response to an INSERT mutation on the original table. While the complexity has been moved out of the application, which just invokes the procedure via an SQL prepared statement, the domain-specific implementation now lives inside the database itself. This increases coupling at a cost that must be avoided in any changing system. Triggers can also be helpful tools, but again they should be used for database-specific operations and not for domain-specific ones.
Domain Model Pattern*
The first layer of abstraction that we can use is the Domain Model pattern. A Domain Model is an object model of the domain that incorporates both behaviour and data, so it can be thought of as a set of states (attributes) and methods (behaviour). It then seems natural to represent a model in the object-oriented paradigm as a class with a one-to-one mapping to a database table. The entity attributes of the database table are represented by class attributes that are either initialised in the constructor or set through public accessors on the class instance. Each method of the class can then represent one form of database operation, query or mutation, related to that model. Compared to the Transaction Script pattern, where all the logic resides in the application controller layer that handles the service requests of the application, the database-specific operations are now moved inside the model implementations, providing a clean layer of abstraction between business logic and database logic.
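As a minimal Python sketch (the table, class, and connection names are illustrative, and the connection is assumed to follow the Python DB-API, e.g. sqlite3):

class Account:
    """Domain Model: one instance maps to one row of the accounts table."""

    def __init__(self, account_id, owner, balance):
        # Data attributes mirror the table's columns.
        self.account_id = account_id
        self.owner = owner
        self.balance = balance

    def save(self, db):
        # Behaviour: each method wraps one database operation.
        db.execute("UPDATE accounts SET owner = ?, balance = ? WHERE id = ?",
                   (self.owner, self.balance, self.account_id))

    @classmethod
    def find(cls, db, account_id):
        row = db.execute("SELECT id, owner, balance FROM accounts WHERE id = ?",
                         (account_id,)).fetchone()
        return cls(*row) if row else None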
Data Mapper*
With the Domain Model, we separate model behaviour from controller behaviour, but that does not solve the Object-Relational Impedance Mismatch mentioned in the beginning. For example, you cannot do inheritance in databases, whereas it’s a common practice in object-oriented programming. To solve such inconsistencies, another layer is introduced between the Domain Model (the in-memory entity) and the database table row (the persistent entity). The responsibilities of this layer include keeping in-memory references in sync with the database table rows and propagating any change made to an in-memory reference. In particular, the Domain Model class and the database table know nothing of each other; the Data Mapper holds the knowledge of how to map them and transfer information from one to the other.
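Continuing the illustrative Account example, the SQL from the previous sketch moves out of the model and into a mapper, which alone knows both sides:

class AccountMapper:
    """Data Mapper: moves data between Account objects and the accounts table."""

    def __init__(self, db):
        self.db = db

    def to_entity(self, row):
        # Build an in-memory Account from a database row.
        return Account(account_id=row[0], owner=row[1], balance=row[2])

    def insert(self, account):
        self.db.execute("INSERT INTO accounts (id, owner, balance) VALUES (?, ?, ?)",
                        (account.account_id, account.owner, account.balance))

    def update(self, account):
        # Ripple an in-memory change back to the persistent row.
        self.db.execute("UPDATE accounts SET owner = ?, balance = ? WHERE id = ?",
                        (account.owner, account.balance, account.account_id))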
Object Relational Mapper/Entity Relational Mapper
Extending functionality one more level over the Data Mapper is the Object Relational Mapper. A Data Mapper only maps between a Domain Class instance and the corresponding entity row in a database table; it is not aware of any relations that exist among entities, which is often the case. To address this, the Object Relational Mapper is introduced. An Object Relational Mapper can not only map between a Domain Model and a Database Entity, but can also define, query, and mutate other Domain Models and Database Entities that are related to it. This abstracts all the database operations away from the model implementation by offering a standard set of APIs to query and mutate, which the Model class then uses to perform specific persistence operations without having to write any raw SQL. There are multiple patterns involved in achieving this: Identity Field, Association Table Map, Foreign Key Map and Dependent Map.
Repository Pattern
Entity
An Entity is a single instance of a Domain Model class that represents an entry in a database table, i.e., an Entity has a one-to-one mapping with a database table row. So, for a committed Entity, the data attributes of that instance can safely be assumed to be persisted inside the database. Every Entity Relational Framework, or simply Entity Framework, offers a set of functionality to perform standard query and mutate operations. But often, if not always, enterprise application models require more than primitive operations, and that usually means some amount of business logic becomes coupled with the Model class implementation. For a small-scale application, that might be enough and even the preferable option. But as requirements keep adding up and changing, it becomes tedious to maintain all of that inside a Model object. Moreover, the more business logic depends on the Entity Framework, the more tightly the application is coupled to the framework, which in turn makes it hard to replace or modify.
Repository
A Repository is a collection of a particular Entity type; Repositories always have a one-to-one correspondence with Entities. In other words, a Repository has a composition relation with its Entity. A Repository can be thought of as a table or entity set in a relational database, whereas an Entity is a row in that table with a set of attributes of its own, identified by a key. Having separated the Domain Model implementation from the specific business rules, we can now put those rules inside the Repository. In the Repository pattern, a Model is allowed to have only the data attributes that the Data Mapper pushes to the database, plus static/hook methods that perform pre-processing of those attributes during the object lifecycle. The rest of the functionality expected of the data layer, i.e., the business logic around the Model, can safely be implemented inside the Repository class.
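Putting the pieces together, here is a hedged sketch of a Repository that composes the mapper and hosts the domain-specific queries and business rules (again with illustrative names):

class AccountRepository:
    """Collection-like access to Account entities plus the domain logic."""

    def __init__(self, mapper):
        self.mapper = mapper

    def find_all(self):
        rows = self.mapper.db.execute("SELECT id, owner, balance FROM accounts").fetchall()
        return [self.mapper.to_entity(row) for row in rows]

    def find_overdrawn(self):
        # A domain-specific query, reusable across the application (DRY).
        return [a for a in self.find_all() if a.balance < 0]

    def apply_interest(self, rate):
        # A business rule: it lives in the Repository, not the Model.
        for account in self.find_all():
            account.balance *= (1 + rate)
            self.mapper.update(account)

With this in place, controllers depend only on the Repository’s interface, which is exactly the decoupling described in the Advantages below.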
Recap
A Domain Model is an object-oriented representation of a database table entity.
A Data Mapper takes an in-memory Domain Model instance, maps it to a particular database table entity, and is responsible for keeping both in sync.
An Object-relational Mapper extends the behaviour of the Data Mapper and associates related database table entities with the Domain Model object.
A Repository is a collection of Entities, and hence can be thought of as a representation of a database table, where all business logic related to the Model resides.
Advantages
By separating business logic and domain implementation, the application no longer has a hard dependency on the Entity Framework. Hence, depending upon use-cases, the Entity Framework can be changed, modified or upgraded.
By encapsulating domain-specific logic inside the Repository, we can reuse queries, following the DRY methodology.
We can also apply object-relational behaviour patterns (e.g., Unit of Work, Identity Map, Lazy Load) within the scope of the Repository to optimise the number of database operations and hence overall application performance.
By providing a solid separation between the application controller (or web controller) and the Domain Model, tight coupling between modules can be eradicated, and all queries and mutations at the controller layer can be done based upon the interfaces exposed by the Repository alone.
Encapsulation of the Domain Model behind the Repository helps avoid unnecessary bugs such as exposing sensitive information to the outside world. It also helps maintain stable relations among domain objects and perform JOIN operations smoothly, abstracted away from the top layers.
Bibliography
Patterns of Enterprise Application Architecture — Martin Fowler, David Rice, Matthew Foemmel, Edward Hieatt, Robert Mee, Randy Stafford
Domain-Driven Design: Tackling Complexity in the Heart of Software — Eric Evans, Foreword by Martin Fowler
Agile Database Techniques: Effective Strategies for the Agile Developer — Scott W. Ambler
I found this repository a great starting point for understanding the Repository Pattern with a very simple relational example: https://github.com/w3tecch/express-typescript-boilerplate | https://medium.com/swlh/inside-out-repository-pattern-for-data-layer-5eca4dd0e7d4 | ['Progyan Bhattacharya'] | 2020-06-07 15:55:15.135000+00:00 | ['Database', 'Architecture', 'Software Development', 'Design Patterns', 'Software Engineering']
1,757 | The AWS Shell | Anyone developing applications and infrastructure in AWS will at some point make use of the AWS Command Line Interface (CLI), either interactively at a shell command line, or by integrating the CLI into shell scripts.
While it is a powerful avenue to access, create and manage AWS resources, it can get cumbersome to remember all of the possible commands and arguments for each of the services needed. We have relatively simple commands like
$ aws ec2 describe-subnets
which doesn’t need any arguments to retrieve the list of subnets in your account’s default region. On the other hand, there are a large number of CLI commands which require one or more arguments to get a response and the data you are interested in.
The AWS Shell is a GitHub project which provides an alternative interactive interface, one capable of guiding you through what you need to type next for each command. aws-shell is a Python-based tool, easily installed using pip.
pip install aws-shell
After installing aws-shell, the first execution takes a little longer as the autocomplete index is built for all of the AWS commands. The autocomplete index is important as we shall see in a moment. In addition to the autocomplete index, a complete documentation set is also indexed and displayed at various times.
Before you can use the shell to access AWS resources, you must configure your AWS access and secret key in the same manner as you would for the AWS CLI. Start aws-shell, and enter configure as the command.
aws> configure
AWS Access Key ID [****************NEWP]:
AWS Secret Access Key [****************w7NK]:
Default region name [us-east-1]:
Default output format [json]:
aws>
All of the commands you would normally use in the CLI are available in aws-shell, except you no longer type the aws prefix, and you get autocomplete to help you execute each command successfully.
Profile Support
One thing I find tedious about the CLI is using profiles to change which access and secret key I am using. This is much simpler to take advantage of in aws-shell.
First, we need to set up a profile if we don’t already have one. Let’s look at what we have configured already (output has been formatted to fit the view).
aws> configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key ****NEWP shared-credentials-file
secret_key ****w7NK shared-credentials-file
region us-east-1 config-file ~/.aws/config
Now, let’s add a profile called test, and then list the credentials we have configured:
aws> configure --profile test
AWS Access Key ID [None]: ........
AWS Secret Access Key [None]: ........
Default region name [None]: us-east-2
Default output format [None]: json
aws> configure list
Name Value Type Location
---- ----- ---- --------
profile <not set> None None
access_key *****NEWP shared-credentials-file
secret_key *****w7NK shared-credentials-file
region us-east-1 config-file ~/.aws/config
What? Where is our new profile? To see the new credentials, we have to specify the profile argument.
aws> configure list --profile test
Name Value Type Location
---- ----- ---- --------
profile <not set> test manual --profile
access_key *****YAMY shared-credentials-file
secret_key *****C7iv shared-credentials-file
region us-east-2 config-file ~/.aws/config
aws>
Just as we can use this profile with the AWS CLI like:
$ aws ec2 describe-instances --profile test
which gets tedious very quickly, we can set the profile we want to use in the aws-shell.
Using Profiles
With your profile created, you can either start aws-shell with a profile definition using the command
$ aws-shell --profile test
Alternatively, you can also set and change your profile from within the aws-shell.
aws> .profile
Current shell profile: no profile configured
You can change profiles using: .profile profile-name
aws> .profile test
Current shell profile changed to: test
aws> .profile
Current shell profile: test
aws>
As you know from working with the AWS CLI, changing profiles changes the access and secret key used to execute the commands in the associated account.
Using the aws-shell
The aws-shell takes over the terminal window, and displays some key sequences at the bottom, which can be toggled to suit your preference.
The options which can be toggled and the key sequences are:
F2 — Turn “fuzzy” on or off
F3 — Change the key layout between emacs and vi
F4 — Multi-column — provide command and sub-command hints in one or two columns
F5 — Turn Help on or off
F9 — set the focus
F10 — exit aws-shell
Fuzzy Search
“Fuzzy” refers to fuzzy searching for the commands you type. This means you can get to the command you want without typing the full name.
For example, we can type EC2 drio, and aws-shell shows ec2 describe-reserved-instances-offerings as the first option, as drio are the first letters of each of the words in the command. Similarly, typing r53 shows the list of Route53 commands. Pressing the F2 key to turn off fuzzy searching means you must type the command, sub-command, and options exactly. This feature is best left enabled.
Keys
This alters the key bindings used by aws-shell. The choices are vi and Emacs.
Multi-Column
When aws-shell shows the list of commands, you can control if it is a single or multi-column list.
It is a personal preference, but using multi-column with commands which have a long list of sub-commands can make it easier to find what you are looking for.
Help
By default, aws-shell displays help on the command and sub-command as you type them.
If you find this distracting, you can disable the help display by pressing F5.
Working in the aws-shell
Aside from several features specific to the aws-shell, executing commands is like working in the AWS CLI, with the benefit of not having to remember precisely the name of the commands, sub-commands, options, etc. This can be a time saver.
There are several other useful commands offered by aws-shell, called dot commands as they are prefixed by a . before the command.
The .profile command allows changing the profile, meaning the access and secret keys used to execute the CLI commands. We saw this command earlier in this article.
It is possible to change the working directory using the .cd command.
aws> .cd
invalid syntax, must be: .cd dirname
aws> .cd ~
aws> !pwd
/Users/roberthare
aws> .cd /tmp
aws> !pwd
/private/tmp
aws>
It isn’t possible to see your current directory using a dot command, but this leads to the next feature: executing shell commands directly within aws-shell by prefixing the command with a !. Not only can we execute arbitrary shell commands within aws-shell, but we can use pipes (|) to send the output of an aws-shell command to a shell command.
aws> ec2 describe-subnets --output table | grep CidrBlock
|| CidrBlock | 172.31.48.0/20 ||
|| CidrBlock | 172.31.80.0/20 ||
aws-shell keeps a history of every executed command in the file ~/.aws/shell/history. There is no history command per se in the aws-shell, but you can take advantage of another feature to see and interact with the list.
The .edit command retrieves your shell history in an editor, allowing you to both view your command history, and create a shell script from the aws-shell commands.
The last dot commands are .exit and .quit, which have the same effect as pressing the F10 key, that of ending your aws-shell session.
Conclusion
If you spend any amount of time interacting with the AWS CLI, then you know how tedious it is always having to type those three extra letters. It doesn’t sound like a big deal, but if you are like me, occasionally you forget the “aws” part of that long command line.
The aws-shell makes it simpler to interact with the AWS CLI, especially with the dynamic display of sub-commands options.
References
The AWS Shell
The AWS CLI
The AWS CLI Command Interface
About the Author
Chris is a highly-skilled Information Technology AWS Cloud, Training and Security Professional bringing cloud, security, training and process engineering leadership to simplify and deliver high-quality products. He is the co-author of more than seven books and author of more than 70 articles and book chapters in technical, management and information security publications. His extensive technology, information security, and training experience makes him a key resource who can help companies through technical challenges.
Copyright
This article is Copyright © 2020, Chris Hare. | https://labrlearning.medium.com/the-aws-shell-1792361a0c89 | ['Chris Hare'] | 2020-01-14 05:16:49.005000+00:00 | ['Aws Cli', 'Technology', 'Python', 'System Administration', 'AWS']
1,758 | In Psychedelic Therapy, Don’t Forget the ‘Therapy’ | In Psychedelic Therapy, Don’t Forget the ‘Therapy’
A wide gulf lies between what we ‘see’ on psychedelics and what we do with what we saw
Image: nutcat/Getty Images
Twenty minutes late, Matt (whose name was changed for privacy) stumbles into my therapy office, dives onto my green couch, stretches out like a Freudian pro, and buries his face in his hands. Matt, a renowned New York wellness entrepreneur, had found me through the intersection of Burning Man and the plant medicine communities — where most of my clients come from. This niche has found me organically and inevitably; I’ve had a private practice as a marriage and family therapist for 13 years, and for 10 of those, I’ve been on my own parallel personal journey of exploration both as a burner and a psychonaut (sailor of the mind).
“Stupid. Stupid. Just stupid!” Matt groans.
“What?” I ask.
“I came to you to talk about psychedelics, but now you know my dirty secret. I can’t believe I came in today. I haven’t even slept. Fucking cocaine.”
Matt had come to me to talk about his experience on ibogaine, a natural psychoactive medicine derived from the West African shrub iboga and increasingly used by patients trying to kick drug addictions. It clearly hadn’t worked for him, and it occurs to me that he might still be high on coke. And if he’s high, it’s unethical to treat him. But at this moment, other issues take precedence: the shame spiral he’s in, the fact that we’re already 25 minutes into a 50-minute session, and the reality that I couldn’t move his huge body off my sofa if I tried.
Above all, a more universal question is nagging me: As more people turn to psychedelic-aided therapy, why are so many forgoing the necessary follow-up to process these experiences? That follow-up, known as psychedelic integration, is how we metabolize the supranormal phenomena we’ve experienced while tripping and fold it back into “normal” life. Integration takes real time and is likely to be bumpy. Old traumas can suddenly surface hours or days after a trip; we can be suddenly flooded by unmet needs or pain we thought we’d left in the past.
This is something I know firsthand. Ayahuasca helped me become a better mother, a better ex-wife, and a better therapist. The “medicine” reflected back to me the power of compassion and how to cultivate it. Iboga, a medicine that originated in Gabon, showed me my own darkness and reconnected me to my creativity. Psilocybin has added depth and perspective to my meditation practice. And I’m still processing my one and only LSD trip that I had a little over a year ago. I know that without proper resources in place to guide them through the aftermath, people often struggle to reconcile life-changing “medicine experiences” with their actual lives. That is, a wide gulf lies between what we “see” on psychedelics and what we do with what we saw. Mind the gap, as we say in London, where I’m from.
Psychedelics are proliferating both in and out of therapeutic settings; they have transcended hippies’ “tuning in and dropping out” and become increasingly accepted as mental health treatments. As the Multidisciplinary Association for Psychedelic Studies (MAPS) website puts it, “With both MDMA and psilocybin on the precipice of approvals as mainstream medicines, and several leading universities opening dedicated psychedelic research facilities, the story of the last 10 years has been one of profound breakthrough.” Psychedelic wellness solutions are featured in Michael Pollan’s bestseller How to Change Your Mind, Gwyneth Paltrow’s The Goop Lab, and Anderson Cooper’s segment on the Johns Hopkins study of psilocybin for mental health.
The best thing about psychedelics may also be the worst: Psychedelics tear down your defenses fast.
A study reported by Scientific American shows a 223% rise in LSD use among 35- to 39-year-olds between 2015 and 2018. That startling number echoes what I’m seeing in my own psychotherapy practice in Los Angeles: a striking influx of people who have taken psychedelics for therapeutic purposes. Too many of them, I’ve found, are seeking help weeks or months or years after life-changing experiences on ayahuasca, psilocybin, or iboga. And I’m increasingly concerned. So many patients are struggling.
A therapeutic trip can trigger a dramatic shift — and that’s the point. People break addictions, heal traumas, and mend relationships immediately after psychedelic experiences. But over time, many become jaded, disappointed, confused, and destabilized. The best thing about psychedelics may also be the worst: Psychedelics tear down your defenses fast. People often finish a trip and leap into life-changing decisions — divorcing their spouses, leaving their jobs, or conceiving children. Many people don’t regret those decisions, but inevitably, after time has passed, some do.
Which brings us back to my patient Matt, the wellness entrepreneur who can’t stop snorting cocaine. “What did iboga show you about yourself?” I ask him. Considered the “grandfather of psychedelics” (ayahuasca is the “grandmother”), iboga is a traditional West African root bark used in low doses to retain alertness while hunting and in high doses to cause near-death experiences for the purpose of spiritual awakening. Administered legally in sobriety clinics throughout Mexico, iboga is also one of the most powerful addiction interrupters we know of. It successfully got Matt off coke a couple of times, but he avoided the after-care protocols, including therapy and group support, and his abstinence didn’t stick. He had planned to take iboga again, he tells me, except two weeks ago, he discovered he’s a heart attack candidate, so it’s no longer safe.
My own 48-hour iboga experience was a mixture of torture and revelation. I relived the shadows of my childhood, hatred, and rage over my parents’ divorce and my father’s addictions — all ugly emotions my upbringing had trained me to hide. Iboga revealed a split in me: the part of me that was on board with my life and the part of me that wanted out. It was the most confronting psychedelic experience I’ve ever had — wildly disorienting, exhausting, nauseating, and full of all the traumas I hadn’t been able to look at square in the face despite years of therapy. When it finally ended, I was overwhelmed with gratitude, as if I’d been given a permission slip to break out of the old family narratives and rewrite my future.
We can open our hearts, be in community, and rip the lid off our failing civilization. But then we return home, and the real work begins: We must integrate.
Experience has taught me what to do after a trip, and privilege has allowed me to do those things. I have the resources to receive therapy, get a massage, or do a meditation retreat. I have the know-how and education to keep researching. I’ve gotten to know the cutting-edge thinkers of the psychedelic community through conferences, networking, and reading (not that any of that guarantees a lack of turbulence).
Even among those of us who have the material resources and time to process our experiences, many of us don’t have the communal models of indigenous communities. In plant medicine circles in the U.S., in contrived ceremonial settings, we may get a glimpse of what life could look like. We can open our hearts, be in community, and rip the lid off our failing civilization. But then we return home, and the real work begins: We must integrate.
Matt turns to me, his face locked in unexpressed emotion. I know that look: the dam before it bursts. Then he starts sobbing so hard, he can’t speak and can’t stop. He covers his heart with his hand, his whole body shaking. “It showed me this,” he says. And for a minute, I get to see it: the excruciating vulnerability that is so often at the center of personal chaos. Matt has a vulnerable heart, figuratively and literally under threat of attack. Iboga has helped expose this reality, but what should he do about it?
I outline a treatment plan for Matt that doesn’t involve iboga. To quit cocaine and find more healthy coping mechanisms, he would most likely need to enter a sobriety program or curate a team of mental health professionals to help him stay clean. Most challengingly, Matt would need to commit to sobriety. Even then, there would be no guarantee of success.
Psychedelics are immensely helpful in getting under our defenses and providing a simplicity of vision that adults can’t normally access. In this sense, they are more powerful than the greatest therapists. But when they wear off, there’s not only what’s been exposed but the rest of life to deal with also: the to-do lists and taxes and physical needs. As Buddhist practitioner and mindfulness expert Jack Kornfield put it in the title of his book: “After the ecstasy, the laundry.” Or, in Matt’s case, rehab.
For anyone new to psychedelic healing, my advice is to plan for integration before even starting the journey. I’d recommend thinking carefully through three categories: set, setting, and support.
Set: Mindset is all-important. Having a clear and positive intention can be very helpful. It makes you more of an empowered participant in your journey, inviting in more purpose as well as a sense of a home base to return to — which could be a mantra of some kind or a question that you lead with. Music also makes a huge difference to a journey. If you are selecting the music yourself, choose tunes that speak to your heart and relax your nervous system.
Setting: Who you have your experience with and where is important. Pick a location that feels safe and quiet and that, ideally, you have a sense of connection to. If you’re sensitive to other people’s energy, do not do psychedelics in large groups. Being in or near nature can feel very supportive even if it just means being close to a plant or a tree. If you’re journeying away from home, make sure you have a solid plan of how to get there and back. (Needless to say, it should not involve you driving.)
Support: Research the shaman or therapist who will act as your guide. Make sure there’s someone else you’re checking in with on either side of your experience — a good friend or a wisdom figure in your life. It’s always valuable to have another perspective and not make your shaman or therapist your only source of feedback and/or authority. Make sure you have time carved out on the back end of a journey to go slow and talk with a counselor if you can. Journaling, meditation, and spending time in nature all help with processing psychedelic downloads as well as allowing new neural pathways to be reinforced.
A trip is like any voyage: It requires preparation, time for proper digestion.
For further information and support, InnerSpace Integration hosts a network of resources along with psychedelic integration circles. Tam Integration offers a collection of trip sitter manuals and guides. For those who want to go deeper, Psychonautdocs.com has curated a wealth of essays and studies on a range of psychedelics.
Most importantly, hold off on big decisions after life-altering experiences. A trip is like any voyage: It requires preparation, time for proper digestion, and a willingness to not only surrender to the experience but to allow the necessary time to change your life. | https://elemental.medium.com/in-psychedelic-therapy-dont-forget-the-therapy-1d40f886ba45 | ['Jane Garnett'] | 2020-12-17 23:29:47.999000+00:00 | ['Psychedelics', 'Mental Health', 'Brain', 'Therapy', 'Life']
1,759 | What Does Coronavirus Do to the Body? | This Is How Your Immune System Reacts to the Coronavirus
And what it means for treatment
Photo: Bertrand Blay/iStock/Getty Images Plus
People infected with the novel coronavirus can have markedly different experiences. Some report having nothing more than symptoms of a mild cold; others are hospitalized and even die as their lungs become inflamed and fill up with fluid. How can the same virus result in such different outcomes?
Scientists are still perplexed by the novel coronavirus. But it’s becoming increasingly clear that the immune system plays a critical role in whether you recover from the virus or you die from it. In fact, most coronavirus-related deaths are due to the immune system going haywire in its response, not damage caused by the virus itself. So what exactly is happening in your body when you get the virus, and who is at risk for a more severe infection?
In fact, most coronavirus-related deaths are due to the immune system going haywire in its response, not damage caused by the virus itself.
When you first become infected, your body launches its standard innate immune defense like it would for any virus. This involves the release of proteins called interferons that interfere with the virus’s ability to replicate inside the body’s cells. Interferons also recruit other immune cells to come and attack the virus in order to stop it from spreading. Ideally, this initial response enables the body to gain control over the infection quickly, although the virus has its own defenses to blunt or escape the interferon effect.
The innate immune response is behind many of the symptoms you experience when you’re sick. These symptoms typically serve two purposes: One is to alert the body that an attack has occurred — this is thought to be one of the roles of fever, for example. The other purpose is to try and get rid of the virus, such as expelling the microscopic particles through cough or diarrhea.
“What typically happens is that there is a period where the virus establishes itself, and the body starts to respond to it, and that’s what we refer to as mild symptoms,” says Mandeep Mehra, MD, a professor of medicine at Harvard Medical School and chair in advanced cardiovascular medicine at Brigham and Women’s Hospital. “A fever occurs. If the virus establishes itself in the respiratory tract, you develop a cough. If the virus establishes itself in the gastrointestinal mucosal tract, you’ll develop diarrhea.”
These very different symptoms emerge depending on where in the body the virus takes hold. The novel coronavirus gains entry into a cell by latching onto a specific protein called the ACE2 receptor that sits on the cell’s surface. These receptors are most abundant in the lungs, which is why Covid-19 is considered a respiratory illness. However, the second-highest concentration of ACE2 receptors is in the intestines, which could explain why many people with the coronavirus experience diarrhea.
“Because the virus is acquired through droplets, if it comes into your mouth and enters your oropharynx, it has two places where it can go from there. It can transition into the lung from the oropharynx when you breathe in, or if you have a swallow reflex, it’ll go down to your stomach,” Mehra says. “That’s how it can affect both sites.”
The goal of the innate immune defense is to contain the virus and prevent it from replicating too widely so that the second wave of the immune system — the adaptive, or virus-specific response — has enough time to kick in before things get out of hand. The adaptive immune response consists of virus-specific antibodies and T cells that the body develops that can recognize and more quickly destroy the virus. These antibodies are also what provide immunity and protect people from becoming reinfected with the virus after they’ve already had it. | https://elemental.medium.com/this-is-how-your-immune-system-reacts-to-coronavirus-cbf5271e530e | ['Dana G Smith'] | 2020-11-13 19:44:30.797000+00:00 | ['Body', 'Covid 19', 'Coronavirus', 'Immune System', 'Health']
1,760 | OpenGenius, every business and individual has the power to use innovation to progress | OpenGenius has raised £1.1M in total. We talk with Chris Griffiths, its CEO.
PetaCrunch: How would you describe OpenGenius in a single tweet?
Chris Griffiths: OpenGenius are global experts in creativity, productivity and innovation strategy — we believe every business and individual has the power to use innovation to progress. We’re trailblazing the way for companies to ‘work creative’ with our pioneering app, Ayoa.
PC: How did it all start and why?
CG: Before I started OpenGenius, I was CEO at one of Europe’s fastest-growing ed-tech companies; I knew I wanted it to grow into something more and weave innovation into the fabric of the company, but the board only wanted to focus on ed-tech, so I resigned.
Being true to myself, I knew I believed in the power of creativity and innovation — I wanted to start a company that could bring those things to other people through software and training. Alas, 6 months later I founded OpenGenius. Today, our mission of innovation is established company-wide and over 1.5 million people have used our software and services to drive their innovation-based progress.
PC: What have you achieved so far?
CG: We have achieved a lot — teams and individuals from companies such as Disney, Nasa, Apple, Coca-Cola, Nike and McDonalds have all used our software solutions. The Ayoa app — which was launched in June this year — is something we’re particularly proud of. It’s a tool that really overhauls modern society’s broken approach to productivity; it stops people app-switching, and puts an emphasis on using creativity to uncover the right ideas so you can then do the right tasks.
We also have a global network of innovation professionals who are helping to spread the innovation message around the world. Our OpenGenius team is continuing to grow; we’re based in beautiful Penarth in the tech-hub of Tec Marina which I founded with my wife Gaile to foster a creativity-focussed workspace. The whole company is excited to see what we can achieve going forward.
PC: How will you use your recent funding round?
CG: We are using our funding to expand our international customer base — we’re doing that by being more aggressive in our marketing and sales tactics. We’re growing the teams in both these areas, as well as continuing to develop Ayoa which has a very exciting roadmap laid out for the next six months.
PC: What do you plan to achieve in the next 2–3 years?
CG: We’re ambitious and we firmly believe we have created a disruptive software product with Ayoa — over the next 2–3 years, we expect to see a huge increase in our user base and the awareness of our brand. People are excited by what we are doing here. We’re proud to be the first Welsh company ever accepted onto the London Stock Exchange ELITE Accelerator Programme, and we have our sights set on flotation in the future. We’re not following market trends, but instead, we’re driving change with the Ayoa tool which offers something no other software does — all I can say is, watch this space. | https://medium.com/petacrunch/opengenius-every-business-and-individual-has-the-power-to-use-innovation-to-progress-dedea6518353 | ['Kevin Hart'] | 2019-08-24 21:21:01.002000+00:00 | ['Startup', 'Innovation Management', 'Innovation', 'Progress', 'Creativity']
1,761 | SigNet (Detecting Signature Similarity Using Machine Learning/Deep Learning): Is This the End of Human Forensic Analysis? | SigNet (Detecting Signature Similarity Using Machine Learning/Deep Learning): Is This the End of Human Forensic Analysis?
My grandfather was an expert in handwriting analysis. He spent all his life analyzing documents for the CBI (Central Bureau of Investigation) and other organizations. His unique way of analyzing documents using a magnifying glass and different tools required huge amounts of time and patience to analyze a single document. This was back when computers were not fast enough. I remember vividly that he photocopied the same document multiple times and arranged the copies on the table to gain a closer look at the handwriting style.
Handwriting analysis involves a comprehensive comparative analysis between a questioned document and the known handwriting of a suspected writer. Specific habits, characteristics, and individualities of both the questioned document and the known specimen are examined for similarities and differences.
As this problem consists of detecting and analyzing patterns, Machine Learning is a great fit to solve this problem.
A handwritten document captures a lot of detail. (https://unsplash.com/photos/AbQNy5Vvpjc)
Why and How?
Why: As described above, my grandfather’s manual method demanded huge amounts of time and patience for every single document. While I agree that we cannot replace that job with an A.I. at 100% accuracy, we can certainly build a system capable of aiding human beings.
How: To build our signature similarity network, we will utilize the wonders of Deep Learning. We will go through three approaches to measure the similarity between handwritten signatures. For our initial data, we will use the Handwritten Signatures dataset from Kaggle.
Requirements
For this project we will require:
Python 3.8: The Programming Language
TensorFlow 2: The Deep Learning Library
Numpy: Linear Algebra
Matplotlib: Plotting images
Scikit-Learn: General Machine Learning Library
The Dataset
The dataset contains real and forged signatures of 30 people. Each person has 5 genuine and 5 forged signatures.
The Directory Structure of our data.
For loading the data, I have created a simple load_data() function that iterates through the dataset folders and extracts the real and forged signatures with labels of 1 and 0 respectively.
In addition to this, I have also created a dictionary of tuples consisting of images and labels. (To be used later in the project).
import os
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def load_data(DATA_DIR=DATA_DIR, test_size=0.2, verbose=True, load_grayscale=True):
    """
    Loads the data and splits it into train/test/validation sets.
    Arguments:
        DATA_DIR: str (the dataset root, defined earlier)
        test_size: float
    Returns:
        (features, labels, features_forged, features_real, features_dict,
         x_train, x_test, y_train, y_test, x_val, y_val)
    """
    features = []
    features_forged = []
    features_real = []
    features_dict = {}
    labels = []  # forged: 0 and real: 1
    mode = "rgb"
    if load_grayscale:
        mode = "grayscale"
    for folder in os.listdir(DATA_DIR):
        if folder == '.DS_Store' or folder == '.ipynb_checkpoints':
            continue
        print("Searching folder {}".format(folder))
        # forged images
        for sub in os.listdir(DATA_DIR + "/" + folder + "/forge"):
            f = DATA_DIR + "/" + folder + "/forge/" + sub
            img = load_img(f, color_mode=mode, target_size=(150, 150))
            features.append(img_to_array(img))
            features_dict[sub] = (img, 0)
            features_forged.append(img)
            if verbose:
                print("Adding {} with label 0".format(f))
            labels.append(0)  # forged
        # real images
        for sub in os.listdir(DATA_DIR + "/" + folder + "/real"):
            f = DATA_DIR + "/" + folder + "/real/" + sub
            img = load_img(f, color_mode=mode, target_size=(150, 150))
            features.append(img_to_array(img))
            features_dict[sub] = (img, 1)
            features_real.append(img)
            if verbose:
                print("Adding {} with label 1".format(f))
            labels.append(1)  # real
    features = np.array(features)
    labels = np.array(labels)
    x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    print("Generated data.")
    return (features, labels, features_forged, features_real, features_dict,
            x_train, x_test, y_train, y_test, x_val, y_val)

def convert_label_to_text(label=0):
    """
    Convert a label into text.
    Arguments:
        label: int
    Returns:
        str: "Forged" or "Real"
    """
    return "Forged" if label == 0 else "Real"

features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val = load_data(verbose=False, load_grayscale=False)
Visualization of the data
The images are loaded with a target_size of (150,150,3).
A snapshot of the data loaded followed by the label. (1 represents real and 0 represents forged)
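If you want to reproduce such a snapshot yourself, the short sketch below plots the first few samples with Matplotlib. It assumes load_data() has already been run, so features, labels, and convert_label_to_text() are available.

import matplotlib.pyplot as plt

# plot the first four signatures with their labels (1 = real, 0 = forged)
fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, img, label in zip(axes, features[:4], labels[:4]):
    ax.imshow(img / 255.)  # rescale pixel values to [0, 1] for display
    ax.set_title(convert_label_to_text(label))
    ax.axis("off")
plt.show()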
Approach #1: Similarity in images (signatures) using MSE and SSIM.
For this approach, we will compute the similarity between images using MSE (Mean Squared Error) or SSIM (Structural Similarity). MSE is simply the mean of the squared pixel-wise differences between the two images, while SSIM compares their luminance, contrast, and structure. Both are straightforward to compute, and fortunately scikit-image provides a ready-made implementation of SSIM.
from skimage.metrics import structural_similarity

def mse(A, B):
    """
    Computes the Mean Squared Error between two images (A and B).
    Arguments:
        A: numpy array
        B: numpy array
    Returns:
        err: float
    """
    # sum of squared pixel-wise differences: sum over all pixels of (a - b)^2
    err = np.sum((A - B) ** 2)
    # mean of the sum; total elements for an (r, c) image: r * c
    err /= float(A.shape[0] * B.shape[1])
    return err

def ssim(A, B):
    """
    Computes SSIM between two images.
    Arguments:
        A: numpy array
        B: numpy array
    Returns:
        score: float
    """
    # expects 2-D (grayscale) arrays; for RGB images pass multichannel=True
    # (older scikit-image) or channel_axis=-1 (newer versions)
    return structural_similarity(A, B)
Now let us take two images from the same person: one of them real, the other forged.
First Image
Second Image
Results for MSE and SSIM
As you can see, the MSE error has no fixed bound, whereas SSIM is bounded between -1 and 1. A lower MSE indicates more similar images, whereas a higher SSIM indicates more similar images.
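To make this concrete, the sketch below compares two signatures with both metrics. The file paths are hypothetical placeholders (substitute any real/forged pair from your copy of the dataset), and the images are loaded in grayscale so that ssim() receives 2-D arrays.

# hypothetical file paths -- replace with actual files from the dataset
path_a = DATA_DIR + "/001/real/00100001.png"
path_b = DATA_DIR + "/001/forge/02100001.png"

img_a = img_to_array(load_img(path_a, color_mode="grayscale", target_size=(150, 150))).squeeze()
img_b = img_to_array(load_img(path_b, color_mode="grayscale", target_size=(150, 150))).squeeze()

print("MSE: ", mse(img_a, img_b))   # lower value => more similar
print("SSIM:", ssim(img_a, img_b))  # closer to 1 => more similar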
Approach #2: Building a classifier using CNNs that can detect forged or real signatures.
With this approach, we will try to build a classifier (using CNNs) that can label a signature as forged or real.
As CNNs are known to detect intricate features in images, we will experiment with this kind of classifier.
We are bound to encounter overfitting, as we do not have enough data.
We will probably use image augmentation to generate more training data.
Our Model Architecture
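The architecture itself appeared only as an image in the original post, so the sketch below is a plausible reconstruction of such a small CNN classifier; the exact layer sizes are my assumptions rather than the author's exact configuration.

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),  # dropout to fight the overfitting discussed below
    tf.keras.layers.Dense(1, activation='sigmoid')  # output: 1 = real, 0 = forged
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])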
When training our model, we do indeed encounter overfitting, and even after applying techniques to overcome the problem, the model did not improve.
Model Training
Our Model’s loss
Approach #2.1: Transfer Learning using Inception
To improve our model we will use transfer learning and fine-tune the model for this particular problem.
The InceptionV3 Model
For this approach, we will load pre-trained weights and add a classification head at the top to cater to this problem.
# loading Inception with pre-trained weights
model2 = tf.keras.applications.InceptionV3(include_top=False, input_shape=(150, 150, 3))

# freezing layers
for layer in model2.layers:
    layer.trainable = False

# getting the mixed7 layer as the feature-extraction output
l = model2.get_layer("mixed7")

x = tf.keras.layers.Flatten()(l.output)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dropout(.5)(x)
x = tf.keras.layers.Dense(1, activation='sigmoid')(x)

net = tf.keras.Model(model2.input, x)
net.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['acc'])

h2 = net.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
Our Model Training
Model’s loss
Model’s accuracy
These two approaches show that if we use transfer learning, we get much better results than using a plain CNN model.
Keep in mind that these approaches do not learn a similarity function; they focus on classifying whether a signature is forged or real.
There are still many ways we can improve our model; one is by augmenting the data, as sketched below.
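As a minimal augmentation sketch, Keras' ImageDataGenerator can produce distorted copies of the training signatures on the fly; the transform ranges below are illustrative assumptions. Note that signatures should only be distorted gently: flips or heavy rotations would change the very strokes we are trying to verify.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# small, label-preserving distortions only
datagen = ImageDataGenerator(rotation_range=5,
                             width_shift_range=0.05,
                             height_shift_range=0.05,
                             zoom_range=0.1)

# train the fine-tuned model on augmented batches instead of the raw arrays
net.fit(datagen.flow(x_train, y_train, batch_size=32),
        validation_data=(x_val, y_val), epochs=5)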
Approach #3: Siamese networks for image similarity
With our third approach, we will try to learn the similarity function itself. We will use something called Siamese networks, which suit the nature of our data, i.e., few training examples.
Siamese means ‘twins’, and the biggest difference from normal neural networks is that Siamese networks try to learn a similarity function rather than a classification function.
We first create a shared feature vector for our images. We pass two images (a pair) through the same encoder, compare the two encodings with a distance metric (an L1 distance), and in the end squash the output between 0 and 1 (sigmoid) to get the final similarity score.
Siamese network (Image from [1])
Our Feature Vector Model
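The encoder was also shown only as an image, so here is a plausible stand-in for the feature_vector model that the code below calls; the layer sizes are assumptions, and any encoder that maps a (150, 150, 3) image to a flat embedding will slot in. Crucially, a single instance is created, so applying it to both inputs shares weights, which is the defining property of a siamese network.

# shared encoder ("twin"): maps a (150, 150, 3) image to a flat embedding
feature_vector = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu')
])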
# creating the siamese network
im_a = tf.keras.layers.Input(shape=(150, 150, 3))
im_b = tf.keras.layers.Input(shape=(150, 150, 3))

# the same encoder instance is applied to both inputs (shared weights)
encoded_a = feature_vector(im_a)
encoded_b = feature_vector(im_b)

combined = tf.keras.layers.concatenate([encoded_a, encoded_b])
combined = tf.keras.layers.BatchNormalization()(combined)
combined = tf.keras.layers.Dense(4, activation='linear')(combined)
combined = tf.keras.layers.BatchNormalization()(combined)
combined = tf.keras.layers.Activation('relu')(combined)
combined = tf.keras.layers.Dense(1, activation='sigmoid')(combined)

sm = tf.keras.Model(inputs=[im_a, im_b], outputs=[combined])
sm.summary()

# compile step (not shown in the original post); MAE matches the L1-distance metric reported below
sm.compile(optimizer='adam', loss='binary_crossentropy', metrics=['mae'])
Our complete siamese network
Dataset Generation
To generate the required dataset, we will try two approaches. First, we will generate data on the basis of labels: if two images have the same label (1 or 0), then they are similar. We will generate data in pairs in the form (im_a, im_b, label). Second, we will generate data on the basis of a person's number. According to the dataset, 02104021.png represents a signature produced for person 21 (i.e., real).
Data Generation Approach #1:
Here we are assuming similarity on the basis of labels: if two images have the same label (1 or 0), then they are similar.
def generate_data_first_approach(features, labels, test_size=0.25):
    """
    Generate data in pairs according to labels.
    Arguments:
        features: numpy array
        labels: numpy array
    """
    im_a = []  # images a
    im_b = []  # images b
    pair_labels = []
    for i in range(0, len(features) - 1):
        j = i + 1
        if labels[i] == labels[j]:
            im_a.append(features[i])
            im_b.append(features[j])
            pair_labels.append(1)  # similar
        else:
            im_a.append(features[i])
            im_b.append(features[j])
            pair_labels.append(0)  # not similar
    pairs = np.stack([im_a, im_b], axis=1)
    pair_labels = np.array(pair_labels)
    x_train, x_test, y_train, y_test = train_test_split(pairs, pair_labels, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    return x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels

x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels = generate_data_first_approach(features, labels)

# show data
plt.imshow(pairs[:, 0][0] / 255.)
plt.show()
plt.imshow(pairs[:, 1][0] / 255.)
plt.show()
print("Label: ", pair_labels[0])
Preview of our dataset
Training the Network with Dataset Generation #1
Now we will train the network. Due to computational limitations, we only train the model on a single epoch.
# x_train[:, 0] selects the first image of every pair, x_train[:, 1] the second
sm.fit([x_train[:, 0], x_train[:, 1]], y_train,
       validation_data=([x_val[:, 0], x_val[:, 1]], y_val), epochs=1)
Siamese Network’s result
The metric computes the L1 distance (MAE) between y_hat and y.
Due to computational limitations, we only train it for one epoch.
This represents a very simple siamese network capable of learning the similarity function.
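To check how the learned similarity function generalizes, the held-out pairs can be scored the same way; this sketch assumes the model was compiled with an MAE metric, matching the compile step shown earlier.

# evaluate on the held-out pairs (same [image_a, image_b] input convention)
results = sm.evaluate([x_test[:, 0], x_test[:, 1]], y_test)
print("Test results (loss, metric):", results)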
Data Generation Approach #2
In this approach, we try to set up a dataset where we cross-pair each of a person's signatures with that same person's other signatures.
The inputs and the outputs must be the same size.
def generate_data(person_number="001"):
    x = list(features_dict.keys())
    im_r = []
    im_f = []
    labels = []  # 1 if the signature is real, else 0
    for i in x:
        if i.startswith(person_number):
            if i.endswith("{}.png".format(person_number)):
                im_r.append(i)
                labels.append(1)
            else:
                im_f.append(i)
                labels.append(0)
    return im_r, im_f, labels

def generate_dataset_approach_two(size=100, test_size=0.25):
    """
    Generate data using the second approach.
    Remember: the inputs and the outputs must be the same size!
    Arguments:
        size: the target size (currently unused)
        test_size: fraction held out for testing
    Returns:
        x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls
    """
    im_r = []
    im_f = []
    ls = []
    ids = ["001", "002", "003", "004", "005", "006", "007", "008", "009", "010",
           "011", "012", "013", "014", "015", "016", "017", "018", "019", "020",
           "021", "022", "023", "024", "025", "026", "027", "028", "029", "030"]
    for person in ids:
        imr, imf, labels = generate_data(person)
        # similar batch: every real signature paired with every real signature
        for a in imr:
            for b in imr:
                im_r.append(img_to_array(features_dict[a][0]))
                im_f.append(img_to_array(features_dict[b][0]))
                ls.append(1)  # they are similar
        # not-similar batch: every forged signature paired with every forged signature
        for a in imf:
            for b in imf:
                im_r.append(img_to_array(features_dict[a][0]))
                im_f.append(img_to_array(features_dict[b][0]))
                ls.append(0)  # they are not similar
    print(len(im_r), len(im_f))
    pairs = np.stack([im_r, im_f], axis=1)
    ls = np.array(ls)
    x_train, x_test, y_train, y_test = train_test_split(pairs, ls, test_size=test_size, random_state=42)
    x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42)
    return x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls

x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls = generate_dataset_approach_two()

# show data
plt.imshow(x_train[:, 0][0] / 255.)
plt.show()
plt.imshow(x_train[:, 0][1] / 255.)
print("Label: ", y_train[0])
Represents a forged signature. (0)
Training the Network with Dataset Generation #2
Training the network (Due to computational limitations, we train the model for a single epoch)
The biggest difference between dataset generation #1 and #2 is the way inputs are arranged. In dataset #1 we pair consecutive signatures according to their labels, but in #2 we pair signatures from the same person throughout.
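Once trained, the network can score any pair of signatures. The sketch below scores the first held-out pair; an output close to 1 means the network considers the two signatures similar.

# score one pair of signatures from the test set
sig_a = x_test[0, 0][np.newaxis, ...]  # add a batch dimension
sig_b = x_test[0, 1][np.newaxis, ...]
score = sm.predict([sig_a, sig_b])[0][0]
print("Similarity score: {:.3f}".format(score))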
Conclusion
To conclude, we present a plausible method to detect forged signatures using Siamese networks, and most importantly we show how we can train a Siamese network with only a few training examples. We also see how easily we can achieve great results using transfer learning.
References
[1] https://arxiv.org/pdf/1709.08761.pdf
Github: https://github.com/aaditkapoor/SigNet | https://medium.com/swlh/signet-detecting-signature-similarity-using-machine-learning-deep-learning-is-this-the-end-of-1a6bdc76b04b | ['Aadit Kapoor'] | 2020-08-17 17:58:22.362000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Python', 'TensorFlow', 'Data Science']
1,762 | Reasons why developers make apps for Android rather than iOS | Android vs iOS
What is the most popular operating system for mobile platforms? From small startups to big enterprises in information technology, this is the paramount question and the litmus test that decides which mobile platform should be developed first.
Why not all at the same time, you may ask? If you run a big, well-equipped development team and possess all the resources and tools for mobile development, then you can kick off all the platforms at once. However, it is very important to have a detailed risk analysis and a robust budget.
Why Android is Worthwhile
We are taking a look at the technical and commercial advantages of Android over iOS: the main reasons why an Android app is a great choice to come first in the development process.
Importance of Android
The Market share
This is no doubt a strong pointer; even if you ignore all the other reasons mentioned later, numbers don't lie! Statistics from IDC show clearly that Android leads the number of smartphones shipped worldwide with an 86.1% market share.
“In 2018, around 1.56 billion smartphones were sold worldwide. In the first quarter of 2019, around 88 percent of all smartphones sold to end users were phones with the Android operating system.” (source: www.statista.com/)
Portability
Java, the primary programming language for native Android apps, provides a special advantage: the code can be ported to other mobile operating systems with relative ease. Besides, Android apps have a reach that extends to Chrome OS and Windows devices.
Android Studio
Android Studio has the beauty and strength of a very potent IDE based on IntelliJ IDEA, designed and customized for Android app development. Speed and efficiency are key with Android Studio: it allows the setup of a new Android project for different types of Android apps in a few seconds. This, of course, was a relief from the old era when Android app development was done with Eclipse and the Android Developer Tools plugin. Android Studio is powered with the following features:
A Gradle-based build system
Real-time app layout rendering with Live-layout
The option of multiple screen configurations and layout preview while editing
Build variants and multiple APK file generation
Lint tools for catching version compatibility, usability, and performance issues, etc.
Allows the development of Android Wear, TV, and Auto apps
Compatible integration with Google Cloud Platform, App Engine, and Google Cloud Messaging
Coding in Java
Java has proven itself to be a programming language of choice for various devices and operating systems, including Android. Java code also holds the keys to other operating systems, including Windows and Linux.
By contrast, Apple's coding languages Objective-C and Swift are really only used for developing Apple products on iOS and OS X, and cannot be easily ported to other operating systems, with the exception of Swift, which is open-sourced with Linux tools.
Quick to App Store
There is a delay of weeks before apps deployed to Apple's App Store become available for download by users, but on the Google Play Store it takes just a few hours.
The Google Play Store allows easy updates; you can push updates multiple times a day depending on the urgency. On the Apple App Store, by contrast, an update follows the same lengthy protocol as a fresh deployment, even if it is just fixing a bug.
Play Store Monitor
If you plan your app release well, you can control precisely the percentage of users that get updates; this allows you to track feedback and crash reports. You can then increase the percentage of users receiving further updates.
This is possible because the Play Store allows an app to be released in both alpha and beta channels, which can be made available to an exclusive group of testers.
The advantage here is that initial access is limited to a subset of users, and the feedback received can be used to fine-tune the app before the final release. It also allows a staged, gradual update rollout.
Cost of Android phones
The iPhone is often seen as high-class and expensive, and fewer people use it compared to Android on the basis of cost. The notion that iPhone users can afford more once made Android apps the cheaper option. Although this may have been true in the past, in the present day Android apps have been surpassing iPhone apps in some categories, both in initial app sales and in-app purchases. This has been proven by increasing profit from in-app adverts, which are cheaper on Android, and from mobile app games.
Your Road to Android
As a product owner, if you are still confused about which platform to develop first, be it for your business automation or for the use of the general public, it is a smart move backed by economics to go for Android first, since it has a larger reach.
Are you a new business startup, established business owner, or a product manager? Do you want to create, tweak, or manage an app? iTwis is willing to see you through this process. Our well-skilled and experienced mobile development team can successfully facilitate your app production process from scratch and bring it to a significant product release. Contact us today for a consultation. | https://medium.com/itwis/reasons-why-developers-make-apps-for-android-rather-than-ios-c345c8b1c198 | ['Ayo Oladele'] | 2020-07-02 02:41:49.251000+00:00 | ['Mobile App Development', 'Java', 'Software Development', 'Kotlin', 'Android'] |
1,763 | How the U.S. got Today’s Uncle Sam | Alfred Leete’s 1914 poster Lord Kitchener Wants You (above), was a key promotional tool to encourage British men to volunteer for the army. It featured Great Britain’s Secretary for War Lord Kitchener, a man with a serious mustache, and a more serious stare.
According to The Conversation, hundreds of thousands of British men volunteered to fight in World War I after the poster was released. Many other factors led to the high levels of volunteerism amongst British men, such as social pressure, peer pressure, and an aggressive recruitment campaign. Still, the poster of Kitchener was popular and effective.
The Lord Kitchener poster, especially his pointing finger, influenced the 1916 Uncle Sam poster’s illustrator.
James Flagg — Library of Congress
James Montgomery Flagg is the illustrator of the iconic Uncle Sam, I Want YOU for U.S. Army poster, and there is a striking resemblance between the artist and his creation.
Look at his face (above); if you aged it until the hair turned white, added a white-starred hat with blue trim and a white goatee, you would see none other than the iconic Uncle Sam. Flagg based Uncle Sam’s face on his own.
According to Travis Andrews of the Washington Post, Flagg’s self-influenced depiction of Uncle Sam was so effective it was printed at least 4 million times in the final year of World War I. It became so popular it was used again to recruit troops during World War II.
Including the Uncle Sam poster, Flagg designed 46 propaganda posters for the U.S. during World War I. Flagg’s Uncle Sam, aside from the new face, was also more muscular and powerful than the previous depictions.
Flagg’s Uncle Sam fits how many Americans see America, strong, fervent, and patriotic.
Context
Many Americans wanted to enter World War I soon after it began in 1914. On May 7, 1915, the Lusitania’s sinking increased those desires, but President Woodrow Wilson stayed peaceful. President Wilson was reelected on a peace platform in 1916, but it was clear war loomed on the horizon.
Flagg's reimagination of Uncle Sam deepened feelings of patriotism during the key months leading up to the U.S. officially entering the war. It had an effect on American citizens similar to the one Leete's poster featuring Lord Kitchener had on the British.
President Woodrow Wilson appeared before a Joint Session of Congress on April 2, 1917. He said:
“It is a fearful thing to lead this great peaceful people into war…but the right is more precious than peace, and we shall fight for the things which we have always carried nearest our hearts, — for democracy, for the right of those who submit to authority to have a voice in their own governments…for a universal dominion of right by such a concert of free peoples as shall bring peace and safety to all nations and make the world itself at last free.”
Uncle Sam represents President Wilson’s words. He is strong enough to stand up and fight against tyranny to preserve democracy and freedom.
Final Thoughts
Today, Uncle Sam is not only used to promote patriotism and recruitment for the military. He is used throughout American culture. I recently saw Uncle Sam wearing a mask and gloves to promote safety during the coronavirus pandemic.
Now, when I look at Uncle Sam, I see Wilson, Lord Kitchener, and Flagg, as well as the distinguishable personification of the U.S. that I value as an American. | https://medium.com/frame-of-reference/how-the-u-s-got-todays-uncle-sam-b954d1bd4242 | ['Samuel Sullivan'] | 2020-11-17 18:02:11.062000+00:00 | ['Culture', 'Nonfiction', 'History', 'Marketing', 'Art'] |
1,764 | My top three Fermi Paradox solutions | It is not easy to dismiss the Fermi paradox. Either technological civilizations or even life are extremely unlikely or too short-lived, or some exotic theory is the case, of which there are many. For instance, aliens may have already conquered the galaxy and are among us, or they retreated into some digital simulated life rather than venturing into space, or perhaps we live in a computer simulation with Earth being the only simulated planet with life. Here, just for the record, I briefly go over the three I find most likely (at least today).
One important requirement for an explanation is that it should not rely on undue assumptions about other civilizations. “Other civilizations don’t wish to expand like we do” won’t do: some of them may not, but some may. It only takes one civilization with a similar mentality to ours to expand across the galaxy. If we are the first, and if we don’t destroy ourselves, most likely we will eventually expand.
So here are my top three explanations:
A technological great filter lies ahead of us. I find it likely that science and technology require a certain free spirit of innovation, exploration, and individuality. Across our history, the leverage of an individual to cause damage is increasing steadily. Back 10,000 years ago, a strong and mean individual could kill a few people and bring down a hut or two. Today, a bad actor can cause much more harm. What if there is a technology that will unavoidably be invented, which gives the ability to anyone to instantly and irreversibly destroy the civilization? For example, an exotic and easily tapped energy source, or downloadable code for grey goo. If such a technology inexorably lies ahead of us, which is plausible, it is difficult to imagine how we could prevent every single individual from deploying it. How about other civilizations, could a collectivist civilization akin to an ant colony avoid such doom? Brains are expensive; in a collectivist civilization that confers no evolutionary advantage to individual intelligence, “free-riders” will get rid of their brains, so it is conceivable that every technological civilization consists of competing individuals and in every single one of them one individual eventually and inexorably triggers the doomsday machine. One catch to this explanation: for “best results” the doomsday machine must be triggered before exponential space exploration commences.
Aliens are among us. The first civilization to develop space travel, if similar to us in mindset, will likely want to expand at least defensively across the galaxy and beyond. If nothing else, to prevent future aggressor civilizations from expanding. Or perhaps because it is aware of destructive abilities of even inferior civilizations (think: grey goo) and wants to monitor the galaxy. A defensive expansion is more likely — a no-brainer — compared to a rapid colonization, which has the downside of creating potential future competitors. A civilization that interconnects into a big internet-brain may have little use of distant colonies and expand at a rate much lower than 1% of the speed of light. In the defensive expansion scenario, the civilization will still rapidly send robot factories to build drones that will monitor all interesting planetary systems, and be ready to unleash destructive force to anything that looks threatening. Incidentally, UFOs are becoming mainstream. If UFO reports are to be believed (OK, a big IF) then the reported UFOs are acting exactly as expected from drones who inspect things, are unconcerned about us, and are ready to engage in case anything they deem threatening appears. Which raises the important question of what they might deem threatening. Or perhaps, aliens are among us in the quantum realm or in some other unexpected physical form. The exponential technological progress has to reach one or a few phase transitions, after which all bets are off. To advanced aliens, components such as neurons or silicon transistors will seem hopelessly bulky and inefficient as computational building blocks. Hence, as a colleague pointed out, SETI is severely outdated using technology and reasoning of the 1950s to search for aliens and should broaden its scope and methods. I bet Carl Sagan — my childhood hero and pioneer of SETI— would agree.
Technological civilizations are unlikely. This is the explanation I find least likely (rather, I leave room for an entirely different explanation, such as a specific and compelling hypothesis of why a sufficiently advanced civilization finds the visible universe uninteresting or explores it invisibly). Intelligence has most likely only evolved once on Earth in terms of nervous system; however, higher intelligence has evolved independently multiple times. Orangutans and chimps, dolphins and whales, elephants, ravens and crows, kea and African Grey parrots, and very independently octopuses and squids, have remarkable intelligence. Many species use tools. We are the first to develop technology on Earth, but isn’t it a stretch to assert that if we weren’t around no other species on Earth would develop technology in the next 100 million years? Or 1 billion years? What if life is vanishingly unlikely? Again, I don’t think that’s a robust explanation. The first step of life cannot be unlikely: while liquid water appeared on Earth 4.4 billion years ago, the first evidence of life may date back to 4.3 billion years ago, which hints at life originating quickly in geological terms once conditions are right. If any step in the evolution to intelligence was vanishingly unlikely, that step would most likely have taken a disproportionately long time on Earth. That is not what we observe: the last universal common ancestor appears about 3.5 billion years ago (bya) after a steady evolution of basic biomolecular functions; photosynthesis appears 3 bya; land microbes 2.8 bya; cyanobacteria’s oxygen photosynthesis 2.5 bya; eukaryotes 1.85 bya, land fungi 1.3 bya, sexual reproduction 1.2 bya; marine eukaryotes 1 bya; protozoa 750 million years ago, and so on, steadily evolving into intelligent species in the past few hundred million years. The coarse-grained breakdown of evolution’s steps in the early billions of years reflects our lack of data on the ancient progression of molecular biology rather than any single vanishingly unlikely event.
Photo by Aziz Acharki on Unsplash
Incidentally, I want to urge against jumping to the anthropic principle and stating that there is nothing puzzling about seemingly being alone because the sole intelligent civilization necessarily is puzzled about being alone. The anthropic principle is quite unsatisfying to begin with in cosmology. However, at least in that case we have a single observed event to explain — the universe and its cosmological properties — and no expectation of observing other similar events (i.e., other universes). In the case of the Fermi Paradox, because there may be yet unobserved civilizations lurking around, we have to weigh any theory of us being alone against some prior probability of it being true. Given our observations on Earth, the prior probability we assign to technological civilizations cannot be vanishingly small — everything points to steady biochemical and then organismal evolution from formation of water all the way to intelligent tool-using species — therefore we have to make every effort to completely exclude other explanations before we jump to the conclusion that we are alone.
So where does this leave us? I hope (1) is false. (2) is no good news either. (3) is wishful thinking, or perhaps scary too. I would love to see a better explanation. If you have your favorite explanation in mind, or thoughts to share, please comment below!
Want more stories like this? Be sure to check out more from Mission!🧠 👉 Here! | https://medium.com/the-mission/my-top-three-fermi-paradox-solutions-10d598a86197 | ['Serafim Batzoglou'] | 2019-06-25 03:39:02.577000+00:00 | ['Aliens', 'History', 'Cosmos', 'Science', 'Future'] |
1,765 | TLDR Migrating a 130TB Cluster from Elasticsearch 2 to 5 in 20 Hours with 0 Downtime by Fred de Villamil | TLDR Migrating a 130TB Cluster from Elasticsearch 2 to 5 in 20 Hours with 0 Downtime by Fred de Villamil
Replicate it using built-in ES capabilities, then reindex delta from your main data source. Also — it’s so cool to have everything saved in ~kafka~ a log.
A 77-node cluster, with 200TB storage, 4.8TB RAM, 2.4TB allocated to Java, and 924 CPU cores. It is made of 3 master nodes, 6 ingest nodes, and 68 data nodes. The cluster holds 1137 indices, with 13613 primary shards, and 1 replica in a second data center.
Option 1 — Cluster Restart: close every index, upgrade the software version, start the nodes, then open the indexes again. Downside — maintenance downtime =(
Better option — expand the cluster 2x, replicate everything 2 more times, split the cluster into two, then catch up with changes:
added 90 new servers with the same ES 2.3.
“number_of_replicas : 1” becomes “number_of_replicas : 3” and ES takes care of copying every index and shard onto the new servers (a sketch of this settings call follows the steps below).
Transferring 130TB of data at up to 4Gb/s puts lots of pressure on the hardware. The load on most machines was up to 40, with 99% of the CPU in use. Iowait went from 0 to 60% on most of our servers.
mitigate the problem of serving clients from busy hardware by using ES “zones” to split the data into cold and hot parts, dedicating some resources to serve the hot part to clients without reduced QoS.
split the cluster: shut down the new servers, disconnect them from the old cluster in terms of auto-discovery, then start them separately:
close all the indexes on the new cluster and upgrade ES to 5.0
reopen the indexes and catch up with changes by reindexing the delta from the Kafka source.
switch ES clients to use new cluster.
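For concreteness, here is a minimal sketch of the replica-count bump from step two, done as a plain HTTP call with Java 11's built-in HttpClient. This is not taken from the original post; the host is a placeholder, and the _all/_settings endpoint applies the change to every index.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BumpReplicas {
    public static void main(String[] args) throws Exception {
        // Raise the replica count on every index; ES then copies shards onto the new nodes.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_all/_settings")) // placeholder host
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"index\": {\"number_of_replicas\": 3}}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}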
Profit! | https://medium.com/some-tldrs/tldr-migrating-a-130tb-cluster-from-elasticsearch-2-to-5-in-20-hours-with-0-downtime-by-fred-de-65270f5ca894 | ['Pavel Trukhanov'] | 2018-03-01 13:47:06.769000+00:00 | ['Tldr', 'Elasticsearch', 'DevOps', 'Engineering', 'Big Data'] |
1,766 | The Motivation To Exercise Starts From Cues. | The Motivation To Exercise Starts From Cues.
Willpower is out. Cues are in.
Photo by Ethan Elisara on Unsplash
Very often, we rely too much on our willpower to get ourselves up for exercise.
It could be a trip to the gym, to the pool, to the tennis court or even just lacing up for an evening jog.
The power of our will is, very often, not very powerful.
That is probably why we cannot really depend on it.
Ask any smoker who has tried to quit smoking on sheer willpower.
Or any fried food junkie trying to lose weight.
If it is not willpower, then what can we rely on?
For me, the answer is “cues”.
And I am a big fan of cues.
For instance, if I know that I want to head for a swim tomorrow, this is what I am going to do.
I will pack the swimming equipment the evening prior to my session.
And then, I will put spare goggles at my working desk at home, at the working desk in the office, and in the shared locker where I deposit my bag in office.
I constantly remind myself of my swim session using cues (goggles) strategically placed all over where my eyes cannot miss.
That is swimming.
Running is a different ballgame.
With running, motivation comes from proximity.
I find that I have a tendency to not run when I have to travel far to the start line.
It is much easier for truckloads of excuses to infiltrate and take over my mind when it takes a long while to get from preparation to start.
I do this.
I keep a set of running clothes near the front door of the apartment before I head to work.
When I come home, I would immediately change into my workout gear and start warming up the moment I shut the gate.
If I am heading for track training, I would search for the nearest stadium to the office and deposit my running attire and shoes there before heading to work.
That way, no matter what happens, I will have to make my way to the stadium to collect my gear before I head home.
This eliminates all excuses from my end.
As with all things in life, a little deliberation goes a long way.
Don’t you think so? | https://medium.com/illumination/the-motivation-to-exercise-starts-from-cues-b8599e841373 | ['Aldric Chen'] | 2020-09-22 02:43:24.438000+00:00 | ['Self Improvement', 'Motivation', 'Health', 'Healthy Lifestyle', 'Short Story'] |
1,767 | Top Spring Framework Tutorials For Beginners — Learn Spring Framework [Updated 2021] | This course describes by example how to build cloud services via the use of object-oriented design techniques, Java Servlets, the Java Spring Framework, and cloud computing platforms, such as Amazon Web Services.
In this course you will learn:
Understand the details of the Hypertext Transfer Protocol
Be able to develop cloud services using the Java Spring Framework
Understand basic issues in scaling cloud services
Be able to use the Java Persistence API to integrate databases into cloud services
Due to the importance of building secure and scalable mobile/cloud platforms, this MOOC will not only show you how to build cloud services, but how to do so securely, scalably, and efficiently. Security and scalability topics will be woven into discussions of cloud service creation so that students learn, from the start, how to create robust cloud services.
Get a comprehensive overview of Spring in this intermediate-level course.
The course includes:
Spring Overview
Configuring the ApplicationContext
Component Scanning
The Bean Lifecycle
Aspect-Oriented Programming
The course develops applications and web services with Spring, and shares its knowledge about configuring the ApplicationContext (the interface for accessing components, loading files, publishing events, and more), as well as the beans (objects within the Spring IOC container).
It demonstrates a modern Java configuration workflow and explores the Spring lifecycle in depth, so you can extend the framework and better troubleshoot any issues you have with your applications.
Plus, learn how to use aspect-oriented programming to add behaviors to your apps in a reusable way.
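To make the ApplicationContext and bean concepts above concrete, here is a minimal sketch of the Java-based configuration workflow the course teaches; the class and bean names are invented for illustration.

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AppConfig {
    // A bean definition: Spring constructs and manages this object in its IOC container.
    @Bean
    public GreetingService greetingService() {
        return new GreetingService();
    }
}

class GreetingService {
    public String greet() {
        return "Hello from the Spring container";
    }
}

public class Main {
    public static void main(String[] args) {
        // The ApplicationContext is the interface for accessing managed components.
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        System.out.println(ctx.getBean(GreetingService.class).greet());
        ctx.close();
    }
}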
The course focuses on a systematic approach to Spring and breaks down the entire subject into systematic sections for easier understanding. The course also includes practical work (or homework) which will help you actually grasp how to work in Spring, instead of just a theoretical approach or following the instructor.
In this course, you will learn:
Introduction to the Spring Framework
Take a look at the core Spring Framework
Detailed introduction to Dependency Injection
Work with the MVC (Model-View Controller) in Spring
How Spring Framework can help simplify building apps that utilize the web
Take a look at some JSP basics, the visuals of the application
Work with REST APIs: what they are, how they work, etc. (see the sketch after this list)
How to configure a logger into the application
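As referenced in the REST item above, here is a minimal, hedged sketch of what a Spring REST endpoint looks like; the controller name and URL are invented for illustration.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// A minimal REST endpoint: the return value is written straight into the HTTP response body.
@RestController
public class StatusController {

    @GetMapping("/api/status/{id}")
    public String status(@PathVariable("id") long id) {
        return "status of item " + id;
    }
}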
This course is designed to give you a solid foundation of Spring MVC. The course covers the most recent approach of using both contained and exported WAR deployments. All configuration is done using the Java approach instead of XML.
The course includes:
What Is Spring MVC?
Creating Your First Spring MVC Application
Understanding the Structure of Spring MVC Applications
Creating Controllers in Spring MVC
Creating Views in Spring MVC Applications
Using Java Server Pages with Spring MVC View
Using Thymeleaf in Spring MVC Views
Validating Objects in Spring MVC Applications
Using Client-side JavaScript in Spring MVC Applications
In this course, Spring Framework: Spring MVC Fundamentals, you will gain a solid understanding of creating web applications with Spring MVC.
First, you will learn the architecture of Spring MVC. Next, you will discover controllers and navigation.
Finally, you will explore how to create views. When you are finished with this course, you will have the skills and knowledge of Spring MVC needed to create web applications.
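A minimal sketch of the controller-plus-view pattern this course covers, assuming a view template named welcome exists; all names here are illustrative.

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class WelcomeController {

    // Returns a logical view name; a ViewResolver maps it to a JSP or Thymeleaf template.
    @GetMapping("/welcome")
    public String welcome(Model model) {
        model.addAttribute("message", "Hello, Spring MVC");
        return "welcome"; // resolved to e.g. welcome.jsp or welcome.html
    }
}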
Learn the hottest, most in-demand Java web framework, including web programming with Spring MVC and Hibernate. Lifetime access with no subscription on Udemy.
An introduction to the widely-used Java Spring framework. Discover how to wire together your Java objects using Spring and dependency injection. You'll learn how to set up your system for Spring development, how to use Maven, how to work with databases using Spring and Hibernate, and how to create web applications with Spring MVC. We'll also look at managing user accounts with Spring Security and JDBC, working with web forms, Apache Tiles for building modular web pages, aspect-oriented programming (AOP), and using Log4J and JUnit.
Learn Spring with Core, MVC, JDBC, MySQL; Upcoming: Spring 5, Spring Boot 2, Thymeleaf, Security, Hibernate, JPA & more.
Java, the world’s leading programming language, is used to develop Spring applications.
The Spring Framework is the most popular and widely used Java Enterprise Edition (JEE) framework.
Spring is an open source, lightweight framework that handles all the infrastructure.
Spring makes life easy by allowing developers to focus on the business logic while it takes care of the low-level “plumbing”.
Spring is super lightweight to give you faster deployment. That’s because:
It advocates the POJO programming model, which means you don't need a dedicated server for deployment.
It is highly modular, which means you pick and choose which modules you need.
Testing Spring Framework applications is easy because of this.
This course assumes you know at least a little of the basics of Java. If you don't know Java or want a refresher, then I suggest you take my Complete Java Masterclass first before this Spring Framework course. But that's optional. You can still get a lot out of this course, with even a little Java knowledge.
New content to be released includes:
Spring MVC in-depth (Forms and validation): Drilling further into Spring MVC — Handling Web forms and Validation.
Spring AOP — Here’s where you’ll learn about Spring’s Aspect Oriented Programming (AOP). AOP helps to address cross-cutting concerns such as Logging, Security etc.
Spring Security — This topic covers Spring’s security feature that helps to make Spring based web-apps more secure and robust.
Spring with Hibernate — You’ll learn Spring integration with Hibernate, one of the most popular Object Relational Framework (ORM).
Spring with JPA — This is where you’ll learn Spring integration with Java Persistence API (JPA) which helps to make Spring applications database and ORM agnostic.
Spring Data — Spring Data unifies and makes it easy to access to different kinds of persistence stores, both relational database systems and NoSQL data stores.
Spring with Apache Tiles — Apache Tiles is a free open-source template engine for Java web frameworks. You'll learn its integration with Spring.
Spring Web Flow — Spring Web Flow builds on Spring MVC and allows implementing the “flows” in a web application.
Spring & Testing — In this section you’ll learn how to carry out Unit testing of Spring applications with testing frameworks such as JUnit.
Learn the magic of Spring Framework in 100 Steps with Spring Boot, Spring JDBC, Spring AOP, JUnit, Mockito and JPA.
This is an excellent basic introduction to Spring, SpringBoot & JPA. Easy to follow and seems to cover all the basic concepts with a good potted history to explain why certain techniques have evolved as they have.
Learn the magic of Spring Framework. From IOC (Inversion of Control), DI (Dependency Injection), Application Context to the world of Spring Boot, AOP, JDBC and JPA. Get set for an incredible journey.
In this course, you will learn the features of Spring and Spring modules (JDBC, AOP, Data JPA) with a hands-on, step-by-step approach.
You will get introduced to Spring Boot, Unit Testing with JUnit and Mockito, talking to the database with Spring JDBC and JPA, Maven (dependencies management), Eclipse (IDE) and Tomcat Embedded Web Server. We will help you set up each one of these.
You will learn about Spring step by step — in more than 100 steps. This course would be a perfect first step as an introduction to Spring.
You will learn about
Basics of Spring Framework — Dependency Injection, IOC Container, Application Context and Bean Factory.
Spring Annotations — @Autowired, @Component, @Service, @Repository, @Configuration, @Primary… (see the sketch after this list)
Spring MVC in depth — DispatcherServlet, Model, Controllers and ViewResolver
Spring Boot Starters — Spring Boot Starter Web, Starter Data Jpa, Starter Test
Basics of Spring Boot, Spring AOP, Spring JDBC and JPA
Basics of Eclipse, Maven, JUnit and Mockito
Basic concept of a Web application step by step using JSP Servlets and Spring MVC
Unit testing with JUnit and Mockito using XML and Java Spring Application Contexts
Level 1 : Spring Framework in 10 Steps
Level 2 : Spring in Depth
Level 3 : Unit Tests with Java and XML Contexts (3 steps)
Level 4 : Spring Boot in 10 Steps
Level 5 : Spring AOP
Level 6 : Spring JDBC and JPA
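A minimal sketch of the dependency-injection style referenced in the annotations item above; the repository and service names are invented, and the classes are assumed to sit under a package covered by component scanning.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

@Repository
class CustomerRepository {
    String findName(long id) {
        return "customer-" + id;
    }
}

@Service
class CustomerService {
    private final CustomerRepository repository;

    // Constructor injection: component scanning registers both beans,
    // and the container supplies the @Repository bean automatically.
    @Autowired
    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    String customerName(long id) {
        return repository.findName(id);
    }
}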
Best Spring Courses — Java Application Framework
Enterprise class use of Spring Framework 4 and Spring Boot.
The Spring Core course is intended to be a predecessor to this course. In Spring Core, I gave you a solid foundation in working with the Spring Framework. In this course, I build upon that foundation, expanding your skills with the Spring Framework. The skills taught in this course are skills you will need for enterprise application development using the Spring Framework.
Topics Include:
Spring Data JPA
Form Validation in Spring MVC
Externalized messages
Using Spring Security
Aspect Oriented Programming
Spring Application Events
Scheduled Tasks
Advanced Spring Configuration
The course starts by showing students how to replace the traditional JPA DAO structure we created in the Spring Core course, using Spring Data JPA. It continues building upon concepts learned in the Spring Core course by showing students how to use Command objects in Spring MVC and how to perform server-side property validations.
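To illustrate the DAO-replacement idea, here is a minimal sketch of a Spring Data JPA repository; the Product entity and the derived query are invented examples, and the javax.persistence imports reflect the Spring 4-era APIs this course targets.

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class Product {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// Spring Data JPA generates the implementation at runtime, replacing a hand-written DAO.
interface ProductRepository extends JpaRepository<Product, Long> {

    // A derived query: implemented automatically from the method name.
    List<Product> findByNameContaining(String fragment);
}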
Next, we get into using Spring Security. Spring Security is one of the most widely used modules of the Spring Framework.
It shows how to add Spring Security to our existing Spring MVC web application. We configure Spring Security to read user information from our database, and then secure URLs to authenticated users and users with specific security roles.
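A hedged sketch of the kind of configuration this describes, using the WebSecurityConfigurerAdapter style that was current in the Spring Security 4/5 era (later releases replace it with a SecurityFilterChain bean); the URL patterns and roles are illustrative, not the course's exact code.

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/admin/**").hasRole("ADMIN")      // role-restricted URLs
                .antMatchers("/account/**").authenticated()     // any logged-in user
                .antMatchers("/", "/products/**").permitAll()   // public pages
                .and()
                .formLogin();                                   // default login form
    }
}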
Aspect-Oriented Programming (AOP) is a really cool programming paradigm, and it is supported by the Spring Framework. In the module on AOP, I show you how to use AOP to log login activity in Spring Security. By using AOP, we don’t need to change any of the Spring Security code.
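The general shape of such an aspect looks like the sketch below. The pointcut expression and package are invented; a real one would target the actual authentication path, and this is not the course's exact code.

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoginLoggingAspect {

    // Runs after any authenticate(..) method under the named package returns successfully.
    @AfterReturning("execution(* com.example.security..*.authenticate(..))")
    public void logLogin(JoinPoint joinPoint) {
        System.out.println("Login activity: " + joinPoint.getSignature());
    }
}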
The Spring Framework has a very mature events framework we can use for application events. I show you how to create a custom application event, then how to set up an event handler to take action on specific application events.
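A minimal sketch of that publish-and-listen pattern, using the annotation-driven style available since Spring 4.2; the event and service names are invented.

import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Since Spring 4.2 an event can be any plain object; no base class required.
class OrderPlacedEvent {
    final long orderId;

    OrderPlacedEvent(long orderId) {
        this.orderId = orderId;
    }
}

@Service
class OrderService {
    private final ApplicationEventPublisher publisher;

    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void placeOrder(long orderId) {
        publisher.publishEvent(new OrderPlacedEvent(orderId)); // fire the event
    }
}

@Component
class OrderEventHandler {

    // Invoked by the container whenever an OrderPlacedEvent is published.
    @EventListener
    void on(OrderPlacedEvent event) {
        System.out.println("Handling order " + event.orderId);
    }
}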
In Spring Core and in this course, the project we’re working on uses Spring Boot as its foundation. Spring Boot is doing a lot of automatic configuration for us.
In the last module of this course, we will remove Spring Boot from the project. This will require us to configure all the objects and data sources being provided by Spring Boot manually. In doing so, students will gain insight into all the automation being provided by Spring Boot, and how to manage a more advanced Spring Configuration.
A deep-dive into the Microservice architectural style, and how to implement it with Spring technologies.
Microservices with Spring Cloud is an online workshop designed to help you learn the Microservices architectural style, and how to implement it using Spring technologies.
This course provides a good, solid introduction to the topic of the Microservices architectural style, and combines this with practical experience gained by working through the exercises featuring Spring Cloud.
Along the way, this course will provide a brief introduction to Spring Boot and Spring Data (enough to get you familiar with these technologies if you have not been immersed in them already).
The course includes exercises that give you hands-on experience working with the various components of Spring Cloud.
The goal of this course is to serve as a practical guide through the Spring Cloud projects, so you can see how they are used to implement microservice-based architecture.
By the time you finish this course, you will have gained the ability to articulate what the Microservices architectural style is all about, including its advantages and disadvantages.
You will gain familiarity with Spring Boot, and you’ll see how to use it to build web interfaces, REST interfaces, and how to use Spring Data and Spring Data REST.
You will gain the ability to build microservice-based applications utilizing Spring Cloud technologies. You will learn about
Centralized, versioned configuration management using Spring Cloud Config
Dynamic configuration updates with Spring Cloud Bus
Service discovery with Spring Cloud Eureka
Client-Side Load Balancing with Ribbon
Declarative REST Clients with Feign (see the sketch after this list)
Software Circuit Breakers with Hystrix.
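As referenced in the Feign item above, here is a hedged sketch of a declarative REST client; the service id and endpoint are invented, the application class needs @EnableFeignClients, and the exact package of @FeignClient varies by Spring Cloud release (the Netflix-era releases used org.springframework.cloud.netflix.feign).

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Declarative REST client: Spring Cloud generates the HTTP plumbing behind this interface.
@FeignClient(name = "user-service") // invented service id, resolved via Eureka + Ribbon
public interface UserClient {

    @GetMapping("/users/{id}")
    String userById(@PathVariable("id") long id);
}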
Deploy Spring Boot Applications to the Cloud on AWS.
The Spring Framework is very popular with large companies. In fact, Spring is the most popular Java framework.
A typical company will deploy its Spring Framework application in at least 3 different environments. Having a development, test, and production environment is common.
The problem developers face is that each environment is different.
Different server names.
Different databases.
Different user accounts.
Different passwords.
In this course, you will learn how to use Spring’s IoC container to deploy your application in many different environments. Through Inversion of Control, your Spring application can wire itself for the needs of each environment.
You’ll start the course learning advanced configuration options of the Spring Framework.
Next, the course takes a DevOps approach. You’ll see how to deploy Spring Framework applications in different environments.
In development, it’s common to use an H2 in-memory database. Of course, this database is only temporary. Not something you’d want to use for your production deployment.
Do you want to see how to flip a switch and use MySQL? Flip another switch and your app can be using an RDS database managed by Amazon. You can do this with no code changes.
The course also looks at the best practices used in enterprise software development. Using a continuous integration server is a best practice. Jenkins is the most popular CI server. You will learn how to install Jenkins on a Linux server, a server you provisioned in the AWS cloud. Once you have Jenkins running on your AWS server, a best practice is to set up Jenkins on a friendly URL. Jenkins is a Java application running on port 8080. You don't want to be typing some IP colon 8080 into your browser to reach Jenkins.
Docker is an exciting technology. You will see how to leverage Docker to host your own Artifactory Maven repository. We’ll use Artifactory to manage build artifacts produced by Jenkins.
Just for fun, we will also use Docker to set up a MySQL database server. We’ll do this by provisioning a Linux server on AWS, installing Docker on it, and then deploy MySQL in a Docker container.
We will also provision an application server we can use to run our Spring Boot application. You will pull the Spring Boot jar right from Artifactory and tell it to connect to a database server.
Amazon AWS also has managed MySQL databases. This is their RDS service. You will see how to provision your own RDS database. We’ll then reconfigure our Spring Boot application to connect to the RDS database.
There is a lot of fun and challenging content in this course. You will learn:
How to manage Spring properties.
Why you want to encrypt sensitive properties, such as passwords.
How Spring Profiles are used (see the sketch after this list).
Using YAML to configure Spring.
To provision servers on Amazon AWS.
Logging into your servers via SSH.
How to use the yum package manager to install software on Linux.
How to configure your own Linux service.
How DNS works, and how to use Route 53 to setup your own hostnames.
How to use webhooks in GitHub to trigger your builds immediately.
Why you don’t want to use root accounts for your application.
Configure Jenkins to perform a Maven build.
Use Jenkins to deploy build artifacts to Artifactory.
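As referenced in the profiles item above, here is a minimal sketch of the switch-flipping idea: profile-specific DataSource beans let the same code run against H2 in development and MySQL (or RDS) in production. The URLs and credentials are placeholders, not the course's values.

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    @Bean
    @Profile("dev") // active when spring.profiles.active=dev
    public DataSource h2DataSource() {
        return new DriverManagerDataSource("jdbc:h2:mem:devdb");
    }

    @Bean
    @Profile("production") // flip to MySQL (or an RDS endpoint) with no code changes
    public DataSource mysqlDataSource() {
        // Placeholder URL and credentials; real values come from externalized properties.
        return new DriverManagerDataSource("jdbc:mysql://db.example.com:3306/app", "appuser", "secret");
    }
}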
This is a very hands-on course. To get the most out of this course, you will need an account on AWS. You should be able to use the AWS free tier to complete the course assignments.
To get the most out of this course, you will need a domain name. You will need to have control of the domain. Without this, you will not be able to configure subdomains in Route 53.
The course does leverage AWS for its cloud services. The skills you learn on the AWS platform will transfer to most corporate environments. AWS is used to mimic the typical company.
Spring Boot gives you all the power of the Spring Framework without all of the complexity. Start writing apps today.
Spring Boot and the Spring Framework make it easy to create stand-alone, production-grade applications and services that run on their own and can be maintained with minimum fuss. Spring Boot also provides production-ready features such as metrics, health checks, and even externalized configuration. It is software designed to run anywhere, meaning you can create executable JARs, which is one of the most favorable features of this type of program.
While learning this kind of application development might seem like a daunting task, this course structures Spring Boot and Spring Framework learning in an easy-to-comprehend fashion. Featuring topics like an introduction to the Framework as well as step-by-step guidelines for creating your first application, this course is suitable for almost any user. The only requirements to excel at this course’s teaching of Spring Boot are some familiarity with the Java and Groovy programming languages, some web development experience, and a computer capable of running Java plus IntelliJ or Eclipse.
Besides offering lifetime access to all eighty featured lectures and over ten hours of teaching content, the course also gives you the opportunity to create Spring MVC applications, plus tutorials on how to connect to various databases using Spring Data. It will be extremely beneficial to students who are new to Spring Boot, students who are unfamiliar with the Spring Framework, and those who are looking to write their own apps.
Spring Framework, Hibernate & Java: Programming, JPA, OCA Java SE, JDBC, Oracle, Database App, SQL & MySQL For Beginners.
This Spring Framework course will show you the exact techniques and strategies you need to develop a full CRUD app with Hibernate, write unit tests with XML and Java application contexts, and build web applications.
In This Spring Framework Training, You’ll Learn:
Aspect-Oriented Programming
Setting Up Spring Environment
Java Development Kit (JDK) Setup
Installation of Apache Common Logging API
Eclipse IDE Setup
The Necessary IOC, BeanFactory & Application Container
The Application Context Container
The Singleton and Prototype Bean Scope
Bean & Life Cycle
Initialization & Destruction Callbacks
Default Initialization And Destroy Methods
Dependency Injection
Injecting Inner Beans & References
Autowiring Modes & Constructor
JDBC Framework
Configuring Data Sources
Data Access Object
Executing SQL and DDL Statements
Local and Global Transactions
Programmatic and Declarative, Transaction Management
Logging with LOG4J
Jakarta Commons Logging (JCL) API
Get Ready for Your Spring Interview with Spring, Spring Boot, RESTful, SOAP Web Services and Spring MVC.
Spring Framework is the most popular Java Framework ever. It continues to evolve with changing architectures. Spring Boot is one of the most popular Spring projects. Spring Boot is the most used Java framework to develop RESTful Services and Microservices.
Preparing for Spring Interview is tricky. There are a wide variety of Spring Modules and Spring Projects you would need to recollect and be prepared to answer questions on. You would need to get a good understanding of the new features of Spring and have a firm grasp of the concepts you implemented in your projects.
This course helps you prepare for Spring Interview with code examples covering 200+ Spring Interview Questions and Answers on Spring, Spring Boot, Spring MVC, Spring JDBC, JPA, AOP, RESTful Services and SOAP Web Services.
You will learn below topics
Spring
Spring MVC
Spring Boot
Database Connectivity — JDBC, Spring JDBC & JPA
Spring Data
Unit Testing
AOP
SOAP Web Services
RESTful Web Services
Spring Framework 5: Learn Spring Framework 5, Spring Boot 2, Spring MVC, Spring Data JPA, Spring Data MongoDB, Hibernate.
Learn Spring with the most modern and comprehensive course available for Spring Framework 5 and Spring Boot 2. You will see how to build multiple real world applications using Spring Framework 5.
The in-demand technologies you will use to build Spring Framework applications include:
Spring Framework 5
Spring Boot 2
Spring Data JPA
Spring MVC
Spring MockMVC
Spring WebFlux
Spring Data MongoDB
Spring Security (Coming in Q1 2018)
Hibernate
Project Lombok
MapStruct
Maven
Gradle
In addition to teaching you Spring Framework 5, you will learn about modern best practices used in enterprise application development. As we build the applications, you’ll see me using Test Driven Development (TDD) with JUnit and Mockito. Using Mockito mocks keeps your Spring Framework unit tests light and fast. You’ll also see how the Spring context can be used for more complex integration tests. These techniques are best practices used by companies all over the world to build and manage large-scale Spring Framework applications.
Spring MVC and Hibernate have long been cornerstones of the Spring Framework. You will learn how to use Spring MVC, Spring Data JPA and Hibernate to build a real world web application. You’ll learn about Hibernate configuration, and about the mapping of JPA entities.
Spring MVC has a lot of robust capabilities.
I start you off by showing you how to build a recipe application (using TDD, of course).
Initially, it’s all happy path development. We go back and add custom exception handling, form validation, and internationalization.
In the course you will also learn how to use Spring MVC to create RESTful APIs.
A big theme of Spring Framework 5 is Reactive Programming.
Inside the course we build a web application using Thymeleaf, Spring MVC, Spring Data MongoDB, and MongoDB.
We then take the MongoDB application we built and convert it to a Reactive application. You’ll see how you can leverage the new Reactive types inside the Spring Framework from the data tier to the web tier.
You will get to see step by step how to convert a traditional Spring MVC application to an end to end reactive application using the WebFlux framework — which is brand new to Spring Framework 5.
Coming soon to the course in early 2018:
Spring Security
Documenting your APIs with RestDoc
Aspect Oriented Programming (AOP)
Using Spring Events
Scheduling Tasks
Caching with eHcache
Spring JDBC (JDBC Template)
JMS Messaging
AMQP with RabbitMQ
Logging configuration for Logback and Log4J 2
And more real world Spring Framework apps.
Build a web application using Spring Framework 4 and Spring Boot.
If you’re new to the Spring Framework, this is the course you want to start with. This course covers the core of the Spring Framework, the foundation which all of the other Spring Framework projects are built from.
In this course, you will learn about important key concepts, such as dependency injection and inversion of control, which are used throughout the Spring Framework. Within the Spring Framework, you have the option of using the traditional XML configuration, or the new Java based configuration. I’ll show you step by step how to configure Spring Beans using best practices in XML and Java. I’ll also show you how to use Spring to persist data into a database, and Spring MVC to show content from the database on a webpage.
Throughout the course, you will have access to the code examples being presented in the tutorials. This is code you can build and run on your computer, so you will be able to study working examples. Whenever possible, I will go into real-world use cases and examples from my years of experience as a SpringSource consultant. I’ve seen a lot of good code and bad code over the years, and through that experience I will show you good code, and the poor programming practices to avoid.
By the time we reach the end of this course, you will be able to build a functioning Spring Web Application.
In this course, you will learn about: | https://medium.com/quick-code/top-tutorials-to-learn-spring-framework-for-the-java-application-12db01d9c288 | ['Quick Code'] | 2020-12-25 16:47:09.235000+00:00 | ['Development', 'Spring Framework', 'Java', 'Coding', 'Application'] | Title Top Spring Framework Tutorials Beginners — Learn Spring Framework Updated 2021Content course describes example build cloud service via use objectoriented design technique Java Servlets Java Spring Framework cloud computing platform Amazon Web Services course learn Understand detail Hypertext Transfer Protocol able develop cloud service using Java Spring Framework Understand basic issue scaling cloud service able use Java Persistence API integrate database cloud service Due importance building secure scalable mobilecloud platform MOOC show build cloud service securely scalably efficiently Security scalability topic woven discussion cloud service creation student learn start create robust cloud service Get comprehensive overview Spring intermediatelevel course course includes Spring Overview Configuring ApplicationContext Component Scanning Bean Lifecycle AspectOriented Programming course develops application web service Spring share knowledge configuring ApplicationContext interface accessing component loading file publishing event well bean object within Spring IOC container demonstrates modern Java configuration workflow explores Spring lifecycle depth extend framework better troubleshoot issue application Plus learn use aspectoriented programming add behavior apps reusable way course focus systematic approach Spring break entire subject systematic section easier understanding course also includes practical work homework help actually grasp work Spring instead theoretical approach following instructor course learn Introduction Spring Framework Take look core Spring Framework Detailed introduction Dependency Injection Work MVC ModelView Controller Spring Spring Framework help simplify building apps utilize web Take look JSP basic visuals application Work REST API work etc configure logger application course designed give solid foundation Spring MVC course cover recent approach using contained exported WAR deployment configuration done using Java approach instead XML course includes Spring MVC Creating First Spring MVC Application Understanding Structure Spring MVC Applications Creating Controllers Spring MVC Creating Views Spring MVC Applications Using Java Server Pages Spring MVC View Using Thymeleaf Spring MVC Views Validating Objects Spring MVC Applications Using Clientside JavaScript Spring MVC Applications course Spring Framework Spring MVC Fundamentals gain solid understanding creating web application Spring MVC First learn architecture Spring Next discover controller navigation Finally explore create view finished course skill knowledge Spring MVC needed create web application Learn hottest indemand Java web framework including web programming Spring MVC Hibernate Lifetime access subscription Udemy introduction widelyused Java Spring framework Discover wire together Java object using Spring dependency injection You’ll learn set system Spring development use Maven work database using Spring Hibernate create web application Spring MVC We’ll also look managing user account Spring SecurityJDBC working web form Apache tile building modular web page aspectoriented programming AOP using Log4J JUnit Learn Spring Core MVC JDBC MySQL Upcoming Spring 5 Spring Boot 2 Thymeleaf Security Hibernate JPA Java world’s leading 
programming language used develop Spring application Spring Framework popular widely used Java Enterprise Edition JEE framework Spring open source lightweight framework handle infrastructure Spring make life easy allowing developer focus business logic take care lowlevel “plumbing” Spring super lightweight give faster deployment That’s advocate POJO programming model mean don’t need dedicated server deployment highly modular mean pick choose module need Testing Spring Framework application easy course assumes know least little basic Java don’t know Java want refresh suggest take Complete Java Masterclass first Spring Framework course that’s optional still get lot course even little Java knowledge New content released includes Spring MVC indepth Forms validation Drilling Spring MVC — Handling Web form Validation Spring AOP — Here’s you’ll learn Spring’s Aspect Oriented Programming AOP AOP help address crosscutting concern Logging Security etc Spring Security — topic cover Spring’s security feature help make Spring based webapps secure robust Spring Hibernate — You’ll learn Spring integration Hibernate one popular Object Relational Framework ORM Spring JPA — you’ll learn Spring integration Java Persistence API JPA help make Spring application database ORM agnostic Spring Data — Spring Data unifies make easy access different kind persistence store relational database system NoSQL data store Spring Apache Tiles — Apache Tiles free opensource template engine Java web framework You’ll learn it’s integration Spring Spring Web Flow — Spring Web Flow build Spring MVC allows implementing “flows” web application Spring Testing — section you’ll learn carry Unit testing Spring application testing framework JUnit Learn magic Spring Framework 100 Steps Spring Boot Spring JDBC Spring AOP JUnit Mockito JPA excellent basic introduction Spring SpringBoot JPA Easy follow seems cover basic concept good potted history explain certain technique evolved Learn magic Spring Framework IOC Inversion Control DI Dependency Injection Application Context world Spring Boot AOP JDBC JPA Get set incredible journey course learn feature Spring Spring Modules — JDBC AOP Data JPA handson step step approach get introduced Spring Boot Unit Testing JUnit Mockito talking database Spring JDBC JPA Maven dependency management Eclipse IDE Tomcat Embedded Web Server help set one learn Spring step step — 100 step course would perfect first step introduction Spring learn Basics Spring Framework — Dependency Injection IOC Container Application Context Bean Factory Spring Annotations — Autowired Component Service Repository Configuration Primary… Spring MVC depth — DispatcherServlet Model Controllers ViewResolver Spring Boot Starters — Spring Boot Starter Web Starter Data Jpa Starter Test Basics Spring Boot Spring AOP Spring JDBC JPA Basics Eclipse Maven JUnit Mockito Basic concept Web application step step using JSP Servlets Spring MVC Unit testing JUnit Mockito using XML Java Spring Application Contexts Level 1 Spring Framework 10 Steps Level 2 Spring Depth Level 3 3 step Unit Tests Java XML Contexts Level 4 Spring Boot 10 Steps Level 5 Spring AOP Level 6 Spring JDBC JPA Best Spring Courses — Java Application Framework Enterprise class use Spring Framework 4 Spring Boot Spring Core course intended predecessor course Spring Core gave solid foundation working Spring Framework course build upon foundation expanding skill Spring Framework skill taught course skill need enterprise application development using Spring Framework Topics Include 
Spring Data JPA Form Validation Spring MVC Externalized message Using Spring Security Aspect Oriented Programming Spring Application Events Scheduled Tasks Advanced Spring Configuration course started showing student replace traditional JPA DAO structure created Spring Core course using Spring Data JPA continues building upon concept learned Spring Core course showing student use Command object Spring MVC perform serverside property validation Next get using Spring Security Spring Security one widely used module Spring Framework show add Spring Security existing Spring MVC web application configure Spring Security read user information database secure URLs authenticated user user specific security role AspectOriented Programming AOP really cool programming paradigm supported Spring Framework module AOP show use AOP log login activity Spring Security using AOP don’t need change Spring Security code Spring Framework mature event framework use application event show create custom application event set event handler take action specific application event Spring Core course project we’re working us Spring Boot foundation Spring Boot lot automatic configuration u last module course remove Spring Boot project require u configure object data source provided Spring Boot manually student gain insight automation provided Spring Boot manage advanced Spring Configuration deepdive Microservice architectural style implement Spring technology Microservices Spring Cloud online workshop designed help learn Microservices architectural style implement using Spring technology course provides good solid introduction topic Microservices architectural style combine practical experience gained working exercise featuring Spring Cloud Along way course provides brief introduction Spring Boot Spring Data enough get familiar technology immersed already course provides exercise provide handson experience working various component Spring Cloud goal course serve practical guide Spring Cloud project see used implement microservicebased architecture time finish course gained ability articulate Microservices architectural style including advantage disadvantage gain familiarity Spring Boot you’ll see use build web interface REST interface use Spring Data Spring Data REST gain ability build microservicebased application utilizing Spring Cloud technology learn Centralized versioned configuration management using Spring Cloud Config Dynamic configuration update Spring Cloud Bus Service discovery Spring Cloud Eureka ClientSide Load Balancing Ribbon Declarative REST Clients Feign Software Circuit Breakers Hystrix Deploy Spring Boot Applications Cloud AWS Spring Framework popular large company fact Spring popular Java framework typical company deploy Spring Framework application least 3 different environment development test production environment common problem developer face environment different Different server name Different database Different user account Different password course learn use Spring’s IoC container deploy application many different environment Inversion Control Spring application wire need environment You’ll start course learning advanced configuration option Spring Framework Next course take DevOps approach You’ll see deploy Spring Framework application different environment development it’s common use H2 inmemory database course database temporary something you’d want use production deployment want see flip switch use MySQL Flip another switch app using RDS database managed Amazon code change course also look 
like best practice used enterprise software development Using continuous integration server best practice Jenkins popular CI server learn install Jenkins Linux server server provisioned AWS cloud Jenkins running AWS server best practice set Jenkins friendly URL Jenkins Java application running port 8080 don’t want typing IP colon 8080 browser reach Jenkins Docker exciting technology see leverage Docker host Artifactory Maven repository We’ll use Artifactory manage build artifact produced Jenkins fun also use Docker set MySQL database server We’ll provisioning Linux server AWS installing Docker deploy MySQL Docker container also provide application server use run Spring Boot application pull Spring Boot jar right Artifactory tell connect database server Amazon AWS also managed MySQL database RDS service see provision RDS database We’ll reconfigure Spring Boot application connect RDS database lot fun challenging content course learn manage Spring property want encrypt sensitive property password Spring Profiles used Using YAML configure Spring provision server Amazon AWS Logging server via SSH use yum package manager install software Linux configure Linux service DNS work use Route 53 setup hostnames use webhooks GitHub trigger build immediately don’t want use root account application Configure Jenkins perform Maven build Use Jenkins deploy build artifact Artifactory handson course get course need account AWS able use AWS free tier complete course assignment get course need domain name need control domain Without able configure subdomains Route 53 course leverage AWS cloud service skill learn AWS platform transfer corporate environment AWS used mimic typical company Spring Boot give power Spring Framework without complexity Start writing apps today Spring Boot Spring Framework make easy create powered productiongrade application service run maintained minimum fuss also provides productionready feature metric health check even externalized configuration software designed run anywhere meaning create executable JARs one favorable feature type program learning type application might seem like daunting task course structure Spring Boot Spring Framework learning easy comprehend fashion Featuring topic like introduction Framework well step step guideline creating first application course perfect almost user requirement order excel courses’ teaching Spring Boot familiarity Java Groovy programming language web development experience well computer capable running Java Intellij Eclipse Besides course offering lifetime access eighty featured lecture ten hour teaching content also offer opportunity create Spring MVC application also tutorial connect various database using Spring Data course extremely beneficial student new Spring Boot student unfamiliar Spring Framework looking writing apps course applies case Spring Framework Hibernate Java Programming JPA OCA Java SE JDBC Oracle Database App SQL MySQL Beginners Spring Framework course show exact technique strategy need develop full CRUD app Hibernate write unit test XML Java application context build web application programming Spring Framework Training You’ll Learn AspectOriented Programming Setting Spring Environment Java Development Kit JDK Setup Installation Apache Common Logging API Eclipse IDE Setup Necessary IOC BeanFactory Application Container Application Context Container Singleton Prototype Bean Scope Bean Life Cycle Initialization Destruction Callbacks Default Initialization Destroy Methods Dependency Injection Injecting Inner Beans 
References Autowiring Modes Constructor JDBC Framework Configuring Data Sources Data Access Object Executing SQL DDL Statements Local Global Transactions Programmatic Declarative Transaction Management Logging LOG4J Jakarta Commons Logging JCL API Get Ready Spring Interview Spring Spring Boot RESTful SOAP Web Services Spring MVC Spring Framework popular Java Framework ever continues evolve changing architecture Spring Boot one popular Spring project Spring Boot used Java framework develop RESTful Services Microservices Preparing Spring Interview tricky wide variety Spring Modules Spring Projects would need recollect prepared answer question would need get good understanding new feature Spring firm grasp concept implemented project course help prepare Spring Interview code example covering 200 Spring Interview Questions Answers Spring Spring Boot Spring MVC Spring JDBC JPA AOP RESTful Services SOAP Web Services learn topic Spring Spring MVC Spring Boot Database Connectivity — JDBC Spring JDBC JPA Spring Data Unit Testing AOP SOAP Web Services RESTful Web Services Spring Framework 5 Learn Spring Framework 5 Spring Boot 2 Spring MVC Spring Data JPA Spring Data MongoDB Hibernate Learn Spring modern comprehensive course available Spring Framework 5 Spring Boot 2 see build multiple real world application using Spring Framework 5 demand technology use build Spring Framework application include Spring Framework 5 Spring Boot 2 Spring Data JPA Spring MVC Spring MockMVC Spring WebFlux Spring Data MongoDB Spring Security Coming Q1 2018 Hibernate Project Lombok MapStruct Maven Gradle addition teaching Spring Framework 5 learn modern best practice used enterprise application developmentAs build application you’ll see using Test Driven Development TDD JUnit Mockito Using Mockito mock keep Spring Framework unit test light fast You’ll also see Spring context used complex integration test technique best practice used company world build manage large scale Spring Framework application Spring MVC Hibernate long cornerstone Spring Framework learn use Spring MVC Spring Data JPA Hibernate build real world web application You’ll learn Hibernate configuration mapping JPA entity Spring MVC lot robust capability start showing build recipe application using TDD course Initially it’s happy path development go back add custom exception handling form validation internationalization course also learn use Spring MVC create RESTful APIs big theme Spring Framework 5 Reactive Programming Inside course build web application using Thymeleaf Spring MVC Spring Data MongoDB MongoDB take MongoDB application built convert Reactive application You’ll see leverage new Reactive type inside Spring Framework data tier web tier get see step step convert traditional Spring MVC application end end reactive application using WebFlux framework — brand new Spring Framework 5 Coming soon course early 2018 Spring Security Documenting APIs RestDoc Aspect Oriented Programming AOP Using Spring Events Scheduling Tasks Caching eHcache Spring JDBC JDBC Template JMS Messaging AMQP RabbitMQ Logging configuration Logback Log4J 2 real world Spring Framework apps Build web application using Spring Framework 4 Spring Boot you’re new Spring Framework course want start course cover core Spring Framework foundation Spring Framework project built course learn important key concept dependency injection inversion control used throughout Spring Framework Within Spring Framework option using traditional XML configuration new Java based configuration I’ll show 
step step configure Spring Beans using best practice XML Java I’ll also show use Spring persist data database Spring MVC show content database webpage Throughout course access code example presented tutorial code build run computer able study working code example Whenever possible go real world use case example year experience Spring Source consultant I’ve seen lot good code bad code year experience Spring show good code poor programming practice avoid time reach end course able build functioning Spring Web Application course learn aboutTags Development Spring Framework Java Coding Application |
1,768 | Machine Learning Classification Models | A brief guide to Model Evaluation Techniques: Machine Learning
Machine Learning Classification Models
Model Evaluation Techniques for Machine Learning Classification Models
Image courtesy: Great Learning
In machine learning, we often use classification models to get predicted results for population data. Classification is one of the two sections of supervised learning, and it deals with data from different categories. The training dataset trains the model to predict the unknown labels of population data. There are multiple algorithms: logistic regression, K-nearest neighbors, decision trees, Naive Bayes, etc. All these algorithms have their own style of execution and different techniques of prediction. To find the most suitable algorithm for a particular business problem, there are a few model evaluation techniques. In this article, different model evaluation techniques will be discussed.
Confusion Matrix
It probably got its name from the state of confusion it deals with. If you remember hypothesis testing, you may recall the two errors we defined as type-I and type-II. As depicted in Fig.1, a type-I error occurs when the null hypothesis is rejected even though it is actually true. A type-II error occurs when the alternate hypothesis is true, but you fail to reject the null hypothesis.
Fig.1: Type-I and Type-II errors
Figure 1 clearly depicts that the choice of confidence interval affects the probabilities of these errors occurring. But if you try to reduce either of these errors, the other one will increase.
So, what is confusion matrix?
Fig.2: Confusion Matrix
The confusion matrix is the image given above. It is a matrix representation of the results of any binary test. For example, let us take the case of predicting a disease. You have done some medical tests, and with the help of their results, you are going to predict whether a person has the disease. So you are actually going to validate whether the hypothesis of declaring a person as having the disease is acceptable or not. Say, among 100 people, you predict 20 people to have the disease. In actuality, only 15 people have the disease, and among those 15, you have diagnosed 12 correctly. So, if I put the result in a confusion matrix, it will look like the following —
Fig.3: Confusion Matrix of prediction a disease
So, if we compare fig.3 with fig.2 we will find —
True Positive: 12 (You have predicted the positive case correctly!)
True Negative: 77 (You have predicted the negative case correctly!)
False Positive: 8 (You have predicted these people as having the disease, which they actually don’t. But do not worry; this can be rectified in further medical analysis, so it is a low-risk error. This is the type-II error in this case.)
False Negative: 3 (Oh no! You have predicted these three poor fellows as fit, but they actually have the disease. This is dangerous! Be careful! This is the type-I error in this case.)
Now, if I ask what the accuracy of the prediction model behind these results is, the answer should be the ratio of the number of accurate predictions to the total number of people, which is (12+77)/100 = 0.89. If you study the confusion matrix thoroughly, you will find the following things:
The top row depicts the total number of people you predicted as having the disease. Among these predictions, 12 people actually have the disease. So the ratio 12/(12+8) = 0.6 measures how accurate your model is when it declares that a person has the disease. This is called the Precision of the model.
Now, take the first column. This column represents the total number of people who actually have the disease, and you have predicted correctly for 12 of them. So the ratio 12/(12+3) = 0.8 measures how well your model detects people with the disease out of everyone who actually has it. This is termed Recall.
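These numbers are easy to verify in code. Below is a minimal sketch using scikit-learn, with the 100 hypothetical patients from fig.3 reconstructed as label arrays (the ordering of the observations is an assumption made purely for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 15 people actually have the disease (label 1), 85 do not (label 0)
y_true = np.array([1] * 15 + [0] * 85)
# The model catches 12 of the 15 sick people (3 false negatives)
# and wrongly flags 8 healthy people (8 false positives)
y_pred = np.array([1] * 12 + [0] * 3 + [1] * 8 + [0] * 77)

print(accuracy_score(y_true, y_pred))   # (12 + 77) / 100 = 0.89
print(precision_score(y_true, y_pred))  # 12 / (12 + 8) = 0.6
print(recall_score(y_true, y_pred))     # 12 / (12 + 3) = 0.8
```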
Now, you may ask why we need to measure precision or recall to evaluate the model at all.
The answer is that these measures matter most when a particular outcome is very sensitive. For example, suppose you are building a model for a bank to predict fraudulent transactions. Fraudulent transactions are not very common: in 1,000 transactions, there may be 1 that is fraud. So, undoubtedly, your model will predict non-fraudulent transactions very accurately. In this case, overall accuracy does not matter much, as it will always be very high regardless of how well the fraudulent transactions are predicted, since they are a very low percentage of the whole population. But predicting a fraudulent transaction as non-fraudulent is not acceptable. So, in this case, the measurement of recall takes a vital role in evaluating the model: it tells you, out of all the actual fraudulent transactions, how many are being caught. If recall is low, the model is not acceptable, even if the overall accuracy is high.
Receiver Operating Characteristics (ROC) Curve
Measuring the area under the ROC curve is also a very useful method for evaluating a model. The ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR) (see fig.2). In our disease detection example, TPR is the ratio between the number of people correctly predicted as having the disease and the total number of people who actually have the disease. FPR is the ratio between the number of people incorrectly predicted as having the disease and the total number of people who do not have the disease. So, if we plot the curve, it comes out like this —
Fig.4: ROC curve (source: https://www.medcalc.org/manual/roc-curves.php)
The blue line denotes how TPR changes with FPR for a model. The larger the ratio of the area under the curve to the total area (100 x 100 in this case), the more accurate the model. If it approaches 1, the model is likely overfit, and if it is at or below 0.5 (i.e., when the curve lies along the dotted diagonal line), the model is too inaccurate to use.
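If your model outputs probabilities rather than hard labels, scikit-learn can compute the curve and the area under it directly. A minimal sketch (the scores below are made up for illustration):

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 0, 1, 0, 1]
# Predicted probabilities of the positive class (illustrative values)
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90, 0.05, 0.60]

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the curve
print(roc_auc_score(y_true, y_score))              # area under the curve
```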
For classification models, there are many other evaluation methods, like Gain and Lift charts, the Gini coefficient, etc. But in-depth knowledge of the confusion matrix can help you evaluate any classification model very effectively. So, in this article, I tried to demystify the confusion around the confusion matrix for the readers.
Example: Machine Learning Models Spotify uses to recommend music you’ll like
In the early 2000s, Songza implemented a manual music recommendation system for its listeners, where a team of music experts and curators would create playlists. But these recommendations were not objective, as they were dependent on the personal taste of the curators.
It was an average experience for listeners, with a fair share of hits and misses, because it was impossible to make a playlist which catered to the varied tastes of a diverse set of people. The technology and the data did not exist back then to build a playlist that would be personalized to the taste of each individual listener.
Along came Spotify a few years later, offering a highly personalized weekly playlist called Discover Weekly that quickly became one of their flagship offerings.
Every Monday, millions of listeners receive a fresh playlist of new song recommendations, customized to their personal tastes based on their listening history and the songs they’ve engaged with. Spotify uses a combination of different data aggregation and sorting methods to create their unique and powerful recommendation model that’s powered by machine learning.
“One of our flagship features is called Discover Weekly. Every Monday, we give you a list of 50 tracks that you haven’t heard before that we think you’re going to like. The ML engine that’s the main basis of it, and it’s advanced some since, had actually been around at Spotify a bit before Discover Weekly was there, just powering our Discover page” — David Murgatroyd, Machine Learning Leader at Spotify.
Spotify uses three forms of recommendation models to power Discover Weekly.
1. Collaborative Filtering
Collaborative Filtering is a popular technique used by recommender systems to make automated predictions about the preferences of users, based on the preferences of other similar users.
On Spotify, the collaborative filtering algorithm compares multiple user-created playlists that have the songs that users have listened to. The algorithm then combs those playlists to look at other songs that appear in the playlists and recommends those songs.
This framework is executed with matrix math in Python libraries. The algorithm first creates a matrix of all the active users and songs. The Python library then runs a series of complex factorization formulas on the matrix. The end result is two separate kinds of vectors: X is a user vector representing the taste of an individual user, and Y represents the profile of a single song. To find users with similar taste, collaborative filtering compares a given user vector against every other user vector and outputs the most similar ones. The same procedure is applied to the song vectors.
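Spotify’s production system is not public, but the core idea can be sketched in a few lines of numpy: factorize a play-count matrix into user and song vectors, then compare vectors with cosine similarity (the matrix below is a toy example):

```python
import numpy as np

# Toy play-count matrix: rows are users, columns are songs
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Factorize R ~= X @ Y.T with a truncated SVD: X holds user taste
# vectors and Y holds song profile vectors in a small latent space
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
X = U[:, :k] * s[:k]  # user vectors
Y = Vt[:k].T          # song vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank all users by similarity to user 0
sims = [cosine(X[0], X[u]) for u in range(len(X))]
print(np.argsort(sims)[::-1])
```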
Spotify does not only rely on collaborative filtering. The second recommendation model used is NLP.
2. Natural Language Processing
NLP is the ability of an algorithm to understand speech and text in real-time. Spotify’s NLP constantly trawls the web to find articles, blog posts, or any other text about music, to come up with a profile for each song.
With all this scraped data, the NLP algorithm can classify songs based on the kind of language used to describe them and can match them with other songs that are discussed in the same vein. Artists and songs are assigned classifying keywords based on the data, and each term has a certain weight assigned to it. Similar to collaborative filtering, a vector representation of the song is created, and that’s used to suggest similar songs.
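Again, the real pipeline is proprietary, but the gist of turning text about songs into comparable vectors can be sketched with scikit-learn’s TF-IDF vectorizer (the snippets below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented scraped snippets describing three songs
docs = [
    "dreamy synth-pop ballad with airy vocals",
    "aggressive guitar riffs and fast drums",
    "airy dream-pop track with soft synths",
]

vectors = TfidfVectorizer().fit_transform(docs)
# Songs 0 and 2 are described in similar language, so they score high
print(cosine_similarity(vectors[0], vectors[2]))
```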
3. Convolutional Neural Networks
Convolutional Neural Networks are used to hone the recommendation system and to increase accuracy because less-popular songs might be neglected by the other models. The CNN model ensures that obscure and new songs are considered.
The CNN model is most popularly used for facial recognition, and Spotify has configured the same kind of model for audio files. Each song is converted into a raw audio waveform. These waveforms are processed by the CNN, which assigns key parameters such as beats per minute, loudness, major/minor key, and so on. Spotify then tries to match songs that have parameters similar to the songs its listeners like listening to.
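As a rough illustration of the preprocessing step (Spotify’s actual pipeline is not public, and "song.mp3" is a hypothetical local file), a library like librosa can load the waveform, build a spectrogram-style input for a CNN, and estimate a tempo:

```python
import librosa
import numpy as np

y, sr = librosa.load("song.mp3", sr=22050)  # raw waveform

# CNNs usually consume a time-frequency image rather than raw samples
mel = librosa.feature.melspectrogram(y=y, sr=sr)
mel_db = librosa.power_to_db(mel, ref=np.max)

tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # beats-per-minute estimate
print(tempo, mel_db.shape)
```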
With these key machine learning models, Spotify is able to tailor a unique playlist of music that surprises its listeners every week with songs they would have never found otherwise.
A key problem in many machine learning models is the lack of access to clean, structured data that can be processed. Spotify has been able to circumvent that problem due to their access to massive amounts of data that they collect from their users. They’ve been able to shine as a great example of effective use of Machine Learning models to give their users an unrivaled personalized experience.
Saikat Bhattacharya is a Senior Software engineer at Freshworks, and is pursuing the PGP-Machine Learning program from Great Learning. This article originally appeared on Towards Data Science and has been syndicated with permission from the author.
Happy modelling! | https://medium.com/my-great-learning/machine-learning-models-great-learning-7c258eeb68b6 | ['Great Learning'] | 2019-08-27 10:35:03.344000+00:00 | ['Machine Learning', 'Technology', 'Artificial Intelligence', 'Marketing', 'Business'] | Title Machine Learning Classification ModelsContent brief guide Model Evaluation Techniques Machine Learning Machine Learning Classification Models Model Evaluation Techniques Machine Learning Classification Models Image courtesy Great Learning machine learning often use classification model get predicted result population data Classification one two section supervised learning deal data different category training dataset train model predict unknown label population data multiple algorithm Logistic regression Knearest neighbor Decision tree Naive Bayes etc algorithm style execution different technique prediction find suitable algorithm particular business problem model evaluation technique article different model evaluation technique discussed Confusion Matrix probably got name state confusion deal remember hypothesis testing may recall two error defined typeI typeII depicted Fig1 typeI error occurs null hypothesis rejected actual TypeII error occurs alternate hypothesis true failing reject null hypothesis Fig1 TypeI TypeII error figure 1 depicted clearly choice confidence interval affect probability error occur try reduce either error result increase one confusion matrix Fig2 Confusion Matrix Confusion matrix image given matrix representation result binary testing example let u take case predicting disease done medical testing help result test going predict whether person disease actually going validate hypothesis declaring person disease acceptable Say among 100 people predicting 20 people disease actual 15 people disease among 15 people diagnosed 12 people correctly put result confusion matrix look like following — Fig3 Confusion Matrix prediction disease compare fig3 fig2 find — True Positive 12 predicted positive case correctly True Negative 77 predicted negative case correctly False Positive 8 predicted people disease actually don’t worry rectified medical analysis low risk error typeII error case False Negative 3 Oh ho predicted three poor fellow fit actually disease dangerous careful typeI error case ask accuracy prediction model followed get result answer ratio accurately predicted number total number people 1277100 089 study confusion matrix thoroughly find following thing top row depicting total number prediction disease Among prediction predicted 12 people correctly disease actual ratio 12128 06 measure accuracy model detecting person disease called Precision model take first column column represents total number people disease actual predicted correctly 12 ratio 12123 08 measure accuracy model detect person disease people disease actual termed Recall may ask question need measure precision recall evaluate model answer highly recommended particular result sensitive example going build model bank predict fraudulent transaction common fraudulent transaction 1000 transaction may 1 transaction fraud undoubtedly model predict transaction nonfraudulent accurately case whole accuracy matter always high irrespective accuracy prediction fraudulent transaction low percentage whole population prediction fraudulent transaction nonfraudulent desirable case measurement precision take vital role evaluate model help understand actual fraudulent transaction many predicted low even overall accuracy high model acceptable Receiver Operating 
Characteristics ROC Curve Measuring area ROC curve also useful method evaluating model ROC ratio True Positive Rate TPR False Positive Rate FPR see fig2 disease detection example TPR measure ratio number accurate prediction people disease total number people disease actual FPR ratio number people predicted disease correctly total number people disease actual plot curve come like — Fig4 ROC curve source httpswwwmedcalcorgmanualroccurvesphp blue line denotes change TPR different FPR model ratio area curve total area 100 x 100 case defines accuracy model becomes 1 model overfit equal 05 ie curve along dotted diagonal line model inaccurate use classification model many evaluation method like Gain Lift chart Gini coefficient etc indepth knowledge confusion matrix help evaluate classification model effectively article tried demystify confusion around confusion matrix help reader Example Machine Learning Models Spotify us recommend music you’ll like early 2000s Songza implemented manual music recommendation system listener team music expert curator would create playlist recommendation objective dependent personal taste curator average experience listener fair share hit miss impossible make playlist catered varied taste diverse set people technology data exist back build playlist would personalized taste individual listener Along came Spotify year later offering highly personalized weekly playlist called Discover Weekly quickly became one flagship offering Every Monday million listener receive fresh playlist new song recommendation customized personal taste based listening history song they’ve engaged Spotify us combination different data aggregation sorting method create unique powerful recommendation model that’s powered machine learning “One flagship feature called Discover Weekly Every Monday give list 50 track haven’t heard think you’re going like ML engine that’s main basis it’s advanced since actually around Spotify bit Discover Weekly powering Discover page” — David Murgatroyd Machine Learning Leader Spotify Spotify us three form recommendation model power Discover Weekly 1 Collaborative Filtering Collaborative Filtering popular technique used recommender system make automated prediction preference user based preference similar user Spotify collaborative filtering algorithm compare multiple usercreated playlist song user listened algorithm comb playlist look song appear playlist recommends song framework executed matrix math Python library algorithm first creates matrix active user song Python library run series complex factorization formula matrix end result two separate vector X user vector representing taste individual user Vector represents profile single song find user similar taste collaborative filtering compare given user vector every single user vector give similar user vector output procedure applied song vector Spotify rely collaborative filtering second recommendation model used NLP 2 Natural Language Processing NLP ability algorithm understand speech text realtime Spotify’s NLP constantly trawl web find article blog post text music come profile song scraped data NLP algorithm classify song based kind language used describe match song discussed vein Artists song assigned classifying keywords based data term certain weight assigned Similar collaborative filtering vector representation song created that’s used suggest similar song 3 Convolutional Neural Networks Convolutional Neural Networks used hone recommendation system increase accuracy lesspopular song might neglected model 
CNN model ensures obscure new song considered CNN model popularly used facial recognition Spotify configured model audio file song converted raw audio file waveform wave form processed CNN assigned key parameter beat per minute loudness majorminor key Spotify try match similar song parameter song listener like listening key machine learning model Spotify able tailor unique playlist music surprise listener every week song would never found otherwise key problem many machine learning model lack access clean structured data processed Spotify able circumvent problem due access massive amount data collect user They’ve able shine great example effective use Machine Learning model give user unrivaled personalized experience Saikat Bhattacharya Senior Software engineer Freshworks pursuing PGPMachine Learning program Great Learning article originally appeared Towards Data Science syndicated permission author Happy modellingTags Machine Learning Technology Artificial Intelligence Marketing Business |
1,769 | The Clathrate Gun Hypothesis or Why Methane “Burping” is Cause for Concern | The Clathrate Gun has already been fired.
The video below shows a lead researcher breaking down while reading out the results of her study, which concludes that the Clathrate Gun has already been fired.
So what is the hypothesis specifically?
The Clathrate Gun Hypothesis suggests that the release of methane from the earth due to warming could cause a massive increase in temperatures within a lifetime (‘gun’ because, once fired, the process can’t be stopped; it is irreversible).
Now let’s break it down.
Methane.
There are a couple of things to know about methane. It’s a colorless greenhouse gas 24 times more potent than carbon dioxide. Because of that potency, methane can be a problem in the atmosphere even in small quantities.
Alright, next: Clathrates.
For our situation, methane in ‘ice’ = clathrate. Now, clathrates are stable in cold temperatures or under high pressure. One cubic meter of clathrate could release 164 cubic meters of methane. It’s quite a bit. To put that in perspective, that’s 43,324.2 gallons.
Simple enough.
So Methane Clathrate is simply ice holding a lot of methane within its crystal structure.
Next. Where is this thing found?
Methane clathrate is found in seabed permafrost. Essentially, it’s mainly found on the ocean floor; however, there isn’t a consensus on just how large these deposits are.
We’ll jump back in history to grasp the scale of this theory. Some scientists theorize that violent degassing may have affected the planet significantly in the past, suggesting it could have caused the Eocene hothouse period. Eventually, that period of great warmth gave way to a cooling climate.
A large extinction occurred near the end of the Permian period, about 250 million years ago. The damage to marine life was great; more than 94 percent of all species abruptly disappeared as oxygen levels sank. It took at least 20 million years, and in special cases over 100 million years, for environmental diversity to recover. All of this happened from a temperature increase of less than 6.5 degrees Celsius.
So how would methane clathrate figure into all of this?
The release of methane from methane clathrate could increase global temperatures. An increase in global temperatures would release even more of the compound, increasing temperatures further. An abrupt release could drastically impact the environment. Even if a runaway effect is unlikely, as some postulate, the release could still cause ocean acidification and alter the atmosphere. During one period of the Glacial Minimum, temperatures went up 6 degrees Celsius.
Side note: The release of methane from the compound can be referred to as methane degassing or “burping.”
So, the trapped methane from the seabeds may have caused the End-Permian Extinction. Alright, that covers all the basics. We’ll delve further into the issue.
By measuring a process called ebullition (bubbling), researchers could estimate the density of bubbles rising from the permafrost and found that 100–630 mg of methane per square meter is released daily from the East Siberian Shelf into the water column. This basically suggests that methane release is gradual, not abrupt. However, events such as Arctic cyclones could increase the rate of methane being released.
Another thing to consider is that clathrates do not exist only in seabed permafrost; they can also be found in the water column if the temperature is low enough. The methane can also be contained by a ‘lid’ of permafrost.
Picture: Chris Butler/Science Source
In light of this info, let’s investigate Snowball Earth. About 630 million years ago, it’s believed the earth’s surface was almost entirely frozen. The frozen ice sheets of the planet would’ve had quite a bit of methane trapped within. However, because these sheets were unstable, they would collapse after growing big enough, releasing methane into the atmosphere. Temperatures increased, melting more sheets, releasing more methane, and increasing temperatures further, bringing Snowball Earth to its end. The last ice age, however, is not thought to have been brought to its end by methane-related warming.
Many of the methane clathrate deposits are in sediments that are too deep to be released suddenly. Furthermore, in the overall scheme of warming, the effect of methane would not be drastic. This is because the deposits destabilize from the deepest part of their stability zone, which is usually 100 meters below the seabed. The methane will surface eventually, but not as rapidly as previously thought.
Our Current Situation
Photo by Annie Spratt on Unsplash
Around 2008, there was research in the Siberian Arctic which claimed millions of tons of methane were escaping through breaches in the seabed permafrost. In certain areas, the concentrations of methane hit 100 times normal levels.
There’s a release of 0.5 metric tons of methane per year. Also, 50 gigatons of it are at risk of being released at any moment. What if that happened? The amount of methane in the atmosphere would increase by a factor of 12.
Miscellaneous:
There’s also a trapped gas deposit off Canada in the Beaufort sea. Considered to be the shallowest known deposit of methane, it lies 290 meters below sea level.
Along the eastern continental slope of the United States, destabilizing methane hydrate can be found, about 2.5 gigatons’ worth. It’s still unclear whether it would reach the atmosphere.
Although there isn’t exactly a consensus on the Clathrate Gun (after all, it is a hypothesis), it’s nonetheless good to be informed about the current state of our planet, how it has evolved in the past, and how it may change in the future.
Thanks for reading! | https://eashanreddykotha.medium.com/the-clathrate-gun-hypothesis-or-why-methane-burping-is-cause-for-concern-bf664bc1723f | ['Eashan Reddy Kotha'] | 2020-07-24 17:33:33.968000+00:00 | ['Climate Change', 'Environment', 'Climate', 'Education', 'Science'] | Title Clathrate Gun Hypothesis Methane “Burping” Cause ConcernContent Clathrate Gun already fired video show lead researcher breaking reading result study concludes Clathrate Gun already fired hypothesis specifically Clathrate Gun Hypothesis suggests release methane earth due warming could cause massive increase temperature within lifetime ‘gun’ process can’t stopped process irreversible let’s break Methane couple thing know methane It’s colorless greenhouse gas 24 time potent carbon dioxide said methane problem atmosphere even small quantity Alright next Clathrates situation methane ‘ice’ clathrate clathrates stable cold temperature high pressure One cubic meter clathrate could release 164 cubic meter methane It’s quite bit put perspective that’s 433242 gallon Simple enough Methane Clathrate simply ice holding lot methane within crystal structure Next thing found Methane clathrate found seabed permafrost Essentially it’s mainly found ocean floor however isn’t consensus large deposit We’ll jump back history grasp scale theory Apparently scientist theorize violent degassing may affected planet significantly suggest could resulted Eocene hothouse period Eventually period great warmth gave way cooling climate large extinction occurred near end Permian period 250 million year ago damage marine life great 94 percent specie abruptly disappeared oxygen level sank took least 20 million year special case 100 million year environment diversity recover happened simply temperature increase le 65 degree Celsius would methane clathrate figure release methane methane clathrate could increase global temperature increase global temperature would release even compound increasing temperature abrupt release could drastically impact environment Even runaway effect unlikely postulated could still ocean acidification alter atmosphere period Glacial Minimum temperature went 6 degree Celsius Side note release methane compound referred methane degassing “burping” trapped methane seabed may caused EndPermian Extinction Alright cover basic We’ll delve issue using process called ebullition researcher could find density bubble permafrost found 100–630 mg methane per square meter released daily East Siberian Shelf water column basically suggests methane release gradual abrupt However event arctic cyclone could increase rate methane released Another thing consider clathrates also exist seabed permafrost found water temperature lower Also methane could contained ‘lid’ permafrost Picture Chris ButlerScience Source light info let’s investigate Snowball Earth 630 million year ago it’s believed earth’s surface almost entirely frozen frozen ice sheet planet would’ve quite bit methane trapped within However sheet unstable would collapse growing big enough releasing methane atmosphere Temperatures increased melting sheet releasing methane increasing temperature bringing Snowball Earth end thought last ice age brought end methanerelated warming Many methane clathrate deposit sediment deep released suddenly Furthermore overall scheme warming effect methane would drastic due fact destabilize deepest part stability zone usually 100 meter seabed surface eventually rapidly previously thought Current Situation Photo Annie Spratt Unsplash Around 2008 research 
Siberian Arctic claimed million ton methane escaping breach seabed permafrost certain area concentration methane hit 100 time normal level There’s release 5 metric ton methane per year Also 50 gigatons risked released moment happened amount methane planet would increase factor 12 Miscellaneous There’s also trapped gas deposit Canada Beaufort sea Considered shallowest known deposit methane lie 290 meter sea level Along eastern continental slope United States destabilizing methane hydrate found 25 gigatons worth It’s still unclear whether would reach atmosphere Although isn’t exactly consensus Clathrate Gun hypothesis it’s nonetheless good informed current state planet it’s evolved past may change future Thanks readingTags Climate Change Environment Climate Education Science |
1,770 | MLOps In Action: Training-serving skew | MLOps In Action
MLOps In Action: Training-serving skew
Training-serving skew is one of the most common problems when deploying ML models. This post explains what it is and how to prevent it.
A typical Machine Learning workflow
When training a Machine Learning model, we always follow the same series of steps:
1. Get data (usually from a database)
2. Clean it (e.g. fix/discard corrupted observations)
3. Generate features
4. Train model
5. Evaluate model
Once we clean the data (2), we apply transformations (3) to make the learning problem easier. Feature engineering is a particularly important task when working with tabular data and classic ML models (which is the most common setting in industry); the only exception is Deep Learning models, where there is little to no feature engineering. This post focuses on the former scenario.
When deploying a model, the pipelines look very similar, except we make predictions using a previously trained model after computing the features. However, not all deployments are equal; the two most common settings are:
1. Batch (e.g. make predictions for every user every week and upload predictions to a database)
2. Online (e.g. expose a model as a REST API to make on-demand predictions)
What is feature engineering?
Feature engineering is the set of statistically independent transformations that operate on a single (or group of) observation(s). In practical terms, it means that no information from the training set is part of the transformation.
This contrasts with some pre-processing procedures such as feature scaling, where information from the training set (i.e. mean and standard deviation) is used as part of the transformation (subtract mean, divide by standard deviation). These pre-processing methods are not feature engineering, but part of the model itself: mean and standard deviation are “learned” from the training set and then applied to the validation/test set.
In mathematical terms, we can express the feature engineering process as a function that transforms a raw input into another vector that is used to train the model.
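To make that concrete (notation mine): feature generation is a fixed function f applied to each raw observation,

x_features = f(x_raw)

and f carries no parameters estimated from the training set, which is exactly what separates it from pre-processing steps like scaling, whose parameters (mean, standard deviation) are learned from the training data.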
Training-serving skew
Ideally, we should re-use the same feature engineering code to guarantee that a given raw input maps to the same feature vector at training and serving time. If this does not happen, we have training-serving skew. One common reason for this is a mismatch of computational resources at training and serving time.
Imagine you are working on a new ML project and decide to write your pipeline using Spark. A few months later, you have the first version and are ready to deploy it as a microservice. It would be very inefficient to require your microservice to connect to a Spark cluster to make a new prediction; hence, you decide to re-implement all your feature engineering code using numpy/pandas to avoid any extra infrastructure. All of a sudden, you have two feature engineering codebases to maintain (spark and numpy/pandas). Given an input, you must ensure they produce the same output to avoid training-serving skew.
This is less of a problem with batch deployments, since you usually have the same resources available at training and serving time, but always keep this situation in mind. And whenever possible, use a training technology stack that can also be used at serving time.
If, for any reason, you cannot re-use your feature engineering training code, you must test for training-serving skew before deploying a new model. To do this, pass your raw data through both feature engineering pipelines (training and serving), then compare the outputs: every raw input vector should map to the same output feature vector.
Note that training-serving skew is not a universally defined term. For the purpose of this post, we limit the definition to a discrepancy between the training and serving feature engineering code.
Re-using code at training and serving time
If you are able to re-use feature engineering at training and serving time, you must ensure the code is modular so you can integrate it in both pipelines easily; the next sections present three ways of doing so.
Solution 1: Abstract feature engineering in a function
The simplest approach is to abstract your feature engineering code in a function and call it at training and serving time:
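A minimal sketch of what such a function can look like (the column names and transformations here are illustrative assumptions, not taken from the original post):

import numpy as np
import pandas as pd


def generate_features(raw_df: pd.DataFrame) -> pd.DataFrame:
    """Map raw observations to model features.

    Deliberately stateless: nothing here is estimated from the training
    set, so the exact same function can run in the training pipeline and
    inside the serving microservice.
    """
    features = pd.DataFrame(index=raw_df.index)
    # log-transform a skewed numeric column (hypothetical column name)
    features['price_log'] = np.log1p(raw_df['price'])
    # turn a timestamp into an elapsed-days feature (hypothetical column name)
    signup = pd.to_datetime(raw_df['signup_date'], utc=True)
    features['days_since_signup'] = (pd.Timestamp.now(tz='UTC') - signup).dt.days
    return features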
Then, call your generate_features function in your microservice code; here's an example if using Flask:
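A sketch under two assumptions: the trained model is a scikit-learn-style estimator saved with joblib, and generate_features lives in a shared module (both names hypothetical):

import joblib
import pandas as pd
from flask import Flask, jsonify, request

from features import generate_features  # hypothetical shared module

app = Flask(__name__)
model = joblib.load('model.joblib')  # hypothetical artifact path


@app.route('/predict', methods=['POST'])
def predict():
    # a single raw observation arrives as JSON; wrap it in a one-row frame
    raw_df = pd.DataFrame([request.get_json()])
    features = generate_features(raw_df)  # the exact same code as training
    prediction = model.predict(features)[0]
    return jsonify({'prediction': float(prediction)})

Because both pipelines import the same generate_features, a given raw input maps to the same feature vector at training time and in the API, which is the whole point of this solution.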
Solution 2: Use a workflow manager
While simple, the first solution above does not offer a great development experience. Developing features is a highly iterative process. The best way to accelerate this is via incremental builds, which keep track of source code changes and skip redundant work.
Say, for example, you have 20 features that are independent of each other; if you modify one of them, you can skip the rest, since they'll produce results identical to the previous run.
Workflow managers are frameworks that allow you to describe a graph of computations (such as feature engineering code). There are many options to choose from; unfortunately, only a few of them support incremental builds. Ploomber is one of them; other options are DVC Pipelines and drake for R.
To enable incremental builds, workflow managers save results to disk, and load them if the source code hasn’t changed. In production, you usually want to perform in-memory operations exclusively because disk access is slow. As far as I know, Ploomber is the only workflow manager that allows you to convert a batch-based pipeline to an in-memory one for deployment without any code changes.
Solution 3: Use a feature store
Another solution is to use a feature store, which is an external system that pre-computes features for you. You only need to fetch the ones you want for model training and serving. Here’s an example using Feast, an open-source feature store:
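A sketch following the Feast Python API as of recent releases; the feature references and entity names are hypothetical:

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # directory containing feature_store.yaml

FEATURES = [
    "user_stats:purchases_7d",       # hypothetical feature references
    "user_stats:avg_basket_value",
]

# training time: point-in-time-correct join against an entity dataframe
entity_df = pd.DataFrame({
    "user_id": [1001, 1002],
    "event_timestamp": pd.to_datetime(["2021-05-01", "2021-05-02"]),
})
training_df = store.get_historical_features(
    entity_df=entity_df, features=FEATURES,
).to_df()

# serving time: fetch the latest pre-computed values for one entity
online_features = store.get_online_features(
    features=FEATURES, entity_rows=[{"user_id": 1001}],
).to_dict()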
A feature store is a great way to tackle training-serving skew. It also reduces development time, since you only need to code a feature once. This is a great solution for batch deployments and some online ones. If your online model expects user-submitted input data (e.g. a model that classifies images from a user’s camera), a feature store is unfeasible.
The main caveat is that you need to invest in maintaining the feature store infrastructure, but if you are developing many models that can benefit from the same set of features, it is worth considering this option.
Found an error? Click here to let us know. | https://towardsdatascience.com/training-serving-skew-77d947c4c100 | ['Eduardo Blancas'] | 2020-12-29 13:11:42.567000+00:00 | ['Mlops', 'Machine Learning', 'Python', 'AI', 'Data Science'] | Title MLOps Action Trainingserving skewContent MLOps Action MLOps Action Trainingserving skew Trainingserving skew one common problem deploying ML model post explains prevent typical Machine Learning workflow training Machine Learning model always follow series step Get data usually database Clean eg fixdiscard corrupted observation Generate feature Train model Evaluate model clean data 2 apply transformation 3 make learning problem easier Feature engineering particularly important task working tabular data classic ML model common setting industry exception Deep Learning model little feature engineering post focus former scenario deploying model pipeline look similar except make prediction using previously trained model computing feature However deployment equal two common setting Batch eg make prediction every user every week upload prediction database Online eg expose model REST API make ondemand prediction feature engineering Feature engineering set statistically independent transformation operate single group observation practical term mean information training set part transformation contrast preprocessing procedure feature scaling information training set ie mean standard deviation used part transformation subtract mean divide standard deviation preprocessing method feature engineering part model mean standard deviation “learned” training set applied validationtest set mathematical term express feature engineering process function transforms raw input another vector used train model Trainingserving skew Ideally reuse feature engineering code guarantee given raw input map feature vector training serving time happen trainingserving skew One common reason mismatch computational resource training serving time Imagine working new ML project decide write pipeline using Spark month later first version ready deploy microservice would inefficient require microservice connect Spark cluster make new prediction hence decide reimplement feature engineering code using numpypandas avoid extra infrastructure sudden two feature engineering codebases maintain spark numpypandas Given input must ensure output avoid trainingserving skew le problem batch deployment since usually resource available training serving time always keep situation mind whenever possible use training technology stack also used serving time reason cannot reuse feature engineering training code must test trainingserving skew deploying new model pas raw data feature engineering pipeline training serving compare output raw input vector map output feature vector Note trainingserving skew universally defined term purpose post limit definition discrepancy training serving feature engineering code Reusing code training serving time able reuse feature engineering training serving time must ensure code modular integrate pipeline easily next section present three way Solution 1 Abstract feature engineering function simplest approach abstract feature engineering code function call training serving time call generatefeatures function microservice code here example using Flask Solution 2 Use workflow manager simple first solution offer great development experience Developing feature highly iterative process best way accelerate via incremental build keep track source code change skip redundant work Say example 20 
feature independent modify one skip rest since they’ll produce identical result executed Workflow manager framework allow describe graph computation feature engineering code many option choose unfortunately support incremental build Ploomber one option DVC Pipelines drake R enable incremental build workflow manager save result disk load source code hasn’t changed production usually want perform inmemory operation exclusively disk access slow far know Ploomber workflow manager allows convert batchbased pipeline inmemory one deployment without code change Solution 3 Use feature store Another solution use feature store external system precomputes feature need fetch one want model training serving Here’s example using Feast opensource feature store feature store great way tackle trainingserving skew also reduces development time since need code feature great solution batch deployment online one online model expects usersubmitted input data eg model classifies image user’s camera feature store unfeasible main caveat need invest maintaining feature store infrastructure developing many model benefit set feature worth considering option Found error Click let u knowTags Mlops Machine Learning Python AI Data Science |
1,771 | Likes, Comments, Shares Aren’t a Reliable Proxy for Success, Period. | On social platforms like Facebook, engagements — likes, comments, shares, and click-throughs — have long dominated the conversation around the particular success of a post. Yet, as Facebook writes:
“Online engagement metrics are a proxy for interest, but they are not a reliable indicator of the content’s persuasiveness. Persuasive content influences your audience in a way that helps move your business.”
In reality, these engagements do not effectively correlate with business results for brand content. Facebook research has found that “content doesn’t need to be persuasive to elicit an engagement” and, inversely, that “not all persuasive content elicits an engagement.”
Campaigns that wish to drive brand awareness cannot therefore be measured by the level of engagement, as a potential customer can inherently notice and be influenced by that content without interacting with it. In fact, a 2012 Facebook & Datalogix ROI study found that:
“more than 90% of offline sales come from people who don’t interact with ads during the campaign.”
Engagement on Facebook — When it Matters
Engagements Don’t Represent Your Audience
One of the biggest issues with engagements is that they may be more indicative of a user’s behaviors (e.g. a “clicky” user) than of the effectiveness of the content. In fact, analysis of content from major brands at BBDO increasingly shows that those engaging with content registered outside of the target audience the brand wished to impact.
For example, we took real-life Brand X and looked at the demographics of people who engaged with its promoted content, which included all formats across Facebook and Instagram.
While this brand’s target skewed relatively young, data showed that the engagement rate for those under 34 was at 3%, but the highest engagers were 64+ at 22%, with the engagement rate increasing dramatically with every new age block. However, when we examined Estimated Ad Recall Lift by age, we found all blocks consistently performed at or above benchmarks.
Depending on who you are targeting, engagements may be more of a red herring than they are an indicator of positive responses from your target.
Invest in Equitable Measures of Success
As brands shift away from shiny engagement metrics on Facebook, it is essential to invest in studies that can more concretely and more thoroughly measure the impact of campaign efforts on consumers.
Nielsen Brand Effects studies, for instance, have long been used to analyze the impact of Facebook ads in key brand metrics. Consumers see a piece of advertising and shortly thereafter answer a survey to help determine the impact of an ad in shifting awareness, attitudes, favorability, intent, or preference.
Brands also have the ability to track the effect of their work in driving business objectives through studies such as Datalogix or marketing mix modeling. Datalogix studies can help marketers understand how their Facebook spends impact offline sales by matching purchasing data for 70 million American households via loyalty cards and programs. By pulling and anonymizing information associated with their Facebook accounts, marketers can start to see the difference in sales when someone is exposed to a Facebook ad.
Marketing mix modeling studies can also pinpoint the value of social marketing in driving business objectives, but are not available in the short term.
However, as short-term metrics continue to dominate marketing, brands have the opportunity to track metrics that demonstrate the largest probability of success — namely the 10 Second Retention Rate and Estimated Ad Recall Lift.
10-SECOND RETENTION RATE:
The Facebook Marketing Science team commissioned Nielsen to analyze the value of Facebook video in driving three key brand metrics: lifts in ad recall, brand awareness, and purchase intent.
Initial data analysis showed that from the moment a video was viewed, there were statistical lifts across each of the three metrics, even amongst those who did not watch the video but did see the impression.
Further investigation then focused on how video duration potentially impacted the metrics outlined. The results revealed a noticeable lift in cumulative impact when viewers were retained to the :3 second mark. The most statistically significant results, however, came from users who were retained to the :10 mark, with massive lifts seen across ad recall, brand awareness, and purchase intent.
The longer a user is engaged with a piece of content, the larger the effect. Yet the strong correlation between 10-second retention and impact-led metrics demonstrates a huge opportunity for marketers to measure effectiveness and optimize their work on the platform in the short game.
ESTIMATED AD RECALL LIFT:
Marketers can also track a Facebook-calculated proxy metric known as “estimated ad recall lift” (EARL), which measures the impact of ads on driving ad recall by comparing the reach of an ad, coupled with the relative time users spend looking at the ad. This is then weighted against historical data for ad recall taken from 300 previous campaigns. This metric offers a more effective proxy for real-time measurement of the lift in ad recall a brand can expect to gain from a campaign. Importantly, EARL normalizes for users’ scrolling habits, so the quick scroll a younger user might be used to and the slower scroll of an older user are taken into account, along with historical ad recall lift data.
While an attractive measurement, brands should not rely on this metric alone, given that estimates employ a degree of probability.
To gain a more robust look at the effectiveness of work in driving awareness and return on investment, further tracking studies should be employed alongside.
This article is part three of a five part series highlighting BBDO Comms Planning’s latest report, About Face: A New Approach to Facebook for Big Brands.
To download this white paper, click here. | https://medium.com/comms-planning/likes-comments-shares-arent-a-reliable-proxy-for-success-period-65426c2ea524 | ['James Mullally'] | 2016-09-28 14:17:01.124000+00:00 | ['Advertising', 'Measurement', 'Marketing', 'Facebook', 'Social Media'] | Title Likes Comments Shares Aren’t Reliable Proxy Success PeriodContent social platform like Facebook engagement — like comment share clickthroughs long dominated conversation around particular success post Yet Facebook writes “Online engagement metric proxy interest reliable indicator content’s persuasiveness Persuasive content influence audience way help move business” reality engagement effectively correlate business result brand content Facebook research found “content doesn’t need persuasive elicit engagement” inversely “not persuasive content elicits engagement” Campaigns wish drive brand awareness cannot therefore measured level engagement potential customer inherently notice influenced content without interacting fact 2012 Facebook Datalogix ROI study found “more 90 offline sale come people don’t interact ad campaign” Engagement Facebook — Matters Engagements Don’t Represent Audience One biggest issue engagement may indicative user’s behavior eg “clicky” user rather effectiveness content fact analysis content major brand BBDO increasingly show engaging content registered outside target audience brand wished impact example took reallife Brand X looked demographic people engaged promoted content included format across Facebook Instagram brand’s target skewed relatively young data showed engagement rate 34 3 highest engagers 64 22 engagement rate increasing dramatically every new age block However examined Estimated Ad Recall Lift age found block consistently performed benchmark Depending targeting engagement may red herring indicator positive response target Invest Equitable Measures Success brand shift away shiny engagement metric Facebook essential invest study concretely thoroughly measure impact campaign effort consumer Nielsen Brand Effects study instance long used analyze impact Facebook ad key brand metric Consumers see piece advertising shortly thereafter answer survey help determine impact ad shifting awareness attitude favorability intent preference Brands also ability track effect work driving business objective study Datalogix marketing mix modeling Datalogix study help marketer understand Facebook spends impact offline sale matching purchasing data 70 million American household via loyalty card program pulling anonymizing information associated Facebook account marketer start see difference sale someone exposed Facebook ad Marketing mix modeling study also pinpoint value social marketing driving business objective available short term However shortterm metric continue dominate marketing brand opportunity track metric demonstrate largest probability success — namely 10 Second Retention Rate Estimated Ad Recall Lift 10SECOND RETENTION RATE Facebook Marketing Science commissioned Nielsen analyze value Facebook video driving three key brand metric lift ad recall brand awareness purchase intent Initial data analysis showed moment video viewed statistical lift across three metric even amongst watch video see impression investigation focused video duration potentially impacted metric outlined result revealed notice lift cumulative impact viewer retained 3 second mark statistically significant result however came user retained 10 mark massive lift seen across ad recall brand awareness purchase 
intent longer user engaged piece content larger effect Yet strong correlation 10second impactled metric demonstrates huge opportunity marketer measure effectiveness optimizes work platform shortgame ESTIMATED AD RECALL LIFT Marketers also track Facebookcalculated proxy metric known “estimated ad recall lift” EARL measure impact ad driving ad recall comparing reach ad coupled relative time user spend looking ad weighted historical data ad recall taken 300 previous campaign metric offer effective proxy realtime measurement lift ad recall brand expect gain campaign Importantly EARL normalizes users’ scrolling habit quick scroll younger user might used slower scroll older user taken account along historical ad recall lift data attractive measurement brand rely metric alone given estimate employ degree probability gain robust look effectiveness work driving awareness return investment tracking study employed alongside article part three five part series highlighting BBDO Comms Planning’s latest report Face New Approach Facebook Big Brands download white paper click hereTags Advertising Measurement Marketing Facebook Social Media |
1,772 | TOP 5 — Must Read for every CEO.. IMPORTANT NOTE BEFORE I GET STARTED… | IMPORTANT NOTE BEFORE I GET STARTED: This is my first list of my own. I consider myself an avid reader and technologist. I prefer reading books at a bookstore or at my study table rather than on Kindle or Google. I work with an IT consulting firm, Cloud Certitude, which typically works in the SMB and mid-market space.
Hit Refresh — by Satya Nadella, CEO Microsoft
Though I’m not a big Microsoft fan, this book is all about individual change, the transformation that happened inside Microsoft, and the arrival of the most exciting and disruptive wave of technology humankind has experienced — including artificial intelligence, mixed reality and quantum computing. One of the lines I loved most was: “Ideas excite me. Empathy grounds and centers me.”
2. METAHUMAN — Unleashing your infinite potential by Dr. DEEPAK CHOPRA
Well, I would say this is the most controversial book in the entire list. I met Dr. Deepak in SFO, in the United States, for a brief discussion and became a big fan of his. In this brilliant book, Dr. Deepak successfully argues that consciousness is the sole creator of self, mind, brain, body and the universe as we know it. METAHUMAN is a brilliant vision of human potential and of how we can move beyond the limitations, concepts, and stories created by the mind.
3. THE TECH WHISPERER — ON DIGITAL TRANSFORMATION AND THE TECHNOLOGIES THAT ENABLE IT BY JASPREET BINDRA.
This book is all about digital transformation and the technologies that can enable it. Companies across the world are being buffeted by new technologies, disruptive business models and start-up innovation. Business leaders know that they need to adopt new technologies like blockchain, Artificial Intelligence (AI) and the Internet of Things (IoT), using them to keep pace with rapid customer and business environment changes. My favourite chapter of the book is Brahma and Business Models.
4. HOW TO CREATE A MIND — THE SECRET OF HUMAN THOUGHTS REVEALED
RAY KURZWEIL presents a provocative exploration of the most important project in human-machine civilization: reverse-engineering the brain to understand precisely how it works, and using that knowledge to create even more intelligent machines. Kurzweil also explains how the brain functions, how the mind emerges, brain-computer interfaces, and the implications of vastly increasing the power of our intelligence to address the world’s complex problems. I would say every page of this book is unique and inspiring.
5. Customer — Support — Focused — Driven — Obsessed — A whole company approach delivering exceptional customer experiences. GET AHEAD OF THE CUSTOMER EXPERIENCE CURVE
This book is for companies that define success through business outcomes and put customers at the center of their business to realize sustainable, continuous growth. Customer experience is a key driver of technical innovation and business success — Customer Obsessed teaches organizations how to leverage it across all levels of the organization to sustain competitive advantage in the digital era. | https://medium.com/doctorsalesforce/top-5-must-read-for-every-ceo-32c758784987 | ['Sumit Mattey'] | 2020-02-28 18:40:52.300000+00:00 | ['CEO', 'Artificial Intelligence', 'AI', 'Microsoft', 'Mindfulness'] | Title TOP 5 — Must Read every CEO IMPORTANT NOTE GET STARTED…Content IMPORTANT NOTE GET STARTED first list consider evid reader technologist hate reading book Kindle Google rather bookstore study table work Consulting firm — Cloud Certitude work typically SMB Market Mid Market Hit Refresh — Satya Nadella CEO Microsoft Though I’m big Microsoft fan book individual change transformation happened inside Microsoft arrival exciting disruptive wave technology humankind experienced — including artificial intelligence mixed reality quantum computing One finest word loved “Ideas Excites Me” Empathy ground centre me” 2META HUMAN — Unleashing infinite potential Dr DEEPAK CHOPRA Well would say controversial book entire list met Dr Deepak United States SFO brief discussion become big fan brilliant book Dr Deepak successfully argues consciousness sole creator self mind brain body universe know METAHUMAN brilliant vision human potential move beyond limitation concept story created mind 3 TECH WHISPERER — DIGITAL TRANSFORMATION TECHNOLOGIES ENABLE JASPREET BINDRA book Digital Transformations technology enable Companies across world buffeted new technology disruptive business model start innovation Business leader know need adopt new technology like blockchain Artificial Intelligence AI Internet Things IoT using keep pace rapid customer business environment change loved chapter book Brahma Business Models 4 CREATE MIND — SECRET HUMAN THOUGHTS REVEALED RAY KURZWEIL prevents provocative exploration important project human — machine civilization reverse — engineering brain understand precisely work using knowledge create even intelligent machine Kurzweil also explained brain function mind emerges brain computer interface implication vastly increasing power intelligence address world’s complex problem would say every page unique inspiring 5 Customer — Support — Focused — Driven — Obsessed — whole company approach delivering exceptional customer experience GET AHEAD CUSTOMER EXPERIENCE CURVE book company defines success business outcome put customer center business realize sustainable continuous growth Customer experience key driver technical innovation business success — customer obsessed teach organization leverage across level organization sustain competitive advantage digital eraTags CEO Artificial Intelligence AI Microsoft Mindfulness
1,773 | Super Simple React Native Redux Example | Inspired by http://blog.tylerbuchea.com/super-simple-react-redux-application-example/
In this article we explore the barest of solutions to get started with React Native + Redux. The only pre-requisite to the below is to have “create-react-native-app” installed (https://facebook.github.io/react-native/docs/getting-started.html)
Setup
create-react-native-app superSimple
cd superSimple
npm install --save redux react-redux
redux.js
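A minimal sketch of what this file typically contains for a setup like this (the counter state shape and action name are assumptions, since the original gist is not reproduced here):

import { createStore } from 'redux';

// action creator
export const increment = () => ({ type: 'INCREMENT' });

// reducer: a counter is the smallest useful example
const counter = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
};

// store, imported by App.js below
export default createStore(counter);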
App.js
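Likewise, a sketch of the expected wiring for App.js (component names are illustrative):

import React from 'react';
import { Text, TouchableOpacity } from 'react-native';
import { Provider, connect } from 'react-redux';
import store, { increment } from './redux';

// presentational component: shows the count, dispatches on press
const Counter = ({ count, increment }) => (
  <TouchableOpacity onPress={increment}>
    <Text>Count: {count}</Text>
  </TouchableOpacity>
);

// connect maps store state and the action creator into props
const ConnectedCounter = connect(
  state => ({ count: state.count }),
  { increment },
)(Counter);

export default function App() {
  return (
    <Provider store={store}>
      <ConnectedCounter />
    </Provider>
  );
}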
Notes | https://medium.com/david-vassallos-blog-posts/super-simple-react-native-redux-example-f0db89e7338 | ['David Vassallo'] | 2018-07-09 14:12:36.385000+00:00 | ['React', 'React Native', 'Reactjs', 'JavaScript', 'Development'] | Title Super Simple React Native Redux ExampleContent Inspired byhttpblogtylerbucheacomsupersimplereactreduxapplicationexample article explore barest solution get started React Native Redux prerequisite “createreactnativeapp” installed httpsfacebookgithubioreactnativedocsgettingstartedhtml Setup createreactnativeapp superSimple cd superSimple npm install save redux reactredux reduxjs Appjs NotesTags React React Native Reactjs JavaScript Development |
1,774 | What's the Role of Fiction in Social Change? | We all love a good apocalyptical story. When I was a teenager, I used to love the comic book Tank Girl. Set in a post-apocalyptical world where water was scarce, Tank Girl fought together with mutant kangaroos against the Water & Power Corporation.
In the "apocalyptic" genre, we have major pieces of work picturing the absurd structures and technologies we create. They show how high the stakes are if we don’t take care of our reality. Apart from big explosions and heroic scenes, sci-fi and the like bring about social criticism, delivering a digestible bite of reflection on the crazy world we have created. Published 70 years ago, 1984 is a great example of reflection coming from fiction. It's a piece that we, who were born before the 90’s, got exposed to long before cable internet existed. Up until today, it is impossible to talk about it without comments on the impact of government surveillance on free will or making a parallel to what is happening in society today.
On the other hand, take Blade Runner 2049. Besides the beautiful images and engaging plot, as most sci-fi goes, it also poses important social criticisms and a sad perspective of what the (not so far) future could hold. Implications of the technology we are building on the environment and society are something we can no longer ignore. Moral issues in genetic engineering, corporate power, consumer culture, they are all there. Yet, we barely talk about these things in conjunction with the film. These are only pieces that make the story exciting. Like nice furniture in a living room.
We are now so bombarded with entertainment and embedded in a consumer culture that our capacity to analyze seems to have changed tremendously. All we see in Blade Runner 2049 is a great piece of entertainment. Together with the Netflix & chill phenomena, it seems as if fiction has been losing its ability to make us question, review beliefs and set actions (that is, if we believe that fiction ever had such potential). If anything, fiction seems to be a coping mechanism, to relax after our workday.
Dream, baby, dream
It’s not only that we reduce great works of art to space-out-of-reality couch entertainment. My issue is how we have come to over-romanticize everything and normalize the apocalypse. The fight is lost even before it has begun.
Going to Mars to save our lives becomes a reachable dream, while stopping plastic consumption and changing our eating behaviors to save our planet becomes a big fat annoyance. Quite a few friends have told me that they are fascinated by the sky and the potential of SpaceX, but that they are terrified to death of what is under our oceans and of seeing Blue Planet II. That is, after all, the point of romanticizing something: not caring about what we have and dreaming of what we might never have. Dreaming doesn’t require much energy anyway, whereas maintaining and improving our situation does.
Tired of our 9–5 jobs and defeated by our dreams not coming true, we watch fiction, fantasizing about being the main character. In an apocalyptic scenario, sure, we could totally be Ryan Gosling in Blade Runner 2049. In this romantic view, we have a central role. We kill people, we have power. The thing is… of all the 7.4 billion people in this world, are you and I really the ones for the job? If this was war, chances are that you and I would be a mere casualty on the street, shot in the head while fighting for a few coins to get water, while someone else (a lot stronger, more important, with better contacts and more money) looked for replicants and killed us instead.
“You are not Denzel,” repeats Chris D’Elia on his show Man on Fire on Netflix over and over again. His constant cry is a reminder of the absurd situation we find ourselves in.
As I was growing up, films like The Matrix, Fight Club and Avatar triggered people around me, bringing up questions about the broken society we have built and our distorted relationship with nature. Maybe we did not do anything serious about these questionings, anyway. When Black Mirror was launched, we were impressed by its harsh critique and what the future could entail, but we were sure we would never get to any of those social absurdities. Now officers have to fight with people who, instead of helping a teenager, film him drowning.
Where do we go from here?
We don’t create things from nowhere; we imagine them first and then we work on them. Fiction is a rich pot of concrete technological developments and images to take inspiration from. What happens when we can neither write about a better world nor stop to analyze the subplots of apocalyptic stories? We create a dreadful narrative that is going to be normalized. In a world with so much content and overwhelming busyness, who has time and willingness to think critically about the film they watched and make a parallel to their lives? If we are awake by the time the film finishes, we will probably just consume another piece of content.
When chaos is normalized, everything is just entertainment to forget a hard day’s work, and Netflix sees our time sleeping as competition. The only thing that is clear is that if there is ever going to be a revolution, it is not going to be consumer-led.
But not everything is shit. What brings me hope is the fact that most of us want to be the hero in the movie, not the villain. That points to the existence of something inside us that wants to be more than we are, that wants to act, do something for the common good. The question is, how do we wake it up, how does the voice go from a whisper to a shout?
Yes, it is a big role, where no one is to blame and yet everyone is responsible. In the era of endless bread and circuses, I see that content creators could be more aware of their role in creating narratives. Create stories set in utopias where we would actually like to live, help society visualize a different future. Otherwise, we are doomed to live a life of bread and circuses, if we are not yet. | https://medium.com/literally-literary/whats-the-role-of-fiction-in-social-change-564497f91125 | ['Aline Müller'] | 2020-02-05 02:28:42.938000+00:00 | ['Essay', 'Science Fiction', 'Writing', 'Society', 'Fiction Writing'] | Title Whats Role Fiction Social ChangeContent love good apocalyptical story teenager used love comic book Tank Girl Set postapocalyptical world water scarce Tank girl fought together mutant kangaroo Water Power Corporation apocalyptic genre major piece work picturing absurd structure technology create show high stake don’t take care reality Apart big explosion heroic scene scifi like bring social criticism delivering digestible bite reflection crazy world created Published 70 year ago 1984 great example reflection coming fiction piece born 90’s got exposed long cable internet existed today impossible talk without comment impact government surveillance free making parallel happening society today hand take Blade Runner 2049 Besides beautiful image engaging plot scifi go also pose important social criticism sad perspective far future could hold Implications technology building environment society something longer ignore Moral issue genetic engineering corporate power consumer culture Yet barely talk thing conjunction film piece make story exciting Like nice furniture living room bombarded entertainment embedded consumer culture capacity analyze seems changed tremendously see Blade Runner 2049 great piece entertainment Together Netflix chill phenomenon seems fiction losing ability make u question review belief set action believe fiction ever potential anything fiction seems coping mechanism relax workday Dream baby dream It’s reduce great work art spaceoutofreality couch entertainment issue come overromanticize everything normalize apocalypse fight lost even began Going Mars save life becomes reachable dream stopping plastic consumption changing eating behavior save planet big fat annoyance Quite friend told fascinated sky potential SpaceX terrified death ocean seeing Blue Planet II point romanticizing something caring dreaming might never Dreaming doesn’t require much energy anyway whereas maintaining improving situation Tired 9–5 job defeated dream coming true watch fiction fantasizing main character apocalyptic scenario sure could totally Ryan Gosling Blade Runner 2049 romantic view central role kill people power thing is… 74 billion people world really one job war chance would mere casualty street shot head fighting coin get water someone else lot stronger important better contact money looked replicants killed u instead “You Denzel” repeat Chris D’Elia show Man Fire Netflix constant cry reminder absurd situation find growing film like Matrix Fight Club Avatar triggered people around bringing question broken society built distorted relationship nature Maybe anything serious questioning anyway Black Mirror launched impressed harsh critic future could entail sure would never get social absurdity officer fight people instead helping teenager film drowning go don’t create thing nowhere imagine first work Fiction rich pot concrete technological development image take inspiration 
happens can’t either write better world stop analyze sublots apocalyptic story create dreadful narrative going normalized world much content overwhelming busyness time willingness think critically film watched make parallel life awake time film finish probably consume another piece content chaos normalized everything entertainment forget hard day’s work Netflix see time sleeping competition thing clear ever going revolution going consumerled everything shit brings hope fact u want hero movie villain point existence something inside u want want act something common good question wake voice go whisper shout Yes big role one blame yet everyone responsible era endless bread circus see content creator could aware role creating narrative Create story set utopia would actually like live help society visualize different future Otherwise doomed live life bread circus yetTags Essay Science Fiction Writing Society Fiction Writing |
1,775 | How to Create Eye-Catching Maps With Python and Kepler.gl | How to Create Eye-Catching Maps With Python and Kepler.gl
Use this intuitive tool to simplify mapping
In this article, we’ll explore Kepler.gl, an open-source solution for geospatial data visualization and exploration. Kepler was developed by Uber to make it easier for users of all levels to design meaningful maps that also look good. The tool can handle large amounts of data and has a friendly, intuitive interface that allows users to build effective maps in an instant.
Available for all to use since 2018, it’s about time we get a closer look at how the tool fits into the data visualization landscape. In this article, we’ll cover the basics of importing data to Kepler using Python’s Pandas and GeoPandas, how to design your visualization, and export the map to an HTML file.
Vancouver Number of Graffitis by Block
Getting Started
The dataset for this example is NOAA’s Global Significant Earthquakes dataset. [Kaggle]
Import statements.
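Presumably along these lines, given the calls used below (pd.read_csv and keplergl.KeplerGl):

import pandas as pd
import keplergl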
Pandas
I’m interested in looking at the intensity of the earthquakes and if they generated a tsunami, but those aren’t the only values we’ll need. We also need some Geolocation.
# read csv
df = pd.read_csv('data/Worldwide-Earthquake-database.csv')
In this dataset, our geolocation is stored in two fields, Latitude and Longitude.
Those will be essential for Kepler to draw our data, so we need to make sure all those values are clean and usable.
# lat and lon to numeric, errors converted to nan
df['LONGITUDE'] = pd.to_numeric(df.LONGITUDE, errors='coerce')
df['LATITUDE'] = pd.to_numeric(df.LATITUDE, errors='coerce')

# drop rows with missing lat, lon, and intensity
df.dropna(subset=['LONGITUDE', 'LATITUDE', 'INTENSITY'], inplace=True)

# convert tsunami flag from string to int
df['FLAG_TSUNAMI'] = [1 if i=='Yes' else 0 for i in df.FLAG_TSUNAMI.values]
After loading the data to Pandas, we can use .to_numeric to make sure they’re numbers, then use .dropna to remove the empty values.
You can also convert the TSUNAMI_FLAG from yes and no to 1 and 0.
Cleaning and preparation are up to your needs; you may have different requirements or use other tools for that, but once your data is in a Pandas data frame, you can map it.
Kepler.gl
Kepler is straightforward. It gives you a world map and tools to build the visualization; it expects the data, and the configurations of the map.
Let’s start by defining a map. (I’m using Kepler for Jupyter)
kepler_map = keplergl.KeplerGl(height=400)
kepler_map
Default map without data.
Then we add the data frame to it.
kepler_map.add_data(data=df, name="earthquakes")
Map marking the earthquakes.
And the map is updated. Quite easy!
You can load your data to Kepler with Pandas and Geopandas, which support a more comprehensive array of extensions, or directly from GeoJSON and CSV files.
Design
On the top left of the map, there’s an arrow that opens the settings menu.
Settings Menu
On the menu we have:
Layers — Defines how the variables are encoded to the map
Filters — For selecting smaller sets of data
Interactions — Defines interactions such as Tooltips, search boxes, and others
Basemap — Defines the style of the world map and other elements like labels, roads, styles
Layers
You can select an existing layer or create a new one, then click the ellipsis beside Basic. That’ll open a selection of different encodings for your map; try selecting Hexbin for the next example.
1,776 | Self-Driving Cars Aren’t Just About Safety | Self-Driving Cars Aren’t Just About Safety
Time, energy, human life quality- FSD would improve all of these.
Photo by Bram Van Oost on Unsplash
One thing that you hear a lot nowadays is how stressed out people are. There’s a good number of reasons for this. So good, in fact, that I probably don’t need to spell them out. You’re probably well aware of the stressors in your own life. But there is one major stressor that we never think about anymore- having to drive.
Of course, not everyone has to (or gets to) drive. Some places have robust public transportation or they live within walking distance of most places that they go. Some people use scooters or bikes. That said, about 227 million people in the US, or 69% of the population, have driver’s licenses. This doesn’t mean that all of these people drive every day- however, there’s probably also a lot of people that drive without licenses. I’ll assume that these differences pretty much even out to this number of drivers.
By the way, that’s 69% of the total US population, including children and people who otherwise can’t drive. So the percentage of drivers taken out of people who actually could drive is even higher.
Driving is one of the most stressful things I do every day, and yet if someone asks me on a particularly stressful day what’s wrong, having to drive will be the last thing on my mind. And yet it is stressful. When you’re driving, there is the ever-present need to be alert at all times. If you mess up, maybe you die. Or maybe you plow into another vehicle. Maybe that vehicle has a family in it. Or maybe you run off the road and crash into a building.
These are all real things that happen every day. For all I know, after I get up from writing this and drive to work, one of them will happen to me. And my articles are set to automatically post, so who knows if it did? I mean, probably there will be articles after this one if I lived.
So imagine self-driving cars. If you actually have a car with a computer installed that can drive better than a human, all that stress is taken off of your plate. Yes, you’re handing the car over to a machine. But if the machine has been properly validated, then you’re doing the safer thing.
Now think about all of the time you’ve gotten back if you don’t have to drive any more. My current commute is 20 minutes. That’s one of the shortest commutes that I’ve ever had. It’s still 40 minutes a day, and currently I commute to the office 4 days a week. So that’s 160 minutes a week, or 8,320 minutes a year, which is about 139 hours a year. That’s about 3.5 work weeks per year.
It looks like the average US commute time is 26.9 minutes. Crunching the same numbers as before, you get up to 4.6 work weeks per year. And this is with just four commutes per week.
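Checking the arithmetic, assuming a 52-week year and 40-hour work weeks:

40 min/day x 4 days x 52 weeks = 8,320 min, or about 139 hours, or about 3.5 work weeks
2 x 26.9 min is about 53.8 min/day; 53.8 x 4 x 52 is about 11,190 min, or 186.5 hours, which is a bit over 4.6 work weeks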
So self-driving cars get people back time and energy. That’s not as talked-about as the lives that will be saved. But I think it’s an important part of the story. Here’s an example:
It’s sometime in the future, maybe 10 years from now. Self-driving cars are now fairly common and have made themselves available to the mid-market. You are looking at car options and the self-driving computer option is available for an extra $2500 (because it’s the future, the price has gone down despite the value being basically the same as it is today). Do you buy it?
Well, let’s crunch some more numbers. Let’s say that you do get the computer. Can you justify that financially? Let’s say you use the computer to spend your newly freed-up 4.6 work weeks working an extra job on Fiverr that pays $15/hr. After a year, you’ve made $2797.60. That pays for the computer. And then you have an extra 4.6 work weeks every year for as long as you have the car. Think of what you could do with that time.
I’m optimistic about the potential that these cars have to improve not just safety, but human life quality. I believe that humans driving cars was very necessary for the 20th century- I am less sure about the 21st. | https://medium.com/carre4/self-driving-cars-arent-just-about-safety-2328fec6a061 | ['Paul Cipparone'] | 2020-11-20 14:34:18.981000+00:00 | ['Technology', 'Cars', 'Artificial Intelligence', 'Robotics', 'AI'] | Title SelfDriving Cars Aren’t SafetyContent SelfDriving Cars Aren’t Safety Time energy human life quality FSD would improve Photo Bram Van Oost Unsplash One thing hear lot nowadays stressed people There’s good number reason good fact probably don’t need spell You’re probably well aware stressor life one major stressor never think anymore drive course everyone get drive place robust public transportation live within walking distance place go people use scooter bike said 227 million people US 69 population driver’s license doesn’t mean people drive every day however there’s probably also lot people drive without license I’ll assume difference pretty much even number driver way that’s 69 total US population including child people otherwise can’t drive percentage driver taken people actually could drive even higher Driving one stressful thing every day yet someone asks particularly stressful day what’s wrong drive last thing mind yet stressful you’re driving everpresent need alert time mess maybe die maybe plow another vehicle Maybe vehicle family maybe run road crash building real thing happen every day know get writing drive work one happen article set automatically post know mean probably article one lived imagine selfdriving car actually car computer installed drive better human stress taken plate Yes you’re handing car machine machine properly validated you’re safer thing think time you’ve gotten back don’t drive current commute 20 minute That’s one shortest commute I’ve ever It’s still 40 minute day currently commute office 4 day week that’s 160 minute week 8230 minute year 138 hour year That’s 35 work week per year look like average US commute time 269 minute Crunching number get 46 work week per year four commute per week selfdriving car get people back time energy That’s talkedabout life saved think it’s important part story Here’s example It’s sometime future maybe 10 year Selfdriving car fairly common made available midmarket looking car option selfdriving computer option available extra 2500 it’s future price gone despite value basically today buy Well let’s crunch number Let’s say get computer justify financially Let’s say use computer spend newly freedup 46 work week working extra job Fiverr pay 15hr year you’ve made 279760 pay computer extra 46 work week every year long car Think could time I’m optimistic potential car improve safety human life quality believe human driving car necessary 20th century le sure 21stTags Technology Cars Artificial Intelligence Robotics AI |
1,777 | Is Decluttering Good for Your Creativity? | Is Decluttering Good for Your Creativity?
How cleaning up your act could be a boon to your brain, or lead to creative block.
Organizing and decluttering is all the rage these days. For that, we can largely thank the present-day titan of tidiness, Marie Kondo, whose KonMari method has fueled several books, a Netflix series, and invigorated an entire industry devoted to helping people maintain control over their physical possessions.
But as an architect who’s written a book about scientific research into the psychology of creative space, I have long wondered whether Ms Kondo’s prescriptions for self-dispossession were beneficial to creative types, many of whom work in home environments.
So I started to look into it.
My conclusion? Assuming you’re not a pathological hoarder, it depends.
Granted, that sounds a tad wishy-washy. Why the hesitation? And how could having an abundance of stuff be anything but detrimental to one’s creativity?
Let me explain.
KonMari and the Question of Science
A curious thing happens when you search for the word ‘science’ in Marie Kondo’s debut blockbuster manual, The Life-Changing Magic of Tidying Up: it doesn’t show up. Not once.
Hmmm.
How about the word ‘scientific’? Slightly more reassuring news: it appears a single time in a passage where the author acknowledges that she has no scientific basis for her theory that people accrue a variety of mental and physical health benefits from getting organized.
In this regard, Ms Kondo is misinformed. In truth, there’s a considerable body of research indicating that putting one’s house in order does exactly that. What’s more, these findings make it clear that a dishevelled environment can indeed depress creative task performance, largely by diminishing our well-being.
Home office. Austin, Texas. Architecture and interior design by Tim Cuppett Architects. Photography by Alec Hemer.
Take, for example, a 2010 study published in the scientific journal Personality and Social Psychology Bulletin. It found that subjects who described their homes as cluttered exhibited greater depression and fatigue, diminished coping skills, and increased difficulty transitioning from work to home compared to people who viewed their place of residence more positively.
What’s the connection between unkempt physical surroundings and a lack of mental well-being? Biology. According to the researchers, the group with messy environments registered elevated levels of the stress hormone cortisol, a substance released into the bloodstream by the adrenal glands. Normally, the body boosts the flow of cortisol when it perceives an external threat in order to sharpen our focus and analytic thinking skills, and by extension our ability to defend ourselves against potential harm. We then return to normal levels after the threat has passed. The problem with being stressed out by a messy environment is that the mess tends to remain in place, thereby leading to constant cortisol production and the kinds of disorders evident among subjects in the 2010 study.
And those are only some of the maladies linked to an oversupply of the hormone. Others include headaches, irritability, intestinal problems, high blood pressure, low libido, poor sleep, heart disease, suppressed immunity to disease, and difficulty recovering from exercise.
Home office. Scarborough, Maine. Architecture by Caleb Johnson Studio. Photography by Trent Bell.
But wait — there’s still more, as in more weight. That’s right — another potential consequence of mess-induced stress is weight gain. According to one source, people with unkempt homes are an eye-popping 77 per cent more likely to be overweight than those who reside in well-tended surroundings. Unsurprisingly, kitchens overladen with goods are especially detrimental for maintaining narrow waistlines; a 2017 study from the journal Environment and Behavior found that subjects living in chaotic food environments significantly upped their consumption of scale-busting high-calorie snacks (aka junk food), the effects of which become all too plain for everyone to see.
Other unhealthy consequences of clutter accrue indirectly. Air quality, for instance, often suffers in disorganized environments because the profusion of objects creates more surfaces to attract dust. The extra layers of dust not only increase the possibility of respiratory problems among occupants, but they can also reduce the amount of natural light inside a space by making those surfaces less reflective. Households with pets and in urban locations are particularly susceptible to the loss of light and dirtied air resulting from having too much stuff out and about.
And if all this weren’t enough to instantly turn you into a neat freak, clutter can also hamper your ability to focus on task completion. This insight comes to us via a 2011 paper out of Princeton University, where researchers found that our sensory apparatus can be easily overwhelmed by having too many things to look at at one time, thereby making it harder to sort out only those objects relevant to the task at hand. More stuff also makes it more likely that people will be distracted from what they’re doing as something new catches their eye with each pass of the room.
Library. Yonkers, New York. Architecture by Gary Brewer for Robert A.M. Stern Architects. Photography by Francis Dzikowski / Otto.
More Problems: Incompleteness and Control
The inability of people in disorganized settings to focus points to one of the reasons that clutter affects us as it does: a space in disarray imparts a sense of unfinished business. Sometimes that sense derives from projects or tasks that remain undone, the residue of which lingers in stacks of unfiled papers or the detritus of half-completed household chores. At other times it might stem from deferred decisions, such as whether to keep a possession and, if so, where to store it. Given the discomfort most people experience when confronted by a plethora of unresolved conditions, it’s hardly surprising that the unhappy subjects in the 2010 study I discussed at the beginning of this article repeatedly used the term ‘unfinished’ to describe their disorderly habitats.
A second possible explanation for the negative impact of disorganization involves a psychological construct known as the locus of control. In a nutshell, the concept proposes that people fall into two main camps: those who believe that they are in control of their lives, and those who believe that external forces largely determine their fate. As you might expect, people who create tidy environments for themselves tend to fall into the former category, while those in less organized surroundings often feel that their belongings have gotten the better of them through no fault or desire of their own. As also might be expected, a landmark British study found that people with an ‘internal’ locus of control are generally more successful, healthier, better educated, and less anxious than those with an ‘external’ locus.
Mark Twain, possibly at home in New York City. 1901. Photograph by Theodore C. Marceau. Library of Congress.
On the Other Hand…
So all this would strongly suggest that going full Kondo can only boost your creative performance by sparing you the downsides of disorganization. Why, then, did I qualify my judgment at the beginning of this essay by suggesting that there might be more than one side to the story?
Answer: Because there’s evidence that a messy environment can materially stimulate idea generation. Exhibit A: a 2013 study that found that a group of subjects brainstorming ideas around a messy table evinced greater creativity than a second group performing the same task around a tidy table. The researchers who oversaw the study theorized that the neat table primed the subjects for conformity because neatness is a socially acceptable norm, whereas the unkempt work surface suggested a more devil-may-care attitude toward conventional expectations. I would also add that the findings are entirely consistent with the observation that creative thinking is by nature a ‘messy’ process — that is, non-linear and riddled with unanticipated surprises.
As for Exhibit B: I offer you a cadre of historically eminent creatives who did some of their best work in messy spaces, Mark Twain being a revered avatar in this category.
So where does all this leave you? Mess or no mess?
As with nearly all techniques for enhancing creativity, ultimately it depends on your personal work habits, even when the scientific evidence suggests that your preferences might run counter to those of the general population. My suggestion, then, would be to test both conditions to determine which will be most beneficial to your work.
My apologies for not being more definitive, but in this case, I wouldn’t be truthful if I said I could give you a neat and tidy answer.

Source: https://medium.com/the-creative-mind/is-decluttering-good-for-your-creativity-4ca5e6f66030 | Donald M. Rattner | 2020-03-15 | Tags: Self Improvement, Konmari, Creativity, Interior Design, Psychology
CREATOR INTERVIEW
Creating Little Nightmares
An exclusive interview with Dave Mervik of Tarsier Studios, creators of the creepy platformer/puzzler Little Nightmares
Formed in 2004, Tarsier Studios in Sweden got its start working with Sony on the Little Big Planet series of games. That association with Media Molecule led to them bringing a bigger and better version of Tearaway to the PlayStation 4 as Tearaway Unfolded. They headed next to the wilds of PSVR to make the puzzler Statik, and then in 2017 released Little Nightmares, a creepy and atmospheric platformer/puzzler, to wide acclaim. Their latest game is The Stretchers, a Nintendo Switch exclusive comedy puzzle game.
Before the end of 2020 they will return to the existing IP well for the first time, as they are currently wrapping up development on Little Nightmares 2. We recently sat down (virtually) with the studio’s Head of Communication, Dave Mervik (Merv) to talk about that, the studio’s origins, and so much more!
SUPERJUMP
Thank you Merv for joining us on the interview, I wanted to start by congratulating you and everyone at the studio for the tremendous success of Little Nightmares, now having sold over 2 million copies.
MERV
Thanks man.
SUPERJUMP
Our readers really enjoy learning about the genesis of independent studios, so could you share a bit about the founding of the studio and how you came to be associated with Sony, making your first two titles exclusively for PS4?
MERV
The studio was originally a bunch of students named Team Tarsier, who made a prototype called The City of Metronome. As the well-told story goes, it was one of the darlings of E3, but was never asked to the dance, so now it remains a shadow looming over everything we do :) The happy outcome from that, though, was that it was the beginning of our relationship with Sony and Media Molecule.
Malmo, Sweden. Source: Matador Network.
SUPERJUMP
The studio is located in Malmo, Sweden, which has become a massive center for game development studios in Europe. What do you think it is about that location in particular that is making studios congregate there? Is there any collaboration between studios with so many being in such close proximity?
MERV
I could wax lyrical about a culture of creativity, but that’d be just some bullshit. Of course those things exist here, but they exist everywhere, it’s just not always a story. I think it’s most likely a practical thing, being close to Copenhagen Airport makes it easier to attract talent from all over the world, for publishers to visit their developers, and for events like Nordic Game Conference to flourish. Or maybe it’s just one of those things like when people accidentally form a queue. You see a bunch of game devs loitering around Malmö, so you start loitering too, and the next thing you know you’ve got to have a difficult conversation with a whole bunch of loitering developers. One thing I’ve learned is that Swedes will do anything to avoid a difficult conversation, so maybe it was just easier to turn Malmö into a massive centre for game development studios in Europe.
In terms of collaboration, it really depends on the studio. I imagine large studios like Massive have all they need in-house and then some, whereas a place like Game Habitat is a lot more open to collaboration and sharing of resources and learnings. Personally I love that mentality, but you can’t escape the fact that all of these companies are in some kind of competition with each other, so I wouldn’t think there’d be any linking of arms or singing Kum-Bah-Ya round the campfire just yet.
SUPERJUMP
Your second game was a PSVR-exclusive title called Statik. What were some of the unique challenges of making a PSVR title in comparison to the non-VR games the studio has developed? Do you see yourselves going back to a VR title, or perhaps building a VR mode into a future game?
MERV
Speaking personally, the unique challenge/opportunity was the ways in which we could tell our story. The potential and restrictions of the VR format is something we didn’t try to ‘solve’, but rather worked with it and in some cases made it central to the experience. The game became about sitting in a chair with a device on your head, a device in your hands, and a slightly tragic person in your ears, solving puzzles for a reason you don’t fully understand. I loved what we tried to do with Statik, and am only sad that more people didn’t get to experience it. Messing with people in that way, playing with their expectations and sense of ‘self’ was something we could only do with VR, and I would only want to go back to VR if that opportunity presented itself again. I’m just not a fan of VR for the sake of it, it reinforces this notion that it’s novelty tech, when it could offer so much more.
Statik. Source: vrnerds.de.
SUPERJUMP
The art style and character design for Little Nightmares is quite unique, and with the odd proportions of the “enemies” and the way they move, it’s all very frightening, nightmare-fuel type stuff. What were the inspirations when it came to designing the characters and the world they inhabit?
MERV
Our world and the characters that inhabit it.
SUPERJUMP
Reviews of Little Nightmares universally praised its style and atmosphere, but several thought that parts of the design and control mechanics were flawed or difficult to come to grips with. How have you incorporated the feedback and criticism like that into the design of the sequel?
MERV
We don’t really work that way. We know ourselves what worked, what didn’t, and what could have worked better; and it’s important that we maintain that focus, and refine the execution of our ambitions. Some of the measures we take may please some of the critics, but it’s important that we remain our most incisive critics, or we’ll forget how to do properly what we love the most.
SUPERJUMP
You’ve made four very different games since the studio was created, from a 3D platformer (Tearaway Unfolded) with a unique paper-based look, to a PSVR title (Statik), to a horror-esque action-puzzler, and then a comedic puzzle co-op title (Stretchers). Little Nightmares 2 will be your first time revisiting an IP, was it easier to develop the game not having to start from scratch, or is it more difficult creating something that lives up to the hype and expectations that fans now have based on the success of the original?
MERV
Probably a little of both :) We’re not trying to live up to anything though, as I said earlier, it’s important that we keep our own counsel and know what feels right in any of our games, rather than what people might expect. If you’ve seen Dumb & Dumber 2 or Anchorman 2, you’ll have an idea what can go wrong when you pander to audience expectations instead of listening to your own best instincts.

Source: https://medium.com/super-jump/creating-little-nightmares-462d5d880d | Bryan Finck | 2020-08-21 | Tags: Gaming, Startup, Interview, Creativity, Videogames
Top A/B Mobile Testing Services and Tools to Adopt in 2020–21

With more than 100,000 new Android apps released in the Google Play Store every month and users projected to spend 90% of their internet time in mobile apps, it is no surprise that the mobile industry is thriving at a fast pace. This highlights the fact that companies from all industries can take their business to the next level by simply investing in mobile app development.
Well, the trend of migrating business from physical stores to digital ones is not new, but the Covid-19 pandemic has increased the demand for mobile apps in all sectors. From teleshopping to telehealth, mobile applications are revolutionising things all around the globe.
Now the eye-popping fact is that 40% of applications submitted to the app stores are rejected due to incompleteness, design spam, incorrect metadata, and more. As Apple’s Director of Federal Government Affairs, Timothy Powderly, said: “The App Review team reviews more than 100,000 submissions per week and rejects approximately 36,000 of those submissions.”
If we look at these figures, it won’t be wrong to conclude that roughly 2 out of 5 apps are rejected. According to the survey, 42% of apps are rejected due to incompleteness, 10% due to design spam, 8% due to incorrect metadata, and so on. So how to get your app published in the app store, and how to make it run smoothly on different app stores, are a few of the major concerns of developers.
Don’t worry, this blog lists the 10 most amazing A/B testing tools and services that help you improve the performance of your mobile apps.
But before jumping on the tools and services, it is important to understand what exactly A/B testing tools are and why do you need it?
What is A/B Testing and Why Do You Need It?
It is true that a million-dollar app development idea can help you achieve success, but you can’t ignore the fact that an app’s success lies in the way you develop it. So A/B testing should be the initial step of your mobile app strategy, one that enables you to evaluate each element of your app in depth.
A/B testing runs tests on every aspect of the app to understand what’s working and what’s not. The testing compares variants of individual elements to help you learn which one drives more traffic, more app installations, and so on.
So basically, A/B testing is carried out in two ways:
App Store A/B Testing: This will help to test the elements on the store listing and product pages such as title and description, visuals, gallery, and more.
App A/B Testing: In which you will test the product itself.
Benefit of A/B Mobile Testing
Everybody wants a perfect app that generates better traffic and leads for their business, but that requires constant optimization and never-ending experiments to keep your app ready for the next update. A/B testing gives you the statistics to decide what to change next in your mobile app, and when. So A/B testing tools and services play an integral part in enhancing mobile app performance.
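To make the idea concrete, here is a minimal sketch of how the outcome of a two-variant experiment might be evaluated. The variant data and numbers are hypothetical, and real tools automate this bookkeeping for you; the two-proportion z-test shown is just one common way to judge a result.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, computed via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: variant A = current screenshots, variant B = new ones.
z, p = two_proportion_z_test(conv_a=120, n_a=2000, conv_b=156, n_b=2000)
print(f"A: {120/2000:.1%}  B: {156/2000:.1%}  z = {z:.2f}, p = {p:.3f}")
# A small p-value (commonly < 0.05) suggests the difference is real.
```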
Now the central question is, what are the best A/B mobile testing tools and services you can look for in 2020–21? In this post, we have rounded up a list of best testing tools and services that you can consider to improve the performance of the mobile app.
Let’s get started with the list…
Top 10 A/B Mobile Testing Tools and Services To Ensure Best Performance in 2020–21
1. Apptimize: Experiment Anywhere & Track Everywhere
Founded in: 2013
Pricing to Use: Feature flags are free, but subscription plans are based on request.
Major Users: Glassdoor, Hotels.com, Delivery Hero and more.
Whether it’s about testing a native, web, or hybrid mobile app, Apptimize is one of the leading A/B mobile testing tools, offering a seamless way to optimize the performance of the app across all channels. It is ultimately a cross-platform A/B testing solution that allows you to test a variation on any platform and evaluate the change across all channels.
With this app testing tool, you have complete control of feature releases, no matter which platform you are releasing to. With feature flags, your mobile app development team can easily manage and ramp up mobile, server-side, OTT, and web changes without any risk. Apptimize allows you to launch new functionality in your mobile app with complete confidence.
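To illustrate the mechanism rather than Apptimize’s actual API (which this sketch does not reproduce), feature flags are often implemented by deterministically bucketing users, so a change can be ramped up gradually and rolled back instantly. The function names and flag name below are made up:

```python
import hashlib

def bucket(user_id: str, flag_name: str) -> float:
    """Map (flag, user) to a stable value in [0, 1]."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def is_enabled(user_id: str, flag_name: str, rollout: float) -> bool:
    """True if this user falls inside the current rollout fraction."""
    return bucket(user_id, flag_name) < rollout

# Ramp a hypothetical "new_checkout" feature to 10% of users; raising
# `rollout` later keeps the same users enabled and adds new ones.
print(is_enabled("user-42", "new_checkout", rollout=0.10))
```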
2. Mixpanel: Built Better Product With Power Analytics
Founded in: 2009
Pricing to Use: Free for starters; paid plans start at $24 per month.
Major Users: Uber, Skyscanner, Expedia, Twitter and more.
Mixpanel is one of the most powerful product analytics tools; it helps you build a better product and enables you to convert, engage, and retain more users through the app. With Mixpanel’s A/B testing tools, you can get insights into the app, generate simple reports, and make integrations that best fit your app. This tool is mostly used to analyse, measure, and improve your customer experience. Its core feature areas are Product Analytics, Product Metrics, and Product Foundations.
Moreover, this tool is easy to access and lets you change any part of your application, without having to deploy any coding.
3. HubSpot & Kissmetrics’ A/B Testing Tool Kit
Founded in: NA
Pricing to Use: Free to use
Major Users: Humana, Unbounce, Groove and more.
Boost the performance of your mobile app with HubSpot, which lets you download a complete A/B testing kit for free. With this kit, you can access an easy-to-use significance calculator that helps you optimize variables of the app, along with a tracking template that helps you improve your conversion rates over time.
This is an ideal testing tool for business apps, as it helps you testing everything right from the landing page, emails to call-to-action, that significantly affects the number of leads.
4. Optimizely: Deliver Better Software, Products, and Growth
Founded in: 2010
Pricing To Use: $1,440 With Monthly Charges
Major User: Microsoft, IBM, Zendesk and more.
Optimizely is a standalone, powerful, and fast A/B testing tool that allows you to experiment with various elements, including onboarding, feature discovery, and other strategies that help improve engagement and retention overall. With this tool, a developer can easily optimize the app experience across any platform, including websites, backend code, mobile, and conversational apps.
Being a super fast and powerful testing tool, it helps you update the app in real-time without waiting for the review report of the App Store and Google Play store.
5. VWO: Most Trusted A/B testing Tool in the World
Founded in: 2009
Pricing To Use: $1,440 With Monthly Charges
Major User: Hilton, eBay, Disney, Target, PennState and more
When it comes to choosing the best and leading A/B mobile app testing tool to boost the performance of an app, VWO is the first choice of many developers. Being used by some of the world’s best brands, including eBay, Target, and Virgin Holidays, VWO has built its reputation as one of the best A/B testing tools, one that also helps in optimizing conversion rates.
To simplify the testing process, VWO offers you a robust reporting dashboard, where you can leverage Bayesian statistics to run tests faster. Moreover, it gives you more control over your tests and helps you reach accurate test conclusions. This testing tool has been designed to support A/B tests, split URL tests, and multivariate tests with a drag-and-drop editor.
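As a rough illustration of the Bayesian approach (a simplified sketch, not VWO’s actual engine), each variant’s conversion rate can be modelled with a Beta posterior and the two compared by Monte Carlo sampling. All counts below are hypothetical:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1, 1) priors."""
    wins = 0
    for _ in range(samples):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / samples

# Hypothetical counts: 120/2000 conversions for A vs. 156/2000 for B.
print(prob_b_beats_a(120, 2000, 156, 2000))  # roughly 0.99
```

Because the output reads directly as “the probability that B beats A,” Bayesian reports are often easier to act on than p-values, which is part of their appeal for faster test decisions.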
6. Omniconvert: Optimize Your Customer Journey With Data-Driven Results
Founded in: 2013
Pricing To Use: Plans are based on client request
Major User: NA
Omniconvert is a well-known mobile app conversion rate platform that offers A/B testing tools along with survey, personalisation, overlay, and segmentation tools to help you get better results. Using its testing tools, you can quickly test apps running on different platforms: desktop, mobile, and tablet.
With Omniconvert, you can reach better testing conclusions. It has blended its segmentation tool with its A/B testing tool to let you test around 40 segmentation parameters, including geolocation, traffic source, visitor behaviour, and product features, with the ability to verify the quality of content to engage the visitor. Omniconvert can be an ideal A/B testing solution for medium-sized business apps.
7. Taplytics: A/B Testing and Experimentation
Founded in: 2011
Pricing To Use: On Request
Major User: Ticketmaster, Chick-Fil-A, CBS and more
Taplytics is a unique and widely used testing tool that allows you to change anything you can see in iOS and Android apps. From buttons and images to colours, you can access the entire UI/UX with this tool and keep track of it. With its help, you can see the impact of recent changes in real time, which leads to a better user experience.
Taplytics is built with an advanced analytics system that helps you get accurate data for your team and other third-party data systems. Moreover, you can also manage A/B tests of push notifications across all platforms. To leverage this testing tool, all you need is to hire the best mobile app development company, one with the skills and experience to use it.
8. Leanplum: Multi-Channel A/B Testing Platform
Founded in: 2012
Pricing To Use: Plans are on request
Major User: Tinder, Zynga, App Annie, NBC and more.
Leanplum is a renowned, simple-to-access, and flexible A/B testing platform that helps you optimise every aspect of the app, right from user engagement to the in-app experience. With this testing tool, you can set any number of goals for campaigns and draw highly accurate conclusions about customer impact and trends.
Since Leanplum is a highly flexible testing tool, you can use it to understand both the negative and positive impacts of every campaign. For example, did your last push notification increase app conversions but also lead to additional app uninstallations? In this way, the tool helps you go beyond average app testing strategies and take the steps necessary for better success.
9. SplitForce: Drive Statistically Significant Results
Founded in: 2013
Pricing To Use: Plans starting from $14 per month
Major User: Marks and Spencer, FreeCharge, Burpple and more.
Since its inception in 2013, SplitForce has been a widely used A/B testing platform that supports all the major existing and emerging platforms, providing libraries for iOS native, Android native, and Unity projects. Moreover, it provides a unique feature set that enables you to segment your users based on different criteria, including mobile OS, region, and more. These settings help you collect information stored on your backend using its targeting API.
SplitForce is built on adaptive learning algorithms, so it helps you automate the entire A/B testing process. Overall, testing with this tool saves time and gradually shows you better results.
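For a flavour of what “adaptive” can mean here, the sketch below shows Thompson sampling, a standard technique for shifting traffic toward the better-performing variant as evidence accumulates. It is illustrative only and not SplitForce’s actual algorithm; the variant names and counts are hypothetical:

```python
import random

# Hypothetical running totals per variant: {name: [conversions, visitors]}.
stats = {"A": [120, 2000], "B": [156, 2000]}

def pick_variant():
    """Draw a plausible rate for each variant; serve the highest draw."""
    draws = {
        name: random.betavariate(1 + conv, 1 + n - conv)
        for name, (conv, n) in stats.items()
    }
    return max(draws, key=draws.get)

# "B" wins most draws, but "A" still gets occasional traffic, so the
# test keeps learning while it exploits the current leader.
print(pick_variant())
```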
10. Monetate: Complete App Optimization Tool
Founded in: 2008
Pricing To Use: Prices are based on request
Major User: The North Face, National Geographic, True Religion and more.
Monetate is a leading A/B testing tool designed to test marketing apps, using contextual data for decisions and recommendations at its core. Its testing approach brings together first-party data from sources like your CRM and POS, then combines it with real-time behavioural and contextual observations to develop influential customer segments.

This testing tool provides a unique blend of an easy-to-use interface and a powerful backend testing and segmentation engine to help you create impressive customer experiences that increase conversions and revenue.
Conclusion
To end this post, it is worth mentioning that these are popular A/B mobile app testing tools, many of which are free to get started with for basic functionalities and features. These tools give you a platform to analyse every aspect of your app and help you understand, with accurate statistics, where its performance can be improved. You can choose to hire a software development company to better leverage the features of these testing tools.

Source: https://medium.com/quick-code/top-a-b-mobile-testing-services-and-tools-to-adopt-in-2020-21-665296e787db | Sophia Martin | 2020-11-02 | Tags: Technology, Mobile Apps, Mobile App Development, Business, Startup
Syncfusion Essential Studio 2020 Volume 4 Is Here!

Syncfusion is glad to roll out the last major release of this year, Essential Studio 2020 Volume 4. You can now enjoy the enhancements available in this release.
Here is the brief description of the major features we have implemented.
WinUI (preview)
We have included the following seven new controls in WinUI:
— Ribbon
— Calendar
— Calendar Date Picker
— Date Picker
— Time Picker
— Slider
— Range Slider
There is also support for many new chart types in the Charts control. This includes:
— Polar charts
— Radar charts
— Pyramid charts
— Funnel charts
The Radial Gauge now supports adding labels to represent gauge ranges.
The DataGrid control has been enhanced to support printing.
The TreeView control has horizontal scrolling and a context menu.
The Barcode control now lets you generate GS1Code128Barcode and Pdf417Barcode symbology.
Flutter
The all-new Sparkline Charts widget is included in this release.
The Charts widget was enhanced to allow:
— Defining the maximum width of the axis labels.
— Using a customization template for the trackball.
— Converting a logical pixel value to chart data points and vice versa.
— Restricting maximum zoom level on pinch-zooming in the Cartesian chart.
The Calendar widget was enhanced to support:
— Navigation animation.
— A custom widget builder for time regions and appointments.
The DataGrid widget was enhanced to load more data and support infinite scrolling.
The Maps widget now lets you add:
— Polylines
— Arcs
— Sublayers to shape layers
The PDF Viewer widget now supports:
Text search.
Text selection and copying.
Navigation using document link annotation.
The PDF Library for Flutter now allows users to:
— Encrypt and decrypt PDF documents.
— Create PDFs in the following conformances:
PDF/A-1B, PDF/A-2B, PDF/A-3B
— Add attachments to PDF documents.
The Excel Flutter library now lets you:
— Add hyperlinks to text and images.
— Insert and delete rows and columns.
— Autofit rows and columns.
— Create Excel documents with logical functions, string functions, and nested formulas.
Xamarin
Xamarin.Forms WPF platform support is now extended to the ComboBox component.
In the Autocomplete component, place the drop-down either at the top or at the bottom, based on space availability.
Auto tab width support has been added to render the Tabbed View control’s tabs based on the text size.
The circular cropping feature in Image Editor allows users to crop images in a circle or ellipse shape.
Subscript and superscript for the Rich Text Editor.
Dark and light themes for the StepProgressBar.
Blazor
Syncfusion Blazor components are now compatible with .NET 5.0.
You can now perform lazy loading on the Syncfusion Blazor assemblies in Blazor WebAssembly applications.
A new Button Group component.
Several components have been developed to industry standards and moved from preview to production-ready.
Individual NuGet packages have been provided for our Syncfusion Blazor components.
The DataGrid component now supports virtual scrolling in the virtual placeholder.
Kanban now supports customizing workflow validation, a card UI, and a tooltip template.
In Scheduler, WebAssembly performance was greatly improved for the following views:
— Timeline day
— Timeline week
— Timeline workweek
— Timeline month
— Month
Essential JS 2
You can now freeze columns on the right side of the DataGrid.
Workflow validation is now possible in the Kanban control.
You can now include charts in the Spreadsheet control.
In Gantt Chart, you can now perform virtual scrolling and task splitting.
The Scheduler control now supports resizing and the drag and drop of appointments in the timeline year view.
Word Processor now supports inserting, accessing, and editing the footnotes and endnotes in a Word document.
WPF
Syncfusion WPF controls are now compatible with .NET 5.0.
A new Badge control shows additional details of elements, like the online status and number of notifications.
A further set of controls has been developed to meet industry standards and is marked as production-ready.
A new Office 2019 high contrast white theme has been provided for all controls.
The Diagram control has a new ribbon to access its tools.
Navigation Drawer adds:
— Compact and extended display modes.
— Built-in items.
— ItemsSource support.
The Ribbon control can now be hosted inside a normal window or any part of an application.
RichTextBox provides suggestions when typing words.
WinForms
Syncfusion WinForms controls are now compatible with .NET 5.0.
The RibbonControlAdv is now available in a simplified layout mode, similar to the most recent Office product.
File Formats
.NET PDF Library
The .NET PDF Library now supports:
Drawing HTML-styled text in PDF pages and PDF grids.
Rendering EAN-13 and EAN-8 barcodes in PDF pages and images.
.NET Word Library
With the .NET Word Library, you can now access the metadata properties of a Word document used in the SharePoint document library.
.NET PowerPoint Library
With the .NET PowerPoint Library, you can now access and modify the language property of the text in a PowerPoint presentation.
Java Word Library
The Java Word Library now supports:
Creating, reading, and editing RTF documents.
Encrypting and decrypting Word documents.
Conclusion
These are just some of the features added in our 2020 Volume 4 release. You can check out the list of all the features in our release notes and on the What’s New page.
Try these features and share your feedback as comments in this blog. You can also reach us through our support forums, Direct-Trac, or feedback portal.

Source: https://medium.com/syncfusion/syncfusion-essential-studio-2020-volume-4-is-here-f50ae4cb6110 | Rajeshwari Pandinagarajan | 2020-12-18 | Tags: Dotnet, Mobile App Development, Software Development, Productivity, Web Development
9 Stories Our Editors Can’t Stop Thinking About in 2020

In a year where time has stretched itself beyond meaning and the bigness of loss has burned us all out, I’ve found that a lot of the writing on grief hasn’t always resonated. This year’s grief is universal in many ways, because we’re all experiencing it, but stories of an entire nation in mourning lose the specificity of grief, and that specificity is what makes grief both painful and beautiful.
All this to say, Elizabeth Hackett’s writing on her personal corner of grief stayed with me. If you want me to be technical and say why I liked it as a piece of writing, I can tell you it was sentimental without being cliché, it was well-paced, and creatively structured. If you want me to tell you why I liked it as a human being, I’ll say that it gave meaning and weight to a seemingly mundane moment, simply picking out an outfit, and it also gave space for deep, heartbreaking, specific grief in the loss of her mother.
In a year that forced us into isolation, where our people were often far away, I am so grateful that Elizabeth shared a piece of herself with the world. — Sam Zabell, Audience Development Manager
In October, Kim Kardashian West announced she was taking her “closest inner circle” on vacation. To a private island. For her 40th birthday. In 2020. Sounds normal, right? Yeah, pretty normal.
I sort of caught onto the meme late (“Why is everyone posting jokes about going to a private island?” I thought as I scrolled through Twitter alone). When I did, I thought it was funny… then slightly horrifying. Also, where did they go? Also also, what were they thinking? I still don’t know what they were thinking (I can guess) but now I know where they went. Or at least I’m pretty sure, thanks to Vicky Mochama’s expert analysis of Kim and Kendall’s Twitter and Instagram feeds.
Want to match two photos of the same island bar? Want to sift through Kendall Jenner’s selfies trying to pick out which resort bathroom she’s standing in? Track down exactly which private jet the Kardashians rented (sorry, “chartered”)? Have you lost your mind and would prefer to just rabbit-hole your way through a list of things rich people do during a pandemic? Reading Mochama’s investigative reporting was thrilling for me. Maybe you’ll enjoy it, too. — Harris Sockel, Deputy Editor of Human Parts
Great writing grows from a great subject. But a great subject doesn’t need to be existential in scope or even inherently remarkable to very many people: It simply needs to matter a lot to you. The reader will follow.
This is exactly why I loved Maya Kosoff’s story about a beloved (and, I’ll say it, fairly gross-looking) Jell-O salad recipe. Deep in the creamy, pear-stuffed gelatin you will find a story filled with heart and humanity — a story about family and acceptance and marching forward amid unfathomable darkness to construct a seafoam-tinted dessert that means everything, even though, at a glance, it would seem to mean nothing at all. — Damon Beres, EIC of OneZero
2020 has been a dumpster fire. Between the consistent themes like Covid-19, racism, and the worst president ever, you have one-off terrible moments like murder hornets, celebrity passings, and Chet Hanks cosplaying a Jamaican. It’s hard not to think that we’ve lived through the worst year in history. But Saamir Ansari argues differently; in his post “536 AD — the Worst Year in History,” he argues that 2020 isn’t even top 5 on the “all-bad era” rankings.
I love this story for several reasons. One, I love how he approached it technically; it’s quick and gets straight to the point while avoiding leaving the reader unsatisfied. And two, anyone who knows me knows that I’m a sucker for a good history lesson. And in a time that feels like the worst time ever, I found it helpful to contextualize 2020 in relation to the eras that came before. I felt better about my year leaving this story than I felt coming in, and that’s really what it’s all about. — Shaq Cheris, Editor of Creators Hub
My favorite thing about the platform is coming across writers who can perfectly articulate everything I might be thinking or feeling. This summer was a stressful one. As if the pandemic wasn’t already turning life asunder, the deaths of Ahmaud Arbery, Breonna Taylor, and George Floyd; the global protests; and the subsequent conversations about race, policing, and justice made it even harder to feel like I could function like a normal human being.
Medium writer Shenequa Golding captured this feeling in her essay “Maintaining Professionalism in the Age of Black Death Is… a Lot.” She writes about the exhaustion inherent with navigating life as a Black person in America and how it’s almost impossible (and shouldn’t be asked of us!) to go about business as usual, especially as we may be actively mourning, biting back rage, or living in fear.
Her words make it easy to feel seen and to know that if we’re feeling fired up or even burned out by racism in America, we’re not alone. — Jolie A. Doggett, Platform Editor ZORA
If we talk long enough, eventually meditation will come up. For like 10 years it was my New Year’s resolution, and finally a couple years ago I started to sit more regularly because of anxiety. It doesn’t “solve” everything, but it helps. So I keep going. Over the months and years, I’ve noticed how I can get closer to what I feel, how I feel, and maybe even start to see why. There’s a healing happening. I found my creative work moves in a direction that explores all of this. This year it’s been especially helpful.
A story I keep coming back to is “Invite Your Writing Demons in for Tea” by Gavin Lamb, PhD. Drawing inspiration from Tibetan Buddhism and Joli Jensen’s book Write No Matter What, Lamb offers an approach to overcome challenges in the writing process. When you’re stuck, the advice here is to pause, and notice what’s happening. Notice the beliefs coming up. “Don’t be judgemental of the beliefs you discover. Simply notice their presence.” Approach them with a nonjudgmental curiosity, and you will discover more about yourself and what you need. I love how this advice combines ideas from meditation and healing with the creative (and difficult) practice of writing. As we write, we discover more about ourselves and grow. — Kawandeep Virdee, Editor of Creators Hub
My friend Brooke Hammerling is a natural born blogger. Six days a week Brooke’s a communications professional, but on that seventh day she publishes Pop Culture Mondays, her weekly missive to her “darling pop culture junkies” that rounds up of “all the news you’re too embarrassed to admit you don’t know… or too embarrassed to admit you DO.” To wit: on a recent Monday she went deep on the “hot priest” Carl Lentz, Elliot Page’s transition, “Bad Romance” lip syncing on TikTok, and actors wearing masks on Law & Order SVU.
As a pop culture junkie, most weeks I’m embarrassed to admit I do know what she’s blogging about. But to be honest I don’t read Brooke for the news, I read it for her voice. Her posts are like having her on the phone, delivering piping hot takes on the absurd joy that is pop culture in 2020.
Which made her post from late September, “My Heart: a short but true story” such a shocker.
On Thursday, October 1st I am checking into the Ronald Reagan UCLA Medical Center to have open heart surgery. I am getting my aortic valve replaced with a cow’s valve and my aorta replaced with this thing called a Dacron Graft. It’s super sci-fi and cool and I will be part cow and part robot JUST LIKE I HAVE ALWAYS DREAMED OF.
In the post she tells the story of how as a kid she was diagnosed with Severe Aortic Stenosis (which involved her literally chasing a boy), how she ignored it in her twenties and thirties, and then the recent shock of her cardiologist telling her she needed open heart surgery ASAP. “I drank a SHIT-ton of tequila that night,” she writes. (Can relate.)
As a long-time blogger, I’ve always struggled with voice. What’s the right tone? How much of myself to let through? Where do I draw the line between the public persona and my private life? If those questions trouble Brooke, she never lets it show. Instead she just lets it fly — whether she’s writing about TikTok collabs, or the prospect of getting her chest cracked open on the operating table. And she does it with joy, humor, and love for her darling pop culture junkies. — Michael Sippey, VP Labs
I love stories where writers act as a tour guide. They escort you down an almost comically narrow rabbit hole of information, answering a question that you never asked or knew you cared about. But once you’ve made it through to the end, that issue suddenly consumes your every thought. Sydney Urbanek perfectly captured that level of niche nuance earlier this year when she dissected the 2009 Lady Gaga/Beyoncé collab, “Telephone.” Urbanek brings a seemingly random pop culture artifact to life, pulling every relevant interview and article available to reconstruct the song and music video’s origin story and legacy. The result is an unexpected delight. Sign me up for more guided tours! — Amanda Sakuma, Senior Editor GEN
The only thing writers complain about more than writing is not being able to write. How-to books, Twitter threads, and conference panels are so often filled with screeds against writer’s block that when you stumble upon a truly unique and insightful insight into such a well-worn topic, it can feel truly revelatory. That’s exactly how it felt to read Alexander Chee’s frank but empathic investigation into why writer’s block exists and what we can do to stop it. Chee argues that writer’s block doesn’t come from your creative well running dry, but instead arises from “the fear of humiliating yourself” — the nagging worry that you will write something so dumb and wrong that people stop loving you and start hating you. Shame and embarrassment are often overlooked amid discussions of ambition, imagination, and work ethic, but as Chee wisely notes, these deepest, simplest emotions are often at the root of our feeling stuck, and they can easily compound upon themselves. “It’s hard enough to have a problem without also being ashamed of the problem,” Chee writes, and as well as being useful writing advice, it’s a good reminder for anyone who’s lived through 2020. It’s often more liberating to first admit you are embarrassed by your problems — be they loneliness, aimlessness, or grief — than it is to simply hope you can push past them. — Jean-Luc Bouchard, Senior Platform Editor Marker | https://medium.com/creators-hub/9-stories-our-editors-cant-stop-thinking-about-in-2020-8d1034ef773c | ['Medium Creators'] | 2020-12-17 22:01:34.782000+00:00 | ['Writing', 'Wrap Up', 'Writing Tips', 'Creativity', 'Creators'] | Title 9 Stories Editors Can’t Stop Thinking 2020Content year time stretched beyond meaning bigness loss burned u I’ve found lot writing grief hasn’t always resonated year’s grief universal many way we’re experiencing story entire nation mourning lose specificity grief specificity make grief painful beautiful say Elizabeth Hackett’s writing personal corner grief stayed want technical say liked piece writing tell sentimental without cliché wellpaced creatively structured want tell liked human I’ll say gave meaning weight seemingly mundane moment simply picking outfit also gave space deep heartbreaking specific grief loss mother year forced u isolation people often far away grateful Elizabeth shared piece world — Sam Zabell Audience Development Manager October Kim Kardashian West announced taking “closest inner circle” vacation private island 40th birthday 2020 Sounds normal right Yeah pretty normal sort caught onto meme late “Why everyone posting joke going private island” thought scrolled Twitter alone thought funny… slightly horrifying Also go Also also thinking still don’t know thinking guess know went least I’m pretty sure thanks Vicky Mochama’s expert analysis Kim Kendall’s Twitter Instagram feed Want match two photo island bar Want sift Kendall Jenner’s selfies trying pick resort bathroom she’s standing Track exactly private jet Kardashians rented sorry “chartered” lost mind would prefer rabbithole way list thing rich people pandemic Reading Mochama’s investigative reporting thrilling Maybe you’ll enjoy — Harris Sockel Deputy Editor Human Parts Great writing grows great subject great subject doesn’t need existential scope even inherently remarkable many people simply need matter lot reader follow exactly loved Maya Kosoff’s story beloved I’ll say fairly grosslooking JellO salad recipe Deep creamy pearstuffed gelatin find story filled heart humanity — story family acceptance marching forward 
amid unfathomable darkness construct seafoamtinted dessert mean everything even though glance would seem mean nothing — Damon Beres EIC OneZero 2020 dumpster fire consistent theme like Covid19 racism worst president ever oneoff terrible moment like murder hornet celebrity passing Chet Hanks cosplaying Jamaican It’s hard think we’ve worst year history Saamir Ansari argues differently post “536 AD — Worst Year History” argues 2020 isn’t even top 5 “allbad era” ranking love story several reason One love approached technically it’s quick get straight point avoiding leaving reader unsatisfied two anyone know know I’m sucker good history lesson time feel like worst time ever found helpful contextualize 2020 relation era came felt better year leaving story felt coming that’s really it’s — Shaq Cheris Editor Creators Hub favorite thing platform coming across writer perfectly articulate everything might thinking feeling summer stressful one pandemic wasn’t already turning life asunder death Ahmaud Arbery Breonna Taylor George Floyd global protest subsequent conversation race policing justice made even harder feel like could function like normal human Medium writer Shenequa Golding captured feeling essay “Maintaining Professionalism Age Black Death Is… Lot” writes exhaustion inherent navigating life Black person America it’s almost impossible shouldn’t asked u go business usual especially may actively mourning biting back rage living fear word make easy feel seen know we’re feeling fired even burned racism America we’re alone — Jolie Doggett Platform Editor ZORA talk long enough eventually meditation come like 10 year New Year’s resolution finally couple year ago started sit regularly anxiety doesn’t “solve” everything help keep going month year I’ve noticed get closer feel feel maybe even start see There’s healing happening found creative work move direction explores year it’s especially helpful story keep coming back “Invite Writing Demons Tea” Gavin Lamb PhD Drawing inspiration Tibetan Buddhism Joli Jensen’s book Write Matter Lamb offer approach overcome challenge writing process you’re stuck advice pause notice what’s happening Notice belief coming “Don’t judgemental belief discover Simply notice presence” Approach nonjudgmental curiosity discover need love advice combine idea meditation healing creative difficult practice writing write discover grow — Kawandeep Virdee Editor Creators Hub friend Brooke Hammerling natural born blogger Six day week Brooke’s communication professional seventh day publishes Pop Culture Mondays weekly missive “darling pop culture junkies” round “all news you’re embarrassed admit don’t know… embarrassed admit DO” wit recent Monday went deep “hot priest” Carl Lentz Elliot Page’s transition “Bad Romance” lip syncing TikTok actor wearing mask Law Order SVU pop culture junkie week I’m embarrassed admit know she’s blogging honest don’t read Brooke news read voice post like phone delivering piping hot take absurd joy pop culture 2020 made post late September “My Heart short true story” shocker Thursday October 1st checking Ronald Reagan UCLA Medical Center open heart surgery getting aortic valve replaced cow’s valve aorta replaced thing called Dacron Graft It’s super scifi cool part cow part robot LIKE ALWAYS DREAMED post tell story kid diagnosed Severe Aortic Stenosis involved literally chasing boy ignored twenty thirty recent shock cardiologist telling needed open heart surgery ASAP “I drank SHITton tequila night” writes relate longtime blogger I’ve always struggled voice 
What’s right tone much let draw line public persona private life question trouble Brooke never let show Instead let fly — whether she’s writing TikTok collabs prospect getting chest cracked open operating table joy humor love darling pop culture junky — Michael Sippey VP Labs love story writer act tour guide escort almost comically narrow rabbit hole information answering question never asked knew cared you’ve made end issue suddenly consumes every thought Sydney Urbanek perfectly captured level niche nuance earlier year dissected 2009 Lady GagaBeyoncé collab “Telephone” Urbanek brings seemingly random pop culture artifact life pulling every relevant interview article available reconstruct song music video’s origin story legacy result unexpected delight Sign guided tour — Amanda Sakuma Senior Editor GEN thing writer complain writing able write Howto book Twitter thread conference panel often filled screed writer’s block stumble upon truly unique insightful insight wellworn topic feel truly revelatory That’s exactly felt read Alexander Chee’s frank empathic investigation writer’s block exists stop Chee argues writer’s block doesn’t come creative well running dry instead arises “the fear humiliating yourself” — nagging worry write something dumb wrong people stop loving start hating Shame embarrassment often overlooked amid discussion ambition imagination work ethic Chee wisely note deepest simplest emotion often root feeling stuck easily compound upon “It’s hard enough problem without also ashamed problem” Chee writes well useful writing advice it’s good reminder anyone who’s lived 2020 It’s often liberating first admit embarrassed problem — loneliness aimlessness grief — simply hope push past — JeanLuc Bouchard Senior Platform Editor MarkerTags Writing Wrap Writing Tips Creativity Creators |
1,782 | This 20 Year Old Made 1 Million in 8 Minutes. How Can You Replicate That? | The Great ‘One Million’ Sale
Photo by Sharon McCutcheon on Unsplash
It was in the first quarter of 2020, MoonXCosmetics were not operating at full fledge. Due to lack of workforce and minimal operations at the manufacturing hub.
MoonXCosmetics were dealing with orders which outran their production and shipping capabilities.The entire online store was blocked for some time. While people were still eagerly looking for skin products from MoonXCosmetics.
In April 2020, Mariee decided to restock her site and make products available for her customers. On April 30 2020, as soon as the stock was live — thousands of people jumped onto the store.
Just in the matter of 8 minutes. She was able to get 1300+ orders which valued over one million dollars in sales. The biggest reasons why her store was able to do well was:-
Unintended Scarcity That Led To Hype
MoonXCosmetics was a million dollar brand even before the famous ‘One Million Dollar Sale’. But Mariee always struggled to manage and operate her business with a small team.
This led to delivering orders quite late to her customers, and sometimes people never got their products delivered. There are many reviews and complaints from customers online — Regarding that they never got their products even months after ordering them.
Screenshot 1 (Source: Twitter)
MoonXCosmetics Instagram Post (Source: Instagram)
These negative reviews and opinions could have taken a worse turn for Mariee’s business. But it didn’t happen, because customers loved her products and were after her products.
Screenshot 2 (Source: Twitter)
Mariee had been making natural skincare solutions since 2017. Over the years, people have reported to her regarding the skincare solutions — which didn’t work on them. Mariie made multiple changes in her recipe to make it work for her customers.
She created her skincare line to suit every kind of skin. The best selling product was ‘Rose Galore’. People loved it and are mad after this product. Because it proved to work on them and they didn’t wanna lose such valuable skincare solution.
Due to the slow production and shipping capabilities. It meant that the product was scarce and would be available for a shorter periods on site. Thus this created an unintended scarcity around people.
Customers who got their skincare solution felt special and obviously were benefited from it’s advantage. As a founder Mariee used this scarcity to build hype for April restock which led to the one million dollar sale.
Mariee’s Personal Branding on Social Media
Screenshot of Mariee’s Instgram account (Source: Instagram)
Mariee is the brand face of ‘MoonXCosmetics’. Today we see a lot of entrepreneurs use their social media handles to talk about their brand and business.
The more prominent example of personal branding on today’s social media is Elon Musk. His twitter has over 39 Million followers. He shares everything regarding his businesses, and it’s progress.
Screenshot of Elon Musk’s Twitter Tweets (Source: Twitter)
This is the reason why most of us see Elon Musk in a news headline every other week. His words and his actions on social media get him that kind of media coverage.
On the other hand, Mariee Revere has replicated this concept of personal branding on a small scale through her various social media accounts:-
YouTube (par moon) — 27K subscribers Instagram (parmoonx) — 75K followers Twitter (@parmoonx) — 30K followers
Customer believes in her products because of her authentic behaviour on social media. Mariee let’s customers know what they are buying and from whom they are buying from.
Building Transparency = Building a Community
These days where brands utilize influencers and professional marketing strategies to make millions. MoonXCosmetics, as a online brand is a mixture of both professional and authentic branding.
MoonXCosmetics did used to run some high paid influencer campaigns in the past that got them customers. As of today if you check their Instagram account (moonxcosmeticsllc). Around 20–30% of their content is user generated. The brand is focused on showing you the results and the products which are helping people to get that results.
MoonXCosmetics Instagram Feed (Source: Instagram)
It’s not only on Instagram, but this also reverts backs to how Mariee is leveraging personal branding through her social media. She shares everything on her social media.
On December 25 2019, Mariee shared her tweet that says,
I paid my grandma mortgage for all of 2020 and paid off my mom’s credit card debt
Screednshot (Source: Tweet)
This tweet and the video has more one million views, and people loved what they saw. As a MoonXCosmetics customer, I would feel like I knew Mariee and her brand more personally just by looking at what she shares on social media. (That’s Building Transparency)
Moving onto the business side of trust. Mariee runs her vlogging channel on YouTube, which ultimately takes you through the behind the scenes of the buisness and Mariee’s daily life.
Watching her YouTube videos shows you how genuine Mariee is with her products and customers. | https://rahulthakursingh.medium.com/this-20-year-old-made-1-million-in-just-8-minutes-how-can-you-replicate-that-96d4c844afd9 | ['Thakur Rahul Singh'] | 2020-10-09 13:48:05.725000+00:00 | ['Business Strategy', 'Business', 'Startup', 'Marketing', 'Marketing Strategies'] | Title 20 Year Old Made 1 Million 8 Minutes Replicate ThatContent Great ‘One Million’ Sale Photo Sharon McCutcheon Unsplash first quarter 2020 MoonXCosmetics operating full fledge Due lack workforce minimal operation manufacturing hub MoonXCosmetics dealing order outran production shipping capabilitiesThe entire online store blocked time people still eagerly looking skin product MoonXCosmetics April 2020 Mariee decided restock site make product available customer April 30 2020 soon stock live — thousand people jumped onto store matter 8 minute able get 1300 order valued one million dollar sale biggest reason store able well Unintended Scarcity Led Hype MoonXCosmetics million dollar brand even famous ‘One Million Dollar Sale’ Mariee always struggled manage operate business small team led delivering order quite late customer sometimes people never got product delivered many review complaint customer online — Regarding never got product even month ordering Screenshot 1 Source Twitter MoonXCosmetics Instagram Post Source Instagram negative review opinion could taken worse turn Mariee’s business didn’t happen customer loved product product Screenshot 2 Source Twitter Mariee making natural skincare solution since 2017 year people reported regarding skincare solution — didn’t work Mariie made multiple change recipe make work customer created skincare line suit every kind skin best selling product ‘Rose Galore’ People loved mad product proved work didn’t wanna lose valuable skincare solution Due slow production shipping capability meant product scarce would available shorter period site Thus created unintended scarcity around people Customers got skincare solution felt special obviously benefited it’s advantage founder Mariee used scarcity build hype April restock led one million dollar sale Mariee’s Personal Branding Social Media Screenshot Mariee’s Instgram account Source Instagram Mariee brand face ‘MoonXCosmetics’ Today see lot entrepreneur use social medium handle talk brand business prominent example personal branding today’s social medium Elon Musk twitter 39 Million follower share everything regarding business it’s progress Screenshot Elon Musk’s Twitter Tweets Source Twitter reason u see Elon Musk news headline every week word action social medium get kind medium coverage hand Mariee Revere replicated concept personal branding small scale various social medium account YouTube par moon — 27K subscriber Instagram parmoonx — 75K follower Twitter parmoonx — 30K follower Customer belief product authentic behaviour social medium Mariee let’s customer know buying buying Building Transparency Building Community day brand utilize influencers professional marketing strategy make million MoonXCosmetics online brand mixture professional authentic branding MoonXCosmetics used run high paid influencer campaign past got customer today check Instagram account moonxcosmeticsllc Around 20–30 content user generated brand focused showing result product helping people get result MoonXCosmetics Instagram Feed Source Instagram It’s Instagram also reverts back Mariee leveraging personal branding social medium share everything social medium December 25 
2019 Mariee shared tweet say paid grandma mortgage 2020 paid mom’s credit card debt Screednshot Source Tweet tweet video one million view people loved saw MoonXCosmetics customer would feel like knew Mariee brand personally looking share social medium That’s Building Transparency Moving onto business side trust Mariee run vlogging channel YouTube ultimately take behind scene buisness Mariee’s daily life Watching YouTube video show genuine Mariee product customersTags Business Strategy Business Startup Marketing Marketing Strategies |
1,783 | The Seven Habits of Highly Creative People | Photo by Ricardo Rocha on Unsplash
There are two myths that are often perpetuated about creative people. One — that they’re all “artsy”. Two — that they’re all unpredictable.
We tend to associate creativity with the arts, and for good reason. Art is by definition born out of creation, be it in the form of a sculpture, a poem, a symphony or a photograph, and people who craft those are undoubtedly creative. It’s wrong, however, to think of creativity as restricted to the arts. Creativity refers to the ability to come up with new things or new ways to look at things, and that’s a desirable quality to have regardless of which industry one works in. So those of us who come up with new business ideas, new processes, new product formulae and new people management methods are undoubtedly creative — as much as the sculptors and writers among us.
This brings me to the second myth, about unpredictability. We often think that creative people live life entirely as it comes — no schedules, no timings, just giving in to their creativity as and when it chooses to emerge. That’s wrong. While they certainly don’t live like automatons, they recognise the importance of gaining control over their creativity so that it can be channelised into the right pursuits, at the right time. Creativity has an inherent aversion towards too much control, but like any other human trait, it’s at its best when mastered. The most creative among us, therefore, display a number of healthy habits that help them use their creativity in the most efficient and enjoyable way, and which are listed as follows:
· They have a routine — yes, it’s tempting to think of creativity as an excuse to ditch discipline. Too many rules can suffocate the spontaneity of creativity, no doubt. But the most creative people are so because they set aside time specifically to be creative. They know that they have other things to do all day — jobs, chores, family time and socialising — so they manage their time such that everything else gets done, and they also have a chunk of time to devote wholeheartedly to what they do best.
· They know when to break the routine — when the next big idea hits, it’s often without warning and it’s likely to drift off without warning too. That’s why creative people don’t wait — when it’s truly essential to capture or expand on an idea, they put everything else aside and do so, even if it’s just a quick note or a sketch.
· They write things down — creative minds know the power of the pen. Be it lists or mind maps, bullet journals or paragraphs, creative people always have paper handy to capture ideas and random thoughts when they float by. Digital aids help, of course, but any creative mind will tell you that they’d far rather grab a notebook and pen than an iPad when inspiration strikes during the day.
· They get their sleep — those caffeine-fuelled 72-hour working marathons are few and far between, and for good reason. Sleep deprivation has multiple ill-effects, and one of them is causing the brain to become sluggish, which leads to low energy levels and reduced productivity in the short run as well as the long run. Creative people know when it’s time to call it a day and resume their work next morning — and the handy notes and diagrams they sketch will help preserve their ideas until it’s time to start again.
· They are curious about everything — creative minds get their fuel from exploring the world around them. Inspiration lurks everywhere — a nature documentary could trigger an idea for an environment-friendly innovation, and a news photo could spark the next war novel. Creative people know this, which is why they are constantly reading, learning, watching, listening and asking.
· They aren’t afraid of failure — not every great idea materialises into something lasting, and creative people are okay with that. Tying creativity to ultimate success will stifle the natural urge to try new things. What is important is to keep ideating and experimenting so that the perfect idea — the one that will translate into a finished creation — can come along.
· They know how to be happily un-creative — too much of anything is bad, and creativity is no exception. Highly creative people know this, which is why they know the importance of switching off their “creative mode” and indulging in simple, fun activities like watching Netflix shows, going for walks, meeting friends and cooking favourite meals. The mind, like the body, needs the occasional vacation. It’s the best protection against burnout.
If you’re a creative person who struggles to use that creativity optimally, try adopting the habits listed above, one by one. By bringing more method and organisation into your life, you’ll find that you have better control over how and when to engage in creative pursuits, while also allowing your creativity enough leeway to run free and play with ideas that may just develop into your next masterpiece. | https://medium.com/maice/the-seven-habits-of-highly-creative-people-eb18cd83cc5b | ['Deya Bhattacharya'] | 2018-09-24 11:01:21.079000+00:00 | ['Productivity', 'Creativity', 'Maice', 'Time Management'] | Title Seven Habits Highly Creative PeopleContent Photo Ricardo Rocha Unsplash two myth often perpetuated creative people One — they’re “artsy” Two — they’re unpredictable tend associate creativity art good reason Art definition born creation form sculpture poem symphony photograph people craft undoubtedly creative It’s wrong however think creativity restricted art Creativity refers ability come new thing new way look thing that’s desirable quality regardless industry one work u come new business idea new process new product formula new people management method undoubtedly creative — much sculptor writer among u brings second myth unpredictability often think creative people live life entirely come — schedule timing giving creativity chooses emerge That’s wrong certainly don’t live like automaton recognise importance gaining control creativity channelised right pursuit right time Creativity inherent aversion towards much control like human trait it’s best mastered creative among u therefore display number healthy habit help use creativity efficient enjoyable way listed follows · routine — yes it’s tempting think creativity excuse ditch discipline many rule suffocate spontaneity creativity doubt creative people set aside time specifically creative know thing day — job chore family time socialising — manage time everything else get done also chunk time devote wholeheartedly best · know break routine — next big idea hit it’s often without warning it’s likely drift without warning That’s creative people don’t wait — it’s truly essential capture expand idea put everything else aside even it’s quick note sketch · write thing — creative mind know power pen list mind map bullet journal paragraph creative people always paper handy capture idea random thought float Digital aid help course creative mind tell they’d far rather grab notebook pen iPad inspiration strike day · get sleep — caffeinefuelled 72hour working marathon far good reason Sleep deprivation multiple illeffects one causing brain become sluggish lead low energy level reduced productivity short run well long run Creative people know it’s time call day resume work next morning — handy note diagram sketch help preserve idea it’s time start · curious everything — creative mind get fuel exploring world around Inspiration lurks everywhere — nature documentary could trigger idea environmentfriendly innovation news photo could spark next war novel Creative people know constantly reading learning watching listening asking · aren’t afraid failure — every great idea materialises something lasting creative people okay Tying creativity ultimate success stifle natural urge try new thing important keep ideating experimenting perfect idea — one translate finished creation — come along · know happily uncreative — much anything bad creativity exception Highly creative people know know importance switching “creative mode” indulging simple fun activity like 
watching Netflix show going walk meeting friend cooking favourite meal mind like body need occasional vacation It’s best protection burnout you’re creative person struggle use creativity optimally try adopting habit listed one one bringing method organisation life you’ll find better control engage creative pursuit also allowing creativity enough leeway run free play idea may develop next masterpieceTags Productivity Creativity Maice Time Management |
1,784 | 2018 Projects & Showreel | Every few years, our talented team of 2D / 3D designers & animators work towards putting together a video that showcases our latest work (it also gives them the opportunity to show off their skills). With 2018 now just 2 months away from drawing to a close, we felt it would be a good time to unveil our showreel containing some of the fantastic projects we were fortunate enough to work on this year.
Starting off the video is The Body Shop, a globally renowned brand in the cosmetics, skin care and perfume space. For this project, we custom built a robust E-commerce web application that conformed to the global standards and guidelines set forth by the brand, while providing the local bodies enough flexibility to cater to the region’s specific needs. The site was built to be optimized for marketing campaigns, SEO and conversion and provided the administrators a back-end system where they can moderate and maintain every aspect of the product.
We offer a fully custom solution built to your needs, covering everything from inventory management to payment processing.
Next up is the MIT Innovation Ecosystems web application. MIT is a name that needs no further introduction. We had a fantastic experience working with them on this project, which entailed designing & developing a web application that allows MIT students and faculty members to build custom graphs & reports from 3 decades worth of data for 180 countries using 40 different metrics. These can be plotted in various ways, compared against each other, exported and downloaded to be opened in Excel, or as an image file to be used in presentations. MIT is using this tool in Masters and P.H.D. level economics classes as a tool for research & learning.
Then we have iEvent, a SaaS product that allows event managers to dynamically create beautiful and customized applications (for iOS & Android) for their events. It even generates a microsite for the event as well, using all the data that is fed in through the self-serving back-end panel. Once the event manager uploads all the event information such as agenda, speakers, attendees, map, etc. (there are over 20 different categories available), the product automatically generates an iOS & Android app which is submitted to the App Store & Play Store. And of course, the colors & UI of the application & microsite are fully controllable from the back-end panel as well, ensuring that your event application is perfectly in sync with the brand guidelines.
Our team of user-experience designers can turn a project brief into a visual prototype, collaborating with you every step of the way.
Then we have DocLock — a fantastic new way to share documents in a secure manner. We were approached by a serial entrepreneur looking to disrupt the document sharing workspace with a fantastic and novel idea. We designed and developed an application where users can share documents with each other, and specify a geo-location where this document is accessible. Complete with enterprise grade security protocols and document access tracking, users of DocLock can share sensitive documents with the peace of mind that their documents will not leave their office premises (or other specified safe-zones). As we move forward with this product, we are embarking on the next step of bringing DocLock onto the Blockchain platform.
We then got to embark on a fun new style of project, different from anything we had built previously. In this B2C space, we built PicTakToe for a group of entrepreneurs. This E-commerce product aims to take people’s memories out of their phone camera rolls and into their hands, or onto their walls. With PicTakToe, the customers can go to the web site either from any of their devices, and upload their pictures. They can then design their own stunning Photo Books, Canvases or select from available Frame styles. There are dozens of themes and templates to choose from and place your order, which is then delivered to your doorstep. In just its first 3 months, PicTakToe has taken the country by storm.
Need to build an ecommerce app?
We offer a fully custom solution built to your needs, covering everything from inventory management to payment processing.
Contact Us
For GMC Sierra, we built a Facebook Hub. We custom designed & animated a 3D model replica of their latest truck, brought it to life and put it on the web for users on social media to be able to get a real feel for the product. On this Facebook Hub, users are able to browse the different features, a gallery of interactive elements and find out what people are saying on the different social networks about the truck.
Next up, we worked with some incredible minds over at John Hopkins University & Emory University. In order to spread awareness for getting the Flu Shot (as flu season is quickly upon us), the Moms Talk Shots web application provides a quick survey for expecting moms, or new moms to take. Based on the answers they provide, the survey adapts and asks further follow-up questions. Then, based on the final answers of the survey, the user is shown a series of videos which are most relevant to them. At the end, they are given a discount coupon for Walgreens where they can receive a flu shot, or any other medication they require. The system also sends follow-up surveys, custom email reminders, and much more. This entire system is controllable via a dynamic back-end through which the admins at John Hopkins University can build their own surveys, detail the rules based on which different videos are served, create follow-up email templates, setup rules based on which reminder emails and follow-ups are sent, and much much more. This project was an incredible exercise in building systems which can adapt to several different use cases, and scale accordingly.
We replace old enterprise implementations with the latest technology, custom built for better scale, security, usability and value.
Finally, we cap it all off with some VR fun! We designed and developed 3 VR games, VR Cricket, VR Food Truck & VR Basketball. This was a study in Virtual Reality for us and we learned an immense amount putting it all together. Our VR Cricket & Basketball apps were downloaded over 50,000 times and well received by users across the board. We were pleased to hear all the feedback, especially considering these were just fun experiments for us, as we embark on bigger enterprise grade VR Projects in which we are tackling user training scenarios for on-site workers. We are extremely excited about the direction and potential of VR and AR in the years ahead.
Need to build an enterprise grade product?
We replace old enterprise implementations with the latest technology, custom built for better scale, security, usability and value.
Contact Us
Wrapping Up
These are just a few of the fantastic products we had the pleasure of working on in 2018 with some incredible clients and partners. Growth and learning are two core pillars of our ethos at Cygnis Media and we are extremely proud of the fine work our team of men & women have put forth in 2018. We can’t wait to see what we create in 2019! Stay tuned. | https://medium.com/cygnis-media/2018-projects-showreel-21bf6dab9949 | ['Cygnis Media'] | 2018-11-08 06:57:11.722000+00:00 | ['Mobile App Development', 'Entrepreneurship', 'Web Development', 'Virtual Reality'] | Title 2018 Projects ShowreelContent Every year talented team 2D 3D designer animator work towards putting together video showcase latest work also give opportunity show skill 2018 2 month away drawing close felt would good time unveil showreel containing fantastic project fortunate enough work year Starting video Body Shop globally renowned brand cosmetic skin care perfume space project custom built robust Ecommerce web application conformed global standard guideline set forth brand providing local body enough flexibility cater region’s specific need site built optimized marketing campaign SEO conversion provided administrator backend system moderate maintain every aspect product offer fully custom solution built need covering everything inventory management payment processing Next MIT Innovation Ecosystems web application MIT name need introduction fantastic experience working project entailed designing developing web application allows MIT student faculty member build custom graph report 3 decade worth data 180 country using 40 different metric plotted various way compared exported downloaded opened Excel image file used presentation MIT using tool Masters PHD level economics class tool research learning iEvent SaaS product allows event manager dynamically create beautiful customized application iOS Android event even generates microsite event well using data fed selfserving backend panel event manager uploads event information agenda speaker attendee map etc 20 different category available product automatically generates iOS Android app submitted App Store Play Store course color UI application microsite fully controllable backend panel well ensuring event application perfectly sync brand guideline team userexperience designer turn project brief visual prototype collaborating every step way DocLock — fantastic new way share document secure manner approached serial entrepreneur looking disrupt document sharing workspace fantastic novel idea designed developed application user share document specify geolocation document accessible Complete enterprise grade security protocol document access tracking user DocLock share sensitive document peace mind document leave office premise specified safezones move forward product embarking next step bringing DocLock onto Blockchain platform got embark fun new style project different anything built previously B2C space built PicTakToe group entrepreneur Ecommerce product aim take people’s memory phone camera roll hand onto wall PicTakToe customer go web site either device upload picture design stunning Photo Books Canvases select available Frame style dozen theme template choose place order delivered doorstep first 3 month PicTakToe taken country storm Need build ecommerce app offer fully custom solution built need covering everything inventory management payment processing Contact Us GMC Sierra built Facebook Hub custom designed animated 3D model replica latest truck brought life put web user social 
medium able get real feel product Facebook Hub user able browse different feature gallery interactive element find people saying different social network truck Next worked incredible mind John Hopkins University Emory University order spread awareness getting Flu Shot flu season quickly upon u Moms Talk Shots web application provides quick survey expecting mom new mom take Based answer provide survey adapts asks followup question based final answer survey user shown series video relevant end given discount coupon Walgreens receive flu shot medication require system also sends followup survey custom email reminder much entire system controllable via dynamic backend admins John Hopkins University build survey detail rule based different video served create followup email template setup rule based reminder email followup sent much much project incredible exercise building system adapt several different use case scale accordingly replace old enterprise implementation latest technology custom built better scale security usability value Finally cap VR fun designed developed 3 VR game VR Cricket VR Food Truck VR Basketball study Virtual Reality u learned immense amount putting together VR Cricket Basketball apps downloaded 50000 time well received user across board pleased hear feedback especially considering fun experiment u embark bigger enterprise grade VR Projects tackling user training scenario onsite worker extremely excited direction potential VR AR year ahead Need build enterprise grade product replace old enterprise implementation latest technology custom built better scale security usability value Contact Us Wrapping fantastic product pleasure working 2018 incredible client partner Growth learning two core pillar ethos Cygnis Media extremely proud fine work team men woman put forth 2018 can’t wait see create 2019 Stay tunedTags Mobile App Development Entrepreneurship Web Development Virtual Reality |
1,785 | Joint Friendly Fitness: Make Light Weights Feel Heavy | Perform The Big Movements Last
I’ve spoken plenty about how your training should be centered around the big, compound lifts — press, squat, row, and deadlift variations are all staples in a sound strength training program.
Traditionally, these movements are performed first in a workout, when we’re at our “freshest”, so that we can move the heaviest weight possible. After this main lift is done, we move onto the accessory movements like curls and side lateral raises; the more “bodybuilder” style exercises.
This is sound logic, and it’s clearly the way to go if your one and only objective is to move as much weight as humanly possible on that given day. But a great way to make things a bit more “joint friendly” while still getting a productive workout is to occasionally flip the exercise order of your workout on its head.
By performing those accessory movements first, you’re pre-exhausting the muscles, which means when it comes time to perform that compound exercise at the end of your workout, you’re not going to be able to move as much weight as you normally could. This is a good thing from a joint health standpoint, because you can still train hard and reap the muscle building benefits that these compound lifts provide, but with less weight.
If you’re a competitive powerlifter training for a contest, this may not be ideal for you — but for pretty much anybody else, this is a great strategy to cycle into your training every few months to give your joints a break from all of the heavy lifting you’re otherwise performing throughout the year.
I like performing my compound lifts in this fashion for about 6 weeks at a time, every 3 or 4 months. | https://medium.com/datadriveninvestor/joint-friendly-fitness-make-light-weights-feel-heavy-d8ee4caec553 | ['Zack Harris'] | 2020-12-27 16:05:12.401000+00:00 | ['Health', 'Wellness', 'Fitness', 'Life', 'Self Improvement'] | Title Joint Friendly Fitness Make Light Weights Feel HeavyContent Perform Big Movements Last I’ve spoken plenty training centered around big compound lift — press squat row deadlift variation staple sound strength training program Traditionally movement performed first workout we’re “freshest” move heaviest weight possible main lift done move onto accessory movement like curl side lateral raise “bodybuilder” style exercise sound logic it’s clearly way go one objective move much weight humanly possible given day great way make thing bit “joint friendly” still getting productive workout occasionally flip exercise order workout head performing accessory movement first you’re preexhausting muscle mean come time perform compound exercise end workout you’re going able move much weight normally could good thing joint health standpoint still train hard reap muscle building benefit compound lift provide le weight you’re competitive powerlifter training contest may ideal — pretty much anybody else great strategy cycle training every month give joint break heavy lifting you’re otherwise performing throughout year like performing compound lift fashion 6 week time every 3 4 monthsTags Health Wellness Fitness Life Self Improvement |
1,786 | How to Get Cloud Certified???? | Over the past few years, the cloud computing industry has generated a lot of interest and investment. Cloud computing has become an integral part of the IT infrastructure for many companies worldwide. Industry analysts report that the cloud computing industry has grown swiftly over recent years.
According to Wikibon, the Amazon Web Services (AWS) revenue will climb to $43 billion by 2022 with Microsoft Azure and Google Cloud close behind. As cloud computing becomes critical to IT and business in general, the demand for cloud skills will increase. Aspiring cloud professionals must prove that they have the skills and knowledge to be able to compete favorably in the market, and a cloud certification is the best way to do that.
The 3 Most Valuable Cloud Certifications to Have on Your Résumé
▸ AWS Certified Solutions Architect — Professional
▸Microsoft Certified Azure Solutions Architect Expert
▸Google Cloud Certified — Professional Cloud Architect
My Background: I am Cloud and Big Data Enthusiast , I am here because I love to Talk about Cloud. I am 11x Cloud Certified Expert. 4x AWS Certified , 3x Oracle Cloud Certified , 3x Azure Certified 1x Alibaba Cloud Certified .
How to Get Started.
▸I started my journey into the cloud last year around june 2018. I was always interested in cloud technologies and found it fascinating. I was surfing a lot of random articles and blogs carrying information about cloud .
▸I found that In Order to learn AWS , I have to practice all the AWS Services and thus I quickly signed up on the AWS .
▸I used to spend almost 3 to 4 hours daily trying to practice the various AWS Services and it Helped me a lot in Understanding the AWS Architecture.
To know more about my journey into the aws cloud refer to this article.
Getting Started with AWS Educate.
▸AWS has recently introduced it’s new learning platform called AWS Educate and it is best if anyone wants to start with AWS .
▸AWS Educate is Amazon’s new Learning Platform to Learn Amazon Web Services and it is basically free for all students and Educators who want to learn AWS Services and Explore all the new Stuff which AWS has to Offer that to Absolutely Free without any charges .
▸If you’re a student, you can benefit with no-cost, at-home learning opportunities through AWS Educate Cloud Career Pathways and specialty badges, and online workshops and webinars to help you continue to build cloud skills.
▸AWS Credits for Students
(renews annually)
▸AWS Educate Starter Account with $75 in AWS Promotional Credits if your school is an AWS Educate member institution
▸AWS Educate Starter Account with $30 in AWS Promotional Credits if your school is not an AWS Educate member institution.
To Know more refer to this article.
Tips and Tricks for everyone to get Microsoft Azure Certified within days for free.
For anyone wanting to start their journey into the Microsoft Azure , now is the perfect time as Microsoft Azure is Offering free Azure Training and AZ-900 Exam Voucher.
What are the steps required to prepare for Microsoft Certified: Azure Administrator Associate Exam ?
How to prepare and pass the AZ-300 Exam & AZ-301 Exam ?
To Know all about getting Azure Certified refer to my article.
Tips and Tricks for everyone to get oracle cloud infrastructure (OCI) Cerified within days for free.
Oracle Cloud Infrastructure certifications carry great value and can increase your salary exponentially. Normally these costs around $230 — $250. The list of the certifications are as follows: –
To know more refer to my article.
Why to Get Certified?
A certification in cloud computing implies that you are skilled to help your organization reduce risks and costs to implement your workloads and projects on different cloud platforms. This will provide opportunities for cloud-related projects, and your clients will see you as a credible subject matter expert.
Why Hands-On Labs are Important.
Hands-on labs help you learn faster and better because the lab offers the ability to practice and experiment. You do not need your own account credentials to use the hands-on labs. When you start a hands-on lab, the system launches all the necessary resources and starts the timer.
I hope that this guide helps you in building your career with Cloud and getting Cloud Certified,
If you have any doubt or unable to understand any concept feel free to contact me on
LinkedIn :https://www.linkedin.com/in/adit-modi-2a4362191/
Instagram :https://www.instagram.com/adit_aweesome/
Twitter : https://twitter.com/adi_12_modi
Github : https://github.com/AditModi
You can view my badges on:
https://www.youracclaim.com/users/adit-modi/badges
I also am working on various AWS Services and Developing various Cloud , Big Data & Devops Projects.
If you are interested in learning AWS Services then follow me on github.
If you liked this content then do clap and share it . Thank You . | https://medium.com/analytics-vidhya/how-to-get-cloud-certified-4cf8888edc3f | ['Adit Modi'] | 2020-12-18 16:19:17.521000+00:00 | ['Cloud Certification', 'Certification', 'AWS', 'Cloud Computing', 'Cloud'] | Title Get Cloud CertifiedContent past year cloud computing industry generated lot interest investment Cloud computing become integral part infrastructure many company worldwide Industry analyst report cloud computing industry grown swiftly recent year According Wikibon Amazon Web Services AWS revenue climb 43 billion 2022 Microsoft Azure Google Cloud close behind cloud computing becomes critical business general demand cloud skill increase Aspiring cloud professional must prove skill knowledge able compete favorably market cloud certification best way 3 Valuable Cloud Certifications Résumé ▸ AWS Certified Solutions Architect — Professional ▸Microsoft Certified Azure Solutions Architect Expert ▸Google Cloud Certified — Professional Cloud Architect Background Cloud Big Data Enthusiast love Talk Cloud 11x Cloud Certified Expert 4x AWS Certified 3x Oracle Cloud Certified 3x Azure Certified 1x Alibaba Cloud Certified Get Started ▸I started journey cloud last year around june 2018 always interested cloud technology found fascinating surfing lot random article blog carrying information cloud ▸I found Order learn AWS practice AWS Services thus quickly signed AWS ▸I used spend almost 3 4 hour daily trying practice various AWS Services Helped lot Understanding AWS Architecture know journey aws cloud refer article Getting Started AWS Educate ▸AWS recently introduced it’s new learning platform called AWS Educate best anyone want start AWS ▸AWS Educate Amazon’s new Learning Platform Learn Amazon Web Services basically free student Educators want learn AWS Services Explore new Stuff AWS Offer Absolutely Free without charge ▸If you’re student benefit nocost athome learning opportunity AWS Educate Cloud Career Pathways specialty badge online workshop webinars help continue build cloud skill ▸AWS Credits Students renews annually ▸AWS Educate Starter Account 75 AWS Promotional Credits school AWS Educate member institution ▸AWS Educate Starter Account 30 AWS Promotional Credits school AWS Educate member institution Know refer article Tips Tricks everyone get Microsoft Azure Certified within day free anyone wanting start journey Microsoft Azure perfect time Microsoft Azure Offering free Azure Training AZ900 Exam Voucher step required prepare Microsoft Certified Azure Administrator Associate Exam prepare pas AZ300 Exam AZ301 Exam Know getting Azure Certified refer article Tips Tricks everyone get oracle cloud infrastructure OCI Cerified within day free Oracle Cloud Infrastructure certification carry great value increase salary exponentially Normally cost around 230 — 250 list certification follows – know refer article Get Certified certification cloud computing implies skilled help organization reduce risk cost implement workload project different cloud platform provide opportunity cloudrelated project client see credible subject matter expert HandsOn Labs Important Handson lab help learn faster better lab offer ability practice experiment need account credential use handson lab start handson lab system launch necessary resource start timer hope guide help building career Cloud getting Cloud Certified doubt unable understand concept feel free contact LinkedIn httpswwwlinkedincominaditmodi2a4362191 Instagram httpswwwinstagramcomaditaweesome 
Twitter httpstwittercomadi12modi Github httpsgithubcomAditModi view badge httpswwwyouracclaimcomusersaditmodibadges also working various AWS Services Developing various Cloud Big Data Devops Projects interested learning AWS Services follow github liked content clap share Thank Tags Cloud Certification Certification AWS Cloud Computing Cloud |
1,787 | Tackling Mitochondria-Based Diseases | by Jackie Swift
Inside almost every cell in our bodies live little powerhouses known as mitochondria. These tiny organelles, with their own genome, primarily produce adenosine triphosphate (ATP), the fuel on which your cells depend in order to function. If something goes wrong in the chain of mitochondrial electron transport components that ultimately produce ATP, disease results.
“ATP production is the only system in the body that is under dual genetic control,” says Joeva J. Barrow, Nutritional Sciences at Cornell University. “Your nuclear genes and your mitochondrial genes work together to make the system functional. Any defect in either genome leads to disease because if you can’t produce enough ATP, then you don’t have enough energy in your body, and your cells begin to die. Typically tissues that are very energetic and require a lot of ATP, like the brain, heart, and muscles, are most susceptible.”
Mitochondrial Disorders, What Are They
There is no cure for mitochondrial disorders, which are hard to diagnose and impossible to treat. They result in complex diseases that are hardly household names, such as mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) and Leber’s hereditary optic neuropathy (LHON), yet they are more common than most people realize. One in 4,500 people suffer from a mitochondrial disease, and one in 200 show no symptoms but carry a mitochondrial mutation potentially able to trigger disease later in life or when passed on to the next generation. These asymptomatic carriers are all women, since mitochondria are maternally inherited.
Photo Credit: Dave Burbank
To understand the processes that cause mitochondrial disease, as well as potential treatments, the Barrow lab depends on unbiased, high-throughput screening mechanisms, such as small molecule chemical targeting and genome-wide CRISPR-Cas9 gene ablations. “Our goal is to identify any genes or proteins that may be linked to mitochondrial bioenergetics, then significantly leverage them to see if we can push them toward therapy,” Barrow says.
Studying the Genetics and Biochemistry Underlying Mitochondrial Disorders
The researchers use a combination of cell and mouse models, in addition to tissue from patients, to explore the genetics and biochemistry behind these diseases. “Our typical experiments start off with seeing how long we can keep cells with damaged mitochondria alive,” Barrow says. “We put them under certain nutrient conditions we know will kill them because they can’t make ATP. Then we try to promote survival by treating them with small molecules or by modifying certain genes.”
Once they’ve established which compounds can rescue the cells, Barrow and her collaborators move on to the discovery phase of their research. “We have to figure out exactly what the compound does,” Barrow says. “What is it binding? How is it targeting this function? How is it boosting ATP production? To maximize the potential for therapy, we need to answer questions like those. At the same time, we might discover other additional factors that show therapeutic potential along the way.”
“My lab is looking at genetic and molecular components to discover if some people have a predisposition that makes them more or less obese.”
Barrow is following up on her earlier work as a postdoctoral researcher at Harvard University, where she profiled 10,015 small molecules — naturally occurring and synthesized compounds that target various proteins in the body. She and her colleagues identified more than 100 promising chemical compounds. Now her lab is characterizing them to evaluate their ability to correct mitochondrial damage, specifically in muscle cells. So far, a significant subset has a positive effect, and the researchers are trying to pin down exactly how they work.
Obesity and Metabolic Diseases, Mitochondria-Related
Continuing her research connected to mitochondria, Barrow also explores metabolic disease in the context of obesity. Worldwide, 1.9 billion people, or one in three, are overweight, and 41 million of them are children under the age of five. With obesity comes associated metabolic diseases such as cancer, cardiovascular disease, and hypertension.
“Every year we do the statistics on obesity, and no matter how much we counsel on diet and exercise, no matter how easy it should be to maintain an energetic balance, something is amiss,” Barrow says. “So my lab is looking at genetic and molecular components to discover if some people have a predisposition that makes them more or less obese or to see if we can take advantage of the molecular system to increase energy expenditure. This could offer another form of therapy to fight against obesity in conjunction with diet and exercise.”
Photo Credit: Dave Burbank
Thermogenic Fat
The researchers have turned their attention to thermogenic fat. This subset of fat cells, also called brown and beige fat, is prevalent in animals that go through hibernation, but scientists recently discovered it in humans as well. “Brown and beige fat don’t only store fat molecules, like white fat does, they have a special ability to burn them to produce heat,” Barrow explains.
Thermogenic fat has a protein known as uncoupling protein 1 that pokes a hole in the membrane of mitochondria, allowing protons to leak out. These protons are part of a proton gradient that is integral to the production of ATP. Without them, mitochondria are no longer able to effectively make the chemical. “Your body’s response is to start burning everything it can to try to maintain the proton gradient,” Barrow says. “And as a result, your energy expenditure goes through the roof.”
Brown fat is prevalent in newborn humans where it serves to keep infants from going into thermal shock as they exit from maternal body temperature to the much colder temperature outside the womb. Later, other mechanisms, such as shivering, serve to keep adults warm while maintaining their body weight. “But adults still have brown fat that we can activate to increase energy expenditure components,” Barrow explains.
Using proteomics, metabolomics, and genomics, Barrow and her colleagues seek to unveil factors that will activate brown and beige fat cells. “We have discovered a host of novel genes that are involved in turning on the thermogenic pathway that protects you against obesity,” Barrow says. “Now it will be fascinating to discover how these genes work so that they can be targeted toward therapy.”
For Barrow, who has a doctorate in biochemistry and molecular biology, with clinical expertise as a registered dietitian, mitochondria are a perfect target for research. “The mitochondria are the metabolic hub of the cell,” she says. “No matter what aspect of metabolism you study — lipids, carbohydrates, vitamins — they all feed back into whether or not you can effectively produce energy. Everything my lab works on centers around this very mighty, tiny organelle that’s so important to life.” | https://medium.com/cornell-university/tackling-mitochondria-based-diseases-4a28591f9a8d | ['Cornell Research'] | 2019-12-16 20:01:01.495000+00:00 | ['Health', 'Cornell University', 'Nutrition', 'Science', 'Medicine'] | Title Tackling MitochondriaBased DiseasesContent Jackie Swift Inside almost every cell body live little powerhouse known mitochondrion tiny organelle genome primarily produce adenosine triphosphate ATP fuel cell depend order function something go wrong chain mitochondrial electron transport component ultimately produce ATP disease result “ATP production system body dual genetic control” say Joeva J Barrow Nutritional Sciences Cornell University “Your nuclear gene mitochondrial gene work together make system functional defect either genome lead disease can’t produce enough ATP don’t enough energy body cell begin die Typically tissue energetic require lot ATP like brain heart muscle susceptible” Mitochondrial Disorders cure mitochondrial disorder hard diagnose impossible treat result complex disease hardly household name mitochondrial encephalomyopathy lactic acidosis strokelike episode MELAS Leber’s hereditary optic neuropathy LHON yet common people realize One 4500 people suffer mitochondrial disease one 200 show symptom carry mitochondrial mutation potentially able trigger disease later life passed next generation asymptomatic carrier woman since mitochondrion maternally inherited Photo Credit Dave Burbank understand process cause mitochondrial disease well potential treatment Barrow lab depends unbiased highthroughput screening mechanism small molecule chemical targeting genomewide CRISPRCas9 gene ablation “Our goal identify gene protein may linked mitochondrial bioenergetics significantly leverage see push toward therapy” Barrow say Studying Genetics Biochemistry Underlying Mitochondrial Disorders researcher use combination cell mouse model addition tissue patient explore genetics biochemistry behind disease “Our typical experiment start seeing long keep cell damaged mitochondrion alive” Barrow say “We put certain nutrient condition know kill can’t make ATP try promote survival treating small molecule modifying certain genes” they’ve established compound rescue cell Barrow collaborator move discovery phase research “We figure exactly compound does” Barrow say “What binding targeting function boosting ATP production maximize potential therapy need answer question like time might discover additional factor show therapeutic potential along way” “My lab looking genetic molecular component discover people predisposition make le obese” Barrow following earlier work postdoctoral researcher Harvard University profiled 10015 small molecule — naturally occurring synthesized compound target various protein body colleague identified 100 promising chemical compound lab characterizing evaluate ability correct mitochondrial damage specifically muscle cell far significant subset positive effect researcher trying pin exactly work Obesity Metabolic Diseases MitochondriaRelated Continuing research 
connected mitochondrion Barrow also explores metabolic disease context obesity Worldwide 19 million one three people overweight 41 million child age five obesity come associated metabolic disease cancer cardiovascular disease hypertension “Every year statistic obesity matter much counsel diet exercise matter easy maintain energetic balance something amiss” Barrow say “So lab looking genetic molecular component discover people predisposition make le obese see take advantage molecular system increase energy expenditure could offer another form therapy fight obesity conjunction diet exercise” Photo Credit Dave Burbank Thermogenic Fat researcher turned attention thermogenic fat subset fat cell also called brown beige fat prevalent animal go hibernation scientist recently discovered human well “Brown beige fat don’t store fat molecule like white fat special ability burn produce heat” Barrow explains Thermogenic fat protein known uncoupling protein 1 poke hole membrane mitochondrion allowing proton leak proton part proton gradient integral production ATP Without mitochondrion longer able effectively make chemical “Your body’s response start burning everything try maintain proton gradient” Barrow say “And result energy expenditure go roof” Brown fat prevalent newborn human serf keep infant going thermal shock exit maternal body temperature much colder temperature outside womb Later mechanism shivering serve keep adult warm maintaining body weight “But adult still brown fat activate increase energy expenditure components” Barrow explains Using proteomics metabolomics genomics Barrow colleague seek unveil factor activate brown beige fat cell “We discovered host novel gene involved turning thermogenic pathway protects obesity” Barrow say “Now fascinating discover gene work targeted toward therapy” Barrow doctorate biochemistry molecular biology clinical expertise registered dietitian mitochondrion perfect target research “The mitochondrion metabolic hub cell” say “No matter aspect metabolism study — lipid carbohydrate vitamin — feed back whether effectively produce energy Everything lab work center around mighty tiny organelle that’s important life”Tags Health Cornell University Nutrition Science Medicine |
1,788 | How to Persuade People Without Being a Scam “Artist” — The Catalyst by Jonah Berger | Introduction
Berger starts with a story about a hostage negotiator who helped a SWAT team get a criminal to come out on his own without incident. (From the perspective of Greg Vecchio, an FBI agent)
Crisis negotiation emerged after the 1972 Munich Olympics, when 11 Israeli athletes were killed. Before, it was about force. Since then, negotiators have learned to get the guy to “come out by himself.”
Everyone has something they want to change, but change is hard. Isaac Newton was the one who talked about this concept of inertia. Inertia means that people tend to do what they’ve always done.
Some people think that if you just push people, give more information, more facts, more reasons and arguments, or more force, people will change. But people are not like marbles. They push back.
In chemistry, chemists use catalysts, special substances that speed up chemical reactions. They do this not by increasing heat or pressure, but by providing an alternate route. In other words, faster change with less energy.
Being the catalyst is equally powerful in the social world. It’s not about trying to be a better persuader or be more convincing. It’s about changing minds by removing barriers.
Push people and they will snap. Tell them what to do and they probably won’t listen.
Good hostage negotiators start by listening and building trust. They encourage people to talk about their fears and motivations and who’s waiting for them at home, even pets. Great negotiators don’t push harder or increase the heat. They identify the barrier and remove it, like a catalyst.
Most people think changing minds is about presenting evidence and explaining reasons, but we forget about the person whose mind we’re trying to change.
Catalysts start with this basic question:
“Why hasn’t the person changed already? What’s blocking them?”
Sometimes change doesn’t require more horsepower. Sometimes we just need to unlock the parking brake.
The 5 principles that address roadblocks — REDUCE:
Reactance: People push back when pushed. So catalysts encourage them to persuade themselves.
Endowment: People are stuck to what they’re doing and don’t want to switch. Catalysts highlight how inaction isn’t costless.
Distance: People have an innate anti-persuasion system. New info must be within the zone of acceptance for them to listen.
Uncertainty: This makes people pause. Catalysts reduce risk.
Corroborating evidence: One person’s evidence is not enough. Catalysts find reinforcement.
In the following chapters there will be illustrations on each principle,
…From changing the boss’s mind and driving Britons to support Brexit to changing consumer behavior and getting a Grand Dragon to renounce the Ku Klux Klan.
Chapter 1 Reactance
Berger starts with story of Chuck Wolfe, asked to get teens to stop smoking in Florida.
Difficulty: warnings often become recommendations (think Tide Pods fad)
A nursing home found that residents who had more control (where to put their decorations, etc) were more cheery and active, and long term lived longer.
People need freedom/autonomy. They like feeling they have control over their choices/actions/behavior.
When others threaten or restrict people’s freedom, they get upset.
Threatening to restrict something makes it more desirable. Restriction creates a psychological effect called reactance. And this happens even when you’re asking people to do something rather than telling them not to do something.
In the absence of persuasion, people think they’re doing what THEY want.
Pushing, telling, even encouraging people to do something often backfires.
When you try to convince people, you give them an alternate explanation for their interest which threatens their perceived freedom. And they then react against persuasion and do the opposite.
This happens even when people wanted to take that action in the first place. People need to see their behavior as freely driven or it’ll backfire.
People have an anti-persuasion radar, and they’re constantly scanning for influence attempts. If they find one, they set up countermeasures, such as avoidance and ignoring the message.
And when you make a claim, people don’t take it at face value. They scrutinize and argue against those claims. They raise objections until the message falls apart.
Catalysts allow for agency. They don’t try to persuade; they get people to persuade themselves instead.
The reason the anti-smoking campaigns didn’t work before is that they always implied that they knew what was best for you and that you should listen to them.
So Chuck took a different route: He showed teenagers how the tobacco companies were trying to manipulate them in order to sell cigarettes.
He showed how the companies manipulated politics, sports, TV, etc., to make smoking seem cool.
“Here is what the industry is doing, you tell us what you want to do about it.”
Leon didn’t demand anything from the teens or tell them what to do. He left it to them to decide, and it worked. The truth campaign was so powerful that in 2002, tobacco companies tried to sue it.
4 ways to reduce reactance
1. Provide a menu: A limited set of bounded/guided options. (2–3, not 15–16)
2. Ask, don’t tell: Don’t make statements. Ex: A GRE prep course asked students how much time they thought they’d need to master the material.
3. Highlight a gap: The Smoking Kid campaign sent children to ask smokers for a light. When refused, the kids would give them a note saying “you care about my health, why not your own?” (cognitive dissonance)
4. Start with understanding: Starting by trying to influence someone makes it about you. It’s not about other people, their wants and motivations; it’s about you and what you want.
Menu-ing and questioning shift the listener’s role. Instead of thinking of counterarguments, they’re trying to think of an answer to the question and how they feel about it (their opinion).
Questions increase buy-in/commitment to the conclusion and to behaving consistently. People may not follow others’ lead, but they will follow their own ideas.
Being too forceful can backfire. You can rephrase as a question.
Before people will change, they have to be willing to listen. They have to trust the person they’re communicating with.
Seasoned negotiators don’t start with what they want; they start with whom they want to change.
Listening makes a person feel like they are a stakeholder in the relationship.
Stay in their frame, make it about them, and that lays the groundwork for influence. You become their helper, their advocate, their means to get what they want.
Use the right language: you, we. Mirror their words back to them. Instead of trying to persuade, start by understanding.
When people feel understood and cared about, trust develops.
To truly get rid of weeds, or change minds, find the root. No one likes feeling someone is trying to influence them. After all, when’s the last time you changed your mind because someone told you to?
Berger ends with a case study about a KKK member who was won over by a kind Jewish rabbi and his wife, Michael and Julie Weisser. They did it by leaving “love notes,” kind phone messages, in response to Larry’s hateful harassment.
Larry was in the KKK because of his abusive father
In some strange way, emulating the thing that had hurt him the most gave Larry the strength he needed to go on. Until one day someone showed him another option.
But Michael told him:
“Larry, you better think about all this hatred you’re spreading, because one day you’re going to have to answer to God for all this hatred, and it’s not going to be easy.”
No one had tried to understand why Larry had become a problem in the first place. As Michael Weisser said, “love your neighbor” means loving neighbors different from you.
Chapter 2 Endowment
Berger starts with a story about not wanting to change to a new phone. (Loss aversion — people value what they already have)
If potential gains barely outweigh potential losses, people don’t change… advantages have to be at least 2x better (larger).*
*It’s PERCEIVED gain that matters, what the person cares about. Understand a person’s needs/values to know whether change will be PERCEIVED as gain or loss.
Switching costs: psych/financial/time/effort barriers to switching.
But consider the cost of doing nothing.
Paradox: recovery is faster for severe > mild injuries because people will do their physical therapy for serious injuries.
But lesser injuries tend not to marshal the same resources.
Even if people have a plan, they won’t follow it.
It’s hard to get people to change when things are not terrible, just okay, not great.
Ex: A financial advisor convinced her client to invest by keeping track of how much potential money he was losing by not investing.
Cost-benefit time gap: You pay up front for the product before you receive the benefit. This is another deterrent to change.
People need to see how much time or money is lost: more motivating than seeing what is gained.
Burn the ships: Cortes, Tariq ibn Ziyad, and an ancient Chinese saying.
Burning bridges/ships takes inaction off the table and forces people to get off the old way.
Catalyzing change isn’t just about making people more comfortable with new things, it’s about helping them let go of old ones.
Case study on how Brexit passed: the Leave campaign used bus ads showing the cost of sending the EU 350 million pounds a week, money that could fund health care instead. Slogan? “Take BACK control.”
The “back” is important, because “take control” implies taking action/change, triggering thoughts of switching costs, whereas “back” triggers loss aversion.
Same with Trump’s “Make America Great AGAIN.” (Reagan did a similar thing in the 1980s.)
Chapter 3 Distance
For people with less favorable attitudes toward X [Ex: a perceived link between vaccines and autism], learning the truth about X actually backfires and pushes them farther away.
Region of rejection vs Zone of acceptance: Won’t consider vs the place where people agree the most and the range of views they could consider.
If info falls in the latter zone, forget it. It will backfire.
That’s why “one person’s truth is another’s fake news.” Whether something seems true depends on where you stand on the “field.” Plus, don’t forget confirmation bias.
Strong feelings reduce the range of ideas you’re willing to consider.
Catalysts use a “more surgical approach” and target people with specific, relevant messages, looking for the “moveable middle” and “behavioral residue” that indicate conflicting ideas/willingness to change.
Start by asking for less.
Chunking change: Break big asks into smaller ones.
And if someone’s really stubborn, change the field. Find a dimension where there is already agreement and use it as a pivot. This has a long-lasting effect.
Deep canvassing: Encourage people to find a parallel situation from their own lives, when they felt similarly about something. (Not exactly the same, because people can’t really imagine what it’s like to be others.) Look for an unsticking point where they agree.
People appreciate when you help them be their best self.
Highlight ways people already agree or are moving in the direction. Ex: book that says:
Congratulations, whether you realize it or not, simply by picking up this book you have taken the first of what I hope will be many steps… To reclaiming your physical health, well-being, and happiness. (Greene, 2002)
Berger ends with two stories of Repubs and Democrats switching sides.
As with most big changes, things didn’t happen right away. Someone had to shrink the distance. It took a number of small steps rather than one big leap. Multiple interactions over months or even years. A slow, gradual change…
Chapter 4 Uncertainty
Berger starts with the story of Shoesite.org, which became Zappos, and how it was hard to get started because of the Uncertainty Tax: devaluing things that are uncertain. And this is a BIGger tax than you think.
People hate uncertainty. It’s worse than known negatives.
The more ambiguity there is around a product, service, or idea, the less valuable that thing becomes.
Uncertainty can stop decision-making completely. Uncertainty is good for maintaining the status quo, but terrible for changing minds.
How to combat uncertainty?
Trialability: How easy it is to experiment with something
1. Freemium: Dropbox
2. Reduce up-front cost: Zappos free shipping; a drowning simulator to show how important life jackets are
3. Drive discovery: Zappos’ “mental pic of bringing the shoe store to your home”; test-driving cars (sell apartments by encouraging house parties, birthday party boxes)
4. Make it reversible: Test drives, return policies
The real barrier isn’t money, but uncertainty.
Make magazine free, then people pay for the privilege of NOT LOSING it.
We’re all neophobic to some degree.
People are risk-averse in the domain of gains, but risk-seeking in the domain of losses (gambling).
Freemium takes advantage of switching costs.
Dividing big things into smaller bits, like a monthly not yearly contract, helps.
Ends with story about employee who wanted to convince boss to treat customers with more personalization, which he did by enacting the plan on the company employees first:
To write a few words, personally and accurately, was what generated the most emotion.
Chapter 5 Corroborating Evidence
Berger starts with the story of a drug addict who didn't change until his family staged an intervention, and all got together to talk to him at once.
If an opinion is important to you, it takes more evidence to change it. We discount info that we disagree with, so more proof is required for more certainty. But hearing the same words over and over is annoying.
You are more likely to accept an opinion from “another you,” someone who is like you in terms of likes/dislikes and concerns/values.
Addicts need to change their entire ecosystem to change. Dr. Vernon E Johnson, forefather of interventions: “rationalization and projection work together to block [addicts’] awareness of the disease.”
But if many people say the same thing at the same time, there can be a breakthrough.
Who, when, and how are important though.
People are more likely to change if the people who are doing the thing are from separate, independent groups, because that provides additional info.
Also, compressing an intervention into one shorter period of time works better than extended.
It’s the difference between a sprinkler and a hose strategy. The former is for a less opinionated person, the latter for a harder-to-change mind.
The more proof needed, the more important multiple sources are.
Case study: How the US got people to be willing to eat organ meat during WWII to save the regular meat for the soldiers: Kurt Lewin reduced the size of the ask (mix some organ meats into meat loaf), reduced uncertainty and obstacles (gave out free recipes), and used group discussions to make it feel more voluntary.
Epilogue
Another story: to help with the Israeli-Palestinian conflict, the US created a Seeds of Peace camp to bring youth together in Maine. This camp helped them improve relations, even after a long time.
Kurt Lewin: “If you want to truly understand something, try to change it.” The opposite is true too.
Eureka moments are great in movies but not realistic. Big changes are more like the Grand Canyon.
Appendix: Active Listening
Listening is about asking the right questions and showing people you are listening.
“Why” questions can put people on the defensive; open-ended questions are better. Also, use effective pauses, as does mirroring.
Use emotional labeling: understand underlying emotions to ID the issues affecting people’s behavior.
Appendix: Applying Freemium
If you give freemiums, you need to give users enough time to try the product so they sense its value before paying.
Appendix: Force Field Analysis
Force field analysis: A framework for analyzing the factors in a situation to help make change possible. Identify restrainers (forces against change) and drivers (forces for change).
Get your copy of Catalyst here | https://medium.com/be-a-brilliant-writer/how-to-persuade-people-without-being-a-scam-artist-the-catalyst-by-jonah-berger-ebcd4ecb9d14 | ['Sarah Cy'] | 2020-09-03 13:41:01.820000+00:00 | ['Inspiration', 'Creativity', 'Life Lessons', 'Love', 'Writing'] | Title Persuade People Without Scam “Artist” — Catalyst Jonah BergerContent Introduction Berger start story hostage negotiator helped SWAT team get criminal come without incident perspective Greg Vecchio FBI agent Crisis negotiation came 1972 Munich Olympics 11 Israeli athlete killed force people learned get guy “come himself” Everything something want change change hard Isaac Newton one talked concept inertia Inertia mean people tend they’ve always done people think push people give information fact reason argument force people change people like marble push back chemistry chemist use catalyst special substance speed chemical reaction increasing heat pressure providing alternate route word faster change le energy catalyst equally powerful social world It’s trying better persuader convincing It’s changing mind removing barrier Push people snap Tell probably won’t listen Good hostage negotiator start listening building trust encourage people talk fear motivation who’s waiting home even pet Great negotiator don’t push harder increase heat identify barrier remove like catalyst people think changing mind presenting evidence explaining reason forget person who’s mind we’re trying change Catalysts start basic question “Why hasn’t person changed already What’s blocking them” Sometimes change doesn’t require horsepower Sometimes need unlock parking brake 5 principle address roadblock — REDUCE Reactance People push back pushed catalyst encourage persuade Endowment People stuck two they’re don’t want switch Catalysts highlight inaction isn’t costless Distance People innate antipersuasion system New info must within zone acceptance listen Uncertainty make people pause Catalysts reduce risk Corroborating evidence One person’s evidence enough Catalysts find reinforcement following chapter illustration principle …From changing boss’s mind driving Britons support Brexit changing consumer behavior getting grand Dragon renounce Ku Klux Klan Chapter 1 Reactance Berger start story Chuck Wolfe asked get teen stop smoking Florida Difficulty warning often become recommendation think Tide Pods fad nursing home found resident control put decoration etc cheery active long term lived longer People need freedomautonomy like feeling control choicesactionsbehavior others threaten restrict people’s freedom get upset Threatening restrict something make desirable Restriction creates psychological effect called reactance happens even you’re asking people something rather telling something absence persuasion people think they’re want Pushing telling even encouraging people something often backfire try convince people give alternate explanation interest threatens perceived freedom react persuasion opposite happens even people wanted take action first place People need see behavior freely driven it’ll backfire People anti persuasion radar they’re constantly scanning influence attempt find one set countermeasure avoidance ignoring message make claim people don’t take face value scrutinize argue claim raise objection message fall apart Catalyst allow agency don’t try persuade get people persuade instead reason antismoking campaign didn’t work always implied knew what’s best listen Chuck took different route showed teenager tobacco company trying 
manipulate order sell cigarette showed company’s manipulated politics sport TV Etc make smoking seem cool “Here industry tell u want it” Leon didn’t demand anything teen tell left decide work truth campaign powerful 2002 tobacco company try sue 4 way reduce reactance Provide menu Limited set boundedguided option 2–3 15–16 Ask don’t tell Don’t make statement Ex GRE prep course asked student much time thought they’d need master material Highlight gap Smoking Kid campaign sent child ask smoker light refused kid would give note saying “you care health own” cognitive dissonance Start understanding Starting trying influence someone make It’s people want motivation it’s want Menuing questioning shift listener role Instead thinking counterargument they’re trying think answer question feel opinion Questions increase buy incommitment conclusion behaving consistently may follow others’ lead idea forceful backfire rephrase question people change willing listen trust person they’re communicating Seasons negotiator don’t start want start want change Listening make person feel like stakeholder relationship Stay frame make lay groundwork influence become helper Advocate mean get want Use right language Mirror word back Instead trying persuade start understanding people feel understood cared trust develops truly get rid weed change mind find root one like feeling someone trying influence when’s last time changed mind someone told Berger end case study KKK member kind Jewish rabbi wife Michael Julie Weisser leaving “love notes” kind phone message response Weisser’s hateful harassment Larry KKK abusive father strange way emulating thing hurt gave Larry strength needed go one day someone showed another option Michael told “Larry better think hatred you’re spreading one day ‘re going answer God hatred it’s going easy” one tried think Larry problem first place Michael Weisser said “love neighbor” mean loving neighbor different Chapter 2 Endowment Berger start story wanting change new phone Loss aversion — people value already potential gain barely outweigh potential loss people don’t change…advantages least 2x better larger It’s PERCEIVED gain matter person care Understand person’s needsvalues know whether change PERCEIVED gain loss Switching cost psychfinancialtimeeffort barrier switching consider cost nothing Paradox recovery faster severe mild injury people physical therapy serious injury leaser injury tend marshal resource Even people plan won’t follow It’s hard get people change thing notterrible justokay notgreat Ex financial advisor convinced client invest keeping track much potential money losing investing Costbenefit time gap pay front product receive benefit another deterrent change People need see much time money lost motivating seeing gained Burn ship Cortes Tariq ibn Ziyad ancient Chinese saying Burning bridgesships take inaction table force people get old way Catalyzing change isn’t making people comfortable new thing it’s helping let go old one Case study Brexit passed used Brexit bus ad showing cost sending EU 350 million pound week health Slogan “Take BACK control” “back” important “take control” implies taking actionchange triggering thought switching cost whereas “back” trigger loss aversion Trump’s “Make America Great AGAIN” Reagan similar thing 1980s Chapter 3 Distance people le favorable attitude toward X Ex perceived link vaccine autism learning truth X actually backfire push farther Region rejection v Zone acceptance Won’t consider v place people agree range view could consider info fall 
latter zone forget backfire That’s “one person’s truth another’s fake news” Whether something seems true depends stand “field” Plus don’t forget confirmation bias Strong feeling reduce range idea you’re willing consider Catalysts use “more surgical approach” target people specific relevant message looking moveable middle” “behavioral residue” indicates conflicting ideaswillingness change Start asking le Chunking change Break big asks smaller one someone’s really stubborn change field Find dimension already agreement use pivot longlasting effect Deep canvassing Encourage people find parallel situation situation felt similarly something exactly people can’t really imagine it’s like others Look unsticking point agree People appreciate help best self Highlight way people already agree moving direction Ex book say Congratulations whether realize simply picking book taken first hope many steps… reclaiming physical health wellbeing happiness Greene 2002 Berger end two story Repubs Democrats switching side big change thing didn’t happen right away Someone shrink distance took number small step rather one big leap Multiple interaction month even year slow gradual change… Chapter 4 Uncertainty Berger start story Shoesiteorg became zappos hard get started Uncertainty Tax devaluing thing uncertain BIGger tax think People hate uncertainty It’s worse known negative ambiguity around product service idea le valuable thing becomes Uncertainty stop decisionmaking completely Uncertainty good maintaining status quo terrible changing mind combat uncertainty Trialability easy experiment something Freemium Dropbox Reduce upfront cost Zappos free shipping drowning simulator show important life jacket Drive discovery Zappos “mental pic bringing shoe store home” test drive car sell apartment encouraging house party birthday party box Make reversible Test drive car return policy real barrier isn’t money uncertainty Make magazine free people pay privilege LOSING We’re neophobic degree Risk aversion relevant domain gain riskseeking int domain loss gambling Freemium take advantage switching cost Dividing big thing smaller bit like monthly yearly contract help Ends story employee wanted convince bos treat customer personalization enacting plan company employee first write word personally accurately generated emotion Chapter 5 Corroborating Evidence Berger start story drug addict didnt change family staged intervention got together talk opinion important take evidence change discount info disagree proof required certainty hearing word annoying likely accept opinion “Another you” someone like term likesdislikes concernsvalues Addicts need change entire ecosystem change Dr Vernon E Johnson forefather intervention “rationalization projection work together block addicts’ awareness disease” many people say thing time breakthrough important though People likely change people thing separate independent group provides additional info Also compressing intervention one shorter period time work better extended It’s difference sprinkler hose strategy former le opiniated person latter hardertochange mind proof needed important multiple source Case study US got people willing eat organ meat WWII save meat soldier Kurt Lewin reduced size ask mix organ meat meat loaf reduce uncertainty obstacle gave free recipe used group discussion make feel voluntary Epilogue Another story help IsraeliPalestinian conflict US created Seeds Peace camp bring youth together Maine camp helped improve relation even long time Kurt Lewin “If want truly 
understand something try change it” opposite true Eureka moment great movie realistic big change like Grand Canyon Appendix Active Listening Listening asking right question showing people listening question put people defense Better openended Also use effective pause mirroring Use emotional labeling understand underlying emotion ID issue affecting people’s behavior Appendix Applying Freemium give freemiums need give enough time try sense value paying Appendix Force Field Analysis Force field analysis Framework analyzing factor situation help make change possible Identify restrainer force change driver force change Get copy Catalyst hereTags Inspiration Creativity Life Lessons Love Writing |
1,789 | Why Companies Desperately Need Generalists to Innovate | Field outsiders see structural similarities better
Shubin Dai, a specialist in bank data analysis, is perhaps one of the best models of the outsider.
Passionate about his financial data work, he also spends his time responding to the various challenges posed by Kaggle, a community of data scientists. Quite surprisingly, he enjoys working on topics as varied as nature conservation and medicine, where he has gained considerable knowledge. In these fields, for example, he has managed to identify the human causes of deforestation in the Amazon, and he has become a leading expert in disease prediction.
Through platforms like Kaggle, these kinds of outsiders are more and more sought out by companies, as they respond perfectly to the demands of open innovation. They are increasingly contrasted with field insiders, who know their domain deeply but are suspected of being blinded by the limits of their own expertise.
The outsiders’ specific talent rests on what researchers have called the “outside view”. It consists of relying not on familiar experience or analogies but on more distant and deeper ones. By adopting an open perspective, outsiders can seek and find structures similar to those of their project and thus judge it better as they compare it with examples from other horizons.
InnoCentive is an organization that has tried to promote an outsider-like vision of innovation. It has bet on open-mindedness by setting up crowdsourcing with communities of various specialists. Thus, biologists or chemistry specialists can work on issues as distant from their field as IT or network architecture. Yet, by providing diverse and open expertise, they find connections that the clients themselves haven’t found.
This gives InnoCentive roughly a 75% chance of success on its clients’ projects, compared to the 20% typical of internal corporate research projects.
The bottom line is that organizations need to rely more and more on collaborators who have remote expertise and are therefore more original and accurate on their issues. | https://medium.com/swlh/why-companies-desperately-need-generalists-to-innovate-866b5c3bdf35 | ['Jean-Marc Buchert'] | 2020-11-27 15:32:33.096000+00:00 | ['Management', 'Innovation', 'Productivity', 'Creativity', 'Jobs'] | Title Companies Desperately Need Generalists InnovateContent Field outsider see structural similarity better Shubin Dai specialist bank data analysis maybe one best model outsider Passionate data financial work also spends time responding various challenge posed Kaggle community data scientist Quite surprisingly enjoys working various topic nature conservation medicine gained considerable knowledge example field achieved identify human cause deforestation Amazon become leading expert disease prediction Like Kaggle kind outsider prospected company respond perfectly issue open innovation increasingly opposed field insider know deeply stuff suspected blinded limit expertise outsider specific talent based researcher conversely called “outside view” consists relying familiar experience analogy distant deeper analogy adopting open perspective seek find similar structure project thus better judge compare example horizon InnoCentive organization tried promote outsiderlike vision innovation bet openmindedness setting crowdsourcing community various specialist Thus biologist chemistry specialist work issue distant field network architecture issue Yet providing diverse open expertise find connection client haven’t found enables InnoCentive 75 chance success clients’ project compared 20 corporate internal research project bottom line organization need rely collaborator remote expertise therefore original accurate issuesTags Management Innovation Productivity Creativity Jobs |
1,790 | Forecasting with Stochastic Models | We all want to know the future. Imagine the power we’d possess if we knew what was going to happen: we could alter it to get more suitable results, bet on it for financial gain, or even budget better. Although we cannot outright determine what will happen in the future, we can build somewhat of an intuition of what it may be like.
We hear often from self-improvement evangelist that we can distinguish how we have arrived at our present point in life by reflecting on our past actions. Thereby, to some degree we can predict the trajectory of our lives if we continue on a particular path. This is the essence of Time-series analysis. As stated in Adhikari, R and Agrawal, R. (2013). An Introduction Study on Time Series Modelling and Forecasting, “The main aim of time-series modelling is to carefully collect and rigorously study the past observations of a time-series to develop an appropriate model which describes the inherent structure of the series. This model is then used to generate future values for the series, i.e. to make forecast. Time-series forecasting thus can be termed as the act of predicting the future by understanding the past.”
“The present moment is an accumulation of past decisions” — Unknown
A popular and frequently used stochastic time-series model is the ARIMA model. It assumes that the time-series is linear and follows a particular known statistical distribution, such as the normal distribution, and it has subclasses of other models, such as the Autoregressive (AR) model, the Moving Average (MA) model, and the Autoregressive Moving Average (ARMA) model, on which the ARIMA model is based. Before effectively applying the ARIMA model to a problem, there are some things that we should understand about our data, as you’ll come to understand by the end of this post.
Things to know - The 4 main components of a time series:
Trend → The propensity of a time series to increase, decrease or stagnate over a long period of time.
Seasonality → Fluctuations within a year that are regular and predictable.
Cyclical → Medium term changes in the series that repeat in cycles.
Irregularities → Unpredictable influences that are not regular and do not repeat in a particular pattern.
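To build intuition for these components, here is a minimal sketch (the synthetic series, its frequency, and all names below are illustrative, not from the competition data) that splits a series into trend, seasonal, and residual parts with statsmodels:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# synthetic monthly series: upward trend + yearly seasonality + noise
idx = pd.date_range("2015-01-01", periods=120, freq="M")
y = pd.Series(np.arange(120) * 0.5                            # trend
              + 10 * np.sin(2 * np.pi * np.arange(120) / 12)  # seasonality
              + np.random.normal(0, 2, 120),                  # irregularities
              index=idx)

decomposition = seasonal_decompose(y, model="additive")
decomposition.plot()
plt.show()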
Data
To download the data, click this Link and follow the instructions.
The data I will be using in this article is from the M5 Forecasting - Accuracy competition on Kaggle, which is currently still live (at the time of writing this article). The competition challenges competitors, who have been provided with hierarchical sales data — thanks to Walmart — from 3 different states (California, Texas and Wisconsin), to forecast the sales 28 days into the future. The code generated in this article can be found in a Kaggle Notebook that I created, which can be found here or in the link below.
Here are the frameworks we must import to perform the task at hand.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.graphics.tsaplots import plot_pacf
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.stattools import adfuller
I have done some preprocessing to the data to make use of its hierarchical structure.
# store the sales data columns
d_cols = full_df.columns[full_df.columns.str.contains("d_")]

# group columns by store_id
df = full_df.groupby(full_df["store_id"]).sum()[d_cols].T
df.head()
Figure 1: Data grouped by the store_id
The competition is evaluated on RMSSE (Root Mean Squared Scaled Error), which is derived from the MASE (Mean Absolute Scaled Error) that was designed to be invariant and symmetric — you can learn more about forecast accuracy metrics here (the difference for this competition is that the A (Absolute) in MASE is replaced with S (Squared) for Mean Squared Scaled Error, and we take the Root of this for RMSSE).
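As a rough sketch of how this metric can be computed for a single series (the function and argument names are my own; the scaling term follows the usual MASE-style convention of one-step-ahead naive errors on the training data):

def rmsse(y_train, y_true, y_pred):
    # Root Mean Squared Scaled Error for one series (sketch)
    y_train, y_true, y_pred = map(np.asarray, (y_train, y_true, y_pred))
    mse = np.mean((y_true - y_pred) ** 2)   # squared error over the forecast horizon
    scale = np.mean(np.diff(y_train) ** 2)  # squared one-step naive error on training data
    return np.sqrt(mse / scale)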
Concept of Stationarity
Understanding the concept of stationarity is important as it has high impact on the type of model that we can fit to our data to forecast future values. We refer to a time-series as stationary when its properties do not depend on the time at which the series was observed. Some criteria for stationarity are as follows:
Constant mean in the time-series
Constant variance in the time-series
No seasonality
Simply, a time-series that is stationary will have no predictable patterns in the long term. For the mathematicians, a random process is known to be stationary when the joint distribution remains the same over time. Let’s look at some random items from our data to see whether they are stationary.
The ARMA model is a combination of the Autoregressive and Moving Average models. This traditional approach requires the data to be stationary; however, things do not always work out as we’d expect. In fact, in the real world, data is much more likely to be non-stationary, hence the birth of ARIMA, which uses a clever technique called differencing to make non-stationary data stationary.
Differencing
Differencing computes the change between consecutive observations in the original series, which helps to stabilize the mean since it removes the changes in the level of a series — This has the effect of eliminating (or reducing) seasonality and trend. This technique is widely used for non-stationary data such as financial and economic data. The ARIMA model adopts the differencing technique to convert a time-series that is non-stationary to a stationary time-series. We can express the differenced series mathematically as shown in Figure 2.
Figure 2: First difference
When the differenced data does not appear to be stationary, we can difference a 2nd time — it’s almost never required to go past the 2nd order in practice — which can be expressed mathematically as in Figure 3.
Figure 3: Formula for First difference of the 2nd degree.
We can also get the difference of an observation and another observation from the same season. This phenomenon is known as seasonal differencing.
Figure 4: Formula for first degree seasonal differencing
Occasionally, we may be required to take both the ordinary differences (this is the differencing technique we discussed in Figure 2, referred to as first differences, meaning the differences at lag 1) and seasonal differences to make our data stationary.
Figure 5: Formula for first differences and Seasonal difference
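Since Figures 2 through 5 are images, here are the standard textbook forms of these operations in the y_t notation used later in this article, with m denoting the seasonal period:

First difference: y'_t = y_t - y_{t-1}
Second-order difference: y''_t = y'_t - y'_{t-1} = y_t - 2y_{t-1} + y_{t-2}
Seasonal difference: y'_t = y_t - y_{t-m}
First and seasonal differences combined: (y_t - y_{t-m}) - (y_{t-1} - y_{t-m-1})

In pandas, these correspond to the following (the period 7 below is an illustrative weekly choice, not taken from the article):

first_diff = df["CA_1"].diff()          # y_t - y_{t-1}
second_diff = df["CA_1"].diff().diff()  # second-order differencing
seasonal_diff = df["CA_1"].diff(7)      # seasonal difference with m=7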
In python, we can use visualization and/or a unit root test to determine whether differencing is required for our data — note that there are other methods to determine stationarity. There are many different unit root tests, which have different assumptions, but we will be using the Dickey-Fuller test. Below I will visualize a store; try to determine whether you think it is stationary before you look at the results of the Dickey-Fuller test.
Figure 6: Total sales per store — note that I have illuminated the store “CA_1”. In the notebook you can click on whichever store you’d like for it to be illuminated, or visualize them all at the same time.
# Dickey-Fuller statistical test
def ad_fuller(timeseries: pd.DataFrame, significance_level=0.05):
    non_stationary_cols = []
    stationary_cols = []
    for col in timeseries.columns:
        dftest = adfuller(timeseries[col], autolag="AIC")
        # p-value below the significance level: reject the unit-root null,
        # meaning the series is stationary
        if dftest[1] < significance_level:
            stationary_cols.append(
                {col: {"Test Statistic": dftest[0],
                       "p-value": dftest[1],
                       "# Lags": dftest[2],
                       "# Observations": dftest[3],
                       "Critical Values": dftest[4],
                       "Stationary": True}})
        else:
            non_stationary_cols.append(
                {col: {"Test Statistic": dftest[0],
                       "p-value": dftest[1],
                       "# Lags": dftest[2],
                       "# Observations": dftest[3],
                       "Critical Values": dftest[4],
                       "Stationary": False}})
    return non_stationary_cols, stationary_cols

# `stores` is the list of store columns defined earlier in the notebook
non_stationary_cols, stationary_cols = ad_fuller(df[stores])
len(non_stationary_cols), len(stationary_cols)
>>>> (10, 0)

non_stationary_cols[0]
Figure 7: Augmented Dickey-Fuller results for store CA_1.
The p-value is greater than the significance level that we set (0.05), therefore we do not reject the null hypothesis that there is a unit root in our data. In other words, our data is non-stationary — it does not meet the criteria for stationarity that we described above, hence why we must do some differencing for our data to become stationary. Pandas has a cool function DataFrame.diff() that does this for us — you can read more in the documentation here.
# making the data stationary
df["lag-1_CA_1"] = df["CA_1"].diff().fillna(df["CA_1"])
ACF and PACF plots
The ARIMA model has hyperparameters p, d, and q that must be defined. Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots make determining the orders p and q of the model easier.
The ACF plot shows the autocorrelation of the time-series, meaning that we can measure the relationship between the y_t and y_{t-k}. The simplest way to put this is as the coefficients of correlation between a time-series and the lags of itself.
Note: the “t” in “y_t” denotes a subscript.
PACF plots show a measure of relationship between y_t and y_{t-k} after the effects of lags are removed. If we think of correlation, it’s the interdependence of variables. The “partial” correlation talks of the correlation between them that is not explained by their mutual correlations with a specified set of other variables. When we adjust this for autocorrelation, we speak of the correlation between a time-series and a lag of itself that is not explained by correlations from lower order lags.
This is a great resource to learn more about ACF and PACF plots.
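Beyond the plots, you can also inspect the raw coefficients numerically; a minimal sketch using statsmodels (the column name is the differenced series created above):

from statsmodels.tsa.stattools import acf, pacf

# autocorrelation and partial autocorrelation for the first 10 lags
acf_vals = acf(df["lag-1_CA_1"], nlags=10)
pacf_vals = pacf(df["lag-1_CA_1"], nlags=10)
print(acf_vals.round(3))
print(pacf_vals.round(3))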
Let’s see some of our visualizations…
_, ax = plt.subplots(1, 2, figsize=(10, 8))
plot_acf(df["lag-1_CA_1"], lags=10, ax=ax[0])
plot_pacf(df["lag-1_CA_1"], lags=10, ax=ax[1])
plt.show()
Figure 8: Autocorrelation Function and Partial Autocorrelation Function for lag-1_CA_1; As stated in Identifying the orders of AR and MA terms in an ARIMA model, by mere inspection of the PACF you can determine how many AR terms you need to use to explain the autocorrelation pattern in a time series: if the partial autocorrelation is significant at lag k and not significant at any higher order lags — i.e., if the PACF “cuts off” at lag k — then this suggests that you should try fitting an autoregressive model of order k;
This suggests that we should try to fit an AR(8) model to our data, which I have done in the next section.
Lag/Backshift Notation
Lag/Backshift notation is an extremely useful device. Various sources use different notation to denote Lag, L, or Backshift, B.
Figure 9: Backshift operator notation
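Since Figure 9 is an image, the standard definition is:

B y_t = y_{t-1}, and more generally B^k y_t = y_{t-k}

so that the first difference can be written y'_t = y_t - y_{t-1} = (1 - B) y_t.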
The Autoregressive, AR(p), model generates forecasts by using a linear combination of past values of the variable. We can think of autoregression as a regression of the variable against itself.
Figure 10: Autoregression model (without Lag/Backshift notation)
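For reference (Figure 10 is an image), the standard AR(p) form, with c a constant and ε_t white noise, is:

y_t = c + φ_1 y_{t-1} + φ_2 y_{t-2} + … + φ_p y_{t-p} + ε_t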
The Moving Average, MA(q), model, on the other hand, uses past forecast errors instead of past values, in a regression-like model. Therefore, we can think of each forecast value as a weighted moving average of the past few forecast errors.
Figure 11: Moving average model (without backshift notation)
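Likewise (Figure 11 is an image), the standard MA(q) form is:

y_t = c + ε_t + θ_1 ε_{t-1} + θ_2 ε_{t-2} + … + θ_q ε_{t-q}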
The ARIMA model
All roads lead to this point. If we combine differencing, our autoregression model and moving average model, we get ARIMA(p, d, q).
Figure 12: Arima formulation. Source: Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020
Note that it is often much easier to use lag notation to denote the ARIMA model. You can learn more about how to do this here.
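In that backshift notation, the standard textbook form of ARIMA(p, d, q) is:

(1 - φ_1 B - … - φ_p B^p)(1 - B)^d y_t = c + (1 + θ_1 B + … + θ_q B^q) ε_t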
p = The order of the Autoregressive part of the model
d= The degree of first differencing in our model
q = The order of the Moving average part of the model
Figure 13: Special cases of ARIMA. Source: Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020
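If you would rather not read the orders off the ACF/PACF plots by hand, a simple brute-force alternative is to search a small grid of orders by AIC. This is only a sketch, not how the order in this article was chosen:

import itertools

best_aic, best_order = np.inf, None
for p, d, q in itertools.product(range(9), [0, 1], range(3)):
    try:
        candidate = ARIMA(df["lag-1_CA_1"], order=(p, d, q)).fit(disp=-1)
        if candidate.aic < best_aic:
            best_aic, best_order = candidate.aic, (p, d, q)
    except (ValueError, np.linalg.LinAlgError):
        continue  # some orders fail to converge; skip them
print(best_order, best_aic)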
# fitting the model
model = ARIMA(df["lag-1_CA_1"], order=(8, 1, 0))
results = model.fit(disp=-1)

# visualizing the fitted values
fig = go.Figure(data=[
    go.Scatter(x=df["date"],
               y=df["lag-1_CA_1"],
               name="original",
               showlegend=True,
               marker=dict(color="blue"))])
fig.add_trace(
    go.Scatter(x=df["date"],
               y=results.fittedvalues,
               name="fitted values",
               showlegend=True,
               marker=dict(color="red")))
fig.update_layout(
    title="Fitted values",
    xaxis_title="Dates",
    yaxis_title="Units Sold",
    font=dict(
        family="Arial, monospace",
        size=14,
        color="#7f7f7f"
    )
)
fig.show()
Figure 14: Fitted values from ARIMA model.
We can have a closer look…
# a closer look
_, ax = plt.subplots(figsize=(12, 8))
results.plot_predict(1799, 1940, dynamic=False, ax=ax)
plt.show()
Figure 15: Closer look at Actual vs forecast
To see how we did against the actual values, we must first go back to the original scale of the data to compare. There is a useful cumsum() function we can use in pandas — see the documentation.
compare_df = pd.DataFrame({"actual": df["CA_1"],
                           "predictions": pd.Series(results.fittedvalues.cumsum(), copy=True),
                           "d": df["d"]}).set_index("d")
compare_df.loc["d_1", "predictions"] = 0
Then we plot this…
Figure 16: Actual vs Predictions of the model.
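To produce the competition’s 28-day horizon from here, one possible sketch (the old statsmodels ARIMA API imported above returns a (forecast, stderr, conf_int) tuple; the inversion back to the original scale assumes the lag-1 differencing used earlier):

# forecast 28 steps ahead on the differenced scale
forecast, stderr, conf_int = results.forecast(steps=28)

# invert the differencing: cumulatively add the predicted changes
# to the last observed value of the original series
last_value = df["CA_1"].iloc[-1]
forecast_original = last_value + np.cumsum(forecast)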
I have joined this competition a little late, but there is still sufficient time to better this result (which I will be sharing with you all).
Useful Resource: Workflow Guide
A helpful flow chart was provided by Rob Hyndman in the book Forecasting: Principles and Practice. A link to the online book can be found in the Other Resources section below.
Figure 17: The ARIMA flow chart. Source — Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020
Final Word
Thank you for taking the time to read this article. I am a self-taught Data Scientist from London, England. I can be reached on LinkedIn at https://www.linkedin.com/in/kurtispykes/. Please do not hesitate to reach out to me; meeting new people is awesome.
Other Resources:
Forecasting: Principles and Practice, 2nd Edition
An Introduction Study on Time Series Modelling and Forecasting
Autoregressive Integrated Moving Average | https://towardsdatascience.com/forecasting-with-stochastic-models-abf2e85c9679 | ['Kurtis Pykes'] | 2020-06-12 14:25:38.410000+00:00 | ['Machine Learning', 'Data Science', 'AI', 'Towards Data Science', 'Artificial Intelligence'] | Title Forecasting Stochastic ModelsContent want know future Imagine power we’d posse knew going happen future could alter get suitable result bet financial gain even budget better Although outright determine happen future build somewhat intuition may like hear often selfimprovement evangelist distinguish arrived present point life reflecting past action Thereby degree predict trajectory life continue particular path essence Timeseries analysis stated Adhikari R Agrawal R 2013 Introduction Study Time Series Modelling Forecasting “The main aim timeseries modelling carefully collect rigorously study past observation timeseries develop appropriate model describes inherent structure series model used generate future value series ie make forecast Timeseries forecasting thus termed act predicting future understanding past” “The present moment accumulation past decisions” — Unknown popular frequently used stochastic timeseries model ARIMA model assumes timeseries linear follows particular known statistical distribution normal distribution subclass model Autoregressive AR model Moving average model Autoregressive Moving Average ARMA model ARIMA model based effectively applying ARIMA model problem thing understand data you’ll come understand end post Things know 4 main component time series Trend → propensity time series increase decrease stagnate long period time Seasonality → Fluctuations within year regular predictable Cyclical → Medium term change series repeat cycle Irregularities → Unpredictable influence regular repeat particular pattern Data download data click Link follow instruction data using article M5 Forecasting Accuracy competition Kaggle currently still live time writing article competition challenge competitor provided hierarchical sale data — thanks Walmart 3 different state California Texas Wisconsin forecast sale 28 day future access code generated article found Kaggle Notebook created found link framework must import perform task hand import numpy np import panda pd import matplotlibpyplot plt import plotlygraphobjects go plotlysubplots import makesubplots statsmodelsgraphicstsaplots import plotacf statsmodelsgraphicstsaplots import plotpacf statsmodelstsaarimamodel import ARIMA statsmodelstsastattools import adfuller done preprocessing data make use it’s hierarchical structure store sale data column dcols fulldfcolumnsfulldfcolumnsstrcontainsd group column storeid df fulldfgroupbyfulldfstoreidsumdcolsT dfhead Figure 2 Data grouped storeid competition evaluated RMSSE Root Mean Squared Scaled Error derived MASE Mean Absolute Scaled Error designed invariant symmetric — learn forecast accuracy metric difference competition AAbsoulute MASE replaced SSquared Mean Squared Scaled Error take Root RMSSE Concept Stationarity Understanding concept stationarity important high impact type model fit data forecast future value refer timeseries stationary property depend time series observed criterion stationarity follows Constant mean timeseries Constant variance timeseries seasonality Simply timeseries stationary predictable pattern long term mathematician random process known stationary joint distribution remains time Let’s look random item data see whether stationary ARMA model combination Autoregressive Moving Average 
model traditional approach requires data stationary however thing always work we’d expect real world Infact real world data much likely nonstationary hence birth ARIMA us clever technique called differencing make nonstationary data stationary Differencing Differencing computes change consecutive observation original series help stabilize mean since remove change level series — effect eliminating reducing seasonality trend technique widely used nonstationary data financial economic data ARIMA model adopts differencing technique convert timeseries nonstationary stationary timeseries express differenced series mathematically shown Figure 2 Figure 2 First difference differenced data appear stationary differencing 2nd time — It’s almost never required go past 2nd order practice — expressed mathematically Figure 3 Figure 3 Formula First difference 2nd degree also get difference observation another observation season phenomenon known seasonal differencing Figure 4 Formula first degree seasonal differencing Occasionally may required take ordinary difference differencing technique discus Figure 2Referred first difference meaning difference lag 1 seasonal difference make data stationary Figure 5 Formula first difference Seasonal difference python use visualization andor unit root test determine whether differencing required data — method determine stationarity many different unit root test different assumption using DickeyFuller visualize store try determine whether think stationary look result dickeyfuller test Figure 6 Total sale per store — note illuminated store “CA1” notebook click whichever store you’d like illuminated visualize time Dickeyfuller statistical test def adfullertimeseries pdDataFrame significancelevel 005 nonstationarycols stationarycols col timeseriescolumns dftest adfullerdfcol autolagAIC dftest1 significancelevel nonstationarycolsappend colTest Statistic dftest0 pvalue dftest1 Lags dftest2 Observations dftest3 Critical Values dftest4 Stationary False else stationarycolsappend colTest Statistic dftest0 pvalue dftest1 Lags dftest2 Observations dftest3 Critical Values dftest4 Stationary True return nonstationarycols stationarycols nonstationarycols stationarycols adfullerdfstores lennonstationarycols lenstationarycols 10 0 nonstationarycols0 Figure 7 Augmented DIckeyFuller result store CA1 pvalue greater significance level set 005 therefore reject null hypothesis unit root data word data nonstationary — meet criterion stationarity described hence must differencing data become stationary Pandas cool function DataFramediff u — read documentation making data stationary dflag1CA1 dfCA1difffillnadfCA1 ACF PACF plot ARIMA model hyperparameters p q must defined Autocorrelation Function ACF Partial Autocorrelation function PACF plot make determining order p q model easier ACF plot show autocorrelation timeseries meaning measure relationship yt ytk simplest way put coefficient correlation timeseries lag Note “yt” denotes subscript PACF plot show measure relationship yt ytk effect lag removed think correlation it’s interdependence variable “partial” correlation talk correlation explained mutual correlation specified set variable adjust autocorrelation speak correlation timeseries lag explained correlation lower order lag great resource learn ACF PACF plot Let’s see visualizations… ax pltsubplots1 2 figsize 108 plotacfdflag1CA1 lags10 axax0 plotpacfdflag1CA1 lags10 axax1 pltshow Figure 8 Autocorrelation Function Partial Autocorrelation Function lag1CA1 stated Identifying order AR term ARIMA 
model mere inspection PACF determine many AR term need use explain autocorrelation pattern time series partial autocorrelation significant lag k significant higher order lag — ie PACF “cuts off” lag k — suggests try fitting autoregressive model order k suggest try fit AR8 model data done next section LagBackshift Notation LagBackshift notation extremely useful notation device Various source use different notation denote Lag L Backshift B Figure 9 Backshift operator notation Autoregression ARp model generates forecast using linear combination past variable variable think autoregression regression variable Figure 10 Autoregression model without LagBackshift notation Moving Average MAq model hand us past forecast error instead past value regression like model Therefore think forecast value weighted moving average past forecast error Figure 11 Moving average model without backshift notation ARIMA model road lead point combine differencing autoregression model moving average model get ARIMAp q Figure 12 Arima formulation Source Hyndman RJ Athanasopoulos G 2018 Forecasting principle practice 2nd edition OTexts Melbourne Australia OTextscomfpp2 Accessed 09062020 Note often much easier use lag notation denote ARIMA model learn p order Autoregressive part model degree first differencing model q order Moving average part model Figure 13 Special case ARIMA Source Hyndman RJ Athanasopoulos G 2018 Forecasting principle practice 2nd edition OTexts Melbourne Australia OTextscomfpp2 Accessed 09062020 fitting model model ARIMAdflag1CA1 order810 result modelfitdisp1 visualizing fitted value fig goFiguredata goScatterx dfdate dflag1CA1 name original showlegendTrue markerdictcolorblue figaddtrace goScatterx dfdate yresultsfittedvalues name fitted value showlegend True markerdictcolorred figupdatelayout titleFitted value xaxistitleDates yaxistitleUnits Sold fontdict familyArial monospace size14 color7f7f7f figshow Figure 14 Fitted value ARIMA model closer look… closer look ax pltsubplotsfigsize128 resultsplotpredict1799 1940 dynamicFalse axax pltshow Figure 15 Closer look Actual v forecast see done actual prediction must first go back original scale data compare useful cumsum function use panda — Documentation comparedf pdDataFrameactual dfCA1 prediction pdSeriesresultsfittedvaluescumsum copyTrue dfdsetindexd comparedflocd1 prediction 0 plot this… Figure 16 Actual v Predictions model joined competition little late still sufficient time better result sharing Useful Resource Workflow Guide useful flow chart provided Rob Hyndman book Forecasting Principles Practice extremely useful link online book Resources section Figure 16 ARIMA flow chart Source — Hyndman RJ Athanasopoulos G 2018 Forecasting principle practice 2nd edition OTexts Melbourne Australia OTextscomfpp2 Accessed 09062020 Final Word Thank taking time read article selftaught Data Scientist London England reached via LinkedIn via httpswwwlinkedincominkurtispykes Please hesitate reach meeting new people awesome Resources Forecasting Principles Practice 2nd Edition Introduction Study Time Series Modelling Forecasting Autoregressive Integrated Moving AverageTags Machine Learning Data Science AI Towards Data Science Artificial Intelligence |
1,791 | How to Raise Your First Fund With Right Side Capital Management’s Dave Lambert | How do you make it to Fund II, III, and beyond?
It’s challenging enough to identify and make investments that will create extraordinary returns. On top of that, GPs have to juggle fundraising, manage current investors, and handle the fund as a business itself, including employees, payroll, HR, capital management, and operating costs.
Oftentimes, Dave added, you’re also starting with less funding and a smaller staff than expected. It takes a long time to raise your first fund, and there’s only so much to show for it. This means GPs are operating a small company with less bandwidth on a lower salary.
How do you keep all the plates spinning?
No matter what, it won’t be easy to keep up with diligence, deal flow, LP management, fundraising, legalese, and all the challenges of a growing company. It goes back to your background, track record, and team. According to Dave, managers with operational experience can manage all the plates better. They’re already accustomed to, or at least have experience with, handling these matters.
Secondly, what team have you built to face these challenges? For instance, although legal and accounting work will get outsourced, you still need a knowledgeable partner within the fund. Make sure to surround yourself with partners with complementary skill sets and a passion for the fund that matches yours.
Where do you go with Fund II?
Do you build Fund II as a larger fund, as a follow-on fund, or as a continuation of your Fund I strategy? This is one of the biggest challenges.
Oftentimes, successful early-stage firms choose option A: build a larger fund. However, “instead of doing what made them successful,” Dave conveyed, “the default is to write bigger checks.” Raising a larger fund and writing bigger checks moves the fund into a completely different investment stage with a different return profile and ecosystem. “Many funds go from successful to struggling, and it’s hard to avoid,” Dave added.
Instead, keep cutting the same size checks. With the larger fund, however, you’ll need to bring in more deal flow, perform more due diligence, and make more deals. Fund managers then need to bring on more people to keep up with the workload, but hesitate because it spreads the management fees even thinner. Funds that neglect to recognize that they need a larger labor force to get this done will fail.
“In the VC model, labor is the scarce resource”
It’s difficult to keep pitching your vision when you don’t have much evidence accrued from Fund I yet, but if you really believe in your thesis, stick with it. “There’s a lot more randomness and luck involved in success,” Dave admitted. Stick with what has brought you success and convinced your first LPs to sign on, and make sure you manage your bandwidth. Is now the right time to scale up for you and your fund?
Pivot or persevere?
What do you do if you’re missing out on opportunities? Is it wise to change and adapt if your thesis is too narrow, or should you persist with what you’ve planned?
Most VC funds are under-diversified. If you don’t think your thesis is working, definitely change, but be reasonable with the changes. First-time fund managers tend to give themselves too small of a box to operate within. Dave advised, “You’re going to learn as you invest. The world is going to change as well, so you need some flexibility to adjust with it and make the best decisions for your fund and investors.”
The key is to communicate any changes clearly with LPs before they happen. Take advantage of your resources — reach out to your LP committee, explain why, and get their advice and thoughts.
When do you start fundraising for Fund II?
You never stop raising. When you’re in Fund I, you’re already raising for Fund II.
This is one of the challenges of managing a fund. “You should always be out there talking to potential LPs, making them aware of the fund’s work with quarterly updates,” Dave said. This will help them make a quicker decision when it comes to committing. It’s also a good time to circle back to anyone who was close to investing or wanted to invest in Fund I.
Bring your metrics and returns from Fund I, but understand that there still may not be a lot to go off of. Dave found that investors are much less data-driven than they think they are. A lot of LPs believe data more easily when it aligns with a thesis they already believe in. There are a lot of ways to describe what you do and what your goals are. Craft the story that resonates well with your target investor profile, and keep bringing your passion and conviction.
What do you do if investors have decided to not invest further?
It’s a very strong signal for future funds if your original investors return. But if some of them don’t, it may be an issue. It depends on the profile of the investors, Dave shared.
It can be a red flag if an institutional investor doesn’t stay on. They may have concerns about ROI, a member of the team, or the fund as a whole; take their concerns to heart and reevaluate whether changes are needed. If it’s an individual or a smaller family office whose financial situation has changed, or who can’t afford to keep investing until there are greater returns, it’s not ideal that they can’t continue, but it doesn’t signal badly to the market.
Do your best to retain existing LPs and use them to create FOMO and get new investors to commit. If there are investors stepping away, make sure you’re prepared to answer the question of why. | https://medium.com/swlh/how-to-raise-your-first-fund-with-right-side-capital-managements-dave-lambert-99d91382156e | ['Theron Mccollough'] | 2020-11-25 16:45:09.161000+00:00 | ['Entrepreneurship', 'Business', 'Fund Management', 'Startup', 'Venture Capital'] | Title Raise First Fund Right Side Capital Management’s Dave LambertContent make Fund II III beyond It’s challenging enough identify make investment create extraordinary return top GPs juggle fundraising manage current investor handle fund business — including employee payroll HR capital management operating cost Oftentimes Dave added you’re also starting le fund staff expected take long time raise first fund there’s much show mean GPs operating small company le bandwidth lower salary keep plate spinning matter won’t easy keep diligence deal flow LP management fundraising legalese challenge growing company go back background track record team According Dave manager operational experience manage plate better They’re already accustomed least experience handling matter Secondly team built face challenge instance although legal accounting work get outsourced still need knowledgeable partner within fund Make sure surround partner complementary skill set passion fund match go Fund II build Fund II larger fund follow fund continuation Fund strategy one biggest challenge Oftentimes successful earlystage firm choose option build larger fund However “instead made successful” Dave conveyed “the default write bigger checks” Raising larger fund writing bigger check move fund completely different investment stage different return profile ecosystem “Many fund go successful struggling it’s hard avoid” Dave added Instead cut size check However larger fund you’ll need bring deal flow perform due diligence make deal Fund manager need bring people keep workload hesitate spread management fee even thinner Funds neglect recognize need larger labor force get done fail “In VC model labor scarce resource” It’s difficult keep pitching vision don’t much evidence accrued Fund yet really believe thesis stick “There’s lot randomness luck involved success” Dave admitted Stick brought success convinced first LPs sign make sure manage bandwidth right time scale fund Pivot persevere you’re missing opportunity wise change adapt thesis narrow persist you’ve planned VC fund underdiversified don’t think thesis working definitely change reasonable change Firsttime fund manager tend give small box operate within Dave advised “You’re going learn invest world going change well need flexibility adjust make best decision fund investors” key communicate change clearly LPs happen Take advantage resource — reach LP committee explain get advice thought start fundraising Fund II never stop raising you’re Fund you’re already raising Fund II one challenge managing fund “You always talking potential LPs making aware fund’s work quarterly updates” Dave said help make quicker decision come committing It’s also good time circle back anyone close investing wanted invest Fund Bring metric return Fund understand still may lot go Dave found investor much le datadriven think lot LPs believe data easily aligns thesis already believe lot way describe goal Craft story resonates well target investor profile keep bringing passion conviction investor decided invest It’s strong signal future fund original 
investor return don’t may issue depends profile investor Dave shared red flag institutional investor doesn’t stay may concern ROI member team fund whole — take concern heart reevaluate change needed it’s individual smaller family office — financial situation changed can’t afford keep investing greater return — it’s ideal can’t continue doesn’t market badly best retain existing LPs use create FOMO get new investor commit investor stepping away make sure you’re prepared answer question whyTags Entrepreneurship Business Fund Management Startup Venture Capital |
1,792 | Coronavirus: Experts Have Failed Us (Expensively)… and They Will Again | Let down by a system of experts that had promised competence.
The novel Coronavirus has shown that our current system, that relies on experts at the top, lacks either the skill or the inclination to protect us — our jobs, our savings, our lives. Yet, we give this system approximately $4.9 trillion every year to do just this. For this money, it has missed obvious risks, over and over. And it continues to ignore future calamities.
How much longer do we lend legitimacy to this system of experts that doesn’t work for us?
What Color Are All These Swans?
Should we really blame this system of experts for a crazy black swan event like the Coronavirus? Well, if it were truly a Black Swan, probably not. But a Black Swan is not merely an unforeseen event, it is an unforeseeable event. This particular swan is, to use Nassim Taleb’s categorization, grey. That is to say, it is a risk we absolutely knew was out there. We did not know when it would hit, but we knew it would eventually.
In 2015, Bill Gates was giving TED talks about the threat of new viruses.
And in October 2019 the Bill and Melinda Gates Foundation hosted Event 201 in NY, in conjunction with the Johns Hopkins Center for Health Security and the World Economic Forum, to discuss how best to respond to a pandemic.
Gates is not exactly a fringe character. Other experts have warned of the dangers of new viruses. Taleb himself pointed out that our increased interconnectedness means diseases spread more rapidly.
Not Prepared for Swans of Any Color
Was our system standing by, for years, with additional medical equipment? Did we have resources in place, ready to jump into action? It appears, instead, we had failed virus test kits from the CDC and an FDA process that slowed down the adoption of new tests.
And now, as these experts warn us that this crisis could overwhelm our medical systems (future readers will know whether they got that one right), one wonders, why weren’t there stockpiles of supplies, additional ventilators, emergency facilities?
The system of experts has not given us readiness, but red tape, pharmaceuticals dependent on foreign ingredients, and brittle supply chains.
Criticize Trump if you wish, but this lack of preparedness predates him. In 2008, the financial crisis struck. Was the “mortgage meltdown” unforeseeable? Of course not. The Federal Deposit Insurance Corporation had never had enough money to insure more than a few percent of depositors’ money. It was built to stop runs on a few banks. Fannie Mae owned literally trillions of dollars in mortgages, with very little capital. In fact, a company guaranteed by the government in turn guaranteeing mortgages pushes the system toward over-leverage and too much risk by its very nature. It was another brittle system.
And this system of experts is ignoring future crises now. We’ve known for 40 years that the Social Security and Medicare systems are out of balance by trillions of dollars. State pension plans are massively underfunded, as well. Are our leaders devising a strategy to resolve this? Of course not. And when those particular chickens come home to roost, our system will look for black paint and tell us the chickens are swans.
Mammoth Budgets Misplaced
And the cost of all this incompetence? Well, ignoring state and local governments, and the costs of regulations pushed onto people and corporations, the Federal government alone spends $4,900,000,000,000 every year. That’s $14,893 for every man, woman, and child in the country.
Imagine if we invested 0.1% of this budget on preparedness. That’s $4.9 billion per year.
Information Systems are Not Enough
We’ve all heard glowing accounts of how our new communication tools allow the flow of data around the world in an instant. And it’s all true.
But data from too limited a set of sources is fragile. And data without the means to take action is useless.
Let’s Find Another Way
Imagine the year is 1998 and I just asked you to take a video in your home, add music, and post it on the Internet for the entire world to see. Ten years earlier, you would have said, “what’s an Internet?” But in 1998, you could have done what I asked if you owned the right camera, knew how to upload the content to your computer, had the right editing software, and had the knowledge to put the whole thing together. Now, you just make a TikTok.
What required expertise and special tools is now easy. It is as though we have made people experts, to the point where that expertise is no longer special. And in place of a set of tools that an expert would use, we’ve embedded the expertise in the tools, enabling easy interaction with others.
Let’s give people tools with the expertise embedded.
Now, let’s do the same thing, but for the big stuff. Let’s give people tools for interaction, with the expertise embedded, to handle their financial lives and risks, to give them more options, to improve their health, to safeguard our economy.
Once we’ve built these kinds of tools, people will come together to solve their problems without relying on experts. In the financial arena, they will seek return knowing the real risk they’ve taken. Businesses will protect themselves against the risks that can end their business. But this dynamic of exchange and action can occur outside finance. It can lead to improvements in life and health.
In other words, we’ll give people the ability to generate their own ideas, find real solutions, and interact in ways that benefit them most. | https://medium.com/greyswandigital/coronavirus-experts-have-failed-us-expensively-and-they-will-again-5100bc6323c3 | ['Peter Harrigan'] | 2020-03-21 19:04:19.015000+00:00 | ['Economics', 'Coronavirus', 'Risk', 'Health'] | Title Coronavirus Experts Failed Us Expensively… AgainContent Let system expert promised competence novel Coronavirus shown current system relies expert top lack either skill inclination protect u — job saving life Yet give system approximately 49 trillion every year money missed obvious risk continues ignore future calamity much longer lend legitimacy system expert doesn’t work u Color Swans really blame system expert crazy black swan event like Coronavirus Well truly Black Swan probably Black Swan merely unforeseen event unforeseeable event particular swan use Nassim Taleb’s categorization grey say risk absolutely knew know would hit knew would eventually 2015 Bill Gates giving TED talk threat new virus October 2019 Bill Melinda Gates Foundation hosted Event 201 NY conjunction Johns Hopkins Center Health Security World Economic Forum discus best respond pandemic Gates exactly fringe character expert warned danger new virus Taleb pointed increased interconnectedness mean disease spread rapidly Prepared Swans Color system standing year additional medical equipment resource place ready jump action appears instead failed virus test kit CDC FDA process slowed adoption new test expert warn u crisis could overwhelm medical system future reader know whether got one right one wonder weren’t stockpile supply additional ventilator emergency facility system expert given u readiness red tape pharmaceutical dependent foreign ingredient brittle supply chain Criticize Trump wish lack preparedness predates 2008 financial crisis struck “mortgage meltdown” unforeseeable course Federal Deposit Insurance Corporation never enough money insure percent depositor money built stop run bank Fannie Mae owned literally trillion dollar mortgage little capital fact company guaranteed government turn guaranteeing mortgage push system toward overleverage much risk nature another brittle system system expert ignoring future crisis We’ve known 40 year Social Security Medicare system balance trillion dollar State pension plan massively underfunded well leader devising strategy resolve course particular chicken come home roost system look black paint tell u chicken swan Mammoth Budgets Misplaced cost incompetence Well ignoring state local government cost regulation pushed onto people corporation Federal government alone spends 4900000000000 every year That’s 14893 every man woman child country Imagine invested 01 budget preparedness That’s 49 billion per year Information Systems Enough We’ve heard glowing account new communication tool allow flow data around world instant it’s true data limited set source fragile data without mean take action useless Let’s Find Another Way Imagine year 1998 asked take video home add music post Internet entire world see Ten year earlier would said “what’s Internet” 1998 could done asked owned right camera knew upload content computer right editing software knowledge put whole thing together make TikTok required expertise special tool easy though made people expert point expertise longer special place set tool expert would use we’ve embedded expertise tool enabling easy interaction others Let’s give people tool expertise embedded 
let’s thing big stuff Let’s give people tool interaction expertise embedded handle financial life risk give option improve health safeguard economy we’ve built kind tool people come together solve problem without relying expert financial arena seek return knowing real risk they’ve taken Businesses protect risk end business dynamic exchange action occur outside finance lead improvement life health word we’ll give people ability generate idea find real solution interact way benefit mostTags Economics Coronavirus Risk Health |
1,793 | Prefect Cloud has Launched! 🎉 | For more than two years, Prefect has been making steady progress on our mission to eliminate negative engineering. Today, we’re excited to announce that Prefect Cloud is available to the public — including its free Scheduler tier!
Learn more here.
We worked with hundreds of early Cloud previewers and tens of Lighthouse Partners to reach this point. Since July 2019, when we onboarded Cloud’s very first customer, we have made enormous strides in our understanding of workflow systems and user requirements.
The biggest lesson of all is that a system we built for a very specific set of customers — large financial institutions — has come to dominate our business model. Our Hybrid Model delivers cloud convenience with on-prem security, and is so innovative it has resulted in two separate patent filings. Users keep their code and data on their private infrastructure — whether that’s a personal laptop, an IoT device, a cloud-hosted cluster, a serverless function, or just bare metal — while Prefect Cloud’s managed orchestration service provides complete oversight and confidence. The hybrid model is a clear advantage that Prefect provides over any alternative system.
Learn more about the hybrid model here.
The public release of Prefect Cloud caps “Phase 1” of our company’s story. It represents everything we’ve learned about negative engineering, and is informed by thousands of user stories gathered from all industries and experience levels. Just as when we launched our open-source Prefect Core library with the extreme confidence that comes from iterating with a small group of early previewers, we’ve already seen Prefect Cloud deployed at institutions large and small. We know that it fulfills the objective we laid out in this blog a year and a half ago:
Prefect is the codification of the patterns we observe in modern data engineering. At our core, we provide two things. One is our open-source framework [Core], which operates like a hardware store: stocked with all the necessary components for building great data applications. The other is our platform logic [Cloud], which we think of as the store manager: guiding users to the right tools and making sure their projects are successful. With these two things working together, we can offer a compelling solution for both positive and negative engineering problems.
What will you build?
— The Prefect Team | https://medium.com/the-prefect-blog/prefect-cloud-has-launched-ed4b1cc6a6e | ['Jeremiah Lowin'] | 2020-03-03 14:40:05.415000+00:00 | ['Data Science', 'Prefect', 'Python', 'Data Engineering', 'Workflow'] | Title Prefect Cloud Launched 🎉Content two year Prefect making steady progress mission eliminate negative engineering Today we’re excited announce Prefect Cloud available public — including free Scheduler tier Learn worked hundred early Cloud previewers ten Lighthouse Partners reach point Since July 2019 onboarded Cloud’s first customer made enormous stride understanding workflow system user requirement biggest lesson system built specific set customer — large financial institution — come dominate business model Hybrid Model delivers cloud convenience onprem security innovative resulted two separate patent filing Users keep code data private infrastructure — whether that’s personal laptop IoT device cloudhosted cluster serverless function bare metal — Prefect Cloud’s managed orchestration service provides complete oversight confidence hybrid model clear advantage Prefect provides alternative system Learn hybrid model public release Prefect Cloud cap “Phase 1” company’s story represents everything we’ve learned negative engineering informed thousand user story gathered industry experience level launched opensource Prefect Core library extreme confidence come iterating small group early previewers we’ve already seen Prefect Cloud deployed institution large small know fulfills objective laid blog year half ago Prefect codification pattern observe modern data engineering core provide two thing One opensource framework Core operates like hardware store stocked necessary component building great data application platform logic Cloud think store manager guiding user right tool making sure project successful two thing working together offer compelling solution positive negative engineering problem build — Prefect TeamTags Data Science Prefect Python Data Engineering Workflow |
1,794 | Five Books That Made Me Laugh Out Loud in Quarantine (and Taught Me Amazing Lessons) | Five Books That Made Me Laugh Out Loud in Quarantine (and Taught Me Amazing Lessons)
#3 is one of the most unorthodox, original books I’ve ever read
Image by Christopher Ross from Pixabay
I have a special place in my heart for writers who can make me laugh when I’m alone, especially now. We’re in a pandemic, people! That news outlet you love? It’s just going in circles.
Why not step back, and read something that doesn’t put the weight of the world on your shoulders?
All the books I’m including in this list manage to interweave fantastic life lessons with the comedy, so even if you’re a hyper-efficient self-help junkie, there are pearls of wisdom awaiting you between the humor. I’ll be sharing a quote from each book and a short summary of my takeaways. Enjoy! | https://medium.com/books-are-our-superpower/five-books-that-made-me-laugh-out-loud-in-quarantine-and-taught-me-amazing-lessons-11510f9fa136 | ['Aaron Nichols'] | 2020-12-02 05:24:13.772000+00:00 | ['Comedy', 'Books', 'Reading', 'Creativity', 'Self Improvement'] | Title Five Books Made Laugh Loud Quarantine Taught Amazing LessonsContent Five Books Made Laugh Loud Quarantine Taught Amazing Lessons 3 one unorthodox original book I’ve ever read Image Christopher Ross Pixabay special place heart writer make laugh I’m alone especially We’re pandemic people news outlet love It’s going circle step back read something doesn’t put weight world shoulder book I’m including list manage interweave fantastic life lesson comedy even you’re hyperefficient selfhelp junkie pearl wisdom awaiting humor I’ll sharing quote book short summary takeaway EnjoyTags Comedy Books Reading Creativity Self Improvement |
1,795 | PI and Simulation Art in R | I spent the better part of an afternoon last week perusing a set of old flash drives I’d made years ago for my monthly notebook backups. One that especially caught my attention had a folder of R scripts, probably at least 15 years old — harking back to my earliest days with R. I could only smile at some of the inefficient scripts I wrote then, reflecting an early, awkward attempt to switch gears from SAS to R.
The script I reviewed, in particular, had to do with Monte Carlo estimation of pi, as in pi*(r**2), for the area of a circle. Estimating pi via random sampling is quite straightforward and generally a first assignment in an intro numerical/statistical computation course.
The old code actually worked fine but was far from the ideal R vectorized/functional programming metaphor, gnarled with procedurally oriented nested loops and lists. And the limit of 2,500,000 iterations reflected processor performance at that time. So, I decided to modernize the code a bit, adding a visualization that showed pi as derived from the ratio of an embedded circle to an enclosing square.
The point of departure for this exercise is a circle of diameter 2 centered at coordinates (1,1), embedded within a square of side length 2. A uniform random sample of x’s and y’s <= 2 is generated, their distance from the circle center calculated, and a determination made of whether each (x,y) point is within or outside the circle. The ratio of “in” to total points estimates the ratio of the area of the circle to the area of the square. And since the area of the square is 4, the area of the circle is estimated by 4*(in/total). Moreover, the radius of the circle is 1, so 4*(in/total) estimates pi as well. Pretty nifty.
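In symbols, P(in) = area(circle)/area(square) = pi*(1**2)/(2**2) = pi/4, so pi = 4*P(in), which the simulation estimates as 4*(in/total). For instance, the 1,000,000-point run later in this post lands 785,094 points inside the circle, giving 4*0.785094 = 3.140376.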
What follows is the MC script code, partitioned into Jupyter Notebook cells. The technology used is Wintel 10 with 128 GB RAM, along with JupyterLab 1.2.4 and R 3.6.2. The R data.table, ggplot, and knitr packages are featured.
Set options, import packages, and load a personal library. The private functions used are blanks, freqsdt, meta, mykab, and obj_sz. freqsdt is a general-purpose, multi-attribute frequencies function for data.tables. meta displays metadata and individual records from data.tables. mykab is a printing function that uses knitr kable, and obj_sz returns the size of R objects.
In [1]:
options(warn=-1)
options(scipen = 10)
options(datatable.print.topn=100)
options(datatable.showProgress=FALSE)
options(stringsAsFactors=TRUE)

usualsuspects <- c(
    'tidyverse', 'data.table', 'pryr',
    'rvest', 'magrittr','lubridate',
    'fst','feather',
    'knitr', 'kableExtra',
    'ggplot2','RColorBrewer'
)

suppressMessages(invisible(lapply(usualsuspects, library, character.only = TRUE)))

funcsdir <- "/steve/r/functions"
funcsfile <- "rfunctions.r"

setwd(funcsdir)
source(funcsfile)

lsf.str()

blanks(2)
allfreqs : function (dtn, catlim = 100)
blanks : function (howmany)
freqsdt : function (DTstr, xstr)
freqsonly : function (DTstr, xstr)
meta : function (df, data = FALSE, dict = TRUE)
mykab : function (dt)
obj_sz : function (obj)
First up, an updated procedural code estimate of pi for 8 different simulations to assess how the estimates vary with sample size. A data.table with columns denoting the pi estimate and sample size is output.
In [2]:
set.seed(531)

HOWMANY <- c(500,2500,12500,62500,312500,1562500,7812500,39062500)

pisim <- data.table(howmany=HOWMANY,piest=NULL)

for (i in 1:length(HOWMANY))
{
    h <- HOWMANY[i]
    # sample h points uniformly in the enclosing 2 x 2 square
    x <- runif(h,max=2)
    y <- runif(h,max=2)
    # distance of each point from the circle center (1,1)
    d <- ((x-1)**2+(y-1)**2)**.5
    inout <- factor(ifelse(d<=1,"in","out"))
    # pi estimate: 4 * (points inside the circle / total points)
    pisim[i,piest:=4*sum(inout=='in')/length(inout)]
}

mykab(pisim)

blanks(2)
|howmany | piest |
|:------:|:------:|
| 500 |3.128000|
| 2500 |3.158400|
| 12500 |3.134400|
| 62500 |3.152512|
| 312500 |3.136166|
|1562500 |3.140460|
|7812500 |3.142583|
|39062500|3.141737|
The same calculation, this time using a more current functional approach. The results are, fortunately, identical.
In [3]:
set.seed(531)

HOWMANY <- c(500,2500,12500,62500,312500,1562500,7812500,39062500)

# simulate one sample of size h; return a one-row data.table with its pi estimate
mksim <- function(h)
{
    x <- runif(h,max=2)
    y <- runif(h,max=2)
    d <- ((x-1)**2+(y-1)**2)**.5
    inout <- factor(ifelse(d<=1,"in","out"))

    data.table(howmany=h,piest=4*sum(inout=='in')/length(inout))
}

# row-bind the per-sample results into a single data.table
pisim <- rbindlist(lapply(HOWMANY,mksim))

mykab(pisim)

blanks(2)
|howmany | piest |
|:------:|:------:|
| 500 |3.128000|
| 2500 |3.158400|
| 12500 |3.134400|
| 62500 |3.152512|
| 312500 |3.136166|
|1562500 |3.140460|
|7812500 |3.142583|
|39062500|3.141737|
Graph the pi estimates above as a function of sample size. Note the convergence to the R constant pi.
In [4]:
options(repr.plot.width=10, repr.plot.height=10)

bpal <- brewer.pal(9,"Blues")
gpal <- brewer.pal(9,"Greens")

g <- ggplot(pisim, aes(x=howmany,y=piest)) +
geom_point(size=5) +
geom_line() +
theme(plot.background = element_rect(fill = bpal[2]),
panel.background = element_rect(fill = bpal[2])) +
geom_hline(aes(yintercept=pi), na.rm = FALSE, show.legend = NA,col="black",size=.3,linetype=2) +
theme(axis.text = element_text(size=15)) +
theme(legend.position="none") +
ylim(3.1,3.2) +
scale_x_log10(breaks=pisim$howmany) +
theme(axis.text.x = element_text(angle=45)) +
labs(title="Simulation Estimate of pi by Sample Size
", y="Estimate of pi
", x="
Sample Size (log scale)
") +
theme(plot.title = element_text(size=25,face = "bold")) +
theme(axis.text = element_text(size=12)) +
annotate("text", x = 100, y = pi+.00100, label = paste("",round(pi,5),sep=""), size=5) +
theme(text = element_text(size=rel(4)))

print(g)

blanks(2)
Move on to a related process for showing pi visually. Create a data.table with two random uniform columns in the range of (0,2), an attribute that measures the distance between the two columns from circle center (1,1), and a factor that specifies whether each point is within or outside the circle of radius 1. A sample size of 1,000,000 is used for this simulation.
In [5]:
set.seed(345)

howmany <- 1000000

simpoints <- data.table(x=runif(howmany,max=2),y=runif(howmany,max=2))[
    ,distance1_1:=((x-1)**2+(y-1)**2)**.5][
    ,inout:=factor(ifelse(distance1_1<=1,"in","out"))]

meta(simpoints)

blanks(2)
| name | class | rows | columns | size |
|:---------:|:-----------------------------:|:-------:|:-------:|:-------:|
|simpoints|c("data.table", "data.frame")|1000000| 4 |26.7 MB|
Classes 'data.table' and 'data.frame': 1000000 obs. of 4 variables:
$ x : num 0.433 0.55 0.78 1.311 0.872 ...
$ y : num 0.689 1.352 1.868 0.773 0.355 ...
$ distance1_1: num 0.647 0.572 0.896 0.385 0.658 ...
$ inout : Factor w/ 2 levels "in","out": 1 1 1 1 1 1 1 2 1 1 ...
- attr(*, ".internal.selfref")=<externalptr>
NULL
Ratio of points in the circle to points in the enclosing square — i.e. of the area of the circle to the area of the square. The area of the square is 4, which implies the area of the circle is arearatio*4. And with a radius of 1, the area of the circle and estimate of pi are identical. Look familiar?
In [6]:
f <- freqsdt("simpoints","inout")
mykab(f)

arearatio <- f[inout=='in',percent]/100
blanks(1)
print(arearatio)

pisim <- 4*arearatio
blanks(1)
print(pisim)

blanks(2)
|inout|frequency|percent|
|:---:|:---------:|:-------:|
| in | 785094 |78.5094|
| out | 214906 |21.4906|
[1] 0.785094
[1] 3.140376
Visualize the above simulation/computation in ggplot — graphing 1,000,000 points. The result is rather artistic.
In [7]:
start <- proc.time()

options(repr.plot.width=10, repr.plot.height=10)

bpal <- brewer.pal(9,"Blues")
gpal <- brewer.pal(9,"Greens")
rpal <- brewer.pal(9,'Reds')

myColors <- gpal[c(5,9)]
names(myColors) <- levels(simpoints$inout)

tit <- "Area of Square and Circle"
subtit <- paste("Simulation pi: ", round(pisim,6),"
Actual pi: ", round(pi,6),sep="")

g <- ggplot(simpoints, aes(x=x,y=y,col=inout)) +
geom_point(size=.5) +
theme(plot.background = element_rect(fill = bpal[2]),
panel.background = element_rect(fill = bpal[2])) +
theme(legend.position="none") +
ylim(-1,3) +
xlim(-1,3) +
labs(title=tit,subtitle=subtit, y="Height
", x="
Length") +
theme(plot.title = element_text(size=22,face = "bold")) +
theme(plot.subtitle = element_text(size=15,face = "bold")) +
theme(axis.text = element_text(size=15)) +
scale_color_manual(values = myColors) +
theme(text = element_text(size=rel(4)))

print(g)

end <- proc.time()
print(end-start)

blanks(2)
user system elapsed
10.19 20.08 30.27
That’s it for now. More R/Python-Pandas next time.
In [ ]: | https://medium.com/swlh/pi-and-simulation-art-in-r-92098b7463b2 | ['Odsc - Open Data Science'] | 2020-03-18 16:17:02.172000+00:00 | ['Data Science', 'R', 'Artificial Intelligence', 'Jupyter Notebook', 'Mathematics'] | Title PI Simulation Art RContent spent better part afternoon last week perusing set old flash drive I’d made year ago monthly notebook backup One especially caught attention folder R script probably least 15 year old — harking back earliest day R could smile inefficient script wrote reflecting early awkward attempt switch gear SAS R script reviewed particular Monte Carlo estimation pi pir2 area circle Estimating pi via random sampling quite straightforward generally first assignment intro numericalstatistical computation course old code actually worked fine far ideal R vectorizedfunctional programming metaphor — gnarled procedurally oriented nested loop list limit 2500000 iteration reflected processor performance time decided modernize code bit adding visualization showed pi derived ratio embedded circle enclosing square point departure exercise circle diameter 2 centered coordinate 11 embedded within square side length 2 uniform random sample x’s y’s 2 generated distance circle center calculated determination made whether xy point within outside circle ratio “in” total point estimate ratio area circle area square since area square 4 area circle estimated 4intotal radius circle 1 4inout estimate pi well Pretty nifty follows MC script code partitioned Jupyter Notebook cell technology used Wintel 10 128 GB RAM along JupyterLab 124 R 362 R datatable ggplot knitr package featured Set option import package load personal library private function used blank freqsdt meta mykab objsz freqsdt generalpurpose multiattribute frequency function datatables meta display metadata individual record datatables mykab printing function us knitr kable objsz return size R object 1 optionswarn1 optionsscipen 10 optionsdatatableprinttopn100 optionsdatatableshowProgressFALSE optionsstringsAsFactorsTRUE usualsuspects c tidyverse datatable pryr rvest magrittrlubridate fstfeather knitr kableExtra ggplot2RColorBrewer suppressMessagesinvisiblelapplyusualsuspects library characteronly TRUE funcsdir steverfunctions funcsfile rfunctionsr setwdfuncsdir sourcefuncsfile lsfstr blanks2 allfreqs function dtn catlim 100 blank function howmany freqsdt function DTstr xstr freqsonly function DTstr xstr meta function df data FALSE dict TRUE mykab function dt objsz function obj First updated procedural code estimate pi 8 different simulation ass estimate vary sample size datatable column denoting pi estimate sample size output 2 setseed531 HOWMANY c500250012500625003125001562500781250039062500 pisim datatablehowmanyHOWMANYpiestNULL 1lengthHOWMANY h HOWMANYi xrunifhmax2 yrunifhmax2 dx12y125 inoutfactorifelsed1inout pisimipiest 4suminoutinlengthinout mykabpisim blanks2 howmany piest — — — — 500 3128000 2500 3158400 12500 3134400 62500 3152512 312500 3136166 1562500 3140460 7812500 3142583 390625003141737 calculation time using current functional approach result fortunately identical 3 setseed531 HOWMANY c500250012500625003125001562500781250039062500 mksim functionh xrunifhmax2 yrunifhmax2 dx12y125 inoutfactorifelsed1inout datatablehowmanyhpiest4suminoutinlengthinout pisim rbindlistlapplyHOWMANYmksim mykabpisim blanks2 howmany piest — — — — 500 3128000 2500 3158400 12500 3134400 62500 3152512 312500 3136166 1562500 3140460 7812500 3142583 390625003141737 Graph pi estimate function sample size Note convergence R 
constant pi 4 optionsreprplotwidth10 reprplotheight10 bpal brewerpal9Blues gpal brewerpal9Greens g ggplotpisim aesxhowmanyypiest geompointsize5 geomline themeplotbackground elementrectfill bpal2 panelbackground elementrectfill bpal2 geomhlineaesyinterceptpi narm FALSE showlegend NAcolblacksize3linetype2 themeaxistext elementtextsize15 themelegendpositionnone ylim3132 scalexlog10breakspisimhowmany themeaxistextx elementtextangle45 labstitleSimulation Estimate pi Sample Size yEstimate pi x Sample Size log scale themeplottitle elementtextsize25face bold themeaxistext elementtextsize12 annotatetext x 100 pi00100 label pasteroundpi5sep size5 themetext elementtextsizerel4 printg blanks2 Move related process showing pi visually Create datatable two random uniform column range 02 attribute measure distance two column circle center 11 factor specifies whether point within outside circle radius 1 sample size 1000000 used simulation 5 setseed345 howmany 1000000 simpoints datatablexrunifhowmanymax2yrunifhowmanymax2distance11x12y125 inoutfactorifelsedistance111inout metasimpoints blanks2 name class row column size — — — — — — — — — — — — – — – — – simpointsc“datatable” “dataframe”1000000 4 267 MB Classes ‘datatable’ ‘dataframe’ 1000000 ob 4 variable x num 0433 055 078 1311 0872 … num 0689 1352 1868 0773 0355 … distance11 num 0647 0572 0896 0385 0658 … inout Factor w 2 level “in””out” 1 1 1 1 1 1 1 2 1 1 … – attr “internalselfref” NULL Ratio point circle point enclosing square — ie area circle area square area square 4 implies area circle arearatio4 radius 1 area circle estimate pi identical Look familiar 6 f freqsdtsimpointsinout mykabf arearatio finoutinpercent100 blanks1 printarearatio pisim 4arearatio blanks1 printpisim blanks2 inoutfrequencypercent — — — — – 785094 785094 214906 214906 1 0785094 1 3140376 Visualize simulationcomputation ggplot — graphing 1000000 point result rather artistic 7 start proctime optionsreprplotwidth10 reprplotheight10 bpal brewerpal9Blues gpal brewerpal9Greens rpal brewerpal9Reds myColors gpalc59 namesmyColors levelssimpointsinout tit Area Square Circle subtit pasteSimulation pi roundpisim6 Actual pi roundpi6sep g ggplotsimpoints aesxxyycolinout geompointsize5 themeplotbackground elementrectfill bpal2 panelbackground elementrectfill bpal2 themelegendpositionnone ylim13 xlim13 labstitletitsubtitlesubtit yHeight x Length themeplottitle elementtextsize22face bold themeplotsubtitle elementtextsize15face bold themeaxistext elementtextsize15 scalecolormanualvalues myColors themetext elementtextsizerel4 printg end proctime printendstart blanks2 user system elapsed 1019 2008 3027 That’s RPythonPandas next time Tags Data Science R Artificial Intelligence Jupyter Notebook Mathematics |
1,796 | How To Make Scalable APIs Using Flask and FaunaDB | What does Serverless have to do with this tutorial?
The main reason serverless is being mentioned here is that FaunaDB is a NoSQL database made with serverless in mind. The pricing on this database is request based, precisely what serverless apps need.
Using a service like FaunaDB can help cut costs so much that hosting the app would be virtually free, excluding the development costs of course. A monthly billed database, by contrast, kind of kills the point of going serverless.
A free stack example would be a combination of Netlify, Netlify Functions, and FaunaDB. Though it would only be ‘free’ for a certain number of requests. Unless you are making an app that gets thousands of users on day zero of deployment, I don’t think it would be much of a problem.
In my opinion, using a monthly billed database for serverless apps kind of kills the point
Flask on the other hand is a microframework written in Python. It is a minimalistic framework with no database abstraction layers, form validation, or any other particular functions provided by other frameworks.
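To make that concrete, here is a minimal sketch of the kind of Flask JSON endpoint, backed by FaunaDB, that this tutorial builds toward. It is my own illustration rather than code from the tutorial; the FAUNA_SECRET environment variable and the “posts” collection name are assumptions for the example.

import os

from flask import Flask, jsonify
from faunadb import query as q
from faunadb.client import FaunaClient

app = Flask(__name__)

# Assumption: a Fauna server key is supplied via the environment.
client = FaunaClient(secret=os.environ["FAUNA_SECRET"])

@app.route("/posts/<ref_id>")
def get_post(ref_id):
    # Fetch one document from the hypothetical "posts" collection by its id.
    doc = client.query(q.get(q.ref(q.collection("posts"), ref_id)))
    return jsonify(doc["data"])

if __name__ == "__main__":
    app.run()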
Flask is by large serverless compatible. You can make a serverless Flask app using AWS Lambda. Here is an official guide to Flask serverless from serverless.com. | https://towardsdatascience.com/how-to-make-scalable-apis-using-flask-and-faunadb-f6005d4a8065 | ['Agustinus Theodorus'] | 2020-10-27 23:50:13.217000+00:00 | ['Software Engineering', 'Programming', 'Software Development', 'Microservices', 'Serverless'] | Title Make Scalable APIs Using Flask FaunaDBContent Serverless tutorial main reason serverless mentioned FaunaDB NoSQL database made serverless mind pricing database request based precisely serverless apps need Using service like FaunaDB help cut cost much hosting capability app would virtually free Excluding development cost course Thus using monthly billed database serverless apps kind kill point free stack example would combination Netlify Netlify Functions FaunaDB Though would ‘free’ certain amount request Unless making app get thousand user day zero deployment don’t think would much problem opinion using monthly billed database serverless apps kind kill point Flask hand microframework written Python minimalistic framework database abstraction layer form validation particular function provided framework Flask large serverless compatible make serverless Flask app using AWS Lambda official guide Flask serverless serverlesscomTags Software Engineering Programming Software Development Microservices Serverless |
1,797 | Practical Data Analysis with Pandas and Seaborn | Practical Data Analysis with Pandas and Seaborn
Exploratory data analysis on a bank customer dataset
Photo by Joshua Hoehne on Unsplash
Whether we are creating a dashboard, doing predictive analytics, or working on any other machine learning task, we first need to explore the data at hand. We should obtain a thorough understanding of the data and the relationships among variables.
There are many tools and packages that can be used to analyze data. What they all have in common is that the best way to learn them is through practice.
In this practical article, we will explore a dataset that contains information about the customers of a bank. The ultimate task is to predict whether a customer will leave the credit card services of the bank.
We will be using Pandas for data analysis and manipulation and Seaborn to create visualizations.
The first step is to import the libraries.
import numpy as np
import pandas as pd

import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='darkgrid')
Let’s create a dataframe by reading the provided csv file.
churn = pd.read_csv("/content/BankChurners.csv", usecols=list(range(1,21)))
I have excluded the first column and the last two columns by providing a list of indices of columns to be included in the dataframe. The usecols parameter is used to select only certain columns. We can pass the names or indices of the columns to be included.
The first column is client number which does not add any value to the analysis. The last two columns were not relevant as indicated by the dataset provider.
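As an aside, usecols also accepts column labels instead of positions. Here is a small illustration of the same call by name; the two labels below are assumed from the dataset’s typical schema, so check them against the actual header:

# Same selection idea, but with explicit (assumed) column labels.
churn_subset = pd.read_csv("/content/BankChurners.csv", usecols=["Customer_Age", "Gender"])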
The shape method returns the size of the dataframe in terms of the number of rows and columns.
print(churn.shape)
(10127, 20)
(image by author)
There are 20 columns. The screenshot above only includes 7 columns for demonstration purposes. We can view the entire list of columns by using the “columns” method.
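That lookup is a one-liner, which I am adding here for completeness:

# Returns all 20 column labels as a pandas Index object.
churn.columns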
Before starting the analysis, we should check if there is any missing value in the columns. The isna function of Pandas returns true if a value is missing. We can apply sum functions to count the number of missing values in each column or entire dataframe.
churn.isna().sum().sum()
0
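If we drop the final sum, we get per-column counts instead, which is handy for spotting exactly which features would need imputation (my addition, not part of the original walkthrough):

# One missing-value count per column; every entry is 0 for this dataset.
churn.isna().sum()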
There is no missing value in the dataset. | https://towardsdatascience.com/practical-data-analysis-with-pandas-and-seaborn-8fec3cb9cd16 | ['Soner Yıldırım'] | 2020-12-22 17:58:15.541000+00:00 | ['Data Science', 'Python', 'Artificial Intelligence', 'Data Analysis', 'Pandas'] | Title Practical Data Analysis Pandas SeabornContent Practical Data Analysis Pandas Seaborn Exploratory data analysis bank customer dataset Photo Joshua Hoehne Unsplash Whether creating dashboard predicting analytics working machine learning task first need explore data hand obtain thorough understanding data relationship among variable many tool package used analyze data common best way learn practice practical article explore dataset contains information customer bank ultimate task predict whether customer leave credit card service bank using Pandas data analysis manipulation Seaborn create visualization first step import library import numpy np import panda pd import matplotlibpyplot plt import seaborn sn snssetstyledarkgrid Let’s create dataframe reading provided csv file churn pdreadcsvcontentBankChurnerscsv usecolslistrange21 excluded first column last two column providing list index column included dataframe usecol parameter used select certain column pas name index column included first column client number add value analysis last two column relevant indicated dataset provider shape method return size dataframe term number row column printchurnshape 10127 20 image author 20 column screenshot includes 7 column demonstration purpose view entire list column using “columns” method starting analysis check missing value column isna function Pandas return true value missing apply sum function count number missing value column entire dataframe churnisnasumsum 0 missing value datasetTags Data Science Python Artificial Intelligence Data Analysis Pandas |
1,798 | Upgrade to Latest Version ASAP — No Thanks | Background
At the moment k8s 1.16, 1.17, 1.18 are officially supported; the support for 1.15 has ended. But in AWS EKS, the latest version is still 1.16, and at the time of the tweet above, even 1.16 in EKS hadn’t been released yet.
Do Not Upgrade To Latest Version ASAP
Photo by Michael Dziedzic on Unsplash
This may seem controversial, but while I do think we should move to 1.16 at least, when we are talking about security and stability, I am with Mozilla: not “upgrading to latest version asap”.
I think many people are like me. Take a very simple example: I guess not all software engineers have already upgraded their MacOS to 10.15.4 (I did).
You might say that this is only because you are lazy and don’t want to be interrupted by the download and restart, but there are actually very good reasons for “not always upgrade to the latest version as soon as possible”. The most important one is:
Bugs Diminishing Model
Photo by Markus Spiske on Unsplash
The number of bugs discovered in a given version of a given piece of software diminishes over time.
Thus, by a “delayed” upgrading strategy, you have much less trouble and potential security issues in production and reduce your maintenance in production.
Of course, like everything else, there are exceptions. For example, if there is a major security issue that can’t be patched and instead requires a new release, you should definitely upgrade ASAP. But the norm would be to NOT upgrade ASAP, so that you expose fewer potential security, bug, and stability issues in the production environment.
Example — K8s 1.15
Taking k8s 1.15 as an example:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md
If you have a detailed look at the release notes, you will find out that the number of bugs/issues fixed in each version is much lower at the end than at the beginning.
It is universal for literally any piece of software.
Example — Firefox ESR 52
Take another example from Firefox ESR:
https://www.mozilla.org/en-US/security/known-vulnerabilities/firefox-esr/
In the beginning month of Firefox 52, there were 31 bugs (can you believe that, a major release by a major company, one bug per day).
But only half a year later, the number dropped to 7 per month, and it maintained that level ever since (some months even much lower).
Apparently, Mozilla also thought about this “bug diminishing model” when they built the enterprise edition of Firefox (extended support release, ESR, or Firefox for enterprise). The ESR major version is always 3 less than (or one year behind) the latest version. At the moment, the latest version of Firefox is 71, while ESR is only 68. Which one am I using? 68. I believe the bug diminishing model also works here, and I also believe the developers of the software know what they are doing when they are talking about enterprise security.
Conclusion
As of today, since EKS 1.16 has been released, I would upgrade to k8s 1.16 immediately (already did both in my work project and personal project), since 1.15 support ended. This makes perfect sense.
In a not-so-perfect real world (like literally 1 month ago when 1.16 EKS hadn’t been released), given the two choices:
A), using 1.15 for a few weeks/months before upgrading, and
B), using 1.18 immediately;
I’d definitely choose the former one for production, latter one for development env. | https://medium.com/devops-dudes/upgrade-to-latest-version-asap-no-thanks-a6cb99d739b3 | ['Tiexin Guo'] | 2020-05-31 16:21:27.359000+00:00 | ['Version Control', 'Software Engineering', 'Kubernetes', 'Software Development', 'Security'] | Title Upgrade Latest Version ASAP — ThanksContent Background moment k8s 116 117 118 officially supported support 115 ended AWS EKS latest version still 116 writing twitter even 116 EKS hasn’t released yet Upgrade Latest Version ASAP Photo Michael Dziedzic Unsplash may seem controversial think move 116 least talking security stability Mozilla “upgrading latest version asap” think many people like Take simple example guess software engineer already upgraded MacOS 10154 might say lazy don’t want interrupted download restart actually good reason “not always upgrade latest version soon possible” important one Bugs Diminishing Model Photo Markus Spiske Unsplash number bug discovered given version given software diminishing time Thus “delayed” upgrading strategy much le trouble potential security issue production reduce maintenance production course like everything else exception example major security issue can’t patched instead must release new version definitely upgrade ASAP norm would upgrade ASAP expose le potential security issue bug stability issue production environment Example — K8s 115 Taking k8s 115 example httpsgithubcomkuberneteskubernetesblobmasterCHANGELOGCHANGELOG115md detailed look release note find number bugsissues fixed version much le end beginning universal literally piece software Example — Firefox ESR 52 Take another example Firefox ESR httpswwwmozillaorgenUSsecurityknownvulnerabilitiesfirefoxesr beginning month Firefox 52 31 bug believe major release major company one bug per day Half year later number dropped 7 per month maintained level ever since month even much lower Apparently Mozilla also although “bug diminishing model” provide enterprise edition Firefox extended support release ESR Firefox enterprise ESR major version always 3 le one year behind latest version moment latest version Firefox 71 ESR 68 one using 68 believe bug diminishing model also work also believe developer software know talking enterprise security Conclusion today since EKS 116 released would upgrade k8s 116 immediately already work project personal project since 115 support ended make perfect sense notsoperfect real world like literally 1 month ago 116 EKS hadn’t released given two choice using 115 weeksmonths upgrading B using 118 immediately I’d definitely choose former one production latter one development envTags Version Control Software Engineering Kubernetes Software Development Security |
1,799 | NAIC Principles for the Use of Artificial Intelligence in the Insurance Industry |
The NAIC (National Association of Insurance Commissioners) is the standard-setting body of insurance regulators operating in the United States: on June 30, it approved five guiding principles to be applied in the use of Artificial Intelligence.
Let's see what they are: the document recommends that all operators working in the insurance field, as well as third parties such as rating and consulting organizations (all defined in the text as "AI actors"), adhere to these fundamental principles, which complement one another and together serve the goal of a "trustworthy" Artificial Intelligence.
The AI system should be:
FAIR AND ETHICAL: Operators will have to respect the rule of law, especially with regard to commercial practices, unfair discrimination, access to insurance, underwriting, privacy, consumer protection and eligibility practices, instalment standards, advertising decisions, claims practices and solvency; and they shall act proactively, subjecting the use of Artificial Intelligence to supervision, so that such systems are not designed to harm or deceive people and are implemented in a way that minimises negative outcomes for consumers, avoiding harmful or undesirable consequences.
RESPONSIBLE: Along the same lines, constant surveillance of the AI system will be required so that it does not harm or prejudice consumers; even where there is no negligence in its creation, monitoring or implementation, the remedy for a possible error must be its correction! An appropriate methodology will have to be put in place so that the review or explanation of the process that led to a given decision can be requested, in clear and simple language that is also within the reach of consumers who are not technology specialists.
COMPLIANT: Operators must always keep in mind all state and, in the United States, federal regulations, as well as all privacy regulations.
TRANSPARENT: AI actors must commit to ensuring transparency and responsible disclosure of AI systems to stakeholders, including consumers, while retaining the ability to protect the confidentiality of proprietary algorithms and adhering to the laws and regulations of individual states. Such proactive disclosures include disclosure of the type of data used, the purpose of the data in the AI system and the consequences for all stakeholders.
SECURE, RELIABLE AND STRONG: Actors should, based on their role, context and ability to act, apply a systematic approach to risk management at every stage of the AI system's life cycle, to address risks such as privacy, digital security and unfair discrimination.
These five proposed principles will be considered by the NAIC's Innovation and Technology Task Force on July 23rd, and although they will not carry the force of law once published, they will be a good guide for future initiatives and regulations in the insurance industry!
The insurance industry has significant regulatory dimensions, involving privacy, risk management and customer assistance (including through chatbots); the upcoming introduction of Artificial Intelligence, although aimed at improving market performance, will therefore require a greater and genuinely useful involvement of legal professionals, oriented towards an increasingly technological and digital market.
All Rights Reserved
Raffaella Aghemo, Lawyer
| https://medium.com/datadriveninvestor/naic-principles-for-the-use-of-artificial-intelligence-in-the-insurance-industry-fce6c940b6a4 | ['Raffaella Aghemo'] | 2020-07-21 16:14:15.754000+00:00 | ['Insurance', 'Artificial Intelligence', 'Chatbots', 'AI', 'Naic'] |