5,100 | 5 Habits I Gave Up to Begin Healing From Borderline Personality Disorder
Recently, I received an… interesting comment on one of my stories about life with borderline personality disorder (BPD). It was a little bit ironic, even. The reader who said, “It’s so strange that now just being an @@@hole has a disease attached to it,” closed their commentary by claiming, “I’m sure I will be hated for this.”
The irony of the comment, for me, was that the reader clearly sought negative attention by being something of an asshole themselves. They lumped all of the symptoms of BPD into being an asshole, and declared that it’s not even a mental illness but a choice.
What made the reader need that kind of attention, anyway?
To be honest, this is the kind of compulsion I used to battle in my worst episodes of BPD. We're not the only people who've been known to act out for attention. In fact, I was already planning to write about the habits I gave up to start healing from BPD when this comment caught my attention and reminded me.
To be fair, not every person with borderline has these same habits. And there are plenty of people without BPD who do these things too.
It’s important to recognize, though, the ways we self-sabotage and stunt our own growth.
1. I quit looking for attention.
At the heart of BPD is a desire to be loved. It’s terribly all-consuming. I would sometimes talk a bit more loudly than necessary, or draw negative attention to myself, simply because I wanted to know I wasn’t invisible.
As much as I didn’t want to admit it, attention from other people felt something like a drug to me. Attention made it just a little easier to survive.
To begin my healing, I had to quit looking for that feel-good drug from others.
2. I stopped chasing men who didn’t really care about me.
Everyone has heard the phrase, “looking for love in all the wrong places.” For many BPD sufferers, that’s a huge problem. I never witnessed healthy romantic relationships as a child. Practically everything I learned about love came from television or movies. It was, of course, all wrong.
For a long time, I chased guys who were no good for me. And I didn’t disengage when I realized they didn’t truly care about me because I was already fully invested.
I kept thinking that “love” was my lifeline, but I wasn’t honest about the fact that my idea of love was toxic and twisted. Once I stopped chasing the wrong guys, I began to heal and reexamine my definition of love.
3. I went on a selfie hiatus.
I’ve written about the benefits of selfies before, so please hear me out. I don’t mean that selfies are inherently bad. They’re not.
It’s just that for some folks battling mental health issues, and for some of us with borderline personality disorder, taking selfies can be excessive and problematic.
It can be difficult to get yourself away from a compulsive need for attention when you’re stuck on taking (and posting) selfies. Healing from BPD really does require you to cool it with seeking approval from other people.
Not only that, but people with BPD typically struggle significantly with their self-esteem. As our symptoms flare up, the last thing we need is to overanalyze our features and appearance.
4. I changed my story and quit feeling sorry for myself.
For the longest time, every story I told myself about my life was a sad one. I told myself that I was worthless and ugly.
Like so many other borderline folks, I had a bad habit of telling myself that everybody rejected me. I didn’t just fear rejection; it was my entire identity.
During the first couple years of motherhood, even though I was doing well with putting my baby first, I still carried my sob stories. I was alone. I was abandoned. I was unlovable.
I had to finally let go of that ugly narrative, because I had grown so comfortable with my pain. Pain was more familiar to me than hope. And I couldn’t heal as long as I let myself stay stuck in that narrative.
5. I quit expecting other people to bring meaning into my life.
By far, one of the worst things about BPD is its chronic emptiness. It’s like a permanent storm cloud above your head, or some sort of splinter in your soul that wakes you up at the worst possible moments.
That emptiness has a way of eating up everything and making you believe that you will never really know who you are.
And you try so hard to fill it with different things. Maybe with stuff. But more often, with people.
I had to quit looking to other people and expecting them to give me value. I had to stop trying to use them to bring meaning into my life. I had to learn how to see my own value.
But people who wish to heal from trauma, which is a huge part of BPD, can’t only drop their bad habits. They have to pick up new and healthier habits too.
1. I began writing about my struggles instead of stewing over them.
For me, writing has been my real “lifeline.” Not my old definitions of love. And I’m not talking about writing in my journal, or crying out “woe is me.”
Writing for myself and others has made an incredible difference in my mental health. I get to work out some of my demons, and figure out what I really feel, but the fact that I’m not just writing for me forces me to find more distance and clarity.
What do I wish I had known five or ten years ago? What did I really need someone to say? What stories could have changed my entire life?
These are the things I think about when I write. And it’s helped me to finally process my pain. If I didn’t write essays about my life in an effort to help others, I would likely still be stewing over all of my hurt and trauma.
2. I started looking for silver linings.
Again, writing certainly helps. I’ve developed this habit of looking for the good in the worst scenarios, including many of my fears.
These days, I now choose to see adversity and unexpected problems as opportunities for growth and further healing.
It’s not always easy, but it’s not too different from flexing any other creative muscle. Over time, I’ve gotten better at finding silver linings, so even when I face a deep depressive episode, I am eventually able to find my way back to a less biased story. I can now see how my depression always lies and tries to keep me stuck.
3. I met my own needs.
For much of my life, I had this chip on my shoulder about being on my own. It became ingrained in me when I was 18 and my sister went to prison. Our whole family dynamic changed and I found myself increasingly alone.
My mother quit celebrating holidays a few years later when my sister’s children were taken away and sent to live with their paternal grandmother in Missouri. She suddenly started telling me to make my own plans for Thanksgiving and Christmas because she had nothing to celebrate.
I grew very bitter that my mother didn’t seem to think I was family worth keeping. I grew reclusive and resented that I would have to put my own gift under my tree.
As I began to heal, however, I quit whining to myself about being on my own and I started to take pride in meeting my own needs. That was a game changer for me. No more pain over being “forgotten” or alone. Meeting my own needs made me feel strong and much more secure in myself.
4. I stopped reading into every relationship and started to chill out.
There’s this thing that many people do. It’s not just those of us with BPD. People often gravitate to toxic relationships or poison their healthy ones by reading into every little thing.
I’ve learned the hard way that reading into everything somebody says or does is a recipe for disaster. We wind up doing one of two things: we either get led by our fears and worry about everything we over analyze, or we see the person in an unrealistic light.
Seeing only those things you want to see in a relationship is a shortcut to destruction. It’s important to relax enough to be honest not just with them, but with yourself.
Healthy love is chill, yet it doesn’t ignore red flags in favor of a fantasy. But it takes a lot of practice to get there and it’s important to know that old habits do die hard. I have to frequently remind myself to be realistic and relaxed.
5. I leaned into my fear of being alone.
If you’ve ever battled obsessive compulsive disorder (OCD), you have most likely heard of exposure therapy. I never really set out to use exposure therapy for myself in my treatment for BPD. It was more like a happy accident brought about by my circumstances as a single mom.
I used to be so frightened to be romantically alone. Although I’ve always been an introvert who likes to work alone, I believed that I couldn’t be happy without a partner.
The idea of dying alone and spending holidays alone horrified me. Like many others with BPD, I sometimes went to extremes to avoid being abandoned. I also looked for extra attention and unfairly tested my lovers.
The only thing that got me past my fear of being alone was to finally lean into my loneliness and learn how to manage it in a healthier way.
It took years, but I finally discovered that I like being alone. There are perks to going at life on your own terms. It’s not that I never feel sad, alone, or abandoned anymore. But now that I’ve faced my fears of loneliness head on, those feelings no longer rule me.
URL: https://medium.com/honestly-yours/5-habits-i-gave-up-to-begin-healing-from-borderline-personality-disorder-49858c9bed9d
Authors: ['Shannon Ashley']
Timestamp: 2019-12-30 14:36:30.471000+00:00
Tags: ['Life Lessons', 'Mental Health', 'Personal Development', 'Self Improvement', 'Self']
5,101 | Advantages of using NumPy for numerical operations. Speed gains and additional features offered by NumPy.
Advantages of using NumPy over Python Lists
Features and performance gains of using NumPy for numerical operations
In this article, I will show a few neat tricks that come with NumPy and are much faster than vanilla Python code.
Photo by Alex Chambers on Unsplash
Memory usage
The most important gain is the memory usage. This comes in handy when we implement complex algorithms and in research work.
import numpy as np

array = list(range(10**7))
np_array = np.array(array)
I found the following code from a blog. I will be using this code snippet to compute the size of the objects in this article.
get_size(array)    => 370000108 bytes ~ 352.85 MB
get_size(np_array) => 80000160 bytes  ~ 76.29 MB
This is because a NumPy array stores its elements in a single contiguous, fixed-size buffer, while a vanilla Python list stores pointers to individually boxed Python objects and over-allocates so that it can grow.
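The `get_size` helper the author borrowed from a blog is not reproduced in the excerpt. A minimal sketch of such a helper, assuming the usual recursive-`sys.getsizeof` approach (the name matches the article, but the structure here is my own, not necessarily the original):

```python
import sys

def get_size(obj, seen=None):
    """Recursively estimate the memory footprint of an object, in bytes."""
    if seen is None:
        seen = set()
    if id(obj) in seen:  # avoid double-counting shared objects
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(get_size(k, seen) + get_size(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(get_size(item, seen) for item in obj)
    return size
```

For a list this adds the size of every element on top of the list's own pointer array, which is why the list comes out so much larger than the NumPy array.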
Speed
Speed is, in fact, a very important property of a data structure. Why do NumPy operations take much less time than vanilla Python? Let’s have a look at a few examples.
Matrix Multiplication
In this example, we will look at a scenario where we multiply two square matrices.
from time import time
import numpy as np

def matmul(A, B):
    N = len(A)
    product = [[0 for x in range(N)] for y in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                product[i][j] += A[i][k] * B[k][j]
    return product

matrix1 = np.random.rand(1000, 1000)
matrix2 = np.random.rand(1000, 1000)

t = time()
prod = matmul(matrix1, matrix2)
print("Normal", time() - t)

t = time()
np_prod = np.matmul(matrix1, matrix2)
print("Numpy", time() - t)
The observed times are as follows:
Normal 7.604596138000488
Numpy 0.0007512569427490234
We can see that the NumPy implementation is roughly 10,000 times faster. Why? Because NumPy delegates the work to a BLAS (Basic Linear Algebra Subprograms) backend, which applies under-the-hood optimizations such as cache-friendly chunked (blocked) multiplication, and its loops run in compiled, vectorized code rather than the Python interpreter. Hence, it is important to install NumPy properly so that the compiled binaries fit the hardware architecture.
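To convince yourself that the two implementations compute the same thing, a quick sanity check on small matrices can be sketched as follows (the `matmul` here repeats the article’s pure-Python triple loop, shrunk to 4×4 so it runs instantly):

```python
import numpy as np

def matmul(A, B):
    # Pure-Python triple loop, same algorithm as in the article
    N = len(A)
    product = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                product[i][j] += A[i][k] * B[k][j]
    return product

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)

# The two results agree up to floating-point rounding
assert np.allclose(matmul(A, B), np.matmul(A, B))
```

So the speedup comes purely from how the work is executed, not from a different result.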
More Vectorized Operations
Vectorized operations are simply scenarios where we run an operation on an entire array at once, including dot products, transposes, and other matrix operations. Let’s have a look at the following example, where we compute the element-wise product of two vectors.
vec_1 = np.random.rand(5000000)
vec_2 = np.random.rand(5000000)

t = time()
dot = [float(x*y) for x, y in zip(vec_1, vec_2)]
print("Normal", time() - t)
t = time()
np_dot = vec_1 * vec_2
print("Numpy", time() - t)
The timings of each operation are:
Normal 2.0582966804504395
Numpy 0.02198004722595215
We can see that the implementation of NumPy gives a much faster vectorized operation.
Broadcast Operations
NumPy vectorized operations also provide much faster operations on whole arrays. These are called broadcast operations: the scalar is broadcast over the entire array, and the loop runs in compiled code that can use SIMD instructions (such as Intel AVX, where available).
vec = np.random.rand(5000000)

t = time()
mul = [float(x) * 5 for x in vec]
print("Normal", time() - t)
t = time()
np_mul = 5 * vec
print("Numpy", time() - t)
Let’s see how the running times look:
Normal 1.3156049251556396
Numpy 0.01950979232788086
Roughly 70 times faster!
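Broadcasting is not limited to scalars; arrays of compatible shapes are virtually stretched to match each other. A small illustrative sketch (shapes chosen arbitrarily for the example):

```python
import numpy as np

matrix = np.zeros((3, 4))       # shape (3, 4)
row = np.array([1, 2, 3, 4])    # shape (4,)

# The row vector is broadcast across each of the 3 rows of the matrix
result = matrix + row
print(result.shape)             # (3, 4)
print(result[0].tolist())       # [1.0, 2.0, 3.0, 4.0]
```

The same compiled loop handles the whole array, so no explicit Python-level iteration is needed.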
Filtering
Filtering includes scenarios where you only pick a few items from an array, based on a condition. This is integrated into the NumPy indexed access. Let me show you a simple practical example.
X = np.array(DATA)
Y = np.array(LABELS)

Y_red = Y[Y == 'red']  # obtain all Y values labelled 'red'
X_red = X[Y == 'red']  # use the Y == 'red' mask to filter X
Let’s compare this against the vanilla python implementation.
X = np.random.rand(5000000)
Y = np.int64(10 * np.random.rand(5000000))

t = time()
Y_even = [int(y) for y in Y if y%2==0]
X_even = [float(X[i]) for i, y in enumerate(Y) if y%2==0]
print("Normal", time() - t)
t = time()
np_Y_even = Y[Y%2==0]
np_X_even = X[Y%2==0]
print("Numpy", time() - t)
The running times are as follows:
Normal 6.341982841491699
Numpy 0.2538008689880371
This is a pretty handy trick when you want to separate data based on some condition or the label. It is very useful in data analytics and machine learning.
Finally, let’s have a look at np.where which enables you to transform a NumPy array with a condition.
X = np.int64(10 * np.random.rand(5000000))
X_even_or_zeros = np.where(X%2==0, 1, 0)
This returns an array with ones in the positions holding even values and zeros everywhere else.
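As a concrete illustration of the same `np.where` pattern on a tiny, fixed array (values chosen arbitrarily):

```python
import numpy as np

X = np.array([3, 8, 5, 6, 7, 2])

# 1 where the value is even, 0 otherwise
X_even_or_zeros = np.where(X % 2 == 0, 1, 0)
print(X_even_or_zeros.tolist())  # [0, 1, 0, 1, 0, 1]
```

`np.where(condition, a, b)` picks from `a` where the condition holds and from `b` elsewhere, all in one vectorized pass.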
These are a few vital operations, and I hope the read was worth your time. I always use NumPy with huge numeric datasets and find the performance very satisfying. NumPy has really helped the research community stick with Python without dropping down to C/C++ for numeric computation speed. Room for improvement still exists!
Cheers!
URL: https://medium.com/swlh/why-use-numpy-d06c573fbcda
Authors: ['Anuradha Wickramarachchi']
Timestamp: 2020-08-15 05:20:02.962000+00:00
Tags: ['Programming', 'Python', 'Computer Science', 'Data Science', 'Machine Learning']
5,102 | Marketing in Crypto: Crash & Burn or Build & Prosper?
1) Identity
It’s simple: let’s think of crypto community members and newcomers as customers first. Then open CoinMarketCap and count the projects: 1,586 cryptocurrencies are competing for your existing and potential customers. It has been 10 years since Bitcoin was born, and cryptocurrencies have become a more familiar part of our lives. If we consider a cryptocurrency as a product, brand management should have a place in the marketing story.
Identity can be a magic spell. Color palette, typography, imagery, tone of voice, even icon sets and online elements can communicate beyond words. Build your project a “face” to deliver its message.
Take external materials seriously: the brand book, press kit, one-pager, and various media templates. Guidelines for creative materials are necessary to keep the brand image consistent. Do the same for internal documents. Why? Your employees are internal customers and potential brand advocates. Have you ever seen team members standing up for the honor of a project? The goal is to make that happen and keep it constant.
One day, a newcomer brand will be able to sell an image and inspire trust on its own, assuring customers that they hold the right cryptocurrency in their hands (and that they are in the right hands themselves).
Think about shaping the first impression, taking a systematic approach, and even using brand archetypes. Prepare the necessary project documentation and marketing materials.
URL: https://medium.com/hackernoon/marketing-in-crypto-crash-burn-or-build-prosper-f1de5b58afe7
Authors: ['Katoshi']
Timestamp: 2018-08-22 11:00:50.079000+00:00
Tags: ['Bitcoin', 'Marketing', 'Cryptocurrency', 'Marketing In Crypto', 'Marketing Strategies']
5,103 | Review: DRN — Dilated Residual Networks (Image Classification & Semantic Segmentation)
Review: DRN — Dilated Residual Networks (Image Classification & Semantic Segmentation)
Using Dilated Convolution, Improved ResNet, for Image Classification, Image Localization & Semantic Segmentation
In this story, DRN (Dilated Residual Networks), from Princeton University and Intel Labs, is reviewed. After publishing DilatedNet at ICLR 2016 for semantic segmentation, the authors invented DRN, which improves not only semantic segmentation but also image classification, without increasing the model’s depth or complexity. It was published at CVPR 2017 and has over 100 citations. (Sik-Ho Tsang @ Medium)
URL: https://towardsdatascience.com/review-drn-dilated-residual-networks-image-classification-semantic-segmentation-d527e1a8fb5
Authors: ['Sik-Ho Tsang']
Timestamp: 2019-03-20 16:07:37.825000+00:00
Tags: ['Image Classification', 'Deep Learning', 'Artificial Intelligence', 'Semantic Segmentation', 'Machine Learning']
5,104 | Richard Sherman knows content, bro
Screenshot of Richard Sherman’s multimedia hub on The Players’ Tribune.
I’m a Richard Sherman fan. “You Mad Bro?” was my Facebook cover photo for a minute. My best pal, who’s also my #1 rival in all athletic competitions, called Sherman an asshole following his infamous Michael Crabtree interview. I loved it. And my best pal being grumpy, well that was gravy.
Still, I didn’t bother reading Sherman’s weekly column. I get it in my email inbox but I never opened it.
Then the kerfuffles happened with our local Seattle media, and Sherman skipped a press conference. Mainstream media outlets got up in arms, too. This got me thinking. About press conferences.
I’ve attended a few of them, plus I watched The West Wing in the late 1990s, so I know what a press conference is: a bunch of journalists crammed into a room, asking questions of a person (or persons) in front of a microphone. Then those journalists all rush like mad to take the exact same comments from the exact same person(s) and crank out a story that’s … unique.
Except it often doesn’t work out that way. Instead, press conferences frequently yield the same story, one that gets published many times across many different media outlets. Also, press conferences are boring, leaving reporters desperate to glean a moment — any moment — that rises above the mundane. Ironically, when that “moment” occurs, frequently it involves an athlete being irritated by a press conference — like Cam after the Super Bowl; or Iverson following the death of his close friend.
I wasn’t mad (bro) when Sherman skipped his press conference. This isn’t the late 1990s, and media has changed a bit since The West Wing. Heck, even presidential candidates and presidents-elect rarely do press conferences anymore.
Then, Sherman gave one of my favorite quotes since “You mad bro?” when he Tweeted this:
If I have something to say then I will write it myself.
He’s right. Athletes are their own storytellers. This is old news — social media and platforms like The Players’ Tribune have been around for a minute, giving athletes the ability to bring their content directly to the public. When Kobe announced his retirement, he didn’t hold a press conference. He published a personal note.
Sherman is taking the athlete storyteller to the next level — the athlete as journalist. He Tweeted as much when he said:
I understand that I can write my own story
Right again. Sherman is smart and articulate. He has a platform, The Players’ Tribune, with millions of readers. He has millions of social media followers. That combination of eloquence plus audience scares some journalists. Because no one can get the inside story on an athlete like an athlete himself (or herself).
I went back and read several of Sherman’s columns. They are good. Some of his columns are really good, like when he takes the NFL to task. Sherman calls out the NFL for throwing a flag on Antonio Brown for “sexually suggestive” twerking, while cheerleaders gyrate through similar moves every week. He notes the double standard of a penalty on Josh Norman for “shooting” a bow and arrow, while Norman’s team — the Redskins — won’t change its name despite offending many people.
Sherman also provides a nuanced takedown of the “poopfest” that is Thursday night football, with this classic quote:
I’d like to put Roger Goodell in pads for a late game on a Sunday, in December, in Green Bay, on the frozen tundra — then see what time he gets to the office on Monday morning, knowing that he would have to suit up again on Thursday.
Hell, yes.
I read this recent article which says that first-person athlete stories are heavily edited:
Athletes — who have dedicated their lives to their sport of choice, not writing — often rely on behind-the-scenes help when it comes to public communications. If you see your favorite athlete or celebrity’s byline on a surprisingly eloquent piece, well, there’s probably a reason why it’s so well-written.
Good writing and good editing go hand in hand. That’s why reputable media outlets employ copy editors. Writing is a craft that one hones over time, just like any skill. I don’t know how much editing goes into Sherman’s columns, but his spoken words are similar to his written ones — clear and concise.
The NFL would do well to empower more of its athletes to be storytellers. NFL ratings are down this year, which suggests that people are growing tired of the standard storylines. Embracing new content platforms isn’t enough. The league should embrace new content creators. And some of the most interesting creators are the people who actually play the games.
One straightforward way to inject more life into NFL storytelling is: Allow players to post to social media during the games. Yes, I said it. Let them Tweet, Snap, Insta, etc. while a game is happening. It won’t ruin the games — the teams and players will police themselves against Tweeting from the huddle or Snapping from the bottom of a pile. But Instagramming one’s self after a touchdown, that would be some cool content. Imagine a selfie of Zeke from inside the Salvation Army kettle.
Players and coaches already give in-game interviews. And those are extremely boring. So why not change it up a little?
The press conference is a dinosaur. By now, we all know that the only reason a player shows up to a press conference is…say it with me…“So I won’t get fined.”
I’m not mad (bro) at Sherman for skipping a press conference. I’m not mad, either, at a reporter who asks an athlete tough questions. There will always be a push-pull between athletes and reporters over telling meaningful stories. That’s because many good stories require a level of exposure that’s not always comfortable.
Technology has lowered the barrier for content creation, empowering more people than ever to be their own publisher. Which means, among other things, that high-quality storytelling is more precious today than ever before. Those who excel at storytelling will find their audience, without needing to hold a press conference.
|
https://medium.com/loseatfantasy/richard-sherman-knows-content-bro-5b7b4b3205cf
|
['Mike Harms']
|
2017-02-07 03:07:20.474000+00:00
|
['Seahawks', 'Richard Sherman', 'Content Marketing', 'NFL', 'Journalism']
|
5,105 |
8 Reasons You Should Join A Local Writer’s Group
|
Photo by Dylan Gillis on Unsplash
Five years ago, at the dinner table, my father told my sister and me that he found an interesting article in the newspaper. (Yes, the newspaper.)
There was an advertisement for a writer’s group at my local library. My dad doesn’t know exactly what a writer’s group is, but because my sister and I write, he thought it might be useful.
So, we went to the initial meeting. Five years later, we’re still part of the same group meeting once a month at our library.
My sister and I are the only “veterans” of the group. The group has changed a lot over the years. People have come and gone (including our original group facilitator). However, including my sister and me, there are five steady members, and we get new interest from people every month. It’s still effective and fun.
I sometimes wonder how it’s already been five years. We skip a month here and there (our group usually takes December off due to the holidays) but it’s weird to me how we’ve been doing this regularly for five straight years.
Honestly, when I first joined the group, I wondered how long it would last. It seemed too good to be true. I’m not complaining though.
I’ve learned a lot in these past five years and I’m looking forward to more. Joining a local writer’s group was one of the best decisions for a few reasons.
1. Writing and editing skills
This is a given. When joining a writing group, you’re going to be writing and critiquing. You write your own piece, critique others, and then improve upon your own piece again. Even though it’s a little bit each month, it’s still something. You’re routinely working on your own writing.
2. Time
Speaking of routine, writers excuse their lack of writing all the time by saying they “have no time.” Joining a writing group that meets once a month or once every two weeks allows you to carve out writing time no matter what.
You need to write your next chapter or piece to submit to your group. Then you have to find the time to read and critique their work. Then you meet in person to discuss your piece and others. It gives your particular work one-on-one attention from you (and others).
Then, you rinse and repeat.
3. Motivation and inspiration
Speaking of time, it can be hard to be motivated to write. I’m sure we all procrastinate in some shape or form and I’m sure writing is no different.
Seeing work from fellow writers motivates you to keep going with your own. Not to mention, you have a deadline to meet. You need to submit your piece to your group by a certain time so everyone has ample time to read and critique it.
On the flip side, the group gives you the inspiration to keep going. If you’re stuck on a certain part of your novel, you can ask your group what they think could or should happen next. You may not even have to ask. If your group is anything like mine, they’ll let their imagination run wild and interpret certain pieces together right in front of you.
Sometimes they’re right, sometimes they're wrong. Other times, they give you a new idea you can expand upon.
4. Self-confidence and thick skin
It’s not easy to share your writing with others. Personally, I find it easier to share my work with strangers than with close family and friends. The members of your writing group will become your friends, but in the beginning, sharing your work will increase your self-confidence. As writers, we all hate sharing our work in the beginning because it never seems to be good enough.
On the flip side, it’ll toughen you up as well. You’ll gain the self-confidence to share your work and the thick skin to take any and all critique — positive or negative.
6. Promotion
Do you have a blog or an author website? Do you share your writing online? Have you gotten a piece published in a magazine or an anthology? Maybe you got a book deal?
Your group will be proud of you and help promote your work through their own social media. Just as you would help promote their work, they’ll do the same for you. Who doesn’t want to brag that they know an author and helped get their book off the ground?
7. Connections
People come from everywhere. They have different jobs, different skills, and have different experiences. They know different people outside of the group. If you need to research a specific job, for example, someone in the group may work in that field or know someone who does. Maybe they can set up a date for you to interview them for research purposes.
Books and the internet are great, but it’s wonderful to talk to a real person who has hands-on experience.
Not to mention, writers typically have a job that pertains to writing. Maybe someone has connections on how you can market your book. You never know if you don’t ask.
8. Socialization
As writers, we’re not great at socializing. We don’t like to talk to people or go out and about. Being part of a writing group allows you to make some new friends. Aside from writing, you’ll find other things you have in common with them. Not only will you get out of the house for your group meeting, but you may get together at other times too.
I now meet with members of my writing group twice a month — once for the writer’s group and once for Dungeons & Dragons.
Overall
Joining a writer’s group was one of the best decisions I’ve made. I’ve learned a lot with my writing, I’m consistent with it, and I’ve made some great friends.
Check with your local library or other nearby libraries or bookstores and see if there’s a writing group you can join. I promise you won’t regret it.
Happy writing!
|
https://medium.com/swlh/8-reasons-you-should-join-a-local-writers-group-109104bb28ba
|
['Rachel Poli']
|
2020-06-15 13:51:09.136000+00:00
|
['Creative Writing', 'Writers On Writing', 'Writing', 'Writer', 'Writing Tips']
|
5,106 |
Color Your Life With Some Birds
|
Let’s take a walk
I take a bus and get off where the city vanishes away, and the pastoral serenity begins to show its freshness. I hit the muddy, dusty roads and gradually enter into a magical world.
No one is with me. It’s just me with myself.
The first thing I notice is the air — so fresh and full of life. I take a few deep breaths and feel relaxed. I deliberately walk at a steady pace because I don’t want to miss a thing on my way.
It’s just 11:00 a.m., and I have a whole day ahead of me. The sun is up there, bathing everything in its soft glow, and the blue sky is clear and cloudless. A gentle breeze is whispering to the leaves about something, and the leaves are nodding their heads.
I walk slowly — and gently observe what’s going on around me. On my right, there is an ocean of green, yellow, and white. On my left, a tiny river, like a lifeline in the wilderness. I see some herons enjoying a flight over the river. Their shadows reflect on the water, doubling each of them.
I feel tremendous joy as I love the herons most. When they wait gently in the riverside to catch fish with their long legs and necks, they look like saints to me. Catching fish is like their way of having meditation. I love them — especially the white ones.
I take my time to watch them crossing the river. Then I dive deep into the green ocean of mystery. I am looking for birds. I approach carefully, not making any sound. After a few minutes, I encounter a few bee-eaters. I see them every time I go on a bird-watching adventure.
I drag myself close to the bee-eaters to capture the moment, but they somehow read my mind and fly away to another place.
Bee-eater. Photo captured by the author, Bokchar, Dhaka, 2019
I see a few more at a distance. This time, I bend my knees and nearly stop my breathing. I move forward more cautiously, like a cat, and take my position for a snap. I see a bee-eater basking in the sun. I take a few shots quickly and silently observe what it does.
After a while, more bee-eaters come and start hunting some insects. They make noise with their wings, talk to each other, and finally go in different directions in search of food maybe. Oh, I love this beautiful green bird. And I love its dark, nearly invisible eyes.
Spending some time there, I walk ahead — I come across a field, and at the end of that, I see a few big trees.
Green ocean. Bokchar, Dhaka, 2019. Photo captured by the author.
Wait, a sharp sound is coming from there. It must be a woodpecker. I cross the field and go in the direction of that sharp sound. I keep my eyes on the tree-tops. I see a few big holes in some trees and become sure that there must be some woodpeckers.
I move forward with the camera ready in my hand. I hear the sound again. Where is it? I try to concentrate. Yes, I feel something is happening. I can sense it. I move my head to my right and see some leaves moving.
Yes, finally, I spot the lovely bird. It’s trying to find a suitable tree to make a new home or maybe in search of some tasty worm to have a feast.
The woodpecker. Bokchar, Dhaka, 2019. Photo captured by the author.
I take a few good photos with both my camera and my eyes. I see some finch and a few sparrows there as well.
I continue my journey to the unknown and go where my eyes take me. I walk and walk and keep my eyes open to observe the wilderness.
I spend some time in front of an agri-farm and hear the sound of growing vegetables — fresh and enchanting. I see different shades of green everywhere and the hide and seek games of sun and shadow. I hear the music of tranquility.
Now, it is midday, and the sun is getting warmer. I find a small village-market on my way and take a break. I sprinkle water on my face, wash my hands, and then eat my lunch. At this point, I talk to locals — a little chitchat to know where I am and where more birds are waiting for me.
I come to know that a few miles away there is a nearly dead river, and on its bank, birds gather in great numbers. So I take a new route, and a few birds later, I find the river. It lost its youth long ago. Now it is like a narrow canal that I must cross by boat.
Horse in the field. Bokchar, Dhaka, 2019. Photo captured by the author.
I see an unfinished bridge there. Maybe the bridge killed the river; I don’t know the reason behind its untimely death. I cross the narrow river and land on a field full of yellow.
A horse grazing nearby catches my attention. I see the open field where birds are flying with a yellow backdrop. Oh, it’s love at first sight.
Let’s spend some time sitting on the riverbank watching some kingfishers and cormorants. They are the real hunters — swift and sharp.
Kingfisher is truly the king of fishing. The way it hunts is spectacular to watch. Sitting on the grass, I see them hunting fish and eating right away.
Ah, it seems an easy way to live a life. But I reckon they have their own society and competition, just like ours. I have once seen, in a lake near my house, seven herons competing for a particular fishing spot.
The hunting is on. Bokchar, Dhaka, 2019. Photo captured by the author.
I am convinced that having the ability to fly is no guarantee to achieve freedom. Birds have their community with a set of rules. Maybe, they have their own kind of hierarchy.
Do they have democracy? Or are they still in debate whether communism is better for them or not? I don’t know — but I am sure they have found peace, balance, and harmony.
Anyway, I end my bird-philosophy and see there are lots of doves walking through the fields. It’s hard to identify them on the ground as their colors are quite similar to earth.
Doves are very shy. When I come close to them, they fly away immediately. But the field is full of them. So, I capture some of their beautiful, decorative body.
Innocent Dove. Bokchar, Dhaka, 2019. Photo captured by the author.
I see a few thatched houses at a distance but no one in the field this afternoon. I am the only man here wandering for some birds. And it’s a different experience to walk absolutely alone in nature like this. It’s lovely and a little intimidating too.
Numerous little birds are with me all the time. They are spinning around to make me feel comfortable. When they are in the air, anyone may confuse them with bullets.
One moment they are near you, but in a second, they go somewhere else. Then again, return as a flock.
They inspect the entire field like a police battalion. And the insect-criminals have nowhere to hide from their eyes. They eat the bees from the mustard-plants. Then go for an aerobatic display up above.
I spend the rest of the day in this open field full of life. I breathe the green, yellow, and the entire experience. The experience — words often fail to contain.
I come to understand that I must take a break, explore nature, and spend time more with birds to have a deeper understanding of the world I live in.
After a day with birds, I feel that a day is well-spent, and I need to make more days like this.
|
https://medium.com/the-masterpiece/color-your-life-with-some-birds-ffae4fddde2
|
['S M Mamunur Rahman']
|
2020-12-28 12:41:55.530000+00:00
|
['Travel', 'Nature', 'The Masterpiece', 'Environment', 'Birds']
|
5,107 |
Web Scraping with Selenium IDE
|
What is Selenium?
Although Selenium is incredibly helpful with web scraping, this is not its explicit purpose; it’s actually a framework for testing web applications. It accomplishes this through two components, WebDriver and Selenium IDE.
The WebDriver accepts commands via programming code (in a variety of languages) and launches this code in your default web browser. Once the browser is launched, WebDriver will automate the commands, based on the scripted code, and simulate all possible user interactions with the page, including scrolling, clicking, and typing.
The Selenium IDE (integrated development environment) is a browser extension for Firefox and Chrome that allows you to record your interactions with a web page and edit those interactions to further customize your test. Via Selenium’s API, you can actually export the underlying code to a Python script, which can later be used in your Jupyter Notebook or text editor of choice.
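To give a sense of what that export looks like, here is a minimal sketch in the spirit of an IDE-exported Python script. The URL, selector, and driver setup are illustrative placeholders, not output from an actual recording; the imports are deferred into the function so the sketch can be read (and loaded) without a browser or the Selenium package installed.

```python
# Sketch of a Selenium script similar in spirit to the IDE's Python export.
# The URL and selector below are hypothetical placeholders.
def run_recorded_test():
    # Imports are deferred so this module loads even without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # launches the browser that WebDriver controls
    try:
        driver.get("https://example.com")  # the base URL chosen when recording
        # Replay a recorded interaction: locate an element and read its text
        element = driver.find_element(By.CSS_SELECTOR, "h1")
        print(element.text)
    finally:
        driver.quit()  # always close the browser, even if a step fails
```

From here, the same function can be called from a Jupyter Notebook or folded into a larger scraping job.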
How Selenium Helps with Web Scraping
As websites get more complex, simple scraping techniques and libraries (such as Beautiful Soup) might run into the following obstacles:
JavaScript or CSS that obscures or transforms the elements.
Username and password authentication requirements that hide data on a web page until a login is fulfilled.
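The first obstacle can be made concrete without a browser. In this small standard-library illustration (the HTML is made up), a static parser sees only the empty placeholder that JavaScript would later fill, so the data simply is not there to scrape:

```python
from html.parser import HTMLParser

# Hypothetical page source as a plain HTTP client would see it: the price
# is injected later by JavaScript, so the container is empty in the raw HTML.
RAW_HTML = """
<html><body>
  <div id="prices"></div>
  <script>document.getElementById('prices').innerHTML = '<span>$9.99</span>';</script>
</body></html>
"""

class TextCollector(HTMLParser):
    """Collects any text found inside the #prices container."""
    def __init__(self):
        super().__init__()
        self.in_prices = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and dict(attrs).get("id") == "prices":
            self.in_prices = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_prices = False

    def handle_data(self, data):
        if self.in_prices and data.strip():
            self.texts.append(data.strip())

parser = TextCollector()
parser.feed(RAW_HTML)
print(parser.texts)  # [] — the price never appears in the static source
```

A browser-driven tool like Selenium executes the script first, so it would see the rendered `$9.99` that the static parser misses.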
This is where the Selenium IDE shines. As it mimics a user and interacts with the front-facing elements of a web page, it can easily identify the necessary JavaScript or CSS code and provide the proper path to get you what you need. If a login pop-up box arrives, Selenium IDE can type in your credentials and move the process along.
The IDE can even assist you on your easier scraping tasks, by providing you the tags to any location you click on a page. Notice how tags is plural there? The IDE will give you a list of all possible tags for that link, providing you with multiple angles to attack your web scraping task.
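The idea that one element can be targeted from several angles can also be illustrated without a browser. Here, standard-library `ElementTree` stands in for the page, and the markup is a made-up snippet; each query is a different "path" to the same node, much like the alternative targets the IDE lists:

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet of a page: the same link can be located several ways,
# analogous to the IDE's list of alternative targets for a clicked element.
DOC = ET.fromstring(
    '<div><nav><a id="signup" class="btn" href="/signup">Sign up</a></nav></div>'
)

# Three angles of attack on the same element:
by_path = DOC.find("./nav/a")             # structural path
by_id   = DOC.find(".//a[@id='signup']")  # unique id attribute
by_cls  = DOC.find(".//a[@class='btn']")  # class attribute

assert by_path is by_id is by_cls         # all three hit the same node
print(by_path.get("href"))  # /signup
```

Having several working locators is practical insurance: if a page redesign breaks one path, another may still match.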
Getting Started with the IDE
Coders of any skill level can get Selenium up and running, by simply starting with the IDE!
To use the IDE, you will need the extension with either Chrome or Firefox.
Selenium IDE on the Chrome Web Store
Selenium IDE on Firefox
Once you have the extension, open the IDE and select “Record a new test in a new project.”
Select a base URL for your project in the next pop-up, and click “start recording.” This will open a new web browser and the IDE, which will track all of your actions. Click, scroll, type, and interact with the webpage in the manner that you choose. Note: it is beneficial to click on the elements you want to scrape. We will cover this in depth later. When finished, click the stop button in the upper right-hand corner of the IDE, and you are done.
Selenium IDE Features
When you are finished, your output should look something like this:
|
https://medium.com/swlh/web-scraping-with-selenium-ide-4c16cea8329d
|
['Jeremy Opacich']
|
2019-08-19 07:23:30.788000+00:00
|
['Web Scraping', 'Selenium', 'Python', 'Beautifulsoup', 'Selenium Ide']
|
5,108 |
Why You Should Send Your Next Email Newsletter to Fewer People
|
Why You Should Send Your Next Email Newsletter to Fewer People
And the exact steps to remove those who won’t open anyway
Photo by Matthew Fournier on Unsplash
Grow an email list.
That’s common advice here on Medium, and it’s good advice for a reason.
Email is a direct and easy way to communicate with your audience.
I see a lot of email campaigns in my day job working for one of the largest email service providers. Our users send billions of emails every month.
The biggest mistake I see our users making with their email campaigns is also one of the easiest to fix. Read on for a step-by-step guide on how to fix it.
You’re spamming people
Some of your subscribers are no longer interested in your emails. Maybe they’ve maximized all of the value you can provide them. Perhaps their interests and priorities have changed, or they’ve moved on to a new hobby.
That’s ok, you can’t please everyone. Desperately holding onto uninterested subscribers hurts your newsletter.
Inbox providers like Gmail and Yahoo care a lot about how users interact with your emails. It’s in their best interest to protect users from spam, and they’re ruthless about it. They’ve built complicated algorithms to detect spam and make sure it never reaches the inbox.
Gmail now prompts users to unsubscribe from emails they haven’t opened recently.
Image source: Mailjet
All that hard work building your subscriber list goes right out the window if the emails never make it to the inbox.
Respect your subscribers
On average, 111 billion commercial emails are sent every day. Your newsletter is just one of the hundreds your subscribers receive.
You can easily tell which subscribers are still interested in your emails based on their engagement. Here are some positive engagement signals in the eyes of Gmail and Yahoo:
Opening your email
Clicking a link from your email
Replying
Forwarding your email to a friend
The average open rate is about 21%, and the average click rate is about 2.6%, according to Mailchimp.
Take a look at your most recent campaign and see how you compare.
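Comparing your campaign against those averages is simple arithmetic. A quick sketch (the campaign numbers below are made up for illustration):

```python
# Compare a campaign's engagement against the Mailchimp-reported averages.
# delivered/opens/clicks are hypothetical numbers, not real campaign data.
delivered = 2000
opens = 380
clicks = 48

open_rate = opens / delivered * 100    # percent of delivered emails opened
click_rate = clicks / delivered * 100  # percent of delivered emails clicked

print(f"open rate: {open_rate:.1f}% (average ~21%)")
print(f"click rate: {click_rate:.1f}% (average ~2.6%)")
```

If both numbers sit well below the averages, that is a strong hint your list contains a lot of unengaged subscribers.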
Your secret weapon
Remove unengaged users from your email list.
It may seem heartbreaking to remove someone from your email list. You worked hard for that subscriber! Why take them out of your email list?
Inbox providers have a very low tolerance for spam. A spam rate below 0.1% is considered normal.
That means for every 1000 emails you send, only one can be marked as spam. Anything above that is too high in their eyes.
Once you get flagged with a high spam rate it’s very difficult to fix your reputation, so avoid it at all costs.
My suggestion: remove users who haven’t opened your emails in 3 months.
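The "haven't opened in 3 months" rule is easy to apply to any export of your list. A minimal sketch, assuming hypothetical subscriber records of (email, last-open date); the names here are illustrative and not part of any provider's API:

```python
from datetime import date, timedelta

# Hypothetical subscriber records: (email, date of last open).
# None means the subscriber has never opened an email.
subscribers = [
    ("alice@example.com", date(2020, 1, 15)),
    ("bob@example.com",   date(2019, 9, 1)),
    ("carol@example.com", None),
]

def unengaged(subs, today, months=3):
    """Return emails with no open in roughly the last `months` months."""
    cutoff = today - timedelta(days=30 * months)  # rough month approximation
    return [email for email, last_open in subs
            if last_open is None or last_open < cutoff]

to_archive = unengaged(subscribers, today=date(2020, 1, 30))
print(to_archive)  # ['bob@example.com', 'carol@example.com']
```

Anyone the rule catches is a candidate for archiving, which is exactly what the Mailchimp segment described below does for you automatically.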
Here’s exactly how to do that
Below is a 7-step guide for archiving unengaged users from Mailchimp. If you use another provider like Sendgrid, SendinBlue, or Mailjet, those links will take you to their specific guides.
1. Log in to Mailchimp and navigate to Audience at the top.
2. Select the audience you want to modify, then click View Contacts on the right.
3. Click Manage contacts above your contact list, then select Segments.
4. Create a new segment for Campaign Activity that matches anyone who did not open all campaigns in the last 3 months.
5. Click Preview Segment, then hit the arrow at the top of the list next to Email Address, and select all.
6. Click Actions, then Remove contacts. Don’t worry, Mailchimp will ask you to confirm on the next page.
7. Click Archive. You’ll actually be archiving them instead of deleting them: this maintains their stats but prevents them from receiving your emails again. Later, you can build a re-engagement campaign to get them interested again!
That’s it! I included a screenshot below of what your segment should look like.
Your segment should look like this. Screenshot by the author.
What do you have to lose?
Test it out and see what happens.
Set a reminder to periodically clean out your email list. Your unengaged users will thank you, and inbox providers will look at you more favorably too.
You just increased the chance of your future emails landing in the inbox.
|
https://medium.com/better-marketing/why-you-should-send-your-next-email-newsletter-to-fewer-people-d79eb2601dde
|
['Nick Lafferty']
|
2020-01-30 01:23:43.493000+00:00
|
['Email', 'Newsletter', 'Marketing', 'Email Marketing', 'Technology']
|
Title Send Next Email Newsletter Fewer PeopleContent Send Next Email Newsletter Fewer People exact step remove won’t open anyway Photo Matthew Fournier Unsplash Grow email list That’s common advice Medium it’s good advice reason Email direct easy way communicate audience see lot email campaign day job working one largest email service provider user send billion email every month biggest mistake see user making email campaign also one easiest fix Read stepbystep guide fix You’re spamming people subscriber longer interested email Maybe they’ve maximized value provide Perhaps interest priority changed they’ve moved new hobby That’s ok can’t please everyone Desperately holding onto uninterested subscriber hurt newsletter Inbox provider like Gmail Yahoo care lot user interact email It’s best interest protect user spam they’re ruthless They’ve built complicated algorithm detect spam make sure never reach inbox Gmail prompt user unsubscribe email haven’t opened recently Image source Mailjet hard work building subscriber list go right window email never make inbox Respect subscriber average 111 billion commercial email sent every day newsletter one email hundred subscriber receive every day easily tell subscriber still interested email based engagement positive engagement signal eye Gmail Yahoo Opening email Clicking link email Replying Forwarding email friend average open rate 21 average click rate 26 according Mailchimp Take look recent campaign see compare secret weapon Remove unengaged user email list may seem heartbreaking remove someone email list worked hard subscriber take email list Inbox provider low tolerance spam spam rate 01 considered normal mean every 1000 email send one marked spam Anything high eye get flagged high spam rate it’s difficult fix reputation avoid cost suggestion remove user haven’t opened email 3 month Here’s exactly 7step guide archiving unengaged user Mailchimp use another provider like Sendgrid SendinBlue Mailjet link take specific guide 
Login Mailchimp navigate Audience top Select audience want modify click View Contacts right Next click Manage contact contact list select Segments Create new segment Campaign Activity match anyone open campaign last 3 month Click Preview Segment hit arrow top list next Email Address select Click Actions Remove contact Don’t worry Mailchimp ask confirm next page You’ll actually archiving instead Click Archive maintains stats prevents receiving email Later build reengagement campaign get interested That’s included screenshot segment look like segment look like Screenshot author lose Test see happens Set reminder periodically clean email list unengaged user thank inbox provider look favorably increased chance future email landing inboxTags Email Newsletter Marketing Email Marketing Technology
|
5,109 |
Changes That’ll Make a Big Difference With Your Learning Programming For Kids : Any Kid Can Code
|
Here, we are going to use a library named turtle. Here’s how:
import turtle
Then type the magic keywords and give your turtle a suitable name; I chose “jumper”.
jumper = turtle.Pen() [note the CAPITAL P: turtle.Pen() creates a turtle you can steer, while the lowercase turtle.pen() only returns the current pen settings]
And suddenly we have a new window open, our own techie playground.
Our Play area to make innovative figures
We have our own turtle
Most of you will want to see an actual turtle, so let’s give this arrow the shape of a turtle (refer to the image above).
jumper.shape(“turtle”) or turtle.shape(“turtle”)
Now you are ready to move the turtle. Wherever it moves, a line is drawn, so you can create different images.
For a basic understanding of the screen, we should know the X and Y axes. The X axis is the horizontal line and the Y axis is the vertical line. The coordinates (0,0), where X is 0 and Y is 0, mark the position where our turtle is now. Try to grasp it. If not, don’t worry: we have commands to move the turtle around the playground.
2-dimensional chart
Basic commands to play with Turtle
forward: to move forward in the direction of its mouth
left: to change direction. A bit tricky; you need some knowledge of angles.
up: lifts the pen, so moving won’t draw any line until you bring the pen back down.
down: puts the pen back down; otherwise your turtle is a flying turtle.
shape, width, goto, color are the other commands to play with.
If you want to go deeper, follow the link below:
https://docs.python.org/3.1/library/turtle.html
Let’s make use of the above commands to draw a quadrilateral (square or rectangle).
I ran the commands below in sequence, and our first magic figure is ready:
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
jumper.left(90)
jumper.forward(100)
our first creation: Square
jumper.left(90) turns your turtle 90 degrees to the left. And if you want to be creative, change 90 to some other number and see the magic: you will have magical figures turning up.
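You don’t even need the turtle window to check the geometry of the square: forward and left can be modeled with a little trigonometry. SimTurtle below is a made-up stand-in for illustration, not part of the turtle module:

```python
import math

# SimTurtle is a hypothetical, screen-free model of a turtle: it tracks
# position and heading using basic trigonometry instead of drawing.
class SimTurtle:
    def __init__(self):
        self.x, self.y = 0.0, 0.0   # start at coordinates (0, 0)
        self.heading = 0.0          # facing right, in degrees

    def forward(self, dist):
        # Move `dist` units in the direction of the current heading.
        self.x += dist * math.cos(math.radians(self.heading))
        self.y += dist * math.sin(math.radians(self.heading))

    def left(self, angle):
        # Rotate counter-clockwise by `angle` degrees.
        self.heading = (self.heading + angle) % 360

jumper = SimTurtle()
for _ in range(4):       # four sides, four 90-degree left turns
    jumper.forward(100)
    jumper.left(90)

# After a full square the turtle is back at the origin, facing right again.
print(round(jumper.x), round(jumper.y), jumper.heading)
```

This is exactly why the on-screen turtle ends where it started: four 90-degree turns add up to a full 360.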
One important tip: when you type commands, use the up arrow key. It brings back previous commands so you don’t have to retype them and strain your fingers.
The magic to mastering this is practice, practice, practice. Try to make 5–10 different figures, and challenge yourself by filling those figures with color. It is pretty simple. In upcoming posts we will learn new programming concepts like conditions, loops and their types, variables and their usage, and more.
Name your turtle and move as you want. Code and don’t forget to have fun! Included Links below for further Learnings!
|
https://medium.com/swlh/simplistically-easy-any-kid-can-code-2294919a34e
|
['Laxman Singh']
|
2020-12-22 09:44:36.454000+00:00
|
['Kids Programming', 'Python', 'Kids', 'Technology', 'Kids And Tech']
|
Title Changes That’ll Make Big Difference Learning Programming Kids Kid CodeContent going use library named turtle import turtle type magical keywords give suitable name turtle choose “jumper” jumper turtlepen see case use turtlePen CAPITAL P suddenly new window open called techie playground Play area make innovative figure turtle want see turtle let give arrow shape turtle Refer image jumpershape“turtle” turtleshape“turtle” ready move turtle move line drawn Thus create different image basic understanding screen know XY axis X axis horizontal line axis vertical line Coordinates 00 X 0 0 position turtle Try Grasp don’t worry command move turtle playground 2 dimensioal chart Basic command play Turtle forward move forward direction mouth left change direction Bit tricky knowledge angle move wont make line till bring bring otherwise turtle flying turtle shape width goto color command play want go deeper please go link take lead httpsdocspythonorg31libraryturtlehtml Lets make use command make quadrilateral square rectangle ran command sequence go first magic figure ready jumperforward100 jumperleft90 jumperforward100 jumperleft90 jumperforward100 jumperleft90 jumperforward100 first creation Square jumperleft90 change direction turtle left upto 90 degree want creative change 90 number see magic magical figure turning one important thing mention type command try use arrow key bring previous command need type give pain finger Magic master practice practice try make 5–10 different figure challenge putting color inside figure pretty simple coming next blog learn new thing programming like condition loop type variable usage etc Name turtle move want Code don’t forget fun Included Links LearningsTags Kids Programming Python Kids Technology Kids Tech
|
5,110 |
Addiction
|
Photo by Christopher Burns on Unsplash
Addiction
A poem
I see the monster with big pink lips and seven black eyes all lined with ink. I press my finger into his soft round skull, tap gently on his sharp white teeth. Wet your hands in his spittle. Let him leap at your face and rip into your arms. Let him tear the red flesh from your bones. He is here for your Budweiser. He has come for your weed. He bites your cans and pierces their guts, then sucks out their juices and foam. He stands on your bones and spits into rubber. He chews on your tight, sticky buds, then washes his mouth with your vodka and wine. Hide your pills. Tuck them neatly. Place them high on the shelf. Cover your arms, drape your shoulders, this monster has come for your Budweiser and weed, your children, your husband, your family, your wife.
|
https://jzkrebsbach.medium.com/addiction-fb5800862145
|
['Jessica Zeek Krebsbach']
|
2020-01-02 03:16:43.611000+00:00
|
['Addiction', 'Partying', 'Mental Health', 'Family', 'Poetry']
|
Title AddictionContent Photo Christopher Burns Unsplash Addiction poem see monster big pink lip seven black eye lined ink press finger soft round skull tap gently sharp white teeth Wet hand spittle Let leap face rip arm Let tear red flesh bone Budweiser come weed bite can pierce gut suck juice foam stand bone spit rubber chew tight sticky bud wash mouth vodka wine Hide pill Tuck neatly Place high shelf Cover arm drape shoulder monster come Budweiser weed child husband family wifeTags Addiction Partying Mental Health Family Poetry
|
5,111 |
When to Fire Your Therapist
|
Have you kissed any good frogs lately?
Photo by Nik Shuliahin on Unsplash
I would never make it as a Scientologist, because I am a true believer in therapy. I think counseling is something almost everyone could benefit from. For those of us with rougher personal histories it’s mandatory. PTSD is the №1 threat to my life. Aside from the suicidal tendencies, PTSD ravages my physical health.
According to The American Institute of Stress:
Researchers reported that the number and severity of PTSD symptoms were significantly associated with deaths due to coronary heart disease as well as non-fatal heart attacks.
Unfortunately I have to fire my new therapist.
One of the saddest things I hear people say: “I went to therapy. It didn’t help.” That always comes down to one of two things: either the patient really didn’t want to do the work, or the therapist failed in their role — which many of them do.
So I’m here with some pro tips I’ve picked up since beginning therapy in the late 1980s. You can learn from my mistakes, work smarter at your wellness, and get better results. Self-care is the gift that keeps on giving.
Therapy: The emotional gym
The last person who told me therapy didn’t help her, when pressed, said she had only gone to two sessions. People often have unrealistic expectations for how therapy should work, what kind of benefits they can expect.
The issues we address in therapy have usually built up over a lifetime. I’ve found that a good therapist helps people learn how to balance and navigate from the lives they have to the lives they want. The therapist helps people find the healthier, happier, more balanced version of themselves. That’s why I go.
Using the gym analogy, I’ve recently lost over 120 pounds. For many years my weight and mental health both seemed insurmountable, intertwined and completely demoralizing. However in hindsight it has only taken me a year to lose about 35 percent of my body weight. Because I was emotionally ready, like I had hit bottom, it has been much quicker to correct the problem than to create it — which took decades of imbalance.
When I started at the gym I was 350 pounds. It took a good six months before I saw any difference except on the scale. But there is no scale for your therapy progress except the happiness in your life. What works for me is to trust the process and keep doing tiny amounts of it constantly.
The imposition of an infrastructure — a sustainable, reality-based understanding of my physical needs and how to consistently approach them — was what I needed to lose weight all along.
And that’s what I want from a therapist, too: a competent person to provide healthy guardrails for my life. People like me almost never graduate high school. Anyone who was able to become a therapist by definition had a better life with significantly more social support than I did.
Really excellent therapists enable me to leverage their stable childhoods by relating to them. They show me how a person with undamaged self-respect and healthy attachment would respond, often indirectly. A strong therapist is an ultimate role model.
For me a therapist is like a professional parent.
Key word there is professional: they’re not your actual parent. But they’re trained to behave like a highly appropriate, high-functioning parent.
At least that’s how I think of it, since neither of my parents was appropriate. Therapists give me a model of what it could look like without all of my parents’ poop injected into my thought process, a healthier way to reframe my life than I could imagine. Therapists don’t share your history, and thus can create a wonderfully neutral space.
I learned what good parenting is supposed to look like in therapy. A psychologist was the first person to tell me it’s not appropriate for parents to lean on their children for emotional support. I was horrified at the suggestion that I should stop being my mom’s #1 emotional supporter. It took me years of therapy to get it in my bones, which I now do.
After discussing these ideas many times with many therapists over the years, here’s what I believe a good parent thinks:
1. The child’s needs always come first.
2. The child may not necessarily like what’s best for them, but I do it anyway.
3. I may not like what’s best for the child, but I do it anyway.
Like a doctor, lawyer, or teacher, a good parent is an advocate who runs interference while providing guidance. You can’t do that when you are emotionally dependent on the child. That imbalanced relationship requires clearly defined duties and priorities. There’s a fiduciary responsibility. It is unethical for a parent, doctor, lawyer, or teacher to place their own personal needs —such as sex or money — above the needs of their charge.
The fiduciary duty doesn’t mean therapists can or should be abused. But in the session, the patient’s personal feelings count and the therapist’s don’t. That’s the only way it can be.
But a lot of problems are more subtle than theft or sexual abuse. Therapists are only human. It’s not always easy to see where interpersonal boundaries are or should be. Ultimately it’s your job as the patient to make sure you feel heard and respected, and if you don’t, kiss another frog.
Of course the relationship works both ways. I’ve never been fired by a therapist, but I know people who have. I don’t get close to anyone who confides that in me. I take it as a clinically educated red flag that this person has unsustainable issues. Therapists usually have to drum up their own work. They don’t just fire patients on a whim. If a doctor can’t work with them I probably can’t either.
My new therapist isn’t necessarily wrong. She’s just wrong for me.
I had a therapist that I really liked for the last couple of years, and my life improved immensely in that time. I moved out of my van and into an apartment with no help from anyone, which was Herculean. As I said, I’ve lost 120 pounds and counting. Neighbors who once ignored me now meet me in the parking lot for aerobics before dawn. People want to pay me to be their personal trainer. All of my problems are solving each other. My life got much better with her, and it felt so good.
So I was devastated when my therapist told me that she’d no longer be accepting insurance, any insurance, in 2020. She saw me on New Year’s Eve, twice that week. I know she hated to leave me without a new provider lined up. We both had to move on. Again, it’s a professional parent. I never lose sight of that. It doesn’t hurt me on a personal level, only the issue of kissing more frogs.
Importantly, though, it puts my entire therapeutic process back to Square 1. I’m heading into Month 6 without a new therapeutic relationship established despite my best efforts. For someone with my mental health issues this is not sustainable. I’m keeping myself alive until I find a new doctor to help me survive. Unfortunately the last session with the latest provider, Kathy, was a deal-breaker.
Like having a baby, it’s never a convenient time to switch therapists.
I thought I was going to die when my second therapist, Raye, moved away.
I had fired the first-ever therapist after one visit. I told her that I was working with childhood sexual abuse, and I felt like forgiveness was a crucial part of my process. I needed to de-escalate emotionally. Forgiveness was so clear to me, like a lighthouse in my heart.
That initial therapist felt that for me to be thinking about forgiveness that early in the game was a cop-out, that I was letting people off the hook to avoid my anger. This was a complete misread of the situation and a foolish thing for her to say. She didn’t even explore what I meant by forgiveness.
My very first therapist didn’t have enough clinical experience to get that I first needed to forgive myself in order to truly believe I deserved therapy. I could not budge off that forgiveness vibe, nor should she have advised me to. I was so angry I couldn’t find any other safe way to approach it except with the goal of forgiving everyone unilaterally, because I can and that’s how I want to feel.
That first therapist was shocked and horrified when I told her I would need to switch providers because of the irreconcilable difference in strategy. The next person, Raye, was much more skilled. She saved my life.
THERAPY PRO TIP: When I fired that first therapist, she said that was fine but I’d have to come in for another session to discuss it. She said she needed to ensure that I wasn’t ditching her because she challenged me. I had already explained to her that we have irreconcilable differences in our goals. If anyone ever tells you you need another appointment to fire them, tell them no. The relationship is over when you say it is.
Not all therapists are created equal.
Like every other profession, therapists have their share of C students. Then there’s also the common thing of people simply not communicating well together. Some people don’t click. However it’s the therapist’s job to establish rapport with the patient, never the other way around.
I think my latest therapist, let’s call her Kathy, is both inexperienced and not especially skilled.
With the very first therapist I ever saw, our irreconcilable difference was, on the surface, how we understood the word “forgiveness.” But it was actually her inability to meet me where I was at. She expected me to get onto her page. And it’s the same thing with Kathy.
With Kathy I find myself in the session thinking I’m not doing it right, not meeting her expectations, that she’s not understanding me. I remember several times thinking, “Wow, she must think this is my first therapy session, Day 1 of my process.” I made another appointment with her for next week, because I wanted to make sure I’m not incorrect. I wanted to fire her several times during the session. But I felt like I needed to sleep on it to be sure it’s the right decision. Next time I will just pull the plug, even on a therapist.
People often want to fire a therapist who challenges them in an appropriate, even vital way.
Before I do the list of why to fire a therapist, the list of why not to is very short:
Don’t fire a therapist who tells you things you don’t want to hear, unless those things are abusive, or like the forgiveness thing, an irreconcilable difference. That’s why I didn’t fire Kathy mid-session. I need to make sure she didn’t have a valid point.
A good therapist must be able to put a boot up your ass if need be.
It goes back to the professionalism of the therapist. I need to work with someone whose clinical judgment I trust. I have to get that they’re my advocate, that they get me, that we’re understanding each other. If we’re not communicating in therapy something is bad wrong. Kathy has to go.
So here’s my quick list, reasons I’ve fired therapists:
Must-Fire ASAP №1: I’m not comfortable with this person or in this environment.
I once went to a session at the private office of a therapist who also worked for the county mental health department. He gave me the address, and I met him at a small office park near the county building.
When I pulled in on Saturday morning it was a little spooky, completely abandoned. It was four smaller buildings around a courtyard. As I was parking, a homeless guy came out from the courtyard and stood by the entry.
I sat in the van and smoked a bowl, hoping he’d wander off and I could go to my appointment. He didn’t look dangerous, just dirty and disheveled. I didn’t want to be panhandled or deal with him.
The guy didn’t leave after five minutes. I decided I could take him in a fight, and went to look for the office.
Even though he turned out to be the therapist, I did go into the office with him. I was amazed that he would show up in dirty sweatpants and a T-shirt, as though he had rolled out of bed, down Main Street, and through a mud puddle. In hindsight this simple lack of professionalism should have been an automatic deal-breaker.
His office was, of course, in the back of the building. We went in and sat down on opposing couches, in a room he clearly shared with other therapists. When he left the door open for the session I was glad he did. He gave me the creeps.
I sat inside his office looking at the outer door and the base of the stairs. During the session a stranger walked into the middle of the room with us and asked us how to get to the architect’s office. I walked out behind him.
I shouldn’t have walked into that office, period. That guy skeeved me out, which is why I was glad he left the door open. If that’s the case, save time and trouble and just leave. You can’t do therapy with someone you don’t want to be alone with.
Never do a therapy session where your privacy isn’t guaranteed. Someone walked in on me with my last therapist and she looked like she wanted to gouge his eyes with a scissors. Even so she held her shit together and stayed in professional mode until my session was over, because I wanted to finish my thought. I hated losing her. When I get rich I’ll pay cash and go back to her, unless I kiss the lucky frog.
Must-Fire Moment №2: This visit is traumatic.
I once had a psychiatrist ask me a rapid-fire series of questions about my life, ticking boxes without considering either the questions or the answers.
A psychiatrist is not actually a therapist, but was a required part of my mental health assessment process. That appointment itself was incredibly traumatic, jumping from one painful aspect of my life to the next at a high rate of speed without regard for the questions’ impact on me.
That psychiatrist taught me to get up and walk out the door mid-session if need be. I almost did it with Kathy today. But because she’s an actual therapist I wanted to be sure I wasn’t reflexively rejecting constructive criticism, that she didn’t have an important point.
Take a tip from every Larry Nassar survivor:
“I hated it, but I didn’t know it was abuse.”
If you’re traumatized by the experience itself — unless it’s like a colonoscopy or a root canal, and you’ve been warned, and they’ve taken steps to minimize your stress level — something is wrong. I know psychiatry kind of sucks. But the first rule for doctors is “do no harm.” There’s a reasonable, empathetic way to do that process. That wasn’t it.
Must-Fire Moment №3: Can you hear me now?
I’ve been in therapy longer than many people have been in practice. So I have my own personal practice of therapy, over many years, with many different providers as described herein. I’ve got a certain style that may be challenging for the clinician, especially if their own game is not so strong.
I have to fire Kathy for a few reasons:
1. I felt judged. I was stunned when Kathy stopped me mid-sentence and accused me of ranting. This is how I’ve worked successfully with other therapists for years. But I gave her the benefit of the doubt. Then she didn’t have anything other than her belief that I was much angrier than I was. Annoying AF. Not good. I feel like it’s rude for a therapist to stop me short and say I’m ranting, especially if that’s all she has to say. I felt like I’m not able to be emotional with her. I can’t express myself freely with Kathy, game over.
2. I felt belittled. Kathy asked me what I got out of that, and I told her how I work through my process, talking. I listed the reasons it’s important for me. Even so she described it as “venting” and told me that wasn’t therapeutic. My last few therapists, and the circumstances of my life described above, would beg to differ.
Given a do-over I would ask her to rate my anger level on a 1 to 10. I would’ve put it at about 2–3. She seemed to think I was much angrier, even when I flipped it on demand. Again it’s her job to get on my page, not the other way around.
Must-fire ASAP: therapist who tells me how I feel. Especially when they’re incorrect.
PRO TIP: Next time I notice feeling bad about myself during a therapy session — not processing that self-blame but experiencing it anew — I will drop the therapist like she’s hot, THEN sleep on it and decide if I’ve made a mistake and want another appointment. Because that one rule, about not firing them for challenging you when you need to be challenged, is really crucial.
3. Kathy doesn’t understand the therapeutic relationship well enough. While we were discussing rape trolls on Medium and how I deal with them, Kathy said she didn’t think I was being an effective advocate because she could tell I was angry. I was shocked that she assumed I would speak to everyone the same way I do in therapy. Not everybody has a fiduciary relationship with my feelings, right? Especially rape trolls, eh? Super basic. She also assumed that the rape troll was conversing with me the same way people do with her in her office. Also obviously wrong. Those are two entirely different communications. If Kathy thinks everybody is at the same level of conversation in therapy as they are all the time, she’s very low-functioning.
4. It’s not the patient’s job to establish rapport or run the session. Kathy and I met on a video chat due to the lockdown, and I brought up the awkward last session. I wanted to see if she had some point that I missed, and clearly she felt the same way. This is your red flag moment, folks. I shouldn’t be the one trying to make this work.
Kathy said, “If you were in my office I would’ve shut you right down.” First of all it’s very scoldy, fuck that. I’m sure I wasn’t inappropriate. I’ve never had a therapist suggest that I was before. That tone, especially after she used the offensive word “ranting,” was a good time to end the call. Other therapists have successfully set boundaries with me. Not often, but it can happen. There was a gentle, effective way to redirect me, but Kathy couldn’t find it.
It’s important to understand the therapeutic relationship, the sacred space that is created. That video conference was her virtual office. If she maintains lower professional standards because of the lockdown, because the patient is not physically present, then videoconferencing is unethical for her to participate in. To say, “If you were in my office…” implies that what we’re doing here is somehow now official, not what she actually does. If she wanted to “shut me down” and didn’t know how, and I also didn’t know what she wanted from me, then we have nothing to discuss. She’s simply unqualified.
PRO TIP: Don’t let anyone do you any special favors, things they wouldn’t normally allow but we’ll keep it between us. That’s a grooming technique.
At the end of that session Kathy showed me a printout that said, “You are not your thoughts,” which is not a deep or new thought, and also had nothing to do with anything we discussed. Kathy failed to demonstrate either insight or empathy, and she got fired.
So that’s my short list of reasons to look for a new provider. Fortunately there are more options than ever before. I found Kathy through an online service that has all kinds of virtual doctors. Like a dating site, I can pull up profiles, request appointments, and see how it goes.
On the one hand, I see that I will need to be more cautious next time, taking the time to let the provider prove themselves before assuming I can continue working on my issues. Just like with dating, I shouldn’t have jumped right into bed with Kathy. Once I got to know her, I didn’t like her.
On the bright side, I can take my time and shop around, and find the perfect partner for my best life. And I know that I will.
|
https://medium.com/narrative/when-to-fire-your-therapist-5b276b45dc72
|
['Art Nunymiss']
|
2020-05-26 19:28:09.041000+00:00
|
['Life Lessons', 'Mental Health', 'PTSD', 'Therapy', 'Self']
|
5,112 |
Revised Guidelines For Brave&Inspired (11/2/19)
|
With all the new changes in Medium, it seems that a good game face is more important than ever. Each story needs to be readable and professional, whether it is a longer piece or a short poem. I now have tabs included on the top of the page, so to have your story go into the correct tab, they should be properly tagged.
Poetry — should be tagged poetry, and should have some theme of being inspired or behaving in a way that feels brave to you.
True Courage — is the tab for stories that happened to you. Please use “This Happened to Me” as one of your tags.
Inspired — If your story is based on how someone else’s bravery inspired you, use “Inspiration” as your tag.
Stories may sometimes appear in more than one tab, that’s okay.
Self Promotion and Embeds
I am all for self-promotion and embeds, but your story should be mostly about what you are writing now. Excessive extras look messy and detract from your piece. Keep them reasonable.
Aside from that, all the standard stuff applies. Watch your grammar. I have Grammarly on my laptop, and if it catches something glaring, like a spelling error, I may fix that, but not much else. (I don’t “fix” UK spelling, even if Grammarly tells me to). Use a credited image with proper permission. Etc.
If you aren’t a writer on Brave&Inspired and feel you have the right kinds of stories to tell, reply below to be added as a writer. Thanks!
|
https://medium.com/brave-inspired/revised-guidelines-for-brave-inspired-11-2-19-3920376cb40a
|
['Gretchen Lee Bourquin', 'Pom-Poet']
|
2019-11-02 12:38:17.807000+00:00
|
['Brave', 'Inspired', 'Guidelines', 'Writing']
|
5,113 |
How to Find Device Metrics for Any Screen
|
How to Find Device Metrics for Any Screen
Calculate the right measurements for design across devices
New devices are always coming online, offering new formats, screen sizes, and pixel densities that you’ll want your product to accommodate. If Googling doesn’t give you the numbers you need, or you just want to show off your math skills, here’s a relatively easy way to determine the relevant metrics for your designs.
Do it yourself 🛠
There are six pieces of information you’ll want in the end: the screen diagonal measurement, screen dimensions, aspect ratio, pixel resolution, dp (or density-independent pixel, Android’s logical pixel) and the density bucket to which each device belongs. Some of those specifications can be found on each device’s product page, or through other sites that collect information about specific devices.
It’s easiest to visualize this information if we have an example. So, to get a feel for the process of finding these values, let’s start with the Pixel 4.
The screen diagonal, aspect ratio, and pixel resolution can all be found on the Pixel 4 product page. For devices that don’t have in-depth or easily accessible specification pages, sites like the GSMArena Phone Finder can be a good resource.
Finding Screen Dimensions 📐
After filling in the readily available info, we have three more specs to solve for. The first one is the screen’s dimensions in terms of width and height. The formulas for this — which I learned from an Omni Calculator page by Hanna Pamula — are fairly easy to use, and only require a diagonal measurement and an aspect ratio (AR).
Width = diagonal / √(AR²+1)
Height = (AR)×Width
So for our example, we know the screen diagonal is 5.7 inches and the aspect ratio is 19:9 (which we’ll write as 19/9 in the formula).
Width = 5.7 / √((19/9)²+1)
Width = 2.44”
And now that we know Width, we can solve for Height.
Height = (19/9)×2.44
Height = 5.15”
So our screen dimensions are 2.44×5.15”
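The width and height calculation above can be sketched in a few lines of Python. The formula and the Pixel 4’s numbers come from the article; the function name is my own:

```python
import math

def screen_dimensions(diagonal, aspect_ratio):
    """Compute width and height (in the diagonal's units) from a
    diagonal measurement and an aspect ratio given as height/width."""
    width = diagonal / math.sqrt(aspect_ratio ** 2 + 1)
    height = aspect_ratio * width
    return width, height

# Pixel 4: 5.7" diagonal, 19:9 aspect ratio
w, h = screen_dimensions(5.7, 19 / 9)
print(f"{w:.2f} x {h:.2f} in")  # 2.44 x 5.15 in
```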
Finding dp Resolution 📏
Density-independent pixels are Android’s logical pixel. Measuring in dp allows designers and developers to ensure that what they create appears at the same physical size across screens, no matter what density those screens happen to be. So knowing the dp resolution of a device can be really helpful in targeting that device with your design. You can easily set up artboards and assets that focus on specific form factors, and reliably reproduce your design across them.
For this formula (which you can find in the Android Developers documentation on pixel density), we need to know the screen’s pixel resolution and its pixel density in dpi (often listed as ppi). The density should be available at one of the sources mentioned above.
px = dp×(dpi/160)
In our example, we know the screen’s pixel resolution is 1080×2280px and its density is 444ppi, so we can plug those values into the formula, starting with width.
1080 = dp×(444/160)
1080 = dp×2.775
dp = 1080/2.775
dp = 389
Next we’ll do the same calculation for the screen’s height in density-independent pixels.
2280 = dp×2.775
dp = 2280/2.775
dp = 822
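Both conversions can be wrapped in one small helper. The relation px = dp×(dpi/160) is from the Android documentation cited above; the helper name is my own:

```python
def px_to_dp(px, dpi):
    """Convert a pixel measurement to density-independent pixels,
    using the Android relation px = dp * (dpi / 160)."""
    return px / (dpi / 160)

# Pixel 4: 1080 x 2280 px at ~444 dpi
print(round(px_to_dp(1080, 444)))  # 389
print(round(px_to_dp(2280, 444)))  # 822
```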
Finding the density bucket 🔍
The Android Developers documentation on pixel density also outlines the notion of “density qualifiers,” which Android uses to serve bitmap drawable resources in an app. If there are non-vector assets like photos or illustrations in your design, it can be useful to know which density buckets you’re targeting, serving the right asset to each device to speed up loading and to avoid distortion and “out of memory” errors.
Finding the density bucket is as easy as looking at the table in the documentation linked above and comparing it to our dpi value. For the Pixel 4’s ~444ppi, we would assign the XXHDPI density qualifier.
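As a rough sketch, that table lookup could be automated like this. The qualifier names and nominal dpi values are the standard Android density buckets, but the simplified “smallest bucket that covers the dpi” rule and the function name are my own assumptions; real resource selection is more nuanced:

```python
# Standard Android density buckets (qualifier -> nominal dpi),
# simplified: tvdpi and other intermediate buckets are omitted.
DENSITY_BUCKETS = [
    ("ldpi", 120),
    ("mdpi", 160),
    ("hdpi", 240),
    ("xhdpi", 320),
    ("xxhdpi", 480),
    ("xxxhdpi", 640),
]

def density_bucket(dpi):
    """Pick the smallest bucket whose nominal dpi covers the screen's dpi."""
    for qualifier, bucket_dpi in DENSITY_BUCKETS:
        if dpi <= bucket_dpi:
            return qualifier
    return "xxxhdpi"

print(density_bucket(444))  # xxhdpi
```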
Putting it all together 🧩
Having worked through those calculations, we now have a complete set of device metrics for our example device, the Pixel 4.
For further reading on pixel density, layout, and how the two interact, see the Material Design guidance on Layout.
|
https://medium.com/google-design/how-to-find-device-metrics-for-any-screen-62b9ad84d097
|
['Liam Spradlin']
|
2020-05-19 15:22:47.032000+00:00
|
['How To', 'Design Process', 'Design', 'Material Design', 'Tutorial']
|
5,114 |
Neumorphism will NOT be a huge trend in 2020
|
For it to work well, you need to make sure that even blockframes of your objects, with the background removed, can all be identified as part of the same group. ⬜️ ⬜️ ⬜️
So in short — those cards will work if the interface can be just as good without them. Now that’s an awesome recommendation, isn’t it? Especially when we take that Dieter Rams point about removing the “unnecessary” 😂
If you want a pre-made Sketch / Figma files with those shapes to play around go to www.Neumorphism.com
But it’s so fresh!
Remember Pantone Color of the year 2019? Let me refresh your memory on that amazing “new design trend of 2019” as it was proclaimed in January.
While of course there were initial examples of that “living coral” most of them were not relevant anymore by early February 2019.
I think by the same time we will exhaust all possible neumorphic combinations and come back to what worked well before.
Directions for 2020 🔥
That doesn’t mean Neumorphism is dead in the water though.
It just means that on its own it won’t be able to carry an entire product to success. Sure — a couple of initial products done this way can be successful, but this will get more tiresome than even material design.
However mixing parts of this style with parts of other styles will definitely be a trend in 2020 and beyond.
Making a product that is both beautiful and functional means not going over the top in any direction. Even the currently popular soft-colorful shadow only works well if it’s done on buttons and/or icons. Imagine applying it to the entire product — exactly!
Dark mode
Dark mode is going to be a thing whether we like it or not. But not necessarily the desaturated grey-blue dark mode we’re seeing everywhere.
Since the introduction of OLED screens it’s obvious that pure black doesn’t use much energy (if any). So if the goal of a dark mode is to save battery, we should start seeing more minimal, functional interfaces that are primarily black. Not dark grey.
Eye-strain can be another reason for dark modes and if that’s the case those soft dark-modes definitely look better.
Many apps will make light and dark versions of their interfaces.
Illustration and 3D
We definitely need a more diverse illustration landscape. Currently the most popular style, with those slightly disproportionate body parts and loose lines, is showing up everywhere. This gets boring quickly.
These all look good but way too similar.
However illustration is one of the best ways to stand out — but we need to experiment with it more not to fall into the trap of sameness.
3D, on the other hand, is a bit easier to make distinctive. It’s also significantly harder to produce, requiring more effort. That means that if the time was spent making a 3D render, it’s likely to be better quality and on-brand.
Great example of an on-brand style is Pitch.com
Animation
Transitions and scene-builds will rise even more this year. One of the accelerants is the new, exciting JS libraries that allow for complex 2D and 3D transitions with relative ease.
👨🏻💻 Yes you can now code “cool stuff” much easier. Don’t overdo it!
We’re going to put that flat-design on surfaces and turn them around. Kind of like in that FEZ game ;)
Isometry?
In 2019 while building our cryptocurrency analytics platform I took the time to analyse the design of over 2000 crypto related websites (a long article on it is coming in the new year). That literally means I went to 2000 websites and gave each one a score based on quality, originality and consistency.
This is beautiful (if you’re the author reach out and I’ll tag you here). But at the same time seeing similar images EVERYWHERE can get very boring.
One thing that struck me the most is that almost 1/4 of them had some sort of isometric images on them. All done in different yet so familiar style that after a while I wasn’t sure they weren’t all from the same free library.
This trend can be done well, but if you plan to just replicate the popular ones in your designs — just don’t. Please. It’s been one of the most overused design things this year (right after colorful shadows).
Ultra minimalism for mindfulness?
This trend is just starting so it may not go outside a small niche. I jumped into digital-detox and using more minimal products this year and so did many people I know.
I ordered both. Using the Light Phone 2 now — quite refreshing.
Devices like the Mudita Pure or the Light Phone 2 deliver simple, black-and-white, super-minimal interfaces. If we consider the apps we use as tools that must serve a specific function, a minimal interface makes some sense. Of course, not all apps will work in that style (imagine a text-only Instagram 🤣)
Voice interfaces?
At one of the conferences I attended (and lectured at) this year, I heard this quote:
Don’t learn UI. We will be using mostly voice recognition in the near future — talking to our devices.
While this seems futuristic and makes sense in some scenarios (while driving or exercising), there are two main problems that I think will keep voice UIs out of a dominant position (yet).
First, there are huge concerns about privacy and “creepy” AI. Not long ago Alexa advised its user to kill themselves as it’s the best way to stop global warming and save the planet. While this may logically be true, it’s definitely a clickbait-y headline that will make many people scratch their heads and say: yeah, I don’t like those smart speakers listening to our every word and secretly building the next Skynet.

Second, in many cases it’s really weird to talk to your phone (especially in public). A few quick taps are more private and faster. So until we get those brain-computer interfaces, talking to your phone to write a text message on a bus won’t be a thing.
So what’s it gonna be?
The only right answer here can be — I don’t know. I may be wrong and we’ll be living in the extruded soft plastic future, or we may get extruding glass on phones and make it even more real.
Add all those trends on top of one another and see what happens ;-)
But what’s likely to happen is that no one trend will dominate this year.
The best designs — as it has always been — will come from the right mix of all the trends with good typography. Because you can cast a different type of shadow on your card, but if the text on it looks all misplaced and weird, no amount of extruded plastic will make that design good.
Readable IS beautiful. Remember that in 2020!
|
https://uxdesign.cc/neumorphism-will-not-be-a-huge-trend-in-2020-67a8c35e52cc
|
['Michal Malewicz']
|
2020-06-13 08:53:14.811000+00:00
|
['Design', 'UI', 'UX', 'Visual Design', 'Prototyping']
|
5,115 |
How to Develop and Deploy a Webhook Alert Action App with Custom Payload and Headers for Splunk
|
How to Develop and Deploy a Webhook Alert Action App with Custom Payload and Headers for Splunk Ankit Tyagi Follow Jun 29 · 6 min read
Splunk is a wonderful tool with loads of features, yet a few of them are not available out-of-the-box — one of those missing features is the customizable webhook alert action.
While working on a project, I needed to send some custom metrics along with some headers from Splunk alerts and ended up exploring the default available webhook alert action.
Here’s what’s already available
Splunk, by default, provides webhook alert action for an alert; however, it is lacking some of the required features like:
Authentication mechanism
Custom header support
Custom payload support
Because these features are missing, the default webhook alert action on Splunk is much less useful: most webhooks require some kind of authentication before they will accept data, and some webhook APIs require a customized payload.
Splunk has done a great job with its pluggable platform, which allows you to write your own apps and deploy them to serve your custom requirements. Despite this, building a custom app can be tricky and time-consuming.
After reviewing the webhook alert action I decided to develop my own custom webhook alert action with all the required missing features.
customWebhook app to the rescue
I developed a customWebhook alert action application with the following configurable options:
URL: Target webhook URL
Headers (dict): Include any number of headers
e.g. {'Authorization': 'Bearer <token>', 'Content-Type': 'application/json', …}
Payload (dict): Payload data with key-value pairs of your choice
e.g. {'service_name': 'MyService', 'key1': 'value1', 'key2': 'value2', …}
I’ll walk you through simple steps that will give you a kick start on your Splunk project.
Objectives:
1. Understand the Splunk app directory structure
2. Components with the respective directory structure
3. Configuration and logic code snippet
4. Deploy the application on the Splunk server
5. Create an alert with customWebhook alert action
Understand the Splunk app directory structure
A snapshot of the application directory structure.
Let’s break it down.
This is not an exhaustive list of configuration options; it covers only what I used in the customWebhook app. The complete list is here: https://docs.splunk.com/Documentation/Splunk/8.0.4/AdvancedDev/ModAlertsCreate
Configuration and logic code snippet
alert_actions.conf
# Configuration file path
# $SPLUNK_HOME$/etc/apps/customWebhook_app/default/alert_actions.conf

[customWebhook]
is_custom = 1
label = customWebhook
description = Send CustomWebhook alert notifications
icon_path = customWebhook_icon.png
payload_format = json
param.alert_source = Splunk
param.user_agent = Splunk/$server.guid$
app.conf
[ui]
is_visible = 0
label = customWebhook alerts

[launcher]
author = Ankit Tyagi
description = Send alert payload to custom webhook
version = 1.0

[install]
state = enabled
is_configured = 1
restmap.conf
[validation:savedsearch]
action.customWebhook = case('action.customWebhook' != "1", null(), \
    'action.customWebhook.param.base_url' == "action.customWebhook.param.base_url" OR 'action.customWebhook.param.base_url' == "", "No Webhook URL specified", \
    1==1, null())
action.customWebhook.param.base_url = validate( \
    match('action.customWebhook.param.url', "^https?://[^\s]+$"), "Webhook URL is invalid")
default.meta
[alert_actions/customWebhook]
export = system

[alerts]
export = system

[restmap]
export = system
customWebhook.html
<!--$SPLUNK_HOME$/etc/apps/customWebhook_app/default/data/ui/alerts/customWebhook.html-->
<form class="form-horizontal form-complex">
<div class="control-group">
<label class="control-label" for="customWebhook_base_url">URL</label>
<div class="controls">
<input type="text" class="input-xlarge" name="action.customWebhook.param.base_url" id="customWebhook_base_url" placeholder="https://server.com/api/v2/webhook/" />
<br>
<span class="help-block">
Webhook URL (str) https://server.com/api/v2/webhook
</span>
</div>
</div>
<div class="control-group">
<label class="control-label" for="customWebhook_headers">Headers</label>
<div class="controls">
<input type="text" name="action.customWebhook.param.headers" id="customWebhook_headers" placeholder="{'Content-Type': 'application/json', 'CustomHeader': 'Value'}"/>
<br>
<span class="help-block">
Headers (dict) <br>
e.g. {'Content-Type': 'application/json', 'CustomHeader': 'Value'}
</span>
</div>
</div>
<div class="control-group">
<label class="control-label" for="customWebhook_payload">Payload</label>
<div class="controls">
<input type="text" name="action.customWebhook.param.payload" id="customWebhook_payload" placeholder="{'key1': 'value1', 'key2': 'value2'}"/>
<br>
<span class="help-block">
Payload data (dict)
e.g. {'key1': 'value1', 'key2': 'value2'} <br>
add a key from the search result like this: <br>
e.g. {'key1': 'value1', 'key2': '$result.key_name$'}
</span>
</div>
</div>
</form>
alert_actions.conf.spec
# $SPLUNK_HOME$/etc/apps/customWebhook_app/README/alert_actions.conf.spec

[customWebhook]
param.alert_source = "Splunk"
customWebhook.py
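The customWebhook.py gist is not embedded here, so the following is only a rough, hypothetical sketch of what such an alert-action script can look like — the function names and overall structure below are my assumptions, not the article's actual code. Splunk invokes a custom alert action script with the --execute flag and passes a JSON document on stdin whose configuration dict carries the parameters defined in alert_actions.conf and the HTML form above.

```python
# Hypothetical sketch of customWebhook.py (illustrative names, not the
# article's real implementation).
import ast
import json
import sys
import urllib.request


def build_request(config):
    """Build an HTTP POST request from the alert-action configuration dict."""
    url = config["base_url"]
    # The headers/payload params arrive as strings like "{'key': 'value'}";
    # ast.literal_eval safely converts them into Python dicts.
    headers = ast.literal_eval(config.get("headers") or "{}")
    payload = ast.literal_eval(config.get("payload") or "{}")
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers, method="POST")


def main():
    # Splunk calls the script as: customWebhook.py --execute
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        settings = json.loads(sys.stdin.read())
        req = build_request(settings["configuration"])
        with urllib.request.urlopen(req) as resp:
            sys.stderr.write("INFO webhook responded with %d\n" % resp.status)


if __name__ == "__main__":
    main()
```

The actual script would also need error handling and logging so that failures show up in Splunk's internal logs.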
|
https://medium.com/adobetech/how-to-develop-and-deploy-a-webhook-alert-action-app-with-custom-payload-and-headers-for-splunk-6dc1529c49f3
|
['Ankit Tyagi']
|
2020-07-11 15:32:47.434000+00:00
|
['Webhooks', 'Adobe Engineering', 'API', 'Python', 'Splunk']
|
5,116 |
Computing PI in Only 3 Lines
|
Computing PI in Only 3 Lines
3.1415… the rest is calculus with a keyboard
Photo by sheri silver on Unsplash
I’ve always been amazed by how precisely computers can approximate Pi: the famous irrational constant 3.141592… plus infinitely more digits. Today, over 50 trillion digits have been found.
Though the value of Pi is essential to nature, Pi does not come naturally to computers. A number of algorithms are commonly used to approximate Pi, including the Gauss–Legendre, Borwein’s, and Salamin–Brent algorithms. These methods are extraordinarily efficient, but somewhat complex.
A More Straightforward Approach
There is one method — one that I discovered on my own in high school — that isn’t as fast, but is elegant in its simple mathematical groundwork.
In calculus class, you probably studied the Taylor Series: an infinite summation that models a function near a point by matching all of its derivatives there. Our goal is to find the right Taylor Series such that the infinite summation converges to Pi itself.
Let’s start with a basic trig expression involving Pi.
1 = tan(π / 4)
Rearranging this equation allows us to solve directly for Pi.
π = 4 * arctan(1)
Though I won’t walk through the formal derivation, the Taylor Series for 4 * arctan(1) is
π = 4 − 4/3 + 4/5 − 4/7 + 4/9 − …
which can be written in summation notation as
π = Σ (from i = 0 to ∞) of 4 * (−1)^i / (2i + 1)
As you can see, this definition of Pi finds its beauty in its simplicity.
Translating to Code (Python)
The formal summation we found goes hand-in-hand with modern programming methods, as the sigma is really a for-loop in disguise.
Amazingly, the summation itself only requires 3 lines of code! A variable pi holds the current sum, as new terms of the series are added on loop.
Though the above program is functional, the cost of the exponentiation grows with i. An easy fix is using the modulus operator to determine the next term’s sign.
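The original post embeds its code as gists, which are missing here. The following is a minimal sketch of both loops described above, with my own illustrative variable names (the author's exact code may differ):

```python
# The 3-line summation: accumulate terms of 4 * (-1)^i / (2i + 1).
pi = 0
for i in range(1_000_000):
    pi += 4 * (-1) ** i / (2 * i + 1)

# The same series without exponentiation: a modulus test picks the sign.
pi_fast = 0
for i in range(1_000_000):
    pi_fast += (4 if i % 2 == 0 else -4) / (2 * i + 1)
```

Both loops compute identical partial sums; the second simply avoids recomputing (-1)^i on every iteration.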
Convergence and Algorithm Efficiency
|
https://towardsdatascience.com/computing-pi-in-only-3-lines-95c26276f4c9
|
['Blake Sanie']
|
2020-12-21 18:13:01.045000+00:00
|
['Calculus', 'Pi', 'Taylor Series', 'Python', 'Precision']
|
5,117 |
Introducing Pagination in the Syncfusion Flutter DataGrid
|
Paging is an important feature for loading large amounts of data and displaying it instantly in the DataGrid widget. It also provides easy navigation through the data. We at Syncfusion have developed the Syncfusion Flutter Data Pager widget to achieve pagination in the Flutter DataGrid widget. You can get this Data Pager in our 2020 Volume 3 release.
Let’s discuss in this blog how to integrate the SfDataPager with SfDataGrid and the customization options available in the Data Pager widget.
Integrating SfDataPager into SfDataGrid
Step 1: Include the Syncfusion Flutter DataGrid package dependency in the pubspec.yaml file of your project with the following code.
syncfusion_flutter_datagrid: ^18.3.35-beta
Step 2: Import the DataGrid package in the main.dart file using the following code example.
import 'package:syncfusion_flutter_datagrid/datagrid.dart';
Step 3: Create a common delegate for both the SfDataPager and SfDataGrid and do the following. Please note that, by default, DataGridSource is extended with the DataPagerDelegate.
Set the SfDataGrid.DataGridSource to the SfDataPager.delegate property.
Set the number of rows to be displayed on a page with the SfDataPager.rowsPerPage property.
Set the number of items to be displayed in view with the SfDataPager.visibleItemsCount property.
Override the SfDataPager.delegate.rowCount property and the SfDataPager.delegate.handlePageChange method in the SfDataGrid.DataGridSource. You can also load the data for a specific page in the handlePageChange method; it is called for every page navigation in the Data Pager.
Refer to the following code example.
class OrderInfoDataSource extends DataGridSource<OrderInfo> {
  @override
  List<OrderInfo> get dataSource => paginatedDataSource;

  @override
  Object getValue(OrderInfo orderInfos, String columnName) {
    switch (columnName) {
      case 'orderID':
        return orderInfos.orderID;
        break;
      case 'customerID':
        return orderInfos.customerID;
        break;
      case 'freight':
        return orderInfos.freight;
        break;
      case 'orderDate':
        return orderInfos.orderData;
        break;
      default:
        return '';
        break;
    }
  }

  @override
  int get rowCount => orders.length;

  @override
  Future<bool> handlePageChange(int oldPageIndex, int newPageIndex,
      int startRowIndex, int rowsPerPage) async {
    int endIndex = startRowIndex + rowsPerPage;
    if (endIndex > orders.length) {
      endIndex = orders.length - 1;
    }
    paginatedDataSource = List.from(
        orders.getRange(startRowIndex, endIndex).toList(growable: false));
    notifyListeners();
    return true;
  }
}
Step 4: Create an instance of OrderInfoDataSource and assign it to DataGrid’s source and Data Pager’s delegate properties.
Refer to the following code example.
List<OrderInfo> orders = [];
List<OrderInfo> paginatedDataSource = [];
static const double dataPagerHeight = 60;
final _OrderInfoRepository _repository = _OrderInfoRepository();
final OrderInfoDataSource _orderInfoDataSource = OrderInfoDataSource();

@override
void initState() {
  super.initState();
  orders = _repository.getOrderDetails(300);
}

@override
Widget build(BuildContext context) {
  return MaterialApp(
      title: 'Paginated SfDataGrid',
      home: Scaffold(
        appBar: AppBar(
          title: Text('Paginated SfDataGrid'),
        ),
        body: LayoutBuilder(builder: (context, constraint) {
          return Column(
            children: [
              SizedBox(
                  height: constraint.maxHeight - dataPagerHeight,
                  width: constraint.maxWidth,
                  child: SfDataGrid(
                      source: _orderInfoDataSource,
                      columnWidthMode: ColumnWidthMode.fill,
                      columns: <GridColumn>[
                        GridNumericColumn(
                            mappingName: 'orderID', headerText: 'Order ID'),
                        GridTextColumn(
                            mappingName: 'customerID',
                            headerText: 'Customer Name'),
                        GridDateTimeColumn(
                            mappingName: 'orderDate',
                            headerText: 'Order Date'),
                        GridNumericColumn(
                            mappingName: 'freight', headerText: 'Freight'),
                      ])),
              Container(
                height: dataPagerHeight,
                color: Colors.white,
                child: SfDataPager(
                  delegate: _orderInfoDataSource,
                  rowsPerPage: 20,
                  direction: Axis.horizontal,
                ),
              )
            ],
          );
        }),
      ));
}
After executing this code example, we will get output like in the following GIF image.
Pagination feature in Flutter DataGrid
Appearance customization of Data Pager
Data Pager allows you to customize the appearance of its elements using SfDataPagerThemeData in the SfDataPagerTheme property. To do this, the SfDataPager should be wrapped inside the SfDataPagerTheme.
Follow these steps to customize the Data Pager:
Step 1: Import the theme package in the main.dart file using the following code.
import 'package:syncfusion_flutter_core/theme.dart';
Step 2: Add the SfDataPager widget inside the SfDataPagerTheme like in the following code example.
@override
Widget build(BuildContext context) {
return Scaffold(
body: SfDataPagerTheme(
data: SfDataPagerThemeData(
itemColor: Colors.white,
selectedItemColor: Colors.lightGreen,
itemBorderRadius: BorderRadius.circular(5),
backgroundColor: Colors.teal,
),
child: SfDataPager(
delegate: _orderInfoDataSource,
rowsPerPage: 20,
direction: Axis.horizontal,
),
));
}
After executing this code example, we will get output like in the following screenshot.
Customized Data Pager widget
Conclusion
In this blog, we have seen the steps to integrate the Syncfusion Flutter DataGrid with the new Data Pager. This feature helps you to easily navigate to required data. You can get the complete source code for the example used in this blog in this GitHub repository.
So, try it out and leave your feedback in the comments section below!
Syncfusion Flutter widgets offer fast, fluid, and flexible widgets for creating high-quality apps for iOS, Android, and the web. Use them to enhance your productivity!
For existing customers, the new version is available for download from the License and Downloads page. If you are not yet a Syncfusion customer, you can try our 30-day free trial to check out our available features. Also, try our samples from this GitHub location.
You can also reach us through our support forums, Direct-Trac, or feedback portal. We are always happy to assist you!
|
https://medium.com/syncfusion/introducing-pagination-in-the-syncfusion-flutter-datagrid-b66c600add73
|
['Rajeshwari Pandinagarajan']
|
2020-11-19 12:56:42.558000+00:00
|
['Flutter', 'Android App Development', 'Dart', 'Mobile App Development', 'Web Development']
|
5,118 |
A philosophy of participation in dynamic wholeness
|
A philosophy of participation in dynamic wholeness
Excerpt from Exploring Participation (D.C.Wahl 2002)
[Note: This is an excerpt from my 2002 masters dissertation in Holistic Science at Schumacher College. It addresses some of the root causes of our current crises of unsustainability and applies insights from holistic science to ecological design. This excerpt explores the fundamentals of what Charles Eisenstein later (2014) referred to as ‘the story of separation’ and ‘the story of interbeing’. Be warned, this is academic, somewhat dense writing, yet it addresses some crucial issues. Enjoy!]
“Our task is to look at the world and see it whole.” — E.F. Schumacher “The success or failure of saying, and hence of writing, turns upon the ability to recognize what is part and what is not. But a part is a part only inasmuch as it serves to let the whole come forth, which is to let meaning emerge. A part is only a part according to the emergence of the whole which it serves; otherwise it is mere noise. At the same time, the whole does not dominate, for the whole cannot emerge without the parts. The hazard of emergence is such that the whole depends on the parts to be able to come forth, and the parts depend on the coming forth of the whole to be significant instead of superficial.” — Henri Bortoft
The above remains meaningful whether we understand it hermeneutically, as referring to a text and the reciprocity of meaning between the whole and the parts, as well as understood as an analogy for our individual relationship to the wider whole — the world we live in.
In order to continue to emerge in good health, a whole like our living planet depends on the appropriate participation of its parts, and that includes humanity. Just as the parts — you and me — depend on the emergence of this healthy whole for their meaningful and healthy existence.
In this dissertation, I aim to convey some of the understanding that recognizes the intricate link between the health of the whole and the appropriate participation of the parts. I argue that appropriate participation takes place on the appropriate spatial-temporal scales and that the complex dynamic processes of life, which are continuously transforming the whole, depend on an intricate web of relationships through which everything and everybody participate simultaneously as the weavers and as the web of life.
Each individual part of this dissertation is intended to let meaning emerge and let the whole come forth. “Logic is analytical, whereas meaning is evidently holistic, and hence understanding can not be reduced to logic. We understand meaning in the moment of coalescence when the whole is reflected in the parts so that they disclose the whole.”10 The circle of relationship between the whole and the parts, in which meaning emerges, was first recognized by Friedrich Ast, who called it the hermeneutical circle.
I argue, based on my interpretation of Bortoft’s work, that the same circle of relationships between the whole and the part that is expressed in the hermeneutical circle of understanding meaning, may also help to understand our own reciprocally co-creative relationship between our perceived selves and the world we thus perceive.
It is the whole-part relation that manifests in creation, maintenance and transformation of the identities, which in turn manifest as the world we experience. At the core of this re-emergence of meaning, at the foundations of a newly emerging worldview is the awareness of our deeply participatory relationship with the living world.
In the same way that the parts and the whole of this dissertation bring forth each other, mutually dependent on each other for their meaningful existence, each one of us derives meaningful existence through his or her profoundly reciprocal relationships and interactions with the world.
Paying attention to the participatory nature of all existence and our associated creative agency in the world can help us to overcome the alienation and lack of meaning, which characterizes the modern world.
Whether we are conscious of it or not, each one of us is constantly engaged in a creative process through which we collectively bring forth or contribute to the emergence of the whole. Yet, the whole in its entirety has neither beginning nor end and there is no possibility of being outside the whole. Therefore, it is important to understand that the interactions and relationships of its parts continuously transform the whole from within, allowing for creative change to occur and new interpretations of meaning as well as new manifestations of the whole to emerge.
In other words, we need to face up to responsibility for our actions. Whatever we do is shaping the world we live in, while we are simultaneously being shaped by the world we collectively co-create through our thoughts, words and actions. The whole and the part create each other and it may serve us better to think of them not as separate and exclusive, but rather as mutually dependent and encompassing, reciprocating entities that co-create the world in a continuous process of interacting with and relating to each other. The whole and the part, as well as the observer and the observed are truly one, as well as being distinct within that one whole.
A participant-observer understanding of the world as a whole can lead us to seeing how the one can express itself as the many in the diversity of manifestations that emerge through the interactions and relationships of all participant-observers.
The process is analogous to how the one meaning of this dissertation expresses itself through the different interpretations of each individual reader. There is only one meaning and one whole, but it manifests itself differently through the ever-shifting web of relationships of its participant-observers. Like the one universal whole, meaning should not be thought of as fixed or static, but as a dynamic process un/en-folding through relationship, direct sensory experience and interpretation.
Seen in this way, diversity is no longer the threatening, challenging existence of many others competing with our self, but rather the breathtakingly beautiful and meaningful expression of a limitless variety of manifestations that the same one, which we are of, can take as we experience relationship.
We are all of the same one, yet we are the same one differently, so we can learn to begin to cherish diversity as the true expression of our unity with the world. We are not all the same, but we are one. In losing diversity, we lose ways of experiencing our own individuality and ultimately we lose a part of ourselves.
This understanding of the relationships between the whole and its parts, between each one of us individually and the world or the universe as a whole can help us to find language to express our fundamental interconnectedness with the living world.
We are who we are only in relationship to all there is. The world as we know it emerges in a process of reciprocal co-creation that integrates us inseparably into the world we experience. In other words, as David Abram has put it so beautifully:
“Caught up in a mass of abstractions, our attention hypnotized by a host of human-made technologies that only reflect us back to ourselves, it is all too easy for us to forget our carnal inherence in a more-than-human matrix of sensations and sensibilities. Our bodies have formed themselves in delicate reciprocity with the manifold textures, sounds, and shapes of an animate earth — our eyes have evolved in subtle interaction with other eyes, as our ears are attuned by their very structure to the howling of the wolves and the honking of the geese. To shut ourselves off from these other voices, to continue by our lifestyles to condemn these other sensibilities to the oblivion of extinction, is to rob our own senses of their integrity, and to rob our minds of their coherence. We are human only in contact, and conviviality, with what is not human.”11 — David Abram
In the Santiago theory of cognition, Humberto Maturana and Francisco Varela proposed that the process through which an individual interacts with its environment is fundamentally a cognitive process. This process of cognition, of structural coupling between the organism and its environment, is regarded as the fundamental process of life itself. In our reflective consciousness this process emerges as the process of knowing.
Maturana and Varela emphasize that, as we are beginning to understand how we know, we have to realize “that the world everyone sees is not the world but a world which we bring forth with others” and “that the world will be different only if we live differently.”12 I would like to add that every time we bring forth a world, it also is the world manifesting itself in a particular way.
We will have to find new ways of expressing that we are living in a fundamentally paradoxical universe. While our individual experience of our embodied self is real and allows us, through our senses and the concepts we form, to enter into relationships with a real world, we paradoxically also are the world we thus perceive as it emerges out of our relationship with it.
A truly participatory understanding recognizes that a participating part can never be separate from the whole it participates in. It is always both part and whole simultaneously. This new dynamical way of thinking is “recovering a way of thinking based on living in the movement of paradox rather than eliminating it.” The dynamical way of thinking places paradox “at the very core of understanding.”13
The universe as the one unique whole can never be fully understood in the way that modern Reductionist science aims to understand the world through logical reasoning, prediction and control. Why? Simply because it is not possible to take an outsider or objective point of view of the whole in its entirety.
The whole can only be approached by taking partial reference from within. This makes the observer of the whole inevitably a participant in it and blurs subject-object distinctions. In observing the whole, the observer will have to include him- or herself, both as the observing subject and as the observed object.
To be subject and object simultaneously fundamentally conflicts with logical reasoning dependent on either/or choices. We encounter the paradox. Reductionist science is based on the either/or logic of a Cartesian subject-object dichotomy.
Curiously, despite Werner Heisenberg’s famous reminder to the scientific community that “what we observe is not nature itself, but nature exposed to our method of questioning” and his caution that every act of observation has its associated observational blind-spot, most of science today is still based on dualistic reasoning that draws sharp either/or distinctions between subject and object and between the observer and the observed.
Based on Aristotle’s often disregarded or misinterpreted self-actualization thesis, which, metaphorically explained, equates the process of the builder building with the process of the building being built, Henri Bortoft points out that this way of experiencing an event as process constitutes “an intermediary philosophical position between monism and dualism.” In what Bortoft calls “a unitary event” one is not reduced to the other. It is not a “monistic event”.14 Neither are there two events. Dualism sees the builder as the subject and the building as the object being built, and so treats the builder building and the building being built as two distinct processes.
I believe Aristotle’s focus here is on the process of reciprocal co-creation and the emergence of identity out of relationship and interaction in this unitary event. Building the building makes the builder a builder, while being built by the builder, makes the building a building. The individual identities emerge out of, or manifest themselves through the relationships established by one single process.
Understanding process in this way lets us experience the coming into being of individual identity through relationship. Rather than experiencing a static world of finished products, preconceived by viewing with subject-object goggles from the start, we begin to see and think more dynamically, thus experiencing the reciprocal co-creation of the diversity of identities through relationship.
It is important to realize how deeply the Cartesian subject-object separation affects the way we interpret the world and our experiences within it, especially with regard to the conceptual framework that makes the world appear to us as it does. Bortoft, extending Gadamer’s work on hermeneutics, shows that “meaning is understanding”15 and that it is precisely the Cartesian subject-object presupposition that stops us from understanding how meaningful this insight is.
Maybe this is also why the Reductionist universe we constructed based on the subject-object presupposition seems so devoid of meaning? Our purely rational understanding of the objectified world lacks the deeper meaning which arises in the participatory experience of true, embodied understanding as an expression of a conscious universe.
Just as there are sheer infinite possibilities to interpret meaning, there are infinite possibilities of the universe manifesting itself in a diversity of identities. In both cases, interpretation or manifestation depends on the relation between the part and the whole.
Seen from this perspective one could venture to say: the meaning of life is living in relation to the whole that enfolds us, participating meaningfully in its unfolding. To understand this, be aware that neither life, nor mind, nor meaning, nor language, nor the universe are things; they are processes reflecting one single process: the whole un/en-folding in relation with and through the parts.
As we begin to realize our fundamentally participatory relationship with this process of the whole un/en-folding, as we direct our attention toward our co-creative potential as consciously participating agents in the expression of individual identity and in the interpretation of meaning, life becomes profoundly meaningful. Meaning and life unfold together.
We can relate to this unfolding through our sensory experience, intuitive perception, as well as through our language and all other forms of communication. Since we are enfolded in the whole, we are participants by nature and always in relation. Seen from this perspective, our relationships are us.
Who we are is continuously being defined by the relationships that give us identity. Life and meaning are processes of expression and interpretation of the whole through its parts. We are participating parts of this process, therefore, as Brian Goodwin once told me:
“The point is not to understand the meaning of life, but to live a life of meaning.” — Prof. Brian Goodwin
I believe that living a life of meaning is about paying attention to our relationship to the community of life in its entirety. It is about appropriate participation in the process of life — participation in meaning unfolding.
Life is a universe of meaning unfolding through relationship. Or, as Thomas Berry has put it: “The universe is not a collection of objects, but a communion of subjects.”16 Meaning is not something we need to search for or that we may encounter at some time in the future. The only place that meaning can unfold is in our relation to the living present — in the relations we participate in from day to day.
“The notion of the living present is one in which the future, as expectation and anticipation, is in the detail of actual interactions [relation] taking place now, as is the past as reconstructions in this process of memory. There is no dismissing the past or the future here, nor is there any distraction from the present of what we are doing together.”17
Seen this way, the universe is a continuous and diverse interpretation of meaning, in the living present, life manifesting as the diversity of expressions of individual identity through relationship. The universe is the whole coming forth into and through its parts. Bortoft argues:
“There is only one meaning. It is the one meaning that can manifest itself in different forms and therefore there is difference within meaning. The differences are elicited by the different cultural and historical contexts and personal situations [read relation], which that meaning appears in … no matter how many times the work is understood it is always the one meaning … coming into being in the happening of understanding.” 18 — Henri Bortoft
Bortoft refers to the “one that appears as the many”, as the “intensive dimension of one”. He emphasizes that the diversity of interpretations of the work (whole) do not fragment it, because what we see as diversity is in fact dynamical living unity. Just like Maturana and Varela proposed to equate the process of cognition with the process of life and, in the reflective consciousness of humans, with the process of knowing, Bortoft shows that meaning is akin to life. Like Goethe and Gadamer before him, he takes the position that meaning is inexhaustible, but neither predetermined nor indeterminate. Meaning in Gadamer’s hermeneutics is rather like Goethe described the world:
“She is complete but ever unfinished”. — Goethe
Bortoft shows clearly that the world is not an object, but a process. We participate in a world that is complete but unfinished. The way that meaning continuously manifests itself through the diversity of its interpretations reflects the way the world continuously manifests as a “dynamical unity producing itself in different modes according to language.” What is important to understand here, as Bortoft emphasized, is that experiencing the world in this way “is not perspectivism. It is manifestationism.”19 Each interpretation is a manifestation of the whole, each experience of the world is the world.
Language and all other forms of communication are ways of entering into relation and thus participation in the whole. Sensory experience is fundamental for entering into relation with the world. Just as described by Aristotle’s self-actualization thesis, where the event of building manifests the identities of the builder and the building, in the event of perception the identities of the perceiver and the perceived manifest.
Through the same reciprocally co-creative relationship of the whole and the part, described above, the identities of the perceiver and of the perceived manifest within the web of relationships that connects them to each other and the whole. It is through relationship that we bring forth a world.
In his book The Spell of the Sensuous, David Abram explores our relationship as sensing, embodied beings with the living world we participate in. He points out that in the event of perception as it is experienced “neither the perceiver nor the perceived are wholly passive…. To the sensing body no thing presents itself as utterly passive or inert.”20
Experientially considered, the world is a living presence to us; distinctions like animate and inanimate, active and passive arise when we interpret experience conceptually.
Entering into relation, including through all forms of communication, transforms the web of relationships, thereby changing (causing, defining or terminating) individual identities, their physical manifestation, as well as an identity’s particular way of interpreting meaning or manifesting the whole.
Interpreting Merleau-Ponty, David Abram writes: “Experientially considered, language is no more the special property of the human organism than it is expression of the animate earth that enfolds us.”21 He points out that:
“Communicative meaning is always, in its depths, affective; it remains rooted in the sensual dimension of experience, born of the body’s native capacity to resonate with other bodies and with the landscape as a whole. Linguistic meaning is not some ideal and bodiless essence that we arbitrarily assign to physical sound or word and then toss out into the external world. Rather meaning sprouts in the very depth of the sensory world, in the heat of meeting, encounter, participation.”22 — David Abram
Yet another very important mental construct that influences the way we see the world fundamentally is our understanding of time and space. As three prominent phenomenologists, Husserl, Merleau-Ponty and Heidegger, have concluded independently, in direct pre-conceptual experience it is impossible to distinguish time and space.23 Our understanding of time and space is really at the heart of it all, but it would go beyond the bounds of this dissertation to mention more than the most fundamental points.
The philosopher and Zen master David Loy points out that “the objectification of time is also the subjectification of the self, which thus appears only to discover itself in the anxious position of being a nontemporal entity inextricably trapped in time.”24
In other words, the idea of self as something permanent and unchanging, as a thing and not a process of changing identity in relation, creates linear time as separate from space.
Indigenous, oral cultures, living by the cycles of day and night, the moon and the seasons, perceive time as cyclical. “Unlike linear time, time conceived as cyclical cannot be readily abstracted from the spatial phenomena that exemplify it — from, for instance, the circular trajectories of the sun, the moon, and the stars. Unlike a straight line, a circle demarcates and encloses a spatial field.”25
Our visible experiential space, as David Abram points out, is also demarcated by a circle — the horizon. He concludes: “Thus cyclical time, the experiential time of an oral culture, has the same shape as perceivable space. And the two circles are in truth one.”26 Our predominant understanding of linear space and time is yet another example of rigid either/or thinking. David Abram believes:
“The conceptual separation of time and space — the literate distinction between a linear, progressive time and a homogenous, featureless space — function to eclipse the enveloping earth from human awareness. As long as we structure our lives according to assumed parameters of static space and a rectilinear time, we will be able to ignore, or overlook, our thorough dependence upon the earth around us. Only when space and time are reconciled into a single, unified field of phenomena does the encompassing earth become evident, once again, in all its power and its depth, as the very ground and horizon of all our knowing.”27 — David Abram
I would like to stress the fundamental importance of direct sensory experience as the primary mode of entering into relationship and knowing the world. Although, as human beings, we predominantly live in a world that manifests itself through language and the mental concepts we express, language can only remain meaningful if it reflects our embodied, sensory experience of the world and allows us to express the direct, intuitive understanding of that world-whole, as it is reflected in us — the world-part — through direct experience. David Abram reminds us that:
“Language is … an evolving medium we collectively inhabit, a vast topological matrix in which speaking bodies are generative sites, vortices where the matrix itself is continually being spun out of the silence of sensorial experience…Merleau-Ponty comes in his final writings to affirm that it is first the sensuous, perceptual world that is relational and web-like in character, and hence that the organic, interconnected structure of any language is an extension or echo of the deeply interconnected matrix of sensorial reality itself. Ultimately, it is not human language that is primary, but rather the sensuous, perceptual life-world, whose wild, participatory logic ramifies and elaborates itself in language.”28 — David Abram
The distrust of the senses arose out of Descartes’ cogito ergo sum and is an expression of the resulting mind-body dualism. It has critically influenced Reductionist science, our culture and the way we experience the world. Phenomenologists, like Merleau-Ponty, do not follow the tradition of Reductionist science attempting to explain the world objectively, but aim to describe “as closely as possible the way the world makes itself evident to awareness, the way things arise in our direct sensorial experience.”29
By accepting one way of seeing and interpreting, that of Reductionist science and its dualistic perspective, as the only way of seeing, we have become epistemologically rigid. Not only have we locked ourselves into our bodies through a rigidly adhered-to, either/or-type boundary between the self and the world, between subject and object. We have retreated even further in denying our own subjective sensory experience as a legitimate way of entering into relation and understanding (presence-ing meaning in) the world.
Our reduced sense of self is hiding out in our minds, busily re-enforcing the alienating prison of a mentally constructed objective reality that isolates us from the world of qualities and meaning in which we actually live and experience.
By asserting that only the measurable and quantifiable is real and doubting our own sensory, embodied experience of the world we have fallen victim to yet another rigid dualism — the separation of mind and body.
This same dualism finds expression in separating mind and matter, as well as energy and matter. Yet boundaries are never as rigid as they appear when viewed from within the dualist mindset. Seen more dynamically, nothing purely excludes its dualist opposite in the process of un/en-folding by which the whole transforms, as it manifests itself in diversity.
What one way of seeing labels as opposites, in another, more dynamical way of seeing, merely reflects the relationship of the whole and the part. Each one containing the other, but not as two, rather as potential manifestations of the one unity, depending on the complete but ever unfinished web of relationships through which temporary identities manifest and express themselves. Let me briefly exemplify the dissolution of these rigid either/or boundaries taking place in the history of science. I will dedicate more attention to this in the next chapter.
Almost one hundred years ago, quantum mechanics and the understanding of quantum entanglement provided a scientific basis for accepting the fundamental interconnectedness of all matter, and relativity theory established how mass and energy can transform into each other (Einstein’s famous E = mc²).
With regard to the split between mind and matter, Fritjof Capra believes that the Santiago theory of cognition is the first scientific theory that overcomes this division. He explains that by regarding mind not as a thing but as a process — the process of cognition — and by identifying this process as the process of life, “mind and matter no longer appear to belong to two separate categories, but can be seen as representing two complementary aspects of the phenomenon of life…”30
What is important to realize here is that “we are accustomed to thinking of mind as if it were inside us — ‘in our heads’. But it is the other way around. We live within a dimension of mind which is, for the most part, as invisible to us as the air we breathe.”31 There is no rigid, either/or boundary between mind and matter, or as David Abram put it:
“Clearly, a wholly immaterial mind could neither see nor touch things — indeed, could not experience anything at all. We can experience things — can touch, hear, and taste things — only because, as bodies, we are ourselves included in the sensible field, and have our own texture, sounds, and tastes. We can perceive things at all only because we ourselves are entirely a part of the sensible world that we perceive! We might as well say that we are organs of this world, flesh of its flesh, and that the world is perceiving itself through us.”32 — David Abram
In the predominant, dualistic way of seeing, “we consider knowledge to be a subjective state of the knower, a modification of consciousness which in no way affects the phenomenon that is known”, which we regard to be the same “whether it is known or not.” The dynamical way of seeing, regards the knower not as “an onlooker but a participant in nature’s processes, which now act in consciousness to produce the phenomenon consciously as they act externally to produce it materially.”33
As participants in the complex and dynamic processes of nature, as parts reflecting the whole, we are transformed by and transform the world through our way of knowing, which expresses and guides our way of participating.
What we have to realize here is that what we become aware of depends on both the sensory and the non-sensory aspects of cognitive perception. While it is our direct, embodied, sensory experience that allows us to enter into relationship with the world in the first place, our way of seeing, the organizing ideas we employ, shapes what we become aware of and thus the world we bring forth. This has profound implications, as it can help us to understand that “all scientific knowledge …is a correlation of what is seen with the way it is seen.”34
One of the most promising and daring attempts to provide a philosophical framework to integrate most of what I have discussed above was recently provided by the philosopher Christian de Quincey in his book Radical Nature — Rediscovering the Soul of Matter.
De Quincey offers both a new ontological basis and a new epistemology, which complement the relation-focused understanding of participation in the un/en-folding of the whole, which I discussed above. As I have mentioned, I believe we live in a fundamentally paradoxical universe, since we all need to come to terms with our daily experience of subjectivity in what we otherwise describe as an objective universe.
I agree with De Quincey that rather than attempting to remove the paradox, “our task will be to move into it, and know it in a new way.” He proposes a new postmodern “paradox paradigm” that “asserts the primacy of extrarational experience.”35 He argues for a participatory and intersubjective epistemology — “a way of knowing that takes us into the heart of mystery, and invites the paradox of consciousness into our very being.”36 De Quincey believes that:
“Epistemologically, we must engage the paradox. “Paradox” means, literally, “beyond” (para) “opinion or belief” (doxa). Paradox, then, takes us into the “space” that is beyond belief — into experience itself. Ontologically, it invites us into the ambiguity of being — an ambiguity of neither this-or-that nor this-and-that, neither either/or nor both/and, but all of these together.”37 — Christian De Quincey
De Quincey calls his philosophical framework Radical Naturalism. Central to his argument is the fundamental assumption: “It is inconceivable that sentience (subjectivity; consciousness) could ever emerge or evolve from wholly insentient (objective, physical) matter.” He therefore argues: “the assumption of consciousness and matter as coextensive and coeternal is the most adequate ‘postmodern’ solution to the question of consciousness in the physical world.
Where materialism, idealism, and dualism fall short as adequate ontologies for a science of consciousness, radical naturalism provides a coherent foundation. The central tenet of radical naturalism is that matter is intrinsically sentient — it is both subject and object.
Radical naturalism confronts head-on the essential paradox of consciousness: We exist as embodied subjects — as subjective objects or feeling matter.” 38 The proposed philosophical framework acknowledges “the ontological and epistemological primacy of embodied feeling.”39
De Quincey uses the ancient Greek concept of entelechy to describe mind as ‘a becoming of matter’. He regards mind as “neither outside nor inside of matter” but rather as “constituent of the very essence of matter — interior to its being.”40
De Quincey believes that “the Cartesian error was to identify consciousness as a kind of substance, and not to recognize it as a process or as dynamic form inherent in matter itself. Mind is the self-becoming of self-organization — the self-creation — of matter. Without this, matter could never produce consciousness.”41 De Quincey summarizes the implications of his proposed philosophical framework as follows:
“With this new perspective we can now embrace the actuality of consciousness and meaning in a self- organising cosmos. Elements of the new story will include: (1) complementarity rather than dualism, (2) organicism rather than mechanism, (3) holism complementing reductionism, (4) interconnectedness rather than separateness, (5) process rather than things, (6) synchronicity as well as causality, (7) creativity rather than certainty, (8) participation and entanglement rather than objectivity. But most of all, the new cosmology will emphasize (9) that matter is inherently sentient all the way down, and (10) that, therefore, nature, the cosmos — matter itself — is inherently and thoroughly meaningful, purposeful, and valuable in and for itself. Nature, we must see, is sacred.”42 — Christian De Quincey
It may now be more obvious why I found it necessary to set a philosophical context for this dissertation, why it is so important that we learn to employ multiple ways of seeing and to integrate the dualistic/Reductionist into a wider holistic, or non-dual context. It is important that we learn to embrace and live in the paradox. The rigid either/or logic of dualism is at the heart of our alienation from nature and the whole. It is also at the heart of the consequential environmental, social and cultural crisis. As David Loy explains:
“[In dualism] the self is understood to be the source of awareness and therefore of all meaning and value, which is to devalue the world/nature into merely that field of activity wherein the self labours to fulfil itself. … the alienated subject feels no responsibility for the objectified other and attempts to find satisfaction through projects that usually merely increase the sense of alienation. The meaning and purpose sought can be attained only in a relationship whereby nonduality with the objectified other is re-established.”43 — David Loy
The dualist distinction between the self and the world allowed for the development of a detached, Reductionist science in the first place. Bortoft argued that the existing ontological gulf between science and its object is a fundamental prerequisite of Reductionist science, as it allows for a science of measuring and experimenting, in which an “object appears to consciousness, it does not appear in consciousness.”44
I would add that it is precisely this ontological gulf which enables the moral and ethical detachment with which modern science continues to evade responsibility for its participation in bringing about the current environmental, social and economic crisis we observe worldwide.
Only if we understand our fundamental interconnectedness as participants in the process of life, in relationship and conviviality with the community of life, only then will meaning appear.
Only as we learn to participate appropriately in the whole, at the appropriate spatio-temporal scale, will we be able to sustain our participation in the process. This is the heart of sustainability.
If we truly understand our participation in the whole we don’t have to fear the future and can love the present. We will become aware of our responsibility to participate appropriately and meaningfully through focusing our attention to meaningful relation with the community of life.
Once we understand the meaning of the whole un/en-folding in relation with and through the parts, we have accomplished the task set by E.F. Schumacher at the beginning of this chapter “to look at the world and see it whole.” Then, as Erwin Schrödinger put it:
“You can throw yourself flat on the ground, stretched out upon Mother Earth, with the certain conviction that you are one with her and she with you. You are as firmly established, as invulnerable as she is, indeed a thousand times firmer and more invulnerable. As surely as she will engulf you tomorrow, so surely will she bring you forth anew to new striving and suffering. And not merely ‘some day’: now, today, every day she brings you forth, not once but a thousand times over. For eternally and always there is only now, one and the same now. The present is the only thing that has no end.”45 — Erwin Schrödinger
[Note: This is an excerpt from my 2002 masters dissertation in Holistic Science at Schumacher College. It addresses some of the root causes of our current crises of unsustainability. If you are interested in the references you can find them here. The research I did for my masters thesis directly informed my 2006 PhD thesis in ‘Design for Human and Planetary Health: A Holistic/Integral Approach to Complexity and Sustainability’.]
—
Daniel Christian Wahl — Catalyzing transformative innovation in the face of converging crises, advising on regenerative whole systems design, regenerative leadership, and education for regenerative development and bioregional regeneration.
Author of the internationally acclaimed book Designing Regenerative Cultures
|
https://medium.com/age-of-awareness/a-philosophy-of-participation-in-dynamic-wholeness-b5923c063a99
|
['Daniel Christian Wahl']
|
2020-02-05 12:52:43.382000+00:00
|
['Consciousness', 'Sustainability', 'Culture', 'Philosophy', 'Holistic Science']
|
|
5,119 |
Prototyping with React VR
|
Why React
One of React’s biggest innovations is that it enables developers to describe a system, such as the UI of a web or mobile app, as a set of declarative components. The power of this declarative approach is that the description of the UI is decoupled from its implementation, allowing authors to build custom “renderers” that target more platforms than just web browsers, such as hardware, terminal applications, music synthesizers, and Sketch.app.
React as a paradigm is perfect for wrapping the complexity of underlying platform APIs and providing consistent and smooth tools to the developers using them. — Jon Gold, “Painting with Code” (http://airbnb.design/painting-with-code)
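The decoupling of description from rendering can be sketched in a few lines of plain JavaScript. The helper names below (`h`, `renderToString`) are hypothetical and are not React’s actual internals; the point is simply that a declarative UI description is plain data that any renderer can walk.

```javascript
// Hypothetical sketch of the declarative idea (not React's real internals):
// a component tree is plain data, and a "renderer" is just a tree walker.
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

// The same description could be handed to a DOM renderer, a native
// renderer, or a VR renderer. Here, a toy renderer emits a string.
function renderToString(node, depth = 0) {
  if (typeof node === 'string') return '  '.repeat(depth) + node;
  const kids = node.children.map(c => renderToString(c, depth + 1));
  return ['  '.repeat(depth) + '<' + node.type + '>', ...kids].join('\n');
}

const description = h('view', { color: 'red' }, h('text', null, 'Hello'));
```

Because `description` is just data, swapping renderers never requires touching the component code — which is exactly what lets React target browsers, native views, Sketch documents, and VR scenes.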
React Native Under the Hood
Because React VR is built on top of React Native, let’s start with a look at how it works under the hood. React Native is built on a renderer that controls native UI on iOS and Android. The React application code runs in a JavaScript virtual machine in a background thread on the mobile device, leaving the main thread free to render the native UI. React Native provides a bridge for communication between the native layer and the JavaScript layer of the app. When the React components in your application are rendered, the React Native renderer serializes all UI changes that need to happen into a JSON-based format and sends this payload asynchronously across the bridge. The native layer receives and deserializes this payload, updating the native UI accordingly.
Diagram 1: React Native architecture.
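The bridge flow described above can be modeled as a toy in plain JavaScript. The operation names and payload shape here are invented for illustration and are not React Native’s real wire format; only the general idea — batch, serialize to JSON, deserialize, apply to native views — matches the text.

```javascript
// JS thread: batch pending UI mutations and serialize them to JSON.
function serializeBatch(updates) {
  return JSON.stringify(updates);
}

// "Native" side: deserialize the payload and apply each operation
// to a tree of native views, keyed by tag.
function applyBatch(payload, nativeTree) {
  for (const op of JSON.parse(payload)) {
    if (op.op === 'create') {
      nativeTree[op.tag] = { type: op.type, props: op.props };
    } else if (op.op === 'update') {
      Object.assign(nativeTree[op.tag].props, op.props);
    }
  }
  return nativeTree;
}

// One render pass might produce a batch like this (illustrative shape):
const payload = serializeBatch([
  { op: 'create', tag: 1, type: 'RCTView', props: { opacity: 1 } },
  { op: 'update', tag: 1, props: { opacity: 0.5 } },
]);
const tree = applyBatch(payload, {});
```

The asynchrony matters: because the payload crosses the bridge as serialized data, the JavaScript thread never blocks the thread that renders native UI.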
Over the past year at Airbnb, we’ve invested heavily in React Native because we recognize the power of being able to share knowledge, engineers, and code across platforms. In November, we launched our new Experiences platform, which is largely written in React Native on our iOS and Android apps, and we formed a full-time React Native Infrastructure team to continue this investment.
Learn once, write anywhere. — Tom Occhino, “React Native: Bringing modern web techniques to mobile”
React VR is Built on React Native
React VR’s architecture mirrors that of React Native, with the React application code running in a background thread — in this case, a Web Worker in the web browser. When the application’s React components are rendered, React VR utilizes the React Native bridge to serialize any necessary UI changes and pass them to the main thread, which in this case is the browser’s main JavaScript runtime. Here, React VR utilizes a library from Oculus called OVRUI to translate the payload of pending UI updates into Three.js commands, rendering a 3D scene using WebGL.
Diagram 2: React VR is built directly on top of React Native
Finally, React VR utilizes WebVR’s new navigator.getVRDisplays() API to send the 3D scene to the user’s head mounted display, such as an Oculus Rift, HTC Vive, or Samsung Gear VR. WebVR, a new standard being spearheaded by Mozilla, is supported in recent builds of major web browsers. Check out webvr.info for the latest information on browser support.
Because React VR implements many of the same public APIs as React Native, we have access to the same vast ecosystem of patterns, libraries, and tools. It will feel familiar to any developer who has built React or React Native apps. We were able to get a VR prototype up and running quickly; in no time at all, we scaffolded a basic React application, set up Redux, and began hitting our production JSON API for sample data.
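For reference, the entry point that `react-vr init` scaffolds at the time of writing looks roughly like the sketch below. It requires the `react-vr` package and only runs inside the React VR runtime, so treat it as illustrative rather than copy-paste runnable here.

```javascript
import React from 'react';
import { AppRegistry, asset, Pano, Text, View } from 'react-vr';

class WelcomeToVR extends React.Component {
  render() {
    return (
      <View>
        {/* A 360° photo sphere surrounding the viewer */}
        <Pano source={asset('chess-world.jpg')} />
        {/* Text placed 3 meters in front of the camera */}
        <Text
          style={{
            fontSize: 0.8,
            layoutOrigin: [0.5, 0.5],
            transform: [{ translate: [0, 0, -3] }],
          }}>
          hello
        </Text>
      </View>
    );
  }
}

AppRegistry.registerComponent('WelcomeToVR', () => WelcomeToVR);
```

Note how close this is to a plain React Native component — the only VR-specific parts are `<Pano>` and the 3D `translate` transform.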
With hot module reloading and Chrome Dev tools debugging, we could iterate nearly as fast as in React web and React Native development, which allowed us to throw a bunch of UI ideas at the proverbial wall to see what would stick.
Using React (JavaScript) has turned out to be a bigger win for VR app development than I expected — UI dev is several x faster than Unity. — John Carmack, Oculus CTO and original creator of Quake
Flexbox in VR
React VR inherits React Native’s flexbox-based layout and styling engine, with a few tweaks to allow transforms in 3 dimensions.
Flexbox support for React Native is provided by Yoga, a cross-platform layout engine created by Facebook to simplify mobile development by translating flexbox directives into layout measurements. Because it’s written in C, Yoga (née css-layout) can be embedded natively in Objective-C and Java mobile apps. React VR also uses Yoga for flexbox layout. “But how?” you ask, “It’s written in C!” The React VR team has accomplished this by using Emscripten to cross-compile the Yoga C code into JavaScript. How cool is that?
This is a powerful feature of React VR: developers can use the same styling and layout system across web, React Native, and VR, which opens the doors to directly sharing layout styles across these platforms.
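As a small illustration, a style object like the one below can be passed unchanged to components on React Native and React VR, since both resolve it through Yoga. The one caveat (noted in the React VR docs) is that VR layout units are meters rather than density-independent pixels, so absolute sizes usually need adjusting even though the flexbox directives carry over as-is.

```javascript
// A flexbox style that is valid for both React Native and React VR
// components, because both platforms resolve layout through Yoga.
const styles = {
  row: {
    flexDirection: 'row',            // lay children out horizontally
    justifyContent: 'space-between', // push children to the edges
    alignItems: 'center',            // center on the cross axis
    flexWrap: 'wrap',                // wrap onto the next line if needed
  },
};
```

Flex directives like these are relative, which is why they survive the jump between platforms better than pixel-based positioning would.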
Sharing Primitives
Like React Native, React VR provides a set of basic primitives used to construct UI — <View>, <Text>, <Image>, StyleSheet — in addition to adding VR-specific primitives of its own, such as <Pano> and <Box>, among others. This allowed us to drop some of our existing React Native components into VR, rendered on a 2D surface.
This is hugely exciting because we’ve built our UI component system upon react-primitives, a library we developed for sharing React components across platforms by providing the basic <View>, <Text>, <Image>, etc. primitives on a variety of platforms, including web, native, and Sketch.app (via our react-sketchapp project).
This means we can use the buttons, rows, icons and more directly in VR, keeping Airbnb design language consistent without having to rewrite it all from scratch.
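A sketch of what such a shared component might look like — the component, prop names, and style values below are hypothetical, not Airbnb’s actual design system code. Since `react-primitives` exposes the same `<View>`/`<Text>`/`StyleSheet` surface everywhere, the identical source renders on web, native, Sketch, and (on a 2D surface) in VR.

```javascript
import React from 'react';
import { View, Text, StyleSheet } from 'react-primitives';

// Hypothetical shared button built only from cross-platform primitives.
const styles = StyleSheet.create({
  button: { paddingVertical: 12, paddingHorizontal: 24, borderRadius: 4 },
  label: { textAlign: 'center' },
});

const Button = ({ label }) => (
  <View style={styles.button}>
    <Text style={styles.label}>{label}</Text>
  </View>
);

export default Button;
```

The key constraint is discipline: as long as a component only touches the shared primitive API, it costs nothing extra to bring it to a new render target like VR.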
Check out our engineer Leland Richardson’s talk at React Europe for a more in-depth look at the promise of react-primitives, below.
Airbnb engineer Leland Richardson’s talk at React Europe: “React as a Platform: A path towards a truly cross-platform UI”
<CylindricalLayer />
As you could imagine, placing 2D content onto a flat plane in 3D space often falls short of an optimal viewing experience. Currently, many VR apps solve this by rendering 2D UI onto a cylindrical plane curved in front of the viewer, giving it a “2.5D” feel.
A screenshot of Oculus Home captured in Gear VR, showing its use of a cylindrical layer for displaying 2D content with a “2.5D” feel
|
https://medium.com/airbnb-engineering/prototyping-with-react-vr-4d5ab91b6f5a
|
[]
|
2018-05-03 05:59:44.401000+00:00
|
['Product', 'React', 'Reactvr', 'VR', 'Prototyping']
|
5,120 |
Top 10 Features in Azure Synapse Analytics Workspace
|
You also have the ability to Copy Data from multiple formats:
Image by Author
3. Data Flow
This is probably my favorite feature in Azure Synapse because it lowers the barrier to cleansing data. I’m a big proponent of making it easier to get things done (I think everyone should be :D). Data Flow brings SQL right to your doorstep with the ability to perform common tasks like JOINs, UNIONs, Lookups, SELECT, Filter, Sort, Alter and much more. All with little to no code.
Image by Author
It also gives you a good visual of your data cleansing process. Take a look an example below.
Image by Author
4. Pipeline
Once you’ve created a Copy Job or Data Flow, you can run it through a pipeline. This gives you the chance to automate the process by scheduling the job with a trigger.
Image by Author
or
adding other activities to your pipeline, like a Spark Job, Azure Function, Stored Procedure, Machine Learning, or Conditionals (If, ForEach, etc.).
Image by Author
5. Write SQL Scripts
Under SQL Script you can write your familiar SQL statements. It has the flexibility to connect to external data sources outside of your Synapse Workspace, e.g. Hadoop, Azure Data Lake Store or Azure Blob Storage. You can also connect to public datasets. Check out an example below.
Visualize your SQL Output
In the results window of a SQL query, you have the option to visualize your results by changing your view menu from Table to Chart. This lets you customize the presentation: for example, the query I ran below gives me the option to view my result as a Line Chart; I can also edit the legends as I please and give it a label. Once I’m done I can save the chart as an image for further use outside of Azure Synapse.
|
https://towardsdatascience.com/top-10-features-in-azure-synapse-analytics-workspace-ec4618a7fa69
|
['Dayo Bamikole']
|
2020-08-26 13:17:22.612000+00:00
|
['Spark', 'Big Data', 'Azure', 'Azure Synapse Analytics', 'Data Warehouse']
|
5,121 |
Partitioning in Apache Spark
|
First, some words about the most basic concept — a partition:
Partition — a logical chunk of a large data set.
Very often the data we are processing can be separated into logical partitions (i.e. payments from the same country, ads displayed for a given cookie, etc.). In Spark, they are distributed among nodes when shuffling occurs.
Spark can run 1 concurrent task for every partition of an RDD (up to the number of cores in the cluster). If your cluster has 20 cores, you should have at least 20 partitions (in practice 2–3x more). On the other hand, a single partition typically shouldn’t contain more than 128MB, and a single shuffle block cannot be larger than 2GB (see SPARK-6235).
In general, more numerous partitions allow work to be distributed among more workers, but fewer partitions allow work to be done in larger chunks (and often quicker).
Spark’s partitioning feature is available on all RDDs of key/value pairs.
Why care?
For one, quite important reason — performance. By having all relevant data in one place (node) we reduce the overhead of shuffling (need for serialization and network traffic).
Also, understanding how Spark deals with partitions allows us to control the application parallelism (which leads to better cluster utilization — lower costs).
But keep in mind that partitioning will not be helpful in all applications. For example, if a given RDD is scanned only once, there is no point in partitioning it in advance. It’s useful only when a dataset is reused multiple times (in key-oriented situations using functions like join() ).
We will use the following list of numbers for investigating the behavior.
Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Playing with partitions
Let’s start by creating a local context with only one thread allocated and parallelizing a collection using all defaults. We are going to use the glom() function, which exposes the structure of the created partitions.
From API: glom() - return an RDD created by coalescing all elements within each partition into a list.
Each RDD also possesses information about its partitioning schema (you will see later that it can be set explicitly or derived via some transformations).
From API: partitioner - inspect partitioner information used for the RDD.
Output
Number of partitions: 1
Partitioner: None
Partitions structure: [[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]
Ok, so what happened under the hood?
Spark uses different partitioning schemes for various types of RDDs and operations. In the case of using parallelize(), data is evenly distributed between partitions using their indices (no partitioning scheme is used).
If there is no partitioner, the partitioning is not based upon characteristics of the data; the distribution is random but uniform across nodes. Different rules apply for various data sources and structures (i.e. when loading data using textFile() or using tuple objects). A good summary is provided here.
If you look inside the parallelize() source code you will see that the number of partitions can be determined either by setting the numSlices argument or by the spark.defaultParallelism property (which reads context information).
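The even, index-based distribution can be illustrated with a small pure-Python sketch of the slicing logic (a simplified reconstruction for illustration, not PySpark's actual implementation):

```python
def slice_collection(data, num_slices):
    """Split a collection into num_slices index-based chunks,
    mimicking how parallelize() distributes elements."""
    n = len(data)
    return [data[int(i * n / num_slices):int((i + 1) * n / num_slices)]
            for i in range(num_slices)]

nums = list(range(10))
print(slice_collection(nums, 2))
# [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
print(slice_collection(nums, 15))
# [[], [0], [1], [], [2], [3], [], [4], [5], [], [6], [7], [], [8], [9]]
```

Note how asking for more slices than elements produces empty chunks — the same behavior we will see below when the number of partitions exceeds the number of records.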
Now let’s try to allow our driver to use two local cores.
Output
Default parallelism: 2
Number of partitions: 2
Partitioner: None
Partitions structure: [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
Ok, that worked as expected — the data was distributed across two partitions and each will be executed in a separate thread.
But what will happen when the number of partitions exceeds the number of data records?
Output
Number of partitions: 15
Partitioner: None
Partitions structure: [[], [0], [1], [], [2], [3], [], [4], [5], [], [6], [7], [], [8], [9]]
You can see that Spark created the requested number of partitions, but most of them are empty. This is bad because the time needed to prepare a new thread for processing data (one element) is significantly greater than the processing time itself (you can analyze it in the Spark UI).
Custom partitions with partitionBy()
partitionBy() transformation allows applying custom partitioning logic over the RDD.
Let’s try to partition the data further by taking advantage of domain-specific knowledge.
Warning — to use partitionBy() the RDD must consist of tuple (pair) objects. It's a transformation, so a new RDD will be returned. It's highly advisable to persist it for more optimal later usage.
Because partitionBy() requires data to be in key/value format we will need to transform the data.
In PySpark an object is considered valid for PairRDD operations if it can be unpacked as follows k, v = kv . You can read more about the requirements here.
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56fabd0>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]
You can see that now the elements are distributed differently. A few interesting things happened:
parallelize(nums) - we are transforming Python array into RDD with no partitioning scheme, map(lambda el: (el, el)) - transforming data into the form of a tuple, partitionBy(2) - splitting data into 2 chunks using default hash partitioner,
Spark used a partitioner function to determine which partition to assign each record to. It can be specified as the second argument to partitionBy() . The partition number is then evaluated as follows: partition = partitionFunc(key) % num_partitions .
By default PySpark implementation uses hash partitioning as the partitioning function.
Let’s perform an additional sanity check.
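A pure-Python reconstruction of that check (the original snippet was not preserved; for small non-negative integers Python's hash() is the identity, so the default hash partitioner reduces to key % num_partitions):

```python
num_partitions = 2
for el in range(10):
    # default hash partitioner: partition = hash(key) % num_partitions
    print(f"Element: [{el}]: {el} % {num_partitions} = partition {hash(el) % num_partitions}")
```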
Output
Element: [0]: 0 % 2 = partition 0
Element: [1]: 1 % 2 = partition 1
Element: [2]: 2 % 2 = partition 0
Element: [3]: 3 % 2 = partition 1
Element: [4]: 4 % 2 = partition 0
Element: [5]: 5 % 2 = partition 1
Element: [6]: 6 % 2 = partition 0
Element: [7]: 7 % 2 = partition 1
Element: [8]: 8 % 2 = partition 0
Element: [9]: 9 % 2 = partition 1
But let’s get into a more realistic example. Imagine that our data consists of various dummy transactions made across different countries.
We know that further analysis will be performed analyzing many similar records within the same country. To optimize network traffic it seems to be a good idea to put records from one country in one node.
To meet this requirement, we will need a custom partitioner:
Custom partitioner — function returning an integer for given object (tuple key).
Output
1
1
4
By validating our partitioner we can see what partitions are assigned for each country.
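Such a partitioner could be sketched as below (a hypothetical reconstruction: Python's built-in hash() of strings is salted per process, so zlib.crc32 is used here for determinism, and the resulting partition numbers will differ from those shown above):

```python
import zlib

NUM_PARTITIONS = 4

def country_partitioner(country):
    # deterministic hash so the same country always lands in the same partition
    return zlib.crc32(country.encode("utf-8")) % NUM_PARTITIONS

for c in ("Poland", "Germany", "United Kingdom"):
    print(c, "->", country_partitioner(c))
```

With an RDD it would be applied as rdd.partitionBy(4, country_partitioner).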
Pay attention to potential data skew. If some keys are overrepresented in the dataset it can result in suboptimal resource usage and potential failure.
Output
Number of partitions: 4
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7bd0>
Partitions structure: [[('United Kingdom', {'country': 'United Kingdom', 'amount': 100, 'name': 'Bob'}), ('United Kingdom', {'country': 'United Kingdom', 'amount': 15, 'name': 'James'}), ('Germany', {'country': 'Germany', 'amount': 200, 'name': 'Johannes'})], [], [('Poland', {'country': 'Poland', 'amount': 51, 'name': 'Marek'}), ('Poland', {'country': 'Poland', 'amount': 75, 'name': 'Paul'})], []]
It worked as expected: all records from a single country are within one partition. We can do some work directly on them without worrying about shuffling by using the mapPartitions() function.
From API: mapPartitions() converts each partition of the source RDD into multiple elements of the result (possibly none). One important usage can be some heavyweight initialization (that should be done once for many elements). Using mapPartitions() it can be done once per worker task/thread/partition instead of running map() for each RDD data element.
In the example below, we will calculate the sum of sales in each partition (in this case such operations make no sense, but the point is to show how to pass data into mapPartitions() function).
Output
Partitions structure: [[('Poland', {'country': 'Poland', 'amount': 51, 'name': 'Marek'}), ('Germany', {'country': 'Germany', 'amount': 200, 'name': 'Johannes'}), ('Poland', {'country': 'Poland', 'amount': 75, 'name': 'Paul'})], [('United Kingdom', {'country': 'United Kingdom', 'amount': 100, 'name': 'Bob'}), ('United Kingdom', {'country': 'United Kingdom', 'amount': 15, 'name': 'James'})], []]
Total sales for each partition: [326, 115, 0]
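The per-partition summing function could be sketched as follows (a reconstruction — the original snippet was not preserved; the record shape follows the (country, dict) tuples shown in the output above):

```python
def sum_sales(iterator):
    # runs once per partition; yields a single element: the partition's total
    yield sum(record["amount"] for _, record in iterator)

# e.g. rdd.mapPartitions(sum_sales).collect() would return one total per partition
partition = [("Poland", {"country": "Poland", "amount": 51, "name": "Marek"}),
             ("Germany", {"country": "Germany", "amount": 200, "name": "Johannes"}),
             ("Poland", {"country": "Poland", "amount": 75, "name": "Paul"})]
print(list(sum_sales(iter(partition))))  # [326]
```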
Working with DataFrames
Nowadays we are all advised to abandon operations on raw RDDs and use structured DataFrames (or Datasets if using Java or Scala) from Spark SQL module. Creators made it very easy to create custom partitioners in this case.
Output
Number of partitions: 2
Partitioner: None
Partitions structure: [[Row(amount=100, country=u'United Kingdom', name=u'Bob'), Row(amount=15, country=u'United Kingdom', name=u'James')], [Row(amount=51, country=u'Poland', name=u'Marek'), Row(amount=200, country=u'Germany', name=u'Johannes'), Row(amount=75, country=u'Poland', name=u'Paul')]] After 'repartition()'
Number of partitions: 50
Partitioner: None
Partitions structure: [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [Row(amount=200, country=u'Germany', name=u'Johannes')], [], [Row(amount=51, country=u'Poland', name=u'Marek'), Row(amount=75, country=u'Poland', name=u'Paul')], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [Row(amount=100, country=u'United Kingdom', name=u'Bob'), Row(amount=15, country=u'United Kingdom', name=u'James')], [], [], [], []]
You can see that DataFrames expose a modified repartition() method taking a column name as an argument. When the number of partitions is not specified, a default value is used (taken from the config parameter spark.sql.shuffle.partitions ).
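The mostly-empty structure above is a direct consequence of hash partitioning on the column: with only three distinct countries, at most three of the 50 partitions can receive any rows. A rough pure-Python illustration (crc32 stands in for Spark's internal hash function, which differs):

```python
import zlib

countries = ["United Kingdom", "Poland", "Germany"]

# each distinct column value maps to exactly one of the 50 partitions
assigned = {zlib.crc32(c.encode("utf-8")) % 50 for c in countries}
print(f"{len(assigned)} of 50 partitions are non-empty")
```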
Let’s take a closer look at these methods in general.
coalesce() and repartition()
coalesce() and repartition() transformations are used for changing the number of partitions in the RDD.
repartition() is calling coalesce() with explicit shuffling.
The rules for using are as follows:
if you are increasing the number of partitions use repartition() (performing full shuffle),
if you are decreasing the number of partitions use coalesce() (minimizes shuffles)
Code below shows how repartitioning works (data is represented using DataFrames).
Output
Number of partitions: 2
Partitions structure: [[Row(num=0), Row(num=1), Row(num=2), Row(num=3), Row(num=4)], [Row(num=5), Row(num=6), Row(num=7), Row(num=8), Row(num=9)]] Number of partitions: 4
Partitions structure: [[Row(num=1), Row(num=6)], [Row(num=2), Row(num=7)], [Row(num=3), Row(num=8)], [Row(num=0), Row(num=4), Row(num=5), Row(num=9)]]
Vanishing partitioning schema
Many available RDD operations will take advantage of underlying partitioning. On the other hand operations like map() cause the new RDD to forget the parent's partitioning information.
Operations that benefit from partitioning
All operations performing shuffling data by key will benefit from partitioning. Some examples are cogroup() , groupWith() , join() , leftOuterJoin() , rightOuterJoin() , groupByKey() , reduceByKey() , combineByKey() or lookup() .
Operations that affect partitioning
Spark knows internally how each of its operations affects partitioning, and automatically sets the partitioner on RDDs created by operations that partition the data.
But there are some transformations that cannot guarantee a known partitioning — for example, calling map() could theoretically modify the key of each element.
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a5711310>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]] Number of partitions: 2
Partitioner: None
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
Spark does not analyze your functions to check whether they retain the key.
Instead, there are some functions provided that guarantee that each tuple’s key remains the same — mapValues() , flatMapValues() or filter() (if the parent has a partitioner).
Output
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]] Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
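The guarantee can be illustrated in plain Python (a simplified sketch, not PySpark itself): mapValues() touches only the values, so the key-based partition assignment remains valid, whereas map() could change the keys:

```python
pairs = [(el, el) for el in range(10)]

# mapValues(lambda v: v * 2): keys untouched, so the partitioner stays valid
doubled = [(k, v * 2) for k, v in pairs]

assert [k for k, _ in pairs] == [k for k, _ in doubled]
print(doubled[:3])  # [(0, 0), (1, 2), (2, 4)]
```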
Memory issues
Have you ever seen this mysterious piece of text — java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE ?
Looking into the stack trace you can spot that it’s not coming from within your app but from Spark internals. The reason is that in Spark you cannot have a shuffle block greater than 2GB.
Shuffle block — data transferred across stages between executors.
This happens because Spark uses ByteBuffer as an abstraction for storing blocks, and it's limited by Integer.MAX_VALUE (2 GB).
It’s especially problematic for Spark SQL (various aggregation functions) because the default number of partitions to use when doing shuffle is set to 200 (it can lead to high shuffle block sizes that can sometimes exceed 2GB).
So what can be done:
Increase the number of partitions (thereby, reducing the average partition size) by increasing the value of spark.sql.shuffle.partitions for Spark SQL or by calling repartition() or coalesce() on RDDs, Get rid of skew in data
It’s good to know that Spark uses different logic for memory management when the number of partitions is greater than 2000 (it uses a highly compressed representation of shuffle block sizes). So if you have ~2000 partitions it’s worth bumping the number up to 2001, which will result in a smaller memory footprint.
Take-aways
Spark partitioning is available on all RDDs of key/value pairs and causes the system to group elements based on a function of each key.
Features
tuples in the same partition are guaranteed to be on the same machine,
each node in the cluster can contain more than one partition,
the total number of partitions is configurable (by default set to the total number of cores on all executor nodes)
Performance tuning checklist
have the correct number of partitions (according to cluster specification) — check this and that for guidance,
consider using custom partitioners,
check if your transformations preserve partition schema,
check if memory could be optimized by bumping number of partitions to 2001
Settings
spark.default.parallelism - sets up the number of partitions to use for HashPartitioner (can be overridden when creating SparkContext object),
spark.sql.shuffle.partitions - controls the number of partitions for operations on DataFrames (default is 200)
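As a hedged sketch (the values here are illustrative only, not recommendations — tune them to your cluster), both settings can be supplied via SparkConf:

```python
from pyspark import SparkConf

conf = (SparkConf()
        .set("spark.default.parallelism", "16")       # HashPartitioner default
        .set("spark.sql.shuffle.partitions", "200"))  # DataFrame shuffle partitions
```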
As a final thought, note that the number of partitions also determines how many files will be generated by actions saving an RDD to files.
Sources
|
https://medium.com/parrot-prediction/partitioning-in-apache-spark-8134ad840b0
|
['Norbert Kozlowski']
|
2018-01-11 20:45:26.932000+00:00
|
['Data Science', 'Big Data', 'Apache Spark', 'Tutorial']
|
Title Partitioning Apache SparkContent First word basic concept — partition Partition — logical chunk large data set often data processing separated logical partition ie payment country ad displayed given cookie etc Spark distributed among node shuffling occurs Spark run 1 concurrent task every partition RDD number core cluster you’re cluster 20 core least 20 partition practice 2–3x time hand single partition typically shouldn’t contain 128MB single shuffle block cannot larger 2GB see SPARK6235 general numerous partition allow work distributed among worker fewer partition allow work done larger chunk often quicker Spark’s partitioning feature available RDDs keyvalue pair care one quite important reason — performance relevant data one place node reduce overhead shuffling need serialization network traffic Also understanding Spark deal partition allow u control application parallelism lead better cluster utilization — fewer cost keep mind partitioning helpful application example given RDD scanned point partitioning advance It’s useful dataset reused multiple time keyoriented situation using function like join use following list number investigating behavior Output 0 1 2 3 4 5 6 7 8 9 Playing partition Let’s start creating local context allocated one thread parallelizing collection using default going use glom function expose structure created partition API glom return RDD created coalescing element within partition list RDD also posse information partitioning schema see later invoked explicitly derived via transformation API partitioner inspect partitioner information used RDD Output Number partition 1 Partitioner None Partitions structure 0 1 2 3 4 5 6 7 8 9 Ok happened hood Spark us different partitioning scheme various type RDDs operation case using parallelize data evenly distributed partition using index partitioning scheme used partitioner partitioning based upon characteristic data distribution random uniformed across node Different rule apply various data 
source structure ie loading data using textFile using tuple object Good sumary provided look inside parallelize source code see number partition distinguished either setting numSlice argument using sparkdefaultParallelism property reading context information let’s try allow driver use two local core Output Default parallelism 2 Number partition 2 Partitioner None Partitions structure 0 1 2 3 4 5 6 7 8 9 Ok worked expected — data distributed across two partition executed separate thread happen number partition exceeds number data record Output Number partition 15 Partitioner None Partitions structure 0 1 2 3 4 5 6 7 8 9 see Spark created requested number partition empty bad time needed prepare new thread processing data one element significantly greater processing time analyze Spark UI Custom partition partitionBy partitionBy transformation allows applying custom partitioning logic RDD Let’s try partition data taking advantage domainspecific knowledge Warning — use partitionBy RDD must consist tuple pair object transformation new RDD returned highly adviseable persist optimal later usage partitionBy requires data keyvalue format need transform data PySpark object considered valid PairRDD operation unpacked follows k v kv read requirement Output Number partition 2 Partitioner pysparkrddPartitioner object 0x7f97a56fabd0 Partitions structure 0 0 2 2 4 4 6 6 8 8 1 1 3 3 5 5 7 7 9 9 see element distributed differently interesting thing happened parallelizenums transforming Python array RDD partitioning scheme maplambda el el el transforming data form tuple partitionBy2 splitting data 2 chunk using default hash partitioner Spark used partitioner function distinguish partition assign record specified second argument partitionBy partition number evaluated follows partition partitionFunckey numpartitions default PySpark implementation us hash partitioning partitioning function Let’s perform additional sanity check Output Element 0 0 2 partition 0 Element 1 1 2 partition 1 
Element 2 2 2 partition 0 Element 3 3 2 partition 1 Element 4 4 2 partition 0 Element 5 5 2 partition 1 Element 6 6 2 partition 0 Element 7 7 2 partition 1 Element 8 8 2 partition 0 Element 9 9 2 partition 1 let’s get realistic example Imagine data consist various dummy transaction made across different country know analysis performed analyzing many similar record within country optimize network traffic seems good idea put record one country one node meet requirement need custom partitioner Custom partitioner — function returning integer given object tuple key Output 1 1 4 validating partitioner see partition assigned country Pay attention potential data skews key overrepresented dataset result suboptimal resource usage potential failure Output Number partition 4 Partitioner pysparkrddPartitioner object 0x7f97a56b7bd0 Partitions structure United Kingdom country United Kingdom amount 100 name Bob United Kingdom country United Kingdom amount 15 name James Germany country Germany amount 200 name Johannes Poland country Poland amount 51 name Marek Poland country Poland amount 75 name Paul worked expected record single country within one partition work directly without worrying shuffling using mapPartitions function API mapPartitions convert partition source RDD multiple element result possibly none One important usage heavyweight initialization done many element Using mapPartitions done per worker taskthreadpartition instead running map RDD data element example calculate sum sale partition case operation make sense point show pas data mapPartitions function Output Partitions structure Poland country Poland amount 51 name Marek Germany country Germany amount 200 name Johannes Poland country Poland amount 75 name Paul United Kingdom country United Kingdom amount 100 name Bob United Kingdom country United Kingdom amount 15 name James Total sale partition 326 115 0 Working DataFrames Nowadays advised abandon operation raw RDDs use structured DataFrames Datasets using Java 
Scala Spark SQL module Creators made easy create custom partitioners case Output Number partition 2 Partitioner None Partitions structure Rowamount100 countryuUnited Kingdom nameuBob Rowamount15 countryuUnited Kingdom nameuJames Rowamount51 countryuPoland nameuMarek Rowamount200 countryuGermany nameuJohannes Rowamount75 countryuPoland nameuPaul repartition Number partition 50 Partitioner None Partitions structure Rowamount200 countryuGermany nameuJohannes Rowamount51 countryuPoland nameuMarek Rowamount75 countryuPoland nameuPaul Rowamount100 countryuUnited Kingdom nameuBob Rowamount15 countryuUnited Kingdom nameuJames see DataFrames expose modified repartition method taking argument column name specifying number partition default value used taken config parameter sparksqlshufflepartitions Let’s take closer look method general coalesce repartition coalesce repartition transformation used changing number partition RDD repartition calling coalesce explicit shuffling rule using follows increasing number partition use repartition performing full shuffle number partition use performing full shuffle decreasing number partition use coalesce minimizes shuffle Code show repartitioning work data represented using DataFrames Output Number partition 2 Partitions structure Rownum0 Rownum1 Rownum2 Rownum3 Rownum4 Rownum5 Rownum6 Rownum7 Rownum8 Rownum9 Number partition 4 Partitions structure Rownum1 Rownum6 Rownum2 Rownum7 Rownum3 Rownum8 Rownum0 Rownum4 Rownum5 Rownum9 Vanishing partitioning schema Many available RDD operation take advantage underlying partitioning hand operation like map cause new RDD forget parent partitioning information Operations benefit partitioning operation performing shuffling data key benefit partitioning example cogroup groupWith join leftOuterJoin rightOuterJoin groupByKey reduceByKey combineByKey lookup Operations affect partitioning Spark know internally it’s operation affect partitioning automatically set partitioner RDDs created operation 
A transformation on partitioned data cannot always be guaranteed to produce a known partitioning — for example, calling map could theoretically modify the key of each element.
Output:
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a5711310>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]
Number of partitions: 2
Partitioner: None
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
Spark does not analyze your functions to check whether they retain the keys. Instead, it provides functions that guarantee that each tuple's key remains the same — mapValues, flatMapValues, and filter (which keep the parent's partitioner).
Output:
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 2), (4, 4), (6, 6), (8, 8)], [(1, 1), (3, 3), (5, 5), (7, 7), (9, 9)]]
Number of partitions: 2
Partitioner: <pyspark.rdd.Partitioner object at 0x7f97a56b7d90>
Partitions structure: [[(0, 0), (2, 4), (4, 8), (6, 12), (8, 16)], [(1, 2), (3, 6), (5, 10), (7, 14), (9, 18)]]
Memory issues
Have you ever seen this mysterious piece of text: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE? Looking at the stack trace, you may have spotted that it comes not from within your app but from Spark internals. The reason is that Spark cannot shuffle a block greater than 2GB. A shuffle block is data transferred across stages between executors. This happens because Spark uses a ByteBuffer abstraction for storing blocks, and it is limited to Integer.MAX_VALUE (2 GB). It's especially problematic for Spark SQL, where the default number of partitions used by various aggregation functions for shuffles is set to 200, which can lead to shuffle block sizes that sometimes exceed 2GB. What can be done:
- Increase the number of partitions (thereby reducing the average partition size) by increasing the value of spark.sql.shuffle.partitions for Spark SQL, or by calling repartition or coalesce on RDDs.
- Get rid of skewed data.
It's also good to know that Spark uses different logic for memory management when the number of partitions is greater than 2000 (it uses a more highly compressed structure), so if you are close to 2000 partitions it's worth bumping the number to 2001, which results in a smaller memory footprint.
Takeaways
Spark partitioning is available on all RDDs of key/value pairs and causes the system to group elements based on a function of each key. Features:
- Tuples in the same partition are guaranteed to be on the same machine.
- Each node in the cluster can contain more than one partition.
- The total number of partitions is configurable; by default it is set to the total number of cores on all executor nodes.
Performance tuning checklist:
- Use the correct number of partitions according to your cluster specification.
- Check the guidance and consider using custom partitioners.
- Check that your transformations preserve the partition schema.
- Check whether memory use could be optimized by bumping the number of partitions to 2001.
Settings:
- spark.default.parallelism — sets the number of partitions used by HashPartitioner; can be overridden when creating the SparkContext object.
- spark.sql.shuffle.partitions — controls the number of partitions for shuffle operations on DataFrames; the default is 200.
As a final thought, note that the number of partitions also determines how many files are generated by actions that save an RDD to files.
Sources
Tags: Data Science, Big Data, Apache Spark, Tutorial
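The key-based grouping behaviour summarized in the takeaways can be mimicked in plain Python. This is only an illustrative model of a hash partitioner, not Spark itself (Spark's HashPartitioner uses the JVM hashCode): each key-value pair goes to partition hash(key) % num_partitions, so pairs with the same key always land together.

```python
def partition_assignments(pairs, num_partitions):
    """Toy model of a hash partitioner: same key -> same partition."""
    partitions = {i: [] for i in range(num_partitions)}
    for key, value in pairs:
        partitions[hash(key) % num_partitions].append((key, value))
    return partitions

# ten (k, 2k) pairs split across two partitions
parts = partition_assignments([(k, k * 2) for k in range(10)], 2)
print(parts[0])  # even keys
print(parts[1])  # odd keys
```

With small integer keys in CPython (where hash(k) == k) this reproduces the even/odd split seen in the partition dumps above.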
|
5,122 |
Clustering population in London to find a suitable location for ethnic restaurant
|
As a digital communication professional, data science is not new to me, but I have never systematically learned it, just picked up the necessary skills on the job. So one of my COVID-19 lockdown commitments was to look into data science and machine learning in a bit more structured way. Google must have guessed my intention as I got a few targeted ads — which led me to IBM’s Data Science Program on Coursera. This post is not a review, it’s about my capstone project.
The idea was to find a suitable location for a traditional ethnic restaurant in London using exploratory data analysis and machine learning. It made sense to look into London’s Chinese population — if you live in the city you must have heard of the Chinese district, possibly the worst place to open a new traditional restaurant, right? But then where else, are there other parts of the city where the concentration of Chinese people* is high and there aren’t many popular Chinese restaurants?
*in the census data people are classified according to their own perceived ethnic group and cultural background
To get things started we need quite a few datasets to work with. Most of the files can be found in my repo except the data files as those are quite big — but check the notebook for the links:
London’s census data (2011 is the latest) broken down to Middle Layer Super Output Areas (MSOAs) level
Population-weighted centroids for MSOAs
Land area of MSOAs
Shapefiles for MSOAs
Foursquare to get the most popular venue types
Before jumping in…
If you are using any APIs, database connections, or else in your code, it’s a really good practice not to include the credentials in the files that you share, upload to GitHub, or to any other code sharing platform. Personally, I like to use dotenv for Python-related projects — it’s easy to set up: create a .env file in your root folder, store your credentials there, and then add to the beginning of your Jupyter Notebook the following:
%load_ext dotenv
%dotenv

# get your keys
client_id = %env CLIENT_ID
client_secret = %env CLIENT_SECRET
Initial exploratory data analysis
I won’t include here the data wrangling that I did for the combined dataframe of the census, centroids, and land areas datasets — please check the repo if you are interested. I ended up with the following table:
The MSOA name and total columns are in the table for reference only; we are not going to use them in our analysis. The Chinese population column contains the combined population of all Chinese-related ethnic groups, while the latitude and longitude columns are the population-weighted centroids of the MSOAs.
The idea is to find out which neighborhoods have a higher Chinese population, and then explore the most popular restaurant types in those areas. The Office of National Statistics works with different kinds of output areas: Middle Layer Super Output Areas (MSOAs) is a sensible choice as it provides the desired granularity to define our own ‘neighborhoods’. Using the MSOAs, folium’s choropleth map showing the Chinese population will look like the following.
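The first step of that idea, finding which neighborhoods have a relatively high Chinese population, boils down to a simple ranking. A minimal sketch with made-up MSOA rows (the names and numbers below are hypothetical, not from the census data):

```python
# Illustrative only: rank hypothetical MSOA rows by the share of
# Chinese residents, mirroring the exploratory step described above.
msoas = [
    {"msoa": "A", "total": 8000, "chinese": 240},
    {"msoa": "B", "total": 7500, "chinese": 900},
    {"msoa": "C", "total": 9000, "chinese": 450},
]

def chinese_share(row):
    return row["chinese"] / row["total"]

ranked = sorted(msoas, key=chinese_share, reverse=True)
print([r["msoa"] for r in ranked])  # highest share first: ['B', 'C', 'A']
```

In the real project the same ranking is done on the full census dataframe; the top areas are then the candidates to explore with Foursquare venue data.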
|
https://nubianlachlan.medium.com/clustering-population-in-london-to-find-a-suitable-location-for-ethnic-restaurant-4b610b49673e
|
['Balint Hudecz']
|
2020-10-31 13:38:37.615000+00:00
|
['Kmeans', 'Population', 'Data Science', 'Clustering', 'Data Visualization']
|
|
5,123 |
The Simple Two-Step Funnel I Use to Qualify Freelance Leads
|
Finding freelance leads can be a thankless task.
You send out 30+ pitches and land two clients and that’s considered a Very Good Outcome.
So, you sign a contract with those two clients, start the work, and everything is unicorns and rainbows; it’s all hunky-dory.
Each project lasts for a heady two weeks, but then you hit the ground with a bone-crunching bump and a dawning realisation: you have to send out another 30+ pitches to get another two clients.
It’s relentless and can seem like a lot of work — particularly when you’re busy doing your actual job. Because of course you’re going to forget to pitch some weeks if you’re snowed under with your latest contract, right?
But pitching is the only way to get work… Isn’t it?
Or is it…?
In the early days of freelancing, spending 20+ hours a month identifying clients and sending out pitches can be incredibly rewarding. It can bolster your schedule for a good few months and get good work flowing in.
For long-term freelancers, 20+ hours a month is way too much time to spend landing new work. Those are potentially billable hours that could be spent honing your craft, building relationships with existing clients, or, you know, doing actual work that you get paid for.
Your Marketing Tactics Need to Change
Halfway through my freelance career, my marketing tactics took a sharp left turn.
At the start, pitching was the number one way I landed new work, whether that was via job boards or identifying and reaching out to leads myself.
Two years in, I didn’t have the time to send 10 pitches a day to potential clients, but I still needed a steady stream of work coming in for when my contracts ended or the leads dried up.
It was at this point I started creating content geared towards my target client (I should note here that this was also the point that my target client took a sharp left turn — I went from writing for travel and hospitality companies to working for marketing, SaaS, and ecommerce brands).
As a writer, creating content came naturally to me and it served two purposes:
It showed off my skill set to potential clients (you’ve got to practice what you preach, right?)
It created a presence that brought freelance leads directly to my door
This was content that I spent an hour or so creating, but that continued to work hard at bringing in clients long after I had pressed publish. It was a far cry from the 20+ hours I was spending pitching in the early days.
So which kind of content worked best?
Throughout the past 5 years, I’ve crafted a really simple lead generation process that doesn’t take up a lot of time but that is truly effective.
It works in two stages:
1. Create a downloadable “lead magnet” that gives your prospects extra value in exchange for their email address
2. Publish regular blog posts that touch on specific pain points your target audience has
The two stages work in tandem to, first of all, qualify potential leads by identifying pain points that prospects are willing to pay for and, secondly, grabbing their email address so you can nurture a potential relationship.
Let’s talk more about the two stages in isolation.
1. Create a Downloadable Lead Magnet
You see these everywhere.
“Grab your FREE checklist for writing blog posts”
“Download your FREE ebook and learn how to get 5,000 subscribers without spending a single penny at all. Nada. Nothing”
You get the gist.
While I’m always wary of “GET THIS AMAZING FREE THING” copy, there’s a reason so many brands are pushing out lead magnets to their readers.
It’s because they offer a win-win situation.
The reader gets access to even more value, while the site owner gets a new subscriber that they can nurture via email.
There’s a really simple formula I use for creating a Really Good Lead Magnet.
You take ONE pain point your target client has and you provide ONE solution that can be achieved in a day.
People want to know that you understand their needs and they want to be able to take quick action. Remember, your lead magnet should tie into the services you offer too, so if you’re a designer, it should be design related; if you’re a writer, it needs to have something to do with content.
Here are some lead magnet ideas that use this formula:
A guide to creating a one-page client proposal in 20 minutes
7 healthy but tasty recipe ideas for the next week
Tweaks you can make to your website UX today
A checklist to make your website mobile-ready right now
Let’s take my target client for a moment: startup SaaS companies that create software for ecommerce brands to use.
Their biggest pain point is attracting new users. I help them create content to attract new users, so I might play around with these lead magnet ideas:
3 Ways to Optimise Your Blog Posts and Get More Users
A Copy Checklist for Getting More App Users
How This Ecommerce Brand Gained 50 New Users Through Content Creation
You’ll notice that the latter example is more of a case study, and this can work particularly well if you’ve got a really great story from one of your clients.
I once sold out a client’s program in 10 minutes with an email campaign. If I was focusing on selling email copy, I might weave this into my lead magnet (e.g. How X sold out their program in 10 minutes with engaging emails).
2. Publish Regular Blog Posts
In order to attract people to your lead magnet in the first place, you have to publish Good Regular Content.
Usually this takes the form of blog posts, but it might also be video content or something else depending on your skills and services.
For me, writing was the obvious choice.
It’s not enough to churn out weak 300-word posts and hope for the best. If you want to attract High Quality Clients, your posts have to be good enough to attract people in the first place and good enough for them to stick around and read it.
To do this:
Write a list of pain points your target client has (think about the common questions your prospects will be searching for in order to land on your website)
Determine how your services tie into those pain points (take the ecommerce SaaS example I highlighted above)
Create content that combines the two together
For me, a list of potential blog posts might look something like this:
How Ecommerce Brands Can Use This Writing Technique to Attract More Users
5 Headline Formulas for Forward-Thinking SaaS Brands
The Biggest Copywriting Mistakes SaaS Brands Make on Their Websites
These are just off the top of my head, but you get the idea.
Your blog posts should be intriguing enough for your target clients to click into and then, somewhere within all of that good content, you want to offer them your lead magnet in exchange for their email address.
Try Out This Two-Step Funnel Yourself
Funnels often put the fear of god in people. They can be complex and overwhelming, but they really don’t have to be.
If you’re past the point where you can afford to spend 20+ hours a month pitching new prospects, funnels might be the answer for you. And, luckily, this one only involves two simple (but very effective) steps!
—
Mission: Get Better Clients
This post forms part of my mission for March: to help freelancers get better paying, higher quality clients.
I’m publishing 30 posts in 30 days aimed at helping freelancers like YOU build a better business (you can follow my story on Wanderful World, on Twitter, or on Instagram).
Here’s what you can do next:
|
https://lizziedavey.medium.com/the-simple-two-step-funnel-i-use-to-qualify-freelance-leads-3c44b200af6a
|
['Lizzie Davey']
|
2020-03-03 11:48:21.741000+00:00
|
['Freelance', 'Freelancing', 'Business Strategy', 'Entrepreneurship', 'Lead Generation']
|
|
5,124 |
Swapping with the Monster
|
POETRY
Swapping with the Monster
A poem on understanding why I’m an asshole sometimes
You come and go as you please. You love the surprise
and the grimace on my face when I realize
I’m being mean and cruel for no reason.
I don’t always recognize you — you’re subversive and sly.
Disguised as a simple annoyance, a whim, a mood,
you make yourself at home and wait for a chance to strike:
when I’m with a loved one or at my happiest.
Let’s switch, beast. I’ll be the fiend that feeds the fear
you constantly carry in the back of your mind.
Maybe I’ll take up residence in a nook you haven’t decluttered in a while.
But don’t worry, I’ll pay rent for it — I’ll feed you
apathy, pride, envy, and regret.
I’ll then package it all in poetry; you’ll think you’re deep and shit
while I remorselessly fuel your pain-body.
But I’m not like you. The Light in me, the Presence, would try
to understand you, where you come from, and the suffering
you conceal. Serenely,
I’d ambush you with care and understanding.
I’d let you untangle your want for attention and revenge,
your need to act out when things don’t go your way,
when you don’t receive what you’re entitled to.
I’d humbly dilute your heartache with tenderness.
Does that sound good? Let’s swap.
Lola Sense © All Rights Reserved.
|
https://medium.com/scrittura/swapping-with-the-monster-757b64c4a261
|
['Lola Sense']
|
2020-12-14 15:11:56.638000+00:00
|
['Self-awareness', 'Self Improvement', 'Self Love', 'Self', 'Poetry']
|
|
5,125 |
How to Hash in Python
|
3. Secure Hashing
Secure hashes and message digests have evolved over the years. From MD5 to SHA1 to SHA256 to SHA512.
Each successive method produces a larger digest, improving security and reducing the risk of hash collisions. A collision occurs when two different arrays of data resolve to the same hash.
Hashing can take a large amount of arbitrary data and build a digest of the content. Open-source software builds digests of their packages to help users know that they can trust that files haven’t been tampered with. Small changes to the file will result in a much different hash.
Look at how different two MD5 hashes are after changing one character.
>>> import hashlib
>>> hashlib.md5(b"test1").hexdigest()
'5a105e8b9d40e1329780d62ea2265d8a'
>>> hashlib.md5(b"test2").hexdigest()
'ad0234829205b9033196ba818f7a872b'
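The package-digest use case mentioned above is usually implemented by hashing a file in fixed-size chunks, so even very large files never have to fit in memory. This helper is an illustrative addition, not code from the article:

```python
import hashlib

def file_digest(path, algorithm="sha256", chunk_size=65536):
    """Hash a file incrementally, one chunk at a time."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # iter() calls f.read(chunk_size) until it returns b"" (end of file)
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Because the hash object is updated incrementally, the result is identical to hashing the whole file contents in one call.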
Let’s look at some common secure hash algorithms.
MD5–16 bytes/128 bit
MD5 hashes are 16 bytes or 128 bits long. See the example below; note that a hex digest represents each byte as a two-character hex string (i.e. the leading 09 is one byte). MD5 hashes are no longer commonly used.
>>> import hashlib
>>> hashlib.md5(b"test").hexdigest()
'098f6bcd4621d373cade4e832627b4f6'
>>> len(hashlib.md5(b"test").digest())
16
SHA1–20 bytes/160 bit
SHA1 hashes are 20 bytes or 160 bits long. SHA1 hashes are also no longer commonly used.
>>> import hashlib
>>> hashlib.sha1(b"test").hexdigest()
'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3'
>>> len(hashlib.sha1(b"test").digest())
20
SHA256–32 bytes/256 bit
SHA256 hashes are 32 bytes or 256 bits long. SHA256 hashes are commonly used.
>>> import hashlib
>>> hashlib.sha256(b"test").hexdigest()
'9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08'
>>> len(hashlib.sha256(b"test").digest())
32
SHA512–64 bytes/512 bit
SHA512 hashes are 64 bytes or 512 bits long. SHA512 hashes are commonly used.
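The article cuts off here; following the pattern of the earlier sections, the matching SHA512 example with Python's hashlib would be:

```python
import hashlib

# SHA512 digests are 64 bytes (512 bits) long
print(hashlib.sha512(b"test").hexdigest())
print(len(hashlib.sha512(b"test").digest()))  # 64
```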
|
https://medium.com/better-programming/how-to-hash-in-python-8bf181806141
|
['David Mezzetti']
|
2020-01-23 00:34:25.222000+00:00
|
['Encryption', 'Programming', 'Cybersecurity', 'Python', 'Duplicate Detection']
|
|
5,126 |
The Latest: The Boston Globe is publishing fiction now (May 11, 2020)
|
The Latest: The Boston Globe is publishing fiction now (May 11, 2020)
Subscribe to The Idea, a weekly newsletter on the business of media, for more news, analysis, and interviews.
THE NEWS
The Boston Globe is publishing a serialized fictional “mystery thriller” set in Boston over the course of two weeks, with prominent placement on its website homepage and the A1 of its print edition.
SO WHAT
This is the latest example of how fiction — and other non-journalism content, like poetry — can serve readers in ways that journalism can’t: countering news fatigue, reaching new readers, and driving habit and conversions with new products.
Publishers have turned to fiction to help engage and retain readers experiencing news fatigue, an issue affecting about a third of those surveyed by Reuters in 2019. The Boston Globe framed its decision as giving readers a respite from COVID-19 coverage, a real need given that seven in 10 Americans want to take breaks from pandemic news according to Pew. Even outside of pandemic times, The Verge, for example, hoped that it would “refresh the feeds of followers who may be getting burnt out of Facebook data privacy explainers or the government shutdown” with Better Worlds, an initiative last year to publish original and adapted science fiction stories.
Different kinds of content can bring in new readers. These can be fans of the writers published (as The Verge banked on) or more generally, consumers of fiction or poetry. A viral story can also entice even those who aren’t fiction readers: The New Yorker’s Cat Person, for example, was the second-most viewed story on its website in 2017, even though it was published in mid-December.
Fiction can also inspire new products that drive habit and subscriptions. The New Yorker, for example, spotlights its published fiction in a bi-weekly Books & Fiction newsletter. On its fiction podcast, recent New Yorker fiction contributors read other author’s stories from the archive. The magazine also launched a poetry bot, which sent followers on Facebook Messenger and Twitter a poem a day from its archives. Monica Racic, The New Yorker’s director of production and multimedia, told us that the poetry bot’s audience was highly engaged and subscribed at a significantly higher rate than people who came to the site from other avenues.
LOOK FOR
More blurring of the line between fiction and journalism. It isn’t necessarily black-and-white: speculative journalism, which intertwines reporting with science fiction to imagine possible futures, has grown in popularity. Readers can find the form in the op-ed pages of The New York Times, which runs an “Op-Eds from the Future” series and publications like High Country News and McSweeney’s Quarterly, which devoted entire issues to speculative climate change writing.
Also look for forays into fiction in non-text forms. Publishers may build on their experiments with speculative journalism and enter the fiction podcast game. This could open the door to optioning as another business justification for fiction production down the line, as podcast-native companies like Gimlet Media have done. Gimlet’s Homecoming, for example, became an Amazon Show, and the company has a whole division dedicated to making Hollywood deals (read our conversation with Chris Giliberti, Head of Gimlet Pictures for more).
|
https://medium.com/the-idea/the-latest-the-boston-globe-is-publishing-fiction-now-may-11-2020-449bec76edd9
|
['Saanya Jain']
|
2020-05-11 21:01:00.952000+00:00
|
['Media', 'Fiction', 'Journalism', 'The Latest']
|
|
5,127 |
Serious and Easy Crypto With AES/GCM
|
First things first, here are some definitions:
Key: A byte array that both parties have, 128-bit (16 bytes)
Nonce: A number used once, 96-bit (12 bytes)
Authorization data: A public text from sender, arbitrary size.
Authorization tag: A byte array created at sender side, 128-bit (16 bytes)
Plain text: Data to be encrypted, can be text or binary, arbitrary size.
Cipher text: Encrypted data, same size as plain data.
Four of these components are transmitted and received by the communicating parties: the nonce, the cipher text, the authorization tag, and the authorization data. Below is the structure for these four components, considered as the data package to be carried over, e.g., a network.
public struct AESData
{
public string nonce;
public string cipher;
public string authTag;
public string authData;
}
Encrypted data is essentially a byte array, containing non-ASCII characters. We’ll use base64 encoding to convert byte arrays to strings, so the package can be sent and received over any text-based communication medium.
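As a quick illustration of that round trip (sketched in Python only for brevity; the C# code below uses Convert.ToBase64String and Convert.FromBase64String for the same purpose):

```python
import base64

# cipher text is raw bytes, including values with no printable ASCII form
cipher = bytes([0x00, 0x9f, 0xff, 0x41])

# encode to a base64 string so it survives any text-based transport
encoded = base64.b64encode(cipher).decode("ascii")  # "AJ//QQ=="

# decode back to the identical byte array on the receiving side
decoded = base64.b64decode(encoded)
assert decoded == cipher
```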
Encryption
AES/GCM, when encrypting, takes the key, nonce, authorization data and plain text as input, and gives the cipher text and authorization tag as output.
The plain text is encrypted using the key and the nonce, creating the cipher text. Then the authorization data is mixed with the cipher text using the key, resulting in a 128-bit authorization tag, which proves both the authenticity and the integrity of the message.
The piece of code below shows how encryption is done in C#.
// requires: using System.Text; using System.Security.Cryptography;
// encoder for converting between strings and UTF-8 byte arrays
UTF8Encoding utf8enc = new UTF8Encoding();
// declare aes variables
// (in real use, fill the key with random bytes and never reuse a nonce with the same key)
byte[] key = new byte[16];
byte[] nonce = new byte[12];
byte[] authTag = new byte[16];
byte[] authData = utf8enc.GetBytes("Auth data");
// data to be transmitted and received in base64 encoding
AESData aesData = new AESData();
// assign plain text
string plainText = "Hello AES/GCM! Some non-standard chars: öçşığü";
// convert the plain text string to a byte array
byte[] plainBytes = utf8enc.GetBytes(plainText);
// allocate the cipher text byte array as the same size as the plain text byte array
byte[] cipher = new byte[plainBytes.Length];
// perform encryption
using (AesGcm aesgcm = new AesGcm(key)) aesgcm.Encrypt(nonce, plainBytes, cipher, authTag, authData);
// encode aes data to Base64 strings, which will be transmitted
aesData.nonce = Convert.ToBase64String(nonce);
aesData.cipher = Convert.ToBase64String(cipher);
aesData.authTag = Convert.ToBase64String(authTag);
aesData.authData = Convert.ToBase64String(authData);
Decryption
AES/GCM, when decrypting, takes the key, nonce, authorization data, authorization tag and cipher text as input, and gives the plain text as output.
The authorization data is mixed with the cipher text using the key, resulting in a 128-bit authorization tag. If this calculated tag matches the received tag, then the message was really sent by the sender, since only a party with the correct key could have created that authorization tag. (In .NET, AesGcm.Decrypt throws a CryptographicException if the tags do not match.) Then the cipher text is decrypted using the key and the nonce, recovering the plain text.
The piece of code below shows how decryption is done in C#.
// decode received Base64 strings to aes data
nonce = Convert.FromBase64String(aesData.nonce);
cipher = Convert.FromBase64String(aesData.cipher);
authTag = Convert.FromBase64String(aesData.authTag);
authData = Convert.FromBase64String(aesData.authData);
// allocate the decrypted text byte array as the same size as the cipher text byte array
byte[] decryptedBytes = new byte[cipher.Length];
// perform decryption
using (AesGcm aesgcm = new AesGcm(key)) aesgcm.Decrypt(nonce, cipher, authTag, decryptedBytes, authData);
// convert the byte array to the plain text string
string decryptedText = utf8enc.GetString(decryptedBytes);
What about the nonce?
So far we have data authenticity, integrity and confidentiality without really exercising the nonce value. But what if an eavesdropper saves a copy of a package and, despite being unable to see the plain content, sends it to the receiver at a later time? How can the receiver determine whether the package came from the real sender or from an eavesdropper? This is called a replay attack.
The nonce value can be used to protect against replay attacks. Both communicating parties may start with a recorded nonce value of zero. Every time one party prepares an AES package, it increases the nonce value by one and uses that value. The receiver checks whether the nonce value in the received package is greater than the last nonce value on its record. If it is not, the package is a replayed package, and the receiver discards it.
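The receiver-side check described above can be sketched as follows (Python used for illustration; the strictly increasing counter policy is the scheme the article proposes, not something AES/GCM enforces by itself):

```python
def int_to_nonce(counter: int) -> bytes:
    """Encode a counter as the 96-bit (12-byte) nonce AES/GCM expects."""
    return counter.to_bytes(12, "big")

class ReplayGuard:
    """Tracks the highest nonce counter seen so far and rejects replays."""

    def __init__(self) -> None:
        self.last_seen = 0  # both parties start from a recorded value of zero

    def accept(self, nonce: bytes) -> bool:
        counter = int.from_bytes(nonce, "big")
        if counter <= self.last_seen:
            return False  # replayed (or stale) package: discard it
        self.last_seen = counter
        return True

# a fresh package (counter 1) is accepted; a saved copy of it is discarded
guard = ReplayGuard()
assert guard.accept(int_to_nonce(1))
assert not guard.accept(int_to_nonce(1))  # replay detected
```

Note that this policy also rejects legitimate packages that arrive out of order; over an unordered transport, the receiver would need a window of recently seen counters instead of a single high-water mark.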
|
https://medium.com/swlh/serious-and-easy-crypto-with-aes-gcm-708e5176a198
|
['Yaşar Yücel Yeşilbağ']
|
2020-12-15 11:01:32.520000+00:00
|
['Aes Gcm', 'Cryptography', 'Dotnet Core', 'Csharp', 'Cross Platform']
|
|
5,128 |
Germantown: Growing Food in a Pandemic
|
Jasmine Thompson starts getting the lot at Awbury Agricultural Village ready for growing. (Maleka Fruean for Germantown Info Hub)
Germantown is known for its mix of houses, apartments, and row homes, as well as for its variety of green spaces.
And in that green space, neighbors — from the novice grower to the experienced farmer — have been catalyzed by the Covid-19 outbreak to start growing food, for themselves and others.
“Honestly, I just really saw so many inspiring efforts, especially around food and the giving of it for free, and I just wanted to grow free food for people because of the crisis,” said Jasmine Thompson.
Thompson grew up in East Germantown near Stenton and Wister. She has a full time job working in community food systems at the Food Trust, but also created Philly Forests. Originally it was her personal landscaping business.
When the pandemic began, she adapted her plans.
“So many of my family members and friends lost their jobs because of this,” Thompson said. She knew she wanted to grow food and give it away, so she approached Awbury Agricultural Village, an area in the Awbury Arboretum that had some available land. Projects that were philanthropic in nature could apply to grow on the lots, so Philly Forests became a food growing project.
She was able to procure a lot, with access to a small greenhouse area. She paid a reduced fee for the entire growing season, and even with a late start into preparation, plans on growing a variety of vegetables.
Sid Bailey prepares his garden. (Photo: Valerie Peghini Bailey)
She also hopes to include a combination of harvest-your-own days and free boxes of fresh produce delivered in Germantown, with volunteers on board to help with the operation.
There are no eligibility guidelines. Thompson is still trying to figure out how to make sure boxes of food will get to the most vulnerable in the neighborhood — older folks who can’t leave home and the immunocompromised. Getting the word out about the program is also something she is thinking about. Right now, information is only available through social media.
But it’s not just experienced farmers that are growing food. Kristen O’Guin is a sexuality educator in Mt. Airy. Dwendolyn Lloyd is a pastry chef and Sid Bailey is an online translator, both from Germantown. All three of them decided this was a good time to start growing.
Bailey knew that it was important for him and his family to have access to fresh foods. He also felt the experience itself was valuable.
“I think in most of this neighborhood there’s enough space to have food production in the area,” Bailey said. “It’s not only about safety and resilience, but also growing food is a lot of fun.”
“It’s so nice to get food fresh from your garden or your neighbor’s garden,” he continued. “It makes you connected to your food, and the quality.”
Dwendolyn Lloyd does some type of gardening every year, but this year she was especially inspired to become more active. “Now I have a lot more time… I don’t know what is happening in the future so let’s start planting more fruits and veggies.” She helps out with her next door neighbors’ gardens and is beginning to share seeds, and trying to plan for what seems like an unpredictable future.
Most of Lloyd’s family from Germantown has grown food, especially her sister Amanda. “My sister grows a lot of food; she has an empty lot next to her house,” said Lloyd. The lot isn’t owned by anybody as far as they know, and Amanda cleaned it up years ago and started planting. Amanda now gardens the lot every year, and is thinking about adding chickens this year.
Food insecurity, and fear of our industrial food chains and food systems being interrupted or infected, have prompted these kinds of responses before.
George Boudreau, a history and research fellow at University of Pennsylvania, recalls some of what he refers to as the “second founding of the Germantown area”. Boudreau says during the yellow fever epidemic of 1793, many people fled from downtown and started camping in the area along the Schuylkill. Wealthier folks rented apartments and houses in the greener spaces of Germantown.
“People moved up here because it provided health and well being,” Boudreau continued. “People were up here growing food and taking it down to the city.”
Two Facebook groups connect those interested in Germantown gardening: Germantown Victory Gardens and Germantown Growing Together.
|
https://medium.com/germantown-info-hub/germantown-growing-food-in-a-pandemic-9bc1116af88
|
['Maleka Fruean']
|
2020-05-27 22:20:58.202000+00:00
|
['Gardening', 'Philadelphia', 'Coronavirus', 'Pandemic']
|
|
5,129 |
How I turned coffee into Bitcoin
|
A few months ago I had to lower my coffee consumption and was struggling to find the best way to “force” myself to do so. Nothing major prompted the change, but I wanted a catalyst to really stop.
I have been following Bitcoin and other cryptocurrencies for a while, and thought I would use this as a way to get more.
In looking at all my options I figured out a way to turn coffee into Bitcoin…sort of.
The Digital Alchemist at Work
To really motivate myself, I figured I would lower my costs by no longer purchasing coffee every day (easy!) and then divert those funds into purchasing Bitcoin, dollar-cost averaging into the currency (hard!).
This prompted me to start looking for a way to make recurring purchases of a cryptocurrency that was liquid and growing, and I quickly settled on BTC.
They say that using Bitcoin is key to its success, and by purchasing more and forcing a use case, I can help the overall ecosystem. Holding more will create usage, which will create value in the network, which will drive more use and, hopefully, grow the ecosystem as a whole.
Enter Coinbase (they have a referral program!), which is a great platform for setting up recurring purchases. Below is what I set up:
The trick to dollar-cost averaging into more Bitcoin was setting up a simple recurring transaction every week. This would prevent me from making a few coffee purchases (let’s be honest, this will get you just one cup in some NYC spots) and build my BTC balance.
While not the most economical approach, given Coinbase’s fee structure, it really worked. I have been doing this for the past 6–9 months and it has gone well. Inadvertently, I have lowered my spending habits too; being in fewer stores and coffee shops evens out the fees a bit.
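The arithmetic behind the weekly buy is plain dollar-cost averaging: a fixed dollar amount buys more BTC when the price is low and less when it is high, so the average cost per coin works out to the harmonic mean of the weekly prices. A sketch with made-up numbers (these prices are illustrative, not the author's actual purchases):

```python
def dca(weekly_usd, prices):
    """Return (total BTC accumulated, average USD cost per BTC)
    for a fixed weekly_usd buy at each week's price."""
    btc = sum(weekly_usd / p for p in prices)   # each week's buy at that week's price
    avg_cost = weekly_usd * len(prices) / btc   # total spent / total BTC
    return btc, avg_cost

# four weeks of a $10 recurring buy at varying (hypothetical) prices
total_btc, avg_cost = dca(10.0, [1000.0, 800.0, 1250.0, 1000.0])
# total_btc is 0.0405 BTC; avg_cost is about $987.65,
# below the $1012.50 arithmetic mean of the four prices
```

The design point is that the fixed-dollar schedule, not market timing, is what pulls the average cost below the mean price.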
The question, of course, is where to store things; that may wait for another post. In the interim, BTC/USD has been trending in the right direction, although technically I shouldn’t care, as I am now better equipped to weather a downturn.
While I am not quite there yet, I was partly inspired by my friend Steve
While I think it is entirely possible that the value generated from these purchases will yield a result, I think it’s far more likely that I will spend it first, thus helping the ecosystem as a whole.
I have always been a tinkerer, and to really understand something you need to use it — this project has given me a great solution to coffee consumption and a BTC balance to spend on things.
|
https://medium.com/startup-grind/turning-coffee-into-bitcoin-706d3569afde
|
['Eric Friedman']
|
2017-02-09 23:46:07.754000+00:00
|
['Bitcoin', 'Life Lessons', 'Productivity', 'Coffee', 'Finance']
|
|
5,130 |
An exciting journey begins for 40 young journalists in European newsrooms
|
In less than two months, over 800 students applied for the GNI Fellowship in Europe, which offers summer placements in a wide range of media organisations across 11 countries. The European Journalism Centre (EJC) was thrilled with the enthusiastic response. Today we are proud to announce the recipients of the 40 fellowships.
When we announced the launch of the GNI Fellowship programme earlier this year, we explained why we wanted to connect young talent with leading news organisations in Europe. In order to embrace change, news organisations must learn how to best use technology and integrate young professionals into multidisciplinary teams.
The GNI Fellowship provides fellows with the chance to step into the professional world in a highly competitive job market. All positions will be paid. Fellows will be embedded in some of the most innovative newsrooms in Europe and, at the same time, these newsrooms will benefit from their energy and new ideas.
The selection process carried out by each host organisation was hard-fought, and we were awed by the motivation and richness of experience of many candidates.
Meet the 40 GNI Fellows of 2019
The group is a diverse pool of aspiring journalists with backgrounds that range from design and engineering, to computer science and economics. They will soon join their selected newsrooms to explore the intersection of journalism and technology.
Here are the names of the 40 fellows and the areas of work they will be focusing on:
Data journalism and visualisation
“This is the first time I get the chance to be a valuable part of a journalistic project — instead of just being the young intern. Also, I am finally able to apply my knowledge and creative ideas to a ‘real-life’ environment.” Lea Weinmann
Design and product development
“What fascinates me most about journalism is its unstoppable speed. As a designer it is absolutely necessary to follow, and to be ahead. How to master this challenge is nowhere better to learn than in journalism.” Nicola Ritter
Audience engagement and digital storytelling
“Having a strong record in local journalism, I want to learn how to apply new means of researching, telling and displaying stories to this sector of news — because local stories tend to affect people more.” Anna Klein
Fact-checking, verification and investigative journalism
|
https://medium.com/we-are-the-european-journalism-centre/an-exciting-journey-begins-for-40-young-journalists-in-european-newsrooms-ee7553c7a61d
|
['Paula Montañà Tor']
|
2019-05-17 13:06:52.257000+00:00
|
['Innovation', 'Newsroom', 'Journalism', 'Fellowship', 'Updates']
|
|
5,131 |
A Necessary Nihilism. “Natural science produces ancestral…
|
“Natural science produces ancestral statements, such as that the universe is roughly 13.7 billion years old, that the earth formed roughly 4.5 billion years ago, that life developed on earth approximately 3.5 billion years ago, and that the earliest ancestors of the genus Homo emerged about 2 million years ago. Yet it is also generating an ever-increasing number of ‘descendent’ statements, such as that the Milky Way will collide with the Andromeda galaxy in 3 billion years; that the earth will be incinerated by the sun 4 billion years hence; that all the stars in the universe will stop shining in 100 trillion years; and that eventually, one trillion, trillion, trillion years from now, all matter in the cosmos will disintegrate into unbound elementary particles. Philosophers should be more astonished by such statements than they seem to be, for they present a serious problem for post-Kantian philosophy.” -Ray Brassier, Nihil Unbound: Enlightenment and Extinction (Palgrave-Macmillan, 2007), 49–50.
Ray Brassier may be the rightful heir to Friedrich Nietzsche’s wrestle with nihilism, in no small part because I think Brassier possesses a far more articulate and considered version of nihilism than most.
When many use the term, I think they usually mean pessimism, which is not necessarily nihilistic as pessimism assigns value to the universe; even if that value is in the negative, it is still a valuation, and not genuinely nihilistic. Brassier grasps this: if the universe persists as is, earth will die in the expansion of our sun into a red giant, then the sun itself will evaporate. The last stars, all red dwarfs in the end, will each blink out of existence. Then the universe will then expand into a thin nothingness and beyond.
I believe Brassier is correct that, as Western philosophy now stands, few if any have sufficiently addressed this conundrum: that, as things now stand, the universe is simply bound for extinction and evaporation, with all our lives, cares, worries, values, and meanings to vanish with it. I think Ray Kurzweil expresses a uniquely transhumanist hope that our distant descendants, more advanced than us as we are than our pre-human ancestors, will decide what to do when that time comes; for now, it’s irrelevant. I largely agree with Kurzweil, if only in saying that this extinction only seems inevitable to humanity at present, and that this delusion plays into the naive belief that evolution stops with us, Homo sapiens. Evolution carries on and up, or it collapses; it never stops.
I’m still ambivalent, though: there’s some “pie in the sky” to assuming our descendants will fix things down the line, and maybe some futility in planning for an event I don’t plan to be around for. My extinction will come long before that of the last stars. What about the prospect of genuinely ultimate cosmic extinction that captivates me, though? If it’s an inevitability, then, as Brassier says, in logical space-time everything is already dead. There’s no argument to be made against it. Maybe it’s how one wishes to respond, then.
Total cosmic extinction seems only depressing if one sticks to what the Bhagavad Gita calls karma, which despite its Western bastardizations is not “you get what you put out,” but the attachment to valuing action based on results. Depressing though it may (initially) seem, extinction only scalds the soul if one sees the value in his or her actions in their results. But there are actions which are valuable in themselves, the Gita insists, not because they leave a lasting effect but because they are.
The pessimism of extinction comes from mourning for lost possibilities, perhaps; but what alternatives are there to an inevitability? What’s to be said of a universe bound for nihil yet capable of producing affectation so powerful one may end or begin life because of it? I’m not pompous enough to give a verdict on where the universe will be in trillions of years, or to suggest how one should respond, but I will say I’m unconvinced that depression or pessimism must go hand-in-hand with nihilism, or that nihilism cannot possess beauty, wonder. Brassier’s nihilism, at least, seems to me not antithetical to these things. He’s against attempts to re-instill the universe with meaning after our collective disillusionment, but I’m unsure what ‘meaning’ he wishes to eschew. At the very least, if absolute extinction is the total future, inevitable, perhaps the Gita and Eastern thought may have something to say here: don’t find value in ‘the winding-up scene’; even if you eschew ‘value’ and ‘meaning,’ act for action’s sake, be for being’s sake.
A scene from Nietzsche’s The Gay Science may be apropos: the mad man coming to tell the laughing masses that God is dead and they’ve killed him. He’s a panicky, hopelessly nostalgic fool who can’t deal with the death of his old scheme of meaning, but the crowd is not innocent. The crowd scoffs, but only at the madman; they can’t quite laugh off his point. The Gay Science’s thesis seems to be not just that our old system of meaning is no longer viable, but that, with the gift of science, we’ve uncovered a world we no longer know how to live in.
Modern science since the nineteenth century has only exacerbated Nietzsche’s thesis. We’ve dislodged ourselves (for the most part) from inadequate systems of meaning, but in stumbling from the bind we’ve yet to catch our balance.
|
https://medium.com/interfaith-now/a-necessary-nihilism-facing-the-elephant-in-the-room-with-ray-brassier-41618150bcd
|
['Nathan Smith']
|
2020-12-20 10:33:55.418000+00:00
|
['Science', 'Nihilism', 'Ray Brassier', 'Philosophy', 'Hinduism']
|
|
5,132 |
[Python][Selenium] Life Is Short: Automate the Tedious Chrome Browser/Driver Version Mapping
|
Packages matching selenium
Chocolatey is software management automation for Windows that wraps installers, executables, zips, and scripts into…
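The body above is only a link excerpt, but the title describes automating the mapping from an installed Chrome version to a matching ChromeDriver. A minimal Python sketch of that idea might look like the following. The function names are illustrative (not from the article), the registry lookup is Windows-specific (matching the article's tags), and the LATEST_RELEASE_<major> endpoint only covers ChromeDriver releases up to v114; newer Chrome versions moved to the "Chrome for Testing" endpoints.

```python
"""Illustrative sketch: map an installed Chrome version to a
matching ChromeDriver version. Not the article's own code."""
import subprocess
import urllib.request


def major_version(version: str) -> str:
    # "84.0.4147.105" -> "84"
    return version.split(".")[0]


def installed_chrome_version() -> str:
    # Windows: read Chrome's version from the registry. On other
    # platforms you can usually ask the binary with --version.
    out = subprocess.check_output(
        ["reg", "query", r"HKCU\Software\Google\Chrome\BLBeacon",
         "/v", "version"],
        text=True,
    )
    # Last whitespace-separated token of the output is the version.
    return out.strip().split()[-1]


def matching_driver_version(chrome_version: str) -> str:
    # Legacy endpoint, valid for ChromeDriver <= 114.
    url = ("https://chromedriver.storage.googleapis.com/LATEST_RELEASE_"
           + major_version(chrome_version))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode().strip()


if __name__ == "__main__":
    chrome = installed_chrome_version()
    print("Chrome:", chrome, "-> driver:", matching_driver_version(chrome))
```

With the two versions matched, the driver download and unzip step can be scripted the same way, which is presumably the "automation" the title promises.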
|
https://medium.com/drunk-wis/python-selenium-chrome-browser-%E8%88%87-driver-%E6%83%B1%E4%BA%BA%E7%9A%84%E7%89%88%E6%9C%AC%E7%AE%A1%E7%90%86-cbaf1d1861ce
|
[]
|
2020-07-19 14:08:49.253000+00:00
|
['Selenium', 'Windows', 'Chrome', 'Automation', 'Python']
|
5,133 |
7 Reasons Why Writers Make the Best Lovers
|
Have you ever flipped to the back cover of a book just to admire the author’s bio? Of course, you have.
Writers are some of the sexiest creatures to ever grace the surface of our planet. Not only are we incredibly talented, but we have the ability to capture your attention for hours on end, leaving you smiling, crying, or shaking your knees in anticipation.
So it should come as no surprise that writers also happen to make the best lovers.
1. We have a keen eye for detail
Unlike our fellow non-writer brothers and sisters, we have an acute sense for the smallest details. After all, we’re tasked with remembering every single feature, component, and element for all our stories and characters. We know exactly who “has luscious blond locks like the morning sun” or who “can flutter around the room like a delicate butterfly emerging from her cocoon.”
So obviously, we remember how you like to be fucked.
Gentle nibbles on your right earlobe? Tell us no more.
Subtle scratches down the spine of your back with our nails? We know that too.
A slice of pizza post-coitus? Baby, it’s already heating up in the oven.
2. We have passion flowing out of every pore
Boring just doesn’t suit a writer. We are passionate beings that seek out inspiration in everything we do.
No one wants to read a lifeless story, so why would anyone want to have sex with a lifeless lover? Our thoughts exactly.
Every single part of us is constantly on the lookout for new feelings, new emotions, and above all else, new experiences. And luckily for you, that carnal craving for passion always follows us into the bedroom.
If we can’t make your toes curl or your eyes roll back into your skull, then we aren’t doing our job right.
3. We’re good with our hands
Trust us — being able to type 90+ words a minute gives us the ability to diddle your genitals in ways you never thought possible.
4. We love adventure
Our imaginations are almost as uncontrollable as our sexual urges. And that’s why we don’t mind stepping outside of our comfort zone to try out that new vibrator or to give that new leather whip a few cracks. In fact, we love experimenting with new things in and out of the bedroom.
It’s inherently ingrained in a writer to play around with different topics, characters, and storylines, so we aren’t afraid of dipping our toes into the sexual unknown.
To some people, we can be a little too adventurous. But in our minds, if you aren’t living on the edge, then you don’t have a life that’s worth living.
5. We pay attention to our surroundings
When it comes to storytelling, setting up the plot is almost as important as the story itself. Writers have a firm grasp on what it takes to get those juices flowing and can easily set the scene to make their partners feel comfortable and relaxed.
From the flickering candles to the oiled back massages, writers are professionals at crafting the perfect atmosphere for a sexual rendezvous. And if you happen to be more of the back-alley-by-the-dumpster lover, don’t worry your little heart out. We’re equally prepared for that too.
6. We’re great communicators
Because duh — that’s our job.
To a writer, communicating with another person comes second nature. We aren’t shy about asking for what we want, whether it’s to be spanked on our booties or slapped on our titties.
Communication is one of our most powerful tools, and words are our most valuable assets.
If we can make a scene about a grain of sand stretch out to three pages, then we can easily share our feelings beyond a few simple head nods or shoulder shrugs.
7. We’re ridiculously attractive
It should go without saying, but yeah, writers are incredibly attractive. Not only do we wear those adorable thick-framed glasses, but we also have perfected the art of the messy hairdo.
We’re scrappy-chic — but in a way where we look completely flawless. You won’t know whether we woke up like this or spent three hours in front of the mirror primping for your visit.
|
https://medium.com/sex-and-satire/7-reasons-why-writers-make-the-best-lovers-f3df1918e359
|
['Ms. Part Time Wino']
|
2020-12-08 14:10:39.953000+00:00
|
['Writers On Writing', 'Humor', 'Sexuality', 'Writing', 'Dating']
|
5,134 |
Keeping Up with Deep Learning — 26 Nov 2020
|
Keeping Up with Deep Learning — 26 Nov 2020
Deep learning papers, blog posts, Github repos, etc. that I liked this week
Photo by Sebastian Pena Lambarri on Unsplash
This is the second edition of my weekly update on deep learning. Every Thursday, I’ll release a new batch of research papers, blog posts, Github repos, etc. that I liked over the past week. Links are provided for each featured project, so you can dive in and learn about whatever catches your eye. If you missed last week’s edition, you can find it here. All thoughts and opinions are my own. Follow me or check back next week for more. Enjoy!
Very Deep VAEs
[ArXiv][Github]
OpenAI recently showed that Variational AutoEncoders can outperform other likelihood-based generative models at creating realistic 2D images. Although the authors don’t compare their results against adversarial models (e.g. StyleGAN or PGAN), it’s clear from the generated images that VAEs can’t match the performance of GANs yet. But this is still really exciting research, because VAEs are much easier to train and understand than GANs. For that reason, I’m excited to see more research like this from OpenAI, because if VAEs ever match the performance of GANs, they will be highly preferred over adversarial models.
Images generated with VAEs. Source: https://github.com/openai/vdvae
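For readers new to VAEs, the core machinery the paper builds on fits in a few lines. Here is a minimal, illustrative NumPy sketch (my own, not from the paper) of the reparameterization step and the diagonal-Gaussian KL term that make the ELBO trainable:

```python
import numpy as np


def reparameterize(mu, log_var, rng):
    # Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, I),
    # so the sample stays differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + np.exp(0.5 * np.asarray(log_var)) * eps


def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)


rng = np.random.default_rng(0)
z = reparameterize(np.zeros(8), np.zeros(8), rng)  # one 8-dim latent sample
```

The training loss is then reconstruction error plus this KL term; the "very deep" part of the paper is about stacking many such latent layers rather than changing this basic recipe.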
End-to-End Object Detection with Adaptive Clustering Transformer
[ArXiv]
Adaptive Clustering Transformer is one of the first notable research papers to improve upon DETR. (DETR was a recent landmark paper for object detection using transformers. See here for more details.) DETR requires a large amount of training for good performance, but ACT reduces training time by about 30%. Personally, I’m surprised this paper hasn’t received more attention. DETR is an amazing development for computer vision, and we need research like this to advance the state of the art!
Obligatory photo of a Transformer… Photo by Arseny Togulev on Unsplash
Propagate Yourself
[ArXiv]
Propagate Yourself advances the state of the art for unsupervised learning on vision-related tasks. Unsupervised learning greatly reduces the amount of labeled data needed for deep learning, because it uses unlabeled samples to learn meaningful representations of the data. I’m a big believer in unsupervised and self-supervised learning, and I recently wrote an article on self-supervised learning with BYOL! Check it out for a thorough introduction to self-supervised training of neural networks.
Design Space for Graph Neural Networks
[ArXiv]
This is a survey paper of various architectures for graph neural networks. GNNs have grown tremendously in popularity over the past few years, because unlike other types of neural networks, they’re able to process irregular data types like 3D point clouds. They’re used in many 3D object detection applications, which in turn is used by most self-driving algorithms. The landscape for GNNs has changed a lot over the past couple of years, and I highly recommend this paper if you need a refresher.
NVIDIA DGX A100
[Webpage][YouTube]
NVIDIA recently released the DGX A100 — a small server with some serious GPU horsepower. Each DGX packs eight A100 chips (the fastest GPUs in the world at the time of writing) and up to 640 GB of GPU memory. Unfortunately, each DGX A100 costs a minimum of $200,000, which essentially guarantees that I’ll never use one. (Google Cloud now offers A100s in beta mode, which is much more appealing.) But it’s admittedly fun to marvel at the horsepower we’re able to pack into one server, compared to just 5 years ago.
Photo by Francesco Lo Giudice on Unsplash
Google Cloud MLE Certification
[Course][Blog Post]
I believe that online developer certifications are the way of the future. Too many ML Engineer positions currently require a graduate degree, plus multiple years of professional experience. In order to grow our field, we need an accessible, low-cost option to get qualified for ML-focused jobs. That’s exactly what the Google Cloud MLE Certification tries to accomplish. For the price of $200, you’ll learn many of the requisite skills to land a job in today’s ML Engineering landscape. (I have no affiliation with Google or their certification programs. This is purely a personal endorsement of online ML courses like this one.)
Photo by MD Duran on Unsplash
Conclusion
Research has slowed down slightly with Thanksgiving this week, but it’s still a very exciting time for deep learning. Expect to see a lot more papers involving transformers and self-supervised learning in the near future — both are extremely hot topics right now, and for good reason. (If you’re not familiar with those, check out my recent articles on Transformers from Scratch and Self-supervised Learning with BYOL.) If you enjoyed the article, follow me to get all future weekly updates and other technical articles.
|
https://medium.com/the-dl/keeping-up-with-deep-learning-26-oct-2020-6a5bedeb11b9
|
['Frank Odom']
|
2020-12-10 17:49:25.980000+00:00
|
['Programming', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
5,135 |
Blood Moving Fast
|
©Shannon Mastromonico 2020
Me from yesterday. Pacific
Masking pain with too much
eager
puppy
Today I am feline
Keen, barbed wire boundaries
Breathing
Eyes on watch. Today
I am a tempest
Blood moving fast
Governing my own galaxies
and gravities
Electric
Vast
|
https://medium.com/scrittura/blood-moving-fast-b42f1893a821
|
['Shannon Mastromonico']
|
2020-02-27 15:15:43.670000+00:00
|
['Canadian Poet', 'Poetry Community', 'Writing', 'Writer', 'Poetry']
|
5,136 |
AWS CDK and Typescript: Using the New ApiGatewayV2
|
Using the new API Gateway V2 is a three-step process:
Create an integration.
Define an HTTP API.
Add Routes.
Note that there’s still no easy way to use WebSockets.
Create an integration
The integration is used to tell API Gateway where it should send incoming requests. In the simplest case, this would be your Lambda handler, but Load Balancers or an HTTP proxy are also possible. What might seem strange is that this comes in its own package, @aws-cdk/aws-apigatewayv2-integrations.
const httpApiIntegration = new LambdaProxyIntegration({
handler: fn,
});
Define an HTTP API
You have to create an HTTP API that will hold all of your routes. This instance is the equivalent of the LambdaRestApi from V1, but it has a different API.
const httpApi = new HttpApi(this, "MyApi");
There’s also a third parameter for options, such as a CORS configuration or the default integration.
Add Routes
With routes, you can specify which integration will be triggered depending on the path that has been entered or the HTTP Method that has been used.
httpApi.addRoutes({
path: "/",
methods: [HttpMethod.ANY],
integration: httpApiIntegration,
});
Bonus: Add Cloudfront
Putting a Cloudfront in front of your API Gateway V2 is luckily not that different from V1.
const feCf = new CloudFrontWebDistribution(this, "MyCf", {
defaultRootObject: "/",
originConfigs: [{
customOriginSource: {
domainName: `${httpApi.httpApiId}.execute-api.${this.region}.${this.urlSuffix}`,
},
behaviors: [{
isDefaultBehavior: true,
}],
}],
enableIpV6: true,
});
Done. If you want to create an Output to your console to get the Cloudfront Domain Name, you will have to use distributionDomainName now, instead of domainName.
new cdk.CfnOutput(this, "myOut", {
value: feCf.distributionDomainName,
});
That’s it!
Thank you very much for your attention; I hope this was of some help to you.
|
https://medium.com/swlh/aws-cdk-and-typescript-using-the-new-apigatewayv2-f7ad06c560d3
|
['Enrico Gruner']
|
2020-11-24 18:10:46.134000+00:00
|
['Typescript', 'Aws Tutorial', 'Aws Cdk', 'AWS Lambda', 'Api Gateway']
|
5,137 |
What’s in a Name?
|
What’s in a Name?
Is there a better option than changing the names of our brands and institutions?
Photo by Nong Vang on Unsplash
Earlier this year, Quaker Oats announced the end of Aunt Jemima. The company said, “Aunt Jemima’s origins are based on a racial stereotype” and the removal of the name and logo represents a stride “toward progress on racial equality.”
Shortly thereafter, the Washington Redskins football team announced they would be dropping the word “Redskins” and their logo. Then, the Cleveland Indians baseball team said it would also consider changing its name.
Not to be outdone, Princeton University renamed a program and building formerly named after Woodrow Wilson. The University of Southern California changed the name of one of their buildings that was previously named after a president who supported eugenics.
That these names and brands are mired in the racism and hatred of the past is without question. Today, we are striving toward something better, so, understandably, people want these names and brands stricken from history. What they represent are heart-wrenching and enraging facts, which still impact our societies to this day.
But, is the removal of these names and brands a gut-reaction that misses a crucial opportunity? In our efforts to right the wrongs of past and present injustices, could we be going overboard? Could there be a better way?
Lest we forget
Many countries observe what was originally called Armistice Day. It was first held on November 11, 1919, in honor of the ending of the First World War. Member states of the Commonwealth now refer to this day as Remembrance Day. In the United States, it’s called Veterans Day.
We use this day as a reminder of the sacrifices and hardships that past generations have endured due to war so that we could enjoy the freedoms of the present. The phrase “lest we forget” has been used as a plea to not forget the past and therefore not allow it to be repeated.
By changing the names of existing brands and institutions, are we purposefully striking out on the road to forgetfulness? In our efforts to make people feel safe and welcome, could our current solution be pushing us in a direction in which we are more likely to repeat the past rather than prevent it?
“Not all tears are an evil”
The history of the human race is rife with heartache. Unimaginable suffering has been endured and abided. And suffering continues to this day.
But not all suffering is bad. Not all emotional pain is negative. Sometimes it inspires us to be more and better.
When we look upon our shared history, when we examine it in a fair and unbiased way, we see our follies and our triumphs, our sins and our virtuous acts. We see the mistakes we have made and the pain we have caused. We see ourselves in everyone: the victims and the perpetrators.
What every human must learn from history is that we are all capable of incredible kindness and unspeakable malice. This is a painful lesson. No one wants to accept that within them lies the potential for evil. Yet, there it is. It’s a haunting, uncomfortable truth. One we forget at our peril. After all, November 11th is not only a day to feel grateful for what we’ve been given but a day for remembering the horrors of which humans are more than capable.
It is with this painful knowledge that we can move forward more appropriately, more carefully, and more thoughtfully. As J.R.R Tolkien wrote in The Return of the King:
“I will not say: do not weep; for not all tears are an evil.”
What can we do, instead?
We are at a crucial moment in history. The choices we make in the coming decades will determine the fate of our species, as well as this beautiful, unique planet we call home. What we do next is of the utmost importance.
When it comes to renaming our brands and institutions, let’s pause for a moment to consider our options. By removing these names and logos from public spaces, we may be creating an environment with fewer emotional “triggers”, but at what cost?
Purging our public spaces of negative imagery may cause us to forget the hardships we’ve endured and thereby lead us to commit more evils in the future. This purge might cause us to repeat the past in unforeseeable ways.
Could we, instead, re-claim these names and logos for ourselves and re-purpose them toward our own ends?
What if the stories of these names and brands weren’t over? What if we could create new endings for them? Endings that don’t stop at racism or sexism or any other “-ism”, but ones that remind us of where we’ve come from and the better future we’re fighting for?
What if, instead of demanding that companies and institutions purge these names and logos from our public spaces, we pressured them to write these new stories? Stories that include the truth of the heartache and the pain, but also the promise of a better tomorrow? Could these names and brands be used to inspire positive change if only they were presented in a different light?
Yes, it would hurt. Yes, it would be uncomfortable. Yes, it would have to be done thoughtfully and carefully. But, it would paint the struggle of humanity’s pursuit of justice and fairness in a more accurate and, I think, more hopeful light.
After all, human progress is not made by forgetting our past follies, but by embracing them and using what we’ve learned to make life better.
|
https://jeff-valdivia.medium.com/whats-in-a-name-a066869a602a
|
['Jeff Valdivia']
|
2020-12-30 21:22:03.209000+00:00
|
['Politics', 'Advertising', 'History', 'Psychology', 'Racism']
|
5,138 |
The Literally Literary Weekly Update #2
|
One Last Note
We have 27,365 followers at Literally Literary. We have approximately 200 writers. Even with these tremendous numbers, most of the submitted works get less than 20 views. Why? Algorithms.
How do we combat this and support each other? Bookmark our homepage and once a day, come here and see what you missed. The only way we can be the kind of community we all want to be is to support each others’ works by reading them.
Our homepage has a Top 25 that is updated every single day with new works published in the last month. Below that, you will find our latest works and then trending ones that you may have missed. Be a participant and read works from amazing writers that maybe you don’t follow yet, but might want to.
|
https://medium.com/literally-literary/the-literally-literary-weekly-update-2-a4b3deacbd20
|
['Jonathan Greene']
|
2019-12-18 15:20:15.380000+00:00
|
['Writing', 'Blogging', 'Ll Letters', 'Publication', 'Publishing']
|
5,139 |
Why (Nearly) Every Novelist I Know Has a Day Job
|
Why (Nearly) Every Novelist I Know Has a Day Job
It’s difficult to sustain a life by writing; there’s zero shame in working a 9-to-5, too.
These collected works earned the author $1.59 in royalties.
Whenever the topic of writers and pay comes up, I always enjoy dropping this little nugget: “Nearly every novelist I know has a day job.”
I don’t mean the novelists whose indie-press masterpiece sold a grand total of 15 copies (14 of them to family) before disappearing into the ether; I know mega-successful novelists, the kind whose books were optioned for movies and television shows on the way to the New York Times bestseller lists, who nonetheless hold down a 9-to-5.
Some do it for the healthcare. Others because they genuinely liked the jobs they were working before they hit it big, and have zero urge to quit now. But I also suspect there’s another element at work: fear.
Writing books doesn’t yield a consistent income, to put it mildly. According to a new study by the Authors Guild, the median pay for full-time writers was $20,300 in 2017; for those writing part-time, $6,080. Among those part-time writers, income has dropped noticeably, from $10,500 in 2009. To make matters worse, the number of magazine and newspaper venues has declined precipitously over the past few years, restricting the opportunities to supplement income via articles.
Unless you’re already a mega-selling author, or your publisher is willing to take a very expensive chance on your groundbreaking book, your advances will vary from project to project — and that’s before we talk royalties, which can fluctuate considerably. Hence the fear; you have zero idea how much you might be making a year or two from now.
Granted, I do know some folks for whom novel-writing is their one and only job. Some are retired, and writing is their second career; others have family money of some sort, or a rich spouse, or at least a spouse willing to foot most of the bills. The majority, however, work some other gig: PR, journalism, teaching, video editing, driving, and so on.
“The people who are able to practice the trade of authoring are people who have other sources of income,” a book editor is quoted as saying in the Times.
All that being said, there’s a discrepancy between the reality of the writer’s life, and the perception of the writer’s life by people not in the business; I blame Hollywood, which often portrays writers as living in enormous New York City apartments, enjoying an expensive lunch with their agents before driving up to their second home on the Hudson. Some writers do have that lifestyle (I’ve known some of them), but for the vast majority, writing is a side-hustle, even if they have several published books on their special Author’s Shelf.
This is why things like book piracy hurt; in many cases, those who illegally download novels or nonfiction tomes aren’t stealing from millionaires who won’t miss the extra $10 — they’re taking from people whose margins are already razor-thin or nonexistent. It’s also why many authors get really, really irritated when people ask for free copies of their books.
In other words, life for many writers is hard, and only getting harder — you have to be in it for the love. And find a job that can sustain you.
|
https://nkolakowski.medium.com/why-nearly-every-novelist-i-know-has-a-day-job-948e9c8e0753
|
['Nick Kolakowski']
|
2019-01-07 21:02:59.849000+00:00
|
['Publishing', 'Novel Writing', 'Authors', 'Writing', 'Publishing Industry']
|
5,140 |
Non-Linear Paths
|
Non-Linear Paths
Email Refrigerator :: 15
Dots in triangular pattern by unknown source
Hi there!
Big news in our house. This month, Golda went from two naps down to one. For most of you, this is not that exciting. Actually, even for me, it’s not that exciting. It means more activity planning, less downtime to rest and clean, and earlier bed times. It has also meant sleep regression.
For the non-parents, sleep regression is that thing where parents think that their kid is finally sleeping through the night. And then said kid decides to wake up at 11pm until 3am for a week. Just because. It happens about every 2 or 3 months.
A child’s brain develops asynchronously. Big growth in one area means a temporary regression in another. As they learn to walk, their language skills might lag slightly. While they learn about object permanence and their own independence (that’s where Golda is right now… lots of “no!”), sleep might be disrupted. This is normal. She’ll emerge with a more regular sleep pattern and show new signs of development.
Growth requires relapse.
It’s a great reminder for all of us, thinking about the state of the world and the state of our country. It often feels dire and hopeless, scary and gruesome. Daily. But maybe we’re going through a regression. The optimist in me is saying that just like a baby, in order for progress to happen, maybe we need a moment of backwards motion…
The paths that we’re on are not always direct. Sometimes we go backwards to go forwards. Sometimes our journeys are cyclical, returning where we started but with a new perspective. Sometimes, it’s pure chaos. Let’s talk about the non-linear paths in our lives.
Happy snacking.
Night Waves by Pi Slices
I. Cyclical Thinking
Learning to surf, I first believed that it was like riding a bike. And sure, the metaphor of never forgetting how to do it holds. But the biggest difference quickly became apparent: every ride leads to a fall, no matter how good I get. As the wave approached, I would paddle to match its speed; it would lift me up; I'd stand and ride as it crashed and swallowed me.
Waves are cyclical. Increase, crest, decrease, trough. And repeat. Our world is full of cycles. Moon cycles. Hormonal cycles. Seasons. Sleep. Time.
But our culture is obsessed with linear thinking: step by step guides to better, more, bigger. We’re used to seeing progress as following a win with a bigger win. So we’re surprised when things in our lives don’t follow that linear progression:
•Relationships don’t always get better and better. There are times of unhappiness and resentment, growth and independence. Regressions aren’t reasons to leave, they’re indicators of a cycle.
•Creativity often comes in waves of inspiration and breakthroughs and then creative blocks.
•Sustaining hard work requires rest. (But because we follow cultural values that promote overworking and under-resting, the idea of slowing down, taking time off, or even sleeping becomes counter to the capitalistic pursuit of greatness through work.)
It’s in our nature to project and predict. And it can feel great to forecast our lives growing upward and ever-expanding on a linear path. Every year, more passport stamps. Every move, a bigger house. Each lease, a better car. New job, higher salary.
But that’s not how most things actually work. We don’t have to tie our happiness or definitions of success to linear growth. Cyclical thinking could be going back to a place we’ve already visited in order to go deeper. Choosing to downsize our house to live more financially comfortable. Taking a pay cut for a job that will actually make us happier and have more time.
Cyclical thinking can be comforting when things don’t go perfectly, or when we make a choice that might feel “backwards.” Backwards only exists in linear thinking. On a long enough timeline, there’s always an ebbing after expansion.
Our lives are the waves we ride. Sometimes we luck into good timing and drop into an epic wave before tumbling, then reset ourselves and get up again. Sometimes the next set comes quickly and we ride another; sometimes we have to wait in the calm.
But, thinking cyclically, there is always another wave coming from the horizon to lift us.
Convergence 2 by Janusz Jurek
II. Chaos Theory in Career paths
This month, I reconnected with someone from high school. We weren’t friends then, but he posted on LinkedIn two weeks ago and it made me reach out to propose a business partnership.
It’s been said that most of us will meet upwards of 10,000 people in our lives. Likely more. Each of those people adds to the noise in an infinite sea of data points in our lives. It’s chaos. It’s impossible to know which one will lead to a job, a marriage, lifelong friendship, heartbreak, betrayal, mentorship…
I’m far from being an expert in chaos theory, but there are two principles I do understand. The first is that in a chaotic system there are no simple cause-and-effect patterns; everything is the result of multiple, unpredictable forces. The weather is a great example of this.
The second principle is there are patterns amid the chaos. Order, while impossible to forecast, does emerge over time.
Our career trajectories can be better understood through chaos theory. No simple cause-and-effect. Over the course of our lifetime, we will meet people who might consider us for a job or introduce us to someone pivotal. And because we happen to email at the right time or have recently posted on Instagram, the forces of the universe collide into a dream job offer. Or an investment. Or a business partner. There is no way to predict outcomes.
We’re constantly looking for patterns to emerge amid our careers. Most people I know (myself included) look back on our last 5–10 years of work and re-edit our story. We try to summarize the patterns by (re)defining ourselves, whether through a new bio, LinkedIn title, website, or resume.
Our resume! What better example of shoehorning a chaotic system into a linear framework than our resume? Chronological. Bulleted accomplishments. The expectation that each role is more senior than the last, salary has been linear, and that gaps between jobs are minimal or explainable.
The resumes of the future will be far from linear stories. They will chart the chaotic paths to our present, they will embrace the integration of work into the rest of our lives, they will weave the thread of work together with the threads of travel, relationships, learning, and creativity.
Our careers are unpredictable, non-linear, chaotic systems. Rather than try and plan the whole thing, what if we just chose what we want to learn next, what kind of environment we can focus in, and what kinds of people we work with best?
Because the rest is just noise.
III. The Swerve
In 2012, I took a week-long trip to Peru to see the ruins of Machu Picchu. Of course I made spreadsheets and custom maps and Google docs about the best restaurants and things to do from Lima to Cusco. After weeks and weeks of planning I finally arrive, and the worst possible thing happens… no wifi.
So here I am in Peru, a Google Drive full of things to do, and no way to access it. Luckily I wrote down my hotel information and printed out my train tickets. But for almost a full week, I’m relying on my memory to direct me.
The first day, I happen upon a really fun bar where I make conversation with the bartender who tells me about a great, but not so well-known restaurant. There, I try some of the most incredible dishes of corn and coconut and cuy (when in Lima…). Over the course of 3 days in Cusco, I run into the same woman 3 times at different coffee shops and restaurants and start up a conversation with her. I still keep in contact with Becky today and just saw her a couple months ago when she was visiting NYC.
More than the impact of seeing the ancient city or the awe-inspiring view after climbing the mountain Huayna Picchu, my biggest learning of the trip was a new life philosophy.
I call it The Swerve.
To swerve is to change direction suddenly. The premise of The Swerve is to pick a direction and be open to leaving the path. When traveling, research all the possibilities and then under-plan the time, choosing one or two intended highlights of the day. While heading toward the destination, the intent is to be open to what’s around and willing to leave the path in service of our own feelings. Choosing to swerve down an interesting street might lead to an unknown cafe or hidden vintage store. That might open other possibilities. Alternatively, it might be a dead end, but the upside is that we can always go back to our original direction.
It’s a useful method of travel but it’s also a philosophy that applies to other courses in life. In our careers, in dating, in creative work, in our health. We can set a goal, not as a destination but as a direction. Along the way, learning what we can and being open to letting our feelings influence us to new and exciting divergent paths.
The world is an exciting place with too many side roads offering possibilities we might have never expected. We’ll never find them with our faces buried in a map (or phone). Look up.
Ready? Set. Swerve.
Kiss by Quibe
In finishing up this refrigerator, I have been hyper-aware of my own creative process. Sometimes this document looks like a mess and I don’t know where to begin. Sometimes I’m struck by an idea and it writes itself. The creative process is definitely a non-linear one. But in order to feel good about my work, I usually keep track of it linearly: counting hours, or days in a row that I’ve shown up to work, checking a box when I feel good about my contribution for the day. Our emotions love linear progress. And even when our world and our work are chaotic or cyclical or backwards, we may feel the need for linearity. It’s ok to resist it.
Thanks for taking time to read this. I hope it’s made you see things a little differently or made whatever you’re going through right now a little clearer.
As always, I love hearing any thoughts this might have stirred. And if you feel like sharing it, there’s no greater compliment.
Enjoy the chaos out there,
-Jake
|
https://medium.com/email-refrigerator/non-linear-paths-57e4f79f5b9d
|
['Jake Kahana']
|
2020-12-28 00:57:58.019000+00:00
|
['Life Lessons', 'Careers', 'Planning', 'Patterns', 'Cycles']
|
5,141 |
COVID-19 and the first war of data science
|
In the subtitle of his remarkable history about the race for the nuclear bomb, science writer and historian of science Jim Baggott referred to World War II as the “first war of physics”.
Today, the efforts waged to curb the COVID-19 pandemic may be the first example of a large-scale, global data-driven response to a worldwide crisis, and as such perhaps the first war of data science.
It is difficult to overstate just how much data has become available in an extremely short time, and open science and the networks for sharing clinical and epidemiological data have enabled an unprecedented depth of analytics. We have the data, we have the tools, and we have experts. They are working hard, but we’ve never done this before and haven’t trained for it. While much progress is being made, we still must overcome many challenges.
The Starschema COVID-19 dataset
One of the tactical challenges lies in making the data “analytics ready” — ensuring the data is accurate, readily available, constantly updated, and in a format that can be easily used by data scientists on the front lines.
The case count dataset collated by Johns Hopkins University’s Center for Systems Science and Engineering (JHU CSSE) is referenced and relied on by hundreds if not thousands of data scientists. This dataset, and the dashboard JHU CSSE provides, is very important and helpful, but it wasn’t constructed to be analytics-ready: data scientists who want to use it often need to go through a time-consuming process to unpivot, union, and clean the data before they can use it in their own models and applications.
Based on the JHU CSSE data stream, Starschema has built — and will continue to update and improve — an analytics-ready dataset that draws on multiple data sources, eventually integrating high temporal resolution domestic data (e.g. the dataset provided by Italy’s Department of Civil Protection). The entire dataset is available for download via AWS S3, as well as via Snowflake’s Data Exchange. This will enable data scientists, epidemiologists and analysts to access the most up-to-date data on COVID-19 cases through a cloud-based data warehouse, including datasets enriched with relevant information such as population densities and geolocation data. Users can leverage this information to build inferential models of disease propagation and provide their customers with unique insights into the behavior of this novel epidemic.
The Starschema COVID-19 data set on Snowflake’s Data Exchange
The Starschema COVID-19 dataset is in a classical “long” format — each permutation of a point in time and a case type — confirmed, recovered, deceased, active — is a row. In addition, many of the inconsistencies between county-level reporting — prior to 10 March 2020 — and state-level reporting have been resolved.
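The wide-to-long transformation described above can be sketched in a few lines. This is a minimal, dependency-free illustration, not the actual Starschema pipeline: the column names and sample values below are hypothetical, standing in for the JHU-CSSE-style layout where each date is its own column.

```python
def unpivot(wide_row, case_type, id_fields=("Country/Region",)):
    """Turn a wide row {id..., date1: n1, date2: n2, ...} into
    long-format rows: one row per (date, case type) combination."""
    ids = {k: wide_row[k] for k in id_fields}
    return [
        {**ids, "date": k, "case_type": case_type, "cases": v}
        for k, v in wide_row.items()
        if k not in id_fields  # every non-identifier column is a date
    ]

# Hypothetical wide-format row: one column per reporting date.
wide = {"Country/Region": "Italy", "3/1/20": 1694, "3/2/20": 2036}
long_rows = unpivot(wide, "Confirmed")
# Each date becomes its own row, tagged with the case type.
```

In practice most data scientists would do the same thing with `pandas.DataFrame.melt` (and a union/concat across the separate confirmed, recovered, and deceased files), but the shape of the transformation is exactly this.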
We are in the process of collating all data pertaining to COVID-19 and making additional data available, such as information on population size, population density, and other metrics that enable contingency planning, forecasting, and visualization. As the data size expands, we are constructing an Airflow-based dynamic DAG in order to provide a reproducible workflow for rapid ingestion and transformation of data.
The outlook
As the COVID-19 pandemic progresses, we can expect data to play an increasing role in both public and private operations. Public health authorities are already using data-driven approaches to monitor the spread of SARS-CoV-2, and phylogenetic analyses are used to identify whether particular strains of SARS-CoV-2 carry a higher risk. In the private sector, information on the number of cases is used to support business contingency operations and analyze supply chains for possible vulnerabilities.
With the increasing importance of data, ensuring data quality and consistency is paramount. Through this dataset Starschema intends to provide a practical demonstration of best-of-breed data quality assurance and data management procedures to provide public health professionals, contingency planners, and enterprises with an outline of data management sound enough to stake lives on.
As we fight SARS-CoV-2, data-driven approaches may well be what gives humanity an edge. With the rapid expansion and democratization of data and advanced analytics, data science is in a unique position to bring the tools, techniques, and procedures that were developed in the analytics domain over the last decade to bear on this unprecedented challenge.
If you have any questions regarding this dataset or how to use it, please reach out. We are here to help.
|
https://medium.com/starschema-blog/covid-19-and-the-first-war-of-data-science-980798f075ef
|
['Chris Von Csefalvay']
|
2020-03-19 00:10:27.044000+00:00
|
['Coronavirus', 'Co Vid 19', 'Data Science', 'Machine Learning', 'Dataset']
|
5,142 |
Black Holes — The Strangest Objects in the Universe Are Finally Acknowledged by the Nobel Committee
|
A Brief Explanation of Their Work
Roger Penrose wrote a ground-breaking paper with the late Stephen Hawking in 1970. In it, they presented a new, generalized theorem on spacetime singularities that revolutionized our understanding of spacetime. They proved that if our universe obeys Einstein’s general theory of relativity and is described by Friedmann’s cosmological models, then there is essentially a singularity at its beginning. According to the Nobel Prize website:
“Penrose used ingenious mathematical methods in his proof that black holes are a direct consequence of Albert Einstein’s general theory of relativity.”
This time the Nobel Prize is awarded for both theoretical and observational discoveries. The theoretical work was done in the 1970s by Stephen Hawking and Roger Penrose, and it was observationally confirmed in the 21st century. Using Einstein’s general theory of relativity, they proved the singularity theorems, i.e., that there exist regions of spacetime where the gravitational pull is so strong that even light cannot escape. These were just mathematical results at the time, but now the evidence is in front of us.
An obvious question is why neither of them received the Nobel Prize earlier, when we have known about black holes for so long. The answer is that the Nobel Committee prefers a theory to be experimentally or observationally verified. For example, Einstein predicted the existence of gravitational waves in 1916, and the prize for their detection was only awarded in 2017. This year the prize is awarded because, the previous year, we finally received the incredible image of the supermassive black hole at the center of M87. It was brought to light by a huge collaborative team, the EHT (Event Horizon Telescope).
One can also wonder why the prize was not shared between the EHT team and Roger Penrose. It may be hard to swallow, but it is Nobel policy that the prize can be shared by at most three people, not an entire collaboration.
Professor Genzel and Andrea Ghez have both done an incredible job of answering the biggest mystery that lies inside our Milky Way galaxy. They observed stars near the galactic center using high-resolution infrared telescopes.
Infrared light has a longer wavelength than visible light, which means it is not blocked by the tiny particles of interstellar dust in space. We cannot see the center of our galaxy in visible light, but we can in infrared light. In this way, one can measure the positions and speeds of stars located near the center of the Milky Way, work out the orbits of those stars and the influence of gravity on them, and then calculate the mass of the central object and constrain its size.
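As a rough illustration of that last step, Kepler’s third law in solar units gives the central mass directly from one star’s orbit. The numbers below are approximate, assumed values for the star S2 (semi-major axis of roughly 1,000 AU, period of roughly 16 years), used here only as a sketch and not taken from the article:

```java
public class CentralMass {
    public static void main(String[] args) {
        // Kepler's third law in solar units: M / Msun = a^3 / T^2,
        // with the semi-major axis a in AU and the orbital period T in years.
        // These orbital values are rough, assumed figures for the star S2.
        double semiMajorAxisAU = 1000.0;
        double periodYears = 16.0;
        double massInSolarMasses =
                Math.pow(semiMajorAxisAU, 3) / (periodYears * periodYears);
        // Comes out to a few million solar masses, consistent in order of
        // magnitude with the measurements described in the article.
        System.out.println(massInSolarMasses);
    }
}
```

Even with such crude inputs, the estimate lands in the millions of solar masses, which is why the orbit-tracking observations were so compelling.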
In 1969, Donald Lynden-Bell and Martin Rees put forward the suggestion that our home galaxy might contain a dense, compact supermassive black hole at its core. But there was no way to verify it, because the core of the galaxy was hidden behind gas and interstellar dust.
At that time Genzel was working as a postdoctoral fellow at UC Berkeley with the late Charles Townes, a Nobel laureate. His collaborator asserted that he presented a “remarkable technique, in which he can measure very accurately and determine quite precisely the mass and behavior of stars circulating around the galactic center.”
Using this technique together with ground-based telescopes at ESO (the European Southern Observatory), i.e., the NTT (New Technology Telescope) located in Chile, they observed the motions and positions of ten stars for about four years, from 1992 to 1996. They collected data, modeled the orbits of those stars, and derived the mass of the central object as about 2.45 million times the mass of the Sun. But that was just one result and could not be generalized at the time.
Observations by Genzel’s research group of the stars orbiting the galactic center also confirmed a prediction of Einstein’s general theory of relativity: that the orbit of a star follows a flower-like pattern (perihelion shift) while orbiting a massive object.
Andrea Ghez did the same with even more precision with her team, the Galactic Center Group at UCLA. With the higher-resolution telescopes of the W. M. Keck Observatory in Hawaii, they observed the interstellar medium and dust around the supermassive black hole, i.e., Sagittarius A*.
They used adaptive optics, more precisely laser guide star technology, to correct for the atmospheric turbulence that disrupts the light under observation. They removed all kinds of turbulence and noise from their data and observed more than 3,000 stars for 12 years, from 1995 to 2007. Finally, they obtained a mass for the central object of 4.5 million times the mass of the Sun. She also studied the dynamics of and interactions between the stars. That observation again confirmed the earlier result that a dense, compact supermassive black hole resides in the galactic center.
The amazing work done by both independent teams has been recognized by the Royal Swedish Academy of Sciences, which said it “has given us the most convincing evidence yet of a supermassive black hole at the center of the Milky Way.” Keck Observatory Director Hilton Lewis praised Ghez as “one of our most passionate and tenacious Keck users.”
Andrea has also been acknowledged by many people at MIT in very kind words. Nergis Mavalvala said,
|
https://medium.com/mathphy-exclusive/black-holes-the-strangest-objects-in-the-universe-are-finally-acknowledged-by-the-nobel-community-ee87b8b056a6
|
['Areeba Merriam']
|
2020-10-15 04:02:56.319000+00:00
|
['Astronomy', 'Nobel Prize', 'Black Holes', 'Space', 'Science']
|
Title Black holes— Strangest Objects Universe Finally Acknowledged Nobel CommitteeContent Brief Explanation Work Roger Penrose wrote groundbreaking paper late Stephen Hawking 1970 presented paper new generalized theorem spacetime singularity revolutionized understanding spacetime proved universe obeys Einstein’s general theory relativity Friedmann’s model universe essentially singularity beginning According Nobel Prize website “Penrose used ingenious mathematical method proof black hole direct consequence Albert Einstein’s general theory relativity” time Nobel prize awarded theoretical observational discovery theoretical work done 1970s legend Stephan Hawking Roger Penrose observationally proved 21st century used Einstein’s general theory relativity proved singularity theorem ie existence point spacetime gravitational pull strong even light could escape mathematical manipulation time truth front u it’s obvious question ask didn’t receive Nobel prize knew black hole’s existence long answer Nobel community like theory experimentally observationally verified example Einstein predicted existence gravitational wave 1916 Nobel community awarded 2017 upon detection year awarded finally received incredible image supermassive black hole center M87 previous year brought light huge collaborative team EHT Event Horizon Telescope one wonder prize shared EHT team Roger Penrose Well hard swallow policy Nobel prize shared three people entire collaboration Professor Genzel Andrea Ghez done incredible job answer biggest mystery lie inside MilkyWay galaxy observing star near galactic center using highresolution infrared telescope Infrared light long wavelength visible light mean it’s blocked tiny particle interstellar dust space can’t see center galaxy visible light infrared light Hence way one measure position speed star locating near center MilkyWay work orbit star influence gravity around one calculate mass central object speculate size object center 1969 Donald LyndenBell Martin 
Rees put forward suggestion home galaxy might contain dense compact supermassive black hole core way verify core galaxy always hidden behind gas interstellar dust time Genzel working postdoctoral fellow UC Berkely late Charles Townes Nobel laureate collaborator asserted presented “remarkable technique measure accurately determine quite precisely mass behavior star circulating around galactic center” Together technique groundbased telescope ESO European Southern Observatory ie NTT New Technology Telescopes located Chile observed motion position ten star four year 1992 1996 collected data modeled orbit star derived mass central object 245 million time mass Sun one result hence generalized time Observation Genzel’s research group star orbiting galactic center also confirmed prediction Einsteins general theory relativity orbit star follows flowerlike pattern perihelion shift orbiting massive object Andrea Ghez done much precision team galactic center group UCLA highresolution WM Keck observatory’s telescope Hawaii observed interstellar medium dust around supermassive black hole ie Sagittarius used adaptive spectroscopy precisely laser technology avoid turbulence disrupts light observation atmosphere removed kind turbulence noise data observed 3000 star 12 year 1995 2007 Finally obtained result mass central object 45 million time mass Sun also studied dynamic interaction star observation confirmed previous result dense compact supermassive black hole resides galactic center amazing work done independent team recognized royal Swedish academy science said “has given u convincing evidence yet supermassive black hole center Milky Way” Keck Observatory Director Hilton Lewis admired Ghez “one passionate tenacious Keck users” Andrea acknowledged many people MIT kind word Nergis Mavalvala saidTags Astronomy Nobel Prize Black Holes Space Science
|
5,143 |
Constraints
|
Constraints, by definition, are not normally considered something positive; they mean limitations or restrictions. For me, they are a necessity.
Isn’t it amazing how constraints can help you realise how much time you waste?
Cleaning the house just in time before your parents get home.
Packing the suitcase just in time to go straight to the airport.
That assignment that has been half-way done for three weeks, but is sharply completed on the final day, just a few minutes before the deadline.
Living on a tight budget. Taking care of others on a minimum wage.
Constraints are a powerful thing and we should expose ourselves more to them.
I’ve been preaching this for a while, and probably have it as a personal motto: “An excess of resources is also a problem.”
For businesses.
For people.
For start-ups.
And even for governments.
The more you have of x, the more you waste.
Time, money, you name it.
The trick, for me, is to realise it: not to punish yourself over how much has been wasted, but to learn from it, aiming to turn those constraints in your favour.
How do you create those constraints that will allow you to work at your maximum, with less, while getting (in most cases) the best possible outcome?
For me it is all about time: wasting it and managing it. I’m always impressed by how much I’m able to do in short periods of time when pressure is involved.
This post is a clear example of me taking advantage of the circumstances when possible.
It has been written in about five minutes (aiming to edit later). I’m in a serious rush and very limited on time, having to leave the flat in 30 minutes, but not before completing two other tasks.
The idea just came to my mind and immediately thought it was worthwhile to write about it. I had two choices.
1 — Leave it for later and miss the momentum and inspiration; meaning I would probably pick up the subject later, have more time to work on it, and increase the possibilities of abandoning it until my next bright moment.
2 — Embrace the limitations and just get it done. Write it down as a draft, but put it all out there, looking for the best possible outcome, like it’s going live right away.
This is a clear exercise in number two, and I have to admit I’m very pleased. Not necessarily my best piece, but certainly not my worst either. And it’s done.
What is your constraint and how does it work for you?
|
https://medium.com/thoughts-on-the-go-journal/constraints-1276515d2e12
|
['Joseph Emmi']
|
2016-10-24 22:58:06.226000+00:00
|
['Personal', 'Constraints', 'Self Improvement', 'Life', 'Productivity']
|
Title ConstraintsContent Constraints definition normally considered something positive mean limitation restriction necessity amazing constraint help realise much time waste Cleaning house time parent get home Finishing suitcase int time go straight airport assignment halfway done three week sharply completed final day minute deadline Living tight budget Taking care minimum wage Constraints powerful thing expose I’ve preaching probably personal motto “Excess resource also problem” business people startup even government x waste Time money name trick realise punishing much wasted learn aiming maximise constraint favour tHow create constraint allow work max le getting case best possible outcome time wasting management I’m always impressed much I’m able short period time pressure involved post clear example taking advantage circumstance possible written 5 minute aiming edit later I’m serious rush I’m limited time leave flat 30 minute completing two task idea came mind immediately thought worthwhile write two choice 1 — Leave later miss momentum inspiration meaning probably pick subject later time work increasing possibility abandoning next bright moment 2 — Embrace limitation get done Write draft put looking best possible outcome like it’s going live right away clear exercise number two admit I’m pleased necessarily best piece certainly worst either it’s done constraint work youTags Personal Constraints Self Improvement Life Productivity
|
5,144 |
[Java-201a] The Hidden Dangers of Scanner
|
Don’t Close Scanner!
When we use Scanner and leave it open, our IDE may complain about a resource leak. Even if the IDE does not complain, it is usually a good idea to close something we no longer need. The intuition, then, is to simply close it… right? Consider the following code snippet:
public class Main {
public static void getInput1() {
Scanner scanner = new Scanner(System.in);
scanner.nextLine(); // Ask for user input
scanner.close();
}
public static void getInput2() {
Scanner scanner = new Scanner(System.in);
scanner.nextLine(); // Ask for user input
scanner.close();
}
public static void main(String[] args) {
getInput1();
getInput2();
}
}
At a glance, there is nothing wrong with the code. We create a new Scanner, close it, create another one, and close that one. However, if we try to run it, this is what we get:
ss
Exception in thread "main" java.util.NoSuchElementException: No line found
at java.util.Scanner.nextLine(Scanner.java:1540)
at com.usc.csci201x.Main.getInput2(Main.java:17)
at com.usc.csci201x.Main.main(Main.java:23)
Process finished with exit code 1
I typed ss.
What’s going on here, then? Observe the constructor of Scanner when we instantiated it: we gave it System.in. That’s an okay thing to do. However, it has an implication that may not be obvious: when we call scanner.close(), we are closing System.in as well.
What is System.in ?
System.in is the standard input stream opened by the JVM. Normally it stays open throughout the lifetime of the program so we can capture user input. However, if we close it prematurely, we will not be able to capture user input anymore. Hence, when we try to read from System.in again, it tells us No line found.
The Solution
Just don’t close Scanner.
If you must close Scanner, you can wrap System.in inside of a FilterInputStream and override the close() method:
Scanner scanner = new Scanner(new FilterInputStream(System.in) {
@Override
public void close() throws IOException {
//don't close System.in!
}
});
And now it is safe to call scanner.close() as it will only close FilterInputStream instead of System.in .
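To see that the wrapper really does swallow close(), here is a small, self-contained sketch. The class and method names are my own, and it substitutes a ByteArrayInputStream for System.in so it can run without console input; the idea is the same as in the article’s snippet:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.util.Scanner;

public class ScannerCloseDemo {
    // A ByteArrayInputStream that records whether close() ever reached it.
    static class TrackingStream extends ByteArrayInputStream {
        boolean closed = false;
        TrackingStream(byte[] buf) { super(buf); }
        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    // Returns true if Scanner.close() propagated to the underlying stream.
    static boolean closePropagates() {
        TrackingStream underlying = new TrackingStream("hello\n".getBytes());
        Scanner scanner = new Scanner(new FilterInputStream(underlying) {
            @Override
            public void close() {
                // Deliberately do nothing, so the underlying stream stays open.
            }
        });
        scanner.nextLine(); // consume "hello"
        scanner.close();    // closes only the FilterInputStream wrapper
        return underlying.closed;
    }

    public static void main(String[] args) {
        // Prints false: the underlying stream was never closed.
        System.out.println("underlying closed: " + closePropagates());
    }
}
```

With System.in in place of the TrackingStream, the same trick keeps standard input open after scanner.close().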
If your application is single-threaded, consider making Scanner static. Maybe like so:
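The article’s own snippet is not preserved here; a minimal sketch of what a shared static Scanner might look like (the class and field names are illustrative, not from the original):

```java
import java.util.Scanner;

public class InputUtil {
    // One Scanner shared by the whole program; it is never closed,
    // so System.in stays open for the program's entire lifetime.
    public static final Scanner SCANNER = new Scanner(System.in);

    public static String readLine() {
        return SCANNER.nextLine();
    }
}
```

Every part of the program then reads through the same instance instead of constructing and closing its own Scanner.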
|
https://medium.com/swlh/java-201a-the-hidden-dangers-of-scanner-7c8d651a1943
|
['Jack Boyuan Xu']
|
2020-01-25 07:17:51.034000+00:00
|
['USC', 'Viterbi', 'Programming', 'Java']
|
Title Java201a Hidden Dangers ScannerContent Don’t Close Scanner use Scanner leave open IDE may complain resource leak Even IDE complain usually it’s good idea close something don’t need anymore intuition simply close it… Right Consider following code snippet public class Main public static void getInput1 Scanner scanner new ScannerSystemin scannernextLine Ask user input scannerclose public static void getInput2 Scanner scanner new ScannerSystemin scannernextLine Ask user input scannerclose public static void mainString args getInput1 getInput2 glance nothing wrong code create new Scanner close create another one close one However try run get s Exception thread main javautilNoSuchElementException line found javautilScannernextLineScannerjava1540 comusccsci201xMaingetInput2Mainjava17 comusccsci201xMainmainMainjava23 Process finished exit code 1 typed s What’s going Observe constructor Scanner instantiated gave Systemin That’s okay thing However implication may obvious scannerclose closing Systemin well Systemin Systemin system input stream opened JVM Normally stay open throughout lifetime program capture user input However close prematurely able capture user input anymore Hence try read input Systemin tell u line found Solution don’t close Scanner must close Scanner wrap Systemin inside FilterInputStream override close method Scanner scanner new Scannernew FilterInputStreamSystemin Override public void close throw IOException dont close Systemin safe call scannerclose close FilterInputStream instead Systemin application singlethreaded consider making Scanner static Maybe like soTags USC Viterbi Programming Java
|
5,145 |
3 to read: The trouble with platforms & newsrooms | ‘Darts & Laurels’ goes monthly | Planning a…
|
3 to read: The trouble with platforms & newsrooms | ‘Darts & Laurels’ goes monthly | Planning a data story
By Matt Carroll <@MattatMIT>
A good week: Platforms vs newsrooms; ‘Darts & Laurels’ goes monthly (yeah!); and some help for writing data stories.
Get notified via email: Send note to 3toread (at) gmail.com
“3 to read” online
Matt Carroll runs the Future of News initiative at the MIT Media Lab.
|
https://medium.com/3-to-read/3-to-read-the-trouble-with-platforms-newsrooms-darts-laurels-goes-monthly-planning-a-9000c065a08c
|
['Matt Carroll']
|
2016-07-13 12:01:12.473000+00:00
|
['Journalism', 'Cuny', 'Media', 'Data', 'Data Visualization']
|
Title 3 read trouble platform newsroom ‘Darts Laurels’ go monthly Planning a…Content 3 read trouble platform newsroom ‘Darts Laurels’ go monthly Planning data story Matt Carroll MattatMIT good week Platforms v newsroom ‘Darts Laurels’ go monthly yeah help writing data story Get notified via email Send note 3toread gmailcom “3 read” online Matt Carroll run Future News initiative MIT Media LabTags Journalism Cuny Media Data Data Visualization
|
5,146 |
Exploring New York City Event Permits with Vega-Lite
|
By visualizing information, we turn it into a landscape that you can explore with your eyes, a sort of information map. And when you’re lost in information, an information map is kind of useful. - David McCandless (Journalist and Information Designer)
You’ve landed on one of the tens of thousands of datasets in Enigma Public, the world’s broadest repository of public data, and are interested in exploring it visually. Given the infinite range of visual forms that are possible to represent any dataset, the options may feel overwhelming. A good tool helps to constrain this vast design space, and gives the user a solid set of principles to build upon.
Vega-Lite is a layer of abstraction on top of d3.js. Developed at the University of Washington Interactive Data Lab, it is a web-based “grammar of graphics” that gives users the power to rapidly experiment with different visual encodings for their data. As a web-based tool, it lets the user create both static and interactive data graphics, making it an excellent item to have in any data explorer’s toolbox.
Click to read my interactive tutorial on exploring data with Vega-Lite!
Never heard of Observable Notebooks before? Read on!
Data scientists and journalists alike love using “notebook”-style tools such as Jupyter (in contrast to plain text editors) for many reasons, including
Ability to present text, code, and graphics side-by-side
Ability to run and iterate on code one section at a time through “cells”
A better overall coding experience
Observablehq is a free, web-based notebook for data science, founded by a team of folks with roots in the open-source data visualization community (Mike Bostock, Tom MacWright). Unlike Jupyter, readers can view and run Observable notebooks without needing to install anything, making it an ideal tool for sharing reproducible and interactive analyses.
_______________________________________________________________
Interested in solving complex, unique problems? We’re hiring.
|
https://medium.com/enigma-engineering/exploring-new-york-city-event-permits-with-vega-lite-f83178ff9a8d
|
['Cameron Yick']
|
2018-04-03 14:06:39.412000+00:00
|
['Open Data', 'D3js', 'Vega Lite', 'Engineering', 'Data Visualization']
|
Title Exploring New York City Event Permits VegaLiteContent visualizing information turn landscape explore eye sort information map you’re lost information information map kind useful David McCandless Journalist Information Designer You’ve landed one ten thousand datasets Enigma Public world’s broadest repository public data interested exploring visually Given infinite range visual form possible represent dataset option may feel overwhelming good tool help constrain vast design space give user solid set principle build upon VegaLite layer abstraction top d3js Developed University Washington Interactive Data Lab webbased “grammar graphics” give user power rapidly experiment different visual encoding data web based tool let user create static interactive data graphic making excellent item data explorer’s toolbox Click read interactive tutorial exploring data VegaLite Never heard Observable Notebooks Read Data scientist journalist alike love using “notebook”style tool Jupyter contrast plain text editor many reason including Ability present text code graphic sidebyside Ability run iterate code one section time “cells” better overall coding experience Observablehq free webbased notebook data science founded team folk root opensource data visualization community Mike Bostock Tom MacWright Unlike Jupyter reader view run Observable notebook without needing install anything making ideal tool sharing reproducible interactive analysis Interested solving complex unique problem We’re hiringTags Open Data D3js Vega Lite Engineering Data Visualization
|
5,147 |
The CIA’s War On WikiLeaks Founder Julian Assange
|
(Image: Lance Page / t r u t h o u t; Adapted: public domain / Wikimedia)
On behalf of the Central Intelligence Agency, a Spanish security company called Undercover Global spied on WikiLeaks founder Julian Assange while he was living in the Ecuador embassy in London.
The Spanish newspaper El Pais reported on September 25 that the company’s CEO David Morales repeatedly handed over audio and video. When cameras were installed in the embassy in December 2017, “Morales requested that his technicians install an external streaming access point in the same area so that all of the recordings could be accessed instantly by the United States.”
Technicians planted microphones in the embassy’s fire extinguishers, as well as the women’s bathroom, where Assange held regular meetings with his lawyers — Melynda Taylor, Jennifer Robinson, and Baltasar Garzon.
Morales’ company was hired by Ecuador, but Ecuador apparently had no idea that Morales formed a relationship with the CIA.
The world laughed at Assange when it was reported in a book from David Leigh and Luke Harding that he once dressed as an old woman because he believed CIA agents were following him. It doesn’t seem as absurd now.
A Tremendous Coup for the CIA
WikiLeaks founder Julian Assange as he was expelled from Ecuador embassy. Screenshot of Ruptly coverage.
Julian Assange was expelled from the embassy and arrested by British authorities on April 11. It was subsequently revealed that the U.S. Justice Department indicted him on a conspiracy to commit a computer crime charge, and in May, a superseding indictment charged him with several violations of the Espionage Act.
He became the first journalist to be indicted under the 1917 law, which was passed to criminalize “seditious” conduct during World War I.
The WikiLeaks founder was incarcerated at Her Majesty’s Prison Belmarsh in London. A court found him guilty of violating bail conditions when he sought political asylum from Ecuador in 2012. He was sentenced to 50 weeks in prison. But following his sentence, authorities refused to release him. They decided Assange should remain in the facility until a February hearing, where the U.S. government will argue for his extradition.
The expulsion, arrest, and jailing of Assange represented a tremendous coup for the CIA, which views WikiLeaks as a “hostile intelligence service.”
“It is time to call out WikiLeaks for what it really is — a non-state hostile intelligence service often abetted by state actors like Russia,” Mike Pompeo declared in April 2017, when he was CIA director.
“Julian Assange and his kind are not the slightest bit interested in improving civil liberties or enhancing personal freedom. They have pretended that America’s First Amendment freedoms shield them from justice. They may have believed that, but they are wrong.”
Pompeo added, “Assange is a narcissist who has created nothing of value. He relies on the dirty work of others to make himself famous. He is a fraud — a coward hiding behind a screen. And in Kansas [Pompeo was a representative from Kansas], we know something about false wizards.”
Unwanted Scrutiny
The CIA’s loathing for Assange stems from the fact that the dissident media organization exposed the agency to unwanted scrutiny for its actions numerous times.
In 2010, WikiLeaks published two Red Cell memos from the CIA. One memo from March 2010 outlined “pressure points” the agency could focus upon to sustain western European support for the Afghanistan War. It brazenly suggested “public apathy enables leaders to ignore voters” because only a fraction of French and German respondents identified the war as “the most urgent issue facing their nation.”
The second memo from February 2010 examined what would happen if the U.S. was viewed as an incubator and “exporter of terrorism.” It warned, “Foreign partners may be less willing to cooperate with the United States on extrajudicial activities, including detention, transfer [rendition], and interrogation of suspects in third party countries.”
“If foreign regimes believe the U.S. position on rendition is too one-sided, favoring the U.S. but not them, they could obstruct U.S. efforts to detain terrorism suspects. For example, in 2005 Italy issued criminal arrest warrants for U.S. agents involved in the abduction of an Egyptian cleric and his rendition to Egypt. The proliferation of such cases would not only challenge U.S. bilateral relations with other countries but also damage global counterterrorism efforts,” the February memo added.
On these memos, which were disclosed by U.S. military whistleblower Chelsea Manning, she said, “The content of two of these documents upset me greatly. I had difficulty believing what this section was doing.”
CIA Renditions Further Exposed
More than 250,000 diplomatic cables from the U.S. State Department, largely from the period of 2003–2010, were provided by Manning to WikiLeaks. There were several that brought unwanted scrutiny to the CIA.
The CIA abducted Khaled el-Masri in 2003. He was beaten, stripped naked, violated by a suppository, chained spread-eagled on an aircraft, injected with drugs, and flown to a secret CIA prison in Kabul known as the “Salt Pit.” El-Masri was tortured and eventually went on hunger strike, which led to personnel force-feeding him. He was released in May 2004, after the CIA realized they had the wrong man.
Cables showed the pressure the U.S. government applied to German prosecutors and officials so 13 CIA agents, who were allegedly involved in el-Masri’s abduction, escaped accountability. They were urged to “weigh carefully at every step of the way the implications for relations.”
Pressure was also applied to prosecutors and officials in Germany. They feared that magistrate Baltasar Garzón, who is now one of Assange’s attorneys, would investigate CIA rendition flights.
The cache of documents brought attention to Sweden’s decision to curtail CIA rendition flights after Swedish authorities realized stopovers were made at Stockholm’s Arlanda International Airport.
During the “Arab Spring,” cables from Egypt showed Omar Suleiman, the former intelligence chief who Egyptian president Hosni Mubarak selected as his potential successor, highlighted his collaboration with the CIA. Suleiman oversaw the rendition and torture of dozens of detainees. Abu Omar, who was kidnapped by the CIA in Milan in 2003, was tortured when Suleiman was intelligence chief.
The world also learned that the CIA drew up a “spying wishlist” for diplomats at the United Nations. The list targeted UN Secretary General Ban Ki-moon and other senior members. The agency sought “foreign diplomats’ internet user account details and passwords,” as well as “biometric” details of “current emerging leaders and advisers.” It was quite an embarrassing revelation for the CIA.
As cables spread in the international media, the CIA launched the WikiLeaks Task Force to assess the impacts of the disclosures.
Documents revealed by NSA whistleblower Edward Snowden showed during this same period the security agencies had a “Manhunting Timeline” for Assange. They pressured Australia, Britain, Germany, Iceland, and other Western governments to concoct a prosecution against him.
Several NSA analysts even wanted WikiLeaks to be designated a “malicious foreign actor” so the organization and its associates could be targeted with surveillance, an attitude likely supported by CIA personnel.
‘We Look Forward To Sharing Great Classified Info About You’
The CIA joined Twitter in June 2014. WikiLeaks welcomed the CIA by tweeting at the agency, “We look forward to sharing great classified info about you.” They shared links to the Red Cell memos and a link to a search for “CIA” documents in their website’s database.
By December, the media organization published a CIA report on the agency’s “high value target” assassination program. It assessed attacks on insurgent groups in Afghanistan, Algeria, Chechnya, Colombia, Iraq, Israel, Libya, Northern Ireland, Pakistan, Peru, Sri Lanka, and Thailand.
The review acknowledged such operations, which include drone strikes, “increase the level of insurgent support,” especially if the strikes “enhance insurgent leaders’ lore, if noncombatants are killed in the attacks, if legitimate or semilegitimate politicians aligned with the insurgents are targeted, or if the government is already seen as overly repressive or violent.”
WikiLeaks also released two internal CIA documents from 2011 and 2012 detailing how spies should elude secondary screenings at airports and maintain their cover. The CIA was concerned that the Schengen Area — ”a group of 26 European countries that have abolished passport control at shared borders” — would make it harder for operatives because member states planned to subject travelers to biometric security measures.
After CIA director John Brennan had his personal AOL account breached by hackers, the contents were provided to WikiLeaks for a series of publications that took place in October 2015.
Julian Assange. Photo by Ministerio de Cultura de la Nación Argentina (culturaargentina) on Flickr.
U.S. Intelligence Steps Up Effort To Discredit WikiLeaks
As Democratic presidential candidate Hillary Clinton campaigned against Donald Trump, WikiLeaks published emails from John Podesta, chairman of the Clinton campaign. The national security establishment alleged the publication was part of a Russian plot to interfere in the 2016 election.
Assange held a press conference in January 2017, where he countered, “Even if you accept that the Russian intelligence services hacked Democratic Party institutions, as it is normal for the major intelligence services to hack each others’ major political parties on a constant basis to obtain intelligence,” you have to ask, “What was the intent of those Russian hacks? And do they connect to our publications? Or is it simply incidental?”
“The U.S. intelligence community is not aware of when WikiLeaks obtained its material or when the sequencing of our material was done or how we obtained our material directly. So there seems to be a great fog in the connection to WikiLeaks,” Assange contended.
He maintained, “As we have already stated, WikiLeaks sources in relation to the Podesta emails and the DNC leak are not members of any government. They are not state parties. They do not come from the Russian government.”
“The [Clinton campaign] emails that we released during the election dated up to March [2016]. U.S. intelligence services and consultants for the DNC say Russian intelligence services started hacking DNC in 2015. Now, Trump is clearly not on the horizon in any substantial manner in 2015,” Assange added.
Yet, in the information war between WikiLeaks and the U.S. government, Brennan responded during an appearance on PBS’ “NewsHour.” “[Assange is] not exactly a bastion of truth and integrity. And so therefore I wouldn’t ascribe to any of these individuals making comments that [they are] providing the whole unvarnished truth.”
Special Counsel Robert Mueller oversaw a wide-ranging investigation into alleged Russian interference in the 2016 election. The report, released in April 2019, did not confirm, without a doubt, that Russian intelligence agents or individuals tied to Russian intelligence agencies passed on the emails from the Clinton campaign to WikiLeaks.
CIA Loses Control Of Largest Batch Of Documents Ever
Mike Pompeo, CIA director from January 2017 to April 2018 (Photo: U.S. Government)
In February 2017, WikiLeaks published “CIA espionage orders” that called attention to how all of the major political parties in France were “targeted for infiltration” in the run-up to the 2012 presidential election.
The media organization followed that with the “Vault 7” materials — what they described as the “largest ever publication of confidential documents on the agency.” It was hugely embarrassing for the agency.
“The CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized ‘zero day’ exploits, malware remote control systems and associated documentation,” WikiLeaks declared in a press release. “This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.”
“The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive,” WikiLeaks added.
Nearly 9,000 documents came from “an isolated, high-security network inside the CIA’s Center for Cyber Intelligence.” (WikiLeaks indicated the espionage orders published in February were from this cache of information.)
The publication brought scrutiny to the CIA’s “fleet of hackers,” who targeted smartphones and computers. It exposed a program called “Weeping Angel” that made it possible for the CIA to attack Samsung F8000 TVs and convert them into spying devices.
As CNBC reported, the CIA had 14 “zero-day exploits,” which were “software vulnerabilities” that had no fix yet. The agency used them to “hack Apple’s iOS devices such as iPads and iPhones.” Documents showed the “exploits were shared with other organizations including the National Security Agency (NSA) and GCHQ, another U.K. spy agency. The CIA did not tell Apple about these vulnerabilities.”
WikiLeaks additionally revealed that CIA targeted Microsoft Windows, as well as Signal and WhatsApp users, with malware.
The CIA responded, “The American public should be deeply troubled by any Wikileaks disclosure designed to damage the intelligence community’s ability to protect America against terrorists and other adversaries. Such disclosures not only jeopardize U.S. personnel and operations but also equip our adversaries with tools and information to do us harm.”
But the damage was done. The CIA was forced to engage with the allegations by insisting the agency’s activities are “subject to oversight to ensure that they comply fully with U.S. law and the Constitution.” Apple, Samsung, and Microsoft took the disclosures very seriously.
Assange attempted to force a public debate that high-ranking CIA officials did not want to have.
“There is an extreme proliferation risk in the development of cyber ‘weapons,’” Assange stated. “Comparisons can be drawn between the uncontrolled proliferation of such ‘weapons,’ which results from the inability to contain them combined with their high market value, and the global arms trade. But the significance of ‘Year Zero’ goes well beyond the choice between cyberwar and cyberpeace.”
(Note: Josh Schulte, a former CIA employee, was charged with violating the Espionage Act when he allegedly disclosed the files to WikiLeaks. He was jailed at the Metropolitan Correctional Center in New York.)
CIA Exploits New Leadership In Ecuador
Lenín Moreno was elected president of Ecuador in May 2017. At the time, the U.S. Justice Department had essentially abandoned its grand jury investigation into WikiLeaks. President Barack Obama’s administration declined to pursue charges against Assange. But officials in the national security apparatus recognized a political shift in Ecuador and exploited it.
By December, the CIA was able to fight back against Assange and WikiLeaks by installing spying devices in the Ecuador embassy.
Former CIA officer John Kiriakou contended, “The attitude at the CIA is that he really did commit espionage. This isn’t about freedom of speech or freedom of the press because they don’t care about freedom of speech or freedom of the press. All they care about is controlling the flow of information and so Julian was a threat to them.”
Recall, as the Senate intelligence committee compiled a study on the CIA’s rendition, detention, and interrogation program, the CIA flouted restrictions on domestic spying and targeted Senate staff. Personnel even hacked into Senate computers.
“The CIA likes nothing more than being able to operate unfettered,” Kiriakou further declared.
He also commented, “[Moreno] did the CIA’s bidding. I have no idea why he would do such a thing, but he was the perfect person to take over the leadership of Ecuador at exactly the time that the CIA needed a friend there.”
As 2018 progressed, restrictions imposed by the Ecuador government on what Assange was allowed to do on the internet and in his daily work for WikiLeaks intensified.
A doctor named Sondra Crosby, who evaluated Assange’s health on February 23, described the embassy surveillance she experienced during her visit. At one point she left the embassy to pick up some food and returned to the room where they were meeting to find that her confidential medical notes had been taken. She found the notes “in a space utilized by embassy surveillance staff” and presumed they had been read, a violation of doctor-patient confidentiality.
Forcing the removal of Assange from the embassy was a major victory for the CIA, and if prosecutors win his extradition to the United States, the agency will have a hand in how the trial unfolds.
|
https://medium.com/discourse/the-cias-war-on-wikileaks-founder-julian-assange-4a26b78fa042
|
['Kevin Gosztola']
|
2019-10-07 12:44:03.624000+00:00
|
['Politics', 'Wikileaks', 'News', 'CIA', 'Journalism']
|
|
5,148 |
Marketing lessons from ‘The Jetsons’
|
Marketing lessons from ‘The Jetsons’
What can a 58-year-old TV show teach us about modern sales?
Image via Mark Anderson on Flickr
In 2020, there’s no way to deny that technology is taking over — it’s cleaning your floors, monitoring your home for burglars, paying for your groceries, allowing you to eavesdrop on live-in guests, and controlling your television and computer viewing. Mostly it’s convenient and fun to use. Other times it can be creepy.
But “The Jetsons” (whether the ’60s original or the ’80s revival) managed to make tech-savvy business and home life more cool than invasive. There were no Echo users whose recorded conversations got sent to someone’s employer. Hackers weren’t breaking into Amazon Ring cameras to harass small children. Alexa and Google Home weren’t targeting users with phishing schemes to force unauthorized password changes.
But if you look closely enough, the Jetsons had to do their fair share of marketing cleanup in a tech-crazed world too. Here’s how I saw them do it.
|
https://medium.com/we-need-to-talk/marketing-lessons-from-the-jetsons-b40ecfabe269
|
['Shamontiel L. Vaughn']
|
2020-10-12 02:35:37.146000+00:00
|
['Technology', 'Marketing', 'Advertising', 'Space', 'Inventions']
|
|
5,149 |
How to Write a Good Essay
|
So you need to learn how to write a good essay. This may seem like a pretty intimidating task, but it’s really not that bad when you take the time to know and understand what you’re doing.
A standard essay has a lot of working parts. There’s the formatting, thesis statement, writing structure, grammar and punctuation, and much more. It can seem overwhelming when you think about how many elements you need to remember. But it doesn’t have to be that hard. With the right advice, you can get ahead and make sure that you turn in a paper that will blow your professor’s mind and get you the grade you need to ace your class.
Ready to learn how to write a good essay? We’ll walk you through it, from beginning to end. With our help, you can learn and understand exactly what goes into an A+ essay. Let’s start at the beginning.
Types of Essays and Papers
First, it’s good to take a look at the different types of essays that you could be writing. Each type of essay will have different requirements or formats that you should follow in order to complete the best work possible.
Here are some of the more common essay assignments you may need to write during your time at school:
● Argumentative Essay: This type of essay will present an argument to the reader and provide solid evidence as to why they should agree with your stance.
● Research Essay: A research essay takes an in-depth look at a specific topic using lots of reliable and academic sources, facts, and other data. It’s similar to the expository essay below.
● Expository Essay: This type of essay is used to explain something without taking a particular stance. When writing this paper, assume that you are writing for an audience that knows nothing about the topic and provide them with facts and data.
● Compare/Contrast Essay: With a compare/contrast essay, you are taking two things and analyzing them to showcase their similarities and differences.
● Personal or Reflective Essay: Generally, this type of essay doesn’t follow a typical format and can use the first-person voice to reflect on your thoughts and experiences about something specific.
● Literature Review: A literature review essentially provides an overview of the literature and research that has already been done about a particular topic.
● Book Review: A book review essay is done to provide a critical analysis about a book or other piece of literature. It generally includes a summary and assessment.
How to Start an Essay
If you’re not overly familiar with how to write a good essay, it can be tricky to know where to start. This is the point where most people sit down, stare at a blank document, and start to get stressed. Don’t let yourself get stressed out before you’ve even done anything.
Every good essay starts with a topic and a plan. Begin by determining which type of essay you’re going to write. This helps you pick the right topic. For example, if you’re writing an argumentative essay, you want to make sure that you choose a topic you have an opinion about and can argue one way or another. If you’re writing a research paper, you want to make sure you choose a topic that you can find a lot of academic research about.
So, with that being said, it’s time to choose your topic.
Choosing the Right Topic For Your Paper
Choose your topic wisely. A good topic makes a big difference when it comes to your paper. It’s what drives all of your research, defines your writing, and keeps people interested — including yourself. Do you really want to spend the next few weeks writing about some topic you couldn’t care less about? Probably not. Don’t make things harder on yourself. Put some thought into this portion of your paper, or you’ll really regret it when you sit down to write.
It Should Be Interesting to You
You’re going to be doing a lot of reading and writing about this topic, so you should always choose something you’re interested in wherever possible. Sometimes you’re given your topic and don’t have a choice, but you can still spin it so that it’s something that interests you. This is incredibly important. You’re going to be sifting through academic journals and dedicating a lot of your time becoming an expert in this topic. Make sure you’re not going to get bored.
Being interested in the topic also helps you write content that really engages your reader and hooks them right away. When you’re excited about something, you want to show all of the facts and present the best argument about that topic. If you aren’t interested in what you’re writing about, how can you sell that topic to your reader?
Do the Research First
Start with some research. Don’t make a decision until you’ve taken a look at what’s out there and how much research you’re actually going to find. Doing initial research often helps you identify trends in the topic and spot research questions that come up more than others. If a certain question or issue keeps popping up during that initial research, the pattern can guide you, because it may be something worth looking into.
Start Broad, Then Narrow It Down
Your topic should be something that you can narrow down to one statement or argument. Start with a broad topic that you know you want to write about (or that you have to write about as per your teacher’s request). Then, think about smaller topics within that broad argument, and figure out how you want to get specific. Find your niche and go with it.
You can’t simply take a broad topic and write about it. This is not the best way to learn how to write a good essay. You’ll find way too much research to actually make a point about something, and your essay will just be filled with generic information. This makes it really hard to find the focus of your paper, which will score you a lower grade.
For example, a topic about World War II would be really broad for one essay. Instead, you could narrow that topic down to one specific topic about World War II. So, if you’re writing an argumentative essay, you could choose the topic “why aerial warfare during World War II changed modern warfare” or “contributions by women during World War II.”
However, be cautious about being too narrow with your topic. Make sure you can still find enough relevant information before you start writing. And don’t worry — you can always adjust your thesis statement after you start writing. In fact, this happens to the best of the best more often than you can imagine. It’s all part of the writing process.
Crafting the Perfect Thesis Statement
Your thesis statement is the most important part of your essay. It’s the argument or statement that will guide the rest of your paper. You will use your thesis statement to structure your entire paper, guide your research, determine what points you should include, and formulate the overall argument that demonstrates your knowledge and opinions on the subject.
A thesis statement is basically your answer to a research question. Think about what you want to answer within your paper. This question could be something basic, such as “why were William Shakespeare’s plays and sonnets important to the English language?” Once you have your question, think about your answer, and put it into a sentence. So, for this particular question, your thesis statement could look something like this:
William Shakespeare’s plays and sonnets were important to the English language because they developed many words and terms still used today, he was the first writer to use modern prose, and he set a precedent that today’s playwrights still follow.
Now, this is still a broad thesis statement because you could fill up pages and pages about each of those arguments. But you can see the idea of how we are trying to narrow down your thesis and formulate arguments that answer the research question you’ve selected. Don’t be afraid to continue narrowing down your thesis and refining it until you’ve hit something perfectly narrow.
A thesis statement should also act as an outline for your paper, which tells your readers what you’re going to present to them and how you will be organizing that argument. It is not uncommon to see thesis statements that state outright what the paper is aiming to do. For example, you could use a thesis statement that looks like this:
This research paper will examine the contributions William Shakespeare made to the English language by analyzing his use of modern prose in three of his plays: Richard III, Hamlet, and Titus Andronicus.
Generally, your thesis should be a maximum of one to two sentences. If you can’t explain your argument or the purpose of your paper within two sentences, you need to narrow it down further or find another way to describe what you’re thinking.
Decide On the Right Essay Format to Use, Then Make an Outline
Once you’ve decided on your perfect thesis statement, you can start to plan out how your essay will be structured in a nice outline. Some professors will ask you to provide your outline before you start the research paper as an initial assignment. However, even if your professor doesn’t ask for this, you should still make sure you always use an outline to help yourself as you write.
This is one of the biggest secrets of learning how to write a good essay. A good outline gives you something to follow and helps you stay on track without getting sidetracked. Once you’ve written a couple of papers with an outline, you won’t want to write without one again.
The Importance of an Essay Outline
Making an outline for your essay can be a major help when it comes to your research and writing. It will guide you as you begin to write your paper, ensuring that you stay organized and follow your thesis statement. A structured essay outline also clarifies what you need to write about and where you should look for sources and information, so you only gather material that serves your paper instead of getting distracted by unnecessary details.
Your outline should, of course, follow the specific format for your essay. The professor of your course will have likely provided you with essay assignment instructions, which sometimes include the format you should be using. Determining which essay format to follow comes down to two main factors: the type of essay you’re writing, and the referencing style you’re using. Sometimes your professor will tell you which style guide to follow, while others will give you the choice.
Standard Essay Format: Building a Tasty Burger
Most essays follow the standard format of an introduction, body paragraphs for each argument or statement, and a conclusion. You will often see this type of essay format being described as the Hamburger Outline. That’s because the meat, cheese, and toppings (your body paragraphs and the bulk of your argument) are in the middle, while the buns hold it together and round it out (your introduction and conclusion). This also goes for each individual paragraph: each point needs a topic sentence and a conclusion sentence to round it out, just like burger buns.
Here’s a basic outline you should follow according to the standard burger outline:
1. Introduction Paragraph
a. The first sentence should be catchy and attention-grabbing.
b. Then, introduce the topic and provide some basic background about what you’re going to be covering.
c. The last line should be your thesis statement.
2. Body Paragraph 1: First Argument or Point
a. Start with a topic sentence introducing the point you’ll be making in that paragraph.
b. Use evidence and sources to make your points.
c. Write a transition sentence that concludes your argument and leads into the next paragraph.
3. Body Paragraph 2: Second Argument or Point
a. Start with the topic sentence introducing your point and arguments.
b. Use evidence and sources to make your points.
c. Add the transition sentence to lead into the next paragraph.
4. Body Paragraph 3: Third Argument or Point
a. Start with your topic sentence.
b. Add your evidence.
c. Conclude with your transition sentence.
5. Conclusion Paragraph
a. Restate your thesis statement (not word for word, though).
b. Summarize your arguments and provide further questions/thoughts, or relate your arguments to a greater context.
Specific Essay Formats For Different Types of Papers
If you’re writing a specific type of essay, your paper structure might look slightly different than the standard burger format. However, they’re all going to follow the basic concept of the introduction, body paragraphs, and conclusion.
For example, argumentative essays look a little different. Argumentative essay format generally contains a section where objections or opposing viewpoints are expressed and rebutted. You want to make sure this comes after your main arguments and before your conclusion. Some argumentative essays also include a section for rebuttal after each main argument, showcasing that you have acknowledged both sides of the story.
How to Write a Good Essay Using the Proper Referencing Styles
It’s important that you properly use the specified referencing style in your paper. You could lose marks simply for not following these guidelines. These are lost marks that could easily be avoided if you check the online referencing guides and take the time to follow the right instructions set out by each style manual.
There are three main referencing styles used for most academic papers: MLA, APA, and Chicago/Turabian. If your program is more specialized, you may be required to use other citation styles, such as ASA or Harvard. However, these three are the most common, and you will likely use at least one of them during your time in school.
MLA Citation
Modern Language Association (MLA) citation is a general format typically used in the humanities. A typical in-text citation using MLA contains the author’s last name and the page number. Here is an example (with a completely fabricated fact):
Shakespeare’s Macbeth is commonly associated with the Gunpowder Plot of 1605 and the subsequent execution of Henry Garnet for crimes of treason (Hudson 22).
When using MLA, your sources will be listed at the end of the paper in a separate Works Cited page. For a full guide on MLA citations and references, visit our handy MLA citation guide. However, to give you some idea, a typical MLA Works Cited entry for a book looks like this:
Hudson, Mila. A Global Guide to Shakespeare. University of Pennsylvania Press, 2008.
Papers using MLA citation style do not require a title page; instead, the student’s name, the professor’s name, the class title, and the date go in the upper left corner of the first page, with the title centered on the next line. Page numbers appear in the top right corner of each page, preceded by the student’s last name.
APA Citation
American Psychological Association (APA) style is commonly used for papers in the social and behavioral sciences. It’s a little trickier than MLA because there are more specifics to follow. In-text citations include the author’s last name, the date of publication, and the page number. They look like this:
One study found that one in four Americans is diagnosed with ADHD (Ingers, 2004, p. 324).
Sources are listed at the end of the paper on a separate References page. Generally, titles are written in sentence case, with capitals only for proper nouns and the first word. A typical reference for an academic journal article looks like this:
Ingers, E. (2004). ADHD clinical trial studies in small town America: Finding solutions for young children. The Journal of Social Science Research, 14(3), 296–340.
Your paper should include a title page with the title centered on the page, then the institution name and the student’s name on their own lines two to three lines below the title. Page numbers go in the top right corner, and a shortened title in all capitals appears in the top left of each page as the running head. The title page header is slightly different: there, the words “Running head:” precede the title.
Here is an in-depth guide on how to cite specific sources in APA, including some examples if you’re not sure about what you’re doing.
Chicago/Turabian Citation
Chicago/Turabian citation is a very common style for history papers, and is also used in the fine arts and business-related subjects. It uses the notes-bibliography format: footnotes with a short-form reference at the bottom of each page, plus a full bibliography at the end of the paper. The first footnote for a given source is a full version, slightly modified from the bibliography entry, and any subsequent footnotes are shortened.
Here is an example using a completely made-up source from a peer-reviewed journal. In the text, the sentence would be followed by the footnote number.
First Footnote: John Hughes, “Kamikaze Fighters in World War II,” The Journal of War History 22, no. 1 (March 2002): 68.
Subsequent Footnotes: Hughes, “Kamikaze Fighters,” 68.
Bibliography Entry:
Hughes, John. “Kamikaze Fighters in World War II.” The Journal of War History 22, no. 1 (March 2002): 50–80.
Papers using the Chicago style citation generally include a title page, with the title of the paper centered in the middle, and then the student’s name, the professor’s name, class title, and date on their own lines a few spaces down from the title.
Don’t Overlook the Introduction
The introduction of your paper is extremely important. When learning how to write a good essay, think about it from the perspective of the reader. One of the first things you’ll notice is the introduction. This is where you’re going to hook your reader and write something catchy that makes them want to keep reading. You have to give your reader enough information to understand what you’re getting at, without spilling the arguments and evidence you’re going to use in the body of the paper. Essentially, you’re explaining to your reader why it’s worth it for them to read the rest of your paper.
Start with your first sentence. Think of something your reader won’t be able to resist reading more of. Avoid cliches when you’re trying to come up with something catchy. This can be hard because we’re so used to seeing cliches everywhere else, but they have no place in a paper, and professors will often dock marks for being unoriginal.
When writing the rest of the introduction, start broad and then narrow down until you come to your thesis statement. It’s best to write with the assumption that your audience doesn’t know much about the topic. Give your audience a bit of context as to what you’re going to talk about so that they have enough background information to understand the points you’re making. For example, if you’re writing a paper about one of the characters in a book, give the audience a small summary about the book and the author.
If you need to, leave your introduction and write it after you’ve written the rest of the paper, or at least some of the main body paragraphs. Sometimes you need a little bit of context from the rest of the paper to understand what you need to be telling your reader, so it can be helpful to do this afterward.
Body Paragraphs
All essays, regardless of format, should be separated into different body paragraphs for each main point you’re making. Each body paragraph should begin with a topic sentence that introduces the specific point you’ll be making in that paragraph. This is almost like a mini thesis statement introducing that specific detail. At the end of each body paragraph, you should have a concluding sentence that acts as a transition to the next paragraph, whether that’s a new topic point or your conclusion.
Basically, you want to follow the same structure you would use for your introduction. Start broad, and then narrow it down until you’ve included the details and evidence to argue your point. Use as many citations from sources as you need to prove your point, but always make sure that you explain yourself and justify why that information is relevant. You need to be able to contextualize your sources and show that you have a broader understanding of the subject at hand.
There are two main styles of incorporating research and sources into your body paragraphs: induction and deduction. With induction, you take specific details and information and form a general conclusion. With deduction, you do the opposite: you start from general principles and narrow down to a specific conclusion about the details. Induction is based on facts and observations, while deduction is based on reasoning.
So, for example, if you are using induction to show that Macbeth is not a qualified leader in Shakespeare’s Macbeth, you’d prove this by showcasing how many people died under his watch and how many enemies he created. On the other hand, if you are using deduction to prove that Macbeth is not a worthy leader, you could argue that good leaders don’t kill kings and show remorse for others. Therefore, since Macbeth does not show qualities of a good leader, he is not one himself.
Nailing Your Conclusion
The conclusion is where you’re going to sum up everything. This is where you take your paper, package all the information, and put a nice bow on top to present it.
All conclusions should begin with a sentence restating your thesis from the introduction. These should be the same points, paraphrased in a new way. After that, restate some of the general information that brings the reader back to your original points. Don’t start introducing new ideas and concepts; if you haven’t already talked about something in the paper, don’t mention it now. This is a summary.
A good conclusion provides the reader with something to think about. Think of this as the “so what?” portion of the essay. Why should your reader even care about what you have to say? Why are you talking about this? This is where it’s a good idea to relate your information to the current day or explain why it’s a significant subject now. For example, if you’re writing that paper about aerial fighting in World War II, explain why it’s still relevant today. You could do so by mentioning that our modern wars are fought from the skies and that aerial warfare paved the way for nuclear weapons, which changed the game for everyone.
Lastly, your final sentence should leave an impression on your reader while concluding everything in your paper. Be sure to go out with a bang!
Reliable Research is Key
With most good essays, research will be key. Sometimes you’ll have a specific number of sources you need to use to hit minimum requirements for your paper. Other times, it’ll be up to you and what you find in your research.
You will have already done a little bit of initial research when deciding on your topic and thesis statement, so now you can expand on that. Don’t be afraid to broaden your horizons. Check books, browse academic journals, and even ask your local librarian if you need to.
If you really want to know how to write a good essay, pay attention to your sources. The strongest essays are backed up with a good variety of primary and secondary sources, with only reliable and credible information. Here is a breakdown of the main types of sources you may use when writing essays.
Use Academic, Peer Reviewed Sources
Preferably, unless your teacher has specified otherwise, you want to use reliable sources from your school’s library or an online academic database. These should always be peer reviewed. You can verify this in the journal’s guidelines, in the article’s details, or by filtering for peer-reviewed articles when searching your online library.
Stay away from Wikipedia and other online encyclopedias. Professors dislike these because their content isn’t guaranteed to be factual, isn’t peer reviewed, and can often be edited by just about anyone.
Books are great too, but they can be risky because many are written with bias about a certain subject or topic. When using books, make sure the author is a historian, a professor, or another expert in the field. This depends on the subject of your paper, however: if you’re writing about a certain event in history and would like to use a book written by a firsthand witness, quote from it sparingly to emphasize your point.
Primary Sources
There are two main types of academic sources that can be included in your paper: primary and secondary. Secondary sources are those mentioned above: anything peer reviewed or written from the perspective of someone analyzing an event or other subject.
Primary sources are generally firsthand accounts or documents from a specific event or time. They are commonly used in history and the humanities, but could apply to many other types of essays.
Some common types of primary sources include:
● Letters
● Diary entries
● Reports
● Interviews
● Government documents (such as the U.S. Constitution)
● Newspaper articles or advertisements from the time period
● Manuscripts or plays
● Other correspondence, such as ship’s logs
● Journal articles that provide new research conclusions or results
Finding primary sources can be difficult, but many of these documents are available online. For history papers, try the Internet History Sourcebooks Project. If you’re looking for old government documents from a particular time period, you can try your country’s National Archives. Your school’s library should also have its own collection available for you to use.
How to Write a Good Essay Title For Your Paper
Of course, your paper will need a catchy and awesome title. It’s best to save this step for last, when you’re done writing your essay. If you have a working title at the beginning, that’s great. But go back to this at the end, when all of the details are fresh in your mind and you know exactly what the content of the essay includes.
A good title should be interesting, unique, original, and relate directly to your thesis. Yes, that seems like a lot for one title. But it’s an important part of getting your reader’s attention and telling them what you’re going to be talking about. It will establish the tone, the context, and the premise of the paper.
So, how do you decide on a title? Don’t be afraid to get creative. Write out a bunch of options and see which one catches your eye; the more you draft, the easier it is to find something that works. You can even ask a friend or classmate to take a look and give you feedback on which title they like best. When in doubt, use a “how” or “why” question.
More Essay Writing Tips to Follow as You Go
As with many other things in life, writing an essay has its fair share of tips and tricks that many writers develop over the years. Some of these seem basic, but they’re easy to overlook when you’re worried about getting everything done right. Here are some additional essay writing tips to improve your writing that could help you learn how to write a good essay.
● Don’t use a first person voice, EVER, unless your teacher has specifically requested it or you are writing a personal/reflective essay.
● Avoid contractions or casual language.
● Always proofread and edit your work. Re-read the paper and check for clarity issues, smoothness, and flow.
● Be open to feedback from peers. This is how you learn and grow.
● Read all of the instructions carefully. Your professors are expecting you to follow directions and are grading you based on these expectations.
● Slow down, don’t rush, and give yourself time. It’s easy to miss details when you’re pushing yourself to go faster.
● Avoid run-on sentences.
● Keep it consistent. Make sure you’re using the same tense throughout the paper, and that you’re sticking to one style of spelling. For example, don’t start an essay in American spelling and then finish it in British spelling.
● Don’t stress yourself out! Take breaks and reward yourself for a job well done.
Still Can’t Figure Out How to Write an Essay? Get Essay Writing Help From Homework Help Global When You Need It
All of this may seem like a lot of information to take in. When it comes to writing essays and getting ahead in school, it never hurts to ask for help. Sometimes you just don’t have time to balance a social life or a part-time job with the schoolwork that keeps piling up. If you’re sinking under stress and piles of work, don’t hesitate to reach out to a professional who can help.
For starters, take a look at Episode 57 of The Homework Help Show, where we go over how to write a good essay. Our host, Cath Anne, goes over some awesome tips and tricks that can help you feel more comfortable with your assignments. If you still feel a little lost, consider looking into a professional custom essay writing service.
Homework Help Global provides reliable essay writing services that can help you get your paper done to the highest possible standard. Our team of highly educated professional and academic writers are here to take a load off your shoulders and complete your assignments with utmost care and consideration to every detail. We take care of the hard work, so you don’t have to worry about trying to check all of the boxes and meet all of the requirements.
If this sounds like something that you could benefit from, get in touch with us for a quote today. Our team is on hand and ready to take on your assignments.
Source: https://medium.com/the-homework-help-global-blog/how-to-write-a-good-essay-d9447d9aa5ee
Author: Homework Help Global
Published: 2019-07-31
Tags: Media, Writing, Essay, Education, Essay Writing
something published historian professor another expert field However depends subject paper you’re writing certain event history you’d like use book written firsthand witness use quote book sparingly emphasize point Primary Sources two main type academic source could included paper primary secondary source Secondary source mentioned — anything peer reviewed written perspective someone providing analysis event subject Primary source generally firsthand account document specific event time commonly used history humanity could apply many type essay common type primary source include ● Letters ● Diary entry ● Reports ● Interviews ● Government document US Constitution ● Newspaper article advertisement time period ● Manuscripts play ● correspondence ship’s log ● Journal article provide new research conclusion result Finding primary source difficult many document available online history paper try Internet History Sourcebooks Project you’re looking old government document particular time period try country’s National Archives school’s library also collection available use Write Good Essay Title Paper course paper need catchy awesome title It’s best save step last you’re done writing essay working title beginning that’s great go back end detail fresh mind know exactly content essay includes good title interesting unique original relate directly thesis Yes seems like lot one title it’s important part getting reader’s attention telling you’re going talking establish tone context premise paper decide title Don’t afraid get creative Write bunch option see one catch eye draft easier find something work even ask friend classmate take look giving feedback title like best doubt use question Essay Writing Tips Follow Go many thing life writing essay fair share tip trick many writer develop year seem basic they’re easy overlook you’re worried getting everything done right additional essay writing tip improve writing could help learn write good essay ● Don’t use first person voice 
EVER unless teacher specifically requested writing personalreflective essay ● Avoid contraction casual language ● Always proofread edit work Reread paper check clarity issue smoothness flow ● open feedback peer learn grow ● Read instruction carefully professor expecting follow direction grading based expectation ● Slow don’t rush give time It’s easy miss detail you’re pushing go faster ● Avoid run sentence ● Keep consistent Make sure you’re using tense throughout paper you’re sticking one style spelling example don’t start essay American spelling finish British spelling ● Don’t stress Take break reward job well done Still Can’t Figure Write Essay Get Essay Writing Help Homework Help Global Need seems like whole lot information take come writing essay getting ahead school never hurt ask help Sometimes don’t time balance social life part time job amount schoolwork keep piling you’re fading stress pile work don’t hesitate reach professional help starter take look Episode 57 Homework Help Show go write good essay host Cath Anne go awesome tip trick help feel comfortable assignment still feel little lost consider looking professional custom essay writing service Homework Help Global provides reliable essay writing service help get paper done highest possible standard team highly educated professional academic writer take load shoulder complete assignment utmost care consideration every detail take care hard work don’t worry trying check box meet requirement sound like something could benefit get touch u quote today team hand ready take assignmentsTags Media Writing Essay Education Essay Writing
|
5,150 |
The Hidden Impact of Product Messaging on Your Scaling Business
|
Time for the obvious statement of the day: the startup journey is chock full of challenges. From fundraising to selling to just keeping the lights on, founding teams have plenty to deal with. But there is one thing that should be at the top of a startup’s to-do list: positioning and messaging.
I know what you’re probably thinking. Of all the things on that to-do list, why is positioning and messaging so important?
It’s so important because having the wrong messaging impacts all aspects of your business. Whether you’re pitching, selling, or marketing, messaging is involved. And creating the right messaging is not a one-and-done process. It takes trial and error, feedback, validation, and revisiting the drawing board numerous times to finally get the story right. And even when you do get it right, the story will have to scale with your business, which means that your messaging journey is never really done.
This is one lesson that Eric Prugh learned along his journey co-founding PactSafe, a platform that empowers high-velocity contract acceptance through seamless clickwrap agreements. In the early days, the PactSafe team was constantly testing and adjusting to find the right story to tell. While PactSafe is scaling and growing, it wasn’t always that way.
As Eric explains, “It took us a year and some change to really get that first big deal across the line. And you know I think it was obviously a combination of the right message, right time, right product, but it’s also aligning to the right type of person that’s going to align to the product and understand the value.” Through the journey to finding the right message, Eric learned a lot of other lessons along the way, and he shared those during his guest appearance on the Better Product podcast. Here are a few of the things he learned.
Your lack of sales success can either be a product problem or a message problem.
Some sales meetings don’t convert, and that’s just the name of the game. But if you’re striking out repeatedly and don’t know where you’re going wrong, you could have a product problem or you could have a message problem. If it is a product problem, then there are things you can do to address it and make sure that it is solving real problems. But if you’re like Eric and you’re confident that your product is addressing real problems and providing value, then it may be time to check in on the story you’re telling.
Strong messaging is more than just explaining what you do. In order to craft a compelling story and resonate with your audience in a way that drives them to buy, you need a real understanding of the value you provide, the right stories to tell, and the personas of the people who can derive the most value from your product. That way, you can be confident that the story you’re telling is the one that your audience wants to hear and it is the one that will make your product irresistible.
“It was really challenging in the beginning to get people to care. And it was because we started out with a very horizontal focus where we said anybody with online terms and conditions can use this product. And that is such a classic startup mistake to think about it that way.”
Refining your message takes persistence (and a lot of questions).
Finding the right messaging doesn’t happen overnight, and you certainly can’t do it from a bubble. In order to find the message that resonates with customers, you need to talk to customers over and over again. You should be obsessively looking for chances to test out your ideas and making in-the-moment tweaks, constantly honing your message for each audience.
Start it early, seek validation, and don’t be afraid to get real feedback. In the early days of PactSafe, Eric and team were constantly testing, and it was from those failed meetings that they learned the mistakes they were making. As Eric explained, “It was really challenging in the beginning to get people to care. And it was because we started out with a very horizontal focus where we said anybody with online terms and conditions can use this product. And that is such a classic startup mistake to think about it that way.”
After learning lessons, making changes, and collecting wins to cite in the process, they were able to align on the right messaging that spoke to the buyer’s pains and helped convert sales.
“If we didn’t have customers to talk about that were using the product, if we didn’t have the stories that related back to the real value that PactSafe provided… That’s what got us to the right messaging. That’s what got us to the right personas, and to think about things the right way.”
You need to scale your message alongside your product, and make sure that you have the right team to support it.
When you’re in startup mode, you’re likely extra responsive to your customers because you want to keep them around. But as you scale you’re going to have more differing opinions and it can be easy to lose sight of what you’re working towards. But just like you use a product roadmap to guide product growth, you need to align your message to the high-level vision and value proposition of your product, and understand when to say no. As Eric of PactSafe explains, “We definitely fell prey to being too responsive in the beginning and I think there’s a little bit of residue of that even today.”
Don’t get me wrong, it is important to listen to your customers and make sure you’re scaling in a way that aligns to their needs and goals. But resist the urge to be too accommodating and instead root your team in the foundational position and message that you want to communicate in the market.
The best way to do this is to hire someone to lead the effort. In the beginning when there is just a handful of people responsible for the message externally, it is pretty easy to keep it aligned. But as your team grows and your organization becomes more complex, product marketing can become a big blind spot for the business. If you have groups of people out there telling a different story, it will cause confusion, but bringing product marketing in house will control the message and align it to the product’s value.
|
https://medium.com/better-product/the-hidden-impact-of-product-messaging-on-your-scaling-business-15cd31504957
|
[]
|
2019-06-27 17:38:20.790000+00:00
|
['Scaleup', 'Startup', 'Product Marketing', 'Messaging', 'Positioning']
|
|
5,151 |
Stay The Course
|
Image courtesy of the Toronto Star
This is a chart from the fine folks at the Toronto Star, using information from Johns Hopkins University’s COVID-19 dashboard. This information represents the growth in a variety of countries around the world.
That green line that represents China finally starts to make the bend around the middle of February, when they finally ease the growth of COVID-19. This is approximately a month after the city of Wuhan was first quarantined and isolated from the rest of China.
It is also approximately a month before the easing of the physical distancing and quarantine restrictions in Wuhan.
It takes roughly two months of serious self-isolation, physical distancing, and quarantine measures to get this pandemic under control.
We aren’t out of the woods yet, folks.
Most of us in North America are only a couple of weeks into serious self-isolation and quarantine measures, and the numbers are incredibly different in the US and Canada.
Look at the bright orange line that represents US cases of COVID-19. It shows no sign of relenting in its ever-increasing numbers. It is a very ugly line that should be sparking fear in every American, especially those that don’t have access to adequate and affordable healthcare.
Now, look at the red line that represents Canada. While the US cases have increased on a sharper curve than any other country (including China), Canadian cases have increased on a much slower rate.
Here in Canada, we have taken physical distancing seriously, and enacted it early. The results are showing in our numbers and our much slower rate of increase. This isn’t to say that there aren’t Americans who are doing everything they can, just that it appears to have been more successful in Canada to this date.
The bigger story from this chart? We’ve got a long way to go. Until we start to see consistent change in the rate of daily increase, we can’t even begin to think about resuming daily life. The World Health Organization said in its March 30 briefing that the data being reported now represents the situation two weeks ago. We won’t know about today until two weeks from now.
So, it doesn’t matter how restless you are, nor how much you miss your loved ones. We have to stay the course and stay isolated until we are told otherwise.
We have done hard things before.
We will do hard things again.
This is just one more hard thing, and we can do it.
But we have to do it together.
|
https://matthewwoodall.medium.com/stay-the-course-2cb015a24582
|
['Matthew Woodall']
|
2020-03-31 13:23:43.632000+00:00
|
['Challenge', 'Health', 'Data', 'Life', 'Covid 19']
|
|
5,152 |
Singlism Is Officially in the Dictionary Now
|
Singlism Is Officially in the Dictionary Now
It defines the stereotyping and stigmatizing of people who are single.
Photo by Edgar Gomez on Unsplash
Coined by Dr. Bella DePaulo, the word singlism has just been officially added to the Cambridge English Dictionary. The definition in the dictionary reads as follows:
Singlism: unfair treatment of people who are single (= not married).
The word has appeared in Dr. DePaulo’s work since at least 2005. Much has been said about how singles are perceived in a society that values marriage and life-long partnerships so highly, but having a word like “singlism” in the dictionary highlights how important it has become to address the negative bias against people who are single and their lifestyle.
Author of Singled Out: How Singles Are Stereotyped, Stigmatized, and Ignored, and Still Live Happily Ever After, Dr. DePaulo has long been an advocate of singles. She defends that the single life isn’t devoid of happiness and fulfillment like so many tend to believe, but that it’s instead a valid lifestyle choice that fits some people even better than marriage.
Dr. DePaulo also highlights the importance of minding microaggressions against singles. She recognizes the problem isn’t as damaging as microaggressions rooted in racism or sexism, but she points out that the bias against singles can be just as pervasive — only we’re too used to ignoring and thinking of it as “normal,” or “just the way things are.”
Many singles can definitely relate to examples of singlism in their daily lives. From being asked why are they still single to having people assuming their lives are easy and free from any complication due to not having to work things out with a partner on a daily basis.
While the impact of microaggressions is still the subject of debate, there’s no denying our culture at large often implies that there’s something wrong with people who are single and that the anxiety over the social pressure to partner up can be draining on a person’s mental health. Understanding singles anxiety through the lens of singlism can help find new solutions to fight it, including broader validation and social acceptance of the single lifestyle.
Having singlism included in the dictionary may not solve any issue of prejudice or negative bias by itself, but having a word that defines that issue is an important tool in discussing it. Despite people like Dr. DePaulo dedicating their life’s work to it, it’s a discussion that still has a lot to develop in the future.
|
https://medium.com/acid-sugar/singlism-is-officially-in-the-dictionary-now-830ec98d1d19
|
['Renata Gomes']
|
2020-12-19 14:25:17.720000+00:00
|
['Relationships', 'Love', 'Life Lessons', 'Psychology', 'Lifestyle']
|
|
5,153 |
Forms with Formik & Yup
|
Overview
Capturing data from users is important when developing mobile apps and websites. When we are building a React website or a React Native mobile app, one hurdle people always encounter is picking a reliable forms library to facilitate capturing information and managing it — the UI components might be easy to build, but capturing the input, providing validated and meaningful feedback, and submitting the form is a lot to think about.
Formik is a library that has taken into consideration all the downsides of any previously built libraries and introduced an easy way to handle forms without any ‘magic’. There is a small learning curve, but once we pass this, the library makes form state management a breeze!
This article will focus on creating a simple login page using Formik with code examples to reference. By the end of this article, the mentioned learning curve should be addressed. For context, the structure will be:
1. General Setup
2. Validate inputs and provide meaningful feedback to the user
3. Understand how to submit a form
UI Set Up
React-Bootstrap comes with a lot of basic components to make a decent form — not super flashy, but the setup is as below. The following code is the UI setup only and has no logic implemented yet.
Note: You can use any UI library, some have special Formik plugins — with react-bootstrap we can use the Formik library directly.
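A minimal sketch of such a UI-only setup, assuming a plain email/password login form (the component names come from Formik and React-Bootstrap; the field names and `LoginForm` are illustrative, not from the original post):

```jsx
// Illustrative UI-only setup: a login form built from React-Bootstrap
// components, wrapped in Formik. No validation or submit logic yet.
import React from "react";
import { Formik } from "formik";
import { Form, Button } from "react-bootstrap";

const LoginForm = () => (
  <Formik initialValues={{ email: "", password: "" }} onSubmit={() => {}}>
    {({ values, handleChange, handleSubmit }) => (
      <Form noValidate onSubmit={handleSubmit}>
        <Form.Group controlId="email">
          <Form.Label>Email</Form.Label>
          <Form.Control
            type="email"
            name="email"
            value={values.email}
            onChange={handleChange}
          />
        </Form.Group>
        <Form.Group controlId="password">
          <Form.Label>Password</Form.Label>
          <Form.Control
            type="password"
            name="password"
            value={values.password}
            onChange={handleChange}
          />
        </Form.Group>
        <Button type="submit">Log in</Button>
      </Form>
    )}
  </Formik>
);

export default LoginForm;
```

Formik hands the render prop the current `values` plus `handleChange` and `handleSubmit`, so the inputs stay controlled without any hand-rolled state; validation and submit behavior get layered on top of this in the next steps.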
|
https://medium.com/javascript-in-plain-english/forms-with-formik-yup-5ae384352f47
|
['Sufiyaan Mitha']
|
2020-11-19 08:49:29.776000+00:00
|
['React Native', 'Technology', 'Software Development', 'React', 'JavaScript']
|
|
5,154 |
Coping with Trauma During the Holidays
|
Post Traumatic Stress Disorder or PTSD is defined as a condition that includes flashbacks and memories of a traumatic event, avoidant behaviors, anxiety, and depression. The holidays can be filled with triggers for those who suffer from PTSD. Whether you are spending more time with family* or social distancing in your own home, the holidays can provide its own set of challenges. Your triggers do not define you and there are ways to cope with them when you experience them. Below is a list of ways you can have a safe and healthy holiday season.
Join a group therapy session or sign up for one on one therapy sessions
Group therapy can be helpful to show you that you are not alone. Many people can share their different experiences and coping skills that may be able to help you through this stressful time. One on one therapy is another option that can assist you in sorting through and coping with uncomfortable feelings and memories. There can be community programs in your area which may provide these services at little to no cost.
Find some “Me Time”
If you are spending a lot of time with your family, find a moment that you can spend doing an activity that will improve your mental health. Go for a walk, meditate, work on a craft or hobby, make sure you find some time, especially if you are feeling anxious or overwhelmed. Exercise is another way to find that “me time.” Exercise has also proven to help with the symptoms of PTSD. You can go for a jog, watch a tutorial on YouTube for yoga, ride your bike, there are many ways to relieve that stress.
Reach out to others
It isn’t unusual for someone who is suffering from PTSD to cut themselves off from their support system and isolate. Isolation, however, can worsen PTSD symptoms. Reach out to trusted friends or family members who may be able to help you through any depression or anxiety.
Set boundaries with others — and yourself
When spending time around family during the holidays, it is important to address any boundaries that you may have. Setting up boundaries can help you cope with trauma and sort through feelings in your own way. If it is safe to do so, talk to a trusted family member about what your boundaries are and what they look like. For example, If you need extra “me time,” communicating with a family member that you are living or staying with, can help open up dialogue and communication so that you can get what you need without any conflict.
You can also set up boundaries with yourself as well. You can give yourself permission to not engage in a discussion that could be harmful to your well-being, set up boundaries with yourself with stress eating and drinking, and also not allow yourself to feel pressured to buy a lot of gifts. Do what you can with what you have, and the sentiment will shine through.
Overall, remember to be gentle with yourself. At the end of the year, we may need to be even kinder to ourselves as we process all that we have been through. Just remember that you are worth feeling better.
*The CDC recommends only celebrating with those in your own household.*
|
https://medium.com/matthews-place/coping-with-trauma-during-the-holidays-e3d3a2038643
|
[]
|
2020-12-07 18:10:44.626000+00:00
|
['Mental Health', 'PTSD', 'Holidays', 'Trauma', 'Family']
|
|
5,155 |
Redis vs Memcached — Which one to pick?
|
Redis vs Memcached
When people talk about improving the performance of an application, one factor everyone considers is server-side caching. Identifying the cache provider that best suits your requirements is an essential part of adopting server-side caching.
Redis and Memcached are widely used open-source cache providers across the world. Most of the Cloud providers support Redis and Memcached out of the box.
In this article, I would like to share the similarities and differences between Redis and Memcached, and when to go for one or the other.
Similarities between Redis vs Memcached
Key-value pair data stores
Supports Data Partitioning
Sub-millisecond latency
NoSQL family
Open-source
Supported by the Majority of programming languages and Cloud providers
Redis vs Memcached — Feature Comparison
DataTypes Supported
Memcached: Supports only simple key-value pair structure
Redis: Supports data types like strings, lists, sets, sorted sets, hashes, bit arrays, geospatial indexes, and hyperloglogs.
Redis allows you to access or change parts of a data object without having to load the entire object to the application level, modify it, and then re-store the updated version.
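To see why field-level access matters, here is a small stand-in sketch using plain Python dicts (not a real Redis or Memcached client): a flat key-value cache forces a full serialize/modify/re-store round-trip, while a hash structure lets you touch a single field (the key and field names are made up).

```python
import json

# Memcached-style flat cache: the whole object lives behind one key as a blob.
flat_cache = {"user:42": json.dumps({"name": "Anji", "visits": 10})}

# Updating one field requires fetching, deserializing, modifying, re-storing.
obj = json.loads(flat_cache["user:42"])
obj["visits"] += 1
flat_cache["user:42"] = json.dumps(obj)

# Redis-style hash: the server stores fields natively, so one field can be
# changed in place (roughly what HINCRBY user:42 visits 1 does server-side).
hash_cache = {"user:42": {"name": "Anji", "visits": 10}}
hash_cache["user:42"]["visits"] += 1  # one field, no full round-trip

print(json.loads(flat_cache["user:42"])["visits"], hash_cache["user:42"]["visits"])
```

Both caches end up at 11 visits, but the hash-style update never had to move the rest of the object across the wire.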
Memory Management
Memcached: Strictly in-memory; can be extended to persist key-value data to disk using the extstore extension
Redis: Can store data on disk when physical memory is fully occupied. Redis has a mechanism to swap the least recently used values to disk while keeping the latest values in physical memory.
Data Size Limits
Memcached: Can only store items up to 1 MB in size (by default)
Redis: Can store values up to 512 MB (for string values)
Data Persistence
Memcached: Doesn’t support data persistence
Redis: Supports data persistence using RDB snapshot and AOF Log persistence policies
Cluster Mode (Distributed caching)
Memcached: Doesn't support a distributed mechanism out of the box. This can be achieved on the client side using a consistent hashing strategy
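A minimal sketch of that client-side consistent-hashing strategy (the server names are made up, and real clients typically implement a more refined "ketama"-style ring for you): each server gets many points on a hash ring, and a key is routed to the first server point clockwise from the key's own hash.

```python
import bisect
import hashlib

def _hash(value: str) -> int:
    # Stable hash (unlike Python's built-in hash(), md5 is the same across runs).
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        # Place each server at `replicas` points on the ring.
        self.ring = sorted(
            (_hash(f"{s}#{i}"), s) for s in servers for i in range(replicas)
        )
        self.points = [p for p, _ in self.ring]

    def server_for(self, key: str) -> str:
        # First ring point clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self.points, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache-a:11211", "cache-b:11211", "cache-c:11211"])
print(ring.server_for("session:123"))  # same key always routes to the same server
```

The point of the many replica points is that adding or removing one server only remaps a small fraction of keys, instead of reshuffling everything as naive `hash(key) % n` would.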
Redis: Supports distributed cache (Clustering)
Multi-Threading
Memcached: Supports multithreading and hence can effectively use the multiple cores of the system
Redis: Doesn't support multi-threading for command execution; it is primarily single-threaded (Redis 6 added threaded I/O)
Scaling
Memcached: Can be scaled vertically. Horizontal scaling is achieved only on the client side (using a consistent hashing algorithm)
Redis: Can be horizontally scalable
Data replication
Memcached: Doesn’t support data replication
Redis: Supports data replication out of the box. Redis Cluster introduces master and slave (replica) nodes to ensure data availability; each master can have corresponding replica nodes for redundancy.
Supported Eviction Policies
Memcached:
Least Recently Used (LRU)
Redis:
No Eviction (Returns an error if the memory limit has been reached when trying to insert more data)
All keys LRU (Evicts the least recently used keys out of all keys)
All keys LFU (Evicts the least frequently used keys out of all keys)
All keys random (Randomly evicts keys out of all keys)
Volatile random (Randomly evicts keys with an “expire” field set)
Volatile TTL (Among keys with an “expire” field set, evicts those with the shortest time-to-live)
Volatile LRU (Evicts the least recently used keys out of the keys with an “expire” field set)
Volatile LFU (Evicts the least frequently used keys out of the keys with an “expire” field set)
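The idea behind LRU eviction can be sketched in a few lines. This is a toy illustration, not how either server implements it internally (Redis, for instance, uses an approximated LRU based on sampling); the capacity and keys are made up.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least recently used key when over capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)      # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # touching "a" makes "b" the least recently used
cache.put("c", 3)       # over capacity: "b" is evicted
print(sorted(cache.data))  # → ['a', 'c']
```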
Transaction Management
Memcached: Doesn’t support transactions
Redis: Supports transactions
When to go for Memcached?
Memcached is recommended when dealing with smaller, static data. With larger data sets, Memcached has to serialize and deserialize the data while saving to and retrieving from the cache, and it requires more space to store it. For smaller projects, Memcached can be the better choice thanks to its multi-threaded nature and vertical scalability, since clustering requires a considerable amount of effort to configure the infrastructure.
When to go for Redis?
Redis supports various data types to handle different kinds of data. Its clustering and data persistence features make it a good choice for large applications. Additional features like message queuing and transactions allow Redis to go beyond a cache store.
In addition to the above-mentioned features, Redis supports the below features as well
Message queuing support (Pub/sub)
Snapshots for data archiving/restoring purposes
Lua scripting
Conclusion: Redis and Memcached can both perform very well as a cache store. Which one to pick varies from project to project.
It is wise to consider the pros and cons of the providers right from the inception phase to avoid changes and migrations during the project.
Hope you enjoyed the article. Please share your thoughts/ideas in the comments box. Thank you for reading.
References:
https://medium.com/@Alibaba_Cloud/redis-vs-memcached-in-memory-data-storage-systems-3395279b0941
|
https://medium.com/techmonks/redis-vs-memcached-which-one-to-pick-401b0c3cbf94
|
['Anji']
|
2020-09-24 15:31:52.802000+00:00
|
['Cache', 'Microservices', 'Redis', 'Memcached']
|
|
5,156 |
Learning SQL the Hard Way
|
Joins in SQL
Till now, we have learned how we can work with single tables. But in reality, we need to work with multiple tables.
So, the next thing we would want to learn is how to do joins.
Now joins are an integral and essential part of a MySQL database, and understanding them is necessary. The visual below covers most of the joins that exist in SQL. I usually end up using just LEFT JOIN and INNER JOIN, so I will start with LEFT JOIN.
The LEFT JOIN is used when you want to keep all the records in the left table (A) and merge B on the matching records. The records of A where B has no match are kept with NULLs in the resulting table. The MySQL syntax is:
SELECT A.col1, A.col2, B.col3, B.col4
FROM A
LEFT JOIN B
ON A.col2=B.col3
Here we select col1 and col2 from table A and col3 and col4 from table B. We also specify which common columns to join on using the ON statement.
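Here is a runnable illustration of those LEFT JOIN semantics, using Python's built-in sqlite3 as a stand-in for MySQL (the join behaves the same here; the tables and rows are made up). Note the NULLs (Python `None`) on the row of A with no match in B.

```python
import sqlite3

# Two tiny made-up tables: A has two rows, B matches only one of them.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (col1 TEXT, col2 INTEGER);
    CREATE TABLE B (col3 INTEGER, col4 TEXT);
    INSERT INTO A VALUES ('x', 1), ('y', 2);
    INSERT INTO B VALUES (1, 'match');
""")
rows = con.execute("""
    SELECT A.col1, A.col2, B.col3, B.col4
    FROM A
    LEFT JOIN B ON A.col2 = B.col3
    ORDER BY A.col1
""").fetchall()
print(rows)  # → [('x', 1, 1, 'match'), ('y', 2, None, None)]
```

An INNER JOIN over the same tables would return only the `('x', 1, 1, 'match')` row, since it keeps just the records common to A and B.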
The INNER JOIN is used when you want to merge A and B and only to keep the common records in A and B.
Example:
To give you a use case, let's go back to our Sakila database. Suppose we wanted to find out how many copies of each movie we have in our inventory. You can get that by using:
SELECT film_id,count(film_id) as num_copies
FROM sakila.inventory
GROUP BY film_id
ORDER BY num_copies DESC;
Does this result look interesting? Not really. IDs don’t make sense to us humans, and if we can get the names of the movies, we would be able to process the information better. So we snoop around and see that the table film has got film_id as well as the title of the film.
So we have all the data, but how do we get it in a single view?
Come Joins to the rescue. We need to add the title to our inventory table information. We can do this using —
SELECT A.*, B.title
FROM sakila.inventory A
LEFT JOIN sakila.film B
ON A.film_id = B.film_id
This will add another column to your inventory table information. As you might notice, some films in the film table are missing from the inventory. We used a left join since we wanted to keep whatever is in the inventory table and join it with its corresponding counterpart in the film table, not everything in the film table.
So now we have got the title as another field in the data. This is just what we wanted, but we haven’t solved the whole puzzle yet. We want title and num_copies of the title in the inventory.
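As a preview, one way to get there is to join first and then aggregate (the article goes on to build this with inner queries instead). Here is a hedged sketch against a miniature stand-in for the Sakila tables, again via sqlite3 (the sample rows are made up; real Sakila data differs).

```python
import sqlite3

# Miniature stand-ins for Sakila's film and inventory tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE film (film_id INTEGER, title TEXT);
    CREATE TABLE inventory (inventory_id INTEGER, film_id INTEGER);
    INSERT INTO film VALUES (1, 'ACADEMY DINOSAUR'), (2, 'ACE GOLDFINGER');
    INSERT INTO inventory VALUES (10, 1), (11, 1), (12, 2);
""")
rows = con.execute("""
    SELECT B.title, COUNT(A.film_id) AS num_copies
    FROM inventory A
    LEFT JOIN film B ON A.film_id = B.film_id
    GROUP BY A.film_id
    ORDER BY num_copies DESC
""").fetchall()
print(rows)  # → [('ACADEMY DINOSAUR', 2), ('ACE GOLDFINGER', 1)]
```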
But before we can go any further, we should understand the concept of inner queries first.
|
https://towardsdatascience.com/learning-sql-the-hard-way-4173f11b26f1
|
['Rahul Agarwal']
|
2020-09-28 11:21:58.749000+00:00
|
['Sql', 'Data Science', 'Programming', 'Machine Learning', 'Productivity']
|
|
5,157 |
7 (more) tips to quickly improve your UIs
|
Originally published at marcandrew.me on October 22nd, 2020.
Creating beautiful, but practical UIs takes time, with many design revisions along the way. I know. I’ve been there many times before myself.
But what I’ve discovered over the years is that by making some simple adjustments you can quickly improve the designs you’re trying to create.
In this follow-up article (here are Parts 1, 2, and 3), I've once again put together a small selection of easy-to-apply tips that can, with little effort, help improve both your designs (UI) and the overall user experience (UX).
In this part I’ve focused primarily on Typography because I know how beneficial a good understanding of type can be when it comes to producing beautiful, practical designs.
Let’s dive on in…
|
https://uxdesign.cc/7-more-tips-to-quickly-improve-your-uis-1c2e8f446777
|
['Marc Andrew']
|
2020-11-18 10:21:11.039000+00:00
|
['UI', 'Product Design', 'Design', 'UI Design', 'Visual Design']
|
|
5,158 |
GitHub Sponsors & Gitcoin Grants =❤️ 🤖
|
GitHub Sponsors & Gitcoin Grants =❤️ 🤖
Support your OSS Work with Gitcoin Grants on Github’s new Sponsors feature
The Gitcoin team was thrilled to see today that GitHub formally launched Sponsors, a way for users of GitHub to find ways to financially support their favorite projects through a variety of platforms of their choice.
This is what it looks like in GitHub’s UI:
We’d like to invite users of Gitcoin Grants to set up GitHub Sponsors on their repos. Here’s how to do it:
1. Go to the repository associated with your Gitcoin Grant.
2. Click ‘Settings’.
3. Scroll down to “Features”, check “Sponsorships”, and then click “Set up Sponsor Button”.
4. A text editor will appear on the next page that allows you to enter information about how to sponsor your repository. Paste the following text into the text editor:
custom: <gitcoin_grants_url>
and replace <gitcoin_grants_url> with your Gitcoin Grants URL.
Success!
Users will now see a “Sponsor” button on your repo:
Grow Open Source
GitHub Sponsors shows that open source sustainability is a real priority for GitHub and for Microsoft, and that’s something we’re very excited to see. We’re also excited that we’re able to engage with them on this initiative and look forward to seeing how their work on open source sustainability evolves moving forward. Here’s to a brighter future for open source software.
To learn more about Gitcoin, click below. We welcome you on our journey to grow open source while changing the way we work.
|
https://medium.com/gitcoin/gitcoin-grants-github-sponsors-b516192c048
|
['Kevin Owocki']
|
2019-05-23 19:08:25.228000+00:00
|
['Open Source', 'Ethereum', 'Gitcoin', 'Sustainability']
|
|
5,159 |
The History of the Eames Chair
|
By 360modern · Jan 10 · 4 min read
The history of the Eames Chair must start with Charles and Ray Eames and their partnership with the Herman Miller Furniture Company. The chair was released in 1956 with the official name Eames Lounge (670) and Ottoman (671). The chair was the first one designed by the Eameses for the high-end market and has become one of the most famous chair designs ever created.
The exact reasons why the Eames Chair design became such a powerful and lasting design throughout the years are hard to pinpoint. However, as with all fine art, whether it be music, sculpture, painting, or design, there is something within the pieces that resonates with the viewers of the time and it continues to resonate throughout the ages. The Eames Chair has this je ne sais quoi. Now, the design is admired and desired for more than just its artistry, but also for its place in modern design history.
An Introduction to Charles and Ray Eames
Let’s step back from the chair for a moment to look deeper into the two people who created it. This husband and wife team created many stylish designs to be mass-produced at an affordable price. Their goal was to bring a fine interior design option to the average consumer in America.
However, the Eames Chair was something new for the talented duo. In fact, when the chair debuted on the Arlene Francis Home Show in 1956, it was described as “quite a departure” compared to their earlier designs.
At first, the Eames Chair was just a concept and a project taken on by the design team. They wanted to create a high-end, luxury chair. It needed to be stylish, chic, modern, and, most of all, comfortable. Charles and Ray Eames wanted the design to make the user feel as snug as if they were a baseball in a worn leather mitt.
Employees at the Herman Miller factory polish the molded plywood shells in the seventies.
Photo courtesy of Herman Miller and Dwell.com
Since the design couple was well-versed in mass-production pieces, they worked with both plastic and plywood when designing this chair. Both of these materials are common today, but in the 1950s, the Eameses were some of the first designers to use plywood and plastic in their designs.
Three pieces of molded plywood were used to create the Eames Chair: a base, a separate headrest, and a backrest. All three pieces were also covered in a rosewood veneer with later versions using walnut, cherry, and other finishes. Black or brown leather cushions were used to complete the design and a matching ottoman completed the set.
A 1959 advertisement for the Lounge set emphasizes its comfort. Another ad from the era reads “A good chair, nowadays, is hard to find,” and suggests that it’s “the only modern chair designed to relax you in the tradition of the good old club chair.” Charles took on the project because he was “fed up with the complaints that modern isn’t comfortable.”
Photo courtesy of Herman Miller and Dwell.com
Herman Miller Company sold the Eames Lounge Chair and it hit the market for the first time in 1956. This modern, stylish, and functional chair offered a beautiful look at a high-end furniture piece using plywood in a new way. It quickly rose in popularity and is one of the first chairs to be mass produced at a luxury price point.
The Eames Chair has become so popular it has been featured in many ways throughout the years. The TV show Frasier featured the chair and ottoman in the apartment of Frasier Crane. It’s referred to in the show as “the best-engineered chair in the world.”
Shark Tank, another TV show, replaced all the chairs on set with Eames Lounge Chairs after eight seasons. They wanted to create a more modern-looking set.
A version of the historic Eames Chair has also been featured in the Museum of Modern Art in New York City, the Henry Ford Museum in Dearborn, Michigan, and the Museum of Fine Arts Boston. Many mid-century modern homes also feature the Eames Chair and it’s a very popular choice for many celebrities choosing MCM as their design style.
To celebrate the 50th anniversary of the Eames Chair in 2006, Herman Miller released new models of the chair using a sustainable Palisander rosewood veneer. The chair is still in production today and remains one of the most influential furniture designs in American history.
|
https://medium.com/360modern/the-history-of-the-eames-chair-ddee6b92aee2
|
[]
|
2020-01-10 23:43:58.671000+00:00
|
['Design', 'Mid Century Modern', 'Eames Chairs', 'Modernism']
|
|
5,160 |
Medium’s New Icon Has A Hidden Meaning?
|
Humor
Medium’s New Icon Has A Hidden Meaning?
It’s time for a change, again.
Photo by Devin Avery on Unsplash
I was shocked when I opened my Medium page this afternoon (564 times so far today!). The old comforting, solid, and stolid M logo/icon had gone!
Was this a prank or a joke? Well, it’s not April 1st. It must be something else. Is it related to a conspiracy theory, deep state, or the swamp?
My partner asked me why I was so upset. She was on the phone trying to comfort me.
“As far as I can see, it’s three circles. Starting with the biggest one, then a medium one, and finally a small one which seems to be hiding behind the corner. It’s weird, disconcerting, and downright upsetting.”
“Maybe it just means Large, Medium and Small to cater for all sizes,” my partner said. “You know, the big fish in the pond, then the medium ones like you, and finally the small one for beginners who are just dipping their toes in the water.”
We chatted away for a while and I told her that this was just one of the major changes at Medium. The claps had all gone. They were still there but it was like clapping in an empty room. One clap was the same as 50. They did not count at all, so why not just put a Like as on Facebook and be done with it.
I must say my forefinger on my right hand will enjoy the break because holding it down for 50 claps was very tiring. I bet many readers will be glad too and they will stop being criticized as being too mean and miserly.
I feel sorry for Roz Warren who used to have great fun adding the right number of claps to make a nice round number. I mean 500 is so much more impressive than 481. I hope she can find a new hobby soon.
Curation is gone. You will be lucky now if you get “distributed.” That word is so cold and impersonal. There is no indication either where your article is being sent and under what topic. It sounds like a courier delivery.
I know that thousands and thousands of writers have been released from curation jail and are celebrating — all keeping their distance, of course. Must be such a relief after being in jail all that time cooped up with cranky and frustrated writers.
Then all the earning algorithms have been changed. It’s all based on being “relational.” I don’t like that word at all. I believe it means scratching each other’s backs (reading each other’s stories).
Medium delights in changing their featured articles every 5 minutes or so. This is to make sure that you have more than enough to read and can dedicate your entire life to Medium. Makes sense, I suppose. I mean they are paying us a pittance.
We have to forge relationships. Read and write basically. That’s all it takes. Sounds like elementary school.
I bet there will be new rules on responses soon. How many claps to give? How many words?
My partner logged into the Medium app on her phone and I heard her shout with joy.
“The new logo is so pretty — I just love it!” she whooped.
If you need more laughs check out my new book The Brits Are Bonkers
|
https://medium.com/the-haven/mediums-new-icon-has-a-hidden-meaning-2d34ccfdf820
|
['Robert W. Locke']
|
2020-10-17 14:29:31.245000+00:00
|
['Writers On Writing', 'Medium Icon', 'Humor', 'Writing', 'Medium']
|
|
5,161 |
How to Simulate a Pandemic in Python
|
Introduction
What’s a better time to simulate the spread of a disease than during a global pandemic? I don’t have much more to say — let’s jump right into programming a simple disease simulation.
In real life, there are hundreds of factors that affect how fast a contagion spreads, both from person to person and on a broader population-wide scale. I’m no epidemiologist but I’ve done my best to set up a fairly basic simulation that can mimic how a virus can infect people and spread throughout a population.
In my program, I will be using object-oriented programming. With this method, we could theoretically customize individual people and add in more events and factors, such as more complicated social dynamics.
Keep in mind that this is an introduction and serves as the most basic model that can be built on top of.
Variables/Explanation
Fundamentally, our program will function around a single concept: any given person who is infected by our simulation's disease has the potential to spread it to whoever they meet. Each person in our "peopleDictionary" will have a set number of friends (Gaussian-randomized for realism) and may meet any one or more of these friends on a day-to-day basis.
For our starting round of simulations, we won’t implement face masks or lockdowns — we’ll just let the virus spread when people meet their friends and see if we can get that iconic pandemic “curve” which the news always talks about flattening.
So, we’ll use a Person() class and add a few characteristics. Firstly, we’ll assume that some very tiny percentage of characters simulated will already have immunity to our disease from the get-go, for whatever reason. I’m setting that at 1% (in reality, it’d be far lower but because our simulation runs so fast, a large portion like this makes a bit more sense). At the start of the simulation, the user will be prompted to enter this percentage.
Next, we have contagiousness, the all-important factor. When a person is not infected, this remains at 0. It also returns to 0 once a person ceases to be contagious and gains immunity. However, when a person is infected, this contagious value is somewhere between 0 and 100%, and it massively changes their chance of infecting a friend.
Before we implement this factor, we need to understand the Gaussian (normal) distribution. This mathematical function lets us generate random values between 1 and 100 more realistically: rather than being spread uniformly across the spectrum, most of them cluster around the mean, making for a more realistic output:
As you can see, this bell-shaped function will be a lot better for our random characteristic variables because most people will have an average level of contagiousness, rather than a purely random percentage. I’ll show you how to implement this later.
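To make this concrete, here is a small sketch of the friend-count draw. It uses the standard library’s `random.gauss` as a stand-in for scipy’s `norm.rvs` (same normal distribution, no extra dependency); the mean of 0.5 and standard deviation of 0.15, scaled by 10, are the values used later in the Person class.

```python
import random

random.seed(42)

# Draw 10,000 friend counts the way the article does with scipy's
# norm.rvs(loc=0.5, scale=0.15): mean 0.5, std 0.15, scaled by 10.
# random.gauss(mu, sigma) is a dependency-free stand-in here.
samples = [round(random.gauss(0.5, 0.15) * 10) for _ in range(10_000)]

print(sum(samples) / len(samples))  # clusters around 5 friends
print(min(samples), max(samples))   # the tails are rare but present
```

Most draws land near 5, with a few very social (or very solitary) outliers, which is exactly the shape we want for friend counts.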
We then have the variables “mask” and “lockdown” which are both boolean variables. These will be used to add a little bit of variety to our simulation after it is running.
Lastly, we have the “friends” variable for any given person. Just like contagiousness, this follows a Gaussian distribution, ending up with most people having about 5 friends that they regularly see. In our simulation, everyone lives in a super-social society where the average person meets 2 people face to face every day. That figure is probably higher than in real life, but we’re using it because we don’t want a painfully slow simulation. Of course, you can make any modifications to the code that you like.
There are also a couple of other variables that will be used actively in the simulation and I’ll get to those as we go on!
Step-by-Step Walkthrough
So let’s get coding this simulation! First, there are three imports we have to do:
from scipy.stats import norm
import random
import time

peopleDictionary = []  # global list that will hold every Person in the simulation
SciPy will allow us to calculate values within the Gaussian Distribution we talked about. The random library will be for any variables we need that should be purely random, and the time library is just for convenience if we want to run the simulation slowly and watch the spread of the disease.
Next, we create our Person() class:
# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0,100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0))

    def wearMask(self):
        self.contagiousness /= 2
Why are we passing the variable startingImmunity to this class exactly? Remember how we could enter what percentage of the population would have natural immunity from day 1? When the user gives this percentage, for every person “spawned” into our simulation we’ll use random to find out if they’re one of those lucky few to already be immune — in which case the self.immunity boolean is set to True, protecting them from all infection down the line.
The remaining class variables are self-explanatory, except self.friends, which is the Gaussian Distribution we talked about. It’s definitely worth reading the documentation to get a better idea of how this works!
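As a quick sanity check of that immunity coin-flip, here is a standalone sketch (`roll_immunity` is a hypothetical helper, not part of the article’s code) that repeats the `randint(0, 100) < startingImmunity` test many times:

```python
import random

random.seed(1)

def roll_immunity(starting_immunity_pct, n):
    """Repeat the article's check: randint(0, 100) < startingImmunity."""
    return sum(1 for _ in range(n) if random.randint(0, 100) < starting_immunity_pct)

immune = roll_immunity(1, 100_000)
print(immune / 100_000)  # roughly 1% of people spawn immune
```

One subtlety: `random.randint(0, 100)` is inclusive on both ends, so it spans 101 values and the effective probability is startingImmunity/101, marginally below the entered percentage.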
def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0)*10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay
After setting up our class, we need a function to initiate the simulation. I’m calling this initiateSim() and it’ll prompt the user for its inputs: population, the percentage with natural immunity, the number of contagious people at day 0, and how many days a person will stay contagious for (the lockdown and mask prompts you can see are covered later). This daysContagious variable should actually be random, or even better, dependent on any number of personal health conditions, such as a compromised immune system, but let’s keep it like this for a basic simulation. I found from testing that it is most interesting to run the simulation with a 4–9 day contagious period.
We spawn the inputted number of people into the simulation. To start the disease, we pick people at random to be our “startingInfecters”. As you can see, we’re assigning a Gaussian variable to each one for their level of contagiousness! (Any time a person is made contagious in the simulation we’ll repeat this process.)
We return the number of days someone will stay contagious for, like mentioned.
Now, this simulation will be done day by day, so let’s set up a function:
def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends/2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary)-1)]
            if random.randint(0,100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0)*10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
The runDay function takes daysContagious for reasons explained later. In our first for loop, we’re using a list comprehension to find the people who are capable of spreading the disease — that is, they are contagious and have friends. We’re then calculating the number of people they could meet on that day. The maximum is 50% of their friends, and then we’re using a standard random.randint() to generate how many they actually do meet on that day.
Then we use another embedded for loop to randomly select each friend that was met from the peopleDictionary[]. For the friend to have a chance of being infected, they can’t be immune to the disease. They also have to have a contagiousness of 0 — if they’re already infected, the encounter won’t influence them. We then use the infecter’s contagiousness percentage in a random function to find out if the friendInQuestion will be infected. Finally, if they do get infected, we go ahead and assign them a Gaussian Distribution variable for their contagiousness!
I added in a simple print statement as a marker which will allow us to follow the simulation in the console as it is running. At the end of our program, we’ll add functionality to save the results to a text file anyway, but it’s cool to see little tags that tell you who is infecting who.
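The per-encounter infection check is just a Bernoulli trial: the infecter’s contagiousness percentage is compared against a fresh random draw. A minimal sketch (`encounter_infects` is a hypothetical helper mirroring that condition):

```python
import random

random.seed(7)

def encounter_infects(contagiousness):
    # The same per-meeting check the simulation uses.
    return random.randint(0, 100) < contagiousness

trials = 100_000
infections = sum(encounter_infects(50) for _ in range(trials))
print(infections / trials)  # close to 0.5 for a 50%-contagious person
```

So a person with contagiousness 50 infects roughly half of the susceptible friends they meet, which is why the Gaussian contagiousness draw matters so much to the spread rate.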
Next part of our runDay() function:
    for person in [person for person in peopleDictionary if person.contagiousness > 0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")
Basically, all we’re doing here is finding all the people who are contagious and incrementing their contagiousDays variable by 1. If they’ve been contagious for more days than the daysContagious time the user selected, they will become immune and hence their contagiousness drops to 0. (Again, another print marker to show that the given person has gained immunity.)
I know I could have put this in the previous for loop, but to keep the code from getting too dense, I separated it. Sue me.
Finally, to tie it all together, we need to do a bit of admin:
lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()
This is pretty self-explanatory. We get the daysContagious value by initiating the simulation, we open our save file, then cycle through the days up to day 100. Each day we use a list comprehension to get the number of people contagious and write it to our save file. I also added one final print statement so we can track the disease’s progression in the console.
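If you want to verify the save-file round trip in isolation, here is a sketch that writes one count per line (note the explicit `"\n"`) into a throwaway file under a temporary directory, rather than the real pandemicsave3.txt; the counts are made up for the demo:

```python
import os
import tempfile

daily_counts = [1, 3, 9, 20, 14, 6, 0]  # made-up contagious counts per day

path = os.path.join(tempfile.mkdtemp(), "pandemicsave_demo.txt")
with open(path, "a") as save_file:
    for count in daily_counts:
        save_file.write(str(count) + "\n")  # one count per line

with open(path) as f:
    restored = [int(line) for line in f]

print(restored)  # [1, 3, 9, 20, 14, 6, 0]
```

Reading the file back line by line recovers the day-by-day series, which is exactly what we later paste into a spreadsheet for charting.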
And that’s it! I only explained the basics of the code, but let’s talk about the extra variables that you may have noticed…
Lockdown variable
Adding a lockdown variable is quite simple. First, add this in before the section where we cycle through each of the friends a person meets (see code above):
if lockdown == True:
    peopleMetToday = 0
for x in range(0, peopleMetToday):
Now, you want to select when the lockdown is enforced? No problem. Add a user prompt right inside your initiateSim() function:
lockdownDay = int(input("Day for lockdown to be enforced: "))
return daysContagious, lockdownDay
Return it, and update the function call. Then, we need to define our lockdown boolean, and set it to true when we reach the correct date:
lockdown = False
daysContagious, lockdownDay = initiateSim()
saveFile = open("pandemicsave2.txt", "a")
for x in range(0,100):
if x == lockdownDay:
lockdown = True
print("DAY ", x)
You can see that I just added 3 more lines into where we manage the simulation. Simple and easy. Then you will want to pass the lockdown boolean to your runDay() function and make sure the runDay() function can accept it:
runDay(daysContagious, lockdown)
And:
def runDay(daysContagious, lockdown):
That’s the lockdown added. See the results section to find out how the implementation of a lockdown affected the spread of the disease!
Facemasks
Finally, we want to add facemasks. I could add all sorts of ways that this changes how a disease spreads, but for us, we’ll just use it to decrease each person’s contagiousness. All we have to do is give the Person() class a function that tells them to wear a face mask:
def wearMask(self):
self.contagiousness /= 2
Yep, we just halve their contagiousness if they wear a mask. Update initiateSim() so we can ask the user for the date on which the masks should come into use:
maskDay = int(input("Day for masks to be used: "))
return daysContagious, lockdownDay, maskDay
And update our call:
daysContagious, lockdownDay, maskDay = initiateSim()
Finally, we’ll edit the section where we cycle through the days so that if the day reaches maskDay, then we tell every person to run their wearMask() function:
if x == maskDay:
for person in peopleDictionary:
person.wearMask()
If only it was this easy in real life, right?
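We can check numerically that halving contagiousness roughly halves the per-encounter infection rate under the simulation’s randint check (`infects` is a hypothetical helper, and 60 is an arbitrary contagiousness value):

```python
import random

random.seed(3)

def infects(contagiousness):
    # Same per-meeting check the simulation uses.
    return random.randint(0, 100) < contagiousness

trials = 100_000
contagiousness = 60
no_mask = sum(infects(contagiousness) for _ in range(trials)) / trials
masked = sum(infects(contagiousness / 2) for _ in range(trials)) / trials

print(no_mask, masked)  # the masked rate is roughly half the unmasked rate
```

Because the check is linear in contagiousness, the mask’s effect passes straight through to the infection probability, which is what produces the gentler gradient in the yellow line of the results.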
Well what do you know, we’ve created a simple pandemic simulation with the ability to simulate each individual person, change attributes of the virus, enforce lockdowns, and make people wear face masks. Let’s look at our results:
Results
I’m putting all the data gathered from my text save files into Excel.
5000 people, 1 starting infecter, 1% starting immunity, 7 days contagious, no lockdown or masks:
As expected, a nice smooth curve, almost mathematically perfect. By the end of the simulation, everyone has gained immunity and the cases drop to 0, where they stay until all the days have completed.
Now let’s see what happens to the previous result when you implement some countermeasures:
Now what we have here is really interesting. Take the blue line. This is the simulation without any countermeasures, just like our previous result. However, when we implement a lockdown on day 15, it has a huge effect on the orange line; the spread of the disease is curbed before it can really take off, and look at that gradual curve back down again — that’s where there are no new cases and people are gradually becoming immune!
We can then compare that to the gray line, where we implement lockdown just 5 days later than orange. It has a drastically lower effect because that five-day delay really made a difference to the number of cases.
Finally, take a look at the yellow line. This is where we implement face masks, and it’s probably the most interesting simulation of all. You can see at day 15, there is a sudden change in the gradient of the line which affects how fast the disease spreads. It probably would have increased much more rapidly without the face masks! About day 21, there is a peak, and thanks to the masks, it is substantially less than the blue line, where there were no countermeasures! There is also a tiny secondary peak, and the overall summit of the curve lasts longer than any other simulation. Can you figure out why?
Next Steps
Just to clarify, this was supposed to be a simple simulation. It is, of course, very basic with very limited parameters and functionality. However, it is incredible to see how much we can learn from a simulation that takes up barely a hundred lines of code. It really puts into perspective the impact lockdowns and face masks had.
I encourage anyone reading this with a programming mindset to go out and improve my code. I’d recommend the following features:
Face masks randomly (Gaussian?) affect contagiousness
Not everyone obeys lockdown, and even for those who do, there is a chance of an infection happening, say, during a grocery shopping trip
A certain percentage of people wear face masks, and this varies on a day to day basis
More social dynamics, or parameters in general.
The idea of communities.
If anyone does take on the challenge of upgrading this code, I’d love to see what results you get from playing around with the factors. Thanks for reading!
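As a starting point for the second suggestion, here is one way imperfect lockdown compliance could be sketched. Everything here is hypothetical: `COMPLIANCE`, `people_met_today`, and the 90% figure are assumptions for illustration, not part of the article’s code:

```python
import random

random.seed(11)

COMPLIANCE = 0.9  # assumption: 90% of people actually stay home under lockdown

def people_met_today(friends, lockdown):
    """Variant of the article's meeting logic with imperfect compliance."""
    if lockdown and random.random() < COMPLIANCE:
        return 0  # this person obeys the lockdown today
    cap = friends // 2
    return random.randint(0, cap) if cap > 0 else 0

meetings_open = sum(people_met_today(5, lockdown=False) for _ in range(10_000))
meetings_lockdown = sum(people_met_today(5, lockdown=True) for _ in range(10_000))
print(meetings_open, meetings_lockdown)  # lockdown cuts meetings sharply, but not to zero
```

Dropping this in place of the hard `peopleMetToday = 0` rule would let the disease keep smoldering through the non-compliant minority, which is much closer to what real lockdowns look like.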
Full code:
from scipy.stats import norm
import random
import time

peopleDictionary = []

# simulation of a single person
class Person():
    def __init__(self, startingImmunity):
        if random.randint(0,100) < startingImmunity:
            self.immunity = True
        else:
            self.immunity = False
        self.contagiousness = 0
        self.mask = False
        self.contagiousDays = 0
        # use gaussian distribution for number of friends; average is 5 friends
        self.friends = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0))

    def wearMask(self):
        self.contagiousness /= 2

def initiateSim():
    numPeople = int(input("Population: "))
    startingImmunity = int(input("Percentage of people with natural immunity: "))
    startingInfecters = int(input("How many people will be infectious at t=0: "))
    for x in range(0, numPeople):
        peopleDictionary.append(Person(startingImmunity))
    for x in range(0, startingInfecters):
        peopleDictionary[random.randint(0, len(peopleDictionary)-1)].contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0)*10)
    daysContagious = int(input("How many days contagious: "))
    lockdownDay = int(input("Day for lockdown to be enforced: "))
    maskDay = int(input("Day for masks to be used: "))
    return daysContagious, lockdownDay, maskDay

def runDay(daysContagious, lockdown):
    # this section simulates the spread, so it only operates on contagious people, thus:
    for person in [person for person in peopleDictionary if person.contagiousness > 0 and person.friends > 0]:
        peopleCouldMeetToday = int(person.friends/2)
        if peopleCouldMeetToday > 0:
            peopleMetToday = random.randint(0, peopleCouldMeetToday)
        else:
            peopleMetToday = 0
        if lockdown == True:
            peopleMetToday = 0
        for x in range(0, peopleMetToday):
            friendInQuestion = peopleDictionary[random.randint(0, len(peopleDictionary)-1)]
            if random.randint(0,100) < person.contagiousness and friendInQuestion.contagiousness == 0 and friendInQuestion.immunity == False:
                friendInQuestion.contagiousness = int((norm.rvs(size=1, loc=0.5, scale=0.15)[0]*10).round(0)*10)
                print(peopleDictionary.index(person), " >>> ", peopleDictionary.index(friendInQuestion))
    for person in [person for person in peopleDictionary if person.contagiousness > 0]:
        person.contagiousDays += 1
        if person.contagiousDays > daysContagious:
            person.immunity = True
            person.contagiousness = 0
            print("|||", peopleDictionary.index(person), " |||")

lockdown = False
daysContagious, lockdownDay, maskDay = initiateSim()
saveFile = open("pandemicsave3.txt", "a")
for x in range(0, 100):
    if x == lockdownDay:
        lockdown = True
    if x == maskDay:
        for person in peopleDictionary:
            person.wearMask()
    print("DAY ", x)
    runDay(daysContagious, lockdown)
    write = str(len([person for person in peopleDictionary if person.contagiousness > 0])) + "\n"
    saveFile.write(write)
    print(len([person for person in peopleDictionary if person.contagiousness > 0]), " people are contagious on this day.")
saveFile.close()
Thanks for Reading!
I hope you found this entertaining and possibly inspiring! There are so many ways that you can improve this model, so I encourage you to see what you can build and see if you can simulate real-life even closer.
As always, I wish you the best in your endeavors!
Not sure what to read next? I’ve picked another article for you:
Terence Shin
|
https://towardsdatascience.com/simulating-the-pandemic-in-python-2aa8f7383b55
|
['Terence Shin']
|
2020-12-21 03:35:16.322000+00:00
|
['Data Science', 'Programming', 'Simulation', 'Python', 'Pandemic']
|
|
5,162 |
Stop Worrying and Create your Deep Learning Server in 30 minutes
|
Setting up Amazon EC2 Machine
I am assuming that you have an AWS account, and you have access to the AWS Console. If not, you might need to sign up for an Amazon AWS account.
First of all, we need to go to the Services tab to access the EC2 dashboard.
2. On the EC2 Dashboard, you can start by creating your instance.
3. Amazon provides Community AMIs(Amazon Machine Image) with Deep Learning software preinstalled. To access these AMIs, you need to look in the community AMIs and search for “Ubuntu Deep Learning” in the Search Tab. You can choose any other Linux flavor, but I have found Ubuntu to be most useful for my Deep Learning needs. In the present setup, I will use The Deep Learning AMI (Ubuntu 18.04) Version 27.0
4. Once you select an AMI, you can select the Instance Type. It is here you specify the number of CPUs, Memory, and GPUs you will require in your system. Amazon provides a lot of options to choose from based on one’s individual needs. You can filter for GPU instances using the “Filter by” filter.
In this tutorial, I have gone with p2.xlarge instance, which provides NVIDIA K80 GPU with 2,496 parallel processing cores and 12GiB of GPU memory. To know about different instance types, you can look at the documentation here and the pricing here.
5. You can change the storage that is attached to the machine in the 4th step. It is okay if you don’t add storage upfront, as you can also do this later. I change the storage from 90 GB to 500 GB as most of the deep learning needs will require proper storage.
6. That’s all; you can launch the instance after going through the final Review Instance Settings screen. Once you click on Launch, you will see this screen. Just type in any key name in the Key Pair Name field and click on “Download key pair”. The key will be downloaded to your machine under the name you provided. For me, it got saved as “aws_key.pem”. Once you do that, you can click on “Launch Instances”.
Keep this key pair safe as this will be required whenever you want to login to your instance.
7. You can now click on “View Instances” on the next page to see your instance. This is what your instance will look like:
8. To connect to your instance, just open a terminal window on your local machine, browse to the folder where you have kept your key pair file, and restrict the key’s permissions:
chmod 400 aws_key.pem
Once you do that, you will be able to connect to your instance by SSHing. The SSH command will be of the form:
ssh -i "aws_key.pem" ubuntu@<Your PublicDNS(IPv4)>
For me, the command was:
ssh -i "aws_key.pem" [email protected]
Also, keep in mind that the Public DNS might change once you shut down your instance.
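If you reconnect often, it can help to keep these details in an SSH config entry so that `ssh dl-server` is all you need to type. The host alias, DNS placeholder, and key path below are my own illustrative values; update HostName whenever the Public DNS changes:

```
# ~/.ssh/config (hypothetical entry; replace the placeholders with your values)
Host dl-server
    HostName ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/aws_key.pem
```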
9. You have already got your machine up and ready. This machine contains different environments that have various libraries you might need. This particular machine has MXNet, Tensorflow, and Pytorch with different versions of python. And the best thing is that we get all this preinstalled, so it just works out of the box.
|
https://towardsdatascience.com/stop-worrying-and-create-your-deep-learning-server-in-30-minutes-bb5bd956b8de
|
['Rahul Agarwal']
|
2020-04-30 07:33:30.234000+00:00
|
['Programming', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
|
5,163 |
The Magic Trick of Machine Learning — The Kernel Trick
|
Welcome to our machine learning magic show!
You must have heard that magicians never reveal their secrets. Machine learning has several tricks that seem like magic. In this blog, we hope to reveal one secret of machine learning called the Kernel Trick. If you have encountered the problem of datasets too big to use your models on, models too complex to use your data on, or nonlinear data, then this trick will be useful for you. When used wisely, this trick can speed up algorithms, reduce memory usage, and uncover hidden patterns.
Sounds exciting and want to know more? Focus, otherwise the magic trick might elude you.
Intuition: Why is it needed?
Let’s start with a summary of the notation used and the concepts we already know.
Summary of Notations
L2 Regularized Regression
Consider the least-squares regularized objective function where we want to learn weights that minimize the following expression:
Cost (objective) function of L2 regularized least squares
Equivalently, this can be written in matrix form as:
Recalling from calculus, to minimize an expression f(x), we need to set its derivative to zero, f′(x) = 0, and solve for x. In the multidimensional case, the equivalent is to calculate the gradient of the function, set it to the zero vector, ∇f(x) = 0, and solve the resulting linear system of equations. For the least-squares objective function (1), the solution in matrix form would be:
The gradient of cost function and normal equations
which is known as the “normal equations”. The solution to the normal equations is:
The solution to normal equations
Once the optimal weights (w) are obtained, to make predictions on the test data Xtest, we need to calculate y = Xtest · w.
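As a sanity check of the derivation, here is a small NumPy sketch on synthetic data (all names and values here are mine, not from any real dataset) that solves the regularized normal equations and predicts on test data:

```python
import numpy as np

# w = (X^T X + lambda * I)^(-1) X^T y, solved on synthetic data.
# np.linalg.solve is preferred over forming the inverse explicitly.
rng = np.random.default_rng(0)
n, d, lam = 100, 5, 0.1
X = rng.normal(size=(n, d))
true_w = np.arange(1.0, d + 1.0)      # ground-truth weights for the toy data
y = X @ true_w

A = X.T @ X + lam * np.eye(d)         # the d x d matrix from the normal equations
w = np.linalg.solve(A, X.T @ y)       # learned weights, close to true_w

X_test = rng.normal(size=(10, d))
y_pred = X_test @ w                   # predictions: y = Xtest . w
```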
If you are not convinced, check the dimensions of the matrix operations to see that it is a sound derivation.
Check the dimensions of the matrix operations
What is the cost of computing linear regression?
Let us analyze the cost of solving the normal equations and obtaining w. For this, let us first consider the cost of the dot product of two vectors u and v. The dot product of two vectors is defined as:
dot products
As we can see, the pattern is to multiply the elements of the two vectors pairwise and then add the products. So, the dot product of two n-dimensional vectors requires n multiplications and n−1 additions. Assuming the costs of multiplication and addition are constant, the total number of operations in computing the dot product is n+n−1. In big-O notation, this is O(n).
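In code, the pattern looks like this (a toy illustration of mine, not an optimized implementation):

```python
# Dot product of two n-vectors: n multiplications and n - 1 additions, i.e. O(n).
def dot(u, v):
    assert len(u) == len(v)
    total = 0.0
    for ui, vi in zip(u, v):  # one multiplication and one addition per element
        total += ui * vi
    return total

print(dot([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32.0
```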
Training (learning weights w)
To learn weights w, we need to perform matrix operations. Below is the breakdown of each matrix operation involved in computing w.
–– Multiplication: Xᵀ X
The result of the matrix multiplication Xᵀ X is a d×d matrix. There will be a total of d² elements in this matrix. Each element of the resulting matrix is a dot product of two n-dimensional vectors. So, the cost of forming Xᵀ X is O(d²·n).
–– Addition: Xᵀ X+λI = B
Matrix addition does not change the size of the matrix; thus, the result will be d×d matrix. One element of the first matrix will be added to one element of the second matrix, a total of d² summations. So, the cost of adding two matrices is O(d²).
–– Matrix inversion: (Xᵀ X +λI)⁻ ¹ = B⁻ ¹ = C
The basic matrix inversion algorithm is Gauss-Jordan elimination (row reduction). Solving a d×d system of equations via Gauss-Jordan elimination is known to be O(d³).
–– Multiplication : (Xᵀ X +λI)⁻ ¹ Xᵀ = C Xᵀ = D
Similar to multiplying Xᵀ X, calculating C Xᵀ is the result of (d×d)(d×n) matrices which yields a d×n matrix whose complexity is O(dn·d) or O(d²n).
–– Multiplication : (Xᵀ X +λI)⁻ ¹ Xᵀy = Dy = w
Similar to multiplying Xᵀ X, calculating Dy is the result of (d×n)(n×1) matrices which yields a d×1 vector whose complexity is O(d·n).
–– Adding up the costs of all the operations yields,
which simplifies to,
The complexity of regularized linear regression
As can be observed, the complexity depends on n and d. Let’s analyze the two scenarios.
If n>>d, this means that our data (matrix) has lots of examples (rows). In this case, we have lots of data points but few features. The number of examples dominates; therefore, the cost of performing least-squares is linear in the number of examples.
Not bad if we have tens or hundreds of examples. But what if we have 100,000? What about millions? What about billions? It’s not bad either since the computation cost is O(n), which is about the same as if we were to read n data points.
If d>>n, this means that our data (matrix) has lots of features (columns). In this case, we have lots of features but few examples. The number of features dominates; therefore, the cost of performing least-squares is cubic in the number of features.
Even with a few features, the computation cost is high. What if we have 1000 features? What about 100,000? What about millions? It would be highly infeasible with a cost of O(d³)!
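Dropping constant factors, the total cost behaves like n·d² + d³. A tiny helper of my own makes the two regimes easy to compare:

```python
# Rough operation-count model: forming X^T X costs about n*d^2 and inverting
# the resulting d x d matrix costs about d^3, so the total is ~ n*d^2 + d^3.
def lsq_ops(n, d):
    return n * d**2 + d**3

print(lsq_ops(1_000_000, 10))  # n >> d: the n*d^2 term dominates (linear in n)
print(lsq_ops(10, 10_000))     # d >> n: the d^3 term dominates (cubic in d)
```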
But that’s not all folks! In computing the weights w, we have to form XᵀX, which will result in a (d×d) matrix. Can we even store in memory such a large matrix if d is very, very large?
|
https://medium.com/sfu-cspmp/the-magic-trick-of-machine-learning-the-kernel-trick-b4b21787805a
|
['Sachin Kumar']
|
2020-04-01 02:17:17.189000+00:00
|
['Kernel Trick', 'Big Data', 'Blog Post', 'Data Science', 'Linear Algebra']
|
|
5,164 |
Christian Counselors (The Blind Shepherds)
|
Aesop’s Fables: The Wolf and The Shepherd
The healing journey brings with it many moments of contemplation, where you ponder what went wrong or why certain relationships didn’t last. My disenchantment with the Christian counseling I sought out along the way has led to some of the most profound soul searching of my life. That isn’t because I had a massive shift in my belief system. My disappointment was due to the asinine approach taken, the judgmental tone, and the refusal to change course when the original method proved unsuccessful. It also wasn’t because they were all fundamentalists who took every word of the Bible literally. They were all fairly liberal evangelicals, who prided themselves on their open-mindedness.
Many people go to therapy, hoping to change, but not change too much. They don’t want someone to upset their sense of normalcy entirely, which is what I was doing when I pursued a Christian counselor. The last thing I wanted was to have some smug secular humanist that tried to pick at my religious traditions. I envisioned a Christopher Hitchens type, sitting in the armchair, shooting me a bemused look if my faith was to come up. I’ve matured enough to know that’s not how therapy goes, but in my young 20s, I was vulnerable.
Modern Christianity, liberal evangelism, and Catholic social justice dogma, in particular, all have an unhealthy tendency to enable and encourage co-dependency inadvertently. I recall the many times I tried to put up boundaries with people in my life, and I was made to feel as if I was bull-headed, and they attempted to talk me out of it. The scriptures that were often cited when I was trying to cut cords of dependency were “Forgive them, Lord, for they know not what they do” (Luke 23:34) or “Forgive as you have been forgiven” (Colossians 3:13).
Dr. DeLauro, the most consequential of the several Christian-minded therapists I worked with, came from a background of pastoral counseling. I could tell he was a product of Catholic school right away, for I often felt as if I was in the principal’s office. There was always an air of self-righteous judgment when I admitted to a rather ugly feeling. Perhaps his reaction was rooted in the theological view that good deeds get you into heaven, or their understanding of the human body, which Catholics have unfortunately been told is something sinful and shameful. It was as if he was instructing me that, even if you resent these notions, still follow them anyway, regardless if the paradox between your mind and body is tearing you up on the inside.
I felt as if I had been exposed to the chronic virus of Catholic guilt, which I now realize defined much of his worldview. As the Bible says, “Love the Lord your God with all of your heart, all of your soul, and all of your mind.” We can’t only focus on our thoughts as they do in CBT and interpersonal psychotherapy. I do not doubt that Dr. DeLauro had been through his share of heartbreak and tribulations in life, but it seems to me that he never really accepted himself as a child of God. He accepted his circumstances, his flaws, his tribulations, but he never accepted himself in spite of those things. All he was focused on was how to redirect negative thought patterns when those issues presented themselves. The sentiment of feeling like “I’m worthless, it’s all my fault, I’m unlovable,” is not just an issue of thinking patterns; they are an absence of self-love. The Catholic mentality turns the act of suffering into a virtue, but when does this become an exercise in masochism? It’s as if they don’t think the layperson is worthy of the answers required to end their suffering.
I was always struck by how strangely Dr. DeLauro reacted whenever I brought up issues regarding abuse of power. Whether it was in my own life or just a casual observation of the culture, there was a dogmatic insistence on forgiveness, acceptance, and just simply moving on. While I am not suggesting that those concepts are wrong in any way, for a psychologist to push those so incessantly runs the danger of equating any attempt to seek accountability, as akin to being unforgiving or bitter. By that logic, the efforts to uncover the Catholic Church abuse scandal or the #MeToo movement were negative, because those seeking accountability hadn’t truly forgiven their perpetrators.
Tales of Accountability: The story of the Boston Globe’s investigation of child abuse by Catholic priests.
My guess is that his passive attitude towards such movements seeking accountability is because the older generations thought consequences were out of the question. All you could do was accept what happened and move on; the perpetrators were never going to be made to answer for their crimes. There was no Human Resources department, no investigative team of journalists wanting to interview you, no internet hashtag to show solidarity with survivors; they were alone. While many victims are having their voices heard for the first time, I imagine many are agitated at being reminded of old wounds; they’ve learned to cope. That doesn’t mean they’ve healed, but healing is a scary endeavor that takes immense courage, and some would rather let sleeping dogs lie. There is no judgment, but those who do want to find peace and resolve shouldn’t be shamed for their choices either.
|
https://medium.com/invisible-illness/christian-counselors-aka-the-blind-shepards-cc2928bc8d91
|
['Quinton Heisler']
|
2020-02-04 03:43:39.483000+00:00
|
['Christianity', 'Religion', 'Therapy', 'Mental Health']
|
|
5,165 |
Decision Tree Algorithm In Machine Learning
|
A decision tree is a non-parametric supervised machine learning algorithm. It is extremely useful for classifying or labeling objects, and it works for both categorical and continuous datasets. It is a tree structure in which a root node and its child nodes are present; each internal node tests a feature of the dataset. Predictions are made at the leaf (terminal) nodes.
Recursive Greedy Algorithm
A recursive greedy algorithm is a very simple, intuitive algorithm that is used in the optimization problems.
What a recursive greedy algorithm does is this: at every step you have a choice. Instead of evaluating all choices recursively and picking the best one, it picks the best immediate choice, goes with that, recurses, and does the same thing again. So basically, a recursive greedy algorithm picks the locally optimal choice, hoping to end up with the globally optimal solution.
Greedy strategies are very powerful in some problems, such as Huffman encoding or Dijkstra’s shortest-path algorithm, both staples of data structures and algorithms. We will be using this algorithm for the formation of the tree.
Steps for learning the decision tree:
step 1: Start with an empty tree
step 2: Select a feature to split the data on
For each split of the tree:
step 3: If there is nothing more to do, predict with the leaf (terminal) node
step 4: Otherwise, go to step 2 and continue (recurse) on the split
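The steps above can be sketched as a short program, using classification error as the split criterion. The loan data below is a toy example of my own making; the "safe"/"risky" labels echo the scenario in the figure:

```python
from collections import Counter

def classification_error(labels):
    # mistakes in a node = points that disagree with the majority label
    if not labels:
        return 0
    majority_count = Counter(labels).most_common(1)[0][1]
    return len(labels) - majority_count

def best_split(rows, labels, features):
    # step 2: pick the feature whose split has the lowest classification error
    best_feature, best_err = None, None
    for f in features:
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[f], []).append(y)
        err = sum(classification_error(ys) for ys in groups.values())
        if best_err is None or err < best_err:
            best_feature, best_err = f, err
    return best_feature

def build_tree(rows, labels, features):
    # step 3: stop when all labels agree, or no features are left to split on
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # predict the majority
    f = best_split(rows, labels, features)
    children = {}
    for value in set(row[f] for row in rows):
        sub = [(r, y) for r, y in zip(rows, labels) if r[f] == value]
        children[value] = build_tree(                 # step 4: recurse
            [r for r, _ in sub], [y for _, y in sub],
            [g for g in features if g != f])
    return {"feature": f, "children": children}

def predict(tree, row):
    while isinstance(tree, dict):
        tree = tree["children"][row[tree["feature"]]]
    return tree

rows = [{"credit": "excellent", "term": "3y"},
        {"credit": "excellent", "term": "5y"},
        {"credit": "poor", "term": "3y"},
        {"credit": "poor", "term": "5y"}]
labels = ["safe", "safe", "safe", "risky"]
tree = build_tree(rows, labels, ["credit", "term"])
print(predict(tree, {"credit": "poor", "term": "5y"}))
```

Note how the stopping conditions in build_tree correspond to step 3, and the recursive call to step 4.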
decision tree example
For example, let’s say we start with an empty tree and I pick a feature to split on. In our case, we split on credit. So we take the data and split it into the data points that have excellent credit, the ones that have fair credit, and the ones that have poor credit, and then for each subset of data (excellent, fair, poor) I continue thinking about what to do next. In the case of excellent credit, there was nothing else to do, so I stop; but in the other two cases there was more to do, and what I do is what’s called recursion. I go back to step 2 but only look at the subset of the data that has fair credit, and then only at the subset of data that has poor credit. Now, if you look at this algorithm so far, it sounds a little abstract, but there are a few points that we need to make more concrete. We have to decide how to pick the feature to split on. We split on credit in our example, but we could have split on something else, like the term of the loan or my income. And then, since we have recursion here at the end, we have to figure out when to stop recursing, when not to go and expand another node in the tree.
Problem 1: Feature split selection
Given a subset of the dataset M (a node in the tree)
For each feature h(x):
Split the data of M according to feature h(x) and compute the classification error of the split
Choose the feature h*(x) with the lowest classification error
The classification error is the number of mistakes in a node divided by the number of data points in that node. For example, in the diagram above, the root node marks 18 loans as risky; those would be counted as mistakes, and the total number of data points in the root node is 40, so we can calculate the classification error of the root node. We then calculate the classification error of the other candidate splits and see which feature has the lowest error; that would be our best split.
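With the numbers from the example, the root node’s error works out as follows:

```python
# Classification error of a node = mistakes / data points in that node.
def node_error(mistakes, total):
    return mistakes / total

print(node_error(18, 40))  # root node from the example: 0.45
```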
Problem 2: When do we stop splitting?
The first stopping condition: stop splitting when all the data points agree on the value of y.
The second: when we have already split on all the features, there is nothing left to split on.
The Common Parameters of the Decision Tree
criterion: Gini or entropy, (default=gini)
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain. It determines how candidate splits are evaluated when deciding where to split.
max_depth: int or None, (default = None)
The first hyperparameter to tune in a decision tree is max_depth.
max_depth is, as the name suggests, the maximum depth that you allow the tree to grow to.
The deeper the tree, the more splits it has & it captures more information about the data.
However, in general, a decision tree overfits at large depth values: the tree perfectly predicts all of the training data but fails to capture the patterns in new data.
So you have to find the right maximum depth using hyperparameter tuning, either grid search or random search, to arrive at the best possible value of max_depth.
min_samples_split: int or float, (default = 2)
An internal node will have further splits (also called children)
min_samples_split specifies the minimum number of samples required to split an internal node.
We can either specify an integer to denote the minimum number or a fraction to denote the percentage of samples in an internal node.
min_samples_leaf: int or float (default = 1)
A leaf node is a node without any children (without any further splits).
min_samples_leaf is the minimum number of samples required to be at a leaf node.
This parameter is similar to min_samples_split; however, it describes the minimum number of samples at a leaf, the base of the tree.
This hyperparameter can also avoid overfitting.
max_features: int, float, string (default = None)
max_features represents the number of features to consider when looking for the best split.
We can either specify a number to denote the max_features at each split or a fraction to denote the percentage of features to consider while making a split.
we also have options such as sqrt, log2, None.
This parameter is used to control overfitting. In fact, it is similar to the technique used in a random forest, except that a random forest also samples from the data and generates multiple trees.
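Putting the hyperparameters together, a typical scikit-learn call looks roughly like this. The specific values are illustrative defaults of mine, not tuned settings, and the iris dataset stands in for the LendingClub data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# The hyperparameters discussed above, with illustrative (untuned) values.
clf = DecisionTreeClassifier(
    criterion="entropy",   # or "gini", the default
    max_depth=3,           # cap the depth to limit overfitting
    min_samples_split=4,   # an internal node needs >= 4 samples to split
    min_samples_leaf=2,    # every leaf keeps >= 2 samples
    max_features=None,     # consider all features at each split
    random_state=0,
)
clf.fit(X, y)
print(clf.get_depth(), round(clf.score(X, y), 2))
```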
Code For Decision Tree Algorithms:
We will be using a dataset from the LendingClub. A parsed and clean form of the dataset is available here. Make sure you download the dataset before running the following command.
The train and validation dataset can found here.
Thanks for reading…..
Recommended Article
|
https://medium.com/ai-in-plain-english/decision-tree-algorithm-in-machine-learning-8aecef85ae6d
|
['Bhanwar Saini']
|
2020-09-30 14:26:53.296000+00:00
|
['Python3', 'Programming', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
Title: Decision Tree Algorithm in Machine Learning

A decision tree is a non-parametric, supervised machine learning algorithm that is extremely useful for classifying labeled objects, and it works with both categorical and continuous datasets. Like any tree structure it has a root node and child nodes; each node denotes a feature of the dataset, and predictions are made at the leaf (terminal) nodes.

Recursive greedy algorithm. A recursive greedy algorithm is a simple, intuitive algorithm used for optimization problems. At every step there is a choice; instead of evaluating every choice recursively and picking the best one, it makes a choice, goes on, and recurses on what remains. Basically, a recursive greedy algorithm picks the locally optimal choice at each step, hoping to arrive at the best globally optimal solution. Greedy algorithms are powerful for problems such as Huffman encoding and Dijkstra's algorithm, and the same idea is used here for the formation of the tree.

Steps of learning a decision tree:
Step 1: Start with an empty tree.
Step 2: Select a feature to split the data on. For each split of the tree:
Step 3: If there is nothing more to predict, make a leaf (terminal) node.
Step 4: Otherwise, go to step 2 and continue (recurse) on that split.

Decision tree example. Say we start with an empty tree and pick a feature to split on; in this case we split on credit. We take the data and split the data points into those with excellent credit, fair credit, and poor credit, so we have a subset of the data for each of excellent, fair, and poor. Continuing: for the excellent-credit case there is nothing more to decide, so we stop there. For the other two cases we have what is called recursion: go back to step two and look at the subset of data where credit is fair, then at the subset where credit is poor. The algorithm so far may sound a little abstract, and two points need to be made concrete. First, how do we decide which feature to split on? We split on credit in this example, but we could split on something else, like the term of the loan or income. Second, since the recursion has to end, how do we figure out when to stop recursing versus expanding another node of the tree?

Problem 1: feature split selection. Given the subset of the dataset at a node of the tree, for each feature h(x): split the data according to h(x) and compute the classification error of the split; then choose the feature with the lowest classification error. The classification error is the number of mistakes at the node divided by the number of data points at the node. For example, in the diagram the root node holds 18 risky loans, which would be considered mistakes, out of 40 total data points, so the classification error at the root node is 18/40 = 0.45. We then calculate the classification error of each candidate split at a node, and the feature with the lowest error is the best split for that node.

Problem 2: stopping condition. The first stopping condition: stop splitting when all the data points agree on the value. The second: we have already split on every feature, so there is nothing left in the dataset to split on.

Common parameters of decision trees:
- criterion: {"gini", "entropy"}, default "gini". The function that measures the quality of a split. Supported criteria are "gini" for Gini impurity and "entropy" for information gain; it decides the node splitting.
- max_depth: int or None, default None. The first hyperparameter to tune. As the name suggests, it is the maximum depth the tree is allowed to grow to. The deeper the tree, the more splits it has and the more information about the data it captures; however, in general a decision tree overfits at large depth values: the tree perfectly predicts the training data but fails to capture patterns in new data. You can find the right max depth using hyperparameter tuning, either grid search or random search, to arrive at the best possible value of max_depth.
- min_samples_split: int or float, default 2. An internal node is a node that will be split further (a node with children). min_samples_split specifies the minimum number of samples required to split an internal node; you can either specify a number to denote the minimum count or a fraction to denote a percentage of the samples at the node.
- min_samples_leaf: int or float, default 1. A leaf node is a node without children, i.e. without further splits. min_samples_leaf is the minimum number of samples required at a leaf node. It is similar to min_samples_split, but describes the minimum at the leaves, the base of the tree; this hyperparameter also helps avoid overfitting.
- max_features: int, float, or string, default None. max_features represents the number of features to consider when looking for the best split. You can either specify a number of features per split, a fraction denoting a percentage of features to consider, or one of the options "sqrt", "log2", or None. This is another method used to control overfitting; in fact, a similar technique is used in random forests, except that random forests also sample the data to generate multiple trees.

Code. The decision tree code uses the LendingClub dataset, parsed into clean form; the dataset is available for download. Make sure to download the dataset before running the following commands; the train and validation datasets can be found with it. Thanks for reading…

Tags: Python3, Programming, Artificial Intelligence, Data Science, Machine Learning
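The split-selection rule (Problem 1) can be sketched in a few lines of Python. This is a toy illustration with made-up loan features, not the article's LendingClub code:

```python
from collections import Counter

def classification_error(labels):
    # Mistakes at a node = everything outside the majority class,
    # divided by the number of data points at the node.
    if not labels:
        return 0.0
    majority_count = Counter(labels).most_common(1)[0][1]
    return (len(labels) - majority_count) / len(labels)

def best_split(rows, labels, features):
    # Greedy step: for each candidate feature, split the data by its
    # values, compute the weighted classification error of the split,
    # and keep the feature with the lowest error.
    best_feature, best_error = None, float("inf")
    for feature in features:
        error = 0.0
        for value in {row[feature] for row in rows}:
            branch = [y for row, y in zip(rows, labels) if row[feature] == value]
            error += len(branch) / len(labels) * classification_error(branch)
        if error < best_error:
            best_feature, best_error = feature, error
    return best_feature, best_error

# The root-node example from the text: 18 risky loans among 40 data points.
root_labels = ["risky"] * 18 + ["safe"] * 22
print(classification_error(root_labels))  # 0.45
```

Running `best_split` on every node's subset, then recursing into each branch, is exactly the greedy tree-growing loop described above.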
|
5,166 |
Introducing Kubeflow to Zeals
|
Hi there, this is Allen from Zeals Japan. I work as an SRE / gopher, mainly responsible for microservices development.
Background
Story
Nowadays machine learning is everywhere and we do believe it will still be trending in the next few years. Data scientists are working on large datasets on a daily basis to develop models that help the business in different areas.
What’s wrong?
We are no exception: our machine learning team works with different datasets across multiple areas, including deep learning (DL), natural language processing (NLP), and behaviour prediction, to improve our product. But because we handle massive amounts of data, the team soon realized that working locally or using a cloud-provider notebook like Colab or Kaggle was dragging down their productivity significantly:
Unable to scale and secure more resources when handling heavier workloads
Limited access to GPUs
If you are using a cloud notebook, results will not persist automatically but reset once you idle or exit
Hard to share notebooks with your co-workers
No way to use a custom image, so you need to set up the environment every time
Hard to share common datasets among the team
Researching
Current Implementation
Originally we were using a helm chart to install JupyterHub on our Kubernetes cluster, and we were having a hard time managing the resources and shared datasets via shared volumes.
As part of the infrastructure team, we need to adjust resources for the ML team frequently, which is not ideal and obviously drags down both teams' productivity.
Tools?
There are multiple solutions available in the community: kubespawner, and the helm chart we originally used, zero-to-jupyterhub-k8s.
kubespawner
Stars: 328 (2020–08–12)
Pros
Able to spawn multiple notebook deployment separated by namespace
Extremely customizable configuration based on python API
Ability to mount different volumes to different notebook deployment
Cons
Community is small
Lacking support for cloud-native features: setting up the network, the Kubernetes cluster, permissions, etc. still needs to be handled manually
Lacking support for authorization
zero-to-jupyterhub-k8s
Stars: 740 (2020–08–12)
Pros
Official support, helm chart that’s published by jupyterhub
Easy to setup and manage by helm
Good authorization support, such as GitHub and Google OAuth
Cons
Limited support for individual namespaces
Hard to declare and mount volumes based on notebook usage
Lacking support for cloud-native features: setting up the network, the Kubernetes cluster, permissions, etc. still needs to be handled manually
Kubeflow
Stars: 9.2k (2020–08–12)
Pros
Good support from both author and community
Good support for different cloud platforms
Not limited to notebooks; it also ships other tools that help with the machine learning process, such as pipelines and hyperparameter tuning
Able to easily separate namespaces between different users without changing any code
Can easily mount multiple volumes based on notebook usage
Dynamic GPU support
Cons
Very large stack that is hard to understand and customize
Needs its own cluster, so the running cost is higher
Steep learning curve: compared to a plain notebook, Kubeflow also requires knowledge of Kubernetes when you use tools like pipelines and hyperparameter tuning
What We Chose?
Kubeflow is our pick. From the comparison above, we can easily see that Kubeflow has many features that I think we will need in the future. The entire solution also comes as a box, so it looks like it can be set up quite easily.
I quickly found that this might be the one I was looking for, and I couldn't wait to try it.
They released the first stable version, 1.0, back in March, and I think it's a good time for us to try it.
Installation
Try it out first!
At this stage I hadn't yet decided to proceed with Kubeflow, but as infrastructure people, we always need to test a tool before we introduce it to others.
The installation is quite simple for Kubeflow if you are running in the cloud. They have out-of-the-box installation scripts for each cloud provider, and you just need to run them.
Setting up the project
Since we are running on GCP, I'll use that as an example, but you can also find instructions for the cloud provider you are using, or even for an on-premise cluster, on the same page.
It's good to create a new GCP project when you are trying something out, so that it stays isolated from your other environments.
Installing CLI
First we need to install the Kubeflow CLI; you can find the latest binary on the GitHub releases page.
Then follow the steps here to set up OAuth so that the Kubeflow CLI can access GCP resources.
Setup the GCP bases
After that, just some standard gcloud configuration.
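The standard configuration mentioned here usually amounts to something like the following; the project ID is a placeholder, and this is a sketch rather than the exact commands we ran:

```shell
# Authenticate and point gcloud at the new project
gcloud auth login
gcloud config set project <your-gcp-project-id>

# Pick a single zone (see the GPU/region note below)
gcloud config set compute/zone asia-east1-a
```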
Note that multi-zone is not yet supported if you want to use GPUs; we're using asia-east1 here since it is the only region that has K80 GPU support right now (July 2020).
Spinning up the cluster
Spinning up the cluster is simply a matter of running one command.
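With Kubeflow 1.0 on GCP that one command is a `kfctl apply` against a GCP deployment manifest; a sketch (the deployment name, directories, and exact config URI are illustrative — check the docs for the version you deploy):

```shell
export KF_NAME=kf-test                     # name of the Kubeflow deployment
export BASE_DIR=~/kubeflow
export KF_DIR=${BASE_DIR}/${KF_NAME}

# GCP deployment spec for the 1.0 release line (illustrative URI)
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.0-branch/kfdef/kfctl_gcp_iap.v1.0.2.yaml"

mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}            # builds and applies the whole stack
```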
Customizing the deployment
For simplicity we are applying directly here, but if you want to customize the manifest, that is also possible by running the build command first.
Verify the installation
Get the kube context
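Fetching the kube context is the usual gcloud one-liner; the cluster name matches the deployment name, and the zone and project are placeholders:

```shell
gcloud container clusters get-credentials ${KF_NAME} \
  --zone asia-east1-a --project <your-gcp-project-id>

kubectl config current-context   # verify the context is active
```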
Accessing the UI
Kubeflow will automatically generate an endpoint for you; it can take a few minutes before it becomes accessible.
That’s all, pretty easy! Now you can access the link and check on UI
Setting up the notebook server
Creating the notebook server
Navigate to Notebook servers -> New server
You can see there are tons of configurations we can make!
Settings for the notebook server
Breaking it down a bit:
Image
Able to use a prebuilt TensorFlow notebook server image or a custom notebook server image
You can prebuild images with common dependencies installed, and then everyone has access to the same setup!
CPU / RAM
Workspace volume
Each notebook creates a new workspace volume by default. This ensures you won't lose your progress if you are away or the pod accidentally shuts down
You can even share a workspace volume with your team if you configure it as ReadWriteMany
Data volumes
Now it's super easy to share datasets by just using data volumes; scientists only need to choose which dataset they want to use
You can even mount multiple datasets in the same notebook server
Configurations
This is used to store credentials or secrets. If you are using GCP, the list defaults to Google credentials, so you can use the gcloud command or reach SQL datasets / BigQuery to access even more data.
GPUs
Now it's dynamic! But don't forget to turn it off once you've finished using it, otherwise it may blow up your bill!
Running our first experiment
Setup
I created a notebook server with all default parameters, running on tensorflow-2.1.0-notebook-cpu:1.0.0 image.
I wanted to build a simple salary prediction model following the fastai tutorial.
Since the image doesn't come with fastai, simply install it.
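In a notebook terminal (or a `!`-prefixed cell) that is just a pip install; the version is left unpinned here, as the article doesn't state one:

```shell
pip install fastai
```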
Training the model
We simply copy the code from the tutorial
We successfully trained a model!
We can simply save the current checkpoint in the workspace and retrain it next time!
Hyperparameters Tuning
Katib
Not limited to notebook servers, Kubeflow also has tons of other modules that are very convenient for data scientists; Katib is one of the modules you can use.
Katib provides both hyperparameter tuning and neural architecture search; we will try out hyperparameter tuning here.
Writing the job
Using Katib is extremely easy, and if you are familiar with Kubernetes manifests it will be even easier. Katib uses Kubernetes Jobs and repeatedly runs your job until it hits the target value or the maximum number of trials.
We will use the same salary prediction model, but this time we want to tune its input values.
Training script
Collecting metrics
Katib will automatically collect the training metrics from stdout, so we only need to print them out.
In the args we pass lr, num_layers and emb_szs as hyperparameters.
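A minimal sketch of such a training script, assuming the flags are spelled `--lr`, `--num_layers` and `--emb_szs`; the actual fastai training call from the tutorial is elided, and the placeholder accuracy stands in for the real validation metric:

```python
import argparse

def format_metric(name, value):
    # Katib's metrics collector scrapes stdout for lines shaped
    # like "<metric-name>=<value>", so that is all we print.
    return f"{name}={value:.4f}"

def parse_hyperparams(argv=None):
    # The experiment passes the tuned values as command-line flags.
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.01)
    parser.add_argument("--num_layers", type=int, default=50)
    parser.add_argument("--emb_szs", type=int, default=10)
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_hyperparams()
    # ... train the salary model here with args.lr, args.num_layers,
    # args.emb_szs (omitted: the fastai code from the tutorial) ...
    accuracy = 0.0  # stand-in for the real validation accuracy
    print(format_metric("accuracy", accuracy))
```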
Job definition
Explanation:
We use objectiveMetricName: accuracy as the target metric, and the target value is goal: 0.9
lr: random from 0.01 to 0.03
num_layers: random from 50 to 100
emb_szs: random from 10 to 50
We also configured the maximum trial count with maxTrialCount: 12
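Putting the explanation together, a Katib v1alpha3 Experiment for this setup would look roughly like the following. This is a sketch built from the values listed above, not the exact file we used: the training image is a placeholder, and `parallelTrialCount` is an assumed value.

```yaml
apiVersion: "kubeflow.org/v1alpha3"
kind: Experiment
metadata:
  name: salary-prediction
spec:
  objective:
    type: maximize
    goal: 0.9
    objectiveMetricName: accuracy
  algorithm:
    algorithmName: random
  maxTrialCount: 12
  parallelTrialCount: 3        # assumption, not from the article
  parameters:
    - name: --lr
      parameterType: double
      feasibleSpace: {min: "0.01", max: "0.03"}
    - name: --num_layers
      parameterType: int
      feasibleSpace: {min: "50", max: "100"}
    - name: --emb_szs
      parameterType: int
      feasibleSpace: {min: "10", max: "50"}
  trialTemplate:
    goTemplate:
      rawTemplate: |-
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: {{.Trial}}
          namespace: {{.NameSpace}}
        spec:
          template:
            spec:
              containers:
                - name: {{.Trial}}
                  image: <your-training-image>   # placeholder
                  command:
                    - "python"
                    - "train.py"
                    {{- with .HyperParameters}}
                    {{- range .}}
                    - "{{.Name}}={{.Value}}"
                    {{- end}}
                    {{- end}}
              restartPolicy: Never
```

Each trial renders the template with sampled hyperparameter values and runs it as a Job, which is how the script's `--lr`-style flags receive their values.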
Result
The job will automatically start once you submit it.
The results update as each trial finishes, and you can see them under HP -> Monitor.
Previously, using only a Jupyter notebook, we didn't even conduct HP tuning: either you write a huge loop that makes it run for a decade, or you simply use your sixth sense to decide the hyperparameters.
Conclusion
Kubeflow is a very good out-of-the-box tool that lets you set up an analysis environment without any pain, and it provides several powerful modules, like the notebook servers and Katib covered above.
There are also more features that we haven't touched on in this article, such as:
Namespaced permission management
Sharing notebooks among the team or with outsiders
Continuous training and deployment for machine learning models
Continuous ETL integration with cloud storage or data warehouses
All of those are very common requirements from data scientists, and they fit most companies as well.
We are still in the middle of the transition, so we didn't manage to cover all of Kubeflow's features; we definitely want to write more about it after we explore further.
We are hiring!
We are the industry leader in chatbot commerce in Japan, and our company is based in Tokyo. If you are a talented engineer and interested in us, simply drop an application here and we can start with a casual talk.
Opening Roles
(Sorry that it's still in Japanese right now; we are working on translating it to English.)
|
https://medium.com/zeals-tech-blog/introducing-kubeflow-to-zeals-c41b6199d2b9
|
['Allen Ng']
|
2020-09-15 11:24:59.270000+00:00
|
['Machine Learning', 'Deep Learning', 'Kubernetes', 'Kubeflow', 'Pytorch']
|
|
5,167 |
How I Make Steady Money Daily On Medium
|
1. Publish regularly.
Duh! As I said, none of my articles has hit it big. The most views I have on a single article is 305, with a 60% read ratio. I have to publish regularly to make sure I keep up my income.
I publish 4 times a week and do my best to avoid weekends, but sometimes it can't be helped. There's a belief that publishing on weekends is not as effective, and I have to tell you, I have noticed that this is true. Publishing between Monday and Thursday is probably best.
2. Spread your work on socials.
This is not the key to getting a good amount of views/reads, but it does help. There are a lot of extremely interactive Facebook groups out there that I am part of, and you can post links to your work there:
Medium Writers Lounge
Medium Magic
Medium Writers Boost
There’s also Quora, Reddit, LinkedIn, Slack, Twitter and whatever else you use to promote your work.
As well as promotion, being interactive in general will only do you well. Clapping for other writers and reading successes does bring me joy.
3. Write about writing.
I have noticed that my articles focusing on writing are the most successful. This is probably the niche I should follow. But this isn't me as a writer. I like to talk about everything and anything, so if I have an idea, I'll write about it.
You do not have to follow one particular niche. I most certainly do not, but I have noticed that writing about writing does draw people in.
4. Submit to publications
Submitting to publications means that many others will naturally come across your work on the publication homepage, generating more views and reads thus making you some dollars/cents.
Saying this, there have been cases where people have self-published and editors have reached out asking if they can publish it.
At the end of the day, a good piece of writing is a good piece of writing.
5. Practice your writing.
I haven’t really taken time to focus on bettering my writing. I tend to just write, run it through Grammarly and Microsoft Word and if nothing is red or blue, Thunderbird is go.
Being six weeks into Medium, I can see the improvements from when I first started to where I am now. This does not mean that improvements cannot be made and I know that sooner or later I will have to focus on this.
I want to make my content engaging. I want to make it a good, beneficial read for someone. I want someone to just enjoy my work. The only way to do this is to improve.
6. Check your stats everyday.
Many people probably advise against this, but I don't. Every morning, check your stats, even if you haven't published. That's ONCE a day, by the way. Becoming obsessed with stats isn't good, and it doesn't make the numbers increase.
You want the dollar signs to go up, not down or stay stagnant. My advice is to make a note each day of how much you have, work out the day-to-day difference, then work out the average (divide by how many days are in that month) to see how much you earned that month.
|
https://medium.com/illumination/how-i-make-steady-money-daily-on-medium-4754420a27d3
|
['Shamar M']
|
2020-12-22 09:24:58.093000+00:00
|
['Advice', 'Money Management', 'Writing', 'Money', 'Writing Tips']
|
|
5,168 |
Local Architecture in a Globalized World
|
Local Architecture in a Globalized World. By gestalten, Sep 25.
Today, architects around the world face an unprecedented series of challenges. Issues such as the rapid growth of populations, societal and political instability, and climate change present those working on the built environment with new levels of complexity. This impacts the myriad decisions that go into any architectural assignment: selecting materials and organizing labor, the use of space, and the interaction between a building and its surroundings.
Globalization brings faster and more streamlined knowledge-sharing and integrative solutions, but also the risk of standardization and homogenization. This is particularly true in ever-expanding cities, where skyscrapers and high-tech construction systems define our image of what cities “should” be, dominating not only the skyline but the headlines and prevailing academic discourse. In this context, the progression of architecture continues to be shaped by an overtly Western perspective. Size, technological advancement, territorial dominance, material innovation, and engineering prowess are often privileged in the professional conversation, according to a Western idea of what constitutes progress. Even beyond these borders, in regions where colonization has marked the architectural landscape, systemization and mass production, the pillars of our integrated international culture, threaten to neuter local knowledge and tradition.
Fernando and Humberto Campana, the siblings behind design studio Campana Brothers, are committed to using native materials and reviving traditional handicrafts in their practice. For a family home in São Paulo, they blurred the distinction between outside and inside using materials and techniques that maintain a continuous dialogue with nature, space, and natural light. (Photo: Leonardo Finotti, Beyond the West)
Architectural practitioners are, then, presented with a unique opportunity. Beyond satisfying the basic human need for shelter, the profession now has the potential to profoundly reshape the way we live in the 21st century. The tools of the global economy, the powers of communication and production, equip designers and builders to make decisions that will affect our future livelihood and that of the planet. Our latest release Beyond the West explores the diversity of global architectural cultures and, in doing so, proves that this approach is possible, and indeed will flourish.
The book investigates architecture that challenges our current grasp of the discipline, looking beyond the Western world to discover alternative solutions to globally relevant issues such as sustainability, transport and migration, material innovation, and even wellness. It aims to uncover the architecture of regional cultures and unpack what localization can mean in a global context by investigating how regional social, cultural, and economic conditions can produce intuitive and original architectural strategies. Looking to thriving practices and projects that are scarcely recognized in this sphere of architecture, Beyond the West highlights what progress looks like in Asia, Africa, and the Americas, with a focus on vernacular applications. We discover how everyday needs are better met when approached from a place of authenticity, one that serves the requirements of a specific situation at a particular time.
Architecture must respond closely to its environment to resonate with the landscape around it and the people who use it. It benefits humans when attention is paid to local surroundings: to weather patterns, economic restrictions, and cultural traditions. That is not to say that Western ideas have no place outside of their borders. Our featured projects and architects do not work in blind isolation; they understand the benefits of a globally integrated world and utilize knowledge gleaned in other parts of the globe.
For Kuala Lumpur’s Chempenai House, WHBC Architects created a green living approach in a concrete jungle. Mimicking the local ecosystem, the structure is designed to self-cool and enhance vegetation growth. (Photo: Ben Hosking, Beyond the West)
We also recognize and applaud the many projects in the West that adhere to principles of regionality, sustainability, and respect for context and environment. We have chosen, however, to place our focus outside the West, uncovering projects with a sensitivity to local strictures in countries such as Brazil, Burkina Faso, Vietnam, and elsewhere. A deep consideration for the local climate, resources, and cultures can initiate tremendous advances in the built environment.
The projects featured in the book, from a low-impact mountainside bungalow modeled on Sri Lankan watch huts to an isolated Namibian desert retreat inspired by the nest of a local bird, are examples of a careful, research-driven, and localized approach. In many cases, tradition and intuition are the driving forces behind the overall concept. Working with available materials and in harmony with the surrounding terrain, architects find inspiration in traditional knowledge and skills. Bricks, stones, or bamboo from the surrounding landscapes are made or cut by local hands, creating local employment, and instilling local pride. These are appropriate responses when challenges include transport logistics, limitations, and the availability of materials.
Art, nature, local materials, and traditional techniques coexist in this unconventional structure that resists the definition of “gallery” because it is entirely devoid of straight lines. SFER IK Museion by Roth Architecture is an interdisciplinary creative space in Tulum, Mexico. (Photo: Fernando Artigas, Beyond the West)
Many of the practices producing these trailblazing projects model new ways of thinking and working by their very make up. They often have younger and more gender-balanced workforces, and they employ open-ended, democratic decision-making processes with a distinctly contemporary and thoughtful working culture. This facilitates innovative and original problem-solving, and may, in time, contribute to a shift in mindset within the discipline as a whole.
Diversity and attentiveness to new voices are crucial for the development of contemporary architectural practices, just as localism and consideration for the environment are for individual projects. This book does not present a comprehensive list of localized architecture; instead, it offers an intriguing glimpse into this design category beyond Western borders, and we hope that it becomes a starting point for further exploration of these regions and the individual architects affecting change. Our goal is for the book to spark curiosity and encourage readers to explore the immense opportunities that arise when we cast our vision beyond the architectural cultures of Europe and North America.
The “New Andean Architecture” of Freddy Mamani has shaped the identity of El Alto in Bolivia, where the former bricklayer has fashioned over 60 buildings. Mamani’s “cholets” (a name combining “chalet” with “cholo,” a derogatory term for an indigenous person) are vibrantly colored edifices decorated in geometric acrylic paneling, glossy chrome, and reflective glass. (Photo: Tatewaki Nio, Beyond the West)
The featured projects offer solutions to some of the challenges facing our planet, and together they represent the possibility of a better future. The architects showcased, often working in response to rapid urban growth, climate change, and political and economic instability, have doggedly drawn on their training, the knowledge of their peers, and their intuition to develop unique local solutions. This grounded, curious approach should inspire other members of the profession. It is a local call to arms for an international industry.
Tracy Lynn Chemaly and Faye Robinson with an introduction to Beyond the West, our latest release exploring a global architecture movement linked to locality.
|
https://medium.com/gestalten/local-architecture-in-a-globalized-world-d3b093085cb
|
[]
|
2020-09-25 14:22:52.485000+00:00
|
['Architecture', 'Globalization', 'Design', 'Building', 'Architects']
|
|
5,169 |
Drug Deal with God
|
Creative Nonfiction Contest Finalists
Drug Deal with God
Mixing LSD and Weed
Photo by Matt Flores on Unsplash
I waved my hand in front of my face. Tracers followed across the space, smudging the air in an arc. I’d dropped acid before, but this seemed more intense.
“Whoa. Are you seeing what I’m seeing?” I asked.
Ed, ever the philosopher, replied, “Can anyone ever see what someone else is seeing?”
With LSD, I thought I could experience a higher consciousness or tap into dormant parts of my brain. I was taking Intro to Philosophy, and we’d just studied Descartes — the whole idea of not being able to trust our senses. I fixated on the idea that everything comes to us via our senses and that our senses can be deceived, combining it with I think, therefore I am. I had proof that I existed, but I could not prove the outside world existed.
The LSD melded the two concepts. I wondered if my senses were being deceived at that time, in that room. Was I really sitting on Ed’s weight bench in his apartment? Was Ed even real?
Photo by Jr Korpa on Unsplash
We’d met when I worked at Farrell’s Ice Cream Parlour — Ed the assistant manager. Blond, smart, funny, with a George Washington nose — too big to be devastatingly handsome by American standards — but cute enough. I was in love. I took him to be my boyfriend. He took my virginity. I was disappointed. I thought there would be a bigger orgasm with intercourse. I’d been doing that on my own for years.
He moved to Tucson, and I followed, thinking I would marry him, and we’d have big-nosed babies. But he wanted to see other people — we could still be friends and sometimes lovers.
Our friend Traci passed the bong.
“Hey, what if none of you are really here?” I took a tiny bong hit. My head swirled. My mouth dried into a desert. I exhaled and watched a trail of faeries dance into the room.
Ed took a hit. “Does it really matter?”
“Yeah.” Green stairs appeared in my vision, climbing on Ed, on Traci, on the wall. My horizon moved, rocking back and forth with nothing to hold on to. “Let’s get out of here. I’m creeping out.”
“Okay, okay. Let’s go to a movie? Pile in the jeep. We’ll go see what’s playing.”
I wanted Ed to hold me. Kiss me. Save me from this eerie feeling. But he wouldn’t, and I wouldn’t ask. A few months earlier, we’d painted Ed’s Jeep with what was supposed to look like camouflage, but ended up looking like cowhide. Every time I saw his jeep on campus, I knew he was fucking Hazel. It hurt. Why didn’t he want me anymore?
Photo by GoaShape on Unsplash
My consciousness drifted in and out of my body. I thought it was morning. I thought it was night. I thought I was going crazy. Time distorted and mixed events nonlinearly. Camel-colored mud slithered over our skin, cool then itchy. We ate pizza. We knocked on a trailer door, needing a phone. We were at Ed’s again? Or before? A man wore a yellow hardhat. Jäger shots out of paper cups. We were too dirty to come inside. A backyard with a garden hose, a dog barked — his yap like scraping metal. A winch pulled the Jeep out of the muck as I rubbed tan mud on my arms. The horn was stuck upside down. Water droplets on the grass turned to emeralds and diamonds. The Jeep wailed like a beached whale. We four-wheeled in a muddy construction site. Fun until we flipped.
“Get out of the Jeep, Tammy.” Ed parked in front of my cottage.
“There’s something wrong with me. I think I’m going crazy.” Fear swirled inside me. My blood felt prickly under my skin. I was hot and cold intermittently. “Please don’t leave me. The acid hasn’t worn off. It should have worn off by now.” I hated my desperation.
“There’s nothing wrong with you. We’ve been up for twenty-four hours. Go sleep.”
He didn’t understand. Something was wrong with my brain, and if I lost control, if I slept and gave my brain over to my subconscious, I might not come back. I clung to the Jeep. I would not budge.
“Fine.” He peeled out, issuing a golden dust cloud.
He drove for what seemed like miles from my cottage, turned off the Jeep, pocketed the keys, and left. Jogged away. I cried. I shivered. I leaned over and threw up pink pizza guts. I recognized the grocery across the street and then the gas station. Neither were open. Time didn’t make sense. I walked and walked and walked. Found my tiny house.
I lay on my couch waiting to die. I made a deal with God: if he let me live, I would never do drugs again.
|
https://medium.com/inspired-writer/drug-deal-with-god-8314f9479499
|
['Tam Francis']
|
2020-12-21 13:02:21.480000+00:00
|
['Drugs', 'Perception', 'Philosophy', 'Nonfiction', 'Weed']
|
|
5,170 |
A day spent at the Singapore Cloud and Datacenter Convention 2019
|
According to forbes.com, more data has been created in the past two years than in the entire previous history of the human race. And by 2020, one third of all data will pass through the cloud.
Data centers are mushrooming, cloud services are flourishing.
On the 11th of July, I attended the Cloud and Datacenter Convention at the Sands Expo and Convention Centre in Singapore, where around 1,000 professionals and 30 exhibitors gathered to share and discuss future thinking and lessons learned in cloud and data innovation. Here’s my summary of some key ideas I took away before, during, and after the convention.
Sustainability was a key topic where aspects of cost, talent and technology were discussed.
The Natural Resources Defense Council (NRDC) estimates that data centers consume up to 3% of all global electricity production, and around 40% of a data center’s energy goes to powering and cooling the massive amounts of equipment inside. An efficient cooling system is therefore one of the most important factors in keeping operational costs down. Liquid cooling systems and innovative wiring layouts that promote better airflow were among the methods discussed. Cooling is especially important for countries near the equator like ours.
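The article doesn’t give a formula, but the standard metric behind these cooling discussions is power usage effectiveness (PUE): total facility energy divided by the energy that reaches the IT equipment. A minimal Python sketch, assuming (purely for illustration) that the 40% figure counts all non-IT overhead:

```python
# Hypothetical illustration of data-center energy accounting; the figures
# below are the article's rough estimates, not measured data.

def cooling_share(total_kwh: float, overhead_kwh: float) -> float:
    """Fraction of total energy spent on power delivery and cooling."""
    return overhead_kwh / total_kwh

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    A PUE of 1.0 would mean every watt goes to computing."""
    return total_facility_kwh / it_equipment_kwh

# If 40% of a facility's 10 GWh annual draw is cooling/power overhead,
# the IT load is the remaining 6 GWh.
total = 10_000_000  # kWh
overhead = 0.40 * total
print(cooling_share(total, overhead))          # 0.4
print(round(pue(total, total - overhead), 2))  # 1.67
```

Under that assumption, a facility spending 40% of its energy on overhead runs at a PUE of about 1.67; cutting the cooling share pushes PUE toward the ideal of 1.0.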
The cost of building data centers is another big concern. More and more businesses depend on data centers and cloud services, and no one can afford downtime. Modern data centers are built to withstand winds of 200 km/h and magnitude-9.0 earthquakes.
The rapid growth of data centers leads to higher demand for talent, not only to build and maintain the data centers but also to manage them. It is a segment that has not previously been emphasised. Almost half of the talent in the field will retire within 8 years, and most of the jobs available 5 years down the road may not exist yet. Schools are not currently offering courses that would supply the industry with sufficient, suitable talent; only a few courses are available, in the United States and Europe, and as a speaker pointed out, these tend to be either too US-centric or EU-centric.
There is a real talent crunch. Students might not yet see the needs of the future, so reaching out to them before they embark on their studies might lure in more future talent.
This also leads to handing over some management tasks to artificial intelligence (AI), with China taking the lead in research in this field.
Data is just 1s and 0s unless users turn it into meaningful analysis. Data is widely available now, and using it responsibly is a virtue we need to instill in the younger generation.
Regulation and data sovereignty were topics I found very interesting. Most speakers on one of the panels agreed that data sovereignty does not guarantee data security. They also argued that regulation in one industry should enable other industries rather than hinder their development; for example, data collected in the transportation industry should enable the development of road infrastructure.
As for the future of cloud services and data centers, more and more applications are cloud native, and edge computing and hyperscale data centers are gaining attention too. Data processing for the internet of things (IoT) needs more frequent connections between devices and the cloud for better performance: the more frequently devices can sync data with the database, the better the results they can achieve.
My conclusions from the vast amount of information I received in 6 hours?
The talent crunch is real, and the blockchain world faces a similar one. Education and training are important. As a trainer, I am glad to be actively participating in the development of this segment through both online and offline training. In my opinion, it is a good move for NEM to develop its online training portal to help train others in using our blockchain technology.
Peer-to-peer networks like Blockchain need cloud services. How would data centers affect Blockchain? How could AI, IoT and Blockchain be effectively integrated through services by data centers? When technologies converge, they are more powerful.
|
https://medium.com/nemofficial/a-day-spent-in-singapore-cloud-and-datacenter-convention-2019-5b20359cf6ad
|
['Nem Official', 'Editors']
|
2019-09-24 12:44:18.687000+00:00
|
['Event', 'Cloud Computing', 'Nem Blockchain', 'Nem Foundation', 'Technology']
|
|
5,171 |
Why Do More Buying Choices Cause Unhappiness?
|
Have you ever felt unhappy about a purchase you made despite spending hours reading product descriptions and reviews, comparing dozens of options, and finally choosing what you perceived to be the best deal? I faced the same problem many years ago, when I still lacked the knowledge of effective shopping techniques around buying choices.
We make shopping mistakes because of how our brain is wired and because retailers use human psychology to their advantage by manipulating the shopping process, particularly in digital contexts. Amazon and other retailers want us to spend as much time as possible on their websites to tempt us with a variety of add-ons and options and to cause us FOMO (fear of missing out) on the best possible deal. This drains our time and wallets — and even our happiness. It’s a good thing you can learn more about the psychological dangers of shopping through cutting-edge research in behavioral economics and cognitive neuroscience.
Choose + Buy = Happiness?
Tom’s wife usually does the grocery shopping in the family, but she had the flu so Tom went instead. Selecting the fruits and veggies went fine, but he hit a wall when he got to the bread section. There were over 60 varieties to choose from. Tom examined the ingredients and made comparisons; he wanted to get it right, after all. After 10 minutes of deliberation, he picked one that seemed like the perfect choice.
However, he had to repeat the process for the rest of the packaged goods. Different brands offered a host of choices, and his wife’s usual shopping list that said “bread” or “cheese” didn’t help. By the time he was finished shopping and paid for everything, he was tired and miserable.
Why did Tom have this kind of experience? Shouldn’t he be happy that there were many choices in the supermarket? After all, mass media presents the narrative that abundance of choice equates with happiness.
According to neuroscience and behavioral economics research, the real story is more complicated than that. While having some options makes us feel good, once we get beyond that small number, the more choices we get, the less happy we feel.
For example, in one study, shoppers at a food market saw a display table with free samples of 24 different types of gourmet jam. On another day in the same market, the display table had 6 different types of jam. The larger display attracted more interest, but people who saw the smaller selection were 10 times more likely to purchase the jam, and they felt better doing so compared with those who had to select among the larger display.
This phenomenon was later named “choice paralysis”: after a certain minimal number of choices, additional options cause us to feel worse about making a decision and also make us less likely to decide in the first place. This applies to both major and minor decisions in life, from choosing a retirement plan to something as simple as ice cream flavors.
Loss Aversion & Post-Purchase Rationalization of Buying Choices
Why do more choices cause unhappiness? Well, one typical judgment error we make because of the wiring in our brains is called loss aversion. Our gut reactions prefer avoiding losses to making gains. This is probably because of our evolutionary background; our minds evolved for the savanna environment, not for our modern shopping context. Due to this, when we have lots of options, we feel anxious about making the wrong choice and losing out on the best one.
Even having the opportunity to change your mind can be problematic. As it turns out, the benefit of having the option to exchange a product or get a refund is a myth. Another counterintuitive behavioral economics finding shows that people prefer to have the option to refund their purchases but feel more satisfied if the shopping decision is irreversible.
This is due to a phenomenon called post-purchase rationalization, which is also called choice-supportive bias. Research finds that after making a final decision, we try to justify it. We focus on the positives and brush off the negative aspects. After all, if you’re a smart person, you would not make a bad purchase, right? However, if the choice can be reversed, this post-purchase rationalization doesn’t turn on, and we’ll keep thinking about whether it was the right choice.
In-Person vs. Online Shopping in Buying Choices
Online shopping in many ways makes for an unhappier shopping experience. Let’s start with choices. According to research, most consumers follow the same online decision-making process, which divides into two stages: first, lightly screening a large set of products to come up with a smaller subset of potential options; second, performing an in-depth screening of the items in that subset.
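The two-stage process described above amounts to a filter-then-rank routine: a cheap first pass narrows the catalog, and a costlier comparison runs only on the survivors. A minimal sketch; the products, price cap, and rating threshold are invented for illustration:

```python
# Invented catalog data; scores stand in for whatever attributes a
# shopper might weigh (price, reviews, features).
products = [
    {"name": "A", "price": 45, "rating": 4.1},
    {"name": "B", "price": 80, "rating": 4.8},
    {"name": "C", "price": 30, "rating": 3.2},
    {"name": "D", "price": 55, "rating": 4.6},
]

# Stage 1: light screening on coarse criteria (price cap, minimum rating).
shortlist = [p for p in products if p["price"] <= 60 and p["rating"] >= 4.0]

# Stage 2: in-depth comparison of the shortlist only.
best = max(shortlist, key=lambda p: p["rating"])

print([p["name"] for p in shortlist])  # ['A', 'D']
print(best["name"])                    # D
```

The wider the catalog, the more items survive stage 1, which is exactly how a larger online selection feeds the choice-paralysis effect described earlier.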
The much wider selection of products online, compared to a brick-and-mortar store, gives online shoppers the opportunity to examine a greater number of potential options. We know we tend to like more options, believing (wrongly) that the more options we examine, the happier we will feel about our final choice. As a result, we make ourselves less happy by examining even more products online, without even seeing the damage we’re doing to our happiness.
Another counterintuitive problem: it’s easier to return items you purchased online than in a store. For a store, you have to drive back, wait in line, explain what went wrong, and then head back home. By contrast, most large online retailers will ask you to print a return label from their website and ship the item back to them; oftentimes they will also cover the shipping fee. Because this process is easier and takes much less time, many shoppers tend to see their online shopping decisions as tentative, making themselves unhappy.
Another challenging aspect of online shopping concerns data privacy and security. Shoppers feel unhappy about the extensive tracking of their data online. Smart consumers know about and feel concerned about the risks involved in online shopping because of how online retailers store and sell their information.
How Can Your Buying Choices Promote Happiness?
Digging into research on the factors that made my shopping an unhappy experience years ago helped me improve my buying decisions. When choosing what to buy, the number one technique is satisficing as opposed to maximizing. This is backed up by extensive research involving both in-person and online shopping.
Maximizing behavior refers to finding the perfect option when shopping. Maximizers exhaust all available options to make sure that they get the best deal in terms of performance, price, and so on. They have high expectations, and they anticipate that the product will fulfill this promise.
It’s the opposite for satisficers. They set certain minimal criteria that need to be met, then search for the first available product that meets those criteria. They look for products that are “good enough” and can get the job done, even without the bells and whistles or savings they might have found in an extended search.
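The contrast between the two strategies can be sketched in a few lines; the option list and scores below are invented for illustration:

```python
# Toy comparison of maximizing vs. satisficing; higher score = better deal.

def maximize(options):
    """Examine every option and take the single best one."""
    return max(options, key=lambda o: o["score"])

def satisfice(options, threshold):
    """Take the first option that is 'good enough'; stop searching there."""
    for o in options:
        if o["score"] >= threshold:
            return o
    return None  # nothing met the bar

options = [
    {"name": "budget", "score": 6},
    {"name": "mid", "score": 8},
    {"name": "premium", "score": 9},
]

print(maximize(options)["name"])      # premium (had to scan all 3)
print(satisfice(options, 7)["name"])  # mid (stopped after 2)
```

The satisficer accepts a slightly lower score but pays far less search cost, which is exactly the trade-off the research below evaluates.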
Research shows that maximizing behavior results in less happiness, less satisfaction, and more regret than satisficing. This finding applies especially in societies that highly value individual choice, such as Western Europe and the United States. In societies that place less focus on individual choice, such as China, maximizing has only a slight correlation with unhappiness, yet it still contributes to it.
To be happier, satisfice and limit your choices! Make a shortlist that compares a reasonable number of options and doesn’t include every product available. There’s no such thing as the perfect deal. Buying something that gets the job done, without excessive searching, is going to make you happier in the long run.
When you’re shopping in person, avoid Tom’s problems by skipping big supermarkets with a gazillion options for every product. Instead, go to grocery stores with a small selection of acceptable products, whether Aldi for cheaper prices or Trader Joe’s for higher quality. You don’t need 40 types of butter, do you? Just 4 will do. If you really need to go to the supermarket, save yourself the hassle of choosing from so many varieties by going for the store brand every time.
When shopping in-person and especially online, it helps to get objective information in advance to limit your options. Make sure to use credible product reviews and media sources for these.
You will also probably feel happier about a purchase by ignoring free return or refund offers, unless the product is defective. Treat each shopping decision as final and irreversible, and get post-purchase rationalization working for you. Combine this with satisficing to get great results, because when you focus on “good enough”, your brain automatically highlights the positives, downplays the negatives, and lowers your expectations.
Key Takeaway
More buying choices lead to less happiness. To make better shopping decisions, satisfice and limit your options. There is no such thing as the perfect deal, so look for products that are good enough.
Questions to Consider (please share your answers below)
When was the last time you were dissatisfied with a purchase despite spending hours comparing products and reading online reviews?
Is there anything in the article that will help you become a satisficer?
Which next steps will you take based on reading this article?
Adapted version of an article originally published in Top10.com
Image credit: Pixabay/StockSnap
Bio: Dr. Gleb Tsipursky is an internationally-recognized thought leader on a mission to protect leaders from dangerous judgment errors known as cognitive biases by developing the most effective decision-making strategies. A best-selling author, he is best known for Never Go With Your Gut: How Pioneering Leaders Make the Best Decisions and Avoid Business Disasters (Career Press, 2019), The Blindspots Between Us: How to Overcome Unconscious Cognitive Bias and Build Better Relationships (New Harbinger, 2020), and Resilience: Adapt and Plan for the New Abnormal of the COVID-19 Coronavirus Pandemic (Changemakers Books, 2020). He has over 550 articles and 450 interviews in Inc. Magazine, Entrepreneur, CBS News, Time, Business Insider, Government Executive, The Chronicle of Philanthropy, Fast Company, and elsewhere. His expertise comes from over 20 years of consulting, coaching, and speaking and training as the CEO of Disaster Avoidance Experts, and over 15 years in academia as a behavioral economist and cognitive neuroscientist. Contact him at Gleb[at]DisasterAvoidanceExperts[dot]com, Twitter @gleb_tsipursky, Instagram @dr_gleb_tsipursky, LinkedIn, and register for his free Wise Decision Maker Course.
|
https://medium.com/datadriveninvestor/why-do-more-buying-choices-cause-unhappiness-ede92ffb262f
|
['Dr. Gleb Tsipursky']
|
2020-11-03 17:22:41.862000+00:00
|
['Behavioral Economics', 'Psychology', 'Shopping', 'Cognitive Bias']
|
|
5,172 |
New Tools for Funders: Supporting DEI in Journalism
|
By Angelica Das, Democracy Fund, and Katie Donnelly and Michelle Polyak, Dot Connector Studio
As part of Democracy Fund’s efforts to address diversity, equity, and inclusion (DEI) in journalism, Dot Connector Studio has developed two tools — the Journalism DEI Tracker and the Journalism DEI Wheel — to help funders and journalists understand the complete landscape of the field, including resources and strategies for advancing DEI within journalism.
Our recent report, Advancing Diversity, Equity, and Inclusion in Journalism: What Funders Can Do, revealed that DEI within journalism is an under-funded area, and recommended that funders share more resources on this topic across a diverse pool of grantees. These two tools are designed to help funders do just that. The Journalism DEI Tracker catalogs information and resources on DEI in journalism, and the Journalism DEI Wheel allows funders and stakeholders to focus on particular solutions for advancing DEI within journalism by demonstrating the range of strategies and focus areas to consider.
To put it simply, the Journalism DEI Tracker tracks the who and the what of the field; the Journalism DEI Wheel captures the how.
1. The Journalism DEI Tracker
The Journalism DEI Tracker is a regularly-updated online database that identifies organizations, news outlets and projects, and educational institutions working to support DEI in journalism across the country. It also collects resources related to diversity, equity, and inclusion in journalism. Foundations can use the Journalism DEI Tracker as a first-step guide for identifying prospective grantees, as well as to find useful resources to share with current grantees. Journalism organizations and other stakeholders can use it to find opportunities for professional development, recruitment, collaboration, and resources to improve their coverage.
The Journalism DEI Tracker includes:
Professional organizations that support women journalists and journalists of color
News outlets and projects led by and serving women journalists and journalists of color
Professional development and training opportunities for women journalists and journalists of color (grants, scholarships, fellowships, and leadership training)
Academic institutions with journalism and communications programs to include in recruitment efforts to ensure a more diverse pipeline (Historically Black Colleges and Universities, Hispanic Serving Institutions, and Tribal Colleges)
Resources for journalism organizations to promote respectful and inclusive coverage (industry reports, diversity style guides, curricula, and toolkits)
2. The Journalism DEI Wheel
Designed to be complementary to the Journalism DEI Tracker, the Journalism DEI Wheel is meant to help funders in particular inform grantmaking by seeing the bigger picture on a higher level, with useful examples and resources for further illumination. Funders can explore the spokes of the Journalism DEI Wheel to see how DEI in journalism is currently being addressed across key areas: education and training; organizational culture; news coverage; engagement; distribution; innovation; evaluation; the larger journalism industry; and funding.
Each area is divided into smaller points of intervention. For example, if you click on “Education/Training,” you will see opportunities to advance DEI in journalism through high school programs, college programs, scholarships, internships, fellowships, mid-career programs, and executive training. Click on any one of these to learn more and find specific examples, including lists of relevant initiatives on the Journalism DEI Tracker.
The Journalism DEI Wheel demonstrates that there are many areas for addressing DEI in journalism. A funder may be focused on one aspect — say, improving news coverage — but not considering other aspects that may be related, such as improving newsroom culture. Of course, no single funder can — or should! — address every possible point of intervention, but viewing the range of possibilities can help illuminate gaps in current portfolios and identify new opportunities.
Not all areas are equally resourced. For example, there is a dearth of publicly-available resources available for journalism organizations when it comes to DEI in hiring, leadership, and general organizational culture. This is particularly disconcerting when we know that there are well-documented leadership gaps in the broader nonprofit field for people of color, women, and LGBTQ individuals. There is a clear need for leaders of DEI-focused journalism organizations to have up-to-date information on not just legal requirements, but also best practices in hiring, evaluation, and promotion. And, as our recent report shows, there is a clear need for funders to support such efforts.
We hope you will use these tools to inform your work, spark conversations among colleagues, and continue to promote this critically important work. We welcome your feedback: let us know how the tools are working for you, and how we can continue to improve them.
Email us at [email protected] with any additions, corrections, or suggestions for improvement.
|
https://medium.com/the-engaged-journalism-lab/new-tools-for-media-funders-supporting-dei-in-journalism-47f3e9e4a202
|
['Angelica Das']
|
2019-10-24 15:39:16.220000+00:00
|
['Diversity', 'Tools', 'Equity', 'Journalism', 'Inclusion']
|
Title New Tools Funders Supporting DEI JournalismContent Angelica Das Democracy Fund Katie Donnelly Michelle Polyak Dot Connector Studio part Democracy Fund’s effort address diversity equity inclusion DEI journalism Dot Connector Studio developed two tool — Journalism DEI Tracker Journalism DEI Wheel — help funders journalist understand complete landscape field including resource strategy advancing DEI within journalism recent report Advancing Diversity Equity Inclusion Journalism Funders revealed DEI within journalism underfunded area recommended funders share resource topic across diverse pool grantee two tool designed help funders Journalism DEI Tracker catalog information resource DEI journalism Journalism DEI Wheel allows funders stakeholder focus particular solution advancing DEI within journalism demonstrating range strategy focus area consider put simply Journalism DEI Tracker track field Journalism DEI Wheel capture 1 Journalism DEI Tracker Journalism DEI Tracker regularlyupdated online database identifies organization news outlet project educational institution working support DEI journalism across country also collect resource related diversity equity inclusion journalism Foundations use Journalism DEI Tracker firststep guide identifying prospective grantee well find useful resource share current grantee Journalism organization stakeholder use find opportunity professional development recruitment collaboration resource improve coverage Journalism DEI Tracker includes Professional organization support woman journalist journalist color News outlet project led serving woman journalist journalist color Professional development training opportunity woman journalist journalist color grant scholarship fellowship leadership training grant scholarship fellowship leadership training Academic institution journalism communication program include recruitment effort ensure diverse pipeline Historically Black Colleges Universities Hispanic Serving Institutions Tribal 
Colleges Historically Black Colleges Universities Hispanic Serving Institutions Tribal Colleges Resources journalism organization promote respectful inclusive coverage industry report diversity style guide curriculum toolkits 2 Journalism DEI Wheel Designed complementary Journalism DEI Tracker Journalism DEI Wheel meant help funders particular inform grantmaking seeing bigger picture higher level useful example resource illumination Funders explore spoke Journalism DEI Wheel see DEI journalism currently addressed across key area education training organizational culture news coverage engagement distribution innovation evaluation larger journalism industry funding area divided smaller point intervention example click “EducationTraining” see opportunity advance DEI journalism high school program college program scholarship internship fellowship midcareer program executive training Click one learn find specific example including list relevant initiative Journalism DEI Tracker Journalism DEI Wheel demonstrates many area addressing DEI journalism funder may focused one aspect — say improving news coverage — considering aspect may related improving newsroom culture course single funder — — address every possible point intervention viewing range possibility help illuminate gap current portfolio identify new opportunity area equally resourced example dearth publiclyavailable resource available journalism organization come DEI hiring leadership general organizational culture particularly disconcerting know welldocumented leadership gap broader nonprofit field people color woman LGBTQ individual clear need leader DEIfocused journalism organization uptodate information legal requirement also best practice hiring evaluation promotion recent report show clear need funders support effort hope use tool inform work spark conversation among colleague continue promote critically important work welcome feedback let u know tool working continue improve Email u EJlabdemocracyfundorg 
addition correction suggestion improvementTags Diversity Tools Equity Journalism Inclusion
|
5,173 |
Review: ParseNet — Looking Wider to See Better (Semantic Segmentation)
|
1. ParseNet Module
ParseNet Module
ParseNet is actually quite simple, as shown in the figure above.
Normalization Using l2 Norm for each channel
At the lower path, at a certain conv layer, normalization using the L2 norm is performed for each channel.
At the upper path, at that same conv layer, we perform global average pooling of the feature maps and normalize the result using the L2 norm. Unpooling simply replicates the values of the globally average-pooled vector to match the spatial size of the lower path so that the two paths can be concatenated.
Features are in different scale at different layers
The reason for the L2 norm is that the earlier layers usually have larger values than the later layers.
The above example shows that the features at different layers have different scales of values. After normalization, all features have the same value range, and they are all concatenated together.
And a learnable scaling factor γ for each channel is also introduced after normalization:
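Putting the two paths together, here is a minimal NumPy sketch of the module as summarized above. The function names, the per-channel normalization granularity, and the use of a separate learnable scale per path are assumptions based on this summary, not the paper's reference implementation:

```python
import numpy as np

def channel_l2_normalize(feat, eps=1e-12):
    """L2-normalize each channel of a (C, H, W) feature map over its spatial positions."""
    norms = np.sqrt((feat ** 2).sum(axis=(1, 2), keepdims=True)) + eps
    return feat / norms

def parsenet_module(feat, gamma_local, gamma_global):
    """Sketch of the ParseNet module: (C, H, W) in -> (2C, H, W) out."""
    C, H, W = feat.shape
    # Lower path: per-channel L2 normalization, then a learnable per-channel scale.
    local = channel_l2_normalize(feat) * gamma_local.reshape(C, 1, 1)
    # Upper path: global average pooling gives a (C,) context vector, L2-normalized.
    pooled = feat.mean(axis=(1, 2))
    pooled = pooled / (np.linalg.norm(pooled) + 1e-12)
    # "Unpooling" replicates the pooled vector to H x W, then scales it.
    global_ctx = np.broadcast_to(pooled.reshape(C, 1, 1), (C, H, W)) * gamma_global.reshape(C, 1, 1)
    # Concatenate the two paths along the channel axis.
    return np.concatenate([local, global_ctx], axis=0)
```

For a (C, H, W) input, the first C output channels carry the normalized local features and the last C channels carry the replicated global context.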
3. Results
3.1. SiftFlow
SiftFlow Dataset
By adding and normalizing pool6 + fc7 + conv5 + conv4 using ParseNet module to FCN-32s, 40.4% mean IOU is obtained which is better than FCN-16s.
3.2. PASCAL Context
PASCAL Context Dataset
By adding and normalizing pool6 + fc7 + conv5 + conv4 + conv3 using ParseNet module to FCN-32s, 40.4% IOU is obtained which is better than FCN-8s.
We can also see that, without normalization, it does not work well for the ParseNet module.
3.3. PASCAL VOC 2012
|
https://medium.com/datadriveninvestor/review-parsenet-looking-wider-to-see-better-semantic-segmentation-aa6b6a380990
|
['Sik-Ho Tsang']
|
2019-03-20 15:57:51.438000+00:00
|
['Convolutional Network', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
Title Review ParseNet — Looking Wider See Better Semantic SegmentationContent 1 ParseNet Module ParseNet Module Actually ParseNet simple figure Normalization Using l2 Norm channel lower path certain conv layer Normalization using l2 norm performed channel upper path certain conv layer perform global average pooling feature map conv layer perform normalization using l2 norm Unpooling replicating value global averaged pooled vector size lower path concatenated Features different scale different layer reason L2 norm earlier layer usually larger value later layer example showing feature different layer different scale value normalization feature value range concatenated together learnable scaling factor γ channel also introduced normalization 3 Results 31 SiftFlow SiftFlow Dataset adding normalizing pool6 fc7 conv5 conv4 using ParseNet module FCN32s 404 mean IOU obtained better FCN16s 32 PASCAL Context PASCAL Context Dataset adding normalizing pool6 fc7 conv5 conv4 conv3 using ParseNet module FCN32s 404 IOU obtained better FCN8s also see without normalization work well ParseNet module 33 PASCAL VOC 2012Tags Convolutional Network Deep Learning Artificial Intelligence Data Science Machine Learning
|
5,174 |
How to Get Back in the Habit of Reading Books After Graduating from University
|
How to Get Back in the Habit of Reading Books After Graduating from University
Since leaving university, I’ve read 883 books
Image by StockSnap from Pixabay
School may have killed your passion for the printed word, but it can be brought back to life.
You have not been condemned to some kind of a bookless desert-future where you’ll never read anything longer than 140 characters ever again.
There’s a spark left where your love of books used to be, and I’m going to help you find it, and re-ignite it into a Guy Montag-sized flame.
Since leaving university, I’ve read 883 books (I counted lol), and my, shall we say, “Passion for Proust,” my “Hankering for Hemingway,” my “Desire for DeLillo” (sorry), remains undiminished.
Now, if you haven’t even looked at a physical book since you last closed your final exam booklet; if you used to love reading, but you can’t get excited about books any more because of all the required reading you’ve had to do for school; or if it’s been so long since you’ve had your life changed by a book that you weren’t forced to read that you’ve resigned yourself to your bookless fate — then this article’s for you.
If you’re a really hard case, we may have to resort to some extreme measures, but as Kafka said, reading great books is like taking an axe to the frozen sea inside us. So what we’re going to do is thaw you out a little bit, and welcome you back into the warm, inviting world of books and literature.
But first, some really good news:
You’re way ahead of most people already.
See, a lot of would-be readers struggle with some pretty nasty confidence issues resulting from a lifetime of non-reading. Books have never really been a big part of their lives, and thus the so-called Great Books take on this sort of mythical quality that can be quite intimidating.
I’ve even felt it before, and I’m no stranger to the Classics. I mean, I swear I’ve never even heard of James Patterson (I’m kidding; I’m not actually a snob, I just play one on the internet).
But people like you and I probably got into university because we loved to read.
We were pretty damn good at it, ahead of most of our classmates probably, and I bet that, in the beginning, no one had to force you to pick up a book instead of a remote.
You were (are) a reader. Reading was just what you did.
Half the battle is simply realizing that your reading instincts have never really left you in the first place. They may have been dormant for a little while, but they’re still there. Books can become a part of your life again. Here’s how.
|
https://medium.com/the-innovation/how-to-get-back-in-the-habit-of-reading-books-after-graduation-125b94611789
|
['Matt Karamazov']
|
2020-12-22 16:32:46.471000+00:00
|
['Life Lessons', 'Self Improvement', 'Reading', 'Education', 'Books']
|
Title Get Back Habit Reading Books Graduating UniversityContent Get Back Habit Reading Books Graduating University Since leaving university I’ve read 883 book Image href”httpspixabaycomusersstocksnap894430utmsourcelinkattributionutmmediumreferralutmcampaignimageutmcontent2593394StockSnapa href”httpspixabaycomutmsourcelinkattributionutmmediumreferralutmcampaignimageutmcontent2593394Pixabaya School may killed passion printed word brought back life condemned kind bookless desertfuture you’ll never read anything longer 140 character ever There’s spark left love book used I’m going help find reignite Guy Montagsized flame Since leaving university I’ve read 883 book counted lol shall say “Passion Proust” “Hankering Hemingway” “Desire DeLillo” sorry remains undiminished haven’t even looked physical book since last closed final exam booklet used love reading can’t get excited book required reading you’ve school it’s long since you’ve life changed book weren’t forced read you’ve resigned bookless fate — article’s you’re really hard case may resort extreme measure Kafka said reading great book like taking axe frozen sea inside u we’re going thaw little bit welcome back warm inviting world book literature first really good news You’re way ahead people already See lot wouldbe reader struggle pretty nasty confidence issue resulting lifetime nonreading Books never really big part life thus socalled Great Books take sort mythical quality quite intimidating I’ve even felt I’m stranger Classics mean swear I’ve never even heard James Patterson I’m kidding I’m actually snob play one internet people like probably got university loved read pretty damn good ahead classmate probably bet beginning one force pick book instead remote reader Reading Half battle simply realizing reading instinct never really left first place may dormant little they’re still Books become part life Here’s howTags Life Lessons Self Improvement Reading Education Books
|
5,175 |
Huellas del Coronavirus
|
in In Fitness And In Health
|
https://medium.com/conexo-vc-es/huellas-del-coronavirus-3b4bd8ee39a1
|
['Isaac De La Peña']
|
2020-09-09 01:45:12.954000+00:00
|
['Covid 19', 'España', 'Venture Capital', 'Portugal', 'Coronavirus']
|
Title Huellas del CoronavirusContent Fitness HealthTags Covid 19 España Venture Capital Portugal Coronavirus
|
5,176 |
Why What React? & Its basics…
|
React from Facebook
HELLO FRIENDS!!!
In this article, we are going to learn and answer the below-mentioned questions…
1. What is React?
2. Why React?
After that, we will be taking a look at the basics of React.
What is React
React.js is an open-source JavaScript library that is used for building user interfaces specifically for single-page applications.
React allows us to create reusable UI components. React was first created by Jordan Walke, a software engineer working at Facebook. It was first deployed on Facebook's News Feed in 2011 and on Instagram.com in 2012.
React allows developers to create large web applications that can change data, without reloading the page.
This corresponds to the view in the MVC template. It can be used with a combination of other JavaScript libraries or frameworks, such as Angular JS in MVC.
React JS is also called simply React or React.js.
In simple words,
React is a library to build UIs which are fast, scalable and simple.
It uses different components to build a single full-fledged app.
This whole process can be understood as a puzzle: to solve it, we need to put the correct component in the correct place, and once all the components are placed correctly, the app is complete.
But the catch is that the components are to be built by the developers.
|
https://medium.com/quick-code/why-what-react-its-basics-5abf9a6caa2f
|
['Bhavishya Negi']
|
2020-05-03 00:53:53.012000+00:00
|
['React', 'Reactjs', 'Web Development', 'Programming', 'India']
|
Title React basics…Content React Facebook HELLO FRIENDS article going learn answer belowmentioned questions… 1 React 2 React taking look basic React React Reactjs opensource JavaScript library used building user interface specifically singlepage application React allows u create reusable UI component React first created Jordan Walke software engineer working Facebook React first deployed Facebook’s newsfeed 2011 Instagramcom 2012 React allows developer create large web application change data without reloading page corresponds view MVC template used combination JavaScript library framework Angular JS MVC React JS also called simply React Reactjs simple word React library build UIs fast scalable simple us different component build single fullfledged app whole process easily understood puzzle solved solved need put correct component correct place component placed right place app get completed catch component built developersTags React Reactjs Web Development Programming India
|
5,177 |
Creative Destruction or Just Destruction? An Analysis of Fortune 100 Companies in 1955 and 2020
|
The small number of companies on the Fortune 500 or 100 for both 1955 and 2020 is not due to creative destruction, and it does not symbolize the strength of the U.S. economy, as some claim. For instance, the American Enterprise Institute’s (AEI) analysis found that only 52 companies on the Fortune 500 in 1955 were still on the list in 2020, a conclusion I do not dispute. It claims that “The fact that nearly nine of every 10 Fortune 500 companies in 1955 are gone, merged, reorganized, or contracted demonstrates that there’s been a lot of market disruption, churning, and Schumpeterian creative destruction over the last six decades.” It continues: “The constant turnover in the Fortune 500 is a positive sign of the dynamism and innovation that characterizes a vibrant consumer-oriented market economy, and that dynamic turnover is speeding up in today’s hyper-competitive global economy[1].”
But is creative destruction the reason for the small number of companies remaining on the Fortune 500, creative destruction that is led by young vibrant American firms introducing highly productive new technologies? I admit that these are my words and not the words of the AEI. Nevertheless, behind the AEI’s claim that the small number of remaining companies is due to “market disruption, churning, and Schumpeterian creative destruction” is the notion that American companies are doing the disrupting and thus Americans are benefiting from this creative destruction through higher productivity, incomes and standard of living.
We know that the last part of that claim is not true. Productivity data demonstrate a clear and persistent growth slowdown over the last 80 years, as many readers, particularly followers of Robert Gordon, will know[2]. This trend has continued: a judging panel noted that “the 2010s were the worst decade for productivity growth since the early 19th century”[3], despite the positive impact of globalization on productivity. The continued slowdown suggests there was also a slowdown in technological and innovative output in recent decades, an issue we can address using the change in companies in the Fortune 100 between 1955 and 2020.
I categorized each company in the Fortune 100 into sectors, and in some cases industries, in order to see how the list of companies have changed between 1955 and 2020[4]. The reasons for the rise and fall of these industries are then discussed, drawing from many historical sources. From this analysis, this article concludes that the main reason for the small number of remaining companies is due to the evolution of American economy away from manufacturing to services, an evolution that is more due to foreign competition and financial engineering than to creative destruction by small American companies.
As shown in Table 1, the number of manufacturing and oil companies fell from 74 and 14 to 20 and 9 respectively, while the number of financial/insurance, information and communication technology (ICT), health care, and retail companies rose from 3 in total to 22, 17, 12, and 11 respectively (62 in total). The ICT companies represent the biggest disruption by new American startups, innovation, venture capital, and IPOs[5], while the manufacturing companies declined mostly because of foreign competition; they were replaced by old banks and insurance companies, many of which were founded in the 19th century. In fact, the average age of companies increased from 63 years in 1955 to 100 years in 2020, not a sign of newly founded startups disrupting old-line companies. M&A was also a big driver of change, with the rise and fall of conglomerates, hostile takeovers, and other financial engineering reducing the number of oil and manufacturing companies.
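The categorize-and-count comparison described above reduces to a simple per-sector tally. Here is an illustrative Python sketch using the counts reported from Table 1; the "other" filler buckets, and the assignment of 1955's three service-sector firms to "finance", are my assumptions, since the full company lists are not reproduced here:

```python
from collections import Counter

# Sector labels compressed to counts (from Table 1 in the text).
# The "other" filler and the 1955 "finance" assignment are illustrative assumptions.
sectors_1955 = ["manufacturing"] * 74 + ["oil"] * 14 + ["finance"] * 3 + ["other"] * 9
sectors_2020 = (["manufacturing"] * 20 + ["oil"] * 9 + ["finance"] * 22 +
                ["ict"] * 17 + ["healthcare"] * 12 + ["retail"] * 11 + ["other"] * 9)

def sector_shift(old, new):
    """Return the per-sector change in company counts between two lists."""
    old_c, new_c = Counter(old), Counter(new)
    # Counter returns 0 for missing keys, so sectors absent in one year still tally.
    return {s: new_c[s] - old_c[s] for s in old_c.keys() | new_c.keys()}

shift = sector_shift(sectors_1955, sectors_2020)
# e.g. shift["manufacturing"] == -54, shift["ict"] == 17
```

The same tally could be rerun with any alternative sector scheme simply by relabeling the lists.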
Delving into these trends in more detail, the number of oil, tire, auto, steel, food, and chemical companies fell due to foreign competition, consolidation, and some technological change. The number of oil companies dropped through consolidation, not technological change, despite the rise of fracking. Exxon and Mobil merged, as did Chevron and Texaco, with the latter two firms acquiring Union Oil, Unocal, and Pure Oil along the way. Sinclair Oil and ARCO were acquired by British Petroleum, and BP America was no longer considered an entity for the Fortune 500 in 2020. There were no fracking companies in the Fortune 100 in 2020.
The number of auto and tire companies fell from 9 to 2 from both consolidation and foreign competition, but again not from technological change. Electric vehicles had less than 2% of the market in 2019 and Tesla is still far from being a Fortune 500 much less a Fortune 100 company. Instead, component and tire suppliers were either acquired or driven out of business by Japanese and other competition. Firestone was acquired by Bridgestone, Uniroyal and Goodrich by Michelin, and Goodyear still exists at a much smaller scale. Although there was some innovation by Japanese companies in terms of manufacturing techniques, there were no large product innovations and overall, it was Japanese companies doing the innovation and not American ones.
Steel was also decimated by foreign competition, initially from Japan and later from China, causing the number of steel companies in the Fortune 100 to fall from six to zero. Bethlehem and National Steel went bankrupt, and the others (Armco, Youngstown Sheet & Tube) still exist as much smaller companies; Republic Steel and Jones & Laughlin merged to form LTV. Unlike the auto, rubber, and oil industries, however, the basic oxygen furnace and continuous casting were examples of creative destruction, which resulted in big productivity advantages for the Japanese and European producers.
Other metals such as aluminum and lead were also impacted by foreign competition, particularly from China, and by creative destruction by plastics. For instance, plastic bottles have replaced a significant fraction of aluminum cans and glass bottles (and have also reduced demand for steel in many assembled products). The result is that the number of other metal producers dropped from four to zero, with Alcoa and Reynolds Aluminum leaving the list. As an aside, several glass bottle and canning companies also fell off the list between 1955 and 2020. Despite the growth in plastic usage, however, the number of chemical companies fell from eight to one. Dow and DuPont merged, acquiring Union Carbide along the way. Monsanto was acquired by Bayer and thus is no longer an American company.
The number of food companies also dramatically decreased, falling from 20 to 4, probably because more meals are eaten outside or delivered to homes. Mergers affected General Foods, Kraft, Standard Brands, and Pillsbury, yet one resulting entity, Kraft Heinz, is not in the Fortune 100. The two food companies in the Fortune 100, Tyson Foods and Archer Daniels Midland, might be considered disruptors because of Tyson's innovations in chicken processing using assembly lines and ADM's emphasis on intermediate food products.
Other new companies include retail, healthcare, finance/insurance, drug, and information and communication technology firms. The last two are certainly cases of new technologies disrupting old ones, and retail might also be considered an example of disruption. The number of drug companies rose from zero to four, and the number of ICT companies rose from 6 to 17. Their stories are told elsewhere, so there is no need to tell them here. Nevertheless, the increase from 6 to 17 does not tell the whole story, because the 6 on the 1955 list were analog telephone (AT&T), radio/TV (RCA, CBS), and early computer (IBM, Sperry) companies, nothing like the semiconductor, personal computer, software, and Internet companies that were to follow.
Retail companies might also be considered examples of technology disruption because they used information technology to increase product variety and manage increasingly complicated supply chains. Companies such as Walmart, Costco, Walgreens, Kroger, Home Depot, Target, Lowe’s, Albertsons, and Best Buy sell us a remarkable number of different products at low prices, courtesy of computers, software, and other devices. Retailers such as Albertsons might also be considered food disruptors because they have brought a greater variety of food to consumers.
The healthcare, finance, and insurance companies have been the biggest replacements for the manufacturing companies that dominated the 1955 list. They represent more than half the companies on the 2020 list, but most are old companies. Most of the finance and insurance companies can trace their roots back more than 100 years, with some going back to before America’s Civil War. These companies have slowly grown over time, benefiting from the deregulation that allowed them to cross state lines and thus become huge national banks, investment companies, and insurance providers. Just as healthcare now represents about 18% of GNP[6], and finance and insurance about 8%[7], companies from these industries represent 34% of the Fortune 100.
In summary, the American Enterprise Institute and many others have misinterpreted the reasons why so few 1955 companies remain on the 2020 list. The evolution of the Fortune 100 is not a symbol of creative destruction; it merely reflects the evolution of the American economy from manufacturing to services, driven mostly by foreign competition, hostile takeovers, and the rise and fall of conglomerates. Although some of this was driven by innovation, particularly in the ICT and drug sectors, innovation played a small role in the decline in the number of steel, auto, tire, chemical, and oil companies on the list.
Does this tell us something about the future? It probably tells us that we can expect continued changes in the companies, but little change in products and processes, and thus few improvements in productivity. Analyses of startups lead to the same conclusions[8]. If we want a better future, we need to rethink how R&D and innovation are done.
[1] https://fee.org/articles/comparing-1955s-fortune-500-to-2019s-fortune-500/
[2] The Rise and Fall of American Growth, Robert Gordon, 2016, Princeton University Press
[3] https://www.ft.com/content/8d7ef9b2-24b4-11ea-9a4f-963f0ec7e134
[4] Here are the companies for 1955 (https://archive.fortune.com/magazines/fortune/fortune500_archive/full/1955/) and those for 2020 (https://fortune.com/fortune500/2020/search/). I added Ford to the 1955 list because for some reason Fortune failed to put it on the list.
[5] https://medium.com/@jeffreyleefunk/the-most-valuable-startups-founded-since-1975-none-have-been-founded-since-2004-8bc142b67051
[6] https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NationalHealthAccountsHistorical#:~:text=U.S.%20health%20care%20spending%20grew,spending%20accounted%20for%2017.7%20percent.
[7] https://fred.stlouisfed.org/series/VAPGDPFI
[8] https://medium.com/@jeffreyleefunk/what-will-happen-to-todays-privately-held-unicorns-valued-at-1-4-trillion-13f507797487
https://medium.com/@jeffreyleefunk/why-are-todays-startup-unicorns-doing-worse-than-those-of-the-past-1c8ece718ab0 https://medium.com/@jeffreyleefunk/are-there-any-industries-in-which-ex-unicorns-are-profitable-747eca652170
https://medium.com/@jeffreyleefunk/how-successful-are-todays-startup-unicorns-893043f32d24
|
https://medium.com/swlh/creative-destruction-or-just-destruction-an-analysis-of-fortune-100-companies-in-1955-and-2020-91a36f60287d
|
['Jeffrey Lee Funk']
|
2020-10-28 05:58:16.849000+00:00
|
['Disruption', 'Innovation', 'Startup', 'Venture Capital', 'Technology']
|
5,178 |
The Top Ten Most Profitable Fitness Apps Markets
|
Phones are no longer simply a means of communication, like they were five years ago. Now, they help us pay for purchases. They can be used as a ticket on public transport or used as a door key, car alarm, video recorder, or portable PC. They even let us talk to someone named Siri.
Mobile apps have significantly changed our way of life, and they have also helped some to get rich.
In our second research project (after the mobile games market), we aimed to assess which markets are the most promising for developers. We took the market of mobile fitness apps as an example.
76% of revenues in the fitness app market are generated by the top 10 countries:
It’s interesting to compare the top mobile app markets with the sports activity rating (this rating is based on information from each country’s sporting events, which have their own “weight” according to special criteria: the scale of the event, its impact on the world of sport and society, participation of representatives of different countries and continents, etc.).
KEY FACTS ABOUT THE FITNESS APP MARKET
According to Newzoo, the revenue of the mobile app market reached $44.8bn in 2016 ($61.8bn according to App Annie) and is projected to grow to $80.6bn in 2020 ($139.1bn in 2021, according to App Annie). 81–82% of that revenue was gained by games.
Categories of non-gaming apps in order of decreasing revenue (Newzoo):
social networks (usually dating sites),
entertainment (videos),
music,
books,
education,
productivity,
photography,
medicine,
health and fitness.
According to Statista, the volume of the mobile fitness app market was $1.778bn in 2016, and it is expected to grow to $4.1bn in 2021. Thus, according to App Annie, fitness apps’ share of revenue was about 15% in 2016, and the share will be approximately 12% in 2021, which is not insignificant.
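Taken together, these figures imply that the fitness segment is growing faster than the app market overall. A quick back-of-the-envelope sketch of the implied compound annual growth rates (the dollar figures below are the article’s own Newzoo and Statista numbers, not independently verified):

```python
# Back-of-the-envelope check of the compound annual growth rates implied
# by the market figures quoted above. Figures are in $bn and come from
# this article (Newzoo and Statista); they are not independently verified.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate, expressed as a fraction."""
    return (end / start) ** (1 / years) - 1

total_apps = cagr(44.8, 80.6, 2020 - 2016)  # Newzoo: total mobile app market
fitness = cagr(1.778, 4.1, 2021 - 2016)     # Statista: fitness app market

print(f"Total app market implied CAGR:   {total_apps:.1%}")
print(f"Fitness app market implied CAGR: {fitness:.1%}")
```

By this rough arithmetic, the fitness market’s implied annual growth (roughly 18%) outpaces the overall app market’s (roughly 16%).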
The market of mobile fitness apps is evolving in large part due to the fact that more and more people seek a healthy lifestyle. As a result, new devices helping people stay in shape have appeared and the Internet of Things is growing. In 2015, according to the Globe-Go company, apps in the Health and Fitness category came in second by increased amount of time spent with them, giving way only to music apps.
KEY PLAYERS IN THE ONLINE-FITNESS MARKET
The main players in the online fitness market (according to Statista) are shown below. All of them are engaged in the production of sportswear, sports footwear, fitness bracelets, and other gadgets, which generate their main revenue. In addition, all of these companies have their own apps on Google Play and the App Store, which also produce additional revenue.
Despite the fact that the market for fitness bracelets, trackers, smart clothes and watches is larger in volume than the fitness apps market, the latter is growing faster and is projected to catch up with the physical devices market in 2021.
FOUR KEY MOBILE FITNESS APPS
We compared the data of four apps which regularly hold one of the top revenue positions in the Health and Fitness category in the App Store and Google Play.
1. Sweat: Kayla Itsines Fitness by The Bikini Body Training Company is owned by Australian fitness trainer and entrepreneur Kayla Itsines, number 51 in the list of the richest young entrepreneurs of Australia.
Despite the fact that Kayla didn’t launch the Android version of her app until April 2016, she managed to earn $14.56m in 2016 (more than 30% of the company’s $46m total revenue). According to Bloomberg, the Sweat app earned more in 2016 than any other app in the Health & Fitness category.
The app provides nutrition recommendations and a series of aerobic exercises in the Bikini Body Guide (BBG) to practice at home.
According to the Facebook Audience tool (data for June 2017), 99% of Kayla’s page followers in Facebook are women.
2. Calorie Counter & Diet Tracker by MyFitnessPal is the revenue leader in the mobile app stores among this company’s vast list of apps. The MyFitnessPal platform is owned by Under Armour, the American manufacturer of sportswear and footwear.
The company’s revenue, according to the annual report, amounted to $4.83bn in 2016, and 1.6% of this revenue was from the Connected Fitness Platform ($80.447m, an increase of 66% compared to the previous year), which includes all apps of the company. $8.8m came from the Calorie Counter & Diet Tracker app.
The app makes it easier to count the calorie content of dishes and make a plan to reduce calories for weight loss.
According to the Facebook Audience tool, 77% of followers on the app’s Facebook page are women.
3. Headspace: Guided Meditation and Mindfulness by the US company Headspace. The company was founded in 2010 in the UK by Rich Pierson and Andy Puddicombe, a former Buddhist monk and meditation expert.
The Android version of the app wasn’t launched until 2016, but the company managed to earn $5.78m by the end of the year.
The app contains auxiliary materials for home meditation, nutrition recommendations and materials for dealing with depression and anxiety (especially during pregnancy).
According to the Facebook Audience tool, 70% of followers on the company’s Facebook page are women.
4. Runtastic Results: Workout & Strength Training by the Austrian company Runtastic (bought by Adidas in 2015 for $239m). The company has 20 different mobile apps, and Runtastic Results is the leader among them in terms of revenue. In 2015, the company earned €11m (according to the annual Adidas report). The revenue of the entire Adidas company was €19.291bn in 2016.
In 2016, the Runtastic Results Workout app made Adidas $3.6m.
The app contains a set of exercises for working out at home without special equipment and recommendations for nutrition.
According to the Facebook Audience tool, women constitute only 45% of the followers of the official Runtastic company page on Facebook.
LOCALIZATION AND REGIONAL SPECIFIC FEATURES
A few interesting facts about these countries in the context of mobile apps:
In China, a sleep tracker is one of the most profitable fitness apps.
Fitness apps for meditation are popular in the India market.
The successful launch of an Android app in China is only possible if you reach agreements with at least ten local app markets, or better still, twenty.
Localization into the languages of the 10 countries with the largest mobile app revenues will allow coverage of not only these countries, but also others (e.g., Bengali is the official language of the People’s Republic of Bangladesh). We have included Spanish in the list of languages: after English, it is the second most popular language in the USA, and it is included in the traditional EFIGS + CJK localization list for mobile platforms.
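As a rough illustration of such a coverage checklist, here is a sketch in Python. The locale codes and the choice of extra markets are illustrative assumptions, not Allcorrect’s actual localization list:

```python
# Hypothetical target-locale list sketching the coverage strategy described
# above: EFIGS + CJK plus a couple of other high-revenue markets. The codes
# and the extra markets are assumptions for illustration only.
TARGET_LOCALES = {
    "en": "English", "fr": "French", "it": "Italian",
    "de": "German", "es": "Spanish",                    # EFIGS
    "zh": "Chinese", "ja": "Japanese", "ko": "Korean",  # CJK
    "pt": "Portuguese", "ru": "Russian",                # other large markets
}

def missing_locales(supported):
    """Return the target locales an app does not yet ship with."""
    return sorted(set(TARGET_LOCALES) - set(supported))

print(missing_locales(["en", "es", "de"]))
```

Running the check against an app that ships only English, Spanish, and German immediately shows which high-revenue markets remain uncovered.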
Allcorrect is one of the leading localizers of mobile apps. We have localized more than 400 mobile apps and will be happy to help you with the localization of your mobile app. If you have any research-related questions or suggestions, please write to us at [email protected] — we will be happy to answer.
|
https://medium.com/software-and-games-localization/the-top-ten-most-profitable-fitness-apps-markets-448dbbdded6c
|
['Allcorrect Blog']
|
2017-07-11 12:41:08.536000+00:00
|
['Apps', 'Mobile Apps', 'Mobile App Development', 'Fitness', 'Mobile Marketing']
|
5,179 |
Epic App Update! Send crypto anywhere in the world in seconds
|
You have probably heard the news that Crypterium is now running the fastest crypto transactions in the world.
Crypto transfers in 1 second to people who don’t even have crypto wallets? It’s a reality now.
Send crypto to your mom, your friend, your ex-girlfriend from Iceland. You can even give someone crypto as a birthday present.
Download the Crypterium App from the App Store or Google Play!
About Crypterium
Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash. Shop around the world and pay with your coins and tokens at any NFC terminal, or via scanning the QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny.
Join our Telegram news channel or other social media to stay updated!
Website ๏ Telegram ๏ Facebook ๏ Twitter ๏ BitcoinTalk ๏ Reddit ๏ YouTube ๏ LinkedIn
|
https://medium.com/crypterium/send-crypto-anywhere-in-the-world-in-seconds-1cf9f04febbb
|
[]
|
2018-11-10 09:56:12.508000+00:00
|
['Cryptocurrency', 'Bitcoin', 'Finance', 'Technology', 'Mobile App Development']
|
5,180 |
3 Lessons on Customer Empathy: A Recap of Unbounce’s Call to Action Conference
|
1. Use SEO to Solve People’s Problems, and Google will Reward You.
Though marketers use Google daily, it’s easy to forget that people turn to search to find an answer in a time of need — not for sport. When we try to use SEO to force searchers into a funnel, we aren’t solving their problem, and Google can see their disappointment when this happens (thanks to machine learning!).
As Wil Reynolds, Founder of SEER Interactive, puts it, “When I do my job well, I’m solving problems that people are searching for.”
To illustrate the difference between forcing searchers into a funnel and solving their problems, I did a search for “compare SEO companies” and came up with some surprising results:
Notice the paid results at the top are all SEO companies — not comparisons of SEO companies. These ads don’t answer my question, and what’s worse, the first two companies aren’t even local results. “Searchberg.com” is based in New York, and “Zebratechies.com” is based in India — how is this relevant when I live in Vancouver?
Below the paid results are the organically ranked results, and thankfully, they actually provide an answer to my question. These results are a great indicator of the kind of content you need to provide searchers to rank organically for this search term, and this content is worth imitating.
To recap: If you want to rank organically, you have to solve people’s problems.
2. Focus on “Jobs to Be Done” Vs. Traditional Personas.
“Personas can’t tell us what was happening in a person’s life that led to a decision.”
— Claire Suellentrop, Founder, Love Your Customers
Customer personas are the standard tool used to understand your target market and what motivates them to buy. The only problem is, they often focus on shallow demographic details that paint a limited picture of a customer’s motivations for buying. I’ve created a sample customer persona based on me:
Sample Sam:
Career: Marketing Coordinator
Age: 20-Something
Marital Status: Single (no kids)
Location: Urban
Online Behaviours: Browses Reddit religiously. Prefers Instagram over other social media platforms. Relies on online ratings for online shopping and refers to them for in-store purchases.
Goals and Challenges: Become a better marketer. Balance work, friends, family, and career development.
While this might tell you a little bit about what my motivations are and what’s important to me, it can’t truly inform you about my buying habits.
Specifically, from the persona above, can you infer why I bought these shorts?
I’ll save you the effort: you can’t.
Instead of solely focusing on creating customer personas, identify a customer’s “job to be done” — the task they’re trying to accomplish by buying a particular product or service.
In my case, I’m going to Europe in a month, and in an effort to tan my legs, remain comfortable while walking/hiking all day, and possibly look not dumpy when out for dinner, these shorts satisfied my needs.
When you focus on what job a customer is trying to accomplish, you learn a lot more about what motivates anybody to buy your product. Claire Suellentrop, my newfound hero for imparting this wisdom, has conveniently outlined how to do this:
To further personalize this, “help me” can be replaced with any other relevant term:
In my case, my motivation for buying shorts looks like this:
“When I’m switching between hiking in the countryside and exploring the city, equip me so I can feel appropriately dressed no matter where my travels take me.”
Now doesn’t that paint a better picture?
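To make the template concrete, here is a small sketch. The function and parameter names are my own illustrative choices, inferred from the filled-in example above rather than taken from Suellentrop’s talk:

```python
# A minimal sketch of the "job to be done" statement template described
# above. The field breakdown (situation / request / outcome) is inferred
# from the filled-in shorts example, not an official template.
def job_statement(situation: str, request: str, outcome: str) -> str:
    """Render a jobs-to-be-done statement from its three parts."""
    return f"When I'm {situation}, {request} so I can {outcome}."

shorts_job = job_statement(
    situation="switching between hiking in the countryside and exploring the city",
    request="equip me",
    outcome="feel appropriately dressed no matter where my travels take me",
)
print(shorts_job)
```

Swapping in a different `request` verb ("help me", "teach me", "equip me") is exactly the personalization step the template calls for.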
3. Use Positioning to Provide the Context in Which Your Value is the Most Obvious to Customers.
Positioning is one of the most important aspects of marketing strategy, and, as April Dunford notes, “bad positioning can kill even a great product.” At this stage, it is imperative you understand your customers’ pain points inside out and show them exactly how your product is the right solution.
So, how is this done? Well, it can be simpler than you think:
Listen to what your customers say about how your product addresses their pains. Integrate this feedback into your value propositions, positioning statement, and content. Forever.
Following this method establishes your product in the context of a solution to a specific problem customers already know they have, making your product relevant to their lives.
So, what does this look like in practice? Speaker, Amy Harrison, Founder of Harrison Amy Copywriting, provides an example with Corsodyl, a mouthwash for gum disease. Albeit unconventional, Corsodyl does a perfect job of telling customers how it addresses their pains:
Customers immediately know what Corsodyl does. Furthermore, the language used is plain enough that it looks exactly like a searcher’s input into Google, which means customers will easily relate to the messaging. In this way, Corsodyl has provided the context in which the value of their product is immediately obvious to the customer.
|
https://medium.com/insights-from-the-incubator/3-lessons-on-customer-empathy-a-recap-of-unbounces-call-to-action-conference-d2dbef5d843e
|
['Samantha Grandinetti']
|
2017-08-30 20:47:19.504000+00:00
|
['Marketing', 'Digital Marketing', 'Digital Marketing Agency', 'Digital Marketing Tips', 'Customer Focus']
|
5,181 |
React Native at Airbnb: The Technology
|
This is the second in a series of blog posts in which we outline our experience with React Native and what is next for mobile at Airbnb.
React Native itself is a relatively new and fast-moving platform in the cross-section of Android, iOS, web, and cross-platform frameworks. After two years, we can safely say that React Native is revolutionary in many ways. It is a paradigm shift for mobile and we were able to reap the benefits of many of its goals. However, its benefits didn’t come without significant pain points.
What Worked Well
Cross-Platform
The primary benefit of React Native is the fact that code you write runs natively on Android and iOS. Most features that used React Native were able to achieve 95–100% shared code and 0.2% of files were platform-specific (*.android.js/*.ios.js).
Unified Design Language System (DLS)
We developed a cross-platform design language called DLS. We have Android, iOS, React Native, and web versions of every component. Having a unified design language was amenable to writing cross-platform features because it meant that designs, component names, and screens were consistent across platforms. However, we were still able to make platform-appropriate decisions where applicable. For example, we use the native Toolbar on Android and UINavigationBar on iOS and we chose to hide disclosure indicators on Android because they don’t adhere to the Android platform design guidelines.
We opted to rewrite components instead of wrapping native ones because it was more reliable to make platform-appropriate APIs individually for each platform and reduced the maintenance overhead for Android and iOS engineers who may not know how to properly test changes in React Native. However, it did cause fragmentation between the platforms in which native and React Native versions of the same component would get out of sync.
React
There is a reason that React is the most-loved web framework. It is simple yet powerful and scales well to large codebases. Some of the things we particularly like are:
Components: React Components enforce separation of concerns with well-defined props and state. This is a major contributor to React’s scalability.
Simplified Lifecycles: Android and, to a slightly lesser extent, iOS lifecycles are notoriously complex. Functional reactive React components fundamentally solve this problem and made learning React Native dramatically simpler than learning Android or iOS.
Declarative: The declarative nature of React helped keep our UI in sync with the underlying state.
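The declarative idea can be sketched in plain JavaScript: the UI is a pure function of state, so it cannot drift out of sync with it. (This is an illustration only; real React components return elements, not strings, and the names here are hypothetical.)

```javascript
// A toy "declarative UI": the view is a pure function of state.
function renderBadge(state) {
  // Same state in, same description out -- no imperative mutation to forget.
  return state.unreadCount > 0
    ? `Inbox (${state.unreadCount})`
    : "Inbox";
}

// "Updating the UI" is just calling the function with new state.
renderBadge({ unreadCount: 0 }); // "Inbox"
renderBadge({ unreadCount: 3 }); // "Inbox (3)"
```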
Iteration Speed
While developing in React Native, we were able to reliably use hot reloading to test our changes on Android and iOS in just a second or two. Even though build performance is a top priority for our native apps, it has never come close to the iteration speed we achieved with React Native. At best, native compilation times are 15 seconds but can be as high as 20 minutes for full builds.
Investing in Infrastructure
We developed extensive integrations into our native infrastructure. All core pieces such as networking, i18n, experimentation, shared element transitions, device info, account info, and many others were wrapped in a single React Native API. These bridges were some of the more complex pieces because we wanted to wrap the existing Android and iOS APIs into something that was consistent and canonical for React. While keeping these bridges up to date with the rapid iteration and development of new infrastructure was a constant game of catch up, the investment by the infrastructure team made product work much easier.
Without this heavy investment in infrastructure, React Native would have led to a subpar developer and user experience. As a result, we don’t believe React Native can simply be tacked on to an existing app without a significant and continuous investment.
Performance
One of the largest concerns around React Native was its performance. However, in practice, this was rarely a problem. Most of our React Native screens feel as fluid as our native ones. Performance is often thought of in a single dimension. We frequently saw mobile engineers look at JS and think “slower than Java”. However, moving business logic and layout off of the main thread actually improves render performance in many cases.
When we did see performance issues, they were usually caused by excessive rendering and were mitigated by effectively using shouldComponentUpdate, removeClippedSubviews, and better use of Redux.
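The shouldComponentUpdate mitigation boils down to a shallow comparison of props, of the kind sketched below. (This is an illustrative sketch, not React's actual implementation.)

```javascript
// A minimal shallow-equality check used to skip re-renders when
// props haven't actually changed.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  // Only compares one level deep: nested objects must be reference-equal.
  return aKeys.every((key) => a[key] === b[key]);
}

function shouldComponentUpdate(prevProps, nextProps) {
  return !shallowEqual(prevProps, nextProps);
}

shouldComponentUpdate({ id: 1 }, { id: 1 }); // false -- skip the render
shouldComponentUpdate({ id: 1 }, { id: 2 }); // true  -- re-render
```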
However, the initialization and first-render time (outlined below) made React Native perform poorly for launch screens, deeplinks, and increased the TTI time while navigating between screens. In addition, screens that dropped frames were difficult to debug because Yoga translates between React Native components and native views.
Redux
We used Redux for state management, which we found effective: it prevented the UI from ever getting out of sync with state and enabled easy data sharing across screens. However, Redux is notorious for its boilerplate and has a relatively steep learning curve. We provided generators for some common templates, but it was still one of the most challenging pieces and a source of confusion while working with React Native. It is worth noting that these challenges were not React Native specific.
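The core Redux pattern is small enough to sketch by hand: a pure reducer is the only way state changes. (This is a hand-rolled illustration, not the redux package itself, and the reducer's domain is hypothetical.)

```javascript
// Minimal Redux-style store: all updates funnel through dispatch/reducer.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the only place state changes
      listeners.forEach((fn) => fn());
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// Example reducer for a hypothetical saved-listings screen.
function savedListings(state = [], action) {
  switch (action.type) {
    case "SAVE_LISTING":
      return [...state, action.id]; // never mutate -- return new state
    default:
      return state;
  }
}

const store = createStore(savedListings, []);
store.dispatch({ type: "SAVE_LISTING", id: 42 });
store.getState(); // [42]
```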
Backed by Native
Because everything in React Native can be bridged by native code, we were ultimately able to build many things we weren’t sure were possible at the beginning such as:
Shared element transitions: We built a <SharedElement> component that is backed by native shared element code on Android and iOS. This even works between native and React Native screens.
Lottie: We were able to get Lottie working in React Native by wrapping the existing libraries on Android and iOS.
Native networking stack: React Native uses our existing native networking stack and cache on both platforms.
Other core infra: Just like networking, we wrapped the rest of our existing native infrastructure such as i18n, experimentation, etc. so that it worked seamlessly in React Native.
Static Analysis
We have a strong history of using eslint on web which we were able to leverage. However, we were the first platform at Airbnb to pioneer prettier. We found it to be effective at reducing nits and bikeshedding on PRs. Prettier is now being actively investigated by our web infrastructure team.
We also used analytics to measure render times and performance to figure out which screens were the top priority to investigate for performance issues.
Because React Native was smaller and newer than our web infrastructure, it proved to be a good testbed for new ideas. Many of the tools and ideas we created for React Native are being adopted by web now.
Animations
Thanks to the React Native Animated library, we were able to achieve jank-free animations and even interaction-driven animations such as scrolling parallax.
JS/React Open Source
Because React Native truly runs React and javascript, we were able to leverage the extremely vast array of javascript projects such as redux, reselect, jest, etc.
Flexbox
React Native handles layout with Yoga, a cross-platform C library that handles layout calculations via the flexbox API. Early on, we were hit with Yoga limitations such as the lack of aspect ratios but they have been added in subsequent updates. Plus, fun tutorials such as flexbox froggy made onboarding more enjoyable.
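The flexGrow arithmetic that flexbox (and therefore Yoga) performs along the main axis can be illustrated with a toy calculation: leftover space is divided among children in proportion to their flexGrow values. This is a simplified sketch, not Yoga's actual algorithm.

```javascript
// Toy flexGrow distribution: free space is split proportionally.
function distribute(containerSize, children) {
  const used = children.reduce((sum, c) => sum + c.basis, 0);
  const free = Math.max(0, containerSize - used);
  const totalGrow = children.reduce((sum, c) => sum + (c.flexGrow || 0), 0);
  return children.map((c) => ({
    ...c,
    size: c.basis + (totalGrow ? (free * (c.flexGrow || 0)) / totalGrow : 0),
  }));
}

const row = distribute(350, [
  { basis: 100, flexGrow: 0 }, // fixed-size child
  { basis: 50, flexGrow: 1 },  // takes 1/3 of the leftover 150px
  { basis: 50, flexGrow: 2 },  // takes 2/3 of the leftover 150px
]);
// resulting sizes: 100, 100, 150
```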
Collaboration with Web
Late in the React Native exploration, we began building for web, iOS, and Android at once. Given that web also uses Redux, we found large swaths of code that could be shared across web and native platforms with no alterations.
What didn’t work well
React Native Immaturity
React Native is less mature than Android or iOS. It is newer, highly ambitious, and moving extremely quickly. While React Native works well in most situations, there are instances in which its immaturity shows through and makes something that would be trivial in native very difficult. Unfortunately, these instances are hard to predict and can take anywhere from hours to many days to work around.
Maintaining a Fork of React Native
Due to React Native’s immaturity, there were times in which we needed to patch the React Native source. In addition to contributing back to React Native, we had to maintain a fork in which we could quickly merge changes and bump our version. Over the two years, we had to add roughly 50 commits on top of React Native. This makes the process of upgrading React Native extremely painful.
JavaScript Tooling
JavaScript is an untyped language. The lack of type safety was both difficult to scale and became a point of contention for mobile engineers used to typed languages who may have otherwise been interested in learning React Native. We explored adopting flow but cryptic error messages led to a frustrating developer experience. We also explored TypeScript but integrating it into our existing infrastructure such as babel and metro bundler proved to be problematic. However, we are continuing to actively investigate TypeScript on web.
Refactoring
A side-effect of JavaScript being untyped is that refactoring was extremely difficult and error-prone. Renaming props, especially props with a common name like onClick or props that are passed through multiple components were a nightmare to refactor accurately. To make matters worse, the refactors broke in production instead of at compile time and were hard to add proper static analysis for.
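The failure mode is easy to demonstrate: a call site that a prop rename missed fails silently at runtime rather than at compile time. (Component and prop names here are hypothetical.)

```javascript
// Suppose the prop was renamed from `onClick` to `onPress`.
function Button(props) {
  return { label: props.label, handler: props.onPress };
}

// A call site the refactor missed -- still passing the old name.
const stale = Button({ label: "Book", onClick: () => "booked!" });
stale.handler; // undefined -- no error anywhere, the tap just does nothing
```

A type checker would flag the stale `onClick` at the call site; untyped JavaScript only reveals it when a user taps the button in production.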
JavaScriptCore inconsistencies
One subtle and tricky aspect of React Native is due to the fact that it is executed on a JavaScriptCore environment. The following are consequences we encountered as a result:
iOS ships with its own JavaScriptCore out of the box. This meant that iOS was mostly consistent and not problematic for us.
Android doesn’t ship its own JavaScriptCore so React Native bundles its own. However, the one you get by default is ancient. As a result, we had to go out of our way to bundle a newer one.
While debugging, React Native attaches to a Chrome Developer Tools instance. This is great because it is a powerful debugger. However, once the debugger is attached, all JavaScript runs within Chrome’s V8 engine. This is fine 99.9% of the time. However, in one instance, we got bit when toLocaleString worked on iOS but only worked on Android while debugging. It turns out that the Android JSC doesn’t include it; it was silently failing unless you were debugging, in which case V8 (which does include it) was being used. Without knowing technical details like this, product engineers can lose days to painful debugging.
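One defensive pattern for engine gaps like this is to feature-detect the behavior once and fall back to a manual implementation, rather than assume every JavaScriptCore build formats locales. (A sketch only; the fix described above was bundling a newer JSC on Android.)

```javascript
// Detect whether toLocaleString actually applies locale formatting,
// or is just a bare toString on this engine.
const hasLocaleFormatting = (1000).toLocaleString("en-US") === "1,000";

function formatCount(n) {
  if (hasLocaleFormatting) return n.toLocaleString("en-US");
  // Manual thousands-separator fallback for engines without locale support.
  return String(n).replace(/\B(?=(\d{3})+(?!\d))/g, ",");
}

formatCount(1234567); // "1,234,567" on either code path
```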
React Native Open Source Libraries
Learning a platform is difficult and time-consuming. Most people only know one or two platforms well. React Native libraries that have native bridges, such as maps and video, require equal knowledge of all three platforms to be successful. We found that most React Native open source projects were written by people who had experience with only one or two. This led to inconsistencies or unexpected bugs on Android or iOS.
On Android, many React Native libraries also require you to use a relative path to node_modules rather than publishing maven artifacts, which is inconsistent with what the community expects.
Parallel Infrastructure and Feature Work
We have accumulated many years of native infrastructure on Android and iOS. However, in React Native, we started with a blank slate and had to write or create bridges of all existing infrastructure. This meant that there were times in which a product engineer needed some functionality that didn’t yet exist. At that point, they either had to work in a platform they were unfamiliar with and outside the scope of their project to build it or be blocked until it could be created.
Crash Monitoring
We use Bugsnag for crash reporting on Android and iOS. While we were able to get Bugsnag generally working on both platforms, it was less reliable and required more work than it did on our other platforms. Because React Native is relatively new and rare in the industry, we had to build a significant amount of infrastructure such as uploading source maps in-house and had to work with Bugsnag to be able to do things like filter crashes by just those that occurred in React Native.
Due to the amount of custom infrastructure around React Native, we would occasionally have serious issues in which crashes weren’t reported or source maps weren’t properly uploaded.
Finally, debugging React Native crashes was often more challenging when the issue spanned React Native and native code, since stack traces don’t jump between the two.
Native Bridge
React Native has a bridge API to communicate between native and React Native. While it works as expected, it is extremely cumbersome to write. Firstly, it requires all three development environments to be properly set up. We also experienced many issues in which the types coming from JavaScript were unexpected. For example, integers were often wrapped by strings, an issue that isn’t realized until it is passed over a bridge. To make matters worse, sometimes iOS will fail silently while Android will crash. We began to investigate automatically generating bridge code from TypeScript definitions towards the end of 2017 but it was too little too late.
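The string-wrapped-integer problem suggests validating at the bridge boundary so that both platforms fail loudly and identically, instead of iOS failing silently while Android crashes. The sketch below is a hypothetical guard, not a React Native API.

```javascript
// Coerce and validate a value arriving from JavaScript before handing
// it to native code.
function coerceBridgeInt(value, name) {
  const n = typeof value === "string" ? Number(value) : value;
  if (!Number.isInteger(n)) {
    // Fail loudly and consistently on both platforms.
    throw new TypeError(`Bridge argument ${name} is not an integer: ${value}`);
  }
  return n;
}

coerceBridgeInt("42", "listingId"); // 42 -- string-wrapped integer fixed up
coerceBridgeInt(7, "guestCount");   // 7
```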
Initialization Time
Before React Native can render for the first time, you must initialize its runtime. Unfortunately, this takes several seconds for an app of our size, even on a high-end device. This made using React Native for launch screens nearly impossible. We minimized the first-render time for React Native by initializing it at app launch.
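The "initialize at app launch" mitigation amounts to a memoized initializer: pay the expensive setup cost once, early, and let later screens reuse the same instance. (Names below are illustrative, not Airbnb's actual code.)

```javascript
// Memoize an expensive initializer so it runs exactly once.
function once(init) {
  let instance;
  return function getInstance() {
    if (instance === undefined) {
      instance = init(); // expensive: done at app launch, not per screen
    }
    return instance;
  };
}

let initCalls = 0;
const getRuntime = once(() => {
  initCalls += 1; // stands in for several seconds of runtime setup
  return { ready: true };
});

getRuntime();                  // warm up eagerly at launch
getRuntime() === getRuntime(); // true -- later screens pay nothing
```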
Initial Render Time
Unlike with native screens, rendering React Native requires at least one full main thread -> js -> yoga layout thread -> main thread round trip before there is enough information to render a screen for the first time. We saw an average initial p90 render of 280ms on iOS and 440ms on Android. On Android, we used the postponeEnterTransition API which is normally used for shared element transitions to delay showing the screen until it has rendered. On iOS, we had issues setting the navbar configuration from React Native fast enough. As a result, we added an artificial delay of 50ms to all React Native screen transitions to prevent the navbar from flickering once the configuration was loaded.
App Size
React Native also has a non-negligible impact on app size. On Android, the total size of React Native (Java + JS + native libraries such as Yoga + Javascript Runtime) was 8mb per ABI. With both x86 and arm (32 bit only) in one APK, it would have been closer to 12mb.
64-bit
We still can’t ship a 64-bit APK on Android because of this issue.
Gestures
We avoided using React Native for screens that involved complex gestures because the touch subsystem for Android and iOS are different enough that coming up with a unified API has been challenging for the entire React Native community. However, work is continuing to progress and react-native-gesture-handler just hit 1.0.
Long Lists
React Native has made some progress in this area with libraries like FlatList. However, they are nowhere near the maturity and flexibility of RecyclerView on Android or UICollectionView on iOS. Many of the limitations are difficult to overcome because of the threading. Adapter data can’t be accessed synchronously so it is possible to see views flash in as they get asynchronously rendered while scrolling quickly. Text also can’t be measured synchronously so iOS can’t make certain optimizations with pre-computed cell heights.
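The windowing arithmetic a virtualized list like FlatList performs can be illustrated with fixed-height rows: given a scroll offset, compute which rows need to be rendered, plus an overscan margin. This toy version ignores the async measurement problems described above.

```javascript
// Which fixed-height rows are visible (plus overscan) at a scroll offset?
function visibleRange(scrollY, viewportHeight, rowHeight, rowCount, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollY / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollY + viewportHeight) / rowHeight) - 1 + overscan
  );
  return { first, last };
}

// 1000 rows of 80px each, a 600px viewport scrolled to y=800:
visibleRange(800, 600, 80, 1000); // { first: 8, last: 19 }
```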
Upgrading React Native
Although most React Native upgrades were trivial, there were a few that wound up being painful. In particular, it was nearly impossible to use React Native 0.43 (April 2017) to 0.49 (October 2017) because it used React 16 alpha and beta. This was hugely problematic because most React libraries that are designed for web use don’t support pre-release React versions. The process of wrangling the proper dependencies for this upgrade was a major detriment to other React Native infrastructure work in mid-2017.
Accessibility
In 2017, we did a major accessibility overhaul in which we invested significant effort to ensure that people with disabilities can use Airbnb to book a listing that accommodates their needs. However, there were many holes in the React Native accessibility APIs. In order to meet even a minimum acceptable accessibility bar, we had to maintain our own fork of React Native where we could merge fixes. In these cases, a one-line fix on Android or iOS wound up taking days: figuring out how to add it to React Native, cherry-picking it, then filing an issue on React Native core and following up on it over the coming weeks.
Troublesome Crashes
We have had to deal with a few very bizarre crashes that are hard to fix. For example, we are currently experiencing this crash on the @ReactProp annotation and have been unable to reproduce it on any device, even those with identical hardware and software to ones that are crashing in the wild.
SavedInstanceState Across Processes on Android
Android frequently cleans up background processes but gives them a chance to synchronously save their state in a bundle. However, on React Native, all state is only accessible in the js thread so this can’t be done synchronously. Even if this weren’t the case, redux as a state store is not compatible with this approach because it contains a mix of serializable and non-serializable data and may contain more data than can fit within the savedInstanceState bundle which would lead to crashes in production.
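The serialization problem can be demonstrated directly: a JSON round-trip keeps only the plain-data slice of a state tree and silently drops functions and other non-serializable values, which is why a typical redux store can't simply be written into a savedInstanceState bundle. (The state shape below is hypothetical.)

```javascript
// Extract the JSON-serializable slice of a state object; functions
// and other non-serializable values are dropped.
function serializableSlice(state) {
  return JSON.parse(JSON.stringify(state, (key, value) =>
    typeof value === "function" ? undefined : value
  ));
}

const appState = {
  searchQuery: "Paris",
  retryRequest: () => {}, // non-serializable: would be lost on restore
};

serializableSlice(appState); // { searchQuery: "Paris" }
```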
|
https://medium.com/airbnb-engineering/react-native-at-airbnb-the-technology-dafd0b43838
|
['Gabriel Peal']
|
2018-06-25 15:57:00.044000+00:00
|
['Android', 'React', 'React Native', 'iOS', 'Mobile']
|
|
5,182 |
Tips to advance towards a career in Machine Learning
|
Machine Learning is a branch of Artificial Intelligence that gives a machine the ability to automatically learn and improve from an experience, rather than being explicitly programmed. ML is amongst the hottest career choices in 2020. Companies like Google, Amazon, Microsoft, Oracle, and many others are making a transition towards Machine Learning and Artificial Intelligence.
There are various career options in the field of Machine Learning such as Machine Learning Engineer, Machine Learning Analyst, NLP Data Scientist, Data Sciences Lead, and Machine Learning Scientist.
To help the aspiring Machine Learning enthusiasts, here are some tips which will help you build a career in Machine Learning. So, without any further ado, let us jump on to the prerequisites of Machine Learning.
Understanding the Prerequisites
The realm of machine learning comes with its own sets of requirements as well as qualifications. A career in machine learning is a steep one and requires you to have basic skill sets to start your journey into the industry.
To advance your career in Machine Learning, you must have exposure to the following skills:
Stats and probability
Algorithms
Applied mathematics
Hands-on experience in programming languages like Python, C++, Java or R.
And, Distributed Computing
To kick start your career, start working on the skills you lack. Get a book on probability, brush up your coding skills, and work on your weaker areas. Along with that, you can also take up a substantial course on artificial intelligence and machine learning.
Take online courses
To advance your career in machine learning, you have to broaden your skill set as much as possible. As a heads-up, you can start with online courses; then, to hone your skills and refresh your knowledge, you can participate in competitions on platforms such as Kaggle and Analytics Vidhya.
Another popular approach is to accelerate your learning with the help of bootcamps.
Practice as much as possible
If you want to master machine learning skills, you need to practice on datasets. You must know how to work through a problem end-to-end. Learn to map that process onto a tool and practice it on a dataset in a targeted way.
Work on real-world machine learning problems to see how it can actually be used in fields like education, science, technology, and medicine. You can take the machine learning problems from Kaggle.com right now to build your machine learning model.
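The end-to-end process described above — load a dataset, split it, fit a model, evaluate it — can be sketched in a few lines. This is a minimal illustration using scikit-learn and its bundled iris dataset, not a recipe from any particular course; the model choice and split ratio are arbitrary assumptions for the demo.

```python
# Minimal end-to-end sketch: dataset -> split -> train -> evaluate.
# Assumes scikit-learn is installed; the dataset and model are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data so the evaluation is on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=1000)  # max_iter raised so the solver converges
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {acc:.2f}")
```

Once this loop feels routine on a toy dataset, repeating it on a messier Kaggle dataset is mostly a matter of adding data cleaning and feature engineering steps.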
Working on different projects will give you the experience required for a job and will make your resume stand out. The certificate from your institution and the number of projects you have worked on both matter a lot. Now that you have built up the basic knowledge and can understand what you are learning, it is time to start working on some machine learning projects that can help you polish your skills.
Try collaborating with your peers on projects to constantly upskill yourself; the practical exposure will help you meet industry requirements.
Build a machine learning portfolio
Designers and artists use a portfolio to showcase examples of prior work to prospective clients and employers. Your machine learning portfolio should be a compilation of your completed independent projects, each of which uses machine learning in some way. The portfolio should be accessible, small, completed, independent, and most importantly, understandable.
A set of your finished projects will offer a knowledge base for you to reflect on and leverage as you push into projects further from your comfort zones. You can have a code repository such as GitHub or BitBucket as well to list your projects.
Such sites encourage you to provide a readme file to clearly describe the purpose and findings for each project. You can even add images, graphs, videos, and links in your code repositories. Give instructions to download the project and recreate the results as well. Want people to re-run your work? Try making it as accessible and easy as possible.
Bottom line
Now that you have an idea of how to build a career in Machine Learning, it is time for you to pull up your socks. Know your strengths and work towards the weaknesses to stay ahead of your competitors. Enroll yourself in a Machine Learning course and build a strong foundation for a rewarding career in Machine Learning.
|
https://medium.com/codingninjas-blog/machine-learning-is-a-branch-of-artificial-intelligence-that-gives-a-machine-the-ability-to-352370bb53ef
|
['Coding Ninjas']
|
2020-02-03 12:52:03.070000+00:00
|
['Data Science', 'Artificial Intelligence', 'Machine Learning']
|
5,183 |
Designing for accessibility is not that hard
|
Designing for accessibility is not that hard
Seven easy-to-implement guidelines to design a more accessible web ❤️
Digital accessibility refers to the practice of building digital content and applications that can be used by a wide range of people, including individuals who have visual, motor, auditory, speech, or cognitive disabilities.
There’s a myth that making a website accessible is difficult and expensive, but it doesn’t have to be. Designing a product from scratch that meets accessibility requirements doesn’t add extra features or content; therefore, there shouldn’t be additional cost or effort.
Fixing a site that is already inaccessible may require some effort, though. When I used to work at Carbon Health, we checked the accessibility of our site using the AXE Chrome Extension. We found 28 violations that we needed to solve on the home page alone. It sounded complicated, but we discovered that these problems were not that hard to correct; it was just a matter of investing time and research to solve them. We were able to get to zero errors in a couple of days.
I want to share with you some of the simple steps we took so you can also make your sites more accessible. These principles focus on web and mobile accessibility.
But before we get started, let’s talk about why that’s important.
Why designing for accessibility? 🤔
As designers, we have the power and responsibility to make sure that everyone has access to what we create regardless of ability, context, or situation. The great thing about making our work accessible is that it brings a better experience to everyone.
There are over 56 million people in the United States (nearly 1 in 5) and over 1 billion people worldwide who have a disability. In 2017, there were 814 website accessibility lawsuits filed in federal and state courts. These two pieces of data alone should convince us of the importance of designing for accessibility.
There is also a strong business case for accessibility: studies show that accessible websites have better search results, they reach a bigger audience, they’re SEO friendly, have faster download times, they encourage good coding practices, and they always have better usability.
These seven guidelines are relatively easy to implement and can help your products get closer to meeting level AA of the Web Content Accessibility Guidelines (WCAG 2.0), and work with the most commonly used assistive technologies, including screen readers, screen magnifiers, and speech recognition tools.
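The level AA bar mentioned above is defined numerically: WCAG 2.0 requires a contrast ratio of at least 4.5:1 for normal-size text, computed from the relative luminance of the two colors. A minimal sketch of that formula (in Python, purely as an illustration — tools like the AXE extension compute this for you) looks like:

```python
# Sketch of the WCAG 2.0 contrast-ratio formula; level AA requires
# at least 4.5:1 for normal-size text. Colors are 8-bit sRGB tuples.

def _linearize(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```

A mid-gray on white, by contrast, can easily fall below the 4.5:1 threshold, which is why "enough color contrast" is the first guideline.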
1. Add enough color contrast 🖍
|
https://uxdesign.cc/designing-for-accessibility-is-not-that-hard-c04cc4779d94
|
['Pablo Stanley']
|
2018-06-29 21:14:50.301000+00:00
|
['User Experience', 'Accessibility', 'UX', 'Design', 'Design Thinking']
|
5,184 |
This Is How to Declutter Your Brain so You Can Achieve Higher-Level Thinking
|
This Is How to Declutter Your Brain so You Can Achieve Higher-Level Thinking
And produce results you didn’t think you were capable of.
Photo by lifecoachcode
Your brain is like your home. If there’s garbage everywhere, you will feel stressed and function at a drastically lower level.
Higher-level thinking looks like this:
Your brain comes up with ideas. A few of them are good.
You can access effortless levels of creativity — helpful if you’re a writer, an entrepreneur, musician, comedian, or actor.
You don’t walk around with huge levels of brain fog that clouds your sense of judgment.
When your brain is cluttered, you blow up at every tiny thing. You need a clear mind to achieve extraordinary results.
First, let’s toss out the mess from your brain:
1. What you can’t control
There’s a lot in your life you can’t control. The fastest way to clutter your brain with B.S. is to think a lot about what you can’t control. A question to change your life:
“What are you going to do about it?”
This question acts as a pattern interrupt. You break up the story of that which you can’t control when you force yourself to take personal responsibility. What you can’t control directs you when you let it.
2. Junk food for your brain
Anger and frustration are junk food for your brain. They don’t get you anywhere. Turn anger and frustration into action. Don’t worry about revenge — focus on results. Results make anger look stupid, rather than you.
3. People-centric thinking
Thinking about people is low-level thinking. Your brain is smarter when you focus on ideas. Humans are a messed up species and critiquing them is hilarious when you think deeply about it.
You really want people to act the right way, be kind, respect your views, be generous towards you, believe the same as you, get on with everybody? Seriously, this is an immature view of the world. Don’t worry about judging people. This is brain clutter.
Change yourself to change the world.
4. Money driven obsessions
Money is an obsession. When you think money will solve all your problems you mess up your potential. Money solves nothing. America has lots of money. Do they have many problems these days?
The time you spend thinking about money can be better utilized.
5. Your grandmother’s inheritance
Yep, throw the old girl’s stuff away (joking). The junk you hoard takes up space in your physical world.
Try to think clearly while you’re standing in the middle of a rubbish dump that smells like a public toilet after a hundred beer guts have taken a half-time moment of liquid relief. You can’t. It’s hard to put your finger on it.
Inspirational thoughts that drive you towards action aren’t going to happen amongst a pile of junk. Your brain is too busy focusing on where all your physical possessions belong. Other people’s stuff is the hardest to throw away.
There’s a broken sense of connection, or you feel like you owe it to them to hold onto it all. Meanwhile, upstairs in your brain, your thinking becomes cluttered the same way your physical world is.
Putting stuff away declutters your mind.
|
https://medium.com/the-ascent/this-is-how-to-declutter-your-brain-so-you-can-achieve-higher-level-thinking-67de03bc9116
|
['Tim Denning']
|
2020-12-13 17:03:29.796000+00:00
|
['Psychology', 'Self Improvement', 'Inspiration', 'Learning', 'Work']
|
5,185 |
The Toughest Essay
|
Photo by Green Chameleon on Unsplash
Write an essay about the one thing you love,
Was the toughest assignment I ever got.
To compartmentalise you into
Introduction, body, and conclusion
Was not possible —
You needed no introduction,
The largest part of the essay
Was too small to describe you,
And I’d hate our love to have a conclusion.
I would require a lifetime
To write a narrative essay
That would tell our story
Without missing the minutest of details.
Our love was too vast
To paint a picture through
A descriptive essay;
It simply wouldn’t fit the canvas.
To write an expository essay
Was an onerous task.
Facts, statistics, and numbers
Wouldn’t be proof enough of the way I felt.
Only a fool would need sound reasoning
In the form of a persuasive essay
To make our love look convincing;
Our love was never debatable.
I could write a thousand words about you,
But only three would matter the most.
|
https://krishnabetai.medium.com/the-toughest-essay-b41567e78ed0
|
['Krishna Betai']
|
2019-04-27 18:28:48.153000+00:00
|
['Love', 'Writing', 'Poetry', 'NaPoWriMo', 'National Poetry Month']
|
5,186 |
Take Out — Out Take. * Extra-long comic!* When you’re afraid…
|
* Extra-long comic!*
When you’re afraid ordering in will lead to your last supper.
|
https://backgroundnoisecomic.medium.com/take-out-out-take-e3af0b4efefd
|
['Background Noise Comics']
|
2020-04-05 17:51:26.612000+00:00
|
['Humor', 'Anxiety', 'Comics', 'Quarantine', 'Coronavirus']
|
5,187 |
The Human Genome Is Full of Viruses
|
Viruses are powerful, ancient, and vital to our existence, but they are extremely simple constructions. They tend to be nothing more than a few pieces: a protein capsid, which is a simplistic and protective shell; a protein called a polymerase, which carries out most of the functions related to replicating the viral genome; and a sequence of nucleotides — either RNA or DNA — that encode for the previously mentioned viral proteins. The image below shows one of the ways that these viral components can be assembled into a unified whole. Unlike a human genome, a viral genome can be thought of as a self-contained model of the entire viral form. Within its RNA or DNA, a virus contains all the instructions necessary to create an entirely new body for itself and to replicate those same instructions. The simplicity and self-contained nature of viruses makes them phenomenal tools for biological engineering and medicine.
Viruses (specifically bacteriophages) as imaged with an electron microscope. Image Credit: Wikimedia Commons
Viruses are so simple that they don’t always need their own body to survive; they have circadian rhythms like all living things. We experience these rhythms through cycles of sleep and wakefulness, whereas viral rhythms occur as periods of dormancy between rounds of infection. Viruses don’t technically have a body during their dormant phase — they are nothing more than a string of letters in the book of the genome. But, as soon as something disturbs their sleep (like a mutation or a new virus invading the host) viruses can awaken and rebuild their physical bodies from a purely genetic form. When the wrong (or right, depending on your perspective) protein manages to leak out of a dormant viral gene, it is like the virus is suddenly awake again. A new physical body means that it has all the tools necessary to replicate.
Even beyond these rhythmic cycles, certain kinds of viruses don’t need a physical form at all. These disembodied viruses are called transposable elements, or transposons. True viruses have a body made from proteins, but transposons are mobile genetic elements — sequences of DNA that physically move in and out of genomes. For this reason, they are often referred to as “jumping genes.” Transposons do very much the same thing as true viruses, i.e. they copy and paste themselves throughout genomes. They are so similar to true viruses that some endogenous retroviruses (ERVs) are themselves transposons. As stated above, ~8% of the human genome is made up of ERVs, but nearly 50% of the human genome is made of transposons! Humans are basically just big piles of viral-like sequences.
Transposable elements (transposons) are sequences of DNA that literally jump in and out of the genome. Image Credit: Harvard University
Transposons have a disturbing capacity to disrupt important genes by inserting themselves into the DNA sequences. It’s like if a series of words in a book could physically move around from page to page — these words would have a high likelihood of jumping into the middle of a sentence, thereby making it nonsensical. Amazingly, transposons preferentially insert themselves into important and functional genes — as if those jumping words wanted to disrupt the most interesting parts of the book rather than the index or bibliography. This is a powerful evolutionary strategy, since transposons are much more likely to get “read” by a cell if they jump into the middle of an important (and therefore, active) gene.
Transposons can very easily mess up important genes that we need to survive, so it has been theorized that epigenetic mechanisms evolved to stop transposons from moving around the genome. Furthermore, since transposons can rapidly alter DNA sequences, they are thought to play a major role in the processes of evolution and speciation (how a species evolves into a new form). In plants, transposons become highly active in response to stressful conditions, and this could act as a rapid source of short-term mutation when the environment starts pressuring you to survive or die. In addition, an animal’s genome changes when they are domesticated (like going from a wolf to a dog, or from an aurochs to a cow), and a majority of these changes occur in transposon sequences. No one is really sure why or how this happens, but it is clear that viruses play a very important role in rapid genetic change.
|
https://medium.com/medical-myths-and-models/the-human-genome-is-full-of-viruses-c18ba52ac195
|
['Ben L. Callif']
|
2020-05-22 15:31:58.341000+00:00
|
['Viral', 'Epigenetics', 'Genetics', 'Biology', 'Science']
|
5,188 |
Why Being Angry In The Age of Trump Is Deadly
|
Image: what’s up.co.nz
Why Being Angry In The Age of Trump Is Deadly
I need to save myself, fast!
Yes, it’s about Donald Trump, and the trappings that come with managing a hostile climate that breeds ill-will towards assigned enemies, representing the utter devastation of Black lives.
And of course it didn’t take a maniacal president with a penchant for radicalizing White terrorists for anyone of us to be fully aware of the real and present danger that police brutality exacts, or to comprehend the traitorous levy of a woefully biased judicial system.
But when your existence is an alarm clock that goes off every hour on the hour without any reprieve, due to the mechanisms of high-priced platforms that were built for this season of chaos, on the scale that enables the validation of evilness from the highest office in the land — there’s the threat of slowly but surely losing your shit.
Aside from the growing and unreasonable habit of tracking Twitter accounts belonging to enemies of the state with roaring clap backs that weirdly lighten the burden of discontent, there’s also the overall feeling of emotional and physical fatigue.
Getting older is a blessing but it’s also the brutal lesson in how some things will never change. You can only control so much. And even when you give all you’ve got, the outcome won’t match that investment. You keep trying to embrace daily reminders of how your best days are still ahead, but that’s a challenging task when the demise of another year is at hand.
You do the best you can to keep your head up.
As a writer, I find solace in the words that I create with the discipline that has helped to inspire other genres of expression, thereby opening up a whole new world that needed arousal.
But the mental disease of unrelenting despair that’s briefly alleviated with the Buddhist chants and CBD tinctures is a personal pain that has to be addressed accordingly sooner rather than later. And while I was planning on seriously exploring options for therapy in the new year, the awfulness of today frightens me into going with the “sooner” route.
The tense conversation that morning with my mother has become a frequent interaction that transpires from accumulated years of eventfulness. The good, the bad, the ugly and uglier start demanding the attention that was delayed. And since the next decade looms with the danger of what unfinished business can deliver, it’s healthy to clean house when you can.
All I wanted was to head to the gym before the brisk walk to the shopping plaza to pick up necessary items. Visiting my parents in their neck of the woods is daunting when you’re not driving. As a longtime resident of New York City, I was accustomed to trekking or jogging everywhere with the least amount of obstacles.
But those privileges aren’t transferable to the rest of the country because drivers always have the right of way, which explains why it takes forever to cross the street. And when the light does change, you must run to make that small window and avoid getting hit.
The lack of sidewalks is also a bummer, and the apartment complex where my parents live tried to rectify that issue by carving out a slab that can barely fit a full-sized adult.
I noticed that as I made my way out of the entrance and towards the busy street to wait for the light. As I stood watching the cars go by, the feelings of irritation were settled in my sight. You think a lot when you’re at that age where everything is just as big a deal as you fear.
As I made my way down the hilly part after crossing the street, I was at least able to note that it was a gorgeous fall day. The two Black girls with their Chick-fil-A bags had joined my procession and walked ahead of me, and because their casual pace didn’t gel with my speed, I decided to step down from the fashioned sidewalk and come to street level.
After living in cities that traditionally devalue the safety of anything walking, I’ve been trained not to assume that drivers give a damn. This means keeping close to the curb when I’m near the street. And so I made sure to stay within the confines of the painted slab that indicated I was far enough away from the slow-moving vehicles.
Suddenly there was a startlingly loud horn from behind and the car drove forward with the irate driver yelling that I was on the sidewalk.
The two Black girls immediately commented on the driver’s rudeness, and their reaction keyed into my shock and resentment, especially when I knew that he was being an asshole for the fuck of it. I definitely wasn’t the moving object purposely blocking his view.
The burning anger was intensified by the fact that he was a White male, and coupled with the Black girls echoing and confirming the obvious hostility, I was inspired to flip the bastard off.
His advantage was being able to drive off, while I kept walking in the same direction.
I knew I was on a mission even before I could stop myself.
The sweltering inferno had devoured me in seconds, and the only way to cool off was to let it all out. He parked where I could get to him, and as he got out of his car and I got a glimpse of his gym attire, for some reason there was even more fury breathing out of me.
I stood in the middle of the parking area and gave him the finger again, which encouraged his curses, and motivated me to say something that I’ve never said before, at least not on a Saturday morning, in a family friendly shopping center.
I won’t say exactly what it was because I’m too mortified, but it was in reference to his race.
As soon as the words fell out, I found the cold chill that I thought would make me feel better.
By the time I returned to myself, my whole body was covered in sweat and I was shaking uncontrollably. Getting that angry has happened before, but it was usually relegated to the discipline of saving my energy for those who deserve it, instead of bonafide idiots.
The fact that I wasn’t willing to just keep it moving, and instead insisted on hunting my prey out of a duty to let him know why he was a piece of shit, a verdict that had more to do with the color of his skin, was an unnerving realization that shook me to the core.
First off, I could’ve gotten blasted away by this dude, and the authorities would blame me for my own murder based on the accounts of bystanders, who would readily attest to how the enraged Black woman forced the burly White guy to defend himself.
And then there’s the glaring evidence of how the internalized data from the “summer of hate,” when White folks were calling the cops on Black folks for occupying spaces that belong to every human, must’ve stuck around for the manifestations that you don’t see coming.
We think we’re okay.
As long as the daily rituals are fulfilled without disruptions, there’s no incentive for check-ins, just to re-affirm that we’re not stealthily falling apart at the seams.
I’m not regretful for the first “fuck you,” but I hate that it escalated to the point where I became unrecognizable.
At the same time, I’m willing to give myself a break, as a Black woman who can’t afford to act out like her White counterparts without the strong likelihood that I could be arrested or worse.
But most importantly, it’s apparent that I have to proactively seek the self-care that will keep me centered and protected in the knowledge that I’m too precious to resort to street fights with grown adult males, who have won even before the gauntlet is thrown.
I am most certainly angry, and this disposition retains the heat that explodes before I can duck for cover.
I don’t want to be the Black woman who downplayed how those temperatures can rise just in time for the encounters that will turn a mid-morning stroll into the journey that won’t bring me back, for reasons that weren’t worth the gamble.
Everything is about race, it always has been.
I was convinced the White guy who sternly blew that horn at me, did so as a way to embarrass and demean, and since it was blatantly unprovoked, it instinctively felt like a racist attack.
Maybe I was wrong to rush to that conclusion, but either way, I shouldn’t have invited more shit to the pile with that confrontation. And I absolutely shouldn’t have said what I said because I know better.
There are things I have to change and probably scrub away from my calendar of activities in order to prepare for the bigger assignment of tending to the exposed nerves that are so much more entangled than I could’ve imagined.
These times are not ordinary or livable, and while we do our best to navigate the terrain of lawlessness with the tools of disengagement, the falsehood of our mental state buckles only when real life drives through — fast and furious.
What happened makes it clear that drastic adjustments must be made.
Once I re-entered the complex after the mess, and made my way down the driveway to the apartments, paying extra attention to the painted stain of a sidewalk, I walked past a lovely young Black woman, who gave me the most amazing smile, which I happily reciprocated.
Only then did the hot tears stream down.
Yeah, I have to save myself, fast!
|
https://nilegirl.medium.com/why-being-angry-in-the-age-of-trump-is-deadly-4aee5f7c887d
|
['Ezinne Ukoha']
|
2019-10-21 16:42:38.739000+00:00
|
['Mental Health', 'Therapy', 'Race', 'BlackLivesMatter', 'Social Media']
5,189 |
How to Reimagine Your Boring Brand
|
We live in a time of change, where financial markets, capital investments, and entire industries can shift rapidly. Where startups disrupt old ways of doing things. And where the consumer is in charge, expecting consistent content delivered across devices.
So it’s no surprise that businesses can fall behind or fail to distinguish themselves in such a fluid environment. If you’re looking to reinvent your brand, you want to start the process by asking lots of questions — leaving your assumptions in your wake.
One question your company will need to ask is: Do we truly understand how customers are using our products? Even an innovative company might be surprised at that answer.
SurveyMonkey began their brand update with a survey that revealed a big surprise: people weren’t using their product as a survey tool. Rather, they were using it to unleash creative thinking. Such lightbulb moments led them to a new mission: Power the Curious.
Reimagining a brand isn’t something to be taken lightly. Here are some pointers I’ve gleaned from working on branding projects for Element Three clients.
Understand Your Starting Point
Whether you’re revamping a venerable brand or rebranding after a wave of acquisitions, you need to find your starting point by researching the industry. This should be an objective process of discovering what’s new in the space, where things are now, and where things are headed. You can read my approach to industry research here.
Gather Internal Perspectives
Another important aspect of a brand overhaul is understanding the perspectives of your employees and key internal stakeholders. A marketing agency can serve as an objective third party in this area by developing an employee survey or conducting interviews with key internal stakeholders.
Employees are in a position to contribute valuable ideas that can help you overcome the status quo. As this Fast Company article says, “bad ideas are the seeds of great ideas.”
You might find an internal consensus that your company needs a new name. If that’s the case, keep in mind that naming is a beast of a process in and of itself. Call off the committee and read this post.
Know Your Audience
Customer research can take a number of different forms: from online surveys to phone interviews to in-person observations of them using your product. Interviews can be a powerful means of gathering information because you can ask follow-up questions to intriguing or unexpected responses. Let’s look at a couple of companies that made the most of insights gained from interacting with their consumers.
Pabst Blue Ribbon might not come to mind when you think of great beer, but you’ve got to hand it to them for their comeback. When their sales bottomed out in 2001, they turned to an Atlanta agency called Fizz for help, according to the HubSpot blog. There wasn’t any money for traditional advertising, so Fizz went straight to the customer to find out why they drank PBR. As it turned out, the PBR fans were “early hipsters” who avoid mainstream brands. So with this information, the agency spurred PBR’s comeback with a strategy of sponsoring events like skating parties and art gallery openings.
Lego experienced an amazing comeback of its own — one that’s been hailed as the greatest turnaround in corporate history, according to The Guardian newspaper. This revival stemmed from really understanding their audience: kids and their families. The Danish company is said to conduct the world’s largest ethnographic study of children, something they refer to as “camping with consumers.” Lego’s Global Insights group travels the world to observe how kids play with Lego and learn why some sets are more popular than others. Here are two takeaways:
Understand Your Audience Segments
Lego learned that boys tend to be more interested in the battle of good versus evil, while girls tend to want figures with greater detail and realism. Such insights helped Lego, whose customer base was largely boys, break into the girls’ market with Lego Friends, a different type of product that appeals to what they learned girls want.
Stick to Your Strengths
Lego boss Jørgen Vig Knudstorp inherited theme parks, clothes, jewelry, and video games when he took over in 2004, the Guardian article said. Knudstorp sold assets — like the theme parks — for which Lego had no expertise. And under his leadership Lego has also decided to find partnerships for new projects such as movies, rather than trying to create everything themselves.
Know Your Adversaries
Another aspect of updating your brand involves clearly understanding how your competitors are positioned, what they’re saying to the marketplace, and where they’re saying it. You likely have strong feelings about certain competitors and how you stack up. But do you realize that the companies you compete with online are often a different group than the ones you compete with offline?
When we conduct a competitive audit at E3, we research how the client compares to key competitors in terms of:
Clarity of message: How easy is it to follow and understand what they’re saying?
Design and UX: What is the visual identity?
Channel efficacy: How effective is the marketing across channels?
Reputation: What’s the sentiment among customers, employees, and the industry?
Differentiators: What sets them apart?
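Audits like this are easier to compare when the ratings are tallied consistently. The sketch below is purely hypothetical: the five dimension names come from the list above, but the 1–5 scoring scale, the competitor names, and all of the numbers are invented for illustration, not taken from any real E3 audit.

```python
# Hypothetical tally for a competitive audit. Dimension names mirror
# the criteria listed above; scores and competitors are made up.
AUDIT_DIMENSIONS = [
    "clarity",            # Clarity of message
    "design_ux",          # Design and UX
    "channel_efficacy",   # Channel efficacy
    "reputation",         # Reputation
    "differentiators",    # Differentiators
]

def audit_average(scores: dict) -> float:
    """Average a competitor's 1-5 ratings across all audit dimensions."""
    return sum(scores[d] for d in AUDIT_DIMENSIONS) / len(AUDIT_DIMENSIONS)

competitors = {
    "Competitor A": {"clarity": 4, "design_ux": 3, "channel_efficacy": 5,
                     "reputation": 4, "differentiators": 2},
    "Competitor B": {"clarity": 2, "design_ux": 5, "channel_efficacy": 3,
                     "reputation": 3, "differentiators": 4},
}

# Rank competitors from strongest to weakest overall showing.
for name, scores in sorted(competitors.items(),
                           key=lambda kv: -audit_average(kv[1])):
    print(f"{name}: {audit_average(scores):.1f}")
```

Even a rough tally like this makes it obvious where a competitor is strong across the board and where a single weak dimension (here, Competitor A's differentiators) leaves an opening.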
Synthesize Your Findings
After all your research is done, it’s time to synthesize your findings. You’ll need to assess what your customers have told you they care about, what your key stakeholders think you should be communicating, and what you’re actually communicating to the marketplace.
You’ll want to consider the main themes that emerged in the research. Ask yourself questions like:
Which themes keep coming up?
Were there surprising bits of information?
How do emerging trends relate to the customer? My company? Competitors?
What are the areas of greatest opportunity?
Where are the potential threats?
How are competitors positioned? What about us?
How are we different from everyone else?
What’s a positioning we can truly own?
Answering these questions and identifying what’s lacking in your current messaging can help you craft a story that’s unique, authentic, and bold — one that can grab attention across online and offline channels.
Find Your Bold Story
Your story is what sets you apart, what pulls people in. It must be relatable. That means it’s okay to admit your failures and give people a window into the struggles you’ve experienced as a company; such transparency can help you connect with consumers on an emotional level.
Some stories take companies back to their roots. Others like Target find a new position that they can truly own. Target, which was going head-to-head with big-box stores like Wal-Mart, differentiated itself by becoming a fashion-forward brand that maintained its low prices. The style revamp extended beyond the products to the design of the stores.
Wherever your story takes you, it’s crucial to get key internal stakeholders on board with your direction early on. An effective internal launch can serve as a rallying cry that gets employees excited before your new story is broadcast to the world; that’s why it’s important to get the internal launch right, especially in a large company.
Element Three has significant experience in working on branding projects with clients in a variety of industries. Check out the work we did to modernize Indiana’s municipal advocacy organization.
Perspective from Derek Smith, Senior Writer at Element Three
|
https://medium.com/element-three/how-to-reimagine-your-boring-brand-beed650ed0c6
|
['Element Three']
|
2018-08-03 15:17:02.889000+00:00
|
['Brand Strategy', 'Marketing', 'Brands', 'Branding']
5,190 |
It’s not just a snap
|
It’s not just a snap
How photographs convey multiple layers of stories
Photo by Marco Xu on Unsplash
Up until 2 years ago, photographing just meant clicking a shutter to me. Then, I fell madly in love with it.
I found myself alone with an old mirrorless camera in my hand, wandering the streets of Bologna with no clue about what it meant to take a picture, except for a few basic notions about the exposure triangle: at that time, I had no idea that these cold numbers said very little about a photograph.
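For readers curious about those “cold numbers”: the exposure triangle ties together aperture (the f-number N), shutter speed (t, in seconds), and ISO. A common way to summarize the first two is the exposure value, EV = log2(N²/t). The little sketch below is mine, not from any camera software, and it assumes ISO is fixed at 100:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Exposure value at ISO 100: EV = log2(N^2 / t).

    Higher EV means the settings admit less light,
    so they suit a brighter scene.
    """
    return math.log2(f_number ** 2 / shutter_seconds)

# f/2.8 at 1/100 s gives EV ~ 9.6; stopping down to f/5.6 at the
# same shutter speed adds exactly two stops (EV ~ 11.6).
print(round(exposure_value(2.8, 1 / 100), 1))
print(round(exposure_value(5.6, 1 / 100), 1))
```

Two different combinations of aperture and shutter speed can land on the same EV and admit the same amount of light, which is exactly the point: the numbers alone say almost nothing about what the photograph is.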
There is a whole world behind it, an entire narrative arc beginning with the choice of a subject and ending with an image, which of course is the sum of the story it’s meant to convey and the story of how it was conceived. Now that I look back, it’s safe to say that someone so much into verbal and written storytelling as I am could not help falling in love with photography too.
I can’t say I’m an expert right now, but if these two years of practice have taught me anything (aside from the fact that you never stop learning something you’re passionate about) it is that the technical aspect is just a small part of the whole story. It’s just an instrument to tell a story, not the story itself. I see it as the pen and paper (or the screen and keyboard) for a writer.
Because that single image, captured in a fraction of a second, tells a lot about who took it and about who looks at it as well.
First layer: planning
There is no photo without planning. Before a portrait photography session begins, the photographer sets up the scene and places the subject inside, choosing each setting in accordance with the theme of the shooting session.
In landscape photography, it’s not possible to build the entire scene from scratch as in the case of an indoor portrait or a still-life picture, but planning still plays a strong role. For starters, several apps and websites allow the photographer to get a glimpse at the spots around them, and about the best day and hour to capture them. A landscape photographer may get on the spot in advance (even a few hours earlier) and begin to experiment with the composition: this way, everything will be in place by the time the picture is meant to be taken.
Believe it or not, even street photography is not a completely instantaneous process: while this discipline might look like it’s based on instinct and luck, a good street photographer is so experienced he can kind of anticipate what has the potential to become a good setting for a photo.
If I have to compare it with a step in the writing process, it corresponds to the “What if” moment, when we begin to explore the possibility for a story to be told. From that point on, the writer/photographer will take the steps allowing them to show the world the story they painted inside their head.
Second layer: shooting
It’s the moment when photographers look at the scene in front of them and measure it against their expectations. It might meet them, exceed them, or it can even fall short, and this is where the on-site adjustments take place. In the case of a writer, it’s when they sit down and put words on paper, and get a chance to see how those words feel in the outside world.
The way the light enters the scene might be slightly different from what we expected, maybe it’s brighter or dimmer. If it’s an outside shot, weather plays quite an important role, and it can change quickly. These small adjustments are the story here, together with the choices about composition.
“Composition” is the process through which the photographer decides what to include in the picture and what to leave out, and how to arrange the elements “making the cut” so that the observer’s eyes are directed where they need to be.
It’s the reason why so many times, after we’ve released the shutter, we feel disappointed by the result. What we saw with our eyes was so good, then why doesn’t the image in the camera preview live up to the real scene? To carry the metaphor of the writing process further, this happens because at a certain point we have to decide what to sacrifice: we can’t fit the entire scene inside the picture or inside a book, not even with the widest-angle lenses in the world or with an endless supply of ink and paper. Some digressions will add something to the story, while others will spoil it and produce a confusing soup.
Third layer: postprocessing
Here comes what can be regarded as the style we use to tell a story, even if talking about postprocessing is kinda like walking on ice: everyone has got their opinion about it, and it usually lies at either end of the spectrum. For some people, post-processing is a way of enhancing the picture so that it reveals its true potential, for others it’s cheating.
I tend to prefer the former point of view: the digital negative is the raw story as if it was told by a machine, while post-processing expresses how I put it on paper, the words I choose to make it mine and to present it to the world.
The amount and nature of post-processing deemed reasonable for an image is something quite personal, and it could potentially turn a photograph upside down. A general rule of thumb is that if the composition is good there will be no need for sensitive tuning such as boosting saturation and contrast or enhancing the colors in an unnatural way to have the picture convey something. A vocabulary too polished can spoil a story, in the same way as an excess of post-processing can spoil a picture. It will make it stand out for sure, but it’s not necessarily a good thing.
Spot-on and discreet post-processing, on the other hand, can work with the spatial arrangement of elements in the scene to enhance what the photographer wants the viewer to see while setting up a certain mood, just as an accurate choice of words will add a personal touch to the story.
Fourth layer: the viewer’s eye
As much as the photographer can try to convey a message with the choices made when planning, shooting, or editing, everyone who’ll ever see the picture will have their own personal impression about it.
Photo by Paul Skorupskas on Unsplash
If the photographer has done a good job with the composition, the viewer will see exactly what they wanted them to see, but everyone will have their own takeaway about what they observe, because there are no clear words to express it. This, in my opinion, is the biggest difference with respect to the writing process, and it is fascinating to think about how our brain can come up with a story (or many stories) just by looking at an image.
Here is an example:
This picture is far from being technically perfect, but it is an attempt at telling a story. Even a short piece of flash fiction would have given us a setting and a precise story development, while in the case of this picture this is up to us. We can make up our own hypotheses just looking at it.
What is the figure on the left saying to the one on the right? Are they fighting, or just deciding where to go from there? Where will that road take them?
(Spoiler: in a place full of chestnuts).
|
https://medium.com/photo-dojo/its-not-just-a-snap-7a86090e8d04
|
['Giulia Picciau']
|
2020-09-12 05:50:04.350000+00:00
|
['Stories', 'Storytelling', 'Photography']
|
|
5,191 |
Sprouting Nightmares Sometimes Grow into Magnificent Treasures
|
She woke up in a puddle of warm liquid. Trisha realized she’d lost control of her bladder. But it was too early. How could her water break twenty-four weeks before she was due? The nurses and doctor at the emergency room rushed her into a hospital bed and didn’t say much.
After an ultrasound, a pelvic exam, and a blood test, the doctor advised Trisha,
“Ma’am, we have some unfortunate news to share with you. We’re going to have to induce a miscarriage unless you wish to go home and wait for it to happen. Your amniotic fluid is extremely low, and your baby’s health is in danger.”
Trisha glared at the doctor and said, “I’m not going anywhere. That’s why I pay all that money for insurance. I’m having my baby.”
The cold empty hospital room felt like an isolated desert with death all around her, but Trisha remembered her roots. Strength and resolve flooded her heart and veins. The Mwezi family stayed firm through any storm, and the love in her heart for her unborn child remained unshakable.
She knew the answer but asked anyway, “Doctor, will my baby live? How can I be in labor this early?”
“We’ve never seen a baby, at this stage, survive a preterm premature rupture.”
The pressure of the following silence exploded in Trisha’s ears. She would not give up on her baby or herself. The doctor recommended termination, but Trisha opted for a high-risk treatment.
GenoMedz Pharmaceuticals was running a research trial on a steroid replacement, offering a drug that promoted organ growth, bone growth, and health without the harmful effects of steroids. They called the drug a SARM (Selective Androgen Receptor Modulator). SARMs are more selective and easily absorbed than steroids. They do not increase testosterone but can be targeted for more specific uses.
Her doctor said, “Trisha, I think this could work. Tests have shown this SARM, Lava, is very safe. Many other SARMS have been developed and approved for a wide variety of medical uses.”
“What does Lava stand for?”
“It’s a long scientific name even I have trouble with. Sign here and we can begin the treatment.” She extended her clipboard to Trisha. The doctor wasted no time.
Trisha went home the next morning. Dr. Andrea Jefferson instructed her to drink a gallon of water a day to get her amniotic fluid levels up. The SARM, Lava, would help speed up the baby’s development, and if she could make it to 23 weeks, she could return to the hospital for bed rest until delivering her baby.
Week 30 of her pregnancy arrived much more quickly than she anticipated. Resting in a hospital bed for the last 7 weeks was the hardest thing she’d ever done. Fortunately for her, AmSend (We Send Your Money Fast Service Inc.) gave her medical leave; executives have their perks. Trisha knew it was time to deliver Robyn.
She felt the baby’s need to get out. Her lower back felt like a giant had crushed her spine between its huge hands, and every joint in her body was on fire. Her water broke again, but this time it was only a trickle. Then came the contractions. Whoever decided deep breathing was an effective way to deal with contractions was a sadist, Trisha thought.
|
https://medium.com/the-partnered-pen/sprouting-nightmares-sometimes-grow-into-magnificent-treasures-261f31ec0c66
|
['Greg Prince']
|
2020-10-21 03:53:47.053000+00:00
|
['Short Story', 'Fiction', 'Horror', 'Storytelling', 'Racism']
|
|
5,192 |
Taming data inconsistencies
|
Vamsi Ponnekanti | Pinterest engineer, Infrastructure
On every Pinner’s profile there’s a count for the number of Pins they’ve both saved and liked. Similarly, each board shows the number of Pins saved to it. At times, Pinners would report that counts were incorrect, and so we built an internal tool to correct the issue by recomputing the count. The Pinner Operations team used the tool to fix inconsistencies when they were reported, but, over time, such reports were growing. It wasn’t only a burden on that team, but it could have caused Pinners to perceive that Pinterest wasn’t reliable. We needed to determine the root of the problem and substantially reduce such reports.
Here I’ll detail why some of those counts were wrong, the possible solutions, the approach we took and the results we’ve seen.
Addressing the problem
After digging into the issue, we found a number of reasons counts appeared wrong, including:
Pinner deactivations: If user A liked a Pin saved by user B, and user B deactivated their account, the Pin was not shown to user A, but the Pin was still included in the count. (Note: there’s a very small number of Pinners who deactivate and reactivate their accounts multiple times.)
Spam/Porn filtering: If a Pin was suspected to have spam or porn content, it wasn’t shown, but was still reflected in the count. The scheme to identify spam/porn content is continually evolving, and domains are added or removed from the suspect list almost every day.
Non-transactional updates: Some counts were updated asynchronously in a separate transaction to optimize the latency of operations such as Pin creation. In some rare failure scenarios, it was possible the count wasn’t updated.
Possible solutions
Option 1: Fix the root causes
We first looked at ideas for fixing the root causes. Whenever a Pinner deactivated/reactivated their account, we could update the counts in the profiles of all Pinners who may have liked their Pins.
Likewise, whenever there’s a change in porn/spam filtering schemes, we could update the counts in the profiles of the owners of those impacted Pins, and in the profiles and boards of the Pinners who’ve liked and saved such Pins.
To address the problem caused by non-transactional updates, we would need to update the count in same transaction, which may increase latency of operations such as Pin creation.
This solution wouldn’t only be expensive, but it also wouldn’t fix the inconsistencies that already exist.
Option 2: Offline analysis and repair
Another approach commonly used in similar situations is offline analysis and repair, where we could dump data from our online databases to Hadoop daily, and they’d be queryable from Hive. We could have an offline job(s) that finds the inconsistencies, and another background job(s) fixing those inconsistencies in the online databases. Since online data could’ve changed after offline analysis was performed, it would need to be revalidated before updating the count in the online database.
We found this to be a good solution and used a similar method to fix inconsistencies in our HBase store. However, based on that past experience, we knew the effort to build it was no small task.
Option 3: Online detection and quick repair
We also thought about online detection and quick repair. If a Pinner detects an inconsistency in the data on their profile, our system should also be able to detect it. For example, if a Pinner scrolls through all of the Pins they’ve liked, the system could check if the count of liked Pins shown matched the displayed count in their profile. This way of detection is simpler than the previous solutions, with small overhead.
Once an incorrect count is detected, we could queue a job to fix the count so we could display the correct count the next time. We already had a framework, PinLater, that could queue arbitrary jobs for later execution. Typically, such jobs run within seconds of queuing.
The job to fix the count would have information about which counter to fix (such as user ID or board ID), as well as information about the stored count (i.e. the displayed count) and the actual count. This job could use the same logic as our internal tool to recompute and fix the counts.
The count fixing job would check if both the stored count and actual count are still same as at the time of queueing. If so, it would update the stored count to the actual count. If the stored count or the actual count are different from what they were at the time of queuing, the count isn’t updated as it indicates some change of state, such as new Pins that may have come in.
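The queue-then-verify pattern described above can be sketched in a few lines of Python. All names here (the `counts` store, `recompute_actual`, the queue) are hypothetical stand-ins; Pinterest's actual PinLater jobs are not public.

```python
# Hypothetical sketch of the online detection and quick-repair pattern.
counts = {"user:42:likes": 10}  # stored (displayed) counts

def recompute_actual(counter_id):
    # Stand-in for recounting from the source of truth (e.g., scanning Pins).
    return 9

repair_queue = []

def detect_and_enqueue(counter_id):
    """If the stored count disagrees with the actual count, queue a fix job
    that remembers both values as seen at queuing time."""
    stored = counts[counter_id]
    actual = recompute_actual(counter_id)
    if stored != actual:
        repair_queue.append((counter_id, stored, actual))

def run_fix_job(counter_id, stored_at_queue, actual_at_queue):
    """Update only if neither count changed since queuing; any change
    (e.g., new Pins) indicates a change of state, so skip the update."""
    if (counts[counter_id] == stored_at_queue
            and recompute_actual(counter_id) == actual_at_queue):
        counts[counter_id] = actual_at_queue

detect_and_enqueue("user:42:likes")
for job in repair_queue:
    run_fix_job(*job)
```

The re-check inside the fix job is what makes the repair safe to run asynchronously: it behaves like a compare-and-swap, refusing to overwrite a counter that moved between detection and repair.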
However, this solution doesn’t detect all inconsistencies. For example, if a Pinner doesn’t click on ‘likes’ or ‘Pins’ in their profile, or if they don’t scroll through their entire list of Pins, it can’t detect inconsistencies. This solution also isn’t appropriate for inconsistencies that are expensive to detect during online reads.
The chosen solution
In the short term, the third solution (online detection and quick repair) was preferred since it fixes existing inconsistencies in the count in near real-time, and is simple to implement. In the longer term, we may still build the second solution.
Results
After deploying and gradually ramping up the solution to repair counts for one percent of users to 100 percent, nearly all Pinner reports about count inconsistencies vanished. The table below shows the number of reports over a period of about 12 weeks, including a few weeks before the launch, the ramp-up period and the weeks following launch.
|
https://medium.com/pinterest-engineering/taming-data-inconsistencies-96ae43ced0ce
|
['Pinterest Engineering']
|
2017-02-21 18:58:25.403000+00:00
|
['Pinterest', 'Infrastructure', 'Engineering', 'Data']
|
|
5,193 |
Hints and Tips for Coursera Google Cloud Platform Fundamentals: Core Infrastructure
|
I recommend the Architecting with Google Cloud Platform (GCP) or Developing Applications with GCP Coursera specializations. Going through these specializations will significantly help you obtain the Google Cloud Architect certification or Google Cloud Developer certification. More importantly, the hands-on labs enable you to build GCP applications.
The course, Coursera Google Cloud Platform Fundamentals: Core Infrastructure, is the first of the five courses in the Cloud Engineering with Google Cloud specialization and the first of the six courses in the Cloud Architecture with Google Cloud Professional Certificate specialization.
Google Cloud Associate Cloud Engineer Certificate and Cloud Architecture with Google Cloud Professional Certificate share the first three classes.
The Coursera Google Cloud Platform Fundamentals: Core Infrastructure class consists of five essential services:
Virtual Machines: Compute Engine (IaaS); User Profiles and Network: IAM and VPC; Storage: BigTable, SQL, Spanner; Containers and Load Balancing: Kubernetes Engine; Supply only the code, all services provided: App Engine (PaaS).
The blog assumes an advanced level of prior knowledge. Below I list terms that are outside the scope of the blog.
The summary is a broader list of Linux/Unix commands, applications, and GCP command-line applications.
Links are given for some general terms you need:
Anthos;
AWS EC2;
Cloud Computing;
Cluster;
data encryption and encryption key management [2];
Docker [4, 5, 6, 7];
key, key code, key management [8];
Security key part or key pair [9].
Creating a free account
You get $300 in credit for 90 days per account.
Using your free account
You will not use your new free account for the labs.
Hint: Anti-spoiler alert. If you think some of the hints are spoilers or cheats, they are not. Please read all the hints.
Tip: Not really a tip, but a feature of GCS: Google wants you to learn how to use GCS.
Tip: Every lab is in Qwiklabs. At the beginning of a Qwiklab, a new account is created without requiring a credit card. You are given full access to the GCP Console, gcloud, and gsutil.
Tip: A Qwiklab is 30 minutes long. Lab sessions can be restarted, whether you fail or succeed. However, services and resources created are deleted at the end of the timed session. The time limit is usually long enough for short experiments.
Tip: You can use Colab with a GPU or a TPU for free. You do not have to pay Coursera $50 per month to use the “free” Qwiklabs.
Google Cloud Platform networking and operational tools and services
Compute Engine Service — The base compute component.
The GCP Compute Engine Service is for launching anything from a single virtual machine to a fleet of clusters of virtual machines.
Tip: You can use other browsers, but all the lab videos use Chrome.
Tip: Know how to use an incognito window in the Chrome browser.
Tip: Using ssh from your local terminal to reach an AWS EC2 instance requires you to:
1. Log in to the AWS console.
2. Open up the specific EC2 key pair; note the key pair and the external IP address (considered one step if you are neurally atypical).
3. Launch a local terminal and log in using ssh, the key pair, and the IP address.
Opening up an ssh session in GCP reduces to one step.
Log in to the GCP console, open up the specific GCS Compute Engine instance dashboard, and click on the ssh icon (considered one step if you are neurally typical).
Tip: Select the tool that starts the lab. The icon you click is at the bottom of the page, after Coursera’s Honor Code box.
Figure 1.
Google Cloud Platform Fundamentals: Core Infrastructure>Week 1>Lab 3: Getting Started with Cloud Storage and Cloud SQL.
Tip: Memorize the table of GCP Cloud storage types and how they differ from each other.
Tip: You need to know the basics of SQL to perform Lab 3 and understand what you are doing.
Tip: Lab 3 is longer and more complex; the timed Lab 3 gives you up to fifty minutes, and you will need all of that time to complete all 6 steps.
Tip: The last two of the six steps are optional.
Tip: If you run out of time but show 15/15 in the orange rectangle located in the top right-hand corner, you will still receive 100% on the quiz.
Summary
Tip: All the labs are clicks and cut-and-paste.
Hint: You spend a considerable amount of time in a Linux/Unix shell. You need to know these Linux/Unix commands: >>, bash, cd, chmod, cp, curl, dd, export, grep, kill, ls, nano, sed, sudo, touch.
Hint: You should know the following network and internet terms: DNS, http, https, IP, TCP, UDP, URL, virtual private network.
Hint: You should know what the GCP shell commands gcloud, gsutil, and kubectl are and when to use each.
Hint: You should be familiar with the following Linux/Unix applications: apt-get, git clone, gzip, Nginx, pip install, python, virtualenv, yum.
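If some of those shell basics are unfamiliar, here are a few hypothetical one-liners illustrating three of them (export, grep, and sed). The variable name PROJECT_ID is only an example, not anything a lab requires:

```shell
# export: set an environment variable, as the labs frequently ask you to do.
export PROJECT_ID="my-demo-project"
echo "$PROJECT_ID"

# grep: filter command output, e.g. when listing instance states.
echo "instance-1 RUNNING" | grep -o RUNNING

# sed: rewrite text in a stream.
echo "hello" | sed 's/hello/world/'
```

Running the three commands prints the project ID, the matched word, and the substituted text, which is most of what the labs expect you to do with them.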
Yeah!!
I hope this blog is of help in going through Coursera Google Cloud Platform Fundamentals: Core Infrastructure.
|
https://medium.com/analytics-vidhya/hints-and-tips-for-coursera-google-cloud-platform-fundamentals-core-infrastructure-42b781bbc311
|
['Bruce H. Cottman']
|
2020-11-08 17:14:45.180000+00:00
|
['Cloud Computing', 'Learning', 'Certification', 'Course', 'Google Cloud Platform']
|
|
5,194 |
We Need to Destigmatise Taking Antidepressants
|
I met up with my mom the other day. I haven’t seen her in years, and it was nice to spend some time with her. I was a little worried about whether I’d be able to make the trip, but luckily I was having a good day.
While eating, she asked me about my mental health. For some background, I’ve had issues my whole life, but it was only this year I opened up about my struggles. Mental illness is stigmatised but even more so in the Black community. Many of us are children to immigrants who rose from nothing, so for us to be depressed “over nothing” is seen as an insult.
To my surprise, she was very understanding and supportive. She understood my hesitation to tell her and promised that when it came to university, she would ensure everything was taken care of. She would also arrange a therapist.
She is great; however, one thing she has been a little iffy about is my taking antidepressants. She mentioned on the phone how she wished I didn’t need to take them, then over our meal suggested I stay off them and see how I do.
|
https://medium.com/an-injustice/we-need-to-destigmatise-taking-antidepressants-7a6598177dd9
|
[]
|
2019-12-08 12:46:33.092000+00:00
|
['Borderline Personality', 'Life Lessons', 'Mental Health', 'Culture', 'Zuva']
|
|
5,195 |
The Sacred Heart of Mormonism. “I am not a God afar off, I am a…
|
“I am not a God afar off, I am a brother and friend;
Within your bosoms I reside, and you reside in me:
Lo! we are One; forgiving all Evil; Not seeking recompense!” — William Blake, Jerusalem: The Emanation of the Giant Albion (c. 1803–1820), ch. 1, plate 4, lines 18–28
The Latter-day Saint temple experience being what it is, I’ve had to learn how to write about “something” without actually writing about “something.”
There are very particular portions of the temple endowment ordinance which participants promise never to disclose, except in a highly particular ritual context (nothing fun, spooky, or sexy, I assure you), but the aura surrounding those things is tremendously taboo for many Mormons. Consider it something like the Pharisaic philosophy of hedging in the Law with additional requirements, not as a means of disrespecting the Law or becoming overly anal about it, but as ensuring that one never violates a tenet of the Law by putting further distance between oneself and the very possibility. I believe a similar impulse, often with the same ad hoc theological rationalization, can be found among many Latter-day Saints. The sensation of “sacred anxiety,” of experiencing a buzzing angst the closer one comes to the “Sacred” (whatever it may be; and there are, of course, varying answers), is prevalent in religious communities with defined organizations, hierarchies, or systematic theologies.
In short: there’s something special at the center, cascading outward in various contingent forms (rituals, texts, organizations, etc.), which derives much of its value from being held beyond the tedium and typical experiences of the everyday community, while yet exerting an oddly demanding influence upon that community. It’s the tension of transcendence and immanence, we could say. But much of what’s special about the Mormon temple seems to lie just beyond that threshold where many (though certainly not all; perhaps not even most) Latter-day Saints are unwilling to go. I can respect that, and not only because I’ve experienced the endowment numerous times myself. I too hold to my promise to not disclose particular symbols used in the ordinance (they’re not hard to find, but I’m uninterested in giving them to you). I think many Latter-day Saints reinforce this sacred boundary with anxiety because it’s easier to believe something awful will happen if you transgress that boundary than to waste away your minutes, hours, and days contemplating in the celestial room at the center of a temple “what it all means” (a practice that is unfortunately discouraged, though for understandable reasons; some temples are quite busy). So many Latter-day Saints may experience not only indignation (understandably) when they happen upon some hidden-cam exposé of their most sacred rituals and symbols on YouTube or the like, but anxiety as well — a kind of palpable (albeit often undefined) fear that something bad could happen. I’ve felt it before, the sense that you’re dealing with something “heavier” than what you typically encounter — a sign that you’re inching toward the Sacred at the heart of a community.
I appreciate Hugh Nibley’s view on this matter of inadvertently brushing the sacred boundary:
“Even though everyone may discover what goes on in the temple, and many have already revealed it, the important thing is that I do not reveal these things; they must remain sacred to me. I must preserve a zone of sanctity which cannot be violated whether or not anyone else in the room has the remotest idea what the situation really is. … No matter what happens, it will, then, always remain secret; only I know exactly the weight and force of the covenants I have made — I and the Lord with whom I have made them — unless I choose to reveal them. If I do not, then they are secret and sacred no matter what others may say or do. Anyone who would reveal these things has not understood them, and therefore that person has not given them away. You cannot reveal what you do not know!” — Hugh Nibley, Temple and Cosmos: Beyond This Ignorant Present (Deseret Book, 1992), 64
While I think sacred anxiety is natural to many religious people with distinct sacred boundaries, it needn’t be such a visceral trauma every time one accidentally brushes the border. Perhaps, to build on Nibley’s idea, sacred anxiety may be somewhat misdirected anyway. Rather than fearing that something terrible may happen were one to disclose these things, perhaps we may simply say that to do so is to deaden them, in a way similar to how removing a light bulb from its socket turns it into nothing more than a cumbersome, useless object. Put another way: perhaps these symbols, to borrow a Buddhist aphorism, are fingers pointing to the moon; to this one may add the characteristically Buddhist warning to not conflate the former with the latter.
Some are rather uncomfortable with the idea of Mormons having “secrets,” assuming that “secret” must mean “sinister,” but I think that’s overly presumptuous already. One popular response many Latter-day Saints have been tempted to give is that the temple is not “secret” but “sacred,” but this gives into the same false dichotomy that something is either “secret/evil” or “open/sacred/good.” That’s a longer conversation, but perhaps we can re-contextualize sacred silence. Rather than silence before an otherwise touchy God who “will not be mocked,” perhaps to simply show the symbols off is to demean them into what we typically demean everything else ostensibly religious into: a quick, one-and-done kind of tech manual you can read quickly, understand, then throw away. Instead, rather than merely didactic manuals (or tedious articles), these symbols are instead meant to trigger experiences of the sacred at once aloof from yet demanding upon a community. To share them openly is to pretend that that is all they are, self-explanatory and at face value; to hold sacred silence is to refuse to attempt the futile task of putting the sacred into the tedious and often problematic language of the everyday — to pretend that we’re dealing with just another piece of trivia we can hear, sigh at, then forget altogether.
It’s an odd bind to be in, because, as I said above, it puts me in the position of being essentially unable to write to other Latter-day Saints (let alone to non-Mormons) about what I believe is essentially the center and foundation of Mormonism (at least within the LDS Church). On the other hand, far from an after-the-fact rationalization haphazardly cobbled together to explain why I can’t talk about this or that, experience has taught me that the Sacred is notoriously difficult to put into words (at least for me!), an experience I read back onto the sacred silence surrounding the Mormon temple experience (rather than the other way around). I’ve dropped tid-bits here and there in my writing, but much of my writing has been only around the topic, if not eventually deferring to conclusions not as answers but as terminus points where I can’t speak much further anyway.
It seems to me that “religious” communities (an odd and frankly problematic category, but we’ll use it for now) typically construct or at least hold onto and hand down symbols — rituals, texts, temples, gods, heroes, stories, etc. — because they encapsulate something of the Sacred of that community, around which they orient themselves on a fundamental level while yet keeping that sacred something far enough away as to not let it be battered by the uncertainty of otherwise transient everyday life. It’s a feeling I get when I hear about human tribes near the end of the Ice Age worshiping gods that look a lot like the animals they hunted or interacted with the most, of Egyptians contemplating the dogs of the deadly desert in the dog-headed god of death Anubis, of the Dogon calling upon Amma who holds all things together, Buddhists carving statues and building temples to the one person or few people to ever achieve the enlightenment they’re after, and so on and so on. These aren’t just passing ideas or places to visit on occasion to “pay respects,” but principles which serve as the “strange attractors” which render the typical chaos of their lives into some intelligible and thus manageable pattern. We build temples to the things we value the very most in our lives, regardless of what others may think, or whether they may strike every member of the community as equally important.
In this vein, I’d wonder what sits at the center of the Latter-day Saint temple and thus at the center of Mormonism (at least in one interpretation of Mormonism within the LDS Church). I don’t think it must be something unique to the LDS Church, unknown to all others, in order to be meaningful; but it’s a question that comes to my mind often. I’m tempted at times to say that at the center of the temple, we could say (at the risk of slipping into too much short-hand), is “Elohim.”
By and large in Latter-day Saint discourse, “Elohim” is defined as a heterosexual, cisgender, monogamous couple (the LDS Church currently acts out that interpretation in sealings, for instance); but while that could work for me in my particular life (as a straight, cisgender, monogamous male), I also know it’s exceptionally exclusionary to so many other good, beautiful, and true lives that take shapes other than my own. To do so is to cast off an entire portion of the body of Elohim. However, I think that, to borrow a cue from Dialogue’s editor Taylor Petrey, Mormonism has a “post-heterosexual” potential. I think Elohim even presumes it, in a way. In any event, Mormonism’s understanding of its central symbols has indeed evolved dramatically since 1830; perhaps this is another direction in which Mormon culture’s interpretations could develop.
While Joseph Smith seems to have largely thought of temple sealings as more or less between men and women (albeit in a polygamous context), he seems to have derived sealings from his larger view of Elohim (or “Eloheim”), a term which he always used in the plural for a council or family of beings. Working from there, perhaps we may say that, rather than an exclusively heteronormative couple, Elohim is instead all human relations as such, containing all and thus not limited to any one of its components. Elohim is that Plural One who sends, generates, or emanates Michael, who is Adam and Eve, and thus all humanity. In other words, Elohim is the most fundamental singularity rendered plural in that it brings each and every one of us into existence from out of itself. Sealings could invite us, then, in a ritual context, to see the Other as a splinter of the infinite Elohim — as a piece of God — and to become woven back together in such a way as to not dilute the authentic uniqueness of either the other or oneself.
As I said, I don’t think this is self-explanatory. Even so, I don’t think we need to limit Elohim purely to human relations. Our lives are composed of so much more — as is Elohim, it would seem. The endowment doesn’t just recollect the interactions of various beings with one another, but of those beings’ interactions with the worlds around them in creation, degradation, temptation, negotiation, redemption, and on and on. “Elohim” names our connections to one another as well as to all else — the earth and all of nature, to the universe. In that view, everything is a fragment of “Elohim,” which in this sense would be, therefore, not just another name for existence or the universe, but for all things simultaneously in their unimaginable and inexhaustible diversity and evolutionary complexity as well as their fundamental unity — always already at one, even if prone to forget that.
Rather than an exclusive focus on individuation — the emergence and articulation of an individual person — or collectivity — the overall life of the community — Elohim may instead inch closer to the concept of transindividuation. As one commentator defines the term:
“…it seems to me that thinking in terms of transindividuation is different than simply another description of the process of individuation; in the sense that transindividuation is always about what exceeds this process, both in the sense of the formation of a collective individuation, or the individuation of collectives, but also the preindividual relations that exceed any individuation. These relations are the basis of transformations both individual and political. This seems to me to be the ultimate test, or gamble, of thinking transindividuation — that it makes it possible to rethink and rearticulate the relationship between individual crises and collective transformation, the constitution of new types of organization, in an age of utter fragmentation and isolation.” — “Talkin’ Transindividuation and Collectivity: A Dialogue Between Jason Read and Jeremy Gilbert,” Capacious: Journal for Emerging Affect Inquiry 1 (4), 2019. (PDF)
Conceiving Elohim as transindividuation (or perhaps we may say intersubjectivity) — as opposed to only either individuality or collectivity, or even the tension between them — invites participants in the LDS temple to contemplate the ways in which they are always already nested in the same ground as all others. Rather than merely defining oneself through negation, in opposition to others, or by uncritically absorbing one’s surroundings, we may instead see ourselves much like Michael and Jehovah: not Gods creating the world from nothing, but persons working with pre-existing materials they themselves did not create and of which they themselves are composed.
In a somewhat Hegelian twist, whatever one may be in particular is only ever because of "all the rest" that one is not — the true infinity against which one defines their finite self, "you" foregrounded against "everything else." Elohim is a symbol which calls one to reconcile with that dialectical tension, rather than to answer it with antagonism, recognizing that were it not for that tension one would not exist at all.
“…infinite, in the sense in which it is taken by this reflection (namely, as opposed to the finite), has in it its Other just because it is opposed to it; that, therefore, it is limited and itself finite. Therefore, if it is asked how the infinite becomes finite, the answer is that there is no infinite which first is infinite and then must become finite or pass on to finitude, but that for itself it is already finite as much as infinite. … Or, rather, this should be said, that the infinite has ever passed out to finitude; that absolutely, it does not exist, by itself and without having its Other in itself …” — G. W. F. Hegel, Science of Logic, tr. by W. H. Johnston and L. G. Struthers (The Macmillan Co., New York, 1929), I, 166 f
Of course, this isn’t an interpretation likely to take hold in Mormon culture (and perhaps it doesn’t need to), but it’s been a fundamental part of my own spirituality for some time now, albeit in varying terminologies and conceptual frameworks. In highly particular terms: Elohim is the totality of existence, that Whole of which everything is a part. If you will, Jesus is the one who lived faithful to that reality in his life of unconditional love. And, by extension, the endowment, as a “putting on” of the identity of Christ, is a ritualized invitation to seek that same experience as Jesus — to realize down to the marrow of your bones that all things and people, just as they are, belong and go together.
|
https://medium.com/interfaith-now/the-sacred-heart-of-mormonism-notes-on-endowment-elohim-and-entanglement-f882a920c7b9
|
['Nathan Smith']
|
2020-01-04 19:10:03.086000+00:00
|
['Spirituality', 'Philosophy', 'Mormon', 'Religion', 'God']
|
5,196 |
Breasts and Eggs
|
The right to question what is not right for us
I was sorting the notes I had taken throughout the reading, and one pattern was obvious: the right to be who we are. As much as the title hinted at the main issues to be explored, it was the stance of Natsuko compared to her sister's (Makiko's) and her niece's (Midoriko's) that challenged my own assumptions about being a woman. Why do we accept what was cast onto us without questioning its legitimacy? On top of that, the unchallenged rules and norms were conceived by a different gender, with the power to make them last for centuries.
Reproductive rights from the lens of a child
I was drawn to the thoughts of Midoriko. As a child, she was quick to judge her mother (Makiko), but each judgment always came with a reason. As International Safe Abortion Day (28 September) is around the corner, it is timely to look at her angst in the context of reproductive rights. Do children have no right to dictate whether they should be born? While her personal experience at school drove her into more internal conflicts, we ought to reflect on how many burdens a child or youth has to put up with because of an unconsidered decision made by the parents. While many would say that having a baby is part of the life process and a big part of a family, have parents thought about why this is treated as normal? If a father or mother says they want to have babies, does that desire justify itself on its own?
|
https://medium.com/fourth-wave/breasts-and-eggs-a35ee30b8e75
|
['Yong Yee Chong']
|
2020-10-06 09:28:58.904000+00:00
|
['Gender Equality', 'Women', 'Feminism', 'Japan', 'Books']
|
5,197 |
Haptics for enhanced UX: designing for all
|
Both masters play the game in their own unique styles. Here are a few takeaways:
1. Haptic is always complementary
Google and Apple both believe that haptic feedback should be complementary. In most cases, haptics can never be a primary means of communication but it can be paired with other audio and visual elements.
2. Use haptics to enhance the usability
Haptics should be used to enhance feedback and convey useful information to your users. They should not be leveraged to decorate interactions or to provide feedback that is unnecessary for the experience. For example, use haptic feedback to indicate a layered or complex gesture, not for a normal button press.
3. Minimize battery consumption
We’ve heard that enabling vibrations can drain the battery, but here’s something you need to know: adjusting a vibration’s intensity and sharpness can reduce the energy required to produce it. The less the motor vibrates, the less energy is consumed.
4. Custom Vibration Patterns
Apple allows you to define your own vibration patterns for your incoming alerts, alarms, etc. For me, it’s a kind of playground where I can define new vibration patterns for my projects and feel how they behave in my hands. This may come in handy for you if you’re an iPhone user.
5. 3D Touch Gestures
Again, this is an Apple-only feature. 3D Touch, a more sensitive successor to Force Touch, works using capacitive sensors integrated into the display. With 3D Touch, people can access additional functionality by applying varying levels of pressure to the touchscreen.
Does vibration alone enhance the user experience?
Maybe. In most cases, haptics can never be a primary means of communication but it can be paired with other audio and visual elements.
In certain cases where haptics are the only mode of communication, we should be more deliberate about how and where we use them — for example, when a device has its sound turned off or is incapable of producing sound, or for accessibility features like TalkBack.
It is a good idea to avoid…
1. Trying to fit where it won’t
It is true that haptics can enhance the user experience, but when they make the user uncomfortable or irritated, check whether adding a visual cue or sound to complement them fixes the problem. If that doesn’t work as expected, it is better to avoid haptics there.
2. Avoid conflict with system patterns
Operating systems like iOS and Android have already defined a set of vibration patterns in their OS interfaces. As designers/developers, we are still allowed to create haptic patterns of our own, but we must ensure our patterns won’t conflict with the system patterns, as they may serve different intentions.
3. Skipping the Usability Test
You must always test your haptic designs before releasing them to real users. Testing helps us identify the places where the triggered vibrations disrupt other parts of the product experience. The earlier you test, the better the product will be.
|
https://uxdesign.cc/haptics-for-enhanced-ux-9f7e2f3c0e
|
['Avinash Bussa']
|
2020-10-02 14:51:46.743000+00:00
|
['Design', 'Interaction Design', 'Vibration', 'User Experience', 'Haptics']
|
5,198 |
Messenger Bot using Flask and API.AI
|
Setting up API.AI
API.AI is a conversational experiences platform. Register and create an account at API.AI, then create an agent to access the test console. Navigate to the agent's settings and get the Client Access Token.
Creating an intent
An intent is a mapping between a user expression and the desired response. We can train which user inputs fit which intent. We can also use built-in parameters within the user input and set context. This helps us categorise the user inputs.
Setting up the Flask Server
Add the client access token, page access token, and verify token. A Facebook page has to be created for the bot, and that page's access token is needed. The verify token can be any string of the developer's choice.
CLIENT_ACCESS_TOKEN = 'your_client_access_token_from_api_ai'
PAGE_ACCESS_TOKEN = 'your_facebook_page_access_token'
VERIFY_TOKEN = 'verification_token_for_facebook_chatbot'
Webhooks
Facebook sends a webhook verification request to the application; once verification is complete (i.e., the token returned by the application matches the token entered in the Facebook app), subsequent requests reach the application server.
All messages sent via Facebook Messenger are then received and handled on the application server.
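The verification handshake described above can be sketched as a small, framework-agnostic helper (the helper name and return shape are mine, not the article's — in the Flask route you would return these values from the view function). Facebook sends a GET request carrying hub.mode, hub.verify_token, and hub.challenge query parameters, and the server must echo hub.challenge back only when the token matches:

```python
VERIFY_TOKEN = 'verification_token_for_facebook_chatbot'

def verify_webhook(args):
    """Return (body, status) for Facebook's GET verification request.

    `args` is the dict of query parameters (e.g. flask.request.args).
    """
    if (args.get('hub.mode') == 'subscribe'
            and args.get('hub.verify_token') == VERIFY_TOKEN):
        # Token matches: echo the challenge so Facebook accepts the webhook.
        return args.get('hub.challenge'), 200
    # Token mismatch: reject so no unverified caller can register the webhook.
    return 'Verification token mismatch', 403
```

Only after this handshake succeeds does Facebook start POSTing user messages to the same endpoint.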
Parse message
The application server receives the message, but the intent is yet to be recognised. So we use the Python library apiai to get the intent. To use apiai, the client access token is needed.
The application sends the user's message to API.AI and receives a response that contains the recognised intent along with other information.
If everything is alright, response['status']['code'] should be 200, and response['result']['metadata']['intentName'] contains the intent.
After knowing the intent, we can do actions on it according to our requirements.
We can not only parse texts but can also parse events. We can also set context and assign session id to the request.
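As a hedged sketch of the step above: the first helper queries API.AI with the legacy apiai Python SDK (the calls follow that library's documented usage), and the second pulls the intent name out of the JSON response using the fields quoted earlier. Both helper names are mine, not the article's:

```python
import json

def query_apiai(text, session_id, client_access_token):
    """Send user text to API.AI and return the parsed JSON response."""
    import apiai  # pip install apiai
    ai = apiai.ApiAI(client_access_token)
    request = ai.text_request()
    request.session_id = session_id  # groups messages into one conversation
    request.query = text
    return json.loads(request.getresponse().read().decode('utf-8'))

def extract_intent(response):
    """Return the matched intent name, or None if the call failed."""
    if response.get('status', {}).get('code') == 200:
        return response.get('result', {}).get('metadata', {}).get('intentName')
    return None
```

With the intent name in hand, the server can branch to whatever action the bot should take.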
Sending back message to Facebook user
To send a message back to the Facebook user, call the Facebook Graph API using the Python requests library.
sender_id is present in the request body.
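A hedged sketch of that reply call: the Graph API endpoint below is the one commonly paired with this Messenger setup (the version segment may differ in your app), and the helper names are mine, not the article's. Building the payload is separated out so it can be inspected on its own:

```python
FB_MESSAGES_URL = 'https://graph.facebook.com/v2.6/me/messages'

def build_reply(sender_id, text):
    """Messenger Send API payload: who the message goes to and what it says."""
    return {'recipient': {'id': sender_id}, 'message': {'text': text}}

def send_reply(sender_id, text, page_access_token):
    """POST the reply to the Graph API, authenticated with the page token."""
    import requests  # pip install requests
    return requests.post(FB_MESSAGES_URL,
                         params={'access_token': page_access_token},
                         json=build_reply(sender_id, text))
```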
Make a Flask app which contains the above routes (webhooks) and methods.
Activate a virtual environment, install all dependencies and run the Flask app:
$ virtualenv .env
$ source .env/bin/activate
$ pip install -r requirements.txt
$ python flask_app.py
Set up tunnelling to localhost
We can use ngrok to expose the local webserver to the internet so that it can serve the callback verification needed to use a webhook with a Facebook app.
$ wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip   (for Linux)
$ wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-darwin-amd64.zip  (for Mac OS)
$ unzip /path/to/ngrok.zip
$ ./ngrok http <port>
The ngrok port should be the same as the port the Flask app is listening on.
Please note that a secure callback URL (https) is needed for verification.
Set up facebook messenger
Create a Facebook page and a Facebook app, and add the Webhook product to the app.
Add Messenger to the app and generate a token for the page that will be used for chat.
This page access token is used in the Flask application to send messages back to the Facebook user.
Enable webhook integration with the callback URL and verify token.
The callback URL should be the ngrok URL, and the verify token should be the same as the one defined in the Flask app.
Select events for page subscription
|
https://medium.com/ymedialabs-innovation/messenger-bot-using-flask-and-api-ai-f34f6e2eb6e6
|
['Rahul Nayak']
|
2018-01-26 04:26:54.326000+00:00
|
['Flask', 'Backend', 'Python', 'Apiai', 'Bots']
|
|
5,199 |
Multinomial Logistic Regression In a Nutshell
|
Now that we understand multinomial logistic regression, let’s apply our knowledge. We’ll be building the MLR by following the MLR in the graph above (Figure 1).
Data
Our data will be the Fashion MNIST dataset from Kaggle. The dataset is stored as a DataFrame with 60,000 rows, where each row represents an image. The DataFrame has 785 columns: the first column holds the label of the image (Figure 3.2), and the remaining 784 columns contain the pixel values of each training image (Figure 3.1). Each pixel value ranges from 0 to 255.
Figure 3.1. Sample image of a shirt from the training set.
Figure 3.2. Labels
Task:
Split the DataFrame into DataFrame X and DataFrame Y
Convert DataFrame X to an array
One-hot encoding Y values and convert DataFrame Y to an array
We use a one-hot encoder to transform the original Y values into one-hot encoded Y values because our predicted values will be probabilities. I will explain this in the next step.
Figure 4. Given X- and Y-values and desired X- and Y-values
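The three tasks above can be sketched in NumPy as follows. This is a hedged sketch, not the article's attached code: split_and_encode is an illustrative name, and the input is assumed to be the raw (n, 785) array with the label in the first column, as described above.

```python
import numpy as np

def split_and_encode(data, n_classes=10):
    """data: array of shape (n, 785); the first column holds labels 0..9.

    Returns the pixel matrix X and the one-hot encoded label matrix Y."""
    labels = data[:, 0].astype(int)
    X = data[:, 1:]
    Y = np.eye(n_classes)[labels]  # one 0/1 row per sample
    return X, Y
```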
Score & Softmax
Task:
Compute the score values
Define an activation function
Run the activation function to compute errors
Looking at Figure 1, the next step would be computing the dot product between the vectors containing features and weights. Our original weight vector will be an array of 0s because we do not have any better values. Don’t worry, our weight will be constantly updating as the loss function is minimized. The dot product is called the score. This score is the deciding factor that predicts whether our image is a T-shirt/top, a dress or a coat.
Figure 5. Softmax function. Photo credit to Wiki Commons
Before we utilize the score to predict the label, we have two problems. Remember that we one-hot encoded our Y values because our predicted values are probabilities? Our current scores are not probability values: they contain negative values and do not range from 0 to 1. So we need to apply the Softmax function to normalize the scores. This exponent-normalization function converts our scores into positive values and turns them into probabilities (Figure 5).
In an array of probability values, one per possible class, the argmax of the probabilities gives the predicted Y value. For example, in an array of 10 probabilities, if the 5th element has the highest probability, then the image label is a coat, since the 5th element in the Y values is the coat (Figure 3.2).
Figure 6. Score and Softmax functions in Python
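The article's Figure 6 shows its own implementation; as a hedged NumPy version of the same two steps, the score is the dot product of features and weights, and softmax exponent-normalizes each row of scores into probabilities. The max-subtraction is a standard numerical-stability trick, not something the article mandates.

```python
import numpy as np

def score(X, W):
    """Dot product of features and weights: one score per class."""
    return X.dot(W)

def softmax(scores):
    """Exponent-normalize each row of scores into probabilities."""
    shifted = scores - scores.max(axis=1, keepdims=True)  # numeric stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)
```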
Gradient Descent & Loss Function
Task:
Define a gradient function
Define a loss function
Optimize the loss function
After the Softmax function computes the probability values in the initial iteration, it is not guaranteed that the argmax matches the correct Y value. We need to iterate multiple times until we are confident about our argmax. To validate our argmax, we need to set up a loss function. We will use cross-entropy loss.
Figure 7. Cross-entropy loss in Python
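The article's Figure 7 shows its own code; a hedged NumPy equivalent of the cross-entropy loss it describes is the mean negative log-likelihood of the true classes. The small eps guard against log(0) is an assumption of this sketch.

```python
import numpy as np

def cross_entropy(Y, probs, eps=1e-12):
    """Mean negative log-likelihood of the true classes.

    Y: one-hot targets; probs: softmax outputs of the same shape."""
    return -np.mean(np.sum(Y * np.log(probs + eps), axis=1))
```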
The way to maximize correctness is to minimize the cross-entropy loss. To do that, we will apply gradient descent; specifically, stochastic gradient descent. Stochastic gradient descent follows the same idea as regular gradient descent. The term "stochastic" means random: instead of taking the gradient over all samples, we randomly select a sample and calculate the gradient for that sample only. The purpose of stochastic gradient descent is to decrease the amount of computation per iteration and save time.
Figure 8. Gradient descent and stochastic gradient descent formulas
In order to achieve randomness, we shuffle the order of the X array (a permutation). Each time we sample an image from the X array, we compute the stochastic gradient and update the weights. The updated weights then move the loss function toward its minimum. Each full pass over the training data is known as an epoch. Typically, more epochs lead to better results since there is more training involved; however, too many epochs lead to overfitting. Choosing a good number of epochs depends on the loss values; there is an article that talks about how to choose a good number of epochs here.
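The shuffle-sample-update loop described above can be sketched compactly; this is an illustrative implementation, not the article's attached code, and the learning rate, epoch count, and sgd_train name are assumptions.

```python
import numpy as np

def sgd_train(X, Y, lr=0.1, epochs=5, seed=0):
    """Stochastic gradient descent on softmax cross-entropy.

    X: (n, d) features; Y: (n, k) one-hot labels. Returns (d, k) weights."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], Y.shape[1]))   # start from all-zero weights
    for _ in range(epochs):
        for i in rng.permutation(len(X)):    # shuffle the sample order
            x, y = X[i:i + 1], Y[i:i + 1]
            s = x.dot(W)
            s -= s.max()                     # numeric stability
            p = np.exp(s) / np.exp(s).sum()  # softmax probabilities
            W -= lr * x.T.dot(p - y)         # gradient of cross-entropy
    return W
```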
Train & Test
Task:
Define a training set and a test set
Train our samples
Visualize our loss values
Now that we have optimized the loss function, let's test the model on our data. Our training sample has 60,000 images; we will use 80% of the images as the train set and the other 20% as the test set, following the Pareto principle. While we fit the model, let's keep track of the loss values to see whether the model is working correctly.
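The 80/20 split and the accuracy check used below can be sketched as two small helpers; the function names are illustrative, and accuracy compares the argmax of the scores against the argmax of the one-hot labels, as described earlier.

```python
import numpy as np

def train_test_split(X, Y, test_frac=0.2, seed=0):
    """Shuffle indices and cut off the last test_frac as the test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], Y[tr], X[te], Y[te]

def accuracy(W, X, Y):
    """Fraction of samples whose argmax score matches the true label."""
    preds = X.dot(W).argmax(axis=1)
    return (preds == Y.argmax(axis=1)).mean()
```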
Figure 9. Losses after iterations
We can clearly see that the value of the loss function is decreasing substantially at first, and that’s because the predicted probabilities are nowhere close to the target value. That means our loss values are far from the minimum. As we are getting close to the minimum, the error is getting smaller because the predicted probabilities are getting more and more accurate.
Accuracy
After fitting the model on the training set, let’s see the result for our test set predictions.
Figure 10. Prediction result
It looks like our accuracy is about 85% (Figure 11), which is not so bad.
Figure 11. Accuracy Score
Challenge
Now that we have our initial predictions, see if you can improve the accuracy by adjusting the parameters in the model or adding more features. Tuning parameters like the learning rate and the number of epochs is a good place to start. I've attached the code and datasets for you to play around with. Enjoy!
|
https://medium.com/ds3ucsd/multinomial-logistic-regression-in-a-nutshell-53c94b30448f
|
['Wilson Xie']
|
2020-12-11 09:30:25.988000+00:00
|
['Neural Networks', 'Python', 'Logistic Regression', 'Data Science', 'Machine Learning']
|
|