4,900 |
My ‘To Be Played’ List is Out of Control so Naturally I Started Playing Skyrim Again
|
It’s kinda hard to believe that Skyrim came out 9 years ago. In some ways, it seems like it’s been here forever. Certainly it’s remained in the forefront of my gaming consciousness. Maybe its assumed permanence is some side effect of its ubiquity – it’s been ported practically everywhere. And I’ve been there for each iteration:
on the Xbox 360 at launch in 2011.
the legendary edition on PC a few years later.
when it arrived on the Xbox One in 2016, with mod support.
on the Switch in 2017 – Skyrim on the go!
Thanks to Xbox’s streaming functionality, I’ve even played Skyrim briefly on my phone.
Skyrim was the last game I waited in a line to purchase on the day it released; the occasion remains unique in that I clearly remember the mostly uneventful transaction nine years later.
Most games are launched on Tuesdays, but November 11, 2011 was a Friday. I believe they launched on Friday because of the cool 11/11/11 date. I was appreciative of it for less symbolic reasons: I’d taken the day off from work, which meant I had 3 uninterrupted days of play ahead of me.
There were probably a half dozen of us queued up outside the GameStop in the mall, waiting for employees to roll up the gate at 10 AM and let us have at thee. My wife had come along good-naturedly, even though I’m sure there were other things she’d rather be doing on a day off. The kids had been shuffled off to school and we were waiting to buy a videogame.
I clearly remember thinking that we were probably the only people who had to adjust our schedules to be there, a pair of 30-year-olds surrounded by college kids. The guy in front of us was a portly twenty-something wrapped in a green denim duster, greasy hair askew at awkward angles. He kept rattling the cage and demanding the employees “gimme my skrim”, like an imprisoned madman yelling at the guards. He quieted once we were let inside, but renewed his demands with vigor once he had the complete attention of the cashier. I honestly believe it’s the only thing he said the whole time.
My own transaction was notable only in that it was quick and without any ranting.
My first character was a burly Nord in the style of Conan the Barbarian. He carried a giant sword and eschewed magic. I don’t recall his name, but before I stopped playing him we’d conquered the dragon threat, reclaimed the north for the Nords, and risen to the top of the Companions. I purchased the DLC but put the game aside before I made any headway.
Later forays into Skyrim came through selective mods, which enhanced, improved, or modified the gameplay, freshening up an experience I’d already sunk several hundred hours into. I no longer cared about the Dragonborn storyline, one of the game’s weakest. Rather, I just kinda wandered, let the game find me, and pursued some of the quests I’d not previously attempted.
Last week I fired up Skyrim for the first time in at least a year, reviving a game with a date stamp of 2017. Thus was Mister Whiskers reborn from the digital ashes and turned loose on a world in search of heroes once again.
But Mister Whiskers is nobody’s hero. Hence all the tomb raiding.
|
https://medium.com/fan-fare/my-to-be-played-list-is-out-of-control-so-naturally-i-started-playing-skyrim-again-40877fc233c2
|
['Eric Pierce']
|
2020-12-28 15:08:01.291000+00:00
|
['Gaming', 'Pop Culture', 'Skyrim', 'Xbox', 'Writing']
|
4,901 |
Dissociative Identity Disorder
|
An Explanation in Layman’s Terms
Photo by Elijah Hiett on Unsplash
There are many psychological disorders which, if you are not in the know, may seem obscure or strange. Dissociative identity disorder (DID) is one such condition. The public’s view of DID has been shaped by what they see in movies and popular television programming. Many folks, upon hearing the many myths about the disorder, find it to be frightening or even something to be desired. The facts about DID need to be discussed openly so that people can be better informed about the realities of what dissociative identity disorder is and how it affects those who live with it. Awareness is the driving force behind this article, which offers a brief explanation of dissociative identity disorder, written in layman’s terms.
Dissociation
Dissociation is a fancy word for “zoning out”, and all humans do it. In fact, dissociation is the human brain’s way of dealing with, among other things, overwhelming circumstances and boredom. A good example of a common dissociative incident, one that most people will find familiar, is the movie theater experience.
You go to the theater to see a movie you have been looking forward to for months. You sit down in an empty row with your popcorn and soda, and the movie begins. Soon, you get thoroughly engrossed in the film’s plot. After the movie ends, you are surprised at the late hour. Not only this, but you suddenly become aware that there are people sitting beside you who weren’t there when you began watching the movie, and that you have eaten your popcorn and drunk your soda. You have little recollection of the other people seating themselves beside you, or of eating and drinking your treats.
When Dissociation Goes Terribly Wrong
Photo by Jurica Koletić on Unsplash
The human mind is a marvelous complex of organized thought. However, sometimes we are met with circumstances that are too hard to deal with in any organized manner. When we experience these overwhelming circumstances, we utilize what are termed defense mechanisms.
Dissociation is one of these defense mechanisms, and it works well when we find ourselves overwhelmed or bored. When we find ourselves in stressful or boring situations, we simply “check out”, or dissociate, until our intellect is needed once more. Dissociative identity disorder is this defense mechanism taken to the extreme, where dissociation becomes a life-changing obstacle. With dissociative identity disorder, the human ability to dissociate causes one to become disconnected from one’s thoughts, feelings, and memories. In this state, the survivor is protected from what their mind has determined to be overwhelming circumstances. This is all done unconsciously, just as in the movie theater example.
While dissociation can be a wonderful coping mechanism when one is in danger, or bored, it can also be very destructive. The life of a person living with Dissociative Identity Disorder is full of destroyed friendships, ended romantic relationships, lost jobs, and many other important factors in life that most take for granted. Sometimes survivors can lose their sense of right and wrong, while dissociated and get in trouble with the police. Another effect might be financial problems due to alters who do not understand that credit and debit cards are not bottomless.
The Alternate Ego States
The hallmark of dissociative identity disorder, and its best-known symptom among the public, is the presence of alternate ego states (alters).
Ego states are a normal function of the human mind and are found in everyone. We form a new ego state with each new experience, to be triggered when we experience something similar in the future. This enables us to know how to cope with the new situation by drawing on what we did previously. In most people, these states of consciousness can communicate with one another, and together they form what is perceived as a cohesive personality, with a running timeline of the events in that person’s life.
Like other humans, persons living with dissociative identity disorder form new ego states in new situations to help cope with similar situations in the future. However, many of their ego states were formed during highly traumatic and emotionally charged past events. This sets up the survivor for the perfect storm. When triggered by a stimulus that reminds the survivor’s brain of a long-ago traumatic event, the brain registers that event as happening in the now. Since the event from the past was highly traumatic, the survivor utilizes the coping mechanism of dissociation to escape. In this disconnected state, the old ego state doesn’t merely tell the person how to cope; it is forced to take over.
Since the events that forced the creation of these ego states were supercharged with emotion and fear, the states have become separated by amnesiac walls for self-protection and self-preservation. These barriers prevent proper communication between the ego states, and as a result, the events that happened during the original trauma, as well as the events that happen while the person is in a dissociated state, are not communicated to the original personality or to each other. The survivor experiences the consequences of these dissociative events through people telling them about things they have said or done that they do not remember. This is frightening and becomes very disruptive to their lives. Their personality has become so fragmented that the experience of having a continuous timeline of life events is lost.
Time Loss
Photo by Jon Tyson on Unsplash
Time loss is another hallmark of dissociative identity disorder, described by those who live with it as their number one enemy. The effect of lacking the reassurance that they will not dissociate and awaken hours, days, or years later is staggering. To understand this phenomenon, one must first speak of how most humans experience time.
Most people experience time as the illusion of it linearly passing from moment to moment. Although they may not remember every event of each day, because their lives run in a predictable sequence, there is the comforting feeling of knowing pretty much what has happened in any given hour. Triggers that activate an ego state are all around, but the average person experiences them as fond reminiscences of pleasant times in their past. These trips down memory lane do not last long, and often leave the one experiencing them feeling warm and fuzzy. As such, they do not disrupt the experience of their timeline.
People who live with dissociative identity disorder do not have this luxury.
To a survivor, triggers do not feel warm and fuzzy. They are experienced as flashbacks to horrendous events in the person’s past, and this memory thread, in turn, activates a disconnected ego state. The result is a dissociative event, which means that the time experienced by the person living with dissociative identity disorder is chopped up and not continuous. Their timeline is splintered and experienced in leaps and jumps which can sometimes span hours, days, or even years. One cannot overstate how disruptive this effect can be on a person’s life.
Defeating the Stigma Surrounding DID
People who live with dissociative identity disorder face many obstacles in their lives, but the one most agree is the hardest is the stigma. According to Webster’s New World Dictionary, the short definition of stigma is “a mark of disgrace or reproach or a perceived negative attribute that causes someone to devalue or think less of the whole person.” Unfortunately, stigma is often thought of as being synonymous with shame, and this thinking keeps many who are living with a severe mental health diagnosis like DID from seeking and receiving the help they need.
Receiving a diagnosis such as dissociative identity disorder is difficult enough. One must not only own the horrible facts of the past, but also grieve over a lost childhood and any illusion that it was normal. To have to face family, friends, and co-workers who shun or shame them is an enormous burden. There is staggering pain involved in working on the issues that caused the disorder to form, and it takes many years of very hard work to overcome their effects on life today. Very often people who live with DID face devastating isolation due to the horrific stigma involved with their disorder. These innocent victims of a disorder they do not want and did not cause are ridiculed, mocked, and feared by society.
The Exploitation of DID
There is a popular movie, released in 2017, which helps perpetuate the myth that people struggling with this disorder are endowed with supernatural powers and are murderous, psychotic killers. One must agree that the character in the movie makes for a fantastic horror story, but the fear it instills in the public’s thoughts about people who live with DID is more than troubling. The truth is that most people who experience dissociative identity disorder are much more likely to be victims of crime than perpetrators. Yes, in every population demographic there is a certain percentage of people who are criminally minded or even dangerous. However, the portrayal of people living with DID in the movies has historically been tilted to show them as insane or worse. There are even questions posed regularly on social media forums wondering whether people who have developed dissociative identity disorder can climb walls.
The answer to these inquiries and wonderings is a lot less glamorous than Hollywood would like. In a dissociated state, persons living with dissociative identity disorder are extremely vulnerable to being mugged, raped, or murdered. The main reason for this vulnerability is that the ego state in control during the dissociative event does not understand or comprehend the complexities of society, or how to protect itself. Because of this, they can easily fall prey to unscrupulous people.
The Stigma Must Be Combated
Photo by Mag Pole on Unsplash
There are ways to combat this public misunderstanding, which has been fueled by using DID as a money-making venture in the media. One way is to openly discuss the realities of dissociative identity disorder, having people whose lives are restricted by its effects tell what it has done to their lives. Putting a face on the disorder, and allowing the public to glimpse the tragic ways persons living with DID struggle daily, may help end the stigma.
Another way, one that may be much harder to accomplish, is to press the media to address these realities before their films are shown in theaters. A short clip, explaining that the film is fiction and that persons who live with dissociative identity disorder experience life in a much different way than what is about to be portrayed, can help remind moviegoers that what they are seeing is make-believe and not factual.
Not Weird or Strange
The main objective of this piece has been to help people understand that survivors living with dissociative identity disorder are not weird or strange. They are ordinary people who have taken the human ability to escape overwhelming trauma through dissociation to a higher level.
Indeed, many would have gone insane or died were it not for the very human ability to flee into their minds.
The next time you are involved in a discussion where a myth is being propagated about dissociative identity disorder, speak up. Do not remain silent. The tragic effects on the lives of survivors demand that we all dispel the myths and misinformation rampant in society today. After all, we’re not talking about animals, demons or monsters. We are speaking of human beings who have survived overwhelming odds. They deserve dignity and respect.
|
https://shirleydavis-23968.medium.com/dissociative-identity-disorder-31ca16c3fbef
|
['Shirley J. Davis']
|
2019-08-18 15:47:19.333000+00:00
|
['Truth', 'Stigma', 'Mental Health', 'Did', 'Laymen']
|
4,902 |
Modern Reflections of the Past
|
I can still recall his face and on good days, his voice. My grandfather was my favorite person. He was more than a grandfather — he was the sturdy tree we would climb on when we were children, tossing us over his shoulder and making us laugh until we were exhausted. He was the smell of summer and the excitement that came with visiting his beach house each weekend.
My grandfather died when I was in high school; it was the longest day of my life so far.
When I first heard about The History Project and began planning our workshop series, I couldn’t help but think about him: the stories I now make up, the places my imagination goes while looking at old photos.
Working with The History Project and the team was an incredible and reflective experience. I was instantly moved by Niles’s story and all that the platform could do, organically making you think of memories that may have been tucked away in the sock drawer of our minds.
I am the Program Associate of Senior Planet, the country’s first technology-themed senior center. My job allows me to work closely with the thousands of members who come through our door on a monthly basis, which is the most rewarding part of my job. Working with The History Project introduced me to a group of our members on a deeper level. I was able to hear their stories, their memories — their lives. What occurred before they stumbled upon our center, what brought them to us, to me.
I think technology is so important to seniors; it gives them the opportunity to reflect on some of their favorite life moments. No matter what age you are, technology reminds us of our stories. The History Project and similar tools bring something personal to technology, leaving us all hungry, not just for the past, but for the power of memories and how we can create more of them.
|
https://medium.com/the-history-project/modern-reflections-of-the-past-72611168aecb
|
['Emily Guzewicz']
|
2016-08-09 15:53:26.018000+00:00
|
['Storytelling', 'History']
|
4,903 |
How to Add Scroll to Top Feature in Your Vue.js App
|
Photo by Mark Rasmuson on Unsplash
If a page has a long list, then it is convenient for users if the page has an element that scrolls to somewhere on the page with one click. In plain JavaScript, there are the window.scrollTo and element.scrollTo functions, which take the x and y coordinates of the target position on the page as parameters, which isn’t too practical for most cases. There’s also the scrollIntoView function available on DOM element objects; calling it scrolls the page to the element it’s called on.
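For reference, minimal plain-DOM versions of both calls look something like this (the coordinates and element ID below are illustrative):

// Scroll the window to an absolute (x, y) position on the page.
window.scrollTo(0, 500);

// Or scroll a specific element into view, animating where the browser supports it.
const el = document.getElementById("top");
if (el) {
  el.scrollIntoView({ behavior: "smooth" });
}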
With Vue.js, we can do this easily with the Vue-ScrollTo directive located at https://github.com/rigor789/vue-scrollTo. It allows us to scroll to an element identified by its ID and also add animation to the scrolling. It makes implementing this feature easy.
In this article, we will build a recipe app that has tooltips to guide users on how to add recipes with a form. Users can enter the name of their dish, the ingredients, the steps, and upload a photo. On each entry, there will be a ‘Scroll to Top’ button to let the user scroll back to the top automatically by clicking it. We will build the app with Vue.js.
We start building the app by running the Vue CLI. We run it by entering:
npx @vue/cli create recipe-app
Then select ‘Manually select features’ and pick Babel, Vue Router, Vuex, and CSS Preprocessor from the list. After that, we install a few packages: Axios for making HTTP requests to our back end, BootstrapVue for styling, V-Tooltip for the tooltips, Vue-ScrollTo for scrolling, and Vee-Validate for form validation. We install them by running npm i axios bootstrap-vue v-tooltip vee-validate vue-scrollto.
Now we move on to creating the components. Create a file called RecipeForm.vue in the components folder and add:
<template>
  <ValidationObserver ref="observer" v-slot="{ invalid }">
    <b-form @submit.prevent="onSubmit" novalidate>
      <b-form-group
        label="Name"
        v-tooltip="{
          content: 'Enter Your Recipe Name Here',
          classes: ['info'],
          targetClasses: ['it-has-a-tooltip'],
        }"
      >
        <ValidationProvider name="name" rules="required" v-slot="{ errors }">
          <b-form-input
            type="text"
            :state="errors.length == 0"
            v-model="form.name"
            required
            placeholder="Name"
            name="name"
          ></b-form-input>
          <b-form-invalid-feedback :state="errors.length == 0">Name is required.</b-form-invalid-feedback>
        </ValidationProvider>
      </b-form-group>

      <b-form-group
        label="Ingredients"
        v-tooltip="{
          content: 'Enter Your Recipe Description Here',
          classes: ['info'],
          targetClasses: ['it-has-a-tooltip'],
        }"
      >
        <ValidationProvider name="ingredients" rules="required" v-slot="{ errors }">
          <b-form-textarea
            :state="errors.length == 0"
            v-model="form.ingredients"
            required
            placeholder="Ingredients"
            name="ingredients"
            rows="8"
          ></b-form-textarea>
          <b-form-invalid-feedback :state="errors.length == 0">Ingredients is required.</b-form-invalid-feedback>
        </ValidationProvider>
      </b-form-group>

      <b-form-group
        label="Recipe"
        v-tooltip="{
          content: 'Enter Your Recipe Here',
          classes: ['info'],
          targetClasses: ['it-has-a-tooltip'],
        }"
      >
        <ValidationProvider name="recipe" rules="required" v-slot="{ errors }">
          <b-form-textarea
            :state="errors.length == 0"
            v-model="form.recipe"
            required
            placeholder="Recipe"
            name="recipe"
            rows="15"
          ></b-form-textarea>
          <b-form-invalid-feedback :state="errors.length == 0">Recipe is required.</b-form-invalid-feedback>
        </ValidationProvider>
      </b-form-group>

      <b-form-group>
        <input
          type="file"
          style="display: none"
          ref="file"
          @change="onChangeFileUpload($event)"
        />
        <b-button
          @click="$refs.file.click()"
          v-tooltip="{
            content: 'Upload Photo of Your Dish Here',
            classes: ['info'],
            targetClasses: ['it-has-a-tooltip'],
          }"
        >Upload Photo</b-button>
      </b-form-group>

      <img ref="photo" :src="form.photo" class="photo" />
      <br />
      <b-button type="submit" variant="primary">Submit</b-button>
      <b-button type="reset" variant="danger" @click="cancel()">Cancel</b-button>
    </b-form>
  </ValidationObserver>
</template>

<script>
import { requestsMixin } from "@/mixins/requestsMixin";

export default {
name: "RecipeForm",
mixins: [requestsMixin],
props: {
edit: Boolean,
recipe: Object
},
methods: {
async onSubmit() {
const isValid = await this.$refs.observer.validate();
if (!isValid || !this.form.photo) {
return;
}
if (this.edit) {
await this.editRecipe(this.form);
} else {
await this.addRecipe(this.form);
}
const { data } = await this.getRecipes();
this.$store.commit("setRecipes", data);
this.$emit("saved");
},
cancel() {
this.$emit("cancelled");
},
onChangeFileUpload($event) {
const file = $event.target.files[0];
const reader = new FileReader();
reader.onload = () => {
this.$refs.photo.src = reader.result;
this.form.photo = reader.result;
};
reader.readAsDataURL(file);
}
},
data() {
return {
form: {}
};
},
watch: {
recipe: {
handler(val) {
this.form = JSON.parse(JSON.stringify(val || {}));
},
deep: true,
immediate: true
}
}
};
</script>

<style>
.photo {
  width: 100%;
  margin-bottom: 10px;
}
</style>
In this file, we have a form to let users enter their recipe. We have text inputs and a hidden file input to let users upload a photo. We use Vee-Validate to validate our inputs. We use the ValidationObserver component to watch the validity of the whole form inside it, and ValidationProvider to check the validation rule of the value entered into the input inside it. Inside each ValidationProvider, we have a BootstrapVue input for the text input fields.
Each form field has a tooltip with additional instructions. The v-tooltip directive is provided by the V-Tooltip library. We set the content and the classes of the tooltip here, and we can set other options like the display delay, the position, and the background color of the tooltip. A full list of options is available at https://github.com/Akryum/v-tooltip.
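For instance, to show a tooltip above its target after a short delay, we could write something like the following (the placement and delay values here are illustrative, not part of the recipe form):

<b-form-input
  v-tooltip="{
    content: 'Enter Your Recipe Name Here',
    placement: 'top',
    // delay, in milliseconds, before showing and hiding the tooltip
    delay: { show: 300, hide: 100 },
    classes: ['info'],
  }"
></b-form-input>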
The photo upload works by letting users open the file-upload dialog with the Upload Photo button, which triggers a click on the hidden file input. After the user selects a file, the onChangeFileUpload function is called. In this function, a FileReader object sets the src attribute of the img tag to show the uploaded image, and also sets the this.form.photo field. readAsDataURL reads the image into a data-URL string so we can submit it without extra effort.
This form is also used for editing recipes, so we have a watch block to watch for the recipe prop, which we will pass into this component when there is something to be edited.
Next we create a mixins folder and add requestsMixin.js into the mixins folder. In the file, we add:
const axios = require("axios");
const APIURL = "http://localhost:3000";

export const requestsMixin = {
  methods: {
    getRecipes() {
      return axios.get(`${APIURL}/recipes`);
    },
    addRecipe(data) {
      return axios.post(`${APIURL}/recipes`, data);
    },
    editRecipe(data) {
      return axios.put(`${APIURL}/recipes/${data.id}`, data);
    },
    deleteRecipe(id) {
      return axios.delete(`${APIURL}/recipes/${id}`);
    }
  }
};
These are the functions we use in our components to make HTTP requests to get and save our data.
Next in Home.vue, replace the existing code with:
<div class="page" id='top'>
<h1 class="text-center">Recipes</h1>
<b-button-toolbar class="button-toolbar">
<b-button
</b-button-toolbar> Recipes @click ="openAddModal()" variant="primary">Add Recipe <b-card
v-for="r in recipes"
:key="r.id"
:title="r.name"
:img-src="r.photo"
img-alt="Image"
img-top
tag="article"
class="recipe-card"
img-bottom
>
<b-card-text>
<h1>Ingredients</h1>
<div class="wrap">{{r.ingredients}}</div>
</b-card-text> <b-card-text>
<h1>Recipe</h1>
<div class="wrap">{{r.recipe}}</div>
</b-card-text> <b-button
href="#"
v-scroll-to="{
el: '#top',
container: 'body',
duration: 500,
easing: 'linear',
offset: -200,
force: true,
cancelable: true,
x: false,
y: true
}"
variant="primary"
>Scroll to Top</b-button> @click ="openEditModal(r)" variant="primary">Edit
</b-card> @click ="deleteOneRecipe(r.id)" variant="danger">Delete
<RecipeForm
</b-modal> @saved ="closeModal()" @cancelled ="closeModal()" :edit="false" />
<RecipeForm
:edit="true"
:recipe="selectedRecipe"
/>
</b-modal>
</div>
</template> @saved ="closeModal()" @cancelled ="closeModal()":edit="true":recipe="selectedRecipe"/> <script>
// @ is an alias to /src
import RecipeForm from "@/components/RecipeForm.vue";
import { requestsMixin } from "@/mixins/requestsMixin";

export default {
name: "home",
components: {
RecipeForm
},
mixins: [requestsMixin],
computed: {
recipes() {
return this.$store.state.recipes;
}
},
beforeMount() {
this.getAllRecipes();
},
data() {
return {
selectedRecipe: {}
};
},
methods: {
openAddModal() {
this.$bvModal.show("add-modal");
},
openEditModal(recipe) {
this.$bvModal.show("edit-modal");
this.selectedRecipe = recipe;
},
closeModal() {
this.$bvModal.hide("add-modal");
this.$bvModal.hide("edit-modal");
this.selectedRecipe = {};
},
async deleteOneRecipe(id) {
await this.deleteRecipe(id);
this.getAllRecipes();
},
async getAllRecipes() {
const { data } = await this.getRecipes();
this.$store.commit("setRecipes", data);
}
}
};
</script>

<style scoped>
.recipe-card {
  width: 95vw;
  margin: 0 auto;
  max-width: 700px;
}

.wrap {
  white-space: pre-wrap;
}
</style>
In this file, we have a list of BootstrapVue cards to display the recipe entries and let users open and close the add and edit modals. We have buttons on each card to let users edit or delete each entry. Each card shows the image of the recipe at the bottom, uploaded when the recipe was entered. For the scroll-to-top functionality, we use the v-scroll-to directive provided by the Vue-ScrollTo library. To make scrolling smooth, we set the easing property to linear. Also, we set the duration of the scroll to 500 milliseconds. el is the selector of the element we want to scroll to. Setting force to true means that scrolling will be performed even if the scroll target is already in view. Setting cancelable to true means that the user can cancel scrolling. x set to false means that we don’t want to scroll horizontally, and y set to true means we want to scroll vertically. container is the selector for the container element that will be scrolled, and offset is the offset in pixels applied when scrolling. The full list of options is at https://github.com/rigor789/vue-scrollTo.
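If we ever need to trigger the same scroll from code rather than markup (say, after saving a recipe), Vue-ScrollTo also exposes a programmatic API once the plugin is registered. A minimal sketch reusing the options above; the scrollBackToTop method name is just for illustration:

methods: {
  scrollBackToTop() {
    // this.$scrollTo(target, duration, options) mirrors the v-scroll-to directive's options.
    this.$scrollTo("#top", 500, { easing: "linear", offset: -200 });
  }
}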
In the scripts section, we have the beforeMount hook to get all the recipe entries during page load with the getRecipes function we wrote in our mixin. When the Edit button is clicked, the selectedRecipe variable is set, and we pass it to the RecipeForm for editing.
To delete a recipe, we call deleteRecipe in our mixin to make the request to the back end.
The CSS in the wrap class is for rendering line break characters as line breaks.
Next in App.vue, we replace the existing code with:
<template>
<div id="app">
<b-navbar toggleable="lg" type="dark" variant="info">
<b-navbar-brand to="/">Recipes App</b-navbar-brand>
<b-navbar-toggle target="nav-collapse"></b-navbar-toggle>
<b-collapse id="nav-collapse" is-nav>
<b-navbar-nav>
<b-nav-item to="/" :active="path == '/'">Home</b-nav-item>
</b-navbar-nav>
</b-collapse>
</b-navbar>
<router-view />
</div>
</template>

<script>
export default {
data() {
return {
path: this.$route && this.$route.path
};
},
watch: {
$route(route) {
this.path = route.path;
}
}
};
</script>

<style lang="scss">
.page {
  padding: 20px;
  margin: 0 auto;
  max-width: 700px;
}

button {
  margin-right: 10px !important;
}

.button-toolbar {
  margin-bottom: 10px;
}

.tooltip {
  display: block !important;
  z-index: 10000;

  .tooltip-inner {
    background: black;
    color: white;
    border-radius: 16px;
    padding: 5px 10px 4px;
  }

  .tooltip-arrow {
    width: 0;
    height: 0;
    border-style: solid;
    position: absolute;
    margin: 5px;
    border-color: black;
  }

  &[x-placement^="top"] {
    margin-bottom: 5px;

    .tooltip-arrow {
      border-width: 5px 5px 0 5px;
      border-left-color: transparent !important;
      border-right-color: transparent !important;
      border-bottom-color: transparent !important;
      bottom: -5px;
      left: calc(50% - 5px);
      margin-top: 0;
      margin-bottom: 0;
    }
  }

  &[x-placement^="bottom"] {
    margin-top: 5px;

    .tooltip-arrow {
      border-width: 0 5px 5px 5px;
      border-left-color: transparent !important;
      border-right-color: transparent !important;
      border-top-color: transparent !important;
      top: -5px;
      left: calc(50% - 5px);
      margin-top: 0;
      margin-bottom: 0;
    }
  }

  &[x-placement^="right"] {
    margin-left: 5px;

    .tooltip-arrow {
      border-width: 5px 5px 5px 0;
      border-left-color: transparent !important;
      border-top-color: transparent !important;
      border-bottom-color: transparent !important;
      left: -5px;
      top: calc(50% - 5px);
      margin-left: 0;
      margin-right: 0;
    }
  }

  &[x-placement^="left"] {
    margin-right: 5px;

    .tooltip-arrow {
      border-width: 5px 0 5px 5px;
      border-top-color: transparent !important;
      border-right-color: transparent !important;
      border-bottom-color: transparent !important;
      right: -5px;
      top: calc(50% - 5px);
      margin-left: 0;
      margin-right: 0;
    }
  }

  &[aria-hidden="true"] {
    visibility: hidden;
    opacity: 0;
    transition: opacity 0.15s, visibility 0.15s;
  }

  &[aria-hidden="false"] {
    visibility: visible;
    opacity: 1;
    transition: opacity 0.15s;
  }
}
</style>
to add a Bootstrap navigation bar to the top of our pages, and a router-view to display the routes we define. Also, we have the V-Tooltip styles in the style section. This style section isn’t scoped, so the styles will apply globally. In the .page selector, we add some padding to our pages and set max-width to 700px so that the cards won’t be too wide. We also added some margins to our buttons.
Next in main.js, we replace the existing code with:
import Vue from "vue";
import App from "./App.vue";
import router from "./router";
import store from "./store";
import BootstrapVue from "bootstrap-vue";
import VTooltip from "v-tooltip";
import VueScrollTo from "vue-scrollto";
import "bootstrap/dist/css/bootstrap.css";
import "bootstrap-vue/dist/bootstrap-vue.css";
import { ValidationProvider, extend, ValidationObserver } from "vee-validate";
import { required } from "vee-validate/dist/rules";
extend("required", required);
Vue.component("ValidationProvider", ValidationProvider);
Vue.component("ValidationObserver", ValidationObserver);
Vue.use(BootstrapVue);
Vue.use(VTooltip);
// Register Vue-ScrollTo so the v-scroll-to directive used in Home.vue is available.
Vue.use(VueScrollTo);

Vue.config.productionTip = false;

new Vue({
  router,
  store,
  render: h => h(App)
}).$mount("#app");
We added all the libraries we need here, including the BootstrapVue JavaScript and CSS, the Vee-Validate components along with the required validation rule, the V-Tooltip directive we used in the components, and the Vue-ScrollTo plugin that registers the v-scroll-to directive used in Home.vue.
In router.js, we replace the existing code with:
import Vue from "vue";
import Router from "vue-router";
import Home from "./views/Home.vue";

Vue.use(Router);

export default new Router({
mode: "history",
base: process.env.BASE_URL,
routes: [
{
path: "/",
name: "home",
component: Home
}
]
});
to include the home page in our routes so users can see the page.
And in store.js, we replace the existing code with:
import Vue from "vue";
import Vuex from "vuex";

Vue.use(Vuex);

export default new Vuex.Store({
state: {
recipes: []
},
mutations: {
setRecipes(state, payload) {
state.recipes = payload;
}
},
actions: {}
});
to add our recipes state to the store so we can observe it in the computed block of the Home component. We have the setRecipes mutation to update the recipes state, and we use it in the components by calling this.$store.commit("setRecipes", data); like we did in RecipeForm.
Finally, in index.html, we replace the existing code with:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width,initial-scale=1.0" />
<link rel="icon" href="<%= BASE_URL %>favicon.ico" />
<title>Recipe App</title>
</head>
<body>
<noscript>
<strong
>We're sorry but vue-tooltip-tutorial-app doesn't work properly without
JavaScript enabled. Please enable it to continue.</strong
>
</noscript>
<div id="app"></div>
<!-- built files will be auto injected -->
</body>
</html>
to change the title.
After all the hard work, we can start our app by running npm run serve.
To start the back end, we first install the json-server package by running npm i json-server. Then, go to our project folder and run:
json-server --watch db.json
In db.json, change the text to:
{
  "recipes": []
}
So we have the recipes endpoints used in requestsMixin.js available.
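With that collection in place, json-server automatically generates the REST routes our mixin calls (on its default port 3000):

// GET    /recipes      -> list all recipes   (getRecipes)
// POST   /recipes      -> create a recipe    (addRecipe)
// PUT    /recipes/:id  -> replace a recipe   (editRecipe)
// DELETE /recipes/:id  -> delete a recipe    (deleteRecipe)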
After all the hard work, we have a working recipe app.
|
https://medium.com/swlh/how-to-add-scroll-to-top-feature-in-your-vue-js-app-9adf799a04c1
|
['John Au-Yeung']
|
2019-11-28 15:36:38.455000+00:00
|
['Technology', 'Programming', 'Software Development', 'JavaScript', 'Vuejs']
|
4,904 |
Designing for Peace of Mind
|
Growing up is one of the few experiences that everyone can relate to. And whether it’s a distant memory or a more recent one, a hallmark of childhood is that healthy push-pull between freedom and safety. We want breathing room for exploration and growth, but also parameters within which to feel safe and respected. How those are defined and enacted differs for every family, however, so when a group of people at Microsoft came together to build the Family Safety app, you can imagine the breadth and complexity of that purview.
It wasn’t just about identifying core needs and corresponding solutions; it was also about creating a design framework that would be robust and flexible enough to apply atop any family’s needs. Ultimately, the app’s ability to drive productive conversations proved central to their design thinking.
Conversations support family achievement in ways that are effective, sustainable, and collaborative because they foster co-creation and respect the agency of kids and adults alike. Short or long-term goals, screen time, grades, driving — these are among the many topics that families come together to define, discuss, and plan for.
From a design perspective, we can create experiences that support and spark those conversations. Conversations where you co-define the environments you want to create, habits you want to build, and adjustments you might need to make while en route, because, as 2020 has reminded us in ALL CAPS, life is unpredictable. Through building better habits and conversations, the app hopes to provide families with more peace of mind.
Creating emotive and inclusive visual designs
|
https://medium.com/microsoft-design/designing-for-peace-of-mind-defebeddd6da
|
['Cayla Dorsey']
|
2020-08-06 18:07:37.597000+00:00
|
['Family', 'Safety', 'Technology', 'Microsoft', 'Design']
|
4,905 |
WordPress: One of the Best Content Management Systems
|
WordPress is an impressive content management system (CMS) built upon a PHP / MySQL foundation. By all counts, it is the most utilized CMS in the world, installed on some 60 million plus websites and used by over 20% of the top 10 million websites.
WordPress enables people to maintain and manage website content through a web-based “backend” system. It provides relatively simple mechanisms for creating blog posts and static pages, then organizing those posts and pages into categories and menus. It also provides a nice media library tool useful for managing the image assets in use within your site. The media library handles technical overhead like thumbnail generation.
For developers, WordPress delivers well-documented mechanisms for extending its core functionality through a theming architecture, a plugin architecture, function overrides, and action hooks & filters. This flexibility has given birth to a huge community of developers who have contributed themes and plugins that help people customize their site and get more functionality out of it. And of course, expert WordPress developers can use this functionality to build wholly unique themes and features for their clients and employers.
Pros:
Provides a working CMS “out of the box”
Simple customizations are pretty easy to do
Lots of themes and plugins to satisfy common needs
Relatively fast, especially with performance plugins
Great for SEO with SEO plugins
Non-technical people can maintain and manage the site content
Great for news, opinion, magazine, and small business marketing sites where the focus is on content
It’s free and open source!
Cons:
Its flexibility enables inexperienced developers to implement bad practices that can make upgrades difficult or impossible
Not well suited for use with highly custom sites where unique functionality is the focus
Not well suited for use in situations with many different structured document types
Does not naturally take advantage of many of the latest techniques and technologies in web development
Is WordPress Right for Me?
Great question, and, of course, it depends. The question of which platform to develop your project on is an important one, and should be carefully considered with respect to the vision for your site.
If you expect your site’s success will be driven primarily by the content you (or your writers) produce, and that content will have a relatively simple organizational structure, then WordPress is probably a great platform for your site.
On the other hand, if you expect the primary value of your site to be in its unique features, or you have complex data structures, or expect lots of growth in unpredictable ways, then you may want to consult with some experts and consider alternative platforms.
Endertech’s WordPress Tips & Tricks
Use the Yoast SEO Plugin
For most sites, SEO is a very important consideration. You need to be producing content for your site that searchers will find relevant, and you need to structure the content so that search engines know what the page is about. The Yoast SEO plugin provides both tools and a lot of helpful guidance to assist in these matters.
Use the W3 Total Cache Plugin
Performance is a critical component of your site’s success. People are used to the speed of Google and other top echelon sites, and can quickly get frustrated with slow performing sites. Many of the techniques for achieving top speeds are quite sophisticated, and the W3 Total Cache Plugin does a great job of simplifying their implementation and providing an interface for switching various performance features on and off.
Use Child Themes
Many amateur developers, just trying to do things quick and easy, will hack away at installed themes and core files to get the desired look or function for a WordPress site. This is the wrong approach. The proper way to extend a theme is by following the established process for creating a Child Theme. This pattern will prevent you from hacking on core files, and enable the underlying WordPress installation to be safely updated.
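As a rough sketch (the theme names here are placeholders, not from the original article), a child theme starts as its own folder containing a style.css whose header names the parent theme’s directory:

/*
Theme Name: My Child Theme
Template:   twentyseventeen
*/

The Template line must match the parent theme’s folder name exactly. Once activated, your customizations (template overrides, extra styles, added functions) live in the child folder, so updating the parent theme or WordPress core never touches them.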
Don’t Hack the Core
Similar to using Child Themes… don’t modify core WordPress files. This breaks future compatibility, and can introduce serious security concerns. Extend functions within your Child Theme, or create your own plugins for more advanced functionality. Be aware that the best WordPress plugins oftentimes provide hooks you can use to override default functionality with minimal effort.
Interact with Us! → Instagram | Facebook | Twitter
See our latest and most read stories on our new Medium publication, Endertech Insights!
|
https://medium.com/endertech-insights/wordpress-one-of-the-best-content-management-systems-ef1273b673ef
|
[]
|
2017-09-12 18:55:20.862000+00:00
|
['WordPress', 'Content Marketing', 'Development', 'Web Development', 'SEO']
|
4,906 |
Corporate Storytelling
|
Corporate Storytelling
Preserving stories of companies & businesspeople
Corporate Books
We have broad experience in producing high quality books for institutions & corporations. From the history of a company to the professional trajectory of an entrepreneur, an executive or the dean of a school. Here are some samples of our work.
“Flash of Genius”
The history of Andrew Corporation
Founded in 1937 in Chicago, Illinois, the Andrew Corporation was the most important manufacturer of satellite antennas. A few years ago the company was sold, and in order to preserve the legacy of his grandparents, and that of the company, Edward Andrew Jr. hired MSB Storytelling to produce “Flash of Genius.” Browse the book online
A life in the company
A retirement surprise present
The CEO of Carlson Wagonlit Travel, Geoffrey Marshall, receiving his surprise retirement book.
Companies such as American Express, Aegon, Pfizer, Carlson-Wagonlit among others hired MSB Storytelling to produce a surprise retirement book as a present to honor top executives retiring from their companies.
Watch Geoffrey Marshall testimonial
A hundred years, thousands of lives
Schools and Colleges anniversary books
We have produced several books for educational institutions celebrating their 100- and 50-year anniversaries: Belgrano Day School (100 years), St. Agnes School (50 years) and St. Hilda’s College (100 years). We also made many retirement surprise books for their Deans and Directors. Watch a General Director testimonial.
THE STEEL COVER BOOK
Company Anniversary Book
Guidi Industries, a metallurgical engineering company that produces metal parts for the automotive industry, celebrated its 70th anniversary. At MSB Storytelling we designed an original anniversary book with molded steel covers.
|
https://medium.com/msb-storytelling/corporative-storytelling-454b5b8ed2a5
|
['My Special Book']
|
2018-02-09 19:51:34.578000+00:00
|
['Anniversay', 'Corporate Books', 'Retirement Gift', 'Storytelling']
|
4,907 |
Warby Parker’s Online Vision Test Provides Clues About The Difficulty Of Disrupting Healthcare
|
Can the healthcare industry be disrupted? Warby Parker is one of the companies that is certainly trying. It has created a platform on which customers can take an online vision test and get prescription glasses ordered and delivered. Founded in 2010 by four classmates from Philadelphia’s Wharton School of Business, Warby Parker is rumored to be valued at about $1 billion. There has also been some excitement about Amazon’s recent moves into the pharmacy business. Can these new entrants, with a technology background, succeed in these industries?
When we think of disruption, we are often impressed by companies that develop new technologies and clever business models. But successful disruption requires more than that. In fact, disrupting industries such as healthcare, education and transportation will require more than just clever products, services and business models — companies will have to think more deeply about their business environment and work hard to influence policy decision-making.
The Third Wave
In the book The Third Wave, Steve Case identifies three waves of internet technologies. In the first wave (1985–1999), companies like Cisco, IBM, AOL and others were building the necessary technologies for the internet. Their work mostly involved laying the foundations of the online world that we have today. In the second wave (2000–2015), the app economy and mobile revolution took hold. Companies like Amazon, Google, Facebook, Twitter and others were able to leverage the foundations built in the first wave to create great products and services.
There is evidence that this second wave has peaked and the next technology wave is now starting to take place. According to Steve Case, during the third wave (2016-present), the internet will be fully integrated into everything we do and every product we use. In the era of the internet of everything, industries such as healthcare, education, transportation and food production will be impacted by ubiquitous connectivity.
Beyond Innovation
All this is exciting news if you are an emerging startup, but caution needs to be taken. During the second wave, successful startups were created on top of infrastructures that had already been built during the first wave. A high school senior could create a disruptive startup from their bedroom. This was the era of incubators, accelerators, minimal investments and rapid growth.
The third wave is much more similar to the first wave. Disrupting incumbent companies during the third wave will be a lot more difficult and expensive. Innovators will have to make larger financial investments, form partnerships with other companies and influence government policy. These requirements play well into the inbuilt advantages of large companies, who have the resources to invest in the R&D required to create healthcare products.
When it comes to resources, Warby Parker is not exactly poor. As already noted, the company is rumored to be worth $1 billion. However, in the United States, there are currently 11 states in which the use of online vision testing is illegal. People wanting to buy new prescription glasses have to go and see an optometrist. In these states, the business environment matters more than Warby Parker’s clever technology and business model. The company will only succeed in these states if it can somehow get this legal policy changed.
The Business Environment
Uber has faced similar challenges as it has tried to scale in Europe. It has been banned in, or has voluntarily pulled out of, several countries such as Denmark, Greece and Hungary. This is not because its product or service is awful. In my opinion, Uber has a great business model — perhaps one of the best in the way it leverages technology to deliver on-demand transportation services. However, in order for Uber to scale globally into every country, it has to work with policy makers within those geographies. The business environment matters more for Uber than its clever technology and business model.
Airbnb has also faced similar challenges with regulations in different cities across the world. Tesla and other electric car makers are on track to change how we drive over the next decade. But it is still a fact that the global success of companies working in this industry will depend on policy changes and significant infrastructure developments. Beyond how great the products are, the business environment matters.
On Disruption
The healthcare industry can and will be disrupted. The same is true for education and transportation. But the companies that will succeed are those that are not just focused on their technology and business models. The companies that will succeed will be those that will invest time, energy and resources into changing their business environment. This will involve challenging vested interests and getting governments and other institutions to change their policies and laws. This is not a job for the high school senior working from their bedroom.
This article was first published on Forbes, where Tendayi Viki is a regular contributor. Tendayi Viki is the author of The Corporate Startup, an award-winning book on how large companies can build their internal ecosystems to innovate for the future while running their core business.
|
https://tendayiviki.medium.com/warby-parkers-online-vision-test-provides-clues-about-the-difficulty-of-disrupting-healthcare-b9fee46fe4a4
|
['Tendayi Viki']
|
2018-11-05 07:31:01.676000+00:00
|
['Innovation', 'Entrepreneurship']
|
4,908 |
How to Select All <div> Elements on a Page using JavaScript
|
How to Select All <div> Elements with jQuery
For completeness, I think it is important to look at how you would select all <div> elements using the popular jQuery library, in case you start working on a project using jQuery or just prefer jQuery’s syntax.
Though I don’t usually use jQuery myself, I find it’s useful to be aware of jQuery syntax so I know what I’m looking at when I see it. While jQuery’s popularity has been decreasing in favor of native syntax and frameworks like React or Vue, you’ll commonly see it on older projects.
Needing to know jQuery syntax is especially useful when looking at legacy code that someone might not have updated in the last five or ten years.
The jQuery syntax is a little simpler, though not by much: $("div") is the jQuery equivalent of document.getElementsByTagName("div").
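To make the equivalence concrete, here is a minimal sketch (assuming jQuery has already been loaded on the page):

// Native DOM API: returns a live HTMLCollection of every <div>
var nativeDivs = document.getElementsByTagName("div");

// jQuery: selects the same elements, wrapped in a jQuery object
var jqueryDivs = $("div");

console.log(nativeDivs.length === jqueryDivs.length); // logs true

Both calls find the same set of elements; only the container type differs.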
Here is the API documentation on jQuery’s element selector:
“Description: Selects all elements with the given tag name. → element: An element to search for. Refers to the tagName of DOM nodes. JavaScript’s getElementsByTagName() function is called to return the appropriate elements when this expression is used.” — jQuery Docs
As you can see, $("div") is actually just a wrapper for the JavaScript function we already covered — syntactic sugar for convenience purposes.
Here is a brief code example, where I load jQuery from the console and then use it to select all the <div> elements on the page.
View the raw code as a GitHub gist
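The gist itself isn’t reproduced here, but a sketch of that console experiment could look like the following; the CDN URL and version are my assumptions, not necessarily what the gist used:

// Inject jQuery into the current page from a CDN (URL and version assumed)
var script = document.createElement("script");
script.src = "https://code.jquery.com/jquery-3.5.1.min.js";
script.onload = function () {
  // Once the script loads, jQuery is available as the global $
  console.log("jQuery found " + $("div").length + " <div> elements");
  console.log("Native API found " + document.getElementsByTagName("div").length + " <div> elements");
};
document.head.appendChild(script);

Paste this into the developer console on a page that doesn’t already load jQuery, and both counts should match.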
Note that jQuery’s $("div") method selects exactly the same elements as the getElementsByTagName() call, though it returns them wrapped in a jQuery object rather than as an HTMLCollection — the jQuery selector is just a wrapper around the native JavaScript syntax that makes it more concise.
|
https://medium.com/datadriveninvestor/how-to-select-all-div-elements-on-a-page-using-javascript-9b2cd16af740
|
['Dr. Derek Austin']
|
2020-11-19 20:03:26.272000+00:00
|
['Software Engineering', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
|
4,909 |
It Gets Dark Before the Light
|
“Where thoughts go, energy flows.” — a spiritual proverb
You were told not to be stupid, but you were already smart.
They convinced you that you needed to create an identity, but you were born perfect.
They said that you needed to exceed the average. It was a lie. Average people don’t exist in the real world.
You were unique. You were sorry that you were different.
They said they loved you, but they didn’t know how. (They tried their best.)
They demanded that you fight their demons for them because they failed. You tricked yourself into believing they were your demons.
If you were a boy, you needed to make enough money, said the people who failed at love.
If you were a girl, you were too fat or your hair was too curly, said the sad people who failed at love.
You bought the clothes. You bought the makeup. You bought the house. You bought everything that anyone would ever want, just in case.
You thought you were kind. No one bought it.
The ground beneath your feet began to shift violently. Doubt began to seep. You stopped believing you were smart. The days became long.
You took every relationship that had shown you unconditional love and smashed it to pieces.
You sighed with relief. There was nothing left to screw up.
You were finally alone. You let yourself cry. You stopped struggling. Everything became very still.
You thought you would die but your heart kept beating. It didn’t care what you thought.
Your heart stopped apologizing. It became quiet. You were still breathing.
You saw the upside to the blackness of the night. If you fell, you knew you could get up. It was hard at first.
You stopped fretting and you started exploring. You practiced falling.
From the dark, there comes the light. Thank god for the dark. Merry Christmas, everyone, and a happy New Year.
Call to action
Hit the ❤ button if you liked this article! You’ll help others find it.
Sign up for my free weekly newsletter (packed with thoughts I don’t share anywhere else) at janehwangbo.com.
|
https://medium.com/personal-growth/it-gets-dark-before-the-light-1bff1d97d2d8
|
['Jane Hwangbo']
|
2017-04-06 21:52:13.265000+00:00
|
['Love', 'Life Lessons', 'Entrepreneurship', 'Life', 'Poetry']
|
4,910 |
My Own Time
|
Photo credit: Pixabay
So between my pending surgical procedures and my mother-in-law’s recent diagnosis of T-cell lymphoma, I find myself “working” from home again. I helped my client find a full-time caregiver since I am no longer able. Ok, so I haven’t exactly figured out where the money will come from yet, however, I have been working hard, taking more classes in copywriting, doing little projects and sending work out here and there.

Suddenly though, my family seems to think that just because I no longer work outside the house, that I have all the time in the world! “You aren’t doing anything now, you should be able to go shopping when you want, take Claudette to all of her appointments, etc.” Uh, no. What I said was, if I was able to, I wouldn’t mind doing what I can for whomever needs it to keep my husband, or brother-in-law from missing any work. That doesn’t mean I’m at everyone’s beck and call 24/7!

I’m trying to write here, I’m working on my own time, fulfilling my dream of becoming a freelance writer, not twiddling my thumbs at home. And actually, it isn’t my whole family making me feel that way, just my sister-in-law. She really doesn’t get it I guess, that what I do here IS a job. Even if I haven’t made any big money yet, just pennies really. But someday, I’ll have it all worked out.

Alright, just had to get that off my chest, rant over. Anyone else ever feel that way? I know you do, I’ve heard it before from many of you writers out there. How do you deal? What do you say to these non-believers that your work is justifiable? Me, I’m just waiting until that day the check arrives in the mail, or I finally sign that first client. Then I can show her and say, “Told ya.”
|
https://medium.com/100-naked-words/my-own-time-30e7b4268abe
|
['Kim Smyth']
|
2017-10-12 19:01:01.134000+00:00
|
['100 Naked Words', 'Freelance Writing Jobs', 'Work Life Balance', 'Writing']
|
4,911 |
A girl born with 4 legs and 2 vaginae!!?
|
Often, when people tell someone about themselves, they mention their date of birth, their wedding date, or similar details of their life. But have you ever heard of a person listing their body parts in an introduction?
Something like: 'I have two hands, one eye and two ears.' It sounds utterly absurd, yet there is someone who has to do exactly that.
She is different from the rest of the world, and in her introduction she has to say, 'I have four legs, not two.'
This is Myrtle Corbin
Myrtle Corbin was born in Tennessee, United States. Her birth surprised everyone. The very innocent and lovely Myrtle came into this world with only one difference from everyone else: her four legs. Yes, Myrtle had not two but four legs; two in the usual position, and the other two attached at the middle of her body.
According to doctors, the two extra legs were weak, and Myrtle could not fully control them. They were shorter and more delicate than her other legs.
These feet belong to none other than Myrtle
According to doctors, the middle two legs were not her own, but belonged to her conjoined twin sister. This may be hard to understand and may seem strange, but it is true.
The twin attached to Myrtle's body was a sister who never came into the world as a separate person.
Doctors say that sometimes twins develop conjoined: the body of one grows to full size, while only some parts of the other's body remain, joined to the first child. This means that Myrtle had a twin sister in her mother's womb, but only Myrtle was born, carrying her twin sister's legs with her.
This world is amazing
It is strange and bizarre to be born carrying someone else's body parts. According to doctors, Myrtle could control her unborn sister's legs, but it was very challenging for her to use them while walking. It was also said that each of those two inner legs had only three toes. This strange feature also made Myrtle famous all over the world. When she was only 13 years old, a biography was written about her life, named
'Biography of Myrtle Corbin’.
Myrtle also married
Myrtle also had a sister named Willey Ann, who was married in 1885 to a boy named Locke Bunnell. Locke had a brother, Doctor James Clinton Bicknell, who proposed to Myrtle shortly after his brother's marriage.
Myrtle and James’s marriage proclaims true love.
There is one more important point here: not only Myrtle but also her unborn sister had sexual organs. That is, not one but two vaginas were present in Myrtle's body.
Myrtle is said to have given birth to eight children, three of whom died in childhood.
It has also been said that some of Myrtle's children were born from her own vagina and the others from her twin's. Whether this is true or not, it is considered medically possible.
This world truly is a miracle.
Just think: how would you feel if you had 4 legs, or 3 hands, or 3 eyes, or 4 ears?
I am amazed. Are you?
|
https://medium.com/illumination/a-girl-born-with-4-legs-and-2-vaginae-62c3b8cafefc
|
['Nilesh Mithiya']
|
2020-12-19 19:26:38.310000+00:00
|
['Mystery', 'Girls', 'Biology', 'Vagina', 'Science']
|
4,912 |
Digital radio brings tight targeting and happy listeners
|
Comedians Lee and Dean, who present special shows for Fix Radio in May 2018.
One of the benefits of digital platforms is that you can launch new, niche radio stations there.
My daughter goes to sleep listening to a children’s radio station on her digital radio, as one example. The programming, from Sydney’s Kinderling Kids Radio, is safe and age-appropriate — and successfully relaxes her for a good night’s rest.
Another example is a Chicago start-up, Quantum Music, which is running online radio stations for the seven million Chinese ex-pats who live in North America. There’s clearly a need for a station like this — but one that probably couldn’t justify an FM frequency everywhere.
And then, in the ultra-competitive market of London, where a typical listener can find over 110 radio stations on their DAB receiver, there’s Fix Radio.
See what they did there
Ask anyone who’s had work done on their home or office recently, and they’ll tell you that builders, plumbers, plasterers and the rest are heavy radio listeners, right through their work day. Fix Radio — tagline “We’re nailing it!” — is a radio station specifically aimed at builders and tradespeople.
The station, available on DAB, online and through an app, has just celebrated its first birthday, and is growing both in listenership and in advertising — describing itself as an “important platform to the construction industry’s biggest brands”.
Listener marketing in a city which is often cold and damp has been well-targeted and simple: the “Bacon Butty Tour” consists of brightly-coloured vans which drive to building sites across London; they have so far handed out over 20,000 hot butties to promote the station. Craftily, the vans visit clients, too.
Butty delivery van. White, naturally.
If the weather is important to the marketing of the station, it’s also important content. Detailed weather updates keep outdoor workers informed — laying concrete in the rain isn’t a good plan, it turns out — and the station’s website nicely contains specific weather forecasts for some of London’s most busy construction areas.
Programming is nicely tailored to the audience, who listen long and often to the station. There’s a “no-repeat guarantee” for the music they play, which is all up-tempo. There’s daily construction industry news, and sports news; lots of “music marathons” as well as focused chat in drivetime periods.
The station also airs a weekly show with advice on everything from tool thefts to tax advice and doing your own accounts; and as any good station does, it supports a charity that is relevant to its listeners — a construction industry charity, in this case.
New platforms are, essentially, making these types of radio stations possible. Stations aimed at this tight demographic couldn’t operate in the limited world of FM spectrum; but with digital, radio stations can target much more tightly.
Fix Radio seems to be a good example of what you can do with a clear focus on your audience, and a digital platform to reach them.
|
https://jamesrcridland.medium.com/digital-radio-brings-tight-targeting-and-happy-listeners-4c99665198b9
|
['James Cridland']
|
2018-04-29 01:28:22.375000+00:00
|
['Radio', 'Targeting', 'Marketing', 'Digital', 'Content']
|
4,913 |
Cover the King
|
Cover the King
Author’s note- This story is an homage to the genre of mystery/crime short that once appeared in magazines like Ellery Queen or Arthur Hitchcock Magazine.
It was while Bethanne was constructing the timeline for the homicide detective that she declared that everything that occurred that morning stemmed from an innocuous invitation the year before.
“I should have known then, that we were here on borrowed time,” she said.
“Known when?” asked Detective Simpson.
“Right after we moved into the complex. Our neighbors, the Rileys came to introduce themselves. They said we’d have to come over for a night of cards sometime. I should have known right then and there that no good would come of it. Not that I could have stopped it, of course. Saying the words “cards” to Dick was like saying “cheesecake” to someone on a diet. Once it got into his head, he’d become obsessive about it.”
“Liked to play, did he?” asked the policeman. “For money, or just simple enjoyment?”
“Both,” responded Bethanne. “Although I don’t think his enjoyment was simple, in any sense. It was more . . . primordial might be the only way to describe it, or maybe primal. You see, he simply had to win.”
“Sore loser, then?” Detective Simpson scribbled into his notebook and added an asterisk and underlined the sentence.
Bethanne shook her head wryly. “That doesn’t even come close to describing it, Detective. If he lost, it could only be through sinister forces in the universe working against him or maybe a cheating opponent. He would actually throw the cards on the floor, like a four year old. It was the most embarrassing thing you could possibly imagine. If only I had listened to my mother. Oh well.” She frowned and began scraping at a spot on her apron.
“Your mother didn’t like him?” asked Detective Simpson.
“Oh no, she never met him” answered Bethanne. “But he failed her boyfriend test. Before she died she told me it was fail-safe and to never get involved with any man who tanked on both parts.”
The female officer sitting in on the interview, Officer Lucas, who until now had been silently making her own notes, suddenly perked up and leaned in to ask her a question.
“What is this ‘boyfriend test’ of your mother’s? Professional interest only” she added when Simpson slightly sniggered.
“Oh, when I was about sixteen she told me that all you needed to know about whether someone was a good boyfriend or even husband material was to give him two tests as early as possible, so as not to waste time.”
“And these were, what?” asked Lucas.
“Should I leave the room?” interrupted Detective Simpson. “You’ll tell me but then have to kill me, violation of the sisterhood sort of thing?”
Bethanne looked at him and smiled. “You’ll think they’re very silly. But it’s as accurate as a Geiger counter and radioactivity”.
“Please, don’t keep us waiting.” Officer Lucas looked like she was going to snap her pencil in suppressed impatience.
“Number 1 — ask him in a very public and crowded place to hold your purse while you go to the ladies room. If he refuses, strike one. Very, very serious strike.”
Detective Simpson shifted uncomfortably. “So what’s the big deal if he doesn’t want to hold your purse?”
“Insecure in his masculinity,” replied Bethanne. “He doesn’t know anyone there, what does he care if a bunch of anonymous strangers see him holding a purse for a couple of minutes? If he’s that insecure, warning, warning, red alert, abandon ship. Probably in constant need to be domineering and in control of his docile female partner at all times. At least according to my mom.”
“Seems pretty flimsy to me,” Detective Simpson snorted. “Baby/bathwater thing. Hardly a firing offense.”
“Test 2?” Officer Lucas was determined that they not be diverted.
“Test 2 is to play a game with them and see how they handle both winning and losing. Someone who has to always win has no sense of priorities or possibly even compassion or empathy. It’s fine to enjoy winning, but it’s a matter of degree, do you see?” Bethanne stared intently at Officer Lucas, totally ignoring Detective Simpson.
Simpson took the opportunity to go over to the pot and refresh his coffee. He recognized that Lucas had achieved one of her famous ‘bonding moments’ with the subject and he had no desire to impede the process.
“Oh, I see completely. My ex-husband enjoyed nothing more than bankrupting everyone else in Monopoly. Even the kids when they were little! I like your mom’s test. I can think of a few people to share it with.” Officer Lucas tapped some quick notes into her personal phone before closing it and picking up her official notebook again.
“I think we’ve strayed from the subject at hand,” said Detective Simpson, returning to the table. “Let’s get back to the card party at the Riley’s. That was when — last October? What happened then?”
Bethanne sighed. “Everything that I would have expected to happen. Dick started off okay, but as the night wore on and he had more than a few drinks, the real Dick made his appearance. He ridiculed people for their plays, insisted on showing them how they could have managed their hand better, disparaged the food, and won a great deal of money. Detective, have you ever heard anyone chortle?”
“Chortle? You mean like laugh? That’s Advanced Placement English to me. Who chortles except in Victorian novels?” Detective Simpson looked at his watch and sighed. Would this interview ever end? He’d had enough of maternal character tests and vocabulary quizzes. Get on with it, lady!
“Why is this relevant?” Officer Lucas, ever diligent, thought Bethanne in her own meandering way, was leading them down the road to the all important motive responsible for the corpse in the dining room.
Bethanne twisted her wedding ring and looked at Simpson. “You’re right, Detective. Most people have never heard an actual chortle in real life. It’s the sound of triumph, glee, greed and victory all wrapped up together in one sound. That was Dick’s only laugh. His single enjoyment was when he had reduced someone else to rubble. Anyway, that night all he did was chortle. He made everyone feel like imbeciles and poor cooks. I saw the looks and I knew that we would never be invited to another social gathering at Meadows of Runnymeade as long as we lived here. Same old story.”
She started to sniffle and accepted the tissue offered by Officer Lucas before continuing her narrative.
“Officers, we have moved every other year for as long as I can remember. He didn’t mind being a social pariah, but I did. I got lonely, you see. All I ever wanted was to have people over to dinner and play cards and have shopping outings with girlfriends like other people, but Dick always spoiled it. He always says that this time it will be different, he’ll keep it in check, be everyone’s best bud, but he never does. Show him the deck of cards and he has to be Master of the Universe. So we’re looking at new complexes now that our lease expires next month. I really really liked it here. I wanted to stay, but Dick says who wants to live surrounded by village idiots.” A single lonely tear traced its way down Bethanne’s cheek.
“Tell us what happened this morning, Mrs. Morrow. There must be a reason for what you did.” Officer Lucas spoke softly and kindly to the drab little woman who sat across from her at the table and tried not to notice the numerous speckles of blood that covered her apron.
“I was making breakfast for Dick and listening to my morning show on the radio. He had gone out to get the mail and had just come back in. I had my solitaire game laid out on the table so I could get back to it after Dick ate and I had done the dishes. I had just learned this game a little while ago: double decks and quite complicated. I never win, maybe 1 in fifty. I was about to have a win, I could just feel it. All the kings were covered. But breakfast has to be right at nine, or else, so I had to break away. So Dick comes in with the mail, takes a look at my game, picks up the cards and plays out the hand in about two minutes. I didn’t even know he knew this game! Anyway, after winning, he pushed all the cards together, both decks (!), so sorting them will take forever, and then says ‘Bethanne, why do you waste your time on that rubbish? Surely there’s something more exciting for you to do aside from endless solitaire. I think your brain is rotting from inactivity. And this garbage on the radio!’ and then he turned off my show.
“I served him his breakfast and we ate in total silence. I felt very strange, like there was a big weight sitting inside of me that was bursting to get out. I kept quiet and took deep breaths and thought of nice things, just like my daughter showed me. She’s a beautiful girl, lives in Seattle now so we never get to see her. And then I did the dishes.
“Dick went through the mail and there was something there from our property manager. We own a few rental properties, Dick buys them at tax sales. They’re all really run down and I’m embarrassed to even own them. Anyway, we had this one tenant, a really nice woman who’s lived there for five years but lately the rent has been slow because she got sick, poor thing. I asked if we couldn’t give her a break as all the other rents were coming in just fine and we had a surplus in the account but Dick said that while one old cow feeling sorry for another old cow was touching, business was business and it was no concern of his if someone got sick, that was their problem.
“Dick insisted on reading the letter from the property manager out loud, even though I said he didn’t need to. But he just had to share his victory. It said that our ‘issue’ on River Street had been resolved, the eviction was successful and we would get to keep the entire security deposit. And then he did it. I was drying my old heavy cast iron skillet and I had it right in my hands when he did it. I had a kind of flash and I thought my brain had actually exploded. I don’t really remember what happened next, but when I came out of it, he was lying there and I still had the skillet in my hand and it had blood all over it. So did my apron and the tablecloth and the rug. I can’t believe that I did that. If only he hadn’t done it, maybe he would still be alive and we could move again and get a fresh start.” Bethanne finally lost her composure and slumped over the table, her body wracked by giant heaving sobs.
“Mrs. Morrow, I’m missing something. What exactly did your husband do that upset you so much? Did he try to hit you or abuse you in any way? Was it self defense?” Detective Simpson waited for the answer that he knew would determine this woman’s future — whether charges would be pressed, a trial held, prison time or not, the whole ball of wax.
“He. . . he . . . CHORTLED! And I knew it couldn’t go on. I couldn’t go on and he couldn’t go on, so I just tried to make it stop.”
***
Later, after booking was complete and all the reports filed, Detective Simpson and Officer Lucas headed out to the Central Marketplace and Food Court to catch a bite to eat after a long and arduous day. They were waiting in line for the latest craze, Hong Kong bubble wrap waffles with short ribs, when Detective Simpson felt something shoved into his hands.
“Hold this, while I catch the Ladies Room. I won’t be a sec,” Officer Lucas dashed off and melted into the throngs of consumers.
Detective Ernie Simpson looked down and saw that he was now custodian of Officer Eleanor Lucas’ handbag.
— — — — — — — — — — — — — — — — — — — — — — — —
© 2018 Valerie Kittell
|
https://medium.com/thecabbagegarden/cover-the-king-9134ad9c7656
|
['Valerie Kittell']
|
2020-08-22 17:43:37.269000+00:00
|
['Short Story', 'Fiction', 'Lit', 'Writing', 'Mystery']
|
4,914 |
Chi-Square Hypothesis Testing in Statistics
|
Chi-Square Hypothesis Testing in Statistics
Relationship association between categorical features
Photo by William Iven on Unsplash
The Chi-square test is a non-parametric hypothesis test used to check the association between two categorical features in bi-variate data or records. Non-parametric tests are distribution-free: they rest on very few assumptions and do not require the data to be normally distributed. They are appropriate when the target variable does not follow a normal distribution, for example when the target is ordinal or nominal, or when outliers are present. The Chi-square test can also be used to check whether the variance of a sample equals that of the population from which the sample was taken, which is why it is also called a hypothesis test for population variance.
To test whether one categorical variable is associated with, or has an effect on, another categorical variable, we check the hypothesis under the two conditions shown below:
H0: Two categorical variables are independent of each other.
H1: Two categorical variables are not independent of each other.
H0 and H1 are the null hypothesis and the alternate hypothesis, respectively.
After testing, if we find that we must reject the null hypothesis, we accept the alternate hypothesis, which says that the two categorical variables have some level of association. The decision rests on the p-value: if the p-value is less than 0.05, the two categorical variables have a significant association, and if the p-value is more than 0.05, they are treated as independent.
The formula for Chi-Square is shown below:
Chi-Square Formula. Image by Author
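For reference, since the formula appears only as an image, here it is in text form: χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ, where Oᵢ is the observed count and Eᵢ is the expected count for cell i.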
The Chi-square distribution arises as the distribution of a sum of squared standard normal (Z) variables, and the diagram of the chi-square distribution is shown below:
Chi-Square distribution. Image by Author
This test applies only to categorical data, such as gender (male, female), color (red, green, orange, etc.), and other categorical variables.
Many learners still don't know which test to choose at this point in their learning path, and we are always trying to present knowledge in a simple, meaningful way. The tree below gives a little hint on how to choose a test for bivariate data.
Hint to choose test and plot for bi-variate data. Image by Author
We will take the example of a preference between ice-cream and chocolate in adults and children. The two hypotheses are given below:
H0: Age and preference for ice-cream or chocolate are independent.
H1: Age and preference for ice-cream or chocolate are not independent.
Consider the table for the analysis, as shown below:
Category & Category data. Image by Author
The next step is to add the row and column totals so that each expected value can be computed as (row total × column total) ÷ grand total.
Getting Expected Values
Now we have both the observed values and the expected values. We will calculate the chi-square value for each cell by applying the formula we saw above.
Calculated chi-square values for each cell. Image by author
After adding all the values, the overall chi-square value comes to 4.102. This chi-square statistic plays a role similar to the z statistic in a z test. Next we need the critical chi-square value, which depends on the degrees of freedom (DOF). The DOF is the product of one less than the number of rows and one less than the number of columns.
Our table has two rows and two columns, so the DOF is computed as shown below:
DOF = (rows - 1) * (columns - 1) = (2 - 1) * (2 - 1) = 1
Knowing the degrees of freedom, we can look up the critical chi-square value with the help of the alpha value. The alpha value follows from the chosen confidence interval (alpha = 1 − confidence level).
Confidence interval and Alpha value. Image by author
The alpha value can be chosen from this table, as shown in the photo.
Looking up the chi-square table with a 5% alpha value and 1 degree of freedom, the critical value comes to 3.84. We can see both values in the chi-square distribution photo.
View of two values. Image by author
We can see that the chi-square value (4.102) is greater than the critical value (3.84), so we reject the null hypothesis. If we instead choose an alpha value of 1%, the critical value comes to 6.64. The p-value therefore lies between 5% and 1%: with a 5% significance level we still reject the null hypothesis, but with a 1% alpha value the chi-square value is less than the critical value, and we cannot reject the null hypothesis.
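The same test can be reproduced in a few lines with scipy. The following is a minimal sketch; the 2x2 counts are illustrative placeholders, not the actual numbers from the example table:
import numpy as np
from scipy.stats import chi2_contingency
# Illustrative observed counts: rows = adults/children, columns = ice-cream/chocolate
observed = np.array([[30, 20],
                     [20, 30]])
# correction=False disables the Yates continuity correction, matching the hand calculation
chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(chi2, p_value, dof)  # reject H0 when p_value < alpha (e.g. 0.05)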
Types of Chi-Square test
Test for independence (two-way chi-square test): good for checking the association between categorical variables.
Test for goodness of fit (one-way chi-square test): good for checking whether observed values differ from theoretical (expected) values.
Conclusion:
The Chi-square test is very useful when we have categorical features in a data set.
Reach me on my LinkedIn. Mail me at [email protected].
Recommended Articles
2. Python Data Structures Data-types and Objects
3. MySQL: Zero to Hero
|
https://medium.com/towards-artificial-intelligence/chi-square-hypothesis-testing-in-statistics-87884bc73d99
|
['Amit Chauhan']
|
2020-12-31 02:34:40.653000+00:00
|
['Programming', 'Statistics', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
4,915 |
Why Leaving a Project Unpolished Is OK
|
Does Your Customer Really Want the Bells and Whistles?
“80 percent of features in the average software product are rarely or never used. Publicly-traded cloud software companies collectively invested up to $29.5 billion developing these features, dollars that could have been spent on higher value features and unrealized customer value.” — Pendo’s 2019 Feature Adoption Report.
Don’t spend time adding low-value features backed by assumptions. Perfectionists often tailor products to their ideal image, not necessarily the customer’s. The customer may appreciate the extra features, but not all improvements will change their decision to purchase or retain a product.
Example: A web developer spends a month trying to create a user experience that shows data in a table with filterable/sortable columns.
Unexpected outcome: The customer uses the export feature to download the data and view it in Excel.
Sometimes a simple conversation with a customer can save tons of time and money. In this real-world case, the customer provided the data to a third-party for auditing. There would never be a case where the auditors would be allowed access to the fancy web table.
|
https://medium.com/better-programming/why-leaving-a-project-unpolished-is-ok-fe4a43836f40
|
['Kevin Fawcett']
|
2020-10-23 16:44:56.947000+00:00
|
['Programming', 'Agile', 'Startup', 'User Experience', 'Product Management']
|
4,916 |
Dreams Do Come True
|
Introduction to ILLUMINATION-MIRROR
Photo by bruce mars on Unsplash
I’ve been writing since I was a young child. I wrote in spurts and was never consistent. Most of the time through my early 20s, it was when I was down, a relationship break up, or some sort of personal catastrophe.
I began writing again about a year ago at the urging of a friend Linda Aileen Miller. She helped me discover ILLUMINATION and ILLUMINATION-Curated and a great community of writers.
I have to say, I didn’t write in earnest until recently. I didn’t have the confidence or commitment to take responsibility for what I needed to do. Thanks to many fine writers, I began to write more seriously and more often. Some are sad, some are light, some are whimsical, and some stories tend to be serious. Take a look at this article I wrote as I was losing my beloved King Charles Maggie Mae. It gave me great comfort and solace to write this story and begin the process of her transitioning to the Rainbow Bridge. There is nothing that will replace her in my heart but it helped me tremendously to move on in the grieving process.
I also write about family experiences, mental health issues, and just for fun!
I have been leaning more towards poetry and there are so many outlets to share your writing on these platforms. Change is the name of the game. We’re embracing it here and I hope you’ll join us!
This is an article about mental health I designed about panic attacks. I tried to be light, include personal experience, and also facts!
Once again, there are so many incredible writers in every way you can imagine. I hope you'll join us! Feel free to reach out to me! I've learned so much in this past year! I'm happy to oblige! Participate in Challenges. It not only introduces your work, but it also introduces you to a wider audience and to many writers, and is an opportunity for growth and fun!
|
https://medium.com/illuminations-mirror/dreams-do-come-true-8f1aa312ab4d
|
['Janny S Heart']
|
2020-12-29 16:55:22.284000+00:00
|
['Illumination', 'Illumination Curated', 'Writing', 'Self Improvement', 'Illumination Poetry']
|
4,917 |
Five lessons on the most pressing challenges for newsroom leaders
|
Newsrooms are still largely driven by the on-going, underlying sense of urgency that came with the digital age of news production. But leaders need time to reflect if they want to build a powerful editorial strategy, run a newsroom successfully, and create a space where ideas can flourish.
That’s exactly what we invited our participants to do at the News Impact Academy in Copenhagen, led by Design Strategist Tran Ha. She applies design thinking to a newsroom environment to provide our 19 participants with the tools they need to tackle their most pressing challenges.
These are the main takeaways for newsroom leaders:
1. How to drive innovation in your newsroom
Strong leadership is often associated with revolutionary ideas, but innovation doesn’t have to be bright and shiny. It happens by introducing small changes on a daily basis. “We would all love to come up with that one idea that no-one has ever thought about, but in reality, most ideas already exist,” said Tran.
To be innovative you need to look at the problem that everyone else is looking at, but frame it in a way so that you can actually take action on it. Put effort into finding the right questions instead of rushing into solutions. “We’re just not looking at needs and opportunities in the right way,” said Tran.
This approach forces you to take a step back and spend a lot of time in the problem sphere. “It will feel uncomfortable and messy,” she said. But when you listen to people — your team or your audience — you’ll find out where the friction point is, which allows you to identify the need.
Once you have a solution for this need in mind, set up a small experiment that allows you to back out quickly if it’s not working; e.g. a weekly debrief, daily team-lunch, an internal newsletter. If your prototype catches on, build upon it, turn small innovations into habits so they stick and real change happens.
Participants working on a prototype as part of a design thinking exercise
2. How to lead a team while managing the day-to-day
Leadership can be a lonely path, so it’s important to build a network of people who really understand your challenges, for example, the difference between leadership and management. “Leading is about giving a vision and managing means achieving that vision,” said one participant.
Our default mode tends to be managing, according to Tran, which is when you’re focusing on tasks. “But the hardest work is the strategy,” she added, “that’s why people don’t do it. A lot of leaders are not good at leadership, because they are focusing on tasks,” Tran said.
On a daily basis, it’s coming and going between managing and leading, putting out fires, thinking long-term, micro-managing, and giving autonomy. In a busy environment, where you have to get things done, you need to know when you can take time to focus on your leadership.
You can switch to leadership mode when…
you or your team are stuck and you don’t know how to get it done
the team is out of alignment
you have a high-profile project, you want to get it right, and the answer is not so straightforward
you can take time to reflect
3. How to motivate staff
“Autonomy and freedom are the best drivers for creativity,” said Jakob Nielsen, one of our guest speakers at the News Impact Academy. He is the Editor-in-Chief of Altinget, one of Denmark’s most innovative newsrooms, who produce a slow version of political news, primarily for B-to-B subscribers.
Christina Andreasen, Deputy Managing Editor at Berlingske and Jakob Nielsen, Editor-in-Chief at Altinget in conversation
Jakob’s leadership strategy is all about trust. You have to be clear about goals and red lines while giving people freedom and ownership, according to him. “My team can experiment and if they fail that’s fine, I’m not on their backs,” he said. “Give people freedom!”
For example, his team is encouraged to apply for external funding to set up new projects — a creative competitive process that brings good energy to the newsroom. This allowed Altinget to become the leader in robot journalism in Denmark and enabled them to bring in new skills through such projects.
4. How to lead multi-disciplinary teams
Jakob’s collaboration with project managers made him realise that you can do great things when you’re open to change. “For 15 years I’ve been working with people that did what I did [produce stories], then for the first time I got to work with people doing stuff I didn’t know anything about,” he said.
Many newsrooms are hiring programmers, graphic designers and other people with skill-sets that are fairly new to journalism. “Developers and designers think very differently,” said one participant. De-marginalising these new roles within the newsroom is key to bringing innovation to journalism.
“If you only work with people that do what you do, you’ll end up with the same answer,” said one participant.
5. How to deal with resistance to change
A (remote) discussion with Anita Zielina about change management on 27 September 2019
You’ll always have people who are resistant to change, according to Anita Zielina, director of innovation and leadership at Craig Newmark J-School at CUNY, who led our discussion on change management. “What makes it so hard is that human beings, in general, are change-averse,” she said.
“People who are excited about change tend to forget that not everyone sees it that way”, Anita Zielina, director of innovation and leadership at Craig Newmark J-School at CUNY
Anita recommends leaders to focus their attention on caring for change-makers instead of spending time with the resistant ones. “Your stars are valuable allies who will drive the rest of the people and make others feel excited about change,” she said.
The best way to support your change-makers is to create as much freedom as possible, help them in their careers, and show them your plans, according to Anita. “Let them have fun and keep the negative energy away,” she said, “try to battle the fights on culture so that they don’t have to do it.”
Change projects need constant direction and active prioritising, according to Anita. “They’re like a plant that needs watering every day,” she said. Ideas become successful only when people talk about it every day to point the way and the light at the end of the tunnel.
“You want people to ask questions and discuss ideas so don’t be afraid to over-communicate,” she said, “when you sound like a broken record, you may still have people who will ask: Excuse me, why are we doing this?” But when the energy levels come up and things start to grow and scale, you’ll know you’re on the right track.
|
https://medium.com/we-are-the-european-journalism-centre/five-lessons-on-the-most-pressing-challenges-for-newsroom-leaders-593eaf1d596a
|
['Ingrid Cobben']
|
2019-10-15 05:16:01.573000+00:00
|
['Leadership', 'Media', 'Insights', 'Journalism', 'Design Thinking']
|
4,918 |
Do we get to Choose how to Live?
|
This train is a high-speed womb.
Compressed into my burgundy window seat in a carriage decked out in shades of red and pink, I watch the French, then Belgian, then Dutch countryside zip past.
Three hours and twenty minutes after leaving Paris, I am reborn on a platform at Amsterdam Centraal station. Gingerly, I make my way to the elevator, unsure how to proceed with a present so full of promise it looks unreal, such is the contrast with what came before.
I can’t shake off the feeling that my being in the Netherlands is yet another anomaly, in line with the many more that now make up my daily reality.
After the parasite in my head curtailed my freedom and held me hostage for five years, leading an autonomous life again is a shocking experience. Not a day goes by that I’m not mildly surprised I’m still around, all the more as all I actively wanted for five years was to find a way not to be.
Unable to think, unable to write, unable to support myself, unable to imagine a future that stretched beyond the next hour, I was done with life.
Over the course of five years, major depressive disorder was the slow death that almost disappeared me. Little by little, I lost track of what made me me as I sank deeper into an affective, emotional, professional, and intellectual coma.
I was never supposed to wake up. Or exit the void. Or throw myself back into and at life.
And yet, here I am, suitcase in tow, blown away by it all.
Granted, it is also windy in Amsterdam.
|
https://asingularstory.medium.com/do-we-get-to-choose-how-to-live-4ab64d3b8a63
|
['A Singular Story']
|
2020-04-17 20:02:01.167000+00:00
|
['Travel', 'Personal Growth', 'Life Lessons', 'Mental Health', 'Self']
|
4,919 |
Pandas Trick #1 — Change the default number of rows returned from the head method
|
The pandas DataFrame head method returns the first 5 rows by default. This is controlled by the parameter n. In this trick, we will use partialmethod from the functools standard library to set n to a different number. This trick is available on the Dunder Data YouTube channel (Subscribe!).
Become an Expert
If you want to be trusted to make decisions using pandas, you must become an expert. I have completely mastered pandas and have developed courses and exercises that will massively improve your knowledge and efficiency to do data analysis.
Master Data Analysis with Python — My comprehensive course with 800+ pages, 350+ exercises, multiple projects, and detailed solutions that will help you become an expert at pandas.
Get a sample of the material by enrolling in the free Intro to Pandas course.
First, let’s read in a sample DataFrame containing bike rides from the city of Chicago and call the head method with the defaults. Note that it returns 5 rows.
import pandas as pd
bikes = pd.read_csv('data/bikes.csv')
bikes.head()
The functools standard library comes packaged with partialmethod which allows you to set parameters of a particular method. You can set any number of parameters with it. Below, we reassign the DataFrame head method so that it returns 3 rows as a default instead of 5.
from functools import partialmethod
pd.DataFrame.head = partialmethod(pd.DataFrame.head, n=3)
bikes.head()
Why do this?
I use the head method frequently to shorten the displayed output of the DataFrame. When I am creating live tutorials, 5 rows can take up too much space on the screen, so changing this default to 2 or 3 makes sense and saves a bit of time. It can also be helpful if you have written a large report that makes use of many calls to the head method, and want to shorten all of those outputs with a single command.
General usage
partialmethod is only available in Python 3 and can be used with any method to set some or all of its parameters to particular values.
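A quick sketch of that general usage (the method choices below are just illustrative, not from the original article):
from functools import partialmethod
import pandas as pd
# tail now returns 2 rows by default instead of 5
pd.DataFrame.tail = partialmethod(pd.DataFrame.tail, n=2)
# to_csv now skips writing the index by default
pd.DataFrame.to_csv = partialmethod(pd.DataFrame.to_csv, index=False)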
Master Python, Data Science and Machine Learning
Immerse yourself in my comprehensive path for mastering data science and machine learning with Python. Purchase the All Access Pass to get lifetime access to all current and future courses. Some of the courses it contains:
Exercise Python — A comprehensive introduction to Python (200+ pages, 100+ exercises)
— A comprehensive introduction to Python (200+ pages, 100+ exercises) Master Data Analysis with Python — The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises)
— The most comprehensive course available to learn pandas. (800+ pages and 300+ exercises) Master Machine Learning with Python — A deep dive into doing machine learning with scikit-learn constantly updated to showcase the latest and greatest tools. (300+ pages)
Get the All Access Pass now!
|
https://medium.com/dunder-data/pandas-trick-1-change-the-default-number-of-rows-returned-from-the-head-method-bc7c21ce0d53
|
['Ted Petrou']
|
2020-08-25 03:12:24.866000+00:00
|
['Data Science', 'Pandas', 'Programming', 'Python']
|
4,920 |
TensorFlow is dead, long live TensorFlow
|
Head of Decision Intelligence, Google. Hello (multilingual) world! This account is for translated versions of my English language articles. twitter.com/quaesita
|
https://medium.com/datos-y-ciencia/tensorflow-est%C3%A1-muerto-que-viva-tensorflow-f18c86ca5515
|
['Cassie Kozyrkov']
|
2019-10-10 02:43:27.155000+00:00
|
['Artificial Intelligence', 'Ciencia Y Datos', 'Data Science', 'Technology', 'Machine Learning']
|
4,921 |
I Wanted to be Sexually Liberated, So I Became a Stripper
|
I Wanted to be Sexually Liberated, So I Became a Stripper
All my life, I thought the naked body was sinful and meant to be hidden. At 23, I finished business school and started stripping.
Photo by Ashley Byrd on Unsplash
I grew up in a conservative home, in an equally conservative town, and my parents had my future planned out from a very early age; Marriage, children, and a corporate job, just like my mother and father. There’s no denying how lucky I was as a kid to have everything I needed. I went to church every Sunday with my family, had great grades in school, and competed in gymnastics in my free time. I was the baby, daddy’s little girl, and I lived to please my parents.
Naturally, when I entered puberty, and my body began changing — to the appreciation and fascination of boys my age — it confused and delighted me to be recognized for something I hadn’t worked for.
My body was just part of me, yet my new curves were celebrated by the opposite sex, criticized by my female classmates, and chastised by my mother and grandmother.
I know it may sound crazy, but even my relationship with my father changed when I could no longer wear my training bras. He became noticeably awkward around me, and he stopped inviting me out with him after he witnessed men his age paying me attention at the hardware store.
It was a confusing feeling for a 13-year-old girl to be suddenly punished by her father for something out of her control.
As an adult, I covered up my body and fell into place because I thought my relationship with my dad depended on it. I let go of any wild dreams of accepting my physical self and exploring my sexuality, and instead, I dated men who were just as conservative and concerned with marriage as I was — men who were just as protective about my body as my father. I wasn’t a virgin past 19, but I held back so many parts of myself for the fear of being “sinful.”
It’s no secret that children who grow up in homes like mine — homes where they can’t make their own choices about hobbies or the clothes they wear, homes where they are taught that sex and the naked body is shameful — grow up resentful and desperate to rebel. They want to do anything and everything to go against the status quo.
Is it any wonder that I rebelled right into working at a strip club? I think not.
|
https://medium.com/erin-taylor-club/i-was-taught-to-be-ashamed-of-my-body-and-at-23-i-became-a-stripper-3a862de93647
|
['Erin Taylor']
|
2020-12-27 01:09:49.564000+00:00
|
['Mental Health', 'Sex', 'Sexuality', 'Self', 'Sex Work']
|
4,922 |
10 things to consider before you create a chart about COVID-19
|
|
https://medium.com/nightingale/%E0%B8%AA%E0%B8%B4%E0%B9%88%E0%B8%87%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B8%84%E0%B8%A7%E0%B8%A3%E0%B8%84%E0%B8%B3%E0%B8%99%E0%B8%B6%E0%B8%87%E0%B8%96%E0%B8%B6%E0%B8%87-10-%E0%B8%82%E0%B9%89%E0%B8%AD-%E0%B8%81%E0%B9%88%E0%B8%AD%E0%B8%99%E0%B8%97%E0%B8%B5%E0%B9%88%E0%B8%84%E0%B8%B8%E0%B8%93%E0%B8%88%E0%B8%B0%E0%B8%AA%E0%B8%A3%E0%B9%89%E0%B8%B2%E0%B8%87%E0%B9%81%E0%B8%9C%E0%B8%99%E0%B8%A0%E0%B8%B9%E0%B8%A1%E0%B8%B4%E0%B9%80%E0%B8%81%E0%B8%B5%E0%B9%88%E0%B8%A2%E0%B8%A7%E0%B8%81%E0%B8%B1%E0%B8%9A-covid-19-5c57b7853cf4
|
['Pattarawat Chormai']
|
2020-04-10 10:22:16.931000+00:00
|
['Coronavirus', 'Data', 'Data Visualization', 'Covid 19']
|
4,923 |
Open Source Libraries for Apps on iOS
|
The Action
We had no way to push or affect the release of the App. So we decided to pick the most innovative features we had made in the App and, by using a completely different approach to form cases and development, create open-source libraries that anybody can touch and grab for their own projects.
On the development side, in short, we decided to write code as simple and clean as possible on Swift UIKit, with itemized documentation according to the guidelines. And that's my best definition of what we were going to do on the dev side 😂. I hope my teammate will soon prepare his own storyline as well.
First concepts
My approach was to design all the selected features as parts of one product, so that they could be presented visually as a concept of a real app. I came to the conclusion that I would design for a popular brand's product that had not been released yet and would launch at approximately the same time we planned to publish our libraries. That product was Apple TV+. All the views would be our representation of how this application might look with our inventions.
The Cellular Number Mask
We have been building an ecosystem of wholesale products. It would be impractical to force users to register on each platform individually when they use several of our platforms, so the most obvious way was to create our own userbase system that grants access to all products via one account, like a Google Account.
The best practice for registration is signing in via social media, because it does not require additional authorization. But we had configured a strict system that allowed us to use only an email or a mobile phone number. Since the first iterations of the main service released in 2014, we had a serious group of customers who already had accounts, and based on them we figured out that 72% of users log in with their phone number and the remaining 28% with their email address. 28% is too large a share to ignore, and we could not leave the mobile phone number as the single method. We had to support both methods, but we absolutely didn't want to allocate two input forms in the registration window.
The challenge was how to combine two processes into one input form. Email forms don't require any features beyond the usual text field. Phone numbers, however, are more conveniently typed into masks that include a predefined country code and cells for the remaining digits. According to research, only 0.009% of users in the world use a digit as the first symbol of an email address. My idea was to design an input that has a default view in the idle state and transforms into a custom field with a cellular mask immediately after the first digit is typed. While designing this feature we found similar solutions, and even an open-source library that almost covered our needs; to save time we used that one with a list of customizations. That experience also motivates us now to create an open-source library with our own vision, in order to help other teams as we were helped.
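The switching logic itself is tiny. The sketch below is a language-agnostic illustration in Python, not the actual library code; the mask pattern and the function name are assumptions made for the example:
def apply_mask(raw, mask="+_ (___) ___-____"):
    # If the first typed character is not a digit, treat the input as an email: no mask
    if not raw or not raw[0].isdigit():
        return raw
    digits = [c for c in raw if c.isdigit()]
    result = ""
    for ch in mask:
        if not digits:
            break
        result += digits.pop(0) if ch == "_" else ch
    return result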
Theoretically, our Cellular Number Mask can be useful for fintech projects, where strict security guidelines also forbid authorizing in the application via social networks.
|
https://medium.com/swlh/open-source-libraries-for-apps-on-ios-9c85b85c8593
|
['Ehrlan Zholdosh']
|
2020-10-30 10:39:52.340000+00:00
|
['Libraries', 'Design', 'Open Source Software', 'iOS', 'Swift']
|
4,924 |
The (un)surprising cause of the next financial crisis.
|
The (un)surprising cause of the next financial crisis.
You know that it is coming, you just don’t know how … The next financial crisis.
Most predictions agree on the timing (2019 / 2020) and that it will be one of the deepest crises in world history (the economy has been growing consistently for 10 years now), but no one can predict how it will happen. Knowing that, of course, would be priceless. Traders and hedge fund managers who took the ‘right side of the trade’ and bet against the 2007 subprime crisis made hundreds of billions overnight. John Paulson and many others got famous and (even much) richer.
Crises are certain; they are inevitable like death and taxes because the world economy is based on the concept of greed. Human greed is good — it is what drives the economy up … and then gets it going again after it falls over the cliff. It is a necessary substance; a serum and a poison in one.
It was in the U.S. that greed got reinvented as a good thing. That is, it was the new world that injected poor immigrants with the idea of American dream and made money into a universal merit of success. And ever since it has been the U.S. that gave the world market bubbles. Last time it was the subprime mortgages and derivatives that no one understood; this time it is the tech sector.
There are two sides to the tech sector. Facebook and Google are what Standard Oil Co. Inc. was at the beginning of the 20th century — a well-tolerated monopoly. Together with Apple, they form the ‘old’ side of the tech sector — the one that actually makes money … and tons of it. But beyond that, there is the utopian tech sector. You know: Silicon Valley, its millennial founder-entrepreneurs, with zen-like smiles, enlightened modesty and open shirts. So very unlike the old world dominated by Wall Street, they and their tiny little start-ups inspired the beginning of the present economic recovery by promising to create an infinitely more efficient and cleaner way of doing things, exchanging factories and steel for smartphones and abstract products.
Three companies — Amazon, UBER and Tesla (ATU) — are a perfect embodiment of this trend. To some extent, ATU are truly revolutionary — making our lives cheaper (Amazon), more practical (UBER) and exciting (Tesla) — but for the most part, people love them because they are a sensation.
Good enough is just not good enough when it comes to financial markets. People don’t want to be told that they should stick their savings into a 1.5%-yielding super-slow-growing annuity. No, we all want our chance at becoming millionaires, and if that requires us to chase a sensation, then so be it. This is not to say that UBER hasn’t changed the urban transportation for good, that Tesla doesn’t make innovative cars, or that Amazon doesn’t offer convenience and price that others can’t match. No. The problem is that somewhere down the line, ATU have become a religion of future and people forgot to value them on the cold basis of profits & losses. Somewhere down the line, ATU freed themselves from the dryness of discounted cash flow analyses, and they took the whole market with them on a trip to forever.
Amazon — a nightmare of the old-industry CEOs — continues on its path toward the world domination because it can afford to reinvest everything it makes and more*. It is a company doomed to die. For now, the world is still large enough. Amazon can keep on growing, expanding into other sectors and devouring its competitors. That, of course, makes it into an irresistible investment proposition — a company on a path to become the ultimate monopoly; the one place where we buy everything. Inevitably, though, it will reach a point when it will grow too big. It will either have to be cut in pieces by the government labelled as a ‘monopoly’, or it will be re-priced by the market, increasing its cost of capital because the promise of further growth and domination will make no sense anymore (people will begin to ask for a profit eventually).
*Amazon is a life work of an ex-hedge fund manager (who else better to trick investors’ minds) — a company which functions like a perpetuum mobile. Jeff Bezos has once drawn his Amazon flywheel concept on a napkin. Lower prices lead to more customers, which attract more outside sellers to Amazon, who should be OK with smaller profit in order to be able to sell on Amazon. Reinvestment of all profits improves company and reduces cost leading to lower prices again in a vicious endless circle. One arrow is missing — that arrow going out: ‘profits to shareholders’.
UBER — a clever, but first and foremost a cheaper way to do urban transportation. A company that has burned through billions of private money to become the one and only player in the business that it created — intelligent transportation. But, for all its dominance, UBER is facing a ‘Groupon moment’. Its technology is becoming commoditized and re-used by cheaper alternative providers.
Tesla — This one is easy. A tech company on a mission to beat auto industry on its home turf. Tesla burns cash at a breathless pace in a race against time to get its Model 3 production fully running before other giants enter the space with better alternatives. Blessed by what has been until now an inexhaustible level of positive investor sentiment, the company’s plans are truly colossal, but the last estimates show that it will burn through its cash reserves by the end of this financial year. If Musk is forced to go to financial markets asking for money again this year, it could be a make-or-break moment for Tesla.
People always fight the last war. Scarred by 2007, investors are looking for the next big short; the next Collateralized Debt Obligation, or other structured product that could bring the markets down. There might not be one this time round. All financial crises are caused by some sort of a market bubble; a situation when prices overshoot value, which eventually leads to an ‘aha’ moment and a downward spiral. ATU and other utopian tech companies have infected markets with a profound belief that they are about to revolutionise world. This made them into cash-burning giants with a zero cost of capital. And that’s fine for now. But, their eventual (and unavoidable) collapse may prove to be systemic, dragging down the market, not because of their size (which is insignificant), but because investors will reprice the market, taking the promise of infinitely more efficient tomorrow out of the equation.
See you on the other side,
|
https://medium.com/thatmeaning/the-un-surprising-cause-of-the-next-financial-crisis-dc59eb2379c4
|
['George Salapa']
|
2018-05-05 17:04:13.652000+00:00
|
['Economics', 'Market', 'Amazon', 'Tesla', 'Uber']
|
4,925 |
Predicting Movie Genres using NLP
|
Introduction
I was intrigued going through this amazing article on building a multi-label image classification model last week. The data scientist in me started exploring possibilities of transforming this idea into a Natural Language Processing (NLP) problem.
That article showcases computer vision techniques to predict a movie’s genre. So I had to find a way to convert that problem statement into text-based data. Now, most NLP tutorials look at solving single-label classification challenges (when there’s only one label per observation).
But movies are not one-dimensional. One movie can span several genres. Now THAT is a challenge I love to embrace as a data scientist. I extracted a bunch of movie plot summaries and got down to work using this concept of multi-label classification. And the results, even using a simple model, are truly impressive.
In this article, we will take a very hands-on approach to understanding multi-label classification in NLP. I had a lot of fun building the movie genre prediction model using NLP and I'm sure you will as well. Let's dig in!
Table of Contents
1. Brief Introduction to Multi-Label Classification
2. Setting up our Multi-Label Classification Problem Statement
3. About the Dataset
4. Our Strategy to Build a Movie Genre Prediction Model
5. Implementation: Using Multi-Label Classification to Build a Movie Genre Prediction Model (in Python)
Brief Introduction to Multi-Label Classification
I’m as excited as you are to jump into the code and start building our genre classification model. Before we do that, however, let me introduce you to the concept of multi-label classification in NLP. It’s important to first understand the technique before diving into the implementation.
The underlying concept is apparent in the name — multi-label classification. Here, an instance/record can have multiple labels and the number of labels per instance is not fixed.
Let me explain this using a simple example. Take a look at the below tables, where ‘X’ represents the input variables and ‘y’ represents the target variables (which we are predicting):
‘y’ is a binary target variable in Table 1. Hence, there are only two labels — t1 and t2
‘y’ contains more than two labels in Table 2. But, notice how there is only one label for every input in both these tables
You must have guessed why Table 3 stands out. We have multiple tags here, not just across the table, but for individual inputs as well
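The table images do not survive in this text version, so as a stand-in, here is a hypothetical pandas rendering of a Table-3-style dataset; the column and label names are illustrative, not from the original tables:

import pandas as pd

# each input has a LIST of labels, and the list length differs per row
table3 = pd.DataFrame({
    'X': ['x1', 'x2', 'x3', 'x4'],
    'y': [['t1', 't2'], ['t3'], ['t1', 't4', 't5'], ['t2', 't3']],
})
print(table3)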
We cannot apply traditional classification algorithms directly on this kind of dataset. Why? Because these algorithms expect a single label for every input, when instead we have multiple labels. It’s an intriguing challenge and one that we will solve in this article.
You can get a more in-depth understanding of multi-label classification problems in the below article:
Setting up our Multi-Label Classification Problem Statement
There are several ways of building a recommendation engine. When it comes to movie genres, you can slice and dice the data based on multiple variables. But here’s a simple approach — build a model that can automatically predict genre tags! I can already imagine the possibilities of adding such an option to a recommender. A win-win for everyone.
Our task is to build a model that can predict the genre of a movie using just the plot details (available in text form).
Take a look at the below snapshot from IMDb and pick out the different things on display:
There’s a LOT of information in such a tiny space:
Movie title
Movie rating in the top-right corner
Total movie duration
Release date
And of course, the movie genres which I have highlighted in the magenta coloured bounding box
Genres tell us what to expect from the movie. And since these genres are clickable (at least on IMDb), they allow us to discover other similar movies of the same ilk. What seemed like a simple product feature suddenly has so many promising options. 🙂
About the Dataset
We will use the CMU Movie Summary Corpus open dataset for our project. You can download the dataset directly from this link.
This dataset contains multiple files, but we’ll focus on only two of them for now:
movie.metadata.tsv: Metadata for 81,741 movies, extracted from the November 4, 2012 dump of Freebase. The movie genre tags are available in this file
Metadata for 81,741 movies, extracted from the November 4, 2012 dump of Freebase. The movie genre tags are available in this file plot_summaries.txt: Plot summaries of 42,306 movies extracted from the November 2, 2012 dump of English-language Wikipedia. Each line contains the Wikipedia movie ID (which indexes into movie.metadata.tsv) followed by the plot summary
Our Strategy to Build a Movie Genre Prediction Model
We know that we can’t use supervised classification algorithms directly on a multi-label dataset. Therefore, we’ll first have to transform our target variable. Let’s see how to do this using a dummy dataset:
Here, X and y are the features and labels, respectively — it is a multi-label dataset. Now, we will use the Binary Relevance approach to transform our target variable, y. We will first take out the unique labels in our dataset:
Unique labels = [ t1, t2, t3, t4, t5 ]
There are 5 unique tags in the data. Next, we need to replace the current target variable with multiple target variables, each belonging to the unique labels of the dataset. Since there are 5 unique labels, there will be 5 new target variables with values 0 and 1 as shown below:
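A minimal sketch of this transformation with scikit-learn, using the dummy labels above:

from sklearn.preprocessing import MultiLabelBinarizer

y = [['t1', 't2'], ['t3'], ['t1', 't4', 't5'], ['t2', 't3']]

mlb = MultiLabelBinarizer()
# every unique tag becomes its own 0/1 target column
y_transformed = mlb.fit_transform(y)
print(mlb.classes_)   # ['t1' 't2' 't3' 't4' 't5']
print(y_transformed)  # rows of 0s and 1s, one column per tag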
We have now covered the necessary ground to finally start solving this problem. In the next section, we will finally make an Automatic Movie Genre Prediction System using Python!
Implementation: Using Multi-Label Classification to Build a Movie Genre Prediction Model (in Python)
We have understood the problem statement and built a logical strategy to design our model. Let’s bring it all together and start coding!
Import the required libraries
We will start by importing the libraries necessary to our project:
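The import cell itself is not reproduced in this extract; a plausible set, inferred from the libraries used later in the article (a sketch, not necessarily the author's exact cell), is:

import json
import re

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

import nltk
from nltk.corpus import stopwords

from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer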
Load Data
Let’s load the movie metadata file first. Use ‘\t’ as the separator as it is a tab separated file (.tsv):
meta = pd.read_csv("movie.metadata.tsv", sep = '\t', header = None)
meta.head()
Oh wait — there are no headers in this dataset. The first column is the unique movie id, the third column is the name of the movie, and the last column contains the movie genre(s). We will not use the rest of the columns in this analysis.
Let’s add column names to the aforementioned three variables:
# rename columns
meta.columns = ["movie_id",1,"movie_name",3,4,5,6,7,"genre"]
Now, we will load the movie plot dataset into memory. This data comes in a text file with each row consisting of a movie id and a plot of the movie. We will read it line-by-line:
Next, split the movie ids and the plots into two separate lists. We will use these lists to form a dataframe:
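Neither snippet survives in this text version, so here is a minimal sketch of both steps, reading plot_summaries.txt line by line and then splitting each line on the tab into id and plot:

plots = []
with open("plot_summaries.txt", 'r', encoding='utf-8') as f:
    for line in f:
        plots.append(line.strip())

movie_id = []
plot = []

# each line is "<wikipedia_movie_id>\t<plot summary>"
for p in plots:
    fields = p.split('\t', 1)
    movie_id.append(fields[0])
    plot.append(fields[1])

movies = pd.DataFrame({'movie_id': movie_id, 'plot': plot})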
Let’s see what we have in the ‘movies’ dataframe:
movies.head()
Perfect! We have both the movie id and the corresponding movie plot.
Data Exploration and Pre-processing
Let’s add the movie names and their genres from the movie metadata file by merging the latter into the former based on the movie_id column:
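The merge itself is not shown here; a sketch, assuming the ids need to be cast to a common type first (the metadata ids are parsed as integers, while the plot-file ids were read as strings):

# bring the ids to a common dtype before merging
meta['movie_id'] = meta['movie_id'].astype(str)

# add movie name and genre to the plots dataframe
movies = pd.merge(movies, meta[['movie_id', 'movie_name', 'genre']],
                  on='movie_id')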
Great! We have added both movie names and genres. However, the genres are in a dictionary notation. It will be easier to work with them if we can convert them into a Python list. We’ll do this using the first row:
movies['genre'][0]
Output:
'{"/m/07s9rl0": "Drama", "/m/03q4nz": "World cinema"}'
We can’t access the genres in this row by using just .values( ). Can you guess why? This is because this text is a string, not a dictionary. We will have to convert this string into a dictionary. We will take the help of the json library here:
type(json.loads(movies['genre'][0]))
Output:
dict
We can now easily access this row’s genres:
json.loads(movies['genre'][0]).values()
Output:
dict_values(['Drama', 'World cinema'])
This code helps us to extract all the genres from the movies data. Once done, add the extracted genres as lists back to the movies dataframe:
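The extraction loop is not preserved in this extract; a minimal version of it:

genres = []

# parse each genre string into a dict and keep only the genre names
for i in movies['genre']:
    genres.append(list(json.loads(i).values()))

# add the parsed genre lists back to the dataframe
movies['genre_new'] = genres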
Some of the samples might not contain any genre tags. We should remove those samples as they won’t play a part in our model building process:
# remove samples with 0 genre tags
movies_new = movies[~(movies['genre_new'].str.len() == 0)]
movies_new.shape, movies.shape
Output:
((41793, 5), (42204, 5))
Only 411 samples had no genre tags. Let’s take a look at the dataframe once again:
movies_new.head()
Notice that the genres are now in a list format. Are you curious to find how many movie genres have been covered in this dataset? The below code answers this question:
# get all genre tags in a list
all_genres = sum(genres,[])
len(set(all_genres))
Output:
363
There are 363 unique genre tags in our dataset. That is quite a big number. I can hardly recall 5–6 genres! Let's find out what these tags are. We will use FreqDist( ) from the nltk library to create a dictionary of genres and their occurrence count across the dataset:
I personally feel visualizing the data is a much better method than simply putting out numbers. So, let’s plot the distribution of the movie genres:
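Both steps are short. Here is a sketch, using FreqDist for the counts and a seaborn bar plot for the most frequent tags; the variable names are illustrative:

all_genres = nltk.FreqDist(all_genres)

# dataframe of genre -> count for plotting
all_genres_df = pd.DataFrame({'Genre': list(all_genres.keys()),
                              'Count': list(all_genres.values())})

# plot the 50 most frequent genre tags
g = all_genres_df.nlargest(columns='Count', n=50)
plt.figure(figsize=(12, 15))
sns.barplot(data=g, x='Count', y='Genre')
plt.show()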
Next, we will clean our data a bit. I will use some very basic text cleaning steps (as that is not the focus area of this article):
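The cleaning function is not reproduced here; a minimal version of such a function (lowercasing and keeping only alphabetical characters) could be:

def clean_text(text):
    # remove apostrophes
    text = re.sub("\'", "", text)
    # keep only alphabetical characters
    text = re.sub("[^a-zA-Z]", " ", text)
    # collapse repeated whitespace and lowercase everything
    text = ' '.join(text.split())
    return text.lower()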
Let’s apply the function on the movie plots by using the apply-lambda duo:
movies_new['clean_plot'] = movies_new['plot'].apply(lambda x: clean_text(x))
Feel free to check the new versus old movie plots. I have provided a few random samples below:
In the clean_plot column, all the text is in lowercase and there are also no punctuation marks. Our text cleaning has worked like a charm.
The function below will visualize the words and their frequency in a set of documents. Let’s use it to find out the most frequent words in the movie plots column:
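One possible implementation of such a helper, again as a sketch:

def freq_words(x, terms=30):
    all_words = ' '.join([text for text in x]).split()

    fdist = nltk.FreqDist(all_words)
    words_df = pd.DataFrame({'word': list(fdist.keys()),
                             'count': list(fdist.values())})

    # plot the most frequent terms
    d = words_df.nlargest(columns='count', n=terms)
    plt.figure(figsize=(12, 15))
    sns.barplot(data=d, x='count', y='word')
    plt.show()

# 100 most frequent words in the plots, stopwords still included
freq_words(movies_new['clean_plot'], 100)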
Most of the terms in the above plot are stopwords. These stopwords carry far less meaning than other keywords in the text (they just add noise to the data). I’m going to go ahead and remove them from the plots’ text. You can download the list of stopwords from the nltk library:
nltk.download('stopwords')
Let’s remove the stopwords:
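A minimal removal function, assuming the nltk stopword list downloaded above:

stop_words = set(stopwords.words('english'))

def remove_stopwords(text):
    # keep only tokens that are not English stopwords
    no_stopword_text = [w for w in text.split() if w not in stop_words]
    return ' '.join(no_stopword_text)

movies_new['clean_plot'] = movies_new['clean_plot'].apply(remove_stopwords)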
Check the most frequent terms sans the stopwords:
freq_words(movies_new['clean_plot'], 100)
Looks much better, doesn’t it? Far more interesting and meaningful words have now emerged, such as “police”, “family”, “money”, “city”, etc.
I mentioned earlier that we will treat this multi-label classification problem as a Binary Relevance problem. Hence, we will now one hot encode the target variable, i.e., genre_new by using sklearn’s MultiLabelBinarizer( ). Since there are 363 unique genre tags, there are going to be 363 new target variables.
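The encoding step in code, as a sketch; the object name multilabel_binarizer matches how it is referenced later in the article:

multilabel_binarizer = MultiLabelBinarizer()
multilabel_binarizer.fit(movies_new['genre_new'])

# one binary target column per unique genre tag
y = multilabel_binarizer.transform(movies_new['genre_new'])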
Now, it’s time to turn our focus to extracting features from the cleaned version of the movie plots data. For this article, I will be using TF-IDF features. Feel free to use any other feature extraction method you are comfortable with, such as Bag-of-Words, word2vec, GloVe, or ELMo.
I recommend checking out the below articles to learn more about the different ways of creating features from text:
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=10000)
I have used the 10,000 most frequent words in the data as my features. You can try any other number as well for the max_features parameter.
Now, before creating TF-IDF features, we will split our data into train and validation sets for training and evaluating our model’s performance. I’m going with a 80–20 split — 80% of the data samples in the train set and the rest in the validation set:
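A sketch of the split; the random_state value is arbitrary:

xtrain, xval, ytrain, yval = train_test_split(movies_new['clean_plot'], y,
                                              test_size=0.2, random_state=9)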
Now we can create features for the train and the validation set:
# create TF-IDF features
xtrain_tfidf = tfidf_vectorizer.fit_transform(xtrain)
xval_tfidf = tfidf_vectorizer.transform(xval)
Build Your Movie Genre Prediction Model
We are all set for the model building part! This is what we’ve been waiting for.
Remember, we will have to build a model for every one-hot encoded target variable. Since we have 363 target variables, we will have to fit 363 different models with the same set of predictors (TF-IDF features).
As you can imagine, training 363 models can take a considerable amount of time on a modest system. Hence, I will build a Logistic Regression model as it is quick to train on limited computational power:
from sklearn.linear_model import LogisticRegression
# Binary Relevance
from sklearn.multiclass import OneVsRestClassifier
# Performance metric
from sklearn.metrics import f1_score
We will use sk-learn’s OneVsRestClassifier class to solve this problem as a Binary Relevance or one-vs-all problem:
lr = LogisticRegression()
clf = OneVsRestClassifier(lr)
Finally, fit the model on the train set:
# fit model on train data
clf.fit(xtrain_tfidf, ytrain)
Predict movie genres on the validation set:
# make predictions for validation set
y_pred = clf.predict(xval_tfidf)
Let’s check out a sample from these predictions:
y_pred[3]
It is a binary one-dimensional array of length 363. Basically, it is the one-hot encoded form of the unique genre tags. We will have to find a way to convert it into movie genre tags.
Luckily, sk-learn comes to our rescue once again. We will use the inverse_transform( ) function along with the MultiLabelBinarizer( ) object to convert the predicted arrays into movie genre tags:
multilabel_binarizer.inverse_transform(y_pred)[3]
Output:
('Action', 'Drama')
Wow! That was smooth.
However, to evaluate our model’s overall performance, we need to take into consideration all the predictions and the entire target variable of the validation set:
# evaluate performance
f1_score(yval, y_pred, average="micro")
Output:
0.31539641943734015
We get a decent F1 score of 0.315. These predictions were made based on a threshold value of 0.5, which means that the probabilities greater than or equal to 0.5 were converted to 1’s and the rest to 0's.
Let’s try to change this threshold value and see if that improves our model’s score:
# predict probabilities
y_pred_prob = clf.predict_proba(xval_tfidf)
Now set a threshold value:
t = 0.3 # threshold value
y_pred_new = (y_pred_prob >= t).astype(int)
I have tried 0.3 as the threshold value. You should try other values as well. Let’s check the F1 score again on these new predictions.
# evaluate performance
f1_score(yval, y_pred_new, average="micro")
Output:
0.4378456703198025
That is quite a big boost in our model’s performance. A better approach to find the right threshold value would be to use a k-fold cross validation setup and try different values.
Create Inference Function
Wait — we are not done with the problem yet. We also have to take care of the new data or new movie plots that will come in the future, right? Our movie genre prediction system should be able to take a movie plot in raw form as input and generate its genre tag(s).
To achieve this, let’s build an inference function. It will take a movie plot text and follow the below steps:
Clean the text
Remove stopwords from the cleaned text
Extract features from the text
Make predictions
Return the predicted movie genre tags
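A sketch of such a function, reusing the helpers and fitted objects from above together with the 0.3 threshold that worked best:

def infer_tags(q):
    q = clean_text(q)
    q = remove_stopwords(q)
    # vectorize with the already-fitted TF-IDF vectorizer
    q_vec = tfidf_vectorizer.transform([q])
    # predicted probabilities per genre tag, thresholded at 0.3
    q_pred = (clf.predict_proba(q_vec) >= 0.3).astype(int)
    return multilabel_binarizer.inverse_transform(q_pred)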
Let’s test this inference function on a few samples from our validation set:
Yay! We’ve built a very serviceable model. The model is not yet able to predict rare genre tags but that’s a challenge for another time (or you could take it up and let us know the approach you followed).
Where to go from here?
If you are looking for similar challenges, you’ll find the below links useful. I have solved a Stackoverflow Questions Tag Prediction problem using both machine learning and deep learning models in our course on Natural Language Processing.
The links to the course are below for your reference:
End Notes
I would love to see different approaches and techniques from our community to achieve better results. Try to use different feature extraction methods, build different models, fine-tune those models, etc. There are so many things that you can try. Don’t stop yourself here — go on and experiment!
Feel free to discuss and comment in the comment section below. The full code is available here.
You can also read this article on Analytics Vidhya’s Android APP
Related Articles
|
https://medium.com/analytics-vidhya/predicting-movie-genres-using-nlp-46d70b97c67d
|
['Prateek Joshi']
|
2019-06-10 06:22:02.641000+00:00
|
['NLP', 'Python', 'Data Science', 'Text Analytics', 'Machine Learning']
|
4,926 |
How I went from broke college kid to traveling the world as a Digital Nomad
|
While in my second year of undergrad, I was feeling stuck and confused about my future. I was working part-time, making $150 per week while accumulating debt pursuing a degree that I wasn’t passionate about. I hoped to achieve big things in my life but was unsure what steps I needed to take to get ahead.
I had just transferred to a much bigger university than my previous school and didn’t know anyone. To make matters worse, I was living in an off-campus apartment with a random guy who didn’t even attend my school, and I had to commute to classes every day.
My situation was the result of poor planning on my part, but regardless of how I got where I was, I wasn’t living the exciting college life that I had hoped or expected I would be.
Although I eventually made friends and acclimated to my situation, I knew that I couldn’t stand to remain in college for the duration of my 4-year degree. I knew I would have to face the reality of my situation, sooner or later. The truth that a marketing major wasn’t something I was interested in pursuing as a career and that I was only in college because I didn’t know what else to do at that stage of my life.
Social pressure was the main reason I went to college in the first place, and staying made it appear to my parents and friends from high school that I was doing something with my life.
I knew that even if I managed to make it to graduation, I would be receiving a marketing degree that would be obsolete by the time I received it.
I had to switch my mindset from waiting for the right information to be given to me, to being an autodidact and seeking out the proper knowledge and education.
If I wanted to escape college and still succeed in my life, then I knew I needed a plan. Dropping out without a plan would likely result in me working minimum wage jobs and living at home, which wouldn’t be an improvement to college life by any stretch.
I knew I needed two things before I could drop out:
A marketable skill — most college dropouts that fail don’t have one.
I couldn’t enter the real world without something of value to offer. Without skills, I was only worth the hours that I was willing to work. Not to mention that I’d be replaceable by anyone willing to do the same work as me for less.
Marketable skills are what colleges and universities are supposed to be providing to their students, but they often fail to do so. The good news is that it's not the 1800s or even the 1900s anymore — in today's world, most skills can be acquired online for free. Unless you're studying to become a doctor — you probably shouldn't learn how to perform surgery from YouTube videos.
The best part about the internet is that you can use it to learn skills and to find people who are willing to pay for those skills.
The majority of people hiring freelancers on the internet don’t care about degrees; they care about results. I knew that if I could build a portfolio of work around a skillset, then it would be just as good or better than earning a college degree.
|
https://dansapio.medium.com/how-i-went-from-a-broke-college-kid-to-traveling-the-world-as-a-digital-nomad-21c0f213faa1
|
['Danny Sapio']
|
2019-11-03 22:42:22.487000+00:00
|
['Travel', 'Freelancing', 'Entrepreneurship', 'Graphic Design', 'Digital Nomads']
|
4,927 |
Making big moves in Big Data with Hadoop, Hive, Parquet, Hue and Docker
|
Making big moves in Big Data with Hadoop, Hive, Parquet, Hue and Docker
Jump and run in this brief introduction to Big Data
What data at most big companies in 2020 looks like. Seriously.
The goal of this article is to introduce you to some key concepts in the buzzword realm of Big Data. After reading this article — potentially with some additional googling — you should be able to (more or less) understand how this whole Hadoop thing works.
So, to be more precise in this article you will:
Learn a lot of definitions (yay)
Spin up a Hadoop cluster with a few bells and whistles via docker-compose.
Understand Parquet files and how to convert your csv datasets into Parquet files.
Run SQL (technically HiveQL but it is very similar) queries on your Parquet files like it’s nobody’s business with Hive.
Also, you are expected to have some basic knowledge of Docker & docker-compose, running Python scripts etc. — nothing crazy, but it is better if you know this in advance.
If you are already using an alternative to Hadoop’s ecosystem — cool — this article is more geared towards readers that have to get familiar with Hadoop due to work, university etc. and is just one “solution” to the Big Data conundrum with its pros and cons respectively.
Big Data: this is the big one. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time. Now you might ask the simple question (and a lot of people have): “How big is big data?” Well, to be frank that is a very hard question and depending on how fast technology moves, what one considers “big” data today might be “small” data tomorrow. Nonetheless, the definition above is pretty timeless since it refers to sizes beyond the ability of commonly used tools — this is your reference line; so in 2020 let’s bite the bullet and say the following is true: when you start dealing with double digit TB datasets in your DBs and above, you are probably hitting the limits of some of the more run-of-the-mill tools and it is maybe time to look into distributed computing and potentially this article.
Hadoop: a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. The 3 most important core modules you should know about are:
(Storage) HDFS: a distributed file system with high-throughput access and redundancy (copies of all files are maintained across the cluster) at the same time
(Resource Management) YARN a.k.a. Yet Another Ressource Negotiator: a framework for job scheduling and cluster resources management e.g. which nodes are available etc.
(Processing) MapReduce: a YARN-based system for parallel processing of large datasets. This is the main algorithm used to seamlessly distribute computational tasks across the nodes of the cluster. You can read upon the origins of MapReduce around the Web. Popular alternatives are TEZ and Spark, which were developed later for processing data even more efficiently.
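To make the MapReduce idea concrete, here is a toy, pure-Python word-count sketch of the map, shuffle, and reduce phases; this is just the concept, not Hadoop code:

from collections import defaultdict

lines = ["big data is big", "data is everything"]

# MAP: emit (word, 1) pairs from each input line
mapped = [(w, 1) for line in lines for w in line.split()]

# SHUFFLE: group values by key
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# REDUCE: sum the counts per word
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)  # {'big': 2, 'data': 2, 'is': 2, 'everything': 1}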
Parts of the Hadoop Ecosystem in one diagram. Focus on HDFS, YARN, MapReduce and Hive for now.
Hive: a data warehouse software that facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. So, basically Hive sits on top of the aforementioned Hadoop stack and it allows you to directly use SQL on your cluster.
Hue: an open source SQL Assistant for Databases & Data Warehouses i.e. an easy GUI for looking into HDFS, Hive etc. Very handy for beginners! It is maintained by Cloudera and you can find it on GitHub.
Parquet: a columnar storage* format available to any project in the Hadoop ecosystem. You can read why this is a good idea with big data sets in the explanation below. Again, there are a lot of alternatives but this technology is free, open-source and widely used in production across the industry.
*columnar storage: in normal row-based DBs e.g. MySQL you are storing data in rows, which are then distributed across different blocks if you cannot fit your data on one block. Now if you have a lot of columns and rows in your data distributed across multiple blocks, things can get pretty slow. Which is why you could instead store each column in a separate block. In that case you can access all the data of a column by just accessing one block. There is a great longer explanation on the concept here. As AWS puts it (unrelated to Parquet but still true): “Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk.” Here is also another comparison to CSV, which shows how much storage you can save and what kind of speedups you can expect.
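As a tiny, hypothetical pandas illustration of the practical upshot (assuming pyarrow or fastparquet is installed; the file and column names are made up): with Parquet you can load just the columns you need.

import pandas as pd

df = pd.DataFrame({'airport': ['LHR', 'LGW'], 'avg_delay': [12.3, 8.1]})
# write columnar Parquet; compression=None matters later for Hue previews
df.to_parquet('flights.parquet', compression=None)

# read back ONLY one column; the other columns are skipped on disk
delays = pd.read_parquet('flights.parquet', columns=['avg_delay'])
print(delays)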
Just a great Jaws reference and this is here solely to brighten your mood if the article is already too much.
Building something
Ok. Time to build something! Namely a Hadoop cluster with Hive on top of it to run SQL queries on Parquet files stored in HDFS all the while visualizing everything in Hue. Does this sentence make more sense now than in the beginning of the article? Cool.
There are a lot of ways to do this and Hadoop is famous for operating computer clusters built from “commodity hardware” but since this is just a learning exercise, it is a bit easier to quickly spin up a small Hadoop cluster with the aforementioned stack in Docker with docker-compose — you can of course do this in Kubernetes too but it is way beyond the scope of this article. This setup here will not be even close to production but then again this article should merely serve as a gateway for your big data journey.
Here is the repo for this article:
A nicer view of the docker-compose file in the repository, which should roughly sketch out the architecture of the project.
Some highlights from the implementation in Docker
This was not quite as straightforward as initially thought, so here are some pointers from the development in case you want to customize this further. The focus of this article is not to give you a crash course in Docker and docker-compose either, so this section is brief and only highlights some places where you could get stuck.
If you want anything to work with Hue, you need to figure out how to override Hue's default configs by mounting the hue-overrides.ini file (you will find it in the repo and the override in docker-compose). Obvious, right? Wink, wink.
In hue-overrides.ini you should be looking at: [[database]] => this is the internal Hue DB, [[hdfs_clusters]] => connecting to HDFS to view files in Hue, [[yarn_clusters]] => setting up YARN and [beeswax] => connecting to Hive to run SQL queries in Hue.
If you do not have this line thrift_version=7 in hue-overrides.ini, Hue will refuse to connect to the Hive (=Thrift) Server because it is defaulting to a Hive Server version that is too high. This took hours.
If you use Hue’s default SQLite DB, you will get the message “database locked” when you try to connect to Hive => that is why there is a db-hue Postgres DB in the docker-compose file. Something about SQLite not being suitable for a multi-threaded environment as described here. Cloudera should work on their error messages…
POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD in hadoop-hive.env can be used with the official postgres Docker image to directly create the DB user when you spin up your container. Check.
Watch out with your 5432 port: avoid exposing it multiple times, since Postgres runs more than once in this project (once as a metastore for Hive and once as a DB for Hue).
tl;dr on the next steps
Ok. Short summary on what will happen next for the impatient engineers:
1. Start Hue, Hive and your Hadoop nodes with docker-compose up
2. Download a .csv dataset from Kaggle & convert it with the supplied Python script
3. Import said Parquet file to HDFS via Hue & preview it to make sure things are OK
4. Create an empty Hive table with the Parquet file schema after inspecting the schema with the parquet-tools CLI tool
5. Import the file from HDFS into Hive's table
6. Run some SQL queries!
Starting the cluster and launching Hue with docker-compose
Well, since everything is already setup, just clone the repository on your computer and type docker-compose up in your terminal. That’s it. Then go to localhost:8888 and you should (after setting up the initial password for Hue) see this screen:
This screen shows you your Hadoop cluster’s HDFS while the sidebar shows you the DB tables in Hive’s metastore — both of which are empty in this case.
Uploading Parquet files on HDFS and previewing them in Hue
When trying to open some (quite a few) Parquet files in Hue you are going to get the following error message:
“Failed to read Parquet file”
and in your docker-compose logs:
NameError: global name ‘snappy’ is not defined
Turns out that Hue does not support the snappy compression that is the default for a lot of Parquet converting tools like pandas. There is no workaround for this except for recreating your Parquet files (if they are using snappy). Worst UX ever on Cloudera’s side…
In the GitHub repository you will find a parquet_converter.py, which uses pandas and specifies the compression as None, so it does not default to snappy, which would subsequently break Hue. This means you can take any dataset from e.g. Kaggle in .csv format and convert it to Parquet with the provided Python module.
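The converter is not reproduced in this extract; its core is small, and a minimal sketch (the function name and argument handling are placeholders, not necessarily what the repo does) could be:

import sys

import pandas as pd

def convert(csv_path, parquet_path):
    df = pd.read_csv(csv_path)
    # compression=None instead of the default snappy, so Hue can preview it
    df.to_parquet(parquet_path, compression=None)

if __name__ == '__main__':
    convert(sys.argv[1], sys.argv[2])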
At this point — if you are unafraid of the CLI — the best suggestion is for you to forget Hue and just use Hive and HDFS directly for your Parquet files. But if you stick around with Hue like me, you can see a UK Punctuality Statistics report from Kaggle that was converted with the Python script mentioned above and then uploaded as a file:
The File Browser in Hue when you click on the successfully imported Parquet file. You can access the File Browser from the dark sidebar on the left.
Creating a Hive table from your Parquet file and schema
After seeing that your data was properly imported, you can create your Hive table. For this you should run the following command in your command line in the folder where you converted your file (probably /your_github_clone/data):
parquet-tools schema 201801_Punctuality_Statistics_Full_Analysis.parquet
This will output the schema you need to create the table in Hue (UTF8 = string in Hue):
message schema {
optional binary run_date (UTF8);
optional int64 reporting_period;
optional binary reporting_airport (UTF8);
optional binary origin_destination_country (UTF8);
Time to create your table:
The preview of the new table you created. Go to the DB icon in the dark sidebar and create a new table manually with the schema as described above. Afterwards, click on the import button in the dark sidebar and import your Parquet file into the empty table. You should then see the above screen.
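If you would rather create the table from the query editor than through the manual form, a rough HiveQL equivalent would be the following; it covers only the four columns from the schema excerpt above (the real file has more), so treat it as a sketch rather than the exact DDL used:

CREATE TABLE `2018_punctuality_statistics` (
  run_date STRING,
  reporting_period BIGINT,
  reporting_airport STRING,
  origin_destination_country STRING
)
STORED AS PARQUET;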
Running SQL queries
Running SQL queries was promised and it shall be delivered. The first icon in Hue's sidebar is its query editor.
What if you wanted to find out all the flights from Poland that had an average delay of more than 10 minutes?
SELECT * FROM `2018_punctuality_statistics` WHERE origin_destination_country='POLAND' AND average_delay_mins>=10;
The auto-complete feature in the editor is fantastic, so even if you are a SQL novice, you should be able to play around with your data without too much effort.
Finally.
Time to let go
Dear reader, sadly you have reached the end of this article. Please write in the comments below if you feel like this journey should have a sequel. So, to recap you learned how to run a Hadoop cluster with Hive for running SQL queries all the while visualizing everything in Hue with docker-compose. Not bad.
This is of course a very very simplified look into what is possible with Hadoop but chances are that you are just starting out in the field, so give yourself some time and build on this knowledge and infrastructure. Furthermore, there are great online courses out there that you can look into next.
Looking into 2020 and beyond
Now, if you have been sort of listening in on Hadoop's ecosystem over the last few years, you would have seen that the two biggest players on the market — Cloudera and Hortonworks — merged around a year ago amid a slowdown of the Hadoop big data market. Add the fact that people seem way more interested in Kubernetes than in older Hadoop-specific technologies like YARN for resource management and orchestration, plus the fast adoption of DL frameworks like PyTorch, and you sort of have a perfect storm forming for the ageing Hadoop stack. Nonetheless, projects like Apache Spark are chugging along by e.g. introducing Kubernetes as an alternative to YARN. Exciting times for the ecosystem!
|
https://towardsdatascience.com/making-big-moves-in-big-data-with-hadoop-hive-parquet-hue-and-docker-320a52ca175
|
['Nikolay Dimolarov']
|
2020-08-06 08:13:18.985000+00:00
|
['Big Data', 'Analytics', 'Hive', 'Hadoop', 'Docker']
|
4,928 |
What Is Amazon FBA and How Does It Work? [COMPLETE GUIDE]
|
|
https://medium.com/blog-do-dinheiro/o-que-%C3%A9-e-como-funciona-o-amazon-fba-guia-completo-b2b3771c0088
|
['Bruno Cunha']
|
2019-12-18 16:57:16.453000+00:00
|
['Ganhar Dinheiro Internet', 'Ganhar Dinheiro', 'Amazon', 'Ganhar Dinheiro Online', 'Amazon Fba']
|
4,929 |
In search of framework
|
When it comes to execution, it is not just the state-of-the-art technology or the people; it is THE HOW (a structured approach equipped with methods, techniques, and frameworks) that makes all the difference.
With unabated enthusiasm for analytics and data driven decision-making (DDDM), I extensively looked around for information on existing frameworks for moving from data to decisions and on how data analytics projects and programmes are managed.
To my total surprise, the search results on this topic were pretty disappointing and unimpressive, to say the least. Whatever happened to the information deluge?
There are two possibilities for this though -
1. We may well be in the very infancy phase of it (data to decisions).
2. Of course, my search skills can be questioned.
Before I dive into my findings on frameworks from known players in the arena, a quick definition — what is a framework?
A framework is a real or conceptual structure intended to serve as a guidance. It is defined as an underlying set of ideas, principles, agreements or guidelines that provide a ‘roadmap’ and supportive structure for achieving a specific goal.
Top 3 -
1. By Gartner
The only framework with a holistic approach to an analytics programme, spanning strategy and performance. Gartner uses business analytics as an umbrella term to represent BI, analytics, and performance management, combining them with data-driven decisions. (The rest is covered in a separate blog.)
2. By PwC —
Four aspects of the framework -
Discovery — define the problem, develop a hypothesis, collect and explore the data.
Insights — perform the data analysis.
Actions — link insights to actionable recommendations.
Outcomes — execute the plan and review the outcomes that have transformed the business.
3. BADIR by Aryng
The data-to-decision journey is a five-step process, from determining business questions to recommendations. BADIR is an acronym which stands for:
Business Question
Analysis Plan
Data Collection
Insights
Recommendations
It can’t get any simpler than this; the titles of all 5 steps are indicative and self-explanatory, so allow me to skip unnecessary detailing for its own sake.
The top 3 were chosen purely on the basis of their ease of use and their flexible approach to incorporating different methods and techniques (as highlighted by the articles).
To summarise, in all honesty, a framework is neither a silver bullet nor a guarantee that the project or analytics effort will be fail-proof.
But the lack of any framework can leave decision makers either driving in the wrong direction or drowning in the data.
|
https://medium.com/datacrat/in-search-of-framework-f707c3f2fca6
|
[]
|
2019-01-03 15:42:14.470000+00:00
|
['Big Data', 'Data Analytics', 'Data Driven Decisions', 'Data Science', 'Business Analytics']
|
4,930 |
Are You Substituting Real Life For Online Life?
|
Are You Substituting Real Life For Online Life?
How easy it is to fall into the mindless scrolling routine and not even notice
One of my favourite parts about social media is how easy it is to get lost in all it offers, especially when you’re in dire need of a distraction. All you’re required to do is scroll through and it engulfs you. You’re in a trance, wandering through the land of tweets, pictures, stories and best of all–memes. It’s working. You’re laughing, it’s distracting you from your problems; it’s great. Until it’s not. But you don’t realize this for a while. Most people don’t.
I’ve found that most of us don’t know who we are or the reason we do the things we do. There’s this set image we have of ourselves, the person we believe we are (which is most likely the person we were at a point but aren’t anymore). We haven’t processed the fact that we’ve grown and changed into a different person over time–sometimes negatively. This is the reason why when other people tell us we are at fault or have a flaw, it’s tough to admit. We assume if it were true, we would have noticed. We feel we are fully in control of ourselves and unsusceptible to any form of change we don’t condone.
When people spoke about their unhealthy relationship with social media, I scoffed and thought “can’t be me.” Like most people do, I believed I was immune and those kinds of things don’t happen to me. That I’m too smart to let it happen. So I was indifferent about my relationship with social media, never giving it much thought.
Twitter is my most used platform, so when I refer to social media, it takes up ninety percent of that space in my life. At the start of the year, I went on breaks that I planned to be a week long but that ended up barely lasting three days. Every time Twitter triggered any negative emotion, I’d deactivate my account only to be back in a week right where I started, mindlessly scrolling and falling deeper into this virtual world. It got worse every time I came back. I knew I needed to distance myself, but I didn’t know how to do it.
It took some time, but I finally saw I had become so attached that I had substituted my actual life for it. I was in a “social media induced reverie” where I was more concerned with maintaining my image online than dealing with the life in front of me. The way I looked and portrayed myself there mattered more. I ended up spending more time online than I needed to.
My mental health was suffering. It gave me anxiety; I compared myself to random people I didn’t know and wished I was living their lives–they seemed to have things under control. Social media is supposed to be fun, but it wasn’t anymore. It was an act I was trying to keep up with but couldn’t because of how quickly things change on there. It kept outrunning me; I kept falling short, and it exhausted me. I asked how much of myself I will sacrifice for a world that existed on-screen.
After my previous failed attempts, I tried another social media break but this time with a purpose and no set time frame so as to reduce the pressure — this time, for as long as I could manage. My goal was to come out of it less attached, dependent and influenced by social media. To stop being obsessed with the image others had of me in their minds and how to tweak it in my favour.
I stayed away from Twitter for a month and two days. I was so proud of myself because not too long ago, I couldn’t handle three days without a relapse.
The best thing during this break was the peace of mind I experienced. There was no image “upkeep” to worry about. No one to compare myself to. I felt lighter, like this baggage I had been dragging around wasn’t with me anymore. It was amazing. The thing about a social media hiatus is that it forces you to do things you keep putting off, because all the time you complained you never had is suddenly there. So, I tried to create a habit out of reading an article a day; I immersed myself in books, movies and music that I loved and worked on improving my writing.
I saw that the more I stayed away, the more I wanted to stay away. I never thought I’d reach a point in my life where I would consider leaving social media for good–especially Twitter. But I started contemplating never going back because I realised it took way more than it gave me and memes weren’t a good enough reason to stay.
That said, I’m not so ignorant and dense as to believe that Twitter is the sole cause of all my problems. They existed prior to Twitter, but it just happens to be a trigger and contributor to the bigger issue of my insecurities, which I need to deal with personally. Even if I leave Twitter behind, it wouldn’t take long for something else to trigger the anxiety and misbeliefs in me and get me struggling with the issues I thought I left behind when I gave up Twitter.
After pondering on it for a long time, I decided not to give it up. I’ve learnt a lot from Twitter, made amazing friends and gotten a lot of inspiration. It was the way I engaged with it and let it get to me that was the problem. So the plan is to use it in moderation, to my advantage and surround myself with people and things that inspire me not things that bring me down and make me depressed. And more importantly to take the space to work on myself away from all social media.
But saying these things and doing them are different. Going back, I’ve found it’s easy to get sucked back in and fall into the mindless scrolling routine again. So, I have to give myself time to work out the best combination, one that won’t leave me feeling down. I know the journey isn’t linear and I don’t know how much time it’ll take. But I know I don’t want to spend my time on there trying to be a perfect person for the benefit of strangers who couldn’t care less.
|
https://medium.com/an-injustice/are-you-substituting-real-life-for-online-life-f5b2c900748
|
['Fatima Mohammed']
|
2020-05-06 15:13:47.618000+00:00
|
['Social Media', 'Mental Health', 'Addiction', 'Insecurity', 'Culture']
|
4,931 |
No Money, No Problem — 6 Ways to Get Capital for Your Startup
|
No Money, No Problem — 6 Ways to Get Capital for Your Startup
Some common ways to approach scaling your business
If you are considering raising extra money for your startup, here are 6 ways to get investments and turn your business into a multi-billion dollar corporation!
I spent the last 6 years in Silicon Valley working for companies like Yelp and Airbnb, being surrounded by founders on one side and Venture Capital investors on the other. The Silicon Valley dream is to start a company which:
Has a mission to change the world
Has the potential to grow into a multi-billion-dollar company
Impacts the lives of millions (or, ideally, billions) of people
Has the potential to raise impressive funding rounds from the most reputable VC funds (like Sequoia, a16z, or Founders Fund).
Annie Spratt at Unsplash
However…
Getting money from VCs is only one of the ways to get funding. Let’s explore it further.
1. Venture capital funds. There are funds that focus on a specific company stage (early stage, Series A, B, etc.) and multi-stage funds that diversify their portfolios across companies’ tenure.
Venture Capital investors are making a lot of bets, which is expensive. Funds are looking for a high ROI, to cover their previous losses and to get a return to the LPs who invested in the fund. For example, if you had invested $10,000 in Uber in 2010 it would be worth $127.5M today (!).
VCs not only give you money; ideally, they also give you support and a network. Raising capital accelerates your growth and gives you an opportunity to conquer the market quicker.
But keep in mind that taking venture capital means signing up to scale and grow rapidly, as VCs are looking for some exit for your startup within the 10 year period, which might not be right for every team and business.
It’s important not to disillusion yourself with VC money. Sometimes venture capital is necessary for your success as a startup. But sometimes it might be a trigger on the road to failure.
Startup team at Unsplash
2. An alternative to VCs is bootstrapping your business. You don’t need venture capital to get started in most industries to solve a real problem for customers and charge money for it. Here are three ways to think about this:
Bootstrapping a startup means starting lean and without the help of outside capital. It means continuing to fuel growth internally from cash flow produced by the business. Quite a few famous companies opted-in for this path and scaled billion-dollar ventures. Spanx, GoPro, Github, just to name a few. You are forced to immediately start thinking about how your business will be making money. It’s sometimes disappointing to see that profitability is almost the last thing some companies think about — they comfortably continue burning through VC funding and might end up dying without figuring out how to make their business model work.
Bootstrapping certainly has pros like staying in control and dictating the direction of the business, but it also comes with certain cons, such as lack of capital to grow when you really need to, no access to the support network that could come through going to an accelerator or raising VC money, and of course, the risk of burning through your savings.
Danielle Maclnnes at Unsplash
3. Accelerators are also a path to get funding and accelerate your startup; some of the most famous accelerators include YC, 500 Startups, and Techstars.
An accelerator is a 3 to 6 months program meant to accelerate your startup. Accelerators are not technically free, depending on the way you look at it — they offer you funding for a % of your company.
Accelerators typically offer seed money in exchange for equity in the company. This may range from $10,000 to over $120,000. For example, YC takes 7% equity for 120k. Accelerators can be great. They provide a high-pressure environment, peer support, and mentor network support to help you grow your company. And usually, accelerators have a “demo day” where you would present your startup to get noticed by potential future investors.
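To make the accelerator math concrete, here is a tiny back-of-the-envelope sketch in Python (the variable names are mine; the numbers come from the YC example above):
# Implied post-money valuation when an accelerator buys equity:
# valuation = investment / equity_fraction
investment = 120_000      # the $120k check from the YC example
equity_fraction = 0.07    # the 7% equity YC takes
print(f"Implied valuation: ${investment / equity_fraction:,.0f}")
# Prints: Implied valuation: $1,714,286
In other words, accepting those terms values your company at roughly $1.7M, which is worth weighing against the mentoring and network the program provides.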
It’s important to have an action plan and a clear “why” for joining an accelerator — you must be committed to the idea, understand that you are giving away equity, and ensure that this is the right accelerator for your specific startup: look at the team running it, make sure it’s the right set of mentors and the program can give you the right support.
Faria Anzum at Unsplash
4. Another way to get funding for your startup is to turn to angel investors.
Angel investors are accredited individuals who support founders and usually write checks between 5 to 100k. Angels primarily invest in pre-seed and seed companies and are focused on qualitatively evaluating an opportunity: the founding team, their interest in the space, etc.
5. Crowdfunding has seen a rise in the last decade. Oculus, Larian Studios, Pebble all had their kickstart through crowdfunding. I believe we will see more and more influencers crowdfunding their ideas — they already have the needed reach to get their projects funded.
Crowdfunding could be great if you’re trying to prove a concept or test demand. It goes without saying, though, that platforms like Indiegogo and Kickstarter mostly offer a payment collection gateway: the success of the campaign still hugely depends on the work you put into spreading the word about it.
6. Last but not least, I want to cover grants. A startup small business grant is monetary funding from the government or an organization that is given in order to help small companies and nonprofits succeed in building and growing their business.
Unlike loans, you don’t have to pay this money back. It may come with rules that dictate how you can spend it.
To conclude, there are a lot of paths you can take to get capital for your business. The most important thing is to optimize for your business and your goals: how you want to grow the business, what you want the exit to look like, and to base your decisions on these factors.
P.S. I also made a YouTube video about this topic. You can check it out here: https://www.youtube.com/watch?v=kYeTtcER99o
|
https://medium.com/startup-grind/no-money-no-problem-6-ways-to-get-capital-for-your-startup-ce38268f9568
|
['Luba Yudasina']
|
2020-09-30 16:21:57.575000+00:00
|
['Startup Lessons', 'Startup', 'Venture Capital', 'Fundraising', 'Scaling']
|
4,932 |
BubbleTone: New Version Released
|
BubbleTone is a new generation messenger with a number of useful features. Customized for the needs of mobile operators, the messenger provides additional revenue streams on voice calls and SMS messages from the app and other telecom services.
The Bubbletone team is glad to announce that a new version of our messenger has been released on October 26.
We have fixed a number of software bugs and implemented a number of improvements to make it easier to use the application. So, what has been changed, exactly?
Interface
The first/main tab now contains a list of calls that a user has made, answered or missed;
New icons for the “make a call within the Bubbletone network” (peer-to-peer calls) and “make a call to PSTN” options;
The option “save gender information” in the “personal information” section in a user’s personal account now works properly;
The last message sent or received by the user now stays visible, even while the user is typing a new message with the pop-up keyboard;
There are a number of Android phones that have the following defect: their proximity sensors do not work properly. As a result, when calls came in, the screens of these phones would go black. This problem has been solved;
The settings menu includes a section called “additional info”. Among other information, this section includes the user’s Bubbletone number and balance. The Bubbletone number is the number which the user receives after they have signed an agreement with an operator. In the previous app version, this information disappeared from time to time. This problem has been solved.
SMS messages
In the previous version, if a user had a large number of text messages stored on their phone, the app worked slowly. In the new version, the app works quickly regardless of the amount of messages stored on the phone.
Distribution of broadcast messages
Operators are now able to receive notifications on whether their users have received and read the messages broadcasted to them.
Audio and video connection
We have upgraded codecs. As of now, we use VP8 codecs to provide better-quality audio and video connections, provided the users have a high-bandwidth Internet connection;
Previously, when a user made a call, they had to listen to a plain ringback tone. As of now, originators of calls can enjoy listening to music while placing a call, while the destination terminal is alerting the receiving party.
Messages
Previously, after starting a Hide-Chat — a secret chat, hidden from prying eyes — the user could not make the chat visible in case there was no need in hiding it anymore; Users can now make Hide-chats visible or invisible again whenever they need to;
The development team has built the final version of Smoke-Chat — a solution that prevents secret messages from being forwarded or copied. After a message has been read, it disappears (it is shown burning down, and then the smoke clears) so the recipient cannot save, copy or forward it;
In the previous version, to send a video to other users, a user was able to record a video with the time limit of 30 seconds. As of now, there is no time limit. Users can record videos that last as long as they need.
Settings
The additional section “Notifications and Ringtones” has been added to the settings menu. This section allows users to choose ringtones for notifications when they receive messages and calls, switch vibration on/off and send notification signals to headphones.
Our development team continues to work on the application. In the coming future, more useful features will be added. In November, our team will also start to actively promote the app among mobile operators. More news will be coming soon. Stay tuned!
The messenger is available on the App Store and Google Play:
https://itunes.apple.com/us/app/bubbletone/id1298142945?mt=8
https://play.google.com/store/apps/details?id=com.countrycom.bubbletone
|
https://medium.com/bubbletone-blockchain-in-telecom/bubbletone-new-version-released-a2788646cdaf
|
['Bubbletone Blockchain In Telecom']
|
2019-05-15 09:57:59.443000+00:00
|
['Android', 'Mobile Application', 'iOS', 'Messenger', 'Mobile App Development']
|
4,933 |
Improving Your Mental Health Begins By Talking To Other People
|
Improving Your Mental Health Begins By Talking To Other People
You don’t need to deal with your problems alone.
Photo by Julian Hanslmaier on Unsplash
I used to cry in my bedroom for several hours until the early hours of the morning. It was as if nobody understood me or could genuinely relate to what I was going through.
I felt overwhelmed. My anxiety was at an all-time high. Every day, my mental health was worsening. The quality of my relationships was deteriorating as I felt like I couldn’t confide in anyone. I didn’t want to deal with my problems alone. But at the time, I had no choice.
That’s when I realized something that changed my life forever. Healthy relationships are built on a mutual sense of trust and understanding. So when you feel like nobody else can understand what you’re going through, the quality of your relationships will inevitably suffer as a result.
Which is why it’s essential to spend more time with your friends, family, and spouse. Strong relationships are a vital element of a happy life. Our loved ones support us during moments of adversity and help us keep everything together while our entire life feels like it’s falling apart.
|
https://medium.com/publishous/improving-your-mental-health-begins-by-talking-to-other-people-336d7f010a9d
|
['Matt Lillywhite']
|
2020-11-23 15:09:43.125000+00:00
|
['Relationships', 'Life Lessons', 'Mental Health', 'Self Improvement', 'Loneliness']
|
4,934 |
The LMS: The Moral of the Story
|
After researching this market for over 6 years, I’ve decided to put my thoughts on paper… I mean, today’s version of paper: an online blog. If you are interested in the future of education technology and how it will impact students, faculty and administration, then you must be interested in where the Learning Management System market is going. I’ve decided to break it down by phases, from the origins to the future.
From paper to online: the early players
Early players in the LMS market created platforms that essentially brought courses from being completely offline with paper and pencil, to online. Resources were made available on a faculty site, grades were put online and announcements were possible. What the early players did best was make a file system that was secure for the faculty to upload resources.
The cons…
These systems evolved in a way that other similar technologies have evolved. They became loaded with features upon features upon features. And then that file system became loaded with folders, and folders. And then some more folders…
Faculty and student reactions to using this poor and ugly system
The system got so out of touch, slow, and tedious that even the simplest task of uploading a file or an assignment became ridiculously time-consuming. Wasn’t this system supposed to make it easier to access files instead of printing things out for classes? It was no longer simple, and some faculty decided enough was enough: they moved to other platforms or got rid of the system entirely. This is where the market started to shift and a few new players arrived.
The incremental system
As a phase II of the LMS market, Canvas by Instructure has done good things but mainly has been incrementally better than the previous legacy systems. Canvas by Instructure is a cloud based learning management system with additional grading and analytics features.
The platform is, in my opinion, a slightly better face on what Blackboard, D2L, Moodle, or any of the other legacy players have created. What Canvas did best was bring the LMS to the Cloud. They tout a better gradebook but, in actuality, it’s pretty much the same as any other legacy LMS. I call them the Blackboard 1.5.
What really differentiates Canvas from the other players is their marketing. They are good at selling and making things sound much better than they actually are in my opinion.
And with that marketing, they were able to do something no one thought was possible — take over Blackboard’s market share. From their founding in 2008 to 2010, they had zero contracts. But from their first contract signing in 2010 to 2015, they went from zero to over half a billion in value. It goes to show that you don’t have to do much to overtake the competitors in the market. If you have a slightly better product and good marketing, you can win.
The next generation is social
It’s clear from the technology we use, like Facebook, Twitter and LinkedIn, that social media has taken over the modern interface. Messaging apps and social media are norms in the real world, but in education, it seems we are still around 10 or 15 years behind the game.
What I expect is a move to social media, mobile and apps within the LMS. It’s normal for students and faculty who utilize these applications on a daily basis outside of education. Why not have them in our educational lives?
This is quite similar to a few revolutions in technology over the past decade. Here’s some examples that can show how markets have previously been disrupted and how it relates to education.
iPhone vs Blackberry
Blackberry was the market leader of the phone market at one time. What Blackberry did best was provide an all-in-one solution with a great keypad and messaging app. The problem was that they built everything in-house. Meaning, it was very difficult to integrate applications from outside Blackberry itself. And they had higher costs than newcomers like Apple, since they were the ones developing all the apps rather than the community doing it for them.
The phone of apps
Apple revolutionized the market not with the best core applications but with the combination of the core applications it had (i.e. Phone, Messaging, Safari, Music, etc) + Touch Design + App Store. The App Store is really what took the iPhone to the next level and conquered the phone market. The App Store allowed for any individual to customize their phone with applications they wanted. It allowed for a community to develop and make the iPhone that much better, every time. And guess what, Apple didn’t have to pay for this to occur. They could literally sit and watch as their ecosystem grew like wildfire. Before you knew it, they had many more apps than Blackberry could ever have.
Apple created a new category. An app phone.
Airbnb vs Hotels
You’d think that an industry that has been around for a very long time would be impossible to break into. I mean, that’s what the investors say…
Your home, anywhere
But with Airbnb, what they did best was prove the naysayers wrong. Hotels are often expensive and do not showcase a local vibe that many want when they are traveling. Airbnb provided a low cost and fun experience to gain the local culture of a community.
For the first time in this market, a company quickly took over a traditionally stagnant industry. And by doing so, they created a new category. The sharable home.
Uber vs Taxis
Hailing a cab will soon be a thing of the past
Anyone 5 years ago would have said that taxis were just annoying but we just ‘had to deal with them’. The guys at Uber thought differently and said: why not shake this up and make it mobile? Basically, what Uber did was make the taxi go mobile, making it more comfortable and convenient for drivers and passengers to get together.
They created a new category. The mobile taxi.
Moral of the story
All of these markets are breakable. Just because a market has been around for awhile does not make it impossible to break into. I firmly believe that the LMS market will be disrupted as well with a new category. The social learning platform.
Where’s the community in online education today?
Some things yet to be done in this LMS market:
Connecting the campus community to the LMS
Opening up the community to create applications into a single platform
Having a mobile-ready platform (not just an app as a checkbox for an RFP)
And, a framework based on a social network (i.e. community)
These are the fundamental differences in the LMS of the future. Imagine a platform that can make that happen. It’s just a matter of when this happens, not if.
|
https://medium.com/notebowl/the-lms-the-moral-of-the-story-a11d8f601d63
|
['Andrew Chaifetz']
|
2017-08-23 02:07:06.219000+00:00
|
['Startup', 'Edtech', 'Education Technology']
|
4,935 |
More Americans Willing to Take COVID Vaccine
|
PUBLIC OPINION AND COVID VACCINES
More Americans Willing to Take COVID Vaccine
Public confidence campaigns seem to be working
In a recent USA TODAY/Suffolk University Poll, 46% of Americans say they will take the vaccine as soon as they can. Compared to a USA TODAY poll in late October, that is close to double. Also in the new poll, 32% say they will wait for others to get the shots before they do so themselves.
This is great news, and it is indicative of the success of public campaigns by physicians, nurses, other healthcare leaders as well as public officials — including the esteemed Dr. Anthony Fauci — to boost confidence in the vaccines.
I myself did the same thing (see below), and thank God, so far the vaccine has been very well tolerated by me and my colleagues.
The vaccine is only truly effective at achieving herd immunity if a majority of the population gets the vaccine. So, the more people take it, the faster we can finally see an end to this nightmare.
I am very encouraged by this new poll finding, and I encourage everyone who gets their vaccine — especially physicians and nurses on the front lines — to publicize their vaccination and get the word out.
|
https://medium.com/beingwell/more-americans-willing-to-take-covid-vaccine-5ec4dfece751
|
['Dr. Hesham A. Hassaballa']
|
2020-12-23 21:54:25.004000+00:00
|
['Covid 19', 'Vaccines', 'Public Health', 'Medicine', 'Science']
|
4,936 |
What Is GitHub
|
Introduction
Humans have always thought of ways to connect and to work together as a team, over the years we have created new ways to connect virtually allowing us to work together without having to worry about the distance. Applications like Facebook and Twitter are now indispensable for us. Almost all video games allow you to play online with friends or strangers from all over the world. GitHub is another one of these pieces of software that allow people to connect and work together in a more efficient, fast, and dynamic way. In this blog, I will explain how GitHub works and the different things that we can use it for.
How it works
GitHub, in simple terms, is just a space (or a cloud, if you will) where you can save, share, and edit files simultaneously with someone else. Every file or project that you save to GitHub is called a repository, or “repo”.
Imagine that you work at Netflix as a product manager and one day you realize that you want to make changes to the homepage. You talk to the developer team and you explain all the changes that you want to make. Let’s suppose that all the code for Netflix is saved on GitHub in different files. The lead developer would then divide the task into different chunks and send the instructions to each developer. It would be very difficult if everyone were working on the same files, right? One of the main features is being able to create a copy of the repo on your own computer; that way, if you write code that doesn't work, it won't affect the program, because you are working on a copy of the project. This is called cloning a repo, as the quick sketch below shows.
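As a rough sketch of that cloning workflow on the command line (the repository URL and branch name here are hypothetical, purely for illustration):
git clone https://github.com/example/netflix-homepage.git   # copy the repo to your machine
cd netflix-homepage
git checkout -b homepage-changes                             # do your work on a separate branch
Each developer works on their own copy (and usually their own branch), so broken code never touches the shared project until it is ready to be merged back.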
|
https://medium.com/swlh/what-is-github-423f9049ab2d
|
['Sebastian De Lima']
|
2020-07-13 02:51:40.612000+00:00
|
['Programming', 'Computer Science', 'Github', 'Technology', 'Productivity']
|
4,937 |
Low-Code Founders: Tracy Smith, founder of Vipii
|
Low-Code Founders: Tracy Smith, founder of Vipii
Inside the mind of an entrepreneur: our founder of the week tells us about the remote queuing solution he developed on Bubble to reduce physical queues in stores and lower the number of high-risk interactions.
What led you to entrepreneurship?
I’ve always been attracted to entrepreneurship, that’s why I’ve always done side projects alongside my studies. Even though I have a lot of ideas, I still can’t devote as much time to it as I would like.
I studied product management and am now an IT tech consultant. Although I wasn’t a developer to begin with, I was able to extend my field of action and my skills thanks to Bubble, which allowed me to develop new projects such as Vipii.
Can you introduce us to Vipii?
During the quarantine, many local stores had to close, preventing people from obtaining goods they needed. When they reopened, I noticed that people were flocking to the stores and there were a lot of queues, which was not very Covid friendly. I figured that these owners and people were in need of a queuing solution to limit these high-risk social contacts.
After doing market research to see what solutions already existed and especially their prices, I realized that they were not accessible to everyone, especially small businesses. That’s why I decided to develop a cheaper remote queue solution that would be simple to use. This idea is not revolutionary in itself, its real added value is its small price which allows everyone to have access to it in these difficult times.
To benefit from it as a business, you just have to register your venue on the application. As a visitor, all you have to do is search for the store you wish to visit and register directly on its page, without having to sign up or download anything. The application then notifies you when the way is clear.
What challenges have you encountered?
From a technical point of view, it has not always been easy to get to grips with Bubble. Responsiveness and application optimization are particularly hard to achieve. The ideal would be to develop a very complete library of responsive elements in order to save time and gain fluidity, as you probably already have at Cube.
Then, there’s a lot of testing and fixing work to be done to make sure that everything works well, that there are no bugs and that the different solutions used to build the application (Twilio in my case) are well connected. It took me several days of viewing online tutorials, reading manuals and instructions for each solution used to feel comfortable with it.
How was the learning and development process on Bubble?
I liked the tool very much right away. I trained with it for a year and a half and I am now convinced that it will have a major role to play in the future.
Originally, I had some experience in backend development and it helped me a lot to understand how to work with Bubble. On the other hand, the ability to build a responsive application is not innate and I had a hard time with that part. I also had to use code to be able to put all the features I wanted, so I bought plugins and integrated them.
Today, I know that if one day I leave my job to start my own project, I will use Bubble to develop my product because I really appreciate the fast and simple iteration capacity that the tool offers compared to traditional code (with which it takes at least 2 weeks to fix the slightest bug when you can do it in a few seconds on Bubble).
I even talked to some developer friends who tested it, and they really liked the tool. They find that it allows them to focus on the things that really matter and add real value.
What advice would you give to those who are hesitant to embark on an entrepreneurial project?
The most important thing is to take the plunge and get your idea validated before spending too much time on Bubble to build the product. Because you have to be sure that the solution you bring will meet someone’s problem, that there are real needs. Otherwise, it is risky to invest time (and therefore money) in it.
This is true even if you want to develop a simple landing page: you must always have your idea validated beforehand by talking about it to get feedback.
|
https://medium.com/cube-insider/low-code-founders-tracy-smith-founder-of-vipii-94f37ce425ef
|
['Melanie Bialgues']
|
2020-12-29 08:16:14.964000+00:00
|
['No Code', 'Queue', 'Low Code', 'Entrepreneurship', 'Bubble']
|
4,938 |
This Is My Playbook for Crushing It on Medium in 2021
|
SATIRE
This Is My Playbook for Crushing It on Medium in 2021
I survived 2020, 2021 is the time to thrive
Photo by Tnarg on Pexels.
I started writing seriously on Medium only in September but I have already published 24 articles, been curated 5 times, and am a top writer in two topics. In these three months, I have grown from zero followers to 111 followers (and counting).
I say this not to boast but to show you I have mastered the Medium opening game.
Tip 1: It is important to establish your authority early in the article.
I’ve spent weeks studying the advice of the established Medium gurus.
I’ve signed up to so many free Medium guru newsletters that my Gmail can no longer distinguish between spam and ordinary e-mail.
I’ve done all this work to make this — my 2021 Medium Playbook — to raise my standing in the Medium game.
I’m offering this playbook FREE for all you dear readers now, for this LIMITED TIME ONLY.
Once I’ve set up my Substack newsletter, my Zoom consulting business, and my ConvertKit e-mail sequence, this playbook will only be available to subscribers to my $49 course.
So, act fast!
Just kidding. I don’t have any of these yet. But did you feel the urgency?
Tip 2: Hit them hard with the Call to Action
Without further ado, here are my three key approaches to crushing it on Medium.
|
https://medium.com/muddyum/this-is-my-playbook-for-crushing-it-on-medium-in-2021-d95e5d45b288
|
['U-Ming Lee']
|
2020-12-22 18:02:11.929000+00:00
|
['Satire', 'Humor', 'Culture', 'Writing', 'Comedy']
|
4,939 |
Journey to the UX — Part 1. Start asking questions!
|
Journey to the UX — Part 1
Start asking questions!
Today’s business people don’t just need to understand designers better. They need to become designers — Roger Martin
To step into the UX design world, the first thing we can develop in ourselves is a prepared mind. You need to understand that the biggest part of user experience is not what you objectively see but, rather, what you feel. As UX designers, you are there to advocate for users. If you don’t understand their pain points, goals, and desires, who does? And in order to successfully build a rapport with your users, you should be able to empathize with them, to view your products through their eyes. You don’t have to be a born artist or a gifted designer to begin with; you can simply start asking questions — beginning with the most mundane things, like doors.
To push or to pull — it’s a lifetime question
Now, you’ve probably never given much thought to doors. In fact, you’d ask, “Doors? What’s with them?” Please bear with me, though.
Imagine this: on a sunny day, we pace along with our favorite songs in our ears on a journey to a fancy coffee shop. Swiftly, without pausing, we reach out to push a door that will lead us to delicious cupcakes and a swirly cappuccino, then… *THUD* the door doesn’t budge because it ain’t a f-king PUSH door. As we look around, flustered, for any instruction on the door, our confidence cracks and our mood gets disrupted — gone from superb to whatever. All because of a door!!!
Almost every one of us has experienced a situation where a door didn’t behave as we intuitively expected. Some have been in far more embarrassing scenarios — slamming into a glass door without even realizing it, then shamefully finding a way through. We could have been lost in the most innovative thought, one that might have opened up an entirely new discovery, only to have it shattered by the hindrance of a badly designed door.
A closer look
Let’s take a step back and do some observations.
Most doors don’t open in both directions; they are one-way trips, opening either inwards or outwards — in human language, push or pull. Because we instinctively conserve energy whenever we can (a sophisticated way of saying we are naturally lazy), we are more likely to push a door first. And yet, when we see something that fits the grip of our hands, our brain screams “PULL THE LEVER, KRONK!”.
Reference to an old meme to show my coolness. (©Disney’s The Emperor’s New School)
Sometimes, we get confused facing doors that seemingly look like they should be pushed while their labels signal us to pull and vice versa. This is where things get interesting.
An example of the confusing door.
We start realizing that no matter how big or small a door is, we interact with it mostly through its handle; except for automatic doors, since they aren’t manhandled (duh!). And there are many different handle designs — namely lever, knob, grip, latch, etc. Occasionally we encounter slight variations of them; however, in the end, each design works reasonably well in certain cases and fails in others.
There are way more door handle designs than these.
We can ask a bunch of generic questions like:
What is the exact problem?
Who are the users?
Why do some doors open outwards while some open inwards?
Why do certain doors have instruction labels (push/pull) on them while others don’t?
What should be good examples of door design?
Why do the existing designs work/fail?
As we’re getting into the topic, let’s go even deeper by asking more specific questions:
Do users unconsciously expect different types of door handles to behave in particular ways?
In which situations should doors be set to open outwards?
Would a door’s aesthetics be affected if it had different handle types on either side? (E.g., one for push behavior, another indicating pull action.)
And so on.
Then, I did a little digging myself and…
My initial investigation concluded that most exterior doors in buildings are set to open outwards to prevent the cold wind from blowing the doors wide open and getting inside the building during wintertime. The design choice, I reasoned, also helps the doors close tighter against their frames, which reduces heat loss. Well, I was wronger than wrong. When the weather gets colder, the pressure built up from the warmer air inside the building will actually try to push its way out to balance itself; hence, my wrong conclusion.
The correct answer is that outward-opening doors optimize the flow of people escaping. People need to get out as fast as possible during safety hazards like fire, so that when the horde marches, there won’t be a situation where the door can’t be opened because of the mass of people pushing against it.
Door with an obvious hint for grabbing and pulling.
How do we choose door handles now that we know our typical user behavior? As long as doors open outwards, we can use handles with a strong “pull” cue on the outside: ones that tend to have a long vertical shape and are easy to grasp.
But do we need to use the same type on the other side too? The answer is “it depends”, as many would lean towards symmetry for aesthetic purposes. If we want the door to be symmetrical, meaning the same handle that hints “pull” is also used on the inside, we should put “push/pull” labels on both sides to indicate the distinct actions. Although it may look beautifully balanced, this creates bad functionality and leads to the user frustration discussed above, which is exactly what we are trying to avoid.
Otherwise, a horizontal bar should be used on the inner side, because it is an effective solution implying a “push” behavior. I have most often seen it used on emergency fire exit doors. Some designs push their cue even stronger (pun intended) with vertical push plates that offer nothing to grab.
Left: doors with horizontal bars, commonly used for emergency exits. Right: doors equipped with push plates, commonly seen at hospitals.
This solution can’t really be used with glass doors, though, because, you know, aesthetic reasons. Thus, there is no truly perfect solution, and bad doors are everywhere.
|
https://medium.com/the-shortcut/journey-to-the-ux-part-1-1285e368ab2c
|
['Nam Nhu']
|
2019-04-08 18:51:10.667000+00:00
|
['Design', 'UX Design', 'User Design', 'User Experience']
|
4,940 |
How to excel at designing a great user onboarding experience
|
As a child who grew up in Delhi, I was always fascinated by the rich heritage that Delhi offers to its residents. The Mughal monuments, their grandeur past, the spectacular architecture- everything about them was just so enticing.
Except for one thing.
I was unaware of how to experience these monuments the right way. I wanted to delve into the rich history of these monuments, but I always found myself disconnected from my country’s magnificent past. At first, I blamed myself for not paying enough attention in my history classes. But then, as I talked to other people, I found they shared my plight.
Here’s what I felt when I visited these monuments: I didn’t feel any connection while walking down the historical alleys of the fort. Even when I hired a travel guide, they just talked incessantly about irrelevant information, often swaying from one thing to another. The touts only cared about their money and had no interesting facts to share. Call it a lack of knowledge, or a lack of empathy for visitors who came to spend their entire day at the monument. All I really wanted to do was bask in the glory of a marvelous past. Alas! No one really cared what I wanted!
This was my experience until a few years back. It changed recently, however, on a tour of a very famous Delhi monument. I opted for an audio guide this time, and by the end of my tour I was in awe of the whole experience. It was as if every nook and corner of the monument spoke to me and told me stories about the forlorn past. I was intrigued and, at the same time, happy.
Happy, because this was the first time someone uncovered the layers of my questions and valued my time.
User onboarding is quite similar to that.
If you look at the analogy, just like people who visit monuments, users come to visit your website/application for a purpose. They use the product as a medium to accomplish something in their lives — it could be sending bulk emails to their subscribers or booking tickets for their vacation.
So, the onboarding process should help your users understand how they can do so quickly without feeling too overwhelmed or helpless. The onboarding experience should be such that it delights them and makes them loyal users of your platform.
Therefore, it’s important to identify:
The steps that you want the users to take, and
How you can guide them at each step and help them perform the right actions.
Now, let’s dive straight into how you can design a better onboarding experience for your users.
The experience should help users feel connected
The ‘real’ reason people use an application is that they want a solution to their ‘problems’. For example, they are on social media to make new friends, connect with old ones, share their thoughts, and be virtually present for all those guilty pleasures. Therefore, for any kind of platform, figuring out what makes users tick should be the first step in designing the onboarding experience.
For example:
How can you give flexibility to users when they want to sign up?
Can they use their existing social credentials to access the web/mobile app?
Do you really need all their personal details, like date of birth, age, maiden name, or alternate email address?
Is it absolutely necessary to gather a user’s account/credit card information?
Let’s look at Twitter as an example.
When you are new to a platform, the first thing that piques your interest is: how does it work? What is it about? Twitter has answered this question on its landing page:
And the best part? Once you have successfully signed up and verified your email address, it doesn’t force you into any compulsory actions. Every screen has a ‘skip’ option, and you can come back to any step later.
In the example below, Twitter tells you why you should follow some people: only when you follow someone will their tweets appear on your timeline. It also asks for your interests (with a valid reason) so that it can personalize your experience.
Onboarding experiences are like a first impression. They’ll fail if you send your users on a wild goose chase without informing them why they ought to chase it.
So the important thing to keep in mind is that a successful sign up doesn’t count as a conversion. Your application is a success when your users understand the app and come back again and again to use it.
The experience should win users’ trust
According to a survey, 77% of users drop an app within 3 days after they download it. Of course, it’s not just the onboarding experience which is to blame. But it’s true that a good onboarding experience can help improve customer conversion and retention.
And for a good onboarding experience, you have to win over people’s trust. When users feel safe sharing their information, they are more likely to use the application again and again.
For instance, consider FinTech apps, which can’t function properly without the friction of asking for confidential financial information. This friction point is inevitable to ensure the smooth functioning of the application. However, it can be made more bearable for users by designing a comforting onboarding experience with detailed explanations of the safety features you’ve included. It’s important to make users feel confident about using the app by assuring them that their money and personal info are safe with you. So, always ensure that both safety and simplicity are ticked off in your onboarding designs.
A good example is Robinhood, a very popular trading app. It gives detailed explanations of why it’s asking for particular information.
For example, when you are prompted to enter your country, it tells you that you need to fill this in because federal laws demand it. Similarly, it communicates proactively why the app needs your SSN before you proceed with any further steps.
The experience should meet users’ expectations
Every user downloads an app or starts using a service for a reason. They have either heard rave reviews about the product from their friends, or your marketing team has done an excellent job at grabbing those eyeballs. Therefore, when someone downloads the app, you have just one chance to grab that user through your excellent onboarding process. It’s your only shot to show that you’ll meet (or even surpass) their expectations.
And to meet their expectations, you first need to understand your users and their mental models. Why are they here? What do they want to accomplish? Have they previously used a similar application?
This is important to consider because when users are new to the platform, there is a possibility that they have zero vocabulary or expertise in the area.
For example, when users sign up on Duolingo, they want to learn a new language. The good thing about Duolingo’s onboarding experience is that it offers the option of taking a placement test before starting any lessons. This puts users at ease, and those with prior experience can revisit their fluency in the language.
Also, they do not prompt you to sign up on their platform unnecessarily. And when they do, they give a valid reason for it: they need the information to save your progress.
The final word
Considering that there are countless apps these days, it would be difficult to drill down to a fixed checklist for creating an excellent onboarding experience.
Just as there isn’t a single approach to solving problems, there also isn’t any one silver bullet to nail user onboarding. Onboarding will always be very product- and geography-specific. People from various backgrounds, with different mental models, are going to use the application.
The choice is with us- we can either let users wander about and swim in the uncertainty or we can welcome them with a flexible and resilient experience that serves them well.
One way to excel at it is to involve real users. You can prototype your user onboarding experience and get feedback from real users. And once it’s live, use the power of data to analyze if any further improvements can be made to it.
We hope this helps you look at user onboarding from a different perspective and make it count as a crucial part in the application’s success.
|
https://uxplanet.org/how-to-excel-at-designing-a-great-user-onboarding-experience-b87129485529
|
[]
|
2019-12-24 11:03:23.290000+00:00
|
['Onboarding', 'Design Thinking', 'User Onboarding', 'Design Process', 'Design']
|
4,941 |
Requiem for The Carnegie Deli
|
It was the winter of 1985, and my friend Jerry, his son Tim, and I had driven from East Tennessee to New York in order to interview the playwright/screenwriter, Horton Foote. Jerry was writing a book on Foote, and I was finishing my dissertation, also about Foote’s life and work. Not everyone is lucky enough to meet the subject of his Ph.D. dreams, but I was.
Horton was a gracious man, and he and his wife Lillian welcomed us into their East Village apartment. I wish I remembered more about the couple of hours we spent there, but the time buzzed me as if I were a kid sitting for those brief moments on the Pizitz Department Store Santa’s knee in downtown Birmingham.
After we left (we had driven all night and so were plenty tired), Jerry suggested we go uptown and have lunch at the Carnegie Deli. Didn’t have to ask twice, either.
I had been to only one other Jewish restaurant in New York at this point, and that was an obscure dairy restaurant that will be featured later in this series. So, here was to be my first taste of an authentic New York Jewish deli.
I have to add here that my wife and I had been semi-vegetarian until I left for this trip. We did eat fish and occasionally chicken, but NO RED MEAT. So at the Carnegie, I could have opted for any of the multitude of smoked fish plates, salads, or sandwiches.
Could have, but didn’t.
I chose the Jackie Mason (or was it the Henny Youngman?) sandwich: mounds of corned beef and pastrami on rye, with mustard and horseradish on the side. If you never had the pleasure, these sandwiches weigh two to three pounds whether or not the waiter keeps his thumb on the scale.
My joke later on was that I spent the next week digesting that sandwich, though I don’t know whom I thought I was fooling, or why I thought that was a joke. That much red meat is hard on the digestive tract anyway, but having not consumed any red meat for the previous four years? Imagine.
I likely ate two whole kosher half-sour pickles before our food was served, too, and maybe I looked as green afterward as that semi-cucumber.
Still, whatever color you had in your mind, nothing could match the beet-horseradish color of Tim, Jerry’s fourteen year-old son, who decided that what he wanted for lunch was the hamburger.
I’m sure you’ve heard stories about gruff, brusque NYC waiters, and maybe they’re that way because goyish boys order burgers in a deli. In any case, our waiter intervened in Tim’s epicurean life:
“LISTEN KID, IF YOU WANTA BURGER, GO ACROSS THE STREET TO MCDONALD’S. THIS IS A DELI AND IN A DELI, YOU ORDER DELI SANDWICHES.”
You know when someone you don’t know means what he’s saying, and Tim was no exception.
Jerry and I watched the poor guy actually gulp, and then say,
“yessir, i will have a corned beef sandwich.”
Which he went on to eat with gusto, eyeing the waiter from then on with either love or fear (or both) in his eyes.
|
https://medium.com/one-table-one-world/requiem-for-the-carnegie-deli-dec7fda4f295
|
['Terry Barr']
|
2020-08-30 18:53:27.110000+00:00
|
['One Table One World', 'New York', 'Food', 'Jewish', 'Family']
|
4,942 |
Why Not Use Kubernetes?
|
Why Not Use Kubernetes?
Is Kubernetes really right for your stack?
When to choose Kubernetes?
Many teams are excited to start using Kubernetes. Some are interested in the resilience, elasticity, portability, reliability and other advantages Kubernetes offers natively. Some are technology enthusiasts and just want an opportunity to work with this platform, to get to know more about it. Some developers want to acquire experience with it, so they can add another highly demanded skill to their resumé. In general, most developers these days want to work with Kubernetes at some point.
That may be a really good idea, or it may not.
Kubernetes is Designed to Solve Distributed Architecture Issues
As defined by the official documentation website,
“Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more.”
It’s not made exclusively for distributed systems, but for containerized applications in general. Even so, it provides many resources that make it easier to manage and scale distributed systems, like microservices solutions. It’s also considered an orchestration system.
Automation and orchestration are different, but related concepts. Automation helps make your business more efficient by reducing or replacing human interaction with IT systems and instead using software to perform tasks in order to reduce cost, complexity, and errors. In general, automation refers to automating a single task. This is different from orchestration, which is how you can automate a process or workflow that involves many steps across multiple disparate systems. When you start by building automation into your processes, you can then orchestrate them to run automatically. — What is orchestration? RedHat official website
In other words, Kubernetes makes it easier to manage complex solutions that would be hard to maintain without a proper orchestration system. While you can implement DevOps engineering practices on your own, that approach doesn’t scale when you go from dozens to hundreds of services, as the sketch below illustrates.
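To make “orchestration” concrete, here is a minimal sketch in Python, assuming the official kubernetes client library and a working kubeconfig; the Deployment name “web” is purely hypothetical. The point is declarative intent: you state the desired state, and Kubernetes reconciles the cluster toward it, handling failover and rescheduling on its own.
# Minimal sketch (assumes the official `kubernetes` Python client,
# a configured kubeconfig, and an existing Deployment named "web").
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config
apps = client.AppsV1Api()

# Declare the desired state: three replicas of the "web" Deployment.
# Kubernetes reconciles toward this state, recreating failed pods
# without any further intervention from us.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
This is the difference from plain automation: the script does not perform the scaling steps itself; it only updates the desired state, and the orchestrator keeps the cluster there. That is exactly the work that becomes unmanageable by hand at the scale of hundreds of services.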
|
https://medium.com/better-programming/why-not-use-kubernetes-52a89ada5e22
|
['Grazi Bonizi']
|
2020-06-04 13:39:28.884000+00:00
|
['Azure', 'Programming', 'Kubernetes', 'DevOps', 'Aks']
|
4,943 |
How the U.S. Chamber of Commerce wrecked the economy — and made the pandemic worse
|
A version of this article appeared earlier in Salon
As hospital intensive care units overflow again, and delays in COVID-19 testing reports reach record levels in many cities, a conversation I recently had with Sen. Ed Markey, a Massachusetts Democrat, reminded me that I had forgotten something utterly critical: Donald Trump’s decision to unilaterally disarm America in the face of the coronavirus invasion was urged upon him by an ostensible defender of American business: the U.S. Chamber of Commerce.
When the pandemic reached America, we were not ready — any more than we were ready when Japan bombed Pearl Harbor. But Trump had the tools to do what the U.S. has often done: make up for lack of preparedness. The crucial gaps to fill in March were supplies for testing to limit the spread of the virus, and medical equipment to treat those who got sick — testing kits, swabs, reagents, masks, gowns and gloves — by the billions. Government health agencies estimated that if the pandemic took hold, the country would need, for example, 3.5 billion N95 medical masks. We had 12 million.
Presidents have available, and have routinely used, the Cold War-era Defense Production Act to overcome such critical supply shortages — not just in wartime, but also to ensure adequate relief supplies after natural disasters. DPA can be used to put emergency purchases at the head of a supply chain, but also to require factories to convert their output to provide needed equipment in adequate volumes. Members of Congress urged Trump to appoint a military official as DPA czar to coordinate production and distribution of essential pandemic-related medical supplies, as was done in the Korean War.
In March, Trump was leaning towards robust use of the DPA, fashioning himself as a wartime president and the battle against the coronavirus as America’s “big war.” He invoked the DPA to require General Motors to speed up its production of ventilators. But he quickly discovered that he had an enemy within — not the virus as such, but the U.S. Chamber of Commerce.
Within days of Trump’s announcement that he would mobilize as if for war and use the DPA, the Chamber of Commerce’s lobbying army swung into action. Among the Chamber’s arguments: DPA would impose “red tape on companies precisely when they need flexibility to deal with closed borders and shuttered factories.” Unstated: if the government took charge of the supply chain for tests, masks, and gowns, it could also prevent bidding wars among competing hospitals and states that would, and did, drive prices — and profits — through the roof.
In response, labor unions representing nurses, hospital staff and other frontline workers who were unable to be tested, or to obtain masks or gowns, protested and urged the Chamber to join a national mobilization to defeat the coronavirus. The appeal was not answered. On March 23, Markey and his Massachusetts colleague, Sen. Elizabeth Warren, wrote to the Chamber demanding an explanation. The Chamber’s response: Defense Production Act reliance was unnecessary, because “American companies will do whatever it takes to support America’s response to the pandemic and shore up the economy.”
But what the workers had feared — insufficient production, soaring prices and profits, chaotic delivery mechanisms — took over the health care market during the first wave of virus spikes in the Northeast. Some hospitals were paying 15 times the usual price for masks.
The Chamber continued its strenuous opposition to enabling the government to ensure an adequate supply of tests and PPE. Even Trump conceded that profiteering had taken over the market, and ultimately he did invoke the DPA to prevent it — but only when it came to the export market.
Supplies increased, but often at exorbitant prices. Only when the shutdowns of most of the American economy brought the number of hospitalizations down sharply did supply and demand come into temporary balance. Americans believed that if there was a second wave later in the year, at least health care workers on the front lines would have the tools they needed.
Trump, denied by Chamber opposition of an easy pathway to acting like a heroic wartime president, seemingly lost interest. By June, his focus had shifted from fighting the virus to reopening the economy and then reigniting the culture wars. Media headlines proclaiming his lack of interest did not even provoke “fake news” tweets from the East Wing of the White House.
The virus had a longer attention span. With major states like Florida, Texas and Arizona opening rapidly and prematurely, the holes in the jerry-built testing, tracing, and quarantine systems each state had fashioned without federal guidance gave the virus its chance.
Cases — although not, initially, fatalities — began to soar. Suddenly, what any nationally coordinated effort would have been tracking and resolving all along — that the nation had stepped up production of tests, masks and gowns to meet the needs of an economy in shutdown, but had nowhere near the level of supplies for a massive second wave — came home to roost. By early July, new cases were flooding the hospital capacity of even some of the nation’s major health care centers, such as Houston.
In New Orleans, testing centers had to close eight minutes after they opened. In 100-degree heat in Phoenix, lines at testing centers were eight hours long. San Antonio and Austin were forced to limit testing to those with symptoms, leaving the system utterly unable to detect asymptomatic cases, through which research suggests up to 40% of infections take place. Wait times for test results soared with caseloads, so that even those who got tested might not find out they were positive until they were well past the contagious stage.
Nurses were again forced to use one N95 mask for weeks at a time. Prices for what was actually available soared. States reported they could only fulfill 10% of their orders. Even now, in July, the U.S. has nothing like the 3.5 billion N95 masks that we knew in January we would need. Indeed, how could we? No manufacturer could have guessed in March how severe the crisis would be by fall. With no one managing the system, it was clearly foolish for any private company to produce, on speculation, several billion masks. Market signals cannot prepare America for the massive increase in possible scale required by such a pandemic. Nor can individual cities and states trigger the necessary ramp-up of supplies. Only a systematic national plan designed to ensure that we were ready for the worst-case scenario could have protected us.
Inevitably, the “market solutions” favored by the U.S. Chamber of Commerce have failed America not once but twice. In failing, they have taken the nation’s economy over the precipice into a deep, long, economic decline. That the loudest voice claiming to represent U.S. business chose to defend profiteering, embrace short-term thinking, and risk the collapse of the American health care system and economy is both shameful and shocking.
And that most Americans, including me, had already forgotten this is scary. This is not the first time the Chamber has successfully pursued policies that put us enormously at risk — for years it has been one of the major forces preventing Washington from adopting even the most modest efforts to accelerate a transition off fossil fuels to save lives and protect the climate. But if the Chamber can get away once again with having caused massive economic damage while risking the health of millions, and with its public reputation unscathed, it is unlikely to change its behavior.
|
https://carlpope.medium.com/how-the-u-s-chamber-of-commerce-wrecked-the-economy-and-made-the-pandemic-worse-dd0ad8ee803d
|
['Carl Pope']
|
2020-07-24 18:01:41.802000+00:00
|
['Trump', 'Coronavirus', 'Chamber Of Commerce', 'Economy', 'Ppe']
|
4,944 |
Can I Be Bothered?
|
Can I be bothered? No. Let’s be honest: I have two choices, yes or no. And, let’s face it, if you’re approached by a salesman and they’re not very attractive, you’d be inclined to always say no. No is easier. It also means less work.
A better question, perhaps, would be why should I be bothered? I mean, this question asks for more; encourages deeper thought and reflection. So, why should I be bothered? Bothered to get out of bed, to have a shower, to leave my room, to go to work, to go to university, to write, to make and have a website.
Since I only have eight hundred words and I’m under assignment, I will only address one of these. Why should I, as an aspiring author, be bothered to have a website? Do I even need one?
For starters, I’m a full-time student, work when I’m not studying, have a part-time blog and have religious commitments. I don’t have time. I don’t have the experience or knowledge. You also hear all those horrible stories about internet thieves stealing your details and hacking into your websites. Besides, doesn’t everyone use social media now? It’s easier to use social media, because I’ve been using it for a while. Plus, I bet it’s very expensive to have your own website.
Though, with a little research, I found out that over four billion people around the world use the internet (Kemp, 2018). That’s a massive amount. I’d never be able to reach over four billion people in one place in one day; the traffic alone would be ridiculous! But even if only 0.0001% of those people were to look at your website, that’s still over 4,000 people viewing your site!
I do get why businesses would want to have a website. It’s a good way of getting a product out there, promoting it with 24/7 advertising and marketing. You can be promoting your business while you sleep. That is a nice thought, but how does it benefit me? I’m already lacking a great deal of sleep.
I read an article on Writer’s Edit, a website dedicated to aspiring authors, which argues that “Essentially for an author, your name is your brand; and for any brand, a website is an integral element in valuable promotion.” (Edstein). They also mention Joanna Penn, bestselling author of over twenty-five fiction and non-fiction books, who said on this subject:
“I’ve built a multi-six-figure business off the back of my author websites… Your website is one of the most important things to get sorted if you’re serious about your author Career…It’s your home on the internet and the hub for your books. It’s how readers, agents, publishers, journalists, bloggers and podcasters judge how professional you are.” (Penn).
On closer evaluation, I decided to look up a variety of my favourite authors and found that they do indeed all have their own websites, each providing wonderful detail on their novels and news of future releases, while also providing links to follow them on various social media forums. It seems a website is like a hub through which you can centralise your crafted creativity while also filtering condensed teasers through all the forums you’re a part of. It’s all united through one medium.
It also doesn’t cost anything to have a website. Sure, to get a good, personalised domain name you will need to get out your wallet; but, in my opinion, I don’t need a personalised domain name just yet. I’m still honing my craft and building up a rapport. I can learn while the domain name is free and gain the necessary experience and skills, so that when I am ready to branch out I will know what I’m doing. Social media, I’ve learnt from my searching, is like the breadcrumbs that lead potential readers to your writing home, where you can sit them down with your wonderful ideas and hook them into your fan base.
The only concern I had is that there’s still the potential of hacking. But Helen Keller once said, “Life is either a daring adventure or nothing.” I guess you either take the leap and get the experience, or you don’t and miss out on what could be life-changing.
It seems to me that the answer to why I should be bothered to have a website as an aspiring author can be accompanied by a self-reflecting question: Am I serious about becoming an author? There are only two answers to that question. So, will I?
You can find my blog here.
References:
Edstein, Eloise. “Should Writers Have a Website?”. Writer’s Edit. https://writersedit.com/fiction-writing/should-writers-have-website/
Keller, Helen. Let Us Have Faith. Doubleday & Company, 1940, p. 50.
Kemp, Simon. “Digital in 2018: World’s Internet Users Pass the 4 Billion Mark”. We Are Social, 30 January 2018. https://wearesocial.com/blog/2018/01/global-digital-report-2018
Penn, Joanna. “Step By Step Tutorial: How To Build Your Own Self-Hosted Author Website In 30 Minutes”. The Creative Penn. https://www.thecreativepenn.com/authorwebsite/
|
https://medium.com/clippings-autumn-2018/can-i-be-bothered-3765aaa89a9c
|
['Olivia Pettman']
|
2018-12-12 10:22:51.112000+00:00
|
['Website', 'Writing', 'Writer', 'Cccu', 'Social Media']
|
4,945 |
(Truly) Poor Economics
|
Esther Duflo and Abhijit Banerjee, who share a 2019 Nobel Memorial Prize in Economic Sciences with Michael Kremer, answer questions during a press conference at the Massachusetts Institute of Technology in Cambridge on Oct. 14. SCOTT EISEN/GETTY IMAGES
Why RCTs Are Not a Promising Tool for Development
He was a bold man that first ate an oyster.
— Jonathan Swift
If there has been a “next big thing” in the field of economics during the past decade, it is the application of techniques from medical research — specifically, “randomized controlled trials,” or RCTs — to assess the effectiveness of development or other government-initiated projects.
The general thrust of the work is as simple as it is seemingly brilliant: rather than employ complex, and often unreliable, econometric methods to tease out the extent to which a project or policy actually had a beneficial impact on intended beneficiaries, why not follow the tried and true methods employed by pharmaceutical researchers to assess the efficacy of medical treatments? The steps are:
1. Randomly divide the experimental population into a treatment group that receives “benefits” from the program, and a control group that doesn’t.
2. Assess outcomes for both groups.
3. Determine whether or not a significant difference exists between the two groups.
This approach, championed by MIT economists Esther Duflo and Abhijit Banerjee (recent recipients of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, also known as the Nobel Prize in economics, and the coauthors of the book Poor Economics) among others, seems a self-evidently fabulous improvement on the status quo in development aid, which not infrequently in the past involved assessing program effectiveness by simply checking whether or not the money was spent. Clearly, a world in which aid money is allocated to projects that actually benefit people is preferable to one in which money goes to whoever most effectively maneuvers to get the money to start with and then most reliably manages to spend it. In this way, the use of RCTs in development arguably advances the goal of aid effectiveness, famously championed by NYU economist William Easterly in a series of books and a popular blog titled Aid Watch whose slogan is “Just Asking that Aid Benefit the Poor.”
Describe your favorite entrepreneurial initiative to one of the many “development” economists influenced by this line of thinking and the most probable response you’ll receive is: “Sounds wonderful, but where’s the evidence of effectiveness?” This question will be followed by the admonition: “In order to find out whether or not it worked, you really should do an RCT.”
So what is the problem with applying RCTs to development?
Aside from their expense (administering an RCT generally costs upwards of $100,000), the Achilles heel of RCTs is a little thing known to the statistically inclined as “external validity” — a phrase that translates informally to “Who cares?”
The concept of external validity is straightforward. For any assessment, “internal validity” refers to the mechanism of conducting a clinical trial, and the reliability of results in the original setting. A professionally conducted RCT that yields a high level of statistical significance is said to be “internally valid.” However, it is fairly obvious that an intervention rigorously proven to work in one setting may or may not work in another setting. This second criterion — the extent to which results apply outside the original research setting — is known as “external validity.” External validity may be low because the populations in the original and the new research setting are not really comparable — for example, results of a clinical trial conducted on adults may not apply to children. But external validity may also be low because the environment in the new study setting is different in some fundamental way, not accounted for by the researcher, from the original study setting. Econometric studies that seek to draw conclusions about effectiveness from data that span large geographical areas or highly varied populations thus typically have lower levels of internal validity, but higher levels of external validity.
The fundamental issue is not the purity of the methodology employed (as exciting as such methodological purity is to the technically inclined) but rather the inherent complexity of the world being studied.
As (actual development economist) Ricardo Hausmann states it:
Another method that loses its appeal in a world of high dimensionality is the randomized trial approach. A typical program, whether a conditional cash transfer, a micro-finance program or a health intervention can easily have 15 relevant dimensions. Assume that each dimension can only take 2 values. Then the possible combinations are 2^15 or 32,768 possible combinations. But randomized trials can only distinguish between a control group and 1 to 3 treatment groups.
Or as Don Berwick states in the context of public health interventions:
How can accumulating local reports of effectiveness of improvement interventions, such as rapid response systems, be reconciled with contrary findings from formal trials with their own varying imperfections? The reasons for this apparent gap between science and experience lie deep in epistemology. The introduction of rapid response systems in hospitals is a complex, multicomponent intervention — essentially a process of social change. The effectiveness of these systems is sensitive to an array of influences: leadership, changing environments, details of implementation, organizational history, and much more. In such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect. (Emphasis added)
Finally, as Angus Deaton (also a Nobel Laureate in economics) and Nancy Cartwright helpfully clarify:
RCTs can play a role in building scientific knowledge and useful predictions but they can only do so as part of a cumulative program, combining with other methods, including conceptual and theoretical development, to discover not “what works,” but “why things work”.
Those who most vociferously and naïvely advocate that we apply techniques from medical research to economics make a fundamental error: They fail to appreciate the fact that, when it comes to external validity, medical research is the exception that proves the rule. Indeed, in aid-led development in general, of the few real historical successes, nearly all are in public health. Outside of public health, few of the large-scale, top-down development programs have in fact succeeded.
Why is this? Multiple conjectures are possible. But one persuasive one is this: when it comes to biophysical function, people are people. For this reason, a carefully developed medical protocol proven to be effective for one population is highly likely to work for another population. The smallpox vaccine tested on one population tended to work on other populations; this made it possible to eradicate smallpox. Oral rehydration therapy tested on one group of children tended to work for other groups of children; millions of children have been spared preventable deaths because the technique has been adopted on a global basis. Indeed, medical protocols have such a high level of external validity that, in the United States alone, tens if not hundreds of thousands of lives could be saved every year through a more determined focus on adherence to their particulars.
These huge successes were achieved, and continue to be achievable, through bold action taken by public health officials. They are rightly celebrated and encouraged, but — outside of other public health applications — not easily replicated. Successes in medicine contrast sharply with failures in other domains. Decades of efforts to design and deploy improved cook-stoves — with the linked aims of reducing both deforestation and the illness and death due to indoor air pollution — have so far primarily yielded an accumulation of Western inventions maladapted to needs and realities in various parts of the world, along with locally developed innovations that cannot be expanded to meet the true scale of the challenge. For development programs in general, and RCTs in particular, public health is the exception that proves the rule.
What does work in areas outside of public health? How is it possible to design, test, and implement effective solutions in environments where complexity and volatility are dominant?
The general principle applies: Success requires adaptability and flexibility as well as structure — a societal capacity to scale successful efforts combined with an engrained practice of entrepreneurial exploration. As the uniquely insightful Mancur Olson wrote in his classic Power and Prosperity:
Because uncertainties are so pervasive and unfathomable, the most dynamic and prosperous societies are those that try many, many things. They are societies with countless thousands of entrepreneurs who have relatively good access to credit and venture capital.
What works in development, according to Olson, is experimentation. Why? Because we don’t know what works.
|
https://auerswald.medium.com/truly-poor-economics-dc21944b7747
|
[]
|
2020-08-11 20:13:38.849000+00:00
|
['Economic Development', 'Randomized Control Trials', 'Entrepreneurship']
|
4,946 |
Career Advice for My 20-Year-Old Daughter
|
Prioritise being happy in what you do, because a career is a long time
We spend a lot of time working and while this sounds like a cliché, it’s true that happiness is what matters. You don’t necessarily get to choose where you work or who you work with. You’d better believe that even the most ideal-seeming job is going to be a drag at times.
With all this in mind, you’d better ensure that you choose work that makes you happy for most of the time, otherwise you’ll spend a lot of days struggling to get up in the mornings.
Money is just one reason for working — Don’t make it into the only thing that you pursue
When I started working, I was convinced that I could do anything asked of me as long as the pay was good. I completely underestimated just how fleeting the sense of fulfilment from a big pay-cheque is. What’s more enduring is the sense of worth you get from the work you do, and how you feel about doing it.
Creativity, a sense of meaning, a belief that your work is of value and service of others, being able to express yourself and your values through what you do and freedom to choose how you do your job, all count massively.
Having autonomy about how and where I work has also been an enormous factor for me. When I’ve had to commute daily to a certain location and have had to clock-on and off at set hours, I’ve found the structure to be constraining and frustrating. Having the freedom to be able to work at home when I’ve wanted and being trusted to get the job done while working the hours it requires, have both made me feel empowered and valued.
Money can quickly become an assumed part of life rather than a motivating factor. After a certain point money ceases to be a motivator for many, once their living expenses are met. Freedom, favourable working conditions and the ability to direct time as you see fit all count for much more.
I’ve written extensively about why I enjoy working from home. I don’t waste time commuting, I can fit in my side-hustle around my day job and it’s easier to fit in things like exercise around work which also enriches my quality of life. It won’t be for everyone, but I mention this as an example of where working conditions can outweigh the perceived importance of merely striving to make more money.
Don’t extend your living costs and become too dependent upon your income
Nothing strips work of enjoyment and meaning more than feeling like you’re stuck in a job and have to keep doing it whether you want to or not, merely because you are dependent on the paycheck to meet your commitments. I spent many years of my working life treating each pay rise as an opportunity to increase my standard of living and my committed outgoings as a result. Bigger houses, fancier cars, hire-purchase agreements and indulgent spending meant that I became more and more reliant on my salary as it increased through my career.
If I’d realised earlier that ‘stuff’ doesn’t equate to happiness, and that there are few pleasures as great as knowing that you are financially stable and have a decent chunk of money saved and invested, then I would certainly have invested more and spent less.
Beyond a certain basic level we don’t really need more in life. When we get a pay rise this is an opportunity to start saving more with the goal of accumulating enough money that we can theoretically retire from work earlier. If I had my time again, I’d prioritise that over consumer spending. Being free of work earlier in life as a result of having built up a solid financial buffer is about as close as we can possibly get to buying more time in our lives.
Photo by Saulo Mohana on Unsplash
Don’t underestimate the value of boring old stability and safety
Safe and stable will mean different things to different people. The definition evolves as we get older, and when we’re young we might value this less than we will once we start a family.
When I started working, exhilaration and excitement were all I really wanted (besides money of course). I quickly found out that what I actually needed was a degree of stability and comfort in my life; the sort of stability that comes from having a lifestyle that I can comfortably afford, and enough money to save and invest regularly. Unfortunately, it’s taken most of the last 20 years to achieve that state, and I spent most of that time craving and chasing it.
If you can turn that on its head and build that stability from the ground up, I think it will serve you well.
Not everyone wants to be married with a home and 2.4 kids by the time they’re 30. Whatever level of stability you require, whether that’s enough money saved to fund your basic living costs for a year, or enough money for a plane ticket home from whatever corner of the world you decide to make home, prioritise putting that in place from the off.
Your income doesn’t have to come from a single source
I believe it will become more commonplace and more widely understood in future that people build portfolio careers and generate income from multiple sources rather than relying on one job for all their income.
My parents always advocated saving and investing as well as the power of property to generate income, but for many years into my early adult life I didn’t appreciate that this was about more than just making a provision for retirement. Had I embraced the idea sooner I might have been less dependent on my day-job than I remain to this day.
I’ve established a few small sources of income that now generate money for me regardless of whether I do anything with them or not. I’ve written and self-published books on Amazon, launched online training courses on sites like Udemy and I write regularly here on Medium. All of these generate evergreen passive income and I’m working hard to grow these. I also have my day job and do a small amount of freelance writing and ghost-writing in the evenings, to make extra money.
There are new opportunities to make money emerging all the time via technology, and there’s no better time to leverage and exploit these trends than when they first come about — be an early adopter. In recent years, trends that I’ve dabbled with have included:
Podcasts using Anchor.fm
Self-publishing on Amazon
Amazon Merch
Becoming a Fulfilled By Amazon retailer
Blogging on Medium
Creating a YouTube Channel
Email Marketing
Not all of these have been successful or made money. The key thing is to give them a go though, not just to dabble but to pick one of the many business opportunities that exist and to give it a good few months of effort to see what is possible. Each one that contributes $1 every month has the possibility to grow and scale over time.
Success takes a long time
Reviewing and retracing the steps taken in my career and in my side-hustles it’s difficult to identify any landmark moments when my fortunes radically changed. Success on even a small scale has taken time.
Progress is reflected in the compounded results from many average days of work put in over many years. There are many failures that have been necessary to yield the results that eventually lead to success.
This isn’t like the world of celebrity, where stars seem to rise from nowhere. Interestingly though, while those who are famous just for being famous seem to become so overnight (and fade away just as quickly), those who are genuinely successful and accomplished in their field, have usually taken many years to reach that state:
“I start early and I stay late, day after day, year after year. It took me 17 years and 114 days to become an overnight success” -Lionel Messi
Very few people start out in their career without a desire to progress, improve and succeed on at least some level. Unfortunately, many of us do underestimate how long it will take and how difficult it is to make this happen.
I think it’s unfortunate that for my daughter’s generation there are numerous factors conspiring against them to get what they want as quickly as they expect it. This video from Simon Sinek puts it better than I ever could:
It might be tedious and demoralising to think of the time it will take to succeed and to notionally ‘make it’. With lengthening lifespans and growing competition in the global workforce, this trend seems more likely to increase rather than decrease the time and effort that success demands. It would be as well to prepare for this.
Summing up
As I consider the advice for my daughter, while on one hand the situation could seem bleak (given the increased competition from a global workforce, for example), I think the possibilities and opportunities for making money in unconventional ways make this an exciting time to be entering the world of work.
I have at least another ten years (or more likely, twenty) before I hope to be able to stop working. I will be exploiting as many of these opportunities as I can for myself too.
I’m excited to share this journey with her.
|
https://medium.com/the-post-grad-survival-guide/career-advice-for-my-20-year-old-daughter-1cbc673931ed
|
[]
|
2019-11-15 15:01:03.105000+00:00
|
['Work', 'Parenting', 'Entrepreneurship', 'Self', 'Career Advice']
|
4,947 |
Amazon reimbursed me for a price drop after my purchase. Here’s how I did it.
|
Amazon reimbursed me for a price drop after my purchase. Here’s how I did it.
Spoke to 7 Amazon representatives in one month.
Photo by Christian Wiediger on Unsplash
You know that shitty feeling when you purchase something online only to discover a huge price drop shortly after?
I found out firsthand when I bought a Kindle as a gift earlier this year. I screamed “ethics!!!” but really, I was just a pawn of capitalism.
But I was not going to let it go. So I spent almost a month trying to get back the grand figure of $70 and I did.
Here’s what happened and how I did it:
Case in question
Product: Kindle Paperwhite — Now Waterproof with 2x the Storage — 8GB (International Version) (note: damn you fancy marketing titles)
Purchase price: $210
Price after sudden price drop: $140
Reimbursement sought: $70
Amazon representatives spoken to: 7
24 Feb — Order delivered
All great for now. But then came the sudden price drop.
I didn’t take a screenshot of it, a rookie mistake, because Amazon could just play it off. So take your screenshots if you ever get into such a situation.
03 Mar — The complaint
Note: I am seeking a “reimbursement” because frankly speaking, they are not obligated to “refund” me.
I started off a little humbly, hoping to get my $70 back and move on.
But then came the reply.
Amazon rep #1 — Tanveer
Tanveer’s response made me more determined to be reimbursed.
I let him know what I thought of his “return for full refund and make a new purchase” suggestion.
04 Mar — Escalate
Still fairly calm. But as you can see, I’m not wasting time with Tanveer.
The first line of customer service (read: defence) can only offer so much. Always request to escalate to a higher authority when your issue is not immediately resolved.
Next, Habib came along.
Amazon rep #2 — Habib
Okay Habib is not helpful either but the key is to persist.
06 Mar — Escalate #2
Not wasting time with Habib either. Straight to the point.
18 Mar — Enter Amazon Rep #3
Amazon rep #3 — Zareena
Judging from the response crafted, Zareena is probably a more senior rep. So it’s likely that my case had been escalated but still, there was no resolution.
First, Zareena didn’t say anything new besides fluffing up the service recovery. Second, notice the gap in response. It was a war of attrition. But joke’s on them, I was well prepared for battle — I was an unemployed fresh graduate who really needed his $70 back.
19 Mar — Leverage and reiterate
I reiterated what I had said to Tanveer and Habib and also leveraged the poor service response.
But out of nowhere, Purohit arrived.
19 Mar — Purohit confusion
Amazon rep #4 — Purohit
Zareena had disappeared. At this point, it became clear that they had switched to a strategy designed to confuse me. First, with the sudden change in rep, then in claiming some error with my email address.
I responded quickly.
20 Mar — Zareena reappears
But before I could sort things out with Purohit, Zareena reappeared. It only got more confusing. Amazon’s non-personalised service emails meant that neither Zareena nor Purohit was cc-ed on the other’s emails.
But what became clear was that they were testing my resolve. Confusion was just another tactic in their art of attrition.
Well played, but not this time Amazon.
24 Mar — Persist
As you can tell from the 4-day gap, I contemplated giving up. But I persisted, and asked to get on a direct call to resolve this circus show.
25 Mar — New challenger Lahari
Unfortunately, Zareena ghosted me again. However, a new rep came to my “service”.
Amazon rep #5 — Lahari
Lahari continued to toot the same tune as the 4 Amazon reps before her. But notably, she added that the return window for my purchase had expired, which meant the “return and full refund” solution was no longer available to me.
She was essentially saying “Too bad bro, we played the long game and you lost.” Now I knew I had to recalibrate my angle of attack onto Amazon’s live chat.
27 Mar — Amazon live chat
Here, I spoke to Tarunkishore and asked him to look into my case.
It seemed like live chat worked because Tarunkishore finally agreed to reimburse me. I believed it might be because he reviewed my case and saw how persistent I was.
An automated email followed to ask if my problem was solved.
Amazon rep #6 — Tarunkishore
Dear reader, I wished it was. But it wasn’t.
27 Mar — False resolution
When Tarunkishore reimbursed me, he issued me a $70.18 Amazon gift card even though I specifically requested to refund it through the card I used to purchase.
I was not going to be trapped into spending $70.18 on Amazon. I wanted a cash reimbursement to spend my money poorly elsewhere.
But this is the problem with Amazon and their customer service system.
Computer-generated email addresses that try to frustrate you and make you drop the matter.
31 Mar — Resolution
Amazon rep #7 — Harish
7 Amazon reps and almost a month later, I finally got my $70 back.
At what cost? An unnecessary amount of time and effort.
Was it worth it? I’d like to think yes.
So, the next time you encounter a price drop and want to get a reimbursement, get on a call or live chat. Escalate if your problem is not immediately resolved. State your argument clearly and persist if their solution does not make sense.
That could save you the trouble of going through a month-long journey with 7 different customer reps. But then again, maybe that’s just how the game goes. So… seek a reimbursement at your own risk, mate.
If you have a similar crazy customer service story, share them with me in the comments. Would be nice to know I’m not alone.
P.S. Special thanks to Tanveer, Habib, Zareena, Purohit, Lahari, Tarunkishore, and Harish.
|
https://medium.com/the-haven/amazon-reimbursed-me-for-a-price-drop-after-my-purchase-heres-how-i-did-it-f6fd1acad9bf
|
[]
|
2020-10-24 03:50:14.768000+00:00
|
['Customer Service', 'Amazon', 'Sales', 'Comedy', 'Shopping']
|
4,948 |
Lives in Hiding, Emotional Abuse, the Closet, and Other Concerns
|
The Reconnection
I was pleasantly surprised when Dinu decided to contact me again. I thought I had lost a friend for good but then here she was after five years, telling me that she was stupid and scared. Please forgive her.
Five years and many arguments later, her boyfriend sat her down and told her she might be bisexual and finally gave her “permission” to talk to me.
As expected, after telling her of her sexual orientation, he also suggested a threesome to help her make sure that she’s bisexual.
So you stopped her from owning a part of her sexuality out of fear and then after many fights and disappointments told her it’s ok? Why did you need her to be 100% straight in the first place?
Why did you feel threatened by someone who lived at least a thousand miles away and only chatted with your girlfriend roughly once a month?
Why do you still casually joke and shame her for her attraction to women? Call her a lesbian and think it’s a derogatory term?
Can you even claim to love her?
The New Personality
As much as I was happy to reconnect with a “lost” friend, I was met with a changed version of the former friend.
The conversation created so much confusion that I had to google the signs of emotional abuse just to remind myself what it is. The details of our conversation went as follows.
She tells me that he finally felt secure enough in her love for him.
My question is, what did she have to sacrifice to get there? The answer lies in her current state of being — it seems she had to break herself down to prove her devotion.
She tells me she has developed depression and an anxiety disorder. She has suicidal thoughts. She tells me she’s broken. She uses the identity of broken without question.
She tells me her boyfriend is amazing because he’s fixing her. Fixing? Broken? The friend I used to know did not speak this way about herself. Not even once.
If nothing hugely traumatic occurred in the past few years, then how could there be such a change in self-perception?
She asks me if I find her pretty. The person I used to know didn’t ask questions like “am I pretty?”. What kind of question is that? Maybe she should have asked these questions when she was supposed to be in her insecure adolescent phase.
She was similarly feeling unsure and lost about other aspects of her life.
What happened? Where did her self-esteem go?
In addition, she has developed a high emotional reactivity that wasn’t there before:
She was super quick to assume I don’t like her as a person anymore when I didn’t feed validating answers to all her insecure questions. She even imagined I hate her. She got a little angry, all in the span of one conversation. She showed jealousy over other people in my life, lacked boundaries, asked questions that are too familiar for someone who just reappeared in my life out of nowhere.
I could no longer recognize this person. She wasn’t like this before. Has she been emotionally abused? As far as I know, emotionally abused people tend to develop emotional instability as a result of the abuse.
Emotional abuse can even impact friendships because emotionally abused people often worry about how people truly see them and if they truly like them. — Sherry Gordon, Very Well Mind
Am I seeing things that are not there? I am sure that Dinu has changed but can I know for sure that there is something off about her relationship as well?
Is she suffering because of her boyfriend or are they suffering together because she can’t integrate the part of herself that likes other women into her current reality?
Then again, I remember that she wasn’t trying to disown her sexual attraction to women all those years prior to confiding in him.
|
https://medium.com/prismnpen/lives-in-hiding-emotional-abuse-the-closet-and-other-concerns-b0b5dcd854d
|
['Tima Loku']
|
2020-12-13 03:51:05.677000+00:00
|
['Relationships', 'LGBTQ', 'Mental Health', 'Women', 'Creative Non Fiction']
|
4,949 |
I Keep Hearing That Working From Home Is The Future — And It Bugs Me to No End
|
Remember how, just prior to the whole COVID-19 predicament, a lot of employers seemed to be firmly planted in their conviction that workers at home are not to be trusted? That they must be slacking off because, apparently, they have no self-control, let alone a desire to actually earn their salary?
Yup.
And suddenly — less than a year later — these same employers are telling us that working from home is the future and that they are considering leaving their office spaces behind for good.
The way I see it, there are two possibilities as to why. Either some employers are genuinely convinced that working from home is the wish of every single worker — and thus, now that company policy allows it, we might as well just all switch, because we all agree — right? Or — more likely, in my opinion — they realized they can fully get rid of the significant expenditure of renting office space, buying toilet paper, office supplies, coffee, and what have you. Of course, that can only be pulled off if every single employee works from home. Or, well, from anywhere except the office.
But here comes the catch — not everyone is into this whole working from home business. In fact, most friends and coworkers I’ve spoken to — people who had no prior experience as, say, freelancers — expressed that they would want to go back to the office sooner than later.
And I agree wholeheartedly.
I miss being around people. I need to see others puttering around the office, I need to be able to go pick up a coffee with a colleague, to sit on a sofa and chat, or just reposition myself every hour just to freshen up my environment. Doesn’t matter if I wear headphones through it all. The mere presence of others working beside me is both calming and inspiring.
Also, I live in an apartment that Google tells me is the equivalent of 215sqft. That’s tiny. I’ll go nuts staying here. And I don’t know about most people, but my chances to escape home would be slim to none. Where else could I go if not my office? Add to that the fact that, if you deal with confidential data or need reliable Internet all day as I do, you can’t really be sitting on some public wi-fi hotspot in a hip café (post-Corona, clearly).
While on that topic — I’ve had crazy anxiety about my Internet connection failing on me. If my domestic network fails and I can’t get into the meeting, it’s pretty much my problem to solve. When something like that happens in an office, no employee is under any real pressure — except maybe for the one who is in charge of keeping that connection up and running. But it’s mostly the company’s problem.
I just detest the concept of being liable for so much more while getting paid the same and slowly eroding my mental health as the line between life and work deteriorates. There is a myriad of reasons why someone’s personal space might not be conducive to work.
As I’ve mentioned — my own place is tiny, cramped, my desk is two steps away from my bed, and of course, there’s no company. Someone else’s home will have different limitations. What I have, that a lot of employers seem not to, is the awareness that everyone has challenges I have no idea about. That applies to WFH.
It’s not even only about the work part. Home used to be my sacred space — where I took care of myself and could wind down after the day’s challenges. Now it has become the place where I’m stressed more often than not. As nice as it is to be at the office, it’s just as nice to be able to leave it at the end of the day.
|
https://medium.com/illumination/i-keep-hearing-that-working-from-home-is-the-future-and-it-bugs-me-to-no-end-1318c6894f94
|
['Alice Audie']
|
2020-12-23 12:33:04.275000+00:00
|
['Covid 19', 'Work', 'Home', 'Mental Health', 'Self']
|
4,950 |
How We Use RocksDB at Rockset
|
In this blog, I’ll describe how we use RocksDB at Rockset and how we tuned it to get the most performance out of it. I assume that the reader is generally familiar with how Log-Structured Merge tree based storage engines like RocksDB work.
At Rockset, we want our users to be able to continuously ingest their data into Rockset with sub-second write latency and query it in 10s of milliseconds. For this, we need a storage engine that can support both fast online writes and fast reads. RocksDB is a high-performance storage engine that is built to support such workloads. RocksDB is used in production at Facebook, LinkedIn, Uber and many other companies. Projects like MongoRocks, Rocksandra, MyRocks etc. used RocksDB as a storage engine for existing popular databases and have been successful at significantly reducing space amplification and/or write latencies. RocksDB’s key-value model is also most suitable for implementing converged indexing, where each field in an input document is stored in a row-based store, a column-based store, and a search index. So we decided to use RocksDB as our storage engine. We are lucky to have significant expertise on RocksDB in our team in the form of our CTO Dhruba Borthakur, who founded RocksDB at Facebook. For each field in an input document, we generate a set of key-value pairs and write them to RocksDB.
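To make converged indexing concrete, here is a minimal sketch of how a single document field could fan out into row-store, column-store, and search-index entries. The key prefixes and encoding are illustrative assumptions for this sketch, not Rockset’s actual key format.

```cpp
#include <string>
#include <utility>
#include <vector>

// Illustrative only: the "R." / "C." / "S." prefixes and the key layout
// below are assumptions made for this sketch.
std::vector<std::pair<std::string, std::string>> ConvergedIndexEntries(
    const std::string& doc_id, const std::string& field,
    const std::string& value) {
  return {
      // Row store: fetch all fields of one document by its id.
      {"R." + doc_id + "." + field, value},
      // Column store: scan one field's values across all documents.
      {"C." + field + "." + doc_id, value},
      // Search index: find documents where the field has a given value.
      {"S." + field + "." + value + "." + doc_id, ""},
  };
}
```

Each of these key-value pairs is then written to the shard’s RocksDB instance.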
Let me quickly describe where the RocksDB storage nodes fall in the overall system architecture.
When a user creates a collection, we internally create N shards for the collection. Each shard is replicated k-ways (usually k=2) to achieve high read availability and each shard replica is assigned to a leaf node. Each leaf node is assigned many shard replicas of many collections. In our production environment each leaf node has around 100 shard replicas assigned to it. Leaf nodes create 1 RocksDB instance for each shard replica assigned to them. For each shard replica, leaf nodes continuously pull updates from a DistributedLogStore and apply the updates to the RocksDB instance. When a query is received, leaf nodes are assigned query plan fragments to serve data from some of the RocksDB instances assigned to them. For more details on leaf nodes, please refer to Aggregator Leaf Tailer blog or Rockset white paper.
To achieve query latency of milliseconds under 1000s of qps of sustained query load per leaf node while continuously applying incoming updates, we spent a lot of time tuning our RocksDB instances. Below, we describe how we tuned RocksDB for our use case.
RocksDB-Cloud
RocksDB is an embedded key-value store. The data in 1 RocksDB instance is not replicated to other machines. RocksDB cannot recover from machine failures. To achieve durability, we built RocksDB-Cloud. RocksDB-Cloud replicates all the data and metadata for a RocksDB instance to S3. Thus, all SST files written by leaf nodes get replicated to S3. When a leaf node machine fails, all shard replicas on that machine get assigned to other leaf nodes. For each new shard replica assignment, a leaf node reads the RocksDB files for that shard from the corresponding S3 bucket and picks up where the failed leaf node left off.
Disable Write Ahead Log
RocksDB writes all its updates to a write ahead log and to the active in-memory memtable. The write ahead log is used to recover data in the memtables in the event of process restart. In our case, all the incoming updates for a collection are first written to a DistributedLogStore. The DistributedLogStore itself acts as a write ahead log for the incoming updates. Also, we do not need to guarantee data consistency across queries. It is ok to lose the data in the memtables and re-fetch it from the DistributedLogStore on restarts. For this reason, we disable RocksDB’s write ahead log. This means that all our RocksDB writes happen in-memory.
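Here is a minimal sketch of such a write using RocksDB’s C++ API; the function and variable names are illustrative. WriteOptions::disableWAL is the option that skips the write ahead log for a given write.

```cpp
#include <cassert>
#include <string>

#include <rocksdb/db.h>

// With disableWAL set, the update lives only in the memtable until it is
// flushed; on restart it must be re-fetched from the DistributedLogStore.
void WriteWithoutWAL(rocksdb::DB* db, const std::string& key,
                     const std::string& value) {
  rocksdb::WriteOptions write_options;
  write_options.disableWAL = true;  // all writes happen in-memory
  rocksdb::Status s = db->Put(write_options, key, value);
  assert(s.ok());
}
```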
Writer Rate Limit
As mentioned above, leaf nodes are responsible for both applying incoming updates and serving data for queries. We can tolerate much higher latency for writes than for queries, so as much as possible we want to use only a fraction of the available compute capacity for processing writes and most of it for serving queries. We limit the number of bytes that can be written per second to all RocksDB instances assigned to a leaf node, and we also limit the number of threads used to apply writes to those instances. This helps minimize the impact RocksDB writes could have on query latency. By throttling writes in this manner, we also never end up with an imbalanced LSM tree or trigger RocksDB's built-in, unpredictable back-pressure/stall mechanism. Note that neither of these features is available in RocksDB; we implemented them on top of it. RocksDB supports a rate limiter to throttle writes to the storage device, but we needed a mechanism to throttle writes from the application to RocksDB.
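The post does not show this implementation, so the following is only a sketch of what an application-side byte rate limiter could look like; the class, the numbers, and the helper are all hypothetical:

import org.rocksdb.{RocksDB, WriteBatch, WriteOptions}

// Hypothetical token-bucket limiter: hands at most maxBytesPerSecond of
// batch payload to RocksDB per second for the whole leaf node.
final class WriteRateLimiter(maxBytesPerSecond: Long) {
  private val windowNanos = 1000000000L
  private var windowStart = System.nanoTime()
  private var bytesInWindow = 0L

  def acquire(bytes: Long): Unit = synchronized {
    val now = System.nanoTime()
    if (now - windowStart >= windowNanos) { windowStart = now; bytesInWindow = 0L }
    if (bytesInWindow + bytes > maxBytesPerSecond) {
      // Sleep out the rest of the window instead of letting RocksDB stall.
      val sleepMs = math.max(1L, (windowStart + windowNanos - now) / 1000000L)
      Thread.sleep(sleepMs)
      windowStart = System.nanoTime(); bytesInWindow = 0L
    }
    bytesInWindow += bytes
  }
}

object ThrottledWriter {
  private val limiter = new WriteRateLimiter(maxBytesPerSecond = 32L * 1024 * 1024)

  // batchBytes is tracked by the caller while building the batch.
  def write(db: RocksDB, batch: WriteBatch, batchBytes: Long): Unit = {
    limiter.acquire(batchBytes) // throttle before the write reaches RocksDB
    db.write(new WriteOptions().setDisableWAL(true), batch)
  }
}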
Sorted Write Batch
RocksDB can achieve higher write throughput if individual updates are batched in a WriteBatch, and higher still if consecutive keys in a write batch are in sorted order. We take advantage of both: we batch incoming updates into micro-batches of ~100KB and sort them before writing them to RocksDB.
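A sketch of this micro-batching, again via the RocksJava bindings; the helper and batch shape are illustrative, not Rockset's code:

import java.util.Arrays
import org.rocksdb.{RocksDB, WriteBatch, WriteOptions}

object SortedMicroBatch {
  // Sort each ~100KB micro-batch by key before writing: inserting
  // consecutive sorted keys into the memtable is cheaper for RocksDB.
  def writeSorted(db: RocksDB, updates: Seq[(Array[Byte], Array[Byte])]): Unit = {
    // Bytewise order, like RocksDB's default comparator (compareUnsigned needs Java 9+).
    val sorted = updates.sortWith((a, b) => Arrays.compareUnsigned(a._1, b._1) < 0)
    val batch = new WriteBatch()
    try {
      sorted.foreach { case (k, v) => batch.put(k, v) }
      db.write(new WriteOptions().setDisableWAL(true), batch)
    } finally batch.close()
  }
}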
Dynamic Level Target Sizes
In an LSM tree with leveled compaction policy, files from a level do not get compacted with files from the next level until the target size of the current level is exceeded. And the target size for each level is computed from the specified L1 target size and the level size multiplier (usually 10). This usually results in higher space amplification than desired until the last level has reached its target size, as described on the RocksDB blog. To alleviate this, RocksDB can dynamically set target sizes for each level based on the current size of the last level. We use this feature to achieve the expected 1.111 space amplification regardless of the amount of data stored in the RocksDB instance. It is turned on by setting AdvancedColumnFamilyOptions::level_compaction_dynamic_level_bytes to true.
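The article names the C++ option; the equivalent switch through the RocksJava bindings looks like this (a sketch, not Rockset's code):

import org.rocksdb.Options

object DynamicLevelSizes {
  // Derive per-level target sizes from the current size of the last level,
  // bounding space amplification near 1.111 under leveled compaction.
  val options = new Options()
    .setCreateIfMissing(true)
    .setLevelCompactionDynamicLevelBytes(true)
}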
Shared Block Cache
As mentioned above, leaf nodes are assigned many shard replicas of many collections, and there is one RocksDB instance for each shard replica. Instead of using a separate block cache for each RocksDB instance, we use one global block cache for all RocksDB instances on the leaf node. This helps achieve better memory utilization by evicting unused blocks across all shard replicas out of leaf memory. We give the block cache about 25% of the memory available on a leaf pod. We intentionally do not make the block cache bigger, even when spare memory is available that is not used for processing queries, because we want the operating system page cache to have that spare memory. The page cache stores compressed blocks while the block cache stores uncompressed blocks, so the page cache can more densely pack file blocks that are not so hot. As described in the Optimizing Space Amplification in RocksDB paper, the page cache helped reduce file system reads by 52% for three RocksDB deployments observed at Facebook. And since the page cache is shared by all containers on a machine, it serves all leaf containers running on that machine.
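A sketch of sharing one cache across instances via RocksJava; the 8 GB figure is hypothetical (in practice it would be ~25% of pod memory):

import org.rocksdb.{BlockBasedTableConfig, LRUCache, Options}

object SharedBlockCache {
  // One uncompressed-block cache shared by every RocksDB instance
  // (shard replica) on the leaf node.
  val sharedCache = new LRUCache(8L * 1024 * 1024 * 1024)

  def optionsForShardReplica(): Options = {
    val tableConfig = new BlockBasedTableConfig().setBlockCache(sharedCache)
    new Options()
      .setCreateIfMissing(true)
      .setTableFormatConfig(tableConfig)
  }
}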
No Compression For L0 & L1
By design, the L0 and L1 levels in an LSM tree contain very little data compared to the other levels, so there is little to be gained by compressing the data in these levels, but we can save some CPU by not compressing it. Every L0-to-L1 compaction needs to access all files in L1, and range scans cannot use the bloom filter and need to look up all files in L0. Both of these frequent, CPU-intensive operations use less CPU if the data in L0 and L1 does not need to be uncompressed when read or compressed when written. This is why, as recommended by the RocksDB team, we do not compress data in L0 and L1, and use LZ4 for all other levels.
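Per-level compression can be expressed like this through the RocksJava bindings (a sketch assuming the default 7-level tree):

import java.util.Arrays
import org.rocksdb.{CompressionType, Options}

object PerLevelCompression {
  // No compression for L0 and L1, LZ4 for all deeper levels.
  val options = new Options()
    .setCreateIfMissing(true)
    .setNumLevels(7)
    .setCompressionPerLevel(Arrays.asList(
      CompressionType.NO_COMPRESSION, CompressionType.NO_COMPRESSION,
      CompressionType.LZ4_COMPRESSION, CompressionType.LZ4_COMPRESSION,
      CompressionType.LZ4_COMPRESSION, CompressionType.LZ4_COMPRESSION,
      CompressionType.LZ4_COMPRESSION))
}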
Bloom Filters On Key Prefixes
As described in the converged indexing blog post, we store every column of every document in RocksDB in 3 different ways, in 3 different key ranges. For queries, we read each of these key ranges differently. Specifically, we never look up a key in any of these key ranges using the exact key; we usually simply seek to a key using a smaller, shared prefix of the key. Therefore, we set BlockBasedTableOptions::whole_key_filtering to false so that whole keys are not used to populate, and thereby pollute, the bloom filters created for each SST. We also use a custom ColumnFamilyOptions::prefix_extractor so that only the useful prefix of the key is used for constructing the bloom filters.
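Through the RocksJava bindings, the same two settings look roughly like this; the 8-byte prefix length is a hypothetical stand-in for the shared prefix Rockset actually extracts:

import org.rocksdb.{BlockBasedTableConfig, BloomFilter, Options}

object PrefixBloomFilters {
  val tableConfig = new BlockBasedTableConfig()
    .setFilterPolicy(new BloomFilter(10, false)) // ~10 bits per key
    .setWholeKeyFiltering(false)                 // keep whole keys out of the filters

  val options = new Options()
    .setCreateIfMissing(true)
    .setTableFormatConfig(tableConfig)
  // Build bloom filters (and prefix seeks) from a fixed-length key prefix.
  options.useFixedLengthPrefixExtractor(8)
}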
Iterator Freepool
When reading data from RocksDB to process queries, we need to create one or more rocksdb::Iterator instances. For queries that perform range scans or retrieve many fields, we need to create many iterators. Our CPU profile showed that creating these iterators is expensive, so we use a freepool of iterators and try to reuse them within a query. We cannot reuse iterators across queries, because each iterator refers to a specific RocksDB snapshot and we use the same RocksDB snapshot throughout a query.
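A sketch of the idea, with the snapshot constraint made explicit (hypothetical; the post does not show Rockset's pool):

import scala.collection.mutable
import org.rocksdb.{ReadOptions, RocksDB, RocksIterator, Snapshot}

// Hypothetical per-query iterator pool (not thread-safe; assume one pool
// per query execution thread). All iterators share the query's snapshot,
// so they may be reused within the query but never across queries.
final class QueryIteratorPool(db: RocksDB, snapshot: Snapshot) {
  private val readOptions = new ReadOptions().setSnapshot(snapshot)
  private val free = mutable.ArrayDeque.empty[RocksIterator]

  def borrow(): RocksIterator =
    if (free.nonEmpty) free.removeLast() else db.newIterator(readOptions)

  def giveBack(it: RocksIterator): Unit = free.append(it)

  // Called once, when the query finishes.
  def closeAll(): Unit = { free.foreach(_.close()); readOptions.close() }
}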
Finally, here is the full list of configuration parameters we specify for our RocksDB instances.
https://medium.com/rocksetcloud/how-we-use-rocksdb-at-rockset-c500876fd6d4
2019-09-05 19:04:14.545000+00:00
['Database', 'Rocksdb', 'Storage', 'Indexing', 'Software Engineering']

Blaming the Victim
Blaming the Victim
Literally Literary and The Writing Cooperative prompt
Photo by Clayton Cardinalli on Unsplash
For three years as an undergraduate, I claimed a Social Work major. I wanted to do good work for the poor and disadvantaged. I was minoring in English and decided to take a course in Southern Literature in which we read several Faulkner short stories and his masterwork, Absalom, Absalom! As I read Faulkner’s prose about the benighted south, I felt kinship. I was an Alabamian, after all. I said goodbye to Social Work then and became an English major.
Yet from my social work days, I will always remember William Ryan's seminal text, Blaming the Victim, in which he reflected on our nation's pervasive, negative attitude toward our "have-nots." An attitude that blames these unfortunate people for the downward course of their lives. That the poverty, neglect, and abuse they suffer is entirely their own doing.
That they don’t have a job because they are too lazy to hold one.
Getting a job isn’t easy under any circumstance. I applied for approximately 100 college teaching positions up and down the eastern half of this country. My wife, herself finishing her Master’s degree in Counseling Psychology, was willing to go wherever I could get a job.
Of those 100 applications, I landed two interviews.
The one I got was at Presbyterian College in Clinton, South Carolina. Clinton, a former mill town, had fallen on hard times. The mill had shut down, and the town’s population, hovering at 10,000, was on the wane.
So upon my hiring, the first question I had for the college was whether or not it could also employ my wife, and if it couldn't do that, could it help her find a job? I noticed early on that many faculty members' spouses had staff jobs on campus, in the library and in administration.
However, the college already had a counselor, a member of the psychology faculty. As we all know, one counselor for 1200 students is surely adequate.
To her credit, though, this professional helped my wife find a job in another town to the south of Clinton — a good state job with benefits. The catch was that since we had freely chosen to live in Greenville, forty-five miles to the north of Clinton, our commute would be close to 200 miles per day, given that we had one car. We made this commute for two years until my wife became pregnant, quit that job, and then, after our baby was a few months old, went into private practice.
In the end, everything worked out for us, and we’ve been in these same jobs for over thirty years, have raised two daughters, and finally got that second car.
So I shouldn’t complain. I am a full professor, have tenure, love my colleagues, students, and the courses I teach. Sure, there have been conflicts and down times, and the salary isn’t up to competitive standards, at least for those at my rank. Over my tenure, I have taught courses in Southern Literature, Film Studies, Creative Nonfiction, and I inaugurated a course in American Literature and Ethnic Identity, in which I have taught such works as Typical American, by Gish Jen, The Romance Reader, by Pearl Abraham, and Caucasia, by Danzy Senna.
I started this ethnic lit course for three reasons:
Our students are predominately Caucasian. My wife is a native of Iran. I am half-Jewish.
Of these three reasons, the last two formed the problem that I can never forget.
My “Exulansis.”
https://medium.com/literally-literary/blaming-the-victim-e8474f9e5425
['Terry Barr']
2019-11-27 02:28:23.729000+00:00
['Literally Literary', 'Exulansis', 'Nonfiction', 'Family', 'Education']

Useful Mobile Marketing Tools to Grow Your Business in 2020 (Pricing Included)
Starting a winning mobile marketing strategy is anything but simple. There’s a lot that goes into it. You’re required to master many different techniques and juggle multiple mobile marketing channels at the same time. Unless you stay on top of your mobile marketing strategy, it will fall apart quickly.
That’s why smart marketers use different tools to help them. Mobile marketing tools not only make it easier to keep track of everything but improve your campaigns and grow your business.
To make your life easier, we have compiled a list of top mobile marketing tools you need to be a winner in 2020. The types of tools we cover range from analytics, social media, and text marketing tools to SEO, ASO, and multi-channel marketing tools.
Analytics Tools
Mobile marketing is all about the numbers. If you don’t follow key metrics, you won’t have any success with your mobile marketing campaign. Unless you get really, really lucky. However, the probability of that is very low. So just stick with analytics.
Here are a couple of analytics tools that will help you rule the mobile marketing world.
Google Analytics is an all-around great tool to have in your marketing belt. It offers you an incredible number of features for free. Even though Google Analytics is not made exclusively for mobile, there are still a lot of relevant metrics you can follow.
Google Analytics Features
Traffic reports
Campaign tracking
Conversion tracking
Custom dashboard
Keyword referrals
Goals
Event tracking
Real-time reporting
Attribution
Custom metrics
Pricing
Free
Enterprise (customized pricing)
AppsFlyer
This powerful tool specializes in marketing analytics and mobile attribution. You can use it to track and improve your mobile campaigns, monitor your app engagement, see where the installs come from, etc. This tool helps you use the power of data in order to make better marketing decisions. AppsFlyer is great for gaming, retail, travel, financial, entertainment, food & drink, and many other industries.
AppsFlyer Features
Impressions and cost reports
Multi-channel measurements
Lifetime value reports
Return on investment reports
Ad revenue attribution
Uninstall attribution
Deep linking
Email marketing management
A/B testing
Social media integration
Audience segmentation
Cohort analysis
Contextual targeting
Tracking of in-app events
Text messages
Website analytics
Location-based marketing
Pricing Plan
Basic Plan — pay as you go (free trial)
Custom Plan — Full features. Contact AppsFlyer for more details.
App Annie
This powerful tool is used for app analytics and mobile market research. It gives you access to all the data and analytics you need to improve your mobile marketing strategy. You can easily track the performance of your app on a user-friendly dashboard, and compare different metrics. Market analysis is another great App Annie feature. You can do extensive research and even track your competitors’ app store revenue.
If you want to take your mobile marketing strategy to a new level and make your app a success, you should try this mobile marketing tool. The basic plan is free, so why not?
App Annie Features
App store tracking
Paid search
App store optimization
Download and revenue estimates
App usage estimates
Advertising estimates
SDK insights
App ranking
Event tracking
App category intelligence
App keyword ranking
Pricing Plan
Free (only basic features)
Premium (full features). Cost determined according to many different factors like country location, number of apps, company size, etc.
Soomla
Soomla is a leading tool in monetization measurement. It provides you with important in-app advertising insights and allows you to control your ads. For example, Soomla lets you identify ads that are increasing churn rates and helps you eliminate them, which is quite useful.
Soomla Features
App monetization
Attribution
Cohort analysis
Funnel analysis
In-app events analysis
Ad revenue attribution
Pricing Plan
Basic tier ($999 per month)
Blue tier (price upon request)
Green tier (price upon request)
If you want to check out even more mobile app analytics platforms, click here.
Social Media Tools
Social media is an integral part of any mobile marketing strategy. As long as users keep spending hours glued to their screens and scrolling through social media feeds, marketers will try to capture their attention.
However, managing social media accounts requires great planning, organization, and strategy. That's why marketers use scheduling tools to keep track of social media.
Hootsuite
Hootsuite is probably the most popular social media scheduling tool out there. There are many reasons for that. Hootsuite allows you to schedule posts for all top social media platforms, manage your content plan, and even track the performance of your posts with social analytics. What drives many people to use Hootsuite is their free plan which allows you to manage up to 3 social media profiles and schedule 30 posts. However, there are also paid plans for more advanced users.
What I like about Hootsuite is their easy-to-use dashboard, and content plan that makes it simple to keep track of scheduled posts.
Hootsuite Features
Scheduling
Monitoring
Content curation
Social analytics
Team management
24/7 support
Pricing Plan
Free
Professional ($19 per month)
Team ($99 per month)
Business ($599 per month)
Enterprise (contact Hootsuite for pricing)
SMS Marketing & Notifications Tools
Another important mobile marketing channel is text and notifications. It is a great technique for increasing your reach and keeping your audience engaged. Here are some easy-to-use SMS marketing tools for setting up your text campaign.
EZ Texting
The EZ Texting tool makes it easy to get started with text marketing. It's great for beginners and has all the features you need, like scheduling, tracking, personalization, and reports. Over 160 thousand marketers use the platform, and more than 4 billion messages have been sent through it.
EZ Texting Features
SMS marketing
Sign-up forms
Keywords
MMS messages
Text to vote SMS polls
Shortcode services
Personalization
Drip campaigns
Scheduling
Tracking
Reports
Pricing Plan
14-day free trial
Plus ($49 per month)
Select ($94 per month)
Elite ($149 per month)
Pro ($250 per month)
Bronze ($450 per month)
Silver ($1100 per month)
Gold ($2000 per month)
Custom plan
Airship
Airship is a great tool for handling in-app messages, push notifications, and text messages. Use it to send relevant messages to your audience at the right time and boost user engagement.
Airship Features
Push notifications
In-app messaging
SMS
Web notifications
Mobile wallet
Open channel API
Automation
Personalization
Optimization
Analytics
Pricing
Price available upon request
WhatsApp Business
If you’re looking for a 100% free texting tool, then you should try WhatsApp Business. It’s not as fancy as other messaging tools and doesn’t have as many features, but it’s still quite useful.
WhatsApp Business Features
Business profile
Quick replies
Contact labels
Automated messages
Pricing
Free
Multi-Channel Marketing Tools
Trying to balance multiple marketing channels at once can be very frustrating and time-consuming. Unless, of course, you have some tools to help you.
Iterable
To manage your marketing efforts across multiple mobile channels, try Iterable. This tool uses AI to analyze user behavior and come up with the best time and channel to engage them. I found this to be a standout feature. However, you can use Iterable for many other things like creating and managing lifecycle campaigns and retargeting users.
Iterable Features
Multi-channel integration
Audience identification
Lifecycle campaign creation
Personalization
Optimization
Data integration
Real-time metrics
Pricing Plan
Free trial
Subscription starting from $500
SendPulse
SendPulse allows you to handle text messages, push notifications, web notifications, emails, Facebook messenger, and Viber all in one place. I would single out their drag and drop editor as an exceptional feature.
Send Pulse Features
Drag and drop editor
Subscriptions forms
Trigger emails
Reporting
API
Web push monetization
Automated sending
Email scheduler
Personalized notifications
Segmentation
Unsubscriber list
Chatbots
A/B testing
Pricing
Free trial
Customizable subscription
Pay as you go option
VIP Plan
App Store Optimization Tools
If you have an app, ASO is one of the key things you need to focus on to make it successful. To find out how to do app store optimization like a pro, check out our simple guide.
AppTweak
This tool is great for increasing the visibility of your mobile game or app. AppTweak offers you all the data you need to optimize app stores and increase downloads. Top mobile leaders like PayPal, Microsoft, Expedia, Amazon, and LinkedIn use this mobile marketing tool.
AppTweak Features
Keyword research
ASO report
Keyword monitoring
Keyword shuffler
Category ranking
Keyword data
Organic keywords
Paid keywords
Keyword auto-suggestion
Algorithm change detector
Keyword counter
Revenue estimates
API
Visibility score
Pricing Plan
Starter ($69 per month)
Guru ($299 per month)
Power ($599 per month)
Enterprise (custom plan)
SEO Tools
Search engine optimization is crucial for the success of your mobile marketing strategy. Here’s a great tool that will help you boost your organic mobile traffic.
SEMRush
With more than 4 million users, SEMRush is one of the leading SEO tools on the market. It allows you to do everything from competitor analysis, keyword research, deep link analysis, rank tracking, and traffic analysis.
SEMRush Features
Organic research
Organic traffic insights
Keyword research
Backlink building
Position tracking
Analytics
Rank tracking
Site audit
On-page SEO checker
Search engine sensor
Content audit
SEO content template
Traffic analytics
Pricing Plan
Pro ($99 per month)
Guru ($199 per month)
Business ($399 per month)
Enterprise (custom plan)
Have you used any of these mobile marketing tools? Are there any mobile marketing tools you think we should add to this list? Tell us in the comments below!
Read More About Mobile Marketing 👇
About Udonis:
In 2018 & 2019, Udonis Inc. served over 14.1 billion ads & acquired over 50 million users for mobile apps & games. We’re recognized as a leading mobile marketing agency by 5 major marketing review firms. We helped over 20 mobile apps & games reach the top charts. Want to know how we make it look so effortless? Meet us to find out.
https://medium.com/udonis/starting-a-winning-mobile-marketing-strategy-is-anything-but-simple-f4e5aa22bca4
['Andrea Knezovic']
2019-12-04 15:22:07.201000+00:00
['Marketing Tools', 'Digital Marketing Tools', 'Marketing', 'Mobile Marketing', 'Mobile Marketing Tips']

Ahmad Hasan Dani, RIP
The subcontinent lost a distinguished resource in archeology with the passing of the great Pakistani Indologist, Ahmad Hasan Dani, on January 26. Dani was born in Basna (now in Chhattisgarh) in 1920 and became the first Muslim to graduate from Banaras Hindu University (1944). His works on Harappa, Mohenjo-daro and the Sarasvati civilization have had a profound impact on our understanding of the subcontinent's history. Dani was one of the few scholars who continued to challenge the two popular yet divergent theories of whether the Indus Valley Civilization (IVC) was Dravidian (supported by scholars like Asko Parpola) or Aryan (the "Out of India" theory). He also continued to question theories of the IVC's cultural and religious continuity into early Vedic Hinduism, disagreeing, again, with his contemporaries on the issue.
In 1949, Dani was the first to propose a connection between the reference to "Hariyupiah" in the Rig Veda, quoted below, and the IVC center of Harappa:
In aid of Abhyavartin Cayamana,
Indra destroyed the seed of Varasikha.
At Hariyupiya he smote the vanguard of the Vrcivans,
and the rear fled frighted.
(Rig Veda, XXVII.5)
Professor Dani was a recipient of the Hilal-e-Imtiaz (“Crescent of Excellence”), Pakistan’s second highest honor. During his career, he published more than 30 books, and lived and worked all across the subcontinent, including Dhaka, Peshawar and South India. He was fluent in 14 Indo-European and Dravidian languages. Professor Dani’s intervention in 2005 prevented the construction of an amusement park over an archaeological site in Harappa. His research has helped a region burdened by 400 years of subjugation and slavery, rediscover itself. May he rest in peace.
https://medium.com/ini-filter-coffee-archives/ahmad-hasan-dani-rip-7abe807a512
['Rohan Joshi']
2017-02-27 17:13:58.159000+00:00
['Indus Valley Civilization', 'India', 'Archeology', 'Pakistan', 'World']

Understanding the Spark insertInto function
Photo by @marcusloke on Unsplash
Raw Data Ingestion into a Data Lake with spark is a common currently used ETL approach. In some cases, the raw data is cleaned, serialized and exposed as Hive tables used by the analytics team to perform SQL like operations. Thus, spark provides two options for tables creation: managed and external tables. The difference between these is that unlike the manage tables where spark controls the storage and the metadata, on an external table spark does not control the data location and only manages the metadata.
In addition, a retry strategy to overwrite some failed partitions is often needed. For instance, say a batch job (timestamp partitioned) failed for the partition 22/10/2019 and we need to re-run the job, writing the correct data. There are two options: a) regenerate and overwrite all the data, or b) process and overwrite the data only for the needed partition. Option one is discarded due to performance issues; imagine having to reprocess an entire month of data.
Consequently, the second option is used, and fortunately Spark has the dynamic partitionOverwriteMode setting, which overwrites data only for the partitions present in the current batch. This option works perfectly when writing data to an external data store like HDFS or S3, cases where it is possible to reload the external table metadata with a simple CREATE EXTERNAL TABLE command.
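For reference, this is the standard way to enable that behavior in Spark 2.3+; the dataframe, partition column, and path below are illustrative:

// Only the partitions present in the incoming batch are overwritten;
// all other partitions keep their existing data.
ss.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

correctedPartitionDF.write
  .mode(SaveMode.Overwrite)
  .partitionBy("date")
  .parquet("s3://bucket/table_path")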
However, for Hive tables stored in the metastore with dynamic partitions, there are some behaviors that we need to understand in order to preserve data quality and consistency. First of all, even though Spark provides two functions to store data in a table, saveAsTable and insertInto, there is an important difference between them:
saveAsTable: creates the table structure and stores the first version of the data. However, the overwrite save mode works over all the partitions, even when dynamic is configured.
insertInto: does not create the table structure; however, the overwrite save mode works only over the needed partitions when dynamic is configured.
So, saveAsTable can be used to create the table from a raw dataframe definition; then, after the table is created, overwrites are done using the insertInto function in a straightforward pattern. Nevertheless, insertInto presents some poorly documented behaviors while writing partitioned data, and some challenges when working with data that contains schema changes.
Order of the Columns Problem
Let’s write a simple unit test where a table is created from a data frame.
it should "Store table and insert into new record on new partitions" in {
val spark = ss
import spark.implicits._
val targetTable = "companies_table" val companiesDF = Seq(("A", "Company1"), ("B", "Company2")).toDF("id", "company")
companiesDF.write.mode(SaveMode.Overwrite).partitionBy("id").saveAsTable(targetTable)
val companiesHiveDF = ss.sql(s"SELECT * FROM ${targetTable}")
So far, the table was created correctly. Then, let’s overwrite some data using insertInto and perform some asserts.
  val secondCompaniesDF = Seq(("C", "Company3"), ("D", "Company4"))
    .toDF("id", "company")
  secondCompaniesDF.write.mode(SaveMode.Append).insertInto(targetTable)

  val companiesHiveAfterInsertDF = ss.sql(s"SELECT * FROM ${targetTable}")

  companiesDF.count() should equal(2)
  companiesHiveAfterInsertDF.count() should equal(4)
  companiesHiveDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B")
  companiesHiveAfterInsertDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B", "C", "D")
}
This should work properly. However, look at the following data print:
As you can see, the asserts fail due to the positions of the columns. There are two reasons: a) saveAsTable uses the partition column and adds it at the end; b) insertInto works using the order of the columns (exactly as in an SQL INSERT INTO statement), not the column names. In consequence, adding the partition column at the end fixes the issue, as shown here:
  //partition column should be at the end to match the table schema
  val secondCompaniesDF = Seq(("Company3", "C"), ("Company4", "D"))
    .toDF("company", "id")
  secondCompaniesDF.write.mode(SaveMode.Append).insertInto(targetTable)

  val companiesHiveAfterInsertDF = ss.sql(s"SELECT * FROM ${targetTable}")
  companiesHiveAfterInsertDF.printSchema()
  companiesHiveAfterInsertDF.show(false)

  companiesDF.count() should equal(2)
  companiesHiveAfterInsertDF.count() should equal(4)
  companiesHiveDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B")
  companiesHiveAfterInsertDF.select("id").collect().map(_.get(0)) should contain allOf("A", "B", "C", "D")
}
Now the tests pass and the data is overwritten properly.
Matching the Table Schema
As described previously, the order of the columns is important for the insertInto function. Besides, let's imagine you are ingesting data that has a changing schema and you receive a new batch with a different number of columns.
New Batch With Extra Columns
Let’s test first the case when more columns are added.
//again adding the partition column at the end and trying to overwrite partition C.
val thirdCompaniesDF = Seq(("Company4", 10, "C"), ("Company5", 20, "F"))
.toDF("company", "size", "id")
thirdCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
While trying to call insertInto the following error is shown:
Hence, a function that returns the missing columns in the table is needed:
def getMissingTableColumnsAgainstDataFrameSchema(df: DataFrame, tableDF: DataFrame): Set[String] = {
  val dfSchema = df.schema.fields.map(v => (v.name, v.dataType)).toMap
  val tableSchema = tableDF.schema.fields.map(v => (v.name, v.dataType)).toMap

  // Columns present in the incoming dataframe but missing in the table,
  // rendered as "name TYPE" fragments ready for ALTER TABLE ... ADD COLUMNS.
  val columnsMissingInTable = dfSchema.keys.toSet
    .diff(tableSchema.keys.toSet)
    .map(x => x.concat(s" ${dfSchema(x).sql}"))

  columnsMissingInTable
}
Then, the SQL ALTER TABLE command is executed. After this, the insertInto function works properly and the table schema is merged as you can see here:
val tableFlatDF = ss.sql(s"SELECT * FROM $targetTable limit 1")
val columnsMissingInTable = DataFrameSchemaUtils.getMissingTableColumnsAgainstDataFrameSchema(thirdCompaniesDF, tableFlatDF)
if (columnsMissingInTable.size > 0) {
ss.sql((s"ALTER TABLE $targetTable " +
s"ADD COLUMNS (${columnsMissingInTable.mkString(" , ")})"))
}
thirdCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
val companiesHiveAfterInsertNewSchemaDF = ss.sql(s"SELECT * FROM $targetTable")
companiesHiveAfterInsertNewSchemaDF.printSchema()
companiesHiveAfterInsertNewSchemaDF.show(false)
New Batch With Fewer Columns
Let’s test now the case when fewer columns are received.
val fourthCompaniesDF = Seq("G", "H")
.toDF("id")
fourthCompaniesDF.write.mode(SaveMode.Overwrite).insertInto(targetTable)
The following error is shown:
https://towardsdatascience.com/understanding-the-spark-insertinto-function-1870175c3ee9
['Ronald Ángel']
2019-10-23 13:26:05.149000+00:00
['Spark', 'Big Data', 'Hive', 'Dynamic Partitions', 'Insert Into']

I Am Living In A Box
I am living in a box
I created it to keep me safe
I thought I was free
I could no longer see my box but it was there
I kept bumping into it
Invisible restrictions
Constrictions
How to live prescriptions
All part of my box
I created it to keep me safe
I thought I was free
I can now feel my box I know it is there
I’ve tried kicking and punching it
It flinches
Moves a few inches
Then invades me like chinches
They live in my box
I created them to keep me safe
I thought I was free
I thought I was free
I thought I was free
https://medium.com/illumination/i-am-living-in-a-box-d61c60ffff30
['John Walter']
2020-07-28 05:28:09.574000+00:00
['Poetry', 'Freedom', 'Self', 'Safety', 'Mental Health']

Microsoft Research Unveils Three Efforts to Advance Deep Generative Models
Microsoft Research Unveils Three Efforts to Advance Deep Generative Models
Optimus, FQ-GAN and Prevalent bring new ideas to apply generative models at large scale.
Generative models have been an important component of machine learning for the last few decades. With the emergence of deep learning, generative models started being combined with deep neural networks, creating the field of deep generative models (DGMs). DGMs hold a lot of promise for the deep learning field, as they have the ability to synthesize data from observations. This capability can prove key to improving the training of large-scale models without requiring large amounts of data. Recently, Microsoft Research unveiled three new projects looking to advance research in DGMs.
One of the biggest questions surrounding DGMs is whether they can be applied to large-scale datasets. In recent years, we have seen plenty of examples of DGMs applied at a relatively small scale. However, the deep learning field is gravitating towards a "bigger is better" philosophy when it comes to data, and we are regularly seeing new models being trained on unfathomably big datasets. The idea of DGMs that can operate at that scale is one of the most active areas of research in the space and the focus of the Microsoft Research projects.
Types of DGMs
A good way to understand DGMs is to contrast them with their best-known complement: discriminative models. Often described as siblings, generative and discriminative models encompass different ways in which we learn about the world. Conceptually, generative models attempt to generalize everything they see, whereas discriminative models learn the unique properties of what they see. Both have strengths and weaknesses. Discriminative algorithms tend to perform incredibly well in classification tasks involving high-quality datasets. Generative models, however, have the unique advantage that they can create new datasets similar to existing data, and they operate very efficiently in environments that lack large labeled datasets.
The essence of generative models was brilliantly captured in a 2016 blog post by OpenAI in which they stated that:
“Generative models are forced to discover and efficiently internalize the essence of the data in order to generate it.”
In that same blog post, OpenAI outlined a taxonomy for categorizing DGMs that included three main groups; their canonical training objectives are sketched after the list:
I. Variational Autoencoders: An encoder-decoder framework that allows to formalize this problem in the framework of probabilistic graphical models where we are maximizing a lower bound on the log likelihood of the data.
II. Autoregresive Models: This type of model factorize the distribution of the training data into conditional distributions effectively modeling every individual dimension of the dataset from previous dimensions.
III. Generative Adversarial Networks: A generator-discriminator framework that uses an adversarial game to generate the data distributions.
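To make the first group concrete, here is a minimal sketch of a VAE training objective in PyTorch. It is illustrative rather than any particular paper’s implementation: encoder and decoder are assumed networks, and the squared-error reconstruction term is just one common choice.

import torch
import torch.nn.functional as F

def elbo_loss(x, encoder, decoder):
    # Negative ELBO for one mini-batch; minimizing it maximizes
    # a lower bound on the log likelihood of the data.
    mu, logvar = encoder(x)                        # parameters of q(z|x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)           # reparameterization trick
    x_recon = decoder(z)
    recon = F.mse_loss(x_recon, x, reduction="sum")                # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(q(z|x) || N(0, I))
    return recon + kl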
In recent years, we have seen major advancements in applying DGMs to large-scale models such as OpenAI’s GPT-2 or Microsoft’s Turing-NLG. These models follow similar learning principles: self-supervised pre-training with task-specific fine-tuning. The biggest question remains whether DGMs can be systematized for large-scale learning tasks. In that regard, Microsoft Research recently unveiled three major research efforts.
Optimus
In the paper Optimus: Organizing sentences with pre-trained modeling of a universal latent space, Microsoft Research introduces a large-scale VAE model for natural language tasks. Optimus provides an innovative DGM that can be both a powerful generative model and an effective representation learning framework for natural language.
Traditionally, large-scale pre-trained natural language models have been specialized in a single role. Models such as GPT-2 or Megatron have proven to be powerful decoders, while models like BERT have excelled as large-scale encoders. Optimus combines both approaches in a novel architecture shown below:
The Optimus architecture includes a BERT-based encoder and a GPT-2-based decoder. To connect BERT and GPT-2, Optimus uses two different approaches. In the first approach, the latent variable z is represented as an additional memory vector for the decoder to attend to. In the second, the latent variable z is added to the bottom embedding layer of the decoder and used directly in every decoding step.
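Here is an illustrative sketch of those two injection mechanisms. This is not the actual Optimus code; the dimensions and layer names are assumptions.

import torch
import torch.nn as nn

hidden_size, latent_size = 768, 32             # assumed dimensions
z = torch.randn(1, latent_size)                # latent from the BERT-based encoder
token_emb = torch.randn(1, 10, hidden_size)    # decoder embeddings (batch, seq, hidden)
project = nn.Linear(latent_size, hidden_size)  # maps z into the decoder's space

# Approach 1: the projected z becomes an extra memory vector the decoder attends to.
memory = project(z).unsqueeze(1)                       # shape (1, 1, hidden)
inputs_memory = torch.cat([memory, token_emb], dim=1)  # prepended as a pseudo-token

# Approach 2: the projected z is added to the bottom embedding layer at every step.
inputs_added = token_emb + project(z).unsqueeze(1)     # broadcast over the sequence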
The initial tests in Optimus showed key advantages over existing pre-trained language models:
1) Language Modeling: Compared with all existing small VAEs, Optimus shows much better representation learning performance, measured by mutual information and active units.
2) Guided Language Generation: Optimus showed unique capabilities to guide language generation at a semantic level.
3) Low-Resource Language Understanding: By learning unique feature patterns, Optimus showed better classification performance and faster adaptation than alternative models.
FQ-GAN
In the paper Feature Quantization Improves GAN Training, Microsoft Research proposed a new DGM approach to image generation. FQ-GAN’s innovation lies in representing images in a discrete space rather than a continuous one.
Training with large datasets has been one of the main challenges for generative adversarial networks (GANs). Part of that challenge has been attributed to the fact that GANs rely on a non-stationary learning environment that depends on mini-batch statistics to match features across different image regions. Since a mini-batch only provides an estimate, the true underlying distribution can only be learned after passing through a large number of mini-batches.
To address this challenge, FQ-GAN proposes the use of feature quantization (FQ) in the discriminator. A dictionary is first constructed via a moving-average summary of the features seen in recent training history for both true and fake data samples. This makes it possible to build a large and consistent dictionary on the fly, which suits the online fashion of GAN training. Each dictionary item represents a unique feature prototype of similar image regions. By quantizing the continuous features of traditional GANs into these dictionary items, FQ-GAN forces true and fake images to construct their feature representations from this limited set of values when judged by the discriminator. This alleviates the poor-estimate issue of mini-batches in traditional GANs. The following diagram illustrates the main components of the FQ-GAN architecture:
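As a complement to the diagram, here is a rough sketch of the quantization step. It is a simplification, not the paper’s exact algorithm: the update rule, the momentum value and all names are illustrative.

import torch

def feature_quantize(features, dictionary, momentum=0.9):
    # features: (batch, dim) continuous discriminator features
    # dictionary: (K, dim) moving-average feature prototypes
    dists = torch.cdist(features, dictionary)   # distance to every prototype
    idx = dists.argmin(dim=1)                   # nearest dictionary item per feature
    quantized = dictionary[idx]                 # snap each feature to its prototype
    # Straight-through estimator: gradients still flow to the continuous features.
    quantized = features + (quantized - features).detach()
    # Moving-average update of the selected prototypes from the current batch.
    with torch.no_grad():
        dictionary[idx] = momentum * dictionary[idx] + (1 - momentum) * features
    return quantized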
The initial tests with FQ-GAN showed that the proposed model can improve image generation across diverse large-scale tasks. The FQ module proved to be effective in matching features in large training datasets. The principles of FQ-GAN can be easily incorporated into existing GAN architectures.
Prevalent
In the paper Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training, Microsoft Research introduces Prevalent, a DGM agent that can navigate a visual environment following language instructions. The challenge that Prevalent addresses is a classic one: training deep learning agents on multi-modal inputs is nothing short of a nightmare.
To address the multi-modal input challenge, Prevalent proposes to pre-train an encoder that aligns language instructions and visual states into joint representations. The image-text-action triplets at each time step are fed into the model independently, and the model is trained to predict the masked word tokens and the next actions, thus formulating vision-and-language navigation pre-training in the self-supervised paradigm.
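A simplified sketch of those two objectives follows, assuming a model that returns token logits and action logits; all names here are illustrative, not the paper’s code.

import torch.nn.functional as F

def pretraining_loss(model, image_feats, instr_tokens,
                     masked_positions, masked_targets, action_label):
    token_logits, action_logits = model(image_feats, instr_tokens)
    # 1) masked word prediction over the instruction tokens
    #    (token_logits indexed at the masked positions, shape (N, vocab))
    mlm = F.cross_entropy(token_logits[masked_positions], masked_targets)
    # 2) next-action prediction conditioned on the current image-text pair
    act = F.cross_entropy(action_logits, action_label)
    return mlm + act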
To model the vision-and-language navigation tasks, Prevalent relied on three fundamental datasets: Room-to-Room (R2R), Cooperative Vision-and-Dialog Navigation (CVDN), and “Help, Anna!” (HANNA). R2R is an in-domain task, where the language instruction is given at the beginning and describes the full navigation path. CVDN and HANNA are out-of-domain tasks; the former navigates based on dialog history, while the latter is an interactive environment where intermediate instructions are given in the middle of navigation.
In the Prevalent architecture, the image-text-action triplets are collected from the R2R dataset and the pre-trained model is then fine-tuned for the tasks in the R2R, CVDN and HANNA environments. The result is an agent that is able not only to master the three environments but also to effectively generalize its knowledge to unseen environments and tasks.
DGMs are a key element in scaling deep learning models. Microsoft Research’s efforts with Optimus, FQ-GAN and Prevalent present new ideas that can be incorporated into the next generation of DGM models. Microsoft Research open-sourced the code related to these efforts together with the research papers.
|
https://jrodthoughts.medium.com/microsoft-research-unveils-three-efforts-to-advance-deep-generative-models-b1d2fe3395e8
|
['Jesus Rodriguez']
|
2020-04-27 12:59:53.496000+00:00
|
['Deep Learning', 'Invector Labs', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
4,957 |
The 3 Most Common Ways Law Firms Waste Money on Marketing
|
Getting distribution for your products or services is usually the single most challenging aspect of building any business, including a law firm. Anybody with a law degree can open up a solo practice. But how will you reach enough clients? And more importantly, how will you do so in a way that costs less than the amount of revenue you can generate from serving each one? Answering those questions is the very foundation of your business model. Here are 3 common ways law firms waste money on marketing to help you cut down on unnecessary expenditures, and set your business model up for success.
Law Firm Marketing: The Holy Grail
The legal industry is highly fragmented, with big firms at the top serving large corporations, and thousands of smaller firms competing for all the individual and small business clients. This fragmentation makes the market highly competitive, and drives up the cost of marketing and advertising.
Just look at this list of 2015’s highest priced search terms through Google advertising, and notice how many of them are related to finding a lawyer. That’s a lot of advertising dollars being directed at your target clientele by your competition.
Given the tremendous competition and high advertising costs, it’s clear that finding a way to effectively market your law firm without breaking the bank is the holy grail for your continued success and growth.
The particular marketing approach that will work best varies from one firm to the next. But, here are 3 common ways that many law firms waste money on marketing to give you a good idea of what not to do as you implement your law firm marketing plan.
The 3 Most Common Ways Law Firms Waste Money on Marketing
1. Not reaching the right audience
As we have written about before, the key to establishing a strong law firm brand is having a well-defined target audience, and positioning your law firm to create awareness within that group. If you are spending your marketing dollars aimlessly, without a clear target audience in mind, there’s a very high chance you will be wasting money.
Many law firms just throw up a billboard or bus stop ad with a phone number and hope for the best. But hope is not an effective marketing strategy. Before you put up a physical ad, you need to be absolutely sure that your target clientele will not only see that ad, but also be in a position where they would be likely to contact you after seeing it.
This is quite difficult to measure, which is one reason why online marketing can be more cost effective. Unlike with physical ads, you have a much better ability to gauge the intent of a prospective client at the time they are interacting with your ad or marketing message (e.g. via direct search advertising, SEO-optimized articles or blog posts, etc.).
But regardless of where and how you plan to market your law firm, it’s critical that you have first identified the demographics of your target audience. Think about the types of people that become your clients most frequently — age, sex, religion, ethnicity, location, social class, etc. are all factors to consider. Each of these data points will correspond closely to the marketing methods and channels that make the most sense.
Identifying your audience is one of the most important steps to take before implementing a marketing plan, so don’t overlook it, or you’ll be setting yourself up to waste money.
2. Using the wrong marketing strategy
Marketing is a game of different strokes for different folks, as they say. But it’s not about preference as much as it is about what strategies work, and why.
We covered this topic in detail in our post about the law firm marketing spectrum. The general concept is that the marketing methods available to you are largely determined by the type of law practice you run, the average cost of your services, and the lifetime of an average client.
You’ve got to carefully consider key performance indicators such as average client acquisition cost, client lifetime value, and ROI when deciding on your marketing strategy.
Failing to pick the right marketing strategy is a surefire way to waste money, so here are some general guidelines about the types of marketing strategies that work for various types of practices:
High Volume, Low Value: typically one-off legal matters such as traffic, criminal, basic business/real estate transactional work, etc.
word of mouth
referral partnerships
social media
blogging
Medium Volume, Medium Value: typically more complex matters such as estate planning, family law, civil litigation, or contingency cases like personal injury and employment
paid lead generation
PPC advertising
SEO
video marketing
print/physical advertising
Low Volume, High Value: typically longer term clients such as high net-worth estate planning/family law, corporate law, etc.
events
referral partnerships
networking
content marketing
There’s not a formula for picking the best marketing strategy for a law firm, and it may require some trial and error. The right marketing strategy will be one that enables you to reach your target audience and communicate your message to them in a cost effective manner.
3. Not tracking the results and adjusting over time
Marketing is anything but a “set it and forget it” activity. The world is constantly changing, and marketing strategies that worked in the past will eventually become obsolete as competitors catch on to trends, and new technologies change the landscape.
Not understanding the numbers and continuing to spend money on channels that don’t produce results is one of the biggest ways law firms waste money on marketing.
You’ve got to keep up with the trends, or your marketing efforts will gradually become less effective, and the costs will go up. You should constantly be tweaking your approach, maintain an open mind, and utilize technology to analyze the results.
Here are some basic guidelines about what data you should be collecting, how to capture it, and how to interpret it in order to continually improve your marketing strategy with time:
What to Track
The most important things to track are the number of leads each marketing channel produces, the total amount of money spent on each marketing activity, and the conversion rate for each of those channels over a given period of time.
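To make that concrete with purely hypothetical numbers: if a PPC campaign generated 50 leads over a quarter and 5 of them became paying clients, that channel’s conversion rate is 5 ÷ 50 = 10%.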
How to Track It
In order to track these things, you will want to employ a combination of strategies.
For phone based lead channels, it’s important to have different phone numbers assigned to each particular advertisement or marketing method so that you can identify which channel produced a call.
For web based lead channels, you should set up Google analytics to track how many visitors you received, how they got to your website (e.g. from search, from social media, from an ad, etc.), and what percentage of them completed a goal (such as filling out your contact form, calling your office, or signing up for your newsletter).
You also need a good system in place, such as a law firm CRM, in order to track every lead and identify the source from which that lead came to your law firm.
How to Interpret It
With this data in hand, you should look at the total cost to acquire each paying client and compare that number across each of the channels.
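For example, again with hypothetical numbers: if you spent $2,000 on PPC ads in a month and signed 4 paying clients from that channel, your acquisition cost is $500 per client; if a $300 networking event brought in 2 clients, that channel costs $150 per client and, all else being equal, is the more efficient spend.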
If any channel isn’t producing good results, eliminate it immediately so you’re not wasting money. Next, create an action plan for ways to decrease your client acquisition costs and increase your conversion rates wherever possible.
For instance, strive to improve your SEO ranking so that you get more organic website traffic, rather than paying per click. Use the suggestions in this post to maximize your website conversion rate so that there is a higher likelihood of each visitor contacting you. And don’t fall victim to this bottleneck which costs many firms clients when getting engagement letters signed.
Continue to measure the results of your marketing on a monthly or quarterly basis, keep doing more of the things that work, and toss out the things that don’t. It’s that simple.
Summary
Creating an effective marketing plan is not going to be easy. In fact, it is often the hardest part of growing a law practice. Understanding the 3 biggest ways law firms waste money on marketing is an important first step. The three biggest mistakes many firms make are:
1. Not identifying and targeting the right audience
2. Not utilizing the right marketing channels for your audience
3. Failing to track the results and adjust your strategy with time
The good news is, if you avoid making these money-wasting mistakes, you’ll give yourself a good shot to implement a marketing strategy that’s destined for success from the start.
|
https://medium.com/law-firm-marketing/the-3-most-common-ways-law-firms-waste-money-on-marketing-b8c52cfde57d
|
['Aaron George']
|
2016-12-05 20:56:16.300000+00:00
|
['Law Firm Marketing', 'Growing A Law Firm', 'Business Of Law', 'Marketing', 'Kpi']
|
4,958 |
Open Source & Secret Santa with Santulator
|
By Adam Carroll
Introduction
It’s time for a festive post on the King Tech Blog! A few weeks ago I released the first full version of Santulator, a fun, Open Source program to help you to run Secret Santa draws. I’m from a big family and I initially wrote this with families in mind but it has since proven to be useful for office Secret Santa draws too. It’s completely free, so download it and give it a try yourself. Santulator is a personal project and not a King project.
Since this is a technology blog, I’d like to share some details about how Santulator works. The project is under active development so I’ve fixed all the links to v1.1.0 to avoid them becoming stale as changes are made in the project on a regular basis.
Open Source
Open Source projects encapsulate an open technology exchange, collaborative participation, rapid prototyping, transparency, and community development. Santulator is entirely Open Source and is available on GitHub. If you are interested, I’d encourage you to fork the project and play around with the code yourself.
Internally, Santulator is built on lots of other great Open Source software, too many projects to mention them all here. I’ll highlight a few in this post to give you a flavour of what is available.
Personally, I’m a big fan of Open Source and it’s fun to participate in the various software development communities. I’m also the author of another Open Source project, VocabHunter and through my work on that, I have developed my enthusiasm for the world of Open Source software development.
User Interface
The Santulator user interface is built using JavaFX 11. This makes it possible to build a cross-platform desktop application on Mac, Linux and Windows. You can see the main Santulator screen here with the list of participants in the draw:
Another article I wrote for the King Tech Blog, “How JavaFX was used to build a desktop application”, explains in detail how this can be done.
The user interface itself is relatively simple. It consists of the main screen that I presented above along with a guided “wizard” that you use for running the draw. Here’s the wizard, in action:
The ControlsFX Open Source library provides the components for the wizard, along with those used in various other parts of the Santulator user interface.
Festive Colours
Corporate colour schemes in business applications have their place, but that place is probably not in a Secret Santa program. One of the things that I wanted to achieve with Santulator is a festive look and feel: I want it to look fun and not like a productivity tool. JavaFX makes it easy to change the colours and styles of user interface components using a dialect of Cascading Style Sheets (CSS). I took advantage of this by applying a festive colour scheme to the user interface:
A couple of years ago I went to the JavaOne conference (now renamed to Oracle Code One) through my job at King. There I attended a presentation “JavaFX Tips and Tricks” by Dirk Lemmermann. One of Dirk’s tips was to get to know the JavaFX default Modena style sheet and to learn from that. This tip proved to be very useful to this work as it was the key to finding exactly the styles that I needed to change. You can see the Modena style sheet in the JavaFX repository on GitHub here.
I chose a set of three base colours based on ideas of Christmas trees and wrapping paper. You can see them here:
I define these three basic colours in colours.css, as follows:
-colour-principal: #4FB684;
-colour-bow: #FFDA90;
-colour-significant: #B21118;
I then derive most of the other colours in the user interface from these three. For example, the colours for the “About” dialogue are defined in colours.css by overriding styles from the Modena style sheet. The definition of the background colour for this dialogue looks like this:
.about { -fx-base: derive(-colour-principal, -40%); }
Overall we ended up with what I think is a nice festive “About” dialogue:
Packaging Santulator
Just as it was important for me to try to make using Santulator fun and simple, I also wanted to make it easy to install. Santulator is a cross-platform application that runs on Mac, Linux and Windows and I was keen to include an appropriate installer for each of those systems. For example, a Mac user should feel comfortable installing Santulator just as they would any other Mac application without needing to know that underneath it is implemented in Java.
I was able to achieve my goal of creating “native” installable bundles for Santulator using the Java Packager. These bundles include everything that the user needs to run the program, including Java itself. Also, thanks to JLink, only those Java 11 modules that are actually needed are included in the bundles. JLink was introduced in Java 9 and can be used to build a custom Java runtime image containing a specific set of modules. If you’re thinking about doing this in your own project, have a look at my article “Using the Java Packager with JDK 11”.
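As a rough illustration only (Santulator’s actual build is driven by Gradle, and the module list here is a placeholder), a JLink invocation to assemble a trimmed runtime looks something like this:

jlink --module-path $JAVA_HOME/jmods \
      --add-modules java.base,java.desktop \
      --output build/runtime

The resulting image in build/runtime contains just those modules and their dependencies, which is what keeps the installable bundles small.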
The Draw Results
When you run your Secret Santa draw, Santulator creates a collection of PDF files containing the draw results. One PDF is created for each participant telling them who they will buy a present for. Just for fun and to maintain the secret of the draw, the files can be protected with a password. That way you won’t accidentally see all of the draw results if you’re distributing the files.
Behind the scenes, Santulator uses the OpenPDF library to generate the PDF files and Bouncy Castle for the password protection. Thanks to these Open Source libraries, the Santulator code to generate the PDF files is nice and simple, as you can see here in PdfGiverAssignmentWriter.
Testing
As any software engineer will tell you, good automated tests are the foundation on which high-quality software is built. Santulator includes a suite of JUnit 5 tests that run automatically whenever the Gradle build is executed. In addition to the unit tests, Santulator also includes an automated GUI test. I have found that this is a great way to catch problems that might otherwise have slipped through the net. The GUI test is built using TestFX and to learn more about this, see my article “User Interface Testing with TestFX”.
The Santulator automated GUI test normally runs in “headless” mode. This means that the test doesn’t take over your display while you’re running the build. Sometimes, it’s useful to switch off headless mode while developing. Just for fun, here’s a video of the test in action with headless mode disabled. See if you can keep up with the testing robot!
Plans and Ideas for the Future
I’m really pleased to have released this first version of Santulator and to see people using it and sharing it with others. I also hope that the ideas in the Open Source Santulator code and in this article will be useful to people working on other projects. I’ve received a lot of feedback and some good ideas for future directions so I’ll have to see what I have time for in the new year. And being Open Source, I welcome technical contributions in the form of issue reporting and pull requests on GitHub.
Several users of the system have mentioned that it would be great to be able to email the results of the draw directly to the participants from within the program. Each email could contain an attachment with the password-protected PDF file that Santulator generates, personalised for the participant, containing the name of the person for whom they will buy a present.
Another thing I’d like to do is to make the system available in other languages. To make a start at this, all of the messages that the user sees have already been extracted to the Java Resource Bundle file, SantulatorBundle.properties. Since I live and work in Barcelona, it would seem natural to start with Spanish and Catalan language versions of Santulator.
I have lots of other ideas so if you’d like to keep up with the progress, use the “watch” function on the Santulator GitHub project.
Happy Holidays!
I hope you’ve enjoyed this quick run through of the technology behind Santulator. Feel free to fork the repository on GitHub and experiment with the code. It’s all Open Source and available free of charge.
And if you’re going to run your own Secret Santa draw, why not download Santulator and give it a try?
Have fun and a very happy holiday everyone!
|
https://medium.com/techking/open-source-secret-santa-with-santulator-9101972359fc
|
['Tech At King']
|
2018-12-14 09:32:02.759000+00:00
|
['Programming', 'Open Source', 'Java', 'Javafx', 'Technology']
|
4,959 |
I Am a Mom Whose Neighborhood Burned During the Protests. It’s Time to Defund the Police.
|
It was yesterday. I was scrolling Twitter on my phone, and my sweet toddler hit me in the leg. When he hit me, it did not matter that some days earlier he had said, “Pippy nice.” The fact was, he hit me. I have learned, through my training as a mental health therapist, to ask questions about behaviors like this. Behaviors, after all, are a kind of communication.
What was my son trying to tell me? Was he too tired? Did he get too much screen time? Did he need my attention? What was going on just before he hit me?
I have learned, as a mother to young children, that you have to answer behaviors the same way you answer language. This is the context from which I say: The behaviors of the police demand answers.
I do not know all of the reasons the police are not agents of safety in my own community and many others. There are many ideas from many people who have been more awake to the problems of policing much longer than me. But I do know that the police’s behaviors are communication. It is time to listen to what police officers’ actions are saying, to what they have been saying for years.
The police are not solving our crimes. The police hurt people and then hurt the people who ask them to stop hurting people. Police officers are telling us with their bodies that there is a very big problem with policing in the United States.
When my toddler hits me, I work to understand what he is saying with his body. At the same time, when he hits, I do something else. I gently and firmly hold his hands. I say, “I won’t let you hit me.” I do this because I love him. Calling to defund and demilitarize the police is not necessarily an aggressive call to action, though it can be. It is not a way to demonize all people in uniform. It is loving boundaries, which we can set with anyone. Especially those we love. These boundaries can keep our community — all of our community — truly safe.
|
https://medium.com/fearless-she-wrote/i-am-a-mom-whose-neighborhood-burned-during-the-protests-its-time-to-defund-the-police-423b2ae7ea4d
|
['Emily Pg Erickson']
|
2020-06-08 14:54:06.583000+00:00
|
['Mental Health', 'Parenting', 'Politics', 'Equality', 'Racism']
|
4,960 |
Would it be wise to use iPhone 5s in 2020?
|
The iPhone 5s has been around since 2013; while it looks like the iPhone 5, it brought some key upgrades. When the iPhone 5s first came out, everybody loved the device because of its small form factor and the most secure Touch ID in any smartphone at that time. The iPhone 5s was also Apple’s first iPhone with a 64-bit mobile processor. Now we are in 2020, and if you check the smartphone market, especially the prices of Apple iPhones, you will find that new iPhones are too expensive; not many people can afford, or want, to spend that much money on a smartphone.
Photo by Daniel Roe on Unsplash
So the question arises: is it really worth it to use a 7-year-old device in 2020?
To give you a good review, I used an iPhone 5s for a week to test whether it is really worth using. Let’s start:
Price:
You can easily buy an iPhone 5s from eBay for $50 to $80, which makes it the cheapest up-to-date iPhone you can buy in 2020.
Design:
The iPhone 5s has an old-style design with big top and bottom bezels. It is not like 2020-style smartphones with small notches and punch-hole camera cutouts in the screen, but it has a premium-looking design. The body is made of aluminum and glass, which makes this a very solid device. It weighs only 112 grams and has a small 4-inch screen, which makes the device very easy to hold and to use with one hand. The best thing about this device is that it comes with a capacitive fingerprint sensor which works pretty well; it is a little slow compared with the latest devices, but it still does the job very well.
Software Support:
The best thing about this device is that it runs iOS 12, which is two-year-old iPhone software; the amazing thing is that Apple still gives software updates to this 7-year-old device. As I am writing this article, I received a new software update to iOS 12.4.8, which includes some important security patches. I am really amazed that Apple still gives software support to a 7-year-old device.
It supports all the latest versions of apps available on the App Store, including all the latest social media applications.
iMessage:
There are many methods you can find online to get iMessage without buying any Apple product, because Apple products are expensive and not everyone can afford, or wants, to spend too much money on electronics. And there are many people who love to use Android but also want to carry an iPhone, because many of their friends and family members use iMessage to communicate.
So if you want iMessage but don’t want to spend a lot of money on Apple’s latest devices, you can easily use an iPhone 5s for messaging.
Social Media Experience:
I tried a few of the most popular social media applications people use these days, like Facebook, Instagram, Snapchat, TikTok, YouTube, WhatsApp and Discord. In my experience, messaging applications like WhatsApp, Discord or Facebook Messenger run very smoothly, but with content-heavy applications like Facebook, Instagram and Snapchat I noticed a little lag in normal use and also when using their story features.
YouTube, on the other hand, works perfectly fine, but if you are a person who likes to watch videos on a bigger screen, then this phone is not for you.
Gaming Experience:
For gaming, you already know you cannot expect a 7-year-old device with only 1GB of RAM to run the latest games smoothly. Still, I tried PlayerUnknown’s Battlegrounds (PUBG) on it, and it works perfectly fine on low settings. Yes, it heats up the device, but the game is playable overall. I also tried many light games, like Brawl Stars, and they run pretty smoothly.
I can easily say that if your main purpose is playing games, then this phone is not for you.
Battery Experience:
It is an old device, so don’t expect too much from it in the battery department. If you are looking for an older device like this, I am sure you are not going to use it for heavy tasks; rather, this device is for people with very light phone usage, like making phone calls, sending messages or using social media apps.
I used this phone for more than a week, and I can easily get 4 to 4.5 hours of screen-on time, which I think is very good.
Camera Experience:
Camera performance is not good, but it’s not bad either. The phone comes with an 8-megapixel rear camera, which records 1080p HD video at 30 fps, and a 1.2-megapixel front selfie camera. It takes very good photos in bright light, but in low-light environments the camera’s performance is not really good.
Overall Usage Experience:
Overall, performance is pretty good for people who only use their smartphones for normal tasks like phone calls, messaging and light social media and camera usage. It is a 7-year-old device, but I am amazed at how quick and responsive it is. Apps load in 3–4 seconds, which I think is not a big deal for many users. It is also very easy to use with one hand because of the small form factor.
Conclusion:
Good for people who only use their phone for calling or messaging.
Good for people who sometimes use their phone to consume social media.
Good for people who have smaller hands, like me.
Good for people who love to use Android but also want to carry an iPhone as a side phone to use Apple services like iMessage.
Good for people who play light games on their phones.
If you liked my article, please feel free to give it a clap. If you have any questions or suggestions for me, please feel free to tell me in the comments section.
|
https://medium.com/macoclock/would-it-be-wise-to-use-iphone-5s-in-2020-3f08575ec19a
|
['Umar Usman']
|
2020-07-17 05:38:21.396000+00:00
|
['iPhone', 'iOS', 'Apple', '2020', 'Smartphones']
|
4,961 |
Objective Tragedies
|
Start with the objective. There’s the virus,
with all the havoc it’s wreaked —
the poorest students falling further behind
(no internet at home), the badly needed
therapy appointments cancelled or unattended,
passersby spitting in the direction of
folks with Asian descent.
Then there’s the subjective. Your favorite
Indian restaurant closing, indefinitely.
Not getting to hug your friend goodbye.
Breaking up, but tenderly,
each of you cupping the other’s face
the way you’d hold a robin’s egg.
The grocery store being out of milk.
Your world is small these days and it’s hard to
triage these things. But since we’re all public health
workers now, it’s become your job to triage,
so you will. You will be a fish in a
Venetian canal, where the waters
(you may have heard) are now clear.
Not, I will add, because of decreased pollution,
but because of decreased boat activity,
which allows sand and sediment to remain on the canal floor.
The difference matters little to the fish,
who can suddenly see where they’re swimming.
No time to distrust it. Keep going.
Keep going. Keep going.
|
https://medium.com/literally-literary/objective-tragedies-2ff491f8b11f
|
['Emma Jane Laplante']
|
2020-03-31 02:25:11.274000+00:00
|
['Relationships', 'Love', 'Travel', 'Environment', 'Poetry']
|
|
4,962 |
Chance Encounters
|
Subtle Glint
a poem
Photo by Daniel A. Teo
Gleams of the sun
delicately caress the
ardent ripples of
blue
horizon composed
as the two elements
homogenize into
ornate gold
Thin sheets of cloud
shadows land from heaven’s
glow
as day dims
and we settle down after the perspiration
on the fine shores
the very serenity
amidst the tempest of the sea
And as the waves collide into the rocks
we espy the last glimpse of around
still dazzling from the glows of the
sun
when they still remain gleaming
but instead under the subtle glint
of the evening lamp poles.
|
https://medium.com/chance-encounters/subdued-glint-91329e911057
|
['Daniel A. Teo']
|
2020-11-27 17:45:12.144000+00:00
|
['Poetry', 'Prose', 'Writing', 'Photography', 'Chance Encounters']
|
|
4,963 |
What Should I Look for When Hiring a Book Editor?
|
What Should I Look for When Hiring a Book Editor?
A helpful list of the good and bad
Image credit: Christina Morillo from Pexels
Congratulations! You’ve written a book! Or maybe you’re still in the process of writing it, but you’ve already begun the search for an editor.
The first thing you’ll want to do before you even start looking for someone is to figure out what type of editing you need.
Now that you’ve narrowed down what you need, it’s time to find out who is best suited for the job. More specifically, who is best suited for YOUR job. Because there are many factors that come into play, the editor who did such a great job for your writing friend’s memoir may not be the right person for your YA science fiction, and that’s okay.
Here are a few things to consider:
Does the editor have a website?
Academic editors often don’t have their own websites; working for universities or research facilities will keep their name in the right circles, and they don’t need to advertise in the same way other editors do. An email address is sufficient.
But if you’re looking for a book editor, you’re more likely to find them if they have a website. If they’re part of a professional editing association, you should be able to find them in the directory of that association even if they don’t have a site of their own. The website can be elaborate or as simple as a landing page that takes you to their services and a contact form.
The important thing to note here is that the editor can be found somewhere other than a Goodreads thread, claiming they edit all types of books for $100. Someone who takes the time and effort to set up a website shows that they are willing to invest in their own business.
Why is it important to look for this investment? In a nutshell, we don’t tend to value that which costs us nothing. If an editor has invested in a website, they’re more likely to have invested in their education as well, because they see the benefit that education and experience can bring to the table.
Can you verify their claims?
If the editor you’re looking into says they’ve edited dozens of bestsellers, can you find proof of that claim? If they have their own business, you should be able to make use of the “Look Inside” feature on Amazon to see if their name or business name is somewhere in the front matter. If no mention of the business is made, you can look in the acknowledgments for a name or even send a quick email to the author or publisher to double-check.
You don’t have to be suspicious of everyone, but if you’ve found an editor on your own, you can’t just assume they’re telling the truth. I re-edited books for someone years ago after a not-really-an-editor butchered them, and that fake editor still had the books listed on her site long after her name was nowhere on them anymore due to extensive rewrites. The author even received an email from another author at one point — someone doing their research before hiring, thank goodness — asking how she liked that person’s services, and she was able to steer him clear of wasting his money after explaining the situation.
Can you get a sample edit of the editor’s work?
This one’s a little tricky, but I’ll explain. Many editors, myself included, offer a sample edit on a portion of your manuscript before agreeing to the job. This serves two purposes: you can see if that editor knows what they’re doing, and the editor can see how much work the MS needs (and will price accordingly).
There are authors who insist that you should never, ever pay for a sample edit. I used to think this way until I joined a group of professional editors and heard the reasons many of them chose to only do paid samples.
Those who do free samples will only do them on a small portion of the MS, usually 500–1000 words (2–4 pages), to limit their time spent on the tire-kickers. Even doing that small sample, pricing out the project, and sending an email with the information can take an hour of work time that pays nothing — and it often results in a thank-you with no actual contract signed, through no fault of your own other than the fact that someone else was cheaper.
Those who do paid samples are willing to edit a longer portion of the MS for a small fee that compensates at least some of the time spent. If the author books the job, that fee comes off the total price. I offer both free and paid sample edits now.
Note: if you’re looking for a developmental editor, you’re not likely to be able to get a sample, since developmental editing is such a broad-view type of edit on the manuscript as a whole. Some developmental editors will offer a paid sample on a few chapters, but be aware that the cost will be much higher than that of a copyedited sample, due to the nature of the edit.
Can you look at samples of the editor’s finished work?
As I mentioned above, the “Look Inside” feature on Amazon and other book-selling sites is a handy tool. Not only can you verify that your potential editor’s name is somewhere in there, but you can also see the first chapter or download a sample from the book. These are used as a teaser to entice buyers, but they’re a wonderful resource.
Just as you can tell whether a book is going to be interesting or not from the first chapters, you should be able to see whether the editor did a decent job or not in those same chapters. If you find a host of errors in the first handful of pages — genuine errors, not stylistic choices — that’s probably a good sign that you should remove that particular editor from your list of maybes.
You’re not likely to be able to see a before & after of another author’s work, because that would violate privacy. But you should be able to find enough samples of finished work to satisfy you.
Does the editor get referred by others?
One of the things I love about the authors I work with is that they refer me to others. That tells me they’re not only happy with my work, but that they’re willing to trust that I’d do a good job for someone else, too.
Word of mouth can make or break someone’s business. If someone is hesitant to recommend a particular professional, ask why. Are they known as hard to work with? Do they cause division in their professional groups? I was referred to an author recently by an editor I’d never actually worked with, and when I dropped a quick note to thank the editor for the job, she said she knew we all had similar skills, but that I seemed to be nice and not contentious in the groups we were both part of. I ended up with a great project to work on and a new connection as well.
Does the editor have testimonials?
I feature author testimonials on my website because each of them seems to focus on a different aspect of my services. A couple of them were written for me upon request, but the ones I like the best are those candid ones I’ve grabbed (with permission) from emails between the author and me.
I love to capture those moments when they’re excited and encouraged about what I’ve sent back to them, or when they’ve mentioned me in a blog post while promoting their book. This is how I end up with testimonials like, “I couldn’t find a single thing to fight with you about; rather disappointing.” And I enjoy every word of it.
Can you afford them?
Let me start off by saying that editing isn’t cheap. It’s a skilled job and it takes a lot of time, and you need to be willing to pay for that expertise and the time it takes to do the job well.
That said, there are enough editors out there who do their best to work with authors and budgets, so I wouldn’t start your search by stressing over cost. There will always be the extreme lows and extreme highs, and it’s up to you to see what’s best for your project and price range.
There is an editor who will fit your writing style and your budget. You just need to take your time and look thoroughly. The old saying of “good, fast, cheap — choose any two” is true. If you are trying to keep your costs down, you may find an editor who’s willing to work for a lower price by fitting your work bit by bit in between other, full-paying jobs. You may decide to go with a scaled-down version of the editing you want, or you may have done your homework early enough that you’ve saved enough to be able to afford exactly what you want with no compromises.
Everyone is different, and every budget is different. Some editors are willing to take multiple payments when they see a manuscript that really has potential, and others insist on full payment up front. The important thing is to ask if you have questions about pricing or payments, because you never know if something can be worked out.
The lowest estimate isn’t always the worst editor and the highest price doesn’t always guarantee the best editor. Look at all the edits from the samples you’ve gotten and see who really “gets” you and complements your writing style, and go from there.
Who are they really?
A final thought on hiring an editor for your book: check out who they are when they’re not being an editor. Do they have a social media presence? You don’t need to be a stalker, but it’s not a bad thing to check out someone’s Instagram, Twitter, Facebook, or other social media sites.
Can you still work with someone whose skills you admire, but who posts hateful slurs toward other cultures on their Twitter? What if you see a Facebook rant where they go on and on about “that idiot author” they’re working with? Do you want to be the next potential social media fodder for that person?
An author friend told me years ago about reading a tweet sent to him by a friend, with a “Check this out” attached. The tweet was from his editor, moaning about the book she was editing, how boring it was and how she wanted to gouge her eyes out. She mentioned specific instances and phrases that confirmed it was indeed his, and basically ridiculed his work, never considering that he’d see the tweet.
Personality does count. If an editor doesn’t seem to respect the opposite sex (let’s face it, this happens on both sides), then perhaps they’ll try to bully you into thinking you can’t question any changes they make in your MS. Or they’ll try to impose their own set of values in your writing to make it sound more like something they believe in (and again, this can happen on both sides of the fence).
My husband has a great saying: “Don’t forget what you already know about someone.” I suppose it goes along with Maya Angelou’s reminder of “When someone shows you who they are, believe them the first time.” There are enough great book editors out there that you should never have to work with someone you don’t respect, or who doesn’t respect you.
How can you know what to do?
Don’t panic. Take your time, look across a wide range of avenues, and do your research. Have I mentioned that you shouldn’t panic? Because you really shouldn’t. Choose with care and you could end up with a long and healthy working relationship.
If you’re currently looking for an editor for your book, I’d love to talk to you! Check out my website (the link is in my profile here on Medium) and fill out my contact form. I don’t set standards for others that I don’t meet myself, so feel free to check me out on any of my own social media.
Logo image property of Easy Reader Editing, LLC
Sign up to get my free ebook with over 60 resources for writers!
You just read another exciting post from the Book Mechanic: the source for writers and creators who want to make more work that sells and sell more work they make.
If you’d like to read more stories just like this one, tap here to visit the Book Mechanic.
|
https://medium.com/the-book-mechanic/what-should-i-look-for-when-hiring-a-book-editor-85222c299476
|
['Lynda Dietz']
|
2020-12-30 02:36:46.746000+00:00
|
['Book Editing', 'Book Editor', 'Editing And Proofreading', 'Writing', 'Editing']
|
|
4,964 |
Good Advertising and Bad Advertising
|
Consumers have a complicated relationship with advertising campaigns. On one hand, the popularity of ad-blocking software bears witness to the growing ambivalence that online readers have for AdWords, commercials, and promotional offers. This has become an issue both for websites that depend on ad revenue, and for companies that depend on ads to bring in clients and traffic. On the other hand, research shows that consumers don’t hate advertising per se — they just want it to be better. Furthermore, successful advertisements not only win new customers to a brand, but sometimes go viral in completely unironic ways.
Bad Advertising
Not every company has the money to launch a multi-million dollar commercial. But by looking at prominent examples of advertising campaigns that failed and backfired spectacularly, it’s possible to isolate trends that marketers should avoid at all costs.
The Kendall Jenner Pepsi Commercial
In the history of bad advertising, Pepsi may have won a world record with its magnificently bad decision in early 2017 to launch a commercial that mysteriously tied its product to political activism.
The self-styled “short film” follows celebrity Kendall Jenner through the streets of a generic American city in the midst of a nondescript rally attended by hordes of armored police. Although the situation seems tense, Jenner wins one of the officers over by offering him a can of (apparently magical) Pepsi soda.
Pepsi clearly believed they had something good going on with this commercial. After all, it cost them millions of dollars to produce. But on the day of its release, the Internet’s reaction was so universally negative that the company was forced to pull the ad before 24 hours had even elapsed, and publicly apologized; echoing public sentiment, Time Magazine called the stunt “an inauthentic cash-in on many people’s unhappiness”.
McDonald’s “Dead Dad” Commercial
To promote its “Filet-O-Fish” sandwich in the U.K., fast food giant McDonald’s launched a bizarrely contemplative commercial about a boy who approaches his mother to ask about his deceased father. Naturally, the two go to a local McDonald’s for solace, where the boy orders a Filet-O-Fish with tartar sauce. His mother longingly remarks, “That was your dad’s favorite too.”
The advertisement was panned, prompting Twitter outrage, and an article on the BBC’s website. Eventually McDonald’s formally apologized for something that many considered “distasteful”, and U.K. regulator the Advertising Standards Authority decided to reevaluate its run after receiving reports from upset viewers across the country.
Burger King “O.K. Google” Commercial
Not to be outdone by their competitor, Burger King quickly took on the challenge of irritating the TV and YouTube watching public with a commercial that attempted to hijack users’ Google Home devices with the wake-word “Okay Google,” followed by the question, “What is a whopper burger?”
In theory, a Google Home device would have answered with a description lifted from Wikipedia:
The Whopper is a hamburger, consisting of a flame grilled beef patty, sesame seed bun, mayonnaise, lettuce, tomato, pickles, ketchup, and sliced onion.
Users found the tactic exploitative and annoying; Google responded by deliberately breaking the gimmick. But before this official response, Internet trolls worked hard to manipulate the Wikipedia article so that victims would be told the Whopper was made of “100% medium-sized child,” “cyanide,” and similar libels.
Reactions to the commercial were so negative that — excluding news commentary like the one posted here — all versions of the ad were removed from YouTube.
Good Advertising
An advertising campaign doesn’t have to become famous to be successful — but now that we’ve talked about infamously bad adverts, it’s time to talk about famously good ones.
The Man Your Man Could Smell Like
In 2010, a simple thirty second ad began a long series of commercials featuring former NFL receiver Isaiah Mustafa. In a bathroom, Mustafa pities his audience for lacking him as a significant other, but assures them that Old Spice deodorant can help to take away the sting. By the end of his surreal and rambling monologue, he is sitting on a horse at a beach.
The commercial received universally wide acclaim, and is up to 53 million views on YouTube. This number is particularly important, because it shows that people watched this ad — and still watch it — on purpose for fun, not because they’re forced to.
Old Spice saw a 107% increase in body wash sales after the commercial launched, and the company did not stop there: 186 more videos were produced immediately afterwards as part of a “Response Campaign” just to field messages from fans of the ad. This campaign raked in 5.9 million views on its first day — more than Barack Obama’s victory speech had received on its first day. Truly this is a level of consumer interaction most brands can only dream about.
Purple Mattress Commercials
Purple Mattress exploded onto the scene in 2016, to the tune of $76 million in revenue. Much of this success can probably be attributed to the integrated creatives who simultaneously manage its social media presence and advertisements. At 81 million views, the first Purple Mattress commercial featuring an “egg test” to demonstrate the unique structure of the product is already more popular than the Old Spice commercial on YouTube.
Other commercials by the company feature a mother Sasquatch explaining the benefits of a Purple Mattress in the wilderness. The creativity and humor in these ads is widely commented on, but another conspicuous attribute is the company’s dedicated social media presence, as evinced by Twitter interactions with ecommerce editor Samantha Gordon:
Chuck Testa
The holy grail of online advertising is to become a meme — not merely to go viral, but to actually become a meme featured in organic image macros across the web. In 2011, this happened to unsuspecting California taxidermist Chuck Testa after uploading a seemingly low-budget commercial to the Internet.
The ad represents an elusive balance of “so bad it’s good,” with moments hilariously awkward enough to be endearing, and the weird but eminently meme-worthy slogan “Nope! Chuck Testa”.
On first glance, the whole thing looks like a happy accident — a low quality, local commercial that became ironically famous. In reality, the camp aesthetic was completely intentional, and engineered by Commercial Kings, brainchild of self-made Internet celebrities Rhett & Link with the intention of going viral. It worked.
|
https://medium.com/online-marketing-institute/good-advertising-and-bad-advertising-b8da6b89fd7
|
[]
|
2017-10-28 15:21:58.536000+00:00
|
['Advertising', 'Online Marketing', 'Marketing', 'Viral Marketing', 'Digital Marketing']
|
|
4,965 |
Hey! You!
|
Photo by JD Mason on Unsplash
January 2018:
Have you ever gotten neon light messages from The Universe?
Well, that was my week. Bill Engvall’s famous line “Here’s your sign!” kept running through my mind over and over as I walked through My Life the last few days. I suppose that’s what happens when you take your head out of your ass and start listening.
You see — I recently recommitted myself to my meditation practice.
I had been meditating all along on a regular, but I sat down (literally) and I decided I was going to find a slot, put it into my schedule every day. Not just when I felt like it, not just when there was time, not just when I was at the end of my rope and I knew I ‘needed’ it. Every. Single. Day.
So here is what happened.
White Feather recently replied to one of my stories and got me thinking about connections and our ‘one-ness’. And then there was this post from one of my favorite authors which popped up on my Medium feed.Add to that my random morning meditation selection played heavily on the ‘all beings are connected theme’.
The Universe had spoken.
I sat on my meditation cushion at the end of the meditation and I pondered some in silence. Message received, but what do I do with this information? How do I apply it? What challenge(s) lie ahead where this bit of insight is going to come in handy?
The difficulty of resolving and unifying the complex components of our personalities is an ongoing lifelong journey. It may, in fact, be THE journey. The reason we arrive, live, love and exist. Just to sort out and remember who we really are. To mesh all those bits back to ‘one-ness’. (Thank you White Feather for that!)
And not just ‘one-ness’ within ourselves but also with everything. Every. Single. Thing. Every soul. Every creature. Every living thing. Even every non-’living’ thing. From flora to fauna. Plants, animals, the wind, the sun, the moon, The Universe. We are all connected. We come from the same place — literally, figuratively, spiritually, and energetically.
Nearly every religious practice out there claims the same thing. Do unto others. Because don’t you see? The “others” aren’t really others…they are us. Or certainly part of us. Even scientists agree — living creatures share more of their DNA in common than they don’t. I just Googled it — we have 50% of our DNA in common with the banana I just had for breakfast. The fruit fly buzzing over the rest of the bunch — 60%. The lab mice we torture — 75–90% depending on whether you look at theirs in us or ours in them. They have 90% of ours. We have 75% of theirs. There are no lines that divide us/them, black/white, good/bad. There is only this. Life.
Yoga means ‘to yoke’ in ancient Sanskrit. The reason yogis began to do the asanas — the postures associated with what westerners think of as yoga — is to be able to prepare their bodies to sit and meditate for longer and longer periods of time. Yoga has many branches — many ways of ‘yoking’ our spirit, soul, and body back together. Many paths back to ‘one-ness’.
I can only speak for My Path, of course. Every person gets to ponder their own steps along the way and decide for themselves what makes sense and what is complete bullshit.
I never did face any ultimate challenge this week which gave me an ‘ah-ha’ moment. What I did have though was a peaceful flow in My Life as I faced the everyday shit which normally made me sigh with discontent. I found myself more grateful. For everything. I looked for the ‘one-ness’ in everyone. And everything. I suppose that’s a pretty big ‘ah-ha’ moment in itself.
There is a quote I saw recently — again The Universe whispering in my ear — it goes like this:
“Prayer is talking to The Universe, meditation is listening to it” — Ankit Vekariya
Namaste.
|
https://medium.com/recycled/hey-you-4a641ecd4193
|
['Ann Litts']
|
2019-07-27 22:48:48.408000+00:00
|
['Self-awareness', 'Spirituality', 'Life Lessons', 'Meditation', 'Life']
|
|
4,966 |
How Do Language Models Predict the Next Word?🤔
|
How Do Language Models Predict the Next Word?🤔
N-gram language models - an introduction
Photo by Mick Haupt on Unsplash
Have you ever guessed what the next sentence in the paragraph you’re reading would likely talk about? Have you ever noticed that while reading, you almost always know the next word in the sentence?
Well, the answer to these questions is definitely Yes! As humans, we’re bestowed with the ability to read, understand languages and interpret contexts, and can almost always predict the next word in a text, based on what we’ve read so far.
Can we make a machine learning model do the same? Oh yeah! We very well can! And we already use such models every day. Here are some cool examples:
Autocomplete feature in Google Search (Image formatted by author)
Autocomplete feature in messaging apps (Image formatted by author)
In the context of Natural Language Processing, the task of predicting what word comes next is called Language Modeling.
Let’s take a simple example,
The students opened their _______.
What are the possible words that we can fill the blank with?
Books📗 📒📚 Notes📖 Laptops👩🏽💻 Minds💡🙂 Exams📑❔ Well, the list goes on.😊
Wait…why did we think of these words as the best choices, rather than ‘opened their Doors or Windows’? 🙄 It’s because we had the word students, and given the context ‘students’, the words such as books, notes and laptops seem more likely and therefore have a higher probability of occurrence than the words doors and windows.
Typically, this probability is what a language model aims at computing. Over the next few minutes, we’ll look at n-grams, a simple yet effective traditional NLP technique that was widely used before deep learning models took over.
What does a language model do?
Describing in formal terms,
Given a text corpus with vocabulary V, and
given a sequence of words x(1), x(2), …, x(t),
a language model computes the probability distribution of the next word x(t+1).
Probability distribution of the next word x(t+1) given x(1)…x(t) (Image Source)
A language model, thus, assigns a probability to a piece of text. The probability can be expressed using the chain rule as the product of the following probabilities.
Probability of the first word being x(1)
Probability of the second word being x(2), given that the first word is x(1)
Probability of the third word being x(3), given that the first two words are x(1) and x(2)
In general, the conditional probability of the i-th word being x(i), given that the first (i-1) words are x(1), x(2), …, x(i-1)
The probability of the text according to the language model is:
Chain rule for the probability of a piece of text (Image Source)
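Since the pictured formula is not reproduced here, the chain rule can be written out as follows (a reconstruction using the same x(t) notation as above):

```latex
P\!\left(x^{(1)}, \ldots, x^{(T)}\right)
  = P\!\left(x^{(1)}\right) \cdot P\!\left(x^{(2)} \mid x^{(1)}\right) \cdots
    P\!\left(x^{(T)} \mid x^{(T-1)}, \ldots, x^{(1)}\right)
  = \prod_{t=1}^{T} P\!\left(x^{(t)} \mid x^{(t-1)}, \ldots, x^{(1)}\right)
```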
How do we learn a language model?
Learn n-grams! 😊 An n-gram is a chunk of n consecutive words.
For our example, The students opened their _______, the following are the n-grams for n = 1, 2, 3 and 4 (a short extraction sketch in code follows the list):
unigrams: “the”, “students”, “opened”, “their”
bigrams: “the students”, “students opened”, “opened their”
trigrams: “the students opened”, “students opened their”
4-grams: “the students opened their”
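Here is a minimal sketch of n-gram extraction in Python. The function name and the naive whitespace tokenization are my own simplifications, not from the original article:

```python
# Extract the n-grams of a token sequence as tuples.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the students opened their".split()
print(ngrams(tokens, 2))
# [('the', 'students'), ('students', 'opened'), ('opened', 'their')]
print(ngrams(tokens, 4))
# [('the', 'students', 'opened', 'their')]
```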
In an n-gram language model, we make an assumption that the word x(t+1) depends only on the previous (n-1) words. The idea is to collect how frequently the n-grams occur in our corpus and use it to predict the next word.
Dependence on previous (n-1) words (Image Source)
This equation, on applying the definition of conditional probability yields,
Probabilities of n-grams and (n-1) grams (Image Source)
How do we compute these probabilities?
To compute the probabilities of these n-grams and (n-1)-grams, we just go ahead and start counting them in a large text corpus! The conditional probability is then estimated as the ratio of the n-gram count to the (n-1)-gram count:
Count occurrences of n-grams (Image Source)
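As a rough sketch of this counting approach, assuming the corpus is given as a list of token lists (the function names and the toy corpus are illustrative, not from the article):

```python
from collections import Counter

def count_grams(corpus, n):
    """Count every n-gram across a corpus of tokenized sentences."""
    counts = Counter()
    for tokens in corpus:
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

def mle_probability(corpus, context, word, n=4):
    """Estimate P(word | context) as count(n-gram) / count((n-1)-gram)."""
    ngram_counts = count_grams(corpus, n)
    prefix_counts = count_grams(corpus, n - 1)
    prefix = tuple(context)
    if prefix_counts[prefix] == 0:
        return 0.0  # the (n-1)-gram never occurred: the sparsity problem
    return ngram_counts[prefix + (word,)] / prefix_counts[prefix]

corpus = ["the students opened their books".split(),
          "the students opened their exams".split()]
print(mle_probability(corpus, ("students", "opened", "their"), "books"))  # 0.5
```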
Let’s learn a 4-gram language model for the example,
As the proctor started the clock, the students opened their _____
In learning a 4-gram language model, the next word (the word that fills up the blank) depends only on the previous 3 words. If w is the word that goes into the blank, then we compute the conditional probability of the word w as follows:
Counting number of occurrences (Image Source)
In the above example, let us say we have the following:
"students opened their" occurred 1000 times "students opened their books" occurred 400 times
-> P(books/students opened their) = 0.4 "students opened their exams" occurred 200 times
-> P(exams/students opened their) = 0.2
The language model would predict the word books.
But given the context, is books really the right choice? Wouldn’t the word exams be a better fit?
Recall that we have,
As the proctor started the clock, the students opened their _____
Should we really have discarded the context ‘proctor’?🤔 Looks like we shouldn’t have.
This leads us to understand some of the problems associated with n-grams.
Disadvantages of the n-gram language model
Problems of Sparsity
What if “students opened their” never occurred in the corpus? The count term in the denominator would go to zero!
If the (n-1) gram never occurred in the corpus, then we cannot compute the probabilities. In that case, we may have to revert to using “opened their” instead of “students opened their”, and this strategy is called back-off.
What if “students opened their w” never occurred in the corpus? The count term in the numerator would be zero!
If word w never appeared after the (n-1)-gram, then we may have to add a small factor delta to the count of every word in the vocabulary V. This is called ‘smoothing’.
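A sketch of how add-delta smoothing changes the estimate; the delta and vocabulary size below are illustrative values, not from the article:

```python
# Every word in the vocabulary V receives a small pseudo-count delta,
# so unseen n-grams no longer end up with zero probability.
def smoothed_probability(ngram_count, prefix_count, vocab_size, delta=0.01):
    return (ngram_count + delta) / (prefix_count + delta * vocab_size)

# An n-gram that never occurred still gets a tiny probability:
print(smoothed_probability(0, 1000, vocab_size=50_000))  # ~6.7e-06
# Back-off, by contrast, re-estimates from the shorter context,
# e.g. falls back to "opened their" when "students opened their" is unseen.
```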
The sparsity problem increases with increasing n. In practice, n is typically not greater than 5.
Problem of Storage
As we need to store counts for all possible n-grams in the corpus, increasing n or increasing the size of the corpus both make the model storage-inefficient.
However, n-gram language models can also be used for text generation; a tutorial on generating text using such n-grams can be found in reference[2] given below.
In the next blog post, we shall see how Recurrent Neural Networks (RNNs) can be used to address some of the disadvantages of the n-gram language model.
Happy new year everyone! ✨ Wishing all of you a great year ahead! 🎉🎊🥳
References
[1] CS224n: Natural Language Processing with Deep Learning
[2] NLP for Hackers
|
https://medium.com/towards-artificial-intelligence/how-do-language-models-predict-the-next-word-66e0a705583e
|
['Bala Priya C']
|
2020-12-28 13:17:28.960000+00:00
|
['NLP', 'Artificial Intelligence', 'Machine Learning', 'Language']
|
|
4,967 |
SwiftUI Tutorial — Lists and Navigation
|
List
We’re first going to start by adding a struct called EmojiItem that will represent each of the emoji in our List:
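The embedded snippet is not reproduced in this version of the article; a minimal sketch of what such a struct might look like (the property names are my assumption):

```swift
import SwiftUI

// One row of data for the list. Identifiable lets List iterate over it.
struct EmojiItem: Identifiable {
    let id = UUID()
    let name: String
    let emoji: String
}
```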
Now, we’re going to define an array of EmojiItem that we will display in the List. You can add this array inside the ContentView struct.
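A hedged example of what that array could contain; the actual entries in the tutorial may differ:

```swift
let emojiList = [
    EmojiItem(name: "Grinning Face", emoji: "😀"),
    EmojiItem(name: "Rocket", emoji: "🚀"),
    EmojiItem(name: "Red Heart", emoji: "❤️")
]
```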
Now, we’re going to define a struct that we will use to display the emoji in a circle:
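Again a sketch, since the original snippet isn’t shown here; this draws the emoji inside a circular border:

```swift
struct EmojiCircleView: View {
    let emoji: String

    var body: some View {
        Text(emoji)
            .font(.largeTitle)
            .padding()
            .overlay(Circle().stroke(Color.gray, lineWidth: 2))
    }
}
```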
Finally, we can go to the body section of our ContentView to work on our app’s UI. We’re going to embed all the content of the app inside a NavigationView; inside it, we will add a List view containing the emojiList we defined earlier. We’re going to embed each EmojiItem inside an HStack that will show the emoji inside an EmojiCircleView, and the emoji’s name in a Text view.
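Putting it together, the body might look roughly like this (the navigation title is an assumption on my part):

```swift
// Inside ContentView, alongside the emojiList array defined earlier:
var body: some View {
    NavigationView {
        List(emojiList) { item in
            HStack {
                EmojiCircleView(emoji: item.emoji)
                Text(item.name)
            }
        }
        .navigationBarTitle("Emoji")
    }
}
```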
If we run our app now, we should be able to see the list of emoji, but nothing happens when tapping on a row in the list… yet.
|
https://medium.com/swlh/swiftui-tutorial-lists-and-navigation-16e1b4dbb98b
|
['Ale Patrón']
|
2020-10-14 19:31:35.503000+00:00
|
['Programming', 'Software Engineering', 'Xcode', 'iOS', 'Swift']
|
|
4,968 |
Graph Data Modelling in Java
|
Modelling data is a crucial aspect of software engineering. Choosing appropriate data structures or databases is fundamental to the success of an application or a service.
In this article, I will discuss some techniques related to modelling data domains with graphs. In particular, I will show how labelled property graphs, and graph databases, can be an effective solution to some of the challenges we sometimes encounter with other models, such as relational databases, when we deal with highly-connected data.
By the end of this article we will have a simple — but fully functional — implementation of an in-memory labelled-property graph in Java. We’ll use this graph to run some queries on a sample dataset.
All the code presented here can be found on GitHub.
Sample domain: book reviews
Before writing this article, I headed over to kaggle.com and browsed through some of the data sets available there. Eventually, I picked this book review data set which we’ll use as a running example throughout this article.
This data set contains the following CSV files:
BX-Users.csv, with anonymised user data. Each user has a unique id, location and age.
BX-Books.csv, which contains each book’s ISBN, title, author, publisher and year of publication. This file also contains links to thumbnail pictures, but we won’t use those here.
BX-Book-Ratings.csv, which contains a row for each book review.
If we picture this data set as an entity-relationship (ER) diagram, this is what it would look like:
Some fields are denormalized, e.g. Author, Publisher, and Location. While this might or might not be what we eventually want (depending on the performance characteristics of certain queries), at this point let’s assume we want our data in normalized form. After normalization, this is our updated ER diagram:
Note that we split Location into 3 tables (City, State, and Country) because the original field contained strings such as "San Francisco, California, USA".
Now, suppose we store this data into a relational database. As a thought exercise, how easily can we express these two queries with SQL code?
Query 1: get the average rating for each author.
Query 2: get all books reviewed by users from a specific country.
If you’re familiar with SQL, you’re probably thinking joins: whenever we have relations that span multiple tables, we typically navigate the relations by joining pairs of tables via primary and foreign keys.
However, both queries involve a non-trivial amount of joins — especially the second one. This is not necessarily a bad thing — relational databases can be quite good at executing joins efficiently — but it might lead to SQL code that is difficult to read, maintain, and optimize.
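For instance, under the normalized schema above, Query 1 might look roughly like this (the table and column names are assumptions); Query 2 would additionally have to join the User, City, State, and Country tables:

```sql
-- Average rating per author: already two joins in the normalized schema.
SELECT a.name, AVG(r.rating) AS avg_rating
FROM Author a
JOIN Book b ON b.author_id = a.id
JOIN BookRating r ON r.book_isbn = b.isbn
GROUP BY a.name;
```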
Moreover, in order to navigate many-to-many relationships, we had to create join tables (e.g. BookRating or BookPublished). This is a common pattern which, however, adds some complexity to the overall schema.
We could denormalize some of the data, but this would have the side-effect of locking us into a specific view of our data and causing our model to be less flexible.
Domain modelling with Java classes
Now let’s suppose we want to store our entire data set in memory. The sample data set that we got from Kaggle contains less than a million entries, so it will easily fit in memory.
Storing our data in memory is a trivial but perfectly valid approach, especially if we want to support a read-only workload and we don’t need to guarantee write consistency and persistence. If we need to offer those guarantees, we can still store the data in memory, but we’ll probably need to ensure that our data structures are thread-safe and that we somehow persist the changes to non-volatile memory.
We’ll start with some classes, for example:
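The embedded snippets are not reproduced in this version of the article; a minimal sketch of what these entity classes might look like (the fields follow the ER diagram, the exact code is on GitHub):

```java
// Plain entity classes, before we decide how to connect them.
class Book {
    private final String isbn;
    private final String title;

    Book(String isbn, String title) {
        this.isbn = isbn;
        this.title = title;
    }
}

class Author {
    private final String name;

    Author(String name) {
        this.name = name;
    }
}
```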
Soon, though, we are confronted with a question: how do we link these classes together? How do we establish relationships between books and authors?
In the case of Book and Author, we might consider that a book has one author (let’s keep things simple and suppose each book has only one author), and use a direct reference:
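A sketch under the same assumptions:

```java
class Book {
    private final String isbn;
    private final String title;
    private final Author author;  // direct reference from book to author

    Book(String isbn, String title, Author author) {
        this.isbn = isbn;
        this.title = title;
        this.author = author;
    }

    Author getAuthor() {
        return author;
    }
}
```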
This makes it trivial to get the author of a book. However, the reverse (getting all books written by an author) is more expensive, because we need to scan the entire list of books. For example, if we’re looking for all books by Dan Brown, we need to write something like this:
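Something like the following, assuming an allBooks list and a getName() getter on Author (both illustrative):

```java
// Full scan: every book must be inspected to find one author's works.
List<Book> danBrownBooks = new ArrayList<>();
for (Book book : allBooks) {
    if (book.getAuthor().getName().equals("Dan Brown")) {
        danBrownBooks.add(book);
    }
}
```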
When we have many (e.g. millions) books, this will perform poorly. We could store references to books in the Author class itself:
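For instance (again a sketch, not the exact code from the repository):

```java
class Author {
    private final String name;
    private final List<Book> books = new ArrayList<>();  // back-references

    Author(String name) {
        this.name = name;
    }

    String getName() {
        return name;
    }

    void addBook(Book book) {
        books.add(book);  // must be kept in sync with Book.author
    }

    List<Book> getBooks() {
        return books;
    }
}
```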
However, this solution doesn’t “feel” good. Every time we want to change a book’s author, we have two places to change.
Besides, how should we handle relationships that contain extra data? For example, the relationship between a book and its publisher contains a piece of extra data, i.e. the year of publication. It’s not clear where we should put this field: should we declare it in the Book class? Then how do we handle cases when a book has multiple publishers and/or years of publication? Should we create a new class (e.g. BookPublished), essentially adopting the join table pattern of relational databases?
Labelled property graphs
I would argue that both the relational and the Java reference models share a common trait: they represent relationships as entity data.
In both cases, not only does an entity contain attributes about itself (e.g. a Book table contains its title and ISBN code) but it also contains data about how it's connected to other entities.
This entity-centric (or table-centric) model, which is adopted by RDBMSs, has been traditionally successful thanks to its efficiency in storing and retrieving huge amounts of entities. If our data has a lot of loosely-connected entities, a table-centric model works well. However, when our data has a lot of relationships, representing them as data might result in more complex queries and, overall, in a less flexible data model.
Graph models adopt a different approach: in a graph, relationships are modelled explicitly and are treated as first-class citizens of the data model, just like entities.
You may recall from mathematics that a graph is a collection of nodes (also known as vertices) and edges (sometimes called relationships). Each node stores some data, and each edge connects two nodes. Here’s a picture of a sample graph with 6 nodes and 6 edges:
On top of this, a labelled property graph model adds a few extra features:
Each node and each edge have a label that identifies their role in the data model.
Each node and each edge store a set of key-value properties.
Each edge has a direction, i.e. it is a directed graph (as opposed to undirected graphs, where edges don’t have directions).
This is essentially the model adopted by production graph databases such as Neo4j and Titan.
So how can we model the book review domain as a labelled property graph? Here’s my first take at it:
Nodes have labels (e.g. the Book node is labelled "Book") and some properties: for example, the Book node has two properties, isbn and title. These properties correspond to entity attributes in our ER diagram.
Edges have labels too (e.g. the edge connecting User and Book is labelled "Reviewed"). Some edges also have properties: for example, the Reviewed edge has a rating property, which stores the rating a user gave to a book, and the Published by edge has a year property, i.e. the year the book was published. Other edges don't have properties: for example, the In City edge, which connects a user to the city where they live, has none, because we don't need to store extra data on that relationship.
The picture above represents the schema of our graph model. When we create a graph instance and store some data in it, here is how it could be pictured (this is just a subset of the entire graph):
This could have easily been drawn on a whiteboard. In fact, when it comes to connected data, adopting a graph model is arguably quite natural and intuitive.
For more information about labelled property graphs, and how to model data with them, I recommend checking out Neo4j’s Graph Modeling Guidelines or Kelvin Lawrence’s free book on Apache TinkerPop’s Gremlin.
Implementing a labelled property graph in Java
Let’s see how we can implement a labelled property graph in Java. All the code in this section can be found in this repository on GitHub (the repository also contains code to parse the CSV files from Kaggle.)
We’ll start by defining the Node and Edge classes. We'll also define a common superclass, GraphElement, which represents elements that have a label and properties.
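A sketch of those three classes, consistent with the description above (the code in the linked repository may differ in detail):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Common superclass: every node and edge has a label and key-value properties.
abstract class GraphElement {
    final String label;
    final Map<String, Object> properties = new HashMap<>();

    GraphElement(String label) {
        this.label = label;
    }
}

// A node has an id and keeps track of the edges that leave and enter it.
class Node extends GraphElement {
    final String id;
    final List<Edge> outgoingEdges = new ArrayList<>();
    final List<Edge> incomingEdges = new ArrayList<>();

    Node(String id, String label) {
        super(label);
        this.id = id;
    }
}

// A directed edge from a source node to a target node.
class Edge extends GraphElement {
    final Node source;
    final Node target;

    Edge(String label, Node source, Node target) {
        super(label);
        this.source = source;
        this.target = target;
    }
}
```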
Nothing surprising here, except maybe those outgoingEdges and incomingEdges fields in the Node class. This is essentially how we connect nodes and edges together, and how we'll navigate the graph to extract meaningful data (we'll see that soon.)
I chose to represent outgoingEdges and incomingEdges as lists, but these might as well be sets (e.g. hash sets or tree sets) or other structures. The choice depends on a number of factors (e.g. whether we need to guarantee uniqueness of each edge.) However, these considerations are beyond the scope of this article; if you are looking for efficient in-memory graph databases, you might want to consider products such as Memgraph or Neo4j embedded. For this example I decided to keep things simple and use plain array lists.
Also, each node has an id field. Unsurprisingly, the primary function of this field is to guarantee uniqueness of each node.
Next, we’ll define the Graph class which exposes methods for creating nodes and edges:
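Something along these lines (method names are assumptions, chosen to match the queries below):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Graph {
    private final Map<String, Node> nodeIdToNode = new HashMap<>();
    private final Map<String, List<Node>> nodeLabelToNode = new HashMap<>();

    Node addNode(String id, String label) {
        Node node = new Node(id, label);
        nodeIdToNode.put(id, node);
        nodeLabelToNode.computeIfAbsent(label, l -> new ArrayList<>()).add(node);
        return node;
    }

    Edge addEdge(String label, Node source, Node target) {
        Edge edge = new Edge(label, source, target);
        source.outgoingEdges.add(edge);
        target.incomingEdges.add(edge);
        return edge;
    }

    Node getNodeById(String id) {
        return nodeIdToNode.get(id);
    }

    List<Node> getNodesByLabel(String label) {
        return nodeLabelToNode.getOrDefault(label, new ArrayList<>());
    }
}
```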
The two maps nodeIdToNode and nodeLabelToNode allow us to retrieve nodes by their id and label, respectively. This will become especially useful when we start writing queries.
We define a BookReviewGraph class as a subclass of Graph:
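A sketch of the subclass; the label strings and helper methods are assumptions consistent with the schema described earlier:

```java
// Domain-specific wrapper that knows how to build book-review data.
class BookReviewGraph extends Graph {

    Node addAuthor(String authorName) {
        return addNode("author-" + authorName, "Author");
    }

    Node addBook(String isbn, String title) {
        Node book = addNode("book-" + isbn, "Book");
        book.properties.put("isbn", isbn);
        book.properties.put("title", title);
        return book;
    }

    void addWrittenBy(Node book, Node author) {
        addEdge("Written by", book, author);
    }

    void addReview(Node user, Node book, int rating) {
        Edge reviewed = addEdge("Reviewed", user, book);
        reviewed.properties.put("rating", rating);
    }
}
```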
We don’t use randomly generated ids (e.g. UUIDs); instead, we repurpose entity data as ids. This has a couple of advantages:
it’s easier to debug, and
it’s easier to query (if we’re looking for books by author, we can just get the node with id = "author-" + authorName).
Sample queries
Now that we have a basic graph implementation for our book review domain, let’s see how we can implement the two queries we introduced above:
Query 1: get the average rating for each author.
Query 2: get all books reviewed by users from a specific country.
The pattern for both queries will be the same: starting with a node (or a set of nodes), we navigate through edges and nodes, until we reach the data that we want to return.
In general, querying a graph means extracting a subgraph, i.e. a specific pattern of nodes and edges, that yields the desired answer.
Query 1: average rating for each author
Let’s start by writing a query that returns the average rating for a specific author.
The query will traverse a section of the graph in order to extract the data we want (average rating for each author.) We can break it down into 4 steps:
1. Start from the node corresponding to the author we are interested in.
2. Navigate through the "Written by" edges that point to the author node.
3. Get the source nodes of the "Written by" edges. These will be book nodes.
4. Navigate through the "Reviewed" edges that point to the book nodes and extract the rating property from these edges.
As a picture:
In code:
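A sketch of the traversal, with the four steps marked (label and property names follow the schema assumed above, not necessarily the repository code):

```java
// Average rating across all books written by one author.
static double averageRatingForAuthor(Graph graph, String authorName) {
    Node author = graph.getNodeById("author-" + authorName);    // step 1
    return author.incomingEdges.stream()
            .filter(e -> e.label.equals("Written by"))          // step 2
            .map(e -> e.source)                                 // step 3: book nodes
            .flatMap(book -> book.incomingEdges.stream())
            .filter(e -> e.label.equals("Reviewed"))            // step 4
            .mapToInt(e -> (int) e.properties.get("rating"))    // extract ratings
            .average()
            .orElse(Double.NaN);
}
```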
Using this query, we can easily find the average rating for each author:
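For example, by iterating over every author node (getNodesByLabel is the assumed lookup from the Graph sketch above):

```java
// Print the average rating for each author in the graph.
for (Node author : graph.getNodesByLabel("Author")) {
    String name = author.id.substring("author-".length());
    System.out.println(name + " " + averageRatingForAuthor(graph, name));
}
```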
Here’s the output of the query with the data set we got from Kaggle:
Gustavs Miller 10.0
World Wildlife Fund 10.0
Alessandra Redies 10.0
Christopher J. Cramer 10.0
Ken Carlton 10.0
Jane Dunn 10.0
Michael Easton 10.0
Howard Zehr 10.0
ROGER MACBRIDE ALLEN 10.0
Michael Clark 10.0
...
Query 2: books reviewed by users from a specific country
This query adopts the same pattern as the previous one. We start with country nodes and traverse the graph until we get to book nodes, then we collect book titles:
In code:
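A sketch of the traversal; the location edge labels ("In Country", "In State", "In City") are assumptions, since the schema picture is not reproduced here:

```java
import java.util.Set;
import java.util.stream.Collectors;

// Titles of all books reviewed by users living in the given country.
static Set<String> booksReviewedInCountry(Graph graph, String country) {
    Node countryNode = graph.getNodeById("country-" + country);
    return countryNode.incomingEdges.stream()
            .filter(e -> e.label.equals("In Country"))
            .map(e -> e.source)                                   // state nodes
            .flatMap(state -> state.incomingEdges.stream())
            .filter(e -> e.label.equals("In State"))
            .map(e -> e.source)                                   // city nodes
            .flatMap(city -> city.incomingEdges.stream())
            .filter(e -> e.label.equals("In City"))
            .map(e -> e.source)                                   // user nodes
            .flatMap(user -> user.outgoingEdges.stream())
            .filter(e -> e.label.equals("Reviewed"))
            .map(e -> (String) e.target.properties.get("title"))  // book titles
            .collect(Collectors.toSet());
}
```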
Here’s the output we get with the Kaggle data set, when we query for books reviewed in Italy:
La Tregua
What She Saw
Diario di un anarchico foggiano (Le formiche)
Kept Woman OME
Always the Bridesmaid
A Box of Unfortunate Events: The Bad Beginning/The Reptile Room/The Wide Window/The Miserable Mill (A Series of Unfortunate Events)
Aui, Language of Space: Logos of Love, Pentecostal Peace, and Health Thru Harmony, Creation and Truth
Pet Sematary
Maria (Letteratura)
Potemkin Cola (Ossigeno)
...
Improving performance with indices
In both queries above, the starting point into the graph is a single, specific node, which we get via the graph.getNodeById() method. This works when we know the id of the starting node.
However, in many cases we might not have a starting node — instead, we might need to start with a set of nodes.
For example, suppose we want to code the following query:
Query 3: given a book title, get the average age of users who reviewed a book with that title.
Our starting set of nodes is made up of the books with the given title. We could go through each book node and filter out books with a non-matching title:
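A sketch of that linear scan (title is the title we are searching for):

```java
// Full scan: inspect every "Book" node and keep those whose title matches.
List<Node> booksWithTitle = new ArrayList<>();
for (Node book : graph.getNodesByLabel("Book")) {
    if (title.equals(book.properties.get("title"))) {
        booksWithTitle.add(book);
    }
}
```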
This is essentially what databases refer to as a full table scan: in order to find the books we want, we have to go through all of them sequentially. The cost of this operation is linear in the number of books, so this can potentially take a long time.
In almost all cases (except maybe when the sequence is very short) we can improve performance by using a map (essentially a simplified version of an index in a database).
First, we create a map (booksByTitleIndex) which we update every time we add a new book:
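Inside BookReviewGraph, maintaining the index could look like this (a sketch extending the earlier class; getBooksByTitle is an assumed accessor):

```java
// Index from title to book nodes, kept up to date on every insert.
private final Map<String, List<Node>> booksByTitleIndex = new HashMap<>();

Node addBook(String isbn, String title) {
    Node book = addNode("book-" + isbn, "Book");
    book.properties.put("isbn", isbn);
    book.properties.put("title", title);
    booksByTitleIndex.computeIfAbsent(title, t -> new ArrayList<>()).add(book);
    return book;
}

List<Node> getBooksByTitle(String title) {
    return booksByTitleIndex.getOrDefault(title, new ArrayList<>());
}
```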
Now we can use this index at the beginning of our query:
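Query 3 as a sketch, starting from the index lookup instead of a full scan:

```java
// Average age of users who reviewed any book with the given title.
// (Assumes user nodes carry an "age" property.)
static double averageAgeOfReviewers(BookReviewGraph graph, String title) {
    return graph.getBooksByTitle(title).stream()       // O(1) index lookup
            .flatMap(book -> book.incomingEdges.stream())
            .filter(e -> e.label.equals("Reviewed"))
            .map(e -> e.source)                         // user nodes
            .mapToInt(u -> (int) u.properties.get("age"))
            .average()
            .orElse(Double.NaN);
}
```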
When we run this query on the Kaggle data set, here’s the output we get with title “Dracula”:
30.338983050847457
A HashMap is a good choice when we want to find exact matches on a certain property, because the average cost of finding exact matches is constant (O(1)).
When we want to find entities based on the total order of a certain property (e.g. get all users whose age is above 25), we can use a different data structure, e.g. a TreeMap, where the cost of finding elements is logarithmic.
If we wanted to find elements by prefix, a Trie would be the right choice to implement an index.
Conclusion
In this article we’ve seen how we can model a data domain using graphs, and how to implement a simple labelled property graph in Java. The code is quite basic and lacks error handling and thread-safety, but it’s a starting point to understand and apply graph modelling to data domains.
In the past year, I have worked on a project that involved the analysis of high volumes of interconnected data. Modelling this data as graphs has given my team and me an intuitive understanding of how the data is connected: transitioning from whiteboard brainstorming to data model implementation was a straightforward process. Initially we were unsure how the data would eventually be queried; adopting a graph model has allowed us to keep our model flexible and easy to query throughout the project.
The last decade has seen a growing interest in graph databases in the software community. Most of the technology powering these databases is not really new; what’s new is the fact that we now have a lot of interconnected data, and we want to query this data based on how elements are connected together, rather than just the values they hold.
|
https://medium.com/swlh/graph-data-modelling-in-java-dc815a0b8b24
|
['Alberto Venturini']
|
2020-09-22 10:31:53.804000+00:00
|
['Data Modeling', 'Java', 'Neo4j', 'Graph Database', 'Graph']
|
4,969 |
Are America’s Cultural Struggles the Result of Risk-Averse Incrementalism?
|
America is at war with itself — but are racial and political divisions really to blame, or is something deeper fueling our discontent? We’re joined by Dr. John Parmentola, Adjunct Staff Member at the RAND Corporation, to discuss his upcoming book, “Creating Wealth from Worthless Things”, which argues that a risk-averse focus on efficiency & incrementalism has hurt the economy, reduced individual opportunities, and widened the wealth gap underlying today’s social conflicts.
Welcome, John! Right now, there’s a lot of social unrest in America, with increasing social fragmentation along racial & political fault-lines, a rise of interest in socialism, and growing distrust of the financial system. How are diminishing opportunities driving these channels of unrest?
When opportunities are limited for quality jobs and social pathways to make a living, people will seek other ways to fulfill their needs. It’s fundamentally about survival, but it doesn’t always express itself that way — especially when it’s politically expedient to divide people into victim groups because of economic strife and despair.
Dr. John Parmentola, Adjunct Staff Member at the RAND Corporation. (Website)
People in difficult situations are highly susceptible to manipulation, and blaming others for their problems is an easy way to capture votes. However, this political strategy doesn’t solve the serious problem of economic strife and doesn’t help the over 40 million people in this country who would like to get out of poverty.
This problem doesn’t depend on your color, ethnicity, creed, gender, or sexual orientation. We need to provide these people with new social pathways that will enable them to improve their quality of life. Socialism & communism promise to do this and they’re easy to sell because they scapegoat the rich, but history tells us these models fail to fulfill vital human needs.
None of the issues we’re discussing are new — so what’s made 2020 such a “perfect storm” for violence & social unrest? It seems like COVID-19 and the quarantine sparked it — what’s driving it now?
The COVID-19 crisis has amplified the number of people in desperate situations and essentially flattened the economic strata of our country. The pandemic has pushed many people into desperation, and George Floyd was the spark that set everything ablaze. It was a tipping point for many people.
People are upset, and rightfully so — but this situation highlights symptoms of a much deeper problem that shouldn’t be overlooked. The virus will cause jobs to be eliminated because businesses will seize the opportunity to replace workers with technology. Businesses will strive to survive by reducing costs and becoming more efficient — but can workers be retrained, and can they adapt to this changing environment? That remains to be seen.
US National debt as a percentage of GDP is at record levels. (PGPF)
Whether government or private, the financial system is driven today by short-term interests; however, the deeper problem we face is a long-term one. Over the last 40 years, the government has accelerated its borrowing from the future to pay for immediate needs for political reasons.
This strategy cannot create the future, as it is completely focused on today’s problems. As for the private sector, it is primarily driven by quarterly returns to satisfy its stakeholders and investors’ financial interests. The private sector is dominated by efficiency and incremental improvements that are low-risk.
People should be angry about being left behind, but the blame is misplaced. The government has failed to provide them with quality education and investments in human potential to expand the frontiers of knowledge in science, engineering, and mathematics. What is required is a strategy that has the potential to expand human imagination as to what is possible, feasible and practical.
This strategy can create new opportunities for quality jobs, new industries, new products and services, and economic growth. Instead, what we see are empty, quick-fix promises that may sound comforting, but won’t make a substantial difference in the lives of the people and their families. It’s just the same old political dogma.
A protestor carries a flag upside down, a sign of distress, in Minneapolis in May 2020. (AP News)
Would it be accurate to say that when opportunities are reduced for people to make a decent life for themselves and their families, then people on the bottom of the socio-economic pyramid get hurt first? Are these protests a canary in a coal mine, so to speak?
This problem has been building up for some time now. We went from a single earner who could work hard enough to raise a family and realize the American dream to multiple earners with multiple jobs who struggle to accomplish the same thing. The quality of available jobs has declined substantially.
Meanwhile, our education system’s quality has also steadily declined since around 1980, especially in critical areas of the future, such as science, technology, engineering, and mathematics. There are not enough highly qualified teachers to prepare our precious youth for a highly competitive world where China and India are able to outproduce us in terms of graduates. Is it any surprise we’re falling behind in business?
This situation didn’t happen overnight — it’s the result of persistent decay that started nearly 40 years ago, and it has changed our national culture from one capable of risk-taking and bold steps to one of risk avoidance and incrementalism. This isn’t how the U.S. became the world economic and military power after WWII.
Over 40 years, we’ve reduced our investment in human potential and in pushing out the frontiers of knowledge like we did after WWII. Those long-term investments after WWII produced an explosion of discoveries and science-inspired inventions that the world wanted and admired. Every invention we hold dear today came from those investments. We still depend on and improve upon all these inventions to this day. They are a consequence of the digital, communications, and biotechnology revolutions that were all inspired by scientific discoveries.
Government R&D investment in basic research created new industries after WWII. (website)
Let’s talk about Summers and Bernanke’s secular stagnation, where savings exceed investment, which leads to a chronic lack of demand. This is claimed to be a reason that the GDP growth rate has declined since 1980, and is said to be exacerbated by income & wealth inequality. What can you tell me about that?
I’m not an economist, but I’m aware of three contributing trends to the systematic decline in the GDP growth rate since 1980. First, corporate America has been focused on efficiency, which has driven down the costs and prices for goods and services, and lowered the GDP growth rate by reducing the monetary value of all goods and services sold over time.
Another trend is a shift from wealth creation to wealth transfer, which happens as the private sector replaces legacy industries with more efficient ones that do the same thing in different ways.
Finally, since 1980 we have seen increasing market competition from countries like Japan, Germany, China, and South Korea that has limited growth. Historically, from 1950 to 1980, the annual GDP growth rate exceeded 5% about 20 times.
The US GDP growth rate has declined by half since 1950. (Tradingeconomics)
During this period, the U.S. produced many unique things the world wanted, and we had little competition. Since 1980, the annual GDP growth rate has exceeded 5% only twice. Increasing global competition, combined with a focus on efficiency, has reduced the scarcity of goods and lowered prices, negatively impacting GDP.
So, if efficiency and competition are stifling growth, the question is how to foster it? The answer is by investing in R&D at the levels after WWII to acquire new knowledge about how the natural world works. The discoveries that come from this investment expand human imagination through education, which inspires the creation of unique products and services for consumption. Fundamentally, the value of unique knowledge is very important to our nation’s economy by allowing the creation of entirely new industries that foster long-term economic growth and the creation of quality jobs.
Now, does decreasing wealth inequality play a role in increasing demand? Sure, but if we took the wealth of the top 20% of U.S. households and redistributed it evenly, it would temporarily increase spending, but that will not sustain long-term economic growth. Corporate wealth-transfer through higher efficiencies won’t stimulate growth, and neither will private wealth-transfer through tax redistribution.
So, I don’t see wealth transfer as a path to solving the problem of opportunity shortage. It would be a short-term fix to disparities in wealth, but the longer-term economic growth issue would still exist. In my view, “we the people”, working through the government, must do what the private sector cannot. We need to overcome short-term risk-averse thinking and invest in basic R&D to expand human imagination to create opportunities for growth. In my judgment, the current path we are on will likely lead us to stagnation and more social strife.
Federal R&D funds have been replaced by risk-averse private sector funding. (ITIF)
In your upcoming book, “Creating Wealth from Worthless Things,” you explore the creation of wealth through the expansion of human knowledge in contrast to optimizing the efficiency of production processes and incrementally improving products and services. Can you tell me a bit more about the key topics you cover in it?
The cause of economic growth has been an unresolved mystery pursued by leading economists for nearly two hundred and fifty years. Creating Wealth from Worthless Things is groundbreaking in that it identifies the actual cause of economic growth and wealth creation.
The book accomplishes this by describing the history of unexpected discoveries that inspired each of the six extraordinary technological revolutions that shaped the modern world’s creation. These events led to the creation of new industries, enormous numbers of jobs, and unique products and services that improved the entire world’s quality of life.
The discoveries I’m describing came from accumulating rules of how the natural world works, which in themselves had no commercial value — in other words, they were “worthless things”. The vast majority of them were funded through risk-taking by governments and wealthy patrons, and enabled the British Empire to become dominant by 1900 and made the U.S. the world-leading economic and military power after WWII.
Unfortunately, over the last 50 years, our culture has changed from one that was willing to take risks and bold steps to one that now embraces risk aversion and incrementalism. In the book, I explain the social and political forces that have caused this cultural transformation, and how this transition limits our future growth and increases the likelihood that China will replace us as a world leader.
This book concludes with a plan to avoid these potentially grave outcomes and explains the steps required to advance our economic growth. I also describe a new and unexpected technological revolution that will create new opportunities for a prosperous future for all humankind.
John Parmentola’s upcoming book, “Creating Wealth from Worthless Things”. (website)
Federal R&D spending has been on the decline since the 1960s, hasn’t it? Does that also apply to industries like computing, or is that highly successful market an exception that proves the rule?
The outcome of high-risk R&D is uncertain, which has led the private sector to focus on low-risk development commonly called innovation. This is a process of adapting inventions to an application in the form of marketable products or services. In contrast, R&D, especially research, is about the discovery of new rules of how nature works, which leads to massive breakthroughs but has a high degree of risk.
Since 1965, U.S. federal R&D funding as a percentage of G.D.P. has declined by 65%. During this period, private sector R&D increased by a factor of about 3 compared to federal R&D; however, unlike federal R&D, private sector R&D is primarily low-risk D to support existing product lines and services. What has been lost in this significant decline of federal R&D is opportunities for creating new pathways to economic growth.
Federal spending on basic vs. applied research development. (SSTI)
To appreciate this, imagine that the rules of chemical reactions and the discovery of the 98 fundamental elements of the periodic table never happened. Would the worldwide chemical industry today be generating $5 trillion per year and impacting the numerous industries that depend on it? I do not think so.
In regard to computing, except for the pursuit of quantum computing, the private sector has been squeezing out as much performance as they can out of well-understood materials and designs, but like other technologies and industries, they are reaching fundamental performance limitations.
Computing innovation continues with new approaches like A.I. and Machine Learning, which exploit familiar technology for new applications, but physics will eventually limit these as well unless something unexpected comes along.
In 2019, Parmentola spoke on “Creating Wealth from Worthless Things” at MIT. (Watch it on YouTube)
It sounds like you’re saying that slowing high-risk R&D and a focus on efficiency are hurting long-term growth and the availability of jobs. Do you view these trends as “putting the squeeze” on opportunities in America and fueling the social unrest that we’re seeing in the news?
Increasing efficiency and incremental improvements in things can produce valuable short-term results — the question is how long we can run our society on a model of incremental improvement, especially when increasing efficiency typically results in the net loss of jobs.
Moore’s Law is one example of incrementalism that may not last. (InfoQ)
Historically, the incremental improvement of technology based on current knowledge and available materials always has a finite lifetime. Performance eventually runs out of headroom, and incremental improvements eventually become too small and costly to pass on to consumers.
Many major technologies produced by the commercial sector and defense industry are reaching performance limits in such areas as transportation, computing, communications, energy production and storage, and defense technologies. Unless they overcome formidable performance barriers, we are headed toward stagnation in technology performance.
However, rather than focus on discovering unique materials and new knowledge to overcome performance barriers, companies in today’s risk-averse world have dedicated themselves to process improvement to increase efficiency. This involves incremental improvements in logistics or fulfillment and business processes. That’s not a revolution, it’s a path to stagnation, and one that eliminates jobs and removes social opportunities for people to achieve success in life.
Ultimately, the question of R&D spending leads us back to the topic of creating quality jobs and social pathways for people to realize the American dream, and it leads us to a future with reduced social tension between the "haves" and the "have-nots." New knowledge inspires new inventions, industries, and jobs — and the more of it there is, the less tension and strife there will be.
About Our Guest
Dr. John Parmentola has built a highly distinguished career over four decades as a scientist, teacher, entrepreneur, inventor, innovator, a pioneer in founding new research fields, an international human rights activist, and a leader of complex research and development organizations with broad experience in the private sector, academia and high-level positions within the federal government and defense community.
He received the 2007 Presidential Rank Award for Meritorious Executive from President George W. Bush for his service to the Department of the Army. Dr. Parmentola was also an Air Force Intelligence Agency nominee for the 1996 R. V. Jones Award of the Central Intelligence Agency for his work in arms control verification, and a recipient of the Outstanding Civilian Service Award and the Superior Civilian Service Award for his many contributions to the U.S. Army. He is an Honorary Member of the U.S. Army STs, a Fellow of the American Association for the Advancement of Science, and a recipient of the U.S. Army 10 Greatest Inventions Award, the Alfred Raymond Prize, and the Sigma Xi Research Award. He has presented more than 500 speeches and published numerous scientific papers and articles on science and technology policy. He is also the author of an authoritative book on space defense.
Dr. Parmentola is currently a consultant to The RAND Corporation, where he works on defense, energy, and science and technology assessment, strategy, and planning issues for government agencies, both domestic and foreign. He also works on a volunteer basis for the National Academy of Sciences.
Before this role, Dr. Parmentola has served as a Senior Vice President at General Atomics, the Director for Research and Laboratory Management for the U.S. Army, Chief Scientist at the U.S. Department of Energy, Chief of Advanced Systems & Operations at the Defense Threat Reduction Agency.
Dr. Parmentola has a Ph.D. in Physics from M.I.T. and has served on the faculty of M.I.T., West Virginia University, and as a Fellow at the John F. Kennedy School of Government. Learn more about him on his website at: https://johnparmentola.com/
|
https://medium.com/predict/are-americas-cultural-struggles-the-result-of-risk-averse-incrementalism-5c9d6452f09c
|
['Tim Ventura']
|
2020-10-12 02:48:02.982000+00:00
|
['Innovation', 'America', 'Government', 'Technology', 'Science']
|
4,970 |
Enabling Data-Driven Decisions
|
The Financial Times has always relied on facts and data to deliver the highest-quality journalism to our readers. The data-driven culture has always been part of the company values. Therefore how we manage our data internally within the organization is very important to us.
Fundamental to this is having a dedicated central platform for telemetry and data management as part of our FT Core technology group. FT Core is part of FT Product & Technology and owns three of the central technology platforms, which power our customer-facing products spanning content publishing, content metadata, paywall, and analytics data.
My team is responsible for the Data Platform. This is the platform for telemetry and analytics data and our mission is to deliver reliable data with high quality in a timely manner to the internal users and teams at the FT to enable decision making and new product development.
We have a very big and diverse group of direct users and indirect consumers using and benefiting from the valuable data our platform collects and stores.
Financial Times board members — for strategic and tactical decision making.
Marketing teams — for campaign design and planning to acquire and retain subscribers.
Editorial teams — for monitoring the performance and the readership of the articles and content they produce.
Advertising teams — for identifying sell/target subscriber groups for different products.
Our Product teams — to design better products for the FT readers and drive personalization to help acquire and retain them.
Analytics, Business Intelligence, Contact Strategy, and Data Science teams are among the main direct users of the Data Platform, using the data to conduct analysis, build dashboards and reports, and train models that are then widely used across the FT.
Let’s review several use cases for the data delivered by the FT Core Data Platform.
Power up Analytics
At Financial Times we use a variety of business metrics to better understand the impact and the opportunities for future growth. Some of them are around engagement. It is absolutely essential to understand how engaged our readers are. We do that by using a metric called RFV (Recency, Frequency, and Volume).
And what we are looking at is for every single reader:
When did they last come to our site? — this is recency
How often do they come in any given time period? — this is frequency
How much content do our customers read when they come to our site? — this is volume
Based on RFV we can determine the score of engagement for every single reader.
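The exact FT formula is not described here, but a purely illustrative sketch of an RFV-style score, with hypothetical weights and normalisation, might look like this:

```java
// Illustrative only: combines recency, frequency, and volume into a
// single engagement score. The real FT formula and weights are not public.
static double rfvScore(int daysSinceLastVisit, int visitsLast90Days, int articlesRead) {
    double recency = 1.0 / (1 + daysSinceLastVisit);           // more recent -> higher
    double frequency = Math.min(visitsLast90Days / 90.0, 1.0); // capped at daily visits
    double volume = Math.min(articlesRead / 100.0, 1.0);       // capped at 100 articles
    return 100 * (recency + frequency + volume) / 3;           // hypothetical equal weights
}
```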
Power up Retention
As well as a user’s current level of engagement, it is useful to know who may be about to become engaged or disengaged. The Data Science team at the FT is developing and training RFV predictive models to identify individual and corporate subscribers who are likely to move from engaged to disengaged over the next 4 weeks, and vice versa. Based on the results provided by these propensity models, our Customer Care team is able to contact the readers who are likely to disengage and, in many cases, successfully retain them.
Envoy is an internal decision engine used across our customer-facing product line that helps us build smarter products. This engine uses the results from the same models to consistently target subscribers who are predicted to disengage, offering them personalized newsletters and predicting the next best action for them. This is another example of the data-driven culture at the FT, and the raw data comes entirely from the Data Platform.
Power up New Product development
FT.com is offering relevant content based on personal preferences
When new readers subscribe to FT.com, they provide information about the industry they are involved in. We store it as part of our arrangement models within the Abstraction Layer. It is further used to enable a variety of convenient features, such as personalized topic recommendations on the subscriber’s personal myFT feed page. This helps our readers by saving them time in finding relevant content based on their personal preferences.
Power up Internal Tools
The performance of the best read ever article at FT.com
Lantern is an internal monitoring tool powered by data from the Data Platform. It is an editorially focussed tool that provides analytics for the content the Financial Times is publishing. It is used by Editorial teams to monitor the performance of the content they are producing. The main metric used is Quality Reads, which is calculated within the Streams Layer of the Data Platform and provided for consumption to Lantern with minimal latency.
How do we support all these different use cases?
Streaming the data into the Data Lake and further flowing downstream to the consumers
We ingest clickstream data tracking usage of the websites and mobile apps for the digital versions of Financial Times titles. That data is streamed in near real time via the Data Platform Stream Layer for further processing. In addition to usage data from FT products, we also ingest data from internal and external vendors: for example, our internal platform managing subscriber membership, the platform for the content our journalists publish together with its metadata, and external systems like Zuora for payments and Salesforce for corporate subscriber contracts.
All the data is stored with minimal latency within the Data Platform Data Lake and further used for building the Abstraction Layer where we generate valuable data models providing insights to the business departments. Clear and well-established data contracts provide those insights to the Analytics Layer where multiple audited metrics are calculated and ready to be used by the internal business departments. The same insights feed the generation of a variety of data science models.
Democratizing the data
All the way from the Streams Layer through the Data Lake and the Abstraction Layer to the Analytics Layer, the Data Platform ensures the data is highly secure, validated, and of the highest quality. We ensure that the insights we generate do not reveal any personally identifiable information and are therefore ready to be democratized and widely used for decision making and driving growth.
To ensure data democratization we provide tools for better visibility and understanding of the data, along with self-service capabilities for easy data access and for building new data workflows easily. We are constantly extending the platform with new capabilities to meet the rising demand for data, to generate ever more valuable insights, and to power machine learning and the dashboards that underpin business intelligence.
How are we building the Data Platform?
The mechanics for governing the streams, the lake, and the flows
The FT Data Platform lives entirely in the cloud. We build it using AWS managed services by preference, as this significantly reduces operational cost.
Our big project now is to consolidate all our Stream Layer services to ingest new data via AWS MSK, with Spark jobs and Apache Airflow for workflow orchestration. Currently we are using Kinesis streams, SNS, and SQS, but our research shows that the newly proposed approach will give us better scalability and more effective cost management.
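For flavour, here is a minimal Apache Airflow sketch of the kind of orchestration described above, assuming the Airflow 2 Spark provider. The DAG id, schedule, connection, and application path are hypothetical placeholders, not the FT's actual pipeline.

from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Hypothetical hourly DAG: submit a Spark job that consumes clickstream
# events from Kafka (AWS MSK) and lands them in the Data Lake.
with DAG(
    dag_id="clickstream_ingest",                        # placeholder name
    start_date=datetime(2020, 5, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    ingest = SparkSubmitOperator(
        task_id="ingest_clickstream",
        application="/opt/jobs/ingest_clickstream.py",  # hypothetical Spark script
        conn_id="spark_default",
    )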
Our data lands and is stored in the Data Lake. We are now working on standardizing the formats by using Parquet in S3, where the data will be available for reading via Redshift Spectrum. Our Abstraction and Analytics Layers reside in the main Enterprise Data Warehouse, built on Redshift. For virtualizing the variety of underlying formats and data stores we plan to use Presto. As an initial evaluation we plan to run our own deployment of Presto within AWS (aka "Vanilla" Presto); long term, if the concept proves itself, we may migrate to one of the managed Presto services.
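A minimal PySpark sketch of that standardization step might read raw JSON events and write them back as partitioned Parquet, which a Redshift Spectrum external table can then point at. The bucket names, prefixes, and the 'timestamp' column are made up for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("standardize-to-parquet").getOrCreate()

# Hypothetical raw landing prefix and curated prefix; not real FT buckets.
raw = spark.read.json("s3://example-data-lake/raw/clickstream/2020/05/18/")

(raw
 .withColumn("event_date", F.to_date("timestamp"))   # assumes a 'timestamp' column
 .write
 .mode("append")
 .partitionBy("event_date")                          # enables partition pruning in Spectrum
 .parquet("s3://example-data-lake/curated/clickstream/"))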
As the foundation of the platform, we are now moving from EC2 and ECS to EKS. We anticipate this migration will significantly increase the platform's scalability and cost-effectiveness. For data lineage we are planning to try Apache Atlas. For monitoring and observing the system's health we will continue to rely on Grafana, Splunk, and our internal monitoring platforms BizOps and Heimdall. For monitoring data quality, and automating this process across internal FT systems, we built and integrated into the platform a homegrown Data Quality Metrics Checks framework.
How are we enabling the data to be consumed?
Execution Environment for Data Workflows
To enable our tech teams to easily build new workflows, and our product teams to do product discovery and development more effectively, we recently developed a new capability. We named it E2 internally, which stands for Execution Environment but also relates nicely to Euler's number. It is a processing engine for running algorithms such as data science models and machine learning on top of the data we have in our Data Lake. Our ambition is to support scalable workflows written in a variety of programming technologies such as R, Python, Java, and Spark.
We will continue sharing our adventures and challenges in building our core platforms using the latest technology trends. Stay tuned, and if you recognize our mission as yours, consider joining us on this exciting journey!
|
https://medium.com/ft-product-technology/enabling-data-driven-decisions-564359b79788
|
['Elena Georgieva']
|
2020-05-18 11:25:52.890000+00:00
|
['Big Data', 'Data Platforms', 'Financial Times']
|
4,971 |
Rights and wrongs when creating profile
|
The biggest GDPR fine in Germany, which H&M is to pay, has uncovered a delicate yet scandalous problem — spying on employees. Profiles were reportedly created and continuously developed, bringing details about two hundred employees to a number of managers. But is there such a thing as a righteous profile?
H&M will be charged €35.3 million — a penalty imposed by the Data Protection Authority of Hamburg. The company, which has a service center in Nuremberg, is accused of collecting and storing private-life data of its employees. H&M has allegedly been gathering far more data about hundreds of its employees than it had a right to since 2014. The press release describing the incident says that many private details were documented by the company's management, including information about family issues, religion, and illnesses and diagnoses. These records, in some cases quite elaborate and full of particulars, were made for further processing and analysis and were accessible to dozens of employees.
The arbitrariness with which the company's managers collected and recorded private-life data during casual talks, and kept a history of such details to build an illicit profile, is among the key problems the Data Protection Authority is trying to address by exacting the biggest GDPR fine in Germany so far.
The fact that the company collected private information surfaced unexpectedly in 2019, after a configuration error in the system exposed the data, letting anyone working at the company access it. The exposed details were available for hours.
60GB of information was provided by H&M to the DPA.
The company took many steps to restore security and transparency — the brand has reportedly invested in compliance and launched a data protection program in Nuremberg. A new employee has been assigned to coordinate data protection. The new risk management framework of the affected service center included mechanisms for preventing whistleblowing and updating privacy status.
It has been noted that H&M is going to compensate the employees for the major inconvenience.
Interestingly, according to Trust Anchor, considering the company's turnover the penalty should have been nearly twice as high — about €61 million, but thanks to the company's cooperation with the DPA it was almost cut in half.
Now back to the concept of a righteous profile and what it can be. The desire to create a profile for each key employee in an organisation is not really outrageous; it is all about what can and can't be done. Recording casual talks with a colleague or making notes in order to hand them over for further processing is definitely not a legitimate option but a major privacy breach. When it comes to employee assessment and evaluating whether someone is appropriate for a job assignment, there are legal measures which can be taken instead.
An automated profiling solution can be implemented within a company's systems. It is designed to help the HR department improve decision-making regarding appointments, task allocation, and entrusting employees with crucial responsibilities; to enhance overall performance and productivity; and to identify skillful and promising employees.
Only correspondence conducted via corporate channels, during work hours and on corporate devices, can be analysed in order to create an unbiased understanding of a staffer's skills, diligence, strengths, and weaknesses. Knowing these details helps avoid prejudice, erroneous opinions, or overly personal likes and dislikes.
The system runs on strictly defined algorithms and analyzes data without personal opinions or emotional baggage.
In some cases, companies that hire remote employees never have the opportunity to know each specialist in person, to follow their professional progress and involvement in business processes and projects, or to notice growing disinterest. Such companies might want to learn a bit more about whether a hired professional's goals, intentions, and dedication match those of the company.
Employee analysis helps enhance communication within a team and between managers and employees in many ways:
• Some jobs require specific qualities, and some of these jobs are serious enough not to tolerate even a few mistakes, or a person who pretends to be suitable but is in fact quite unfit for that very job. Of course, there are tests that can be run before taking such professionals onboard and letting them lead a project, but tests won't show you the real picture of an employee at work, whereas this is exactly what is important — to see how a specialist deals with an assigned job in progress, and whether anything changes in comparison to the start.
• Knowing each employee's individuality helps you avoid spoiling the spirit of cooperation with the wrong attitude, and to take a unique approach to someone if necessary.
• Information can't be entrusted equally to everyone who gets access to corporate assets. Automated profiling has been developed to the point where emotionally and informationally limited corporate correspondence can yield results based on criteria that matter only for job-related issues: for example, whether a specialist can be trusted with top-secret data.
When appointing a new CEO or bringing in a teacher to work with children, it might be helpful to make sure that no recruitment mistake was made. Automated profiling lets you identify a prevailing type of conduct, personality traits, purposes, responses to management decisions, influence on colleagues, motivation, loyalty, and criminal tendencies, without resorting to such illicit and utterly indecent ways of pulling information as spying or whistleblowing that divulges private-life details.
Automated profiling lets a company receive reliable and impartial staff assessments with no emotional baggage, know the implicit leaders who can manage the team if needed, make important decisions concerning levels of responsibility, new appointments, and access to sensitive data, and even measure the health of the workplace environment.
|
https://medium.com/major-threats-to-your-business-human-factor/rights-and-wrongs-when-creating-profile-c8822b7f2fc1
|
['Alex Parfentiev']
|
2020-10-08 14:25:54.551000+00:00
|
['Profile', 'Privacy', 'Recruitment', 'Psychology', 'Data Breach']
|
4,972 |
Third Law of the Interface: Interfaces form an ecosystem
|
Interfaces live in an ecosystem and there is a fertile and conflicting exchange among them. When the engineers who created the first computers needed a device to program them, they simply adapted what they already had: the typewriter QWERTY keyboard. And when in the 1960s the computer needed a real-time output device, they had no doubts: the television screen was waiting for them. Like the synapses of a neuron or the valences of a chemical element, interfaces have the possibility of linking with other interfaces. Interfaces, as Claude Lévi-Strauss (1964) said about myths, engage in a dialogue and “think each other”.
The dialogue between interfaces does not discriminate against any type of device or human activity. What today is on the screen, yesterday was in the real world, and what will appear tomorrow in a videogame, will later be found on the Web. Interfaces form a network that looks like an expansive hypertext in perpetual transformation that carries out operations of movement, translation, transduction and metamorphosis.
The evolution of interfaces depends on the correlations that they establish with other interfaces. If the interface does not engage in a dialogue with other interfaces, it does not evolve and it runs the risk of being extinguished (Fourth Law).
The impossible interface
Sometimes the interface does not find good interlocutors for dialogue. The printing press, invented in China a millennium before Johannes Gutenberg, could not become consolidated in that society because it was almost impossible to dialogue with a system of ideographic writing in which each sign corresponds to a concept. As Marshall McLuhan (1962) explained, the interface of the Chinese press lacked an interlocutor: the Latin alphabet. The Gutenberg machine, on the other hand, integrated into one interface the wine press, the Latin alphabet, paper, binding systems and the techniques of fusing and molding lead.
Five hundred years after Gutenberg something similar happened with graphic interfaces. Several companies attempted to market a personal computer with a user-friendly interface (the Xerox Star in 1981, the Apple Lisa in 1983), but they failed. Finally, in the prophetic year of 1984, the miracle occurred: the Macintosh, the machine for the rest of us, conquered the public.
Why did the Mac succeed where the Apple Lisa and the Xerox Star had failed? Because it established a dialogue between its graphic operating system, Hewlett-Packard's laser printer and Adobe's PostScript language. The union of these three technologies revolutionized the way the world understood computing, created new professional fields such as Desktop Publishing (DTP) and generated the conditions for the personal computer revolution in the 1980s (Lévy, 1992).
Perfect interfaces
The situation repeated itself at the beginning of the 21st century. As Steven Levy explains in The Perfect Thing (2006), the appearance of the iTunes software, the progressive reduction in the size of hard disks, the falling price of memory and the development of the FireWire interface converged into the coolest product of the new decade: the iPod. The iPod is an interface that integrates different hardware and software elements – a 1.8-inch hard drive, the FireWire connection, the MP3 format for audio compression – with the former Macintosh application for playing and managing music: iTunes. As in 1984 with the Macintosh, the interconnection of actors determined the success of the perfect thing. Just one year after Levy's book was published it was already outdated. On 29th June 2007 Apple introduced a new perfect thing with an even more extended network of actors: the iPhone.
This description of high-technology devices that converge into a single interface should not eclipse the human actors that participate in them. Designers (the Apple design team, not just Steve Jobs), institutions of any kind (media, markets, Apple Stores, research labs, etc.) and, obviously, consumers, participate and interact in the network built around these almost perfect – new improved models are presented every semester – interfaces.
Theoretical networks
Social sciences have had an intermittent interest in technological change. Classical thinkers like Adam Smith, David Ricardo or Karl Marx saw mechanization or the division of labor as fundamental topics of their economic theories. Nevertheless, from the end of the 19th century to the 1950s economics was more interested in the equilibrium of variables, so attention was focused on other fields. The development of a new school of thought around the Austrian economist Joseph Schumpeter brought the problems of technology, innovation and entrepreneurship into focus again.
For many years, researchers believed that the role of inventors was central in the innovation process: that’s why we still talk about James Watt’s steam engine, Thomas Edison’s light bulb, Alexander Bell’s telephone and Steve Jobs’ Macintosh. To every name there is a corresponding artifact, or more than one (Thomas Edison also ‘invented’ the phonograph, and Steve Jobs the iPod, the iPhone and the iPad). This conception is based on the heroic role played by each individual inventor in the creation of a new artifact. Researchers like Nathan Rosenberg (1992), one of the most recognized historians of economy, denounced this ‘heroic theory of invention’ that impregnates our language, patent system and history books.
In this context, the Laws of the Interface prefer to establish a dialogue with conceptions and theories like the Social Construction of Technology (SCOT) (i.e. Bijker, Hughes and Pinch, 1987; Bijker and Law, 1992; Bijker, 1997), the Actor-Network Theory (ANT) (Callon, 1987; Law and Hassard, 1999; Latour, 2005), media ecology (McLuhan, 1962, 2003; McLuhan & McLuhan, 1992; Scolari, 2012, 2015; Strate, 2017), media archaeology (Huhtamo & Parikka, 2011; Parikka, 2012) and media evolution (Scolari, 2015, 2019). The contributions of Arthur (2009), Basalla (1988), Levinson (1997), Logan (2007), Frenken (2006), Manovich (2013) and Ziman (2000) have also been integrated into this interdisciplinary and polyphonic conversation.
The Laws of the Interface, in a few words, proposes an eco-evolutionary approach to socio-technological change based on the contributions of all of these authors and disciplines.
The content of one interface is always another interface
What happens when we deconstruct an interface? The windmill was one of the most important inventions of the Middle Ages. If we deconstruct a windmill, what do we find? A combination of the water mill and ship sails, two technologies invented in Antiquity. If we dismantle the water mill we will find a wheel, an axis and many other technological actors that interact with them. When we open an interface we always find more interfaces. This fractal dimension of interfaces could take the form of a new law or at least a corollary: the content of one interface is always another interface.
|
https://uxdesign.cc/third-law-of-the-interface-interfaces-form-an-ecosystem-e6293a108089
|
['Carlos A. Scolari']
|
2019-10-09 23:49:23.079000+00:00
|
['UI', 'Usability', 'Design', 'Interfaces', 'Technology']
|
4,973 |
The Latest Theory That May Answer the Origin of Covid-19
|
The Latest Theory That May Answer the Origin of Covid-19
The Mojiang Miners Passage (MMP) hypothesis explains many oddities of the Covid-19 pandemic.
Image by JohannaIris from Pixabay
In a July commentary, "A Proposed Origin for SARS-CoV-2 and the COVID-19 Pandemic," Jonathan R. Latham, a virologist with a doctorate, and Allison Wilson, a professor of biology, presented the Mojiang Miners Passage (MMP) hypothesis, which, they stated, provides "a plausible and parsimonious explanation of all the key features of the COVID-19 pandemic and its origin." "It accounts for the propensity of SARS-CoV-2 infections to target the lungs; the apparent preadapted nature of the virus; and its transmission from bats in Yunnan to humans in Wuhan." Let's see what the hypothesis is about.
What is known about the origin of SARS-CoV-2?
First, let’s start with the known facts. The closest relative to SARS-CoV-2 is RaTG13, a bat sarbecovirus isolated from the Yunnan Province of China, with about 96% genetic identity. A recent genomics study in Nature Microbiology shows that SARS-CoV-2 descended from RaTG13, which indicates that SARS-CoV-2 came from bats without any intermediate host. “Current sampling of pangolins does not implicate them as an intermediate host,” stated the study authors.
Prior research in May also said that pangolin is not the intermediate host that passed SARS-CoV-2 to humans. The pangolin coronavirus (pangolin-CoV-2020) and SARS-CoV-2 are only 90.32% identical. “Bat-CoV-RaTG13 was more genetically close to SARS-CoV-2 at both individual gene and genomic sequence level compared with the genomic sequence of pangolin-CoV-2020 assembled in this study,” this research concluded. “Our study does not support that SARS-CoV-2 evolved directly from the pangolin-CoV.”
SARS-CoV-2 came from a bat sarbecovirus called RaTG13, not pangolin, human-made, or the wet market. How it got spread into humans is still a mystery.
Another fact is that SARS-CoV-2 is not human-made. Genetic engineering leaves a ‘fingerprint’ in the organism’s genome, which can be caught with genetics techniques. In January, the Massachusetts Institute of Technology (MIT) used the Finding Engineering-Linked Indicators (FELIX) tool of the US Director of National Intelligence to confirm that SARS-CoV-2 was never genetically manipulated from any known coronaviruses. In fact, the FELIX tool shows that SARS-CoV-2 best matches are naturally occurring coronaviruses.
In late May, the Chinese CDC ruled out the Huanan wet market in Wuhan as the source of the SARS-CoV-2 outbreak. The WHO announced the same. SARS-CoV-2 was not found in any animals tested from the wet market. And a third of early Covid-19 patients never had contact with the wet market. Therefore, SARS-CoV-2 came from somewhere else besides the wet market in Wuhan.
So the fact is that SARS-CoV-2 came from a bat sarbecovirus called RaTG13, not pangolin, human-made, or the wet market. How it got spread into humans is still a mystery. “The organizations [of the U.S. intelligence community] decided to continue investigating two alternatives,” Sarah Scoles, a freelance science writer, wrote in OneZero. “The more likely explanation that the virus jumped from an animal to a human, and the more remote possibility that it was a natural virus released in a lab accident, which still hasn’t been ruled out [by the US intelligence community].”
What happened in Mojiang Mine in 2012
In late April of 2012, six miners at Mojiang Mine fell ill with an unknown pneumonia. They were brought to the Kunming University Hospital in Yunnan, about 250 km from the mine. Their symptoms included dry cough (all patients), sputum (all), high fever (all), difficulty breathing (5), myalgia (5), low blood oxygen levels (4), headaches (3), and blood clots (2). Treatments used were steroids (all), antivirals (5), ventilation (3), and blood thinners (2). In the end, half of the miners (3) died.
Samples (at least blood and thymus tissues) from the miners were sent to the Wuhan Institute of Virology to determine the causative agent. The conclusion made was that “the unknown virus lead to severe pneumonia could be: The SARS-like-CoV from the Chinese rufous horseshoe bat,” wrote the authors. So, the miners had a coronavirus infection.
(Please also note that the original study that detailed the Mojiang miners’ pneumonia is a Masters thesis, which is still a credible scientific source supervised by a committee of academics.)
In that same year, ZhengLi Shi, director of the Center for Emerging Infectious Diseases at the Wuhan Institute of Virology, led a surveillance study at the Mojiang Mine. Her team collected fecal swabs from 276 bats. Using genetic sequencing, they detected nine coronaviruses species, of which six were never classified and one was RaTG13. (Recalled that RaTG13 is a bat sarbecovirus that is 96% identical to SARS-CoV-2).
Some of these coronavirus species had likely infected the miners, although no comparative studies have confirmed this. Such events are not at all new, as bat-to-human spillovers of viruses have occurred before.
MMP hypothesis part I: Human passage
Recall that in the surveillance study in Mojiang Mine, Zhengli Shi's team discovered the bat coronavirus RaTG13, which is 96% identical to SARS-CoV-2. Although RaTG13 is the closest relative of SARS-CoV-2, a 4% genomic difference is still vast, requiring an estimated 20–50 years of evolution. Hence, an intermediate host is likely involved, as was the case for SARS and MERS, because the rate of coronavirus evolution speeds up in a different species.
Contrary to this conventional view, the MMP (Mojiang Miners Passage) hypothesis states that RaTG13 may have evolved into SARS-CoV-2 in the miners in the Mojiang cave.
Coronaviruses usually infect the upper respiratory tract. Miners work in conditions of poor air quality, which compromises respiratory health. Thus, RaTG13 bat coronavirus might have made its way into the miners’ lower respiratory tract where the lungs reside. Lungs are a large organ, and since the miners’ pneumonia was severe enough to require prolonged hospitalization, the virus load must have been enormous. “Evolutionary change is in large part a function of the population size,” Dr. Latham and Prof. Wilson explained. “The lungs of the miners, we suggest, supported a very high viral load leading to proportionately rapid viral evolution.”
(The term ‘passage’ refers to a standard technique to ‘culture’ viruses in a new set of cells. As viruses can only replicate using another cell’s machinery, the passaging of viruses is required for research purposes. By this analogy, the RaTG13 bat coronavirus was passaged in humans in the Mojiang Mine.)
As mentioned above, the miners’ symptoms closely resemble that of Covid-19. “Anyone presenting with them today would immediately be assumed to have COVID-19,” Dr. Latham and Prof. Wilson remarked. And the corresponding treatments administered to the miners — steroids, antivirals, blood thinners, and ventilation — are precisely the same for Covid-19. Therefore, the patient zero of the pandemic might be one of the miners.
Why did the miners not spread the disease to others? The novel coronavirus is most contagious during the early phase of the disease, probably one to two days before symptom onset. The miners were only taken to the hospital when their pneumonia had become severe, and mask-wearing was probably widely practiced in hospital settings. Therefore, the coronavirus at that time might not have had much of a chance to spread.
MMP hypothesis part II: The escape
Recall also that samples (blood and thymus tissues) from the miners were sent to the Wuhan Institute of Virology for research purposes. As the Institute's labs were under construction at the time of sample collection, virologists may have begun experiments in 2017 or 2018, Dr. Latham and Prof. Wilson said. The virus may then have leaked from the lab by accident.
“The more likely explanation that the virus jumped from an animal to a human, and the more remote possibility that it was a natural virus released in a lab accident, which still hasn’t been ruled out [by the US intelligence community].”
It may seem an outrageous thing to state, but unintentional lab leaks of microbes have happened many times around the world. According to USA Today, over 1,100 lab accidents involving the escape of bacteria, viruses, or toxins to agriculture or humans were reported to federal regulators between 2008 and 2012. Even a 2009 paper in the New England Journal of Medicine admitted that the re-emergence in 1977 of the H1N1 swine flu — which had disappeared from the human population in 1957 — was "probably an accidental release from a laboratory source." Moreover, SARS has escaped from labs six times — once in Singapore, once in Taiwan, and four times in Beijing. So it would not be surprising if a coronavirus had leaked from the Wuhan Institute of Virology as well.
“Accidents happen on a regular basis. We have seen a few cases of high-profile labs in recent years where accidents happened or mistakes were made,” Dr. Filippa Lentzos, a biosecurity expert at King’s College London, stated. “For instance, in 2014 at the CDC there were safety lapses involving Ebola virus, anthrax and bird flu, and there have been lapses at the NIH [National Institutes of Health] involving variola virus which causes smallpox.”
Not to mention that the Wuhan Institute of Virology received two official warnings from American embassy officials in 2018 concerning inadequate laboratory safety measures. A Chinese national team also found that the Wuhan lab did not meet federal standards in five categories. There were also reported accidents in which lab workers were wounded by bat attacks or exposed to bat urine, according to VOA News.
Still, this narrative is not to blame research. There is no doubt that science is the cornerstone of human civilization. It is just that sometimes unfortunate accidents happen.
MMP hypothesis in a nutshell
“We suggest, first, that inside the miners RaTG13 (or a very similar virus) evolved into SARS-CoV-2, an unusually pathogenic coronavirus highly adapted to humans,” Dr. Latham and Prof. Wilson said. “Second, that the [Zhengli] Shi lab used medical samples taken from the miners and sent to them by Kunming University Hospital for their research. It was this human-adapted virus, now known as SARS-CoV-2, that escaped from the WIV [Wuhan Institute of Virology] in 2019.”
“The closest known relative to SARS-CoV-2 is a virus sampled by Chinese researchers from six miners infected while working in a bat-infested cave in southern China in 2012. These miners developed symptoms we now associate with Covid-19. These viral samples were then taken to the Wuhan Institute of Virology…,” agreed Jamie Frederic Metzl, an author and senior fellow at the Atlantic Council. “If the virus jumped to humans through a series of human-animal encounters in the wild or wet markets, as Beijing has claimed, we would likely have seen evidence of people being infected elsewhere in China before the Wuhan outbreak. We have not. The alternative explanation, a lab escape, is far more plausible.”
What the MMP hypothesis explains
For one, SARS-CoV-2 binds to the human ACE2 receptor with remarkable efficiency. “Such exceptional affinities, ten to twenty times as great as that of the original SARS virus, do not arise at random, making it very hard to explain in any other way than for the virus to have been strongly selected in the presence of a human ACE2 receptor,” Dr. Latham and Prof. Wilson noted, such as in the workers in Mojiang Mine. And the bat sarbecovirus RaTG13 can indeed bind to the human ACE2 receptor. A study published in May in The Lancet Microbe has also shown that SARS-CoV-2 does not replicate efficiently in cultured (in a lab dish or plate) kidney and lung cells of bats — suggesting that SARS-CoV-2 probably evolved in a human host rather than bats.
“In short, the MMP theory is a plausible and parsimonious explanation of all the key features of the COVID-19 pandemic and its origin. It accounts for the propensity of SARS-CoV-2 infections to target the lungs; the apparent preadapted nature of the virus; and its transmission from bats in Yunnan to humans in Wuhan.”
Second, viruses usually undergo rapid evolution when they replicate in a new host. For instance, when MERS and SARS first adapted to humans, phylogenetic analyses revealed numerous mutations and recombinations in the viruses' genomes. But such rapid evolution was not observed in SARS-CoV-2 genomes at the beginning of the pandemic, even though SARS-CoV-2 has infected far more people than SARS and MERS did. "That is to say, its evolutionary leap to humans was completed before the 2019 pandemic began," stated Dr. Latham and Prof. Wilson. "It is hard to imagine an explanation for this high adaptiveness other than some kind of passaging in a human body."
Third, the MMP hypothesis complements known facts about the origin of SARS-CoV-2 — that it evolved from a bat sarbecovirus called RaTG13; it does not come from pangolin or the Huanan wet market, or by human design.
A complementary theory to the MMP hypothesis
Another convincing theory regarding the origin of SARS-CoV-2 concerns gain-of-function research. Laboratories around the world have intentionally made microbes adapt to different species — through serial passaging experiments — to study the possibility of epidemics. Hence, a possibility exists that SARS-CoV-2 was cultured (or passaged) in laboratory settings to improve its binding affinity to ACE2 receptors.
This 'gain-of-function lab escape' has been theorized by some research groups to explain the uncanny, rapid adaptability of SARS-CoV-2 to humans. One is a peer-reviewed research paper in BioEssays in August, titled "Might SARS‐CoV‐2 Have Arisen via Serial Passage through an Animal Host or Cell Culture?" Another example is the commentary of Birger Sørensen, a Norwegian virologist specializing in HIV vaccine research, and his colleagues.
This ‘gain-of-function lab escape’ theory does not contradict the fact that SARS-CoV-2 is not genetically manipulated. Serial passaging mimics natural evolution in an accelerated fashion in the lab, so it does not count as direct genetic engineering or human-designed virus. It also does not contradict the MMP hypothesis, but rather complements it: RaTG13 or SARS-CoV-2 isolated from the Mojiang Mine may have undergone gain-of-function experiments in the Wuhan Institute of Virology before its accidental escape.
Short abstract and closing
The MMP hypothesis states that the bat sarbecovirus RaTG13 infected workers in the Mojiang Mine in 2012. This RaTG13 then underwent rapid evolution upon encountering a new organism, evolving into SARS-CoV-2. The infected miners had pneumonia signs that were indistinguishable from Covid-19, and they underwent treatments used today for Covid-19. Samples from the miners were taken to the Wuhan Institute of Virology for research purposes. SARS-CoV-2 may have then leaked from the lab by accident. Between sample collection and viral escape, a possibility exists that gain-of-function experiments were done.
The MMP hypothesis (maybe plus the gain-of-function theory) explains many facets of the pandemic — such as its ability to infect the lower respiratory tract (which is uncommon of coronaviruses), its unusual adaptability to humans within a short timeframe, and its mysterious zoonotic transfer from bats in Yunnan to people in Wuhan. This hypothesis also does not contradict known facts that SARS-CoV-2 came from bats, not pangolin, human-designed, or the Huanan wet market in Wuhan.
Lastly, note that the MMP hypothesis is only a possibility and not yet proven. The WHO announced in August that it will soon conduct epidemiological investigations, with the help of China, into the early source of SARS-CoV-2 in Wuhan.
|
https://medium.com/microbial-instincts/the-latest-theory-that-may-answer-the-origin-of-covid-19-d9efbe7072ae
|
['Shin Jie Yong']
|
2020-10-16 09:13:43.909000+00:00
|
['Covid 19', 'Life', 'Technology', 'Science', 'Education']
|
4,974 |
Automation eating your industry? (Answer: Yes.) These are the skills that will always be valued in the workplace.
|
by Alison E. Berman ✍️
If you’d asked farmers a few hundred years ago what skills their kids would need to thrive, it wouldn’t have taken long to answer. They’d need to know how to milk a cow or plant a field. They needed general skills for a single profession that barely changed. This is how it’s been for most of human history.
But in the last few centuries? Not so much.
Each generation, and even within generations, we see some jobs largely disappear, while other ones pop up. Machines have automated much of manufacturing, for example, and they’ll automate even more soon. But as manufacturing jobs decline, they’ve been replaced by other once-unimaginable professions like bloggers, coders, dog walkers, or pro gamers.
In a world where these labor cycles are accelerating, the question is: What skills do we teach the next generation so they can keep pace?
More and more research shows that current curriculums, which teach siloed subject matter and specific vocational training, are not preparing students to succeed in the 21st century: a time of technological acceleration, market volatility, and uncertainty.
To address this, some schools have started teaching coding and other skills relevant to the technologies of today. But technology is changing so quickly that these new skills may not be relevant by the time students enter the job market.
In fact, in Cathy Davidson’s book, Now You See It, Davidson estimates that,
“65 percent of children entering grade school this year (2011) will end up working in careers that haven’t even been invented yet.”
Not only is it difficult to predict what careers will exist in the future, it is equally uncertain which technology-based skills will be viable 5 or 10 years from now.
So, what do we teach?
Finland recently shifted its national curriculum to a new model called the “phenomenon-based” approach. By 2020, the country will replace traditional classroom subjects with a topical approach highlighting the four Cs — communication, creativity, critical thinking, and collaboration. These four skills “are central to working in teams, and a reflection of the ‘hyperconnected’ world we live in today,” Singularity Hub Editor-in-Chief David Hill recently wrote.
Hill notes the four Cs directly correspond to the skills needed to be a successful 21st-century entrepreneur, when accelerating change means the jobs we’re educating for today may not exist tomorrow. Finland’s approach reflects an important transition away from the antiquated model used in most US institutions: a model created for a slower, more stable labor market and economy that simply no longer exists.
In addition to the four Cs, successful entrepreneurs across the globe are demonstrating three additional soft skills that can be integrated into the classroom — adaptability, resiliency and grit, and a mindset of continuous learning.
These skills can equip students to be problem-solvers, inventive thinkers, and adaptive to the fast-paced change they are bound to encounter. In a world of uncertainty, the only constant is the ability to adapt, pivot, and get back on your feet.
Like Finland, the city of Buenos Aires is embracing change.
Select high school curriculums in the city of Buenos Aires now require technological education in the first two years and entrepreneurship in the last three years. Esteban Bullrich, Buenos Aires’ minister of education, told Singularity University in a recent interview:
“I want kids to get out of school and be able to create whatever future they want to create — to be able to change the world with the capabilities they earn and receive through formal schooling.” — Esteban Bullrich, Minister of Education, Buenos Aires
The idea is to teach students to be adaptive and equip them with skills that will be highly transferable in whatever reality they may face once out of school. Embedding these entrepreneurial skills in education will enable future leaders to move smoothly with the pace of technology. In fact, Mariano Mayer, director of entrepreneurship for the city of Buenos Aires, believes these soft skills will be valued most highly in future labor markets.
This message is consistent with research highlighted in a World Economic Forum and Boston Consulting Group report titled, New Vision for Education: Unlocking the Potential of Technology. The report breaks out the core 21st-century skills into three key categories — foundational literacies, competencies, and character qualities — with lifelong learning as a proficiency encompassing these categories.
From degree gathering to continuous learning
This continuous learning approach, in contrast to degree-oriented education, represents an important shift that is desperately needed in education. It also reflects the demands of the labor market — where lifelong learning and skill development are what keep an individual competitive, agile, and valued.
Singularity University CEO Rob Nail explains,
“The current setup does not match the way the world has and will continue to evolve. You get your certificate or degree and then supposedly you’re done. In the world that we’re living in today, that doesn’t work.”
Transitioning the focus of education from degree-oriented to continuous learning holds obvious benefits for students. This shift in focus, however, may be a difficult situation for academic institutions to adapt to as education as an industry becomes increasingly democratized and decentralized.
Any large change requires that we overcome barriers—and in education, there are many — but one challenge, in particular, is fear of change.
“The fear of change has made us fall behind in terms of advancement in innovation and human activities,” Bullrich says. He goes on:
“We are discussing upgrades to our car instead of building a spaceship. We need to build a spaceship, but we don’t want to leave the car behind. Some changes appear large, but the truth is, it’s still a car. It doesn’t fly. That’s why education policy is not flying.”
Education and learning are ready to be reinvented. It’s time we get to work.
|
https://medium.com/singularityu/automation-eating-your-industry-22173e674d04
|
['Singularity University']
|
2017-03-29 01:49:39.584000+00:00
|
['Teaching', 'Entrepreneurship', 'Future Of Work', 'Future Of Learning', 'Education']
|
4,975 |
The Hidden Yet Incredibly Powerful Benefits Of Being Self-Aware And Empathetic
|
The Hidden Yet Incredibly Powerful Benefits Of Being Self-Aware And Empathetic
As it turns out, facing myself is scary, but only at first. Then it’s the best thing I could’ve ever done.
Photo from Pexels.com
People might describe me as driven or ambitious but, truth be told, I’ve never been a career person.
My life has always been focused around relationships — romantic or not. I care deeply about the relationships in my life and they have played a huge role in my personal fulfillment and well-being.
When I was younger, I thought this was a bad thing. I felt ashamed for being so sensitive and emotionally impacted by romantic relationships.
While many of my peers went out and about to network and talk about their impressive CVs, all that I found a real interest in was anything concerning people.
I was unknowingly honing those so-called soft skills that keep being pointed out as the most significant qualities yet never seem to carry enough weight in the real world.
Deep down, I’d always known what my core gifts are, but back then I didn’t know they were gifts and what they were good for.
For example, my empathy and self-awareness.
I couldn’t use these two qualities to directly make money. I couldn’t really quantify them (or at least didn’t know how) to show others that I could be of use to them and prove to myself I was good enough.
For a long time, I didn’t think very highly of myself. I even hated my own gifts. I was convinced that my empathy and sensitivity were my weaknesses which held me back in life.
I wished I could be a detached, too-cool-to-care-about-love, career-driven woman. I developed an attraction towards busy, emotionless people who seemed successful on the outside but had little empathy and depth.
Here are the quotes from my favourite self-help book called Deeper Dating by Ken Page that sum up my past dating experiences well:
Deep inside we know that these Core Gifts are worthy, and we never stop longing to find someone who treasures them, but after getting the message that these gifts are risky or unlovable, we learn to hide and bury them.
and
If we deny or dishonor a Core Gift, we are likely to choose someone who also dishonors it — and then to be intensely vulnerable to any negative judgment they have about us.
The author of this book and I went through similar journeys. He wrote:
As I came to value my sensitivity (a journey that still continues), my life began to change in wonderful ways. I started building love, not just chasing it down with people who weren’t particularly interested. I began to spend time with the precious people who honored me for who I was. I gradually stopped looking for tawdry sex and found myself meeting kind and available men more often. The more I embraced my authentic self, the more the quality of men I dated improved.
I could relate to this completely. Eventually, like the author, I also came to realise that I could choose to embrace my core gifts and let authentic love come into my life.
My whole world shifted.
All along, I had asked myself what my self-awareness and empathy were for. I had thought that these qualities were only self-directed and could not make me valuable to anyone.
If anything, they might have even caused me overwhelming stress and made letting go almost an impossible task.
But now I know.
Embarking on this journey of embracing my core gifts, I’ve gained a level of clarity that is both mature and transformative. It’s true that my self-awareness and empathy are more self-directed than benefiting others.
But it’s not entirely true.
It works like this: My self-awareness and empathy help me find myself first, so I can selflessly help others. And the gift is that it has happened at a rather young age.
I could imagine, without these qualities, I would end up wasting my time doing many jobs that don’t suit me, being in many relationships that don’t work for me, and regret it all later in life when the consequences become permanent.
My self-awareness and empathy have allowed me to put many psychological issues in the past and, thanks to this, I have so much time and space ahead of me to enjoy myself peacefully and do good for others.
The most amazing thing is that I have stopped feeling like I’m on a journey. There’s a deep sense of knowing in me that I’m here.
Every day, I’m here, I’m at the right place, and I’m true.
Being highly self-aware and empathetic has also led to having high emotional intelligence and good interpersonal skills, which serve my lifelong interest in relationships very well.
Most notably, when my core gift is empathy, the right people for me are, luckily, not those who are detached, careless, insensitive, or obsessed with work.
They’re those who are empathetic, understanding, and caring, with an emotional depth of an ocean.
I’m grateful every day that they’re the people I’ve already had in my life — the people I’ve intuitively navigated towards amidst pain and confusion.
Because of these people, there’s so much compassion, kindness, and lovingness in my world. It’s abundant and overflowing. It’s always been there and ready for me to come and bathe myself in it.
The moment I chose to appreciate those qualities in myself, I unlocked an ability to appreciate them in others and recognise their presence and power in everything I do.
As it turns out, facing myself is scary, but only at first. Then it’s the best thing I could’ve ever done.
|
https://medium.com/tinglymind/the-hidden-yet-incredibly-powerful-benefits-of-being-self-aware-and-empathetic-4c5d4c021386
|
['Ellen Nguyen']
|
2020-08-02 21:29:22.683000+00:00
|
['Self-awareness', 'Self Improvement', 'Life', 'Self', 'Empathy']
|
4,976 |
Paint the Wall With Pain
|
Find the place the scar lies and pick the scab. No brushes are needed. Find the loose thread on the stitches. Pull. Slowly. This may hurt. Or fast if you prefer. Like a bandaid. One quick motion.
Do you feel better? No? I didn’t think so. Reach inside. Wipe your hands on the walls. Swirl into shapes. Flowers, trees, hearts. Swords to slay your dragons, your monsters, your own evil twin. Sometimes you have to lose yourself again and again before you can be found.
See that bucket beside you? You’ve collected your tears. Throw it there. Against the wall. Wash it all away. Not enough? Cry. Rinse. Repeat.
|
https://medium.com/brave-inspired/paint-the-wall-with-pain-4cdaff433268
|
['Gretchen Lee Bourquin', 'Pom-Poet']
|
2019-10-16 13:48:05.646000+00:00
|
['Mental Health', 'Courage', 'Prose Poem', 'Poetry', 'Healing']
|
4,977 |
The Psychological Facts That People Don’t Know About Book Reading
|
The Psychological Facts That People Don’t Know About Book Reading
A study done at the University of Sussex found that reading can reduce stress by up to 68%.
Photo by Matias North on Unsplash
Good books can inform you, enlighten you, and lead you in the right direction. Once you start reading books, you experience a new world; and as you grow to love the habit of reading, you eventually become addicted to it.
I love reading books because every book visualizes a new life experience or teaches valuable life lessons. Every book holds some unique lesson, and when we start exploring it, we learn something different.
A study published in 2009 found that reading is a way to relax and reduce stress. It is important to read a good book for at least a few minutes every day to stretch the brain’s muscles.
A book is truly a best friend when we are bored, upset, depressed, or annoyed. A good book always guides us along a great path.
|
https://medium.com/the-innovation/the-psychological-facts-that-people-dont-know-about-book-reading-554d366665af
|
['Ashish Nishad']
|
2020-11-19 16:23:08.149000+00:00
|
['Anxiety', 'Personal Development', 'Reading', 'Depression', 'Books']
|
4,978 |
Lie to the world — don’t lie to yourself
|
Image by Kevinsphotos from Pixabay
Telling a lie is forgivable, but living a lie is not.
By constructing your world from lies, only destruction will come.
Even the smallest lie, if left unacknowledged, can kill any form of love.
Occasionally, I’ll ask people if they’re honest. Are you a liar? I’ll say.
Pointless question, right? The dishonest would just lie about it.
More revealing is how people think about their own lies — the specifications they give.
How often? When? The details mean more.
The reasons are way more interesting.
The most dangerous liars aren’t the ones who lie all the time.
The real shits are the ones who lie all the time AND claim to be terrible liars.
Oh, I suck at lying.
I’m incapable of telling anything but the truth. Be wary of anyone who says this.
They’re the same people who say they don’t understand the point of telling a lie—that such a reprehensible quality is simply beyond them.
Everyone knows the point of lying — it’s to distort reality.
The real honest people — the truth seekers, the ones with integrity, the people who are self-aware — these people usually admit to it when it matters most. When the stakes are high, when the cost involves hurting another — when the effects actually mean something, the honest people come clean. They don’t double down.
They want to purge themselves of this awful tendency.
They would feel appalled by how much they’ve lied, dismayed to the point of guilt and disappointment.
So, ultimately, I don’t fear the casual liar, because I am one.
All writers are. It’s a very human thing. It’s a very real thing.
Anyone who uses language can frame something in such a way that it no longer resembles the truth.
The people I really fear are the ones who have so thoroughly convinced themselves of their own honesty that they lose all ability to even recognize the most obvious deceit within themselves.
It is the self-deceiver who is most disconnected from reality and causes the most damage and pain to others.
|
https://medium.com/scuzzbucket/lie-to-the-world-dont-lie-to-yourself-ec29983d0d88
|
['Franco Amati']
|
2020-11-25 23:58:36.895000+00:00
|
['Prose', 'Scuzzbucket', 'Psychology', 'Lying', 'Poetry']
|
4,979 |
Creating and training a U-Net model with PyTorch for 2D & 3D semantic segmentation: Training [3/4]
|
In the previous chapters we created our dataset and built the U-Net model. Now it is time to start training. For that we will write our own training loop within a simple Trainer class and save it in trainer.py. The idea is that we can instantiate a Trainer object with parameters such as the model, a criterion, etc. and then call its class method run_trainer() to start training. This method will output the accumulated training loss, the validation loss and the learning rate that was used for training. Here is the code:
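Since the embedded gist is not reproduced in this text, below is a minimal sketch of such a Trainer class, reconstructed from the description that follows. The names mirror the prose, but the line numbers mentioned later refer to the author’s original file, not to this sketch.
# trainer.py — a minimal sketch, reconstructed from the description below
import numpy as np
import torch
from tqdm import tqdm, trange


class Trainer:
    def __init__(self, model, device, criterion, optimizer,
                 training_DataLoader=None, validation_DataLoader=None,
                 lr_scheduler=None, epochs=100, epoch=0):
        self.model = model
        self.device = device
        self.criterion = criterion
        self.optimizer = optimizer
        self.training_DataLoader = training_DataLoader
        self.validation_DataLoader = validation_DataLoader
        self.lr_scheduler = lr_scheduler
        self.epochs = epochs
        self.epoch = epoch

        self.training_loss = []
        self.validation_loss = []
        self.learning_rate = []

    def run_trainer(self):
        progressbar = trange(self.epochs, desc='Progress')
        for _ in progressbar:
            self.epoch += 1
            self._train()
            if self.validation_DataLoader is not None:
                self._validate()
            if self.lr_scheduler is not None:
                self.lr_scheduler.step()  # one scheduler step per epoch
        return self.training_loss, self.validation_loss, self.learning_rate

    def _train(self):
        self.model.train()  # train mode
        train_losses = []   # temporary loss list for this epoch
        batch_iter = tqdm(enumerate(self.training_DataLoader), 'Training',
                          total=len(self.training_DataLoader), leave=False)
        for i, (x, y) in batch_iter:
            input_, target = x.to(self.device), y.to(self.device)
            self.optimizer.zero_grad()          # reset gradients
            out = self.model(input_)            # forward pass
            loss = self.criterion(out, target)  # compute loss
            train_losses.append(loss.item())
            loss.backward()                     # backward pass
            self.optimizer.step()               # update model parameters
            batch_iter.set_description(f'Training: (loss {loss.item():.4f})')
        self.training_loss.append(np.mean(train_losses))
        self.learning_rate.append(self.optimizer.param_groups[0]['lr'])
        batch_iter.close()

    def _validate(self):
        self.model.eval()  # evaluation mode
        valid_losses = []
        batch_iter = tqdm(enumerate(self.validation_DataLoader), 'Validation',
                          total=len(self.validation_DataLoader), leave=False)
        for i, (x, y) in batch_iter:
            input_, target = x.to(self.device), y.to(self.device)
            with torch.no_grad():  # no gradients during validation
                out = self.model(input_)
                loss = self.criterion(out, target)
                valid_losses.append(loss.item())
            batch_iter.set_description(f'Validation: (loss {loss.item():.4f})')
        self.validation_loss.append(np.mean(valid_losses))
        batch_iter.close()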
In order to create a trainer object the following parameters are required:
model : e.g. the U-Net
device : CPU or GPU
criterion : loss function (e.g. CrossEntropyLoss, DiceCoefficientLoss)
optimizer : e.g. SGD
training_DataLoader : a training dataloader
validation_DataLoader : a validation dataloader
lr_scheduler : a learning rate scheduler (optional)
epochs : the number of epochs we want to train
epoch : the epoch number from where training should start
Training can then be started with the class method run_trainer() . Since training is usually performed with a training and a validation phase, _train() and _validate() are two functions that are run once for every epoch we train with run_trainer() (line 33–53). If we have a lr_scheduler, we also perform a step with the lr_scheduler. To visualize the progress of training, I included a progress bar with the library tqdm — it just needs an iterable. Now let’s take a closer look on what happens when calling _train() and _validate() . If you are familiar with using PyTorch for network training, there is probably nothing new here.
In _train() we basically just iterate over our training dataloader and send our batches through the network in train mode (line 56–64). We then use this output together with our target to compute the loss with the loss function for the current batch (line 65). The computed loss is then appended in a temporary list (line 66–67). Based on the computed gradients, we perform a backward pass and a step with our optimizer to update the model’s parameters (line 68–69). At the end we update our progress bar for the training phase to show the current loss (line 71). The function outputs the mean of the temporary loss list and the learning rate that was used.
In _validate() , similar to _train() , we iterate over our validation dataloader, send our batches through the network in validation mode and compute the loss. This time, without computing the gradients and without performing a backward pass (line 78–97).
Start training with the Carvana dataset
Let’s create our Carvana data generators once again, but this time run the code within a jupyter notebook. The only thing we have to change to make it work as intended is line 3 in trainer.py: from tqdm import tqdm, trange becomes from tqdm.notebook import tqdm, trange .
# Imports
from utils import get_filenames_of_path
import pathlib
from transformations import Compose, AlbuSeg2d, DenseTarget
from transformations import MoveAxis, Normalize01, Resize
from sklearn.model_selection import train_test_split
from customdatasets import SegmentationDataSet
import torch
from unet import UNet
from trainer import Trainer
from torch.utils.data import DataLoader
import albumentations
# root directory
root = pathlib.Path('/Carvana')
# input and target files
inputs = get_filenames_of_path(root / 'Input')
targets = get_filenames_of_path(root / 'Target')
# training transformations and augmentations
transforms_training = Compose([
Resize(input_size=(128, 128, 3), target_size=(128, 128)),
AlbuSeg2d(albu=albumentations.HorizontalFlip(p=0.5)),
DenseTarget(),
MoveAxis(),
Normalize01()
])
# validation transformations
transforms_validation = Compose([
Resize(input_size=(128, 128, 3), target_size=(128, 128)),
DenseTarget(),
MoveAxis(),
Normalize01()
])
# random seed
random_seed = 42
# split dataset into training set and validation set
train_size = 0.8 # 80:20 split
inputs_train, inputs_valid = train_test_split(
inputs,
random_state=random_seed,
train_size=train_size,
shuffle=True)
targets_train, targets_valid = train_test_split(
targets,
random_state=random_seed,
train_size=train_size,
shuffle=True)
# inputs_train, inputs_valid = inputs[:80], inputs[80:]
# targets_train, targets_valid = targets[:80], targets[80:]
# dataset training
dataset_train = SegmentationDataSet(inputs=inputs_train,
targets=targets_train,
transform=transforms_training)
# dataset validation
dataset_valid = SegmentationDataSet(inputs=inputs_valid,
targets=targets_valid,
transform=transforms_validation)
# dataloader training
dataloader_training = DataLoader(dataset=dataset_train,
batch_size=2,
shuffle=True)
# dataloader validation
dataloader_validation = DataLoader(dataset=dataset_valid,
batch_size=2,
shuffle=True)
Please note that I resize the images to 128x128x3 using Resize() to speed up training. This will generate batches of images that look like this:
from visual import Input_Target_Pair_Generator, show_input_target_pair_napari
gen = Input_Target_Pair_Generator(dataloader_training, rgb=True)
show_input_target_pair_napari(gen)
I can then instantiate the Trainer object and start training:
# device
if torch.cuda.is_available():
device = torch.device('cuda')
else:
device = torch.device('cpu')
# model
model = UNet(in_channels=3,
out_channels=2,
n_blocks=4,
start_filters=32,
activation='relu',
normalization='batch',
conv_mode='same',
dim=2).to(device)
# criterion
criterion = torch.nn.CrossEntropyLoss()
# optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# trainer
trainer = Trainer(model=model,
device=device,
criterion=criterion,
optimizer=optimizer,
training_DataLoader=dataloader_training,
validation_DataLoader=dataloader_validation,
lr_scheduler=None,
epochs=10,
epoch=0)
# start training
training_losses, validation_losses, learning_rates = trainer.run_trainer()
Training will look something like this:
Improve the data generator
Although training was performed on a NVIDIA 1070, it took 1:19 min to train 2 epochs with only 96 images (size 128x128x3) per epoch. Why is that? The reason this is so painfully slow is that every time we generate a batch, we read the data in full resolution (1918x1280x3) and resize it. And we do this for every epoch! Therefore, it would make more sense to either store the data in a lower resolution and then pick it up, or better: store the data in cache and access it when it’s needed. Or both. Let’s slightly change our custom SegmentationDataSet class (I will create a new file and name it customdatasets2.py, but you can replace your customdatasets.py with this one):
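The gist for customdatasets2.py is not embedded in this text either, so here is a minimal sketch of the cached dataset based on the description that follows. The output dtypes (float32 inputs, long targets) are assumptions carried over from the earlier parts of this series.
# customdatasets2.py — a minimal sketch of the cached dataset
import torch
from skimage.io import imread
from torch.utils.data import Dataset
from tqdm import tqdm


class SegmentationDataSet(Dataset):
    def __init__(self, inputs, targets, transform=None,
                 use_cache=False, pre_transform=None):
        self.inputs = inputs
        self.targets = targets
        self.transform = transform
        self.use_cache = use_cache
        self.pre_transform = pre_transform

        if self.use_cache:
            # read (and optionally pre-transform) every pair once, up front
            self.cached_data = []
            progressbar = tqdm(range(len(self.inputs)), desc='Caching')
            for i in progressbar:
                img, tar = imread(str(self.inputs[i])), imread(str(self.targets[i]))
                if self.pre_transform is not None:
                    img, tar = self.pre_transform(img, tar)  # e.g. resize once
                self.cached_data.append((img, tar))

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, index):
        if self.use_cache:
            x, y = self.cached_data[index]  # pick the pair up from memory
        else:
            x, y = imread(str(self.inputs[index])), imread(str(self.targets[index]))
        if self.transform is not None:
            x, y = self.transform(x, y)
        return (torch.from_numpy(x).type(torch.float32),
                torch.from_numpy(y).type(torch.long))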
Here we added the arguments use_cache and pre_transform . We basically just iterate over our input and target lists and store the images in a list when we instantiate our dataset. When __getitem__ is called, an image-target pair from this list is returned. I added the pre_transform argument because I don’t want to change the original files. Instead, I want the images to be picked up, resized and stored in memory. Again, I included a progress bar to visualize the caching. If you want to run it correctly in Jupyter, please import tqdm from tqdm.notebook in line 4 of customdatasets2.py . Let’s try it out. The changes in code are the following:
# pre-transformations
pre_transforms = Compose([
Resize(input_size=(128, 128, 3), target_size=(128, 128)),
])
# training transformations and augmentations
transforms_training = Compose([
AlbuSeg2d(albu=albumentations.HorizontalFlip(p=0.5)),
DenseTarget(),
MoveAxis(),
Normalize01()
])
# validation transformations
transforms_validation = Compose([
DenseTarget(),
MoveAxis(),
Normalize01()
])
# random seed
random_seed = 42
# split dataset into training set and validation set
train_size = 0.8 # 80:20 split
inputs_train, inputs_valid = train_test_split(
inputs,
random_state=random_seed,
train_size=train_size,
shuffle=True)
targets_train, targets_valid = train_test_split(
targets,
random_state=random_seed,
train_size=train_size,
shuffle=True)
And it looks something like this:
The first progress bar represents the training dataloader and the second the validation dataloader. Let’s train again for 2 epochs and see how long it’ll take.
Training took about 2 seconds only! That’s much better. But there is one part we can still improve. Creating the dataset that reads images and stores them in memory takes a bit of time. When you look at the code and the CPU usage, you’ll notice that only one core is used! Let’s change it in a way, so that all cores are used. Here I use the multiprocessing library.
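The corresponding snippet is not shown here; a minimal sketch of parallel caching with multiprocessing.Pool could look like the following. read_images is an illustrative module-level helper (workers cannot pickle local functions), and the pre-transform passed to it must itself be picklable.
# A sketch of parallel caching: read all input/target pairs across CPU cores.
from itertools import repeat
from multiprocessing import Pool

from skimage.io import imread


def read_images(inp, tar, pre_transform):
    """Worker: read one input/target pair and optionally pre-transform it."""
    inp, tar = imread(str(inp)), imread(str(tar))
    if pre_transform is not None:
        inp, tar = pre_transform(inp, tar)
    return inp, tar


# Inside SegmentationDataSet.__init__, the sequential caching loop becomes:
#     with Pool() as pool:
#         self.cached_data = pool.starmap(
#             read_images,
#             zip(self.inputs, self.targets, repeat(self.pre_transform)))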
Before we perform training, let’s also make a quick detour and talk about the learning rate.
Learning rate finder
The learning rate is one of the most important hyperparameters in neural network training. Choosing proper learning rates throughout the learning procedure is difficult as a small learning rate leads to slow convergence while a high learning rate can cause divergence. Also, frequent parameter updates with high variance in SGD can cause fluctuations, which makes finding the (local) minimum for SGD even more difficult. To identify an optimal learning rate, we can test different learning rates empirically with a learning rate range test. Inspired by the best practices I picked up from the fast.ai course, I recommend using a learning rate finder before starting the actual training. Sylvain Gugger from fast.ai wrote a really good summary about this problem. The code that I will show you is based on Tanjid Hasan Tonmoy’s pytorch-lr-finder, which is an implementation of the learning rate range test from Leslie Smith. I only slightly modified the code and included a progressbar (yes, I like them).
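The adapted pytorch-lr-finder code is not reproduced here. As a stand-in, here is a minimal, hand-rolled sketch of the underlying idea: increase the learning rate exponentially at each step, record the loss, and stop once the loss diverges. All names and thresholds below are illustrative, not the author’s implementation.
# A minimal learning rate range test, assuming a standard PyTorch setup
import torch


def lr_range_test(model, optimizer, criterion, dataloader, device,
                  start_lr=1e-7, end_lr=10, num_steps=1000):
    lrs, losses = [], []
    gamma = (end_lr / start_lr) ** (1 / num_steps)  # multiplicative lr step
    lr = start_lr
    model.train()
    data_iter = iter(dataloader)
    for step in range(num_steps):
        try:
            x, y = next(data_iter)
        except StopIteration:          # restart the loader when exhausted
            data_iter = iter(dataloader)
            x, y = next(data_iter)
        for group in optimizer.param_groups:
            group['lr'] = lr           # set the lr for this step
        optimizer.zero_grad()
        loss = criterion(model(x.to(device)), y.to(device))
        loss.backward()
        optimizer.step()
        lrs.append(lr)
        losses.append(loss.item())
        if losses[-1] > 4 * min(losses):  # stop early once the loss diverges
            break
        lr *= gamma
    return lrs, losses


# Plot losses against lrs on a log-scaled x-axis to read off a good lr.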
Let’s perform such a learning rate range test. Since our dataset is rather small (96 images), we’ll perform some extra steps (1000). The upper progressbar displays the number of epochs and the lower progressbar shows the number of steps we perform on the current epoch.
Let’s plot the results of the test:
0.01 seems to be a good learning rate. We’ll take it. Let’s train for 100 epochs,
and visualize the training and validation loss. For that I will use matplotlib and write a function that I can add to the visual.py file.
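The gist for this function is not included here; a minimal sketch of such a plot_training function for visual.py could look like this. The argument names are assumptions that mirror the three lists returned by run_trainer().
# visual.py — a minimal sketch of plot_training
import matplotlib.pyplot as plt


def plot_training(training_losses, validation_losses, learning_rates,
                  figsize=(12, 4)):
    fig, (ax0, ax1) = plt.subplots(1, 2, figsize=figsize)

    # left panel: loss curves per epoch
    ax0.plot(training_losses, label='Training loss')
    ax0.plot(validation_losses, label='Validation loss')
    ax0.set_xlabel('Epoch')
    ax0.set_ylabel('Loss')
    ax0.legend()

    # right panel: learning rate per epoch
    ax1.plot(learning_rates, color='black', label='Learning rate')
    ax1.set_xlabel('Epoch')
    ax1.set_ylabel('Learning rate')
    ax1.legend()

    fig.tight_layout()
    return fig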
Let’s see what the function plot_training() will output when we pass in our losses and the learning rate.
Training looks good!
Why you shouldn’t write your own training loop
Training can be carried out very easily in plain PyTorch. But you should actually do it only if you want to learn PyTorch. It is generally recommended to use higher level APIs, such as Lightning, Fast.ai or Skorch. But why? This is well explained in this article. You can probably imagine that if you want to integrate functionalities and features, such as logging, metrics, early stopping, mixed precision training and many more to your training loop, you’ll end up doing exactly what others have done already. However, chances are that your code won’t be as good and stable as theirs (hello spaghetti code) and you’ll spend too much time on integrating and debugging these things rather than focusing on your deep learning project. And although learning a new API can take some time, it might help you a lot in the long run. If you want to implement your own features anyway, take a look at this article, and use callbacks. For this series, I’ll try to keep it very simple and without the use of out-of-the-box solutions. I will only implement a very basic data processing pipeline/training loop with good ol’ PyTorch.
Summary
In this part, we performed training with a sample of the Carvana dataset by creating a simple training loop. This training loop can be visualized with a progressbar, and the result of training can be plotted with matplotlib. We noticed that training was painfully slow because our data was picked up very slowly by the data generator. Because of that, we changed it so that data is only read once and then picked up from memory when needed. Additionally, we added a learning rate range finder to determine an optimal learning rate, which we then used for model training.
In the next chapter, we’ll let the model predict the segmentation maps of unseen image data.
|
https://johschmidt42.medium.com/creating-and-training-a-u-net-model-with-pytorch-for-2d-3d-semantic-segmentation-training-3-4-8242d31de234
|
['Johannes Schmidt']
|
2020-12-12 15:03:56.737000+00:00
|
['Python', 'Deep Learning', 'Semantic Segmentation', 'Pytorch', 'Tutorial']
|
4,980 |
Does the Moon Move with You? No, That’s a Parallax.
|
SCIENCE
Does the Moon Move with You? No, That’s a Parallax.
Parallax or motion parallax is the phenomenon that an object’s apparent position varies relative to another object or the background when viewed from different positions.
Photo by Benjamin Voros on Unsplash
For example, if one moves to the right relative to the viewing direction (for example, while sitting in a vehicle looking to the left relative to the direction of travel), the direction in which one sees an object in the foreground will turn to the left faster than the direction in which an object in the background is seen.
Distant objects thus seem to move along with the observer relative to nearby objects. The brain derives relative depth from this. This is known as depth perception.
Photography.
An object to be photographed appears slightly different through the viewfinder of a simple viewfinder camera than through the recording lens. The lens and viewfinder are not in the same position or directly behind each other; they are a few centimeters apart.
With a photo of an object that is a few meters away from the lens, the parallax does not cause any major problems because the lens and viewfinder then ‘see’ almost the same thing. However, the closer an object is to the camera, the greater the difference between the viewfinder image and the actual picture being taken: this difference is the parallax.
With a twin-lens reflex camera, such as those made by Rollei, Yashica, and Mamiya, the parallax can be canceled by a parallax converter, which is mounted between the camera and the tripod.
After focusing through the top lens, the camera is raised by the converter so that the recording lens sits at the level where the viewfinder lens was previously.
The recording is then the same as the observation.
However, this only works with stationary objects. There are also cameras such as the Mamiya C 330 that show a parallax correction in the viewfinder so that the above is not an issue.
A single-lens reflex (SLR) camera fundamentally solves the parallax problem. Because the viewfinder shows exactly what the lens sees, there are no differences between the viewfinder crop and the actual image.
What can occur is that the final image contains more than was previously seen in the viewfinder. This is because the focusing screen of most SLR cameras covers only about 90% of the frame; only (semi-)professional cameras reach 100% coverage.
With this type of camera, the image is transferred by the recording lens via a mirror and a prism directly into the viewfinder.
The mirror will pop up momentarily during the actual recording. This creates a different kind of parallax, time parallax. You lose your subject for a while, which does not happen with a rangefinder camera.
There are also digital cameras where the viewfinder image is electronically transferred from the image sensor to a monitor (LCD). These cameras are also parallax-free.
Another parallax problem occurs when creating a panorama by stitching individual photos together: there are often areas where it cannot be properly fitted.
Measuring instruments.
When reading analog measuring instruments, errors may occur due to parallax.
Two types of parallax can be distinguished here, the parallax between the measuring instrument and object and the parallax in the measuring instrument itself. Digital readouts are parallax-free because the result does not depend on the observation.
Parallax between measuring instrument and object.
Parallax can occur between the scale of the measuring instrument and the object being measured.
A well-known example is a ruler that is placed on an object. If the scale is on top of the ruler, there is a distance between the scale and the object measured.
The observer who moves his head slightly to the left or right sees the parallax’s effect: the scale appears to move relative to the object.
Even with a transparent ruler with the scale on the bottom, parallax can occur due to light refraction. If the observer does not look straight from above, the scale appears slightly shifted.
The part of the object seen through the ruler is shifted by the same amount, but the part outside it is not. Reading through the ruler therefore gives the best result: the marking line and the point to be measured have the same deviation, so the parallax is, in fact, compensated.
The photo shows that the parallax compensation works best close to the edge; further from the edge, the parallax increases because the top of the ruler is slightly inclined.
Parallax in the measuring instrument itself.
Parallax can also occur in the measuring instrument itself when reading a pointer against a scale’s background. The simplest example is reading the time on an analog clock that is observed obliquely.
A commonly used method to combat this measurement error is the so-called mirror reading: just below or above the scale there is a groove, approximately 5 mm wide, in the dial, with a mirror behind it. When the observer looks perpendicular to the dial, the pointer and its mirror image coincide, and the reading is correct.
Another method, used especially in cheaper instruments, is the knife-edge pointer, where the tip of the pointer lies in a vertical plane. When the tip of the pointer appears as thin as possible, the line of sight is perpendicular to the dial. This method is much less accurate than the mirror scale.
The essence of these corrections is to improve the reproducibility of the reading angle. Even if the mirror or pointer is slightly misaligned, the reading is taken at the same angle as when the instrument was calibrated, so the result remains consistent.
Artillery.
When using artillery in a combat situation, the fire control system must consider the position differences of the individual guns.
Although this is not directly related to observation (usually geographic coordinates are the focus), this problem’s geometric explanation is the same as for parallax.
Parallax is the basis for measurement methods.
In photogrammetry, the relative displacement is recorded photographically by taking aerial photographs from two different positions. In a stereoscope, the overlapping sets of photos can be viewed in three dimensions.
With a parallax meter equipped with a micrometer, the parallax difference of each point can be determined. The parallax shift is a measure of the height difference.
For example, height differences of buildings and objects can be determined, or terrain height maps can be made. In terrestrial photogrammetry, differences in distances are determined in photographs taken from the ground.
In astronomy, parallax is used to measure distances. The extent to which a star apparently shifts when observed from two opposite points of the Earth’s orbit (i.e., taking two measurements at the same place on Earth, but with a six-month difference in time) is used as a measure to determine the distance.
The parsec (a contraction of ‘parallax second’) is a unit of distance used in astronomy, defined as the distance at which an object shows a parallax of one arcsecond when observed from Earth.
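Expressed as a formula (a standard relation in astronomy, though the article doesn’t spell it out), the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds:

```latex
d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]}
```

A star with a measured parallax of 0.1 arcsecond therefore lies at a distance of 10 parsecs.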
|
https://medium.com/carre4/does-the-moon-move-with-you-no-thats-a-parallax-c1368123146c
|
['Bryan Dijkhuizen']
|
2020-11-22 11:08:48.874000+00:00
|
['World', 'Science', 'Parallax', 'Math']
|
4,981 |
Web Notifications Experiments for the June 7 U.S. Primary
|
Web Notifications Experiments for the June 7 U.S. Primary
A quick word on our most recent experiment.
Tonight (Tuesday, June 7) we’ll be sending out three sets of experimental notifications about the last significant primaries of the 2016 presidential election. We’ll send individual state results as they come in, insights from our reporters in the field at candidates’ events and, tomorrow morning, a recap of the ten most important highlights of the night.
If you’re already receiving notifications: You’ve probably landed on this page because you tapped on an alert. For future alerts, pull down on the notification to expand it to see all of the text.
If you’d like to sign up: Web notifications are currently only available on Chrome, so if you have an Android mobile phone (Samsung included!), sign up here.
We’ll also be gathering feedback from those who participate and will send out a follow-up survey after the last alert on Wednesday. And when we have data and a good sense of what was most engaging to users, we’ll write up what we learned and post it here on Medium.
Questions or observations? Drop us a note: [email protected]
The Guardian Mobile Innovation Lab operates with the generous support of the John S. and James L. Knight Foundation.
|
https://medium.com/the-guardian-mobile-innovation-lab/web-notifications-experiments-for-the-june-7-u-s-primary-8b966aea2e0b
|
['Sasha Koren']
|
2016-06-07 22:54:18.958000+00:00
|
['Mobile', 'Notifications', 'Journalism']
|
4,982 |
HURON RIVER HAIKUS
|
Tobias Freeman
HURON RIVER HAIKUS
Two Damselflies
slow green river snaps
concentric rings bullet out
one damsel flies free.
Yellow Leaf
slow green river grabs
yellow leaf in eddy mouth
summer spins then sinks.
Leaving
slow green river hears
long necks trumpeting, we fly
less a few cygnets.
Unmapped
slow green river chills
below terrapins burrow
and aquifers invaded.
Cold Memory
frozen green river
records ice-storm limb fallen
cruel change cut in ice.
Snow Melt
churned green river bends
sunshine skips across ripples
without frogs humming.
Ollie-Ollie-Oxen-Free!
hometown river was
pooh-sticks, rock steps, soakers — gone
childhood friends and days.
Droughty
cracked green river mud
trunks rise tangled, still dead.
no sliders sunning.
Flash Flood
high green river fast
watershed drinks runoff —
xenocide byway.
Dioxane
our green river sang
once upon a time through towns —
swim advisory.
PolyFluoroAlkylS
green river lazy
worms on hooks, favorite holes —
fish advisory.
Superfund
in parts per billion
poisoned river dies — not a
molecule floats free.
|
https://medium.com/resistance-poetry/huron-river-haikus-d993f2a8c6b2
|
['Lisa Patrell']
|
2020-06-29 20:59:58.669000+00:00
|
['Swimming', 'Water', 'Environment', 'Resistance Poetry', 'Pollution']
|
4,983 |
About Me — Yemeece. Beauty, Grace, Power, Complexity…
|
Writing took me on a journey inward, into myself. I began to explore the parts of me I never knew existed. I fell in love with reading, and the books I read showed me that whatever I was feeling had been felt before, and that stripped me of my loneliness. I gulped down many books like a thirsty stranger who had just survived a desert storm and finally found an oasis.
“Whatever l was feeling had been felt before by someone else and that stripped me of my loneliness.”
Unlike anything else in my life, including myself, writing was gentle with me. It sharpened my reasoning and helped me to ask the right questions. It began to strip my mind of toxic and limiting mindsets. I began to heal and know the freedom I find hard to describe in words. Gradually, I began to develop the courage to share my poems with the world. In 2019, I published my first collection of poems titled Stirred by life. This book has been an oasis for travelers like myself journeying through the desert of life.
My name is Yemeece. I am a writer and poet. I write for people who need a balm for their pain. I write to destroy toxic mindsets and harmful beliefs. I write for the underdogs like myself who need a silver streak in the grey sky to keep believing. I write for the afraid, the marginalized, the curious, for individuals who seek a healthy mind and a hearty soul. I write to you. I write for me.
|
https://medium.com/about-me-stories/about-me-yemeece-28287dcc64bd
|
['Yemisi Fadiya']
|
2020-12-08 11:02:05.539000+00:00
|
['Writing', 'Introduction', 'Female', 'Life', 'About Me']
|
4,984 |
Building Android Apps with Mirah and Pindah
|
I gave a talk earlier today at the Beijing Ruby group about my experiences building native Android applications with Mirah and Pindah.
Although I generally prefer building mobile web apps over native development when possible, Mirah is a really promising alternative to Java if you have to go native. Check out the preso below and get involved in the growing Mirah / Android community.
Make sure to grab the code for the example app on the GitHubs.
|
https://medium.com/zerosum-dot-org/building-android-apps-with-mirah-and-pindah-72420197d686
|
['Nick Plante']
|
2017-11-04 17:27:18.439000+00:00
|
['Android', 'Android App Development', 'Ruby', 'Mirah', 'Mobile App Development']
|
4,985 |
How to Buy a Quality Business on Flippa
|
I heard recently from the head of Flippa that my business partners and I had collectively bought and sold the most businesses by gross value. This was a surprise, as that’s not our day job, and we’ve done fewer than a dozen transactions. Our actual job is building SmartrMail.
Flippa is the biggest marketplace for buying and selling online businesses. We’ve used Flippa to help start businesses, grow our existing businesses, and generate free cash flow from ‘side hustles’ while we were employed. We’ve done well financially from it: everything we’ve sold has gone for at least 3x what we paid. It’s also taught us some important lessons on running and growing different types of internet businesses.
I’ve had friends often ask ‘how do I buy a website?’. So, for them and anyone else interested, here’s my guide on how we buy websites on Flippa:
1. Decide What Type of Business To Look For
First off, think about the type of business you might want to run. We’ve bought a range of businesses, from design blogs to digital template stores to marketing tools. Some are going to require more hands-on work to improve, grow, and maintain. An eCommerce store will take more day-to-day running than a blog. But none of them are ‘passive’. All require effort and work to improve and maintain. My personal view on web businesses is that if they’re not growing, they’re dying, so you will need to always be actively growing your business.
Our personal preference is older, established websites that have an audience, but haven’t been given much attention from their owners. So they may have a steady (but declining) stream of organic traffic, or have an established customer purchase history, but haven’t been updated for a long time. We’re happy to buy old, ugly websites, as long as they’re on an easy-to-improve platform. In our experience, sites on platforms like WordPress, BigCommerce, Shopify, and Magento are best, as they can quickly be improved.
In eCommerce we tend to like digital goods over physical goods, as there’s no stock to handle. We avoid drop-shipping, as it seems too hard to get good margins. Established blogs are great if you like and understand the topic. I’ll go into more detail on why we like blogs, and a few ways we’ve quickly monetized blogs to 10x earnings, in another post.
It’s up to you what type of business you choose. We look at hundreds to find quality and value, and only buy businesses we can understand or confidently learn about quickly.
2. Search for Businesses at least 2 years old
Set Website Age to 2+ Years.
To quickly weed out 90% of the junk and scam businesses, limit your search to businesses that are a minimum of two years old.
Does this mean you might overlook a good, young business? Certainly. But it saves you having to sift through the bulk of the silt to find a speck of gold. Most scammers or quick flippers don’t have the patience to set up and run a scam website for more than a year. You can set Web Site Age in advanced search.
3. Understand Valuation Multiples
You’re buying a business, so you should understand the basics of how a business is valued, which is in annual earnings multiples. If you’re buying shares of a public company on a stock exchange, you might pay 20x annual earnings for those shares. That is, the company’s annual earnings are 1/20 of the value of the company. Public companies are considered a ‘safe’ investment, as there’s a lot of information about them, their books are audited by a trusted third party, and there’s high liquidity, as you can easily sell your shares.
Buying on Flippa is the complete opposite.
There’s an incredibly high risk of getting ripped off when buying a business from an online marketplace, so the value should be much lower than 20x earnings. When looking at a business’s earnings, make sure you use the last 12 months of data to calculate ‘annual earnings’. Often people will look at the last 3 months and multiply that by 4. That’s a mistake. It’s human nature to optimise for earnings when you’re selling a business, so the last 3 months’ earnings are likely jacked up (by any means). Also, just assume some costs have been stripped out to boost earnings, especially any of the owner’s time.
4. Buy Value
We look to buy companies on a Price to Earnings (P/E) of 1x. That is, if you held the company for a year, with no change in current earnings, it would pay back the value of the purchase price. We’ve bought businesses on valuations between 0.8x-3x earnings before. Mostly we’ve bought at around 1x-1.5x.
Sometimes we’ll pay up to 3x on a business that has obvious ways to grow earnings quickly, or that can provide one of our other businesses with immediate earnings growth.
Mostly we’ll stick to paying 1x-1.5x earnings. The risk of buying something worth less than it seems is so high that we’d prefer not to go over this. It’s better to miss out on something good that isn’t cheap than to lose your money.
5. Set Your Budget and Earnings Range
So let’s say you’ve set an earnings range of 1x-2x P/E. You can work out what monthly profit range to set in your search by dividing your total budget by 24 (for 2x) and by 12 (for 1x).
For example, assume you’ve got $2,500 to spend on a business. The Monthly Profit slider could be set between $50–250 profit per month. If you pay $2,500 for a business earning $250 per month, that’s a P/E of 0.8x (great!). Paying $2,500 for a business earning $50 per month is 4.2x, which is expensive. Ideally you’d pay at most $1,200 for a $50-per-month business.
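The arithmetic above is easy to sanity-check in code. Below is a small sketch (the helper name is mine, not Flippa’s) that turns a total budget and a target P/E range into the monthly-profit range to set on the search slider:

```javascript
// Price = P/E x annual earnings, and annual earnings = 12 x monthly profit,
// so monthly profit = budget / (12 x P/E).
function monthlyProfitRange(budget, minPE, maxPE) {
  return {
    low: budget / (12 * maxPE),  // least profitable listing worth the full budget
    high: budget / (12 * minPE), // most profitable listing the budget can cover
  };
}

console.log(monthlyProfitRange(2500, 1, 2));
// { low: 104.16..., high: 208.33... }
```

Note this strict band is narrower than the $50–250 slider in the example above, which deliberately casts a wider net.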
6. Search for a Business You Can Run
OK, we understand how to value a business, so it’s time to browse away. Look for anything that you understand or think you could learn about. If you’re buying a blog or ecommerce site, it’s pretty easy to understand what you’re getting.
For example, buying a blog about something you’re already interested in means you can hit the ground running with new content, and new marketing, as you understand the topic, and the audience. Neil Patel wrote a comprehensive guide to building up your blog audience. Read it in a day and that’s enough to grow your blog. As you build your audience you can add different streams of revenue outside of just Ads, which I’ll go into another time.
The beauty of buying an established business, as opposed to starting one, is you’re getting many years of hardwork for not a lot of money. If you stick to looking at businesses built on top of one of the established platforms mentioned above, you can easily find people on Upwork to help with technical and design updates.
7. Understand the Technology Risks
If you’re buying a more complicated tech business, you need to understand the technology risk. For example, if you’re looking at a SaaS, you need to research the market, and the technology it’s built with. Understand the cost to continue development and support. Same goes for native mobile apps on iOS and Android.
If you’re a developer and you find a business in an area you understand, then you’re at a distinct advantage in assessing it. If you’re not technically minded, then I’d suggest either partnering with someone who is, or avoiding businesses that rely heavily on technical innovation.
8. Be Patient and Thorough
There are thousands of listings, but most of the businesses on there are junk. The rest are scams. Seriously.
Only a handful are quality, so deep research and patience are needed. Expect to be looking at new listings for at least a couple of weeks. I’d suggest creating an account and saving your particular search. That way you’ll be sent a daily email digest of new listings matching your search criteria.
This is the most important step: the patience to look at hundreds of businesses, saying no quickly, until you find one that meets all the above criteria. It’s something Warren Buffett and Charlie Munger talk about a lot: doing nothing for a long time until they find something excellent.
9. Be Ready to Move Quickly
When you find a business you like that’s priced within the P/E range you’re comfortable with, message the seller. Ask:
To add you to their Google Analytics (you’ll need to give your Gmail address) so you can verify their traffic sources.
What’s their reserve?
What would they do to grow the business?
Check out the seller’s selling history: has the site been sold previously, and if so, what were the comments on it? Google the seller. Go to the website for sale, search around it, sign up, dig in, and use it. Go to their social channels and check what people are saying about the business. If it’s selling something, buy something to see what the experience is like.
If everything checks out, next set up a time for a Skype/Hangouts call with the seller. Ask the same questions and their reasons for selling, and make them screen-share and log in to their payment gateway (PayPal, Stripe, etc.) to show you the actual earnings on the call. Do the same with the website’s admin panel and email inbox.
In-depth research here is the most crucial step to avoid getting your face ripped off. Expect to do deep research like this 3–10 times before you find and successfully buy a business.
If you start asking questions straight away, you’ll be prepared in case a business suddenly gets a Buy It Now price added. For businesses that have a high reserve price, just ‘watch’ them. It likely won’t sell, and if it does, be thankful it wasn’t you that overpaid. Often first-time sellers have unrealistic price expectations and may relist a business a few times before it comes into a reasonable sale range.
10. Use Escrow to Buy
If you’re buying on Flippa, just use their escrow service. Give yourself at least a 7-day escrow period. This means the business, email, and social accounts will be transferred over to you on day 1, and you’ve got 7 days to check it’s legit before the funds are released to the seller. Use the time to check that the traffic (and sales, if there are any) is as it should be.
Woohoo, you’re now a business owner! Next step, growing your business…
|
https://medium.com/on-startups-and-such/how-to-buy-a-quality-business-on-flippa-4fa9701a069c
|
['George H.']
|
2020-11-05 05:49:32.950000+00:00
|
['Growth Hacking', 'Growth', 'Startup', 'Flippa', 'Hustle']
|
4,986 |
Wake Up at 4.… AND Get Enough Sleep?
|
To this day, I don’t know how I made it through high school.
Homeroom bell rang at an ungodly 7:13 a.m., and I lived in a rural part of the county. If I rode the bus to school, that meant a full hourlong ride from the bus stop (which was a 5-minute sprint from my house) to the school doors. So, I awoke at… 6:04 a.m.? Probably.
Once I was able to drive — and my mom was nice enough to find me a crappy car — I was able to make the drive in about 23 minutes (though it really should have taken me 34).
I don’t know how I survived that...
But I also don’t know how I managed to get out of bed that early for that long — while I was a teenager, who, let’s face it, all suck at getting out of bed.
All of them.
You know they do.
I remember my mom RAGING at me for what felt like hours. WHY wouldn’t I get out of bed? Didn’t I know I was just making things worse? The longer I waited, the more hurry I would be in, the more dangerous a drive. The more she had to yell, the more miserable I was making her… and everyone in the house.
I remember finally getting up and turning on the shower… then falling asleep on the toilet for 20 more minutes. More railing, weeping and wailing and gnashing of teeth. You’re wasting the water! You’re wasting the heat! Do you not have any respect for how hard I have to work to pay those bills?! Get ready and go to school you lazy $@#&!
Ok, my mom was nicer than that… but not much. (And we all know she was thinking it.)
I somehow made it through, and off to college.
College was a dream! I could schedule all my classes to suit MY schedule (and body clock). (Which might have meant it took a few extra years, since most classes do not start after 12 noon… but I digress.) I was able to do it how I wanted, and for several years I found myself very happily drifting to bed around 1 or 2 in the morning, and sleeping till about 10 or 11 a.m.
That felt perfectly natural for me. And, in the intervening life cycles where I’ve been able to pick my own schedule — including working at bars/restaurants after college, working in theatre during, and a couple of unemployment stints in the years since — that schedule is largely the one my body naturally gravitates toward, and has for a couple of decades now.
That’s right: decades.
|
https://medium.com/swlh/wake-up-at-4-and-get-enough-sleep-36f6fb404e12
|
['Heather Nowlin']
|
2020-07-03 20:22:46.951000+00:00
|
['Morning Routines', 'Sleep', 'Happiness', 'Life', 'Productivity']
|
4,987 |
¿Hola?
|
Digital product designer that also happens to do a bunch of other stuff on the side. Jack of all trades.
|
https://medium.com/postales-de-la-tormenta/hola-a58ba4624069
|
['Nicolás J. Engler']
|
2019-06-05 13:31:00.926000+00:00
|
['Short Story', 'Fiction', 'Creative Writing', 'Nonfiction']
|
4,988 |
React’s Context API Explained
|
Usage of Context API
MovieContext.js
Let’s start by renaming our Movie.js file to MovieContext.js. This file will be responsible for passing data to other components.
Here we will start by importing createContext from the react module. Thus, alter your import statement as follows:
Imports in MovieContext.js
This createContext method will help us create a Context instance, which will aid in sending data to various other components.
Next, let’s export our Context instance like so:
Exporting the MyContext context instance
Now, we will export a function called MovieContext and then define it like so:
Further code in MovieContext.js
Lines 2–16: the standard definition of our movies state.
In our return block, write the following code (where “code goes here” is written on line 19):
Further code in MovieContext.js
These lines of code indicate that we are now fully capable of sharing data between components without passing down props manually.
The value attribute in the MyContext.Provider tag on line 1 is the data that we will share with the various components.
props.children on line 2 means that the components rendered between the MovieContext tags will have access to the data located in MovieContext.
If this is unclear, don’t worry. It will be explained through code later in this post.
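Since the embedded gists don’t appear in this version of the post, here is a minimal reconstruction of MovieContext.js based on the description above. The sample movie entries and the string passed to value are stand-ins of mine, not the author’s exact code:

```jsx
import React, { createContext, useState } from 'react';

// The Context instance that other components will import.
export const MyContext = createContext();

export function MovieContext(props) {
  // Our movies state (lines 2–16 in the original gist); entries are placeholders.
  // (Not shared yet at this step; the Provider passes a plain string below.)
  const [movies] = useState([
    { name: 'Inception', price: '$10', id: 1 },
    { name: 'Interstellar', price: '$12', id: 2 },
  ]);

  return (
    // Whatever is passed to `value` is shared with every component
    // rendered between the <MovieContext> tags.
    <MyContext.Provider value={'this string came from MovieContext'}>
      {props.children}
    </MyContext.Provider>
  );
}
```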
MovieList.js
If you recall, this component was used to display the movies array.
Let’s modify this file.
First, import MyContext from MovieContext :
Imports in MovieList.js
Other than that, import useContext like so:
Further imports in MovieList.js
Within the MovieList function definition, start by writing the following line of code:
Using the MyContext object
This line declares a context hook called NewContext. The useContext function takes one argument: the context object it should use. As we want to use the MyContext instance, we pass in MyContext as the argument.
One question, though: what value does NewContext now hold? We’ll find out shortly. Before that, we’ll have to change our code further.
Now, write the following code after the NewContext declaration:
Further code in MovieList
In this code, we are simply outputting the value of NewContext and then using the export statement so that the MovieList function can be used in other files.
In the end, our file will look like this:
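The finished gist isn’t embedded here either, so here is a hedged sketch of MovieList.js under the same assumptions (the div wrapper is mine):

```jsx
import React, { useContext } from 'react';
import { MyContext } from './MovieContext';

export function MovieList() {
  // NewContext receives whatever the nearest MyContext.Provider passed as `value`.
  const NewContext = useContext(MyContext);

  // Output the shared value directly.
  return <div>{NewContext}</div>;
}
```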
App.js
First, add the following imports to App.js :
Imports in App.js
Now, in the return block, write the following code:
Further code in App.js
This code indicates that now MovieList has access to the data that was shared by MovieContext .
In the end, App.js looks like this:
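Again, a minimal sketch of what App.js might look like, given the imports and return block described above:

```jsx
import React from 'react';
import { MovieContext } from './MovieContext';
import { MovieList } from './MovieList';

function App() {
  return (
    // MovieList is rendered between the MovieContext tags,
    // so it can read the shared value via useContext.
    <MovieContext>
      <MovieList />
    </MovieContext>
  );
}

export default App;
```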
Run the code. This will be the output:
The output of the code
So where did this string come from?
Let’s backtrack to MovieContext.js and find the following piece of code:
In MovieContext.js:
Code to find in MovieContext.js
This means that the data we write in the value attribute will be shared with the components.
|
https://medium.com/better-programming/reacts-context-api-explained-baebcee39d2f
|
['Hussain Arif']
|
2020-08-04 07:56:12.767000+00:00
|
['JavaScript', 'React', 'Reactjs', 'Nodejs', 'Programming']
|
4,989 |
Reasons to not install Hadoop on Windows
|
A few years ago, I was hearing from my colleagues, “don’t ever think about installing Hadoop on Windows operating system!”. I was not convinced of this saying because I am a big fan of Microsoft products, especially Windows.
In the past few years, I worked on several projects where we were asked to build a Big Data ecosystem using Hadoop and related technologies on Ubuntu. It was not so easy to work with these technologies, especially since there is a lack of online resources. Last month, I was asked to build a Big data ecosystem on Windows. Three technologies must be installed: Hadoop, Hive, and Pig.
At the end of the project, the only conclusion I came to was: “Think 1,000 times before installing Hadoop and related technologies on Windows!”
In this article, I will briefly describe the main reasons for this conclusion.
Are these technologies developed to run on Windows?
The first releases of Hadoop were demonstrated and tested on GNU/Linux, but not under the Win32 operating system. For Hadoop 2.x and newer releases, Windows support was added, and a step-by-step guide was provided within the official documentation.
This should be OK if you are only going to install Hadoop. But when it comes to other related technologies such as Apache Hive, not all releases are supported (for Apache Hive, only 2.x releases support Windows). You will need some hacks and workarounds to install these technologies, such as using the Cygwin utility to execute GNU/Linux shell commands, or copying cmd scripts from other releases.
Besides, some services may not work correctly. As an example, we installed Apache Hive and Apache Pig and tried to connect to them through the WebHCat REST API or using the Microsoft Hive ODBC driver. The connection was made successfully, but we couldn’t execute any command: even the simplest commands were throwing timeout exceptions.
In brief, Windows is not as stable or as supported as Linux.
Other Reasons
Cost
One other reason is the licensing cost. Linux is a free and open-source operating system. It will be costly if you need to deploy a multi-node Hadoop cluster on Windows machines.
Lack of resources
In general, Big Data technologies don’t have many online resources, and most of the resources that do exist are Linux-related, so you may struggle even with a small issue specific to a Windows environment. Even if you ask for support in an online community like Stack Overflow, most experts work with cloud-based Hadoop clusters or Linux on-premise installations.
What to do if you are using Windows?
If you are using Windows, and need to use Hadoop and related technologies, you may:
1. Use Linux virtual machines to install Hadoop; note that your machine must have sufficient resources.
2. Use a cloud-based Hadoop service such as a Microsoft Azure Hadoop cluster.
3. If you cannot go with either of those suggestions and you have to install Hadoop on Windows, you first need to search for the supported releases of all required technologies (Hadoop, Hive, Spark…). Then, you must choose the compatible ones. As an example, only Hive 2.x releases support Windows, while 3.x needs some hacks and not all of its features may work properly. So, if you need to install Apache Hive, you have to use Hadoop 2.x releases, since newer Hadoop releases are not compatible with Hive 2.x.
|
https://medium.com/munchy-bytes/reasons-to-not-install-hadoop-on-windows-5bf22f3f0005
|
['Hadi Fadlallah']
|
2020-08-27 10:35:08.877000+00:00
|
['Hadoop', 'Windows', 'Big Data']
|
4,990 |
How to Create an Evening Routine for a Productive Tomorrow
|
For some people, it seems so easy to follow a routine from the moment they wake up. They jump out of bed, journal, meditate and drink immune-boosting tea infused with fennel and goji berries. Then they go about their day being awesome.
You might want to begin your day with as little to think about as possible. You might have a family and children to organize. You might just not be a morning person. That’s okay.
To optimize your day, one of the most valuable things you can do is set up an evening routine for the night before. Your mornings may be gloriously unstructured, but you might thrive on having a process in the evenings to springboard into tomorrow from.
A routine should be like breathing. You shouldn’t have to constantly think about it or struggle to remember which steps to do and what order to do them in. It should mold seamlessly to your natural rhythm, making it easier to sustain the habit over time.
Creating a successful routine to optimize your day tomorrow can be broken down into three steps. These are reflection, preparation, and relaxation.
If you naturally have more energy in the evening, you might want to plan your day in detail before bed. If you're someone whose brain switches off after 6 pm, you might just want to take a shower and then jump into bed with a hot drink and a good book. Building a routine that works for you is all about finding your natural rhythm.
These three principles will help you build an effortless evening routine that will set you up for an amazing day tomorrow.
|
https://medium.com/swlh/how-to-create-an-evening-routine-for-a-productive-tomorrow-b02f9af10197
|
['Ruth Matthews']
|
2020-12-21 12:32:10.989000+00:00
|
['Happiness', 'Personal Development', 'Self Improvement', 'Self', 'Productivity']
|
4,991 |
Do People Still Talk On The Phone?
|
A Tiny Moment of “Picking Up Where You Left Off”
Photo by Philipp Lansing on Unsplash
The Moment
I just got off the phone with a friend I had not spoken to in nine years. No major fight or disagreement explaining the time gap. I just hadn’t tried to call. Before our phone call, I was checking out his LinkedIn and it alerted him to my presence.
Minutes later a strange number appeared on my phone and went to voicemail. I usually erase these calls. Damn spammers! For some reason, I decided to listen to the message. Lo and behold, it was one of my former chef buddies checking up on me.
The Reflection
My first thought was that nobody calls anymore, and I texted him to that effect. I followed up with a phone call, and boom, it was on! We checked in to review how we were getting along: the addition of children, career changes, and other life stuff. During our fifteen-minute conversation, we talked, maybe two minutes, about the pandemic and other US craziness.
I am going to cliché to you now. Yes, it was like we had just talked the other day, but that’s the connection you have with some folks. At the end of our conversation, I felt energized and positive about my current life. I know right? How so in a catastrophic time? Let me offer a different outlook. Like Doc Holliday said to Wyatt Earp in the film, Tombstone, “there’s no normal life, just life.”
There’s no crazy life right now, just life. Don’t stop living it.
The Takeaway
Don’t let Covid or a splitting country stop you from being you.
Keep connecting with others. They want to connect too.
Every day, every moment doesn't have to be shrouded in nihilism.
Give out a yell, scream an ahhhhhh shit! Things stink! Then go do human stuff the best you can.
Don’t let technology steal your human connection.
You are not a text or an Instagram or a Youtube or an email.
Josh Kiev is an actor, chef, and decided not to be negative today!
|
https://medium.com/tiny-life-moments/do-people-still-talk-on-the-phone-9d4c74fd61e8
|
['Josh Kiev']
|
2020-11-21 09:50:17.638000+00:00
|
['Positive Thinking', 'Connection', 'Motivation', 'Friendship', 'Tiny Life Moments']
|
4,992 |
How Concept Drift Ruins Your Model Performance
|
The world is inherently dynamic and nonstationary — that is, constantly changing.
It is inevitable that the performance of many machine learning models will decline over time. This is particularly relevant for models related to human behavior. Contemporary machine learning models do not generalize well to new environments without explicit training examples. Model performance may start to degrade as data and the “ground truth” change over time. These are critical weaknesses that must be accounted for in machine learning systems. But what are the characteristics of such changes, which result in low-quality responses from models? How can you monitor and maintain your models so that the downstream user always sees quality outputs?
It is essential to anticipate how changes in data from dynamic real world environments will affect your models and how to handle these changes.
This article will first provide a theoretical foundation for understanding how concept drift can appear and behave and then discuss approaches for addressing drift. Understanding how data and models can drift is essential for designing a model monitor and response plan.
What is Concept Drift?
The image below provides a simple illustration of a “concept”. The 2-dimensional data points are mapped to either red or green according to some unknown process G. The “concept” is the true mapping G → {green, red} of data points to colors. The light grey line shows the learned mapping F, which approximates G by distinguishing between the red and green data points.
A concept — the distinction between red and green data points. The grey line represents the learned concept distinguishing between red and green.
Simply stated, concept drift occurs when there is a change in G, the underlying function that generates the data you observe. Of course, we cannot observe G directly. Instead, we observe G indirectly by sampling from the data generated by G. This definition assumes that we receive data points over time that describe some phenomenon that we want to model, such as toilet paper consumption.
In the case of toilet paper demand, the underlying data generating function is the quantity of toilet paper consumed (and hoarded). Actual flushing (G) and hoarding of toilet paper cannot be observed directly of course. Instead, we can only “sample” from actual consumption by observing the quantity purchased through various supply channels. We then train a model, F, of toilet paper consumption using observed purchase data. COVID-19 induced changes in toilet paper consumption and purchase patterns — aka drift in G.
What Does Concept Drift Look Like?
Drift can appear as either virtual drift (no immediate impact on model performance) or real drift (impact on model performance). Note that the distinction is relative to a model.
Real Concept Drift
Real concept drift is a change in the mechanism that generates your data, such that your model’s performance decreases. As shown by the illustration below, the concept (distinction between red and green data points) has rotated and changed shape. A model that learned the concept during Regime A is now obsolete under Regime B and will have poor performance. Absent updates to the model after real concept drift, the model will no longer correctly describe the full target concept space.
Real concept drift from Regime A to Regime B. The function learned during Regime A misclassifies some data points observed in Regime B.
In terms of toilet paper, the pre-COVID consumption pattern is “Regime A” and the post-COVID consumption pattern is “Regime B”. A model that proficiently forecasts TP demand in 2019 before the pandemic would perform poorly in 2020 during the pandemic. Vice versa, a model that has been updated to forecast TP demand during COVID-induced economic lockdowns, where people seldom leave home, would likely perform poorly on historical data where people often leave home.
Virtual Concept Drift
Concept drift in G does not necessarily affect the accuracy of your model F. In virtual concept drift, the distribution of observed data has changed, but the previously learned mapping F still correctly applies to the new data generating process G’. The chart below illustrates virtual drift from Regime A to Regime B. Although the distribution of the data in each regime is different, the model F still assigns the data point color correctly per the light grey line. Thus, your machine learning model only becomes obsolete under real concept drift.
Virtual concept drift from Regime A to Regime B. The function F (grey line dividing red and green points) trained on data from Regime A is still valid in Regime B.
In the real world, your model performance will not indicate that virtual drift has occurred because your model is still performing well! Indeed, even if you are monitoring the distribution of the data itself, you may require a “large” number of observations to ascertain the presence of virtual drift. (Monitoring the distribution of your data directly is usually impractical if it has more than a couple dimensions).
Is virtual drift a problem? It depends. If your objective is to ensure that your model continues to meet given performance metrics, then your model is sufficient in the presence of virtual drift. Virtual drift presents a hidden risk that the model may treat certain outlier observations from the new distribution incorrectly.
Hidden risk of virtual drift: What are the correct classifications of the grey data points in Regime B? Will the model (grey line) continue to produce accurate predictions for these points?
Drift Severity
The severity of drift can vary widely. Most concept drift occurs as intersected drift, where part of the input space has the same target class in both the old and new concepts. In the extreme, severe drift occurs when all examples are misclassified under the new target concept. Misclassifications can be due to new classes in the target concept and changes in the class definition.
Intersected drift occurred between Regime A and Regime B, as only some data point classifications changed. Severe drift occurred between Regime A and Regime C, where the classifications of all data points changed.
If you can detect and measure drift, you could specify a threshold to distinguish between “major” and “minor” drift. Drift magnitude for a binary classification task could be measured as the percentage of observations whose class has changed. If the drift magnitude is less than the threshold, the model simply requires updating with new data. If the drift magnitude is greater than the threshold, the drift is “severe” and the model ought to be abandoned in favor of a freshly trained model. Severe drift is typically rare and the presence of severe drift is often apparent without the use of sophisticated drift detection.
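As an illustrative sketch (not from the original article), drift magnitude and a major/minor threshold for a binary classification task could be computed like this; the 0.5 threshold is an arbitrary choice that should reflect the user's error tolerance:

import numpy as np

def drift_magnitude(old_labels, new_labels):
    # Fraction of observations whose class changed between concepts
    return float(np.mean(np.asarray(old_labels) != np.asarray(new_labels)))

def classify_drift(magnitude, severe_threshold=0.5):
    # Below the threshold: update the existing model with new data.
    # At or above it: treat the drift as severe and retrain from scratch.
    return 'severe' if magnitude >= severe_threshold else 'minor'

# Example: the same data points labeled under the old and new concepts
old = [0, 0, 1, 1, 0, 1]
new = [0, 1, 1, 0, 0, 1]
m = drift_magnitude(old, new)
print(m, classify_drift(m))  # 0.333..., 'minor'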
Drift Velocity and Stability
Abrupt drift
Concept drift with a short drift duration is known as abrupt drift. Abrupt drift occurs when the data generating function suddenly stops generating data with concept G and suddenly generates data according to concept G’. The abrupt drift paradigm assumes that concept drift occurs over discrete periods of time, bounded by stable periods without drift.
As an example of a sequence of abrupt drifts, the medical journal Lancet reported that China changed the case definition for COVID-19 seven times between Jan 15 and March 3, 2020. Each change in definition caused a change in how the cases were counted; thus each change caused an abrupt drift in the concept of daily COVID case counts in China.
Incremental Drift and Stability
Incremental drift implies a long drift duration, also known as continuous drift. In this case, the change is a steady progression from concept G to concept G’. The speed, or duration, of concept drift is the number of time steps for a new concept to completely replace an old concept.
In the case of classification, at each subsequent timestep, fewer data points are classified according to the old concept G and more data points are classified according to the new concept G’. You could think of each of these time steps between G and G’ as distinct, intermediate concepts.
An example of incremental drift would be how consumer behavior gradually changes as COVID-19 economic lockdowns are lifted in an area — people may be hesitant to return to “normal” and only return slowly. Each small change in behavior is an intermediate concept between G (behavior during lockdown) and G’ (“normal” behavior after restriction is lifted). It is worth noting that the progression from behavior in G (lockdown) to G’ (“normal”) is nonlinear as people idiosyncratically alter their behavior in response to exogenous factors like changes in case numbers.
A concept is unstable during the transition period between G and G’ because it has not yet settled and will change again. This instability could be observed as greater noise in the data and in your model metrics. Further, some concepts are inherently unstable and chaotic, never arriving at a stable concept, such as market price movements.
The chart below illustrates incremental drift that progresses over a period of 100 time steps. The functions v1(t) and v2(t) model the probability that an example from the old and new concepts, respectively, will be presented at time t. The speed of drift is the slope of v2(t).
Illustration of incremental drift. (Source: Minku, White, and Yao 2010)
How do you detect and address drift?
This discussion, of course, would be incomplete without solutions to detecting and correcting for drift in models and data. Many refer to the subject of drift detection as model monitoring.
The basic approach to addressing concept drift is to monitor your model to detect drift, retrain the model, and deploy the new model version.
This basic approach works well for regimes where you expect concept drift to abruptly shift from one stable concept to a new stable concept. It can also be acceptable in cases with small and/or incremental drift if there is some tolerance for variation in model performance. For less stable regimes or situations where continuous drift is expected, incremental updates of your model with new observations may be more appropriate. In the case of online model updates, it is still important to monitor for model drift.
There are three basic approaches to monitoring for concept drift.
Monitor model performance
Monitoring model performance is straightforward in principle: if the model performance declines below some expected level, reevaluate the model. To implement this, you need to make several decisions that will impact the sensitivity and frequency of your drift detection (a minimal monitoring sketch follows the list below).
1. How many new predictions do you use in calculating model performance?
2. Which performance metric(s) do you evaluate and which threshold(s) do you apply?
3. How often will you monitor for drift?
4. How error-tolerant is the user of your model? This will help inform your answers to (1)-(3).
5. How do you respond to drift? Manual evaluation and retraining of your model? Automatic updates?
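A minimal sketch of such a monitor, assuming ground-truth labels eventually arrive for each prediction; the window size and accuracy threshold correspond to decisions 1 and 2 above, and the retraining hook is hypothetical:

from collections import deque

class PerformanceMonitor:
    # Rolling-window accuracy monitor that flags possible drift when
    # accuracy drops below a minimum acceptable level.
    def __init__(self, window_size=500, min_accuracy=0.9):
        self.window = deque(maxlen=window_size)
        self.min_accuracy = min_accuracy

    def update(self, y_true, y_pred):
        # Record one labeled prediction; return True if drift is suspected
        self.window.append(y_true == y_pred)
        if len(self.window) < self.window.maxlen:
            return False  # wait until the window fills before judging
        accuracy = sum(self.window) / len(self.window)
        return accuracy < self.min_accuracy

# Usage, once ground truth arrives for each prediction:
# monitor = PerformanceMonitor(window_size=1000, min_accuracy=0.85)
# if monitor.update(label, prediction):
#     trigger_retraining()  # hypothetical response to detected drift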
Monitor statistical measures of confidence in model predictions
Another approach is to monitor the distribution of model prediction or residual values, or the confidence in those values. It is far easier to monitor distributional changes in the value(s) produced by your model than in the potentially high-dimensional input data. The specific formulation of the statistical monitor depends on the speed and quantity of predictions to monitor. In addition to the questions in the previous section, some relevant decisions to make include:
Does the Kolmogorov–Smirnov (KS) test indicate that the distribution of your prediction or residual values has changed? A minimum number of examples is required to reliably determine whether two distributions differ with the KS test (see the sketch after this list).
Does a given prediction / residual fall within a given confidence interval of the distribution of predictions / residuals observed during training?
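As a sketch of the first check, using scipy's two-sample KS test; the prediction arrays here are synthetic stand-ins for scores collected at training time and in production:

import numpy as np
from scipy.stats import ks_2samp

def predictions_have_drifted(train_preds, live_preds, alpha=0.01):
    # A small p-value means the two samples are unlikely to come from
    # the same distribution, i.e. possible drift.
    statistic, p_value = ks_2samp(train_preds, live_preds)
    return p_value < alpha

rng = np.random.default_rng(0)
train_preds = rng.normal(0.40, 0.1, size=2000)
live_preds = rng.normal(0.55, 0.1, size=2000)  # shifted distribution
print(predictions_have_drifted(train_preds, live_preds))  # True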
For other adaptive test statistics for drift detection in the academic literature, refer to Dries and Ruckert (2009) in the References section.
Online updates
One option is to train your model online — that is, automatically update your model weights with new observations on a periodic basis. The periodicity of updates could be daily, weekly, or each time you receive new data. This solution is ideal if you anticipate incremental concept drift or an unstable concept.
This option is not foolproof because there is still some risk that the model drifts away from the true target in spite of the online updates (a minimal sketch of batched online updates follows below). This could occur for a number of reasons.
An outlier could have an outsized influence on the model in online training, pulling the learned model further away from the target concept. This risk can be mitigated by conducting online updates with batches of observations, rather than with single data points.
The learning rate could be too small, preventing the model from updating quickly enough in the presence of large drift.
The learning rate could also be too large, causing the model to overshoot the target concept and to continue to perform poorly.
For these reasons, it is still important to monitor models that are updated online.
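As an illustrative sketch of batched online updates, using scikit-learn's partial_fit interface with synthetic data; this is one possible implementation, not one prescribed by the article:

import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental learner; eta0 is the learning rate discussed above
model = SGDClassifier(learning_rate='constant', eta0=0.01, random_state=0)

rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(200, 3)), rng.integers(0, 2, 200)
model.partial_fit(X0, y0, classes=np.array([0, 1]))  # classes required once

# Later: update with a batch of new observations rather than single
# points, to limit the influence of any one outlier
X_new, y_new = rng.normal(size=(50, 3)), rng.integers(0, 2, 50)
model.partial_fit(X_new, y_new)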
Other approaches
Various algorithms have been proposed in the academic literature to detect concept drift. Work in drift detection generally aims to efficiently identify the true points of concept drift with accuracy while also minimizing the drift detection time. A review of these proposals is outside the scope of this article.
|
https://towardsdatascience.com/concept-drift-can-ruin-your-model-performance-and-how-to-address-it-dff08f97e29b
|
['Alexandra Amidon']
|
2020-07-11 18:30:44.886000+00:00
|
['Modeling', 'Regime Change', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
|
4,993 |
How to Easily Fetch Binance Historical Trades Using Python
|
Coding Time
Parsing the arguments
The script will use the following arguments:
symbol : The symbol of the trading pair, defined by Binance. It can be queried here, or it may be copied from the URL of the Binance web app, excluding the _ character.
Remove the ‘_’ from the last part of the URL and you get the symbol
starting_date and ending_date : Self-explanatory. The expected format is mm/dd/yyyy , or, in Python slang, %m/%d/%Y .
To get the arguments, we’ll use the built-in sys (nothing too fancy around here), and to parse the date, we will be using the datetime library.
import sys
from datetime import datetime, timedelta

symbol = sys.argv[1]
starting_date = datetime.strptime(sys.argv[2], '%m/%d/%Y')
ending_date = datetime.strptime(sys.argv[3], '%m/%d/%Y') + timedelta(days=1) - timedelta(microseconds=1)
We are adding one day and subtracting one microsecond so that the ending_date time portion is always at 23:59:59.999 , making it more practical to get same-day intervals.
Fetching trades
With Binance’s API and the aggTrades endpoint, we can get at most 1,000 trades per request, and if we use start and end parameters, they can be at most one hour apart. After some failures fetching by time intervals (at some point or another, liquidity would go crazy and I would lose some precious trades), I decided to try the from_id strategy.
The aggTrades endpoint is chosen because it returns the compressed trades. In that way, we won’t lose any precious information.
Get compressed, aggregate trades. Trades that fill at the same time, from the same order, with the same price will have the quantity aggregated.
The from_id strategy goes like this: We are going to get the first trade of the starting_date by sending date intervals to the endpoint. After that, we will fetch 1,000 trades, starting with the first fetched trade ID. Then, we will check if the last trade happened after our ending_date . If so, we have gone through all the time period and we can save the results to file. Otherwise, we will update our from_id variable to get the last trade ID and start the loop all over again.
Ugh, enough talking, let’s code.
Fetching the first trade ID
First, we create a new_end_date . That’s because we are using the aggTrades endpoint by passing a startTime and an endTime parameter. For now, we only need to know the first trade ID of the period, so we are adding 60 seconds to the period. In low liquidity pairs, this parameter can be changed because there is no guarantee that a trade occurred in the first minute of the day that’s been requested.
Then, parse the date using our helper function to convert it to a Unix millisecond representation by using the calendar.timegm function. The timegm function is preferred because it keeps the date in UTC.
import calendar

def get_unix_ms_from_date(date):
    return int(calendar.timegm(date.timetuple()) * 1000 + date.microsecond/1000)
The request’s response is a list of trade objects sorted by date, with the following format:
So, as we need the first trade ID, we will be returning the response[0]["a"] value.
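The embedded gist for this step is not reproduced in this extract, so here is a minimal sketch consistent with the description above. BASE_URL is an assumption (Binance's public aggTrades REST path), and get_unix_ms_from_date is the helper defined earlier:

import requests
from datetime import timedelta

BASE_URL = 'https://api.binance.com/api/v3/aggTrades'  # assumed endpoint path

def get_first_trade_id(symbol, from_date):
    # Return the ID ('a' field) of the first aggregate trade after from_date
    new_end_date = from_date + timedelta(seconds=60)  # widen for low liquidity
    response = requests.get(BASE_URL, params={
        'symbol': symbol,
        'startTime': get_unix_ms_from_date(from_date),
        'endTime': get_unix_ms_from_date(new_end_date),
    })
    response.raise_for_status()
    return response.json()[0]['a']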
Main loop
Now that we have the first trade ID, we can fetch trades 1,000 at a time, until we reach our ending_date . The following code will be called inside our main loop. It will perform our request using the from_id parameter, ditching the startDate and endDate parameters.
And now, here’s our main loop, which will perform the requests and create our DataFrame (a sketch of the loop follows the step list below).
We loop until current_time , which holds the date of the latest trade fetched, passes our ending date. On each iteration, we:
fetch the trades using the from_id parameter
update the from_id and current_time parameters, both with information from the latest trade fetched
print a nice debug message
pd.concat the trades fetched with the previous trades in our DataFrame
and sleep a little so that Binance won’t give us an ugly 429 HTTP response
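A minimal sketch of that loop, consistent with the steps above and reusing BASE_URL and get_first_trade_id from the earlier sketch (again, an assumption rather than the article's exact gist):

import time
import requests
import pandas as pd

def fetch_trades(symbol, from_id):
    # One request of up to 1,000 aggregate trades, starting at from_id
    response = requests.get(BASE_URL, params={
        'symbol': symbol, 'fromId': from_id, 'limit': 1000,
    })
    response.raise_for_status()
    return response.json()

from_id = get_first_trade_id(symbol, starting_date)
current_time = 0
df = pd.DataFrame()

while current_time < get_unix_ms_from_date(ending_date):
    trades = fetch_trades(symbol, from_id)
    from_id = trades[-1]['a']       # resume from the latest trade ID
    current_time = trades[-1]['T']  # 'T' holds the trade time in ms
    print(f'fetched {len(trades)} trades, up to {current_time}')
    df = pd.concat([df, pd.DataFrame(trades)])
    time.sleep(0.5)                 # stay under Binance's rate limits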
Cleaning and saving
After assembling our DataFrame , we need to perform a simple data cleaning. We will remove the duplicates and trim the trades that happened after our to_date (we have that problem because we’re fetching in chunks of 1,000 trades, so it’s expected that we get some trades executed after our target end date).
We can encapsulate our trim functionality:
def trim(df, to_date):
    # keep only trades executed at or before to_date ('T' is the trade timestamp)
    return df[df['T'] <= get_unix_ms_from_date(to_date)]
And perform our data cleaning:
df.drop_duplicates(subset='a', inplace=True)
df = trim(df, to_date)
Now, we can save it to file using the to_csv method:
filename = f'binance__{symbol}__trades__from__{sys.argv[2].replace("/", "_")}__to__{sys.argv[3].replace("/", "_")}.csv'
df.to_csv(filename)
We can also use other data storage mechanisms, such as Arctic.
|
https://medium.com/better-programming/how-to-easily-fetch-your-binance-historical-trades-using-python-174a6569cebd
|
['Thiago Candido']
|
2020-05-07 14:22:50.551000+00:00
|
['Blockchain', 'Programming', 'Python', 'Cryptocurrency', 'Crypto']
|
4,994 |
Evaluation Metrics Part 2
|
Let us discuss, in brief, the other metrics in this picture.
Prevalence
Prevalence is the fraction of the total population that is labeled positive.
Negative Predictive Value
Negative Predictive Value or NPV is the proportion of negatively labeled samples which are correctly predicted negative.
Positive and Negative Predictive Value can again be expressed in terms of prevalence, specificity, and sensitivity.
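In standard notation, these identities are:

\text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}

\text{NPV} = \frac{\text{specificity} \times (1 - \text{prevalence})}{(1 - \text{sensitivity}) \times \text{prevalence} + \text{specificity} \times (1 - \text{prevalence})}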
False Discovery Rate
FDR
FDR is the proportion of positively predicted samples which are originally labeled negative. In other words, it is the proportion of false positives out of all the positively predicted samples.
False Omission Rate
FOR
FOR is the proportion of negatively predicted samples which are originally labeled positive. In other words, it is the proportion of false negatives out of all the negatively predicted samples.
False Positive Rate
FPR, Fall-out
FPR is the proportion of negatively labeled samples which are incorrectly predicted positive.
False Negative Rate
FNR
FNR is the proportion of positively labeled samples which are incorrectly predicted negative.
Positive Likelihood Ratio
LR+
LR+ is the ratio of the probability of a sample being predicted positive given that the sample is originally labeled positive to the probability of the sample being predicted positive given that the sample is originally labeled negative. In a real-life scenario, LR+ denotes the probability of a person who has a disease testing positive divided by the probability of a person who does not have the disease testing positive. The higher the value of LR+, the more likely a positive test result is a true positive. On the other hand, LR+ < 1 indicates that a positive test result is likely to be a false positive.
Negative Likelihood Ratio
LR-
LR- is the ratio of the probability of a sample being predicted negative given that the sample is originally labeled positive to the probability of the sample being predicted negative given that the sample is originally labeled negative. In a real-life scenario, LR- denotes the probability of a person who has a disease testing negative divided by the probability of a person who does not have the disease testing negative.
Diagnostic Odds Ratio
DOR
DOR is the measure of the effectiveness of a diagnostic test (or model), and is defined as the ratio of the odds of the test (prediction) being positive if the sample (subject) is originally positively labeled relative to the odds of the test (prediction) being positive if the sample (subject) is originally negatively labeled.
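To make the definitions above concrete, here is a small illustrative Python function (not from the article) computing all of these metrics from raw confusion-matrix counts; it assumes no zero denominators:

def extended_metrics(tp, fp, tn, fn):
    # Compute the metrics discussed above from confusion-matrix counts
    total = tp + fp + tn + fn
    tpr = tp / (tp + fn)   # sensitivity
    tnr = tn / (tn + fp)   # specificity
    fpr = fp / (fp + tn)   # fall-out
    fnr = fn / (fn + tp)
    lr_pos = tpr / fpr     # positive likelihood ratio
    lr_neg = fnr / tnr     # negative likelihood ratio
    return {
        'prevalence': (tp + fn) / total,
        'NPV': tn / (tn + fn),
        'FDR': fp / (fp + tp),
        'FOR': fn / (fn + tn),
        'FPR': fpr,
        'FNR': fnr,
        'LR+': lr_pos,
        'LR-': lr_neg,
        'DOR': lr_pos / lr_neg,  # equals (tp * tn) / (fp * fn)
    }

print(extended_metrics(tp=80, fp=10, tn=90, fn=20))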
|
https://medium.com/the-owl/evaluation-metrics-part-2-756e380cd7f3
|
['Siladittya Manna']
|
2020-06-26 02:49:32.214000+00:00
|
['Deep Learning', 'Metrics', 'Python', 'Data Science', 'Machine Learning']
|
4,995 |
NLP visualizations for clear, immediate insights into text data and outputs
|
NLP visualizations for clear, immediate insights into text data and outputs
Using Plotly Express and Dash to explore data and present outputs in natural language processing (NLP) projects.
Samples of NLP visualizations
Extracting information from text remains a difficult, yet important challenge in the era of big data. Whether the source is customer feedback, social media posts, or the news, the sheer volume of data to be analyzed can overwhelm attempts to extract the information it contains.
This is where modern natural language processing (NLP) tools come in. They can capture prevailing moods about a particular topic or product (sentiment analysis), identify key topics from texts (summarization/classification), or amazingly even answer context-dependent questions (like Siri or Google Assistant). Their development has provided access to consistent, powerful, and scalable text analysis tools for individuals and organizations.
Still, aspects unique to languages can make it difficult to explore data for NLP or communicate result outputs. For instance, metrics that are applicable in the numerical domain may not be available for NLP. (E.g. what would be a mean, or a standard deviation of a set of word tokens?) Even if they could be calculated, presenting the data to audiences can be challenging.
Data visualization can help with this, of course, but it can be time-consuming to learn a particular package. Building a web dashboard can be even more challenging—often requiring languages unfamiliar to NLP practitioners such as CSS, HTML, and JavaScript.
So, in this article, we wanted to share with you ways that Plotly Express and Dash can ease some of this pain.
Plotly Express and Dash were designed with code readability and succinctness as priorities, to enable easy creation of high-quality local (Plotly Express) and web dashboard (Dash) visualizations. In other words, they aim to have data visualization support your work, not have it become a new headache.
With that said, let’s get into it! We use a consumer complaints database corpus for this example, but the concepts and visualizations we discuss should be universally applicable.
The code is available in this GitHub repository, along with a deployed version of the app. Please feel free to follow along with this article, clone it, and make improvements!
(All analysis and notes here are for demonstration purposes only.)
Local visualizations
Data exploration
Our dataset contains over 18,000 rows and three columns. While this isn’t large by modern standards, it’s not really possible to ‘eyeball’ this raw data.
Let’s explore this dataset with Plotly Express, starting with the distribution of complaint counts by their date (to see trend over time):
Histogram of complaint counts by date
Now we’ll plot a histogram for the 20 companies with the most complaints:
Histogram of complaint counts by target company
Or by narrative length:
Histogram of complaint counts by narrative length
You may have noticed the succinctness of our code. Analysis by multiple variables, or changing to a log scale is also a cinch — just pass additional parameters as shown below:
Histogram of complaint counts by date (x-axis) and company (color)
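As a rough sketch of those calls (the file path and column names are assumptions; substitute the actual complaint-dataset columns):

import pandas as pd
import plotly.express as px

df = pd.read_csv('complaints.csv')  # hypothetical path to the dataset

# Single-variable histogram: complaint counts by date
px.histogram(df, x='date_received').show()

# Extra parameters: color by company and switch the y-axis to a log scale
px.histogram(df, x='date_received', color='company', log_y=True).show()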
Even better, these Plotly charts integrate seamlessly into Dash for dashboard generation as you will see later.
Now that we have looked at the distributions, let’s move on to review the text data in substance, starting with n-grams.
Visualizing n-grams
N-grams are simply sequences of tokens (words), and have many practical applications as well as being a great exploratory method. As single words can only tell us so much, let’s move straight to plotting counts of top bigrams.
Counts of top bigrams
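A minimal sketch of one way to produce such a chart, using scikit-learn's CountVectorizer; the 'narrative' column name is an assumption, and df is the DataFrame loaded earlier:

from sklearn.feature_extraction.text import CountVectorizer
import plotly.express as px

# Count bigrams across all complaint narratives
vectorizer = CountVectorizer(ngram_range=(2, 2), stop_words='english')
counts = vectorizer.fit_transform(df['narrative'])

totals = counts.sum(axis=0).A1                # total occurrences per bigram
bigrams = vectorizer.get_feature_names_out()  # get_feature_names() on older sklearn
top = sorted(zip(bigrams, totals), key=lambda pair: -pair[1])[:15]

px.bar(x=[b for b, _ in top], y=[c for _, c in top],
       labels={'x': 'bigram', 'y': 'count'}).show()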
Isn’t that neat? Most of these bigrams appear to indicate sensible groups of complaint types, and the counts show the volume of each group (credit report and credit card related complaints appear to be most common).
To drill down further into this data, a hierarchical visualization, such as a treemap, could be used. The example below divides the data by company and then by whether the phrase ‘credit report’ is included. Box sizes indicate group sizes, and color indicates average narrative length.
Treemap showing the total share of complaints, portion mentioning credit reports, and average lengths
Notice that the visualization immediately reveals length-related patterns. Credit report related complaints tend to be longer, and a couple of companies’ complaints also stand out generally.
In some cases, you may wish to compare proportions of complaint bigrams for each company, in which case a stacked bar might be useful:
Stacked bar chart showing complaint proportion by bigram
Companies with higher volumes of credit card complaints pop out to the eye, as does one with a high proportion of student loan-related complaints.
For a closer review, we may even compare two companies directly, as done here for top 50 bigrams:
Bigram comparisons for two companies
This enables an easy comparison of two datasets by subject matter.
Qualitative comparisons
While we don’t have time to get into the technical weeds, very broadly speaking, word embeddings (dense embeddings to be precise) enable qualitative comparisons of words. They can represent words, and, by extension, concepts or documents as high dimensional vectors, which also provide opportunities for interesting visualizations. Take a look at this simple representation of bigrams using a bubble chart:
Displaying bigram concepts in a bubble chart
Here, high-dimensional bigrams are represented as two-dimensional representations using a dimensionality reduction technique called t-SNE.
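A sketch of that reduction and plot; embeddings, bigram_counts, and bigram_labels are assumed to be precomputed arrays with one row or entry per bigram:

from sklearn.manifold import TSNE
import plotly.express as px

# Project the high-dimensional bigram vectors down to two dimensions
coords = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(embeddings)

px.scatter(
    x=coords[:, 0], y=coords[:, 1],
    size=bigram_counts,        # bubble size: bigram frequency
    hover_name=bigram_labels,  # show the bigram text on hover
).show()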
Similar charts could be produced for any subset to compare text similarities and insights — say, for each company, or by length.
This might be a good opportunity to highlight that each of these charts was created in just a few lines of code using Plotly Express. Not only that, although you see static screenshots here, Plotly will generate interactive charts in your browser or notebook. Crucially, they can easily be incorporated into a live dashboard with Dash.
NLP dashboards made easy with Dash
The value proposition of Dash is similar to, and intertwined with, those that made Python the leading language for NLP. It has a low learning curve, readable yet succinct code, a thriving community of users, as well as useful libraries and modules that can be leveraged to create dashboards.
Significantly for data scientists who are not also web developers, Dash abstracts many elements of web development to Python, allowing you and your team to remain in the Pythonic state of mind if desired.
Take a look at this Dash example for a navigation bar — notice that the HTML/DOM elements are all created from within Python.
This is the web app that the snippet was taken from.
Demo Dash web app (link)
Dash provides Python interfaces to web-based components, while being declarative and reactive. Together, these qualities enable easy creation of flexible, informative front ends that are accessible for everyone to interact with, whether for data exploration or presentations.
As foreshadowed above, incorporating one of these Plotly Express charts into Dash is straightforward.
For example — the word embedding bubble chart can be implemented in Dash like this:
As implemented, the user can select a parameter (perplexity) as a dropdown item, which initiates the callback function and updates the graph reactively — changing the 2-dimensional representation of the vectors. Below is a comparison of the bubble charts, at two different perplexity values.
Dash app t-SNE graphs at different parameters
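A minimal sketch of such a dropdown-driven callback (not the app's actual code; embeddings is assumed available, and the import and run-method names vary slightly across Dash versions):

from dash import Dash, dcc, html, Input, Output  # Dash 2.x import style
import plotly.express as px
from sklearn.manifold import TSNE

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(id='perplexity',
                 options=[{'label': str(p), 'value': p} for p in (5, 15, 30, 50)],
                 value=30),
    dcc.Graph(id='tsne-graph'),
])

@app.callback(Output('tsne-graph', 'figure'), Input('perplexity', 'value'))
def update_graph(perplexity):
    # Recompute the 2-D projection whenever the dropdown value changes
    coords = TSNE(n_components=2, perplexity=perplexity,
                  random_state=0).fit_transform(embeddings)
    return px.scatter(x=coords[:, 0], y=coords[:, 1])

if __name__ == '__main__':
    app.run(debug=True)  # app.run_server(debug=True) on older Dash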
This two-company bigram comparison is also incorporated in the Dash application as shown below.
Dash app — N-gram comparison component
More importantly, we only needed around 30 lines of code to add each Plotly Express chart to the Dash app, including interactivity and formatting, all without ever leaving Python. We think that this will ultimately improve productivity and efficacy for data scientists such as yourself.
Obviously, this is just a quick skimming of what is possible in NLP visualizations, but we hope to have showed you the kind of simplicity and ease of use that we believe makes Dash and Plotly a powerful tool for NLP practitioners.
We invite you to explore the app and the code yourself, and create your own visualizations and dashboards and applications.
|
https://medium.com/plotly/nlp-visualisations-for-clear-immediate-insights-into-text-data-and-outputs-9ebfab168d5b
|
['Jp Hwang']
|
2020-03-30 16:37:04.161000+00:00
|
['Data Science', 'Programming', 'Python', 'Machine Learning', 'Technology']
|
4,996 |
Swift — Creating a Custom View From a XIB (Updated for Swift 5)
|
9. Use Your View
We’re done! Now we’ve got a custom view, made from a XIB file, that we can use throughout our project. Let’s see how that looks.
Click on your storyboard file, and drag in a UIView. Click on the identity inspector in the upper right, and change the class to TestView (or whatever you named your UIView).
You can place it anywhere you want on the screen. We constrained it to the center of the screen, and had it take up 50% of the width, and 50% of the height — just for demonstration purposes.
Open up your assistant editor again. This time it should open to your view controller file. Drag the TestView in as an IBOutlet. Your view controller should now look like this:
One last thing — remember that label on our view? Let’s change the text. I’m going to use my old boss’s favorite greeting:
That’s it. Let’s run the project!
There you have it — an app that simulates life in an NBA front office.
Remember when you set up the XIB, and I said the size doesn’t really matter? This is why. The XIB is going to take up the size of this view — however big or small, or whatever shape you make it in your storyboard (or code). All that really matters is how you’ve constrained all of the elements within your XIB.
You can use this view throughout your app. If you’d like to go back and redesign how it looks, you can just change it in your XIB file, and all those changes will flow throughout the app.
Questions or comments? Let me know! For anyone interested in investing, please check out Stock Genius, a brand new free stock tracking app I built featuring real-time prices directly from the exchange (and many custom views built from XIBs!).
|
https://medium.com/better-programming/swift-3-creating-a-custom-view-from-a-xib-ecdfe5b3a960
|
['Brian Clouser']
|
2019-07-02 23:03:29.913000+00:00
|
['iOS', 'Swift', 'Mobile App Development', 'Swift 3', 'Programming']
|
4,997 |
Can You Solve The Mystery Of Edgar Allan Poe’s Death?
|
Can You Solve The Mystery Of Edgar Allan Poe’s Death?
Here’s the evidence that’s puzzled doctors for 150 years.
“EDGAR ALLAN POE died in Baltimore on Sunday last. His was one of the very few original minds that this country has produced. In the history of literature, he will hold a certain position and a high place. By the public of the day he is regarded rather with curiosity than with admiration. Many will be startled, but few will be grieved by the news. He had very few friends, and he was the friend of very few — if any.” — Richmond Semi-Weekly Examiner, October 12, 1849
Poe was a superstar, adorning cigar boxes. | Hulton Archive/Getty
A pioneer of detective fiction, short stories and sci-fi, America’s most famous horror writer left us with a whodunit uncomfortably similar to his own work. On the night of October 3, a journalist from the Baltimore Sun found Edgar Allan Poe, 40, sprawled half-conscious near a pub called Gunner’s Hall, wearing someone else’s soiled, shabby clothes. Poe had left Richmond, VA almost a week before, bound for Philadelphia to edit a book for a minor poet. A delirious Poe asked the journalist to contact Joseph E. Snodgrass, a magazine editor with medical training. Snodgrass arrived and brought Poe to Washington College Hospital, where he lapsed in and out of consciousness and was sometimes violent. He died four days later. The cause of death was listed as phrenitis: swelling of the brain.
In the century and a half since, numerous friends, enemies and biographers of Poe have offered no less than ten theories explaining his death, and almost all are based on circumstantial evidence.
Illustration of Poe’s most famous poem, “The Raven.” | Hulton Archives/Getty
Poe the drunkard
Poe famously had a problem processing alcohol: One drink lit him to the gills. A temperance activist, Snodgrass blamed the writer’s death on liquor in speeches nationwide.
But Poe’s attending physician, Dr. John J. Moran, published a pamphlet claiming a “perfectly sober” Poe did not smell of liquor when admitted to the hospital, and refused it when offered, drinking only water.
That said, Moran, who became a star on the speaking circuit with his colorful accounts of the poet’s death, has discredited himself in the eyes of historians with his widely varying reports of what Poe said and did in his final days.
Murder most foul
Moran and other chroniclers have said Poe fell afoul of “cooping,” a practice whereby one was drugged, put into different clothes to disguise identity and forced to vote several times. Gunner’s Hall was indeed a polling site in a local sheriff’s election being held the night he was found.
It’s also been theorized that Poe was waylaid by ruffians — or the three brothers of his wealthy fiancée, Elmira Shelton — who beat him and forced liquor down his throat. Historians have largely dismissed these theories, and while alcohol could have caused delirium, science suggests otherwise.
Poe’s obituary noted his “extreme personal beauty.” | Hulton Archive/Getty
Poe’s frankly awesome hair debunks several theories
In 1999, public health researcher Albert Donnay analyzed clippings of Poe’s hair taken after his death. Donnay had conjectured the writer had died of carbon monoxide poisoning from exposure to the coal gas that was used for heating at the time. But the test was not conclusive. It did show low levels of lead, indicating Poe had indeed refrained from drinking. The test also showed elevated levels of mercury, leading Chris Semtner of the Poe Museum in Richmond to propose that Poe had died of the mercury chloride prescribed after he was exposed to a malaria epidemic three months before his death. But the mercury found was 30 times below levels consistent with that.
Did the cat do it? Possibly not.
At a pathological conference in 1996, Dr. R. Michael Benitez was given a list of symptoms for an anonymous patient and asked to determine cause of death. The patient turned out to be Poe, and the diagnosis Benitez gave was rabies. Poe did own pets, including a cat who died soon after he did. But people with rabies are strongly averse to drinking water, which conflicts with Dr. Moran’s (admittedly unreliable) account of Poe downing the stuff. Benitez further acknowledged his diagnosis could not be proved without DNA evidence.
Poe’s (second) final resting place in Baltimore. | Authenticated News/Getty
That’s no brain!
Poe was initially buried in an unmarked grave in the Poe family plot in Baltimore. After visiting the unceremonious resting place, the poet Paul Hamilton Hayne raised funds to build a fine new monument. Poe’s body was relocated there in 1875, almost exactly 26 years after his death.
Brain tumors don’t decay as quickly as brains do. | stockdevil/Getty
The sexton overseeing the reburial, George W. Spence, later remarked that he had lifted up Poe’s skull when the body was exhumed and that “his brain rattled around inside just like a lump of mud, sir.” Knowing as we do now that the brain is one of the first parts of the body to decompose, author Matthew Pearl consulted with doctors who confirmed that the object could not have been Poe’s shrunken, desiccated brain — but could well have been a brain tumor. This would explain his delirious behavior.
We may really never know
Of course, the cause of the poet’s death could be far less fantastic. Poe was not feeling well before setting out from Richmond for Philadelphia, and both his fiancée and his doctor had advised him not to go. Flu, perhaps? Equally mundane causes like encephalitis or meningitis might have explained his brain swelling and other symptoms.
“I think he had a brain tumor. I don’t know that much of anything can be definite from our vantage point…And just because he had a brain tumor, doesn’t mean other things didn’t happen. You can have a brain tumor AND be murdered, or run over by a train or any other variety of things!”
— Matthew Pearl
Other mysteries remain. Moran’s notes indicated that the night before Poe’s death, he repeatedly uttered the name “Reynolds.” No scholarship has been able to determine who this person was.
Even Poe’s final words are in doubt. Moran, whose lurid accounts of the poet’s death made him a star, at one point claimed Poe uttered, “Lord help my poor soul.” Contradicting himself even on this final point, Moran alternatively recalled far more grandiose last words:
“He who arched the heavens and upholds the universe, has His decrees legibly written upon the frontlet of every human being and upon demons incarnate.”
Author Pearl believes the mystery will never be solved: “I think an exhumation could answer questions using forensic analysis. But there’s no way it will really happen, and even if it somehow did any analysis would probably raise as many questions as propose answers.”
|
https://medium.com/omgfacts/can-you-solve-the-mystery-of-edgar-allan-poes-death-2e29e777effe
|
['New Visions']
|
2016-09-23 22:05:12.751000+00:00
|
['Edgar Allan Poe', 'Mind', 'Poetry', 'Crime', 'Books']
|
4,998 |
Your Life Is Full of Porn. Stop Getting Yourself Off.
|
This Is Your Life On Porn
You wake up. You go to the toilet and relieve yourself. Your day starts out well. You have many goals you want to achieve and your to-do list is ready and waiting for you. “Just one sec,” you say to yourself. Then, porn enters the bathroom while your undies are wrapped around your legs.
Lifestyle Porn
You whip out your phone and open the app of your favorite magazine. Everywhere there are consumer-focused ads telling you that your life could be better. They show the lifestyle you could be living if you weren’t living this 9–5 nightmare you call a life. You think to yourself, “I’m so stupid. How do I escape the rat race?”
Lifestyle porn is all about people you’ll never meet and places you may never get to go. It’s a curated list of the top 1% of experiences you could have in your life. The lifestyle looks perfect.
You never see what it takes to earn the lifestyle, only the end result which is the lifestyle.
It’s frustrating as hell to watch lifestyle porn. Your lifestyle is never going to be the same as someone else’s, and that’s the point. You can create your own lifestyle rather than copy a porn version you’ll never have.
Money Porn
Laptop. Laptop. Laptop.
They always have a laptop on their knees and a down payment on a Ferrari ready to be delivered during a socially distanced unveiling at their mortgaged home. The laptop is a symbol for one word: easy.
A laptop makes you think making money is easy.
If making money was so gosh damn simple, we’d all be rolling in it and nobody would ever need to wake up early for work again. Money porn sells the dream that cash will solve all your problems.
If you only had money, then you’d have happiness. Money porn is a lie. Without meaning and fulfillment, money won’t do a thing for you. In fact, money can make your life worse, not better, if you haven’t discovered meaning or fulfillment first. Money can cause you to be a jerk and be addicted to the ridiculous goal of having to be first while others lose.
Nobody has to lose for you to win and that’s the problem with money porn.
Startup Porn
All you need is a business and you’ll be successful. In Australia, where I live, 9/10 startups fail in the first 5 years. This means startup porn is statistically designed to ruin your life.
A business is hard work. Easing your way into business is a superpower and all the startup porn ignores that. The startup peddlers tell you to walk away from everything and start a business. That’s stupid advice.
Adding too much risk to your life will only stress you out, leading you to make terrible decisions you’ll regret later in life.
You can be happy without a startup.
(If you’ve got a regular job, then you’re already an entrepreneur with one customer anyway.)
Revenge Porn
Social media makes this version of porn really easy.
You can sabotage other people in the comments section of their posts or in the hidden chamber of secrets known as direct messages. Seeking revenge feels good. Seeing people lose so you can win seems obvious.
When you eradicate the idea of winners and losers from your life, you welcome the gift of opportunity through the door of your mind.
The more people you help, the better you do for yourself.
If you help people, they will help you in return.
You can do more when you collaborate than you can by yourself. The game of life is rigged against you. You can never be a winner at everything, so why even try? It doesn’t make sense.
The need to win only leads to eventual disappointment. You can crush it today by giving up revenge porn and helping people do better.
People “Doing It” Porn
Porn consisting of people having sex is not good for you either. Most of this content shows scenes and acts you can never replicate. The bar you have for physical looks and crazy sexual acts will only increase.
Drop porn for real-life sex with your partner. It’s much better.
Influencer Porn
If you have a personal brand and lots of followers, you’ll do incredibly well.
A personal brand is everything, they say. No, it’s not. Gary Vee explains social media nicely.
“I really miss when people understood that people who consume their content are a community, not a group of people that are there to serve their ambitions.”
Nobody gives a damn about how many followers you have or your brand. The influencer movement is a lie designed to keep you on social media platforms so you continue to play the game. Use social media, absolutely — but use it to be helpful and for a cause greater than your own selfish desires.
Take it from somebody who knows the social media game well — 100,000 followers feels like 1000. Followers and a brand won’t make you rich, successful, or happy, and they won’t let you die with no regrets.
Influencer porn is designed to sell you products, not make you successful in life. You don’t need any of it.
Cure Your Porn Addictions with This.
You don’t have to live a life of porn. Porn is the default option and we don’t even know it. I lived a life of porn too. Not anymore.
The secret to killing all forms of porn is discipline.
Discipline yourself to focus on what you know is good for you. You have a list of habits already that you probably follow — like exercise, reading, leisure time, meditation — and you can focus your time there and get far better returns than the endless porn-fuelled addictions of meaningless nonsense.
Porn is an addictive distraction to doing the work you know you want to do. Getting started with your life’s work each day is hard, but so is continually distracting yourself with life porn.
Whatever your version of porn is, abolish it.
You can get yourself off with life, rather than porn. It feels better too.
|
https://medium.com/the-ascent/your-life-is-full-of-porn-stop-getting-yourself-off-c16cc0b092f1
|
['Tim Denning']
|
2020-07-31 17:01:01.525000+00:00
|
['Addiction', 'Life', 'Money', 'Social Media', 'Productivity']
|
4,999 |
Truly Customizing Power BI with React, Angular, or any web framework
|
With the growth of the amount of data available in organizations, presenting it in a clear and direct way is increasingly important. In this context, Power BI — Microsoft’s business analysis tool — has gained prominence.
Even though its integrated components and navigation mechanisms are enough to meet most regular enterprise needs, the platform still stands out for its customization possibilities.
Besides being able to customize the platform’s built-in components, it is possible, with some front-end engineering skills, to develop new ones from scratch.
Developing a Power BI Custom Visual
New components are developed as Custom Visuals using the Power BI Visual Tools package (pbiviz), which can be installed with the Node Package Manager (NPM).
Pbiviz command-line interface
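As a minimal sketch of the typical workflow (the visual name below is just a placeholder):

npm install -g powerbi-visuals-tools   # installs the pbiviz CLI globally
pbiviz new MyVisual                    # scaffolds a new Custom Visual project
cd MyVisual
pbiviz start                           # serves the visual locally with hot reload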
The development of a Custom Visual requires only knowledge of conventional web technologies such as TypeScript, HTML and CSS, and can be enhanced by the use of frameworks such as React, Angular or D3.js.
The scaffold of a Custom Visual, with everything it needs, is generated by the CLI tool provided by the NPM package indicated above.
Basically, the web developer needs to write very little code: just two methods, the constructor and update, of a class that implements IVisual. In addition, the file capabilities.json, also generated by the CLI tool, lets the developer declare properties, such as colors and fonts, which the end-user can customize later in Power BI.
import powerbi from "powerbi-visuals-api";
import IVisual = powerbi.extensibility.visual.IVisual;
import VisualConstructorOptions = powerbi.extensibility.visual.VisualConstructorOptions;
import VisualUpdateOptions = powerbi.extensibility.visual.VisualUpdateOptions;

export class Visual implements IVisual {
    constructor(options: VisualConstructorOptions) {
        // Runs once, when the visual is instantiated: build your DOM here
    }

    public update(options: VisualUpdateOptions) {
        // Runs on every data, resize, or formatting change: redraw here
    }
}
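For illustration, a user-customizable fill color could be declared in the objects section of capabilities.json roughly like this (the "appearance" and "fillColor" names are placeholders, not a fixed part of the schema):

{
  "objects": {
    "appearance": {
      "displayName": "Appearance",
      "properties": {
        "fillColor": {
          "displayName": "Fill color",
          "type": { "fill": { "solid": { "color": true } } }
        }
      }
    }
  }
}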
The tooling provided by pbiviz gives the web developer instant feedback on their work, with updates applied via hot reload, all backed by the Power BI Service.
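Note that live debugging via pbiviz start requires developer mode to be enabled in the Power BI Service settings. When the visual is ready, a single command packages it for distribution:

pbiviz package   # bundles the visual into a .pbiviz file under dist/, ready to import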
|
https://towardsdatascience.com/truly-customizing-power-bi-with-react-angular-or-any-web-framework-5652b86a723e
|
['Thiago Candido']
|
2020-09-14 13:28:41.472000+00:00
|
['React', 'Angular', 'Power Bi', 'Web Development', 'Data Science']
|