title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
Elizabeth I’s Ghost
|
A crown and throne are the
Reminder of a French sword
Married to maternal blood,
An unmarked grave the
Offspring of this union.
The living child wandered
Nursery halls in the last of her
Clothes chosen for a
Bastard princess in disguise.
Springtime’s caress before her third
Year carried away the white in her
World with a simple stroke of
Pen and sword.
A removed presence haunts her
Childhood paces, “Why yesterday was
It my princess and today
Just my lady?”
Each May is a reminder of
Another year since that day,
When she heard cannons
Celebrating a poor woman’s demise.
Twenty-three more springs did she see
Before the throne,
An undefeated love exceeded from the
Unmarked grave to the
Clasp of the child’s ring.
Forty-three more Mays,
Time elapsed, Gloriana found her
Mother waiting, for
Physical separation was but a
Fleeting affair to earthly ties.
|
https://medium.com/coffee-house-writers/elizabeth-is-ghost-1a0c8afb8af2
|
['Heidi E. Cruz']
|
2018-07-02 12:01:01.160000+00:00
|
['Tudors', 'Elizabethan England', 'Free Verse', 'Poetry', 'Historical Fiction']
|
Sub-Topics of Economics You Can Choose From for Essay Writing
|
Understanding economics is not just about reading graphs and charts; it is about understanding the world around them. Students find it extremely hard to write an essay on economics, and writing one leaves them with limited time to concentrate on other subjects. An economics essay help service can help students understand the level of complexity and tackle the challenge of essay writing.
The term “economics” is derived from a Greek word meaning “household management”. Running an economy is much like managing a household: handling day-to-day routines, managing expenditure, and so on; in the same way, an economy must manage goods and services. Economics is a branch of social science that focuses on the production, distribution, and consumption of goods and services. The building blocks of economics are the studies of supply, demand, labor, and trade, and it enables people to understand businesses, markets, and governments.
Economics touches all areas of human life. It can help improve human life and build a better society. It helps us understand how decisions are made, how markets work, how outcomes can be improved, and how economics drives social systems, all of which contributes to success in work and in life. Economics can be divided into two branches.
Macroeconomics
It is the study of economic progress as a whole and the steps taken by a nation. It analyses the decisions made by countries, and it examines entire industries and economies rather than individual businesses.
Microeconomics
It is the study of the economy at the individual level, covering demand for goods, supply, cost of production, labor, and so on. It analyses the decisions made by people and individual businesses.
Economists classify resources into three categories:
Labour: This is the time human beings spend producing goods and services.
Capital: This consists of the long-lasting tools people use to produce goods and services.
Land: This is where production takes place.
The essay should start with a “spark” that grabs the attention of the audience and makes them want to read on. Try to follow the below structure for your essay writing.
Introduction
Explain what your essay is about and why the topic is important. State the question. Give an answer to the question. Summarise the topic of your essay in support of this answer.
Main body
The main body is an important part of your essay. State the evidence that addresses your argument in the main body. The flow of writing must be clear from one paragraph to the next paragraph. Write the most logical statements that support your argument.
Conclusion
Focus on the conclusion of the essay. In the conclusion section, summarise your findings on the essay question and restate your argument.
Bibliography
Make a list of the books and references you read while completing your essay writing work.
Conclusion
Essay writing on economics is a daunting task. An economics essay help service delivers quality work within the given deadline and frees you from the stress of the essay submission task.
Summary
Students need to write economics essays to improve their writing skills as well as to grasp the key aspects of economics.
|
https://medium.com/@johnnoels/sub-topics-of-economics-you-can-choose-from-for-essay-writing-f48d2d10b8e8
|
['John Noels']
|
2020-12-23 10:24:35.528000+00:00
|
['Essay Writing', 'Economics', 'Essay Writing Service', 'Essay', 'Economic Growth']
|
I Want to Live, Now. Thanks.
|
‘It’s strange,’ the Doctor says, looking at the results. ‘It’s very strange,’ he says, turning to look me in the eye.
I’m speechless, trembling, perched on an uncomfortable chair in a tiny consulting room at the hospital. Yet again, I see my life unravelling.
He’s considered one of — if not the — best Gastroenterologist in the entire State.
‘To recap, you have Primary Biliary Cirrhosis. An auto-immune condition…’ he starts as I nod. I know this. My body is killing the lining of my Bile Ducts in my Liver. I turned up to my General Practitioner complaining of fatigue. I was just so tired, always tired. It turns out it’s hard to be happy or energetic when your body is at war with itself every single day. ‘… seldom affects men,’ he continues. ‘Especially not men as young as you…’ his voice drones back. I close my eyes. I know this. It normally manifests in women in their fifties. Not men in their thirties.
He’s like a bad Oscars presenter, dragging out the time he has in the spotlight. Nothing he is saying is new to me — I was diagnosed months ago. They’ve cut out part of my Liver, sent it to a lab to double-check. The disease is advanced. Eventually, my entire Liver will be nothing more than a lump of scar tissue and it will cease to function. If the medication they’ve given me doesn’t work, I have less than a decade before I need a transplant. It’s 2018. The chances of me surviving an organ transplant are acceptable, but if I could buy time before that happens, medical advancements would help me dramatically.
I stroke the notebook I’ve brought into the meeting with me. Inside it, I’ve written a prayer to a God that I’ve not believed in for almost as long as I’ve been angry at Him.
All it says is: ‘Please, God, my children deserve a dad.’
It’s a feeble little prayer, but it is mine and it is heartfelt.
I know the odds. The medicine they’ve put me on has almost zero chance of working. It doesn’t tend to work on those with advanced disease. It doesn’t tend to work on those who are young. It doesn’t tend to work on males. I need to roll triple sixes on the cosmic dice of luck to get out of this.
‘…and all your results are trending downwards,’ he finishes, looking at me and raising his eyebrows. ‘That’s good, Bradley. It means the medicine is working.’
I look up at him, surprised. I’ve beaten the odds again. I still have a marriage. I haven’t had a Bipolar episode in years. I’m not descending into an addiction to drinking or drugs. I haven’t attempted suicide in years. I’m able to function in society with no problem.
And now the medicine works?
I begin crying. There’s no foreplay, no easing in. I suddenly have two thick streams of tears running down my cheeks, bending away at my smile before dropping onto the notebook in my lap.
The Doctor flusters around, desperately trying to find a box of tissues. ‘This is quite unorthodox, unusual…’ he mutters to himself, grabbing the box and handing it to me. ‘You’re a very emotional man, Bradley.’
|
https://medium.com/@jayalmond/i-want-to-live-now-thanks-49aa1e886cc5
|
['Jay Almond']
|
2020-11-19 09:08:51.719000+00:00
|
['Gender Dysphoria', 'Life Stories', 'Transgender', 'Depression', 'Nonbinary']
|
How to Make the Most of Feedback About Your Startup
|
Pitching your company is hard enough, but how do you handle feedback? (cc flickr GES2016)
The hardest thing to do in entrepreneurship is to seek out, accept and prioritize feedback. This is one of the most common mistakes I see entrepreneurs make. Why we suffer from anti-input cognitive biases will be the topic of another essay, but suffice it to say that you need to master this skill.
Plus, the sooner you start exposing yourself to others’ opinions of your team, company, product and market, the better your chances of success.
Based on my work with startups in my coaching and mentorship practice, I’ve developed a 7-step approach for dealing with feedback about your startup. These steps can be used whether you are getting opinions from mentors, investors, team members, analysts or customers.
Know Their Backgrounds
Almost any meeting at which you’re pitching your startup idea will provide you with some background on the people you’re speaking to. Check out their bios or LinkedIn profiles beforehand and understand where each of them is coming from. Perhaps one mentor has industry experience that would be relevant to you, while another has never even interacted with your category. Knowing their backgrounds will help illuminate their biases and allow you to contextualize the substance of their feedback.
Record the Meeting
Wherever possible you should record sessions where you’re receiving feedback. Make sure to get participants’ permission to do so beforehand. After the session, you can replay the feedback and take detailed notes. Often, during the pitch process it can be hard to catch the nuance and subtlety of someone’s feedback. This is due — in part — to our adrenaline running wild when we’re selling something. Adrenaline activates our fight-or-flight reflex and this can make it tough to actually hear what the other people are saying.
Listen — Don’t Argue
Because of this heightened adrenal state, it’s critical that you spend the vast majority of time interacting with feedback by listening. Do not argue with the things people are saying to you unless they specifically ask you a question. Unless the listener clearly demonstrates a lack of understanding of your pitch, don’t even try to clarify things. You will probably seem defensive, and defensiveness sets the wrong tone.
The more you let others in the room speak, the more content and fidelity you get. But if you feel your blood boiling or you’re getting a run up of nerves, take a deep breath, calm yourself and lean into that feeling without taking outward action. The more intentionally you do this, the easier it will be.
Ask Clarifying Questions
Most entrepreneurs spend their precious time trying to rebut or shape listeners’ perceptions of their startup during feedback sessions. If you are going to speak, the absolute best use of your time is to ask questions. The best question to ask is “why”, as in — “Why do you feel that way?” or “What experience have you had that informs your belief in that outcome?” These kinds of questions only help you get further clarity about what specifically the expert is proposing. This is particularly relevant with customer interviews, where you really want to know a few key things about your users’ experience:
What is your problem? How does this solve your problem? How do you solve your problem today (without this thing)?
If you think of your prospective investors and mentors as “customers” (they buy in by investing or agreeing to mentor you), this may also help how you run your Q&A.
Try to avoid questions like “What would you do” or “What do you propose we do.” These will make you seem weak and lacking in market knowledge or conviction. If you want to get someone to answer this question (because you’re looking for true market knowledge) it’s better to frame it as “What experiences have you had that are similar, and what did you do?”
Process Offline with Your Team
The time to process feedback is after the meeting, not during. Make a point of scheduling time with your co-founder or others after the session to review feedback, make notes and distill what was learned. If you have a recording, you can play it back at this meeting. I usually recommend waiting until the next day if possible to synthesize feedback. This way you can be better rested and not in a heightened state when thinking deeply about what to do with what you’ve been told.
Compile in a Doc
Take all the feedback you’ve received and put it in a shared document. You can divide it by section of your company pitch (e.g. team, product, market, etc) or by type of feedback. Either way, it would behoove you to note the main feedback, the person giving it, and whether it was positive, negative or neutral. Use the answers to your clarifying questions to fill in the gaps of your understanding, and don’t hesitate to track down and reach out to those prospective investors or mentors with questions after the fact.
Look for Patterns
As you digest the content of your feedback spreadsheet, your goal is to look for patterns. And this pattern-matching can be as important as your gut instinct about what to do, if not more so.
For example, in one startup I mentored, investor feedback repeatedly pointed out that their category (salon bookings) had a lot of competition, and that competition was not doing great. After hearing this a few times from different corners, the founders retooled their market focus and presentation to higher-end and multi-site salons, who were not being served by their competitors. They went on to raise significant capital and have done a stellar job of execution. This is not solely because of the feedback, but rather the feedback helped them see something and short-circuit an otherwise costly customer development process.
As you’re looking at patterns, be sure to include a “weighting” element based on the speaker’s experience and interest. In the salon example above, a single point of target-market feedback would be more heavily influential if given from a prospective salon owner than from a junior venture partner. Just like VCs, you’re trying to match patterns and find leverage in discerning signal from noise.
Deciding What to Do
Now that you’ve followed the process and received a ton of (well-organized) feedback, what do you do with that information? How do you turn feedback into action?
The first thing to remember is that this is your startup. You (and your co-founder) ultimately make the decisions and have to live with those decisions. Mentors, customers and investors get to go home at night. You’re living/breathing/sleeping your startup. If something legitimately doesn’t sit right with you, don’t do it.
Beyond that, you’re looking for patterns — and from people who you trust/have good opinions on the matter. Be especially conscious of the times when the feedback you’re receiving contradicts your plans or vision for the future. This is often the place where you get the most interesting learnings and outcomes.
Just don’t be over-reactive. I’ve seen many startups (my own included) whipsaw from one strategy to another after a single investor meeting. This is rarely a good idea, and can actively work against you by muddying your message and undermining your convictions or confidence.
Be true to yourself and persevere. But as my grandfather once said, “If 10 people tell you you’re drunk, lie down.” I think this is great advice for any alcoholic — or startup founder.
|
https://medium.com/swlh/how-to-leverage-startup-feedback-from-mentors-investors-and-customers-4a4d87669688
|
['Gabe Zichermann']
|
2020-06-19 12:56:47.951000+00:00
|
['Entrepreneurship', 'Startup', 'Feedback', 'Innovation', 'Pitching']
|
Istinomer fact-checked COVID-19 and the Serbian national election at the same time. Here’s how.
|
Fighting misinformation about the virus during the run-up to June’s boycotted election was an arduous task for this small Belgrade-based outlet. Now it’s thinking about how to use its readers as an early warning system.
Image: Istinomer
This case study is part of Resilience Reports, a series from the European Journalism Centre about how news organisations across Europe are adjusting their daily operations and business strategies as a result of the COVID-19 crisis.
|
https://medium.com/we-are-the-european-journalism-centre/holding-serbias-government-accountable-via-a-simple-browser-plugin-e7ffb3ed2e38
|
['Tara Kelly']
|
2020-09-07 09:49:24.972000+00:00
|
['Misinformation', 'Journalism', 'Media', 'Fact Checking', 'Case Study']
|
Moviegoer — Next Steps. With a functional prototype complete…
|
This is part of a series describing the development of Moviegoer, a multi-disciplinary data science project with the lofty goal of teaching machines how to “watch” movies and interpret emotion and antecedents (behavioral cause/effect).
The Moviegoer prototype is complete; the unveiling included a technical demonstration as well as two background articles on why movies are key to training better emotional AI. With a functional prototype complete, it’s time to think about what comes next in the project. Here are six areas of improvement, and their context within the larger goals of Moviegoer.
Improving Subtitle Processing
Subtitles are key for identifying dialogue, and they provide descriptions of non-dialogue sound effects and audio. However, there isn’t a set standard for subtitle formatting, and the Moviegoer prototype only covers a single format. Improving the subtitle processing will allow for the “watching” of many more movies.
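As a sketch of what a more general subtitle pipeline might normalize toward, here is a minimal SubRip (.srt) parser. The function name and the (start, end, text) cue structure are my own illustration, not Moviegoer's actual code; a production version would first dispatch on format (SRT, WebVTT, ASS/SSA) before extracting cues.

```python
import re

def parse_srt(text):
    """Parse SubRip (.srt) subtitle text into (start, end, dialogue) tuples.

    A minimal sketch: it finds the timing line in each blank-line-separated
    block and treats everything after it as the cue's dialogue text.
    """
    timing = re.compile(
        r"(\d{2}:\d{2}:\d{2}[,.]\d{3})\s*-->\s*(\d{2}:\d{2}:\d{2}[,.]\d{3})"
    )
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        for i, line in enumerate(lines):
            m = timing.search(line)
            if m:
                dialogue = " ".join(lines[i + 1:])
                cues.append((m.group(1), m.group(2), dialogue))
                break
    return cues

sample = """1
00:00:01,000 --> 00:00:03,000
Hello there.

2
00:00:04,000 --> 00:00:06,500
[door slams]
"""
```

Note that the second cue is a non-dialogue sound effect, which the article points out subtitles also carry; keeping those cues around is useful for the scene-context checks discussed below.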
Identifying Significant Scenes
Currently we identify two-character dialogue scenes. We can look for the scenes containing the most emotional data by looking for emotionally charged scenes. This can be accomplished by finding scenes with lots of profanity, a fast or argumentative conversation speed, or loud dialogue. Additionally, we can look for hallmarks of important scenes, such as long takes. These types of cinematography features can identify important two-character dialogues, as well as non-dialogue set pieces, which may still contain important character information.
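Those heuristics could be combined into a single intensity score for ranking scenes. The sketch below is purely illustrative: the field names (`dialogue`, `duration_sec`, `mean_volume_db`), the weights, and the tiny profanity list are all assumptions, not Moviegoer's actual schema.

```python
# Toy intensity score: profanity count, conversation speed (words/sec),
# and loudness each push a scene up the ranking. All field names, weights,
# and the word list are illustrative stand-ins.
PROFANITY = {"damn", "hell"}

def emotional_intensity(scene):
    words = scene["dialogue"].lower().split()
    swear_count = sum(w.strip(".,!?") in PROFANITY for w in words)
    words_per_sec = len(words) / max(scene["duration_sec"], 1)
    loudness = scene["mean_volume_db"] / 20.0  # dBFS: closer to 0 is louder
    return 2.0 * swear_count + words_per_sec + loudness

scenes = [
    {"dialogue": "Nice weather today.", "duration_sec": 10, "mean_volume_db": -30},
    {"dialogue": "Damn it, get out! Get out now!", "duration_sec": 4, "mean_volume_db": -10},
]
ranked = sorted(scenes, key=emotional_intensity, reverse=True)
```

The argumentative scene ends up first; in practice the weights would need tuning against hand-labeled scenes.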
Understanding Scene Context
There are arguably a finite number of possibilities for a scene’s location and scenario. Characters sharing a meal at a restaurant. Characters saying goodbye at an airport. Characters walking down the street and talking. There are specific pieces of dialogue, background sound effects, or cinematography aspects that may indicate one of these scenarios or locations. We may want to hard-code a check for these features to help us understand what’s happening in the scene.
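A hard-coded check of the kind described above might look like the following sketch. The cue list and threshold are illustrative; a real version would cover many scenarios (restaurant, airport, street) and weight dialogue cues differently from sound-effect cues.

```python
# Illustrative scenario detector: flag a scene as a likely restaurant scene
# if enough tell-tale cues appear in its subtitle text (dialogue keywords
# or bracketed sound-effect descriptions).
RESTAURANT_CUES = {"menu", "waiter", "reservation", "[cutlery clinking]"}

def looks_like_restaurant(subtitle_lines, threshold=2):
    """Return True if at least `threshold` restaurant cues appear."""
    text = " ".join(subtitle_lines).lower()
    return sum(cue in text for cue in RESTAURANT_CUES) >= threshold
```

For example, `looks_like_restaurant(["Can I see the menu?", "[cutlery clinking]"])` trips two cues and flags the scene, while an ordinary greeting does not.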
Dialogue Attribution
Attributing individual lines of dialogue to the characters who speak them would tremendously help with comprehending the plot and events of a film. However, it’s still very experimental for now, and wasn’t reliable enough to make it into the prototype. Currently, we’re looking for characters onscreen who have their mouths open, and using this to attribute dialogue. We may be able to do a better job of diarizing voice audio from two-person conversations, and using this to attribute dialogue, as well as saving voice encodings for each character.
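The mouth-open approach could be sketched as a voting pass over frames. The data shapes here (`cues` with start/end times, `frames` with per-character mouth-open flags) are my own illustration of the idea, not the prototype's real structures.

```python
# Toy attribution pass: each subtitle cue is assigned to the character whose
# mouth is open most often in the frames overlapping the cue's time span.
def attribute_dialogue(cues, frames):
    attributed = []
    for cue in cues:
        overlapping = [f for f in frames if cue["start"] <= f["time"] <= cue["end"]]
        speakers = [name
                    for f in overlapping
                    for name, mouth_open in f["mouths"].items()
                    if mouth_open]
        # Majority vote; None when no open mouth was detected (off-screen speaker).
        speaker = max(set(speakers), key=speakers.count) if speakers else None
        attributed.append((cue["text"], speaker))
    return attributed

cues = [{"start": 0.5, "end": 2.5, "text": "Hi there."}]
frames = [
    {"time": 1.0, "mouths": {"ALICE": True, "BOB": False}},
    {"time": 2.0, "mouths": {"ALICE": True, "BOB": False}},
]
```

The `None` fallback matters: off-screen dialogue is common, which is one reason voice diarization and per-character voice encodings are the more promising route.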
Better NLP Implementation
Since we have individual scenes isolated, we also have a full conversation isolated. A scene’s dialogue represents an end-to-end conversation, and we can do a better job of identifying key words. We can use dictionaries for positive/negative sentiment, and more effectively use Named Entity Recognition to identify conversation topics. NLP is great for parsing entire sentences and conversations, and we can do more research to see what we can learn from this (even with imperfect dialogue attribution, from the above paragraph).
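As a minimal illustration of dictionary-based sentiment over an isolated, end-to-end conversation (the word lists are toy stand-ins for a real sentiment lexicon, and the scoring is my own sketch):

```python
# Toy lexicon-based sentiment: score a scene's whole conversation in
# [-1, 1] as (positive hits - negative hits) / total hits.
POSITIVE = {"love", "great", "happy", "wonderful"}
NEGATIVE = {"hate", "terrible", "sad", "awful"}

def scene_sentiment(lines):
    tokens = [w.strip(".,!?").lower() for line in lines for w in line.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(pos + neg, 1)

conversation = [
    "I love this place.",
    "It makes me so happy.",
    "The food was terrible, though.",
]
```

Because the whole scene is scored at once, this works even with imperfect dialogue attribution; per-character sentiment would require the attribution step above to mature first.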
Emotional Modeling POC
The point of Moviegoer, of course, is to break movies into structured data for training of emotional AI models. Though we’re still working on turning movies into data, a proof-of-concept would help demonstrate the eventual capabilities and provide perspective on the importance of emotional AI modeling. The most prominent idea for a POC involves studying facial reactions/changes resulting from specific lines of dialogue.
Wanna see more?
|
https://medium.com/@moviegoer/moviegoer-next-steps-bae2a2f19c35
|
['Tim Lee']
|
2020-12-07 03:42:33.202000+00:00
|
['Artificial Intelligence', 'Filmmaking', 'Affective Computing', 'Data Science', 'Cinema']
|
Our Story
|
We are a couple who set out on an entrepreneurial journey in the middle of the pandemic. What better time to start a new adventure! The world was doing a reset which pushed us to do the same.
This year we witnessed so many tragedies. The claustrophobia that most of us felt during months of confinement paralleled the feeling of being unable to escape the horrifying reality around us.
But in the middle of the pandemic there were also valiant stories of compassion, bravery and ingenuity. People who refused to give up in the face of difficulties. Instead, they saw an opportunity to turn a new leaf.
Something similar happened to me and my partner. It was March 2020 and we had been frustrated with our corporate jobs for quite some time. As we hid behind our Zoom screens, those frustrations silently grew more and more painful.
Meanwhile, during confinement like many others we decided to take on a new challenge: sewing. At some point in time the frustration of wanting to leave our jobs brilliantly collided with our growing love for sewing. This was what led to our EUREKA moment!
Initially it was simply meant to make it easier for us to sew some masks and fix our clothes, and eventually make our own clothes (the dream!). But soon we realized we were not the only ones who had this problem. We had countless conversations with our friends, neighbors, coworkers and even fellow sewers on FB groups who all shared our need for something simpler and easier to use.
And so the idea of a minimalist sewing machine specifically designed for beginners started to grow in our heads.
After several weeks of work, we came up with the above concept design for a simplified sewing machine. We shared it among our friends and colleagues and realized there was potential. And thus our startup was born.
While searching for an inspiring name, we came across Miletos, an ancient Greek city considered the birthplace of modern science and philosophy. We knew instantly that this would be our name. More on Miletos and its history in another post!
We are now hard at work making this machine a reality. Every day is a new exciting challenge toward that one goal, and every day brings us closer to our objective. We are currently building the first prototype, and we can’t wait to share it with you guys! Meanwhile we are getting better at sewing and are excited to share our experiences and creations with you.
Here on our blog we will not only share our journey as entrepreneurs, but also as sewing beginners learning and discovering the various joys and frustrations of sewing. Brace yourself to see our creations, both failures and successes! (mostly the failures, at the beginning 😜)
We hope you will get to enjoy our musings on sewing, entrepreneurship and #life.
Cheers, see u soon! 🖖
#miletos #sewing #sewingbeginners #sewingmachine #entrepreneurship #startup #happyfounders #smartsewingmachine
|
https://medium.com/@miletos/our-story-62329c182602
|
[]
|
2020-12-21 18:14:28.392000+00:00
|
['Sewing Machine', 'Startup Life', 'Entrepreneurship', 'Startup', 'Sewing']
|
Trading Customer Service for User Experience
|
When I first started working in a TD branch in my second year of university, I wasn’t sure what I wanted to do as a career. I wanted a part-time job that would give me the flexibility to explore what I wanted to do next in life — like graduate school or working with people. Through this role I quickly realized that I was good at making connections with people, catering to their needs, and providing possible solutions to their problems.
Working as a customer service representative (formerly known as a bank teller) taught me empathy and compassion; I learned through the shared life experiences of those who came to my wicket seeking guidance, support, and understanding. I saw first-hand the impact of financial decisions and the barriers to financial literacy. And, most importantly, I learned that my experiences aren’t the only experiences within the world. There is no universal experience and you need to adapt your service (and style) to the person, meeting them where they are at in that moment.
From my time as a frontline worker, I knew I wanted to make connections to help people. I planned to go to graduate school for communications, so I started to look for corporate roles at TD that provided a predictable schedule that my classes could work around. Taking a chance, I applied for a role supporting the executive leading the Innovation Center of Excellence, which included TD Lab, an innovation team focused on the unique financial experiences of students. I got the job, and that was when I was first exposed to the world of rapid ideation, prototyping, and design. I was hooked.
Had I not taken a chance to try something new, I would have never realized how much I love UX design. And how perfect a fit I am for the role.
Like many with a Bachelor of Arts degree in English, I thought my only pathway to a career was going back to school. I thought my skills weren’t transferable. Each year in university, my program’s administration would outline potential future jobs associated with English majors — design wasn’t one of them. Because of that, I didn’t think my type of creativity and problem-solving skills would apply to technical roles. But that’s the beauty of design — it’s truly a middle ground. If you are passionate about people, are an empathetic problem solver, and love diving into problems at a deeper level, that’s what matters.
When I realized I didn’t need to be an engineer to be in a technical role — that’s when I mentally opened my career opportunities and enrolled in the Brainstation UX Design Bootcamp.
This was the first time I was supervised in the end-to-end design process, while also learning about design principles. It was a great experience. I created a network and friendships that I still lean on to this day. We created our own little community — and having this community powers me to be the best designer that I can be. Taking this leap into formal design learning also set me up to further grow within our team.
Having taken a leave of absence to complete this program, I kept TD up to date on my projects. After completing my course, I came back to the team in a new role that was created to give me the opportunity to showcase my abilities and spend half of my time supporting the design team. After six months of demonstrating my skills and contributing to a number of design projects, I was offered a full-time UX/UI designer role at TD Lab in early March 2020. More than a year later, I’ve gone on to be the lead designer on projects and mentor my own co-op students.
When you’re a designer, you’re forever learning. You’re constantly going back to the customer’s experience and I often think back to experiences that I had when I worked in a branch. Both in a branch and in an innovation lab, you learn that there’s no universal experience — two different people can experience the same application in two incredibly unique ways. The goal is trying to find ways to make both of their lives better, and we can only do that when we are empathetic.
With such a small, collaborative team at the Lab, I’m also able to learn about and appreciate so many different parts of the design and development process. Our culture is one that makes all of us learn and grow together. Even though I’m early in my career, I’m treated equally — my feedback, ideas, and designs all help shape the final product. At the end of the day, when I look at my work and what I’m creating, it all circles back to wanting to help and build up connections with people. They won’t know I made it, but if it makes their lives easier, that’s the goal.
The work is challenging but it’s so worth it. I love being able to be constantly challenged and stepping out of my comfort zone. My whole career journey has been stepping out of my comfort zone — and I wouldn’t want it any other way.
|
https://medium.com/td-lab/trading-customer-service-for-user-experience-c853c0f7ac37
|
['Stefany Pantuso']
|
2021-06-08 19:52:48.079000+00:00
|
['Career Paths', 'User Experience Designer', 'Career Change', 'User Experience Design', 'UX Design']
|
Let’s talk about “White Woman’s Instagram”
|
A still from Bo Burnham’s Netflix special, Inside.
Bo Burnham’s Netflix special, Inside, is an existential crisis disguised as a comedy film. It aims its fiercest satire right at the heart of the thing that birthed Burnham’s career in the first place: the Internet. He goes after Jeff Bezos, Twitch streamers, brands attempting to co-opt social causes, react videos, and more. And then there’s the song, “White Woman’s Instagram.”
While it’s not the eleven o’clock number (that would be “Welcome To the Internet,” a brilliantly malevolent act-out whose motifs sneakily course through the whole show), “White Woman’s Instagram” is the clear breakout hit. Netflix knows it; it features a still from that segment in its cover splash. It’s instantly catchy, and it neatly captures the performative allyship, the casual racism, and the soulless vacuity found on that app, most often in posts by, well, white women.
“Fresh-fallen snow on the ground / A golden retriever in a flower crown / Is this Heaven? / Or is it just a / White woman / A white woman’s Instagram?”
The song has clearly struck a chord. There are delightful TikTok videos of white women comparing their own self-curated Instagram accounts with Burnham’s satire and feeling, as they say online, pwned. Burnham bullseyes his target, down to the Mylar birthday balloons and the seductive eating of sweets.
But is it misogynist? A lot of stuff is, you know! Showing contempt for women and queer people has been a time-honored way for many men to reinforce their own masculinity. And it shows up all kinds of places: If you have disdain for pumpkin spice edibles but love you some IPA, you’re probably a misogynist! Do you hate Amy Schumer because she’s fat? Misogyny! (“Naw, I hate her ’cause she’s a joke stealer!” Okay, sir.) Do you only notice “lesbians” when they’re on PornHub (hot tip: probably not real lesbians!)? Do you grouse that women are “stuck up” and “full of themselves?” Friend, time to do some soul-searching. Google “patriarchy,” and start there.
But white women…oh, white women! Our culture is having a moment where we are scrutinizing white women, and speaking as a white woman myself, it’s overdue. White women sit at that horrible nexus, right next to power but not quite in it, and many weaponize their gender to double-down on their own white privilege, trading their own liberation for supposed safety within the patriarchy. It won’t work; it never works. White supremacy teaches that white women are delicate flowers, in need of protection from Black men especially, and the perception of threat to white women has been used to justify violence throughout our history. The woman who got Emmett Till murdered was white; Amy Cooper is white. The Tulsa Race Massacre saw over 300 people killed and a neighborhood firebombed on the word of a white woman.
So yes, white women: time for a reckoning. If we want to eradicate white supremacist patriarchy — and I sure do! — then white women have to get real about their own complicity in upholding the status quo.
But is that what Burnham is doing? And is Burnham — a self-described rich white guy — the one to do it?
The most surprising part of “White Woman’s Instagram” is the bridge:
“Her favorite photo of her mom / The caption says: / ‘I can’t believe it / It’s been a decade since you’ve been gone / Momma, I miss you / I miss sitting with you in the front yard / Still figuring out how to keep living without you / It’s got a little better, but it’s still hard / Momma, I got a job I love and my own apartment / Momma, I got a boyfriend, and I’m crazy about him / Your little girl didn’t do too bad / Momma, I love you. Give a hug and kiss to Dad’”
Suddenly, the shallowness of the woman’s Instagram takes on a different timbre. Her project of creating Heaven via a stylized recreation of her life is not the work of an airhead, but of a person experiencing real grief. Social media is stupid: it’s pictures of what you had for breakfast, and silly memes with nefarious origins, and inane chatter, and it’s all cut through with a real, human yearning for connection and belonging.
Burnham’s project seems to be exploring that primary tension. Is the white woman in the song any more ludicrous than the Twitch streamer later in the special who blithely thanks his viewers for subscribing to his channel while he plays Burnham’s life as a video game, making the protagonist cry and play the piano?
On the other hand, the broader context of Inside can’t be ignored. This is a special made in a culture whose default settings are racism and misogyny. Because of their “Karen” status, white women as a group make convenient targets. How many young comedy fans are chuckling at “White Woman’s Instagram” because they think the women on Instagram are narcissists? How much of their laughter is infused with that particular flavor of vicious glee reserved for kneecapping women as a whole?
Women on Instagram, even the white women Burnham parodies so effectively, are not, by and large, narcissists. It’s not narcissistic to post a picture of yourself making a duck face under flattering light. Women are taught from an early age that the most important thing about them — more important than their brains or their abilities — is how well they meet the shifting and impossible standard of the male gaze. It’s a fool’s errand, and around middle age, a lot of women get wise to the con and switch to easily maintained hairstyles and comfortable shoes. (Hey! That’s about the same time when society renders women invisible! What a coinkydink!)
And while Burnham offers his protagonist a moment of soulful introspection, the song doesn’t end there. After the bridge, the critiques continue to draw blood:
“Incredibly derivative political street art / A dreamcatcher bought from Urban Outfitters…”
Burnham’s whole comedy career is about having it both ways. His speciality is espousing a sincerely held belief while simultaneously mocking sincerely holding any beliefs. He indulges his own self-hatred, cleverly condemning himself both for his sincerity and his own mockery of his sincerity.
A classic example of this doubleness shows up later in the special, in the song "Problematic." In it, Burnham laments the comedy of his younger days, with its casual homophobia and thoughtless racism:
“I started doing comedy / When I was just a sheltered kid / I wrote offensive shit / And I said it / Father, please forgive me / For I did not realize what I did / Or that I’d live to regret it… / Is anybody gonna hold me accountable?”
I believe that Burnham is sincere in his plea; he’s disavowed his early material in a host of interviews before this special aired. But also: while singing this song, declaring himself “problematic,” he stages himself as a literal Jesus on a cross, intercut with thirst-trap pictures of himself working out. It’s a sly bit of having it both ways. If you agree that Burnham needs to own up for his earlier work, well, maybe Bo agrees with you! And if you think “cancel culture” has gone too far, turning mere comedians into martyrs, well…maybe Bo agrees with you!
Comedy is based in agreement; Bo Burnham, or any comic, is only as successful as their arguments are persuasive. So playing both sides of the argument is a skillful bit of maneuvering. It lets you have your cake and eat it, too.
It’s the kind of trick that only rich white guys can get away with, because only white guys are granted the status of the Everyman. It’s their world. The rest of us — the seventy percent of Americans who aren’t white guys — have to live in it.
|
https://medium.com/@megangogerty/lets-talk-about-white-woman-s-instagram-b1a4e48bac49
|
['Megan Gogerty']
|
2021-06-10 23:04:00.990000+00:00
|
['Feminism', 'Instagram', 'Bo Burnham', 'Comedy', 'Misogyny']
|
Who Puts the Performance in Performance Sportswear?
|
The Unknown Brains Behind The Brands
Taiwan deserves to be known to consumers globally for its leadership in functionally advanced and eco-friendly textiles. However, Taiwanese suppliers remain content to stay behind the scenes, so long as brands such as Nike and Adidas continue to bring big orders. In what disadvantageous position does such a mindset leave Taiwan over the long term?
By Mark Stocker
Opinion@CommonWealth
Outdoor sports are all the rage in Taiwan today. Interest in running, cycling, hiking, and camping has experienced tremendous growth in recent years. Last year alone there were more than 600 running races hosted on the island — that’s the equivalent of a staggering 1.7 events per day! Taipei City is now home to hundreds of micro-gyms and running clubs, and the city’s bike paths are overflowing with cyclists. This interest in outdoor sports has been matched by growing demand for branded performance sportswear. Little known to many, however, the technology and know-how that puts the performance in performance sportswear originates from right here in Taiwan.
The Technologies Behind Closed Doors
This week the textile industry made its way to the Taiwan World Trade Center for the TITAS trade event, Taiwan’s annual global textile trade show. Industry insiders from around the world met with Taiwan’s manufacturers, equipment makers, and innovators to discover the latest in textile innovation from the island. Many of the booths had their newest wares on display, but the truly innovative technologies weren’t out for public viewing. Instead, these technologies remained behind closed doors, available to only the most important industry customers. For Taiwan, this proclivity for closed-door promotion isn’t limited to the trade show floor. Consumers around the world remain equally in the dark about textile innovation coming from Taiwan.
Over the past three decades, the island of Taiwan has transformed itself from a maker of low-cost fabrics into a global powerhouse specializing in functionally advanced textiles.
Taiwan’s Ministry of Economic Affairs notes that more than 70% of the world’s outdoor sportswear — the largest application for functional textiles — is currently made using performance textiles from Taiwan.
Taiwan has more than 4,300 textile manufacturers employing over 140,000 people, and total production value reached NTD409.3 billion (USD13.5 billion) in 2015.
Customers of the island's textiles include the world's top brands. Everyone from Nike to Adidas to Under Armour sources their functional textiles from Taiwan. They do so because Taiwan's depth and speed of innovation are critical to bringing the next generation of performance sportswear to market each new season. These brands also choose Taiwan for the country's eco-friendly production capabilities, which meet and exceed increasingly stringent demands for ethically sourced and sustainably manufactured fabrics.
The Innovation of Eco-Textiles
Taiwanese manufacturers not only lead the world in fabrics that keep sports enthusiasts cool in summer and warm in winter; the nation is also increasingly the de facto supplier of "eco-textiles". Taiwan's textile manufacturing lays claim to innovative textiles made from a variety of recycled and sustainable materials. These include fabric fibers made from oils derived from coffee beans and even trash reclaimed from the world's oceans. Such achievements are a testament to the ingenuity of the Taiwanese and to the capacity for Taiwan to lead the world toward more sustainable textiles and textile manufacturing practices.
The Downside of Keeping A Low Profile
I, however, remain concerned that Taiwan’s leadership in textile innovation could be short-lived if the nation fails to build a reputation among end-consumers. Despite Taiwan’s achievements, few consumers realize Taiwan’s role in the production of their favorite sportswear. When one buys an outdoor hiking jacket or a pair of yoga pants, the brand on the outside signals the product’s US or European heritage, and the label inside declares the country of production; nowadays that’s likely to be Vietnam or Bangladesh. The name Taiwan, however, fails to appear at all, despite the fact that the product’s material and much of the production innovation originated from Taiwan.
My experience tells me that Taiwanese suppliers aren’t all that bothered by the lack of publicity. Many of them remain content to stay behind the scenes, so long as brands such as Nike and Adidas continue to bring big orders. Such a mindset, unfortunately, leaves Taiwan in a disadvantageous position over the long term.
A lack of reputation among consumers diminishes manufacturers’ bargaining power.
Furthermore, this absence of publicity at the consumer level exposes Taiwan's manufacturers to unnecessary risks, should the world's leading brands decide to move their orders elsewhere in search of lower prices. Most damaging of all, however, is that despite Taiwan's mastery of functional, smart and eco-textiles, upstart sportswear brands from the island are unable to tap a national reputation for textile innovation and green production to grow their own brands. It's a reality that continues to keep Taiwan over-reliant on contract manufacturing.
Can Taiwan afford to miss this opportunity?
The island should be known to consumers globally for its leadership in functionally advanced and eco-friendly textiles, much as we recognize Italy as home to the world's best leather and New Zealand as home to the best wool. Consumers should be as confident in buying outdoor sportswear with textiles from Taiwan as they are in buying eyeglasses with lenses made in Switzerland.
What Taiwan needs is a strategic, long-term promotional campaign to promote the nation’s stewardship in functional and eco-friendly textile innovation and manufacturing practices.
This goes beyond the "Think Taiwan for Textiles" campaign sponsored by the Taiwan Textile Federation, which focuses its marketing solely on industry insiders. Instead, the campaign should target end-consumers in major developed markets; those who make the vast majority of functional sportswear purchases and, according to a recent Nielsen study, are "willing to pay more for products and services that come from companies that are committed to positive social and environmental impact". These ideals are in near perfect alignment with the offerings of Taiwan's textile manufacturers. Taiwan is in as good a position as any nation to build a reputation as the standard-bearer for eco-textiles and sustainable manufacturing practices.
Such a reputation would benefit Taiwan on many levels. It would contribute to a virtuous cycle, where a positive reputation among consumers would lead to further growth, which in turn would result in greater investment, leading to more innovation. Not only would such a campaign help distance Taiwan from rivals such as Korea and China, it could also establish a market position that could be leveraged over decades, similar to Italy's long-held reputation for quality leather. Finally, such a standing would give young Taiwanese brands a solid foundation upon which to build their own brands as the Taiwan label becomes synonymous with innovative textiles and global-leading green production practices.
As a brand consultant, I never fail to encourage our clients to build a clear set of associations for their brand. These associations serve to build positive differentiation for their products and their company.
Taiwan is one of the few countries in a position to own the association of the world’s leader in functional textile innovation and as the epicenter of sustainable textile manufacturing. It goes without saying that such associations would be desirable associations for any nation. It’s time to move beyond the closed-door promotion of Taiwan’s textiles, and to reveal to the world’s consumers the Taiwanese innovation behind their favorite brands.
(This article is reproduced with the kind permission from Mark Stocker. It presents the opinion or perspective of the original author, which does not represent the standpoint of CommonWealth Magazine.)
|
https://medium.com/commonwealth-magazine/who-puts-the-performance-in-performance-sportswear-886e4c601d1e
|
['Commonwealth Magazine']
|
2019-03-18 16:46:08.167000+00:00
|
['Manufacturing', 'Sportswear', 'Branding', 'Taiwan', 'Textile Design']
|
The People We Were
|
Marvin returned to Yaoundé two days after Robert was discharged from the hospital. Robert drove him to the Garanti bus park and lingered until the bus was ready to take off. He hugged his brother close and tight, promising to visit him as soon as possible.
“Just make sure you’re completely healed before you come,” Marvin teased gruffly. “I don’t have a car to carry your half dead body to the hospital again.”
This earned him a playful shove which he laughed off. Robert watched the bus disappear into Douala’s early afternoon traffic before driving home. To his surprise, Tatiana sat in his living room with his parents when he returned. She gave him a worried once over, before coming to wrap her arms around him. He hugged her back with mixed feelings of gratitude and guilt.
“Thank God you are well,” she whispered.
“Thank you for coming,” he whispered back.
She insisted that he and his parents sit down so she could serve them the food she brought.
“You have all been at the hospital,” she explained, ladling achu soup into the hole she had expertly made in the pounded cocoyams. “Let someone who was not there take care of you now.”
Even his mother seemed moved by her thoughtfulness.
Later that afternoon while his parents napped, Robert sat down next to her on the couch and took her hand in his.
“Thank you for still caring, Tati,” he stared down at their entwined hands, thinking about the many times he had resented her for not painting her nails in the delicate shade of creamy pink that Fese favored. Guilt swelled over gratitude.
“No wahala, Bobby. I’m glad you are feeling better.”
“Me too. I’ve never been so sick in my life!”
“Bertrand told me that you almost died, that the doctors didn’t know what was going on. I was so scared. Did they ever find out what you had?”
“All tests came back negative and then I got better.”
“Wow…” Tatiana said. “So it came and went, just like that?”
Robert started to say yes then hesitated. It had not come and gone just like that. Many things had happened before, during and since.
“With some spiritual intervention,” he replied still staring at their hands.
“Look at you talking about spiritual intervention!” Tatiana chuckled. “That illness really got to you.”
“I guess it did, Tati,” Robert said with a sigh. “I guess it did. My outlook on life is definitely different now. I feel like there is so much more to life than all the things I have been trying to pursue since I came back to Cameroon. Not that there is anything wrong with what I have been doing. I just feel like there is a lot I have been taking for granted and even more I have not bothered to explore, especially here in Cameroon, about the people, the cultures, what we believed before colonialism and Christianity and all these things which have changed the way we live and relate to each other.”
“Near death experiences always change people’s outlook on life,” Tatiana mused softly.
“This all started before I even got sick,” Robert explained. “When I was in Buea to cover Senator Effoe’s funeral I met this child from one of those villages in the Manyu division that soldiers burnt. Talking to her and hearing about her life really made me see that there is the Cameroon we know and the Cameroon we hope can be and that those two Cameroons are sitting on top of a Cameroon made of all the people who were in this part of the world before it became Cameroon — people whose worldviews, aspirations, and values are still influencing the way we live now. The question is how we are all going to move into the future with this new identity we have been given. I look at myself, for example, and I know that I am from Oku. But there is so much about being an Oku man that I still have to learn and these things I don’t know have already profoundly impacted my life in ways that I cannot even begin to explain, talk less of reconcile with what I think I know about the world. I feel like I have become a child again, and the world is this mysterious place that I have come to, and I have all these different groups of people from whom I can learn. I feel like someone unraveled my sense of self into this big wide space capable of holding so much more.”
“That is really deep, Bobby,” Tati squeezed his hand.
Something in her voice made him look up at her. She was smiling at him, but her eyes were watery with confusion and pain.
“Tati…” he began.
“What happened to us, Bobby?”
The anguish in her whispered question sliced through him. He closed his eyes in guilt.
“I’m so sorry, Tati,” he whispered back.
“But that’s the thing,” tears spilled down her face as she spoke. “I don’t know what you are sorry for. I don’t know what happened or what changed. You never gave any indication that something was wrong, then you just ended things. Next thing I hear is that you are in the hospital sick and possibly dying.”
“It was very selfish of me to do what I did to you during our relationship, Tati. I was only thinking about myself. I didn’t call you when I got sick because I didn’t think it was fair and then I was too sick to do anything.”
“What did you do, though? Why did you end things so suddenly? Were you cheating on me with someone else?”
Robert felt like he had, given the way he felt about Fese, but couldn’t find a way to say it.
“I think I spent more time wanting you to be someone else rather than seeing and appreciating you for who you are.”
She gave him a long look after he spoke.
“Is it Fese?”
Robert stared at her in shock. How could she possibly have known? He started to deny it but as he opened his mouth to speak, Tatiana’s eyes narrowed, and her jaw clenched. Robert had come to know Tatiana well enough in the three years they spent together so he knew the clenched jaw, narrowed eyes, and steady gaze she was leveling on him at the moment were how she prepared herself to hear lies from someone she trusted to tell her the truth.
“I’m sorry, Tati.”
“I always suspected you had some feelings for her.”
“Nothing happened between us, I swear.”
“I know, Bobby. You’re not that kind of guy.” Tatiana said softly.
Neither of them said anything for a few seconds but Robert could hear the wheels of her mind turning from where he sat.
“You broke up with me when she and Bertrand got engaged!”
Her shock at the realization made Tatiana’s voice rise slightly.
“I’m so sorry, Tati,” he said again, thinking of Bertrand’s suggestion that he just be honest with her. “I wish I could go back and change things.”
“I’m sorry too,” she replied, sadly. “I wish things were different.”
“I wish they were too, Tati. I have a lot of soul searching and self-accountability to work through.”
“Sounds like you do,” she replied, standing up. To her credit, her tone was not nasty, just very matter of fact. Robert stood up with her.
“Thank you for coming to see me and for bringing food. My father really likes achu.”
Her eyes dulled with some gloomy emotion for a few seconds then she smiled. It wasn’t her normal happy smile, but it was genuine. It struck Robert then that Tatiana had been holding out hope that things could be mended between them.
“It was good to see him again.”
He helped her pack up the bowls she brought food in, which his mother had washed and set out to dry.
“Let me drive you home,” he offered.
She declined firmly so he walked with her to the road and insisted on paying her fare. She gave him one last hug before entering the taxi.
“Thank you for everything, Tati,” he whispered into her ear.
“You’re welcome, Bobby.”
(The People We Were is an excerpt from a longer work of fiction I am currently working on.)
|
https://medium.com/@mythologicalafricans/the-people-we-were-47e2863426b1
|
['Mythological Africans']
|
2021-08-11 21:40:52.702000+00:00
|
['Lovestory', 'Cameroon', 'Fiction']
|
Desperately Seeking Validation
|
You can please some of the people all of the time, you can please all of the people some of the time, but you can’t please all the people all of the time — Abraham Lincoln
People pleasing sounds like it should be a good thing, an earnest pursuit, but actually it's a behavioural kink formed in childhood, occurring when a child becomes overly compliant in meeting their parent's needs in order to gain love, approval, and acceptance.
In other words, the need to please is borne out of insecurity, low self-esteem, and a desperate need to gain power, love, approval, and acceptance.
In adulthood, this learnt behaviour can manifest itself as an insidious manipulation tool, whereby we try to become all things to all people, and are reliant on controlling the opinion of others, in order to manage our own insecurities and self-love deficit.
In simple terms, people pleasing equates to saying Yes when inside we're screaming No; to habitually prioritising others over ourselves; to violating our personal boundaries; to abandoning our moral code; to bypassing our integrity; and to acting disingenuously.
There’s only one reason we contort ourselves to comply with the expectations of others in this crude way; it’s because when the validation comes, the adrenal hit feels good, like a drug, and the opposite makes us feels worthless, shameful, ‘not enough’.
In a romantic context, people pleasing manifests as ‘love bombing’ — whereby somebody cycles in and out of your life like a mini tornado, showering you with gifts, acts of service, attention, compliments, and affection, only to disappear as soon as their needs have been met.
This 'fake love' fix can severely test your discernment, and confuse the hell out of you, as you are picked up and put down like a toy, erroneously given false hope for something more, but never getting it.
Ultimately, this disingenuous way of living is both exhausting and toxic, leading to endless turmoil in relationships, and huge disruption to one’s well being.
The way out of this maze is simple: embrace personal integrity; be yourself, warts and all; and care less about what other people think. Remember: you can't please all the people all of the time.
|
https://medium.com/change-becomes-you/desperately-seeking-validation-238f1db7c25c
|
[]
|
2020-12-17 10:22:48.886000+00:00
|
['Psychology', 'Love And Dating', 'Self Improvement', 'Personal Growth', 'Relationships']
|
The cat
|
The Cat
Image By Author
Oh well you found your place,
Again. The unusual space.
The house is yours and nothing’s mine.
You are the cat, the house’s boss and wizard.
I strive to feed your needs
Yet nasty looks you pay me with…
I snuggle you a home,
And all you do is take my bed. And couch.
I work and bust my ass all day
And home I come to hear your word
I’m not enough! Not what you need!
How dare I be such little help? Alone, I left you again.
I chop your meat to feed you well,
I fluff your pillow and your bed,
I play with mice and thread and yarn
You look at me, my cat, what have I done?
Oh cat, you’re such a thing of joy,
The torture of real love. Those eyes… and claws.
Yet there you are in my darkest days,
One paw on my hand, a purr for my ear and heart.
You are my cat, the weird, insolent.
and oh so free of me. Yet one and loyal still.
How can this be? I fake disinterest, but then?
One meow and I’m at your paws!
|
https://medium.com/blueinsight/the-cat-549b1c266bda
|
['Moni Vazquez']
|
2020-12-20 23:15:38.164000+00:00
|
['Poetry', 'Blue Insights', 'Cats', 'Poetry On Medium', 'Pets']
|
The CUDOS Token Is Listing In January🥳
|
I think distributed cloud compute and tokenisation will become a standard — Jorg Roskowetz — Head of Blockchain Technology at AMD
We are super excited to announce that the CUDOS token will finally be publicly listed and available to the public in January 2021! We know that as a community you guys have been itching to get your hands on some CUDOS tokens!
We will be announcing all of the details regarding which platforms you can use to get your hands on some tokens very soon, so please keep an eye out! 👀
The launch of the CUDOS token will represent another step forward in our mission to decentralise the cloud 🌩️ as well as our mission to take the World Wide Web into Web 3.0 🌐.
To find out more about what we’re currently working on please visit our website or click on one of the social media links below.
|
https://medium.com/@cudostoken/the-cudos-token-is-listing-in-january-f911e592fa
|
[]
|
2020-12-18 16:32:53.900000+00:00
|
['Cudos', 'Listings', 'Token', 'Blockchain', 'Bitcoin']
|
Greek Speak — The truth about advertisements
|
I have said many times that I avoid using absolute terms for a lot of reasons. They invite argument because there are exceptions to rules. “Smoking is bad for you and you will get cancer and die from it!” “Bullshit! My grandmother smoked two packs a day and lived until she was 92 and died in peace!” Also, to say something “always, only or never, etc.” is to say that there is no other way it has ever been or could be. That is a sorely limited perspective which causes more damage than good when disseminated to the public such as I find in the endless memes on Facebook.
This will be an exception to my rule. Simply stated, at the heart of all (there's my absolute term) advertisements is the suggestion that as we are now, we are not good enough. This plays on primal instincts to better ourselves for the purpose of evolving and propagating our species and to compete for top mating rights. The way advertisements do this is by appealing to one, two, or all three of the following, but usually just one:
1. Pathos — our emotional self
2. Ethos — our sense of ethics
3. Logos — our sense of logic
I don’t think I need to explain those. I want to explain why I think this is true. Advertisements are created and designed to sell products and services, sometimes they act as a public service announcement and so on, but primarily, to sell stuff. Whether ads are uplifting or make us feel poorly, they exist for the purpose of enticing us into buying a product or service. That is the “definition” of an advertisement (imho).
This reduces us to mere consumers, and that's shitty. Those who are selling the products and services must do so forever, meaning we consumers must keep buying things forever or else the economy could collapse. That isn't a good thing from any angle. But what is worse is the idea that we, as we are now, are not good enough! WTF? Who says so? What are they basing this absurd idea upon? What gives them the right to judge and serve me my own image of self-worth and then tell me that I am not good enough unless I buy their shampoo and look like the model in the commercial? You know what? If we start feeling that we are good enough, we will stop buying shit we don't need! Whether we all wake up instantaneously (which I seriously hope does not happen) or gradually over time, we will evolve from a consumer/manufacturer-based society to whatever comes next. I don't know what that would be because it hasn't happened yet.
There is a whole lot more to this consumerist concept that I started writing about 13 years ago when I wrote my senior honors thesis. At the time I wrote it, I had the ideas, nascent though they were, but I lacked the communication skills I needed to really express what I was trying to say. I think I just did. Let me rephrase to be absolutely (just broke my rule again) clear on what I am telling you. You and I, as we are now, are good enough! I don’t mean stop everything because we have reached a point of perfection. I mean that we don’t need to keep buying a bunch of shit we don’t need in an effort to better ourselves.
Let’s work with what we have within the rules of our system. Let us join our minds and hearts, together for the purpose of promoting the happiness and well-being of all. Two heads are better than one and there is power in numbers.
Keep thinkin’ keep feelin’
I find that appealin’
Dont run and hide
Take a look inside
You’re on my mind
My body lost in time
Who am I, who is me?
Stop askin’ and create
Who YOU WANT to be!
Then life can be great
I manifest with my voice
It all depends on my choice
The future is here
Shit, its already gone
The root of all evil
Ain't money, my friend
I finally realized
It’s ignorance in the end
I don’t want a big house
I got better shit to do
Then cleanin’ all them rooms
That I never even use
What's that you say
Get a maid…
I don’t need that shit
I do things in my own way
(One of my little raps)
|
https://medium.com/@karriekent/greek-speak-the-truth-about-advertisements-ebc16d36ba3
|
['Karrie Kent']
|
2019-10-24 12:56:32.749000+00:00
|
['Philosophy', 'Conflict Theory', 'Marx', 'Movie Quotes', 'Sociology']
|
America’s Best States for Craft Beer May Surprise You
|
What is America’s best beer state? The answer lies in finding the balance between quantity and quality.
Quantity
A lot of articles approach this question by looking at sheer presence of beer and number of craft breweries. These data points rarely yield any surprises. Colorado and the West Coast famously maintain a significant brewery presence. California in particular has by far the highest total number of breweries with 958 — by comparison, New York comes in at #2 with only 460 breweries.
It is also worth noting that Texas and Florida, two states not traditionally thought of as beer strongholds, have seen an incredible explosion in craft beer over the last decade catapulting them up to the top 10.
Notably, the four states mentioned above are behemoths in terms of population. Given how mainstream craft beer has become, it’s no surprise that population is starting to play a larger role in determining the presence of craft breweries in any one state.
A potentially more informative way of approaching the true impact of beer in a state’s culture nowadays is by adopting a per capita approach.
Viewed from this angle, while the Pacific Northwest and Colorado still live up to their respective reputations, northern New England is the true breakout star. In fact, Maine and Vermont pulled in the two highest scores overall, and Portland, ME has the most breweries per capita of any city, with about 18 per 50,000 people.
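The per-capita normalization described above can be sketched in a few lines of Python. The brewery counts and populations below are illustrative placeholders chosen to echo the article's rankings, not the article's actual dataset:

```python
# Hypothetical sketch of the per-capita approach: normalize brewery
# counts by population (here, per 50,000 residents, the same unit the
# Portland, ME figure uses). All numbers are rough placeholders.
state_data = {
    # state: (brewery_count, population)
    "California": (958, 39_500_000),
    "New York": (460, 19_500_000),
    "Maine": (155, 1_350_000),
    "Vermont": (70, 625_000),
}

def breweries_per_50k(count: int, population: int) -> float:
    """Breweries per 50,000 residents."""
    return count / population * 50_000

# Rank states by the normalized score rather than the raw count.
ranked = sorted(
    ((state, breweries_per_50k(c, p)) for state, (c, p) in state_data.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for state, score in ranked:
    print(f"{state}: {score:.2f} breweries per 50k residents")
```

With even these rough inputs, the small northern New England states leapfrog California and New York, mirroring the article's finding that raw counts and per-capita presence tell very different stories.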
Quality
Taking a similar approach to quality also yields some interesting results. Using RateBeer’s annual ranking of 100 Best Brewers in the World, California expectedly dominates in terms of sheer count of breweries that made the list, with a strong showing from Florida, Michigan, and New York as well.
However, with total number of breweries factored in, Delaware absolutely dominates solely on the presence of world-famous Dogfish Head — a highly influential brand that makes up one of its only 32 breweries. Perhaps an even more unexpected twist is the top marks for Missouri, a state that manages to land three out of its 150 breweries on the list, including Maplewood’s Side Project Brewing at an impressive #2 overall.
Balancing both of these factors together, the states that have the best overall balance of per capita brewery presence and brewery quality are (in no particular order):
Maine
Alaska
Vermont
Though perhaps aided by their relatively small populations, the quality and quantity these states manage is nonetheless impressive. While California is undoubtedly still a great place to grab a beer, other parts of America have now earned their spot at the table.
|
https://medium.com/@ErinBrookins/americas-best-states-for-craft-beer-may-surprise-you-15315a657eb9
|
['Erin Brookins', 'Wilcox']
|
2021-08-09 20:52:11.637000+00:00
|
['Data Visualization', 'Craft Beer', 'Data', 'Beer', 'Maps']
|
How to Be Happy?
|
How to Be Happy?
Image by Shahariar Lenin from Pixabay
What is happiness? How much does it take to experience this condition? How to choose a key for it and what does it depend on? How to be happy, despite all the vicissitudes of life?
“Happiness is a person’s state that corresponds to the greatest inner satisfaction with the conditions of the being, the fullness, and meaningfulness of life, the realization of the human purpose”.
According to the axiomatics of Yoga, happiness is a natural state of any living creature!
Many people think that everything is relative and that to experience happiness, one must first experience pain and suffering, as the proverb goes: “There would be no happiness, had unhappiness not helped.” Is that so?
Yoga says that unhappiness and pain do not exist. There is only the energy of enjoyment. Pain is pleasure split in two and turned against itself. That is, we suffer when we want two exactly opposite things at the same time.
Suppose we enjoy eating cake, and we also enjoy having a slender body. But when we eat cake, our body gains weight. There is pain. Our two pleasures have collided, creating an internal contradiction.
As a result of our own ignorance, we turn our natural pleasure into suffering and pain. However, there is a positive role for suffering. Acting like a “whip”, it makes us evolve, move, hone our minds, reconcile opposing trends, take some steps towards our happiness. Yoga claims that starting from the moment we receive the human body, we can get rid of any suffering and return to the natural state of happiness.
Where Consciousness Is Directed, Energy Flows
The process of reconciling our opposing tendencies is quite lengthy but fascinating. While we are on the path to unlimited happiness, Yoga encourages us not to despair over failures but to look at everything in a positive light.
In Yoga, there is the principle of Santosha: contentment. It encourages us to rejoice in what we have and to see the positive in every moment of our life. Despair, discontent, and complaints about fate are not the most attractive way of life; they create additional suffering and problems.
There is a wonderful parable about a sage and his disciple. The sage said to the disciple: “Look around the room and remember all the brown objects.” The student did so. Then the sage asked him to close his eyes and name all the blue objects he had seen in the room. The student was indignant: “How so?! You told me to remember the brown objects; I didn’t notice anything besides them.” When he opened his eyes, to his surprise, he discovered quite a few blue things in the room. The sage said: “It is the same in your life: if you look for the negative, you will see only the negative, without noticing anything good.”
Yoga offers us practice for every day: celebrate for yourself at least three positive points. It can be a trifle, or something significant. Even in difficult situations, this practice helps to switch our consciousness to a positive wave.
“When you arise in the morning, think of what a precious privilege it is to be alive: to breathe, to think, to enjoy, to love.” (Marcus Aurelius)
Any negative experience can be overcome
A person periodically experiences feelings such as jealousy, envy, greed, resentment, anger, and fear. It is not always possible to restrain their expression and remain content at every moment in time.

Yoga says: “Do not fear negative influences from the outside as much as your own inadequate reaction to them. There are no bad or good emotions; there are only appropriate or inappropriate expressions of them.”

You need to understand that expressing negative emotions interferes with our happiness. An excellent tool to help you get through any “acute life moments” is Kriya Yoga. This is dynamic Yoga with a focus on internal sensations. During the practice, a person observes his thoughts and experiences and becomes detached from them. As a result, one easily switches to positivity and joy.
What hinders our happiness?
Sometimes it seems that happiness depends on external factors. We catch ourselves thinking: “Now if I had a huge house, an expensive car, a summer house on the island, I would definitely be happy!” We make our happiness dependent on others: people, circumstances, objects, etc. This situation makes happiness fragile and unstable.
To find the strength in ourselves and realize that the source of our happiness is ourselves, the practice of Yoga will help.
During yoga classes, our well-being and emotional background improve. This is due to an increase in the amount of our Prana or vitality. But often out of habit we spend this potential on things that do not lead us to our goals: empty conversations, negative thoughts, useless activities.
Inappropriate expenditure of our forces and energy is like water flowing through holes in a barrel: our positive attitude leaks out along with it. Therefore, it is worth analyzing your life and “patching the holes”, that is, abandoning the habits that do not make us happier or healthier and that divert us from our goals.
Yoga will help to get rid of negative habits more easily and develop positive ones more quickly
Happiness is in every one of us and is our natural state. By applying the principle of contentment, we can see life in a joyful light. And the practice of Yoga will gradually help us get rid of the inappropriate reactions and habits that interfere with our happiness. Remember: our happiness is in our hands, and if you want to be happy, just let it be!
Good practice and be happy!
|
https://medium.com/illumination/how-to-be-happy-31be8c5dad67
|
['Koma Live']
|
2020-12-22 16:22:31.078000+00:00
|
['Life', 'Happiness', 'Love', 'Yoga', 'Self Improvement']
|
I Am The Wallflower Class Valedictorian And Tomorrow My Graduation Speech Is Going To Turn Some Heads
|
Photo by Vasily Koloda on Unsplash
Dan Stacy? I think he’s in my AP bio class.
Dan Stacy? Isn’t that the guy who built the robot that could read the PA announcements?
Dan Stacy? Didn’t he win some big award for debate or something?
Yes, yes, and yes, Michelle. I did all those damn things and then some. I’ve been quietly dominating this school for the past four years, showing absolutely no mercy, academically speaking, all while balancing extracurriculars and remaining dedicated to our community at large. Is 2,000 hours of community service a lot? Eat your heart out, Malcolm Gladwell.
Tomorrow is the culmination of my storied high school career and I intend to go out with 10 octaves and a couple of high fives. You see, while there has been no shortage of scholastic triumphs over the past four years, my social life has left something to be desired. No offense, Laurie. So when it comes time to give my graduation commencement speech tomorrow, I, Daniel Phillip Stacy, am going to leave it all on the table. Hey, I guess he was pretty cool after all. Damn, Dan Stacy can get it. Funniest speech ever. Right on all three counts, Tiffany. So with the vim and vigor I might typically attack a fundraising event for the swim team or an all night cram sesh on Candide, I’m going to straight up own this speech.
How do I open this thing? Easy. I start with a quotation, but nothing tame or lame like a line from Mary Oliver’s The Summer Day. Nah, I hit them with something edgy and divisive to wake them up and shake things up. I’m thinking either something contemporary like a lyric from a Megan Thee Stallion joint or I go old school and anti-establishment with a line from an early Sam Kinison record. Either way, they’re shocked into submission. What’s this guy doing? What’s he going to say next? Sit back, Angela. You’re in good hands. From there the choice is simple. I do some light and respectful roasting of the staff and faculty. Just playful and fun, nothing that’s going to hurt any feelings or cause any ill-will. I might take a jab at Mr. Willits for drawing up new blocking schemes for the football team during downtime in Algebra or make a dig about Ms. Lynch being chronically late for her own classes. Just silly stuff that everyone recognizes, but few have the courage to say out loud. I will not, however, make any jokes about Mr. Dandrige and Mrs. Vickers’ affair. After Mr. Dandrige’s wife, Leila, drew that big scarlet letter in lipstick on the windshield of Mrs. Vickers’ Volvo last semester, the cat’s been out of the bag on that one. That’s low hanging fruit with real life implications. I’m looking for laughs, not trial separations.
After the roast is out of the oven and I’ve got all 342 seniors eating out of the palm of my hand, I’ll go earnest with things for a bit, but not too earnest. I’ll talk about how this senior class has been like a tight-knit family these past 12 years (not really true but it’s an easy analogy) and share some fond memories from football games, talent shows, and dances. You know, the shit people get really sentimental about as time goes on. I’ll share somewhere between 4–7 memories, basically however long it takes to have the entire room in tears. Once I’m there, I’ll return to the family thing and I’ll say that no matter where you go and what you do, you can always come home to your family. I mean, I doubt I’ll come back too often, but people like hearing that they can. Wow, he’s really poetic. That was beautiful, Dan. Thanks, Becky. I meant every word of it.
From there, I’ll make a joke about when Ray Strumster’s pants fell down during the Homecoming game last year. It’s easy, but its inclusion is basically expected whenever anyone makes a major speech about our class. And Ray’s a good sport about it. Then I’ll close out with a choice quote. I’ll probably opt for “Lose Your Dreams and You Will Lose Your Mind” from The Rolling Stones’ classic “Ruby Tuesday” or some other populist lyric about pursuing your goals. And at that point, odds are, even I will be won over by the speech. All members of the senior class, faculty and staff, and family members in the audience will rise to their feet and thunderous applause will echo throughout the auditorium. I’ll graciously nod in recognition as I step down from the podium, knowing that I won hearts and minds and cemented my legacy.
After the ceremony, I’ll put on my tinted aviators and pose for photos with the other graduates. Walking to my car, I’ll hear a voice from behind me.
Great speech, Dan. You brought down the house!
I’ll thank him.
You coming to the party tonight?
I smile.
Yeah, Jeff. I think I will.
|
https://medium.com/@mradamdietz/i-am-the-wallflower-class-valedictorian-and-tomorrow-my-graduation-speech-is-going-to-turn-some-2d2f4b657e9
|
['Adam Dietz']
|
2020-12-04 01:53:02.101000+00:00
|
['High School', 'Graduation', 'Comedy', 'Humor', 'Satire']
|
How visual health histories can help military veterans
|
Work history was often insightful
As I asked the veterans about their work responsibilities in the military and over the course of their post-military career, I was impressed by their unique skills. Many described themselves as ‘fix-it guys’ with a gift for building things and understanding machinery. I normally don’t map out work history with my clients, but I saw how work was central to each person’s identity and story, so I included it on all 10 timelines. Dr. Rustad observed that, in his conversations with the veterans, work history gave them a sense of pride. About one veteran with severe depression, he wrote:
“The timeline helped me learn more historical facts about patient which were helpful in understanding his illness. For example, he was awarded a high honor in the service which showed his intelligence.”
Work history was often tied to mental and physical health. I noticed many work-related chemical exposures in people who later had memory loss or other possibly-related health problems. Here is a small excerpt from one person’s timeline — after a career working on oil rigs around the world, he was experiencing seizures and memory and cognitive issues.
About another veteran’s timeline, Dr. Rustad said:
“I enjoyed the intersection of the social/work history, with the medical history (example: chemical exposure came together with migraines.)”
And a third:
“Patient confirms accuracy of complex medical history. Interesting to possibly tie-in Agent Orange and Lymphoma. Interesting to see his thoughts about guilt surviving the Vietnam experience. Helpful to see on timeline the connection between chemo and subsequent cognitive decline.”
Of course, it’s not always possible to directly link chemical exposure to current health problems; but for those seeking answers about why this was happening to them or their loved one, these exposures offered one possible explanation and helped bring some semblance of logic and order to their life story.
The type of work someone chose was sometimes an indicator of their state of mind; for example, one veteran chose to be a long-haul trucker so that he could isolate himself from other people, a symptom of his PTSD. This was a new insight for him to discuss with his doctor.
By mapping out work history, we could also more easily see gaps where the veteran may have been struggling with mental or physical health issues and been unable to work. The person below had a three-year gap in work due to health issues:
Side note: as these skilled veterans shared their impressive backgrounds with me, it was difficult to see them struggling to contribute to society because of their memory and cognitive issues. I could see that many of them were experiencing a crisis of meaning and purpose, unable to do the work that had brought them joy and satisfaction in the past.
|
https://medium.com/pictal-health/how-visual-health-histories-can-help-military-veterans-d60963f6cf2d
|
['Katie Mccurdy']
|
2020-01-09 16:54:00.715000+00:00
|
['Veterans', 'Design', 'Data Visualization', 'Healthcare', 'Military']
|
How Do I Love Thee?
|
Photo by Tanya Trofymchuk on Unsplash
How do I love thee?
Let me count the ways that I have broken,
self-neglected for your every need
the ways that I have held my breath
to the subsiding tide of your rage
I love thee with a love so desperate
born out of the agony
of a life unloved
made into a threatening kaleidoscope of
unending need
I love thee with all of me
so much, so that I do not get any of me
and if I might carefully confess,
this might be how I like it.
With visions of you my life begins,
every breath gets swathed in meaning
I will love you every waking moment of living
and I will renew every cell that’s me
to the call of your naming.
|
https://psiloveyou.xyz/how-do-i-love-thee-c5158e84b428
|
['Tima Loku']
|
2020-12-20 13:04:10.193000+00:00
|
['Poetry', 'Codependency', 'Relationships', 'Poetry Sunday', 'Love']
|
Sun And Flowers
|
Sun And Flowers
A poem 🌷
The little flower buds are still sleeping
they are swinging in the morning breeze
Colorful buds, you are pretty shy to bloom
you look so calm and quiet
in the backyard in a busy city
Are you still waiting for the soft touches
from the golden sun rays?
spread your sweet fragrance
before the sun peeps over the high mountain
Pretty flower bud, you will bloom
with the clusters of morning beams
In evening when the sun disappears
Suddenly, you fade
Again, you bloom with your vibrant petals
displaying the exquisite beauty
It’s another walk with nature
And everything seems alive and delicious
|
https://medium.com/illumination/sun-and-flowers-4a9363383158
|
['Sujani Hansanali']
|
2020-08-14 16:39:44.818000+00:00
|
['Poetry', 'Nature', 'Sun', 'Happiness', 'Earth']
|
How Bitcoin Futures Affect Price
|
About one year ago, Bitcoin futures were launched on the market, and since then they have gained massive popularity. They help traders who cannot hold spot positions in bitcoin due to complicated regulations: such traders can trade bitcoin futures instead. Futures contracts also offer traders hedging possibilities and risk mitigation. In this article, we will discuss in detail how bitcoin futures affect the price.
Bitcoin Futures and Bitcoin Spot Price
There are various exchanges for trading bitcoin. These exchanges release bitcoin futures contracts that last for three months. For example, a bitcoin future launched in March will expire in June.
This type of contract is launched every month. Market makers set an initial price for these contracts and trading begins. From then on, the price of the contract depends on supply and demand in the market. If demand is high and supply is low, the price increases. Conversely, if supply is high and demand is low, the price decreases.
Any move in bitcoin spot prices has a direct effect on bitcoin futures. This dependency results in a synchronous movement in both the prices.
Fortunately, we have the formula to calculate the price of bitcoin futures from bitcoin spot prices.
Futures Price = Spot Price * (1 + rf − d)
Where rf stands for risk-free rate on an annual basis
And d stands for dividend
In the case of bitcoin this formula can be customized as follows:
1. Change the risk-free rate from an annual to a daily basis.
2. There is no dividend in the case of bitcoin, so ‘d’ can be eliminated.
So the new formula is:
Bitcoin Futures Price = Bitcoin Spot Price [ 1 + rf (x / 365)]
Where x represents the number of days left before expiry.
This formula is based on the concept of cost of carry. Anyone interested in investing in bitcoin futures contracts could instead invest in secure bonds to earn a minimum risk-free rate of return. Hence, the formula builds in a return at par with the risk-free rate over the time remaining until the contract expires.
Let’s verify this against a real-world example. Assume the risk-free rate is 2.25% and the bitcoin spot price is $8,171. Substituting these values into the formula above, the futures value comes to $8,175.3. This theoretically calculated value is almost equal to the real price of $8,180 at which the contract closed as of April 18.
This small difference of $5 can be attributed to brokerage charges and market volatility. These factors can shift your payout by a few dollars.
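The worked example above can be reproduced in a few lines of code. This is a sketch of the cost-of-carry formula, not trading advice; the nine days to expiry is an assumption chosen to roughly match the April 18 example, while the spot price and risk-free rate are taken from the text.

```python
def bitcoin_futures_price(spot: float, rf_annual: float, days_to_expiry: int) -> float:
    """Cost-of-carry price: spot * (1 + rf * days/365), with no dividend term."""
    return spot * (1 + rf_annual * days_to_expiry / 365)

spot = 8_171.0  # bitcoin spot price from the example above
rf = 0.0225     # 2.25% annual risk-free rate
# days_to_expiry=9 is an assumption, not stated in the article
price = bitcoin_futures_price(spot, rf, days_to_expiry=9)
print(round(price, 2))  # about 8175.53, close to the article's $8,175.3
```

Note how small the carry premium is over just nine days; the much larger swings seen in real futures quotes come from the volatility that this formula ignores, as discussed next.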
Real-World Price Determination
The theoretical concept is fine. But in the real world, bitcoin futures prices tend to have wild swings in either direction. There may be times when futures contracts do not follow spot prices; they will not be in sync. Why does this occur? Let us try to understand.
This difference arises because the formula does not account for volatility in the market.
The market participants cannot ignore volatility.
If there are only two days left before a bitcoin future expires, the formula tells us its price must be very close to the bitcoin spot price. However, due to high volatility in the market, the price can shoot up or crash down within hours.
Moreover, other market events can fuel volatility, such as a particular market like India imposing a ban on cryptocurrency. This would be very bad news for the market and would be reflected in the spot price. Also, bitcoin is traded 24/7, which means the spot market is always moving; its price may change within hours or even minutes. The futures market, however, remains open only for a specified number of hours. The futures price may close very near the spot price on Monday, but overnight some local development can change the price of bitcoin. Consequently, bitcoin futures and the bitcoin spot price open with a wide gap on Tuesday morning.
Unfortunately, this theoretical formula does not account for such instances. These small local or geographical developments have the power to drastically change the price of bitcoin. This uncertainty makes bitcoin futures a guessing game: the closer your guess is to the actual spot price, the better your chances of profit.
How Do Bitcoin Futures Affect the Price?
An interesting thing occurs when the people holding bitcoin futures decide to hold their gains. These gains become cumulative. The result is a steady rise in the market cap of BTC/USD.
Long Term Effect: If we compare bitcoin with gold, silver, or platinum, we find that futures have little or no effect on the long-term price. When gold futures were introduced, they had no impact compared to international, political, and economic events.
Short Term Effect: Daily data for gold, silver, and platinum were available, and these showed quite a bit of volatility.
In short, for the long term investor, futures don’t affect the price of bitcoin much. For a trader, it can be an event of great profit.
Why is this interesting? Surely futures can have a short term effect on bitcoin spot prices, but the long term price will be more dependent on events in the Bitcoin community like:
· Politics in the bitcoin/crypto community
· Institutional investment
· Government rules and regulations
· Competing cryptocurrencies
· Major bitcoin conferences around the world.
The Bottom Line
In spite of all the inconsistency and speculation involved with bitcoin futures, it remains a high-stakes game. People are drawn to it, maybe because of the buzz or because they have confidence in their guesses. Combined with 24/7 trading of spot prices, bitcoin futures become even more complicated. However, this volatility and uncertainty also allow for huge profits.
NOTE: Investing in cryptocurrency and Initial Coin Offerings (ICOs) is a highly risky game. The writer does not recommend any such investment. If you really want to invest, consult your financial advisor for more information.
|
https://medium.com/@xbtpro/how-bitcoin-futures-affect-price-74629bf464a5
|
[]
|
2020-02-20 09:28:53.456000+00:00
|
['Bitcoin Price', 'Bitcoin Futures', 'Bitcoin', 'Bitcoin News', 'Cryptocurrency']
|
Natural Language Processing
|
Natural language processing is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. Are you looking forward to increasing your revenue and profit?
Do you want a data-led decision? Data Wrangling offers numerous services ranging from Data Analysis, Machine learning, and Natural language processing.
Kindly send us a direct message
Definition — Wikipedia
#Services #AIaaS #Business #NLP #BigData #BusinessITOutsourcing #Datawrangling
|
https://medium.com/@datawrangling/natural-language-processing-ef1cb2d4d495
|
['Data Wrangling Llc']
|
2020-12-16 09:16:29.467000+00:00
|
['Business', 'Data Science', 'Business Strategy', 'Business Intelligence']
|
5 Simple yet Effective Tips for Creating High-Quality Content for Your Content Marketing Strategy
|
#4. Content happens frequently and consistently.
One of the toughest aspects of an effective content marketing strategy has nothing to do with the substance of what’s being communicated. It has to do with how much and how often you publish content.
The goal is to master the art of volume and consistency. It’s tough to do both, especially just starting out, so I find it best to begin with the consistency aspect to build the habit.
If you’re starting from ground zero on your content strategy, pick one or two avenues you want to focus on first. Let’s say you decide on company blog posts and thought leadership articles from your CEO. Pick an achievable goal based on your team capacity—maybe two 500-word blog posts, and one thought leadership article per week. Stick to that volume for six to eight weeks before implementing any other content marketing strategies.
Once you master that, you can either increase or expand your strategy. To increase is to continue what you’re doing but in greater volume. If your company blog posts and thought leadership articles are performing well, try adding two more blog posts per week and one more thought leadership article per week, or whatever’s manageable on a consistent basis.
Expanding is jumping into other types of content like podcasts or video interviews, leveraging other types of material to maximize output. For example, can you think of all the material you can create from one hour-long filmed podcast episode?
The video of the podcast can be uploaded to YouTube, and segmented into tons of small video clips for Instagram and TikTok. The audio can be used to promote as a podcast on Apple or Anchor.FM. The transcripts of the audio can be turned into multiple thought leadership articles or company blog posts, as well as short-form status updates for LinkedIn.
Of course, not all companies need to market on Instagram or YouTube, but expanding your strategy is always worth brainstorming internally to set yourself up with a plan to keep content ideas flowing.
At the end of the day, your preparedness is what’s going to keep you publishing on a frequent and consistent basis.
|
https://medium.com/swlh/5-simple-yet-effective-tips-for-creating-high-quality-content-for-your-content-marketing-strategy-88ed18271aa8
|
['Jack Martin']
|
2020-05-13 09:01:00.845000+00:00
|
['Marketing', 'Content Marketing', 'Writing', 'Business', 'Creativity']
|
Taiwan: Same-Sex Marriage
|
Taiwan: Same-Sex Marriage
After same-sex marriage was legalized in Taiwan, some opposition voices emerged. Some people do not really support the government’s decision because they think same-sex marriage is unconventional and worry that the idea might profoundly affect their future children. Most of these voices come from the older generation, people over sixty. I started to realize that there are substantially divergent views between the older generation and the younger generation. The word LGBTQ stands for a community, and each letter has its own meaning: L stands for lesbian, G stands for gay, B stands for bisexual, T stands for transgender, and Q stands for queer. Some people from the older generation might not really understand what LGBTQ is; they lump everyone together and call them all gay, which I find disrespectful and upsetting.
Love is Love: no gender differences at all
As we know, there is a huge divergence of views on same-sex marriage between the older generation and the younger generation. Most elders do not agree with the change because, in their view, marriage has always been between a man and a woman, so legalizing same-sex marriage redefines and destroys the original institution. Their most forceful argument is that two people of the same sex cannot procreate, which they say is not conducive to family continuity and population growth. The older generation also argues that same-sex marriage not only threatens society but also undermines traditional heterosexual marriage, because same-sex partners can then enjoy the same fundamental rights as married couples, which they think is unfair.
On the other hand, speaking for the younger generation, I think most young people are delighted about the decision. In our view, people’s physiological sex may be decided when they are born, but when you fall in love with someone, it makes no difference whether your love is a boy or a girl. It is always a blessing to have the chance to love, and that has nothing to do with gender. Loving someone should not be opposed or restricted by anyone. The decision can also be seen as a step toward ending discrimination, which lifts the human spirit and makes our lives better.
The news also mentioned that LGBTQ rights in Asia have mostly been treated with bias and unfairness. In China, homosexuality is legal, but discrimination and prejudice still exist and work against LGBTQ people. People are still found to hold prejudices against the LGBTQ community, treating them impolitely and rudely for no reason, and that is pretty sad.
Therefore, as I am living in Japan right now, I was curious how most Japanese view Taiwan’s same-sex marriage and what they really think about it. I saw a Taiwanese YouTuber make a video asking Japanese people how they feel about the event. Their responses surprised me, because most of them held a positive attitude. They think that, due to the influence of foreign countries, Asia is gradually beginning to change, and people these days are more and more accepting of the LGBTQ community. They said that although Japan has not done much on this yet, people still hope the country can change so that Japan can also become a more comfortable country for the LGBTQ community to live in. Furthermore, there was a piece of great news in Japan on the first day of July: Ibaraki, a prefecture near Tokyo, became the first prefecture to recognize same-sex couples. I am so pleased that, although it is just a tiny step, it is still a perfect opening for future progress; at least something has changed, and some perspectives have changed.
I have to say that I support same-sex marriage. I am proud that my country has passed this law and that I was a part of the moment. I am also glad to share this blessing with everyone in the world and to see that the hard work paid off and was worth it. I am very touched by the cohesion of the people and the attitude with which they worked so hard to make this happen. Because of their effort, the LGBTQ community can live in a country with recognition and encouragement. Isn’t that the most important thing: for everyone to live in a democratic country with love and care? People deserve to love whom they want to love and to marry whomever they love; there should be no distinction between men and women and no boundary between classes.
- Local Taiwanese news announced that Taiwan is the first Asian country to pass same-sex marriage.
*The link for the articles:
- https://edition.cnn.com/2019/05/17/asia/taiwan-same-sex-marriage-intl/index.html
- https://www.japantimes.co.jp/news/2019/05/24/asia-pacific/social-issues-asia-pacific/taiwan-poised-hold-first-gay-weddings-historic-day-asia/#.XSRKBNMktQJ
- https://www.japantimes.co.jp/news/2019/06/24/national/first-ibaraki-prefecture-issue-partnership-certificates-lgbt-couples-july/#.XSSsjdMktQI
https://www.bbc.com/news/av/world-asia-48309693/celebrations-as-taiwan-passes-same-sex-marriage-law
|
https://medium.com/@fafawawaj/taiwan-same-sex-marriage-2e4fd7015d35
|
['Chika Hashimoto']
|
2020-09-04 14:39:53.738000+00:00
|
['Law', 'LGBTQ', 'Same Sex Marriage', 'Constitution', 'Taiwan']
|
The Racial Issue in Brazil
|
The greatest challenge in the fight against racism in Brazil is ensuring that the legal tools built into the judicial system since the re-democratization take root in Brazilian society, both in public and private institutions, and that there are no setbacks. Translating the law into concrete actions is crucial in public safety and security, given the often-discriminatory treatment the police give to the poorest populations, most of whom are pardo (a Brazilian census category that refers to the admixed population) and black. It is also essential that a more significant number of black men and women — still flagrantly underrepresented — occupy positions of power in the Executive, Legislative, and Judiciary Branches (at all three levels of government) for the anti-racism cause to advance.
These were the primary conclusions of this webinar. The objective was to discuss how and why the racial issue gained prominence in Brazil’s political agenda and the role the black movement has played in this process. Two young and talented black social scientists who compile significant research and experience in this area were the guest speakers: professors Flavia Rios (Universidade Federal Fluminense) and Luiz Augusto Campos (Rio de Janeiro State University).
The event not only included discussions about the anti-racism movement in Brazil during the wave of protests sparked when a white Minneapolis police officer murdered an African-American man, George Floyd, on May 25, 2020, but was also intended to publicize the Linhas do Tempo (Timelines) Project 1985–2018, recently launched by the FHC Foundation, which highlights the “Racial Issue”, among other topics that marked the construction of citizenship in that period.
“Brazilian anti-racism has a long tradition dating back to before the abolition of slavery, but it has been mainly since the 1950s that activists, intellectuals, and artists became more organized and consistent in deconstructing the myth that our country is a racial democracy,” said Flavia, who coordinates the Social Science program at the Universidade Federal Fluminense (UFF) and the Grupo de Estudos Guerreiro Ramos (NEGRA).
Studies on race relations and prejudice in Brazil
Rios specifically cited academic studies on race relations and racial prejudice in Brazil conducted by French sociologist Roger Bastide (1898–1974), after his arrival in Brazil in 1938 to teach sociology at the newly created University of São Paulo (USP) and, in later decades, by Brazilians Luiz Aguiar Costa Pinto (1920–2002), Florestan Fernandes (1920–1995), Octavio Ianni (1926–2004), and Fernando Henrique Cardoso (born 1931).
“The imperative work of these and other researchers and the institutions where they worked had consequences at Brazil’s Institute of Applied Economic Research (IPEA), which started to generate data more systematically on inequalities in Brazil, thus building a basis for future public policies to fight racism based on the growing pressure exerted by groups linked to the black movement, with the support of activists from other causes, including workers and feminists,” she said.
The fight for democracy drives the anti-racism movement
If the racial issue had already gained prominence in the academic world, from a critical perspective, it only won the political arena after the re-democratization of the country. The black movement was organized at the end of the 1970s. The movement established relations with political parties that opposed the military regime and took part in councils to defend the rights of the black population, which were created by elected governors from the Brazilian Democratic Movement (PMDB) and the Democratic Labor (PDT) Parties in 1982. The movement sought to incorporate their demands into the 1988 Constitution, managing to make racism a crime (under the Afonso Arinos Act of 1951, it was considered a misdemeanor).
Although the Sarney administration created the Palmares Cultural Foundation in 1988, in honor of the quilombola leader Zumbi dos Palmares (1655–1695), Rios believes it was during the Fernando Henrique Cardoso administration that “we began to see more interaction between the Brazilian government and the black movement, with Zumbi dos Palmares becoming recognized as a national hero, the progressive titling of quilombola lands, and the onset of debates on affirmative action policies.”
According to the researcher, public policies that favored racial equality made ground in the Luiz Inácio Lula da Silva administration (2003–2010), who created a Department linked to the presidency for this purpose, with ministry status, and promoted the approval of the Racial Equality Statute in Congress in 2010. At the same time, an increasing number of federal universities began to adopt racial quotas for new student admissions, which was ruled as constitutional by Brazil’s Federal Supreme Court in 2012. That same year, then-President Dilma Rousseff (2011–2016) signed the Higher Education Quotas Bill into law.
“I’m tracing back the history to refute the idea that the Brazilian population is passive in relation to racism. There is still much to be done, but it is fundamental to reject unsupported or even false information and to know the true political history of anti-racism in Brazil,” concluded the professor in her opening speech.
“The anti-racism movement is a complex and extensive network. Research from universities, public and private institutions, and NGOs has been instrumental in the production of studies, circulation of data, and political pressure,” agreed Luiz Augusto Campos. “But the Brazilian government only began to take responsibility for proposing anti-racist structural policies in the last 40 years. During the FHC, Lula, and Dilma administrations, these were progressive policies. It is important that the younger generations know this,” said the coordinator of the Estudos Multidisciplinares da Ação Afirmativa (GEMAA).
‘Brazilian politics are still mostly white’
Despite the advances, Brazilian politics are still mostly white. It will only be possible to talk about anti-racist policies in a more sustained way when politicians and public managers have more diverse profiles than now, said the author of “Ação Afirmativa: história, conceito e debates (Affirmative Action: history, concept, and debates)” (EdUERJ, 2018). “Why are there few black politicians in Brazil? The problem is not the lack of black and pardo candidates. They exist, but they hardly manage to get elected. The bottleneck is mainly in financing the campaigns of these black candidates,” he said.
“There is a myriad of possibilities for public policies to encourage more black participation in politics, but they have to be pragmatic. The Party Executives define how the resources of the Party Electoral Fund for each group are used. Hence the importance of having more democratic and diverse party leadership,” said the sociologist.
The researcher also highlighted the importance of racial quotas in public tenders, instituted in law since 2014 for the Federal Public Administration, mainly for the federal bench: “The Brazilian government mainly serves the poor and black communities, but it is run by white managers and politicians. That needs to change.” According to Campos, race in Brazil is not a topic among others, but a cross-cutting issue. “The racial dimension is expressed in all the leading problems Brazil has not yet overcome, including social inequality, violence, and access to education, health, and housing,” he concluded.
Black feminism
According to Flavia Rios, the Brazilian black movement is experiencing a period of women-led renewal. “It is the black women from the impoverished areas who are taking the front line in this fight. And this is great news,” she concluded.
Otávio Dias is a journalist specializing in politics and international affairs. A former correspondent for Folha in London and editor of the estadao.com.br website, he is currently the content editor at Fundação FHC.
Portuguese to English translation by Melissa Harkin & Todd Harkin (Harkin Translations)
|
https://medium.com/funda%C3%A7%C3%A3o-fhc/the-racial-issue-in-brazil-afe5adc6e57e
|
['Fundação Fhc']
|
2020-11-11 12:52:49.498000+00:00
|
['Brazil', 'Racial Justice', 'Racial Equity', 'Politics', 'English Version']
|
COVID is Not a Weight Loss Plan. Let’s Stop Talking About it That Way
|
Every scroll of social media and glance at headlines throughout the COVID-19 crisis has revealed something alarming: There has been so much commentary around weight and fitness.
Living a life that is healthy has many ends. Living a life for weight loss is different. And as a community, we must separate those two things.
At Dia & Co, we believe the most important perspective to have in this conversation is that health and weight are not the same. As soon as we decouple those two things, we can have conversations that are for everybody, because how we fuel and nurture our bodies is not size-dependent.
It’s OK for us to talk about health. But it’s not OK for us to put more pressure on women to reach antiquated, harmful beauty ideals. Particularly in this moment, where our world has been overturned, and our lives have been completely reset, those topics have never felt less important.
So let’s stop talking about them and start talking about what really matters. Let’s give ourselves credit for how we’ve made our lives fuller in quarantine — and let’s step out into the world ready to inspire.
Here are the things that I think actually matter:
Our health. Health is one of the most important things in our lives, and it has never felt as precious to me as it does right now. My mom was only 25 years older than I am today when I lost her to cancer in February. Watching the devastating impact of chronic illness has impacted my behavior and attitude around health in a way that has no bearing on weight.
My mom did her best to get through chemotherapy with a renewed focus on good nutrition — eating whole, nutrient-dense and antioxidant-rich foods like fruits, vegetables and green tea (I still drink a cup a day).
In the pandemic, focusing on my health has meant social distancing and consistent mask-wearing, hand hygiene, drinking plenty of water, enjoying home-cooked meals that provide a variety of nutrients and actually getting close to eight hours of sleep a night.
Our connections. I’ve had a lot of work Zoom meetings, for sure, but I’ve also connected with family in other parts of the country and world in ways that would have never happened before this. I’ve taken socially distant walks with dear friends and had the pleasure of daily dance, theatre and musical performances from my nieces in Kuwait on FaceTime.
There’s no “silver lining” in a pandemic that has devastated so many, but out of crisis comes learning, and we’ve been given an important lesson about the healing power of deepening relationships. Community matters more than ever right now. How have you connected with yours?
Our wellness. In a time when our health is in peril, wellness is an imperative. For me this has meant finally taking up meditation. Because I’m not commuting, I have an extra 30 minutes every morning and that time has never been better spent. Meditation has always seemed intimidating to me for some reason, but the Headspace app totally changed that.
Meditation is an exercise in focusing on the present. Can you imagine anything you’d want to do less right now? And yet it helps put everything — past regret, future anxiety — into perspective.
Our schedules. What can we streamline and shed? With no travel time between meetings and no work travel at all, suddenly there are blank spaces on my calendar.
Turns out, not all those meetings were necessary and video conferencing works very well. A mom I know recently posted on Instagram about her family’s weekends — they used to be over-scheduled, hectic and stressful, but now the only thing they have to do is hang in the backyard. For her, it’s a relief. This prolonged pause has been painful in a lot of ways, but it’s also helped us home in on the essentials.
Our priorities. Ask yourself: What’s really important right now? How many of the things that have fallen away actually mattered? How many new perspectives can we carry forward in the world post-COVID?
After not seeing friends and family for months, there’ll be that high school reunion reveal at some point for all of us: Sure my hair is 4 inches longer and in dire need of a haircut, but how have we really changed? What have we done with our time away?
As a society, we still think about improving ourselves as physically improving. It’s time that we change that. I’m hoping we can finally shed the superficial concerns that have been a part of our societal fabric for so long.
What if we came out of the pandemic not different, but able to see ourselves in a different light? The Dia community has so much to offer the world. Post-COVID, when we emerge from isolation, let’s reveal that.
|
https://medium.com/@nadiaboujarwah/covid-is-not-a-weight-loss-plan-lets-stop-talking-about-it-that-way-496865668723
|
['Nadia Boujarwah']
|
2020-09-02 14:47:11.273000+00:00
|
['Weight Loss', 'Covid 19', 'Body Positive', 'Plus Size Fashion', 'Health']
|
Donald Trump’s Southern Strategy
|
Source: 1972 Howler yearbook at Wake Forest via Winston Salem Journal
The Southern Strategy is a Republican electoral strategy designed to increase political support among white voters by appealing to their racism and bigotry. Likewise, the Trump administration’s electoral approach has adopted the same divisive ideology — which has been quite useful thus far. By playing on those same racial tensions, Trump has been successful in driving a wedge between rural white voters and people of color.
As I’ve discussed many times in my work, the racially-charged, divisive approach to politics that we see today isn’t new. Much of the language that is a constant in our society became popularized with the Southern Strategy employed by Republicans in the 1960s. It’s not hard to notice the similarities in the rhetoric as the motivations behind the rhetoric are also similarly clear.
In the 1950s and 1960s, the civil rights movement and the dismantling of many Jim Crow laws deepened the existing racial tensions in much of the Southern United States — later spreading among rural voters nationally. At the time, Republicans such as Senator Barry Goldwater and then-presidential candidate Richard Nixon developed region-specific strategies that contributed to the political realignment of a large portion of white conservative voters who had traditionally supported the Democratic Party. It also helped push the Republican party further to the right. A push that continues today.
As with the Trump administration, the Southern Strategy presented narratives suggesting that Republican leaders consciously appealed to the racial grievances of white voters. This top-down strategy is generally believed to be the driving force that transformed Republican politics during the civil rights era. Others challenge this narrative, suggesting a more bottom-up dynamic — one that recognizes the centrality of the racial backlash in the South that contributed to party realignment — and proposing that the backlash was a defense of de facto segregation in the suburbs rather than overt resistance to racial integration. Either way, the racial backlash in America at the time was a story in which both narratives fueled the flames of racial tension.
Because of this electoral strategy, the Republican Party has continuously failed to fight off the image that they are the party of White Supremacy. This maneuvering has made it difficult for Republicans to win back the support of Black voters in recent decades. In 2005, Republican National Committee chairman Ken Mehlman formally apologized to the National Association for the Advancement of Colored People (NAACP) for exploiting racial tensions to win elections while ignoring the Black vote altogether.
“Republican candidates often have prospered by ignoring black voters and even by exploiting racial tensions […] by the ’70s and into the ’80s and ’90s, the Democratic Party solidified its gains in the African-American community, and we Republicans did not effectively reach out. Some Republicans gave up on winning the African-American vote, looking the other way or trying to benefit politically from racial polarization. I am here today as the Republican chairman to tell you we were wrong.” — Ken Mehlman, 2005
The Southern Strategy’s main focus was attacking social justice reforms such as the Civil Rights Act and the Voting Rights Act. In the 1970s and 1980s, much of the politically active public would come to realize that terminology like “states’ rights” was coded language for returning race relations to local control, thereby circumventing federal civil rights and desegregation laws. Although Reagan had used such terminology to appeal to the racial tensions of white voters as well, by the mid-1980s using “states’ rights” language as part of a political strategy had become unacceptable due to its historic racial undertones.
Mehlman’s apology would later prove to have fallen on deaf ears. Despite his less than sympathetic words, the actions of the Republican party at-large would continue its pursuit of exploiting racial tensions for political gain. By 2009, the election of Barack Obama provided a catalyst for traditionalists to undermine the need for civil rights laws.
Conservatives began to use the idea that Obama’s election meant racism was no longer a hindrance to the advancement of North American society. As Thomas Edge explains in the Journal of Black Studies in 2009, “Their purpose, it is argued, is to launch Southern Strategy 2.0, which seeks to use Obama’s victory to attack some of the results of the civil rights movement that helped make his rise possible. At the same time, it still plays on some of the overt racism of the first Southern Strategy, using Obama’s racial identity and politics to challenge whether he is “American” enough to lead the nation. Thus, conservatives use Obama’s image as a sign that racism is dead, while simultaneously attacking him with the same race-based tactics that have played such an important role in the recent history of the Republican Party.”
The only significant difference between the modernization of the Southern Strategy and its original platform is the language as it has evolved over the decades. The strategy, which has had nearly a half-century to take hold, is what drives a lot of the ignorance we see in America today. As these beliefs get passed down from generation to generation, fundamentalists are there screaming the same things as their predecessors. These generational commitments validate and help reinforce the same beliefs while allowing them to evolve as time passes.
The term “Southern Strategy” is now a bit of a misnomer due to its application having the inevitable effect of exposing the racial tensions of Northerners in the 1970s. But most academics still refer to it as such because this particular strategy is still in play — and remains unchanged despite the longevity and expansion of the cause.
“Thus, conservatives use Obama’s image as a sign that racism is dead, while simultaneously attacking him with the same race-based tactics that have played such an important role in the recent history of the Republican Party.” — Thomas Edge, 2009
Since the election of Barack Obama, those who are unaware of the history behind such gruff racist language have seen it first-hand. Watching it grow from the post-racial perception many Americans were consumed by, to the horrors of the repugnant rhetoric from a sitting president who regularly shocks the population at large. Not realizing the hateful rhetoric he employs has always been there. In the background. Growing and festering in conservative circles. By ruthless old white men who are all too content exploiting racial tensions in America.
The Southern Strategy is in full effect with no signs of slowing down anytime soon as it still seems to be the Republican’s primary strategy. In other words, they have willingly alienated people of color and have abandoned all hope of capturing our vote while focusing solely on the racial anxieties of conservative white voters.
|
https://extremearturo.medium.com/donald-trumps-southern-strategy-9f89fccbb39e
|
['Arturo Dominguez']
|
2019-07-20 21:00:00.628000+00:00
|
['Equality', 'Politics', 'Civil Rights', 'Donald Trump', 'Racism']
|
How to Create an Entity Relationship Diagram (ERD) for BigQuery?
|
How to Create an Entity Relationship Diagram (ERD) for BigQuery?
From work and personal projects, I’ve found it difficult but important for a team to document the relationships of tables (i.e., what are the foreign keys to join tables?). In this post, I’m going to quickly demonstrate how to create an Entity Relationship Diagram (ERD) for BigQuery on Google Cloud Platform (GCP) using QuickDatabaseDiagrams (QDB).
Step 1. Visit QDB at https://www.quickdatabasediagrams.com/
Try out for free without signing up!
Step 2. Retrieve the schema of BigQuery tables from BigQuery UI.
1. Replace <your-gcp-project-id>, <your-bigquery-dataset-name>, and <your-bigquery-table-name> in the query below for your own use case:
SELECT column_name, data_type FROM `<your-gcp-project-id>`.<your-bigquery-dataset-name>.INFORMATION_SCHEMA.COLUMNS WHERE table_name="<your-bigquery-table-name>"
2. Run the query above
3. Click “SAVE RESULTS” and then click “Copy to Clipboard”
4. Paste back to QDB, click “EDIT” and then “Untabify”
Step 3. Repeat Step 2 for the rest of your tables to be included in ERD
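If the copy-paste-and-untabify round trip in Steps 2–3 gets tedious across many tables, the pasted rows can also be turned into QDB source text with a small script. This is a hypothetical helper, and the exact QDB column syntax shown (table name, a dash line, then "column type" rows) is an assumption to verify against QDB's own sample diagram:

```python
# Hypothetical helper: turn (column_name, data_type) rows copied from the
# BigQuery INFORMATION_SCHEMA.COLUMNS results into QuickDatabaseDiagrams
# source text. The DSL shape used here (table name, a dash line, then
# "column type" rows) is an assumption to check against QDB's sample.
def to_qdb(table_name, columns):
    lines = [table_name, "-"]
    lines += [f"{name} {dtype}" for name, dtype in columns]
    return "\n".join(lines)

# Example rows as they might come back from INFORMATION_SCHEMA.COLUMNS:
schema = [("user_id", "INT64"), ("email", "STRING"), ("created_at", "TIMESTAMP")]
print(to_qdb("users", schema))
```

Running this once per table and pasting the output into QDB skips the manual "Untabify" step entirely.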
|
https://medium.com/@donny.chen.ai/how-to-create-an-entity-relationship-diagram-erd-for-bigquery-a1e66cc24225
|
['Donny Chen']
|
2019-12-20 00:10:00.138000+00:00
|
['Entity Relationship', 'Bigquery']
|
Rejoice, Jogja Workers: We’re Getting Rp 600,000 in Aid
|
|
https://medium.com/binokular/bergembiralah-warga-jogja-kita-dapat-bantuan-rp-600-ribu-ca593527233
|
['Indra Buwana']
|
2020-08-07 14:36:30.463000+00:00
|
['Kemenkeu', 'Bantuan Langsung Tunai', 'Subsidi', 'Pekerja']
|
Make it go faster
|
DevSnack #14: Web app performance is crucial. If the app works fast, the user experience is better, and this improves your chances of success.
Ruby on Rails code optimization and cleanup, Rack MiniProfiler and more on this week’s DevSnack.
#1 — Web Performance is User experience
Lara Swanson (@lara_hogan) explains how the efforts to optimize your website’s performance could have a great effect on the entire user experience.
#2 — Web Performance Terms
Alex Pinto (@ad3pinto) created an amazing glossary with 50+ terms we need to understand on web performance.
#3 — Ruby on Rails code optimization and cleanup
Keeping your code clean and organized while developing a large Rails application can be quite a challenge, even for an experienced developer. Fortunately, Damir Svrtan (@damirsvrtan) outlines a whole category of gems that make this job much easier.
#4 — The Secret weapon of Ruby and Rails speed
Nate Berkopec (@nateberkopec) brings an exhaustive explanation about mini-profiler. Rack-mini-profiler (maintained by @samsaffron) is a powerful Swiss army knife for Rack app performance.
#5 — 20 Top Factors That Impact Website Response Time
Three seconds may not seem like a long time, but it could be the difference between making the online sale and losing a customer!
|
https://medium.com/moove-it/make-it-go-faster-11b1a68aa01c
|
['Blog Moove-It']
|
2016-08-04 12:13:52.762000+00:00
|
['Devsnack']
|
Taming data inconsistencies
|
Vamsi Ponnekanti | Pinterest engineer, Infrastructure
On every Pinner’s profile there’s a count for the number of Pins they’ve both saved and liked. Similarly, each board shows the number of Pins saved to it. At times, Pinners would report that counts were incorrect, and so we built an internal tool to correct the issue by recomputing the count. The Pinner Operations team used the tool to fix inconsistencies when they were reported, but, over time, such reports were growing. It wasn’t only a burden on that team, but it could have caused Pinners to perceive that Pinterest wasn’t reliable. We needed to determine the root of the problem and substantially reduce such reports.
Here I’ll detail why some of those counts were wrong, the possible solutions, the approach we took and the results we’ve seen.
Addressing the problem
After digging into the issue, we found a number of reasons counts appeared wrong, including:
Pinner deactivations: If user A liked a Pin saved by user B, and user B deactivated their account, the Pin was not shown to user A, but the Pin was still included in the count. (Note: there’s a very small number of Pinners who deactivate and reactivate their accounts multiple times.)
Spam/Porn filtering: If a Pin was suspected to have spam or porn content, it wasn’t shown, but was still reflected in the count. The scheme to identify spam/porn content is continually evolving, and domains are added or removed from the suspect list almost every day.
Non-transactional updates: Some counts were updated asynchronously in a separate transaction to optimize the latency of operations such as Pin creation. In some rare failure scenarios, it was possible the count wasn’t updated.
Possible solutions
Option 1: Fix the root causes
We first looked at ideas for fixing the root causes. Whenever a Pinner deactivated/reactivated their account, we could update the counts in the profiles of all Pinners who may have liked their Pins.
Likewise, whenever there’s a change in porn/spam filtering schemes, we could update the counts in the profiles of the owners of those impacted Pins, and in the profiles and boards of the Pinners who’ve liked and saved such Pins.
To address the problem caused by non-transactional updates, we would need to update the count in the same transaction, which could increase the latency of operations such as Pin creation.
This solution wouldn’t only be expensive, but it also wouldn’t fix the inconsistencies that already exist.
Option 2: Offline analysis and repair
Another approach commonly used in similar situations is offline analysis and repair, where we could dump data from our online databases to Hadoop daily, and they’d be queryable from Hive. We could have an offline job(s) that finds the inconsistencies, and another background job(s) fixing those inconsistencies in the online databases. Since online data could’ve changed after offline analysis was performed, it would need to be revalidated before updating the count in the online database.
We found this to be a good solution and used a similar method to fix inconsistencies in our HBase store. However, based on that past experience, we knew the effort to build it was no small task.
Option 3: Online detection and quick repair
We also thought about online detection and quick repair. If a Pinner detects an inconsistency in the data on their profile, our system should also be able to detect it. For example, if a Pinner scrolls through all of the Pins they’ve liked, the system could check if the count of liked Pins shown matched the displayed count in their profile. This way of detection is simpler than the previous solutions, with small overhead.
Once an incorrect count is detected, we could queue a job to fix the count so we could display the correct count the next time. We already had a framework, PinLater, that could queue arbitrary jobs for later execution. Typically, such jobs run within seconds of queuing.
The job to fix the count would have information about which counter to fix (such as user ID or board ID), as well as information about the stored count (i.e. the displayed count) and the actual count. This job could use the same logic as our internal tool to recompute and fix the counts.
The count fixing job would check if both the stored count and actual count are still same as at the time of queueing. If so, it would update the stored count to the actual count. If the stored count or the actual count are different from what they were at the time of queuing, the count isn’t updated as it indicates some change of state, such as new Pins that may have come in.
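The detect-and-repair flow described above can be sketched in a few lines. This is an illustrative toy, not Pinterest's actual code: the in-memory dict, the list-based queue, and the `actual_count` stub are stand-ins for the real datastore and the PinLater framework.

```python
# Toy sketch of online detection and quick repair (not Pinterest's real code).
# A dict stands in for the stored counts; a list stands in for a PinLater queue.
counts = {"user:42:likes": 10}   # stored (displayed) count is stale
repair_queue = []

def actual_count(key):
    # Stand-in for recomputing the true count from the source of truth.
    return 12

def on_full_scroll(key):
    # Detection: the viewer scrolled through the whole list, so the served
    # item count is known and can be compared against the stored count.
    stored, actual = counts[key], actual_count(key)
    if stored != actual:
        repair_queue.append((key, stored, actual))  # queue a fix job

def run_repair_jobs():
    while repair_queue:
        key, stored_then, actual_then = repair_queue.pop(0)
        # Revalidate before writing: if either value changed since queueing
        # (e.g. a new Pin arrived), skip the update rather than clobber it.
        if counts[key] == stored_then and actual_count(key) == actual_then:
            counts[key] = actual_then

on_full_scroll("user:42:likes")
run_repair_jobs()
print(counts["user:42:likes"])  # prints 12: the stale count was repaired
```

The revalidation step is what makes the quick repair safe to run alongside normal writes: a state change between detection and repair simply aborts the job.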
However, this solution doesn’t detect all inconsistencies. For example, if a Pinner doesn’t click on ‘likes’ or ‘Pins’ in their profile, or if they don’t scroll through their entire list of Pins, it can’t detect inconsistencies. This solution also isn’t appropriate for inconsistencies that are expensive to detect during online reads.
The chosen solution
In the short term, the third solution (online detection and quick repair) was preferred since it fixes existing inconsistencies in the count in near real-time, and is simple to implement. In the longer term, we may still build the second solution.
Results
After deploying and gradually ramping up the solution to repair counts for one percent of users to 100 percent, nearly all Pinner reports about count inconsistencies vanished. The table below shows the number of reports over a period of about 12 weeks, including a few weeks before the launch, the ramp-up period and the weeks following launch.
|
https://medium.com/pinterest-engineering/taming-data-inconsistencies-96ae43ced0ce
|
['Pinterest Engineering']
|
2017-02-21 18:58:25.403000+00:00
|
['Pinterest', 'Infrastructure', 'Engineering', 'Data']
|
Stock Picking for Beginners in 2020 (3 Key Steps To Follow)
|
Picking out the right stock to buy, as a beginning investor, can be like trying to pick out the perfect movie to watch on Netflix. There are so many choices and most of them, you’ve never heard of before — and if you make the wrong choice, well, that’ll just pretty much ruin your Friday night.
You could easily spend hours trying to pick a movie to watch on Netflix — same with stock picking.
Selecting the right stock to invest in can be tough, but with the right plan and methodology, it doesn’t have to be.
So today, in this beginner’s guide to stock selection, I’ll break down my thought process and methodology on how I pick the right stocks for my investment strategy — through the three criteria I use to evaluate them. And this article isn’t just concepts; I’m actually going to walk you through each step in the process, tactically, with a real live stock example.
But before we get started, let’s just level-set the conversation. I’m talking about long term investing and selecting the right corporate stocks. For swing trading or day trading and evaluating other equities like mutual funds or ETFs, there are a lot of other considerations to take into account — so we’ll save those topics for a future article. Also, if you haven’t already read my beginner’s guide to the stock market where I outline the fundamentals of trading in the stock exchanges — make sure you check that out here.
So, now that we are level-set, for the sake of this exercise, let’s say that I want to dive into the technology sector — and I’m considering buying some Microsoft stock.
Daniel’s Brew
CRITERION 1: FUNDAMENTAL ANALYSIS
The first of my 3 criteria, when evaluating a stock, is to look at the company’s fundamentals.
Fundamental analysis is the method of assessing a stock’s inherent value by looking at all of the company’s business characteristics, which includes both tangible aspects like revenue, EBIT (earnings before interest & tax), assets on hand, etc, as well as intangible traits like the company’s brand equity or the distinctiveness of their technology, or the effectiveness of their c-suite, etc.
Honestly, there are so many things you could look at here — and if you try to evaluate every metric or aspect of the company’s business, you’ll be sure to get analysis paralysis.
So instead, I generally choose to focus on the following 3 fundamental factors:
1) Earnings Performance
Every publicly-traded company reports earnings on a quarterly basis. And for those of you that are not familiar with this term, earnings is just a fancy way of saying net income, or in other words, the margin after all of the operating expenses of doing business and all of the taxes have been taken away — simply put, it’s the profit that remains within the business after all expenses.
“Earnings” simply defined
The timing of these earnings releases is based on each individual company’s fiscal calendar, so they vary from company to company — but they do all report 4 times a year. Now, the purpose of the earnings reports is to provide the shareholders with a review of the company performance for the past quarter. These reports include updates on the company’s revenue, profits, how they track against financial projections, what new initiatives or programs they’ve undertaken, and what their forward-looking guidance looks like for the upcoming quarters. As publicly traded companies, they have an obligation to us, the shareholders, to be transparent and open about their performance so that we as traders can make the most informed decision when it comes to considering their stock for purchase — and that’s one of the reasons they have these quarterly reports.
One thing that is of key importance within this report is the company’s EPS — or earnings per share.
The EPS is the amount of profit that a company has earned in the last quarter, divided by the number of outstanding shares in the stock market — so basically, how much profit does the company have for each share of stock that’s out in the open market. Each company reports this out in their quarterly earnings statement and it is a good proxy for how profitable a company is.
“EPS” simply defined
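The arithmetic behind EPS can be sketched in a few lines of Python. The figures below are made up for illustration (chosen so the result lands near the $5.40 EPS quoted later in this article), not Microsoft's actual reported numbers.

```python
# EPS = quarterly net income / shares outstanding.
# All figures below are hypothetical, for illustration only.

def earnings_per_share(net_income: float, shares_outstanding: float) -> float:
    """Profit the company earned per share of stock on the open market."""
    return net_income / shares_outstanding

# e.g. $41.04B of quarterly profit spread over 7.6B outstanding shares
eps = earnings_per_share(41_040_000_000, 7_600_000_000)
print(f"EPS = ${eps:.2f}")  # -> EPS = $5.40
```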
Now before every quarterly earnings statement, financial analysts will predict what they think the EPS of a stock will be — and at every earnings release, a comparative analysis is done to determine whether the company beat, met or missed analyst EPS projections for that quarter. Because EPS is such an important profitability metric, when a company beats EPS expectations, it is seen as having done a really good job; market sentiment turns positive and the stock generally spikes up that day… but when a company misses, the opposite occurs and the stock price generally falls for a short period of time.
(This isn’t 100% true for every stock and every situation, but again, generally speaking, this is the trend most stocks follow.)
Now let me walk you guys through how I use this data in my stock selection process. I start by looking up the historical performance of the company’s EPS vs analyst projections. If you’re using a full-service brokerage firm like Etrade, you simply navigate towards the earnings section and scroll down to the bar chart to see this info.
Past earnings performance for MSFT
What I look for is a consistent track record of EPS releases that have beaten analyst expectations — it doesn’t have to be green every single quarter, but if it shows a steady record of more greens than red — that’s a good sign.
In the case of this example, it looks like Microsoft has beaten expectations for the last 11 quarters — that’s amazing. Here is another example: Amazon.
Past earnings performance for AMZN
It’s not all green, but there are steady periods of exceeding analyst expectations — and that’s what you want to see. Past performance is not a 100% accurate indication of future performance, but if they have a good history of beating earnings projections, then you can feel more confident about making that same bet for the future quarters as well.
2) Company Profitability
Profitability is the second thing I look into when it comes to fundamental analysis. Now, when it comes to profitability, the first thing I typically look at is the net profit margin line of the company.
This percentage indicates how much net profit they have — that they can use to invest in the growth or innovation or any other new aspects of their company. This net profit margin line is also calculated after the distribution of any dividends — which means it’s the left-over profit, even after paying the shareholders a dividend. So the higher this number is, the more pure profit they have.
Net Profit Margin of MSFT
In the case of Microsoft, 31% is incredibly high; only a handful of Fortune 500 companies operate at a higher rate of profitability. High profit margins give me the confidence to invest, because I know that companies like this are the ones best poised for innovation and transformation as times change: they have the financial means to do so.
Another factor in profitability that I look at, is the PE Ratio. This is a common one that most investors start with when they are looking to measure up a stock. It stands for Price to Earnings Ratio — and it’s calculated by taking the current stock price and dividing that by the EPS that we talked about earlier. Basically — what this shows is, compared to the amount of profit that a company has made per share of outstanding stock, how much more is the price of this stock?
In the case of Microsoft, their most recent EPS is $5.40. Microsoft stock, as of 4/8/2020, is trading around $165 per share. That means that this stock is trading for around 30 times more than the EPS — or in other words, 30 times more than the amount of profit the company has made per share of stock. This is also why analysts sometimes refer to this as the company’s earnings multiple.
So, why is this important? By distilling the profitability of a company down to the EPS and dividing the share price by that figure, we get a somewhat common and impartial value with which we can measure how expensive a stock is compared to the profit that it brings. It’s a way to make as close to an apples-to-apples comparison as possible of how overvalued or undervalued a stock price is between two companies.
For example, we know that Microsoft has a PE ratio of 30. If we look at another stock like Apple Inc, it looks like their most recent PE ratio is 21. So an analyst might say that in this case — if we only look at the PE ratio in isolation and consider no other factors in our stock selection process — Apple Inc would be the “less expensive” buy, because you are only paying 21 times the amount of profit they make for each share of the stock.
PE comparisons between MSFT and AAPL
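As a sketch, the PE comparison works out like this in Python. The Microsoft figures are the ones quoted above; the Apple price and EPS are hypothetical numbers chosen only to produce the ~21x multiple mentioned.

```python
# PE ratio = share price / earnings per share: how many dollars you
# pay for each $1 of profit the company makes per share.

def pe_ratio(price: float, eps: float) -> float:
    return price / eps

msft_pe = pe_ratio(165.00, 5.40)   # figures quoted in the text -> ~30x
aapl_pe = pe_ratio(262.50, 12.50)  # hypothetical AAPL figures -> 21x

# Lower PE = "less expensive" relative to profit, all else being equal
cheaper = "AAPL" if aapl_pe < msft_pe else "MSFT"
print(f"MSFT {msft_pe:.1f}x vs AAPL {aapl_pe:.1f}x -> {cheaper} is cheaper")
```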
Do you guys follow me here? If this sounds a little confusing and you’d like to watch me walk through the explanation visually, click here.
3) Business Model and Growth Potential
The third and last factor I look at, when it comes to company fundamentals — is the business model and growth potential of the company. Now, this part of the analysis is more art than science but essentially, I want to make sure I understand, at a high level, what the business units of the company are and how the P&L operates — and compare that to how other companies operate and decide which business I feel has better growth potential in the near future. As a long term investor, it’s important that you know at least the basics of how your company operates in terms of the products and services it provides, how it generates income, and what plans they have to drive innovation and progression in their industry.
How I acquire this information is I usually go to that particular company’s website and look for the investor relations section. Within this section, there are usually key press releases that highlight new and noteworthy pieces of information about the company, as well as recordings of their latest quarterly release press conference and the annual report that comprehensively summarizes the yearly performance of the business as a whole. And I generally try to read as much of all of this as possible. Here is a look at what the most recent annual report looks like for Microsoft.
MSFT Annual Report (on Microsoft.com)
If you look at this report — it does a great job of outlining Microsoft’s projections of where the technology industry is headed, Microsoft’s current charter to align to the trajectory of the industry, past financial performance and an overview of the different business units and how they’ve contributed to the overall success of the company as a whole. There is a lot to glean from all of this information — and to be honest, if you’re interested in technology as a subject, it can actually be a pretty fun read as well.
Absorbing all of this information helps you understand how solid their brand is, how strong their strategic planning is and whether or not the leadership team has the right mindset to carry the company forward into the future — all key things to evaluate when deciding to invest in a stock.
That wraps up Fundamental Analysis — which was the first of my 3 criteria.
CRITERION 2: ANALYST RESEARCH
The second major step in my evaluation of stocks is studying financial analyst research.
There are hundreds of thousands of professionals out there — whose primary job is to analyze macro/micro-economic conditions and market trends to provide large banks and investment firms with recommendations on what sectors, industries or companies they need to invest in. And as part of this work, they evaluate the stocks of individual companies and provide something called an outlook or a price target — which is a prediction of which stocks to buy and what price they’ll hit in the next 12 months. Normally, you’d have to pay for research like that — but some brokerage firms will provide this to you, for free, as part of their standard service. Etrade happens to be one of those firms — so let me show you how to read this information and what I look for when I’m looking into these analysts’ projections.
So here is the aggregated analyst research page for Microsoft.
Analyst Research Section for MSFT, on Etrade
The first thing you’ll notice right off the bat is the price chart of the analyst targets. As you can see, out of the 27 financial analysts that have provided 12-month outlooks, the highest price projection is $212, the low projection is $160, which this stock has already met, and the average price that these analysts think Microsoft stock will get to, within the next 12 months, is $192. Now you should always take these analyst recommendations with a grain of salt, as they are usually only right about 60–70% of the time — but still good insights to know.
Below the chart, it actually shows the 27 analysts and more detail regarding their ratings (including articles that expand upon their stock buy ratings & prices) so you can see what factors led them to their particular price targets.
Expanded analyst ratings
And below that, there are also other independent research firms that publish stock ratings and actions — which you can read to get even more depth on whether or not your stock would make a good long term investment at this point in time.
Buy recommendations from other research firms
It’s important to gather as much perspective as you can so you can be aware of the broad spectrum of opinions on a particular stock — and therefore make the most informed choice in your selection process.
CRITERION 3: PERSONAL INTUITION
Now, the last check that I perform before deciding on a stock to purchase — is to check my personal intuition.
-Do I know and trust the brand of the company?
-Do I enjoy the products or services this company provides?
-Do I believe this company’s offerings will be needed in the future and be pivotal to the ever-changing landscape of the consumer or commercial marketplace?
These are the questions that I ask myself to make sure I have a good gut feeling about this company. At the end of the day — you have to have a good feeling about what you are about to invest in. I know you are supposed to take the emotions out of trading, but we are not robots AND part of the fun and joy and excitement of buying a stake of ownership in a company, is knowing that you like what this company does and that you take pride in the ownership of this stock. It’s not the most quantitative or empirical step within my stock selection methodology — but for me, it’s actually a pretty important factor.
Now that pretty much sums up, at a high level, the outline of my stock selection process. But whenever I speak to people about this topic — without fail, there is always a question about technical analysis.
What about reading charts and indicators and things like that, Daniel?
Well, usually when I make swing trades, I definitely make technical analysis an important part of my research. I look at things like moving averages, Bollinger Bands, RSI, chart candles, etc., but for long term investing, these technical indicators aren’t as useful, especially when you’re looking at a 5–10 year time horizon.
Stockcharts.com has great charting tools — especially for beginning traders
Technical analysis is important in trying to focus on the exact buy point and sell point to maximize the spread between your entry and exit margin — but for long term investing, it’s not as critical to be so precise. Sometimes I buy low, sometimes I buy high — but if I’ve really done my research, picked the right company and I’m planning to hold long term — I’m confident that I’ll see a very nice return regardless of what my entry point was.
— — — — —
So — I hope this gives you a good basis on how to conduct fundamental stock analysis for your investment strategy.
As with all things, picking the right stock investment can be a bit of a time-consuming effort, but the more you learn about the company you are buying into, the more confident you’ll feel about owning their stock and the more excited you’ll be about having them be part of your overall investment portfolio!
— — — — — — — — — — — — — — — — — — — — — — — —
Follow Daniel’s Brew: YouTube | Twitter | Instagram | Medium
**** Disclaimer *****
The content here is strictly the opinion of Daniel’s Brew and is for entertainment purposes only. It should not be considered professional financial investment or career advice. Investing and career decisions are personal choices that each individual must make for themselves in accordance with their situation and long term plans. Daniel’s Brew will not be held liable for any outcome as a result of anyone following the opinions provided in this content.
|
https://medium.com/@trapananto/stock-picking-for-beginners-in-2020-3-key-steps-to-follow-650f383fcfc2
|
[]
|
2020-12-20 05:47:21.991000+00:00
|
['Babies', 'Health', 'Education', 'Life', 'Coronavirus']
|
5 Mistakes to Avoid as a Startup Owner
|
It’s not uncommon for startups to make mistakes; after all, they’re run by human beings, right? But it’s crucial to avoid those mistakes as much as possible, since more than 80% of startups fail in their initial launching year. Starting a business and building it into a significantly profitable one is not an easy task. There will be many pitfalls to avoid. Here we’re looking into mistakes that you should avoid as a startup owner, to be in the game for a long time.
Not having a strategy.
It’s easy not to plan anything and just sit idle. This is one of the most common mistakes that startup businesses tend to make when they begin their venture. Most of them don’t even have a clear idea about what they should do with their business after launching it!
Many individuals are getting into entrepreneurship these days, as it seems to be a new trend in society. But to lead the game, it’s crucial to have a clear strategy for each stage of your business. Whether it’s a downturn or clear progress, you must be well prepared to deal with it.
It’s advisable to have a business plan, as it keeps a written record of all of your business’s strategy and can help you track progress.
Doing everything by yourself.
Yeah! It’s your idea, and you feel like the ideal person to handle everything, but doing it all yourself is a complete waste of time. Hire someone; search for people who’re good at specific tasks and give them those responsibilities. You’re the boss, and spending your time doing every task can reduce your productivity, and that isn’t good for business.
So, be ready to delegate tasks to someone. But don’t be too eager to hire someone to work for you; make sure that it’s the right time to bring on a new team member.
Underpricing products or services
Know your worth! It’s easy to underprice the products and services you provide. As you’re starting out, there will be a lot of confusion about fixing a price, so do some market research and approach other people in the industry to get a price suggestion for what you do. Be sure not to sell anything at a loss, as it can threaten your business’s existence.
Many entrepreneurs tend to underprice their effort, either because they don’t value it much or because they haven’t actually done any market research. Getting underpaid can be a demotivator for your business, so be sure to value your effort.
Skipping the legal works
It’s easy to skip the legalities of getting into a business, and many do. Startups often don’t appoint a lawyer at the initial stages of their operations; that’s understandable, as the fees are a bit high. But several startups end up suffering because of this casual approach to legal work.
Every country has its own rules and regulations to follow; handling them properly can help you stay in business for the long run.
Choosing the wrong team
Choosing a partner for your business, and even hiring a team, should not be taken lightly, as all of your business’s progress depends on how effective your team can be. Many startups tend to hire people based on talent alone, but it’s not the only thing to consider. Having the right individuals on your team can boost your overall business performance.
So, make sure to choose the right team!
Conclusion
As business owners, our decisions are significant. Our businesses’ futures lie in our actions. Understanding that responsibility and acting accordingly can help everyone attached to our venture get a better result for their efforts. Be sure to avoid these crucial mistakes that most startups make; doing so can help you build a better future for your business.
|
https://medium.com/startupwriter/5-mistakes-to-avoid-as-a-startup-owner-a489ba431e92
|
['Midhun Areeckal']
|
2020-12-27 14:00:53.045000+00:00
|
['Startup Life', 'Startup', 'Startup Lessons', 'Business Mistakes', 'Startup Mistakes']
|
Support Vector Machine. Support Vector Machine (SVM) is a…
|
Support Vector Machine (SVM) is a supervised classifier and is defined by a separating hyperplane. In other words, given a set of labeled data, SVM generates an optimal hyperplane in the feature space which demarcates different classes.
Confusing, isn’t it? Let’s understand it in layman's terms.
Suppose you have a given set of points of two types (say □ and ○) on paper which are linearly separable. The job of SVM is to find a straight line that separates the set into two homogeneous types, and which is also situated as far as possible from all those points.
Evidently, both straight lines ‘A’ and ‘B’ separate the two types of points as desired. However, ‘A’ is situated as far as possible from all those points. SVM, as a tool, will elect ‘A’ as the separating hyperplane. In the image, the light blue periphery around lines ‘A’ and ‘B’ is called the ‘Margin’. It is defined as the distance from the hyperplane to the nearest point, multiplied by 2. In simpler terms, the hyperplane stays in the middle of the margin. The higher the margin, the more optimal the hyperplane.
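The margin definition above can be made concrete. For a hyperplane w·x + b = 0, a point’s distance to it is |w·x + b| / ‖w‖, and the margin is twice the distance to the nearest point. A minimal sketch with made-up 2-D points:

```python
import math

# Distance from a point p to the hyperplane w.x + b = 0, and the
# margin: twice the distance to the nearest point. Toy data only.

def distance(w, b, p):
    return abs(w[0] * p[0] + w[1] * p[1] + b) / math.hypot(w[0], w[1])

w, b = (1.0, -1.0), 0.0             # the line y = x as our hyperplane
points = [(2, 0), (0, 3), (4, 1)]   # hypothetical points from both classes

margin = 2 * min(distance(w, b, p) for p in points)
print(f"margin = {margin:.3f}")  # 2 * sqrt(2), set by the nearest point (2, 0)
```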
Working of SVM
Till now, we are familiar with the process of segregating two classes with a hyperplane. But a rather important question that arises is how it works and separates the two given classes. Don’t worry, it’s not as complicated as it appears to be.
To understand this, we need to delve into the working of Support Vector Machine or SVM. In the process, we will look at different scenarios of how a hyperplane can be constructed. Just remember a thumb rule to identify the appropriate hyperplane:
Select the hyperplane which segregates the two classes better.
In the example, we need to draw a hyperplane such that it distinguishes the two classes. We start with randomly plotting of three hyperplanes along with the data set, as shown in the graph hereunder.
Now, we attempt to adjust the orientation of the hyperplanes in such a way that it homogeneously divides the given classes. Here, all three hyperplanes (‘A’, ‘B’ and ‘C’) segregate the classes well, but a rather pertinent question is “How can we identify the appropriate hyperplane?”
To answer the question, we further try to maximize the distances between the nearest data point and hyperplane, which would help us to decide the hyperplane. This distance, as we have learned, is called ‘Margin’.
As can be seen in the graph, the margin for hyperplane ‘B’ is comparatively higher than both ‘A’ and ‘C’. Therefore, we consider hyperplane ‘B’ as the best fit.
Another pertinent reason for selecting the hyperplane with a higher margin is the degree of robustness. That is to say, if we select a hyperplane having a low margin, there are higher chances of misclassification.
In the scenario given below, it is not possible to draw a linear hyperplane to classify the given set, then how does SVM demarcate the same? (Note that so far we have only looked at the linear hyperplane.)
These types of problems are very easy for the SVM algorithm, which solves it by introducing additional features.
Plotting the transformed points in the x-y plane, we get:
Now, we can easily draw a hyperplane which differentiates the two classes.
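A minimal sketch of that additional-feature trick, with made-up points: two classes arranged as an inner and an outer ring are not linearly separable in (x, y), but adding the feature z = x² + y² lets a flat threshold on z split them.

```python
# Adding a squared-radius feature makes circular classes linearly
# separable -- a hand-rolled sketch of the idea behind the kernel trick.

inner = [(0.5, 0.0), (0.0, -0.6), (-0.4, 0.3)]   # class ○, near the origin
outer = [(2.0, 0.1), (-1.8, 1.0), (0.3, -2.2)]   # class □, farther out

def lift(p):
    x, y = p
    return x * x + y * y  # new feature z = x^2 + y^2

# In the lifted space, a simple threshold on z separates the classes
threshold = 1.0
assert all(lift(p) < threshold for p in inner)
assert all(lift(p) > threshold for p in outer)
print("classes are separable with z <", threshold)
```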
Hyper-parameter Tuning
Kernel
The most important hyper-parameter of SVM is ‘Kernel’. Given a list of observations, it maps them into specific feature space. Generally speaking, most of the observations are linearly separable after this transformation. Note that the default value of the kernel is ‘rbf’. Now, we would quickly scroll through different types of Kernel.
1. Linear Kernel
In Linear Kernel, we only have a Cost parameter.
2. Polynomial Kernel
Here, usually, ‘r’ is set to zero and ‘γ’ to a constant value. Along with the cost parameter ‘C’, the integer parameter ‘d’ has to be tuned. The value of ‘d’ ranges from 1 to 10.
3. Radial Kernel
It is the most popularly used kernel. It often outperforms other kernels due to its flexibility in separating observations. Here, the cost parameter ‘C’ and the parameter ‘γ’ have to be tuned.
4. Gaussian Kernel
In the Gaussian Kernel, we calculate an N-dimensional kernel by picking ‘n’ patterns in data space first. Then, the kernel coordinates the points by calculating its distance to each of these chosen data points and thereafter, taking the Gaussian function of the distances.
Regularization
The regularization parameter is also known as the ‘C’ parameter. It tells the SVM optimization how strongly to avoid misclassifying each training example.
If ‘C’ is very large, the optimization will choose a hyperplane with a smaller margin. Similarly, a very small value of ‘C’ will force the optimizer to select a hyperplane whose margin is large, even as the selected hyperplane misclassifies more points.
Given below are examples of two different values of the ‘C’ parameter. In the left image, the regularization value is low and hence there is some misclassification, whereas a large value of ‘C’ leads to choosing a smaller-margin hyperplane.
Gamma
The gamma parameter characterizes how far the influence of a single training example reaches. A low value of gamma means points far away from the hyperplane are still considered in the calculation; a high value means only nearby points count.
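In practice, ‘C’ and gamma are usually tuned together with a grid search. A sketch using scikit-learn’s SVC (this assumes scikit-learn is installed; the toy data below is made up):

```python
# Tuning C and gamma for an RBF-kernel SVM with a small grid search.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Two well-separated toy clusters, one per class
X = [[0, 0], [0, 1], [1, 0], [1, 1], [3, 3], [3, 4], [4, 3], [4, 4]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Large C -> smaller margin, fewer misclassifications; large gamma ->
# each training point's influence is local (risk of overfitting)
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=2,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```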
Margin
Margin is the gap that separates the closest points belonging to different classes. A margin is considered good if the separation is large for both classes and points belonging to one class do not cross over to the other.
Conclusion
Considering the limitations of SVM, it doesn’t perform well when we have large data sets, as the required training time is higher. SVM also falls short when the data has more noise or the given classes overlap, because it is difficult for SVM to draw a hyperplane between overlapping classes. Further, SVM doesn’t directly provide probability estimates; these are calculated using an expensive five-fold cross-validation.
However, on the positive front, SVM works really well with clear margins of separation and is effective in high dimensional spaces. It also performs well in cases where the number of dimensions is greater than the number of samples. It uses a subset of training points in the decision function (called ‘support vectors’), and therefore, is also memory efficient.
|
https://medium.com/analytics-vidhya/support-vector-machine-bf7dfa64b893
|
['Aakash Jhawar']
|
2020-08-08 16:02:06.130000+00:00
|
['Machine Learning', 'Data Science', 'Support Vector Machine', 'Deep Learning', 'Hyperplane']
|
I really like the single lane race analogy.
|
I really like the single lane race analogy. I’ve allowed the success of the less deserving affect my strategy. I knew what to do but my god there’s so much shit being flung by flukes that had one piece of shit stick that are now selling classes for 600 bucks a pop that I started second guessing myself.
Stick to my plan. I know it works! Fuck these shit flingers!
(╯°□°)╯︵ ┻━┻
Thanks! I needed that.
|
https://medium.com/@hogantorah/i-really-like-the-single-lane-race-analogy-9b4bfd797c33
|
['Hogan Torah']
|
2020-12-11 15:46:49.322000+00:00
|
['Psychedelics', 'Financial Planning', 'Strategy', 'NBA', 'Self']
|
Blockchain, Bitcoin, Ethereum, Oh My!
|
Blockchain, and cryptocurrencies more specifically, are something that at this point a lot of people have heard of in some way, shape, or form. For most, it falls under the “get rich quick” schemes that everyone so adamantly wants. For others, it’s the technology behind the “wild west” cash grab of tokens and coins. While those two concepts definitely hold their own weight, there’s also a small group that recognizes the value these things bring from an understanding of trust.
You see, a lot of people my age harbor animosity and distrust for the institutions of today. They see “The Man” as something that suppresses creativity and the ability to grow. They feel that the establishment, and its 1%, simply feed off the bottom without any consideration for the wellbeing of the commons. While history has shown that there are issues with amassing exorbitant amounts of wealth at the top, I see the distrust in the establishment as a crutch and opportunity for improvement. Rather than extreme calls for revolution and upheaval of the existing systems, we must acknowledge that we wouldn’t be where we are today as a society without the institutions we’ve built. They’ve grown and helped establish the global economy we live in. They’ve brought about abundance and prosperity that the world has never seen. And while we have recognized this prosperity, there is no doubting the mistrust that it has formed due to the vast levels of inequality it has created. This distrust is also the primary reason why I believe blockchain technology has the potential to become commonplace within every industry in our economy.
At its core, cryptocurrencies and blockchains follow the basic principles of a ledger. Everything that happens on a chain is recorded and immutable. That means that I can’t simply tell you one thing and ask you to take my word for it. Instead, if your heart’s desire is to fact-check what I’ve said, the information should be publicly available on the blockchain and untampered with. Now that is a super simple way of explaining the concept, but in reality, that’s what this technology can do. Throw in the fact that it’s decentralized and utilizes a network of computers, like the one you’re currently using (the Internet), to reach a consensus on what is true and what is false, and you now have a digitized means of institutionalizing things.
While still in its infancy, blockchain has a lot to offer to the world economy. Similar to the unknown capabilities of the Internet back in the '90s and early 2000s, blockchain has yet to face its true potential. That is why the market for tokens is super speculative and people see it as an opportunity to strike Bitcoin “gold”. In my mind, it’s a lot more like gambling on pets.com and hoping you’ve placed your bet on a winner. No one truly knows which chain or concepts will stick, but the fact is, trust is a concept that our economies have been running on for decades now, and something these technologies can reintroduce by co-opting it from the centralized walled gardens we use today.
Any major loss of faith in our economy due to a breach of trust could be disastrous. The biggest problem we face now more than ever is demonstrating to people that they should think critically about every piece of information that they are presented with before trusting it. This is because trust is a very hard thing to earn and extremely easy to lose. With blockchain technology, we can begin to form a consensus on topics and rebuild trust amongst our populations. Slowly we are beginning to bridge the gap of trust that exists within our traditional and digital economies, and I confidently believe within the next decade we will see institutions adopting these technologies to follow suit. In my mind, blockchain won’t be the downfall of institutions, but more so, the technology that helps facilitate better models of trust in the ones we have. Blockchain will engulf our existing systems and broadcast every move that is made within them to network participants, in turn creating a more transparent and cooperative society.
|
https://medium.com/@matt-olan/blockchain-bitcoin-ethereum-oh-my-db62c7964303
|
['Matthew Olan']
|
2020-12-07 20:54:50.224000+00:00
|
['Blockchain', 'Institutions', 'Society', 'Trust', 'Ethereum']
|
Can We Live Without Google?
|
Photo by Christian Wiediger on Unsplash
Can We Live Without Google?
This morning, Google went out of service, apparently because of a hacker attack, leaving millions of people unable to access their Google accounts, including Gmail and YouTube, after the world’s largest search engine crashed.
This comes just days after a significant breach of US Government agencies by Russian hackers. The outage lasted for more than an hour.
Well, I may say that 2020 has been a hard enough year already, but having the most important cloud services platform out of service sets a new high on the “worst year ever” index.
And while it looks like various services are starting to come back again, I’m taking this morning as an excellent opportunity to put ourselves one question: What would our lives be like if Google services suddenly stopped working, forever?
Photo by Greg Bulla on Unsplash
How big is the Big G?
Have you ever noticed how dependent we are on services like Google Search, Gmail, Google Docs, YouTube, and Google Calendar, among others?
Google is the most visited website in the world. To put a number on it, Google has been visited 62.19 billion times this year, and it processes over 3.5 billion searches per day. Google Lens can recognize up to a billion different items. Broken down by device, 63 percent of Google’s US organic search traffic originates from mobile devices. Google is also the starting point for almost half of all product searches: 46 percent of product searches begin on Google.
Without even looking at these stats, we know that we are dependent on Google.
Multiple times per day, we turn to Google to resolve our queries. To be precise, 84 percent of people use Google three or more times a day.
I’m sure that many of us felt helpless and cut off from the world; some of us lost our ties and remained isolated from our friends and business.
Well, with a global pandemic out there, far more serious issues still going on, and thousands of people dying every day across the world from COVID-19, I know that 2020 has been apocalyptic enough. Even so, the scenario we faced this morning has shown us that there is a significant dependency on Big G’s services, and for many people it felt like a natural “end.”
We must recognize that our whole daily life depends on corporations’ services. Besides that, Google does not offer something unique; it simply provides higher quality than the competition, and that is why Google became so relevant.
I don’t think the problem is having a dependency on agile cloud services; rather, I think we should spread our dependencies across other companies that can supply our needs. The monopoly over our needs is the real threat here.
This morning was a kind reminder that without Google, a big part of the modern world would be cut off: our documents would not be accessible, we could not reach many websites (simply because many of us are unable to remember URLs), and we could not read our emails.
I’m pretty sure that if one day Google is not there, other companies will appear, and we will be making the same vows of eternal love to them. But until then, these hiccups make me feel anxious.
Photo by Brett Jordan on Unsplash
But how dependent are we?
Search engines like Google and internet databases have become a kind of “external memory” for our brain, according to a study published Thursday in the journal “Science,” which reveals that we have lost retentive memory of data, but we gained search skills.
Educators and scientists have already warned that man was becoming increasingly dependent on online information, but so far, there have been few studies to confirm it.
The study, “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips,” shows how computers and online search engines have become something like an “external memory” system for our “personal database,” a phenomenon known as the “Google effect.”
This study suggests that people tend to think of computers when faced with difficult questions. When people expect to have access to information in the future, they have lower recall of the data itself and better recall of where to access it. And that is precisely what makes us so dependent on Google today.
Photo by Mitchell Luo on Unsplash
Is it possible to live without Google?
Maybe we should be considering some personal strategies to be less conditioned by Google, if not today, then soon.
Even if we do not see alternatives on the horizon or do not see the urgency, I still consider it a valid exercise.
We are giving too much of our lives to just one massive and powerful company, which profits by targeting ads based on what we do on the websites, apps, and services it offers for free.
But at the same time, I recognize that, although it is possible, it is not an easy path. It requires adaptation and getting rid of some commodities. But it is not, as many think, a huge sacrifice. There is competition out there, and in many sectors, there are alternatives that are as good as Google’s. They are different, with advantages and disadvantages, but at least they are viable. Maybe we need to explore this more, no?
One reason to leave Big G could be that you consider the benefit you receive in return not worth sacrificing your privacy for. If you don’t mind spending 10 seconds typing a full URL in your address bar instead of jumping to a location guessed by Google Search, you do not need Google.
Here is a case worth thinking about deeply: Google can know all the websites and physical places you’ve been. Nobody else on this planet knows this, but Google does. You usually share this level of information with only a few people.
I don’t see other companies walking this path, but if it happens, patience, I’m looking for other solutions.
Photo by Rajeshwar Bachu on Unsplash
Conclusion
Google has become the primary form of external or transactive memory for many users. Information (documents, emails, presentations, appointments, videos, images, etc.) is collectively stored in its cloud services, and we don’t have to make costly efforts to find what we want. We can simply “Google” it.
We should not be surprised to find that more and more people are not memorizing relevant information because they trust that they can get it with their search skills.
The real point here is: are the benefits we receive from using Google services so heavily worth sacrificing our privacy and, in the end, our freedom?
One more thing…
If you want to read more about Google and how it is collecting an enormous amount of data about you at this moment, I wrote an article about it:
References
|
https://pub.towardsai.net/can-we-live-without-google-c2f3ed0be151
|
['Jair Ribeiro']
|
2021-03-22 10:32:35.542000+00:00
|
['Google Cloud Platform', 'Life Hacking', 'Search', 'Gmail', 'Google']
|
How to control Anger in Relationship? Anger Management
|
Many people say that we get angry a lot. Many times we resolve not to get angry, but every time we fail. We even make vows that we will never be angry again, yet we still get very angry. Anger is something that can poison even the most loving relationship. Anger first poisons our minds, then our lives, and finally our whole world.
Welcome, all my readers! We are all human beings, and anger runs in our veins; it is human nature. But the person who can control his anger and anxiety is called a gentleman. In this day and age, wherever we look, we find people unhappy, showing their anger at one another as if it were their birthright. Even if we accept that human nature is like this, the responsibility to control it is not nature’s; it is ours.
So friends, in this post we will learn how to control anger, stress, and anxiety in a relationship, what anger issues are, how to manage a short temper, and what anger management is.
What is Anger?
Anger is a human emotion that can be dangerous for anyone. It can easily ruin relationships, lives, mental strength, and more.
A person who is angry is unhappy before the anger, unhappy during it, and sad after it. Think about it: whenever we were angry, did we ever get happiness, comfort, or peace from it?
Would we allow anyone else to be angry with us, to tell us whatever he wants and speak however he wants? No one can like another person’s anger. So why would anyone accept our anger? Why would anyone like it?
A story for Anger Management
Once, a man went to his guru and said, “I feel very angry, and because of that anger everyone in my house is upset with me. Because of that anger I too feel remorse and repentance; I cannot face people. In anger I say words that cannot be taken back later.”
We have to remember one thing: we may be forgiven for the words we say in anger, but people never forget them. So whatever we say in anger, we will have to face those words throughout our lives.
He said to his guru, “I am upset. I try hard, but I cannot break my habit of anger.” The guru replied, “Whenever you get angry, remember one thing: I will get angry, but tomorrow, not today.” The man said, “That will calm my anger only a little; I will just get angry the next day instead.” The guru answered, “Try it once and see.”
Anger: the biggest enemy of a human being
He went home, and soon he got angry about something, but he remembered his guru: “I will get angry, but not today; tomorrow.” Tomorrow was a distant thing; after a while, his anger subsided.
Anger, a poison
How to control it?
Anger is like smoke, like a cloud that passes after a while. We only have to hold on to that sobriety for a while; we only have to control it for a while. But often we cannot tolerate even that much.
Many people say, “We do not get angry; people make us angry.” Are we a TV, playing whatever program matches the button someone presses on the remote?
Why are we so easily provoked these days? It has become so easy to make anyone angry, and so difficult to make anyone happy, because this is our bad habit.
We put the blame for every bad thing on someone else; if not one person, then another. We want the whole world to run according to us, and everyone to follow us, so that we do not get angry.
Everything in the world should change, every rule of the world should change, so that we do not get angry. But even after all of that changed, we would still get angry, because the anger is really inside us. Other people serve only as a trigger for our anger to come out.
Read More: Loneliness Partner
|
https://medium.com/@vbvbrb/how-to-control-anger-in-relationship-anger-management-9132b1c29502
|
['Loneliness Partner']
|
2020-05-05 08:28:14.473000+00:00
|
['Motivation', 'Anger', 'Happy', 'Relation', 'Love']
|
ExpressJS & GraphQL — Authentication & Access Control
|
Project Setup
I will not describe and show process of configuring Typescript, Nodemon and writing some things like interfaces. You will be able to take a look at whole app code on Github repository.
Link to repo will be at the end of this article.
Project structure
If you take a look inside the graphql folder, you will notice two folders: authorized and unauthorized. I made a physical split of the resolvers, queries, types, and mutations into authorized and unauthorized schemas.
Based on the user’s authentication status, our app will know which schema should be served to the end user.
With this split we get one more level of security.
When a user who is not authenticated obtains the GraphQL schema from the server, he will not be able to see the list of queries and mutations that require authentication.
Type definitions generator - src/graphql/type.generator.ts
I will create a resolver register where we will be able to control which resolvers should be included in the authorized and unauthorized schemas.
Resolver register — src/graphql/resolver.register.ts
Schema generator based on user auth status — src/graphql/schema.generator.ts
Generating GraphQL Express middleware — src/middlewares/graphql-express/graphql-express.middleware.ts
Now I will set up the authentication and access control middlewares.
Authentication middleware will be running on entire GraphQL endpoint.
Access control middleware will be running on resolvers level.
Authentication middleware — src/middlewares/auth/auth.middleware.ts
Access control middleware — src/middlewares/access-control/access-control.middleware.ts
Now I will set up a helper for running middlewares at the resolver level.
Middleware check helper — src/middlewares/middleware.check.ts
NOTE: In a real-world scenario I would use a real database and a real authentication standard such as JWT, but in this article I’m faking the DB and the authentication system.
In this fake DB, user objects have assigned different roles and tokens which will be used for authentication and authorization.
Tokens will simulate bearer tokens.
Fake DB — src/helpers/fake-db.helper.ts
Authentication query service — src/graphql/unauthorized/auth/auth.query.service.ts
Authentication resolver — src/graphql/unauthorized/auth/auth.resolver.ts
Authentication query types — src/graphql/unauthorized/query.graphql
Book mutation service — src/graphql/authorized/book/book.mutation.service.ts
Book query service — src/graphql/authorized/book/book.query.service.ts
Below you can see in book resolver how to use middlewares on resolver level:
Book resolver — src/graphql/authorized/book/book.resolver.ts
Mutation for authorized — src/graphql/authorized/mutation.graphql
Authorized queries — src/graphql/authorized/query.graphql
Now let’s wrap this all:
Server — src/server.ts
Let’s run server with npm start command
|
https://itnext.io/expressjs-graphql-authentication-access-control-c5c8fe360b07
|
['Nenad Borovčanin']
|
2020-03-16 10:52:29.981000+00:00
|
['GraphQL', 'Node', 'Nodejs', 'Typescript', 'Expressjs']
|
Review: “The Prom” misses its cues
|
Allow me to begin with a disclaimer: I haven’t seen any Ryan Murphy project since Glee, nor have I seen The Prom on stage. But judging by Murphy’s film adaptation, the show was either a perfect vehicle for him to provide more of his trademark subpar queer representation, or the same issues that plagued Glee dampened a perfectly decent musical about love and acceptance. I’m not just talking about the actors who are clearly too old to pass for high schoolers, though that doesn’t help things. It’s mostly the fact that this movie about overcoming homophobia spends more time focusing on Meryl Streep’s failed marriage than telling us anything about the young lesbian couple at the center of the conflict, which tinges the plot with a brow-raising irony.
In fairness, The Prom is not a poorly made film. It’s competent enough, but it largely coasts on the A-listers in the cast to impress the audience and makes little effort beyond that. The characters played by Meryl Streep, James Corden, Nicole Kidman, and Andrew Rannells are washed up Broadway actors in need of some positive press, and those names alone should show you the problem. For all their talent, they’re much too well-established in entertainment not to be distracting in these roles. I get the rationale in casting Meryl Streep as an aging Broadway diva, but her casting feels more like a constant wink at the audience than a character. Nicole Kidman can’t pass for a background chorus girl, and while I don’t necessarily think gay characters have to be played by gay actors, James Corden’s portrayal might be winning me over to that side (never mind his attempt at an American accent). Andrew Rannells is the only one who isn’t too overused in Hollywood to be enjoyable in his role as a jaded but clever out-of-work performer. That said, the original character probably could have been cut and made into a composite character with Corden’s role, given their shared predicament and midwest roots.
That’s a major problem with this adaptation. It has totally the wrong ideas about what material from the show works in a film and what doesn’t. The musical numbers are usually pretty fun to listen to, but only a few are actually fun to watch on screen, and every single song from the Broadway show is included at least partially. Possibly the most infuriating moment is when Emma and Alyssa’s part in “You Happened” stops dead in its tracks, killing the pacing and stripping the number of its insight into their relationship. Yet somehow there’s time to include in its entirety a ballad that literally sings the praises of theatre actors, visualized in the most boring way possible as most of the numbers are. The romance between Meryl Streep’s character and the high school principal could have easily been reduced or removed entirely, but it gets exponentially more focus than the supposed main plot of Emma trying to get equal treatment in her homophobic town.
The Broadway actors’ collective arc is supposed to end with them setting aside their own interests and starting to actually help Emma for her sake. But the movie only serves to validate their attempts at good PR by staying with them for the majority of the run time rather than giving Emma any defining traits or character flaws beyond being a bit shy. She says toward the beginning that she doesn’t want to be “a symbol”, but that’s what she’s relegated to. Jo Ellen Pellman gives a good performance, but she never gets to be an active character that the audience can connect with. She mostly reacts to things happening around her instead of participating in or really pushing back against anything. Her girlfriend Alyssa gets still less development, and even James Corden’s character doesn’t have much agency in his subplot. The movie ostensibly has queer issues at the forefront, but it sidelines its actual queer characters and puts their straight allies on a pedestal.
The plot isn’t horribly structured, but it is unbalanced in its development of the central characters. I quickly got tired of seeing Meryl Streep, James Corden, and Keegan-Michael Key, but I wanted to see more of Emma and Andrew Rannells’ character, and I couldn’t even tell you the name of Nicole Kidman’s character. Nor could I tell you why or how she forms a bond with Emma seemingly out of nowhere in the second half. Some antagonists are redeemed without being properly humanized beforehand, which puts it on the audience to forgive them without much of a reason for some astonishingly cruel behavior. The pacing is mostly okay, but there are several key scenes that completely fizzle out and kill much-needed tension. There are some moments in the third act that get a genuine emotional response, but that’s more from what the story draws from real life than from the movie on its own merit.
I hesitate to call The Prom a bad film. It’s well-acted with some solid comedic moments, catchy songs, and a positive message. But much like the actors trying to improve their image by helping a poor young lesbian, the movie seemingly hopes that its message of tolerance will earn it enough goodwill to distract from its glaring flaws. It may suffice as a fun feel-good musical, but it’s probably a better use of your time to listen to the Broadway cast recording while doing something else.
An MFA student at Hollins University whose penchant for Disney led into a love for all things film. Film critic/essayist and screenwriter. View all posts by Mary McKeon
|
https://medium.com/@filmknife/review-the-prom-misses-its-cues-b30ea0e300b6
|
['Mary Mckeon']
|
2020-12-13 18:05:00.143000+00:00
|
['Film', 'Netflix', 'Musicals', 'Film Reviews', 'The Prom']
|
Data Preparation for Machine Learning
|
Preparing data is a fundamental activity in any machine learning project. Without adequate prepared data, machine learning algorithms can’t be trained, evaluated, or deployed. Data preparation often consumes well over half the budget of a machine learning project; consequently, it is important to gain as much value as possible from the data preparation process.
Note that throughout this document we use ‘model’ interchangeably with ‘machine learning algorithm’.
We present here a standardized approach to preparing data for training models. Our approach decouples preparing data from training models — making the approach equally applicable to preparing data for processing by an already-trained model.
Information generated during data preparation provides details of the characteristics of data — which in turn can inform decisions related to more effective collection of future raw data, and can offer insights when performance of an algorithm trained with the data does not meet expectations.
This document contains the following sections:
The Preliminaries — Introduces and explains fundamental concepts and requirements for preparing data.
Minimal Data Preparation — Outlines minimal capabilities and expected outcomes for a data preparation process.
Robust Data Preparation — Identifies additional data preparation processing that yields significantly more useful information about both the original raw data file as well as the clean prepared data file.
Best Practices — Summarizes the key concepts and processing approaches presented and suggests best practices.
Resources — Provides a brief introduction to the neuralstudio.ai automated machine learning platform — which generated the results used in explanations of concepts and strategies.
In the interest of clarity, and to avoid the clutter of tangential explanations related to accessing any particular database or other type of persistent storage, we base this discussion on data in the form of a ‘flat’ raw text file — a format that can be easily derived from any storage type or technology, and that is ubiquitous in machine learning projects. Subsequently, we will refer to an original baseline collection of data as a ‘raw data file’.
Furthermore, while our focus is preparing data, it bears mention that partitioning data (for example, into model training and model validation sets) is also an important activity in the full model development life cycle. Usually, partitioning involves simply holding out a portion of prepared data for model validation. More complex partitioning, such as may be required for time series data in which early portions of the series are used for training, and the latest portion is used for validation, or further sampling of prepared data to reflect desired distributions of values identified during the data preparation process, is beyond the current scope.
The Preliminaries
The primary motivation for preparing data is obvious: the core mathematics which underlie any machine learning algorithm require clean data. Proper data preparation ensures that data used in computations is of the correct type and is appropriately formatted so that training the model (as well as running the model when it is deployed) can proceed without sustaining exceptions.
Note that we draw an important distinction between ‘clean’ data, and ‘validated’ data.
A ‘clean’ data value (a) has a value that is not null (that is, the value is not empty), and (b) is correctly formatted for the particular field/source that produced the data.
In other words, if a field/data source is expected to contain/generate numeric values then a clean value is a number (meaning the value can be converted internally by a programming language to an integer or a floating-point primitive without causing an exception). If an exception occurs, that particular value representing the field/data source is treated as an arbitrary symbol (an alphanumeric character string). The final determination of whether a field is considered numeric or alphanumeric depends on the relative counts of occurrences of numbers and alphanumeric strings for each field in the raw data file.
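The parse-or-except rule described above can be sketched in a few lines of Python (the function name is my own):

```python
def is_clean_numeric(value: str) -> bool:
    """True if the value is non-empty and parses as a number."""
    value = value.strip()
    if not value:
        return False          # missing (null/empty) value
    try:
        float(value)          # covers both integers and floating point
        return True
    except ValueError:
        return False          # treated as an arbitrary alphanumeric symbol

print(is_clean_numeric("3.14"))  # True
print(is_clean_numeric(""))      # False: missing value
print(is_clean_numeric("N/A"))   # False: alphanumeric string
```

Counting how often this check succeeds versus fails per field yields exactly the relative counts used to decide whether the field is treated as numeric or alphanumeric.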
A ‘validated’ data value has been further processed to confirm that the actual value (either as a number, or as a specific character string) lies within acceptable bounds, defined in the context of domain knowledge for a particular problem/environment.
Our focus here is the creation of clean data files from original raw (and potentially ‘messy’) data files. We acknowledge that validated data files are also important; however, producing a validated data file from a clean data file is relatively straightforward as long as the process which generates the clean data file produces information about the ranges of data values for every field in both raw and clean files.
Data Organization
As the first step in preparing data for use with machine learning algorithms, it is important to identify/confirm fundamental attributes of the data.
In a raw data file, data values are organized as lines (also called records), with individual values (either alphanumeric strings or numbers) contained in fields within records.
Special Characters
Fundamental attributes of a raw data file are defined by a small set of Special Characters.
The separation of data values into lines is based on a ‘Line Termination Style’. The separation of data lines into fields depends on a specific, unique within a file, ‘Field Delimiter’ character.
The Line Termination Style normally reflects the operating system in use when the file is created — either a version of Microsoft Windows using the Windows style of line termination; or MacOS/Linux/Unix, all of which use the same (but different from Windows) style of line termination.¹ However, the creator of a raw data file may explicitly set the Line Termination Style, and the creator of the raw data file always specifies the Field Delimiter.
As another Special Character, the creator of a raw data file may define a particular ‘Comment Character’, such as ! or #, to identify comment lines that should be ignored when the file is processed (that is, the lines should never be presented to a model). Comment lines permit including explanatory information in a raw data file. If a ‘Comment Character’ is defined, the data preparation process must ensure that comment lines are removed (not presented to a model), both during training and when a model is run.
Similarly, the creator of a raw data file may define a sequence of characters (for example, ‘-999’ or ‘N/A’) as an ‘Unknown Value Marker’ to indicate that a particular data value is not known.
An Unknown Value Marker permits distinguishing between a truly missing value (for example, a respondent left a field blank on a form), and a value that is unknown (the respondent provided an illegible response in a field on a form). In general, to a machine learning algorithm the implication is the same — a field whose ‘value’ is missing or is the Unknown Value Marker, should not be used in training or when a model is run.
If an Unknown Value Marker is defined, all occurrences of the character sequence must be identified and processed appropriately.
In particular, if the Unknown Value Marker is a number such as -999, and records containing the Unknown Value Marker are not ignored (removed), statistics generated on a per-field basis will not be accurate because the numeric value will incorrectly influence calculations. As a consequence, performance of models trained with the data will not reflect real causal relationships. Appropriate handling of Unknown Value Markers as well as missing values entails pre-processing the original raw data file, and removing (ignoring) all records with fields whose values are either missing or are the Unknown Value Marker.
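The pre-processing described above can be sketched as follows. The marker value and the Tab delimiter are assumptions chosen for illustration, not values prescribed by any particular tool:

```python
# Minimal sketch: drop records in which any field is empty or contains an
# assumed Unknown Value Marker of "-999", so such values never reach a model.
UNKNOWN_MARKER = "-999"
FIELD_DELIMITER = "\t"

def clean_records(lines):
    """Return only records in which every field has a known, non-empty value."""
    kept = []
    for line in lines:
        fields = line.rstrip("\r\n").split(FIELD_DELIMITER)
        if any(f == "" or f == UNKNOWN_MARKER for f in fields):
            continue  # record has a missing or unknown value; ignore it
        kept.append(fields)
    return kept

raw = ["42\t7.5\tred\n", "13\t-999\tblue\n", "8\t\tgreen\n"]
print(clean_records(raw))  # only the first record survives
```

Note that the records are removed entirely rather than zero-filled, which keeps per-field statistics accurate.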
The final important issue related to Special Characters is how numeric values are represented. Integer representation is generally straightforward — integers have an optional sign (+ or -) and a value based on only the digit characters 0–9.² The representation of real numbers, on the other hand, requires a ‘Decimal Separator’ character (either ‘.’ or ‘,’). Usually the Decimal Separator is determined by the Locale or Region setting of the operating system; however, depending on how the raw data file was created, the Decimal Separator of the file may not match the Decimal Separator of the operating system — which can lead to unexpected errors during processing.
Additionally, both integers and real numbers can be represented in scientific notation (which means a value could include the non-digit ‘e’ or ‘E’ character), and integers and real numbers can each have an associated ‘Currency Symbol’ and/or a ‘Thousands Separator’ character.
The Thousands Separator must be consistent with the Decimal Separator, and this is always the case when numeric values are generated by a programming language or saved from a spreadsheet. However, problems can arise when numeric data in a spreadsheet using a US Locale contains numbers which have the US Thousands Separator (comma), and the spreadsheet is saved as a CSV (comma separated value) file. Microsoft Excel and other spreadsheets will surround such values with double-quote marks, which, while nominally ensuring the value is correct, can potentially cause problems when the file is read by other software that automatically treats values within quotation marks as alphanumeric strings.
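The quoting behavior just described can be observed directly with Python's standard `csv` module; the sample row is invented for illustration:

```python
import csv
import io

# A US-formatted number containing a Thousands Separator is quoted when
# saved as CSV; a naive reader then sees it as a string, not a number.
row = next(csv.reader(io.StringIO('Alice,"1,234.56",NY\n')))
print(row)  # ['Alice', '1,234.56', 'NY'] -- quotes removed, comma retained

# Recovering the numeric value requires stripping the Thousands Separator:
value = float(row[1].replace(",", ""))
print(value)  # 1234.56
```

Software that skips the explicit conversion step will silently carry the value forward as a string, which is the problem the paragraph above warns about.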
The following image summarizes information that describes basic characteristics of a raw data file.
Several points to note with respect to this information.
First, there is no reference to a Thousands Separator character. Normally, any numeric values containing Thousands Separator characters in the raw data file would be ‘silently’ converted to numbers (integers or real numbers with the correct magnitude) when processed and placed in the final prepared data file, regardless of the language used to implement the data preparation software. If for some reason numeric values containing Thousands Separator characters are not handled correctly, the data preparation software in use has a serious deficiency.
Second, if a Currency Symbol is associated with a data value in the raw data file, it is quite possible that the value would be treated as character string, rather than a number. While this is likely not what is expected or intended, it highlights another important aspect of data preparation. A report produced by the data preparation process should contain a section that identifies the Field Type (numeric or string) of every field, for review by the model builder. As a general rule, machine learning algorithms expect numbers or strings — algorithms do not inherently ‘know’ about currency, so values which represent monetary values and contain a Currency Symbol should actually be represented simply as integer or real numbers of the proper magnitude, without any associated Currency Symbol.
While these fundamental attributes of data files are often taken for granted, every data preparation process, whether internally developed or implemented as a component of a machine learning framework, should routinely produce a report that explicitly identifies the special characters which effectively define the organization of data in the final prepared data file. Having this information at hand can help more quickly resolve issues when unexpected data preparation problems arise due to low-level data format issues.
Fields and Records
For ease of human identification and communication, fields are naturally referred to by their ordinal number, consecutively from 1 to the total number of fields in a record. Normally, field numbers do not explicitly appear in a raw data file unless they happen to have been chosen as field names. The creator of a raw data file specifies the names, if any, for fields. Field names must be separated by the same Field Delimiter character that also separates data values, and by extension, cannot contain the Field Delimiter character.
The following image illustrates these concepts, using an excerpt from an Excel spreadsheet (the data is artificial data that does not represent real persons).
The un-numbered top row in the image, containing 1, 2, 3 … is an Excel artifact which indicates how fields are separated, and the corresponding field numbers (although the Field Delimiter character employed is not visible). This row would not be included when the spreadsheet is saved.
The second row in the image, identified by the number 1 (1 is an Excel artifact, not data), contains names for each field. When the spreadsheet is saved, this row would comprise the first line in the saved file (the Excel artifact 1 would not appear in the file).
The other rows in the image, identified by the Excel artifact numbers 2, 3, and 4, contain values for each field in sequential data records.
The creator of a raw data file specifies the Field Delimiter character, either through software, or if the file is created from a spreadsheet, when the spreadsheet is saved.
To avoid issues when raw data files are transferred between computer systems and potentially used in different ‘Locales’, we suggest always saving data as ‘Tab’ delimited text files with a .txt file extension. When the raw data file is created the operating system will usually determine the Line Termination Style, unless the file is created by custom software that explicitly sets the Line Termination Style. Also, be aware of the Decimal Separator character used when a raw data file is created if the file may be transferred to a computer system which uses a different Locale. In particular, transferring data files between European and US computer systems can cause aggravating and sometimes hard to identify problems due to the use of different Decimal Separators.
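One simple way to guard against the Decimal Separator mismatch mentioned above is to normalize values explicitly before conversion. This sketch assumes the file's separator is known; it is not a substitute for setting the correct Locale:

```python
# Normalize a value read from a file created in a different Locale
# (for example, a European-style decimal comma) before numeric conversion.
def to_float(value, decimal_separator=","):
    """Convert a string to float, translating a known Decimal Separator."""
    if decimal_separator != ".":
        value = value.replace(decimal_separator, ".")
    return float(value)

print(to_float("3,14"))        # 3.14 -- European decimal comma
print(to_float("2.71", "."))   # 2.71 -- US decimal point, passed through
```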
Date and Time Fields
Raw data files often have fields which contain date and/or time values, either as Strings (meaning that the values contain non-numeric separators such as ‘-’ , ‘/’, or ‘:’) or numbers (for example, an integer value representing a time increment such as milliseconds or seconds from a fixed point in time). Either format will almost certainly not be interpretable by a machine learning algorithm unless some transformation is applied.
While identifying specific transformation methodologies is beyond our scope, in part because the transformations typically result in adding fields to records in the original raw data file, we will note that it is important to first identify what underlying information is expected to have utility from date/time values.
Eliciting cyclical or seasonal information generally requires transforming date/time values to yield continuous values which periodically repeat, corresponding to the ‘natural’ cycle of the other data values.
Eliciting information that reflects particular points in time (for example, week-day versus week-end behaviors or activities) generally requires a transformation that yields categories corresponding to days of the week.
Eliciting information that reflects sporadic activity (for example, the influence of holidays or special events) is often best captured by a Boolean transformation (a field such as ‘isHoliday’, with a value of ‘true’ or ‘false’).
Eliciting information that reflects the duration of activities requires first converting date/time fields to ‘pure’ numeric values, and then computing the difference between consecutive values, in whatever units (minutes, hours, days, etc.) are appropriate for the particular domain.
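The four cases above can be illustrated with simple transformations; the field names and the holiday list are hypothetical:

```python
import math
from datetime import datetime

HOLIDAYS = {(12, 25), (1, 1)}  # assumed holiday list for illustration

def transform(ts: datetime):
    day_of_year = ts.timetuple().tm_yday
    return {
        # cyclical/seasonal: continuous values that repeat yearly
        "season_sin": math.sin(2 * math.pi * day_of_year / 365.25),
        "season_cos": math.cos(2 * math.pi * day_of_year / 365.25),
        # point-in-time category: day of week (0 = Monday)
        "day_of_week": ts.weekday(),
        # sporadic activity: Boolean holiday flag
        "isHoliday": (ts.month, ts.day) in HOLIDAYS,
    }

features = transform(datetime(2023, 12, 25, 9, 30))
print(features["day_of_week"], features["isHoliday"])  # 0 True

# duration: difference between consecutive timestamps, in hours
t1, t2 = datetime(2023, 12, 25, 9, 30), datetime(2023, 12, 25, 14, 30)
print((t2 - t1).total_seconds() / 3600)  # 5.0
```

The sine/cosine pair is one common way to make a yearly cycle continuous at the year boundary (December 31 and January 1 map to nearby values rather than opposite ends of a range).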
Ignored Fields
Based on human domain knowledge, in many situations the original raw data file includes fields which would not be useful for a machine learning algorithm.
For example, in the spreadsheet referenced previously, experience shows that the values of fields which contain names or addresses will have no causal relationship to the target value in a training data file — to the algorithm the values would be interpreted as many random, almost always unique, strings.
In the simplest case, ‘ignoring’ such fields means removing them from the raw data file; the resulting initial portion of a data record presented to an algorithm during training would then be as illustrated below.
Note that now ZIP is field 1, and that in general, when fields are ignored the relative positions of non-ignored fields would remain the same as in the original raw data file. More complex cases (ignoring non-contiguous fields in the interior of records) follow the same basic principle but require additional care to ensure that the order fields are presented to the model during training is maintained when the trained model is subsequently run.
The key point with respect to ignored fields is that during data preparation a model builder may and often does decide to ignore certain fields in a raw data file; if so, ignored fields must then be accounted for in subsequent modeling steps.
When fields in a raw data file are ignored (removed) and the file is used in training, it is also critical that the organization of training data records be maintained in downstream processing. In particular, when a model trained using revised records (with ignored fields removed) is deployed and subsequently processes new data, if the source of new data by default produces records in the form of the original raw data file, those records must be modified (fields ignored in training data must be removed from new data) before the model can correctly process new data in a record.
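The requirement that training and deployment apply identical field removal can be enforced by sharing one small routine between the two code paths. The indices and record values here are hypothetical:

```python
# The same ignored-field indices must be applied to training records and to
# new records at deployment, so field positions seen by the model match.
IGNORED_FIELDS = {0, 1}  # e.g. name and address fields (0-based indices)

def drop_ignored(record):
    """Remove ignored fields, preserving the relative order of the rest."""
    return [v for i, v in enumerate(record) if i not in IGNORED_FIELDS]

training_record = ["Jane Doe", "12 Main St", "33101", "42", "7.5"]
new_record = ["John Roe", "9 Oak Ave", "33102", "17", "3.2"]
print(drop_ignored(training_record))  # ['33101', '42', '7.5']
print(drop_ignored(new_record))       # ['33102', '17', '3.2']
```

Defining the ignored set once, in one place, is a simple way to prevent the training/deployment mismatch described above.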
In addition, as discussed further below, the data preparation process itself may identify other fields that would not be useful in training a machine learning algorithm; whether to present such fields to the algorithm during training is a separate decision.
Minimal Data Preparation
To recap progress to this point, the fundamental attributes that define fields and records are assumed to have been confirmed, and any fields that the model builder decided to ignore are assumed to have been identified.
To create a prepared data file, at a bare minimum a data preparation process then should iteratively:
Read a record from the raw data file (a record is identified when a Line Termination Style character sequence is detected);
Ignore but count empty records (records which consist only of the Line Termination Style character sequence);
Ignore but count records that start with a Comment Character (if specified);
Confirm that the record has the expected number of fields (count records which do not);
Check each non-ignored field to confirm that it contains a value (count occurrences of missing values);
Check each non-ignored field value to confirm that expected numeric values are numbers (count occurrences when values expected to be numbers are not);³
If all field count and field value confirmations succeed, write the revised record to the clean prepared file (excluding any ignored fields when writing the record), and count the good record.
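The minimal loop above can be sketched as follows. The delimiter, comment character, field count, and numeric-field indices are assumptions for illustration, and ignored-field removal is omitted for brevity:

```python
from collections import Counter

def prepare(lines, n_fields=3, comment_char="#", numeric_fields=(0, 1)):
    """Minimal data preparation: filter bad records, count every outcome."""
    counts = Counter()
    clean = []
    for line in lines:
        record = line.rstrip("\r\n")
        if record == "":
            counts["empty"] += 1          # empty record
            continue
        if record.startswith(comment_char):
            counts["comment"] += 1        # comment line
            continue
        fields = record.split("\t")
        if len(fields) != n_fields:
            counts["bad_field_count"] += 1
            continue
        if any(f == "" for f in fields):
            counts["missing_value"] += 1
            continue
        try:
            for i in numeric_fields:
                float(fields[i])          # confirm expected numbers parse
        except ValueError:
            counts["non_numeric"] += 1
            continue
        counts["good"] += 1
        clean.append(fields)              # would be written to the clean file
    return clean, counts

lines = ["# header comment\n", "1\t2.5\tred\n", "x\t2.5\tred\n", "\n", "1\t\tred\n"]
clean, counts = prepare(lines)
print(counts["good"], counts["comment"], counts["non_numeric"])  # 1 1 1
```

The `counts` dictionary is exactly the kind of summary information a data preparation report should surface, as discussed in the next section.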
This minimal processing would yield a clean prepared data file that may be adequate if the original raw data file was relatively clean. However, should errors occur when the prepared data file is used for training models, there would be little information to help resolve issues, nor would there be summary information, including statistics, about values in the clean file that could help identify reasons for sub-optimal algorithm performance after training concludes.
These shortcomings will be addressed in the next section.
Robust Data Preparation
In addition to the minimal results identified in the preceding section, robust data preparation includes:
generating detailed statistics for all non-empty, clean values for each fundamental Field Type for all fields (minimum, mean, maximum, and standard deviation for continuous-value numeric values; discrete class counts for discrete numeric or string values);
providing automated analysis of clean field values in order to identify fields (in addition to those identified by the modeler) that would not be useful for training a machine learning algorithm; and
attempting to automatically create an auxiliary clean data file in the event that the original raw data file contains too few complete clean data records to adequately train a machine learning algorithm (as discussed further below).
While producing this additional information requires multiple passes through a raw data file, unless the original file is known with certainty in advance to be clean, the benefits quickly become apparent and outweigh the minor increase in processing cost when the subject raw data file is ‘messy’. Statistics provide insights into the volatility of field values; identifying min-max ranges can guide developing strategies for dealing with outliers.
In addition, generating detailed information about field values highlights yet another issue — related to the representation of numeric values — which should be addressed through Robust Data Preparation. Often integer codes are used to differentiate between elements of classes or categories (for example, 1 means single, 2 means married, etc.). While this approach is straightforward for humans to interpret, integer code assignments can cause issues when interpreted by machine learning algorithms. Rather than ‘recognizing’ that a set of integers should be treated as discrete classes or categories, an algorithm may incorrectly treat the set as continuous numbers. This can dramatically affect whether or not the algorithm can discover whatever analytic utility the subject field might have.
This issue is addressed in neuralstudio.ai® (refer to the Resources section) by automatically applying the same heuristics during data preparation that are used by the core neuralstudio.ai machine learning engine to structure data before using it to train neural networks. Put simply, the heuristics generate histograms of values, and if a field has fewer unique integer values than some threshold⁴, the field values are treated as discrete categories rather than (relatively) continuous numbers.
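A simplified version of that heuristic can be sketched as follows. The threshold value is an assumption for illustration, not the one used by neuralstudio.ai:

```python
DISCRETE_THRESHOLD = 20  # assumed cutoff on unique values

def field_kind(values):
    """Classify an all-integer field as discrete or continuous by unique count."""
    unique = set(values)
    if len(unique) < DISCRETE_THRESHOLD:
        return "discrete"     # treat integer codes as categories
    return "continuous"       # treat values as (relatively) continuous numbers

marital_codes = [1, 2, 2, 3, 1, 2]   # few unique codes -> categories
ages = list(range(18, 90))           # many unique values -> continuous
print(field_kind(marital_codes), field_kind(ages))  # discrete continuous
```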
We now review two possible ‘messy’ data scenarios — either the original raw data file yields some minimum⁵ number of clean complete records, such that the resulting prepared data file can be directly employed to train models, or the original raw data file does not and some additional principled heuristics must be applied in order to obtain the required minimum number of training records.
Scenario 1 — Data Preparation Yields Sufficient Clean Records
In this scenario, the report produced by a robust data preparation process provides information that both confirms (presumably) the model builder’s general expectations about data values as well as provides a rationale for undertaking future data collection efforts should model performance not be satisfactory.
Summary Information
The following table indicates basic summary counts that offer an ‘at-a-glance’ overview of the results of processing and errors detected. Particularly when a model builder is working with data supplied by another party, these summary counts comprise ‘sanity checks’ — if the number of total data records or the number of clean data records differ significantly from what were expected, the discrepancies should be resolved before continuing with training models.
Basic Data Preparation Summary Counts
In the next table, summary counts for IGNORED fields are provided. Note that while this table reflects the analysis of field values that neuralstudio.ai® (refer to the Resources section) data preparation performs (one additional field was flagged ‘IGNORE’ as a result of too many unique string values), in general any robust data preparation process should report similar information.
Summary IGNORED Field Information
Field Detail Information
Summary field information should be supported by detailed information on a per-field basis, as illustrated below. This information allows the model builder to confirm that the Field Type identified by the data preparation process for each field is consistent with what is expected; in addition, it explicitly identifies fields that are not useful for modeling. Any anomalies or questions that result can be addressed before models are trained (or run) using the data.
Detailed Field Status Information
Field detail information should also include statistics for continuous-value and discrete-value fields, as illustrated in the following tables.
Continuous-Value Field Statistics
Statistics for continuous-value fields will identify potentially volatile fields that may adversely impact model stability. For example, if two different models demonstrate comparable performance on training data, and Sensitivity Analysis (beyond our scope here) suggests that one model is influenced more by a field with a larger standard deviation than another model, other factors being equal the alternate model is the better choice for deployment.
Discrete-Value Field Statistics
Statistics for discrete-value fields will identify fields in which the discrete-value counts are skewed. This may be remedied by over-sampling/under-sampling as appropriate for skewed class counts, or it may suggest effort is needed to acquire additional training data containing more of the under-represented classes. The issue to resolve is that, depending on the size of the prepared data file relative to individual discrete value counts, under-represented classes may in effect be ignored when a machine learning algorithm is trained, which usually is not intended or desired.
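A basic skew check of the kind described above can be sketched as follows; the minimum-fraction threshold and the labels are assumptions for illustration:

```python
from collections import Counter

def under_represented(labels, min_fraction=0.05):
    """Return discrete classes whose share of records falls below a threshold."""
    counts = Counter(labels)
    total = len(labels)
    return [c for c, n in counts.items() if n / total < min_fraction]

labels = ["A"] * 95 + ["B"] * 4 + ["C"] * 1
print(under_represented(labels))  # ['B', 'C']
```

Flagged classes are candidates for over-sampling, or for targeted collection of additional training data.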
Scenario 2 — Data Preparation Yields Insufficient Clean Records
In the event there are insufficient complete clean records in the original raw data file, some of the summary information described previously will not be available. However, a robust data preparation process should provide as much information as possible so that the model builder has guidance for obtaining additional data.
The following table illustrates important error summary information (note that some columns are not shown due to space constraints).
Summary Error Information
The Raw Lines Processed value (83003) indicates the total number of lines in the raw data file. Generally, the number of potential data records is 1 less (83002 in this file), since the first line in a file usually contains field names.
The Total Fields value (272) indicates the total number of fields in records in the raw data file.
The Field Errors per Record line indicates unique counts of errors in records, in ascending order. The Records line indicates the number of records which have the corresponding number of errors. In this example,
204 field errors occurred in 3135 records;
205 field errors occurred in 8343 records;
. . .
222 field errors occurred in 1045 records; and so forth (error counts between 207 and 222, and after 224, are not shown).
This information suggests the level of effort that would be required to make the raw data file usable. Every record had at least 204 errors; if those records could be corrected (and if the errors overlapped errors in other records, which is often not the case) the resulting file would contain 3135 records and could possibly be used, at least for proof-of-concept models.
The Field Number and Field Name lines indicate, respectively, the number and name of each field in the data file; the Missing Values line indicates the number of records in which the corresponding field value was missing (in other words, there was no value for the field). Fields 20 and 21 in the above image (and additional fields not shown) were missing data in every record. Clearly for this particular file, missing data is the problem.
Whenever the number of missing values for a field approaches the number of data records in a file, unless there is a clearly defined way to obtain missing values, we recommend completely eliminating (removing) such fields from the raw data file before restarting the data preparation process. If for some reason the fields cannot be removed, they should be designated as Ignored by the model builder when the data preparation process is re-run.
Information about Ignored fields, as illustrated in the following table, is also important when the data preparation process does not yield a sufficient number of clean records — particularly if the process performs additional analysis of field values (as is the case with neuralstudio.ai).
IGNORED Field Information (All Fields not Shown)
The above image further substantiates that missing data is the primary problem in this file. Out of 272 total fields, 221 had too many missing values.
In general, fields ignored due to almost constant values (Code 4) should be reviewed. It is possible that many missing values in such fields masked the fact that multiple valid values occurred in the field, but the distribution was so skewed that initial data preparation processing ignored the values that occurred infrequently. Fixing missing values in these fields might result in some of the fields in fact being usable and relevant.
Fields with too many unique strings (Code 9) are most likely ignored appropriately, but domain knowledge would confirm that.
Fields ignored by user (Code 13) are not a concern — they were explicitly ignored by the model builder.
Model builder domain knowledge would determine whether to attempt to fix problems with the data in the fields ignored due to Analysis, or to accept the neuralstudio.ai assessment and ignore the fields in follow-up data preparation processing (the information here would indicate to the model builder which fields to designate ‘Ignore’ in follow-up processing).
Obtaining Sufficient Clean Data from a Messy Raw Data File
Fixing bad data values in a raw data file can be a time-consuming and laborious task; an automated approach can be extremely helpful. The following paragraphs describe the methodology used by neuralstudio.ai.
If, after the initial pass through a raw data file, neuralstudio.ai determines that there are insufficient clean data records, it will automatically attempt to create a new data file derived from the original raw data file. Replacement values will be generated for all missing values in approximately half of the fields in the original raw data file that were not flagged to ignore (either by the model builder, or during initial neuralstudio.ai processing).
The replacement strategy is a principled but simple automated process which transforms into a usable form a file that otherwise would not be suitable for use with machine learning algorithms.
For fields that contain continuous numeric values, the replacement for any missing value for a particular field is the average of all non-empty values for the field in the original raw data file.
For fields that contain discrete (numeric or alphanumeric) values, the replacement value for a particular field is the mode of all non-empty discrete values in the original raw data file.
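The two replacement rules can be sketched directly, using the empty string to mark a missing value:

```python
from statistics import mean, mode

def impute_continuous(values):
    """Replace missing values with the mean of all non-empty values."""
    replacement = mean(float(v) for v in values if v != "")
    return [float(v) if v != "" else replacement for v in values]

def impute_discrete(values):
    """Replace missing values with the mode of all non-empty values."""
    replacement = mode([v for v in values if v != ""])
    return [v if v != "" else replacement for v in values]

print(impute_continuous(["1.0", "", "3.0"]))        # [1.0, 2.0, 3.0]
print(impute_discrete(["red", "red", "", "blue"]))  # ['red', 'red', 'red', 'blue']
```

This is a sketch of the general strategy only; the statistics in the tables below are what allow confirming the replacements did not materially distort the distributions.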
To assist in assessing the quality of the original raw data file, neuralstudio.ai always generates statistics based on non-empty values of non-ignored fields. In addition, if neuralstudio.ai determines that a new file can be created, it generates statistics for non-ignored fields in the new file.
These statistics offer quick ‘sanity checks’ for raw data file quality, and if a new file is created, also allow confirming that distributions of values in the new file are not radically different from the original raw data file. The tables below contain statistics for a very messy original raw data file, and the corresponding newly created file. The right-most column in each table indicates the number of field values in original raw file records that were changed as corresponding records were added to the new file.
Continuous-Value Field Statistics and Replacements (All Fields not Shown)
Continuous field statistics comprise the minimum, mean, maximum, and standard deviation for all non-empty continuous-value fields. In the image above, no Field_19 values were changed (all records in the original raw data file had valid values for Field_19). For Field_38, 69 values were changed in records in the new file, and 6895 Field_172 values were changed. A ‘changed’ value means that an empty (missing) value for a field in the raw file was replaced with the average of all non-empty values for the particular field. For example, 69 empty Field_38 values were replaced by ‘0.02’, the mean Field_38 value, to make 69 complete records which were placed in the new file.
Discrete-Value Class Counts and Replacements (All Fields not Shown)
Discrete field class counts consist of counts of the occurrence of each non-empty discrete (numeric or alphanumeric) value in a field (a ‘value’ is a discrete class label). In the image above, Field_132 contained 10 unique class labels (which happened to be numbers); one value was changed in the new file. Field_135 comprised 17 unique class labels (which also happened to be numbers); 35 values were changed in records in the new file. A ‘changed’ value means that an empty (missing) value for a field in the raw file was replaced by the class label representing the mode of all non-empty values for the particular field. For example, one empty Field_132 value was replaced with the label ‘0’ to make a complete record which was placed in the new file.
Unless there are large discrepancies between statistics for fields in the original raw data file and statistics for fields in the new file, and counts of class labels in the raw file and counts of class labels in the new file, the new file can be used to train machine learning algorithms, while recognizing that the training data is not entirely representative of real-world conditions.
Best Practices
Preparing a clean data file suitable for training machine learning algorithms requires paying attention to the details of how data is stored and how data values are represented. This task is separate from identifying the relevance of data to a particular problem domain (that is, identifying what data actually influences the training target of a machine learning algorithm).
For all but the simplest of machine learning projects, model development is an iterative process. As an important step in model development, data preparation can also require iteration. Proper execution of a robust data preparation strategy can eliminate unnecessary data preparation iterations. Without question, process automation is important; at the same time, human domain knowledge is required to resolve ambiguities in the data as well as to ultimately decide what data is important and what data can be ignored.
Remember — data preparation begins at the source!
If data is acquired directly from sensors, ensure that hardware is properly maintained — sensors, communication equipment, connectors and connections. Eliminating spurious values at the source is often the most cost-effective strategy.
If data is generated by information from forms or documents completed by humans (which are then either transcribed by humans or optical character recognition software, or which serve directly or indirectly as a front-end for a database), build sanity checks into software that collects/aggregates the data. Fully test one-off custom software with the same rigor that is applied to full applications (yes this can be onerous — another reason for using standardized processes). Do not simply assume custom software is performing as expected. If at all possible, do not permit empty values in data; if empty values can occur, define a consistent method for identifying and handling them before the empty values make it into back-end storage.
Standardize, preferably at the organization level but definitely at the machine learning project level, the structure of raw data files. Software that generates training data files from database queries should use the same fundamental attributes (Field Delimiter, Decimal Separator, Line Termination Style) as comparable training data files saved from a spreadsheet for the same project.
If training data is aggregated by extracting data from one or more databases into an intermediate file before presentation to a machine learning algorithm, and the intermediate file is very large, during the extraction process duplicate the first several thousand records (for example, the first 1000 to 10000) and place them in a separate tab-delimited file. Then as a sanity check, open the small separate file in a spreadsheet and confirm that the basic line and field organization is correct.
While opening the separate file in a spreadsheet is a quick ‘user friendly’ way to identify obvious data issues, spreadsheets can also hide structural anomalies that may cause issues when the file is used by a machine learning algorithm. As a second sanity check, open the separate file in a plain text editor capable of visually rendering non-printing characters like ‘Tab’ and ‘Linefeed’, and confirm that non-printing characters are not inappropriately duplicated — particularly at the ends of lines (each pair of duplicate consecutive Field Delimiter characters indicates that a value is missing between the Field Delimiter characters).
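The delimiter check just described is easy to automate as well; this sketch counts the empty fields that consecutive Tab characters imply:

```python
def count_missing(line, delimiter="\t"):
    """Count empty fields in a line; consecutive delimiters produce them."""
    fields = line.rstrip("\r\n").split(delimiter)
    return sum(1 for f in fields if f == "")

print(count_missing("a\t\tb\t\n"))  # 2: one interior gap, one trailing gap
print(count_missing("a\tb\tc\n"))   # 0: no missing values
```

Running such a check over the small sample file catches structural anomalies that a spreadsheet view may silently hide.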
Do not use numbers (very large, negative, zero, etc.) to flag missing or bad data, unless it is certain that data preparation processing will guarantee that the machine learning algorithm will ignore (that is, never ‘see’) the chosen flag value. Otherwise, if the algorithm treats the missing data flag as a number, whatever value was chosen will adversely impact any statistics calculations related to the corresponding numeric field.
Consider how the machine learning algorithm will be deployed when preparing data for training it. If the algorithm will be deployed in a batch mode (processing new data in a file structured exactly like the training data file), the data preparation process used for training data would apply without changes. If the algorithm will be deployed for use in an online (real or near-real time) environment, it may be necessary to not only prepare data, but to also invoke additional validation mechanisms (see the introduction) before presenting data to the algorithm. In turn, this may entail developing custom software to implement validation strategies.
Finally, as we have attempted to make abundantly clear, every detail about the structure of data, and every step in the data preparation process, should be logged or included in a report. When data is clean, such detailed information may seem unnecessary or superfluous. But if the information is produced routinely in a standardized process, when the process is applied to data that is not clean, the details will make resolving issues much easier.
Of course, that assumes the information which is generated is read!
Resources
The information supplied in tables which highlight or explain the data preparation concepts we have introduced was automatically generated by the Basic Data Preparation component of neuralstudio.ai. neuralstudio.ai is a neural network-based automated machine learning platform that opens the development and deployment of optimized neural network solutions to individuals and organizations that may not have extensive knowledge of or experience with neural networks and machine learning technologies. neuralstudio.ai is available around the world through the Microsoft Azure cloud.
neuralstudio.ai is designed from the ground up to support the entire life cycle of neural network development — training, evaluation, deployment, and performance monitoring. In addition to Basic Data Preparation, neuralstudio.ai offers
Enhanced Data Preparation, which extends the concepts introduced in Robust Data Preparation by producing individual neural network models to generate replacement values for every faulty field in a data record;
One-off neural network training for relatively quick proof-of-principle models;
Optimized ensembles of neural networks, where first individual network hyper-parameters are optimized by a genetic algorithm, and then ensembles of individual networks are further optimized by genetic algorithm to identify and rank the best performing ensembles;
Comprehensive deployment facilities ranging from execution of models in the cloud, to on-premises execution in Excel spreadsheets or custom enterprise applications, to execution in embedded systems using the NeuralWorks® runtime engine.
To learn more, we invite you to visit neuralstudio.ai and sign up for a free Guest account. If you have questions about data preparation in general, or the capabilities of neuralstudio.ai, you can reach us at [email protected].
We look forward to helping you along your machine learning path.
Footnotes
[1] At a character level, Windows uses a <carriage-return><linefeed> character sequence to terminate lines, while MacOS/Linux/Unix use only a <linefeed> character.
[2] Acknowledging that numbering systems other than the decimal system, such as the hexadecimal system, use additional characters.
[3] Two passes through a raw data file are required in order to determine which fields should be considered numeric fields.
[4] neuralstudio.ai implements a default threshold of 64 unique values.
[5] What constitutes a ‘minimum number’ is highly dependent on the specific problem domain; however, fewer than 50 records would arguably be insufficient in any real-world problem domain and is the minimum required by neuralstudio.ai.
Acknowledgements
The Data Preparation Maze image was created by Gerd Altmann.
We thank Alain Fuser, CEO of NEHOOV, Eleanor Hanna, Data Scientist at Valassis, and Jamal Clarke, Computer Scientist at NeuralStudio, for instructive comments.
|
https://medium.com/swlh/data-preparation-for-machine-learning-3e9c69596dd3
|
['Jack Copper']
|
2020-05-15 18:59:16.691000+00:00
|
['Data Preparation', 'Machine Learning', 'Data Science']
|
Making Recruitment Success Predictable
|
Making Recruitment Success Predictable
What makes recruitment predictable?
Interviews Generated: Getting candidates in front of clients is what everything, from business development calls to candidate meetings to CVs submitted, is ultimately about. All that matters is generating a certain number of interviews every week, month, and quarter.
Interview to Offer Ratio: This is the number of interviews that need to happen to create one offer, expressed as a ratio, for example 1:10.
This ratio makes you successful because you understand just how many of the people you send for interview end up with an offer in hand and thus have a chance of taking the job.
Offer to Passing Probation: This one is fundamental to building long-term relationships with clients and indeed candidates. It makes you successful because it outlines just how many candidates you get to offer who stay at the company long enough for you to keep your job.
Using these Ratios to predict success
So how do you use these ratios and KPIs to run your recruitment business or desk successfully?
Firstly, decide how many placements you want to make a month. This could be determined by a financial target or just a simple number like three placements. For our example, we are aiming for 2 placements.
Secondly, you need to understand your ratio of offers made to candidates passing probation. The best way to figure this out is to do some digging over the last 18 months: find out how many candidates passed probation and how many offers you generated, then divide offers by candidates passing probation. For our example, we have a ratio of 3:1 (three offers for one placement passing probation).
Thirdly, you need the ratio of interviewed candidates to offers made. For this, delve into your stats, find the number of interviews generated over the last 18 months, and divide it by the number of offers. In our example, we have a ratio of 10:1 (ten candidates interviewed for one offer).
Lastly, bring it all together. Take your goal, multiply it by the offer ratio, and then multiply by the interview ratio to get the total number of interviews needed.
In our example:
Goal: 2 placements per month
Offer to Placement Passing Probation: 3:1
Interview to Offer: 10:1
To generate 2 placements requires 2 x 3 x 10 = 60 interviews per month, or 15 per week, or 3 per day.
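The funnel arithmetic above can be sketched in a few lines, assuming a 4-week month and a 5-day working week.

```python
# Placements goal -> offers needed -> interviews needed.
def interviews_needed(placements, offers_per_placement, interviews_per_offer):
    return placements * offers_per_placement * interviews_per_offer

total = interviews_needed(2, 3, 10)   # 60 interviews per month
per_week = total / 4                  # 15 per week
per_day = per_week / 5                # 3 per working day
```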
Moving forward
Use the method above to focus your goal for each day. That way if you produce three interviews per day (as in our example) you will end up each and every month hitting your placement target. As ever we would love to hear what you have to say.
***
The Naked Recruit is funded through our affiliate link program with Acuity Scheduling. It is a tool that has saved me dozens of hours. Check out Acuity
***
Thank you for reading this Recruitment Hack. You can get a daily Recruitment Hack sent to your inbox by visiting Recruitment Hacks.
The Book: Recruitment Hacks is now available on Amazon.
***
|
https://medium.com/@joedavehenry/why-do-these-three-make-success-predictable-9ec121786049
|
['Home Working Henry']
|
2020-12-26 23:42:22.194000+00:00
|
['Recruitment']
|
Dream India Technologies is the best way to learn Spoken English
|
English is considered the universal language and common language for various methods of oral communication in businesses worldwide. English is easy to learn like any other non-native language but learning English for career purposes requires proper practice and time. The easiest way to learn any non-native language is to attempt to expose yourself to the real environment.
The Spoken English Institute in Guntur held at Dream India Technologies helps learners speak with confidence by exposing them to expert instructors with more than a decade of experience in teaching English.
Points to focus on
To improve your communication in English you need to focus on listening, reading, writing and speaking. Dream India Technologies, which offers the Spoken English Institute in Guntur, follows a curriculum designed by subject matter experts catering to the needs of the business world.
Listen More
Hearing plays an essential role in learning English. Active listening helps to understand the tone and pronunciation of English and also helps to enrich your conversation. You can improve your English knowledge by listening to TV shows, movies, debates in English, and radio and music.
Reading a lot
Reading English can help you improve your pronunciation and understand the context of word usage in sentences. The practice of reading English newspapers regularly expands a person’s vocabulary. Finally, you can speak fluently by using new words you have learned in regular conversations.
Practice writing
Generally, by writing stories and blogs you can improve your knowledge in the English language. You can start by writing a paragraph or an article on any topic and review yourself using dictionaries. This helps you to fix your mistakes by yourself.
Speak Fearless
Speaking is a challenging part of learning any language. You can try to communicate with others in English or record your speech later to listen and correct yourself. Join Spoken English Institute in Guntur to gain confidence in your English language skills.
Mirror exercise
You can boost your confidence and gain fluency by speaking in English to yourself in front of a mirror. You can memorize a few lines in English from any book or magazine and make repeated corrections using those lines.
The best way to strengthen your spoken English skills is to practice as much as possible and explore more and more. Spoken English Institute in Guntur organized by Dream India Technologies helps learners acquire English skills which are in high demand by many businesses. Our coaches bring out the best in you and help you excel in your communication skills and succeed in your career.
|
https://medium.com/@dreamindiatechno/dream-india-technologies-is-the-best-way-to-learn-spoken-english-6db402ec1cde
|
['Dream India Spoken English']
|
2019-11-14 10:12:16.911000+00:00
|
['English Learning', 'English', 'Spoken Word', 'Language Learning', 'English Language']
|
>⚽+LIVE|| Premier League 2020!! “Leicester City vs Manchester United” FULL-Match
|
WATCH Live Streaming (Leicester City vs Manchester United) Full HD [ULTRA ᴴᴰ1080p]
| Live Stream
Live sport streams free all around the world. Visit here to get up-to-the-minute sports news coverage, scores…
t.co
Watch Live Streaming : “Leicester City vs Manchester United” live stream In HD
VISIT HERE >> https://t.co/6Hi48cXEQi?amp=1
●LINE UP : Leicester City vs Manchester United, live
●Date : 7:30 PM,December 26, 2020
●VENUE: King Power Stadium
Facebook Live, LinkedIn Live, YouTube Live, Periscope, Instagram Live Over the past several years, major social media platforms democratized and commodified live streaming, with YouTube Live launching way back in 2011, and Facebook, Twitter, Instagram, and others (R.I.P. Meerkat) following suit. Most recently, LinkedIn launched live streaming on its platform, too, so businesses and professionals can reach their network in new and engaging ways.
These free platforms are great for brands and businesses looking to dip a toe in the live streaming pond, but they are not viable solutions for long-term scale and growth of a video strategy. Why? While ease of use is a major draw, for sure, none offer onboarding or customer support. If your team hits a snag with an event, you’re left to your own devices to problem-solve in real-time.
What’s more, streaming is only possible on a platform-by-platform basis. This means if you want to stream to Facebook and Twitter at the same time, you’ll need two cameras to live stream from each device — creating twice as much work (or more) and a less-than-ideal experience for the on-screen talent and viewers alike.
Nobody is ever thrilled to pay nearly $11K for a golfer they probably hadn’t heard of until a handful of weeks ago, but both Zalatoris, along with Sam Burns will be at the top of my list this week on DraftKings.
Zalatoris gets the slight edge with his outstanding play in nearly every metric that’s not near the green. He ranks first amongst players in the field in SG: total and fourth in SG: approach. We also joke about getting the putter hot for a weekend, and if Zalatoris can improve on his SG: putting which ranks 129th in the field, he should absolutely contend yet again.
He’s finished no worse than T19 in his last five events, including three top six finishes. His T6 result at Winged Foot opened up plenty of eyeballs, so he’ll be a popular play, which makes him all the more worthwhile in cash at the top.
While I was impressed with the way Will Zalatoris played last week at the U.S. Open, I’m more inclined this week to go with the more experienced Tour pro. Will will (that’s funny “Will will”) be a fixture on the PGA Tour — if not this year than next. He is one of the top players on the Korn Ferry Tour now. However, Corey Connors made the Playoffs and has been playing well last season (this is so weird calling it last season already). I’ll have both in a lineup this week, but my gut has me leaning with the Canadian.
|
https://medium.com/@katsumur/live-premier-league-2020-leicester-city-vs-manchester-united-full-match-a147a66a87d6
|
[]
|
2020-12-26 01:50:27.476000+00:00
|
['Social Media', 'Soccer']
|
Case study: our journey of self-exploration — UnlockFM rebranding
|
Let me share with you a story of how we rebranded the UnlockFM podcast. While it might look effortlessly simple on the surface, what hides behind is the immense level of deep thinking, painful decisions, and never-ending iterations that our team had endured. This story is not about the outcome. This story is about a roller coaster ride 🎢 of how we got there.
Most importantly, it’s about our team’s journey of self-exploration, optimism, and drive to build something that personally spoke to us, in the hopes that it will do the same to you.
By sharing our struggles and bumps along the journey, I hope to convince you that you can also do it (if you care enough). Even when you think you can’t.
Here is how it began:
Last year, my friend, Quyen, started the UnlockFM podcast, which sheds light on hidden and inspiring Vietnamese talent. Each episode centers around speakers’ non-linear journeys of breaking out from traditional moulds to achieve their purpose. Listening to the podcast, I felt my urge to do something with my skills outside of the ‘9 to 5 work’ was growing stronger and stronger.
As I was scrolling social media, I stopped at the UnlockFM posts. My eyes started twitching due to the small text, inconsistent layout, and overused gradient colors. 🥴 Anybody has that, or is it just me?
|
https://bootcamp.uxdesign.cc/our-journey-of-self-exploration-unlockfm-rebranding-c0c762790965
|
['An Le']
|
2020-12-15 05:47:10.333000+00:00
|
['Product Design', 'Visual Design', 'Podcast', 'Branding', 'Self Exploration']
|
Unsupervised Learning: Hierarchical Clustering and DBSCAN
|
Unsupervised Learning: Hierarchical Clustering and DBSCAN
Source: Google Images
There are lots of methods in machine learning to group our data points for further analysis based on similarity. As data scientists or data analysts, we know this method as clustering. The most commonly used clustering algorithm is the K-means algorithm. It is popular because it is simple and powerful: it computes centroids and iterates until optimal centroids are found, grouping the data points into K groups.
Yet I am not going to discuss K-means here. I am going to explain other clustering algorithms: Hierarchical Clustering and DBSCAN. Some of you might already know these two algorithms, but for me they were something new and I became fascinated by them. Anyway, let’s start with the first one!
Hierarchical Clustering
This algorithm is also popular, like the K-means algorithm. In K-means clustering, we need to define the number of clusters K before doing the clustering, while in Hierarchical Clustering the algorithm itself suggests the potential number of clusters for us to decide on, using a tree-shaped diagram called a dendrogram.
What is Dendrogram?
A dendrogram is a diagram that shows the hierarchical relationship between objects. By using it, we can see the number of clusters that we are going to use in the next steps.
Source: Towards Data Science
Source: Google Images
We can see that there are two variables in the last diagram. The first is the data points themselves (sample index) and the second is distance, here the Euclidean distance. From the diagram we can see that overall there is one big cluster. However, if we cut the tree at a distance of 60, there would be three clusters (the green one, the red one containing points 3, 4, and 2, and the last red one). We should cut at some distance because if the distance threshold is too big, the data points within a cluster might not share any real similarity.
There are two of hierarchical clustering techniques:
1. Agglomerative Hierarchical clustering
It is a bottom-up approach: initially, each data point is considered a cluster of its own, and similar data points or clusters merge in successive iterations until one cluster (or K clusters) is formed.
Source: Geeks for Geeks
2. Divisive Hierarchical clustering (DIANA)
In contrast, DIANA is a top-down approach: it assigns all of the data points to a single cluster and then splits the cluster into the two least similar clusters, proceeding recursively on each cluster until there is one cluster for each observation.
Source: Geeks for Geeks
Implementation
Visualise the dendrogram of our data
#Check the dendrogram
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as sch

plt.figure(figsize = (12, 8))
dendrogram = sch.dendrogram(sch.linkage(X_pc, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Distance')
plt.show()
Dendrogram Diagram
2. Use Agglomerative Clustering to Cluster the data
3. Append the result of clustering in new dataset
Result:
Clustering Visualisation
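Steps 2 and 3 can be sketched with SciPy’s `fcluster`, which cuts the ward-linkage tree built for the dendrogram into a fixed number of clusters. Toy data stands in for `X_pc` here; the real notebook uses scikit-learn’s AgglomerativeClustering, so this is an equivalent SciPy-based sketch, not the original code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy stand-in for X_pc: two well-separated groups of points.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])

# Build the ward-linkage tree (same linkage used for the dendrogram),
# then cut it into two clusters; labels can be appended to the dataset.
Z = linkage(X, method='ward')
labels = fcluster(Z, t=2, criterion='maxclust')
```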
Why Hierarchical Clustering?
Not having to pre-define the number of clusters gives it quite an edge over algorithms such as K-means. Yet it does not work well when we have a huge amount of data.
DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
Now that we have talked about a partitioning method (K-means) and hierarchical clustering, we are going to talk about the density-based spatial clustering of applications with noise (DBSCAN) method. Most clustering algorithms look for spherical-shaped clusters and are severely affected by the presence of noise and outliers in the data. Thus, the result of clustering is sometimes not very good for real-life data, which contains lots of noise.
Source: Ryan Wingate
Here DBSCAN comes in to solve this issue. Clusters are dense regions in the data space, separated by regions of lower point density. DBSCAN is based on this intuitive notion of “clusters” and “noise”. The key idea is that for each point of a cluster, the neighbourhood of a given radius has to contain at least a minimum number of points.
DBSCAN Parameters
1. Eps
Eps is the maximum radius of the neighbourhood. It defines the neighbourhood around a data point: if the distance between two points is lower than or equal to ‘eps’, they are considered neighbours. One way to find the eps value is based on the k-distance graph.
Source: Google Images with some edits by the Author
If the eps value is too small, a large part of the data will be considered outliers. If it is very large, the clusters will merge and the majority of the data points will end up in the same cluster.
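A sketch of the computation behind the k-distance graph: for each point, the distance to its k-th nearest neighbour, sorted ascending; the “elbow” of this curve is a common eps choice. The helper name `k_distances` and the toy data are assumptions for illustration.

```python
import numpy as np

def k_distances(X, k):
    """For each point, the distance to its k-th nearest neighbour, sorted ascending."""
    # Full pairwise Euclidean distance matrix (fine for small toy data).
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    d.sort(axis=1)               # column 0 is each point's distance to itself (0)
    return np.sort(d[:, k])      # k-th nearest neighbour per point, ascending

X = np.array([[0.0, 0.0], [0.0, 1.0], [0.0, 2.0], [10.0, 0.0]])
curve = k_distances(X, 2)        # the isolated point at (10, 0) dominates the tail
```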
2. MinPts
MinPts is the minimum number of points in an Eps-neighbourhood (within the eps radius) of a point. The larger the dataset, the larger the value of MinPts that should be chosen. A rule of thumb is to derive minPts from the number of dimensions D in the data set: minPts >= D + 1. For 2D data, take minPts = 4.
Source: Geeks for Geeks
Core: a point that has at least m points within distance n of itself.
Border: a point that has at least one core point at a distance n.
Noise: a point that is neither a core nor a border point; it is essentially an outlier, with fewer than m points within distance n of itself.
These are the three types of data points in this algorithm.
Implementation
Fit the data to DBSCAN algorithm
#Use DBSCAN, -1 value means outliers
from sklearn.cluster import DBSCAN

dbscan = DBSCAN(eps = 10, min_samples = 5)
y_pc_db = dbscan.fit_predict(X_pc)
y_pc_db
Result:
array([-1, 0, -1, 0, -1, 0, -1, -1, -1, 0, -1, -1, -1, 0, -1, 0, -1,
0, -1, -1, -1, 0, -1, 0, -1, 0, -1, -1, -1, 0, -1, 0, -1, -1,
-1, 0, -1, 0, -1, 0, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, -1, 2, -1, 2, 1, 2, -1, 2, -1, 2, -1, 2, -1, 2,
-1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, 3, 2, 3,
2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, 3, 2, 3, -1,
3, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1, 2, -1,
-1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1])
You can notice that the result of clustering has value of -1. These values are the noise in the dataset.
2. Append the result of clustering in new dataset
Result:
3D Clustering Visualisation
Remember the -1 values from before: those blue points are the -1 values (the noise). Here we have four clusters. We can check whether that is true using the code below.
target = X_pc_db['cluster']
print(target.nunique()) # number of clusters
Result:
4
Why DBSCAN?
It can automatically detect the number of clusters based on your input data and parameters. DBSCAN can handle noise and outliers: all outliers are identified and marked without being classified into any cluster, so DBSCAN can also be used for anomaly detection (outlier detection). However, the algorithm still struggles with heavily cluttered datasets.
|
https://medium.com/analytics-vidhya/unsupervised-learning-hierarchical-clustering-and-dbscan-c38ffd8273d2
|
['Alifia C Harmadi']
|
2021-08-19 05:15:54.397000+00:00
|
['Clustering', 'Hierarchical Clustering', 'Customer Analytics', 'Unsupervised Learning', 'Dbscan']
|
PHP imagecolorallocate() & imagettftext() always generating black text
|
Today I encountered an issue with the PHP function imagettftext() while generating an image with some text. I was using the usual imagecolorallocate() function to generate a color identifier.
$txt_font_color = imagecolorallocate($image3, $component_config['font_color']['r'], $component_config['font_color']['g'], $component_config['font_color']['b']);
imagettftext( $image3, $txt_font_size, $angel, $x, $y, $txt_font_color, $txt_font, $line );
Surprisingly, it generated black text despite the color being set to pure white. I then tried different color codes, but the text was always black.
After some googling, I found out that the issue is fairly common. I was using a PNG with an 8-bit palette, and apparently the given text color rgb(255, 255, 255) was not in the palette of the source image. In such cases imagecolorallocate() returns false, because it fails to allocate the color.
The solution :
I used the imagecolorclosest() function instead of imagecolorallocate(), and it solved the problem.
$txt_font_color = imagecolorclosest($image3, $component_config['font_color']['r'], $component_config['font_color']['g'], $component_config['font_color']['b']);
I also read that creating the image with imagecreatetruecolor(), so that imagecolorallocate() is not constrained by a limited palette, is another solution.
Obviously there would be other solutions for this issue including changing the source image :) .
|
https://medium.com/@guirama/php-imagecolorallocate-imagettftext-always-generating-black-text-b359395b9e1
|
['Indunil Ramadasa']
|
2020-12-21 09:17:33.372000+00:00
|
['Imagettftext', 'PHP', 'Imagecolorallocate', 'Imagecolorclosest']
|
What Does 23% Body Fat Look Like?
|
Cold water immersion has reduced my body fat AND changed its composition from white fat to brown fat. Electronic body composition meters do not measure brown fat accurately. They said my body fat percentage was increasing, even though it was stable or going down.
What Does 23% Body Fat Look Like?
In my case, maybe it’s not what you expect.
I always knew I was a fat kid, who grew up to be a pudgy young man, who matured into an obese, middle-aged adult Dad. By mid-40’s, I was setting a bad example for my kids, my wife was unhappy, and I was embarrassed by the shape I was in.
So my daughter and I decided to do something about it. She wrote out a list of body weight, free weight, and yoga ball exercises for me, including little stick figures to demonstrate what each position looked like.
She called it The Fat Daddy Workout and she did it with me once a week.
Because my son has been Type 1 diabetic since he was six years old, I’d already learned a few things about the Atkins diet, low-carb diets, blood glucose levels, insulin, and ketosis. So I eliminated carbs, took some of my son’s keto strips, and started running my metabolism in and out of ketosis.
I dropped to 215lb, and people noticed.
To keep losing weight, I had to increase my exercise. So, I started experimenting with different classes at the gym.
Spin cycle was not for me. Yoga was better. Kickboxing was fun.
I finally settled into a co-ed weight training class that used a barbell, bench, and free weights. We did 60 minutes of lunges, cardio, and different presses to the thumpa-wumpa of a terrible gym class soundtrack. I made a friend and a couple of acquaintances.
I stuck with the fasting and I quit alcohol and I increased my caffeine intake about 3-fold.
I dropped to 185lb. My wife was buying my clothes at Goodwill. She’d take them back after a few weeks and buy me smaller clothes, because I was shrinking so fast. My waist went from 44 to 34 inches.
That was over three years ago and I’ve mostly kept the weight off, although the shirts I bought when I was 185lb don’t fit me now that I’m back up to 197lb.
As part of my journey, I’ve learned that most of what I thought I knew about diet, health, and nutrition is plain wrong. For example, much of the official, popular nutrition advice is full of outright lies.
This scene from the 1980s hit Beverly Hills Cop featured Judge Reinhold improvising a speech about the health dangers of red meat. The whole thing was made up.
To learn what worked for me, I had to read things that didn’t come from official or popular sources, and run my own experiments on myself.
One of the things I read was Scott Carney’s What Doesn’t Kill Us, which described the incredible health benefits of hyperventilation and immersion in freezing cold water.
Scott Carney is an investigative journalist who reported the remarkable truth of Wim Hof’s methods for controlling his own body.
In that book, Carney chronicles his own journey towards fitness under the tutelage of the brilliant Dutch eccentric Wim Hof, who pioneered cold immersion science and the benefits of maintaining brown fat.
As babies, we were all born with brown fat. Without regular cold exposure, we gradually lost our “baby fat.” Finally, by the time we reach adulthood, most of the fat in our body is white fat.
The white fat is a store of energy, but the brown fat contains extra mitochondria to burn that energy in a process called thermogenesis. In other words, brown fat helps burn off white fat to generate heat. With enough brown fat, a human body can withstand prolonged exposure to cold temperatures, including freezing water. Only regular exposure to cold will cause our bodies to build and maintain brown fat.
|
https://medium.com/morozko-method/what-does-23-body-fat-look-like-ec269396fa51
|
['Thomas P Seager']
|
2020-01-10 02:07:46.007000+00:00
|
['Ice Bath', 'Cold Water Immersion', 'Brown Fat', 'Weight Loss', 'Wim Hof']
|
Why Do We Have Beards? While Beards Are Completely Biologically Useless
|
Why Do We Have Beards? While Beards Are Completely Biologically Useless
Have you ever wondered why we have beards? In fact, scientists have shown that the beard is not the functional biological trait we thought it was for many years. A beard is like an ornamental plant.
Out of all the biological features of the human body, including the other types of hair, only the beard has no function and carries only decorative meaning. That means it doesn’t actually perform any specific physiological role. Let’s take a look at the remaining hairs on our body.
Body hair helps to regulate temperature. The hair on the head helps protect the scalp from the sun and keeps it warm in cold weather. Eyelashes help protect your eyes from insects or foreign objects. The eyebrows help prevent sweat from flowing into the eyes. Armpit hair helps reduce friction when moving the arms and keeps sweat from escaping. Genital hair helps protect against bacteria and also helps reduce friction.
But beards do not have any specific function. So why do they exist, and why only in men?
In the early days of beard research, evolutionary biologists thought that beards could have the same functions as head hair or genital hair: keeping in body heat and keeping bacteria away from the mouth. It sounds reasonable, but when another aspect is considered, these inferences must be discarded.
That aspect is that 50% of the population worldwide, women, have absolutely no beards. There are differences between males and females in the wild too, but rarely does an important trait appear only in males while the females, who are responsible for breeding, lack it.
If the beard had an important role in the human body, it should appear in both sexes. Instead, beards and mustaches appear only in men, from adulthood until old age. They are simply there doing nothing, and they keep growing no matter how many times we shave.
Can beards help attract sexual partners?
Professor Geoffrey Miller of the University of New Mexico, one of the preeminent evolutionary psychologists in the field, explains: “The two main explanations for beards in men are the opposite sex attractiveness (attracting women) and threatening the same sex (competing with other men). Beard can help signal a potential partner (sexual maturity and vigor)”.
It’s basically the same way other animals choose a partner. Male peacocks with the most colorful feathers are more likely to attract females, and antelopes with the most beautiful horns are more likely to reproduce. In the modern era, however, the beard does not function as strongly as a reproductive signal.
In fact, researchers have shown that some women like men with beards, some don’t, and some don’t care either way. But one thing is quite interesting: in an environment where beards are common, women find a clean-shaven face more attractive, and vice versa.
In evolutionary genetics, this is called “negative frequency dependence”. Scientists believe that within a population, a rare trait tends to be more advantageous. In guppies, for example, males with a unique combination of spots have a greater chance of mating.
Therefore, the beard is not inherently attractive to the opposite sex; its appeal depends on many other factors. So we can conclude that beards are really useless in human life.
Why do people have beards?
Throughout history, people have grown beards or shaved as a reaction to the choices made by enemies and opponents. The ancient Romans shaved off their beards for 400 years, because their rivals, the ancient Greeks, saw the beard as a symbol of position and wisdom.
For 270 years, the British lived under the threat of Viking invasion; the period from 793 to 1066 is considered the “Age of Viking invasion”. The British shaved as a reaction against Viking culture, with its bushy beards.
Another influence comes from rulers and high-ranking individuals. Emperor Hadrian brought the beard back to Rome in the second century AD, and the entire ruling class of the Roman Empire followed suit, including several of Hadrian’s successors.
In the Middle Ages, Henry V was the first king of England to shave, and because he was a great king, British society and the next seven kings followed in his footsteps. It was not until Henry VIII wanted to distinguish himself from his predecessors that he put his beard back on.
To this day, the beard has no biological role in the human body other than decoration. It can even have the opposite effect and be harmful: in a 1916 documentary film by McClure’s magazine, a doctor blamed beards for the spread of many known infectious diseases.
|
https://medium.com/@duyenthuy30397/why-do-we-have-beards-while-beards-are-completely-biologically-useless-d8b8d3af6fa0
|
['Dyedo Tikio']
|
2020-12-25 10:55:11.093000+00:00
|
['Knowledge', 'Biology', 'Science', 'Interesting Facts', 'Learning']
|
10k Trainer Redesign UX Case Study: Make Running Social Again
|
The Process
Product Research / Competitive Analysis:
I conducted research online, analyzing the design of competitors within the running market. Some of the apps I analyzed were 10k Trainer (of course), Strava, Mapmyrun, and Nike Run Club.
Current 10K Trainer App Design
Nike Run Club, Strava & MapMyRun
This research, coupled with the red routes, helped me develop a hierarchy of my design. I knew what needed to be most important on each screen.
Furthermore, I conducted a lot of research on Dribbble, looking for examples of successful UX/UI design. “What makes this design successful compared to the rest?” was the prominent question that propelled my research.
The overarching theme I found was simplicity. In almost every aspect — typography, layout, color scheme — the most successful designs were the simplest ones.
As Steve Jobs once said, “Innovation is saying no to 1,000 things”. I made sure to take this quote to heart. Simplicity became one of the key features to my design process.
User Stories
User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system.
User Personas & Red Routes
User Flows
The user personas, user stories, and red routes inform what the user flows look like. User flows are the beginning of the visual design of the app. They simply inform what will be displayed on each screen.
Sketching / Wireframing
My initial paper and low-fidelity wireframes were created to get a basic idea of what the app would actually look like. They essentially work as a rough draft for the high fidelity wireframes.
I created two low fidelity prototypes (shown below the paper wireframes).
These are the two most important screens for the design, so I prototyped these and planned to use the high fidelity versions of each as a template for the other screens.
Visual Design / Prototyping
Finally, it was time to work on the visual design for the high-fidelity prototypes. I went through multiple different color palettes and grid structures before arriving at my final product.
|
https://medium.com/@jared-poulsen/10k-trainer-ux-redesign-make-running-social-again-ad42bfe0115e
|
['Jared Poulsen']
|
2020-10-22 18:37:36.813000+00:00
|
['Running', 'Design', 'Covid 19', 'Apps', 'UX']
|
Suspicious Package Forces Evacuation at NYU Langone
|
Suspicious Package Forces Evacuation at NYU Langone
By: Izzie Ramirez, Arimeta Diop, Téa Kvetenadze, Zoe Haylock
Columbus Circle on Wednesday morning. Photo by Michael Shaffer.
NYU Langone's Ambulatory Care unit was evacuated this morning after a pipe bomb was found at a nearby post office. The package was addressed to James Clapper, former Director of National Intelligence, and labeled with the Time Warner Center's address, where CNN is based. Authorities intercepted the pipe bomb at the 52nd Street post office, which is on the same block as the medical center.
The Time Warner Center at Columbus Circle was evacuated mid-morning on Wednesday as the NYPD investigated a suspicious package that was sent to the CNN offices there. The evacuations stemmed from "an abundance of caution," the news organization's president said in a statement to staff that was posted to Twitter.
Some New Yorkers received emergency alerts on their phones warning residents near West 58th Street between Columbus and Eighth Avenues that a shelter-in-place was in effect.
The shops at Columbus Circle were reportedly reopened by early afternoon, although the Time Warner offices remain evacuated.
The potential threat arrived as several other suspicious packages were intercepted, including one addressed to Hillary and Bill Clinton’s home in Chappaqua, N.Y., and another to former president Barack Obama’s home in Washington, D.C.
“Both packages were intercepted prior to being delivered to their intended location,” the Secret Service said in a statement. “The protectees did not receive the packages nor were they at risk of receiving them. The Secret Service has initiated a full scope criminal investigation that will leverage all available federal, state, and local resources to determine the source of the packages and identify those responsible.”
Gov. Andrew Cuomo said his midtown office had also received a suspicious device. There were some reports that stated the White House was also sent a suspicious package but the Secret Service clarified that these were incorrect. Similar packages were found at the homes and offices of prominent figures across the country, though it’s unclear if they are connected.
At a press conference Wednesday afternoon, Mayor Bill de Blasio confirmed that there are no other “credible” threats in the New York City area and encouraged New Yorkers to continue with their daily lives. The NYPD has “reinforced [a] clear, visible presence at key media locations across the city” and other “important” locations.
NYU Public Safety told NYU Local they have no reason to believe any NYU people or facilities are in danger in either New York or D.C.
“Like all New Yorkers, members of the NYU community are concerned by last few hours’ developments, we pray for the safety and well-being of those targeted by these explosive devices, and we are thankful that no one has been harmed,” said Marlon Lynch, Vice President for Global Campus Safety. “Those who have plotted these attacks have not declared their motives. But we in the NYU community should declare our unequivocal repudiation of violence and fear and hate in all its forms, and should, in our own political discourse on campus, display the kind of commitment to reasoned, peaceful debate that is in line with our highest values.”
This post will be updated as more information becomes available.
Updated: October 24, 2018
This article has been updated to include a statement from NYU Public Safety.
Updated: October 26, 2018
The content and headline of this article have been altered to include the latest incident.
|
https://nyulocal.com/suspicious-package-sent-to-time-warner-center-forces-evacuation-a28ece972061
|
['Nyu Local']
|
2018-10-30 16:57:08.692000+00:00
|
['Security', 'Hillary Clinton', 'City', 'New York', 'Cnn']
|
The New Startup Visa in Australia— a Guide for Beginners
|
James Cameron — Partner at Airtree Ventures
The other day a good mate asked me a question — “what’s the number 1 thing that Australia needs to learn from Silicon Valley?” A tonne of things came to mind — but one thing stood out more than anything else: we need to attract more great talent from around the world 🌎🚀
I can’t think of a single factor that has had a bigger impact on the success of Silicon Valley than skilled immigration.
The numbers are staggering:
More than half of the unicorns in the Valley were founded by immigrants to the US
If you look at the senior, non-founding roles at those same companies, the proportion of immigrants jumps to a whopping 71%
Some of our favourite portfolio companies at AirTree have been founded by immigrants to Australia like Manish from Dgraph, Mina from Different or Pieter from Secure Code Warrior.
I reckon if we can only learn one thing from Silicon Valley’s success — it’s that skilled immigration is critical to the success of a startup ecosystem 🇦🇺
The new ‘Startup Visa’
When the government decided to cut the 457 visa program, the startup ecosystem in Australia was rightly up in arms.
Fortunately, they’ve put in place a new Global Talent Scheme (GTS) pilot which is designed to cater for startups hiring needs — what we’re calling the ‘Startup visa’.
This scheme is aimed squarely at addressing Australia’s tech talent shortage.
The scheme is still only in pilot phase, and due to the reshuffle in the Department of Home Affairs last year, it has been a little slow to get off the ground.
However, it was recently announced that Q-CTRL was the first company to be certified as eligible to access the visa scheme under the Startup stream, and SafetyCulture the first under the Established Business stream.
Now that it’s up and running, this could be a game-changer for our ecosystem.
To figure out what the scheme’s all about and (more importantly) how you can get on it, we’ve teamed up with our friends at StartupAus and LegalVision to work through some of the most frequently asked questions.
Here’s our FAQ list with answers below:
1. What is it? 🤔
The ‘Startup visa’ is a brand new type of visa specifically designed for startup companies that operate in a technology-based or STEM-related-field.
This visa is part of the Global Talent Scheme (GTS) under the Temporary Skill Shortage (TSS) visa (subclass 482) which has now formally replaced the old 457 visa.
The Startup visa is designed to:
Help you attract top quality global talent, who possess highly specialised skills in their field
Fill niche roles within your business that you couldn’t otherwise fill from within the Australian labour market or the standard TSS visa program
It’s important to note that you can already sponsor people through the TSS visa program if the occupations you seek are available on:
The Short-term Skilled Occupation List (up to 2 year visa); or
The Medium and Long-term Strategic Skills List (up to 4 year visa with the option to apply for a permanent residency pathway)
You can search for occupation via the “eligible skilled occupations list”
However, the most significant benefit of the Startup visa is that you can employ candidates for emerging or niche occupations that are not currently available or appropriately defined under a single occupation on the eligible skilled occupation lists (STSOL or MLTSSL).
This means you can now employ talent:
In emerging sectors such as Quantum Computing, Artificial Intelligence and Virtual Reality — this is really helpful since the technology industry is changing so rapidly, it’s difficult to fit new occupations (such as a quantum engineer) into the eligible skilled occupation lists that were created for more traditional visa programs ~20 years ago
To fill hybrid roles! In the more traditional visa programs you can only choose one occupation, and there are certain qualifications and requirements for that specific occupation that your ideal candidate may not meet exactly.
2. What kind of companies are eligible to apply for a Startup visa?
✅ You must be operating in a technology-based or STEM-related field (we’re hopeful this will cover most tech-focused startups in StartupAus!)
✅ You must be able to demonstrate that your recruitment policy gives first preference to Australian workers. The Labour Market Testing (“LMT”) under the Startup visa has more flexible requirements than the TSS visa (refer to: LMT website for comparisons). However you should be able to:
Provide evidence that you’ve advertised for the role in Australia (e.g. Seek, LinkedIn)
Keep a record of your job postings. And if no local applicants were successful, keep notes on why they were unsuccessful (e.g. not qualified enough etc.)
✅ Your company must be a good corporate citizen with no breaches of workplace law, or immigration law or any other applicable Australian law — though we hope you are doing all of this anyway!
✅ You may need to demonstrate that your employees are paid in accordance with current market salary rates for the occupation, noting:
The total amount can include equity
The minimum salary for the GTS is $80,000, of which at least $53,900 must be cash. The rest of the $80,000 minimum can be equity to the equivalent value (refer to: “Salary and Employment Condition Requirements for Sponsored Skilled Visas”)
Each occupation will need to meet its own industry specific salary benchmark by referring to sources such as joboutlook.gov.au, payscale or industry recruitment websites. If there is no clear benchmark to follow, you may need to demonstrate that you have taken appropriate measures to identify an appropriate salary for the nominated occupation
✅ You must be certified as eligible for the scheme by a ‘start-up authority’. This means you will need to meet at least one of the following requirements:
Received an investment of at least A$50,000 from an investment fund registered as an Early Stage Venture Capital Limited Partnership (“ESVCLP requirement”); or
Received an Accelerating Commercialization Grant at any time (“ACG requirement”)
This requirement is only for the early stages of the pilot program, and as the scheme matures and develops, we would expect this to transition to a points-based test.
The Department of Home Affairs (the “Department”) has also set up an independent GTS Startup Advisory Panel (the “Panel”) to help them decide if you are eligible for a Startup Visa.
Assuming you meet the ESVCLP or ACG requirement above, then the Department will be in touch to seek further evidence for the Panel to make their assessment.
✅ Finally, you must also be able to demonstrate that:
You cannot fill the positions through the traditional TSS visa program (refer to: Question 6 and 7 below); and
Accessing the Startup visa will allow the creation of job opportunities and the transfer of skills to Australians
Once you become certified as an eligible company, you can access the Startup visa scheme and nominate up to 5 positions per year!
However, it’s important to note:
You must still lodge nomination applications for each overseas candidate — the Department will respond within 5–11 business days; and
Each candidate will still need to apply for a TSS visa online — the Department will respond within 5–11 business days (the process will be expedited given the Startup visa agreement is already in place)
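As a quick illustration, the salary floor described in the eligibility checklist above (a minimum $80,000 package, of which at least $53,900 must be cash) can be expressed as a simple check. This is a sketch for intuition only — the function name is my own, and it ignores the occupation-specific market-salary benchmarks the scheme also requires:

```python
def meets_gts_salary_floor(cash_salary: float, equity_value: float) -> bool:
    """Sketch of the GTS minimum-salary rule quoted above: the total
    package (cash + equity) must reach $80,000, and at least $53,900
    of it must be paid as cash."""
    TOTAL_MIN = 80_000  # minimum total package (AUD)
    CASH_MIN = 53_900   # minimum cash component (AUD)
    return cash_salary >= CASH_MIN and (cash_salary + equity_value) >= TOTAL_MIN

print(meets_gts_salary_floor(60_000, 25_000))  # True: $85k total, cash above the floor
print(meets_gts_salary_floor(50_000, 40_000))  # False: cash below the $53,900 floor
```

Note that passing this check is necessary but not sufficient — the relevant industry benchmark may set a higher bar for a given occupation.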
3. What are the requirements for the candidates? 🤓
Candidates must:
Meet health, character and security requirements
Have no familial relationship with directors/shareholders of the company
Have qualifications that relate to the role they are applying for
Have at least 3 years’ work experience that is directly relevant to the position, and have the capacity to pass on skills/develop Australians
4. Key terms of the Startup visa
👉 The visa will last for up to 4 years, and if you decide you’d like to make the candidate a permanent employee, they may have access to a permanent residence pathway (“PR”) after 3 years.
👉 There are no age restrictions
👉 If the position ceases while the visa holder is on their temporary visa, they will have 60 days to find a new sponsor, apply for a new visa, or depart Australia.
5. How do I get one? 🙋
The pilot program will run until June 2019.
To get started:
First assess whether you can meet the ESVCLP or ACG requirements (refer to: Question 2 above and the Department’s website)
Then refer to the “Step by step process” on the Department’s website
Once an Expression of Interest has been submitted, the Department will request further info to assist the Panel in their assessment
The Department will continue to refine the process during the course of the pilot to ensure the Startup visa scheme is having the desired impact.
6. What is a Temporary Skill Shortage Visa (TSS)?
The TSS visa has two main streams:
The Short-term stream is for employers to source temporary overseas skilled workers in occupations that are needed to fill short-term skill shortages (occupations listed on the STSOL) — under this stream the visa is valid for a maximum of 2 years (or 4 years if an international trade obligation applies)
The Medium-term stream is for employers to source highly skilled overseas workers in occupations that are needed to fill critical skills shortages (occupations on the MLTSSL) — under this stream, the visa is valid for up to 4 years
The TSS visa is a temporary visa and does not provide a right to permanent residence:
If the candidate’s occupation is on the STSOL there is no option to apply for permanent residence
If the candidate’s occupation is on the MLTSSL, they will have the option to apply for permanent residence after 3 years, provided their occupation remains in need in Australia
Candidates must:
✅ Have at least 2 years’ full time work experience directly relevant to the position and undertaken within the last 5 years
✅ Provide evidence that they meet the English language requirements
There are no age requirements. However, you (as the employer) will need to be an approved business sponsor.
7. Which visa stream should I apply for — the traditional TSS visa or the Startup visa?
8. What other visas can tech-focused startups use to employ international talent? 🧐
The TSS visa is the most common way for employers to sponsor foreign workers temporarily. And, if the candidate meets all the requirements under the Medium-term stream (including having an occupation on the MLTSSL), you can nominate them for PR on subclass 186 through the direct entry stream.
If the candidate is:
✅ Highly skilled;
✅ Has an eligible occupation; and
✅ Meets the points test and age threshold…
… they can apply for a visa under the general skilled migration program independently (subclass 189), or by being nominated by a state or territory government (subclass 190 or 489). Employers do not need to sponsor candidates for these visas.
|
https://medium.com/airtree-venture/the-new-startup-visa-in-australia-a-guide-for-beginners-b32dbf5e88f2
|
[]
|
2021-03-22 02:33:41.814000+00:00
|
['Startup', 'Immigration', 'Technology', 'Tech', 'Resources']
|
EP 45: Joseph Tsai, Jack Ma's Star Teammate
|
in The New York Times
|
https://medium.com/@terrynut/ep-45-joseph-tsai-%E0%B8%84%E0%B8%B9%E0%B9%88%E0%B8%AB%E0%B8%B9%E0%B9%81%E0%B8%82%E0%B9%89%E0%B8%87%E0%B8%97%E0%B8%AD%E0%B8%87-jack-ma-7e7ac747bd2d
|
['Nut P']
|
2020-12-26 12:02:10.023000+00:00
|
['Technology', 'Biography', 'NBA', 'Ecommerce', 'Business']
|
#Prochoice #Antichoice #Toxicmasculinity
|
After becoming pregnant I knew I had two choices that would be best for me. I could either continue with the pregnancy or I could have an abortion. I knew it was not a decision that I was required to make right away considering I was only 5 weeks along.
After my third positive pregnancy test, I knew it was time to tell my previous live in partner that I discovered I was pregnant. When I told him I really just needed the moral support, the love and to not feel so alone. For a moment in time I just needed it to live in the air.
I was immediately bombarded with a million text messages, him showing up at our home screaming at me that the only route we were taking was for me to get an abortion, that I was not the woman he wanted to have children with, and that I would be a horrible mother. I knew it was anger, and everything falling out of his mouth was meant to push me toward the decision that he wanted — anything he could say to control a situation that was not his to control.
|
https://medium.com/@ekelly72/prochoice-antichoice-toxicmasculinity-8642e9a5b9da
|
['Elizabeth Blaine']
|
2020-12-06 01:45:39.955000+00:00
|
['Abortion', 'Toxic Masculinity', 'Pro Life', 'Pro Choice', 'Feminism']
|
A Poem like thing.
|
I’m just a human
My eyes just watch the world,
Go around,
for I am,
A normal but a bit weird one
invisible,
rotting,
And unloved.
But still,
Even though death is the end,
I watch you,
And you just don’t see me.
|
https://medium.com/@gayumiwijewardana/a-poem-like-thing-66fab77e6522
|
['Gayumi Wijewardana']
|
2020-12-17 10:01:09.246000+00:00
|
['Life', 'Poem', 'Feelings', 'Poetry', 'Love']
|
Life Is Short, Eat The Pasta
|
…how I stopped wasting time, chasing ‘skinny’
Image from Nicole Width article on TheList.com
In 2009, a reporter asked model and actress Kate Moss what her life-mantra was. She replied, “Nothing tastes as good as skinny feels.” I was 18 when I read this, a highly impressionable teen, and this mantra stuck with me.
It inspired a core belief that lasted almost a decade: to be happy, I had to be skinny.
Over the years, I went on countless diets: Weight Watchers, Atkins, Keto, High Protein, Intermittent Fasting, The Apple and Water Diet, The Master Cleanse…if you’ve heard about it on Doctor Oz, I’ve tried it. Often, when I didn’t find results in dieting, my efforts turned into disordered eating.
I would opt for calorie restriction or over-exercising, sometimes both. At very desperate times, I turned to more unhealthy and often dangerous methods on my quest to be smaller.
When I entered my twenties, I avoided social events where I knew people would be drinking alcohol, because I was threatened by the consequences of consuming liquid calories.
If I absolutely couldn’t miss an event like a wedding or a birthday party, I’d make sure I didn’t eat anything leading up to it, or I’d choose to either eat or drink my calories that evening- and you guessed it, this often ended up in monstrous hangovers, drunk ugly-crying, and me punishing myself on the treadmill the next day for ‘losing control.’
In my mid-twenties, I moved in with my then boyfriend (now husband) and these patterns weren't my little secret anymore. My friends and boyfriend started to notice my unhealthy behavior and self-deprecating comments about my weight, and they were quick to call me out on it. Yet, none of it seemed to faze me or deter me from my behavior.
I still jumped out of bed at the crack of dawn every morning to do a spin workout while my husband, seeing the bags under my eyes, would beg me to slow down. I still continued on, even when a friend expressed her concern and frustration that I had bumped dinner plans with her for a double session at the gym.
Image from Emily DiNuzzo article on Insider.com
These comments and worries were received as just distractions to me, keeping me from what I needed to do everyday, get smaller…because like Moss said, “Nothing tastes as good as skinny feels.”
My quest to be thinner didn’t just play out in my personal life, but affected my professional life as well. At a lunch break or during a lull at work, I’d be scrolling on an instagram ‘Fitspo’ page or I’d google things like ‘How many calories in an orange?’ or ‘How to lose 10 lbs before the weekend.’ I’d scribble calorie math or new-diet grocery lists on post-its and spend my lunch break making Google spreadsheets of eating plans, structuring my weekly meals meticulously, making sure to offset each consumed calorie with an appropriately assigned daily workout.
Exhausting right?
Now here is the kicker… in all this time and with all this effort, I never saw any truly dramatic weight loss, EVER. Sure, maybe 5 lbs up and down, here and there, but as I spent all my spare time reaching for model-thinness I only found unhappiness, anxiety, low self-esteem, and loneliness in my suffering.
I didn’t notice at the time, but the amount of energy I was spending on my quest to become smaller, was keeping me from relishing all the joys in my life.
Things really came to a head when I got engaged over Christmas in 2018. As a bride-to-be, I quietly kicked my weight loss efforts into high-gear. The diet-culture around being a bride ran absolutely rampant in my private repertoire. After so many years creating excuses, I finally had a ‘real’ excuse to back out of social events or squeeze in a third workout of the day. I loved playing the “Bride Card” because everyone knows that brides want to look their best on their wedding day.
Hungry and anxious, 3 months before my wedding, my (now husband) and I went out for dinner with our friends, another couple. Noticing me pushing food around my plate, my friend boisterously and almost-maternally called me out, “KAT! Why aren’t you eating?” Ah, I thought, perfect time to use the Bride Card. So, I replied, “I’m a bride! It’s crunch time with the wedding only a few months away.”
I figured my response would be enough for her to lay-off and let me pick at my sad bowl of lettuce, but my friend just looked at me straight in the eye and sternly said, “You know, I did the same thing leading up to my wedding- I dieted and exercised like crazy. It was exhausting.” She went on to tell me that a few nights before her wedding, her parents took her out to a beautiful Italian restaurant for dinner. She was avoiding carbs like the plague and her mom noticed that she begrudgingly didn’t order pasta, like she always does. (Important to note, my friend is Italian and a serious foodie) I looked at my friend not sure where her story was going. Then she continued,
“My mom said to me, ‘a bowl of pasta tonight won’t have any affect on how you look tomorrow. Life is short, eat the pasta.’ So I ordered pasta and never looked back.”
I don’t remember my response to this or how the rest of the dinner went, but I’ll never forget her story and her mom’s words:
“Life is short, eat the pasta.”
BOOM. There was the ‘Ah-Ha’ moment I didn’t know I needed.
Her mom’s words suddenly embodied all the things I had been missing out on in my life. The ‘pasta’ became a metaphor for all the joys of life I’d been avoiding. I’d been focused so much on getting skinny that I forgot how unhappy I was becoming in the process and
for the first time, I felt the weight of all the things I missed out on, because I was so focused on losing weight.
After that night, I decided it was time to adopt a new mantra. So, I traded in Moss’s “Nothing tastes as good as skinny feels,” for “Life is short, eat the pasta.” With this mantra in mind, I started a new journey and my life began to open up.
For so long, I thought one bite of a bagel or one missed workout would send me into a spiral, ending in failure and pounds gained. So, I began to trust myself and I loosened the reins a bit. I started by taking a rest day from the gym each week. Then, I let myself have carbs on a weekday (Yep, I used to not allow myself to eat bread Monday-Friday). I even had a few pints of beer at my friend’s 30th birthday without thinking about the calories.
Little by little, I learned to trust myself and actually listen to my body and my stomach, instead of punishing and restricting them.
I engaged in small practices like this, everyday. Before I knew it, it was my wedding day. That morning as I brushed my teeth, I felt a panic and these thoughts filled my head:
“I should’ve fasted yesterday.”
“People are going to notice that I didn’t lose weight.”
“Should I skip breakfast?”
The anxiety rose and tightened in my chest. Then, I remembered that new mantra, “Life is short, eat the pasta.” Those pesky thoughts started to slip away and the anxiety went with them, leaving me completely present and focused on the pure joys of that day. The only thing that truly mattered was the incredible man waiting for me at the end of the aisle- Oh, and the gooey mini grilled cheese sandwiches we were serving at cocktail hour.
My husband and I, about to crush grilled cheese and pork sliders at our wedding >>> Photographer: Alanna Hogan Photography
Married a little over a year now and approaching 30, I am damn proud of myself for every moment I've succeeded in redirecting and reframing my thoughts and behavior patterns around my body and my weight, but the process certainly hasn't been rainbows and butterflies. Spending so many years practicing unhealthy habits, living in a skewed reality, and following a set of deranged priorities brought on a whole new set of challenges.
I’ve had to be uncomfortably vulnerable with my husband and close friends to build a support system. I’ve had to be brutally honest with myself, recognizing and avoiding triggers around me that could lead me back into a dangerous cycle.
Sure, I still struggle with doubt and insecurity about my body size, but those thoughts are now very rare and fleeting. I have reassigned all the energy I spent restricting food and exercising excessively, towards building my life. Social events, cheeseburgers, gentle movement, sleeping in, ice cold beer, (just to name a few), which were always a threat to me, now bring me so much joy.
No longer spending hours a day at the gym or calculating calories, I have more time to be a better friend, to connect with my family, to hone my writing skills, to educate myself on world issues, and to enjoy my marriage.
Sure, I workout and eat healthy most of the time, but I do it because it makes me feel good, not because I feel the need to be thinner.
It isn’t easy and it’s sometimes been very painful, to look back at all the time I wasted worrying about my weight — the things I could’ve accomplished, the happy hours I missed, the relationships I could’ve saved, the desserts I could’ve tasted…
But I am hopeful that my story will resonate with someone else who might be stuck in a weight loss rat race and might need a reminder that life is short and honestly, everything tastes better than skinny feels.
|
https://medium.com/fellowship-writers/life-is-short-eat-the-pasta-19d7c3321b08
|
['Kat Merry']
|
2021-01-04 11:02:47.672000+00:00
|
['Life Lessons', 'Self Improvement', 'Body Image', 'Mental Health']
|
8 Unique Outputs You Can Expect From Industrial Drones Mapping
|
Drones, or Unmanned Aerial Vehicles (UAVs), are seeing rapidly growing commercial demand. They have taken over tasks in many critical industrial applications, and aerial mapping and surveying is one of the essential applications drones perform efficiently.
Their ability to capture pictures and data from above has made drones vital to several surveying workflows. They can perform photogrammetry, 3D mapping, topographic surveying, land surveying, and more.
In contrast to traditional methods, drones are more cost-effective and complete these tasks in a fraction of the time.
What Is Aerial Mapping and Drone Surveying?
Aerial mapping, or drone surveying, refers to capturing data from above the ground using a downward-facing sensor. Different kinds of sensors are available for aerial mapping and drone surveys.
The most common, however, are multispectral or RGB cameras and LiDAR.
For aerial mapping with RGB cameras, a technique known as photogrammetry is used. The drone captures overlapping images of the ground from many positions and angles, and software tags each image with the coordinates of the location where it was taken.
Finally, photogrammetry software combines the images from these multiple vantage points to generate detailed 2D and 3D maps.
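One concrete number that falls out of this setup is the ground sample distance (GSD) — how much real-world ground each image pixel covers at a given flight altitude. The formula below is a standard photogrammetry rule of thumb rather than something stated in this article, and the camera figures in the example are assumptions:

```python
def ground_sample_distance_cm(sensor_width_mm: float, focal_length_mm: float,
                              altitude_m: float, image_width_px: int) -> float:
    """Ground sample distance: the real-world width, in centimetres,
    covered by one image pixel when flying at the given altitude."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Assumed camera: 13.2 mm sensor width, 8.8 mm lens, 5472 px image width
print(round(ground_sample_distance_cm(13.2, 8.8, 100.0, 5472), 2))  # ~2.74 cm/px at 100 m
```

Flying higher coarsens the GSD proportionally, which is why mapping flights trade altitude (coverage per flight) against resolution.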
What are the expected Data Outputs From Drones Mapping?
Depending on the type of sensor and method used for aerial mapping, you can obtain several different outputs. The most common are described below:
1. 3D Orthomosaic Map
A 3D Orthomosaic map is a 3D model of a piece of land. You can generate 3D maps easily using a drone.
All you need is a large number of digital photos of the site; 3D rendering software then converts them into a 3D map.
2. 2D Orthomosaic Map
A 2D Orthomosaic map is a georeferenced 2D model of a land.
Thousands of digital images taken during the drone survey are combined to form a 2D map.
3. Thermal Maps
A thermal map provides a thermal image of the surveyed area. This technique is used to detect faults in numerous thermal applications; for instance, thermal mapping can reveal defects in solar plants by detecting hot spots.
Drones can also use thermal imaging to detect humidity in tunnels. On a thermal map, surfaces with abnormal heat show up immediately, making targets easy to identify.
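The hot-spot detection just described reduces to a toy sketch: treat the thermal map as a grid of temperatures and flag any cell above a threshold. Real inspection software is far more sophisticated, and the 85 °C threshold here is an arbitrary assumption:

```python
def find_hot_spots(thermal_grid, threshold_c=85.0):
    """Return (row, col) positions of cells hotter than the threshold --
    e.g. an overheating cell on a solar panel."""
    return [(r, c)
            for r, row in enumerate(thermal_grid)
            for c, temp in enumerate(row)
            if temp > threshold_c]

panel = [
    [42.0, 43.1, 41.8],
    [44.2, 95.6, 43.0],  # one abnormally hot cell
    [41.9, 42.5, 42.2],
]
print(find_hot_spots(panel))  # [(1, 1)]
```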
4. 3D model — Building Information Modeling
A 3D model of an object can be easily generated using a drone with an RGB camera. The drone rotates around the asset whose 3D model is desired.
The Building Information Model (BIM) provides various insights and key data to the constructors and the engineers. This BIM model helps in efficiently designing, planning and constructing buildings, etc.
5. LiDAR Point Cloud
LiDAR (Light Detection and Ranging) is a sensing technology. It uses rapid laser pulses to map out the surface of the earth.
LiDAR is advantageous when used to create terrain and elevation models and high-resolution digital textures for various industrial and aerial mapping applications.
6. Multispectral Map
A multispectral map captures the radiation from the surface and provides visuals beyond the visible light spectrum.
This type of technology is vital in agriculture applications and crop management. This kind of map needs a drone equipped with a multispectral camera.
7. Digital Surface Models (DSM)
Another significant deliverable that can be obtained by an aerial mapping drone is the digital surface model.
In a DSM, each pixel carries 3D coordinate information: X, Y, and an elevation represented by Z. These models are generated from imagery captured by high-resolution cameras.
8. Digital Terrain Model (DTM)
Drones can also be used to provide you with a digital terrain model (DTM) which is also known as the Digital elevation model (DEM).
Using 3D photogrammetry modeling, surface objects are filtered out so that the model represents the bare earth. Each pixel of the data file contains the terrain's elevation, in a digital format laid out on a rectangular grid.
3 Advantages of Aerial Mapping Drones
Drones in aerial mapping have numerous advantages over other traditional methods, below I’m listing the top 3 important benefits:
1. Speed and cost. Aerial mapping using drones captures topographic data up to five times faster than traditional methods, and the cost is reduced drastically.
2. Access. Drones easily reach areas that are difficult to survey by other means, such as mountainous terrain with steep slopes, and they can take off and land at almost any point of interest, making them highly efficient in different site conditions.
3. Accuracy. Measurements and models created using a drone are precise and accurate, as one drone flight can provide thousands of overlapping images with geolocation data.
Importance of Aerial Mapping Drones in Dubai
The United Arab Emirates is one of the most popular destinations in the world, known for its unique architectural constructions, trendy new buildings, and skyscrapers.
Relatively, the construction industry is one of the most vital business indicators and revenue-driving industries in the UAE, and therefore drones play an integral part in it.
Since the UAE is always on the watch to adopt new, innovative technologies, it is looking to use drone surveying for smart-city modeling, in which drones would survey the city for security purposes.
This new Emirati ecosystem concept aims to manage drones around cities so that they interact and coordinate with people. The management system developed in the UAE would control various aspects of drone flight, such as no-fly-zone control, beyond-visual-line-of-sight operations, and automated flight permissions.
These are all essential aspects of the safe flight of UAVs in populated urban areas. The system will also serve as a testing ground for numerous different applications of drone technology. Therefore, drones are becoming exceptionally important in the United Arab Emirates and all around the world.
Conclusion:
If you are considering an efficient, cost-effective, and reliable drone service specialist company for your aerial mapping project, GeoDrones provides drone services at the pace of your project needs. Unlock the potential of Drone services by surfing through the services provided by GeoDrones. It will enable you to embrace the best industry practices.
|
https://medium.com/@m-shawky-2021/8-unique-outputs-you-can-expect-from-industrial-drones-mapping-39228b56930
|
['Mohamed Shawky']
|
2021-08-02 00:00:00
|
['Aerial Mapping', 'Drone Surveying Company', 'Aerial Mapping Drones', 'Drone Service', 'Drones']
|
Build a Reinforcement Learning Terran Agent with PySC2 2.0 framework
|
USEFUL LINKS FOR THE TUTORIAL
BEFORE STARTING
Through this tutorial, you will build a Smart Terran Agent capable of learning a better strategy over time through a reward system based on the actions it takes and the states that result from them.
System requirements
PySC2 2.0 framework installed
Pandas python library
Numpy python library
Basic python programming skills
Also take into consideration that applying reinforcement learning to the complete game is extremely complex and takes a lot of time and computational power. We can make things easier for ourselves by significantly reducing the complexity of the game. That is why the bot will be playing against "itself": instead of facing an opponent with all the abilities in the game, the opponent will have the same abilities and restrictions we define, and it will behave randomly.
1.- IMPORTS
Import the libraries at the top of the file as follows:
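The original code listing is not reproduced here, so the following is a sketch of the imports such a PySC2 2.0 agent typically needs; verify the module paths against your installed version:

```python
import random

import numpy as np
import pandas as pd

from absl import app
from pysc2.agents import base_agent
from pysc2.env import sc2_env, run_loop
from pysc2.lib import actions, features, units
```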
2.- CREATE A QTABLE FOR REINFORCEMENT LEARNING
We must define the algorithm for our Machine Learning Agent, and this is where the QLearningTable comes into action. It is a simplified form of reinforcement learning.
It is essentially a spreadsheet of all the states the game has been in, and how good or bad each action is within each state. The bot updates the values of each action depending on whether it wins or loses, and over time it builds a fairly good strategy for a variety of scenarios.
Inside the class of the QLearningTable we will also define the following methods
Choose Action Method
The main method of the learning table is choosing the action to perform. Here, the e_greedy parameter means it will choose the preferred action 90% of the time, and the remaining 10% of the time it will pick an action at random to explore extra possibilities.
In order to choose the best action, it first retrieves the value of each action for the current state, then chooses the highest-valued action. If multiple actions share the highest value, it chooses one of them at random.
Learn Method
The next important method here is learn. It takes as parameters:
s is the previous state
a is the action that was performed in that state
r is the reward that was received after taking the action
s_ is the state the bot landed in after taking the action
First in q_predict we get the value that was given for taking the action when we were first in the state.
Next we determine the maximum possible value across all actions in the current state, discount it by the decay rate (0.9), and add the reward we received.
Finally we take the difference between the new value and the previous value and multiply it by the learning rate. We then add this to the previous action value and store it back in the Q table.
The result of all this is that the action will either increase or decrease a little depending on the state we end up in, which will make it either more or less likely to be chosen if we ever get into the previous state again in the future.
In simple terms, it does some mathematical calculations and updates the table accordingly, and that is how it learns over time.
The check_state_exist method checks whether the state is already in the QLearningTable, and if not, adds it with a value of 0 for all possible actions.
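Putting the pieces described above together, a minimal QLearningTable might look like the sketch below. The hyperparameter values are common defaults, not prescriptions from this tutorial:

```python
import numpy as np
import pandas as pd

class QLearningTable:
    def __init__(self, actions, learning_rate=0.01,
                 reward_decay=0.9, e_greedy=0.9):
        self.actions = actions
        self.lr = learning_rate
        self.gamma = reward_decay
        self.epsilon = e_greedy
        # one row per visited state, one column per action
        self.q_table = pd.DataFrame(columns=self.actions, dtype=np.float64)

    def choose_action(self, observation):
        self.check_state_exist(observation)
        if np.random.uniform() < self.epsilon:
            # 90% of the time: pick the best-valued action,
            # breaking ties at random
            state_action = self.q_table.loc[observation, :]
            action = np.random.choice(
                state_action[state_action == np.max(state_action)].index)
        else:
            # 10% of the time: explore with a random action
            action = np.random.choice(self.actions)
        return action

    def learn(self, s, a, r, s_):
        self.check_state_exist(s)
        self.check_state_exist(s_)
        q_predict = self.q_table.loc[s, a]
        if s_ != 'terminal':
            # max future value, discounted by the decay rate, plus reward
            q_target = r + self.gamma * self.q_table.loc[s_, :].max()
        else:
            q_target = r  # end of game: no future value to discount
        # nudge the stored value toward the target by the learning rate
        self.q_table.loc[s, a] += self.lr * (q_target - q_predict)

    def check_state_exist(self, state):
        if state not in self.q_table.index:
            self.q_table.loc[state] = [0.0] * len(self.actions)
```

Over many games, the values in q_table drift toward the actions that actually led to wins.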
3.- DEFINE A BASE AGENT
Then we must define a Base Agent, an agent that both our random and learning agent will use. It has all of the actions that can be done and a few other methods that both agents can share to make them a little simpler.
Helper Functions
To perform these actions the bot will need helper functions described below:
Returns a specific set of units of the army (applies for buildings and troops)
Returns the units that are finished and not the ones that are being created (applies for buildings and troops)
Calculate the distances between a list of units and a specified point
Specific Actions
And finally the actions that can be done are:
The method that will send an idle SCV back to a mineral patch
Generate buildings (for further and deep reinforcement learning you can consider to add more type of buildings)
Create an army of marines and sending them to attack
Step method to know where our base is placed and do nothing (no operation to perform)
It's important to mention that, unlike regular actions, raw actions do not crash if the action cannot be performed, but it is best to perform your own checks so that error notifications do not appear in the game.
Finally each one of these methods will receive the observation from each step so that it can act independently.
4.- RANDOM AGENT
We choose an action at random here from our predefined list, and then we use Python’s getattr which essentially converts the action name into a method call, and passes in the observation as an argument.
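A minimal sketch of that getattr dispatch is below; the action names and the stub methods are stand-ins for the real BaseAgent methods:

```python
import random

class RandomAgent:
    # action names mirroring the BaseAgent methods (assumed names)
    actions = ("do_nothing", "harvest_minerals", "build_supply_depot",
               "build_barracks", "train_marine", "attack")

    def step(self, obs):
        action = random.choice(self.actions)
        # getattr converts the action name into a method call
        return getattr(self, action)(obs)

    def __getattr__(self, name):
        # stubs so the sketch runs standalone; the real agent
        # inherits these methods from BaseAgent
        if name in type(self).actions:
            return lambda obs: name
        raise AttributeError(name)
```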
5.- SMART AGENT
The Smart Agent is like the Random Agent, but with the machine learning added: this is where we initialize the QLearningTable when the agent is created. It takes the actions of the Base Agent, and that is how the QLearningTable knows which actions it can choose from and then perform.
New Game Method
Here we start a new game once the current one is finished, by simply initializing some values, like where the base is, and the previous state and action.
The previous state and previous action are important for reinforcement learning: each time the agent performs an action, it stores that action in the previous-action variable, so in the next step it knows what it has already performed, and the same goes for the state.
Get State Method
Essentially it takes all the values of the game that we find useful and important, for example how many barracks, supply depots, or idle SCVs we have, and returns them in a tuple that can be fed into our machine learning algorithm, so it knows the current state of the game at any given point.
To make the machine learning algorithm learn faster, we simply want to know whether or not we can afford certain actions, without paying too much attention to exactly how many minerals we have.
If you want to add more types of units or buildings and store their values, check the links at the top to find the related functions.
That seems like a lot of code but really we’re just keeping track of our units and the enemy units.
Step Action Method
It gets the current state of the game and chooses an action: it feeds the state into the QLearningTable, which returns either the best action or a random one, and finally returns it.
Then we want to learn from the action and state that were previously saved, so we call the learn method on the QLearningTable, passing those values along with the reward received from the game (most of the time close to 0, with 1 if we win and -1 if we lose). We also pass in whether the state is terminal, which matters because a reward at the end of the game is not valued the same as the rewards we receive on the way there.
Then we store the new previous state/action to use in the next step of the game, and execute the action we have chosen.
If we don’t reset the previous_state and previous_action it could teach our agent incorrectly at the start of each game, so let’s reset the values when we start a new game:
6.- MAIN METHOD
At the end we have the method that runs the game to see what happens in real time. Here we create the SmartAgent and our RandomAgent, set those as the players, pass those also in the run loop to control both agents instead of one and then, once it starts it will open 2 windows, one for each agent.
Here you can see both agents running:
CONCLUSIONS
When you initially run this, both agents will do pretty much the same random actions, training some marines and attacking, building randomly or any other weird stuff, but this is just because the smart agent is still not smart enough.
Eventually the Smart Agent will discover that it can actually build more units for attacking, putting buildings in the front of the line to stop the enemy attack and by the time we get to hundreds of games played the strategy has already evolved.
There is also the fact that the random agent can stumble into a better strategy just because it is operating randomly, so the win percentage of the Smart Agent will not always be 100%.
There are things not considered, like the health of the units and buildings, or where the enemy marines are, so implementing those in the future could lead to better decisions and a higher win rate.
Thanks for reading :) and you can find complete code here.
References
|
https://medium.com/@a01701804/build-a-reinforcement-learning-terran-agent-with-pysc2-2-0-framework-c6be51b13d48
|
['Juan Arturo Cruz Cardona']
|
2020-11-26 18:57:37.928000+00:00
|
['Python', 'Agents', 'Starcraft 2', 'Pysc2', 'Reinforcement Learning']
|
Anti-Solar Panels in 500 Words or Less
|
By Garima
Can darkness be an energy source?
The answer is yes. And Anti-Solar Panel is the solution.
The anti-solar panel prototype. (Image credit: Stanford University)
It is based on the concept of using a temperature difference to generate energy, but as an inverse version of the solar panel. The traditional solar panel generates electricity from sunlight during the day; the anti-solar panel instead exploits the heat difference between the Earth and the cold night sky.
An anti-Solar Panel is a device that can generate electricity during the night by making use of the heat difference between the surrounding air and the surface of the device that is cooling itself by emitting infrared radiations towards the night sky.
Let us look at how it would work.
Schematics of the energy dispersion towards the night sky. (Image credit: Aaswath P. Raman et al, Stanford University)
These cells work on the principle of the thermoradiative process: an object that is hotter than its surroundings will radiate heat away as infrared radiation. A warm object facing space radiates its heat toward surroundings that are cooler than itself.
The anti-solar panel contains a thermoelectric generator with one side exposed to the ambient air and the other in contact with an aluminium plate. The device harnesses the temperature difference between the Earth and outer space, using "a passive cooling mechanism known as radiative sky cooling to maintain the cold side of a thermoelectric generator several degrees below ambient temperature." The aluminium plate faces the night sky and radiates thermal energy toward it, which lowers the plate's temperature to about two degrees Celsius below that of the lower part of the device, which stays at air temperature. The plate is isolated from the ambient air by a transparent insulating panel that lets the radiated energy pass through while blocking heat exchange.
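To get a feel for why the temperature difference matters, here is a rough, illustrative estimate of a thermoelectric generator's peak power. The Seebeck coefficient and resistance below are made-up round numbers, not measurements from the prototype:

```python
def teg_peak_power(delta_t_kelvin, seebeck_v_per_k=0.05, resistance_ohm=10.0):
    """Peak power (W) of a thermoelectric generator delivering into a
    matched load: P = (S * dT)^2 / (4 * R)."""
    open_circuit_voltage = seebeck_v_per_k * delta_t_kelvin
    return open_circuit_voltage ** 2 / (4.0 * resistance_ohm)
```

Because the power grows with the square of the temperature difference, even a couple of degrees of radiative cooling makes a measurable difference.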
What makes it desirable?
The panels produce about a quarter of what traditional solar panels produce in a day.
The researchers are confident that they can increase the efficiency of these anti-solar Panels. On a large scale, this night-time generator could be extremely significant. It could power electronic devices in remote or low-resource areas that lack electricity at night. These anti-solar panels are cheaper to make and could potentially operate at night as well as in the day.
According to the researchers, these could even run on waste heat left over from industrial processes. That could help achieve carbon neutrality, where carbon emissions are balanced by carbon removal so that no net carbon is released. Of course, those practical applications are yet to be realized. But still, a technology that does not rely on burning fossil fuels for our energy needs is worth exploring. The Earth gives everything to us; contributing to its healing by reducing carbon emissions, even through small-scale efforts, would be a great gift to the whole of humanity.
|
https://medium.com/the-treatise/anti-solar-panels-in-500-words-or-less-ee5d315be258
|
['Asme Iiest Shibpur Student Section']
|
2020-12-17 23:05:20.324000+00:00
|
['Energy', 'Physics', 'Technology', 'Solar Energy', 'Science']
|
Python List Comprehension
|
Python List Comprehension
A quick and easy introduction to Python list comprehension
Photo by Amanda Jones on Unsplash
Introduction
In this article, I want to show you a very useful feature of the Python language: list comprehension. After reading, you will be able to write your code more efficiently and beautifully.
List comprehension is an elegant way to define and create lists based on existing lists or other iterable objects.
Examples
Basic usage
Let’s imagine that we need to create the list of squared numbers from 0 to 10. I think if you are not familiar with list comprehension, you will do it like in the code below.
But wait, we can use the power of Python language, and do it more elegant:
If you check the arr values at the end of both programs, you will see the same values. But with list comprehension, the code looks more compact and is easier to read.
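The two versions might look like this:

```python
# Loop version: build the list of squares step by step
arr = []
for i in range(10):
    arr.append(i ** 2)

# List-comprehension version: the same list in one line
arr = [i ** 2 for i in range(10)]
```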
Structure
The main structure of this operation has the following form:
[some_processed_variable for some_variable in iterable_object]
If condition
Let’s go deeper and add additional functionality to this structure. Suppose we need to get a list of only odd numbers. Now we can do it like in the code below.
We added an additional "if" condition to the end of the structure (of course, we could also do it by setting the range arguments, like range(1, 10, 2)).
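A sketch of the odd-numbers example, with the range-step alternative for comparison:

```python
# comprehension with a trailing "if" condition
odds = [i for i in range(10) if i % 2 == 1]

# the same list without a condition, via the range step argument
odds_via_range = list(range(1, 10, 2))
```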
Nested loops
We can write a nested loop in a list comprehension. For this, we write two loops one after another; the second one is the nested loop. Let's check it with a common example: creating a card deck where each card has a suit and a value.
As you can see, we have two loops: the main one, which iterates over the suits, and the nested one, which iterates over the list of values.
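The card-deck example might look like this:

```python
suits = ["hearts", "diamonds", "clubs", "spades"]
values = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]

# the nested (second) loop runs in full for every value of the first
deck = [(suit, value) for suit in suits for value in values]
```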
We can write more nested loops, but if it becomes difficult to read and understand it is best not to.
Another good example of a nested loop with comprehension is creating a multiplication table. Below we create a list of strings in the format "number1 x number2 = number1 * number2", where both numbers run from 1 to 9.
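A sketch of that multiplication table:

```python
# 81 lines, one per pair (a, b) with both in 1..9
table = [f"{a} x {b} = {a * b}"
         for a in range(1, 10)
         for b in range(1, 10)]
```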
Real example
The final example comes from one of the Kaggle competitions. There was a table with a lot of columns, some of them named in the format "x<some_number>", and I wanted to get all of these column names for further processing. With list comprehension, it was easy. There is a small example of this task and its solution in the next code cell.
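A small, self-contained version of that task (the column names here are made up for illustration):

```python
# stand-in for df.columns in the competition's dataset
columns = ["id", "target", "x0", "x1", "x25", "label"]

# keep only names of the form "x<number>"
x_columns = [c for c in columns if c.startswith("x") and c[1:].isdigit()]
```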
Conclusions
In this article, we focused on the list comprehension mechanism in Python. The main takeaway is that it is easy to use and to understand. But we need to be careful about adding more nested loops and complicated conditionals to the structure: it can start to get messy.
Besides list comprehension, Python has dictionary comprehension and set comprehension. The main difference from list comprehension is the use of {} brackets instead of []. And for a dictionary, we need to specify the key together with the value.
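For completeness, quick sketches of set and dictionary comprehensions:

```python
squares_set = {i ** 2 for i in range(-3, 4)}   # set: duplicates collapse
squares_map = {i: i ** 2 for i in range(5)}    # dict: key and value
```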
|
https://medium.com/quick-code/python-list-comprehension-b0dd894f776a
|
['Yaroslav Isaienkov']
|
2020-12-21 23:55:14.761000+00:00
|
['Python', 'Python3', 'Python List Comprehension', 'Tutorial', 'Python Programming']
|
Beam Delivers a Full Privacy Suite with Eager Electron 5.2 Release
|
Beam offers the ultimate level of privacy accompanied by opt-in auditability, delivering the best of both privacy and regulatory worlds. Once relying on a single protocol, today one can find a bunch of cool tech under the hood such as Lelantus-MW, Mimblewimble, Dandelion, decentralized atomic swaps marketplace and confidential assets, all packed with the best-in-class UX and free online customer support.
Let’s recap the amount of work done during almost 3 years, one milestone at a time.
Agile Atom: the first Mainnet
From the early days in 2018, Beam was conceptualized as a privacy coin that is very friendly to use. We crafted our own implementation of the Mimblewimble protocol where blockchain contains only UTXO, the sender and receiver wallets negotiate the transaction details off-chain and communicate directly. The wallets can either stay online or be helped by a “mailbox” service (we’ve called it SBBS) to enable the message exchange.
If a node broadcast its own transactions to the entire network directly, that would leak the sender's privacy. Dandelion helps to obfuscate the transaction's origin: nodes pass the transaction along several hops before it is randomly broadcast to the wider network.
Beam Mainnet was successfully launched on January 3rd, 2019, on the 10th Bitcoin birthday.
Bright Boson: mobile apps
Time flew, we introduced mobile wallets that scored 4.7+ on the AppStore and PlayStore. We needed a smart way for them to sync with the blockchain after being offline for a while. It turned out that downloading a snapshot ain’t an easy task for mobile, this is where FlyClient came into the picture, preparing all the data in a single file for a mobile app to consume and get in sync fast and reliably in the background.
Also, our users were telling us that it takes too long for a wallet to sync with the blockchain. This is how FastSync was invented, reducing the nodes’ sync times by an order of magnitude.
Clear Cathode: Atomic Swaps
The next step was to enable everyone to get in and out of Beam in a confidential and decentralized way, straight from within the wallet. For this, we brought in Atomic Swaps with BTC, LTC, and QTUM. The in-wallet Atomic Swaps are complemented with a decentralized order book, by way of SBBS, the same mechanism used for addressing.
The liquidity has been bolstered with the open-source API for Atomic Swap market making. These are major steps towards a full-fledged decentralized exchange, extending the Beam wallets beyond just a friendly wallet for sending and receiving privately!
Double Doppler: foundation revisited
We laid down a lot of infrastructure for web wallets and many more things to come. From the UX perspective, we’ve introduced messages, pushed to the network, so that wallets will know about the exchange rate changes or when the new version of the wallet came out. Later on, these mechanisms will be called “oracles”.
Eager Electron: the way to DeFi
Confidential Assets issuance made possible on Beam. These share similarities with the Ethereum ERC20 tokens, but unlike ERC20s, Confidential Assets are first-class citizens (e.g. their UTXO are fairly indistinguishable from Beam UTXO on the blockchain) in the Beam suite of products.
Since Mimblewimble as a privacy layer is prone to flashlight attack, we’ve adapted LelantusMW from the Lelantus protocol by Aram Jivanyan. These Max Privacy transactions take the Beam anonymity set to a massive 64k!
Namely, if 64k UTXO belonging to different people are put into a jar and shuffled so that every one of these folks can withdraw a fresh UTXO of the same value they put in, that would render transactions on Beam practically unlinkable.
Following Dandelion and Mimblewimble, Lelantus turned Beam into the best privacy coin with the sweetest UX on the market: the wallets smartly pick the coins, breaking up the first-come, first-served heuristics.
The shielded pool also allowed us to implement truly offline transactions. Making it possible for transactions to be sent without the sending and receiving wallets needing to talk.
In the coming 5.2 release, Atomic Swaps with Dash and Doge have been released.
Fierce Fermion: the future is now
Atomic Swaps with ETH and DAI are underway, expected in 5.3, with plenty more to look forward to on the horizon.
Embracing all the building blocks mentioned above, Beam launched BeamX, the pre-Testnet experimental network for confidential DeFi. We have elaborated on BeamX, and what it brings to Beam, in a recent article here. You can expect algorithmic stablecoins, lending, Uniswap-like AMMs, NFT tokens, and many other financial instruments to appear, powered by the Beam blockchain.
To complement all the above numerous other innovations are also in the works, including a full-powered DEX, oracles, side-chains, wrapped assets and bridges with Ethereum and Polkadot.
Conclusion
Beam continues to progress far beyond just privacy, into the realm of an elegantly-syndicated suite of confidential financial applications. With much of the privacy coin mission accomplished, it is time for development pushing Beam beyond the competition, increasing the use cases and extensibility, promoting a private future with confidential DeFi.
|
https://medium.com/beam-mw/beam-delivers-a-full-privacy-suite-with-eager-electron-v5-2-release-aaf2581edd33
|
['Sasha Abramovich']
|
2020-12-08 17:17:42.104000+00:00
|
['Privacy', 'Cryptocurrency', 'Monero', 'Defi', 'Bitcoin']
|
How I Learned Romanian in 37 Easy Steps
|
How I Learned Romanian in 37 Easy Steps
Hey, now I can actually read this!
Step 1 — Speak Italian and Spanish and then laugh and dismiss with a wave the Romanian language. After all they’re all Romance languages, no? Practically all the same.
Step 2 — Meet some Romanians in the United States, ask ’em to tell you a bunch of words. Only remember one — opt — meaning the number eight. Really. The first day I showed up in Romania, that’s the only word I knew.
Step 3 — Go to Romania, meet 5,012 people who all speak English (naturally) and therefore teach you no Romanian at all.
Do not buy any Romanian-English dictionaries in Romania for some reason (LOL).
Step 4 — Go back to USA, look in every bookstore in your city, realize while there’s plenty of dictionaries and courses and verb lists for Portuguese and Russian, there’s nothing for Romanian. Nada, zip, zilch, zero.
Go onto Amazon dot com and find literally the only Romanian-English dictionary available, first printed in 1946 and never updated.
Step 5 — Every day at work, print out one article from an (online) Romanian newspaper. Haul out your antique dictionary and attempt to translate it word for word.
Note: This was especially enjoyable because the fun-loving Romanian powers that be decided to SWITCH UP the spelling of their language after 1989. Har har, my fine fellows!
Step 6 — Get half the words found and starting to be learned but be utterly confounded for hmm, I don’t know, a year or TWO about how in the world your dictionary (seemingly) doesn’t have half the words appearing in a mainstream newspaper.
Step 7 — Go to Romania a few more times, speak only English with everyone and therefore learn just a handful of words.
Step 8 — Finally find out that Pimsleur has a Romanian course. Yay! You park that puppy in your car stereo and learn Romanian on your way to work every day. Then you find out there’s only ONE lesson available and so you just learn how to say buna ziua (hello) with the right accent and then oh well, too bad so sad.
Step 9 — Move to Romania finally. No more visiting for me, baby!
Step 10 — Begin to go to the store by myself and always be extra super sure to maneuver myself so I can read the digits off the cash register because I can’t understand the so-called “numbers” the lady is telling me. Say buna ziua and if she tries to engage in small talk just nod, smile and mumble.
Step 11 — Finally realize that the “official” way Romanians say numbers is TOTALLY DIFFERENT than the way Romanians actually say numbers.
For example: pai-spre-zece is the OFFICIAL way to say 14. The “real” way Romanians say it is pai-shpay.
Step 11B — Be sure to never, ever order TWO of anything because it’s a number that’s “masculine” for some things and “feminine” for others and I don’t know which is which. So even if I want two of something, I always have to ask for three.
Step 12 — Start talking to gypsies, mostly beggars who approach me first. They’re the only ones who are patient enough to sit around and speak to me in Romanian.
Step 13 — Take my first train ride with nobody helping me.
Step 14 — Get into colossal arguments with my landlord lady, who doesn’t really speak English and is damn sure unhappy about my apartment cleaning skills. At one time she orders me to clean the stove with a toothbrush LOL.
Step 15 — Finally figure out that before 9am I’m supposed to say buna dimineata and that it’s dee-mee-NATZA not dee-mee-NEH-ATZA. Likewise buna seara (for after 6pm) is SEH-RA not SE-AH-RA.
Step 16 — Move to a street with a name ending in “ului” so finally, FINALLY master how to say that after 5,812 times of riding in a taxi and having to give my address to the driver.
Step 17 — Continue meeting Romanians (including girlfriends) who speak English better than I do, thus corroding my already rusty brain and its ability to learn a new language.
Step 18 — Stare at my TV which has no cable or satellite and only receives one channel (PRO TV — television for PROS). 90% of the programming is American shows with subtitles, which helps a little.
Grit my teeth and force myself to watch Romanian “comedies” like Trasniti in NATO (roughly “NATO hijinks” about some Beetle Bailey type soldiers who clown around in the barracks) and La Bloc (the Apartment Building — about a crew of “wacky neighbors”).
Step 19 — Move to another city, get cable TV and a girlfriend who loves shows like Surprize, Surprize (don’t ask — it’s horrible) and finally Schimb de Mame (literally “Mother Exchange”) which is actually pretty good. I get to see the inside of everyone’s apartments (on the TV) and realize I’m not the only one who has icons all over the wall and lots of LACE needlepoint stuff draping the tables and other bits of furniture.
Step 20 — One day be at the store and the total is 6 lei and give the lady 11 lei and when she gives me a quizzical look, formulate my VERY FIRST ROMANIAN SENTENCE EVER which was “so the change will be a 5 lei note” and she smiles, understands and does indeed give me 5 lei back and I skip home walking on sunshine.
Note: Actually this was during the “good old days” when Romanian money all had a billion more zeroes on it. But you get the idea.
Step 21 — Meet the parents of my girlfriend, who I mistakenly think don’t speak English so be “forced” to drink liquor with her dad and exchange witticisms and banter and then find out when I’m pretty well sloshed that ALL ALONG (hee hee!!) the mom speaks English just fine. Luckily I kept the dirty sex talk to a minimum — I THINK.
Step 22 — Begin showing off my new mastery at Romanian, mostly by engaging in conversation with taxi drivers. They in turn universally think I’m Hungarian. It takes me about six months to learn that I speak Romanian just like Marko Bela and so therefore I must be Hungarian like he is.
Note: Later I get to do impressions of Marko Bela for the amusement of my friends and admirers — KA-CHING!
Step 23 — Make friends with a Romanian guy, who speaks English beautifully, and meet a friend girl of his, who doesn’t. Those two start to date (or almost start dating) and then he suddenly gets a job in another city and so “passes” her onto me.
Yay, so now I’ve got my very first friend who DOESN’T speak English!
Step 24 — Continue to meet with her, get to know her roommate, cousins, brother, uncle, mother, father and assorted other people and find out not a single one of ’em speaks English at all. They’re all from Maramures where apparently it’s illegal to learn English or something. Oh well, their loss and my win!
Finally go to Maramures and go out in the town, meet a whole bunch of new Maramureseni people and find out THEY TOO do not speak English, not one lick of it. Speak Romanian until my tongue falls out of my head.
Step 25 — Keep talking to taxi drivers and cackle with evil delight as occasionally I find a driver who likes to rant and rage against either foreigners and/or Hungarians and all along he doesn’t know ME I’m not Romanian! Ha haa!
Note: The way to do this is LOTS OF MUMBLING. Lots of “da” and mumbling and nobody will ever find out *evil cackle*
Step 26 — Start buying children’s books in Romanian language like Capra Cu Trei Iezi, which was written by a Romanian guy and now I know why the hell it was never translated into English — it’s extremely gruesome and bloody and would scare the crap out of little American bambinos.
Step 27 — Pick up a copy of Romanian poems (Eminescu) sigh and realize I’ll never understand it in 10,000 years. Go to his special tree in Iasi though and take my picture in front of it and consider that a win.
Step 28 — Take a million trains to every part of the country from Craiova to Oradea to Botosani to Constanta and of course Bucharest. Engage in many conversations with the colorful cast of characters riding the rails and have many fine adventures, some of which I can never talk about, like the “incident” with the bisexual man. AHEM!
Step 29 — Finally get confident enough in Romanian to engage in the greatest sport played in this country, otherwise known as the Righteous Scolding.
In Romania, there’s a “correct” way of doing everything from putting on your socks to how to ride a bus and whenever anyone steps out of line, this is the time for a Righteous Scolding. You get to puff up your shoulders, use a very indignant tone of voice, perhaps some good finger waggling and lambaste the poor rule breaker with a good Righteous Scolding.
Step 30 — Speak Romanian even with Romanians who speak English and listen to them tell you over and over and over again that you don’t speak their language very well.
Meanwhile they are free to butcher English of course and argue with you that “am fost la mall” is “I went AT the mall” and just be smug as hell about how superior their knowledge of English grammar is to your own.
Step 31 — Continue speaking Romanian to anyone and everyone, including an old man who literally has no teeth (sweet guy though, I loved him), gypsies, beggars, country bumpkins, people from Oltenia (who have their own special past tense for verbs), people in Bucharest, people from the Banat and of course, Moldovans — all of whom have their own special accents, slang, pronunciations and even totally different words for ordinary things.
Step 32 — Go to Bucharest and meet one of the actors who was in La Bloc and tell him how the show helped you learn Romanian and what a shitty show it was and he laughs and agrees 1000% and sits down and drinks a beer with you and tells you many awesome anecdotes.
His character’s name on the show, btw, was “The American” and you find this ironic and amusing NOW but extremely frustrating and bizarre back when you were watching it.
Step 33 — Start getting stopped on the street and asked for directions. Grin with supreme delight as not only do you know where the thing is but you can explain how to get there in Romanian! Yay.
Since Romanians are genetically the WORST direction givers on the planet, I consider myself a hero for my valiant service in this regard.
Step 34 — Start learning Russian and then a whole HOST of the weird parts of Romanian grammar and syntax start making total sense to you.
Step 35 — Begin helping your Hungarian friends and exchange students from other countries with Romanian.
Step 36 — Go to Bucharest and have someone think you are actually a native from Transylvania. Yay!!! You win! You finally speak Romanian so good people think you’re FROM Romania.
Step 37 — Tell everyone you know about how you officially speak Romanian now and have been crowned the new King of Romania and have absolutely nobody be impressed whatsoever LOL.
But hey, I’m happy and that’s what matters.
See? There you go. Wasn’t so hard. Only took about 10 years :D
|
https://medium.com/@lifeinromania/how-i-learned-romanian-in-37-easy-steps-b4f185f173da
|
['Sam Ursu']
|
2017-11-27 08:09:19.025000+00:00
|
['Romania', 'Language Learning', 'Storyofmylife']
|
The Final Tango for The Simpsons and Acclaim Entertainment
|
Part 13 of a 25-part series looking back at every Simpsons video game ever made.
As I was a kid in the eighties and nineties, I was there for the tail end of comedic animated violence in its full glory. There were countless television shows that repackaged old cartoons from Warner Brothers, Disney, and Hanna-Barbera, among other studios, to fill half-hour slots on television stations that needed to pad out their daily programming blocks. This is how I usually caught what used to be short animated films that played before theatrical releases. I learned how Mickey Mouse used to be a mischievous fiend before he was watered down and turned into a corporate mascot, and that Warner Brothers shorts are rife with gun-totin’, anvil-droppin’, quip-spewin’ characters who routinely gave each other black eyes and concussions that disappeared in the next scene. I also learned how casual racism and sexism were thrown in alongside the casual violence, making rewatches of these so-called “classic” cartoons a hard pill to swallow.
But there’s one pair of characters from the era who are especially relevant to this chapter’s game. They were titans of cartoon violence, appearing in over one hundred sixty shorts across nearly four decades. These mute animal protagonists were caught in an eternal chase where a cat hunts down a mouse and always comes close to catching him, but is ultimately outwitted at every turn. I write, of course, of Tom and Jerry. Their film shorts were created and developed by Hanna-Barbera, and distributed by MGM for decades. Eventually, the merger parade caught up and threw all of Hanna-Barbera’s works under the Warner Brothers umbrella, and now Tom and Jerry live alongside their once-competitor characters such as Sylvester the Cat and Tweety Bird. The old Tom and Jerry shorts are available online at the Web Archive, and while the violence in their initial cartoons isn’t too horrifying, their means of maiming each other gets worse over the course of the series. There’s also the aforementioned casual racism in the form of the Mammy caricature who routinely appears to scold Tom for his messes. The cartoons can still be enjoyed, even with an eye on the problematic nature of old media.
Like me, many of the creators on The Simpsons grew up watching Tom and Jerry on their tiny tube televisions. All of the writers in the early days of the show came from traditional live action sitcoms or talk shows, and they slowly learned that they could get quite a lot wackier in an animated medium. The Simpsons themselves were often embroiled in violence, though it was usually Homer getting into a ridiculous fight or Bart getting beat up by bullies. But the writers seemingly wanted to take it to the extreme without breaking the reality of the show (Halloween episodes notwithstanding). And so they created their altered reality: The Itchy & Scratchy Show. Itchy was the stand-in for Jerry the mouse, and Scratchy the counterpart to Tom’s wily cat in pursuit. They shared the same dynamic wherein Scratchy is always trying to catch Itchy, but while Tom and Jerry engaged in light-hearted torture, Itchy & Scratchy outright tried to murder one another, with Itchy usually succeeding with one macabre plan or another. It was not uncommon for Scratchy to be maimed, melted, torn apart, or have internal organs viciously removed. Bart and Lisa love it when they watch this of course. The cartoon violence is dealt with head-on in the second season episode, “Itchy & Scratchy & Marge.” Marge tries and briefly succeeds at getting cartoon creators to ban violence on the show for the sake of the children, but this is ultimately overturned when Marge is unable to justify censoring one artistic medium while sparing another.
While TV writers wrestled with the ethics of cartoon violence, countless children played countless violent video games on their home consoles, and I was among them. Really, Itchy & Scratchy and their progenitors were quite tame by comparison.
Chasin’ down the babies.
Itchy & Scratchy translated to video game antagonists quite easily, first showing up as enemies with their own themed level in Bart’s House of Weirdness and then again in Bart’s Nightmare. Someone at Acclaim then decided they were interesting enough to star in their own games, making their first headline appearance in Itchy & Scratchy in Miniature Golf Madness on Game Boy. As detailed in chapter 11, that game featured Scratchy as the protagonist who kills Itchy many times in the game, an odd reversal considering Itchy is always the one who kills Scratchy on the television show. That’s where The Itchy & Scratchy Game comes in. With Itchy as the star, players could finally unleash the kind of cartoon mayhem they’d been watching on The Simpsons for over five years.
Acclaim needed a studio to develop this new Itchy & Scratchy title, and for one reason or another (likely the bottom line) they turned away from their previous collaborators. They wound up making the deal with Bits Corporation (later known as Bits Studios), a game developer based in London. They’d only just started in 1990, but like other studios in this series, their focus was almost entirely on ports and licensed games. They took a stab at licensed fare such as Spider-Man, Terminator, and oddballs like a game based on the 1994 version of Mary Shelley’s Frankenstein. And like the other companies, they fared a lot better in the days of 8- and 16-bit games, with seventy-five percent of their catalogue released before the dawn of the 3D era with the PlayStation’s debut in 1995. Bits Corporation continued to release games for about another decade until the company was dissolved by their parent, leaving no trace as their assets and properties were liquidated into obscurity.
Duel of the fates.
Itchy & Scratchy have no lore to speak of, like those previously cited cartoon inspirations. They are merely vessels of the age-old relationship between predator and prey. But of course in this case, the mouse is the vastly superior predator, relentlessly catching and mutilating his cat prey. And this is what the video game presents for players: play as Itchy, and kill Scratchy… a whole lot. That’s all. I understand some players don’t quibble over narrative, but this player always appreciates any thought put toward the underlying story as a character is guided along from one side of the screen to the other. The Itchy & Scratchy Game just presents a series of levels and enemies with no particular motive beyond cartoon murder, which I suppose is appropriate for the characters but does put into question the choice to turn The Itchy & Scratchy Show into a video game in the first place.
The game plays out in a repeated cycle of two stages: first, Itchy must fight through a platforming area full of Scratchy and his myriad weapons, environmental hazards, and small wind-up copies of Scratchy that hunt down Itchy like an army of possessed demon dolls. They’re easy to kill but are just a constant annoyance, always appearing on whatever platform Itchy is standing. Itchy can find a dizzying array of gruesome weapons scattered throughout the stages, in addition to power-ups like extra lives, cheesy speed boosts, health kits, and temporary invincibility. The mini-Scratchys also provide the ammunition for Itchy’s ranged weapons in the level, all of which vary but are identical in their function. They’re simply objects to throw at Scratchy, although they’re not really necessary until the second stage. In fact, the primary strategy of the first stage is to hoard the ammunition so Itchy is better prepared for the boss fight in the second stage. I mean, I guess the player is also supposed to kill Scratchy until his health bar is at zero, but it’s an easy and unexceptional task to complete.
The real challenge in the game — besides suffering Scratchy’s horrible shriek of pain — is the boss fights. They begin fairly easily, but with each successive level the boss fights become more and more challenging, introducing elaborate contraptions and hazards that Itchy must avoid while trying to toss weapons at Scratchy’s vehicle. But the challenge comes entirely from the level design. The strategy is always the same: hang back and lob objects at Scratchy. The only difference is the level of dodging the player has to perform in between those lobs.
Hey! Don’t scratch the turtle.
So it’s safe to say that the gameplay wasn’t exceptional. There isn’t even an ending scene or text to congratulate the player. It all just ends with a credit roll. This makes sense on some level: each level of the game is presented with a title card as if it’s a self-contained episode of The Itchy & Scratchy Show. However, when each level is just the exact same gameplay with a change of scenery, well, it kind of dulls the impact. But that touches on the areas where the game succeeds, namely the art and theming of each level. Although the animations and backgrounds are kind of bland and stiff, they’re also big and accurate, a feature that is noteworthy in contrast with previous Simpsons games on home systems. The series thus far has generally been bad at actually capturing the art design from the show. This game is arguably the first outside of The Simpsons Arcade game to really nail down the look of the animated characters. The backgrounds and enemy characters are also well-animated and bright, although the environments give the vibe of props on a stage rather than real parts of the world. Each of the seven levels has a typical video game level theme, ranging from the prehistoric “Juracid Bath” to the mechanized “Disassembly Line.” But this is what players have to look forward to if they attempt to complete the game: it looks good. That’s… something.
Given how brief and simple the game is, one has to wonder if it’s as they originally intended. I can see it now: Acclaim is shipping licensed games left and right, but these are licenses they’ve been peddling in the video game space for years. Each game gets a little more expensive and a little less lucrative. And with The Simpsons, it was clear they’d started cutting costs since the last couple of Game Boy games and Virtual Bart on consoles, where the developers clearly went lean on the amount of content they squeezed into the games. Furthermore, the good folks at The Cutting Room Floor — a web catalog of the hidden content and debug features in video games — revealed that the SNES version of the game actually contains dialogue text from Bart, Lisa, and Krusty the Clown, indicating that the game was originally designed to include framing cutscenes where Krusty introduces the levels and Bart and Lisa comment on them as if they’re watching episodes of the show. This dialogue isn’t stellar, including lines such as “Woo Woo Woo!!! This is the one when Itchy is a red Indian,” so these may only be leftover bits from ideas that were tested early in the design process and abandoned. But really, it’s unlikely that these scenes would have helped improve the gameplay.
Itchy’s cartoon invincibility is no more.
There are other signs that The Itchy & Scratchy Game wasn’t quite all it could have been. For one, the version for the Sega Genesis that went as far as being reviewed in game magazines like GamePro didn’t actually make it to store shelves. It’s not clear if it even made it to the factory, but it’s likely that pre-release copies were shipped to reviewers before Acclaim decided it wasn’t worth the expense to distribute a Sega Genesis version in early 1995. It must have been one of these review copies that allowed savvy web pirates to create and distribute a ROM for players to play to this day. But Sega players did get an official release of the game, just on a much tinier screen. The Sega Game Gear game shipped alongside the SNES version, presenting a crunched down take on the game that strips away the game’s one positive quality: its art. The Game Gear characters are smaller by necessity but suffer for it, looking like the characters were thrown into a hydraulic press and squished down to fit. It’s cute in a way. But I’ll tell you what’s not cute — the game has no boss fights. You know, the one challenging aspect of the console games? Yeah, not here at all. The player’s only objective is to hunt down and kill Scratchy in each of the levels and move along until it ends. Also, the underwater high-jinks of level 4’s “The Pusseidon Adventure” are no more — the level was cut. So tinier sprites, no bosses, and one less level… it’s a tough sell.
Roasted on the Game Gear.
The Itchy & Scratchy Game is a notable last game in several categories: the last 16-bit Simpsons game, the last Simpsons game from both Acclaim and Bits Corporation (who only worked on the one), and really the last game from that intense early period of The Simpsons. Season six of the show was in full swing, showcasing episodes from the peak of the classic era. Legendary episodes like “A Star Is Burns” and “Lisa’s Wedding” premiered around the launch of this game into stores, during a time when the show was at its wackiest but the merchandise had begun to die down. There is no clearer sign than the fact that the world went on without a new Simpsons video game until 1997, and even then the game only released for Windows and Macintosh computers, leaving console players out in the cold for six years. Of course, given the reception to the dubious quality of Simpsons games from the early nineties, it might have been a relief.
|
https://noiselandco.medium.com/the-simpsons-and-acclaim-entertainments-final-tango-71512ebd7cc5
|
[]
|
2020-11-16 18:48:51.707000+00:00
|
['The Simpsons', 'Video Games']
|
नदिया बैरी हुई (The River Turned Hostile)
|
yet another kid, tired of everyday econ classes
|
https://medium.com/@mystichues/%E0%A4%A8%E0%A4%A6%E0%A4%BF%E0%A4%AF%E0%A4%BE-%E0%A4%AC%E0%A5%88%E0%A4%B0%E0%A5%80-%E0%A4%B9%E0%A5%81%E0%A4%88-2573fcffd8a
|
[]
|
2020-12-19 16:29:46.242000+00:00
|
['Love', 'Broken', 'Windows', 'Soul', 'Lost']
|
My Medium Predictions for 2021
|
FUNNY
My Medium Predictions for 2021
It Could Happen
Photo by Sincerely Media on Unsplash
In 2021:
Anyone who posts about how much money they are making on Medium will have all of the money siphoned out of their bank account.
Anyone who posts advice about how you can make more money on Medium will have all of the money siphoned out of their bank account and transferred to your bank account.
Farts, Eyerolls and Gasps will be added to Claps as a metric. If you don’t like a post you will be able to give it up to 50 Farts, Gasps and/or Eyerolls. (This will have no impact on how much the writer gets paid but giving a post you really hate 50 farts will be fun.)
Medium will stop notifying you whenever someone you follow has highlighted something you have highlighted. Instead you’ll be notified whenever they have an orgasm while reading Medium erotica.
If you clap for a Medium post without actually reading it, a hand will come out of your screen and slap you.
If you post something that is racist, sexist, fat-shaming, homophobic or misogynistic, the little clap hand will give you the finger.
If you post a lie on Medium, the nose on your Medium Profile Portrait will grow.
Each day a random Medium creator will be elevated to the position of Curator for a Day and will be given the power to curate anything they want. (This won’t be a paying job, but curating all of your Medium pals will be fun.)
Glitter Highlighting will be introduced. If you like something, highlight it. If it makes you want to dance? Glitter highlight it.
Green with Envy Highlighting will be introduced for highlighting sentences you wish you’d written.
A special Sarcasm Font will be introduced for posting responses, so readers will know you’re being snarky instead of sincere.
Medium will change the algorithm so that nobody gets paid but poets.
There will be a new Turds for Trolls policy. The Algorithm knows where you live, troll. It will dispatch a Special Squad of Feces-Flinging Medium Monkeys to visit you and Curate your Manners.
The stock photographs available to accompany Medium posts will be limited to photos of squirrels, chickens and wombats. Except for #ferretFriday, when they will be limited to photos of ferrets.
And finally?
Ev Williams will admit that the algorithm is actually a dude named Al Gorithm and that Curators are actually corgis, as I disclosed in this humor piece:
BEST WISHES FOR A HAPPY AND PROSPEROUS 2021 TO ALL MY MEDIUM PALS.
( Writing Coach and Medium Sherpa Roz Warren writes for everyone from the Funny Times to the New York Times, has been in 13 Chicken Soup for the Soul collections, and is the author of Our Bodies, Our Shelves: Library Humor. Drop her a line at [email protected].)
|
https://medium.com/the-haven/my-medium-predictions-for-2021-d7e2dc6bac78
|
['Roz Warren']
|
2020-12-21 17:38:37.742000+00:00
|
['Writing On Medium', 'Humor', 'Funny', 'Medium', 'Roz Warren']
|
“Happier, healthy people are my greatest reward”
|
She sizes us up as if to find out whether we are worth her valuable time. We pass the test and sit down to chat.
“The first two years were very difficult for me. I struggled with motivating villagers. How do you make people who have never heard such things before understand the importance of taking food supplements for stronger health, boiling water to get rid of contamination, sleeping under mosquito nets to avoid insect-borne diseases, and using toilets in order to avert stomach conditions?” Aunty Perp explains.
“People were worried about their daily meals, and stuck in their customary lifestyles and ancestral belief systems. Pregnant women, for example, did not like to use the health center services before and during birth. Once their child was born, they had no time to dedicate to their own or their baby’s health. I couldn’t blame them, because if they didn’t get back to work as early as possible, they would go hungry. Young mothers didn’t have the time or energy to breastfeed, and so many babies were malnourished, with some never living to see their first birthday,” she recalls.
Her strict expression betrays a big heart. Aunty Perp is passionate about changing the lives of her community for the better. Old habits die hard, and it takes dedication, good training and continued effort — including sweat and tears, at times — to gradually chip away at them. Perp is proud of the training she received through WFP’s support.
“I won’t lie, I do feel down sometimes, like nothing works,” Perp reveals with a glimmer of uncertainty in her eyes. “For example, when I have to run after people who can’t come to take their nutrition supplements on time because they have to work in the field. What keeps me going is the belief that I am doing the right thing. I want to help my people leave malnutrition and infant mortality behind them.”
Villagers trust that Aunty Perp does her best to keep them healthy. Photo: WFP/Vilakhone Sipaseuth
Things do change, if slowly. In the whole of Laos, half as many young mothers die today as only a decade ago, whereas mortality rates of children under age 5 have declined by 60 percent compared to 20 years ago.
So, too, in Louangtong village. Villagers today understand more about how their actions influence their health. They are more cooperative in getting their children immunized and take regular nutrition supplements to keep them healthy. As a result, infant mortality in the community has gradually declined.
Perp gives us another one of her famous stern looks as she watches us board our boat back to Pakbeng. “Happier, healthier people are my greatest reward,” she says — and we could swear there was a hint of a smile playing around her lips.
|
https://medium.com/world-food-programme-insight/happier-healthy-people-are-my-greatest-reward-604681177feb
|
['Wfp Asia']
|
2019-09-13 10:45:49.679000+00:00
|
['Nutrition', 'Laos', 'Humanitarian', 'Breastfeeding', 'Asia']
|
LEGAL REGULATION PROBLEMS OF CRYPTOCURRENCY DERIVATIVES
|
In the context of the transformation of the economy, attracting investments using traditional financial instruments is becoming more and more difficult — for most companies, the opportunity to attract financing through classical instruments is limited or impossible (small and medium-sized businesses now have less access to bank lending, to exchange infrastructure, etc.). The main reasons are the tight regulation of the market, a large number of intermediaries, and the high costs of issuing and placing financial instruments. For example, working with classic stock market instruments requires the status of a qualified investor, compliance with regulatory requirements for the turnover of financial instruments, and attention to the peculiarities of concluding transactions on stock markets and of structuring transactions with foreign elements. These problems can be seen in many international markets.
The transformation and “digitalization” of business create the need to issue a new type of asset. With the development of distributed ledger (blockchain) technology, a new class of assets has appeared as a solution to these problems — crypto assets — along with financial instruments known as “cryptocurrency derivatives”, such as futures, contracts for difference (CFDs, i.e. cash-settled futures transactions), and options.
Cryptocurrency futures are standardized contracts, traded on crypto exchanges, to sell or buy an underlying asset (cryptocurrency) in the future at a specified price. Generally, futures are priced off the spot (current) price, and the difference between the agreed contract price and the spot price at settlement is the buyer’s profit or loss. A futures contract can be based on any asset; it does not matter whether the asset is real or has no material characteristics at all, being simply a quantitative indicator such as an interest rate or an index.
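As a quick illustration of the settlement mechanics described above (the prices here are hypothetical, not taken from any exchange), the buyer's profit or loss on a cash-settled futures position can be sketched as:

```python
def futures_pnl(contract_price: float, settlement_price: float,
                quantity: float = 1.0) -> float:
    """Buyer's profit or loss on a cash-settled futures contract.

    Positive when the settlement (spot) price ends above the agreed
    contract price; the seller's P&L is the mirror image.
    """
    return (settlement_price - contract_price) * quantity

# Hypothetical example: buy one bitcoin futures contract at 30,000
# and settle when the spot price has risen to 33,500.
print(futures_pnl(30_000, 33_500))  # 3500.0
```

The same function with the sign flipped gives the seller's side, which is why a futures position always nets to zero across the two counterparties.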
Another example of a cryptocurrency derivative is the ZrCoin crypto token — a derivative financial instrument based on the real industrial production of synthetic zirconium dioxide using “green” technologies. By purchasing ZrCoin tokens, investors finance the construction of a new production facility. Synthetic zirconium dioxide acts as the underlying asset. Thus, the ZrCoin derivative is an option contract for the sale of zirconium dioxide in the form of the intangible asset ZrCoin, which includes a buyback right (put option) allowing the holder to sell the zirconium dioxide back at a specified time at an agreed price.
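The buyback right described above behaves like a standard put payoff. A minimal sketch (the strike and spot figures are invented for illustration and are not ZrCoin's actual terms):

```python
def put_payoff(strike: float, spot: float) -> float:
    """Intrinsic value of a put option at exercise.

    The holder may sell the underlying at the strike price, so the
    payoff is max(strike - spot, 0): positive only when the market
    price has fallen below the agreed buyback price.
    """
    return max(strike - spot, 0)

# Hypothetical: right to sell back at 30 per unit.
print(put_payoff(30, 21))  # 9  -> exercising is worthwhile
print(put_payoff(30, 35))  # 0  -> option expires worthless
```

This floor on the resale price is what makes such a token more than a simple prepayment for a commodity: the holder keeps the upside while the issuer absorbs the downside below the strike.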
At the moment, several exchanges are known to have admitted to trading cash-settled derivative financial instruments whose underlying asset is bitcoin. For example, the Chicago Mercantile Exchange (CME) has allowed bitcoin-based futures to trade, and the Cantor Exchange will offer bitcoin-based binary options to traders. A similar product is offered by the Cboe Futures Exchange (CFE).
Cboe Options Exchange is the Chicago Board Options Exchange. These financial instruments were launched with the aim of reducing the risks associated with working on crypto-exchanges. At the same time, some exchanges have issued derivatives with bitcoin as the underlying asset since 2014 — TeraExchange and the North American Derivatives Exchange (NADEX) (available only to qualified investors; trading takes place on the CME Group and Cboe Options Exchange).
At the same time, in many countries of the world, the turnover of cryptocurrency derivatives is not regulated by legislation, but the market demand for this is high. Due to public law and systemic risks, buying and selling altcoins for fiat money is highly risky, so the liquidity of altcoins is mainly provided by the possibility of withdrawing funds back to bitcoins. In this regard, as the altcoin market grows, the speculative bitcoin market will be transformed into a market for rights of claims and exchange obligations. Accordingly, regulation should be aimed at the legal regulation of the rights of claims, the object of which is cryptocurrency and tokens.
Bitcoin is currently the underlying asset providing value to alternative virtual currencies and derivative products. In this regard, it is a fair point of view that bitcoin is not a means of payment and exchange, but rather a measure of value. Given the use of smart contracts in this area, we note that a smart contract is not a new type of civil law contract, but in fact, a derivative, where the underlying asset is bitcoin, and the derivative part is determined by the market value of the token. So, the turnover of tokens is the turnover of financial derivatives.
In addition, some tokens with a high degree of probability can be recognized as derivatives if the price of tokens sold as part of the ICO is tied to financial products or is determined by market indicators at a certain time interval or events that should occur in the future.
In this regard, it is necessary to pay attention to the exchange regulation of the circulation of claims for a new class of assets and to revise the legal provisions on financial derivatives, as well as on corporate governance (within the framework of such quasi-corporate procedures, tokens appear), project financing and partnerships.
The development of the derivatives market was the result of active innovation in the financial market, which was associated with the expansion of investment of fictitious (financial) capital that does not function directly in the production process and is not loan capital. The immediate reason for the emergence of derivatives was the increased mobility of the rates of traditional securities, foreign currencies, and interest rates on borrowed funds. The emergence of derivatives was due to the need for economic entities to manage risks by creating mechanisms for hedging the risk of possible changes in prices for underlying exchange-traded assets and thus reducing their own financial risks.
In April 2004, the EU adopted the Markets in Financial Instruments Directive (MiFID, Directive 2004/39/EC) aimed at strengthening the legal framework for the regulation of investment services and financial markets and establishing fundamental principles for regulating the market for financial instruments, including general requirements for admission of derivatives to trading — the instrument must allow an adequate assessment of its value, and the derivatives agreement must contain an effective means of resolving disputes.
As a result of the 2008 global economic crisis, financial sector reforms were implemented, which tightened the requirements for transactions in the financial sector. In August 2012, Regulation 648/2012/EC of the European Parliament and of the Council came into force on OTC derivatives, central counterparties, and trade repositories.
In addition, the adoption of the new Directive 2014/65/EU of the European Parliament and of the Council on Markets in Financial Instruments (Directive 2014/65/EU) was a key stage in the reform.
One of the significant innovations was the requirement that a portion of derivatives transactions be concluded on regulated platforms, with the competent authorities of the Member States empowered to impose administrative or criminal sanctions for violations of the relevant provisions of the Regulation or Directive and to supervise fulfillment of the trading obligation. This trading obligation requires derivatives contracts to be concluded on one of the types of regulated venues (regulated markets, multilateral and organized trading facilities, and similar platforms registered in third countries). In addition, information on all OTC derivatives transactions carried out in Europe must be provided to trade repositories and made available to regulatory authorities, including the European Securities and Markets Authority (ESMA), in order to monitor derivatives turnover, and transactions in standard derivatives contracts must be cleared through an approved central counterparty.
In January 2018, the EU completed a major reform that brought into force two new, closely interrelated legal acts — the second Markets in Financial Instruments Directive (MiFID II) and the Markets in Financial Instruments Regulation (MiFIR). Western doctrine directly qualifies these legal acts as the "backbone of financial regulation".
In accordance with Art. 2 of the Markets in Financial Instruments Regulation, "derivative financial instruments" means financial instruments as defined by Directive 2014/65/EU, while the Directive uses a broad concept of a financial instrument, covering not only securities but also derivatives and certain other assets. According to the Markets in Financial Instruments Directive, the attributes of a derivative are:
the corresponding type of contract (option, futures, swap, etc.);
the presence of one of the underlying assets covered by MiFID II;
settlement terms typical for financial derivatives (as a rule, these are cash-settled, but under certain conditions deliverable derivatives can also qualify).
Commodity financial derivatives can be deliverable, but they need to be traded on an organized venue to qualify as financial instruments. At the same time, MiFID II and Delegated Regulation 2017/565 do not include services, currency, rights to real estate, or intangible assets among eligible underlying assets. Derivatives on these underlying assets are not financial instruments and are not subject to MiFID II regulation. Accordingly, derivatives with underlying crypto-assets do not fall under the definition of financial instruments unless those crypto-assets themselves qualify as financial instruments.
Thus, the products of online platforms selling digital currencies that interest European regulators, such as binary options and contracts for difference (CFDs), can be considered derivative financial instruments. These are speculative contracts whose main purpose is not the acquisition or disposal of cryptocurrencies: the mutual obligations of the parties are determined by fluctuations in the exchange rate of the cryptocurrency serving as the underlying asset, and investors can bet on the result without owning the cryptocurrency itself.
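To make the mechanics concrete, a cash-settled CFD simply pays out the difference between the underlying's price when the position is opened and when it is closed, scaled by position size. The sketch below illustrates this; the prices and position sizes are hypothetical.

```python
# Hypothetical cash-settled CFD on bitcoin: the investor never owns the
# cryptocurrency; the parties exchange only the price difference.

def cfd_payoff(open_price: float, close_price: float,
               units: float, long: bool = True) -> float:
    """Profit/loss of a cash-settled contract for difference."""
    diff = close_price - open_price
    return diff * units if long else -diff * units

# Example: a long CFD on 0.5 BTC opened at $30,000 and closed at $27,000
pnl = cfd_payoff(30_000, 27_000, 0.5, long=True)
print(pnl)  # -1500.0: the investor owes the difference without ever holding BTC
```

This is why regulators treat such contracts as derivatives: the payoff depends entirely on price fluctuations in the underlying, not on any transfer of the cryptocurrency itself.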
Due to the fact that cryptocurrencies as an underlying asset pose a particular danger to investors, since the market is in the process of formation, and the volatility of the exchange rate is too high, the European regulator, in order to protect the rights of investors, has established special requirements for the ratio of equity and borrowed funds when making transactions with cryptocurrency derivatives.
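One concrete form such requirements take is a leverage cap: ESMA's 2018 product-intervention measures limited retail leverage on cryptocurrency CFDs to 2:1, far below the 30:1 allowed for major currency pairs. The sketch below shows what a cap means for the margin an investor must post; the position sizes are hypothetical.

```python
# Minimum initial margin under a leverage cap: with a 2:1 limit, an investor
# must post at least half of the position's notional value as own equity.

def required_margin(notional: float, max_leverage: float) -> float:
    """Minimum initial margin for a position of the given notional value."""
    return notional / max_leverage

# Hypothetical example: a 10,000 EUR crypto-CFD position under a 2:1 cap
print(required_margin(10_000, 2))   # 5000.0 EUR of the investor's own funds
# Compare a 30:1 cap, as allowed for major currency pairs
print(required_margin(10_000, 30))  # roughly 333 EUR
```

The higher margin requirement directly limits how much a retail investor can lose relative to the funds actually committed.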
In France, the Autorité des marchés financiers (AMF) is also committed to the position that financial products based on cryptocurrencies must be regulated as derivatives and that the circulation of cryptocurrency derivatives must comply with the European Union Markets in Financial Instruments Directive. Platforms offering such products must comply with authorization rules, as well as with requirements prohibiting advertising through electronic media. Despite the fact that cryptocurrency derivatives are not included in the MiFID list, the French regulator adheres to the position that "a cash settlement cryptocurrency contract can qualify as derivatives, regardless of the legal qualification of the cryptocurrency." So, online exchanges offering cryptocurrency derivatives must comply with the MiFID directive and operate within the framework of the European Market Infrastructure Regulation (EMIR). In addition, such cryptocurrency derivatives also fall under the jurisdiction of French anti-corruption law.
The Financial Conduct Authority (FCA) of the United Kingdom considers the following cryptocurrency derivatives to fall under the definition of a financial instrument: futures, i.e. transactions with deferred execution at the price specified in the contract rather than the price current at the time of execution; contracts for difference (CFDs), i.e. cash-settled forward transactions; and options.
Crypto market participants bear responsibility for operating without a license, so they must analyze whether their activities amount to trading in derivatives. The FCA points out that many trading platforms offer a special product, cryptocurrency contracts for difference (CFDs), which can become an object of circulation. The FCA notes that these transactions are structured in such a way that they can not only entail a complete loss of funds invested in cryptocurrency but also make the consumer a debtor to the company with which the CFD is concluded. In this regard, the FCA indicates that trading in these instruments is subject to the requirements provided for by the legislation on the securities market, namely, entities offering CFDs must be licensed to conduct these activities. Consumers have the right to file a complaint with the Financial Ombudsman Service and are entitled to claim compensation under the Financial Services Compensation Scheme. In the case of trading in another jurisdiction, individual complaints should be directed to the services of the respective jurisdiction. In order to protect the rights of investors, the FCA has established requirements for persons offering cryptocurrency derivatives.
The Hong Kong regulator also acknowledged the possibility of trading bitcoin-based futures, pointing out that although the cryptocurrency is not a security and its circulation is not regulated by the rules on the securities market, futures of any kind are subject to securities market legislation (the Securities and Futures Ordinance). In this regard, this activity is subject to licensing. At the same time, the relevant requirements are of an extraterritorial nature — they apply to anyone who publicly offers such services to an indefinite number of persons in Hong Kong, in particular to cryptocurrency exchanges.
Thus, derivatives, the underlying asset of which is cryptocurrencies, can be qualified as financial instruments from the point of view of the European Union Markets in Financial Instruments Directive. As such, online exchanges offering cryptocurrency derivatives must comply with the MiFID directive and operate within the framework of the European Market Infrastructure Regulation (EMIR). Concluding transactions with derivatives, organizing their trading, consulting and other services may be recognized as activities in the securities market, and therefore may require a license.
Regulators are drawing investors’ attention to the high-risk nature of financial products such as binary options and contracts for difference (CFDs). Despite the fact that these instruments are a way of hedging risks associated with the high volatility of cryptocurrencies, as a rule, CFDs are offered on rather unfavorable terms. In this regard, regulators are taking measures to protect the rights of investors (requirements for securing contracts, experience with such instruments, and other requirements).
In the Russian science of business law, derivatives are considered in two aspects. In the narrow sense, derivatives are fixed-term contracts with special conditions for their conclusion and execution. In the broad sense, derivatives are any market instruments based on primary income assets, such as goods, money, property, and securities. They are used to obtain the highest income at a given level of risk or a given income at minimal risk, to reduce taxation, and to achieve other similar goals put forward by market participants. In the latter case, the class of derivatives includes not only forward contracts but also any other new market instruments, such as secondary securities in their potentially infinite variety, combinations of securities with forward contracts, etc.
Taking into account the presence of such high-risk instruments on the Russian markets, financial instruments based on bitcoin should be treated as derivatives regardless of the legal status of the cryptocurrency, since they fully satisfy the characteristics of derivatives, with special legislation applied to them and measures taken to protect the rights of investors, including requirements for financial guarantees on transactions with cryptocurrency derivatives. These measures would also increase investment attractiveness, and hence the demand for the underlying asset and its price. This means that global demand for bitcoin and market liquidity would increase significantly. Given the limited supply and predictable emission of digital gold, these factors should significantly increase the price while reducing its volatility.
We believe that the conclusion of transactions with cryptocurrency derivatives, the organization of their trading, consulting, and other services should be recognized as activities in the securities market, and therefore subject to licensing. Given the actual position of cryptocurrency in international markets, if the Russian regulator recognizes it as an independent underlying asset for financial instruments, this will give an impetus to the development of structured financial instruments based on bitcoin. On the other hand, it should be taken into account that the use of derivatives for speculative purposes turns the derivatives market into a high-risk one. Thus, historical experience has shown that an additional factor destabilizing the derivatives market, and ultimately the entire financial market, is the use of high-risk assets, including cryptocurrency (bitcoin), as the underlying.
However, derivatives are used not only for speculative purposes. It should be noted that the most important function of the derivatives market is the function of informing all participants in economic relations about prices in the market. As R. Kolb notes, the conclusion of transactions in financial derivatives leads to the establishment of prices that can be monitored and valued by the entire society, and this provides information to market observers about the real value of certain assets, as well as the direction of future economic development.
In this regard, given the emergence of new instruments, financial markets require expertise aimed at minimizing risks, which, first of all, should create incentives for the introduction and development of innovative institutions, as well as guarantee the protection of the rights of consumers and private investors.
|
https://medium.com/the-capital/legal-regulation-problems-of-cryptocurrency-derivatives-d5b7545c970c
|
[]
|
2020-12-23 02:27:17.677000+00:00
|
['Bitcoin', 'Blockchain', 'Regulation', 'Cryptocurrency', 'Derivatives']
|
Mexican Footprints II
|
I know I’ve had some sleep recently because last night I dreamt I was in a sleep-deprivation experiment, but I haven’t had much. Therefore you should also visit the other sites listed at the end for more intelligent comment.
A foot and an ancient footprint. Photo from the Mexican Footprints media section.
The dating of the Mexican footprints is proving to be a problem. This week in Nature the footprints are the subject of a Brief Communication, Geochronology: Age of Mexican ash with alleged ‘footprints’. I’ve added two recent press releases from Berkeley and Texas A&M on re-dating the prints. They tackle quite a problem: how do you sample an absence of something to get a date?
I printed out the article, which was really clever as I don’t have access to it from home. However I’ve just discovered I left it on the printer, which is really stupid. I’ll try and reconstruct the argument as best I can.
A good footprint is a hole. How do you date it? You could date the material it’s in, but does that make sense? If I go outside now and chip a hole out of some local granite, do I make a hole that’s millions of years old? So how about dating it by seeing what fills the hole? That sounds like a good idea, except that if they’re ancient holes they could still be refilled by modern material.
As far as I can tell, the Liverpool team have dated using particles (shell?) that they say are within the soil horizon the prints appear in. Have they dated later infill? Even if they haven’t, how reliable is the preservation of the sample? If the sample is contaminated with carbon-14, then the dates could appear younger than they actually are.
The American team have used Argon / Argon dating. They’ve also got extra dating by examining the magnetic polarity of the sediments. The earth’s magnetic field flips over time and these rocks were laid down when the polarity of the earth’s magnetic field was inverted. This would also confirm that the rocks are much older than the 30kyr date claimed by the Liverpool team.
I’m inclined to go with the American interpretation of these prints, but there’s stuff in the American press releases that bothers me. It’s mentioned that the Liverpool team “have yet to publish a peer-reviewed analysis of the footprints”, but they omit to mention that one will appear in a journal, Quaternary Science Reviews, in January. Given there are plenty of papers ‘in press’ from September and earlier, I’d guess this has been going through the peer-review process for a while, and the fact they have a date would indicate that it’s been in a while and it passed. The American claim is factually correct, but only for another month. If I was patient enough I could post next month that the American claims about the lack of peer-reviewed publication are “demonstrably false”.
Additionally the two press releases seem to imply human colonisation of the Americas started at 11,500 years ago. Monte Verde would appear to push that back a thousand years. I’m told by people working in the field that (a) there are also pre-Clovis sites in Mexico and (b) Americans aren’t invited to view them. This quote by Tim White, Professor of integrative biology at UC Berkeley, might explain why:
“The evidence Paul has produced by dating basically means that this argument is over, unless indisputable footprints can be found sealed within the ash.”
The argument is over before the paper is published? I don’t know where the samples Liverpool dated are from, because the paper doesn’t appear till January. So how can I or he possibly know that the American re-dating is relevant to that material?
I suppose he could have seen the data going for peer-review. Personally if I saw a paper that had passed peer-review that I could prove wrong, then I’d submit my paper to appear in the same journal in the same or a following volume. I wouldn’t publish in a different journal before the original paper, but I could be doing something wrong I suppose.
I firmly think it’s up to the Liverpool team to prove their case and footprints alone won’t do it. Where are the artefacts? Where are the sites of similar date? This will take time to find. It’s rare that one find overturns orthodox ideas and often consensus is built up on carefully weighing the evidence. On the other hand the press releases from the USA have made me far more interested in reading QSR next month. If the aim of the Nature communication really was “to nip any misrepresentation in the bud,” then it’s failed spectacularly.
Abnormal Interests has also noted this, arguing for caution on interpreting the finds as footprints, with which I agree. I’m more inclined to think these are not footprints. But I wonder if the rebuttal is as convincing as is claimed. If I had a certain smackdown then I wouldn’t be playing media games. I may change my mind after reading QSR.
Afarensis has the response from Liverpool.
Badgerminor at Orbis Quintus notes that the “footprints” do look like footprints. I’d like to agree with that, but I don’t know how many marks there are at this site. If there are thousands, then by sheer chance you’d expect some to look foot-ish, so it becomes a self-selecting sample. I’d like to see dates for things from the site that aren’t claimed to be footprints, to see if they differ. If the known quarrying marks date from the same period as the footprints using the same technique, then you can write off the dating as inaccurate. If the dates differ, then you have confirmation that the prints don’t just look different.
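The self-selection worry can be put in rough numbers with a toy calculation: if each of thousands of random marks has even a small independent chance of looking foot-ish, some “footprints” are almost guaranteed. The figures below are invented purely for illustration.

```python
# Toy model: n random marks, each with a small independent probability p
# of resembling a footprint purely by chance.

def expected_footish(n: int, p: float) -> float:
    """Expected number of chance 'footprints' among n marks."""
    return n * p

def prob_at_least_one(n: int, p: float) -> float:
    """Probability that at least one mark looks foot-ish by chance."""
    return 1 - (1 - p) ** n

n, p = 5000, 0.002  # invented: 5,000 marks, each with a 0.2% chance
print(expected_footish(n, p))    # 10.0 chance 'footprints' expected
print(prob_at_least_one(n, p))   # very close to 1: virtually certain
```

So a handful of convincing-looking prints, on its own, tells us little; the dates for non-footprint marks are what would break the ambiguity.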
|
https://alunsalt.com/mexican-footprints-ii-2d4784be5569
|
['Alun Salt']
|
2016-04-13 10:42:30.326000+00:00
|
['Archaeology', 'Geology', 'Mexico']
|
STARING AND TAPPING VS. PLAYING AND CLAPPING?
|
I used to stand by the foyer at exactly 5:00 with my brand-new maroon Polygon bicycle, waiting for my friends, staring blankly into space and tapping my feet in bored monotony while running through all the things I needed to tell them. Once everyone arrived, we started playing tag, our bodies filled with adrenaline, and we jumped into our swimming pool, which rang with joyful laughter and animated faces. We explored and imagined our next potential games while gazing at the large metallic abandoned buildings in our complex, cycled till our legs ached and our brakes were boiling hot, and didn’t come back until our parents threatened to ban us from playing the next day.
The eagerness on the next day’s share of fun is what drove us.
Those were the best times of my life, but now modern technology has ruined the children of this age.
Now when I go down to the foyer, I don’t see 12-year-old kids cycling around and laughing; I see them huddled together in a corner, staring at their phones and screaming at each other to “revive” them in the addictive game called PUBG. Their eyes are glued to their screens, completely oblivious to everything around them, making them little more than automatons. They know how to invite each other online but no longer how to ask each other to play basketball or to pass the ball in the final minute of a football match. Where we used to dare each other to explore the dark hallways of the underground parking lot, they explore the blue-light-enriched “maps” of the “new battle royale game”.
These great, fun-filled experiences are what shaped our present, they are not old ideas, but are ideas that are simply ignored. WE CAN CHANGE THAT!
Chartered Psychologist Dr. David Lewis suggests that modern-day kids are in danger of missing out on some of the finest of life’s experiences. Some reports reckon that an average person checks their phone up to 1,050 times a week. This widespread use of technology trickles down to the youngest members of our families. Data from Britain shows that almost 70 percent of 11- to 12-year-olds use a mobile phone, and this increases to close to 90 percent by the age of 14.
The warning that people are losing their lives on their phones comes out of the fear that technology is creating a generation of lonely children who struggle to make friends in the real world, but manage to become friends with grown men from halfway across the world.
Lack of exercise is also a key factor behind soaring levels of obesity, as many youngsters play video games for an average of nearly six hours each week. According to Amy Moes Williams, a licensed clinical psychologist, gamers spend most of their time, approximately 10 hours each week, on their tiny 5.8-inch, 2,436 x 1,125 displays, with less than half an hour a day on exercise.
Let’s stop them from racing cars on their screens, and start them racing each other outside!
Kids need to be broken of the habit of taking their gadgets with them when going outside at a young age, so that they learn to have fun without their enslaving devices. If your family restricts phone usage to a minimum, others will follow; you can make the difference, you can be the change. It would be naive of you to watch your child’s childhood slip past without doing anything to change it. So bring this new idea into your family and give your children what they deserve.
Children don’t know what they are missing out on.
But you do.
LEAD THE CHANGE!
|
https://medium.com/@psingh21/staring-and-tapping-vs-playing-and-clapping-7d217e4b5abc
|
['Pranit Singh']
|
2020-10-06 14:01:09.230000+00:00
|
['Online Addiction', 'Happy', 'Childhood', 'Parents']
|
The Skyhook: How Humans Are Plucked From the Ground
|
The Fulton surface-to-air recovery system, also known as Skyhook, is a system used by the CIA, United States Navy, and the United States Air Force to pick people up from the ground using an airplane. The system uses a line attached to a balloon that is attached to a person. An aircraft intercepts the line connecting the person and the balloon, and they are lifted in the air from the ground and pulled into the aircraft. Here’s the story behind how it got its start and why it was used.
Pick Me Up
The history of this type of retrieval system can be traced back to the last few years of World War II. The British began using an extraction method that was based on a mail retrieval system invented during the 1920s and used by an early airline company called All American Aviation (which would become US Airways). This particular system used a line between two poles that would be snagged by an approaching airplane. The line was connected to a mail sack, which would then be winched into the plane.
The Army Air Force was also interested in a way to rescue airmen from difficult locations using this method and began tests of their own. At first, the tests were a failure because of the increased g-forces, and one test proved to be an even bigger failure for one unfortunate sheep.
The Army Air Force was able to make changes, however, and the first volunteer was successfully extracted by an aircraft from the ground on September 5, 1943. The Army Air Force was even able to retrieve a glider from the ground in Burma in 1944, its first operational success.
An Even Better System
Robert Fulton was an inventor who had come up with an aerial gunnery trainer that was eventually used by the Navy in 1942. After World War II, Fulton began working on an airplane that could be converted to an automobile, but he ran out of money. He then began to work on making the system previously used by All American Aviation better.
He started his experiments in 1950 and developed it using a weather balloon and nylon line. The Navy became interested in his modifications, and Fulton began testing it in El Centro, California. He even came up with a way for the aircraft to anchor the line when it was caught by the aircraft. By 1958, the Skyhook system was almost complete.
The way it worked was that a package could be dropped from an aircraft with the necessary supplies for a person on the ground. The package would contain a harness that was attached to a 500-foot nylon line and a portable helium bottle that would be used to inflate a balloon.
The aircraft was outfitted with steel tubes that were spread at a 70-degree angle on the nose. A marker was placed at the 425-foot level of the line, giving the aircraft a spot to aim for. When the aircraft made contact with the line, the balloon would be released, and the line would be secured to the plane by what was called a sky anchor. The line would then run under the aircraft, where it would be retrieved by the crew and winched in, bringing the person or cargo, traveling at 125 mph, into the aircraft.
At first, the tests used dummies, but as things progressed, a live creature was needed to test the system’s effects, so a pig was used. The test didn’t go quite as planned. The pig was lifted off the ground and into the air, but it began to spin. The crew was able to get the pig on board safely, but the animal was disoriented. The pig wasn’t too happy about being used as a test subject because after it recovered from its ordeal, it attacked the crew.
The First Pickup
The first human to be picked up by the Skyhook was Staff Sergeant Levi Woods of the U.S. Marine Corps on August 12, 1958. Woods was successfully reeled into the aircraft and avoided the spin that had made the pig in the earlier test disoriented and eventually angry. He did so by extending his arms and legs while in the air. Skyhook proved successful and was next used in Alaska in 1960 to pick up archaeological artifacts and geological samples from remote areas.
Operation Coldfeet
The Skyhook got its first operational use in what became known as Operation Coldfeet, which began taking shape in 1961. A naval aircraft on a survey mission had spotted an abandoned Soviet drift station in the Arctic. Drift stations drifted with the ice, and the Americans believed the Soviets were using submarine surveillance systems on them just as the U.S. was doing on theirs. A few days later, the Soviets confirmed they had abandoned the station when the ice runway to it had become inaccessible.
The U.S. felt this was an opportunity to examine a Soviet drift station to compare and confirm what they were actually doing on it. But the problem was, how would someone get there? It was out of helicopter range, and an icebreaker couldn’t get to it. The Skyhook system became the answer. Two men began to train on the Skyhook system, Major James Smith from the Air Force, and Lieutenant Leonard LeSchack from the Navy.
Approval for the operation took time because of the fear of losing both men and the argument that the plan wouldn’t work. During this time, the drift station was moving farther away from the U.S. airbase in Greenland. In March 1962, the operation got a lift when it was announced that the Russians had abandoned another drift station, which was in a better position.
The flight to the station took finally took place in mid-April 1962, but it wasn’t until May 4, 1962, that the Russian station was found. But another problem had occurred with the lapse in time. The funding for Coldfeet had run out. It was then thought that the CIA might want to be in on the operation. Fulton had been working with the CIA since 1961, and a CIA front company in Arizona called Intermountain Aviation had pilots that been training with the Skyhook system since that time. Additional funding was acquired, and arrangements were made for the CIA to supply the aircraft.
Smith and LeSchack dropped to the station on May 28, 1962. They were allocated three days to study the station. The first attempt to pick up the men on May 30 failed because of fog. Another try was done again on June 1, but the station couldn’t be located. The pickup crew tried again the next day and were finally able to locate the station.
The conditions were far from ideal for a pickup since visibility was marginal, and the wind was blowing strong, but the airplane crew was able to retrieve the first planned load from the station, a cargo that consisted of 150 pounds of documents, samples, and exposed film. LeSchack was retrieved next, even though he had been blown across the ice because of the wind blowing the balloon. Smith had the same problem but was successfully extracted from the ice and into the aircraft. The intelligence they gathered from the station was considered very valuable and confirmed that the Soviets had surveillance systems in place aboard the station.
Skyhook was believed to have been used in other clandestine military and intelligence operations since Operation Coldfeet, but its role today is a well-guarded secret. We do know, however, that James Bond used it after defeating the bad guy in Thunderball.
Sources: NY Times, Daily Mail, CIA (1), CIA (2), Popular Mechanics, Wikipedia
Other Stories You Might Like:
Want to delve into more facts? Try The Wonderful World of Completely Random Facts series, here on Medium.
Find even more interesting facts in the four volumes of Knowledge Stew: The Guide to the Most Interesting Facts in the World.
More great stories are waiting for you at Knowledge Stew.
|
https://medium.com/knowledge-stew/the-skyhook-how-humans-are-plucked-from-the-ground-450e35139e5a
|
['Daniel Ganninger']
|
2020-08-03 13:06:01.184000+00:00
|
['Technology', 'Innovation', 'History', 'Aviation', 'Military']
|
Tim Cook: Execution Machine
|
Probably a bit unfair, but this is a clever cover by Bloomberg Businessweek for their Tim Cook profile by Austin Carr. I already highlighted the Warren Buffett quote. But a few other bits:
Cook’s innovation was to force Foxconn and others to adapt to the extravagant aesthetic and quality specifications demanded by Jobs and industrial design head Jony Ive. Apple engineers crafted specialized manufacturing equipment and traveled frequently to China, spending long hours not in conference rooms as their PC counterparts did but on production floors hunting for hardware refinements and bottlenecks on the line.
I feel like this is a nice, succinct way to highlight a massive (and massively overlooked) strength of Cook. Sure, he may not have the product vision but he is the one who was able to take that vision and execute on it in a way never seen before at scale. The iPhone is not what it is today if it was constantly backordered or far worse: faulty because the insane design requirements made it nearly impossible to build. Cook translated the form into function.
Contract manufacturers worked with all the big electronics companies, but Cook set Apple apart by spending big to buy up next-generation parts years in advance and striking exclusivity deals on key components to ensure Apple would get them ahead of rivals.
And he executed that translation with incredible foresight. Locking up parts years ahead of the competition also enabled the iPhone to scale. Which sounds obvious but was probably the single most important thing he did.
At the same time he was obsessed with controlling Apple’s costs. Daniel Vidaña, then a supply management director, says Cook particularly fussed over fulfillment times. Faster turnarounds made customers happier and also reduced the financial strain of storing unsold inventory. Vidaña remembers him saying that Apple couldn’t afford to have “spoiled milk.” Cook lowered the company’s months’ worth of stockpiles to days’ worth and touted, according to a former longtime operations leader, that Apple was “out-Dell-ing Dell” in supply chain efficiencies.
“Out-Dell-ing Dell” is a nice foil to this.
Three people familiar with the company’s supply chain say there was an Apple employee whose job consisted of negotiating the cost of glue.
I love that it’s sourced three different ways. Was the Glue Guy the Blevinator?
Jobs loved to point out that Apple’s product lineup was so unrelentingly spare it could fit on a small table. At the time of his death, Apple sold two iPhones and one iPad; today it offers seven iPhones and five iPads. Cook also added high-priced products that amounted to accessories for the flagship mobile devices, such as AirPods and the Apple Watch.
On one hand, it’s hard — probably impossible! — to argue with the results. On the other, Apple is saying a lot more “yes’s” these days. And again, they probably should be! It’s just a contrast.
Beyond those and more quick shots at the “exceptionally boring” Cook, the main takeaway of this piece is that Apple’s U.S. plant in Texas — yes, the one Donald Trump infamously took credit for — is a bit of a disaster. The political upside makes the headache worth it for Apple, but anyone hoping for Apple to actually bring manufacturing back to the U.S. probably should not hold their breath.
|
https://medium.com/@mgs/tim-cook-execution-machine-c6a2c8d3aaf0
|
['M.G. Siegler']
|
2021-02-11 19:15:42.100000+00:00
|
['Tech', 'Business', 'Manufacturing', 'Tim Cook', 'Apple']
|
Is Humanity’s Fate to Die on Earth or Can Space Colonization Save Us?
|
Stephen Hawking once warned, “We must become an interplanetary species within 100 years or we’ll all die.”
Indeed, life on Earth poses its share of challenges. If human civilization isn’t wiped out by a global pandemic or war, climate change lurks on the horizon, posing a threat to our ability to survive long-term on this planet.
“I’m beginning to find life on Earth…burdensome. Pandemics, air pollution, groundwater contamination…Did you know that caffeine, antibiotics, and even birth-control hormones are finding their way into our drinking water? And now there’s circumstantial evidence that CO2 could be making our food less nutritious! As our numbers continue to approach carrying capacity, life will become less safe. Finally, either a solar flare will fry the electric grid sending civilization back to the 19th century, or the Kessler syndrome will strand humanity on a dying planet for millennia. I’ll gladly leave Earth if presented with the opportunity.” — Excerpt from my book K3+
We know that space offers virtually unlimited resources for humanity to break free from our home planet. But we’re limited to the visions of different space race tycoons like Elon Musk and Jeff Bezos, who completely suck the air out of the debate, and whose ultimate goals seem to be control, power, and self-interest.
Settling planets and moons align with some of those visions, while offering a high level of comfort to our deeply ingrained planetary bias. Nevertheless, scientists assert that trying to colonize other celestial bodies will require severe adaptations to the human body, possibly causing the settlers to branch into a different species within generations.
However, there’s no way we can put humans on the closest interstellar planet, 4.2 light-years away, within 100 years. And if we do colonize a planet like Proxima b, it’s likely to be a hellish world, maybe even harder to settle than Mars. Furthermore, over 600 million years of evolution have conditioned multicellular life to Earth’s environment, which isn’t found anywhere else in the solar system — perhaps in the entire Milky Way galaxy. But because we were born on Earth, we cannot imagine a different form of colonization than settling other planets and moons.
It won’t be anything like when the Europeans arrived in the new world, bringing crops and livestock. They were able to breathe the air and their bodies were already accustomed to the conditions. However, when humans try to colonize another planet, our biology won’t be compatible with the gravity, atmospheric pressure, and surroundings. Seeding a new planet with our own chemistry and the microorganisms we need to survive will likely spell doom for the native life of that planet.
Artist rendition of Proxima b, Earth’s closest exoplanet. Image Credit: ESO/M. Kornmesser
Our fast-paced civilization would need hundreds of planets to continue growing. But with no other worlds in sight that can support billions of people, we must create room from scratch. In the time it would take to surround Mars or the moon with an atmosphere, we can build entire colonies in space, capable of housing billions. These will be safely hosted inside cylindrical megastructures, called rotating habitats, that perfectly replicate Earth’s gravity and atmospheric conditions, requiring no adaptations of the human body.
The great enabler: Closed-loop technologies
On Earth, plants inhale CO2 and exhale oxygen. We can replicate this cycle through closed-loop life support systems to recycle air, like we do in the International Space Station. Vertical farming combined with aeroponic irrigation will allow each of these colonies to be completely autonomous. Every single aspect of agriculture can be automated, leaving humans free to oversee the process and use their creativity to continue innovating and developing new varietals that will satisfy the branching tastes of the growing population.
We’ll be able to reuse water, similar to what the city of Cape Town, South Africa, has begun piloting. By recycling wastewater and making it safe for human consumption, they’ve demonstrated this closed-loop technology to be effective in delivering water to high-scarcity environments. This process also separates precious phosphorus, which is a key essential component of agriculture.
Where is the meat?
Although a plant-based diet is healthier, human beings are innately voracious carnivores. The evolutionary big bang of our brains began right when our distant ancestors switched from eating plants to eating meat. One could argue that we’ve been conditioned to eat meat through millions of years of evolution, so humans likely won’t give it up without a fight.
Cultured meat can be produced in-vitro from animal cells, without the need for raising and slaughtering them. Although this technology is currently in its infancy, it has the potential within just decades to produce better steaks and countless other animal products, without the need to introduce these living beings to space colonies and sacrifice them. Genetic enhancements will push the genome to the limit, producing spectacular flavors and textures, while leaving the cruelty of our current farming system far behind.
Image Credit: Unsplash (Emerson Vieira)
So how do we bring these technologies to space?
We know how to recycle air and water, and to grow food using closed-loop systems that produce zero waste. But a multi-million-person colony would consume tremendous amounts of power. Fortunately for us, the sun produces plenty. Imagine the amount of energy consumed by all homes, industries, hospitals, farms, schools — and every other single human activity on Earth you can think of — for one year. The sun produces close to 500,000 times that amount, in just one second! It almost hurts to see all that power go to waste, radiated away as heat into space.
Solar panels in space deliver up to 40 times the annual amount of reliable, 24/7 energy that they would on Earth, and upcoming technologies will give them a tremendous efficiency boost. Furthermore, direct current (DC) will provide a far more efficient electric system than alternating current (AC) — no need for a bloated and wasteful electric grid, plagued by vulnerabilities. All these technologies can be adapted to work inside rotating habitats.
However, before we can start building these megastructures, entire industries will need to be created in space. Building them will require tens of thousands of rocket launches to deliver the base components for developing the initial infrastructure. Technologies like 3D printing will allow us to make great strides in this effort, but we’ll need to source the materials to be fed to these 3D printers — and to other construction devices — from space.
Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta)
Mining asteroids and the moon will provide the first hunks of raw materials to put together factories and engineering facilities. After a few decades, it will be possible to construct the first rotating colony — a tiny community of a thousand scientists and engineers to test and improve these technologies in space. But, in order to build island-sized dwellings that can house tens of millions, we’ll need a much larger source of raw materials.
Being the leftover core of a past planetary collision, the planet Mercury is made of 70 percent metals and 30 percent silicates. Mercury contains a much higher abundance of metals than Earth, and it will be easier to extract them due to its low gravity. Most of the mining operation will be dedicated to refining the ores, making them ready for construction applications.
Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta)
The Future looks bright . . . if we don’t mess it up
Our current population of 7.8 billion people can be comfortably housed inside fewer than 400 of these island-sized rotating megastructures. Mining Mercury will provide an influx of construction materials, enough to build thousands of megastructures but, in the end, Mercury’s resources are still limited.
The total mass of planet Earth is estimated at roughly 5,842 quintillion tons. Imagine thousands of times that weight in metals and other valuable elements, such as carbon, silicon, nitrogen, and many others. The great news is, we don’t have to look far. Being a high metallicity star, the sun is the answer to all our future resource needs.
True, it’ll take a century — and then some — to develop the infrastructure to mine the sun’s raw materials. But with plenty of available energy to power the process, and no need to invent new physics, the process is feasible. Human beings are no strangers to such long projects. The Great Wall of China was built in 2,000 years, Stonehenge in 1,600 years, and Petra in the Jordan Desert in 850 years.
Life might evolve on planets, but these celestial bodies are not the best long-term option to sustain growing civilizations due to their limited availability of resources. We can begin building a true post-scarcity utopia in space-bound megastructures today. Many of the technologies already exist, and space has the available resources.
Hawking was right that we need to leave Earth within 100 years, but he, like the rest of us, was born and raised on Earth. We need to put our planetary bias aside and think outside the box. Our true savior lies in rotating habitats—floating oases that can comfortably house tens of millions and allow humankind to spread throughout the cosmos.
Space colonization can save humanity from the harsh threats that plague us on Earth.
|
https://medium.com/predict/7627800fee1b
|
['Erasmo Acosta']
|
2020-09-10 18:05:13.079000+00:00
|
['Climate Change', 'Science', 'Space', 'Armageddon', 'Technology']
|
Running a .NET Project. C# From Scratch Part 2.2
|
We know that the application is working since it produces the default output, but what exactly is happening when we use the dotnet run command?
Behind the Scenes
When you use the dotnet run command, .NET automatically does several things to make your project ready to run. In this section, we’ll look at the steps .NET takes behind the scenes to run your application.
Restore
The first thing that happens is .NET implicitly runs the command ‘dotnet restore’. The dotnet restore command tells .NET to go out and get the correct version of any packages used by your application.
Packages are pieces of code that other developers have written and that you can use in your application. In the world of .NET, the package system is called NuGet.
All of the packages being used by your application are recorded in the .csproj file, along with the version of the package being used. When the dotnet restore command is executed, .NET goes out and gets each package listed in your .csproj file from the internet.
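For reference, a minimal .csproj for a console app might look like this (a sketch; the exact TargetFramework value depends on your SDK version):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <!-- NuGet packages would be listed here as PackageReference items -->
</Project>
```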
The packages that are used in your application are called external dependencies.
In our basic project, we don’t have any external dependencies so nothing happens when .NET runs this command.
Build
Next, .NET runs the ‘dotnet build’ command implicitly.
This command tells .NET to compile your source code files into a binary file that can execute on your machine.
The tool that does this is the C# compiler. The C# compiler takes all of the source code files in your project as input and outputs a single binary representation of your source code. This binary format is faster to execute on your machine.
The output file is a .dll file.
DLL stands for Dynamic-Link Library. In .NET, this compiled output is referred to as an assembly.
Run
Finally, the dotnet run command is executed.
This command tells .NET to bring the application to life using the .NET Core Runtime and to execute the instructions in the dll file.
It’s important to note that if you want to run your application, you need to use the .NET Core Runtime. It’s not possible to run the application via the dll file directly.
This is because it’s the .NET Core Runtime that knows how to launch your application, manage memory, convert the instructions in the dll file into instructions that your processor understands and tear down the application when it’s finished running.
What’s Next?
In this part of the series, we learned how to run a .NET application using the dotnet run command. We also learned what .NET is doing in the background to convert our project into an application that can be run on our machine.
In the next part of the series, we’ll learn how to open the project in Visual Studio Code and look at the most important parts of our code editor. You can find that part of the series here.
|
https://kenbourke.medium.com/running-a-net-project-6d18b840b607
|
['Ken Bourke']
|
2020-12-16 15:23:07.453000+00:00
|
['Dotnet', 'Csharp', 'Software Development', 'Learn To Code', 'Programming']
|
From Monolith to Microservice Architecture on Kubernetes, part 3— Deploying our Scala app as a microservice
|
In this blog series we’ll discuss our journey at Cupenya of migrating our monolithic application to a microservice architecture running on Kubernetes. In the previous parts of the series we’ve seen how the core components of the infrastructure, the Api Gateway and the Authentication Service, were built. In this post we’re going to see how we converted our main application to a microservice and got to a fully working setup in Kubernetes which we could go live with.
Parts
Migrating our application
So we’ve explored the core components in our microservice architecture, the Api Gateway and the Authentication Service. We’ve looked at quite a bit of Scala code and have seen how the Kubernetes deployments are configured. I started this blog series by describing our current software stack. Remember, from a deployment perspective it looks rather traditional.
Let’s see what it took to migrate our monolith to the new microservice infrastructure and benefit from:
Increased agility and smaller & faster deployments
Individual scalability of services
More fine grained control over service SLA’s (i.e. some services are crucial and need to have failover in place and some are not)
The ability to have autonomous teams responsible for a subset of services
Migrating our frontend application
For the frontend it was pretty straightforward. We basically needed another nginx pod, similar to the one we used as ingress, with the only difference that it needs to have the static resources bundled. Therefore, we created a custom Docker image where we copy the contents of the dist output folder of our AngularJS application.
FROM nginx
COPY dist /usr/share/nginx/html
COPY frontend-nginx.conf /etc/nginx/conf.d/default.conf
We simply pulled this Docker image into our pod and created an associated Kubernetes service. It didn’t need any special config, because the Api Gateway was already configured to route all non-/api traffic to this service.
Migrating our backend application
For the backend we started off simple. Let’s not immediately split up into a million tiny microservices, but begin by deploying our application as a single Kubernetes service. It might be missing the point a little bit, but it’s a great place to start. Besides, it shouldn’t be hard to build upon this proof of concept and split our app into multiple microservices later. Slowly we will learn where it makes sense to make the divisions.
To deploy our application in Kubernetes we first would need to build a container image out of it. We chose Docker and since the main application was a Scala project we could use the Docker plugin of the SBT Native Packager so we didn’t have to write a custom Dockerfile (We also have a few services written in Python where we manually wrote the Dockerfile). In our build.sbt we just had to apply a few settings.
We use the oracle-jdk:jdk-1.8 base image. Since we use Google Container Engine (GKE) we have to set our dockerRepository to eu.gcr.io and ensure our packageName is of the format $dockerRepoName/$appName (e.g. my-docker-repo/api-gateway ). We also generate a short commit hash from git to be used as docker image version.
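Those settings look roughly like this (a sketch assuming the sbt-native-packager and sbt-git plugins are enabled; the repository and package names are placeholders):

```scala
// build.sbt (sketch)
enablePlugins(JavaAppPackaging, DockerPlugin)

dockerBaseImage := "oracle-jdk:jdk-1.8"
dockerRepository := Some("eu.gcr.io")
packageName in Docker := "my-docker-repo/api-gateway"
// short git commit hash as the Docker image version
version in Docker := git.gitHeadCommit.value.map(_.take(7)).getOrElse("latest")
```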
With this config, building the Docker image is part of the build process and can easily be executed in a continuous integration environment like Jenkins. Of course we still needed to build a Kubernetes descriptor file and deploy it to run our app in Kubernetes. We’ll get to how we automated that process later in this blog series. For now let’s just have a look at what a basic descriptor file for our new microservice looks like.
There we have it. Our first microservice which is, as explained earlier, not so micro. It just pulls in the created docker image for the whole Scala application, based on the git commit hash. It gets the authentication secret, needed to decode the JWT to obtain an authentication context, as an environment variable from a Kubernetes secret. We made sure we kept the same format for the authentication context and made sure the already handed out reference tokens remained valid. We don’t want our clients to notice us moving to a new infrastructure. It was not difficult to port this logic, though. In the end we were still keeping this information in the same database.
Connecting to an external database
One part we’re still missing is the connection to the database. As you can see in the descriptor file we expect the database, in this case MongoDB, to be available by a Kubernetes service called mongo-svc on port 27017. Since our database is not running inside pods, but is managed outside of the Kubernetes cluster, we needed a simple proxy service.
The trick here is that we define a Kubernetes service without a pod selector. By convention those type of services will bind to a Kubernetes Endpoint with the same name. Therefore we create a manual Endpoint descriptor and specify the address of the mongo or mongos (MongoDB’s routing service for sharded clusters) server. In case of, say, an Elasticsearch cluster you could also specify multiple IPs and have the Kubernetes service handle the load balancing.
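A sketch of such a selector-less Service plus its manually defined Endpoints object (the IP is a placeholder for the external MongoDB host):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
spec:
  ports:
    - port: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongo-svc   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.12   # external MongoDB host (placeholder)
    ports:
      - port: 27017
```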
Deploying the whole shebang
The final step to get our first microservice deployed and made available through the Api Gateway is to setup a Kubernetes service for it. However, currently as you’ve probably noticed the Api Gateway has a ‘limitation’ to register only a single resource per service. Since our first microservice is actually the whole application with many routes and served resources we had to work around that. We simply defined a Kubernetes service for each resource we wanted to serve and point them all to the same pod. Here’s a few examples.
As you can see the defined resource is different, but all the pod selectors point to the cupenya-microservice pod. Therefore, the Api Gateway will register them as separate services and proxy requests to the corresponding REST Api in the cupenya-microservice pod. All we have to do now is tell Kubernetes to create our resources. You can keep the descriptor files separate or bundle them all in one file and run:
$ kubectl --namespace my-namespace apply -f my-descriptor-file.yaml
The order of operations doesn’t really matter much in this case. Kubernetes will create or update your specified resources in the namespace of choice and they will auto wire themselves into an operational system.
Well, that’s all there is to it to get our rather minimalistic setup running. In the next post we’re gonna build upon this and discuss other crucial aspects of a production system such as monitoring & health checks.
Thanks for reading my story. If you like this post, have any questions or think my code sucks, please leave a comment!
|
https://medium.com/jeroen-rosenberg/from-monolith-to-microservice-architecture-on-kubernetes-part-3-deploying-our-scala-app-as-a-d4d799e01ab6
|
['Jeroen Rosenberg']
|
2019-04-03 15:13:23.076000+00:00
|
['Scala', 'Kubernetes', 'Microservices', 'Docker', 'Architecture']
|
Airbnb and the Malala Fund Team Up to Send Travelers on 80-Day Trip Around the World
|
AFAR Media · Jun 17, 2019 · 3 min read
Courtesy of Airbnb
On June 13, Airbnb launched Airbnb Adventures, a new collection of more than 200 small-group excursions led by local hosts in destinations from Oman to Alaska. The platform, an extension of Airbnb Experiences, marks the first time Airbnb offers lodging, meals, and activities in one package for travelers. Flights aren’t included in the overall cost, but the trips — in most cases limited to 12 participants or fewer — are designed to be affordable. (Two- to 10-day tours range from $99 to $5,000.)
To mark the debut of Airbnb Adventures, the company announced another unique offering: a 12-week trip around the globe inspired by Around the World in Eighty Days, the famous French novel by Jules Verne. Up to eight guests (ages 21 and older) will be permitted to book the trip when it becomes available online on Thursday, June 20.
The trip, which departs from London on September 1 this year, will take a small group of travelers to six continents and 18 countries within the span of 80 days. Activities include visiting medieval towns in the mountains of Romania; exploring UNESCO World Heritage sites in Uzbekistan; hiking to traditional teahouses and monasteries in Bhutan; sleeping under the stars in the Australian outback; and navigating the ancient Kumano Kodo pilgrimage route in Japan. The circumnavigational adventure also includes rafting in Utah’s Canyonlands National Park, wildlife-spotting in the Galápagos, trekking in Chilean Patagonia, and bathing in Iceland’s geothermal waters.
Courtesy of Airbnb
All proceeds from the “Around the World in 80 Days” adventure will go to the Malala Fund.
When the 80-day adventure comes to an end on November 19, the small group will return to London before parting ways. Accommodation, transportation, and select meals are covered as part of the trip’s $4,987 per person price tag. The only expense not covered in the cost is the round-trip flight ticket to and from London.
As if all of the above didn’t already sound attractive enough, there’s more that makes this one-time-only offering sound, well, superb. This particular Airbnb Adventure is what the company calls a “social impact experience,” which means that all trip proceeds go toward a nonprofit organization that has partnered with Airbnb to raise awareness about its mission with travelers. In this case, 100 percent of the proceeds from the “Around the World in 80 Days” adventure will go to the Malala Fund, a nonprofit organization cofounded by Nobel laureate Malala Yousafzai and her father. So when you pay $4,987 to Airbnb for your spot on this crazy adventure, you’ll really be helping fund an organization that works to give young girls “free, safe, and quality education” in places like Syria and Afghanistan.
Around the World in 80 Days will be bookable as an Airbnb Adventure from Thursday, June 20, 2019.
>>Next: What It’s Like to Visit Area 51 With Airbnb
|
https://medium.com/@AFARmedia/airbnb-and-the-malala-fund-team-up-to-send-travelers-on-80-day-trip-around-the-world-2b2e02cbab67
|
['Afar Media']
|
2019-06-17 14:41:52.794000+00:00
|
['Airbnb', 'Malala']
|
Propriety
|
propriety.
the word pervades the taste buds
of my tongue-so rancid
I reach for a shot of whiskey
to flood it out
a fruitless endeavor
your family’s dignity rests in
your ability to maintain propriety
there is a little girl skipping down
the street outside of my window
wearing a scarf, her brother by her side
the cool breeze runs its fingers
through his free, meaningless hair
I take another shot.
numbness offsets
the acuteness of
the powerlessness
propriety is the profane method
that conditions me into submission
you are a grotesque version of Pavlov’s dog
slapped across the face each time you tried
to take action oppositional to your instructions
you are the patriarchy’s successful experiment
propriety is what suffocates you
it is the reason you scream,
but silently
you wouldn’t want to disturb
anyone with your pain.
|
https://medium.com/meri-shayari/propriety-dccb01aae5a
|
['Rebeca Ansar']
|
2019-10-30 03:01:03.148000+00:00
|
['Women', 'Feminism', 'Society', 'Poetry', 'Life']
|
3 ways to break your music in 2021
|
2) Here are the types of singles you should be creating and working
So we covered why singles are your best bet. Cool, but what kind of singles should you be dropping? Let’s go through it. You should have your Lead Single, your Follow-Up Single, the Remix, and an unreleased track.
Let’s start with your lead single. So all that money you were about to put into studio time to record that full project? Yeah, scratch that. Put it into your lead single. This should be mixed and mastered, and get some dope cover art designed. This will give you a better chance of landing on playlists and take the guesswork out of which song to promote. I know all your songs are amazing (we know), but still choose the one that you know in your heart of hearts will work better than the rest. The other ones, put in a drive for later. Make sure you get a radio edit version cut as well, and get that song distributed and promoted. This will give fans and influencers something to put into their daily lives and shareable content.
So bet, you released your amazing lead single, what now? Well, similar to our first section, say it with me: “FEED THE STREETS”. That’s right, that second song you were choosing between? Let’s make that your follow-up single. Even better, let’s make sure you drop it within 2 to 3 weeks of releasing your lead single. And to be perfectly honest, if you have the ability, you should record a brand new single based on the feedback from the lead single. I know you are an artist, but at the end of the day you are making music for your fans, so lean in. Or if you are really feeling risky, release a follow-up that sounds a bit different from your first and show your new fans your artistry. And here’s what’s beautiful about releasing singles and not projects: if people don’t seem to love it like your other releases, well, you didn’t spend months or even a year creating and releasing it; you can come back and release another single next week.
Ok, so the remix, if you look at a lot of the more successful artists recently, they got a hot lead single then went to a DJ, Electronic artist, producer or another musician to add a little razzle dazzle to it. What happens is people either love it or hate it and then go listen to the original to compare. Double up. Each person touching the remix has their own fanbase and gives you additional content to (you know what I’m going to say)… FEED THE STREETS! So make sure you make a “stem” track so you can collab with fellow musicians and make some magic happen.
Lastly, this will segue nicely into our next topic around content: stop holding on to those studio sessions or that song that isn’t quite perfect yet, and release your unofficial releases. If you don’t want the song to be permanently out, there are a ton of ways to make these temporary releases through IG Reels, Stories, TikTok, Triller, etc., all of which will give your fans music to hold them over until your next official release. This will give your fans a peek behind the curtain so they truly become a fan of you and not just your music.
3) Create music videos or better yet have someone else do it
Ok, we are finally at our last section. Distributing your singles isn’t enough; you need visuals. Video is probably the way people digest content most nowadays, and you have a ton of ways to do this. You have YouTube, where you can do the music video, lyric video, live performances, or studio sessions. Additionally, you can put all of that content on social as well.
But you’re busy, right? Well, you don’t have to create each of your own videos; there are tools that can help. Breakr is a huge one. You can hire influencers on a range of different platforms to create content for your song that both of you can promote, which you can leverage as official releases to create potentially viral content such as skits, contests, challenges, and dance videos.
Without a doubt you should be creating multiple pieces of content for each release and always be looking to have your fans know that they are not alone and other people love your music as well. As you drop those videos, make sure you engage with EVERY user that comments, likes and messages, good and bad. It will only make them feel like they are officially on your team.
|
https://medium.com/@musicbreakr/3-ways-to-break-your-music-in-2021-f8d086cd3beb
|
['Dan Ware']
|
2021-01-04 22:27:55.724000+00:00
|
['Music Business', 'Hip Hop', 'Emerging Artists', 'Music', 'Música']
|
Customer Segmentation: Unsupervised Machine Learning Algorithms In Python
|
Business Problem:
A primary goal for any company is to understand its target customers: how consumers operate and how they use its services. Every consumer may use a company’s services differently. The problem we’re trying to solve is to define this delivery company’s consumer segments, characterizing the distinct behaviors and ways in which consumers use the company’s services.
Data Exploration:
We were fortunate enough to be given a dataset from a startup delivery company. The dataset is private, therefore we won’t be linking it. The data was collected over a span of six months and is divided into three sheets: users, payment, and orders.
Figure1: Orders data sheet columns and data types
The following is the orders data sheet, to give an idea of the data types we’re mostly working with. The orders data sheet contains 2604 records and 26 columns. The users sheet contains 4663 records and 12 columns, while the payment data sheet contains 3169 records and 6 columns.
To know a little more about the data before starting to implement our machine learning algorithms, we created a few EDA plots.
Figure2: Order status
Which products are users mostly using the app’s services for?
Figure3: Orders Pie chart
What is the timeline for registered users?
Figure4: Line graph of registered users over time.
Data Preparation:
We started by checking if there were any missing values.
Figure5: Missing values in the orders data sheet.
As the figure shows there are 6 columns with missing data points. The following code will demonstrate how we dealt with the missing values.
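In outline, the handling looks like this (a sketch with hypothetical column names, not necessarily the exact code used): numeric gaps are filled with the column median, categorical gaps with an explicit placeholder.

```python
import numpy as np
import pandas as pd

# Toy stand-in for the orders sheet; the real columns differ.
orders = pd.DataFrame({
    "total_amount": [10.0, np.nan, 30.0],   # numeric column with a gap
    "promo_code":   ["A", None, "C"],       # categorical column with a gap
})

# Numeric: impute with the median; categorical: use a placeholder label.
orders["total_amount"] = orders["total_amount"].fillna(orders["total_amount"].median())
orders["promo_code"] = orders["promo_code"].fillna("unknown")

print(orders.isna().sum().sum())  # → 0
```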
Next, we decided to divide the orders into 6 distinct categories for easier analysis, as shown in the figure below.
Figure6: Separating the orders into 6 categories.
Considering that there are three datasheets to work with, we read each data sheet using pandas dataframe then combined all three into one dataframe.
Figure7: Combining 3 dataframes into one.
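The merge can be sketched like this (toy frames and hypothetical key columns stand in for the real sheets):

```python
import pandas as pd

# Toy stand-ins for the three sheets; the real ones have more columns.
users    = pd.DataFrame({"user_id": [1, 2], "city": ["Riyadh", "Jeddah"]})
orders   = pd.DataFrame({"order_id": [10, 11], "user_id": [1, 2]})
payments = pd.DataFrame({"order_id": [10, 11], "amount": [50.0, 30.0]})

# Join orders to users on user_id, then attach payments on order_id.
combined = orders.merge(users, on="user_id").merge(payments, on="order_id")
print(combined.shape)  # → (2, 4)
```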
Model Implementation:
Initially, before we decided to go the customer-segmentation route, we were planning on implementing a supervised machine learning algorithm. However, we later realized that picking out an optimal target to base a supervised algorithm on wasn’t feasible given this dataset. After a few hours of brainstorming, we decided to do customer segmentation using two clustering algorithms, K-means and DBSCAN, and to compare how the two models behave.
Imported libraries:
We then went on and chose our features for the two clustering algorithms. We selected the buyer-related features with the most impact, such as Number of Orders, Total amount paid by this buyer, Number of orders paid in Cash, Number of Orders paid by Card, Number of orders paid using STCPay, and Count of unique product categories ordered.
Note: STCPay is a digital wallet used in Saudi Arabia. Similar to Apple Pay.
Note: The clustering algorithms and EDA were conducted on two different google colab sheets that’s why the dataframe names differ from the figures shown above.
The next step was to scale the data and reduce its dimensionality using PCA.
We decided to reduce the number of features before training our models, using the PCA technique for the feature-reduction process. First, we created a scree plot to help us select the best number of components for PCA.
Figure8: Scree Plot.
As figure 8 shows, the best number of components for our PCA is 3; it describes the data best, preserving 80% of the data’s variance.
After successfully choosing the right number of components, it’s time to fit the PCA with 3 components to our chosen features.
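The scale-then-project step can be sketched as follows (random data stands in for the six buyer features):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # stand-in for the 6 buyer features

# Standardize each feature, then project onto 3 principal components.
X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)              # 3 components, per the scree plot
X_pca = pca.fit_transform(X_scaled)
print(X_pca.shape)  # → (100, 3)
```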
K-Means:
Now, we start building our clustering model using the K-means algorithm. The most important parameter for K-means is the number of clusters, so we should decide how many clusters our data has. There are many ways to select the best number of clusters for K-means; in this project we applied two methods, called the Elbow method and the Silhouette Analysis method.
1. Elbow method:
Elbow is the most common method used to determine the best value of K. This method calculates the variance between data points within a cluster using the Sum of Squared Errors. The best value of k to select is the point of inflection on the curve. As we see in the Elbow plot, the best k for our model is 5.
Figure9: K-means Elbow Plot
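The elbow curve above can be computed like this. This is a sketch on synthetic blob data rather than the original buyer features, so the exact inertia values will differ:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 5 well-separated groups, standing in for the PCA output
X, _ = make_blobs(n_samples=300, centers=5, random_state=42)

# Sum of squared errors (inertia) for each candidate number of clusters
sse = {}
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    sse[k] = km.inertia_

# Plotting sse values against k gives the elbow curve;
# the "knee" of that curve marks the best k
```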
2. Silhouette Analysis Method:
This method calculates the average silhouette value for each data point in the cluster; this value represents how similar a data point is to its own cluster. The measure ranges from -1 to 1. A value of 1 means the sample is far away from the neighboring clusters, while a negative value indicates the sample might have been assigned to the wrong cluster.
The following GIF illustrates the idea of silhouette analysis: each row in the silhouette plot represents one data point in the scatter plot, and the X-axis shows the silhouette coefficient value. The red line indicates the average silhouette coefficient over all samples. A clustering with a high average silhouette coefficient is the best to choose. Based on the silhouette coefficient results, we decided to go with 5 clusters.
Figure10: Silhouette Analysis
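The silhouette comparison across candidate cluster counts can be sketched like this (again on synthetic data, so the scores themselves are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=5, random_state=42)

# Average silhouette coefficient for each candidate k
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

# The k with the highest average silhouette is the preferred cluster count
best_k = max(scores, key=scores.get)
print(best_k)
```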
After applying the elbow and silhouette analysis methods, we built the K-means model with 5 clusters.
Below is a 3D scatter plot of the clusters created by K-means.
Figure11:K-means Clusters 3D Plot
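The final K-means fit is straightforward once the cluster count is settled. Here is a minimal sketch using synthetic 3-feature data to mirror the 3 PCA components:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# 3 features to mirror the 3 PCA components used in the post
X, _ = make_blobs(n_samples=300, centers=5, n_features=3, random_state=42)

km = KMeans(n_clusters=5, n_init=10, random_state=42)
labels = km.fit_predict(X)

# Each point now carries a cluster id 0..4; km.cluster_centers_ holds the
# 5 centroids, which can be plotted in a 3D scatter as in Figure 11
print(np.unique(labels))
```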
DBSCAN:
For detecting outliers and anomalies in our dataset, DBSCAN (density-based spatial clustering of applications with noise) is the most productive. The two determining parameters of DBSCAN are eps and min_samples.
Eps is the distance that determines a data point's neighborhood: two points are neighbors if the distance between them is less than or equal to eps. To determine the right value of eps, three methods were used: sklearn's NearestNeighbors, a k-distance elbow plot, and an eps visualization. The optimal value for eps is at the point of maximum curvature.
Min_samples is the minimum number of points required to form a cluster. It is determined based on domain knowledge, how big or small the dataset is, and the number of dimensions. A good rule of thumb is minPts >= D + 1, and since our dataset is 3-dimensional, that makes min_samples = 4. For larger datasets, minPts >= D * 2.
There are 3 main kinds of data points in DBSCAN: core points, border points, and, finally, what DBSCAN is especially good at detecting, outlier points. Core points are points with at least min_samples neighbors within their eps radius. Border points are reachable from a core point but have fewer than min_samples neighbors of their own. An outlier is any point that is neither a core nor a border point.
Figure12:DBSCAN 3D Plot
To gain confidence in the eps value, we used the elbow plot method again together with an eps visualization graph.
Figure13:Eps Elbow Plot
Let’s zoom in on the plot:
Figure14:zoomed eps Elbow Plot
Figure15:Eps visualization graph
As the elbow plot and eps visualization graph show, the optimal value for eps is 1.
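The k-distance procedure and the resulting DBSCAN fit can be sketched as follows. The synthetic data here won't reproduce the exact knee from Figures 13-15, but the mechanics are the same:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=5, n_features=3, random_state=42)

# k-distance curve: distance from each point to its 4th nearest neighbour
# (matching min_samples = 4), sorted ascending; plotting this curve and
# reading off the knee suggests a value for eps
nn = NearestNeighbors(n_neighbors=4).fit(X)
distances, _ = nn.kneighbors(X)
k_dist = np.sort(distances[:, -1])

# Fit DBSCAN with the chosen parameters (eps = 1 per the elbow plot above)
db = DBSCAN(eps=1.0, min_samples=4).fit(X)
labels = db.labels_

# Label -1 marks noise/outlier points; everything else is a cluster id
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int(np.sum(labels == -1))
```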
Results:
Using the K-means algorithm with 5 clusters, we see that the customers in cluster 3 prefer to pay using STCPay rather than card or cash, compared with the other 4 clusters. In contrast, customers in cluster 0 prefer to pay with cash the most. Cluster 0 also has the highest number of orders at 2072, while cluster 1 has only 93 orders, the fewest of the 5 clusters. In terms of profit, cluster 0 generates the most for the app, followed by clusters 3, 2, 1, and 4.
Using the DBSCAN algorithm, which also produced 5 clusters, we see that the customers in cluster 3 prefer to pay using STCPay rather than card or cash, compared with the other 4 clusters. In contrast, customers in cluster 0 prefer to pay with cash the most. Cluster 0 also has the highest number of orders at 2053, while cluster 2 has only 26 orders, the fewest of the 5 clusters. In terms of profit, cluster 0 generates the most for the app, followed by clusters -1, 2, 3, and 2.
A few actionable insights found that the delivery company could execute:
As the K-means results show, the number of orders created by buyers in cluster 1 is greater than in the other clusters, with an average of 46 orders over 6 months. This is a large number of orders for individual buyers, so these accounts may be stores registered as buyers. We observe the same with cluster -1 in DBSCAN, with an average of 10 orders over 6 months per buyer.
Most buyers pay with cash. It may be that buyers don't know about the other payment methods available through the app, so the marketing team should focus on highlighting the different payment methods the app provides.
For more information about the code and its implementation, please visit our GitHub repository:
Or you could checkout our dashboard to view the full insights that we’ve found: https://customer-segmentation-tuwaiq-2.herokuapp.com/
Note: The dashboard is a Dash app hosted on a Heroku cloud server; it might take a few minutes to load.
Future Work:
Since the collected data covered only six months, the models would be more robust with more data points. The algorithms could be optimized in the future by feeding them more data. The company can now market to each customer segment appropriately and show different advertisements to different segments.
|
https://towardsdatascience.com/customer-segmentation-unsupervised-machine-learning-algorithms-in-python-3ae4d6cfd41d
|
['Lama Ali Alzahrani']
|
2021-07-15 12:46:09.434000+00:00
|
['Dbscan', 'Kmean', 'Unsupervised Learning', 'Clustering', 'Customer Segmentation']
|
History still unreckoned with, tragedy still unfolding
|
The wall bordering Bethlehem’s Aida Refugee Camp in the occupied West Bank
November 29 is the day designated by the UN in 1977 to be an annually observed International Day of Solidarity with the Palestinian People. Every year it is preceded by weeks of discussion by various UN agencies, and generally culminates in the passage of over a dozen General Assembly resolutions condemning Israeli violations of international law and Palestinian human rights.
This year the green light that the Trump administration gave to Israel’s settlement expansion and appropriation of East Jerusalem and the West Bank’s Jordan Valley has given UN bodies plenty to discuss and deplore. For instance, as Americans were voting on November 3rd, the Israeli army was demolishing the entire Palestinian village of Khierbet Humsah, leaving 73 people, including 41 children, homeless in the winter rain.
Despite earmarking $3.8 billion in military aid to Israel every year, most Members of Congress are notoriously reluctant to criticize anything Israel does. Thus, the November 17th letter that 41 Democrats sent to Secretary of State Pompeo condemning the destruction of the village was a welcome gesture towards the need for accountability.
For now at least, the letter is unlikely to spur action. Pompeo was in the Middle East at the time. On November 19, he visited a West Bank Israeli settlement called Psagot and declared that settlement products should bear a ‘Made in Israel’ label. Psagot produces wine, including a blend named ‘Pompeo’ in his honor.
As The New York Times reported, Palestinians — including Palestinian Americans — are the legal owners of the 20 acres claimed by the Israeli settlement on which its grapes are grown. The winery’s major shareholders are American contributors to both Trump and Netanyahu. They also support Israel’s settlement enterprise, which is illegal under international law.
On the day of Pompeo’s unprecedented settlement visit, the UN General Assembly’s Third Committee endorsed, as the full General Assembly has done every year for decades, “the right of the Palestinian people to self-determination, including the right to their independent State of Palestine.” This is part of a package calling for Israel to end its occupation of Palestinian land and observe international law that the UN overwhelmingly supports every year.
As usual, the vote was strikingly lopsided: 163 cast in favor and five opposed (Israel, the United States, Micronesia, Nauru and the Marshall Islands). Abstaining were Australia, Cameroon, Guatemala, Honduras, Kiribati, Palau, Papua New Guinea, Rwanda, Togo and Tonga.
Flawed from the beginning
Israel’s supporters claim that General Assembly resolutions supportive of Palestinian rights are ‘one-sided’ and that the UN unfairly singles out Israel while turning a blind eye to violations elsewhere.
Such complaints ignore the extent to which Israel is the UN’s business, and has been from the very beginning. This history must not be forgotten.
In April 1947, a year and a half after the UN was founded with the mission of preventing war and upholding fundamental human rights and international law, Great Britain handed over to the fledgling body the responsibility for Palestine that it had been given by the League of Nations in 1922.
A UN Special Committee on Palestine (UNSCOP) then came up with two proposals: one, to partition the territory into two states; and the other, for a unitary state with a constitution guaranteeing equal rights for all its inhabitants.
After vigorous lobbying, the majority supported partition without asking the recently established International Court of Justice whether the UN had any jurisdiction to carve up Mandatory Palestine or any other territory.
On November 29, 1947, the date now commemorated by the International Day of Solidarity with the Palestinian People, the UN General Assembly voted 33 to 13 (with 10 countries including Great Britain abstaining) to support Resolution 181 mandating partition and economic union.
The partition map gave 55% of the land to the Jewish population at a time when it owned only 7% and was a third of the population. Palestinians, who made up 65% of the population, were allocated 38% of Mandatory Palestine, a ‘solution’ rejected by Arab delegations to the UN. The resolution designated Jerusalem as a ‘corpus separatum’ or international zone under UN control.
In the war that followed the Zionist acceptance of Resolution 181, its military forces moved beyond the territory that Resolution 181 demarcated as the Jewish state and some 750,000 Palestinians were expelled from their homes.
On May 20, 1948, six days after the State of Israel declared its independence, UN envoy Count Folke Bernadotte arrived in Palestine to try to mediate a cease-fire. In his capacity as head of the Swedish Red Cross Bernadotte had saved thousands of Jews from Nazi camps. On September 17, after he called for the return of all Palestinian refugees and the demilitarization of Jerusalem, Bernadotte was assassinated by the Stern Gang (LEHI), one of whose leaders was the future Israeli prime minister, Yitzhak Shamir.
Bernadotte’s murder could have been on the minds of General Assembly members when, on December 10, 1948, they adopted the landmark Universal Declaration of Human Rights and the following day passed Resolution 194. It calls for Jerusalem to be placed under UN control and states “that the refugees wishing to return to their homes and live at peace with their neighbors should be permitted to do so at the earliest practicable date, and that compensation should be paid for the property of those choosing not to return and for loss of or damage to property which, under principles of international law or in equity, should be made good by the Governments or authorities responsible.”
The killing of the UN envoy, the fate of Jerusalem and Israel’s refusal to allow Palestinian refugees to return dominated much of the debate of the May 11, 1949 General Assembly plenary meeting considering Israel’s application to join the United Nations. There was a sharp exchange of deeply divided views.
On one side, were Arab voices like that of the delegate from Yemen, who was outspoken in his denunciation of the extent to which “power politics” had determined the fate of Palestine, “in total disregard of the rights of its people…The United Nations, by admitting Israel, would be offering shelter to a group which had not only imposed its rule by force on the people of Palestine, but which had also driven from their homes almost a million of those people… Zionists had not respected the resolutions of the General Assembly and had given no definite assurances that they would do so in the future. They felt that they were absolved of such assurances because under the shield of power politics they would always find excuses and apologies.”
On the other side, were the USA and a range of countries like Iceland that “had no doubt that the Government and the people of Israel would fulfil the assurances they had given with regard to Jerusalem, the Arab refugees and the investigation of the assassination of Count Bernadotte.” It voted to admit Israel to the UN in the belief that it would “strengthen the United Nations and contribute to the successful solution of current and future problems.”
The French delegate added that “no nation was better equipped to show its generosity and sense of justice than the very people which had suffered so long from injustice and hatred.”
Israel was admitted to the United Nations with 37 countries voting in favor, 12 against and 9 abstaining.
Unfinished business
Once the vote was taken, Moshe Sharett, Israel’s representative to the United Nations, thanked nations “whose UN delegations on November 29, 1947 had supported the historic resolution providing for the establishment of the Jewish state.” He said “its efforts would be directed to the absorption of the large scale immigration currently in progress, a veritable in-gathering of the exiles.”
He made no mention either of the Palestinian state that the ‘historic resolution’ had called for, or of the return of Palestinian refugees. Rather he said Israel had taken note of “certain problems still outstanding between Israel and its neighbors on the one hand and between Israel and the United Nations” and would “pursue its steadfast efforts” to resolve them.
Far from those ‘problems’ being resolved, they were greatly magnified when Israel occupied the remainder of historic Palestine in 1967. Since then the Jewish state has been shielded from international censure and UN Security Council resolutions by the power of its patron, the United States. Israel’s defiant intransigence has undermined the credibility of the United Nations and the standing of international law itself.
There never was a time in Israel’s history when the question of Palestine was not the UN’s business. And there never has yet been a time when words enshrined in resolutions as well as international law have proved as potent as power politics.
In 1977, thirty years after the UN issued Resolution 181 which Israel came to regard as its ‘birth certificate,’ the General Assembly showed its frustration with the way its words were repeatedly ignored and disparaged by dedicating November 29 to the Palestinian people.
And today, 73 years after Resolution 181 was agreed to, the rhetoric remains but the land on which a Palestinian state could be constructed has all but disappeared.
Nancy Murray, Ph.D.
|
https://medium.com/@numurray/history-still-unreckoned-with-tragedy-still-unfolding-c87ca39f34ed
|
['Nancy Murray']
|
2020-11-26 16:56:01.593000+00:00
|
['Palestine Solidarity', 'United Nations', 'Us Aid', 'Palestine', 'Israel']
|
Best Digital Cameras Under $300 [Expert Reviewed Picks]
|
Buying a new digital camera under $300 can be a big challenge, especially when looking at the newest models with the most advanced features. But a good camera doesn't have to cost an arm and a leg. Thanks to camera technology ramping ahead in recent years, there are many options for those shopping on a budget. We have taken the time to review some of the best cameras under $300 to let you shoot the greatest pictures possible.
Just because a camera is a few years old, doesn’t mean it’s out of the running. Oftentimes, a good older camera can have the same or somewhat similar specs and features as a new entry-level model. The image quality can be on par with new technology and that’s really where it counts.
Features You Should Look for in Cameras Under $300
Here are some things you should look for in a camera under $300:
Zoom: You want to find a camera that will zoom in on your range.
You want to find a camera that will zoom in on your range. Megapixels & Resolution: While zoom is great, resolution will decrease the more you zoom. A camera with 10 to 16 megapixels will be ideal for most photographers’ needs.
While zoom is great, resolution will decrease the more you zoom. A camera with 10 to 16 megapixels will be ideal for most photographers’ needs. Other Features: Size and weight are going to be important features, but you should also look for durability, LCD screen size, and other features.
Have a limited budget? Don’t worry. We have compiled a list of the best digital cameras under $300 available on the market!
(We get commissions for purchases made from links in this post.)
Panasonic Lumix DC-FZ80 Deals
The Panasonic LUMIX DC-FZ80 is a force to be reckoned with. It zooms to a maximum of 60 times the original frame to capture all the small details. This camera also has a 20mm wide-angle feature that is particularly beneficial while photographing breathtaking landscapes.
For videos, the Panasonic DC-FZ80 offers 4K video capture technology. All videos are captured in HD, and the result is stunning. Also, this camera performs particularly well in areas with low lighting. It yields a reasonably good outcome for nighttime shots. Also, it has a commendable image stabilization interface.
Post Focus is another smart technological advancement seen in this device. This system allows you to select the point of focus even after an image has been captured.
If you’d like to find out more about this compact camera, read the full review of the Panasonic LUMIX DC-FZ80. This is our best-reviewed camera under $300.
2. Canon PowerShot SX530 (Best Canon Camera Under $300)
Canon PowerShot SX530 Deals
This Canon camera is a high-quality DSLR. It offers a 24.1MP APS-C CMOS sensor and a DIGIC 4+ image processor. Its ISO capability is 100–6400, with up to 3-fps shooting. Moreover, this device has the ability to automatically choose the best mode for photography, depending on surrounding lighting conditions. This is called “Scene Intelligent Auto Mode.”
It has a nine-point AF with center cross-type point. The video quality is particularly good at 1080/30p in full HD mode. Its screen monitor is LCD and spans three inches with a 920k-dot build.
The Canon PowerShot SX530 comes with built-in Wi-Fi and NFC, making image and video sharing very convenient. All you need to do is connect the camera to your laptop or phone to start transferring data with no delay!
To get even more info, check out our Canon PowerShot SX530 review.
3. Canon PowerShot SX720 HS (Best for Features)
Canon PowerShot SX720 HS Deals
This Canon camera is hard to beat! The Canon PowerShot SX720 HS has many great features and is one of the more quality cameras under $300 that people on a budget should consider. This camera has ISO 3200 with 5.9 fps shooting and also employs a helpful zoom framing assistant to help you get the perfect shot.
It comes with a built-in image stabilization system to eliminate shaky shots and achieve a very refined final image. Image stabilization works on video mode, too, enabling you to get a less shaky video.
The Canon PowerShot SX720 has a 20.3MP camera and can zoom to 40 times the original size. Its video capturing ability is very good at 1080p. The viewfinder is an LCD, with a 2.3-inch display size.
This device boasts built-in NFC and Wi-Fi to allow for secure data sharing to laptops, phones, and other devices. Gone are the days when data could only be transferred via cable. With the Canon PowerShot SX720, everything is wireless and hassle-free!
Read the complete Canon PowerShot SX720 HS review to get all the details you need about this camera.
Canon PowerShot SX420 Deals
Canon is one of the most well-reputed camera brands in the world. You cannot go wrong with a Canon camera! The Canon PowerShot SX420 is a powerful digital camera at the top of the list for digital cameras under $300 that still offers an array of fantastic features.
This camera offers an optical zoom capacity of up to 42 times. It comes with a 20.0MP 1/2.3" CCD image sensor and a DIGIC 4+ image processor. All videos are recorded in HD mode with a resolution of 720p at 25 fps. What's more, this product is designed with an Intelligent IS image stabilization system that minimizes the effect of hand movement when recording.
The SX420 has built-in Wi-Fi making it very easy to share photos. The device can be synced to a phone or laptop, and all images and videos can be transferred without the hassle of wires. This is a handy tool and saves a lot of time when transferring data.
Get even more info about the design and how the camera functions in our Canon PowerShot SX420 review.
5. Canon PowerShot SX620 HS (Best LCD Screen)
Canon PowerShot SX620 HS Deals
Another Canon camera hitting the list of cameras under $300, the PowerShot SX620 is a device worth betting on. It has a one-year warranty from Canon and is guaranteed to last many years if taken care of. This device has a 20.2MP camera with a 1/2.3" CMOS sensor and a DIGIC 4+ image processor. Its LCD is three inches wide with a 922k-dot monitor and an electronic viewfinder.
For videos, this camera shoots at 1080p at 30 fps. Its image stabilization system helps achieve the ideal shot. The device weighs three pounds, which is average for a camera of its size. It is not too heavy and is easily portable.
This device comes with a kit that includes an adapter, an additional battery (Li-Ion), and a cleaning kit.
Head on over to our full Canon PowerShot SX620 HS review to find out more about how this camera performs.
6. Fujifilm FinePix S4800 (Best Fujifilm Camera Under $300)
Fujifilm FinePix S4800 Deals
Fujifilm is an age-old company that makes some of the most excellent cameras in the world. Its products sell very well with an international scope, and it is easy to see why. The Fujifilm FinePix S4800 has a three-inch wide TFT color LCD monitor. It boasts an impressive 16 million effective pixels and a 1/2.3-inch CCD image sensor with a primary color filter.
The lens offers a maximum of 30 times optical zoom capacity and a 7.2 times digital zoom. The focal length is 4.33mm-129mm. This device's full aperture range is f/3.1-f/8 (wide) and f/5.9-f/20 (telephoto).
For video making, an impressive 1280x720p resolution is present with 30fps. A variety of shooting modes are offered and they can be altered to best suit the lighting and environment.
Nikon COOLPIX B500 Deals
Nikon is another spectacular company that has been manufacturing cameras for decades. Products from this brand can be trusted and are known for their quality and durability standards.
The COOLPIX B500 has a 16MP camera resolution with a 1/2.3" BSI CMOS sensor. It also has a NIKKOR f/3.0-6.5 ED lens that offers a zooming capacity of up to 40 times the original image size via optical zoom. With dynamic zoom, it can go up to 80 times!
With Bluetooth, Wi-Fi, and NFC connectivity options, transferring data to and from this camera is very convenient. There is no need for old wires and cables anymore to share photos and videos on your laptop or phone.
In terms of video recording, the B500 has a 1080p resolution at 30 fps, which is very good for a camera under $300. Also, all videos are recorded in HD and yield a great result.
Read the full Nikon COOLPIX B500 review to learn more and see if this camera is for you.
Bottom Line on Digital Cameras Under $300
Investing in a good digital camera is essential if you want to get post-worthy pictures. All the cameras discussed above are of excellent quality and give you a real bang for your buck. They all support additional memory cards, so there is no practical limit to the number of pictures and videos that can be stored on them.
All of these options of digital cameras under $300 are great for people on a budget, like students or people who do not pursue photography as a full-time job. These cameras provide abundance in terms of quality and will suffice for anyone who has a passion and calling for photography. They are also suitable for videographers as most of them offer an image stability mode that helps reduce noise while recording without a tripod.
All in all, these are some great digital cameras under $300. You must choose one that fits your personal needs so you can get the most out of your photography experience!
|
https://medium.com/lumoid/best-digital-cameras-under-300-expert-reviewed-picks-6696a52e778b
|
['Lumoid Staff']
|
2020-11-16 15:56:23.191000+00:00
|
['Buying Guide', 'Photography', 'Gear', 'Review', 'Cameras']
|
The Alcubierre Drive: Why Testing Practically Is So Difficult
|
Once humanity builds the Alcubierre drive (see previous story), they will naturally want to use it. The nearest star system, Alpha Centauri, is 4.367 light years away. An Alcubierre drive could allow a spaceship to fly there in two weeks.
What happens once the craft arrives? Depending on how much energy is caught in the warp field, the entire system could be destroyed upon arrival. In this scenario, it would take 4.367 years for NASA's satellites to recognize this. They would realize they have destroyed the best hope of interstellar mining, research, and colonization they could have hoped for. Meanwhile, the craft cannot blast back to Earth to inform scientists of the event, and humanity has lost that particular Alcubierre drive for almost a century while it slowly chugs back toward Earth at sublight speeds.
The drive cannot be tested on an asteroid belt because of the lack of distance. There would be too little energy caught in the warp field for a proper test, unless the drive is flown outside the shelter of Earth's shadow and absorbs all the light the Sun can give it, which would be more a demonstration of how powerful the Sun is than a test of how destructive the energy inherent to space is when trapped.
Luhman 16 is a star system 6.517 light years away. Due to its unique dual star nature, NASA will likely find a different system to test the drive on. Humanity will perhaps wait decades for the results of the flight, but in interstellar terms, that’s a month-long experiment.
Sources:
Sharp, Tim. “Alpha Centauri: Nearest Star System to the Sun.” Space.com, Space, 19 Jan. 2018, https://www.space.com/18090-alpha-centauri-nearest-star-system.html.
“Luhman 16 (WISE 1049–5319) Ab.” Luhman 16 (WISE 1049–5319), http://www.solstation.com/stars/wise1049.htm.
|
https://medium.com/@liam.trzebunia/the-alcubierre-drive-why-testing-practically-is-so-difficult-6270543f6a99
|
['Liam Trzebunia']
|
2019-11-15 15:34:40.257000+00:00
|
['NASA', 'Relativistic Rockets', 'Space', 'Futurism', 'Alcubierre Drive']
|
Key points from Livestream session with Deputy CTO
|
On November 17, we held our first Livestream with Crypterium’s IT Team to let our followers learn some tech details about the project from the first hands. If you missed the Livestream, feel free to replay it. If you prefer reading the news, here’s the short summary we’ve prepared for you.
How is the Crypterium App architecture built? What programming languages do we use?
Application architecture is a rather complex topic. We have a client-server architecture with a mobile front end and a backend. We're using a REST API, and we have WebSockets on the frontend to communicate with the backend. As for programming languages, we code in Kotlin and Java for Android and Swift for iOS. On the backend, we have two teams: one works on .NET Core with C#, and the JVM is our main platform for Java and Kotlin.
Another interesting technology we have adopted is Hazelcast, which we use as a distributed cache solution, along with RabbitMQ as our message queue. We also use MongoDB and some SQL databases where applicable.
How do we build our system on microservices?
Microservices means services are fine-grained: they are separated from each other and completely independent. Each of them implements one simple business domain, so you can develop parts of the product with small teams. It's also easier to test them, and they communicate with each other using lightweight protocols like HTTP. The overall structure is very easy to deploy and maintain, but it's quite hard to get started with.
Which service providers are we using?
Our main provider is Amazon with AWS services; our app is built using their container services. Among them is EC2, which we use to have scalable services; Elastic Load Balancer to balance the traffic between clusters and inside them; and Fargate, a container-running service that, by the way, I highly recommend for those of you who need speed when scaling, since it's really faster than EC2. We also use ECS (Elastic Container Service), which orchestrates everything running between Fargate and EC2 instances, and S3 object storage as buckets for static content. Finally, we use the Relational Database Service for hosting SQL databases.
To obey the local laws in some countries, we store some data in local data centers, and technologies like MongoDB sharding help us achieve this.
Of course, we also have a Plan B, because we don’t want to rely fully on one provider only. We are ready to switch to any other hosting provider or to some local data centers, if we need to. What I’m saying is we’re not highly dependent on our providers.
How is the app support working? How do we ensure 24/7 support?
The main idea is that we have a first-level support team working 24/7 who respond to users in chats and so on, and a second-level support team working a regular schedule with 2 days off per week. Together both teams collect user feedback, provide analytics and bring the data to the development team.
How do we plan our sprints?
We are an agile company, and we use an agile framework called SCRUM. It helps developers collaborate constantly with product owners, other stakeholders, customers and so on. They always receive feedback and react to it. Each sprint lasts two weeks and has several stages. The first stage is sprint planning, when we discuss and plan the goals and tasks for the sprint. Then development starts, and we have daily meetings with updates. After that, there's a sprint review, where we show the demo to all the stakeholders so that they can give us feedback. Finally, we have the sprint retrospective: a discussion of what has been done and what problems, if any, we have been facing.
What crypto are we going to add to the app?
There are no technical problems: from the tech side we have everything we need, and we can add as many tokens as we want. The point is that most coins and tokens don't meet the KYC and AML requirements, and our partners can't work with tokens without KYC/AML. So our compliance team is working on it, and we are going to solve this problem. We will definitely have more tokens and coins over time.
About Crypterium
Crypterium is building a mobile app that will turn cryptocurrencies into money that you can spend with the same ease as cash. Shop around the world and pay with your coins and tokens at any NFC terminal, or via scanning the QR codes. Make purchases in online stores, pay your bills, or just send money across borders in seconds, reliably and for a fraction of a penny.
Join our Telegram news channel or other social media to stay updated!
Website ๏ Telegram ๏ Facebook ๏ Twitter ๏ BitcoinTalk ๏ Reddit ๏ YouTube ๏ LinkedIn
|
https://medium.com/crypterium/key-points-from-tech-livestream-8304dc95f6fd
|
[]
|
2018-11-20 17:35:51.151000+00:00
|
['Tech', 'Technology', 'Mobile App Development', 'Fintech', 'Blockchain']
|
Design Process: Board Game Banker
|
Origin
Board Game Banker is a personal project that results from a night of board games with friends, when in a Monopoly game nobody wanted to be the banker.
Possible problem
The players argued that whoever plays the banker does not enjoy the game, having a greater responsibility and mental load. The use of paper for money management complicates this even more, since the bank is also obliged to intervene in transactions between players when they need “change” (change banknotes of a higher denomination for others of a lower one to pay an exact amount to another player).
Empathize
An investigation was conducted focused on user reviews of similar iOS, Android and Web applications in order to know their expectations and frustrations.
Insights:
The problem does not only exist in Monopoly, but also in other board games that use physical money or other scoring systems.
Users value saving money by not having to buy the money management kits that are sold separately for some games.
Users need to be aware of the status of transactions at all times.
It is tedious for the user to perform mathematical operations to determine the transaction amount according to the rules of each game.
Definition of the problem
How can we develop a product that makes it easier for users to manage and keep track of money or scores in board games?
Principles of design
It must be a general solution for board games, not specific to just one.
It must prioritize readability and awareness of the current state of the game by all players seated at a table.
It must give the user assurance that the transactions made are correct.
It must facilitate transactions that involve mathematical calculations.
Competitive analysis
When reading user reviews on similar products I found many recurring topics, including:
The user may in some cases need to resume a previous game in interrupted sessions.
The user appreciates being able to use long names and have them displayed properly.
The lack of separators (thousands and millions) in the numbers of interest complicates readability and hinders the experience.
Some users cast the phone screen to the television during gaming sessions.
This information became useful to build the product in later phases.
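The separator complaint above is cheap to address in code; as a hypothetical one-line illustration (Python shown here, though most languages have an equivalent format specifier):

```python
# Render large in-game balances with thousands separators for readability.
balance = 1500000
formatted = f"{balance:,}"
print(formatted)  # 1,500,000
```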
Sketching
|
https://medium.com/@rivarola/board-game-banker-eabe9e41cf99
|
['Gabriel Rivarola']
|
2020-12-29 23:31:09.715000+00:00
|
['Portfolio', 'UI Design', 'UX Design', 'UX Research']
|
Write your own Smart Contract and deploy it…
|
In this article, we will see how to write our own contract and deploy it. As I mentioned in my previous article, I have already covered why we need smart contracts and where we use them; I will link all the related material in the comments section. So now, in this article, I will show you in practice how to write your own smart contract. Open your browser and go to http://remix.ethereum.org/. This is Remix, an online editor that compiles and deploys contracts for the Ethereum Virtual Machine (EVM), where you can write your first contract.
Remix Online Editor
This is the page you will see when you open the link. To compile and deploy, we need to activate the Solidity plugin: click the Solidity button (the black arrow in the screenshot points to it), and the setting will be activated. Next, create a new file by clicking the plus button (highlighted with a red box).
Give the file any name, for example my_name.sol.
After creating the file you will see a blank page; enter the code shown below. Now let me explain what is going on in this code. The first line tells you that the source code is written for Solidity version 0.4.0, or a newer version of the language up to, but not including, version 0.7.0. A pragma is a common instruction to compilers about how to treat the source code. A contract, in the sense of Solidity, is a collection of code (its functions) and data (its state) that resides at a specific address on the Ethereum blockchain. You can give your contract any name; for this demo we call it myContract. Just like in any other programming language, we declare a string variable, myname, with the public keyword. This line declares a state variable. So why do we need the public keyword? The keyword public automatically generates a function that allows you to access the current value of the state variable from outside the contract. Without this keyword, other contracts have no way to access the variable. After that, we declare a function named writeanyname that takes a string parameter. Why do we use the memory keyword? In functions without the memory keyword, Solidity will try to use the storage location, which currently compiles but can produce unexpected results. memory tells Solidity to create a chunk of space for the variable at method runtime, guaranteeing its size and structure for future use in that method. We also mark the function public so that we can call it easily after deploying the contract. This is the code I wrote for absolute beginners who want to learn the Solidity language.
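The code the article walks through only appears in the screenshots; a minimal contract matching the description (a pragma covering 0.4.0 up to but not including 0.7.0, a public string state variable, and a public setter taking a memory string) might look like the sketch below. The contract name, variable name, function name, and the initial "Hi World" value are taken from the text; treat this as a reconstruction, not necessarily the author's exact code.

```solidity
pragma solidity >=0.4.0 <0.7.0;

contract myContract {
    // Public state variable: the compiler auto-generates a getter named myname()
    string public myname = "Hi World";

    // `memory` copies the argument into temporary memory for the duration
    // of the call; `public` lets us invoke this function after deployment
    function writeanyname(string memory newname) public {
        myname = newname;
    }
}
```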
Compile your contract by pressing the compile button (for me it reads Compile hi.sol).
Then click on the button shown in the black box to open the deployment panel, and deploy your first smart contract by clicking the Deploy button. Deploying a contract normally requires some gas, but here it is free.
You will see the deployed contract in the list; expand it, and then:
Click on the myname button, and you will see the Hi World value of the variable.
Here you can write any name (anything, enclosed in double quotes), click the writeanyname button, and then click the myname button again, where you will find the name you just wrote.
So, that is all. I hope that after all this effort you have a happy face, because after this article you can successfully write your own smart contract and deploy it. For now, take care, stay safe, and stay home.
Related Article:
|
https://medium.com/@umairansar000/write-your-own-smart-contract-and-deploy-it-73e80486afba
|
['Umair Ansar']
|
2020-05-10 15:03:46.175000+00:00
|
['Blockchain Technology', 'Solidity', 'Etherem', 'Smart Contracts']
|
Cava and Cunnilingus
|
Cava and Cunnilingus
Photo by Taras Abbat on Unsplash
Bubbles.
The sweet tingling liquid bites my flesh as you slowly pour your cava over my exposed cunt. It tickles my clit, before running down my slit and mixing with my own ambrosia to froth, to fizz, to flavour me, ready for the hungry onslaught of your salivating mouth.
You lap the sticky residue from my buttocks and inner thighs. Which is alcohol, and which is arousal? They are one and the same, blending and coalescing, sweetness meeting salt.
Sucking intoxicating froth from my luscious fruit, you seem to seek drunkenness inside my sex. As though you have been lost for days in a barren desert, my cunt has become your oasis. Your respite, your replenishment, your resurrection.
Drink from me. Yes, devour and drown in me. Taste the sweet wine of my silky slick, and lose your sobriety in the frothy folds of my flavoursome flesh.
|
https://medium.com/@jupiterslair/cava-and-cunnilingus-82e3037a5ef8
|
['Jupiter Grant']
|
2020-12-01 12:19:17.679000+00:00
|
['Cunnilingus', 'Oral Sex', 'Sex', 'Erotica', 'Short Read']
|
Ethereum Development Update: Istanbul Hard-Fork — Edge
|
Ethereum is on the eve of its “Istanbul” hard fork. The upgrade is scheduled to occur at Block 9069000, which is expected to be confirmed this weekend. This will be one of many hard forks that will take place before Ethereum’s big transition from proof of work to proof of stake. Many of the changes for this fork are repricings of certain functions for the sake of better network performance and resilience. The upgrade also improves interoperability with other chains, as well as replay protection during chain forks. Below, we’ll go over these updates.
Eleven Ethereum Improvement Proposals (EIPs) were introduced for Istanbul, but only six were approved for implementation:
EIP-152: A hash function known as Blake2b was added. The addition of Blake2b will make verification of certain data on other Equihash PoW based chains more efficient and cheaper. For example, verifying Zcash block headers on Ethereum is currently slow and expensive. The addition of Blake2b would speed up this process and aid in increasing the interoperability between Ethereum and chains like Zcash. Interoperability, in the context of crypto-assets, enables asset swaps and improves communication between networks. The networks could, in effect, talk to each other and affect each other instead of being siloed off and running in parallel with no knowledge of each other as they do now. This could lead to Zcash being spendable in Ethereum dapps, Zcash improving Ethereum’s privacy, and Zcash and Ethereum holders being able to trade atomically. This update is a step in the direction of these possibilities.
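The hash itself is easy to preview with Python's standard library, which ships Blake2b in hashlib. This is a sketch for illustration only: EIP-152 actually exposes the Blake2b compression function `F` as a precompile, not the full hash shown here.

```python
import hashlib

# Blake2b with a 32-byte digest; EIP-152 lets contracts run the underlying
# compression function, enabling cheap verification of Equihash-chain data.
data = b"zcash block header bytes"
digest = hashlib.blake2b(data, digest_size=32).hexdigest()
print(digest)
print(len(digest))  # 32 bytes -> 64 hex characters
```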
2. EIP-1108: This proposal reprices the use of elliptic curve arithmetic. The Ethereum community felt that the costs of doing these operations are currently overpriced. Fast elliptic curve cryptography is an imperative of an increasing number of protocols built on top of Ethereum, some of which are attempting to tackle issues such as privacy and scalability. These protocols include, but are not limited to, the privacy protocols Aztec and Zether, and the scaling protocols, Matter and Rollup.
3. EIP-1344: Adds an opcode (operation code) which allows Ethereum smart contracts to identify the current ChainID during a contentious fork and verify the validity of digital signatures based on that ChainID. A ChainID helps the ecosystem determine the blockchain they’re operating on is in fact the chain they intend to operate on. Identifying which chain is which during a contentious fork is crucial to prevent replay attacks. Replay attacks, in the context of crypto forks, happen when a transaction made on one network can be “replayed” on the other side of the fork. During a contentious split of a crypto network two chains are created from the same history and probably have very few differences between them. From one initial asset, two assets are created. An unaware user spending on one chain gives up information, like a signature, that a savvy attacker could use to spend that user’s other asset on the opposite chain. There is an Ethereum smart contract users can call to check the current Chain ID, but Ethereum developers think the contract is complicated enough to cause problems after a hard fork. The proposed opcode is supposed to simplify the identification of the correct chain, preventing replay attacks.
4. EIP-1884: Some op-codes are more resource intensive relative to their pricing. This proposal seeks to raise the gas cost of these opcodes to bring their economic costs in line with the actual resources (CPU time, memory, etc.) they consume. Gas is a measure of the cost to execute an operation on the Ethereum Virtual Machine. Imbalances between resources consumed and gas costs cause two problems:
Attacks on the network are cheaper than they should be; malicious actors can fill blocks with underpriced operations, slowing down block processing. As a result of underpriced operations, blocks with equivalent block gas limits process in a much wider range of time than they should. Block gas limits are the maximum amount of gas allowed in a block to determine how many transactions can fit into a block. For example, let’s say we have four transactions where each transaction has a gas limit of 100, 200, 300, and 400. If the block gas limit is 600, then the first three transactions can fit in the block but not the fourth. If we have two blocks of equivalent gas but one is full of underpriced opcodes and the other isn’t, the block with underpriced operations will take longer to process. Even though they are supposed to be equivalent in gas, the actual resources it takes to process these operations are larger than the gas implied. By balancing costs and the actual resources consumed, the variance between processing times can be stabilized to a greater degree.
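The block-gas-limit arithmetic in the example above can be sketched as a tiny greedy fill. The helper below is purely illustrative, not actual client logic:

```python
def fill_block(tx_gas_limits, block_gas_limit):
    """Greedily pack transactions (in order) until the block gas limit is hit."""
    included, used = [], 0
    for gas in tx_gas_limits:
        if used + gas > block_gas_limit:
            break  # this transaction does not fit in the block
        included.append(gas)
        used += gas
    return included, used

txs = [100, 200, 300, 400]
included, used = fill_block(txs, block_gas_limit=600)
print(included)  # [100, 200, 300] -> the fourth transaction does not fit
print(used)      # 600
```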
5. EIP-2028: Lowers the gas price of calling on-chain data from 68 gas per byte to 16 gas per byte. This increases the bandwidth of Ethereum by allowing more on-chain data into each block. This proposal could help some layer 2 scaling solutions that need to store data on-chain and be able to call that data cheaply when necessary.
6. EIP-2200: Creates a definition of net gas metering changes for the SSTORE (storage of data) opcode, enabling new usages for contract storage, and lowers excessive gas costs where it doesn’t match how most implementations of Ethereum work.
Ethereum in Edge
Our users don’t have to do anything at this time. These changes won’t affect your assets in any way. There doesn’t appear to be any serious disagreement around this hard-fork, but if there is an element in the community that refuses to upgrade and creates a minority network, we’ll make sure to communicate the steps, if any, that would need to be taken to protect your assets.
The next expected hard-fork of Ethereum is tentatively scheduled for June 2020 under the name Berlin. We should see some more gas re-pricings, and we might even see a proof-of-work change. Stay tuned.
Download Edge on iOS
Download Edge from the Play Store
Android APK Direct Download
|
https://medium.com/edgewallet/ethereum-development-update-istanbul-hard-fork-edge-afd3fb5a8dfc
|
['Brett Maverick Musser']
|
2019-12-12 18:03:16.806000+00:00
|
['Istanbul', 'Ethereum', 'Cryptocurrency', 'Blockchain', 'Hard Fork']
|
📺 High tech — high life. 🔌 Quality of life in the…
|
🔌 Quality of life in the Post-Apocalyptic world got better.
Fellow Cryptolords!
In the post-apocalyptic world, High Quality of Life is a myth. Or not?
Today we’re getting down to the ground and talk about Life Quality. From the beginning of the Worldopo project Life Quality concept, let’s say, did not correspond with the actual quality of life. We want to change that!
New Buildings to sustain your Settlement
In the new concept, we assume that hexagons represent land with people living on it. Real, living, breathing people, like you or me. Like any person, we all need food, an apartment, and someplace to have a few shots of whisky when winter comes. Of course, being happier makes one work better and make funnier jokes!
Thus, three new Life Quality Buildings come in: a Grocery to purchase cereals or any other food; an Apartment House to stay for a night; and a Bar to have that whiskey!
Happiness level = Productivity level.
Because Worldopo is a business-oriented simulator, we want to make the world much more immersive and interactive than before. The first reason for touching Quality of Life is that it brings cash circulation into the Settlement's economic system to sustain other activities.
The second reason is the Realistic Economy: happier people work much better in Settlement buildings, like Hyperfactory or Financial Dep. The more comfortable people are — the more money you will get. In other words, Life Quality buildings influence the efficiency of your New World Business Empire!
Only together, we can get rid of misery and poverty in the post-apocalyptic world!
☝🏻 Follow us for mo’ updates!
👉🏻 Telegram: https://t.me/worldopo_ann
👉🏻 Facebook: https://www.facebook.com/worldopo
👉🏻 Twitter: https://twitter.com/Worldopo1
😉 See ya’ll around!
|
https://medium.com/worldopo/high-tech-high-life-2463073362ba
|
[]
|
2020-09-29 07:02:03.615000+00:00
|
['Game Development', 'Games', 'Development', 'Cryptogames', 'Crypto']
|
Sometimes happiness is just a mirage…
|
We have all experienced happiness, the feeling of being truly calm and compassionate toward yourself. This is the best phase of your life, and we all want to be in that phase no matter what the situation. Happiness is the state of our life's pendulum which we pass through every now and then before starting on our next oscillation.
I had been in a similar state since July 2020. I would work all day long, count my earnings, and declare myself successful. After quitting my job in March 2020, I landed a good-paying freelance job in May 2020, and in July 2020 I also got a full-time job. What more could one expect? My salary almost tripled, and I had no expenses because I was staying at home (due to WFH), so everything I earned became savings. My savings were more than my salary a year ago, so life was pretty good and, as I called it, happy.
Well, we all know of the mirage, the illusion. As it turned out, my happiness was also a mirage that I had created in my head. Having heard of jobs, money, and savings all my life from everybody around me, I felt I was successful and thus happy. The reality, however, was very different.
What shook me?
Just yesterday, I was working on my freelance project; in two hours I completed the work and sent them the copy. What happened next came more as a shock to me than an embarrassment. The person told me that all my work had been rejected because I completed it in just 2 hours. I did not know what to say. I told him to check the work instead of checking the timings, to which he replied: I don't need to do that, I am not giving you any more work.
This was extremely unexpected, because that same person had told me a day or two earlier that I was really good at my work and that he trusted me with it. Today he was telling me that he rejected my work because I completed it too soon. I was basically punished for being efficient.
Now you may call it ego or self-respect, but I decided to quit, as I did not want to work for someone who valued time more than work. Anyway, I did not want to argue, and I left the situation as it was.
The next morning
This morning, though it hasn't really been long, the same thoughts of what happened lingered in my mind, and I sat at my desk thinking about what had just happened, actually writing down the pros and cons of it. To my surprise, I had more pros than cons.
I realised today how busy I had made myself with that work: I had stopped reading, writing, and even spending time with my family. I said I was happy because I probably did not have time to think about what my emotions actually were.
I had actually started writing a book that was very close to my heart, and after writing some 10 pages I had just stopped, because I did not have time for myself. I am a very introspective person, but it had been really long since I had done any introspection.
This morning I sat down for introspection and wrote down everything I felt, and actually made a future plan for myself. This incident made me realise that I had lost myself in the race for money and work. Maybe sometimes incidents like these are necessary so that we can get back to ourselves.
What Next
Well, the future now looks good (not "happy"). I have made some plans, which include:
Reading Books
Learning something new each month
Writing answers on Medium every week.
Completing the book that I was writing.
Hopefully, not making such stupid mistakes again.
|
https://medium.com/@braincuddle/sometimes-happiness-is-just-a-mirrage-dd90c76e4d06
|
['Ankita Pathak']
|
2020-09-15 04:10:15.409000+00:00
|
['Happiness', 'Work Life Balance', 'Life Lessons', 'Money Mindset']
|
Another Galaxy Away
|
Photo by Felix Mittermeier on Unsplash
My day begins with my father
who will die in one hour
“Do you want to go?”
I couldn’t just say “no.”
I had to hem and haw
like he might have once
with his father
who had a meeting at the bank
then, there I was running
outside with our neighbors
while my Dad fell
like a limb from a tree
five miles apart
another galaxy away
|
https://medium.com/the-neurons-of-heaven/another-galaxy-away-ff232b7e7b34
|
['Tim Shapiro']
|
2020-12-27 17:26:17.157000+00:00
|
['Death', 'Parenting', 'Fatherhood', 'Loss', 'Childhood']
|
7 Steps To Reduce Stress In 3 Minutes Or Less
|
Our stress level is a built in meter, a gauge much like the indicator on your car telling you that it’s overheating. A warning signal that pressure is building, starting to boil over. This could be a deadline at work, something that’s out of your control like road rage, or that high credit card bill.
Stress originates in the brain and then signals the nerves of the body, to ready itself for the classic “fight-or-flight” response.
What then results is sweating of the brow, as the heart begins to pound and the blood pressure rises.
The breathing becomes shallow and rapid, as the muscles in the body begin to tense. This is a natural chain reaction to stress.
This stress response is instinctive and quick to escalate.
So what's needed is mindful stress-relieving techniques.
Once stress does begin to rise, it can get out of control, and it becomes difficult to calm yourself down.
7 Ways To Reduce Stress
What’s clinically proven however, is there are ways to reduce this stress response once it kicks in.
This by interfering with a series of brain networks, which can calm things down at its source.
1.) — Pinpoint When The Stress Begins
If there’s any type of stress, what the amygdala does is takes over the brain, this to prepare it for the fight-or-flight response.
This is a primitive “auto-pilot” impulse, when our ancestors were faced with real life or death situations, such as fighting a sabre tooth tiger.
What then happens, is the breathing becomes rapid, as the body begins to clam up.
This response is an instantaneous instinctive reaction, that’s caused by the brain chemicals adrenaline and cortisol, shooting through the veins.
So the key, becomes knowing and practicing by monitoring the exact initial signs of the stress response, such as when the shoulders begin to tense up, and realizing this before the brain becomes hijacked.
2.) — Deep Rhythmic Breathing Using The 5–2–6 Technique
What mindfully controlled slow rhythmic breathing does is it activates the vagus nerve, which travels through the body, and links the brain to all the major organs such as the heart, lungs, and gut.
What the vagus nerve does, is it slows down the activated fight-or-flight response, reversing the body back to its previous relaxed state, which is known as “rest and digest.”
The way to activate this, is by performing the 5–2–6 breathing technique, once stress occurs:
* Begin breathing deeply in and out for 5 seconds
* Hold your breath for a count of 2 seconds
* Then exhale while breathing out through the mouth or nostrils for 6 seconds
3.) — Describe What You See
Describe verbally or in your mind, 3 things you can immediately see or imagine in front of you. Describe its size, color, shape, and texture.
For instance, “a small fluffy brown puppy jumping.”
This can be done wherever you are, indoors or outdoors.
What doing so does is it mindfully snaps your attention back to the present moment.
What it does, is eliminates the worry and fear that’s associated with what may happen in the immediate future.
Describing what you mindfully see disengages the setting in the brain that gets activated once you begin to feel stress.
It distracts the brain, keeping it from ruminating.
4.) — Look At Images Of Nature
What looking at pictures of nature, such as a calm serene forest with a creek running through it, or an image of your pets or farm animals, does is it helps the brain and the heart recover quicker, when experiencing stress.
This is based on a study of University students, who were all stressed out because of an extremely difficult exam.
Immediately after the exam, they were all told that they had performed poorly.
What the researchers then did, was divided the participants into two separate groups.
The first group was shown images of a forest with a stream, this in a park like setting.
The second group was shown a busy hectic urban scene with congested chaotic traffic, where a crowd of people were shoving and pushing.
Those who were exposed to the serene image, had a quicker cardiovascular recovery, which resulted in lower blood pressure and heart rate.
Those who were shown the image of the congested city street, all reacted slower.
So the key becomes, once you begin to feel stress, intercept the response by viewing a serene image on your computer, or look outdoors provided there’s a peaceful forest like setting.
This to calm yourself down.
5.) — Channel The Stress Into Excitement
Instead of attempting to control the raging impulse that is stress, and trying to calm it down, attempt to harness that built up energy and the various stress chemicals.
Then funnel that energy towards helping you stay focused, motivated, or remaining to work harder.
Think about the passion you have for the task at hand, or the ideas you’re attempting to convey, this without becoming overwhelmed or flustered.
The key becomes reinterpreting the stress or the feeling of anxiety, channeling it as excitement and a challenge.
This could be when needing to make a speech, for instance.
Begin by generating positive feelings about communicating well, getting your point across with humor, this rather than attempting to calm the nerves down.
6.) — Mindfully Stand Erect
What standing erect and upright does, is naturally makes you feel more confident.
Physiologically, what it does is decreases the levels of stress hormones in the body.
Those who are in a constant forward slouch posture, this while performing some type of high-pressure task, are known to have more negative based feelings, than those who are consciously sitting or standing upright.
What studies show, is that standing in an upright posture, does is increases testosterone, while simultaneously decreasing the levels of the stress hormone cortisol.
What this combination does, is it forces you to feel less anxious, while being more assertive and confident.
7.) — Clench Then Release Your Right Fist Several Times
What clenching your right hand does, is it activates motion in the left side of the brain, which is known to be more verbal and logical thinking.
The right side of the brain, is more emotional and executive.
So when beginning to feel stress, fear or anxiety, which are all right brain functions, what activating your left brain by clenching the right fist does, is forces you to calm down.
|
https://medium.com/@mkawashima/7-steps-to-reduce-stress-in-3-minutes-or-less-c91b08f6e142
|
['Review On']
|
2020-12-07 16:17:58.854000+00:00
|
['Anxiety', 'Stress Management', 'Relaxation', 'Stress', 'Mindset']
|
InterValue, a Public and Functional Blockchain Designed to Surpass Ethereum and EOS
|
Bitcoin and its underlying blockchain technology gave birth to a decentralized network with a digital currency that has been running for 9 years without any major incident. One of the only problems of this network lies in the low transaction-processing speed.
The emergence of Ethereum marked the advancement to a new era in the age of blockchain. Ethereum’s smart contract technology allows decentralized applications to run on blockchain network. Speed, however, remains a major problem and Ethereum has not been able to achieve real improvements in this field, despite all efforts made by Vitalik Buterin and his team.
EOS, which was created in July 2017, fostered new innovations to increase the processing speed of the network and to create a developer-friendly blockchain environment supposed to make the use of blockchain more widespread.
However, some issues in EOS’ design, such as smart contract vulnerabilities, have triggered skepticism among the public regarding the security of the network and its ability to address some key technical challenges. For now, the community consensus promoted by EOS can solve some problems encountered by most public blockchains with their architecture, but once a flawed smart contract is deployed on the blockchain, it is likely to cause serious damage and even destroy the entire system.
Smart contract security is a key issue for the whole blockchain industry and requires designing new solutions in this regard, simplifying the complexity of smart contract writing and providing secure smart contract templates.
InterValue — a project focusing on blockchain infrastructure and platform-level core technology — may well be the project that will revolutionize the blockchain space with its superior smart contract environment as it provides several innovative solutions to overcome technical challenges.
InterValue uses a hierarchical approach similar to computer storage architecture in the implementation of smart contract functions. The Moses Virtual Machine (MVM) supports both declarative non-turing complete smart contracts and advanced Turing complete smart contracts.
The InterValue project conducted in-depth discussions on its unique smart contract system. Those discussions are documented here below in the form of a Q&A.
How does the InterValue smart contract find a balance between security and functionality? In particular, how does the team address the problem of potential quantum attacks?
The InterValue team noticed that security and functionality are often contradictory. Therefore, smart contracts are designed to strike a balance between security and functionality.
InterValue uses the unique architecture of its “Moses virtual machine” supporting declarative non-turing complete smart contracts and advanced Turing complete smart contracts. Users can select one of these two types of contracts based on their experience and needs. This allows a good balance between security and functionality, as well as between computational cost-efficiency and complexity. Thus, it can meet diverse needs in terms of transaction characteristics.
Declarative smart contracts are simple to deploy. They are highly secure and their underlying logic is close to that of legal contracts. Advanced Turing complete smart contracts are relatively difficult to deploy and are mainly used to develop DApps with more complex program logic.
For anti-quantum attacks, InterValue also has a solution. Most current blockchain systems use the Elliptic Curve Digital Signature Scheme (ECDSA) and the SHA-1 series of encryption algorithms. However, efficient SHOR attacks can be performed against ECDSA in the context of quantum-attacks.
In order to achieve quantum resistance, InterValue adopted a new quantum-attack-resistant cipher algorithm and replaced ECDSA with the NTRUsign-251 signature algorithm. NTRUsign-251 is a public-key cryptography algorithm based on lattice theory. Breaking the security of the signature algorithm requires solving the shortest vector problem in a 502-dimensional integer lattice, against which the Shor algorithm attack is ineffective.
Quantum computers also have no other algorithms that break this scheme: the best heuristic algorithms are exponential, and the time complexity of attacking the NTRUsign-251 signature algorithm is about 2¹⁶⁸. At the same time, InterValue replaces the current SHA-1 series of algorithms with Keccak512, the winning algorithm of the SHA-3 competition. Unlike classic hash algorithms, the Keccak512 algorithm uses a sponge structure, which incorporates many of the latest design concepts and ideas in hash functions and cryptographic algorithms.
Quantum computers have no great advantage in attacking hash functions. At present, the most effective attack method is Grover's algorithm, which can reduce the attack complexity of a hash algorithm from O(2^n) to O(2^(n/2)). However, the first-preimage attack time complexity of the Keccak512 algorithm under quantum attack is 2²⁵⁶, and its second-preimage attack time complexity is 2¹²⁸. This is why the NTRUsign-251 signature algorithm and the Keccak512 algorithm are used: with them, InterValue can effectively resist quantum attacks.
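For a feel of the 512-bit sponge-based digest discussed above, Python's hashlib exposes sha3_512. Note that this is the NIST-standardized SHA-3, whose padding differs from the pre-standard Keccak-512 that many blockchain projects use, so the digests would not match an actual Keccak512 implementation; it is shown only to illustrate the output size.

```python
import hashlib

# sha3_512 is a sponge-construction hash (the standardized SHA-3 variant);
# its 512-bit digest is what makes preimage search quadratically harder
# even for Grover's algorithm.
digest = hashlib.sha3_512(b"intervalue transaction bytes").hexdigest()
print(len(digest) * 4)  # 512 bits of output
```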
InterValue implements smart contract functions in a hierarchical manner. Declarative non-Turing-complete smart contracts, advanced Turing-complete smart contracts, and other technologies greatly expand the range of possible transactions.
Transaction anonymity and inter-node anonymous communication are also key characteristics of InterValue.
InterValue ensures the anonymous protection of transaction information by making transactions unlinkable and untraceable and constantly improving the anonymity protection.
Unlinkability: for any two outgoing transactions, it is impossible to prove that they were sent to the same person.
Untraceability: for each incoming transaction all possible senders are equiprobable.
Unlinkability and untraceability are attributes that must be satisfied by blockchains with strong privacy protection. InterValue guarantees both by using one-time keys and ring signature technology, and it designs and implements a zero-knowledge proof model as an optional feature that further enhances transaction anonymity.
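To illustrate the idea behind one-time keys, here is a simplified, stealth-address-style sketch. This is not InterValue's actual scheme (real designs, such as CryptoNote's, use elliptic-curve key exchange so that only the recipient can recover the spending key), and all names are invented, but it shows why two payments to the same recipient look unrelated to an outside observer:

```python
import hashlib
import secrets

def one_time_address(recipient_pub: bytes, tx_nonce: bytes) -> str:
    """Derive a fresh, per-transaction address by hashing the recipient's
    public key together with a random per-transaction nonce."""
    return hashlib.sha3_256(recipient_pub + tx_nonce).hexdigest()

recipient_pub = b"recipient-master-public-key"  # placeholder, not a real key

# Two payments to the *same* recipient produce unrelated-looking addresses,
# so an observer of the ledger cannot link them together (unlinkability).
addr1 = one_time_address(recipient_pub, secrets.token_bytes(32))
addr2 = one_time_address(recipient_pub, secrets.token_bytes(32))
print(addr1 != addr2)  # → True
```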
The InterValue underlying communication network adopts a P2P architecture with an added anonymous access mechanism between nodes to ensure privacy. Anonymous communication between nodes is established through local proxy servers that build virtual circuits, with application-layer P2P encryption, incremental random path generation, and so on. This ensures that an observer knows neither where the data really comes from nor what its real destination is.
Ethereum — the most famous blockchain for smart contracts — has been criticized for a variety of bugs and vulnerabilities. How does InterValue learn from Ethereum's mistakes, and how does it intend to surpass Ethereum?
InterValue has noted that Ethereum's programming language (Solidity) lacks the tools to perform security checks.
Intervalue’s smart contract security principles focus on 3 aspects:
1) Enhancement of the Virtual Machine security: by designing and implementing the “Moses Virtual Machine” architecture, the sandbox structure is designed to protect smart contracts from malicious attacks from the system layer. The Moses Virtual Machine mainly protects smart contracts through technologies such as process-level security isolation execution environment, resource access whitelist mechanism, and minimum privilege principle.
2) Formal verification of embedded smart contract code: the embedded verification function of the code is in the compilation link, which enhances the security and control of the smart contract.
3) Updateable smart contracts: a discarding mechanism is applied to smart contracts that are deemed vulnerable. Besides, the newly deployed contracts can inherit the relevant status and data of existing contracts.
Do Ethereum smart contracts inherently have risks? Is the ERC20 system suitable to build token assets? Does InterValue use Bitcoin’s UTXO model? What new developments are there? InterValue offers advanced smart contract functions for off-chain data access. How is that breakthrough?
According to the InterValue team, two main risks weigh on Ethereum smart contracts. One is the vulnerability caused by the flaw in the design of the contract language itself; the other is the coder’s unfamiliarity with the smart contract language. Recently, some Tokens using the ERC20 standard have experienced serious problems, making the industry doubt whether ERC20 is suitable for storing important token assets.
InterValue is designed to integrate UTXO and account-based transactions for efficiency and security reasons. InterValue's advanced smart contracts for off-chain data access break through on two main points: safe off-chain data access and safe use of off-chain data.
Safe off-chain data access is mainly realized through the built-in off-chain data access protocol and constructing an internal distributed data storage platform. Data can be read and written through the sandbox mode, and the security of off-chain data is checked before it is read. These features reinforce both security and convenience.
High costs are also a problem in today’s blockchain space. As a highly practical decentralized distributed application development platform, how revolutionary is EOS and why is it called blockchain 3.0?
InterValue allows the deployment of applications that are as simple as today's Internet and mobile applications but that are made possible by a sophisticated blockchain infrastructure. It simplifies configuration issues, with significant benefits for the public.
On the other hand, each application on InterValue's blockchain can set gas costs and fees according to its own needs. This greatly helps developers who need to build low-cost services flexibly.
In contrast to the Internet, blockchain technology should be able to support free applications. Making the blockchain free to use is the key to its widespread adoption. A free platform will also enable developers and businesses to create valuable new services.
Compared with EOS's blockchain 3.0, InterValue's advantages are mainly these: its design targets a practical blockchain 4.0 infrastructure, its technical features follow a more advanced design, and the platform is built to support large-scale, popular applications.
Blockchain technology is still far from perfect, and the differences between blockchains remain fairly small. Between the development phase and the construction of the ecosystem, there will not be many active users for some time. With continuous development and experience, the InterValue project will become more and more competitive in the future.
The most popular projects are not necessarily the best projects.
Considering its technical strength, transparency, infrastructure, and current market conditions, InterValue has the opportunity to become a top next-generation blockchain project. If it can achieve its goals and solve problems such as cross-chain transactions, its double-layer architecture, and transaction speed, it will have important and historical significance for the blockchain world.
The InterValue project has great potential. It could definitely compete with EOS in terms of investment potential.
|
https://medium.com/intervalue/intervalue-a-public-and-functional-blockchain-designed-to-surpass-ethereum-and-eos-7919ff60451d
|
[]
|
2018-08-31 01:25:27.515000+00:00
|
['Inve', 'Blockchain', 'Technology', 'Bitcoin']
|
Backtrack Cybersecurity Toolkit
|
I used to recommend Auditor for security testing on Linux. Auditor was similar to Knoppix, the bootable "live" version of Linux, but it came bundled with a ton of security tools.
Now BackTrack, a combination of Whax and Auditor, has replaced it. This build is version 1.0, but if you were a fan of the Knoppix security build, Whax, or Auditor, check it out now. You do not have to invest anything, since it runs from the CD; no installation is required. Just download it, burn it to a CD, and go.
|
https://medium.com/security-thinking-cap/backtrack-cybersecurity-toolkit-f7b355e28240
|
['Eric Vanderburg']
|
2017-08-28 18:53:30.417000+00:00
|
['Security Software', 'Information Security', 'Backtrack', 'Linux']
|
The Pressures and Disadvantages Indonesia Have To Fight in the AFF Championship 2020
|
twitter:@achmzulfikar
linkedin: https://www.linkedin.com/in/achmad-zulfikar/
SEA Games 2017 : Vietnam VS Indonesia. Source: Football Tribe/Tran Tien
The AFF Championship will be held again this December. The tournament that determines the strongest national team in Southeast Asia returns after being postponed last year due to Covid-19. Many fans are eager to see how far their country will go and who will win the championship this year.
Vietnam will be the top contenders to win the AFF Championship this year after their success in reaching the third phase of 2022 World Cup qualification by defeating regional rivals Indonesia, Malaysia, and Thailand. But Vietnam cannot underestimate those rivals this time out, because they will learn from their losses and use those lessons to strengthen their teams. One rival that Vietnam should keep an eye on in the buildup to this year's AFF Championship is Indonesia. The Timnas Garuda are reportedly drafting in European-based players as well as naturalizing foreign players of Indonesian descent to reinforce themselves for the upcoming tournament. This makes this year's AFF Championship much harder to predict.
However, with so much optimism revolving around Indonesia, not many people know that the probability of them struggling is almost as likely as the probability of their success. This is because they will have a big disadvantage even before the tournament kicks off — they will probably be unable to deploy their best players come December. Why?
The biggest reason is that the AFF Championship does not fall within the FIFA international match calendar. This means that clubs will be stricter about allowing their players to report for national team duty. Even though this calendar issue has been present since the infancy of the AFF Championship, this is the year it will affect Indonesia the most, because in recent years a lot of Indonesian players have gone abroad to further their careers.
If all the players plying their trade outside Southeast Asia are prohibited by their clubs from playing in the AFF Championship, then Indonesia will be deprived of players with valuable experience in higher-quality leagues. Even if nobody knows how much influence they could exert on the squad or how good they will be while representing the Timnas Garuda, there is already hype surrounding the presence of foreign-based players in the Indonesia squad, so their absence would bring great disappointment to fans.
Three key players have recently become the epitome of Indonesian players making it big on foreign soil, and their loss would be a huge disadvantage for Indonesia in the upcoming AFF Championship. The players in question are Asnawi Mangkualam Bahar, Egy Maulana Vikri, and Witan Sulaeman. Losing both Egy and Witan, who currently play in Slovakia and Poland respectively, would without a doubt affect Indonesia's attacking prowess. With the skill, speed, game-reading, decision-making, and passing and shooting accuracy that the two players possess, they are probably the best wingers Indonesia have right now.
However, while we cannot deny that both Witan and Egy are quality players, Indonesia still have many fine wingers who could easily slip into their boots. Persipura Jayapura wonderkid Ramai Rumakiek and Irfan Jaya, who is having a stellar season with a struggling PSS Sleman side, are good alternatives should both Witan and Egy be absent from this year's AFF Championship.
It’s another story for replacing Asnawi, though.
Asnawi, who plays his football in the South Korean second tier with Ansan Greeners, has been the first-choice right-back ever since Shin Tae-yong assumed control of the national team. The PSM Makassar graduate has been considered one of Indonesia’s best players in recent times, and for good reason. He possesses great defending abilities within his area that are on par with more experienced players, as well as knowing well when to delay the game, when to tackle, when to commit a foul, and other essential skills that a defender must have.
Additionally, Asnawi’s attacking skills are just as potent as his defense. His acceleration and crossing from the right flank can pose danger to opposition defense, requiring opponents to foul him to stop him in his tracks.
Without the three of them, Indonesia will have to fight harder to achieve something from this year’s AFF Championship. The Football Federation of Indonesia (PSSI) have already set a tough goal for Tae-yong’s men — win their maiden AFF Championship title and end the “runners-up curse” that has been plaguing Indonesia since the start of the tournament. Indonesia have qualified for five AFF Championship finals but have failed to win any of them.
While everyone on the Indonesia national team is feeling the immense pressure to do well in the AFF Championship, the pressure on Tae-yong is undoubtedly the greatest, considering Indonesia's wretched run of form that has continued into his reign. If he fails to meet the target set for him by the PSSI, there is a good chance that Tae-yong will be sacked from his position.
With all the pressures and disadvantages they face, it will be interesting to see how far Indonesia go in this year's AFF Championship. Will they shock their rivals and make a name for themselves, or will they once again come up with nothing but disappointment for their fans? We'll see how things unfold next month.
Posted in : https://football-tribe.com/asia/2021/11/22/the-pressures-and-disadvantages-indonesia-have-to-fight-in-the-aff-championship-2021/
|
https://medium.com/@achmadizulfi/the-pressures-and-disadvantages-indonesia-have-to-fight-in-the-aff-championship-2020-1d3747dba534
|
['Achmad Zulfikar']
|
2021-11-24 14:32:35.495000+00:00
|
['Asia', 'Indonesia', 'Football', 'Southeast Asia', 'Aff']
|
Headlines: BCCI President sourav ganguly tests Covid 19 positive in Kolkata
|
Sourav Ganguly, president of the Board of Control for Cricket in India (BCCI) and former captain of Team India, has tested positive for the coronavirus. According to reports, Ganguly's Covid-19 test came back positive on Monday night.
Source || Dainik Jagran: Big news concerning Sourav Ganguly, president of the Board of Control for Cricket in India (BCCI) and former captain of Team India, has come to the fore. Ganguly has been found to be infected with the coronavirus. The BCCI chief took a Covid-19 test on Monday, and the report, received late on Monday evening, was positive.
It is worth noting that about a year ago, BCCI President Sourav Ganguly's brother Snehasish was found to be infected with the coronavirus. At that time Ganguly's own report came back negative, but this time it is positive. According to media reports, Sourav Ganguly has been admitted to Woodlands Hospital in Kolkata. Ganguly had been staying at his Kolkata home for a long time.
He was the team's captain for a long time.
Sourav Ganguly, who played international cricket for India for about 12 years, also captained Team India for a long time. Under his leadership, Team India performed well on foreign soil. Under his captaincy, Team India also reached the final of the 2003 World Cup, where they lost to Australia. He won many bilateral and multi-nation series for the country.
Sourav Ganguly, who made his international debut in 1996, played his last international match in 2008. He continued to appear in the IPL until 2012, but had retired from international cricket. After retiring from competitive cricket, he also served the game as an administrator and coach, and he was President of the Cricket Association of Bengal for a long time.
|
https://medium.com/@t20india.in/headlines-bcci-president-sourav-ganguly-tests-covid-19-positive-in-kolkata-8c93166e5b08
|
[]
|
2021-12-28 07:13:52.355000+00:00
|
['Bcci', 'Souravganguly', 'Covid 19', 'Headlines']
|
Bridging the Impact Investment Gap: Infrastructuring Tomorrow.
|
Financing the Transition — Blog1.
This is the first of a series of blogs on financing the transformations that our societies need. It is part of our project ‘Re-coding for a civic capital economy’, co-financed by EIT Climate-KIC, and also builds on ideas developed through the EmergenE Room programme with McConnell Foundation, our work with Viable Cities, and the EIT Climate-KIC supported Long Term Alliance.
Introduction
Our generation is facing an unprecedented long emergency — from climate crisis to economic hardships to health emergencies to social injustice — all further exposed and deepened by the ongoing Covid-19 pandemic. Governments around the world are increasingly aware that the scale of the present calamity requires radical capital, policy, and infrastructural solutions beyond our existing capabilities. The impact of these investment decisions will be felt for generations to come, with all the resulting implications both now and in the future. These intergenerational investments will need to address the needs of societies as a whole, carrying with them an array of accountabilities. To borrow from the future is both to assume responsibility and to operate with a duty of care for tomorrow.
In response, we have seen the development of Green New Deals around the world. Linked in part to recovery from the Covid-19 pandemic and in part to a post-carbon economy, the European Union is targeting initial investment of 646 billion euros to be matched by national governments. It is undoubtedly necessary, given the current state of the world, that investment does not follow the traditional disaster capital route of concentrating wealth. Instead we need to ensure a just and equitable transition to a net-zero carbon economy, advancing not just community benefit but also community wealth in all its forms.
This future is being cast in a moment where we face an accelerating deficit resulting from Covid-related spending in 2020. Governments are equally under pressure to implement rapid solutions intended to jump-start the real economy. This drives the policy landscape, with decisions about recovery funds tending to be based on the rear-view mirror rather than on what lies ahead. Competing incentives for cities and local stakeholders are seeded, with an emphasis on job creation and the repetition of historical approaches. This is at the expense of the innovation required to enable the next generation of solutions.
It has often been stated that societies, civilisations and cities live or die by the infrastructure they build. Many of the current infrastructures across our cities and wider society are no longer fit for purpose. In some cases, they are on the brink of collapse. Furthermore, we are at risk of building the wrong infrastructures for the civilisation of the 21st and 22nd centuries. What are the social, ecological, cultural, economic, physical, and institutional infrastructures that are necessary for this new age of long emergencies?
Over the past twelve months, Dark Matter Labs has been working on this challenge with various stakeholders throughout Europe, including Viable Cities, Scottish National Investment Bank, NatureScot, and other parts of the Scottish government, the EIT Climate-KIC Healthy Clean Cities programme, as well as with partners through The Long Alliance.
This work has made it apparent that the context and scale of the investments required for initiatives such as the Green New Deal are of an order of magnitude our societies and institutions have not yet grasped. We don’t yet have the tools, frameworks, and mechanisms necessary to bridge the capability, capacity, and capital gaps and so address this transition. We need to ‘re-code’ capital to enable the transformations we need.
In this series of blogs, we seek to articulate an urgent need for bridging the gap between the capital clearly aggregated at a macro scale and the investment needed for implementation of transition infrastructures on the ground.
The first blog outlines the five key intervention areas that present unique opportunities for macro capital to address directly the health, climate, economic, and social crises of our time. By understanding the interventions’ core underlying features we are then able to lay out ten strategic recommendations. All of them are linked together by new, or ‘re-coded’, forms of contractual and governance infrastructure for capital. These, in turn, can help foster net-zero, wealthier, and more resilient communities, with the robust social infrastructure necessary to address this new age of long emergencies.
Pathways Forward
What has emerged over the course of our work are five key areas of societal-scale transition infrastructures:
Nature-based assets. These include trees, wetlands, and pools of biodiversity. This future is being demonstrated by work across European cities, from Milan to Vienna and from Glasgow to Madrid, where municipalities are looking for whole-population-scale nature-based solutions. Examples include the planting of millions of street trees and the transformation of canopy cover across a city through the creation of a ring of urban forests. These solutions and future asset classes have some of the highest potential for capturable spillover values, whether that relates to the ‘heat island’ reduction effects of tree canopies, improvements to water drainage systems, or an array of health benefits such as the reduction of asthma rates in children. The need and value of these city-scale interventions are obvious but the viable operational, contractual, and governance means to execute and capitalise this vision are in urgent need of development. For more details, see some of our work at https://treesasinfrastructure.com.

Deep retrofitting of our cities. Many municipalities, including Malmö, Sweden, and Sofia, Bulgaria, have announced radical missions to retrofit what amounts to millions of homes across Europe by 2030. Some are exploring pathways for the physical retrofit of their entire cities. Increasingly, our work shows the need to move beyond house-by-house energy retrofits to whole-street and whole-district approaches and whole-city investment cases and business models. This needs to be integrated with parallel vertical-supply-chain innovation and collective strategic procurement across utilities, insurers, governments, and landowners. In these societal-scale aggregative models, the deployment of capital needs to be directly linked to the spillover value of a street-wide or district-wide retrofit strategy, with new mechanisms that allow for the capture of both direct and indirect benefits.
By establishing community wealth models at the heart of this approach, we emphasise community-driven outcomes as a key de-risking factor for what is inevitably disruptive work in neighbourhoods.
This approach generates returns on a longitudinal and macro-investment scale, which may not always be captured through more traditional project measures such as internal rates of return (IRR). This is because the spillover effects go far beyond the direct energy savings or even the indirect health implications. They affect a city’s unemployment rates, the local velocity of spending, community wealth and GDP, and drive direct positive benefits with direct as well as more indirect beneficiaries. The contractual integration of this whole value case is vital to creating a sustainable financial proposition, and the community/street-driven transition is the key to growing distributed community wealth, moving us away from the disaster capital concentration of resources, innovation, and new monopolies.
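To see concretely why a narrow IRR lens understates such projects, here is a minimal pure-Python IRR calculation via bisection. The cash-flow figures are invented for illustration: a retrofit judged only on direct energy savings can look financially weak, while the same project with monetised spillovers (health, employment, local spending) clears a much higher hurdle.

```python
def npv(rate, cashflows):
    """Net present value of yearly cashflows (cashflows[0] = year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0):
    """Internal rate of return by bisection (assumes one sign change)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical street-scale retrofit over 20 years:
direct_only = [-1000] + [70] * 20       # direct energy savings alone
with_spillovers = [-1000] + [130] * 20  # plus monetised spillover value

print(round(irr(direct_only), 3))
print(round(irr(with_spillovers), 3))
```

The second IRR is several times the first, yet both describe the same physical project; the difference is purely whether the spillover value has been contracted and captured.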
Again, the need and value of city-scale and societal-scale intervention is clear. It enables enormous savings in yearly carbon emissions from existing residential properties — one of the key emissions sources in our cities. Yet, our capacity to institutionally structure and organise the spillover values inherent in these societally entangled interventions is still missing. Without this capacity, residential retrofits will continue to be perceived, wrongly, as unpopular, capital-intensive, revenue-ineffective, and subsidy-dependent interventions, even if they are the only genuinely smart option.

Social infrastructures. The social and civic fabric in our cities has suffered significant and chronic under-investment. While new financing commitments are announced regularly, the fundamental challenge lies in the funding side of the business case, specifically targeting mechanisms to recognise and capture spillover values. A good example of a strategic social infrastructure investment is the $7-a-day universal childcare scheme in Quebec, Canada. The programme started in 1997, and over a period of twenty years led to a tripling of the number of mothers with children under the age of five who participate in the Quebec workforce. The impact has been particularly significant for middle-to-low-income women. The resulting economic benefits are vast: $5 billion added to Quebec’s GDP and a 64% reduction in the number of households who received social assistance. The spillover effects over the past twenty years of every parent being in a position to choose when or whether they return to work have spread well beyond tax dollars and touched on almost every aspect of Quebec’s society. Empowered women make decisions on what level and quality of schooling their children receive, what local businesses they support, where they invest their education and retirement funds, and what types of communities they want to live in.
Strategically investing in public childcare, supporting minority women and first-time immigrants by guaranteeing them a basic universal income, are all examples of social infrastructures that produce returns beyond IRR. Our ability to finance such civic goods is critical for any meaningful transition.

New, large-scale, urban developments. Even as the pandemic is undoubtedly impacting on cities’ fundamental economic and residential geography, many cities are (re)developing land for growth and economic adaptation. In this new age, large-scale developments cannot be justified by land value alone. Increasingly, they also need to be structured via a systems value lens — which is critical if they are to be sustainable, including both net-zero carbon in operational terms and from an embodied carbon perspective. As illustrated by our collaboration with the City of Edinburgh (and, more specifically, Granton’s Sustainability and Strategy initiative), they need to reconfigure a city’s housing offer, making it fit for a post-carbon, post-Covid economy. In addition, these developments need to be viewed in the context of how they can drive whole-value-chain innovations, how they support the seeding of new circular material and maintenance economies, and how they support the regeneration and adaptation of adjacent settlements. Equally important is how they can create the new regulatory innovations necessary to drive the wider transition of the city, and how they drive alternative tenures to rent and ownership, with an emphasis on the value of neighbourhoods rather than houses. Large-scale projects need to account for, drive, and value these system impacts if they are to manifest the societal outcomes necessary to support transition, and to justify the real estate sector’s social licence to operate in this age of long emergencies.

Agricultural land and our food systems as a whole.
Our soil holds the key to both our health, as a source of food, and to carbon sequestration and restoring biodiversity. Any meaningful transition will require us to reconfigure our food and nutrition systems toward regenerative practices. At present, it is more expensive to purchase an organically grown local apple than one cultivated far away, on an industrial scale using environmentally harmful pesticides and chemicals. The latter also accumulates largely underpriced water waste and associated carbon emissions during its transportation to market. A systemic shift in the economy of land and agriculture is vital for the transition. It involves the redesign of incentives, repricing of externalities, the valuing of soil quality maintenance and regeneration, and the inclusion of carbon sequestration potential into the value of land. The system value of the asset is strategically understood, and the case, evidence, and values are increasingly validated. But we now need to build the funding, financing, contractual, and governance structures to manifest this fundamental value.
In order to address the key challenges and asset classes of the future, we recognise there is an urgent need both to organise capital in an unprecedented manner and to re-code capital in contractual and governance terms — amongst other things, to enable the equitable and effective collaboration of public and ‘private’ capital. We believe that this re-coding of capital is fundamental to unlocking these near-now asset classes in order to generate their future values.
Re-Coding Capital
In order to manifest these near-now asset classes, three structural innovations are necessary.
Long Financing: New investment instruments for the long-term allocation of capital
Assets operating and functioning like infrastructure require both long-term vision and long-term allocation of capital. Building the capacity to finance a growing list of long-term transformational infrastructures will be a vital component of transition capital provision. This is hard to achieve within the constraints of short-term budgets and the need for rapid recycling of capital. How could we aim to reconfigure our cities and our communities for seven generations to come, if the nature of underlying investment remains locked to a period of eight-to-ten years?
For example, the idea of Smart Perpetual Bonds allows us to update an old financial instrument so that it meets the societal needs of the 21st century in a complex changing world. They:
Replace the need for repayment of capital with the in-perpetuity offer of fixed income.
Create the capacity for the contingent pricing of value based on preconfigured variables with minimum transactional overheads.
Offer radical, real-time transparency in the factoring and distribution of value providing smart provenance for investment.
Offer the option for the fractional trading of smart coupons on digital secondary markets, enabling the return of capital and income.
By implication, Smart Perpetual Bonds create pathways for financing a new class of assets that yield value across the long term.
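The fixed-income-in-perpetuity feature can be sketched with the classic perpetuity formula, price = coupon / discount rate. This is a deliberate simplification that ignores the contingent pricing and tradable smart coupons described above:

```python
def perpetuity_price(annual_coupon: float, discount_rate: float) -> float:
    """Present value of a fixed coupon paid forever: c / r."""
    if discount_rate <= 0:
        raise ValueError("discount rate must be positive")
    return annual_coupon / discount_rate

# A bond paying a fixed 4 per year, discounted at 3%:
print(perpetuity_price(4.0, 0.03))  # ≈ 133.33
```

Because there is no repayment of principal, the instrument's value derives entirely from the long stream of income, which is exactly what makes it suitable for financing assets that yield value across generations.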
Analogue versions of these methods have long been in existence, and their digital transformation is well underway. Raising capital using digital solutions ensures additional protection against arbitraged appreciation and fixes the true value of underlying investment rather than the cumulative effect of trading opportunities and excessive servicing costs.
There is another benefit. The next generation of value-conscious investors are seeking long-wealth opportunities that both ensure a viable fixed return and lead to radical societal transformation. Private banks entrusted with managing their wealth are in search of solutions. Structuring a portfolio of long-financing instruments, with Smart Perpetual Bonds being one of them, will transition vast amounts of progressive wealth towards the solutions that our communities require.
Through The Long Alliance, we will further develop the Smart Perpetual Bond as well as exploring other methods to channel long-term capital towards ‘long assets’, which are designed to provide returns far beyond our own lifetime.
New Public Governance Models: Off public balance sheets yet for public interest
Societal-scale interventions and whole-city transition infrastructures require new models of governance and organisation, which are fundamentally wired to acknowledge the societal entanglement of value. This involves building new institutional structures, which need to be:
1. Systemically machine-readable.
2. Off public balance sheets yet publicly accountable.
3. Smart civic asset managers, which transparently balance the flows of value and interactions across an ecosystem of stakeholders with evolving incentives.
These civic-development institutions are crucial mechanisms for contracting system value flows and the adjacent many-to-many spillovers. They represent the necessary progression of Public Private Partnerships (PPP), acknowledging our complex reality while limiting rent-seeking incentives that are traditionally structured into investment propositions.
These institutions would bridge the gap in terms of:
Originating new approaches to financing.
Structuring and contracting the uplift value of these infrastructure investments.
Investing in the development of many micro community-driven assets — such as a network of urban forests — for generating collective outcomes.
Pooling and balancing flows of value across the system.
Such civic-development vehicles will disrupt the existing dynamics of public/private relationships and prevent the use of rent-seeking mechanisms that are currently layered in every investment opportunity.
Innovating Funding: New mechanisms for contracting value flow
The challenge of financing this near-now transformation asset class is not the development of new funds in itself, but rather the structuring and contracting for spillover value in order to service the capital. This requires new ways and means of commingling the distributions of value and liabilities, generated and mitigated by these novel societal infrastructures.
This has all been made possible by developments in the application of smart contracts and distributed ledger technologies. Smart contracts give us the capacity to construct whole new sets of value flows and currencies. They allow and enable radically transparent, open, contingent, micro flows of value between stakeholders, facilitating an automated and almost frictionless means to transact, pool, aggregate, and couple across the ambient, micro, and known distributions of value and their spillovers, which, to date, were unaccountable and untradable.
While this future offers radical possibility, it also requires new digital institutional infrastructure, such as digital public registries for land. The provision of this public infrastructure is vital for unlocking this future and for creating next-generation investment markets for Civic Infrastructure Assets.
Ten Initial Recommendations
Making this future manifest in the near term requires tangible action now. A series of critical issues need to be addressed together if we are to make the necessary transition. For example, how Green New Deals and Covid-19 recovery investments are structured over the next two quarters is essential to the direction we will take. While we continue to engage in research and development across several projects and with stakeholders from all sectors, the following initial recommendations are targeted in particular at governments (national and municipal) and strategic investment decision-makers. They form a six-month roadmap to create the necessary scaffolding that will enable a just transition to a post-carbon, post-Covid economy.
Novel types of infrastructure. It is vital that we broaden, at a policy level, the definitional scope of infrastructure beyond roads, public transport and energy, to include both civic and nature-based assets. We need to recognise the vital role novel infrastructure will play in the transition our cities and places face. This reclassification of infrastructure also needs to acknowledge the role tangible and intangible assets play in the functioning of our cities. While cities accept that these assets need substantial investment to enable net-zero transition, they are burdened by legacy financing and accounting mechanisms, as well as by the practical inability to capture value; both issues prevent them from making this intuitive leap.
Impact investing. Social finance has paved the way for analysing a portfolio of returns rather than simply targeting a financial KPI. Now is the time to embed and advance these understandings and our initial outcomes-based finance strategies, developing a new investment thesis for transition infrastructures with a whole portfolio of impacts that can be captured in contracts. This will allow cities to escape the singularities of silo-value optimisation and the financial risk-return management of traditional infrastructures. It will enable them to capture a more holistic, population-scale stream of benefits from their transition investment portfolios.
Co-beneficiary infrastructures. It is clearly critical that we take a co-beneficiary view of all our infrastructure investments, designing them specifically to build synergies instead of optimising investment in solutions for singular outcomes. For example, we can choose to optimise nature-based solutions for carbon sequestration (which will encourage us to establish fenced-off forest reserves, replacing critical agricultural land and possibly ignoring biodiversity) or we can approach nature-based solutions as vital co-beneficiary assets, which have carbon-capturing capacity but also the means to manage heat island effects for energy utilities, health outcomes for communities, sustainable urban drainage benefits for water utilities, flood risk reduction for insurance companies, and which address our growing biodiversity crisis. In this way, a tree becomes an excellent source of return beyond a traditional measure of IRR and the means to regenerate our cities. The design of infrastructure assets for co-beneficiary returns requires intentional advocacy and horizontal policy infrastructure. This will be vital if we are to use our intergenerational transition investment for greatest efficacy.
Preferencing micro-infrastructures. Both resilience, or anti-fragility, and community wealth development are necessary in an age of long emergencies. Across the global development landscape, we require a significant shift in our investment thesis, moving from a preference for large-scale, monolithic infrastructures to micro-networked, distributed infrastructure provision. To date, our urban and societal infrastructure development models have been skewed towards the large-scale and frequently disconnected from the local. A single, large forest is not the solution for cities, even if it feels easier and more efficient in terms of land assembly, procurement, and management. Instead, we need a network of dispersed micro urban forests, established and maintained by communities and neighbourhoods across the city, if we are to construct the intangible and tangible properties of these assets. This preference for micro and networked assets needs to be hardwired into our procurement thesis with appropriate community development and participation mechanisms.
Rapid investment in digital infrastructure and capacity. The migration from analogue to digital infrastructure needs to be systemically advanced at scale and speed in order to facilitate the transitional revolution. Technical innovation investment will need to permeate a new spectrum of open, real-time, digital public services, ranging from data-sensing with Agent-Based-Model (ABM) capabilities to advanced digital public registries. Public investment in the development of these digital and oracle infrastructures — including the legal, financial, and regulatory standards and protocols — is vital to unlocking the new classes of business and value models necessary for an equitable transition.
Transitioning from shovel-ready to shovel-worthy. Governments need to rapidly reassess the flow of infrastructure projects they have committed to fund, whether part of the Covid-19 recovery or pre-existing. This is necessary to ensure that they meet the priorities of the 21st century and are not locked into 2017/18 projections of the world to come. It also applies to investments in real estate, energy and transport, as these projects are often still ruled by short-term financing structures, with isolated linear value flows and private governance detached from the community wealth generation necessary as part of this transition investment. Building the rapid capacity for municipalities and provincial governments to undertake this review is vital and urgent, especially in a moment where there is increasing fiscal pressure and where economies worldwide are in a hiatus, providing a limited window of opportunity to redesign and reset infrastructures.
Developing new viability tools, mechanisms, and protocols. Our existing viability assessment tools and mechanisms are completely outdated, and do not take into account the strategic risks our cities and societies face. For example, we cannot speak about transformation of our food systems and urban landscape without addressing the land valuation practices that discount regenerative farming and that deprioritise urban forests in favour of condominium developments. These deeply embedded practices span appraisal firms, accounting practices, building permits, zoning regulations, and municipal tax rates. If governments commit to strategic investment into the novel infrastructures discussed here, the positive effect on adjacent industries’ assessed spillover values is bound to be significant. Strategic creation of lead markets for nature-based solutions will ensure the emergence of innovation sectors around finance, remote-sensing technologies, and smart-contracting services in cities and countries. Rebuilding public viability assessment tools, valuation mechanisms, discount rates, and appraisal incentives to take into account such upsides is a vital and necessary component of an investment system fit for an age of long emergencies. We make what we measure and, currently, we measure the wrong things.
Developing alternative governance futures for transition infrastructures. PPP arrangements have not yielded the desired outcomes. Engineering a new typology of civic development vehicles that are off public balance sheets and yet publicly accountable should be the priority of every government.
Rapid experiments on the ground. It is essential that governments invest early and quickly in a portfolio of experimental probes that advance novel micro-infrastructure financing experiments, from whole-street retrofits to urban forest networks. The development of adjacent supply chains, policies, institutional infrastructure, innovation, and data-and-evidence collection is equally essential. Investing to learn and build the capabilities of vital adjacent industries will be essential to the success of any large-scale deployment. We are involved in several EIT Climate-KIC projects that are already doing this, such as ReCode, the Health Clean Cities Deep Demonstration, and the Pandemic Response Projects focussing on district retrofits. Many more are needed.
Market-making instruments.
Collective risk management facility. We need new hybrid institutions to pool and manage risk across both governmental departments and non-governmental sectors such as banks, utilities, and insurers. This pooling function will create the capacity to address the unmanaged long tail and the cascading risks that are a feature of the age of long emergencies. This function would further crystallise the demand for integrated procurement and co-beneficiary investment in civic infrastructure assets, thereby building the pathways for achieving outcomes such as lower asthma rates and decreased flooding occurrences.
Transition finance innovation sandbox. This should operate across government and non-governmental agencies, including policy-makers, regulators, professional institutes, standards agencies, investment houses and intermediaries, insurance organisations, rating agencies, and fintech startups, among others. Its primary purpose will be to support the design, experimentation, and implementation of new financial instruments and the necessary adjacent institutional infrastructure, including protocols, regulations, and standards for enabling the next-generation financial markets for transition investing.
Whole-value-chain investment. Governments will need to build on the learning from the many difficulties faced in initial Covid-19 response procurement, and invest in whole-value-chain renewal and innovation for a post-carbon, post-Covid economy. Addressing this crisis is not just about investing in a school or building a bridge, or even an urban forest. Its scale requires us to invest in the development of all of the resources and supply chains needed to truly build back better. The scale and speed of transition means we cannot wait for organic supply and demand matching and coordination to emerge. This will require the intentional acceleration of innovation capacity across whole supply chains.
For example, nature-based solutions require investment in remote-sensing capabilities as well as new financial instruments, and investment in elements as diverse as the production of seedlings, new labour market skills, or the capabilities and organisational cultures in municipal green space departments. The acceleration of these near-now infrastructure assets needs rapid, coordinated investment across this whole supply chain, not only to match increased demand but to innovate practices, methods, and machinery for a post-carbon economy.
Building the institutional capacity to support the investments and innovation across whole value chains is vital to addressing the challenges of the transition and requires immediate governmental investment. This investment thesis needs to work in synchrony with the Green New Deals to drive the whole value chain innovation necessary for a post-carbon economy transition that is just and inclusive.
Conclusion
The structural transformation our societies and cities need presents us with systemic challenges that require a system-based response. The announcement of new commercial or quasi-commercial funds will not resolve this issue.
Our investment thesis is straightforward: we see the deep need for a new typology of value structuring which is macro in scale with regard to systemic orchestration, and micro and hyperlocal in the development of co-beneficiary responses. We do not believe that projects simply connected to linear KPIs, funded by short-term investments, and governed by divergent long-term interests, are able to meet the urgent challenge any longer.
We see the roadmap unfolding through the parallel development of fundamental yet practical innovations across key asset classes, such as nature-based solutions and whole-city or whole-district retrofits. The political will and commitment is already manifest, yet the technical and functional capacity remains underdeveloped. To achieve the necessary scale and speed of systemic change will require a movement of open innovators and strategic sponsors across governments, sectors, and geographies.
We see cities and countries emerging as leaders and lead market makers as they invest in the digital transformation of their systems, creating secondary industries of technical expertise, and viewing their cities as platforms for novel asset development.
The next phase of this journey for us and our many partners is to develop on-the-ground experiments and to gather evidence in order to test and validate our investment hypothesis. We will openly share what we discover and learn.
If you want to find out more about how to engage with this mission, please contact Indy Johar, Anastasia Mourogova Millin, Raj Kalia, Joost Beunderman or Chloe Treger at [email protected].
|
https://provocations.darkmatterlabs.org/bridging-the-gap-infrastructuring-tomorrow-d19971b1f351
|
['Dark Matter']
|
2021-03-27 11:42:55.546000+00:00
|
['Fintech', 'Finance', 'Government', 'Cities', 'Infrastructure']
|
7 Ways We Can Change the World Into a Loving One
|
Take-Aways
Love each other, and build a world for your children and their children: something you would want them to live in. Not something full of hatred and conflict, but something full of love and honor, something full of beauty, grace, and mercy. Show someone a little kindness and respect. If you are a believer in God, remember to love your neighbor, and love yourself while you are at it! We are all the same inside, so treat people the way you want to be treated; it is the golden rule for a reason.
I hope you benefit from and enjoy this; it has been lying heavily on my heart and mind. These 7 ways will go far in making the world a better place and, while you are at it, I believe they will bring you inner peace and bliss for having made the world a better place!
If you enjoyed this, maybe you will enjoy this one as well.
|
https://medium.com/amplifies/7-ways-we-can-change-the-world-into-a-loving-one-aee1e97e92ca
|
['Tatiana Santana']
|
2021-01-09 06:07:41.112000+00:00
|
['Peace', 'Kindness', 'World', 'Love', 'Self Improvement']
|
Learning for Beginners in Pandas
|
Reading files is a fundamental task for any data analyst, and pandas is a library that helps us accomplish it efficiently. The pandas library has rich features that make the life of an analyst easy. In this blog, I would like to share some tips for reading files using the pandas library, and I will also discuss some common errors and how to avoid them. These are the most frequent challenges encountered by early pandas programmers.
The first task is to install and import the pandas library, and it is very simple.
If you haven’t installed the pandas library before, install it using the line below.
Then use the line of code below to import the library.
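As a minimal sketch, the two steps look like this (the pip command runs in a terminal, not inside Python):

```python
# Install once from a terminal: pip install pandas
# (in a Jupyter notebook you can run: !pip install pandas)

import pandas as pd  # pd is the conventional shortcut
```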
pd is a widely accepted shortcut for pandas. Of course, you can use your own shortcut, but make sure you don’t confuse yourself by inventing new ones.
Now, to read a file in Python we use the pandas method read_csv (for CSV files) and store the result in a DataFrame. So, what is a DataFrame? You can think of a DataFrame as a simple Excel sheet containing rows and columns, or as a SQL table with relevant column names.
To read a file, we just execute a simple one-line call to read_csv with the file path.
However, on Windows-style paths this can throw an error.
As backslash “\” is an escape character in Python, to fix this we need to use a double backslash “\\” in the path.
Alternatively, we can use a raw string instead of doubling the backslashes, i.e. simply add an ‘r’ character at the start of the path.
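The backslash issue is really a Python string issue rather than a pandas one; the path below is made up purely for illustration:

```python
# Double every backslash, or prefix the literal with r so backslashes
# are taken literally; both spellings produce the identical path string.
escaped = "C:\\Users\\me\\data\\sales.csv"
raw = r"C:\Users\me\data\sales.csv"

print(escaped == raw)  # True

# Either form can then be passed to pd.read_csv(escaped)
```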
Once we have read the file, we can check how the data is arranged in the DataFrame using its head method, which displays the top 5 rows by default.
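With a small made-up DataFrame, head looks like this:

```python
import pandas as pd

# A tiny illustrative DataFrame
df = pd.DataFrame({"id": range(10), "value": list("abcdefghij")})

print(df.head())    # first 5 rows by default
print(df.head(3))   # pass a number to show more or fewer rows
```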
Here, we find all the columns squashed into a single column, which is not what we want. The file was read incorrectly because it uses a semicolon as the delimiter separating the columns, so we pass the “sep” parameter to read_csv. This ensures the columns are split according to the delimiter used in the file.
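As a sketch, with in-memory text standing in for a semicolon-delimited file (the contents are made up for illustration):

```python
import io
import pandas as pd

# An in-memory stand-in for a semicolon-delimited file
csv_text = "name;age;city\nAna;34;Lisbon\nBo;29;Oslo\n"

df = pd.read_csv(io.StringIO(csv_text), sep=";")  # sep sets the delimiter
print(df.columns.tolist())  # ['name', 'age', 'city']
```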
In most cases the above way of reading files works fine, but there are situations where we get a UnicodeDecodeError; this happens when the file uses a different type of encoding.
By default, Python 3 expects ‘utf-8’ encoding; any deviation from this will simply throw the error above. Fortunately, we can fix this issue easily: just execute the code below.
Here we use the encoding parameter to state the encoding that the file actually follows.
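As a sketch, with a small made-up Latin-1 file standing in for the real non-UTF-8 export:

```python
import os
import tempfile

import pandas as pd

# Create a small Latin-1 demo file for illustration
path = os.path.join(tempfile.gettempdir(), "demo_latin1.csv")
with open(path, "w", encoding="latin-1") as f:
    f.write("city;name\nMálaga;Ana\n")

# Naming the right encoding avoids the UnicodeDecodeError
df = pd.read_csv(path, sep=";", encoding="latin-1")
print(df.loc[0, "city"])  # Málaga
```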
Now, the big challenge is to identify the encoding of a file. Unfortunately, we will not be able to identify every encoding, but we can detect many of the most frequently used ones using the Python package chardet.
To install chardet
Import the package with below line of code
Run the below code snippet to identify the file encoding
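Assuming chardet is installed (pip install chardet), the snippet might look like this; the demo file and its contents here are made up for illustration:

```python
import os
import tempfile

import chardet

# A throwaway file standing in for the CSV whose encoding is unknown
path = os.path.join(tempfile.gettempdir(), "mystery.csv")
with open(path, "w", encoding="latin-1") as f:
    f.write("city;name\nMálaga;Ana\n")

with open(path, "rb") as f:  # open in binary mode to get raw bytes
    raw = f.read(100)        # a sample of the first hundred bytes

Encoding_Details = chardet.detect(raw)
print(Encoding_Details)
```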
The code opens the file in binary mode, and chardet inspects the first hundred bytes to determine the encoding being followed in the file. The result is displayed when the variable “Encoding_Details” is printed.
In this tutorial we learned how to read files, handle path and encoding issues, validate the DataFrame format, and detect the encoding of a file. This is basic stuff, but it will really help pandas beginners.
Follow my future blogs for more tips on reading files using pandas library with different kinds of errors and different ways to handle them. Keep Learning …
|
https://vnimmana.medium.com/learning-for-beginners-in-pandas-43bacad4ad0e
|
['Vijay Krishna Nimmana']
|
2020-12-02 20:52:55.901000+00:00
|
['Chardet', 'Unicodedecodeerror', 'Encoding', 'Reading Files']
|
Being a Disabled Lover…. of History
|
Welcome to my latest blog post. I know I said I would be moving on to beliefs about disability this week. However, I fear I may have bitten off more than I can chew. You see, the topic is more complex than I imagined it in my head when I was planning it. It is taking me a while to piece the puzzle together, but it should be ready for next week. In the meantime, I have decided to give you an insight into the conflicting world of being disabled and a history lover. Unlike every other post I have written, I am making this one up as I’m typing, so I have no idea what I am going to say next.
Access to Records
As you have probably guessed by now, I enjoy learning and writing about history. (If I didn’t, you wouldn’t be reading this). There is just one tiny problem: history involves several old things. A shocker, I know! I think I should clarify the difference between history and archaeology. If I can recall my first week of university correctly, history uses written sources to analyse the past, whereas archaeology relies on physical objects, such as pottery. Does anyone else have the Indiana Jones music stuck in their head right now? Anyway, studying history requires reading several books. As is well known, books are kept in libraries. However, libraries (the physical buildings) are not always accessible. You cannot just stroll into the library and pick out a book. At least not without knocking a few shelves over in the process.
Also, have you ever tried opening and reading a book that is several hundred years old when you have poor hand function? It’s giving me heart palpitations just thinking about it. Luckily for me, scanners exist, so I can have a digital version of a book on my computer. However, this is not always a possibility. This is particularly the case when it comes to historical records. I know that accessing records is not a disability-specific problem. Some records are very old and therefore the institutions are understandably hesitant to let any Joe Bloggs wander in off the street and access them. The problem for me as a wheelchair user is that travelling around to visit various records isn’t that easy, and that is before actually trying to enter the building. I know that certain records are online and there are projects working on digitizing more, but it is a slow process. Basically, what I am trying to say in a very long-winded fashion is that almost everything I research for this blog is limited to what I can find online using [insert preferred search engine here].
Access to Historical Sites
As I am an avid lover of history, I could spend all day in museums and at historical sites. In fact, I often wish I could live in a place like the British Museum, London, permanently. I’m sure there is a spare sarcophagus I could sleep in. I also enjoy visiting historical, particularly ancient, sites. However, being a wheelchair user raises several issues. Obviously, the Romans did not build Pompeii with wheelchair users in mind. Therefore, only a small percentage of the (giant) city is wheelchair friendly. If I can remember correctly, they have added ramps etc. in certain areas. The disabled part of me (wait isn’t that all of me?) was jumping for joy at being able to access sites I had read about for years. However, the history loving part of me was deeply troubled.
You see, the people who look after these sites usually do not get enough funding. When you combine this with the amount of people who pay a visit every year, many sites will be completely destroyed by wear and tear in a matter of years. Trying to adapt these sites to make them wheelchair accessible is only making things worse. To clarify, I am thinking of ancient sites that are already ruins. I know it sounds like I’m saying wheelchair users shouldn’t visit ancient sites. That’s because that is exactly what I am saying. However, it is not just wheelchair users that shouldn’t visit ancient sites. I feel that in order to properly preserve ancient sites, they should be closed to the public. Instead, I think technologies such as virtual reality should be used to allow visitors to explore sites. I know that this will probably never happen, but one can always dream.
To keep up to date with my latest blog posts, you can like my Facebook page, or follow me on Twitter. You can find them by clicking the relevant icons in the sidebar of my blog on Blogger.
After that rant, it is probably best to stick with relaying information I have researched. Next week I may or may not start my series on the history of beliefs surrounding disability. I guess you will just have to read it to find out.
The Wheelchair Historian
|
https://medium.com/@wheelchairhistory/being-a-disabled-lover-of-history-fac9f817dc03
|
['The Wheelchair Historian']
|
2020-12-19 16:55:25.280000+00:00
|
['Research', 'Accessibility', 'Tourism', 'History', 'Disability']
|
Craving sweets?
|
Try this out
Photo by Brian Chan on Unsplash
Indulging our senses is a given during the holidays.
However, when the holidays are long gone the sweet tooth can linger on indefinitely.
There are many ways to resist these cravings, and they may or may not work over time. One that you may not have heard of is getting to know and understand how your microbiome plays a role in both creating the desire and reducing the desire.
A microbiome is a community of extremely microscopic lifeforms, or microorganisms, that ideally are peacefully coexisting within you. This community of trillions is in large part generated by DNA and unique to each of us.
These microorganisms can be helpful or harmful. A healthy gut will have both helpful ones and harmful ones and they coexist with no problem. The community is in harmony.
However, there are certain conditions that disrupt or bring back that harmony, and one of them is diet.
Choosing foods that have high fiber content feeds the anaerobic, or friendly, bacteria and keeps the harmful, unfriendly bacteria in check. These foods are sometimes referred to as prebiotics; some examples are apples, asparagus, dandelions, garlic, and onions.
When friendly bacteria have food, they create short-chain fatty acids that nourish the colon and support healthy digestion.
You may be wondering how all this relates to craving sugar?
When the intestinal flora is out of harmony or balance, the unfriendly bacteria can be triggering your cravings. These microorganisms are intelligent and send signals to your brain that they need food.
“There is a unique pathway that has coevolved between animals and the resident bacteria in their gut, and there is a bottom-up communication about diet,” — Jane Foster, neuroscientist at McMaster University in Ontario
Not only does our diet have a role to play in our microbiome, but so do our thoughts and feelings. What is sometimes referred to as a gut-brain axis influences and is influenced by our microbiome.
In other words, friendly bacteria produce hormones that make us feel good and happy, and unfriendly bacteria produce hormones that make us feel depressed or anxious.
However, it goes both ways. The same bacteria that produce feel-good hormones also benefit from feel-good hormones. So, as we feel good, we feed good bacteria. When we feel bad, we feed bad bacteria.
“It’s always a fitness game — who gets to be more of the population. The gut microbes that promote serotonin are the ones that benefit from it,” — Elaine Hsiao
So, not only what we are craving but what we are feeling can impact our microbiome.
Here are some ways you can help your microbiome regain its balance and reduce or remove those sugar cravings.
Eat prebiotics, which feed the friendly bacteria. This is basically fiber. Some great options are asparagus, leeks, garlic, or artichokes.
Take some deep calming breaths throughout the day, which signal to the nervous system that all is well, as anxiety can reduce diversity and good bacteria.
Add fermented foods to your diet to help keep the microbiome diverse. Things like sauerkraut, kimchi, yogurt, or tempeh are rich in probiotics.
Try adding some sour or bitter foods to your diet to balance out the sugar. You can try lemon water or bitter greens like arugula.
Consider taking a probiotic to supplement a fiber-rich diet.
Try using monk fruit or stevia when the sugar cravings kick in.
Eat more protein. Things like nuts, seeds, and beans provide a sustainable energy source that helps keep us satisfied.
Get adequate rest. When your sleep is reduced, your brain's reward system is strengthened, and this makes it harder to resist the cravings.
Once we realize that we are 1% human and 99% bacteria, we begin to explore more of this vast aspect of who we are and realize we can never truly be alone. We are part of this microbial community and our choices impact it.
“Balance is the key to everything. What we do, think, say, eat, feel, they all require awareness and through this awareness, we can grow.” -Koi Fresco
Becoming friends with your microbiome is a great way to love yourself in new and expanded ways.
|
https://medium.com/@pamblue/craving-sweets-5efb365c227e
|
['Pam Blue']
|
2021-01-04 17:36:47.557000+00:00
|
['Health', 'Holidays', 'Sweet', 'Desire', 'Balance']
|