Dataset columns: Unnamed: 0 (int64, 0 to 192k); title (string, 1 to 200 chars); text (string, 10 to 100k chars); url (string, 32 to 885 chars); authors (string, 2 to 392 chars); timestamp (string, 19 to 32 chars); tags (string, 6 to 263 chars); info (string, 45 to 90.4k chars).
3,900
Hilma af Klint and Yayoi Kusama: Tapping spirituality and visions into art
Hilma af Klint and Yayoi Kusama: Tapping spirituality and visions into art The lives, work, and legacy of these creatives who dabbled in transcendental expression Group IX/SUW №12 by Hilma af Klint Hilma af Klint Deemed the first modern artist of the Western World, Hilma af Klint was a Swedish creative who credited her abilities to a divine, spiritual authority. Her abstract paintings in the early 20th century reflected bold, imaginative aesthetics—carrying an essence the world had not yet seen. For this reason, she kept her work intensely private and only permitted its release twenty years after her passing: a collection totaling 1,200 paintings, 100 texts, and 26,000 pages of notes. Only in the three decades since 1986 have her paintings and writings begun to receive serious attention for their distinctive qualities.
https://uxdesign.cc/af-klint-and-kusama-tapping-spirituality-and-visions-into-art-c059e0960a75
['Michelle Chiu']
2020-12-14 00:46:43.851000+00:00
['Design', 'Culture', 'Creativity', 'Visual Design', 'Art']
3,901
Every Time Something Happens, Normal Changes Forever
Every Time Something Happens, Normal Changes Forever And we can never go back Do you remember when a loved one was flying to your city for a visit? You would drive to the airport and wait at the gate when they stepped off the plane. If you spent much time in an airport, you saw those joyous reunions at almost every gate. My wife used to travel a lot, and more often than not I would be the first person she saw when she left the jetway. Nevermore. Think about this. People born on September 12, 2001, will turn 19 years old this year. Grown adults in modern society. Probably frequent flyers. And they will never have a memory of 9/11. And the things that are inconveniences to us are normal to them. People who flew before 1970 remember a time with no scanning at all. The rash of skyjackings in the late ’60s changed that forever. Skyjacking. There’s a word most people under 50 aren’t familiar with. The point is, every time something abnormal happens, normal changes. Forever. The world has been through pandemics before. A lot of them. But this one is different. The way the world is reacting to it is different. And those reactions, not the pandemic itself, are what will change normal again. In what way? That remains to be seen. But some of the things we are doing now, right or wrong, will have a lasting effect on the world going forward. And some will be the new normal. Hoarding That’s a big one, and the most unusual. It’s unusual in that it was entirely unnecessary and senseless. No one needed to hoard anything, and the resulting shortages should never have occurred. But they did. And as a result, normal will shift a bit. Pantries will always be a bit more stocked than in the past. People won’t buy just one of anything. “Let’s get extra, just in case.” I think the shelf life on food labels will become more prominent. There will be an uptick in the sale of large freezers. And bidets. No one will ever have less than three cases of toilet paper stored in their homes. Ever. Social Distancing Will this ever completely go away? How long before people stop feeling a little uncomfortable going into someone else’s house? Or letting others into theirs. “Thanks for coming, it was great seeing you.” Door closes. “Get the sanitizer and wipe everything down!” How long before we stop being uneasy passing someone in the grocery store? Will we ever go to the store again without seeing someone wearing a mask and gloves? I think we will see changes in new and remodeled stores. Wider aisles. More touchless checkout. Plexiglas mounted between the cashier and customers. Will checkouts soon look like the windows at a pawnshop? Cash. It’s been going out of favor for decades. This may be the death knell. Carrying around and touching little pieces of paper that have been handled by thousands, if not millions, of people. Who does that? Masks Everyone will own a few. Managing them will become part of everyday life. “Are you doing laundry today? I want to get the masks done.” They will become fashion accessories. The bedazzled fad will make a comeback. Sites like Redbubble, Etsy, and eBay will see a new growth industry in designer masks. Some of this won’t be a bad thing. I think in places like doctor’s offices and hospitals, masks will become mandatory for everyone. I don’t know who makes disposable masks, but you should buy some of their stock. Now. Travel For the travel industry to recover from this, it will have to make big changes. Normal in travel will change again.
Somehow, cramming hordes of people together in small spaces has to end. I can see biometrics on the rise. We need to scan and identify people quickly without the bottleneck the current process causes. Just think: what if the ID process took 5 seconds instead of 20? I have no idea how airlines will react. Seats have gotten smaller and people closer together. We all breathe the same air for hours at a time. Air travel has always been dicey health-wise, so a new normal there is not a bad thing. And cruise ships? Personally, I see little change there. I have taken over 60 cruises, and the procedures on cruise ships have always exceeded those anywhere else. Sure, outbreaks occur. But with 4,000 passengers and crew coming together from all over the planet in small spaces, I think the results have been phenomenal. Washing and sanitizing your hands. That was old news ten years ago on cruise ships. The world changes. Constantly. What we consider normal today would have been considered bizarre by our grandparents. There will be babies born this year into the new normal. How will they view our world? “Mommy, that man touched his face.” What will that world look like?
https://medium.com/live-your-life-on-purpose/every-time-something-happens-normal-changes-forever-68a03773dabc
['Darryl Brooks']
2020-05-06 21:49:08.615000+00:00
['Life Lessons', 'Covid 19', 'Society', 'Health', 'Life']
3,902
How I Built a Full-Time Career as a Freelance Writer
Lesson Two: Find a Golden Ticket One of the biggest problems freelancers tend to have is increasing their rates. They secure a few clients, those clients become accustomed to paying x amount of money per article, and then the budding writer gets stuck. They don’t want to demand more money and lose a client, but they also don’t want to spend years writing for the same nominal sums. How, then, do you increase your rates once the money starts coming in? Well, as with any sale, if you’re expecting customers to pay big money, you’d better start demonstrating big value. You wouldn’t spend $10,000 on a fake diamond ring, so don’t expect your clients to pay triple your current rate if you still have next to no experience. It doesn’t matter how much value you claim to provide. Anybody can say they’re providing value. You have to be able to prove it. You wouldn’t pay thousands for something if you couldn’t ascertain its value, so don’t expect clients to pay you a lot of money just because you tell them you’ll provide high-quality content. In the world of writing, the proof isn’t in the pudding, but in the experience. Often, not even a degree in creative writing is enough to persuade a person to pay you. Trust me. I’ve employed many writers, and not once have I asked to see a degree. I ask where they’ve been published. In my experience, being published in reputable spaces has enabled me to ramp up my rates quickly. Last winter, I was charging a standard price of £0.10 per word. Fast-forward 12 months and I’m being paid $500 for 500 words — $1 per word. That’s an enormous jump for a year, and the only way I was able to provide it was by demonstrating value. Interestingly, the client paying me those rates approached me. I didn’t apply to work for them. They found me through my work. So what changed? Well, around a year ago, I was catching up with my mum at a local cafe over a hot mug of coffee when my phone lit up, displaying an email that seemed too good to be true. After reading the subject line, “An Invitation From Arianna Huffington’s Thrive Global,” my first thought was ‘surely this is spam’. Spoiler alert: it wasn’t. Getting published to Thrive was a huge deal for me. But most importantly, it was a golden ticket that allowed me to increase my rates. Adding to that, I had two articles of mine go semi-viral, attracting 50K views each. More recently, my publication, Mind Cafe, exceeded 100K followers and began reaching millions of monthly readers, as well as welcoming esteemed writers such as Nir Eyal, Benjamin Hardy, PhD, and Brianna Wiest to our roster. All of these things communicate one thing to my clients. That is, that I know what I’m doing. I stand out amongst the competition, and therefore they’re happy to pay more money for my work. If you want to charge more and get away from those peanut-paying clients, you need to find ways to make a name for yourself either by growing an audience or being published in a reputable space. Those are your golden tickets — your credentials. Every decent feature is like an extra dollar in your pocket where your freelancing rates are concerned. You’ll probably be rejected a few times, but that’s okay. So long as you’re taking the time to write truly engaging, high-quality content, somebody will publish you, and that somebody will become your golden ticket.
https://medium.com/the-post-grad-survival-guide/how-i-built-a-full-time-career-as-a-freelance-writer-3d66f5090773
['Adrian Drew']
2020-12-11 13:27:15.585000+00:00
['Creativity', 'Business', 'Work', 'Freelancing', 'Writing']
3,903
The Asymmetric Top: Tackling Rigid Body Dynamics
Even legends are perplexed by rigid bodies When we think of the hard topics in physics, quantum mechanics and general relativity spring to mind. Although those topics are incredibly complex and non-intuitive, I personally feel that the motion of an asymmetric top trumps both QM and GR in complexity, making it one of the hardest concepts to grasp in physics. In this article, we will explore how to analyse the motion of the asymmetric top, of course with some constraints to make our analysis tractable. Problem Setup We consider the free rotation of an asymmetric top, for which we let all three moments of inertia be different. However, for simplicity, we impose an ordering condition on the moments of inertia, taking I_1 < I_2 < I_3. Anyone who has dabbled in classical mechanics will be familiar with the Euler equations, a set of equations that allows us to analyse rotational systems. In the case of free rotation, in other words, when the net moments about all the axes are zero, the Euler equations reduce to a simple closed form (the key equations of this section are collected at the end of this passage). Our task will now be to find, in some way, closed-form solutions (or at least some simpler representation) of the angular velocities of the freely rotating top. If we find the angular velocities of the top, we will have a fully determined system, since we can find any other quantity from the angular velocities. Note that if the motion of the top included translation, then we would also need to know the linear velocities to fully determine the system. Approaching the solution We know from elementary physics that there are two main conservation theorems: total energy and momentum. In the case of rotation, momentum is replaced by angular momentum. Therefore, we can already produce two integrals of the equations of motion, where E and M are the total energy and the magnitude of the angular momentum respectively. We can actually write the first equation in terms of the components of angular momentum instead to make our analysis easier. We can already draw some conclusions about the relationship between the various angular velocities and moments of inertia. The energy conservation equation produces an ellipsoid with semi-axes sqrt(2EI_1), sqrt(2EI_2), sqrt(2EI_3), and the angular momentum conservation equation produces a sphere with radius M. So when the angular momentum vector M moves around in component space, it moves along the curves of intersection between the ellipsoid and the sphere. The following figure illustrates that (figure taken from Mechanics by Landau). If we want to be rigorous, we can prove that intersections between the ellipsoid and the sphere exist using the inequality 2EI_1 ≤ M² ≤ 2EI_3. We can prove this inequality by considering the ordering condition we imposed on the moments of inertia and the surface equations of the ellipsoid and sphere. Let us examine the intersection curves in a bit more detail. When M² is slightly bigger than 2EI_1, the intersection curve is a small ellipse near the axis x_1, and as M² tends to 2EI_1, the curve gets smaller until it shrinks to a point on the x_1 axis itself. When M² gets larger, the curve also expands, until M² equals 2EI_2, where the curves become plane ellipses that intersect at the poles of x_2. As M² increases past 2EI_2, the curves become closed curves around the x_3 poles. Conversely, when M² is slightly less than 2EI_3, the intersection curve is a small ellipse near the axis x_3, and as M² tends to 2EI_3, the curve shrinks to a point on x_3. We can also note a few things by looking at the nature of the curves.
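For reference, here is a sketch of the relations just described, written in the standard form used in Landau and Lifshitz's Mechanics (the source the figures are taken from), assuming the ordering I_1 < I_2 < I_3 adopted above. The Euler equations for free rotation are

\[
I_1\dot\Omega_1 = (I_2 - I_3)\,\Omega_2\Omega_3,\qquad
I_2\dot\Omega_2 = (I_3 - I_1)\,\Omega_3\Omega_1,\qquad
I_3\dot\Omega_3 = (I_1 - I_2)\,\Omega_1\Omega_2 ,
\]

and the two integrals of the motion, written in terms of the angular momentum components M_i = I_i\Omega_i, are

\[
\frac{M_1^2}{2EI_1} + \frac{M_2^2}{2EI_2} + \frac{M_3^2}{2EI_3} = 1
\qquad\text{and}\qquad
M_1^2 + M_2^2 + M_3^2 = M^2 ,
\]

i.e. an ellipsoid with semi-axes \(\sqrt{2EI_1}\), \(\sqrt{2EI_2}\), \(\sqrt{2EI_3}\) and a sphere of radius M. These two surfaces intersect only when

\[
2EI_1 \le M^2 \le 2EI_3 ,
\]

which is the inequality used above.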
Firstly, since all of the intersection curves are closed, there must be some sort of period to the precession/rotation. Secondly, if we look at the size of the curves near the axes, we get an interesting result. Near the axes x_1 and x_3, in other words, near the smallest and largest moments of inertia, the intersection curves are small and lie entirely in the neighborhood of the poles. This can be interpreted as stable precession about the axes of largest and smallest moment of inertia. However, near the axis x_2, the intermediate moment of inertia, the intersection curves are large and not confined to the neighborhood of the pole. This means that rotation around the intermediate axis is unstable to small deviations. This is consistent with the famous Tennis Racket Theorem (read more about it here), where it can be proved that perturbations of the motion about the intermediate axis are unstable. It is quite a remarkable way to prove the tennis racket theorem, purely graphically, with minimal mathematics. Analysing Angular Velocity Now that we have an idea of how the angular momentum and energy of the asymmetric top are interrelated, we can proceed and try to understand how the angular velocity evolves over the rotation of the top. We can first represent the angular velocities in terms of one another, and in terms of the constants of the motion, namely the energy and angular momentum. We can then substitute these two expressions into the Euler equation component for Omega_2. The resulting expression hints that the integral for the angular velocity will be some form of elliptic integral. Now we can add another condition to make our life easier, taking M² > 2EI_2, and suggest a change of variables to make the solution more tractable (the substitutions and the resulting closed-form solution are written out after this passage). Note that if the inequality is reversed, we can just interchange the indices 1 and 3 on the moments of inertia in the substitutions. It is also useful to define a positive parameter k² < 1. Finally, we get a familiar integral. Note that the origin of time is taken to be when Omega_2 is zero. This integral cannot be inverted in terms of elementary functions, but inverting it gives us the Jacobian elliptic function s = sn(tau). Now we can finally write our angular velocities as a ‘closed’ form solution. Obviously these are periodic functions, and their period is set by K, the complete elliptic integral of the first kind. Note that at time T, the angular velocity vector returns to its original position, but that does not mean that the asymmetric top itself returns to its original position. The solutions for the angular velocities might be elegant, but they do little to help us mortals understand the actual motion (maybe Landau was smart enough to visualise it). We can attempt to understand it by converting the angular velocities into equations involving the Euler angles instead. However, the mathematics required is quite long and tricky, so I have omitted it. A Simpler Case Since the asymmetric top doesn’t really allow us to intuitively understand how the Euler equations work and how to interpret the results, we turn to a simpler problem. The following problem was actually set in Landau’s Mechanics textbook: reduce to quadratures the problem of the motion of a heavy symmetrical top whose lowest point is fixed.
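For reference, here is a sketch of both results in the standard forms given in Landau and Lifshitz's Mechanics, which this passage follows. For the free asymmetric top, with I_1 < I_2 < I_3 and M² > 2EI_2 as assumed above, the angular velocities are

\[
\Omega_1 = \sqrt{\frac{2EI_3 - M^2}{I_1(I_3 - I_1)}}\,\operatorname{cn}\tau,\qquad
\Omega_2 = \sqrt{\frac{2EI_3 - M^2}{I_2(I_3 - I_2)}}\,\operatorname{sn}\tau,\qquad
\Omega_3 = \sqrt{\frac{M^2 - 2EI_1}{I_3(I_3 - I_1)}}\,\operatorname{dn}\tau,
\]

where

\[
\tau = t\sqrt{\frac{(I_3 - I_2)(M^2 - 2EI_1)}{I_1 I_2 I_3}},\qquad
k^2 = \frac{(I_2 - I_1)(2EI_3 - M^2)}{(I_3 - I_2)(M^2 - 2EI_1)} < 1,
\]

and the period of the angular velocities is

\[
T = 4K\sqrt{\frac{I_1 I_2 I_3}{(I_3 - I_2)(M^2 - 2EI_1)}},\qquad
K = \int_0^1 \frac{ds}{\sqrt{(1 - s^2)(1 - k^2 s^2)}} .
\]

For the heavy symmetric top with its lowest point fixed, writing \(\mu\) for the mass of the top, \(l\) for the distance from the fixed point to the centre of mass, and \(I_1' = I_1 + \mu l^2\) for the transverse moment of inertia about the fixed point (parallel-axis theorem), with this notation assumed here since the article leaves the symbols implicit, the Lagrangian and the reduction to quadratures walked through next are

\[
L = \frac{I_1'}{2}\left(\dot\theta^2 + \dot\phi^2\sin^2\theta\right)
  + \frac{I_3}{2}\left(\dot\psi + \dot\phi\cos\theta\right)^2
  - \mu g l\cos\theta,
\]
\[
p_\psi = I_3\left(\dot\psi + \dot\phi\cos\theta\right) = M_3,\qquad
p_\phi = \left(I_1'\sin^2\theta + I_3\cos^2\theta\right)\dot\phi + I_3\dot\psi\cos\theta = M_z,
\]
\[
E' \equiv E - \frac{M_3^2}{2I_3} - \mu g l
  = \frac{I_1'}{2}\dot\theta^2 + U_{\mathrm{eff}}(\theta),\qquad
U_{\mathrm{eff}}(\theta) = \frac{(M_z - M_3\cos\theta)^2}{2I_1'\sin^2\theta} - \mu g l\,(1 - \cos\theta),
\]
\[
t = \int \frac{d\theta}{\sqrt{\dfrac{2}{I_1'}\left[E' - U_{\mathrm{eff}}(\theta)\right]}},\qquad
\dot\phi = \frac{M_z - M_3\cos\theta}{I_1'\sin^2\theta}.
\]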
We can represent this using the diagram taken from Mechanics by Landau. We know the Lagrangian for this system (written out above). Since phi and psi are both cyclic coordinates, in other words, the Lagrangian does not depend on them explicitly, we can already write down two integrals of the equations of motion: the conserved momenta M_3 and M_z. We also know the total conserved energy. Using our two integrals of the equations of motion and substituting them into the energy conservation equation, we obtain an equation in theta alone. Note that the energy equation now resembles the sum of a kinetic term (with the moment of inertia dictated by the parallel axis theorem) and a potential energy term (now called the effective potential). We know from standard analysis that this can be reduced to a quadrature, and evaluating that integral will give us the solutions we seek for the various angles. Note that this is also an elliptic integral. We know that E’ must be greater than or equal to the effective potential. Also, the effective potential tends to infinity when theta is equal to either 0 or pi, and has a minimum between those points. So the equation E’ = U_eff must have two roots, which we denote theta_1 and theta_2. When theta changes from theta_1 to theta_2, the derivative of phi may or may not change sign, depending on whether the difference M_z − M_3cos(theta) changes sign. The different scenarios result in the following types of motion (figure taken from Mechanics by Landau). When the derivative of phi does not change direction, we get the scenario in 49a, and the oscillation in theta is known as nutation. Note that the curve shows the path of the axis of the top, while the center of the sphere marks the fixed point of the top. If the derivative of phi does change direction, we get 49b, where the axis briefly loops back in the phi direction. Lastly, if M_z − M_3cos(theta) vanishes at theta_1 or theta_2, then the derivatives of phi and theta vanish there together, resulting in the cusped motion of 49c. Conclusion Hopefully this article has given you some insight into how the mind-boggling field of rigid body dynamics works, in particular how asymmetric tops rotate in free space. Notice that sometimes looking at graphical representations of quantities can give us a lot of information without delving directly into the mathematics and getting stuck. Also, it sometimes helps to tackle simpler problems to understand how we can visualise the solutions, albeit for non-realistic scenarios.
https://medium.com/engineer-quant/the-asymmetric-top-tackling-rigid-body-dynamics-79f833567d22
['Vivek Palaniappan']
2019-09-23 14:28:14.786000+00:00
['Education', 'Physics', 'Science', 'Mathematics', 'Engineering']
3,904
What Every Developers & Programmers Need to Read in June 🔥🔥
What Every Developers & Programmers Need to Read in June 🔥🔥 By Keyul Hello Readers, Staying at home gives you extra time, and you have to put that time to good use. Spend as much of it as you can learning a new programming language, tech stack, or tool, building projects, and improving your overall skills. And keep reading the amazing posts written by our developers and programmers. These are the best blog posts we have picked for you this month. Photo via Unsplash Special thanks to our contributors Alyssa Atkinson, Raouf Makhlouf, Elvina Sh, Karthick Nagarajan, Tommaso De Ponti, and Madhuresh Gupta. If you like this, please forward this email to your friends, share the QuickCode publication, or tweet about it. Help us reach our 10K-member milestone. Thank you.
https://medium.com/quick-code/what-every-developers-programmers-need-to-read-in-june-cd7f2d7da4e6
[]
2020-06-10 04:39:52.248000+00:00
['Programming', 'Development', 'Coding', 'Software Development', 'Software Engineering']
3,905
The Novelist’s Guide to Abject Failure
1. Before you even start, make sure everyone in your life is perfectly satisfied. In order to write, you’re going to need a lot of time to yourself. Seriously, a LOT of time. And, God help you if anyone interrupts you, because you’ll never get back on track. You need great, big gobs of uninterrupted time. If you’re going to write, you’ve got to do this right. Once you decide you’re going to plant yourself at your desk — and it HAS to be a monstrous slab of wood, like the desk Stephen King describes in On Writing — you need to partake in the great NASA tradition of a Go/No-Go check, to make sure that everyone in your life is willing to let you retreat into your writing. You start calling people: Kids — Go or No Go? Spouse — Go or No Go? Work — Go or No Go? In laws — Go or No Go? And so on. If you’re really going to be a writer, the universe will respect your decision, and all of the people in your life will find ways to help themselves. That’s how you know it’s meant to be. 2. You must go into isolation. YOU WILL GO TO THE PAPER TOWNS AND YOU WILL NEVER GO BACK. — Ancient Writerly Proverb. Once everybody in your life has decided that it’s okay for you to write, you must isolate yourself. As the proverb says, you must go to the paper towns. And, since there are no towns actually made out of paper, that means going to the town of the things that make paper: trees. You have to go out and get yourself a cabin in the middle of the forest. Ideally, there should be no access road, no heat, no cable, no internet, no bed. Just a slab of a desk and something to sit on. And electricity, I guess, if you’re working on a computer. Also, you can once again take the Stephen King route and hire yourself a guardian. This should be somebody who loves you, who sees your creative potential, and doesn’t mind getting their hands dirty to keep you on task. An affinity for needles and axes is a bonus. It will be their job to make sure that you stay in perfect isolation, free of any and all physical distractions, and to encourage you with daily writing goals. Yet another bonus: by the time you are finished, you’ll be excited to see your friends and family again. Their impositions on your time won’t seem so infuriating. 3. Wait for the muse to arrive. You don’t want to piss off Zeus. Whatever you do, you definitely don’t want that. In fact, I recommend not crossing anyone in the Greek pantheon. One of the first rules you learn in your creative writing MFA is that you have to wait for your muse to arrive. She has to be there before you do anything. If you start writing without filing the appropriate paperwork and waiting for your muse’s golden stamp of approval, you’ll never write anything again. Just ask Harper Lee. Unfortunately, your muse is a finicky creature with her own timetable and agenda. She usually likes to show up when you’re on the job — or squatting behind a bush in the forest, as the case may be — and she expects you to be ready and waiting. Be ready, writer. 4. Compare yourself to J.K. Rowling J.K. Rowling is not just a beloved author, but the yardstick by which all authors should compare their career. This means that you should do some serious planning. Your first book can go out into your country quietly, but it should explode in the overseas market. By the time you finish your third book, you should be a household name. By the time you finish your fifth book, Hollywood should be knocking down your door. 
Your sixth book should be enough to purchase a majestic castle outside of Edinburgh, and your seventh should make you richer than the Queen. But don’t stop where J.K. Rowling did: if there had been ten Harry Potter books (and I’m not talking about that fanfiction stageplay), she’d be worth more than The Vatican. Keep in mind that none of this was due to luck or serendipity. J.K. Rowling had a plan. J.K. Rowling had a dream. J.K. Rowling kept her eyes on the prize. J.K. Rowling got shit done. It’s totally a formula. 5. Guard Your Ideas Ferociously Your story idea is special. It is precious. Nothing like it has ever existed in the world before. And everybody is out to steal it. Everybody. Every innocent-looking writer in every innocuous-looking writer’s group is going to steal your idea. Keep it secret, keep it safe. Your first step is to register it with your country’s copyright office. This usually has a hefty fee attached to it, but it is worth every penny. Every single penny. You need to build Fort Knox around your idea, and a copyright is on your side. It didn’t hurt Mr. Disney none, now did it? You should also make sure that you breathe nothing about your novel to anyone. Not your spouse, not your siblings, not your kids. They might be spies for other writers. They might be reporting on you. They might take your precious, precious gemstone of an idea for themselves. You only get so many ideas, you know. Once you’re out, you’re out.
https://zachjpayne.medium.com/the-novelists-guide-to-abject-failure-4d5429941687
['Zach J. Payne']
2019-06-28 20:05:33.299000+00:00
['Humor', 'Learning', 'Creativity', 'Art', 'Writing']
3,906
I Want to Travel Without Killing Polar Bears
Glacier National Park, where many glaciers are melting and more have already disappeared Recently I read this New York Times article that broke down global warming, and in particular how it is affected by aviation, into digestible information that alarmed me. Did you know that one passenger on a 2,500-mile flight is responsible for melting 32 square feet (or, for those who are metrically challenged: 3 square meters) of Arctic summer sea ice cover? Now we both do. This article popped up on my Facebook feed shortly after having a conversation with a friend about this exact topic (the creepiness of that deserves its own post — I digress). In the time that has passed since the two of us last saw each other, my friend has adopted a vegan lifestyle, forsaken his car, and has now decided that after one “last hurrah” flight to hike the Annapurna Circuit in Nepal next year, he is swearing off transport by plane for good — and he encouraged me to do the same. Basically for the reasons that I’ve just read about in this NYT piece, with the thumbnail image of a sorrowful cartoon polar bear perched on an ice floe as airplanes fly overhead. I was primed to read this article after last week’s conversation with my friend (following which I of course desperately googled the topic, hoping I would find a goldmine of information telling me that flying isn’t all that bad, really. I failed to find the goldmine.) Recently I purchased a one-way flight from Sri Lanka (where I’ve been living and working for the past year) to India. In just a couple of weeks, I’ll be beginning my 5 month tour of Asia (perhaps more on that later). Like many other privileged millennials, I love to see new places, meet interesting people, eat weird food. Get out of my comfort zone. But does my desire to see the world (and my western lifestyle in general) justify that, according to this study by John Nolt, “the average American causes through his/her greenhouse gas emissions the serious suffering and/or deaths of two future people”? A while ago, I read a BBC article explaining why brain biases prevent action on climate change issues. I can see the effect of many of these biases in my own life and my attitude towards world travel — particularly hyperbolic discounting, and the bystander effect. For example, I choose to believe that the present is more important than the future and that it is the job of governments and companies to take climate action, while it is my job to travel the world now, while I’m young (and before it’s all destroyed). I asked another friend his opinion on the topic. His point was essentially that we need better technologies to mitigate these issues, but one thing is certain — humans refuse to downgrade their quality of life. The rest of the world is catching up to the standards of the west, which is a problem because we aren’t practicing sustainable living. I watched a Ted Talk called “100 Solutions to Reverse Global Warming” by Chad Frischmann of Project Drawdown, which claims to be the world’s leading source of climate solutions. “Drawdown” refers to the point when greenhouse gasses in the atmosphere level off and then start to decline, thereby reducing global warming. Through its research, Project Drawdown has identified 100 solutions that will make drawdown possible, all of which exist today and can be fully utilized with technology. By proposing real, attainable solutions, they aim to change the negative narrative surrounding the topic of climate change into one of opportunity and hope. 
Out of all the solutions to reversing global warming laid out by Project Drawdown, aviation was not at the top of the list (it’s #43). So what tops the list? While diving into this website and clicking through many other links, I realized that I didn’t know too much about global warming, what’s causing it, and what can be done to prevent it. But as I read, I began to feel much more optimistic about these proposed solutions and saw how I could be putting many into practice in my own life. A plant-rich diet and reducing food waste? Of course I can do that! Turn off the AC? I live in the tropics, but still I know I can vastly cut down here. Supporting programs to educate girls and promote family planning? Quite easy to get involved. I liked and saved this Instagram post by Sophia Bush, an activist who I admire, also generally relating to this topic: BUT, I still feel a nagging sense of guilt when I think about the damage that the flight from Sri Lanka to India will incur, or the continuing negative impact that my 5 month tour of Asia could potentially cause if I’m not intentional about the way I travel. So, what’s my point in all of this? Honestly, I’m not sure. But with the wealth of knowledge at my fingertips, I can’t use the excuse of ignorance. If I’m going to travel and see the world, I need to make sustainable travel my top priority. What I’ve realized is there are many ways that I can adapt my lifestyle, without downgrading my quality of life, that will in turn help my planet and my fellow humans. But that first requires me to be aware, to care, to learn, and to make changes. As Aziz Ansari put it…
https://sarahngottshall.medium.com/i-want-to-travel-without-killing-polar-bears-f3c711a6c91
['Sarah Gottshall']
2019-08-23 14:59:30.122000+00:00
['Climate Change', 'Environment', 'Millennials', 'Sustainability', 'Travel']
3,907
Intelligent Visual Data Discovery with Lux — A Python library
EDA with Lux: Supporting a visual dataframe workflow (Image from the presentation, with permission from the author) df When we print out the data frame, we see the default pandas table display. We can toggle it to get a set of recommendations generated automatically by Lux. (Image by Author) The recommendations in Lux are organized into three different tabs, which represent potential next steps that users can take in their exploration. The Correlation Tab shows a set of pairwise relationships between quantitative attributes, ranked from the most correlated to the least correlated. (Image by Author) We can see that the penguin flipper length and body mass show a positive correlation. Penguins’ culmen length and depth also show an interesting pattern: there appears to be a negative correlation. To be specific, the culmen is the upper ridge of a bird’s bill. (Image by Author) The Distribution Tab shows a set of univariate distributions, ranked from the most skewed to the least skewed. (Image by Author) The Occurrence Tab shows a set of bar charts that can be generated from the data set. (Image by Author) This tab shows there are three different species of penguins — Adelie, Chinstrap, and Gentoo. There are also three different islands (Torgersen, Biscoe, and Dream), and both male and female penguins are included in the dataset. Intent-based recommendations Beyond the basic recommendations, we can also specify our analysis intent. Let's say that we want to find out how the culmen length varies with the species. We can set the intent here as ['culmen_length_mm', 'species']. When we print out the data frame again, we can see that the recommendations are steered toward what is relevant to the intent that we’ve specified. df.intent = ['culmen_length_mm','species'] df On the left-hand side in the image below, what we see is the Current Visualization corresponding to the attributes that we have selected. On the right-hand side, we have Enhance, i.e. what happens when we add an attribute to the current selection. We also have the Filter tab, which adds filters while keeping the selected attributes fixed. (Image by Author) If you look closely at the correlations within each species, culmen length and depth are positively correlated, even though they appear negatively correlated overall. This is a classic example of Simpson’s paradox. (Image by Author) Finally, you can get a pretty clear separation between all three species by looking at flipper length versus culmen length. (Image by Author) Exporting visualizations from the widget Lux also makes it pretty easy to export and share the generated visualizations. The visualizations can be exported into a static HTML file as follows: df.save_as_html('file.html') We can also access the set of recommendations generated for the data frame via the recommendation property. The output is a dictionary, keyed by the name of the recommendation category. df.recommendation (Image by Author) Exporting Visualizations as Code Not only can we export visualizations as HTML but also as code. The GIF below shows how you can view the first bar chart's code in the Occurrence tab. The visualizations can then be exported to Altair code for further edits or as a Vega-Lite specification. More details can be found in the documentation.
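To make the workflow described above concrete, here is a minimal sketch in Python of the same steps, intended for a Jupyter notebook. It assumes Lux and pandas are installed and that the Palmer Penguins data is available locally as penguins.csv; the file name and the exact column names are assumptions that mirror the article's examples.

import pandas as pd
import lux  # importing lux attaches the recommendation widget to pandas DataFrames

# Load the Palmer Penguins data (assumed local CSV with columns such as
# culmen_length_mm, culmen_depth_mm, flipper_length_mm, species, island).
df = pd.read_csv("penguins.csv")

# Displaying the dataframe in a notebook shows the usual pandas table plus a
# toggle for Lux's Correlation / Distribution / Occurrence recommendation tabs.
df

# Steer the recommendations toward a specific question:
# how does culmen length vary across species?
df.intent = ["culmen_length_mm", "species"]
df  # now shows the Current Visualization plus the Enhance and Filter tabs

# Export and share what Lux generated.
df.save_as_html("file.html")  # static HTML page with the generated charts
recs = df.recommendation      # dict keyed by recommendation category,
                              # e.g. recs.get("Enhance") if that tab was produced

Exporting an individual chart as Altair or Vega-Lite code is done from the widget itself, as described in the closing paragraph above.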
https://towardsdatascience.com/intelligent-visual-data-discovery-with-lux-a-python-library-dc36a5742b2f
['Parul Pandey']
2020-12-22 10:26:45.962000+00:00
['Exploratory Data Analysis', 'Python', 'Data Visualization', 'Data Analysis', 'Lux']
3,908
12 Ways the World Will Change When Everyone Works Remotely
Workplace studies in 2019 have reached a common conclusion — remote work is here to stay. Once people try working remotely, up to 99% want to continue, while 95% would recommend the practice to others. But that’s not all. A Zapier survey revealed that 74% of workers would quit their jobs for the ability to work from anywhere. Two in three believe that the traditional workplace will be obsolete within the next decade. Source: Buffer State of Remote Work Report They’re right. According to the U.S. Census Bureau, the number of people working remotely has been rising for the past ten years. Meanwhile, UpWork projects that the majority of the workforce will be freelancing as soon as 2027. Globally, one billion people could be working in a remote capacity by 2035. Whether people become remote employees, online entrepreneurs, freelancers, or other gig workers — one thing’s for sure — life will be nothing like the current 9–5. The world will change and reflect this new reality.
https://medium.com/swlh/12-ways-the-world-will-change-when-everyone-works-remotely-cb8927ef1853
['Kristin Wilson']
2020-04-16 18:46:39.615000+00:00
['Work', 'Freelancing', 'Business', 'Startup', 'Future']
Title 12 Ways World Change Everyone Works RemotelyContent Workplace study 2019 reached common conclusion — remote work stay people try working remotely 99 want continue 95 would recommend practice others that’s Zapier survey revealed 74 worker would quit job ability work anywhere Two three believe traditional workplace obsolete within next decade Source Buffer State Remote Work Report They’re right According US Census Bureau number people working remotely rising past ten year Meanwhile UpWork project majority workforce freelancing soon 2027 Globally one billion people could working remote capacity 2035 Whether people become remote employee online entrepreneur freelancer gig worker — one thing’s sure — life nothing like current 9–5 world change reflect new realityTags Work Freelancing Business Startup Future
3,909
2020 Was the Turning Point for CRISPR
2020 Was the Turning Point for CRISPR Scientists took huge strides toward using the gene-editing tool for medical treatments Photo: Yuichiro Chino/Getty Images Amid a raging global pandemic, the field of gene editing made major strides in 2020. For years, scientists have been breathlessly hopeful about the potential of the gene-editing tool CRISPR to transform medicine. In 2020, some of CRISPR’s first real achievements finally came to light — and two of CRISPR’s inventors won the Nobel Prize. The idea behind CRISPR-based medicine sounds simple: By tweaking a disease-causing gene, a disease could be treated at its source — and possibly even cured. The other allure of gene editing for medical reasons is its permanence. Instead of a lifetime of drugs, patients with rare and chronic diseases like muscular dystrophy or cystic fibrosis could instead get a one-time treatment that could have benefits for life. This idea has proven difficult to realize. For one, scientists have to figure out how to get the gene-editing molecules to the right cells in the body. Once there, the molecules need to modify enough cells in order to have an impact on the disease. Both of these things need to happen without causing unpleasant or toxic side effects that would make the treatment too risky. With advances in CRISPR technology, scientists showed this year that we’re getting closer to gene-editing cures. “2020 is the year that we have definitive proof that we are headed to a future where we as a species will genetically engineer human beings for purposes of treating disease or preventing them from developing disease,” Fyodor Urnov, PhD, a gene-editing expert and professor of molecular and cell biology at the University of California, Berkeley, tells Future Human. Here’s why 2020 was such a milestone year for CRISPR. CRISPR has eliminated symptoms of genetic blood diseases in patients In July 2019, scientists at Vertex Pharmaceuticals of Boston and CRISPR Therapeutics of Cambridge, Massachusetts used a groundbreaking approach to treat a woman with sickle cell disease, an inherited blood disorder that affects 100,000 Americans — most of them Black — and often leads to early death. They first removed the blood-producing stem cells from her bone marrow. Using CRISPR, they edited her diseased cells in the lab. They then infused the modified cells back into her bloodstream. The idea is that the edited cells will travel back to the bone marrow and start producing healthy blood cells. The complex procedure requires spending several weeks in a hospital. NPR followed the story of the first patient, named Victoria Gray. On December 5 of this year, the companies reported in the New England Journal of Medicine that the treatment has relieved Gray from the debilitating episodes known as “pain crises” that are typical of sickle cell. Another person with a related inherited blood disorder called beta-thalassemia is also symptom-free more than a year after receiving CRISPR-edited cells. Beta-thalassemia patients require blood transfusions every few weeks, but the person who received the CRISPR treatment hasn’t needed a single blood transfusion since. “This is a really dramatic change in the quality of life for these patients,” says Giuseppe Ciaramella, PhD, president and chief scientific officer of Beam Therapeutics, a gene-editing startup based in Cambridge, Massachusetts. Vertex and CRISPR Therapeutics have now treated a total of 19 patients with sickle cell disease or beta-thalassemia. 
Both diseases arise from mutations in the HBB gene, which makes an important blood protein called hemoglobin. In sickle cell patients, faulty hemoglobin distorts red blood cells, causing them to stick together and clog the blood vessels. In beta-thalassemia, meanwhile, the body doesn’t make enough hemoglobin. Scientists made a single genetic edit to patients’ cells to switch on the production of a similar protein, called fetal hemoglobin, which can compensate for the diseased or missing hemoglobin. CRISPR Therapeutics’ CEO Samarth Kulkarni, PhD, has said the treatment has the potential to be curative for people with these disorders. CRISPR was used to edit genes inside a person for the first time At an Oregon hospital in March, a patient with a type of inherited blindness became the first to receive a gene-editing injection directly into their eye. It was the first time CRISPR was used in an attempt to edit a gene inside someone’s body. A second person this year also received the experimental treatment, which is designed to snip out a genetic mutation responsible for their severe visual impairment. “In other words, this is a transition from CRISPR in hospital wards to CRISPR in a syringe,” Urnov says. Editas Medicine, the Cambridge, Massachusetts-based company behind the treatment, has yet to release any data showing how well the injection is working. In an email to Future Human, the company said the first patient’s vision remains stable. The two people treated so far have very low vision and are receiving a low dose of the CRISPR therapy. “It is unknown if these patients’ visual pathways are intact,” a company spokesperson tells Future Human. “Even if editing occurs as predicted, if the visual pathways are not intact, their vision would not improve.” The company is testing the therapy on people with a type of progressive vision loss called Leber congenital amaurosis, which often begins early in life. Editas scientists need to make sure the injection is safe and doesn’t cause any side effects before testing a higher dose in people who may have a better chance of vision correction. The company plans to inject the gene-editing treatment in up to 18 adults and eventually wants to treat younger patients, who are most likely to still have functioning visual pathways. In another trial, a person with a rare disease called transthyretin amyloidosis received an IV infusion of CRISPR in November. The disease causes abnormal deposits of protein in organs, which leads to a loss of sensation in the extremities and voluntary control of bodily functions. Developed by another Cambridge, Massachusetts biotech firm, Intellia Therapeutics, the treatment is also meant to edit a person’s genes inside the body. The trial is just getting underway in the U.K., where the company plans to enroll up to 28 patients. CRISPR got more precise Despite its versatility, CRISPR is still error-prone. For the past few years, scientists have been working on more precise versions of CRISPR that are potentially safer than the original. This year, they made notable progress in advancing these new versions to human patients. One downside of traditional CRISPR is that it breaks DNA’s double helix structure in order to delete or edit a gene. When the DNA repairs itself, it doesn’t do so perfectly, and some of the DNA letters around the edited gene get scrambled. In a newer form of CRISPR called base editing, the aim is to simply swap out one DNA letter for another rather than breaking DNA, explains Ciaramella. 
In one key test, a base-editing treatment delivered via a single injection successfully lowered LDL, or “bad,” cholesterol in 14 monkeys. The treatment acts on two genes found in the liver that help regulate cholesterol and fat. The Massachusetts company that developed the injection, Verve Therapeutics, announced the findings at a virtual meeting in June. Ciaramella’s company, Beam Therapeutics, which is also pursuing base editing, presented lab and mouse data at the American Society of Hematology annual meeting in December to support the safety of the approach for sickle cell disease. The company said it hopes to begin a clinical trial next year. Scribe Therapeutics of Alameda, California, is using yet another form of CRISPR dubbed X-editing. The company’s CEO, Ben Oakes, tells Future Human that X-editing is designed to be safer than classic CRISPR. It uses a smaller protein that more efficiently and precisely snips DNA. Co-founded by CRISPR pioneer and Nobel winner Jennifer Doudna, PhD, the company will use the new gene-editing tool to develop treatments for neurological diseases. “As cool and exciting as this technology is, it has to go in someone’s body,” Oakes says. “And it’s critical that we get it right.”
https://futurehuman.medium.com/2020-was-the-turning-point-for-crispr-5a66cb44ad0a
['Emily Mullin']
2020-12-18 20:47:45.742000+00:00
['CRISPR', 'Technology', 'Science', 'Biotech', 'Future']
Title 2020 Turning Point CRISPRContent 2020 Turning Point CRISPR Scientists took huge stride toward using geneediting tool medical treatment Photo Yuichiro ChinoGetty Images Amid raging global pandemic field gene editing made major stride 2020 year scientist breathlessly hopeful potential geneediting tool CRISPR transform medicine 2020 CRISPR’s first real achievement finally came light — two CRISPR’s inventor Nobel Prize idea behind CRISPRbased medicine sound simple tweaking diseasecausing gene disease could treated source — possibly even cured allure gene editing medical reason permanence Instead lifetime drug patient rare chronic disease like muscular dystrophy cystic fibrosis could instead get onetime treatment could benefit life idea proven difficult realize one scientist figure get geneediting molecule right cell body molecule need modify enough cell order impact disease thing need happen without causing unpleasant toxic side effect would make treatment risky advance CRISPR technology scientist showed year we’re getting closer geneediting cure “2020 year definitive proof headed future specie genetically engineer human being purpose treating disease preventing developing disease” Fyodor Urnov PhD geneediting expert professor molecular cell biology University California Berkeley tell Future Human Here’s 2020 milestone year CRISPR CRISPR eliminated symptom genetic blood disease patient July 2019 scientist Vertex Pharmaceuticals Boston CRISPR Therapeutics Cambridge Massachusetts used groundbreaking approach treat woman sickle cell disease inherited blood disorder affect 100000 Americans — Black — often lead early death first removed bloodproducing stem cell bone marrow Using CRISPR edited diseased cell lab infused modified cell back bloodstream idea edited cell travel back bone marrow start producing healthy blood cell complex procedure requires spending several week hospital NPR followed story first patient named Victoria Gray December 5 year company reported New England Journal Medicine treatment relieved Gray debilitating episode known “pain crises” typical sickle cell Another person related inherited blood disorder called betathalassemia also symptomfree year receiving CRISPRedited cell Betathalassemia patient require blood transfusion every week person received CRISPR treatment hasn’t needed single blood transfusion since “This really dramatic change quality life patients” say Giuseppe Ciaramella PhD president chief scientific officer Beam Therapeutics geneediting startup based Cambridge Massachusetts Vertex CRISPR Therapeutics treated total 19 patient sickle cell disease betathalassemia disease arise mutation HBB gene make important blood protein called hemoglobin sickle cell patient faulty hemoglobin distorts red blood cell causing stick together clog blood vessel betathalassemia meanwhile body doesn’t make enough hemoglobin Scientists made single genetic edit patients’ cell switch production similar protein called fetal hemoglobin compensate diseased missing hemoglobin CRISPR Therapeutics’ CEO Samarth Kulkarni PhD said treatment potential curative people disorder CRISPR used edit gene inside person first time Oregon hospital March patient type inherited blindness became first receive geneediting injection directly eye first time CRISPR used attempt edit gene inside someone’s body second person year also received experimental treatment designed snip genetic mutation responsible severe visual impairment “In word transition CRISPR hospital ward CRISPR syringe” Urnov say Editas 
Medicine Cambridge Massachusettsbased company behind treatment yet release data showing well injection working email Future Human company said first patient’s vision remains stable two people treated far low vision receiving low dose CRISPR therapy “It unknown patients’ visual pathway intact” company spokesperson tell Future Human “Even editing occurs predicted visual pathway intact vision would improve” company testing therapy people type progressive vision loss called Leber congenital amaurosis often begin early life Editas scientist need make sure injection safe doesn’t cause side effect testing higher dose people may better chance vision correction company plan inject geneediting treatment 18 adult eventually want treat younger patient likely still functioning visual pathway another trial person rare disease called transthyretin amyloidosis received IV infusion CRISPR November disease cause abnormal deposit protein organ lead loss sensation extremity voluntary control bodily function Developed another Cambridge Massachusetts biotech firm Intellia Therapeutics treatment also meant edit person’s gene inside body trial getting underway UK company plan enroll 28 patient CRISPR got precise Despite versatility CRISPR still errorprone past year scientist working precise version CRISPR potentially safer original year made notable progress advancing new version human patient One downside traditional CRISPR break DNA’s double helix structure order delete edit gene DNA repair doesn’t perfectly DNA letter around edited gene get scrambled newer form CRISPR called base editing aim simply swap one DNA letter another rather breaking DNA explains Ciaramella one key test base editing delivered via single injection successfully lowered LDL “bad” cholesterol 14 monkey treatment act two gene found liver help regulate cholesterol fat Massachusetts company developed injection Verve Therapeutics announced finding virtual meeting June Ciaramella’s company Beam Therapeutics also pursuing base editing presented lab mouse data American Society Hematology annual meeting December support safety approach sickle cell disease company said hope begin clinical trial next year Scribe Therapeutics Alameda California using yet another form CRISPR dubbed Xediting company’s CEO Ben Oakes tell Future Human Xediting designed safer classic CRISPR us smaller protein efficiently precisely snip DNA Cofounded CRISPR pioneer Nobel winner Jennifer Doudna PhD company use new geneediting tool develop treatment neurological disease “As cool exciting technology go someone’s body” Oakes say “And it’s critical get right”Tags CRISPR Technology Science Biotech Future
3,910
The Privileged Have Entered Their Escape Pods
Now, pandemics don’t necessarily bring out our best instincts either. No matter how many mutual aid networks, school committees, food pantries, race protests, or fundraising efforts in which we participate, I feel as if many of those privileged enough to do so are still making a less public, internal calculation: How much are we allowed to use our wealth and our technologies to insulate ourselves and our families from the rest of the world? And, like a devil on our shoulder, our technology is telling us to go it alone. After all, it’s an iPad, not an usPad. The more advanced the tech, the more cocooned insularity it affords. “I finally caved and got the Oculus,” one of my best friends messaged me on Signal the other night. “Considering how little is available to do out in the real world, this is gonna be a game-changer.” Indeed, his hermetically sealed, Covid-19-inspired techno-paradise was now complete. Between VR, Amazon, FreshDirect, Netflix, and a sustainable income doing crypto trading, he was going to ride out the pandemic in style. Yet while VRporn.com is certainly a safer sexual strategy in the age of Covid-19 than meeting up with partners through Tinder, every choice to isolate and insulate has its correspondingly negative impact on others. The pool for my daughter wouldn’t have gotten here were it not for legions of Amazon workers behind the scenes, getting infected in warehouses or risking their health driving delivery trucks all summer. As with FreshDirect or Instacart, the externalized harm to people and places is kept out of sight. These apps are designed to be addictively fast and self-contained — push-button access to stuff that can be left at the front door without any human contact. The delivery people don’t even ring the bell; a photo of the package on the stoop automagically arrives in the inbox. Like with Thomas Jefferson’s ingenious dumbwaiter, there are no signs of the human labor that brought it. Many of us once swore off Amazon after learning of the way it evades taxes, engages in anti-competitive practices, or abuses labor. But here we are, reluctantly re-upping our Prime delivery memberships to get the cables, webcams, and Bluetooth headsets we need to attend the Zoom meetings that now constitute our own work. Others are reactivating their long-forgotten Facebook accounts to connect with friends, all sharing highly curated depictions of their newfound appreciation for nature, sunsets, and family. And as we do, many of us are lulled further into digital isolation — being rewarded the more we accept the logic of the fully wired home, cut off from the rest of the world. And so the New York Times is busy running photo spreads of wealthy families “retreating” to their summer homes — second residences worth well more than most of our primary ones — and stories about their successes working remotely from the beach or retrofitting extra bedrooms as offices. “It’s been great here,” one venture fund founder explained. “If I didn’t know there was absolute chaos in the world … I could do this forever.” But what if we don’t have to know about the chaos in the world? That’s the real promise of digital technology. We can choose which cable news, Twitter feeds, and YouTube channels to stream — the ones that acknowledge the virus and its impacts or the ones that don’t. We can choose to continue wrestling with the civic challenges of the moment, such as whether to send kids back to school full-time, hybrid, or remotely. 
Or — like some of the wealthiest people in my own town — we can form private “pods,” hire tutors, and offer our kids the kind of customized, elite education we could never justify otherwise. “Yes, we are in a pandemic,” one pod education provider explained to the Times. “But when it comes to education, we also feel some good may even come out of this.” I get it. And if I had younger children and could afford these things, I might even be tempted to avail myself of them. But all of these “solutions” favor those who have already accepted the promise of digital technology to provide what the real world has failed to do. Day traders, for instance, had already discovered the power of the internet to let them earn incomes safely from home using nothing but a laptop and some capital. Under the pandemic, more people are opening up online trading accounts than ever, hoping to participate in the video game version of the marketplace. Meanwhile, some of the world’s most successful social media posses are moving into luxurious “hype houses” in Los Angeles and Hawaii, where they can livestream their lifestyles, exercise routines, and sex advice — as well as the products of their sponsors — to their millions of followers. And maybe it’s these young social media enthusiasts, thriving more than ever under pandemic conditions, who most explicitly embody the original promise of digital technology to provide for our every need. I remember back around 1990, when psychedelics philosopher Timothy Leary first read Stewart Brand’s book The Media Lab, about the new digital technology center MIT had created in its architecture department. Leary devoured the book cover to cover over the course of one long day. Around sunset, just as he was finishing, he threw it across the living room in disgust. “Look at the index,” he said, “of all the names, less than 3% are women. That’ll tell you something.” He went on to explain his core problem with the Media Lab and the digital universe these technology pioneers were envisioning: “They want to recreate the womb.” As Leary the psychologist saw it, the boys building our digital future were developing technology to simulate the ideal woman — the one their mothers could never be. Unlike their human mothers, a predictive algorithm could anticipate their every need in advance and deliver it directly, removing every trace of friction and longing. These guys would be able to float in their virtual bubbles — what the Media Lab called “artificial ecology” — and never have to face the messy, harsh reality demanded of people living in a real world with women and people of color and even those with differing views. For there’s the real rub with digital isolation — the problem those billionaires identified when we were gaming out their bunker strategies. The people and things we’d be leaving behind are still out there. And the more we ask them to service our bubbles, the more oppressed and angry they’re going to get. No, no matter how far Ray Kurzweil gets with his artificial intelligence project at Google, we cannot simply rise from the chrysalis of matter as pure consciousness. There’s no Dropbox plan that will let us upload body and soul to the cloud. We are still here on the ground, with the same people and on the same planet we are being encouraged to leave behind. There’s no escape from the others. Not that people aren’t trying. The ultimate digital escape fantasy would require some seriously perverse enforcement of privilege. 
Anything to prevent the unwashed masses — the folks working in the meat processing plants, Amazon warehouses, UPS trucks, or not at all — from violating the sacred bounds of our virtual amnionic sacs. Sure, we can replace the factory workers with robots and the delivery people with drones, but then they’ll have even less at stake in maintaining our digital retreats. I can’t help but see the dismantling of the Post Office as a last-ditch attempt to keep the majority from piercing the bubbles of digital privilege through something as simple as voting. Climb to safety and then pull the ladder up after ourselves. No more voting, no more subsidized delivery of alternative journalism (that was the original constitutional purpose for a fully funded post office). So much the better for the algorithms streaming us the picture of the world we want to see, uncorrupted by imagery of what’s really happening out there. (And if it does come through, just swipe left, and the algorithms will know never to interrupt your dream state with such real news again.) No, of course we’ll never get there. Climate, poverty, disease, and famine don’t respect the “guardian boundary” play space defined by the Oculus VR’s user preferences. Just as the billionaires can never, ever truly leave humanity behind, none of us can climb back into the womb. When times are hard, sure, take what peace and comfort you can afford. Use whatever tech you can get your hands on to make your kid’s online education work a bit better. Enjoy the glut of streaming media left over from the heyday of the Netflix-Amazon-HBO wars. But don’t let this passing — yes, passing — crisis fool you into buying technology’s false promise of escaping from humanity to play video games alone in perpetuity. Our Covid-19 isolation is giving us a rare opportunity to see where this road takes us and to choose to use our technologies to take a very different one.
https://onezero.medium.com/the-privileged-have-entered-their-escape-pods-4706b4893af7
['Douglas Rushkoff']
2020-09-03 00:18:18.428000+00:00
['Society', 'Privilege', 'Digital', 'Technology', 'Future']
Title Privileged Entered Escape PodsContent pandemic don’t necessarily bring best instinct either matter many mutual aid network school committee food pantry race protest fundraising effort participate feel many privileged enough still making le public internal calculation much allowed use wealth technology insulate family rest world like devil shoulder technology telling u go alone it’s iPad usPad advanced tech cocooned insularity affords “I finally caved got Oculus” one best friend messaged Signal night “Considering little available real world gonna gamechanger” Indeed hermetically sealed Covid19inspired technoparadise complete VR Amazon FreshDirect Netflix sustainable income crypto trading going ride pandemic style Yet VRporncom certainly safer sexual strategy age Covid19 meeting partner Tinder every choice isolate insulate correspondingly negative impact others pool daughter wouldn’t gotten legion Amazon worker behind scene getting infected warehouse risking health driving delivery truck summer FreshDirect Instacart externalized harm people place kept sight apps designed addictively fast selfcontained — pushbutton access stuff left front door without human contact delivery people don’t even ring bell photo package stoop automagically arrives inbox Like Thomas Jefferson’s ingenious dumbwaiter sign human labor brought Many u swore Amazon learning way evades tax engages anticompetitive practice abuse labor reluctantly reupping Prime delivery membership get cable webcam Bluetooth headset need attend Zoom meeting constitute work Others reactivating longforgotten Facebook account connect friend sharing highly curated depiction newfound appreciation nature sunset family many u lulled digital isolation — rewarded accept logic fully wired home cut rest world New York Times busy running photo spread wealthy family “retreating” summer home — second residence worth well primary one — story success working remotely beach retrofitting extra bedroom office “It’s great here” one venture fund founder explained “If didn’t know absolute chaos world … could forever” don’t know chaos world That’s real promise digital technology choose cable news Twitter feed YouTube channel stream — one acknowledge virus impact one don’t choose continue wrestling civic challenge moment whether send kid back school fulltime hybrid remotely — like wealthiest people town — form private “pods” hire tutor offer kid kind customized elite education could never justify otherwise “Yes pandemic” one pod education provider explained Times “But come education also feel good may even come this” get younger child could afford thing might even tempted avail “solutions” favor already accepted promise digital technology provide real world failed Day trader instance already discovered power internet let earn income safely home using nothing laptop capital pandemic people opening online trading account ever hoping participate video game version marketplace Meanwhile world’s successful social medium posse moving luxurious “hype houses” Los Angeles Hawaii livestream lifestyle exercise routine sex advice — well product sponsor — million follower maybe it’s young social medium enthusiast thriving ever pandemic condition explicitly embody original promise digital technology provide every need remember back around 1990 psychedelics philosopher Timothy Leary first read Stewart Brand’s book Media Lab new digital technology center MIT created architecture department Leary devoured book cover cover course one long day Around sunset finishing threw 
across living room disgust “Look index” said “of name le 3 woman That’ll tell something” went explain core problem Media Lab digital universe technology pioneer envisioning “They want recreate womb” Leary psychologist saw boy building digital future developing technology simulate ideal woman — one mother could never Unlike human mother predictive algorithm could anticipate every need advance deliver directly removing every trace friction longing guy would able float virtual bubble — Media Lab called “artificial ecology” — never face messy harsh reality demanded people living real world woman people color even differing view there’s real rub digital isolation — problem billionaire identified gaming bunker strategy people thing we’d leaving behind still ask service bubble oppressed angry they’re going get matter far Ray Kurzweil get artificial intelligence project Google cannot simply rise chrysalis matter pure consciousness There’s Dropbox plan let u upload body soul cloud still ground people planet encouraged leave behind There’s escape others people aren’t trying ultimate digital escape fantasy would require seriously perverse enforcement privilege Anything prevent unwashed mass — folk working meat processing plant Amazon warehouse UPS truck — violating sacred bound virtual amnionic sac Sure replace factory worker robot delivery people drone they’ll even le stake maintaining digital retreat Unlike human mother predictive algorithm could anticipate every need advance deliver directly removing every trace friction longing can’t help see dismantling Post Office lastditch attempt keep majority piercing bubble digital privilege something simple voting Climb safety pull ladder voting subsidized delivery alternative journalism original constitutional purpose fully funded post office much better algorithm streaming u picture world want see uncorrupted imagery what’s really happening come swipe left algorithm know never interrupt dream state real news course we’ll never get Climate poverty disease famine don’t respect “guardian boundary” play space defined Oculus VR’s user preference billionaire never ever truly leave humanity behind none u climb back womb time hard sure take peace comfort afford Use whatever tech get hand make kid’s online education work bit better Enjoy glut streaming medium left heyday NetflixAmazonHBO war don’t let passing — yes passing — crisis fool buying technology’s false promise escaping humanity play video game alone perpetuity Covid19 isolation giving u rare opportunity see road take u choose use technology take different oneTags Society Privilege Digital Technology Future
3,911
Lessons learnt from building reactive microservices for Canva Live
Lessons learnt from building reactive microservices for Canva Live Behind the scenes on our mission to drive the next era of presentation software. Presentations are one of the most popular formats on Canva, with everyone from small businesses, to students and professionals creating stunning slide decks — averaging up to 4 new designs per second. But to truly drive the next era of presentation software, we’re empowering our community with live, interactive features. In this blog, Canva Engineer Ashwanth Fernando shares how the team launched a real-time experience through a hybrid-streaming backend solution to power the Canva Live presentation feature. With over 4 million people creating a new presentation on Canva each month, it’s no surprise this doctype consistently ranks as one of the fastest growing on our platform. But other than delighting our community with professional-looking presentation templates, we’re always on the lookout for new ways to demonstrate the magic of Canva. Throughout our research, it was clear that people invest time into creating a beautiful slideshow for one simple reason: every presenter wants maximum engagement. That’s why we’re seeing less text, and more photos, illustrations, and animations. To take engagement to the next level, we challenged ourselves to introduce real-time interactions, that allow every presenter to communicate with their audience easily, effectively and instantaneously. This is how Canva Live for Presentations came to be. What’s Canva Live? Canva Live is a patent-pending presentation feature that lets audiences ask live questions via a unique url and passcode on their mobile device. Submitted questions can then be read by presenters in a digestible interface, to support fluid audience interaction. For a visual demonstration of this, you can view the below video: As demonstrated, the audience’s questions appear in real-time on the presenter’s screen, without the page having to refresh. The traditional way to achieve this would be to poll the server at regular intervals — but the overhead of establishing a Secure Sockets Layer (SSL) connection for every poll would cause inefficiencies, potentially impacting reliability and scalability. Hence, a near real-time (NRT) experience is essential for Canva Live, while offering maximum reliability, resilience and efficiency. We achieved this with a novel approach to reactive microservices. Creating a Real-Time Experience through a Hybrid-Streaming Backend Solution Canva Live works over a hybrid streaming system. We transmit questions, question deletions, and audience count updates — from the Remote Procedure Call Servers (RPC) to the presenter UI — via a WebSockets channel. As WebSockets constantly remain open, this means the server and client can communicate at any time, making it the ideal choice for displaying real-time updates. As the number of connections between audience members and the RPC Fleet must scale in line with audience participants, we use more traditional request/response APIs for this. These connections are only required to transfer data at specific moments (eg. when a member submits a question), and multiple instances of an always-open WebSocket channel would use unnecessary compute resources. For clarity, we have created a diagram of the technology stack below (Fig. 1). The presenter and audience connect to a gateway cluster of servers, which manages all ingress traffic to our microservice RPC backend fleet. 
The gateway manages factors such as authentication, security, request context, connection pooling and rate limiting to the backend RPCs. The Canva Live RPC fleet is an auto-scaling group of compute nodes, which in turn talk to a Redis backend (AWS Elasticache with cluster mode enabled). Though the diagram looks very similar to a traditional n-tier deployment topology, we found an incredible variety of differences when building scalable streaming services. Below, we explain how potential landmines were avoided, and the valuable lessons we learnt in building Canva Live. Consider Building For Scale From Day 1 As software engineers, we all love the ability to build incrementally, then increase industrial strength and robustness based on traffic — especially when building a new product. For Canva Live, we had to prepare for wide-scale usage from day one, with its start button being baked into one of our most popular pages. Redis Scalability Our Redis database is deployed with cluster mode enabled and has a number of shards, each with a primary and a replica. The replicas are eventually consistent with the data in the primary nodes, and can quickly be relied on if primary nodes are down. The client side is topology-aware at all times. It knows which node is a newly elected primary and can start re-routing reads/writes when the topology changes. Adding a new shard to the cluster and scaling out storage takes only a few clicks/commands, as shown here. RPC scalability At our RPC tier, our compute nodes are bound to a scaling group that auto-scales based on CPU/memory usage. Gateway/Edge API layer scalability Our gateway tier sits at the edge of our data-center. It is the first component to intercept north-south traffic, and multiplexes many client websocket connections into one connection to each RPC compute node. This helps scalability, as a direct mapping of client connections to compute nodes would create linear growth in socket descriptors (a finite resource) at each RPC compute node. The flip side of multiplexing is that the gateway tier cannot use an Application Load Balancer (ALB) to talk to RPCs, as it has no knowledge of how many virtual connections are being serviced over a physical connection. As a result, the ALB could make uninformed choices when load balancing websocket connections over the RPC fleet. Hence, our gateway tier uses service discovery to bypass the ALB and talk directly to the RPC nodes. Choosing The Right Datastore Choosing the optimal data store is one of the most important yet overlooked aspects of system design. Canva Live had to be scalable from the start, with the system plugging into our high-traffic, existing presentation experience. Defaulting to an RDBMS that only supports vertical scalability of writes would make it more difficult to support growth. To build an end-to-end reactive system, we required a datastore with a reactive client driver to enable end-to-end request processing of the RPC, using reactive APIs. This programming model allows the service to enjoy the full benefits of reactive systems (as outlined in the reactive manifesto), prioritizing increased resilience to burst traffic, and increased scalability. We also needed a publish-subscribe (pub/sub) API that implements the reactive streams spec, to help us monitor data from participant events, such as questions and question deletions. Secondly, our session data is transient with a pre-set invalidation timeout of a few hours. 
We needed to expire data structures without performing housekeeping tasks in a separate worker process. Due to the temporary lifetime of our data, a file-system-based database would create the overhead of disk accesses. Finally, we have seen phenomenal year-on-year growth in our presentations product, and needed a database that scales horizontally. We chose Redis as our datastore, as it best met the above requirements. After analysing the pros and cons of the Redisson and lettuce Java clients, we opted for the latter. lettuce was better suited, as its commands directly map onto their Redis counterparts. The lettuce low-level Java client for Redis provides a PubSub API based on the Reactive Streams specification (listed here), while Redisson supports all the Redis commands but has its own naming and mapping conventions (available here). Redis also supports expiry of items for all data structures. In Redis cluster mode, we have established a topology that lets us scale from day one, without the need for code change. We host the Redis cluster in AWS Elasticache (Cluster Mode enabled), which lets us add a new shard and rebalance the keys with a few clicks. Besides all of the above benefits, Redis also doubles up as a data-structures server, and some of these data-structures were suitable candidates for Canva Live out of the box — such as Redis Streams and SortedSets. It is worth mentioning that user updates could also be propagated to different users by using Kafka and/or SNS+SQS combos. We decided against either of these queuing systems, thanks to the extra data-structures and K-V support offered by Redis. Consider using Redis Streams for propagating user events across different user sessions There can be hundreds of question-adding and deletion events in just one Canva Live session. To facilitate this, we use a pub-sub mechanism via a Redis Stream — a log structure that allows clients to query in many different ways. (More on this here). We use Redis Streams to store our participant-generated events, creating a new stream for every Canva Live session. The RPC module runs on an AWS EC2 node, and holds the presenter connection. It calls the Redis XRANGE command every second to receive any user events. The first poll in a session requests all user events in the stream, while subsequent polls only ask for events since the last retrieved entry ID. Though polling is resource inefficient, especially when using a thread per presenter session, it is easily testable, and lends itself to Canva’s vigorous unit and integration testing culture. We are now building the ability to block on a stream with an XREAD command while using the lettuce reactive API to flush the user events down to the presenter. This will allow us to build an end-to-end reactive system, which is our north star. We’ll eventually move to a model where we can listen to multiple streams and then broadcast updates to different presenter view sessions. This will decouple the linear growth of connections from threads. In our Redis cluster mode topology, streams are distributed based on the hash key, which identifies an active Canva Live session. As a result, a Canva Live event stream will land on a particular shard and its replicas. This allows the cluster to scale out, as not all shards need to hold every event stream. It’s hard to find an API counterpart like the Redis XREAD command in other database systems. Listening capabilities that span different streams are generally only available in messaging systems like Kafka. 
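To make the per-session stream pattern described above more concrete, here is a minimal sketch using the synchronous lettuce API. It is an illustration rather than Canva's actual code: the key format, the XADD payload fields, the class and method names, and the single-node RedisClient setup are all assumptions made for brevity (a production deployment would use the cluster client and the reactive API discussed above).

```java
import io.lettuce.core.Range;
import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class QuestionStreamSketch {

    private final RedisCommands<String, String> redis;
    // ID of the last stream entry already delivered to the presenter view.
    private String lastSeenId = "0-0";

    QuestionStreamSketch(RedisCommands<String, String> redis) {
        this.redis = redis;
    }

    // The {sessionId} hash tag keeps all keys of one session on the same cluster shard.
    private static String streamKey(String sessionId) {
        return "live:{" + sessionId + "}:questions";
    }

    /** Audience side (request/response path): append a question event to the session's stream. */
    public void publishQuestion(String sessionId, String participantId, String text) {
        redis.xadd(streamKey(sessionId), Map.of("participant", participantId, "text", text));
    }

    /** Presenter side: called roughly once a second to pick up events added since the last poll. */
    public List<StreamMessage<String, String>> pollNewEvents(String sessionId) {
        // XRANGE from the last seen ID to the end of the stream ("+"); the range is inclusive,
        // so the entry we already delivered is filtered out below.
        List<StreamMessage<String, String>> fresh = redis
                .xrange(streamKey(sessionId), Range.create(lastSeenId, "+"))
                .stream()
                .filter(message -> !message.getId().equals(lastSeenId))
                .collect(Collectors.toList());
        if (!fresh.isEmpty()) {
            lastSeenId = fresh.get(fresh.size() - 1).getId();
        }
        return fresh;
    }

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            QuestionStreamSketch sketch = new QuestionStreamSketch(connection.sync());
            sketch.publishQuestion("session-123", "participant-1", "How is the audience counted?");
            sketch.pollNewEvents("session-123").forEach(m -> System.out.println(m.getBody()));
        } finally {
            client.shutdown();
        }
    }
}
```

The planned switch to a blocking XREAD with the lettuce reactive API would replace the polling call above with a push-based flow.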
It’s wonderful to see this out of the box, in an in-memory data-structures server such as Redis, with simple key/value support. AWS Elasticache provides all this goodness without the headaches of administering a multi-master plus replica Redis cluster. Minimize Network Traffic By Offloading Querying To The Datastore As Much As Possible Using a tactic taken straight from the RDBMS handbook, we have minimized network traffic by offloading querying to the datastore. As mentioned previously, our version 1 implementation used polling between RPC and Redis, to fetch new comments and manage the audience counter. However, repeated calls across several Canva Live presentations can create significant network congestion between the RPC and Redis cluster, meaning it was critical for us to minimize traffic volume. In the case of the audience counter, we only want to include new, active participants. To do this, we use a Redis SortedSet to register a participant ID, plus the current timestamp. Every time a participant polls again, the timestamp for the participant ID is refreshed by calling the Redis command ZADD (this adds the participant ID along with the current timestamp, and the set is always kept sorted). Then we need to confirm the audience count. We call the Redis command ZCOUNT (count the number of items within a range of timestamps), with the current timestamp (T) and T minus 10 seconds, to calculate the number of live participants within the last ten seconds. Both commands, ZCOUNT and ZADD, have a time complexity of log(N), where N is the total number of items in the SortedSet. Imagine doing something like this in a file-system-based database. Even if the database promised log(N) time complexity, the log(N) is still disk I/O dependent for each operation — which is far more expensive than doing it in-memory. Redis supports many more data-structures like SortedSet with optimal time complexity out of the box. We recommend using these, rather than resorting to key/value storage and performing data filtering and/or manipulation at the RPC layer. The entire list of Redis commands is here: https://redis.io/commands Understand the nuances of Redis transactions The concept of a transaction in Redis is very different from its counterpart in a traditional RDBMS. A client can submit a sequence of operations to a Redis server as a transaction. The Redis server will guarantee the sequence is executed as an atomic unit, without changing context to serve other requests, until the transaction is finished. However, unlike an RDBMS, if one of the steps in the sequence fails, Redis will not roll back the entire transaction. The reasoning for this behavior is listed here — https://redis.io/topics/transactions#why-redis-does-not-support-roll-backs Furthermore, a Redis cluster can only support transactions if the sequence of operations works on data structures in the same shard. We take this into account when we design the formulation of the keys hash tags for our data structures. If our data structures need to participate in a transaction, we use keys that map to the same shard (see https://redis.io/topics/cluster-spec#keys-hash-tags), to ensure these data structures live in the same shard. Ideally we’d like to do transactions across shard boundaries, but this would lead to strategies like two-phase commit, which could compromise the global availability of the Redis cluster (see CAP Theorem). Client-specified consistency requirements, like Dynamo’s, would be more welcome for Redis transactions. 
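As a companion to the ZADD/ZCOUNT description above, the sketch below shows one way the audience counter could be wired up with the synchronous lettuce API. Again, this is a hypothetical illustration, not Canva's implementation: the key format, the 10-second window constant, and the method names are assumptions, and the {sessionId} hash tag is included only to show how a session's keys can be pinned to one shard, as discussed in the transactions section.

```java
import io.lettuce.core.Range;
import io.lettuce.core.api.sync.RedisCommands;

public class AudienceCounterSketch {

    // Participants are considered live if they have polled within the last 10 seconds.
    private static final long WINDOW_MILLIS = 10_000;

    private final RedisCommands<String, String> redis;

    public AudienceCounterSketch(RedisCommands<String, String> redis) {
        this.redis = redis;
    }

    // The {sessionId} hash tag keeps a session's keys on the same cluster shard,
    // which is what allows them to take part in a single Redis transaction.
    private static String audienceKey(String sessionId) {
        return "live:{" + sessionId + "}:audience";
    }

    /** Called on every participant poll: (re)register the participant with the current time. */
    public void heartbeat(String sessionId, String participantId) {
        // ZADD overwrites the member's score, so a repeat poll simply refreshes the timestamp.
        redis.zadd(audienceKey(sessionId), System.currentTimeMillis(), participantId);
    }

    /** Number of participants whose most recent poll falls inside the liveness window. */
    public long liveAudienceCount(String sessionId) {
        long now = System.currentTimeMillis();
        // ZCOUNT over [now - window, now]; both ZADD and ZCOUNT are O(log N).
        return redis.zcount(audienceKey(sessionId), Range.create(now - WINDOW_MILLIS, now));
    }
}
```

Because each heartbeat simply rewrites the member's score, participants who stop polling fall outside the counted window on the next ZCOUNT, with no separate cleanup job needed.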
Minimize streaming connections to chatty interactions It’s easy to get carried away and build every API over a streaming connection. However, a full-fledged streaming API demands substantial development effort and runtime compute resources. As mentioned previously, we stuck with request and response APIs for the participant-facing Canva Live features, which has proven to be a good decision. However, a case can be made for the use of streaming here, because of the audience counter. Instead of repeatedly polling Canva every few seconds to signal availability, using a websockets connection can greatly simplify Redis storage by switching from the existing SortedSet to a simple key/value store for the participant list. This is because we can detect when the client terminates the websockets connection, and use that event to remove the participant from the key/value store. We voted against using participant-side WebSockets connections because our first iteration uses one JVM polling thread per streaming session. If we used the same approach for the participant, it could lead to an unbounded number of threads per RPC server, with no smart event handling system in place. We’re in the design stage of replacing the polling model with a system that uses a single thread to retrieve updates across several transaction logs. This will broadcast updates to participants connected to the RPC process, to help decouple the linear growth of connections from the associated threads, and enhance scalability. Once we have this in place, it will be easier to adopt a stream-based connection for participants. De-risk complexity at every turn Canva Live is one of only two streaming services at Canva, meaning we didn’t have many established patterns to guide us. Since we’re working with a lot of new systems and libraries such as PubSub (Flux) for reactive streams, lettuce and Redis, we wanted to make sure that we could launch to staging quickly and validate the architecture as a foundation for the final production version. Firstly, we poll Redis at one-second intervals to reduce complexity. Secondly, we decided to use request/response services for the participant side of Canva Live. Then, we implemented the data access layer using an in-memory database. Although the in-memory database implementation limited us to a single-instance deployment, it allowed us to quickly validate our assumptions and ensure that the entire streaming architecture works as intended. To give more context on our technology stack, our in-memory database replicates the Redis streams with a Java Deque implementation, and the Redis SortedSet is a HashMap in Java. Once the Redis implementation of the data access layer was ready, we swapped the in-memory version for the Redis version. All the above ways of de-risking complexity may seem contrary to the advice of ‘Building for scale from day 1’. It is worth noting that ‘Building for scale from day 1’ does not mean trying to achieve perfection. The goal is to avoid making technical decisions that will significantly hamper our ability to scale to millions of users. Some of these ideas were borrowed from other Canva teams that had more experience in building similar systems. Leveraging the knowledge of other exceptional engineers was essential to de-risking the complexity of this system. Move towards end-to-end reactive processing We use Project Reactor (https://projectreactor.io/) to implement Reactive Streams on the RPC side. 
On the browser side, we use RxJS to receive events and React to paint the different parts of the UI in an asynchronous manner. At the beginning of our first Canva Live implementation, only our service API was using Flux and FluxSink to flush events to the browser. However, building an entirely reactive system does supply the benefits of increased resilience, responsiveness and scalability. Due to this, we are making the inner layers reactive, all the way down to the database. Our usage of lettuce, which uses the same Flux/Mono API as Reactor, is ideal, as it helps our cause in writing an end-to-end reactive system. Conclusion As of now, Canva Live is enabled for all users, and we’re seeing an incredible number of sessions lighting up our backend systems. Building distributed systems that operate at scale is complex. Adding streaming support takes that complexity to a whole new level. Having said that, we truly believe we have built a streaming NRT system that scales with the demands of our users, and will help foster an interactive, seamless presentation experience. Please take a moment to try out Canva Live, and let us know your thoughts. Appendix Building the backend infrastructure for Canva Live is the result of the joint effort of Anthony Kong, James Burns, myself and innumerable other engineers who have been extremely selfless in reviewing our designs and guiding us along the way. The gateway piece of this puzzle is built and maintained by our awesome Gateway Engineering team.
https://medium.com/canva/lessons-learnt-from-building-reactive-microservices-for-canva-live-789892c58b10
['Canva Team']
2020-10-13 00:37:03.354000+00:00
['Microservices', 'Engineering', 'Software Development', 'Nodejs']
Title Lessons learnt building reactive microservices Canva LiveContent Lessons learnt building reactive microservices Canva Live Behind scene mission drive next era presentation software Presentations one popular format Canva everyone small business student professional creating stunning slide deck — averaging 4 new design per second truly drive next era presentation software we’re empowering community live interactive feature blog Canva Engineer Ashwanth Fernando share team launched realtime experience hybridstreaming backend solution power Canva Live presentation feature 4 million people creating new presentation Canva month it’s surprise doctype consistently rank one fastest growing platform delighting community professionallooking presentation template we’re always lookout new way demonstrate magic Canva Throughout research clear people invest time creating beautiful slideshow one simple reason every presenter want maximum engagement That’s we’re seeing le text photo illustration animation take engagement next level challenged introduce realtime interaction allow every presenter communicate audience easily effectively instantaneously Canva Live Presentations came What’s Canva Live Canva Live patentpending presentation feature let audience ask live question via unique url passcode mobile device Submitted question read presenter digestible interface support fluid audience interaction visual demonstration view video demonstrated audience’s question appear realtime presenter’s screen without page refresh traditional way achieve would poll server regular interval — overhead establishing Secure Sockets Layer SSL connection every poll would cause inefficiency potentially impacting reliability scalability Hence near realtime NRT experience essential Canva Live offering maximum reliability resilience efficiency achieved novel approach reactive microservices Creating RealTime Experience HybridStreaming Backend Solution Canva Live work hybrid streaming system transmit question question deletion audience count update — Remote Procedure Call Servers RPC presenter UI — via WebSockets channel WebSockets constantly remain open mean server client communicate time making ideal choice displaying realtime update number connection audience member RPC Fleet must scale line audience participant use traditional requestresponse APIs connection required transfer data specific moment eg member submits question multiple instance alwaysopen WebSocket channel would use unnecessary compute resource clarity created diagram technology stack Fig 1 presenter audience connect gateway cluster server manages ingres traffic microservice RPC backend fleet gateway manages factor authentication security request context connection pooling rate limiting backend RPCs Canva Live RPC fleet autoscaling group compute node turn talk Redis backend AWS Elasticache cluster mode enabled Though diagram look similar traditional ntier deployment topology found incredible variety difference building scalable streaming service explain potential landmines avoided valuable lesson learnt building Canva Live Consider Building Scale Day 1 software engineer love ability build incrementally increase industrial strength robustness based traffic — especially building new product Canva Live prepare wide scale usage day one start button bakedin one popular page Redis Scalability Redis database deployed cluster mode enabled number shard primary master replica replica eventually consistent data primary node quickly relied primary node client side 
topologyaware time know node newly elected primary start rerouting readswrites topology change Adding new shard cluster scaling storage easy within click buttonscommands shown RPC scalability RPC tier compute node bound scaling group autoscales based CPUmemory usage GatewayEdge API layer scalability gateway tier sits edge datacenter first component intercept northsouth traffic multiplex many client websocket connection 1 connection RPC compute node help scalability direct mapping client connection compute node creates linear growth socket descriptor RPC compute node finite resource flip side multiplexing gateway tier cannot use Amazon Load Balancer ALB talk RPCs knowledge many virtual connection serviced physical connection result ALB could make uninformed choice load balancing websocket connection RPC fleet Hence gateway tier us service discovery bypass ALB talk directly talk RPC node Choosing Right Datastore Choosing optimal data store one important yet overlooked aspect system design Canva Live scalable start system plugging hightraffic existing presentation experience Defaulting RDBMS database support vertical scalability writes would make difficult support growth build endtoend reactive system required datastore reactive client driver enable endtoend request processing RPC using reactive APIs programming model allows service enjoy full benefit reactive system outlined reactive manifesto prioritizing increased resilience burst traffic increased scalability also needed publishsubscribe pubsub API implement reactive stream spec help u monitor data participant event question question deletion Secondly session data transient preset invalidation timeout hour needed expire data structure without performing housekeeping task separate worker process Due temporary lifetime data file system based database would create overhead disk access Finally seen phenomenal yearonyear growth presentation product needed database horizontally scale chose Redis datastore best met requirement analysing pro con Redisson lettuce Java Clients opted latter lettuce better suited command directly map onto Redis counterpart lettuce low level java client Redis provides PubSub API based Reactive Streams specification listed Redisson support Redis command naming mapping convention available Redis also support expiry item data structure Redis cluster mode established topology let u scale day one without need code change host Redis cluster AWS Elasticache Cluster Mode enabled let u add new shard rebalance key click Besides benefit Redis also double datastructures server datastructures suitable candidate Canva Live box — Redis Streams SortedSets worth mentioning user update could also propagated different user using Kafka andor SNSSQS combo decided either queuing system thanks extra datastructures KV support offered Redis Consider using Redis Streams propagating user event across different user session hundred questionadding deletion event one Canva Live session facilitate use pubsub mechanism via Redis Stream — log structure allows client query many different way use Redis Streams store participant generated event creating new stream every Canva Live session RPC module run AWS EC2 node hold presenter connection call Redis XRANGE command every second receive user event first poll session request user event stream subsequent poll ask event since last retrieved entry ID Though polling resource inefficient especially using thread per presenter session easily testable lends Canva’s vigorous unit integration testing culture 
building ability block stream XREAD command using lettuce reactive API flush user event presenter allow u build endtoend reactive system north star We’ll eventually move model listen multiple stream broadcast update different presenter view session decouple linear growth connection thread Redis cluster mode topology stream distributed based hash key identifies active Canva Live Session result Canva Live event stream land particular shard replica allows cluster scale shard need hold event stream It’s hard find API counterpart like Redis XREAD command database system Listening capability span different stream generally available messaging system like Kafka It’s wonderful see box inmemory datastructures server Redis simple keyvalue support AWS Elasticache provides goodness without headache administering multimaster plus replica Redis cluster Minimize Network Traffic Offloading Querying Datastore Much Possible Using tactic taken straight RDMBS handbook minimized network traffic offloading querying datastore mentioned previously version 1 implementation used polling RPC Redis fetch new comment manage audience counter However repeated call across several Canva Live presentation create significant network congestion RPC Redis cluster meaning critical u minimize traffic volume case audience counter want include new active participant use Redis SortedSet register participant ID plus current timestamp Every time participant poll timestamp participant id refreshed calling Redis command ZADD add participant id along current timestamp always sorted need confirm audience count call Redis command ZCOUNT count number item range timestamps current timestamp — 10 second calculate number live participant within last ten second command ZCOUNT ZADD time complexity logN N total number item SortedSet Imagine something like file systembased database Even database promised logN time complexity logN still disk IO dependent operation — far expensive inmemory Redis support many datastructures like SortedSet optimal time complexity box recommend using rather resorting key value storage performing data filtering andor manipulation RPC layer entire list Redis command httpsredisiocommands Understand nuance Redis transaction concept transaction Redis different counterpart traditional RDBMS client submit operation sequence Redis server transaction Redis server guarantee sequence executed atomic unit without changing context serve request transaction finished However unlike RDBMS one step sequence fails Redis roll back entire transaction reasoning behavior listed — httpsredisiotopicstransactionswhyredisdoesnotsupportrollbacks Furthermore Redis cluster support transaction sequence operation work data structure shard take account design formulation key hash tag data structure data structure need participate transaction use use key map shard see httpsredisiotopicsclusterspeckeyshashtags ensure data structure live shard Ideally we’d like transaction across shard boundary lead strategy like twophase commits could compromise global availability Redis cluster see CAP Theorem Client specified consistency requirement like Dynamo would welcome Redis transaction Minimize streaming connection chatty interaction It’s easy get carried away build every API streaming connection However fullfledged streaming API demand substantial development effort runtime compute resource mentioned previously stuck request response APIs participantfacing Canva Lives feature proven good decision However case made use streaming audience counter Instead 
repeatedly polling Canva every second inform availability using websockets connection greatly simplify Redis storage switching existing SortedSet simple keyvalue store participant list detect client terminates websockets connection use event remove participant key value store voted using participant side WebSockets connection first iteration us one JVM polling thread per streaming session used approach participant could lead unbounded number thread per RPC server smart event handling system place We’re design stage replacing polling model system us single thread retrieve update across several transaction log broadcast update participant connected RPC process help decouple linear growth connection associated thread enhance scalability place easier adopt streambased connection participant Derisk complexity every turn Canva Live one two streaming service Canva meaning didn’t many established pattern guide u Since we’re working lot new system library PubSubFlux reactive stream Lettuce Redis wanted make sure could launch staging quickly validate architecture foundation final production version Firstly poll Redis one second interval reduce complexity Secondly decided use requestresponse service participant side Canva Live implemented data access layer using inmemory database Although inmemory database implementation limited u single instance deployment allowed u quickly validate assumption ensure entire streaming architecture work intended give context technology stack inmemory database replicates Redis stream Java Deque implementation Redis SortedSet HashMap Java redis implementation data access layer ready swapped inmemory version Redis version way derisking complexity seems contrary advice ‘Building scale day 1’ worth noting ‘Building scale day 1’ mean trying achieve perfection goal avoid making technical decision significantly hamper ability scale million user idea borrowed Canva team experience building similar system Leveraging knowledge exceptional engineer essential derisking complexity system Move towards endtoend reactive processing use Project Reactor httpsprojectreactorio implement Reactive Streams RPC side browser side use RxJs receive event React paint different part UI asynchronous manner beginning first Canva Live implementation service API using Flux FluxSink flush event browser However building entirely reactive system supply benefit increased resilience responsiveness scalability Due making innerlayers reactive way database usage lettuce us FluxMono API Reactor ideal help cause writing endtoend reactive system Conclusion Canva Live enabled user we’re seeing incredible number session lightingup backend system Building distributed system operate scale complex Adding streaming support take complexity whole new level said truly believe built streaming NRT system scale demand user help foster interactive seamless presentation experience Please take moment try Canva Live let u know thought Appendix Building backend infrastructure Canva Live result joint effort Anthony Kong James Burns innumerable engineer extremely selfless reviewing design guiding u along way gateway piece puzzle built maintained awesome Gateway Engineering teamTags Microservices Engineering Software Development Nodejs
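The audience-counter scheme summarised in the notes above (ZADD a participant id with the current timestamp on every poll, then ZCOUNT the entries from the last ten seconds) maps directly onto two Redis calls. Below is a minimal, hedged sketch using the Python redis client purely for illustration; the production service uses Java with lettuce, and the key name and ten-second window here are assumptions.

import time
import redis

r = redis.Redis(host="localhost", port=6379)

def record_heartbeat(session_id: str, participant_id: str) -> None:
    # ZADD keeps one entry per participant, always sorted by its score (a timestamp).
    r.zadd(f"live:{session_id}:participants", {participant_id: time.time()})

def live_audience_count(session_id: str, window_seconds: int = 10) -> int:
    # ZCOUNT over the last ten seconds gives the number of currently live participants.
    now = time.time()
    return r.zcount(f"live:{session_id}:participants", now - window_seconds, now)

record_heartbeat("demo-session", "participant-42")
print(live_audience_count("demo-session"))

Both ZADD and ZCOUNT are O(log N), which is why the count stays cheap even for large audiences.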
3,912
Single-binary Web Apps in Go and Vue — Part 4
Photo by David Pisnoy on Unsplash This is part 4 in a four part series where we will demonstrate building a web application using Go and Vue, and finally bundle it all together in a single binary for super-easy deployment/distribution. Part 1 can be found here, part 2 here, and part 3 here. In part 1 we built the Go and Vue apps. In part 2 we changed the Go app to automatically start the Vue app by running Node when the version of the application is “development”. In part 3 we bundled it all up into a single compiled binary. In this article, to make our lives easy, we are going to use the tool make to build everything. If you recall, we had to run these commands to make a final bundled build. $ cd app $ npm run build $ cd .. $ go generate $ go build -tags=prod -ldflags="-X 'main.Version=1.0.0'" That’s a lot of steps. Let us simplify. I’ll start by showing the whole Makefile, and then we’ll break it down. Line 1 is simple. It states that when we run make the default action is run, meaning it will execute the script at line 17. Line 4 is a variable that identifies the path to UPX. UPX is a nifty tool that compresses executables. Why do I want this? When we bundle in our static JavaScript assets (our Vue app) it makes our final executable fairly large. UPX will make the binary as small as possible. Lines 6–10 define some variables we are going to use further down. VERSION is where you can set the version of your application; it is applied to the variable Version in main.go. BUILDFLAGS are flags that will be passed to go build: the flag “-s” tells Go to omit the symbol table from the binary, the flag “-w” says to omit debug information, and the “-X” flag is what sets the version variable in the binary. PROJECTNAME is the name of the final executable. GCVARS contains the environment variables specifying to omit cgo and to use AMD64 architecture. GC is the final Go compile command; this is the string that will run the actual build. Lines 12–15 are a bit of script that ensures that UPX is installed. If it isn’t installed, the Make process will stop with an error. Lines 17 and 18 define the goal named run. This just runs the Go app with the tag “dev”. This will also start the Node server for the Vue app because in main.go the variable Version is “development”. The final bits define goals to build for various platforms. Notice how each depends on the goal generate-compiled-assets, which in turn depends on build-vue-app. This means that if you run make build-linux it will first build the Vue app, then run “go generate”, then finally run the Go build.
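The Makefile embed referenced in the walkthrough above did not survive extraction. The following is a hedged reconstruction assembled from the line-by-line description (default run goal, UPX path and check, the VERSION/BUILDFLAGS/PROJECTNAME/GCVARS/GC variables, and the per-platform build goals); exact line numbers, goal names, and paths are assumptions and may differ from the author's original. Recipe lines must be tab-indented.

# Hedged reconstruction of the Makefile described above; names and paths are assumptions.
.DEFAULT_GOAL := run

# Path to the UPX binary used to compress the final executable.
UPX = /usr/local/bin/upx

VERSION = 1.0.0
BUILDFLAGS = -s -w -X 'main.Version=$(VERSION)'
PROJECTNAME = myapp
GCVARS = CGO_ENABLED=0 GOARCH=amd64
GC = $(GCVARS) go build -tags=prod -ldflags="$(BUILDFLAGS)"

# Stop the Make process with an error if UPX is not installed.
ifeq (,$(wildcard $(UPX)))
$(error UPX not found at $(UPX); please install it)
endif

run:
	go run -tags=dev .

build-vue-app:
	cd app && npm run build

generate-compiled-assets: build-vue-app
	go generate

build-linux: generate-compiled-assets
	GOOS=linux $(GC) -o $(PROJECTNAME)
	$(UPX) $(PROJECTNAME)

build-windows: generate-compiled-assets
	GOOS=windows $(GC) -o $(PROJECTNAME).exe
	$(UPX) $(PROJECTNAME).exe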
https://adam-presley.medium.com/single-binary-web-apps-in-go-and-vue-part-4-2a1ab9f69fcb
['Adam Presley']
2020-12-30 05:56:27.084000+00:00
['Software Development', 'JavaScript', 'Development', 'Vuejs', 'Golang']
Title Singlebinary Web Apps Go Vue — Part 4Content Photo David Pisnoy Unsplash part 4 four part series demonstrate building web application using Go Vue finally bundle together single binary supereasy deploymentdistribution Part 1 found part 2 part 3 part 1 built Go Vue apps part 2 changed Go app automatically start Vue app running Node version application “development” part 3 bundled single compiled binary article make life easy going use tool make build everything recall run command make final bundled build cd app npm run build cd go generate go build tagsprod ldflagsX mainVersion100 That’s lot step Let u simplify I’ll start showing whole Makefile we’ll break Line 1 simple state run make default action run meaning execute script line 17 Line 4 variable identifies path UPX UPX nifty tool compress executables want bundle static JavaScript asset Vue app make final executable fairly large UPX make binary small possible Lines 6–10 define variable going use variable VERSION set version application applied variable Version maingo set version application applied variable BUILDFLAGS flag passed go build flag “s” tell Go omit symbol table binary flag “w” say omit debug information “X” flag setting version variable binary flag passed go build flag “s” tell Go omit symbol table binary flag “w” say omit debug information “X” flag setting version variable binary PROJECTNAME variable name final executable variable name final executable GCVARS contains environment variable specifying omit cgo use AMD64 architecture contains environment variable specifying omit cgo use AMD64 architecture GC final Go compile command string run actual build Lines 12–15 bit script ensures UPX installed isn’t installed Make process stop error Lines 17 18 define goal named run run Go app tag “dev” also start Node server Vue app maingo variable Version “development” final bit define goal build various platform Notice depends goal generatecompiledassets turn depends buildvueapp mean run make buildlinux first build Vue app run “go generate” finally run Go buildTags Software Development JavaScript Development Vuejs Golang
3,913
Apple Is Killing A Billion-Dollar Ad Industry With One Popup
Apple Is Killing A Billion-Dollar Ad Industry With One Popup The new iOS 14 privacy feature spells trouble for advertisement agencies and promises to end an era of personalized ads Photo by Tobias Moore on Unsplash When Apple’s WWDC 2020 digital-only Keynote event kickstarted, all eyes were on the new mac OS Big Sur and the ambitious Apple Silicon chips. But, from the perspective of advertisement agencies, it was the new iOS 14 privacy-based features that sent shockwaves in their industry and became the major talking point. For the uninitiated, a lot of apps today use an Advertising identifier (IDFA). It allows developers and marketers to track activity for advertising purposes. Plenty of marketing agencies backed by Google and Facebook run campaigns to record purchases, usage time, user actions, and subsequently serve personalized ads. Over 100K apps on the App Store today have the Facebook or Google SDK integrated which tracks and sends your data to the tech giants and third party brokers. But iOS 14 is all set to change that by being upfront and transparent with users about how their data is used for ads.
https://medium.com/macoclock/apple-is-killing-a-billion-dollar-ad-industry-with-one-popup-2f83d182837f
['Anupam Chugh']
2020-07-10 18:05:22.119000+00:00
['Technology', 'Marketing', 'Advertising', 'Apple', 'Business']
Title Apple Killing BillionDollar Ad Industry One PopupContent Apple Killing BillionDollar Ad Industry One Popup new iOS 14 privacy feature spell trouble advertisement agency promise end era personalized ad Photo Tobias Moore Unsplash Apple’s WWDC 2020 digitalonly Keynote event kickstarted eye new mac OS Big Sur ambitious Apple Silicon chip perspective advertisement agency new iOS 14 privacybased feature sent shockwaves industry became major talking point uninitiated lot apps today use Advertising identifier IDFA allows developer marketer track activity advertising purpose Plenty marketing agency backed Google Facebook run campaign record purchase usage time user action subsequently serve personalized ad 100K apps App Store today Facebook Google SDK integrated track sends data tech giant third party broker iOS 14 set change upfront transparent user data used adsTags Technology Marketing Advertising Apple Business
3,914
8 last-minute ideas for a healthier Valentine’s Day
This story was originally published on blog.healthtap.com on February 14, 2018. Valentine’s Day is a wonderful day to celebrate all of the love in your life, whether that love is with a partner, your friends, or your family. Valentine’s Day is also an amazing day to take the time to practice some self-care and to show some love for yourself. If you need some last minute ideas to spice up your day, try these. They’re perfect to do as a date with that special someone, to do with your friends, or to do by yourself if you need a little time for self-love. These healthy ideas will help make your celebration more adventurous and interesting, and healthier too! Go on a hike Wintery hike in the woods? Coastal hike by the beach? Lace up your shoes and go spend some quality time outdoors by going on a hike with your loved ones, or just for some peace of mind. You’ll get fresh air, beautiful views, and some great exercise to boot. Cook dinner at home Light some candles, pop open some wine, and save some money by cooking a meal at home. It can be a collaborative effort, you’ll get to show off your cooking skills, and you can cook something healthy and nourishing for you and the one you’re with. You’ll also get to be able to better control your portions according to your health goals and needs. Make a dark chocolate dessert Instead of devouring a box of chocolates, try eating some dark chocolate or making a dark chocolate dessert instead. Dark chocolate is full of antioxidants called flavonols, which promote heart health by lowering LDL cholesterol (the “bad” cholesterol) in your arteries and by improving circulation. Just make sure you stick to chocolate above 70% cacao, and watch added sugar and fat which counteracts these heart-healthy benefits. Go out dancing Whether you’re going out with a date or with your friends, going to a dance class or just hitting the town can be not only an incredibly fun, but also an extremely fit way to celebrate the evening. Dancing is one of the most fun ways to get in aerobic and muscle-strengthening exercise, and you can burn up to 230 calories in just 30 minutes. Go on a picnic Load up a basket of healthy goodies and head outside to a nice and sunny spot outdoors. You’ll get to choose exactly what you want to put in your basket while getting to explore outside, and it can be a perfect way to end a day of hiking or any other adventure! Spend some time smooching If you’re with someone you care about on Valentine’s Day, it’s good to get in that time to smooch that special someone. Did you know that kissing has some great health benefits? Kissing helps lower blood pressure, spikes your feel-good hormones, and also burns a few calories. In fact, a good old-fashioned make-out can burn up to 6.5 calories a minute! Get a massage Whether you’re booking a couple massage or you’re going solo, getting a massage is a perfect way to show yourself some love on Valentine’s Day. Massages help decrease anxiety, relieve chronic pain, improve circulation, and have a host of other incredible benefits for both your mind and body. It’ll be a treat you’ll feel great about giving yourself. Give the gift of a group fitness pass Want to show someone you care? Give the gift fo working out together! A group fitness pass will be a wonderful way for you and your partner or friend to hold each other accountable in your fitness goals, and to have a lot more fun and time together doing it. Whatever your plans are for today, we wish you and your loved ones a healthy and happy Valentine’s Day! 
Author: Maggie Harriman
https://medium.com/healthtap/8-last-minute-ideas-for-a-healthier-valentines-day-2b29e86a6a97
[]
2018-02-14 18:30:58.406000+00:00
['Self Care', 'Health', 'Valentines Day', 'Love', 'Wellness']
Title 8 lastminute idea healthier Valentine’s DayContent story originally published bloghealthtapcom February 14 2018 Valentine’s Day wonderful day celebrate love life whether love partner friend family Valentine’s Day also amazing day take time practice selfcare show love need last minute idea spice day try They’re perfect date special someone friend need little time selflove healthy idea help make celebration adventurous interesting healthier Go hike Wintery hike wood Coastal hike beach Lace shoe go spend quality time outdoors going hike loved one peace mind You’ll get fresh air beautiful view great exercise boot Cook dinner home Light candle pop open wine save money cooking meal home collaborative effort you’ll get show cooking skill cook something healthy nourishing one you’re You’ll also get able better control portion according health goal need Make dark chocolate dessert Instead devouring box chocolate try eating dark chocolate making dark chocolate dessert instead Dark chocolate full antioxidant called flavonols promote heart health lowering LDL cholesterol “bad” cholesterol artery improving circulation make sure stick chocolate 70 cacao watch added sugar fat counteracts hearthealthy benefit Go dancing Whether you’re going date friend going dance class hitting town incredibly fun also extremely fit way celebrate evening Dancing one fun way get aerobic musclestrengthening exercise burn 230 calorie 30 minute Go picnic Load basket healthy goody head outside nice sunny spot outdoors You’ll get choose exactly want put basket getting explore outside perfect way end day hiking adventure Spend time smooching you’re someone care Valentine’s Day it’s good get time smooch special someone know kissing great health benefit Kissing help lower blood pressure spike feelgood hormone also burn calorie fact good oldfashioned makeout burn 65 calorie minute Get massage Whether you’re booking couple massage you’re going solo getting massage perfect way show love Valentine’s Day Massages help decrease anxiety relieve chronic pain improve circulation host incredible benefit mind body It’ll treat you’ll feel great giving Give gift group fitness pas Want show someone care Give gift fo working together group fitness pas wonderful way partner friend hold accountable fitness goal lot fun time together Whatever plan today wish loved one healthy happy Valentine’s Day Author Maggie HarrimanTags Self Care Health Valentines Day Love Wellness
3,915
Shitting Like Tim Ferriss
Enter Tim Ferriss and the Mental Ward So, if depleting my body of all animal products and saturated fat caused me to shake hands with God, Shiva, and the gang, then surely doing the exact opposite would work, right? Wrong. (Quick side note: I didn’t see Elvis there. Meaning, he’s either still alive or in the “other place.” Or, maybe the other place is the same as being alive? I digress.) I read all the books and articles that I could find to improve my mind, body, and soul. I was all in, baby! Again. Among my pile of books, I had copies of James Clear’s Atomic Habits and Tim Ferriss’s books about getting a 4-hour body and a 4-hour workweek. At this point, I’d settle for any habit, atomic or not. Whatever I could do to improve! Do you know what happens when you try to tackle 27 daily habits back-to-back? You go crazy. Legit. It was 11 p.m. on Saturday, and I was calling the Veterans Crisis Line (if anyone needs it: 800–273–8255). The lady on the phone was nice; the cops that showed up, not so much. Maybe it had to do with me being a former Marine and being 6'2" that they decided to cuff me and cart me out. Don’t worry, I’ll write about the whole experience later (ex: “7 Life Lessons I Learned While Locked Up in the Loney Bin”). Long story short: a week after living in a locked-up hospital ward with bolted-down furniture and “Medicine time” nurses, I realized something: I was taking things too far. I needed balance. I needed self-compassion.
https://medium.com/rogues-gallery/shitting-like-tim-ferriss-c959858f9173
['Ryan Dejonghe']
2020-12-16 23:03:23.765000+00:00
['Humor', 'Creativity', 'Health', 'Self', 'Tim Ferriss']
Title Shitting Like Tim FerrissContent Enter Tim Ferriss Mental Ward depleting body animal product saturated fat caused shake hand God Shiva gang surely exact opposite would work right Wrong Quick side note didn’t see Elvis Meaning he’s either still alive “other place” maybe place alive digress read book article could find improve mind body soul baby Among pile book copy James Clear’s Atomic Habits Tim Ferriss’s book getting 4hour body 4hour workweek point I’d settle habit atomic Whatever could improve know happens try tackle 27 daily habit backtoback go crazy Legit 11 pm Saturday calling Veterans Crisis Line anyone need 800–273–8255 lady phone nice cop showed much Maybe former Marine 62 decided cuff cart Don’t worry I’ll write whole experience later ex “7 Life Lessons Learned Locked Loney Bin” Long story short week living lockedup hospital ward bolteddown furniture “Medicine time” nurse realized something taking thing far needed balance needed selfcompassionTags Humor Creativity Health Self Tim Ferriss
3,916
Designing for the Discovery of Big Data
Spain is the number one country for tourism in the world. With sights like the Museo del Prado and Royal Palace of Madrid, it is hard to see how such spectacular beauty could ever be overlooked. To assist in visualising the majesty of this European gem, Vizzuality and CartoDB are proud to announce our latest release: an interactive tool to analyse tourist spending in Spain during the summer of 2014. Using BBVA Data and Analytics, you can see how tourists of different nationalities spent their time in Spain. Take a look at this UN-BBVA-LIEVABLE visualisation! With anonymised data, on 5.4 million credit card transactions, we worked out how to visualise tourist spending effectively while optimising the speed and performance of the application. Equipped with our unique blend of pioneering design principles and innovative coding, we delivered “a piece of artwork” — almost fit for the Prado!
https://medium.com/vizzuality-blog/designing-for-the-discovery-of-big-data-b715afdd23ce
['Jamie Gibson']
2016-01-07 11:48:06.965000+00:00
['Design', 'Big Data', 'Data Visualization']
Title Designing Discovery Big DataContent Spain number one country tourism world sight like Museo del Prado Royal Palace Madrid hard see spectacular beauty could ever overlooked assist visualising majesty European gem Vizzuality CartoDB proud announce latest release interactive tool analyse tourist spending Spain summer 2014 Using BBVA Data Analytics see tourist different nationality spent time Spain Take look UNBBVALIEVABLE visualisation anonymised data 54 million credit card transaction worked visualise tourist spending effectively optimising speed performance application Equipped unique blend pioneering design principle innovative coding delivered “a piece artwork” — almost fit PradoTags Design Big Data Data Visualization
3,917
Animations with Matplotlib
Animations with Matplotlib Using the matplotlib library to create some interesting animations. Animations are an interesting way of demonstrating a phenomenon. We as humans are always enthralled by animated and interactive charts rather than the static ones. Animations make even more sense when depicting time-series data like stock prices over the years, climate change over the past decade, seasonalities and trends since we can then see how a particular parameter behaves with time. The above image is a simulation of Rain and has been achieved with Matplotlib library which is fondly known as the grandfather of python visualization packages. Matplotlib simulates raindrops on a surface by animating the scale and opacity of 50 scatter points. Today Python boasts of a large number of powerful visualization tools like Plotly, Bokeh, Altair to name a few. These libraries are able to achieve state of the art animations and interactiveness. Nonetheless, the aim of this article is to highlight one aspect of this library which isn’t explored much and that is Animations and we are going to look at some of the ways of doing that.
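As a concrete illustration, here is a minimal, hedged sketch in the spirit of the rain simulation described above: 50 scatter points whose size grows and whose opacity fades on every frame, driven by FuncAnimation. The growth and fade rates are arbitrary choices, not the values used for the original image.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

n_drops = 50
rng = np.random.default_rng(0)
positions = rng.uniform(0, 1, (n_drops, 2))      # x, y of each drop
sizes = rng.uniform(5, 50, n_drops)              # current marker size
colors = np.zeros((n_drops, 4))                  # RGBA rings
colors[:, 3] = rng.uniform(0.2, 1.0, n_drops)    # start at a random opacity

fig, ax = plt.subplots(figsize=(6, 6))
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xticks([])
ax.set_yticks([])
scat = ax.scatter(positions[:, 0], positions[:, 1], s=sizes,
                  facecolors="none", edgecolors=colors, lw=1)

def update(frame):
    sizes[:] += 20                    # every ring grows...
    colors[:, 3] -= 0.04              # ...and fades a little each frame
    dead = colors[:, 3] <= 0          # fully faded drops restart somewhere new
    positions[dead] = rng.uniform(0, 1, (int(dead.sum()), 2))
    sizes[dead] = 5
    colors[dead, 3] = 1.0
    scat.set_offsets(positions)
    scat.set_sizes(sizes)
    scat.set_edgecolors(colors)
    return (scat,)

anim = FuncAnimation(fig, update, frames=200, interval=50, blit=True)
plt.show()

FuncAnimation simply calls update once per frame; replacing plt.show() with anim.save(...) writes the animation to a file if a suitable writer is installed.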
https://towardsdatascience.com/animations-with-matplotlib-d96375c5442c
['Parul Pandey']
2020-09-09 01:37:14.569000+00:00
['Data Visualization', 'Matplotlib', 'Python', 'Programming', 'Towards Data Science']
Title Animations MatplotlibContent Animations Matplotlib Using matplotlib library create interesting animation Animations interesting way demonstrating phenomenon human always enthralled animated interactive chart rather static one Animations make even sense depicting timeseries data like stock price year climate change past decade seasonalities trend since see particular parameter behaves time image simulation Rain achieved Matplotlib library fondly known grandfather python visualization package Matplotlib simulates raindrop surface animating scale opacity 50 scatter point Today Python boast large number powerful visualization tool like Plotly Bokeh Altair name library able achieve state art animation interactiveness Nonetheless aim article highlight one aspect library isn’t explored much Animations going look way thatTags Data Visualization Matplotlib Python Programming Towards Data Science
3,918
Why Engineers Cannot Estimate Time
A statistical approach to explaining bad deadlines in engineering projects Whether you are a junior, senior, project manager, or a top-level manager with 20 years of experience, software project time estimation never becomes easy. No one no matter how experienced or genius they are can claim to know for sure the exact time a software project would take. This problem is especially prevalent in software engineering, but other engineering disciplines are also known to suffer from the same downfall. So while this article focuses on software engineering, it also applies to other disciplines, to an extent. Overview Let’s first have a birds-eye view of the problem, the consequences, and the potential root causes. I will be covering most of these during this series. The Problem Software projects seldom meet the deadline. The Consequences Marketing efforts can be wasted, clients can be dissatisfied, stressed developers can write poor quality code to meet deadlines and compromise product reliability, and ultimately, projects can outright get canceled. The Known Causes Wrong time estimates (the focus of this article) . . Unclear requirements at the start of the project and, later, changing requirements. Gold-plating: too much attention to details outside the scope of work. Not taking enough time in the research and architecture design phase or, conversely, taking too much time. Overlooking potential issues with 3rd party integrations. The desire to “get it right the first time” Working on too many projects at the same time or getting distracted (breaking the flow too often). Unbalanced quality-throughput scale. Over-optimism, Dunning-Kruger effect, pure uncertainty, or just math? Stage 5: ACCEPTANCE It’s easy to dismiss the concept of over-optimism all together just because it’s common sense that no developer who ever struggled to meet a deadline will be optimistic when setting deadlines. Now if project management is not coming from an engineering background and they set deadlines without knowing what they are doing, that’s a whole different issue that is outside the scope of this article. Some also attribute bad time estimation to the Dunning-Kruger effect, however, if inexperience or overestimating one’s ability is behind underestimating time then definitely more experience should alleviate the issue, right? The biggest companies out there with almost infinite resources still have a shockingly high rate of missing deadlines, so that hypothesis is debunked. Not to mention, we have all experienced this ourselves. More experience barely helps when it comes to time estimates. Most developers, especially rather experienced ones, quickly conclude that it’s just pure uncertainty. And it follows that time estimates will always be wrong and that’s just a fact of life and the only thing we can do about it is, well, try to meet client demands and tell developers to “just crunch” when things go wrong. We are all familiar with the stress, the garbage code, and the absolute mayhem that this philosophy causes. Is there a method to the madness? Is this really the best way we can get things done? Well, I didn’t think so and that’s when I embarked on my journey trying to find a rational mathematical explanation as to why all those smart people are unable to estimate the time it’d take them to do something. It’s just math! One day I was doing a task that should have taken 10 minutes and ended up taking 2 hours. 
I started contemplating the reasons why I thought it would take 10 minutes and the root cause that pumped that number all the way up to 2 hours. My thought process was a bit interesting: I thought it would take 10 minutes because I actually knew 100% in my head the exact code that I needed to write. It actually took me around 7–10 minutes to be done with the code. Then it took 2 hours because of a bug in the framework completely unknown to me. This is what people like to call in project management “force majeure”; external uncontrollable causes of delay. Now you might be thinking that I’m just proving the uncertainty argument with that scenario. Well, yes and no. Let’s zoom out a bit. Sure, uncertainty is the root cause of the delay of this particular task because I would have never guessed that bug existed. But should it be responsible for the delay of the whole project? That’s where we need to draw the distinction that a single task isn’t representative of the project and vice versa. How we “normally” estimate time A normal distribution (bell curve) Normal distributions are all around us and the human brain is pretty used to them. We are experts at estimating things following a normal distribution by nature; it’s the basis of gaining experience by exposure. If you went to the nearest 7–11 almost 20 times this month and every time it took you 5 minutes, except for that time the elevator needed maintenance and you had to wait for 10 minutes and maybe that other time you decided to wait a couple of minutes until it stops raining. What would be your guess for the time it takes you to go there right now? 5 minutes? I mean, it doesn’t make sense to say 15 because it’s a rare incident or 7 unless it’s raining outside. And you’d be right, most likely. If 18 out of 20 times took 5 minutes then certainly there’s a big chance that it would just take 5 minutes (the median) this time, roughly 90% chance (without getting into more complex algebra, of course). It’s skewed! Even if you are really good at estimating the time a task will take, that doesn’t mean you will be good at estimating the time the project will take! Counter intuitively, you will be more wrong. Now all the math nerds (or data scientists/statisticians) reading right now must have already recognized that tiny graph in the previous meme as a right-skewed normal distribution. Let me enlarge, and clarify: The median still has a higher probability of being true than the mean, for that single task! If you were to guess the mode value which has the highest probability, you’d be even more wrong on a larger scale Do you see how things can go wrong here? Our “natural” guess is based on the median which maximizes our probability of guessing right, however, the real number when that “event” occurs enough times will always approach the mean. In other words: The more similar tasks you do, the more that error accumulates! Delay equation, based on that hypothesis Programming tasks on a project are usually pretty similar or at least grouped into few similar clusters! This equation also implies that the problem is scalable! While we want everything in software projects to be scalable, problems are certainly not welcome. So, how to use this knowledge? To be honest, while writing this article I didn’t have in mind any intention to give “instructions” based on this hypothesis. It’s just meant as an exploratory analysis concluding with a hypothesis that’s up to you, the reader, to interpret however you wish. 
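To make the accumulation argument concrete, here is a small, hedged simulation (not from the article): task durations drawn from a right-skewed distribution, a per-task guess equal to the sample median, and the gap between the guessed total and the real total. The lognormal parameters and task count are arbitrary.

import numpy as np

rng = np.random.default_rng(42)
n_tasks = 100
# Hypothetical task durations in hours, right-skewed (lognormal is one common choice).
tasks = rng.lognormal(mean=1.0, sigma=0.9, size=n_tasks)

median_estimate = np.median(tasks) * n_tasks   # what the "natural" per-task guesses add up to
actual_total = tasks.sum()                     # what the project really takes
print(f"median-based estimate: {median_estimate:7.1f} h")
print(f"actual total:          {actual_total:7.1f} h")
print(f"accumulated delay:     {actual_total - median_estimate:7.1f} h")

Each individual guess is still "usually right", yet the guessed total comes out noticeably short of the real sum, which is exactly the N * (mean - median) gap the delay equation describes.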
However, I do know that many will be disappointed at that open-ended conclusion so here’s what I personally make of it. It’s easier to tell if task X would take more/less/same time compared to task Y than it is to tell exactly how long they would take. This is because comparing the medians works just as well as comparing the means if the skewness of the curves is roughly the same (which is true for similar tasks). I don’t recall or record every single similar task to do the math and get the mean (and couldn’t find any data to torture). So I usually estimate the inevitable error (mean-median) as a percentage of the task time that goes up/down depending on how comfortable I’m with the dev environment (do I like this language/framework? (40%) Do I have good debugging tools? (30%) Good IDE support? (25%) … etc). I started splitting sprints into equally sized tasks, just to create some uniformity in the time estimation process. This allows me to benefit from point 1, it should be easy to tell if two tasks are roughly equal in time. This also makes tasks even more similar so that the hypothesis applies even more perfectly and things become more predictable. With these principles applied, you can do a “test run” if you have the resources. For example, if in X1 days with Y1 developers Z1 of the uniform tasks were completed then we can easily solve for X2 (days) given we know Y2 (developers available) and Z2 (total tasks left). Finally, make sure to follow if you don’t want to miss the upcoming articles covering the other causes of delay.
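The "test run" at the end is plain proportional scaling. A tiny worked example with made-up numbers, where x1, y1, z1, z2, y2 are the quantities named above:

def estimate_days(x1, y1, z1, z2, y2):
    """Days for z2 tasks with y2 developers, given z1 tasks took x1 days with y1 developers."""
    dev_days_per_task = (x1 * y1) / z1    # cost of one uniform task in developer-days
    return dev_days_per_task * z2 / y2

print(estimate_days(x1=10, y1=4, z1=20, z2=50, y2=5))   # 2 dev-days per task -> 20.0 days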
https://medium.com/swlh/why-engineers-cannot-estimate-time-5639750df419
['Hesham Meneisi']
2020-12-26 10:43:29.157000+00:00
['Software Engineering', 'Software Development', 'Engineering', 'Time Management', 'Project Management']
Title Engineers Cannot Estimate TimeContent statistical approach explaining bad deadline engineering project Whether junior senior project manager toplevel manager 20 year experience software project time estimation never becomes easy one matter experienced genius claim know sure exact time software project would take problem especially prevalent software engineering engineering discipline also known suffer downfall article focus software engineering also applies discipline extent Overview Let’s first birdseye view problem consequence potential root cause covering series Problem Software project seldom meet deadline Consequences Marketing effort wasted client dissatisfied stressed developer write poor quality code meet deadline compromise product reliability ultimately project outright get canceled Known Causes Wrong time estimate focus article Unclear requirement start project later changing requirement Goldplating much attention detail outside scope work taking enough time research architecture design phase conversely taking much time Overlooking potential issue 3rd party integration desire “get right first time” Working many project time getting distracted breaking flow often Unbalanced qualitythroughput scale Overoptimism DunningKruger effect pure uncertainty math Stage 5 ACCEPTANCE It’s easy dismiss concept overoptimism together it’s common sense developer ever struggled meet deadline optimistic setting deadline project management coming engineering background set deadline without knowing that’s whole different issue outside scope article also attribute bad time estimation DunningKruger effect however inexperience overestimating one’s ability behind underestimating time definitely experience alleviate issue right biggest company almost infinite resource still shockingly high rate missing deadline hypothesis debunked mention experienced experience barely help come time estimate developer especially rather experienced one quickly conclude it’s pure uncertainty follows time estimate always wrong that’s fact life thing well try meet client demand tell developer “just crunch” thing go wrong familiar stress garbage code absolute mayhem philosophy cause method madness really best way get thing done Well didn’t think that’s embarked journey trying find rational mathematical explanation smart people unable estimate time it’d take something It’s math One day task taken 10 minute ended taking 2 hour started contemplating reason thought would take 10 minute root cause pumped number way 2 hour thought process bit interesting thought would take 10 minute actually knew 100 head exact code needed write actually took around 7–10 minute done code took 2 hour bug framework completely unknown people like call project management “force majeure” external uncontrollable cause delay might thinking I’m proving uncertainty argument scenario Well yes Let’s zoom bit Sure uncertainty root cause delay particular task would never guessed bug existed responsible delay whole project That’s need draw distinction single task isn’t representative project vice versa “normally” estimate time normal distribution bell curve Normal distribution around u human brain pretty used expert estimating thing following normal distribution nature it’s basis gaining experience exposure went nearest 7–11 almost 20 time month every time took 5 minute except time elevator needed maintenance wait 10 minute maybe time decided wait couple minute stop raining would guess time take go right 5 minute mean doesn’t make sense say 15 it’s rare 
incident 7 unless it’s raining outside you’d right likely 18 20 time took 5 minute certainly there’s big chance would take 5 minute median time roughly 90 chance without getting complex algebra course It’s skewed Even really good estimating time task take doesn’t mean good estimating time project take Counter intuitively wrong math nerd data scientistsstatisticians reading right must already recognized tiny graph previous meme rightskewed normal distribution Let enlarge clarify median still higher probability true mean single task guess mode value highest probability you’d even wrong larger scale see thing go wrong “natural” guess based median maximizes probability guessing right however real number “event” occurs enough time always approach mean word similar task error accumulates Delay equation based hypothesis Programming task project usually pretty similar least grouped similar cluster equation also implies problem scalable want everything software project scalable problem certainly welcome use knowledge honest writing article didn’t mind intention give “instructions” based hypothesis It’s meant exploratory analysis concluding hypothesis that’s reader interpret however wish However know many disappointed openended conclusion here’s personally make It’s easier tell task X would take morelesssame time compared task tell exactly long would take comparing median work well comparing mean skewness curve roughly true similar task don’t recall record every single similar task math get mean couldn’t find data torture usually estimate inevitable error meanmedian percentage task time go updown depending comfortable I’m dev environment like languageframework 40 good debugging tool 30 Good IDE support 25 … etc started splitting sprint equally sized task create uniformity time estimation process allows benefit point 1 easy tell two task roughly equal time also make task even similar hypothesis applies even perfectly thing become predictable principle applied “test run” resource example X1 day Y1 developer Z1 uniform task completed easily solve X2 day given know Y2 developer available Z2 total task left Finally make sure follow don’t want miss upcoming article covering cause delayTags Software Engineering Software Development Engineering Time Management Project Management
3,919
Pinterest Trends: Insights into unstructured data
Stephanie Rogers | Pinterest engineer, Discovery What topics are Pinners interested in? When are they most engaged with these topics? How are they engaging with those topics? To answer these questions, we built an internal web service that visualizes unstructured data and helps us better understand timely trends we can resurface to Pinners through the product. The tool shows the most popular Pins, as well as time series trends of keywords in Pins and searches. One of the use cases for the tool is it helps us understand what topics Pinners are interested in, when that interest usually happens and how they are engaging with these topics. Specifically for when, we visualize keywords over time to more easily identify seasonality or trends of topics, but the most powerful insights come from understanding Pinner behavior through top Pins. For example, with a simple search of a holiday, like “Valentine’s day,” we can see that interest starts to rise about two months before February 14. But interest in the keyword wasn’t enough; we wanted to determine when one should start promoting different types of products. We saw that male Pinners were looking at products towards the beginning of the peak. These were forward-thinking individuals, looking for gifts that would have to be preordered. Approximately 2–3 days before the holiday, male Pinners were primarily looking at DIY crafts and baked goods, things that didn’t require much time or could bought at the convenience store the night before. And finally, on the day of Valentine’s Day, we saw a lot of humorous memes around being lonely. We were able to find these engagement trends in a matter of seconds. Male Pinning trends leading up to Valentine’s Day January 2015 — Products Early February 2015 — DIY & Baked Goods February 14, 2015 — Lonely Memes Motivation A core part of any solution for keyword trends is being able to perform full-text search over attributes. While MapReduce is good for querying structured content around Pins, it’s slow when answering queries that need full-text search. ElasticSearch, on the other hand, provides a distributed, full-text search engine. By indexing the unstructured data around Pins (such as description, title and interest) with ElasticSearch, we produced a tool that processes full-text queries in real-time and visualizes trends and related Pins in a user-friendly way. At a high level, the tool offers a keyword search over Pin descriptions and search queries to: Find the top N Pins or search queries with the given keyword Show and compare time series trends, including the volume of repins and searches daily Additionally, the tool filters keyword volume by various segments including location, gender, interests, categories and time. Implementation Extract all text associated with Pins Insert Pin text into ElasticSearch Index text data (ElasticSearch does this for us) Build a service to call ElasticSearch API on the application backend Visualize data on the application frontend using Flask and ReactJS Challenges Data Collection Gathering all of the text related to a Pin, including description, title, tagged interests, categories and timestamps, as well as Pinner demographics, requires complicated logic that can scale. We use a series of Hive and Cascading jobs (both MapReduce-based frameworks) to run a Pinball workflow nightly to extract and dump all text associated with the Pins from the previous day into our ElasticSearch clusters, which then indexes this text. 
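The nightly dump into daily indices can be pictured with a short, hedged sketch using the elasticsearch-py bulk helper (7.x-style client); the "pins-YYYY.MM.DD" index naming and the document fields are assumptions, not Pinterest's actual schema.

from datetime import date
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])
index_name = f"pins-{date.today():%Y.%m.%d}"     # one index per day

def pin_actions(pins):
    # Each action targets today's index so reads can later be scoped to a time range.
    for pin in pins:
        yield {
            "_index": index_name,
            "_id": pin["id"],
            "_source": {
                "description": pin["description"],
                "interests": pin.get("interests", []),
                "created_at": pin["created_at"],
            },
        }

pins = [{"id": "1", "description": "diy valentines card", "created_at": "2015-02-12"}]
helpers.bulk(es, pin_actions(pins))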
Design A major design decision was to use daily indexes (one index per day) since many high-volume time-series projects do this by default, including Logstash. Using these daily indexes had several benefits to the scalability and performance of our entire system, including: Increased flexibility in specifying time ranges. Faster reads as a result of well-distributed documents among various nodes. Minimized number of indexes involved in each query to avoid associated overhead. Bulk insertion or bulk reads through parallel calls. Easier recovery after failure. Easier tuning of properties of the cluster (# shards, replication, etc.). Smaller indices led to faster iteration on testing these immutable properties. Scalability Despite using big data technologies, we faced various scalability challenges with our workflows. There was simply too much data to run simple Hive queries, so we optimized our Hive query settings, switched to Cascading jobs and made trade offs on implementation choices. With more than 14GB of data daily and around two years worth of data stored thus far (around 10TB of data total), a bigger issue of scalability came from our ElasticSearch clusters. We have had to continuously scale our clusters by adding more nodes. Today we have 33 i2.2xlarge search nodes and 3 m3.2xlarge master nodes. Although replication isn’t needed to gain protection against data-loss since ES isn’t the primary persistent storage, we still decided to use a replication factor of 1 (meaning there are two copies of all data) to spread read-load across multiple servers. Performance After launching our prototype, we saw a lot of room for improvement in application performance, especially as the number of users grew. We switched from raw HTTP requests to the ElasticSearch Python client and optimized the ElasticSearch query code in our service, which led to a 2x performance increase. We also implemented server-side and client-side caching for the added benefit of instantaneous results for more frequent queries. The end result of all of these optimizations is sub two second queries for users. Outcomes The innovative tool has been a tremendous success. Usage is pervasive internally to derive Pinner insights, highlight popular content and even to detect spam. If you’re interested in working on large scale data processing and analytics challenges like this one, join our team! Acknowledgements: This project is a joint effort across multiple teams inside Pinterest. Various teams provided insightful feedback and suggestions. Major engineering contributors include Stephanie Rogers, Justin Mejorada-Pier, Chunyan Wang and the rest of the Data Engineering team.
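A keyword-trend query against those daily indices might look like the hedged sketch below (again a 7.x-style Python client; the field names, index wildcard, and segment filter are assumptions). On older clusters the date_histogram parameter is interval rather than calendar_interval.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

body = {
    "query": {
        "bool": {
            "must": [{"match": {"description": "valentines day"}}],
            "filter": [{"term": {"gender": "male"}}],          # optional segment filter
        }
    },
    "aggs": {
        "daily_volume": {
            "date_histogram": {"field": "created_at", "calendar_interval": "day"}
        }
    },
    "size": 10,    # also return a handful of top matching Pins
}

# The wildcard over daily indices limits the query to the weeks we care about.
resp = es.search(index="pins-2015.02.*", body=body)
for bucket in resp["aggregations"]["daily_volume"]["buckets"]:
    print(bucket["key_as_string"], bucket["doc_count"])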
https://medium.com/pinterest-engineering/pinterest-trends-insights-into-unstructured-data-b4dbb2c8fb63
['Pinterest Engineering']
2017-02-21 19:42:38.385000+00:00
['Elasticsearch', 'Analytics', 'Pinterest', 'Engineering', 'Big Data']
Title Pinterest Trends Insights unstructured dataContent Stephanie Rogers Pinterest engineer Discovery topic Pinners interested engaged topic engaging topic answer question built internal web service visualizes unstructured data help u better understand timely trend resurface Pinners product tool show popular Pins well time series trend keywords Pins search One use case tool help u understand topic Pinners interested interest usually happens engaging topic Specifically visualize keywords time easily identify seasonality trend topic powerful insight come understanding Pinner behavior top Pins example simple search holiday like “Valentine’s day” see interest start rise two month February 14 interest keyword wasn’t enough wanted determine one start promoting different type product saw male Pinners looking product towards beginning peak forwardthinking individual looking gift would preordered Approximately 2–3 day holiday male Pinners primarily looking DIY craft baked good thing didn’t require much time could bought convenience store night finally day Valentine’s Day saw lot humorous meme around lonely able find engagement trend matter second Male Pinning trend leading Valentine’s Day January 2015 — Products Early February 2015 — DIY Baked Goods February 14 2015 — Lonely Memes Motivation core part solution keyword trend able perform fulltext search attribute MapReduce good querying structured content around Pins it’s slow answering query need fulltext search ElasticSearch hand provides distributed fulltext search engine indexing unstructured data around Pins description title interest ElasticSearch produced tool process fulltext query realtime visualizes trend related Pins userfriendly way high level tool offer keyword search Pin description search query Find top N Pins search query given keyword Show compare time series trend including volume repins search daily Additionally tool filter keyword volume various segment including location gender interest category time Implementation Extract text associated Pins Insert Pin text ElasticSearch Index text data ElasticSearch u Build service call ElasticSearch API application backend Visualize data application frontend using Flask ReactJS Challenges Data Collection Gathering text related Pin including description title tagged interest category timestamps well Pinner demographic requires complicated logic scale use series Hive Cascading job MapReducebased framework run Pinball workflow nightly extract dump text associated Pins previous day ElasticSearch cluster index text Design major design decision use daily index one index per day since many highvolume timeseries project default including Logstash Using daily index several benefit scalability performance entire system including Increased flexibility specifying time range Faster read result welldistributed document among various node Minimized number index involved query avoid associated overhead Bulk insertion bulk read parallel call Easier recovery failure Easier tuning property cluster shard replication etc Smaller index led faster iteration testing immutable property Scalability Despite using big data technology faced various scalability challenge workflow simply much data run simple Hive query optimized Hive query setting switched Cascading job made trade offs implementation choice 14GB data daily around two year worth data stored thus far around 10TB data total bigger issue scalability came ElasticSearch cluster continuously scale cluster adding node Today 33 i22xlarge search node 3 m32xlarge 
master node Although replication isn’t needed gain protection dataloss since ES isn’t primary persistent storage still decided use replication factor 1 meaning two copy data spread readload across multiple server Performance launching prototype saw lot room improvement application performance especially number user grew switched raw HTTP request ElasticSearch Python client optimized ElasticSearch query code service led 2x performance increase also implemented serverside clientside caching added benefit instantaneous result frequent query end result optimization sub two second query user Outcomes innovative tool tremendous success Usage pervasive internally derive Pinner insight highlight popular content even detect spam you’re interested working large scale data processing analytics challenge like one join team Acknowledgements project joint effort across multiple team inside Pinterest Various team provided insightful feedback suggestion Major engineering contributor include Stephanie Rogers Justin MejoradaPier Chunyan Wang rest Data Engineering teamTags Elasticsearch Analytics Pinterest Engineering Big Data
3,920
Software Engineer of Tomorrow Manifesto
Collaborate to apply Artificial Intelligence Methods for developing Advantageous Conditions to increase Intensity and Efficiency of Software Engineering
https://medium.com/ai-for-software-engineering/software-engineer-of-tomorrow-manifesto-70a4033d38d1
['Aiforse Community']
2017-08-31 14:47:46.752000+00:00
['Software Engineering', 'Artificial Intelligence', 'Software Development', 'Data', 'Data Science']
Title Software Engineer Tomorrow ManifestoContent Collaborate apply Artificial Intelligence Methods developing Advantageous Conditions increase Intensity Efficiency Software Engineering FollowTags Software Engineering Artificial Intelligence Software Development Data Data Science
3,921
Cómo clasificar obras de arte por estilo en 7 líneas de código
https://medium.com/metadatos/c%C3%B3mo-clasificar-obras-de-arte-por-estilo-en-7-l%C3%ADneas-de-c%C3%B3digo-335b3d11fc43
['Jaime Durán']
2019-06-06 00:39:14.165000+00:00
['Data Science', 'Computer Vision', 'Español', 'Neural Networks', 'Artificial Intelligence']
Title Cómo clasificar obras de arte por estilo en 7 líneas de códigoContent Get newsletter signing create Medium account don’t already one Review Privacy Policy information privacy practice Check inbox Medium sent email complete subscriptionTags Data Science Computer Vision Español Neural Networks Artificial Intelligence
3,922
A Brief Introduction to Supervised Learning
Supervised learning is the most common subbranch of machine learning today. Typically, new machine learning practitioners will begin their journey with supervised learning algorithms. Therefore, the first of this three post series will be about supervised learning. Supervised machine learning algorithms are designed to learn by example. The name “supervised” learning originates from the idea that training this type of algorithm is like having a teacher supervise the whole process. When training a supervised learning algorithm, the training data will consist of inputs paired with the correct outputs. During training, the algorithm will search for patterns in the data that correlate with the desired outputs. After training, a supervised learning algorithm will take in new unseen inputs and will determine which label the new inputs will be classified as based on prior training data. The objective of a supervised learning model is to predict the correct label for newly presented input data. At its most basic form, a supervised learning algorithm can be written simply as: Where Y is the predicted output that is determined by a mapping function that assigns a class to an input value x. The function used to connect input features to a predicted output is created by the machine learning model during training. Supervised learning can be split into two subcategories: Classification and regression. Classification During training, a classification algorithm will be given data points with an assigned category. The job of a classification algorithm is to then take an input value and assign it a class, or category, that it fits into based on the training data provided. The most common example of classification is determining if an email is spam or not. With two classes to choose from (spam, or not spam), this problem is called a binary classification problem. The algorithm will be given training data with emails that are both spam and not spam. The model will find the features within the data that correlate to either class and create the mapping function mentioned earlier: Y=f(x). Then, when provided with an unseen email, the model will use this function to determine whether or not the email is spam. Classification problems can be solved with a numerous amount of algorithms. Whichever algorithm you choose to use depends on the data and the situation. Here are a few popular classification algorithms: Linear Classifiers Support Vector Machines Decision Trees K-Nearest Neighbor Random Forest Regression Regression is a predictive statistical process where the model attempts to find the important relationship between dependent and independent variables. The goal of a regression algorithm is to predict a continuous number such as sales, income, and test scores. The equation for basic linear regression can be written as so: Where x[i] is the feature(s) for the data and where w[i] and b are parameters which are developed during training. For simple linear regression models with only one feature in the data, the formula looks like this: Where w is the slope, x is the single feature and b is the y-intercept. Familiar? For simple regression problems such as this, the models predictions are represented by the line of best fit. For models using two features, the plane will be used. Finally, for a model using more than two features, a hyperplane will be used. Imagine we want to determine a student’s test grade based on how many hours they studied the week of the test. 
Let’s say the plotted data with a line of best fit looks like this: There is a clear positive correlation between hours studied (independent variable) and the student’s final test score (dependent variable). A line of best fit can be drawn through the data points to show the model’s predictions when given a new input. Say we wanted to know how well a student would do with five hours of studying. We can use the line of best fit to predict the test score based on other students’ performances. There are many different types of regression algorithms. The three most common are listed below: Linear Regression Logistic Regression Polynomial Regression Simple Regression Example First we will import the needed libraries and then create a random dataset with an increasing output. We can then place our line of best fit onto the plot along with all of the data points. We will then print out the slope and intercept of the regression model. print("Slope: ", reg.coef_[0]) print("Intercept:", reg.intercept_) Output: Slope: 65.54726684409927 Intercept: -1.8464500230055103 In middle school, we all learned that the equation for a straight line is y = mx + b. We can now create a function called “predict” that will multiply the slope (w) with the new input (x). This function will also use the intercept (b) to return an output value. After creating the function, we can predict the output values when x = 3 and when x = -1.5. Predict y For 3: 194.7953505092923 Predict y For -1.5: -100.16735028915441 Now let’s plot the original data points with the line of best fit. We can then add the new points that we predicted (colored red). As expected, they fall on the line of best fit. Conclusion Supervised learning is the simplest subcategory of machine learning and serves as an introduction to machine learning for many machine learning practitioners. Supervised learning is the most commonly used form of machine learning, and it has proven to be an excellent tool in many fields. This post was part one of a three-part series. Part two will cover unsupervised learning.
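The code blocks from the original walkthrough did not survive extraction, so here is a minimal sketch of the same workflow. It assumes scikit-learn, NumPy and Matplotlib are available; the randomly generated dataset (and therefore the exact slope and intercept) is an illustrative assumption, not the author's original data.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Create a random dataset with an increasing output (assumed values, for illustration only)
rng = np.random.RandomState(0)
x = rng.rand(100, 1) * 5                      # a single feature
y = 65 * x.ravel() + rng.randn(100) * 20      # roughly linear output, with noise

# Fit the regression model and print its parameters
reg = LinearRegression().fit(x, y)
print("Slope: ", reg.coef_[0])
print("Intercept:", reg.intercept_)

# Hand-written y = wx + b prediction, as described in the article
def predict(new_x):
    return reg.coef_[0] * new_x + reg.intercept_

print("Predict y For 3:", predict(3))
print("Predict y For -1.5:", predict(-1.5))

# Plot the data, the line of best fit, and the two predicted points in red
xs = np.sort(x, axis=0)
plt.scatter(x, y)
plt.plot(xs, reg.predict(xs), color="green")
plt.scatter([3, -1.5], [predict(3), predict(-1.5)], color="red")
plt.show()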
https://towardsdatascience.com/a-brief-introduction-to-supervised-learning-54a3e3932590
['Aidan Wilson']
2019-10-01 04:36:45.128000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Data Visualization']
Title Brief Introduction Supervised LearningContent Supervised learning common subbranch machine learning today Typically new machine learning practitioner begin journey supervised learning algorithm Therefore first three post series supervised learning Supervised machine learning algorithm designed learn example name “supervised” learning originates idea training type algorithm like teacher supervise whole process training supervised learning algorithm training data consist input paired correct output training algorithm search pattern data correlate desired output training supervised learning algorithm take new unseen input determine label new input classified based prior training data objective supervised learning model predict correct label newly presented input data basic form supervised learning algorithm written simply predicted output determined mapping function assigns class input value x function used connect input feature predicted output created machine learning model training Supervised learning split two subcategories Classification regression Classification training classification algorithm given data point assigned category job classification algorithm take input value assign class category fit based training data provided common example classification determining email spam two class choose spam spam problem called binary classification problem algorithm given training data email spam spam model find feature within data correlate either class create mapping function mentioned earlier Yfx provided unseen email model use function determine whether email spam Classification problem solved numerous amount algorithm Whichever algorithm choose use depends data situation popular classification algorithm Linear Classifiers Support Vector Machines Decision Trees KNearest Neighbor Random Forest Regression Regression predictive statistical process model attempt find important relationship dependent independent variable goal regression algorithm predict continuous number sale income test score equation basic linear regression written xi feature data wi b parameter developed training simple linear regression model one feature data formula look like w slope x single feature b yintercept Familiar simple regression problem model prediction represented line best fit model using two feature plane used Finally model using two feature hyperplane used Imagine want determine student’s test grade based many hour studied week test Lets say plotted data line best fit look like clear positive correlation hour studied independent variable student’s final test score dependent variable line best fit drawn data point show model prediction given new input Say wanted know well student would five hour studying use line best fit predict test score based student’s performance many different type regression algorithm three common listed Linear Regression Logistic Regression Polynomial Regression Simple Regression Example First import needed library create random dataset increasing output place line best fit onto plot along data point print slope intercept regression model printSlope regcoef0 printIntercept regintercept Output Slope 6554726684409927 Intercept 18464500230055103 middle school learned equation linear line mx b create function called “predict” multiply slope w new input x function also use intercept b return output value creating function predict output value x 3 x 15 Predict 3 1947953505092923 Predict 15 10016735028915441 let’s plot original data point line best fit add new point predicted 
colored red expected fall line best fit Conclusion Supervised learning simplest subcategory machine learning serf introduction machine learning many machine learning practitioner Supervised learning commonly used form machine learning proven excellent tool many field post part one three part series Part two cover unsupervised learningTags Machine Learning Artificial Intelligence AI Data Science Data Visualization
3,923
Serious back door Vulnerabilities spotted in TikTok
Serious back door Vulnerabilities spotted in TikTok The security flaws were identified by the cybersecurity firm Check Point, and TikTok claims to have fixed them TikTok has broken all barriers of popularity, reaching 1.5 billion global users in just over two and a half years. The immense growth can be gauged from the fact that the app is available in 150 markets and used in 75 languages globally. Even more important is the niche that it serves — Generation Z, which uses the app to create short video clips — mostly lip-synced clips of 3 to 15 seconds and short looping videos of 3 to 60 seconds. Having achieved all these laurels, however, the application has recently come under fire from many quarters over potential risks identified within it. The cybersecurity firm Check Point pointed to multiple vulnerabilities that its researchers uncovered. Although the security firm made TikTok aware of these security flaws on November 20, 2019, and the latter claims to have addressed them by December 15, 2019, as confirmed by Check Point, the damage is done. The problems were brewing for TikTok even before the report of these vulnerabilities surfaced. With its strong Chinese connection (the parent company ByteDance is based in Beijing), the app was already under intense scrutiny in the United States. Although the decision by American authorities to scrutinize Chinese technology like TikTok was considered by some to be more of a trade-war by-product, that notion seems to have been quelled by the recent revelations.
https://medium.com/technicity/serious-back-door-vulnerabilities-spotted-in-tik-tok-e717167a1b80
['Faisal Khan']
2020-01-15 00:49:20.554000+00:00
['Privacy', 'Technology', 'Artificial Intelligence', 'Future', 'Cybersecurity']
Title Serious back door Vulnerabilities spotted TikTokContent Serious back door Vulnerabilities spotted TikTok security flaw identified cybersecurity firm Check Point company claim fixed TikTok broken barrier popularity achieving 15 billion global user two half year immense growth gauged fact app available 150 market used 75 language globally Even important niche serf — Generation Z utilizes app create short video clip — mostly lipsynced 3 15 second short looping video 3 60 second achieved laurel however application fire lot quarter potential risk identified within application recently Cybersecurity firm Check Point pointed multiple vulnerability researcher uncovered Although security firm made Tik Tok aware security flaw November 20 2019 latter claim addressed December 15 2019 confirmed Check Point — damage done problem brewing Tik Tok even report vulnerability surfaced strong Chinese connection — parent company ByteDance based Beijing app intense scrutiny United States Although decision American authority scrutinize Chinese technology like Tik Tok considered trade war byproduct notion seems quelled recent revelationsTags Privacy Technology Artificial Intelligence Future Cybersecurity
3,924
What Developers should know about Product Management
Life as a developer I started my career working in a startup as a developer. I had a really good manager. For 2 years, he taught me to write maintainable code, create a scalable architecture and write tests to ensure quality. It would have been the best job in the world... if it weren’t for product managers. Product managers were cold, terrible people. They did not care about quality, efficiency or maintenance. They just wanted us to finish a project as fast as possible. Photo by Steve Harvey on Unsplash I remember one of them skipped the design of a feature and gave it to us to deliver it faster… it was the ugliest feature I have ever seen. I really loved my job, but product managers brought all the bad: stress, deadlines, tech debt, etc. I hated writing low-quality code, so I decided to try a new company. Spoiler alert: it did not improve. Actually, it got really bad. I started to work in a team with two product managers. I am going to call them Bob and Alice. On a normal day, I would have a list of ten bugs that needed to be fixed. I would pick issue 1111. Then, Alice would come and ask me to finish issue 3003 first. Then we would have the standup, where Bob would say that ticket 6676 was the top priority… COULD THEY TALK TO EACH OTHER BEFORE TALKING TO US? That was when I moved to DevOps. Learning about business In DevOps, there are no product managers. So I went from having two to having zero! Awesome right? Best job ever, I got to write code and enjoy life! But with the new role, I got new responsibilities too: investigating what developers need, calculating budgets, designing new tools, creating and analyzing metrics, risk analysis… We had no product managers because we were the product managers! Developers were our customers and we were trying to release new value to them. This made me change my point of view about product managers. I want to tell you the story of two projects that taught me many lessons: The first story is when we released a new feature called Parallel Deployments. We put lots of effort and thought into it. Then we rolled it out as an optional feature… and people loved it. We got feedback and it was awesome. Everyone was using it, so it became the de-facto way to deploy. This taught me how satisfying it is to fix a problem that your customer is facing. It gave me a sense of self-realization. The second story is when we woke up one day and our testing cluster did not work. Most developers were blocked in their work: releases, deadlines, and testing were all paralysed. We needed a fast fix. We worked 6h straight (no lunch break) to create a new fully working environment (did I mention that it was in a different Cloud Platform?). It was a huge effort! We did some manual changes and ugly patches. I’m not proud of it… but we got it working! That was when I started to understand product managers and why sometimes having technical debt is not the worst thing. After that, I read some books about Product Management: Project to Product by Mik Kersten, Inspired by Marty Cagan. I also read some business books like The Lean Startup by Eric Ries. Now that I understand some of these business values, I don’t see product managers as awful monsters anymore. They are people responsible for bringing value to customers. The problem is that most companies have a wall between business and development that hides all this valuable information. As a developer, every ticket looks the same.
I don’t know what brings value to customers, so it’s difficult to make decisions between quality, efficiency, scalability and time spent. And I get pissed off if they ask me to release a low-quality feature because of an unreasonable deadline. I want to share with you some of the things that I learned about business, so you can help to break the wall between Engineering and Business and have more meaningful work. Business values These are the most important things that I learned from these business books and my own experience: Why do we work Success is not delivering a feature; success is learning how to solve the customer’s problem — Eric Ries A product is bigger than engineering. It starts when someone thinks about that feature. Then prioritization, planning, and design are needed. Depending on the company, many more steps may be part of this value stream. Probably, when you see a new feature ticket on your board, it has already been on many other teams’ boards for months. In engineering, we make trade-offs between efficiency, quality, time spent and maintainability every day. To make the best decisions, we should aim to understand the main parts of this value stream and the customers’ needs. The problem is that business people and engineers use different languages. As an engineer, I like facts: how fast I fix a bug (maintainability), coverage in my tests (quality), how fast I add a new feature (tech debt), error rate (quality), availability of my system (stability),… But I have no idea about business metrics! I just see priorities that have no explanations. In Mik Kersten’s book called Project to Product, he defines a Flow Framework to correlate business results (Value, Cost, Quality and Happiness) with development metrics (Velocity, Efficiency, Time and Load). This not only helps developers make better decisions, it also helps business people define a better strategy for the company. In his book, Mik Kersten also discusses the importance of traceability on a Value Stream Network. Software development looks like an interconnected network of teams where each of them adds value to the product. The problem is when work arrives at the team more like a dump than a traceable story (which customer needs the feature, when it was designed, which work has been done by which teams,…). How should we work Our highest priority is to satisfy the customer through early and continuous delivery of valuable software — First Principle of the Agile Manifesto No one knows about the future. The best thing we can do is to add a small change, get feedback and LEARN. This improves the team’s capacity to adapt and will allow it to adjust to market trends quicker than the competition. Small changes are useful everywhere: as a Developer, it is better to add small pieces of code that can be easily reviewed and tested. As a DevOps, it’s better to test a new tool with a small set of developers first and get feedback before using it everywhere. As a Product Manager, it helps to test hypotheses about the users. I once released a feature that took a month to build, just to see that no customer used it. The quicker we can test whether it makes sense to build a feature, the better. This is called an MVP (minimum viable product) and it’s used to reduce time wasted on unneeded features. Who is responsible for the business Specialization allows us to handle ever-growing complexity, but the benefits of specialization can only be fully realized if the silos that it creates can be connected effectively.
— Mik Kersten People run companies. They define culture and are responsible for every decision made. That is why communication is highly important. In his book The Five Dysfunctions of a Team, Patrick M. Lencioni shows a pyramid of what makes a team dysfunctional. I believe these principles can be applied to fix communication problems: they can be applied to something as small as a single team, to collaboration between teams, or even between two companies. From bottom to top, these dysfunctions are: Absence of TRUST. Teammates need to be vulnerable with one another. It is vital to have confidence among team members to know their peers’ intentions are good and that there is no reason to be protective. Fear of CONFLICT. Conflict helps growth, producing the best possible solution in the shortest period of time. Shedding light on disagreements and forcing members to work through them is key to resolving issues. Lack of COMMITMENT. Sometimes it feels that a difficult decision gets delayed all the time. This is due to a lack of clarity and lack of buy-in. You should keep clear that a decision is better than no decision. Even a wrong decision can help us continue and learn from our mistakes. Avoidance of ACCOUNTABILITY. Team members should be willing to call their peers on performance or behaviours that may hurt the team. If there is a specific process or requirement, everyone in the team should enforce it and ask others to follow it. Here, regular reviews and team rewards can help to achieve it. Inattention to RESULTS. This is when we avoid team status to focus on individual status. Teamwork is more important than superheroes. When I see a single person is responsible for the efficiency of a team... I get scared. I have seen this many times: there are no discussions or communication, as this person takes all the decisions alone. To avoid it, there should be a public declaration of results, and leaders should show that they do not value anything more than results. Thanks for reading! Maybe you are like me and have had bad experiences with Product Managers. If so, just try to talk to them and ask for metrics and information about customers. I have to say that the product managers I talked about were good at their job, but we did not know how to communicate. Leave a comment and applaud if you liked this blog post. You can also write to me on my Twitter account @Marvalcam1
https://medium.com/hacking-talent/what-developers-should-know-about-product-management-90333f5354eb
['Maria Valcam']
2019-08-16 12:09:50.824000+00:00
['Software Engineering', 'Engineering', 'Business', 'Agile', 'Product Management']
Title Developers know Product ManagementContent Life developer started career working startup developer really good manager 2 year taught write maintainable code create scalable architecture write test ensure quality would best job world wasn’t product manager Product manager cold terrible people care quality efficiency maintenance wanted u finish project fast possible Photo Steve Harvey Unsplash remember one skipped design feature gave u deliver faster… ugliest feature ever seen really loved job Product manager brought bad stress deadline tech debt etc hated writing lowquality code decided try new company Spoiler alert improve Actually got really bad started work team two product manager going call Bob Alice normal day would list ten bug needed fixed would pick issue 1111 Alice would come ask finish issue 3003 first would standup Bob would say ticket 6676 top priority… COULD TALK TALKING US moved DevOps Learning business DevOps product manager went two zero Awesome right Best job ever got write code enjoy life new role got new responsibility investigate Developers need calculate budget design new tool create analyze metric risk analysis… product manager product manager Developers customer trying release new value made change point view product manager want tell story two project taught many lesson first story released new feature called Parallel Deployments put lot effort thought rolled optional feature… people loved got feedback awesome Everyone using became defacto way deploy taught satisfying fix problem customer facing gave sense selfrealization second story woke one day testing cluster work Developers blocked work release deadline testing paralysed needed fast fix worked 6h straight lunch break create new full working environment mention different Cloud Platform huge effort manual change ugly patch I’m proud it… got working started understand Product Managers time technical debt worst read book Product Management Project Product Mik Kersten Inspired Martin Cagan also read business book like Lean Startup Eric Ries understand business value don’t see Product Managers awful monster anymore people responsible bring value customer problem company wall business development hide valuable information developer every ticket look don’t know brings value customer it’s difficult make decision quality efficiency scalability time spent get pissed ask release lowquality feature unreasonable deadline want share thing learned business help break wall Engineering Business meaningful work Business value important thing learned business book experience work Success delivering feature success learning solve customer’s problem — Eric Ries Product bigger engineering start someone think feature prioritization planning design needed Depending company many step may part value stream Probably see new feature ticket board already many others teams’ board month engineering make tradeoff efficiency quality time spent maintainability every day take best decision aim understand main part value stream customers’ need problem business people engineer use different language engineer like fact fast fix bug maintainability coverage test quality fast add new feature tech debt error rate quality availability systemstability… idea business metric see priority explanation Mik Kersten’s book called Project Product defines Flow Framework correlate business result Value Cost Quality Happiness development metric Velocity Efficiency Time Load also help developer take better decision also help business people define better 
strategy company book Mik Kersten also discus importance traceability Value Stream Network Software development look like interconnected network team add value product problem work arrives team like dump traceable story customer need feature designed work done teams… work highest priority satisfy customer early continuous delivery valuable software — First Principle Agile Manifesto one know future best thing add small change get feedback LEARN improves capacity adaptability team allow adapting market trend quicker competition Small change useful everywhere Developer better add small piece code easily reviewed tested DevOps it’s better test new tool small set developer first get feedback using everywhere Product Managers help test hypothesis user released feature took month build see customer used quickest test make sense build feature better called MVP minimum viable product it’s used reduce time wasted unneeded feature responsible business Specialization allows u handle evergrowing complexity benefit specialization fully realized silo creates connected effectively — Mik Kersten People run company define culture responsible every desition made communication highly important book called Five Dysfunctions Team Patrick Lencioni show pyramid make team dysfunctional believe principle applied fix communication problem used something small within team collaboration team even two company bottom top dysfunction Absence TRUST Teammates need vulnerable one another vital confidence among team member know peers’ intention good reason protective Teammates need vulnerable one another vital confidence among team member know peers’ intention good reason protective Fear CONFLICT Conflict help growth producing best possible solution shortest period time Adding light disagreement force member work key resolve issue Conflict help growth producing best possible solution shortest period time Adding light disagreement force member work key resolve issue Lack COMMITMENT time feel difficult decision get delayed time due lack clarity lack buyin keep clear decision better decision Even wrong decision help u continue learn mistake time feel difficult decision get delayed time due lack clarity lack buyin keep clear decision better decision Even wrong decision help u continue learn mistake Avoidance ACCOUNTABILITY Team member willing call peer performance behaviour may hurt team specific process requirement everyone team enforce ask others follow regular review team reward help achieve Team member willing call peer performance behaviour may hurt team specific process requirement everyone team enforce ask others follow regular review team reward help achieve Inattention RESULTS avoid team status focus individual status Teamwork important superheroes see single person responsible efficiency team get scared seen many time discussion communication person take decision alone avoid Public Declaration Results leader show value anything result Thanks reading Maybe like bad experience Product Managers try talk ask metric information customer say product manager talked good job know communicate Leave comment applause liked blog post also write twitter account Marvalcam1Tags Software Engineering Engineering Business Agile Product Management
3,925
Economics of Big Data and Privacy: Exploring Netflix and Facebook
Photo by Carlos Muza on Unsplash It is the 21st century, technology is on the rise, and the internet has succeeded paper texts. We live in a world that is interconnected. In this fast-paced, growing world, data is being rapidly created every second. The use of algorithms and statistical measures allows us to graph each movement in a way that is acceptable for predictive modeling. Big data refers to huge amounts of data accumulated over time through the use of internet services. Traditional econometrics methods fail when analyzing such huge amounts of data, and we require a host of new algorithms that can crunch this data and provide insights (Harding et al, 2018). Big data can refer to all the human activity performed over the last decade, and it is growing exponentially every second. Being interconnected has its benefits and drawbacks, one of the major drawbacks being privacy. Big data does not just encompass the analysis of data; it also consists of data collection. Data collection is one of the ways in which personal user data can become compromised (Kshetri, 2014). Predictive modeling will not only help us improve our services, but it will also have a deep impact on industries like healthcare and food. The accumulation of data cannot be stopped, and we must be well aware of the benefits and drawbacks of the holy grail of technology: data. This paper aims to examine all the facts and case studies related to data, and how it affects our modern life. Data or information can be referred to as the accumulation of past behavior. Information can also be categorized as a sort of data. Something that we took for granted a few years back has now boomed in this decade due to the large amount of human activity and computing technologies. In the 21st century, we are surrounded by data that can be composed of two types: discrete and continuous. Discrete data consists of entries that can be used for classification, whereas continuous data refers to entries that can be used for regression. Man and data are inseparable, as data is the flow of information. Data has been a very important part of human existence from time immemorial. Once civilizations were established, they could not function without data. The Indus Valley Civilization had seals (a type of coin) in which data was tabulated. The Incas, another very old civilization, had the same methods for data collection. As civilization progressed, man's methods of data tabulation also developed. They graduated into coins that replaced the barter system. There were also numbers, which have been used since biblical times. The seafarers also had a system of data that helped in their trade. Historically, data collection was an important aspect of life in ancient times, and around the 1950s, due to the rise of computing systems, data could be presented in the format of bits and bytes. In the 21st century, data has been regarded as the new oil. Privacy is the state of freedom from intrusion and the ability of an individual to keep information to themselves. The person should have the freedom to share the information whenever they require it. In the 21st century, due to the boom in data and computing, companies have tried to exploit it using sophisticated algorithms and techniques known as data mining. Due to limited government enforcement of privacy laws, companies have exploited this data to gain more and more users by invading their privacy (Cate, 1997).
One reason for the disconnect between data and privacy is that many users are not aware of when their data is being collected (Acquisti et al, 2016). While we may consider data a valuable resource, we should be aware of how this data can be exploited by companies or politicians to attract a certain set of customers. Users can give out unintended personal information to these platforms in the form of text, images, preferences, and browsing time (Xu et al, 2014). These data collections pose a threat to humanity, and to rectify this, new techniques to perform data mining are being explored extensively, where the main aim is to study, analyze, and process data in such a way that privacy is maintained (Xu et al, 2014). In the 21st century, human-computer interface activity is at its peak. A lot of companies depend on the accumulation and processing of huge amounts of data (Oussous et al, 2018). Huge amounts of data, also known as big data, are a resource for a company’s research and development, as they help companies decide where to put the money and invest. The world economy has been changed into something called a data economy, which refers to an ecosystem where data is gathered, organized, and exchanged using big data algorithms. These days data can be huge, cluttered and unstructured; an example is when different clients have different accounts on the same platform, and to extract useful information the source algorithms have to first preprocess the data in a way that manages bias, outliers, and imbalances (Tummala et al, 2018). We are surrounded by data in such a way that services like YouTube experience new videos every 24 hours, with a rough estimate of 13 billion to 50 billion data parameters over a span of 5 years (Fosso Wamba et al, 2015). Harnessing human data to predict future behavior is a common strategy for companies. While YouTube is producing such huge amounts of data, people using the service are contributing back to it by storing their likes and dislikes in a “big database” maintained by YouTube. Big Data and business analytics are estimated to provide an annual revenue of about 150.8 billion dollars in the US (Tao et al, 2019). While these firms earn by providing users with a better interface using their data, some firms exploit data to influence a portion of individuals. They use computer algorithms to predict and transform user data into something usable; using data crunching and data mining techniques, they extract user data to sell or to influence. Facebook, a social network company, was recently involved with Cambridge Analytica, a data-mining firm that gathered data of Facebook users using loopholes. Community profiles were built upon this data, which was used to target customized ads. Due to this, Facebook was on a decline, as this was considered a massive breach of personal user data consisting of images, text, posts, and likes. This scandal played a key role in the 2016 US elections, and following it, the GDPR (General Data Protection Regulation) was established in the EU (Tao et al, 2019). It is estimated that companies use past data to build something called recommendation engines that can predict what sort of content a user wants to view. One such example is Netflix, which asks users to rate movies on a scale of 1 to 5 to build a personalized profile for the user.
For the Netflix recommendation engine, linear algebra, or to be more precise SVD (Singular Value Decomposition), was used to build a system that can predict what the user might like (Hallinan et al, 2014). To conclude, big data and privacy go hand in hand, as they are interconnected and interdependent. For a breach of privacy, one must have access to huge amounts of data, and to build these computing engines, we need large-scale distributed computing resources and techniques. We see how data became so popular and how, with the right technological tools and algorithms, companies were able to harness the predictive capabilities of the system. We also see how privacy is a big part of the data economy and how data collection methods seem to differ in this fast-paced growing world. We cannot stop the flow of data, but we can surely be aware of what is being collected. The way companies like Cambridge Analytica used loopholes in the Facebook platform to gather personal user data was surely a breach of privacy, due to which Facebook was called before Congress and its market share dropped drastically. We provide a logical flow of how data and privacy arose, and how, due to the huge amount of human activity, data came to be called “Big Data”. For the future direction of this research, we plan to analyze how we can control the flow of data by using technological techniques, and we plan to discuss the effect of racial bias in such huge amounts of data, more specifically how racial biases affect data mining algorithms (Obermeyer et al, 2019). References Harding, Matthew, and Jonathan Hersh. “Big Data in Economics.” IZA World of Labor, 2018, doi:10.15185/izawol.451. Kshetri, Nir. “Big Data's Impact on Privacy, Security and Consumer Welfare.” Telecommunications Policy, vol. 38, no. 11, 2014, pp. 1134–1145, doi:10.1016/j.telpol.2014.10.002. Data is the new oil. Header image. [accessed 2020 Jun 22]. https://spotlessdata.com/blog/data-new-oil Cate FH. Privacy in the information age. Washington, D.C.: Brookings Institution Press; 1997. Acquisti A, Taylor C, Wagman L. The Economics of Privacy. Journal of Economic Literature. 2016;54(2):442–492. Oussous A, Benjelloun F, Ait Lahcen A, Belfkih S. Big Data technologies: A survey. Journal of King Saud University — Computer and Information Sciences. 2018 [accessed 2020 Jun 22];30(4):431–448. Tummala Y, Kalluri D. A review on Data Mining & Big Data Analytics. International Journal of Engineering & Technology. 2018 [accessed 2020 Jun 22];7(4.24):92. Fosso Wamba S, Akter S, Edwards A, Chopin G, Gnanzou D. How ‘big data’ can make a big impact: Findings from a systematic review and a longitudinal case study. International Journal of Production Economics. 2015;165:234–246. Xu L, Jiang C, Wang J, Yuan J, Ren Y. Information Security in Big Data: Privacy and Data Mining. IEEE Access. 2014 [accessed 2020 Jun 22];2:1149–1176. Hallinan B, Striphas T. Recommended for you: The Netflix Prize and the production of algorithmic culture. 2014;18(1):117–137. Tao H, Bhuiyan M, Rahman M, Wang G, Wang T, Ahmed M, Li J. Economic perspective analysis of protecting big data security and privacy. Future Generation Computer Systems. 2019 [accessed 2020 Jun 23];98:660–671. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 [accessed 2020 Jun 23];366(6464):447–453.
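The essay names SVD as the linear-algebra tool behind Netflix-style recommendations but gives no concrete illustration. Below is a minimal, hypothetical sketch (not the Netflix Prize code) showing how a truncated SVD of a small user-by-movie rating matrix can estimate a missing rating; the matrix values and the rank k are assumptions made purely for illustration, and only NumPy is required.

import numpy as np

# Tiny user-by-movie rating matrix; 0.0 marks a rating we want to estimate (assumed data)
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 1.0],
    [1.0, 1.0, 5.0, 4.0],
    [1.0, 2.0, 4.0, 5.0],
])

# Truncated SVD: keep only the top-k singular values to form a low-rank approximation
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The reconstructed entry at the unknown position serves as the predicted rating
print("Predicted rating for user 0, movie 2:", round(approx[0, 2], 2))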
https://medium.com/towards-artificial-intelligence/economics-of-big-data-and-privacy-exploring-netflix-and-facebook-d7a2e9df05c8
['Aadit Kapoor']
2020-08-27 23:11:51.853000+00:00
['Privacy', 'Netflix', 'Software Development', 'Facebook', 'Big Data']
Title Economics Big Data Privacy Exploring Netflix FacebookContent Photo Carlos Muza Unsplash 21st century technology rise internet succeeded paper text live world interconnected fastpaced growing world data rapidly created every second use algorithm statistical measure allows u graph movement way acceptable predictive modeling Big data refers huge amount data accumulated time use internet service Traditional econometrics method fail analyzing huge amount data require host new algorithm crunch data provide insight Harding et al 2018 Big data referred human activity performed last decade exponentially growing every second interconnected benefit drawback one major drawback privacy Big data encompass analysis data also consists data collection Data collection way personal user data become compromised Kshetri 2014 Predictive modeling help u improve service would deep impact industry like healthcare food accumulation data cannot stopped must well aware benefit drawback holy grail technology data paper aim examine fact case study related data affect modern life Data information referred accumulation past behavior Information also categorized sort data Typically something took granted year back boomed decade due large amount human activity computing technology 21st century surrounded data composed two type discrete continuous Discrete data consists entry used classification whereas continuous data refers entry used regression Man Data inseparable flow information Data important part human existence time immemorial civilization established could function without data Indus Valley Civilization seal type coin data tabulated Incas another old civilization method data collection civilization progressed manmade data tabulation also developed graduated coin replaced barter system also number used since biblical time seafarer also system data helped trade Historically data collection important aspect life ancient time around 1950s due rise computing system data could presented format bit byte 21st century data regarded new oil Privacy state freedom intrusion ability individual information person freedom share information whenever require 21st century due boom data computing company tried exploit using sophisticated algorithm technique known data mining Due limited enforcement government privacy law company exploited data gain user invading privacy Cate 1997 One reason disconnection data privacy many user aware data collected Acquisti et al 2016 may consider data valuable resource aware data exploited company politician attract certain set customer Users give unintended personal information platform form text image preference browsing time Xu et al 2014 data collection pose threat humanity rectify new technique perform data mining explored extensively main aim study analyze process data way privacy maintained Xu et al 2014 21st century humancomputer interface activity peak lot company depend accumulation processing huge amount dataOussous et al 2018 Huge amount data also known big data resource company’s research development help company decide put money invest world economy changed something called data economy refers ecosystem data gathered organized exchanged using big data algorithm day data huge cluttered unstructured example different client different account platform extract useful information source algorithm first preprocess data way manages bias outlier imbalance Tummala et al 2018 surrounded data way service like YouTube experience new video every 24hours rough estimate 13 billion 50 billion 
data parameter span 5 year Fosso Wamba et al 2015 Harnessing human data predict future movement common strategy company game data Youtube producing huge amount data people using service contributing back service storing like dislike “big database” maintained YouTube Big Data Business analytics estimated provide annual revenue 1508 billion dollar US Tao et al 2019 firm earn providing user better interface using data firm exploit data influence portion individual use computer algorithm predict transform user data something usable using data crunching data mining technique extract user data sell influence Facebook social network company recently involved Cambridge Analytica datamining firm gathered data Facebook user using loophole Community profile built upon data used target customized ad Due Facebook decline considered massive data breach personal user data consisting image text post like scandal played key role US Elections 2016 following GDPR General Data Protection Regulation established EU Tao et al 2019 estimated company use past data build something called recommendation engine predict sort content user want view One example Netflix asks user rate movie scale 1 5 build personalized profile user Netflix recommendation engine linear algebra precise SVD Singular Value Decomposition used system predict user might like Hallinan et al 2014 Conclude Big data privacy go hand hand interconnected interdependent breach privacy one must access huge amount data build computing engine need large scale distributed computing resource technique see data became popular disruption right technological tool algorithm company able harness predictive capability system also see privacy big part data economy data collection method seem differ fastpaced growing world cannot stop flow data surely aware collected Companies like Cambridge Analytica used loophole Facebook Platform gather personal user data surely breach privacy due Facebook called upon Congress market share dropped drastically provide logical flow data privacy arose due huge amount human activity data seemed called “Big Data” future direction research plan analyze control flow data using technological technique plan discus effect racial bias huge amount data specifically racial bias affect data mining algorithm Obermeyer et al 2019 References Harding Matthew Jonathan Hersh “Big Data Economics” IZA World Labor 2018 doi1015185izawol451 Kshetri Nir “Big Data׳s Impact Privacy Security Consumer Welfare” Telecommunications Policy vol 38 11 2014 pp 1134–1145 doi101016jtelpol201410002 Data new oil Headerimage accessed 2020 Jun 22 httpsspotlessdatacomblogdatanewoil Cate FH Privacy information age Washington DC Brookings Institution Press 1997 Acquisti Taylor C Wagman L Economics Privacy Journal Economic Literature 2016542442–492 Oussous Benjelloun F Ait Lahcen Belfkih Big Data technology survey Journal King Saud University — Computer Information Sciences 2018 accessed 2020 Jun 22304431–448 Tummala Kalluri review Data Mining Big Data Analytics International Journal Engineering Technology 2018 accessed 2020 Jun 22742492 Fosso Wamba Akter Edwards Chopin G Gnanzou ‘big data’ make big impact Findings systematic review longitudinal case study International Journal Production Economics 2015165234–246 Xu L Jiang C Wang J Yuan J Ren Information Security Big Data Privacy Data Mining IEEE Access 2014 accessed 2020 Jun 2221149–1176 Hallinan B Striphas Recommended Netflix Prize production algorithmic culture 2014181117–137 Tao H Bhuiyan Rahman Wang G Wang Ahmed Li J 
Economic perspective analysis protecting big data security privacy Future Generation Computer Systems 2019 accessed 2020 Jun 2398660–671 Obermeyer Z Powers B Vogeli C Mullainathan Dissecting racial bias algorithm used manage health population Science 2019 accessed 2020 Jun 233666464447–453Tags Privacy Netflix Software Development Facebook Big Data
3,926
Matplotlib Cheat Sheet 📊
Making the bar graph horizontal is as easy as plt.barh( ). Let’s add one more attribute to our graphs in order to depict the amount of variance. Within your code add the following: variance = [2,4,3,2,4] plt.barh( sectors , sector_values , xerr = variance , color = 'blue') The xerr= argument allows us to indicate the amount of variance per sector value. If need be, yerr= is also an option. Next we will create a stacked bar graph. It may appear that there is a lot of code for this graph but try your best to go through it slowly and remember all the steps we took while creating every graph until now. sectors = ['Sec 1','Sec 2','Sec 3','Sec 4','Sec 5'] sector_values = [ 23 , 45 , 17 , 32 , 29 ] subsector_values = [ 20 , 40 , 20 , 30 , 30 ] index = np.arange(5) width = 0.30 plt.bar(index, sector_values, width, color = 'green', label = 'sector_values') plt.bar(index + width, subsector_values, width, color = 'blue', label = 'subsector_values') plt.title('Horizontally Stacked Bars') plt.xlabel('Sectors') plt.ylabel('Sector Values') plt.xticks(index + width/2 , sectors) plt.legend(loc = 'best') plt.show() Without making much modification to our code we can stack our bar graphs one atop the other by indicating, for example, bottom = sector_values within the plt.bar() method of the plot that we want to be on top. Be sure to get rid of the index + width offset and any instance where it was used further down in our code. index = np.arange( 5 ) plt.bar( index , sector_values , width , color = 'green' , label = 'sector_values' ) plt.bar( index , subsector_values , width , color = 'blue' , label = 'subsector_values' , bottom = sector_values ) Next let’s create a pie chart. This is done easily by using the pie( ) method. We will start with a simple chart then add modifying attributes to make it more unique. Again don’t be overwhelmed with the amount of code that this chart requires. plt.figure( figsize=( 15 , 5 ) ) hospital_dept = [ 'Dept A' , 'Dept B' , 'Dept C' , 'Dept D' , 'Dept E' ] dept_share = [ 20 , 25 , 15 , 10 , 20 ] Explode = [ 0 , 0.1 , 0 , 0 , 0 ] # explodes the orange section of our plot plt.pie( dept_share , explode = Explode , labels = hospital_dept , shadow = True , startangle = 45 ) plt.axis( 'equal' ) plt.legend( title = "List of Departments" , loc = "upper right" ) plt.show( ) Histograms are used to plot the frequency of score occurrences in a continuous dataset that has been divided into classes called bins. In order to create our dataset we are going to use the numpy function np.random.randn. This will generate data with the properties of a normal distribution curve. x = np.random.randn( 1000 ) plt.title( 'Histogram' ) plt.xlabel( 'Random Data' ) plt.ylabel( 'Frequency' ) plt.hist( x , 10 ) # plots our randomly generated x values into 10 bins plt.show( ) Finally let’s talk about scatter plots and 3D plotting. Scatter plots are very useful when dealing with a regression problem. In order to create our scatter plot we are going to create an arbitrary set of height and weight data and plot them against each other. height = np.array( [ 192 , 142 , 187 , 149 , 153 , 193 , 155 , 178 , 191 , 177 , 182 , 179 , 185 , 158 , 158 ] ) weight = np.array( [ 90 , 71 , 66 , 75 , 79 , 60 , 98 , 96 , 68 , 67 , 40 , 68 , 63 , 74 , 63 ] ) plt.xlim( 140 , 200 ) plt.ylim( 60 , 100 ) plt.scatter( height , weight ) plt.title( 'Scatter Plot' ) plt.xlabel( 'Height' ) plt.ylabel( 'Weight' ) plt.show( ) This same scatterplot can also be visualized in 3D.
To do this we are going to first import the mplot3d module as follows: from mpl_toolkits import mplot3d Next we need to create the variable ax that is set equal to our projection type. ax = plt.axes( projection = '3d') The following code is fairly repetitive of what you’ve seen before. ax = plt.axes( projection = '3d' ) ax.scatter3D( height , weight ) ax.set_xlabel( 'Height' ) ax.set_ylabel( 'Weight' ) plt.show( ) Well if you’ve made it this far you should be proud of yourself. We’ve only gone through the basics of what matplotlib is capable of but, as you’ve noticed, there is a bit of a trend in how plots are created and executed. Check out the Matplotlib Sample Plots page in order to see the many more plots Matplotlib is capable of. Next we will discuss Seaborn.
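For readers who want to run the scatter and 3D examples end to end, here is a compact, self-contained version of the snippets above. The only assumption is a reasonably recent Matplotlib, where the mplot3d toolkit ships with the library.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d  # bundled with Matplotlib; needed on older versions to enable the '3d' projection

height = np.array([192, 142, 187, 149, 153, 193, 155, 178, 191, 177, 182, 179, 185, 158, 158])
weight = np.array([90, 71, 66, 75, 79, 60, 98, 96, 68, 67, 40, 68, 63, 74, 63])

# 2D scatter plot of height against weight
plt.scatter(height, weight)
plt.title('Scatter Plot')
plt.xlabel('Height')
plt.ylabel('Weight')
plt.show()

# Same data on a 3D axis (the z coordinate defaults to 0 when omitted)
ax = plt.axes(projection='3d')
ax.scatter3D(height, weight)
ax.set_xlabel('Height')
ax.set_ylabel('Weight')
plt.show()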
https://medium.com/analytics-vidhya/matplotlib-cheat-sheet-51716f26061a
['Mulbah Kallen']
2019-10-10 05:11:00.196000+00:00
['Data Visualization', 'Python', 'Matplotlib']
Title Matplotlib Cheat Sheet 📊Content Making bar graph horizontal easy pltbarh Let’s add one attribute graph order depict amount variance Within code add following code varience 24324 pltbarh sector sectorvalues xerr varience color ‘blue’ xerr allows u indicate amount variance per sector value need yerr also option Next create stacked bar graph may appear lot code graph try best go slowly remember step took creating every graph sector ‘Sec 1’’Sec 2’Sec 3’Sec 4’Sec 5 sectorvalues 23 45 17 32 29 subsectorvalues 20 40 20 30 30 index nparange5 width 030 pltbarindex sectorvalues width color ‘green’ label ‘sectorvalues’ pltbarindex width subsectorvalueswidth color ‘blue’ label ‘subsectorvalues’ plttitle‘Horizontally Stacked Bars’ pltxlabel‘Sectors’ pltylabel‘Sector Values’ pltxticksindex width2 sector pltlegendloc ‘best’ pltshow Without making much modification code stack bar graph one atop indicating example bottom sectorvalues within pltbar method plot want top sure get rid width variable instance called code index nparange 5 pltbar index sectorvalues width color ‘green’ label ‘sectorvalues’ pltbar index subsectorvalues width color ‘blue’ label ‘subsectorvalues’ bottom sectorvalues Next let’s create pie chart done easily using pie method start simple chart add modifying attribute make unique don’t overwhelmed amount code chart requires pltfigure figsize 15 5 hospitaldept ‘Dept A’ ’Dept B’ ’Dept C’ ’Dept D’ ’Dept E’ deptshare 20 25 15 10 20 Explode 0 01 0 0 0 — — Explodes Orange Section Plot pltpie deptshare explode Explode label hospitaldept shadow ’true’ startangle 45 pltaxis ‘equal’ pltlegend title “List Departmments” loc”upper right” pltshow Histograms used plot frequency score occurrence continuous dataset divided class called bin order create dataset going use numpy function nprandomrandn generate data property normal distribution curve x nprandomrandn 1000 plttitle ‘Histogram’ pltxlabel ‘Random Data’ pltylabel ‘Frequency’ plthist x 10 — — — plot randomly generated x value 10 bin pltshow Finally let talk scatter plot 3D plotting Scatter plot vert useful dealing regression problem order create scatter plot going create arbitrary set height weight data plot height nparray 192 142 187 149 153 193 155 178 191 177 182 179 185 158 158 weight nparray 90 71 66 75 79 60 98 96 68 67 40 68 63 74 63 pltxlim 140 200 pltylim 60 100 pltscatter height weight plttitle ‘Scatter Plot’ pltxlabel ‘Height’ pltylabel ‘Weight’ pltshow scatterplot also visualized 3D going first import mplot3d module follows mpltoolkits import mplot3d Next need create variable ax set equal projection type ax pltaxes projection ‘3d’ following code fairly repetitive you’ve seen ax pltaxes projection ‘3d’ axscatter3D height weight axsetxlabel ‘Height’ axsetylabel ‘Weight’ pltshow Well you’ve made far proud We’ve gone basic matplotlib capable you’ve noticed bit trend plot created executed Check Matplotlib Sample Plots page order see many plot Matplotlib capable Next discus SeabornTags Data Visualization Python Matplotlib
3,927
What 4 Years Of Programming Taught Me About Writing Typed JavaScript
What 4 Years Of Programming Taught Me About Writing Typed JavaScript Untyped JavaScript, TypeScript, Flow, and PropTypes: Which one should you use? Photo by Kevin Canlas on Unsplash Types of types Mostly, statically typed languages are criticized for restricting developers. On the other hand, they’re loved for bringing early information about errors, documenting components (such as modules, methods, etc.), and now other more advanced functionalities such as auto-completion. A preliminary study from 2009¹ on untyped languages gives us some reference on exactly those pros and cons. Today, another type of language is also widely used: dynamically typed languages. A dynamically typed language differs from its counterpart by bringing types, but at runtime. This way, you can have far more freedom than strongly typed languages while keeping their advantages. From our list, we have a single dynamically typed language: TypeScript. And that’s not completely exact: TS could also be called a softly typed language, one that sits between a dynamically and a statically typed language. As this is not today’s subject, curious readers can have a look at the following article: Why am I saying there is only one? Of course, JavaScript is considered untyped (or weakly typed), while PropTypes is a package allowing type-checking at runtime. Flow is … neither. In practice, it looks a lot like TypeScript, and both are often compared. Inside your IDE and CLI, they are similar but their engines differ: TypeScript is a language while Flow is called a “static type checker”. In Flow, you write “annotations” to set types. At compile time, those annotations must be removed, which creates JavaScript files without any superset. It has been an argument in favor of Flow: performance. Both solutions have almost the same functionalities, but Flow removes any overhead that TypeScript has, once compiled. My experience I started my career in the JavaScript & front-end world in 2016 with Angular2 (and TypeScript). Before this front-end project, I mainly worked on C#, Java, and a bit of vanilla JavaScript. I hated it. To me, vanilla JavaScript had no structure, no types, no object-oriented concepts; it was HELL. With more experience and practice with Angular2 (shipped with TypeScript) and then React, I almost started to enjoy it. That’s when I seriously considered a way to type JavaScript: I used TypeScript for a short time (which I didn’t master at the time) but I was back to untyped JavaScript with React. React with untyped JavaScript was working not so badly, but I felt like a piece was missing: After a few weeks on a React project, code could easily get messy and difficult to understand, even more so for newcomers. We needed a lot of time to read a piece of code. The number of errors after build was too high. Among the practices needed to avoid these problems, my experience told me typed JavaScript was the top priority. After some research, here I was, looking at TypeScript, Flow, and PropTypes. PropTypes allowed me to type check my props but that was not enough. What about types outside of my React components? Can I validate that my app is type-safe as part of my CI pipeline? Can I validate it as a commit hook? Well, you can find some ways to validate types during your tests² and use them outside of components³, but PropTypes was not designed with that in mind. It was intended to give you information in real time (at runtime). As I was not convinced by PropTypes, I was left with two choices: TypeScript and Flow.
Functionality-wise they looked the same, with some good points on Flow’s side: No overhead meant better performance. Backed by Facebook, as well as React. Those were enough to make a difference, and that’s why I started using Flow with React. Flow Around 2 years. That’s the time I’ve been using Flow and only Flow. It worked with React / React Native like a charm, boosted our team productivity, was a great tool as part of our CI pipeline, and generally helped us deliver. And then, we hit a wall. Once our projects started getting bigger, the flow server struggled to start and was getting slower and slower. At some point we were simply unable to run it, and our CI running time and its cost skyrocketed. Unfortunately, at the same time, I started getting into other projects. Among them: an embedded project with Arduino using johnny-five⁴. That’s when I realized the second weakness of Flow: its community support. That’s how typing systems in JavaScript work: you write your module with one solution and write an independent type definition for the other. TypeScript had a lot of support; I think that even today, the number of unsupported libraries I have used is less than 10. Flow was different: during the time I used it in the React ecosystem, there were a few libraries without support, but I could still work without them. Outside of this ecosystem, support was a mess; I was unable to find even one library working with Flow. I found solutions to transform TypeScript definitions to Flow, but in the end, they didn’t work. At some point, I wondered if I would really have to use TypeScript, even for React / RN projects. How inconsistent is that: using a technology backed by a company such as Facebook, and replacing its typing system with one from Microsoft? Moreover, support for TypeScript in React must be a mess, right? TypeScript I (re)discovered a whole new world. After getting back to TypeScript on side projects with Johnny-Five or React, I couldn’t believe how much I loved it. Not only did my performance and library support problems disappear, but I also got to love its syntax. Since 2016 and a lot of updates, I could not find anything to reproach TypeScript for. React / React Native support was perfect, along with the whole ecosystem such as eslint with prettier, jest, etc. My next move was simple: as I had previously written templates/boilerplates for my team using Flow, they were quickly replaced with the equivalent in TypeScript. Since then, we’ve only been using TypeScript. Flow was great for a long time, but its performance and support problems got the best of it, while TypeScript evolved to be amazing: my choice was simple. Conclusion If you were to ask me what to choose between untyped JavaScript, PropTypes, Flow, and TypeScript for your project, I would tell you the following: Untyped JavaScript can be a good choice if you work on a project for less than a week, throw it away afterwards, and don’t wish to learn any type solution. PropTypes is a great tool if, for any reason, you cannot use Flow or TypeScript, but it is not enough for a big project. Flow was great; I have not tried it again since, but I would not bet on it. TypeScript is a great solution, in my own opinion the best out there, and required for any JavaScript project with high stakes. Regarding the use of a typing library, I have heard some objections that I would like to answer here. Loss of performance: today’s solutions are great, and the infinitely small loss of performance you could sustain would be balanced by the structure that typed code brings.
Lack of knowledge in TypeScript / Flow: if your project has high stakes and you don't want to use typed JavaScript because your developers don't know it, the project is bound to fail. Change your team or train them; that's the only way. Loss of time: I'll quote Clean Code: A Handbook of Agile Software Craftsmanship⁵ by Robert C. Martin⁶: "Indeed, the ratio of time spent reading vs. writing is well over 10:1." If anything, typed JavaScript will help you read code, thus increasing your productivity rather than decreasing it. If you have a different opinion about typed JavaScript or experience with Flow, feel free to contact me so I can link yours. Thanks for reading. TL;DR TypeScript.
https://medium.com/javascript-in-plain-english/what-4-years-of-programming-taught-me-about-writing-typed-javascript-2bac38b45f79
['Teddy Morin']
2020-12-04 17:41:19.730000+00:00
['Programming', 'Software Development', 'JavaScript', 'Typescript', 'React']
Title 4 Years Programming Taught Writing Typed JavaScriptContent 4 Years Programming Taught Writing Typed JavaScript Untyped JavaScript TypeScript Flow PropTypes one use Photo Kevin Canlas Unsplash Types type Mostly statically typed language criticized restricting developer hand they’re loved bringing early information error documenting component module method etc advanced functionality autocompletion preliminary study 2009¹ untyped language give u reference exactly pro con Today another type language also widely used dynamically typed language dynamically typed language different counterpart bringing type runtime way far freedom strongly typed language keeping advantage list single dynamically typed language TypeScript that’s completely exact TS could also called soft typed language dynamically statically typed language today’s subject curious reader look following article saying one course JavaScript considered untyped weakly typed PropTypes package allowing typechecking runtime Flow … neither practice look lot like TypeScript often compared Inside IDE CLI similar engine differ TypeScript language Flow called “static type checker” Flow write “annotations” set type compile time annotation must removed creates JavaScript file without superset argument favor Flow performance solution almost functionality Flow remove overhead TypeScript compiled experience started career JavaScript frontend world 2016 Angular2 TypeScript frontend project mainly worked C Java bit Vanilla JavaScript hated vanilla JavaScript structure type objectoriented concept HELL experience practice Angular2 shipped TypeScript React almost started enjoy That’s seriously considered way type JavaScript used TypeScript short time didn’t master time back untyped JavaScript React React untyped JavaScript working notsobad felt like piece missing week React project code could easily get messy difficult understand even newcomer needed lot time read piece code number error build high Among practice needed avoid problem experience told typed JavaScript top priority research looking TypeScript Flow PropTypes PropsTypes allowed type check prop enough type outside React component validate app typesafe part CI pipeline validate commit hook Well find way validate type tests² use outside component³ PropTypes designed mind intended give information realtime runtime convinced PropTypes left two choice TypeScript Flow Functionalitywise looked good point Flow side overhead meant better performance Backed Facebook well React enough make difference that’s started using Flow React Flow Around 2 year That’s time I’ve using Flow Flow worked React React Native like charm boosted team productivity great tool part CI pipeline generally helped u deliver hit wall project started getting bigger flow server struggled start getting slower slower point simply unable run CI running time cost skyrocketed Unfortunately time started getting project Among embedded project Arduino using johnnyfive⁴ That’s realized second weakness Flow community support That’s typing system JavaScript work write module one solution write independent type definition TypeScript lot support think even today number library without support used le 10 Flow different time used React ecosystem library without support could still work without Outside ecosystem support mess unable find even one library working Flow found solution transform TypeScript definition Flow end didn’t work point wondered would really use TypeScript even React RN project inconsistent using technology backed company 
Facebook replacing typing system one Microsoft Moreover support TypeScript React must mess right TypeScript rediscovered whole new world getting back TypeScript side project JohnnyFive React couldn’t believe loved performance library support problem disappeared got love syntax Since 2016 lot update could find anything reproach TypeScript React React Native support perfect whole ecosystem eslint prettier jest etc next move simple previously wrote templatesboilerplates team using Flow quickly replaced equivalent TypeScript Since we’ve using TypeScript Flow great long time performance support problem got best TypeScript later evolved amazing choice simple Conclusion ask chose untyped JavaScript PropTypes Flow TypeScript project would tell following Untyped JavaScript good choice work project le week throw don’t wish learn type solution PropTypes great tool reason cannot use Flow TypeScript enough big project Flow great since try would bet TypeScript great solution opinion best required JavaScript project high stake Regarding using typing library experienced objection would like answer Loss performance today’s solution great infinitely small loss performance could sustain would balanced structure typed code brings Lack knowledge TypeScript Flow project high stake don’t want use typed JavaScript developer don’t know project bound fail Change team train that’s way Loss time I’ll quote Clean Code Handbook Agile Software Craftsmanship⁵ Robert C Martin⁶ “Indeed ratio time spent reading v writing well 101” one thing typed JavaScript help read code thus increasing productivity rather decreasing different opinion typed JavaScript experience Flow feel free contact link Thanks reading TLDR TypeScriptTags Programming Software Development JavaScript Typescript React
3,928
4 Science-Backed Ways to Get You Feeling Energetic
4 Science-Backed Ways to Get You Feeling Energetic These tactical approaches will improve concentration and alertness Photo by Mateus Campos Felipe on Unsplash Raise your hands if you struggle to get out of bed, even when you’ve technically gotten enough sleep, constantly begging your alarm to give you just 5 minutes, and you rely on several pints of coffee to get you through the morning — and probably all through the average working day? It’s common to feel tired in our fast-paced modern world. It’s never a new thing to find yourself running from one activity to another, even when you’ve planned out a day to gain balance, and soothe your soul. Whether it’s the emotional fatigue from all the weird things going on in the world, trying to juggle your passion and talent with the job you find yourself in, or having your sleep routine thrown off by a change in schedule, it seems virtually everyone is struggling with morning tiredness and wondering how to get back their energy to make their mornings more reasonable. Tiredness is one of the UK’s top health complaints — figures from Healthspan show a worrying 97% of us claim we feel tired most of the time, and doctors’ records reveal 10% of people who book an appointment are looking for a cure for their tiredness. Before I proceed, I do want to mention that I’m not talking about conditions like Chronic Fatigue Syndrome and SEID, which affect several million people here in the US alone and are very hard to cure. What I am talking about is a general state of tiredness that affects many, many more people (both children and adults) and can be prevented by evaluating your habits and changing those that are draining your energy.
https://medium.com/skilluped/4-science-backed-ways-to-get-you-feeling-energetic-1e1e02113cbe
['Benjamin Ebuka']
2020-11-27 03:43:16.044000+00:00
['Health', 'Inspiration', 'Science', 'Life Lessons', 'Self Improvement']
Title 4 ScienceBacked Ways Get Feeling EnergeticContent 4 ScienceBacked Ways Get Feeling Energetic tactical approach improve concentration alertness Photo Mateus Campos Felipe Unsplash Raise hand struggle get bed even you’ve technically gotten enough sleep constantly begging alarm give 5 minute rely several pint coffee get morning — probably average working day It’s common feel tired fastpaced modern world It’s never new thing find running one activity another even you’ve planned day gain balance soothe soul Whether it’s emotional fatigue weird thing going world trying juggle passion talent job find sleep routine thrown change schedule seems virtually everyone struggling morning tiredness wondering get back energy make morning reasonable Tiredness one UK’s top health complaint — figure Healthspan show worrying 97 u claim feel tired time doctors’ record reveal 10 people book appointment looking cure tiredness proceed want mention I’m talking condition like Chronic Fatigue Syndrome SEID affect several million people US alone hard cure talking general state tiredness affect many many people child adult prevented evaluating habit changing draining energyTags Health Inspiration Science Life Lessons Self Improvement
3,929
Presenting Your Data
What is Data Visualization? Data visualization is the process of presenting data. It is how we communicate findings from data in visually clear, concise, and often aesthetic ways. A data visualization typically focuses on a specific dataset, aiming to communicate a relationship, trend, distribution, etc. or lack thereof among variables. Visualizations help us get a grasp of the holistic view of our dataset, one we cannot have if we simply eyeball the raw data. In short, humans need visuals! Why is it Important? At the core of it, a data visualization transforms many data points into a single story. And stories can be good or bad; ideally, a good data visualization — like a good story — holds the reader’s attention and presents information in a concise, easy-to-understand way, leaving the reader with something to take away. Data visualization allows for visual literacy in the data, meaning that it allows otherwise complex data to be visually processed and understood in simpler ways. Visualizations reduce the cognitive load required to understand a dataset, provide overviews of the data, and comprise a crucial part of conducting exploratory data analysis. Example of data visualization used in exploratory data analysis. Image credit: https://en.wikipedia.org/wiki/Exploratory_data_analysis To get a better sense of why data visualization is important, we can look into existing visualizations to get a sense of their communicative intents and whether they achieve that intent. Junk Charts is a blog for data visualization critiques, run by Kaiser Fung, who picks out data visualizations in media to evaluate/critique them and provide suggestions for improvement. Learning to weigh the pros and cons of a visualization can help give a sense of why we should take the time to produce strong data visualizations (which isn’t always easy). Clear data presentation goes hand in hand with good design, and not just design in the aesthetically pleasing visual sense. Good design is equivalent to clear communication, and it facilitates an end goal such that the viewers of the visualization can describe the relationships in the data. Data visualizations should contain communicative intent. So this brings into importance the choice of data visualization, data transformations, shapes, use of text and color, and more. Types of Data Visualization The Python Graph Gallery provides a comprehensive list and guide to different data visualization types and the information they communicate. The site splits data visualization types into categories of distribution, correlation, ranking, maps, and more. There are many types of visualizations, ranging from a simple line chart to a more complex parallel plot. The type of visualization makes a big difference on what is being communicated about the data. The best choice of visualization depends on the data itself and what relationships in the data we intend to explore. Take a heat map for example: this type of visualization would best be used if you want to understand the correlation between different values for multiple variables, for instance the correlation between average gas price and geographic location. Example of a heat map. Image credit: https://www.usatoday.com/story/travel/roadwarriorvoices/2015/01/10/use-this-us-gas-price-heat-map-to-design-cheapest-possible-road-trip/83204036/ On the other hand, a simple histogram can suffice for displaying the distribution of a single variable, such as the arrival time for a certain event. Example of a histogram. 
Image credit: https://en.wikipedia.org/wiki/Histogram. For more examples of current data visualization use cases, checkout The Atlas, a collection of charts and graphs used by Quartz. Visualization Tips Choosing a visualization type The best visualization for your data depends on how much data you have, what kind of data you have, what questions you are trying to answer from the data, and often there isn’t one best visualization. It would be helpful to play around with different visualization types because two visualizations on the same data can draw attention to different attributes of the data. This visualization picker allows you to customize what you’re exploring about the data, and filters out suggested visualizations for your specific goal, but keep in mind these are just suggestions for starter visualizations. When you have one variable (univariate) If it’s numeric (e.g. histogram, box plot), the visualization should display the distribution/dispersion of the data, mode, and outliers If it’s categorical (e.g. bar chart), the visualization should display frequency distribution and skew When you have 2 (or more) variables (bivariate) The better visualization type differs depending on whether the comparison is numeric to numeric (e.g. scatterplot), numeric to categorical (e.g. multiple histograms), or categorical to categorical (e.g. side-by-side bar plot) The right choice of visualization also depends on what question you are exploring about the data. See the below diagram for a loose guide to picking a visualization type. Diagram for choosing a data visualization type. Image credit: https://www.datapine.com/blog/how-to-choose-the-right-data-visualization-types/ Transforming the data Log transform is often applied to skewed data to bring it to a less skewed and more normal-shaped curve, making it easier to visually perceive the data and perform exploratory data analysis. When you have a lot of data, smoothing helps remove noise in the data to improve visibility of the general shape and important features of the data. Use of color For qualitative/categorical data, choose distinct colors to clearly separate the categories. For quantitative data, use gradients to show comparative differences in small to large values Use of text Incorporate text to add clarity and intention, avoid clutter. Add legends, labels, and captions to point out important features or conclusions Resources and Tutorials To get a more comprehensive understanding of data visualization and its purposes, checkout this guide on Developing Visualisation Literacy. For another overview of data visualization and walkthroughs on presenting data using R, checkout the book Data Visualization: A Practical Introduction by Kieran Healy (the draft manuscript is available online). Flowing Data covers data in everyday life and provides tutorials for doing data visualization in R. For those interested in visual storytelling, The Pudding is a digital publication that uses creative data visualizations to explain ideas in popular culture. Visualizing Data is an encyclopedia for data visualization, with insights into best practices, examples, interviews with experts, and more. If you’re looking to learn more coding for data visualization, this tutorial walks you through simple data visualization using matplotlib and Pandas, covering how to filter data, and how to plot a line plot, bar chart, and box plot. 
This Medium article does a good job of showcasing the use cases for bar charts, scatterplots, and pie charts (though we recommend against using pie charts because it’s hard to accurately compare values across pie charts — histograms/bar charts are generally better options).
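To make the plotting advice above concrete, here is a minimal sketch (not from the original tutorial) of the two univariate cases discussed, a histogram for a numeric variable and a bar chart for a categorical one, using pandas and matplotlib; the column names and values are invented purely for illustration.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical dataset: arrival delays (numeric) and travel class (categorical)
df = pd.DataFrame({
    "arrival_delay_min": [3, 7, 12, 5, 40, 8, 15, 22, 6, 9, 31, 4],
    "travel_class": ["economy", "economy", "business", "economy", "first",
                     "business", "economy", "economy", "first", "business",
                     "economy", "business"],
})

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Univariate numeric variable -> histogram shows distribution, skew, outliers
ax1.hist(df["arrival_delay_min"], bins=6, color="steelblue", edgecolor="black")
ax1.set_title("Distribution of arrival delays")
ax1.set_xlabel("Delay (minutes)")
ax1.set_ylabel("Frequency")

# Univariate categorical variable -> bar chart shows frequency per category
df["travel_class"].value_counts().plot(kind="bar", ax=ax2, color="darkorange")
ax2.set_title("Passengers per travel class")
ax2.set_xlabel("Travel class")
ax2.set_ylabel("Count")

plt.tight_layout()
plt.show()

Swapping the histogram for a box plot, or the bar chart for a side-by-side bar plot, follows the same pattern; the choice depends on which feature of the data (spread, mode, outliers, category frequencies) you want the reader to see first.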
https://medium.com/ds3ucsd/eyeballing-the-data-ea77437ff6db
['Emily Zhao']
2019-06-04 23:19:59.716000+00:00
['Visualization', 'Data Science', 'Matplotlib', 'Data Visualization', 'Data Analysis']
Title Presenting DataContent Data Visualization Data visualization process presenting data communicate finding data visually clear concise often aesthetic way data visualization typically focus specific dataset aiming communicate relationship trend distribution etc lack thereof among variable Visualizations help u get grasp holistic view dataset one cannot simply eyeball raw data short human need visuals Important core data visualization transforms many data point single story story good bad ideally good data visualization — like good story — hold reader’s attention present information concise easytounderstand way leaving reader something take away Data visualization allows visual literacy data meaning allows otherwise complex data visually processed understood simpler way Visualizations reduce cognitive load required understand dataset provide overview data comprise crucial part conducting exploratory data analysis Example data visualization used exploratory data analysis Image credit httpsenwikipediaorgwikiExploratorydataanalysis get better sense data visualization important look existing visualization get sense communicative intent whether achieve intent Junk Charts blog data visualization critique run Kaiser Fung pick data visualization medium evaluatecritique provide suggestion improvement Learning weigh pro con visualization help give sense take time produce strong data visualization isn’t always easy Clear data presentation go hand hand good design design aesthetically pleasing visual sense Good design equivalent clear communication facilitates end goal viewer visualization describe relationship data Data visualization contain communicative intent brings importance choice data visualization data transformation shape use text color Types Data Visualization Python Graph Gallery provides comprehensive list guide different data visualization type information communicate site split data visualization type category distribution correlation ranking map many type visualization ranging simple line chart complex parallel plot type visualization make big difference communicated data best choice visualization depends data relationship data intend explore Take heat map example type visualization would best used want understand correlation different value multiple variable instance correlation average gas price geographic location Example heat map Image credit httpswwwusatodaycomstorytravelroadwarriorvoices20150110usethisusgaspriceheatmaptodesigncheapestpossibleroadtrip83204036 hand simple histogram suffice displaying distribution single variable arrival time certain event Example histogram Image credit httpsenwikipediaorgwikiHistogram example current data visualization use case checkout Atlas collection chart graph used Quartz Visualization Tips Choosing visualization type best visualization data depends much data kind data question trying answer data often isn’t one best visualization would helpful play around different visualization type two visualization data draw attention different attribute data visualization picker allows customize you’re exploring data filter suggested visualization specific goal keep mind suggestion starter visualization one variable univariate it’s numeric eg histogram box plot visualization display distributiondispersion data mode outlier it’s categorical eg bar chart visualization display frequency distribution skew 2 variable bivariate better visualization type differs depending whether comparison numeric numeric eg scatterplot numeric categorical eg multiple 
histogram categorical categorical eg sidebyside bar plot right choice visualization also depends question exploring data See diagram loose guide picking visualization type Diagram choosing data visualization type Image credit httpswwwdatapinecombloghowtochoosetherightdatavisualizationtypes Transforming data Log transform often applied skewed data bring le skewed normalshaped curve making easier visually perceive data perform exploratory data analysis lot data smoothing help remove noise data improve visibility general shape important feature data Use color qualitativecategorical data choose distinct color clearly separate category quantitative data use gradient show comparative difference small large value Use text Incorporate text add clarity intention avoid clutter Add legend label caption point important feature conclusion Resources Tutorials get comprehensive understanding data visualization purpose checkout guide Developing Visualisation Literacy another overview data visualization walkthroughs presenting data using R checkout book Data Visualization Practical Introduction Kieran Healy draft manuscript available online Flowing Data cover data everyday life provides tutorial data visualization R interested visual storytelling Pudding digital publication us creative data visualization explain idea popular culture Visualizing Data encyclopedia data visualization insight best practice example interview expert you’re looking learn coding data visualization tutorial walk simple data visualization using matplotlib Pandas covering filter data plot line plot bar chart box plot Medium article good job showcasing use case bar chart scatterplots pie chart though recommend using pie chart it’s hard accurately compare value across pie chart — histogramsbar chart generally better optionsTags Visualization Data Science Matplotlib Data Visualization Data Analysis
3,930
Women and AI.
Statistics show only an estimated 22% of Artificial Intelligence professionals globally are female, and only 20% of all computer programmers are female. Find this revelation shocking? Well, so do I. Hi, my name is Malaika and I find the world of technology and computer science absolutely riveting. My fascination with technology was not only the driving factor in overcoming the challenges I faced; it also kept up the inquisitive and explorative attitude that fed my interest in the field of AI. Yet as a woman, I was shocked to find that in the AI-driven, automated world we are moving towards, the computer science field is still mostly dominated by men. I have always wanted to be part of the technology industry, and this contrast has only motivated me further to pursue my interests.
My interest in and love for computer science began at a very young age. My earliest childhood memories are of being surrounded by technology, and since then I've tried to gain knowledge in all the various fields that computer science has to offer. I've experimented with graphic design, explored the world of web and app development, and spent hours learning the basics of different programming languages. However, one field that captivated my interest and curiosity immensely was the world of Artificial Intelligence. I was introduced to AI while using a virtual assistant on my parent's phone, and this interaction ignited a ceaseless curiosity to learn more about it. Along with research done on my own time, I was given the opportunity to be part of the Inspirit AI Scholars program, which expanded my knowledge of the field. My instructors in the program were Raunak Bhattacharyya and Sharon Newman, and they introduced us to subjects such as natural language processing and neural networks, as well as solving real-life problems using AI. I learned how AI impacts almost every industry in the world and how we have the power to use it for good. For example, our instructors showed us how AI is used in medical diagnostics, as it helps spot signs of certain diseases in medical scans and makes accurate predictions about patients' future health. My teammates and I also worked on a project that aimed to classify tweets relating to various natural disasters into categories depending on what type of aid was required. I loved being a part of this program, and learning more about AI made me feel more alive and engaged than I ever had before.
In this three-part blog series I would like to cover the impact AI has on different industries, such as automotive, education, finance, and healthcare, and then further discuss the issues of privacy and security, as this will help introduce the basic concepts of AI and their importance to teenagers and young adults. By being exposed to cutting-edge technologies and creative coding in AI, as a woman, I would love to help bridge the gender gap and be part of the leading team of women in this exciting field. We all have conscious and unconscious biases when talking about women in technology, and I would like to use this blog to discuss the issue of ethics in the field of artificial intelligence, including gender and racial biases, as I believe Artificial Intelligence has the potential to not only overcome but also eradicate biases.
https://medium.com/carre4/women-and-ai-a8389ec6334c
['Malaika N']
2020-12-08 21:28:46.140000+00:00
['Artificial Intelligence', 'AI', 'Women In Ai', 'Women In Tech']
Title Women AIContent Statistics show estimated 22 Artificial Intelligence professional globally female 20 computer programmer female Find revelation shocking Well Hi name Malaika find world technology computer science absolutely riveting fascination technology driving factor overcome challenge faced also helped keep inquisitive explorative attitude invigorated fascination field AI woman shocked find AIdriven automated world moving towards computer science field still mostly dominated men always wanted part technology industry contrast motivated pursue interest interest love computer science began young age earliest childhood memory surrounded technology since I’ve tried gain knowledge various field computer science offer I’ve experimented graphic design explored world web app development spent hour learning basic different programming language However one field captivated interest curiosity immensely world Artificial Intelligence introduced AI using virtual assistant parent’s phone interaction ignited ceaseless curiosity learn AI Along research done time given opportunity part Inspirit AI Scholars program expanded knowledge field AI instructor program Raunak Bhattacharyya Sharon Newman introduced u subject natural language processing neural network well solving reallife problem using AI learned AI impact almost every industry world power use good example instructor showed u AI used medical diagnostics help spot sign certain disease medical scan make accurate prediction patients’ future health teamates also worked project project aimed classify tweet relating various natural disaster category depending type aid required loved part program felt alive engaged ever learning AI threepart blog series would like include impact AI different industry automobile education finance healthcare discus issue privacy security help introduce bring awareness teenager young adult basic concept AI importance exposed cutting edge technology creative coding AI woman would love bridge gender gap part leading team woman exciting field conscious unconscious bias talking woman technology would like use blog discus issue ethic field artificial intelligence including gender racial bias believe Artificial Intelligence potential overcome also eradicate biasesTags Artificial Intelligence AI Women Ai Women Tech
3,931
Tug of war.
The morning coffee that awakens the mind, weakens the foundations of the shrine, that is the body. That which is forceful against the wills of the mind, the mind that remains powerless as the limbs keep pushing on, as though chasing that sun until it reaches the horizon.
https://medium.com/weeds-wildflowers/tug-of-war-6c2b64191099
[]
2020-12-24 00:38:03.991000+00:00
['Struggle', 'Mental Health', 'Mindfulness', 'Creativity', 'Poetry']
Title Tug warContent morning coffee awakens mind weakens foundation shrine body forceful will mind mind remains powerless limb keep pushing though chasing sun reach horizonTags Struggle Mental Health Mindfulness Creativity Poetry
3,932
Anti Bar chart, Bar chart Club
Inspiration, knowledge, and anything about data science by Make-AI
https://medium.com/make-ai-data-stories/anti-bar-chart-bar-chart-club-56a2275b08aa
['Benedict Aryo']
2019-08-21 06:14:06.328000+00:00
['Data Science', 'Bar Chart', 'Data Visualization', 'Exploratory Data Analysis', 'Visualization']
Title Anti Bar chart Bar chart ClubContent Inspiration knowledge anything data science MakeAI FollowTags Data Science Bar Chart Data Visualization Exploratory Data Analysis Visualization
3,933
Radio #LatentVoices. 001 — A Glimpse of Another World.
Radio #LatentVoices. 001 — A Glimpse of Another World. AI-driven musical storytelling Eerie and fantastic at the same time: exploration of AI-driven creative tools is full of surprises and discoveries. It's like entering an Unsupervised Machine Dream — everything is new, unique, unexpected. For me, running JukeBox for the first time was an ethereal experience. Can you remember that scene from DEVS, a brilliant mini-series about a group of scientists who were guarding a secret? I don't want to spoil the show, just recall that experience of scanning another dimension — bit by bit, sound by sound, where pixelated obscurity becomes a clear vision… JukeBox provides you with a similarly overwhelming experience. My first soundscape was this one. I just ran it with the default settings. Sometimes the most usual can become the most surprising. I got this noisy soundtrack with somebody speaking: After several iterations and around 8 hours of work, the AI provided me with this clean sound: A poetic but unknown language. A man, speaking with silent bitterness about something, vanished. About life, about death. You cannot understand this language, but compassion and empathy inundate your soul. And then you hear heavenly musical instruments and a female voice, singing an ethnic song. You almost remember the transience of the nameless world you've experienced for a minute through an AI portal. But it's so familiar, full of past and missing future. And yet — it's all present. The world of the Unknown. Which you will never be able to recreate. Like a dream fading away.
https://medium.com/merzazine/radio-latentvoices-001-a-glimpse-of-another-world-796417f18b5c
['Vlad Alex', 'Merzmensch']
2020-12-14 22:12:45.071000+00:00
['Art', 'Artificial Intelligence', 'Latentvoices', 'Music']
Title Radio LatentVoices 001 — Glimpse Another WorldContent Radio LatentVoices 001 — Glimpse Another World AIdriven musical storytelling Eerie fantastic time Exploration AIdriven creative tool full surprise discovery It’s like entering Unsupervised Machine Dream — everything new unique unexpectable running JukeBox first time ethereal experience remember scene DEVS brilliant miniseries group scientist guarding secret don’t want spoil show experience scan another dimension — bit bit sound sound pixelated obscurity becomes clear vision… JukeBox provides similarly overwhelming experience first soundscape one I’ve run per default setting Sometimes usual become surprising I’ve got noisy soundtrack somebody speaking several iteration around 8 hour work AI provided clean sound poetic unknown language man speaking silent bitterness something vanished life death cannot understand language compassion empathy inundating soul hear heavenly musical instrument female voice singing ethnic song almost remember transience nameless world you’ve experienced minute AIportal it’s familiar full past missing future yet — it’s present world Unknown never able recreate Like dream fading awayTags Art Artificial Intelligence Latentvoices Music
3,934
Insights from our Synthetic Content and Deep Fake Technology Round Table
INSIGHTS GATHERED:
CHALLENGES:
Humans' ability to recognise deep fake pictures is very low, while the ability of an AI to detect them is fairly high. Facebook's Deepfake Detection Challenge, in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was run through Kaggle, a platform for coding contests that is owned by Google. The best model to emerge from the contest detected deep fakes from Facebook's collection just over 82% of the time. It was argued that, at the current level, we won't get to 90%. On the other hand, the percentage of deep fakes currently circulating on Facebook is in the single digits, and there are many other sources of misinformation.
We are easily fooled by video: our subconscious has already decided whether something is real or not before our conscious mind even starts processing the content.
Authenticity and security — A cat vs. mouse race: the generation of new deep fake technologies vs. the detection of deep fakes. A significant security challenge is also socially engineered cyber attacks, not only mainstream deep fakes. Increased accessibility of this technology to the mainstream public will significantly accelerate this race.
A definition problem: there are various perspectives on what a deep fake is, and sometimes there is also confusion with terminology such as fake news. Historically, facial reenactment & face-swapping were the main deep fake use cases. Now the term is used in a variety of situations, and other developments, such as voice synthesis, add more layers to fool our senses.
The last 10% is hard: many use cases break at a certain point, especially the more horizontal use cases where the technology is simply not sophisticated enough. For example, creating avatars/assistants for gaming, customer service and AdTech. Interactive use cases are typically challenging. The quality challenge in deep fake synthesis is no different from other AI capabilities: even if you look at image classification, which has been democratised until now, if you try to generalise it, it will break. If you want to take an AI solution into production, it needs to be systematically structured and trained. The same is the case with synthetic data generation: if you can narrow down the use case and know the practicalities of what you are trying to produce, then it will likely work.
OPPORTUNITIES:
Lower production costs with synthetic content are going to significantly accelerate high-quality media production. This in turn will enable a wide set of application areas and use cases for commercial application — even by individuals. Deep fakes are fuelling the start of the third evolutionary stage of media.
Top content: movies, videos and post alterations: individuals will be able to produce high-quality content, even movies, with very limited resources. There will likely be a whole market for virtual actors to be customized for any purpose. Even after the final video is produced, it could be altered for a different story with a new script.
Digital copies and avatars: individuals such as celebrities could scale their presence by addressing people in their local language or have their team write new scripts for presentations, talks or even commercials. Another use case is customized avatars that guide us in virtual worlds. Related to content creation, editing the video component of a virtual assistant is an interesting area.
Synthetic product placement and fashion: any products can be placed in media for more personal advertisement, essentially offering new marketing channels. Furthermore, clothing brands could use this technology for their advertisements and e-commerce sites.
Personalization or Anonymization: consumers can choose to personalize avatars in virtual (e.g. gaming) environments or swap their looks with alternative versions to stay anonymous.
Workplace of the future: immersive experiences and interactive engagement mechanisms will enhance the accessibility of video calls, moving a step closer to semi-virtual reality. Advancements in video synthesis help here: NVIDIA has recently shown a demo in which, via face-tracking with a few floating-point variables, a face can be reconstructed on the other side of a Zoom call, for example. If we push the boundaries here, we will be able to reach a 3-point video format that will allow us to look at people in 3D (for instance, getting a step closer towards holograms).
Identity verification/protection and blockchain: with a vast increase of use cases pinned on these technologies, it is ever more important to have protection over your digital identity and online reputation, especially for those whose authenticity is important to their digital reputation (politicians, influencers, etc.). Tools that let someone know whether their image has been stolen to create a deep fake are an appealing use case. A single shared source of truth (SSOT) is one way to approach this: one piece of information put on a blockchain gets replicated 10,000 times and stays there for eternity. Digital IP can be put on a blockchain to track the authenticity of any content. Any modification to a picture gets shared on the blockchain, hence we know whether something is real or not. Some people argue that this is the (only) most scalable and sustainable way to approach the identity problem.
https://medium.com/dataseries/insights-from-our-synthetic-content-and-deep-fake-technology-round-table-609489ca26d9
['Mike Reiner']
2020-12-18 13:49:51.268000+00:00
['AI', 'Future', 'Deepfakes', 'Synthetic Data']
Title Insights Synthetic Content Deep Fake Technology Round TableContent INSIGHTS GATHERED CHALLENGES ability recognise deep fake picture human low ability AI detect fairly high Facebook’s Deepfake Detection Challenge collaboration Microsoft Amazon Web Services Partnership AI run Kaggle platform coding contest owned Google best model emerge contest detected deep fake Facebook’s collection 82 time argued current level won’t get 90 hand percentage deep fake currently circulating Facebook single digit many source misinformation easily fooled video subconsciousness made decision whether something real conscious mind even start processing content Authenticity security — cat v mouse race generation new deep fake technology v detector deep fake significant security challenge also socially engineered cyber attack main stream deep fake Increased accessibility technology mainstream public significantly accelerate race definition problem various perspective deep fake sometimes also confusion terminology fake news Historically facial reenactment faceswapping main deep fake use case term used variety situation development example voice synthesis add layer fool sens last 10 hard Many use case break certain point especially horizontal use case technology simply sophisticated enough example creating avatarsassistants gaming customer service AdTech Interactive use case typically challenging quality deep fake synthesis compared AI capability different Even look image classification — democratised try generalise — break want take AI solution production need systematically structured trained case synthetic data generation narrow use case know practicality trying produce likely work OPPORTUNITIES Lower cost production synthetic content going significantly accelerate high quality medium production turn enable wide set application area use case commercial application — even individual Deep Fakes fuelling start third evolutionary stage medium Top Content Movies video post alteration Individuals able produce high quality content even movie limited resource likely whole market virtual actor customized purpose Even final video produced could altered different story new script Digital copy avatar Individuals celebrity could scale presence addressing people local language team write new script presentation talk even commercial Another use case customized avatar guide u virtual world Related content creation — editing video component virtual assistant interesting area Synthetic product placement fashion product placed medium personal advertisement essentially offer new marketing channel Furthermore clothing brand could use technology advertisement ecommerce site Personalization Anonymization Consumers choose personalize avatar virtual eg gaming environment swap look alternative version stay anonymous Workplace future Immersive experience interactive engagement mechanism enhance accessibility video call moving step closer semivirtual reality Advancement video synthesis NVIDIA recently shown demo via facetracking floatingpoint variable able reconstruct side Zoom call example push boundary able reach 3point video format allow u look people 3D format instance getting step closer towards hologram Identity verificationprotection blockchain vast increase usecases pinned technology ever important protection digital identity online reputation Especially who’s authenticity important digital reputation politician influencers etc Tools let someone know whether image stolen create deep fake appealing use case Single shared source 
truth SSOT one way approach Meaning one piece information put blockchain get replicated 10000 stay eternity Digital IP put blockchain track authenticity content modification picture get shared blockchain hence knowing something real people argue scalable sustainable way approach identity problemTags AI Future Deepfakes Synthetic Data
3,935
Learn AI with Free TPU Power — the ELI5 way
In this article, you'll learn how to use Google Colab for training a CNN on the MNIST dataset using Google's TPUs.
Hold up, what's a CNN?
In a regular Neural Network, you recognize patterns from labelled data ("supervised learning"), with a structure made of inputs, outputs, and neurons in between. Some of these are connected, but deciding manually which ones are connected doesn't work well for things like images, since the net doesn't understand how the pixels are related. A CNN, or Convolutional Neural Network, connects some of the neurons to pixels that are close together, to start out with some knowledge of how the pixels are related. This is a very high-level overview, but if you want to dig into the architecture, check out this guide.
What about MNIST?
MNIST is a dataset of handwritten digits, with a training set of 60,000 examples and a testing set of 10,000 examples. It's often used in beginner problems.
And what is Google Colab?
Google Colaboratory was an internal research tool for data science, which was released to the public with the goal of disseminating AI research and education. It offers free GPU and TPU usage (with limits, of course ;)).
Lastly, what is a TPU?
TPU stands for "Tensor Processing Unit", an alternative to CPUs (Central Processing Units) and GPUs (Graphics Processing Units) that is especially designed for calculating tensors. A tensor is, in essence, a multidimensional array (like a NumPy array) that you feed data into. The relationships in a neural network can be easily described and processed as tensors, so a TPU is very fast for this kind of work.
Training and Testing a Model on GPUs and TPUs
Sign up for Google Colaboratory. Open up this pre-made GPU vs TPU notebook (credits). When you open it up, the TPU backend should be enabled (if not, check Runtime -> Change runtime type -> Hardware Accelerator -> TPU). Run a "cell" using Shift-Enter / Command-Enter. Run all the cells in order, starting from the cell named "Download MNIST". If it's successful, the empty [] brackets should turn into [1], [2], [3], and so on. That's it! You may run into challenges if you're doing this long after I wrote the article, and cell [4] (training and testing) will take some time to run its 10 epochs.
Conclusion
Much of the barrier to entry in AI and data science used to be in the infrastructure, for instance getting the necessary compute to train large models. Nowadays, with tools like Google Colab, it really is as simple as opening and running a notebook in your browser, and not much different from using a Google Doc or spreadsheet.
What Should I Do With This Information?
Now you at least know how to run an AI model easily. If you want to practice on real-world challenges, head on over to bitgrit's competition platform, with new competitions regularly added. This will train your skills and act as a means to build up your portfolio. This article was written by Frederik Bussler, CEO at bitgrit. Join our data scientist community or our Telegram for insights and opportunities in data science.
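For readers who prefer to see the idea in code rather than only running the pre-made notebook, here is a rough sketch of training a small CNN on MNIST with a TPU in Colab. It is not the code from the linked notebook: it uses the current TensorFlow 2 Keras API (the 2019 notebook likely used older APIs), and the model architecture and hyperparameters are illustrative only.

import tensorflow as tf

# Connect to the Colab TPU runtime (this fails if no TPU is attached)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Load and normalize MNIST (60,000 training / 10,000 test images)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Build the CNN inside the strategy scope so its variables live on the TPU
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Large batch sizes generally keep the TPU cores busy
model.fit(x_train, y_train, epochs=10, batch_size=1024,
          validation_data=(x_test, y_test))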
https://medium.com/bitgrit-data-science-publication/learn-ai-with-free-tpu-power-the-eli5-way-4e5484ea0d08
[]
2019-03-16 11:29:59.619000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Google', 'Data Science']
Title Learn AI Free TPU Power — ELI5 wayContent article you’ll learn use Google Colab training CNN MNIST dataset using Google’s TPUs Hold what’s CNN regular Neural Network recognize pattern labelled data “supervised learning” structure made input output neuron connected deciding one connected manually doesn’t work well thing like image since net doesn’t understand pixel related CNN Convolutional Neural Network connects neuron pixel close together start knowing pixel related high level overview want dig architecture check guide MNIST MNIST dataset handwritten digit training set 60000 example testing set 10000 example It’s often used beginner problem Google Colab Google Colaboratory internal research tool data science released public goal dissemination AI research education offer free GPU TPU usage limit course Lastly TPU TPU stand “Tensor Processing Unit” alternative CPUs Central Processing Unit GPUs Graphics Processing Unit that’s especially designed calculating tensor tensor alternative multidimensional array like NumPy function feed data relationship neural network easily described processed tensor TPU fast kind work Training Testing Model GPUs TPUs Signup Google Colaboratory Open premade GPU v TPU notebook credit open TPU backend enabled check Runtime Change runtime type Hardware Accelerator TPU Run “cell” using ShiftEnter CommandEnter Run cell order starting cell named “Download MNIST” it’s successful empty bracket turn 1 2 3 That’s may run challenge you’re long wrote article cell 4 training testing take time run 10 epoch Conclusion Much barrier entry AI data science used infrastructure instance getting necessary compute train large model Nowadays tool like Google Colab really simple opening running notebook browser much different using Google Doc spreadsheet Information least know run AI model easily want practice realworld challenge head bitgrit’s competition platform new competition regularly added train skill act mean build portfolio article written Frederik Bussler CEO bitgrit Join data scientist community Telegram insight opportunity data scienceTags Machine Learning Artificial Intelligence Technology Google Data Science
3,936
Celery throttling — setting rate limit for queues
Writing some code
Well, let's write some code. Create the main.py file and set the basic settings:

from celery import Celery
from kombu import Queue

app = Celery('Test app', broker='amqp://guest@localhost//')

# 1 queue for tasks and 1 queue for tokens
app.conf.task_queues = [
    Queue('github'),
    # I limited the queue length to 2, so that tokens do not accumulate;
    # otherwise this could lead to a breakdown of our rate limit
    Queue('github_tokens', max_length=2)
]

# this task will play the role of our token;
# it will never be executed, we will just pull it as a message from the queue
@app.task
def token():
    return 1

# setting up a constant issue of our token
@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # we will issue 1 token per second,
    # which means the rate limit for the github queue is 60 tasks per minute
    sender.add_periodic_task(1.0, token.signature(queue='github_tokens'))

Do not forget to launch Rabbit; I prefer to do this with Docker:

docker run -d --rm --name rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

Now let's run celery beat, a special celery process that is always running and is responsible for scheduling periodic tasks.

celery -A main beat --loglevel=info

After that, messages will appear in the console once a second:

[2020-03-22 22:49:00,992: INFO/MainProcess] Scheduler: Sending due task main.token() (main.token)

Well, we have set up the issuing of tokens for our 'bucket'. Now all we have to do is learn how to pull tokens. Let's try to optimize the code that we wrote earlier for requests to GitHub. Add these lines to main.py:

# function for pulling tokens from the queue
def rate_limit(task, task_group):
    # acquiring a broker connection from the pool
    with task.app.connection_for_read() as conn:
        # getting a token
        msg = conn.default_channel.basic_get(task_group+'_tokens', no_ack=True)
        # received None - the queue is empty, no tokens
        if msg is None:
            # retry the task after 1 second
            task.retry(countdown=1)

# Added some prints for logging.
# I set max_retries=None, so that tasks will repeat until complete
@app.task(bind=True, max_retries=None)
def get_github_api1(self):
    rate_limit(self, 'github')
    print ('Called Api 1')

@app.task(bind=True, max_retries=None)
def get_github_api2(self):
    rate_limit(self, 'github')
    print ('Called Api 2')

Now let's check how it works. In addition to the beat process, add 8 workers:

celery -A main worker -c 8 -Q github

And create a separate little script to run these tasks; call it producer.py:

from main import get_github_api1, get_github_api2

tasks = [get_github_api1, get_github_api2]

for i in range(100):
    # launching tasks one by one
    tasks[i % 2].apply_async(queue='github')

Start it with python producer.py and look at the workers' logs:

[2020-03-23 13:04:15,017: WARNING/ForkPoolWorker-3] Called Api 2
[2020-03-23 13:04:16,053: WARNING/ForkPoolWorker-8] Called Api 2
[2020-03-23 13:04:17,112: WARNING/ForkPoolWorker-1] Called Api 2
[2020-03-23 13:04:18,187: WARNING/ForkPoolWorker-1] Called Api 1
... (96 more lines)

Despite the fact that we have 8 workers, tasks are executed approximately once per second. If there was no token at the time a task reached the worker, the task is rescheduled. Also, I think you have already noticed that in fact we throttle not a queue, but a logical group of tasks that can actually be located in different queues. Thus, our control becomes even more detailed and granular.
Putting it all together
Of course, the number of such task groups is not limited (only by the capabilities of the broker).
Putting the whole code together, expanding and 'beautifying' it:

from celery import Celery
from kombu import Queue
from queue import Empty
from functools import wraps

app = Celery('hello', broker='amqp://guest@localhost//')

task_queues = [
    Queue('github'),
    Queue('google')
]

# per minute rate
rate_limits = {
    'github': 60,
    'google': 100
}

# generating queues for all groups with limits that we defined in the dict above
task_queues += [Queue(name+'_tokens', max_length=2) for name, limit in rate_limits.items()]

app.conf.task_queues = task_queues

@app.task
def token():
    return 1

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # generating auto issuing of tokens for all limited groups
    for name, limit in rate_limits.items():
        sender.add_periodic_task(60 / limit, token.signature(queue=name+'_tokens'))

# I really like decorators ;)
def rate_limit(task_group):
    def decorator_func(func):
        @wraps(func)
        def function(self, *args, **kwargs):
            with self.app.connection_for_read() as conn:
                # Here I used another, higher-level method.
                # We are getting a complete queue interface, but in return
                # we lose some performance, because under the hood
                # there is additional work done
                with conn.SimpleQueue(task_group+'_tokens', no_ack=True, queue_opts={'max_length': 2}) as queue:
                    try:
                        # Another advantage is that we can use a blocking call.
                        # It can be more convenient than calling retry() all the time;
                        # however, it depends on the specific case
                        queue.get(block=True, timeout=5)
                        return func(self, *args, **kwargs)
                    except Empty:
                        self.retry(countdown=1)
        return function
    return decorator_func

# much more beautiful and readable with decorators, agree?
@app.task(bind=True, max_retries=None)
@rate_limit('github')
def get_github_api1(self):
    print ('Called github Api 1')

@app.task(bind=True, max_retries=None)
@rate_limit('github')
def get_github_api2(self):
    print ('Called github Api 2')

@app.task(bind=True, max_retries=None)
@rate_limit('google')
def query_google_api1(self):
    print ('Called Google Api 1')

@app.task(bind=True, max_retries=None)
@rate_limit('google')
def query_google_api2(self):
    print ('Called Google Api 2')

Thus, the total task calls of the google group will not exceed 100/min, and those of the github group will not exceed 60/min. Note that setting up such throttling took less than 50 lines of code. Is it possible to make it even simpler?
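Before moving on, here is a quick usage sketch (not part of the original article): assuming the module above is saved as main.py, tasks from both throttled groups could be enqueued like this.

from main import (get_github_api1, get_github_api2,
                  query_google_api1, query_google_api2)

github_tasks = [get_github_api1, get_github_api2]
google_tasks = [query_google_api1, query_google_api2]

for i in range(100):
    # each group is routed to its own queue; the token queues then cap
    # execution at roughly 60 tasks/min (github) and 100 tasks/min (google)
    github_tasks[i % 2].apply_async(queue='github')
    google_tasks[i % 2].apply_async(queue='google')

A worker consuming both queues could then be started with something like celery -A main worker -c 8 -Q github,google, alongside celery -A main beat for the token scheduler.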
https://medium.com/analytics-vidhya/celery-throttling-setting-rate-limit-for-queues-5b5bf16c73ce
['Magomed Aliev']
2020-05-10 15:54:26.235000+00:00
['Distributed Systems', 'Python', 'Software Development', 'Celery', 'Rabbitmq']
Title Celery throttling — setting rate limit queuesContent Writing code Well let’s write code Create mainpy file set basic setting celery import Celery kombu import Queue ​ app CeleryTest app brokeramqpguestlocalhost ​ 1 queue task 1 queue token appconftaskqueues Queuegithub limited queue length 2 token accumulate otherwise could lead breakdown rate limit Queuegithubtokens maxlength2 ​ task play role token never executed pull message queue apptask def token return 1 ​ setting constant issue token apponafterconfigureconnect def setupperiodictaskssender kwargs issue 1 token per second mean rate limit github queue 60 task per minute senderaddperiodictask10 tokensignaturequeuegithubtokens forget launch Rabbit prefer docker docker run rm name rabbit p 1567215672 p 56725672 rabbitmq3management let’s run celery beat special celery worker always launched responsible running periodic task celery main beat loglevelinfo message appear console second 20200322 224900992 INFOMainProcess Scheduler Sending due task maintoken maintoken Well set issue token ‘bucket’ learn pull token Let’s try optimize code wrote earlier request github Add line mainpy function pulling token queue def ratelimittask taskgroup acquiring broker connection pool taskappconnectionforread conn getting token msg conndefaultchannelbasicgettaskgrouptokens noackTrue received None queue empty token msg None repeat task 1 second taskretrycountdown1 ​ Added print logging set maxretriesNone task repeat complete apptaskbindTrue def getgithubapi1self maxretriesNone ratelimitself github print Called Api 1 ​ ​ apptaskbindTrue def getgithubapi2self maxretriesNone ratelimitself github print Called Api 2 let check work addition beat process add 8 worker celery main worker c 8 Q github create separate little script run task call producerpy main import getgithubapi1 getgithubapi2 ​ task getgithubapi1 getgithubapi2 ​ range100 launching task one one tasksi 2applyasyncqueuegithub Start python producerpy look log worker 20200323 130415017 WARNINGForkPoolWorker3 Called Api 2 20200323 130416053 WARNINGForkPoolWorker8 Called Api 2 20200323 130417112 WARNINGForkPoolWorker1 Called Api 2 20200323 130418187 WARNINGForkPoolWorker1 Called Api 1 96 line Despite fact 8 worker task executed approximately per second token time task reached worker task rescheduled Also think already noticed fact throttle queue logical group task actually located different queue Thus control becomes even detailed granular Putting together course number task group limited capability broker Putting whole code together expanding ‘beautifying’ celery import Celery kombu import Queue queue import Empty functools import wrap ​ app Celeryhello brokeramqpguestlocalhost ​ taskqueues Queuegithub Queuegoogle ​ per minute rate ratelimits github 60 google 100 ​ generating queue group limit defined dict taskqueues Queuenametokens maxlength2 name limit ratelimitsitems ​ appconftaskqueues taskqueues ​ apptask def token return 1 ​ apponafterconfigureconnect def setupperiodictaskssender kwargs generating auto issuing token lmited group name limit ratelimitsitems senderaddperiodictask60 limit tokensignaturequeuenametokens ​ really like decorator def ratelimittaskgroup def decoratorfuncfunc wrapsfunc def functionself args kwargs selfappconnectionforread conn used another higher level method getting complete queue interface return losing perfomance hood additional work done connSimpleQueuetaskgrouptokens noackTrue queueoptsmaxlength2 queue try Another advantage use blocking call convenient calling retry 
time However depends specific case queuegetblockTrue timeout5 return funcself args kwargs except Empty selfretrycountdown1 return function return decoratorfunc ​ much beautiful readable decorator agree apptaskbindTrue maxretriesNone ratelimitgithub def getgithubapi1self print Called github Api 1 ​ apptaskbindTrue maxretriesNone ratelimitgithub def getgithubapi2self print Called github Api 2 ​ apptaskbindTrue maxretriesNone ratelimitgoogle def querygoogleapi1self print Called Google Api 1 ​ apptaskbindTrue maxretriesNone ratelimitgoogle def querygoogleapi1self print Called Google Api 2 Thus total task call google group exceed 100min github group — 60min Note order set throttling took le 50 line code possible make even simplerTags Distributed Systems Python Software Development Celery Rabbitmq
3,937
AWS Glue Studio—No Spark Skills-No Problem
AWS Glue Studio—No Spark Skills-No Problem Easily create Spark ETL jobs using AWS Glue Studio — no Spark experience required Image by Gerd Altmann from Pixabay AWS Glue Studio was launched recently. With AWS Glue Studio you can use a GUI to create, manage and monitor ETL jobs without the need for Spark programming skills. Users can visually create an ETL job by defining its source/transform/destination nodes, which perform operations like fetching/saving data, joining datasets, selecting fields, filtering and so on. Once a user assembles the various nodes of the ETL job, AWS Glue Studio automatically generates the Spark code for you. AWS Glue Studio supports many different types of data sources, including S3, RDS, Kinesis and Kafka. Let us try to create a simple ETL job. This ETL job will use three data sets: Orders, Order Details and Products. The objective is to join these three data sets, select a few fields, and finally filter orders where the MRSP of the product is greater than $100. Finally, we want to save the results to S3. When we are done, the ETL job should visually look like this. Image by Author Let's start by downloading the data sets required for this tutorial and saving them in S3. $ git clone https://github.com/mkukreja1/blogs $ aws s3 mb s3://glue-studio make_bucket: glue-studio $ aws s3 cp blogs/glue-studio/orders.csv s3://glue-studio/data/orders/orders.csv upload: blogs/glue-studio/orders.csv to s3://glue-studio/data/orders/orders.csv $ aws s3 cp blogs/glue-studio/orderdetails.csv s3://glue-studio/data/orderdetails/orderdetails.csv upload: blogs/glue-studio/orderdetails.csv to s3://glue-studio/data/orderdetails/orderdetails.csv $ aws s3 cp blogs/glue-studio/products.csv s3://glue-studio/data/products/products.csv upload: blogs/glue-studio/products.csv to s3://glue-studio/data/products/products.csv Next, we catalog these files in the orders database using the Glue Crawler. $ aws glue create-database --database-input '{"Name":"orders"}' $ aws glue create-crawler --cli-input-json '{"Name": "orders","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/orders/"}]}}' $ aws glue start-crawler --name orders $ aws glue delete-crawler --name orders $ aws glue create-crawler --cli-input-json '{"Name": "orderdetails","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/orderdetails/"}]}}' $ aws glue start-crawler --name orderdetails $ aws glue delete-crawler --name orderdetails $ aws glue create-crawler --cli-input-json '{"Name": "products","Role": "arn:aws:iam::175908995626:role/glue-role","DatabaseName": "orders","Targets": {"S3Targets": [{"Path": "s3://glue-studio/data/products/"}]}}' $ aws glue start-crawler --name products $ aws glue delete-crawler --name products Using the AWS console, open the AWS Glue service and click on AWS Glue Studio in the left menu. Make sure you have Blank Graph selected. Click on Create. Image by Author Start by creating the first Transform Node-Fetch Orders Data. Image by Author Make sure that Fetch Orders Data points to the orders table catalogued in Glue previously.
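As a quick sanity check (assuming the AWS CLI is configured for the same account and region), you can confirm that the crawlers registered all three tables before wiring up the rest of the job: $ aws glue get-tables --database-name orders --query 'TableList[].Name' This should list the orders, orderdetails and products tables; if any are missing, re-run the corresponding crawler before continuing.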
Image by Author Using the same principles as above, create the Transform Node-Fetch OrderDetails Data as well as Fetch Products Data. Now we will create a Transform Node that will join Fetch Orders Data to Fetch OrderDetails Data. Image by Author Notice how the joining condition is defined between the two tables as below. Using the same principles, create a Transform Node that will join Join Orders to Fetch Products Data. Image by Author Since we want to select a subset of columns from the three tables, we can use the Select Fields Node. Image by Author Notice how you can check boxes for fields that should be included in the final result set. Image by Author Now we would like to filter the products whose MRSP is greater than $100. This can be achieved by creating a Filter Products MRSP>100 Node as below. Image by Author Notice how one or more filter conditions can be defined. Image by Author Finally, we want to save the result table to S3 in Parquet format. For this we create a Destination Node-Save Results. Image by Author Image by Author Use the Save button to save your ETL job. At this time you should be able to see that AWS Glue Studio has automatically generated the Spark code for you. Click on the Script menu to view the generated code (a rough sketch of what that script looks like is included at the end of this article). We are all set. Let's run the job using the Run button on the top right. Clicking on Run Details should show you the status of the running job. Once the job status changes to Succeeded you can go to S3 to check the final results of the job. Image by Author At this point there should be many Parquet files produced in the results folder. Image by Author You can check the contents of the files using the Apache Parquet Viewer. Image by Author I hope this article was helpful. AWS Glue Studio is covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
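For reference, here is a rough, hand-written sketch of the kind of PySpark script Glue Studio generates for a job like this one. It is not the literal generated code; the join keys, the field names (orderNumber, productCode, MSRP, spelled MRSP in the walkthrough above) and the results path are assumptions based on the sample CSVs used in this tutorial.
import sys
from awsglue.transforms import Join, SelectFields, Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
# Standard Glue job boilerplate
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
# Source nodes: the three tables catalogued by the crawlers
orders = glueContext.create_dynamic_frame.from_catalog(database="orders", table_name="orders")
orderdetails = glueContext.create_dynamic_frame.from_catalog(database="orders", table_name="orderdetails")
products = glueContext.create_dynamic_frame.from_catalog(database="orders", table_name="products")
# Join Orders to OrderDetails, then join the result to Products (keys assumed from the sample data)
joined = Join.apply(orders, orderdetails, 'orderNumber', 'orderNumber')
joined = Join.apply(joined, products, 'productCode', 'productCode')
# Select Fields node: keep only the columns of interest (this list is illustrative)
selected = SelectFields.apply(frame=joined, paths=["orderNumber", "orderDate", "productName", "quantityOrdered", "priceEach", "MSRP"])
# Filter node: keep products whose MSRP is greater than $100
filtered = Filter.apply(frame=selected, f=lambda row: float(row["MSRP"]) > 100)
# Destination node: write the results to S3 as Parquet (output path is an assumption)
glueContext.write_dynamic_frame.from_options(frame=filtered, connection_type="s3", connection_options={"path": "s3://glue-studio/results/"}, format="parquet")
job.commit()
The script actually produced by Glue Studio typically adds transformation_ctx arguments and node names taken from the visual graph, but the overall source/transform/destination flow is the same.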
https://towardsdatascience.com/aws-glue-studio-no-spark-skills-no-problem-b3204ed98aa4
['Manoj Kukreja']
2020-09-29 14:22:03.132000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'AWS', 'Data']
Title AWS Glue Studio—No Spark SkillsNo ProblemContent AWS Glue Studio—No Spark SkillsNo Problem Easily create Spark ETL job using AWS Glue Studio — Spark experience required Image Gerd Altmann Pixabay AWS Glue Studio launched recently AWS Glue Studio use GUI create manage monitor ETL job without need Spark programming skill Users may visually create ETL job visually defining sourcetransformdestination node ETL job perform operation like fetchingsaving data joining datasets selecting field filtering etc user assembles various node ETL job AWS Glue Studio automatically generates Spark Code AWS Glue Studio support many different type data source including S3 RDS Kinesis Kafka Let u try create simple ETL job ETL job use 3 data setsOrders Order Details Products objective Join three data set select field finally filter order MRSP product greater 100 Finally want save result S3 done ETL job visually look like Image Author Lets start downloading data set required tutorial Save data set S3 git clone httpsgithubcommkukreja1blogs aws s3 mb s3gluestudio makebucket gluestudio aws s3 cp blogsgluestudioorderscsv s3gluestudiodataordersorderscsv upload blogsgluestudioorderscsv s3gluestudiodataordersorderscsv aws s3 cp blogsgluestudioorderdetailscsv s3gluestudiodataorderdetailsorderdetailscsv upload blogsgluestudioorderdetailscsv s3gluestudiodataorderdetailsorderdetailscsv aws s3 cp blogsgluestudioproductscsv s3gluestudiodataproductsproductscsv upload blogsgluestudioproductscsv s3gluestudiodataproductsproductscsv save file S3 catalog order database using Glue Crawler aws glue createdatabase databaseinput Nameorders aws glue createcrawler cliinputjson Name ordersRole arnawsiam175908995626roleglueroleDatabaseName ordersTargets S3Targets Path s3gluestudiodataordersPath s3gluestudiodataorders aws glue startcrawler name order aws glue deletecrawler name order aws glue createcrawler cliinputjson Name orderdetailsRole arnawsiam175908995626roleglueroleDatabaseName ordersTargets S3Targets Path s3gluestudiodataorderdetailsPath s3gluestudiodataorderdetails aws glue startcrawler name orderdetails aws glue deletecrawler name orderdetails aws glue createcrawler cliinputjson Name productsRole arnawsiam175908995626roleglueroleDatabaseName ordersTargets S3Targets Path s3gluestudiodataproductsPath s3gluestudiodataproducts aws glue startcrawler name product aws glue deletecrawler name product Using AWS console open AWS Glue service click AWS Glue Studio using left menu Make sure Blank Graph selected Click Create Image Author Start creating first Transform NodeFetch Orders Data Image Author Make sure Fetch Orders Data point order table catalogued Glue previously Image Author Using principle create Transform NodeFetch OrderDetails Data well Fetch Products Data create Transform Node join Fetch Orders Data Fetch OrderDetails Data Image Author Notice joining condition defined two table Using principle create Transform Node join Join Orders Fetch Products Data Image Author Since want select subset column three table use Select Fields Node Image Author Notice check box fileds included final result set Image Author would like filter product whose MRSP greate 100 achived creating Filter Products MRSP100 Node Image Author Notice one filter condition defined Image Author Finally want save result table S3 Parquet format ti create Destination NodeSave Results Image Author Image Author Use Save button save ETL job time able see AWS Glue Studio automatically generated Spark code Click Script menu view generated code set Lets run job using 
Run button top right Click Run Details show status running job job status change Succeeded go S3 check final result job Image Author point many Parquet file produced result folder Image Author check cotenets file using Apache Parquet Viewer Image Author hope article helpful AWS Glue Studio covered part AWS Big Data Analytics course offered Datafence Cloud Academy course taught online weekendsTags Machine Learning Data Science Artificial Intelligence AWS Data
3,938
Rivian Has Become a Top Dog in the Electric Vehicle Battle
Rivian Has Become a Top Dog in the Electric Vehicle Battle Amazon will drive 10,000 Rivian vans in 2022, 100,000 in 2030 Photo by Amazon/Rivian While Tesla holds a massive lead in consumer electric vehicle sales, another company has a firm grip on the delivery market: Rivian. If you’ve never heard of Rivian, that’s OK: they’re rather lowkey. Founded in 2009, the company didn’t reveal its first two products — an electric pickup truck and SUV — until 2017. In 2019, Rivian received $1.55 billion in funding from three companies: Cox Automotive ($350 million), Ford ($500 million), and Amazon ($700 million). Due to factors relating to COVID-19, Ford terminated its contract with Rivian. Amazon didn’t leave, however, and will start seeing returns on its investment quite soon. Not long after Cox’s investment in September 2019, Amazon announced a purchase of 100,000 Rivian vans for its delivery fleet — the largest delivery EV order to date. Amazon plans to run on 100% renewable energy by 2030, and the Rivian fleet will play a large role in that. The company initially planned to have all 100,000 EV delivery vans on the road by 2024, with some hitting the roads in 2020, which is now unlikely with COVID-related delays. A recent announcement did say that 10,000 Rivian vans should be delivering Amazon packages by 2022. The two companies have yet to reveal specifications for the van but did say it will have 150 miles of range (which would be class-leading) and a 360-degree view of the exterior. This view will be shown on a center-stationed monitor display. Rivian is expected to announce its battery supplier for the Amazon project by the end of this year. This project will put Amazon in contention as the largest at-home delivery fleet in the world. Currently with about 30,000 vehicles, the company is behind the likes of FedEx (78,000), UPS (125,000), and United States Postal Service (200,000). The Amazon/Rivian partnership is not a unique one aside from the size of the commitment, though. UPS has ordered 10,000 EVs (with an option for 10,000 more) from startup Arrival and will begin rolling those out worldwide within the next few years. FedEx made an order for 1,000 electric vans at the end of 2018, while companies are currently battling for a large fleet contract from the USPS. Companies such as Tesla and Nikola are building electric semi-trucks meant for large-freight hauling to retailers. It would not be surprising to see either company jump into the smaller-scale delivery game down the line, however. The development of electric vehicles (specifically with batteries) comes slowly but surely, which is why prices are still high and delivery fleets aren’t already 100% electric. In an industry where things do move so slow, the latest reveal of Rivian’s van gives us some tangibility that we don’t always get. Thanks to Amazon’s commitment, Rivian has a large lead over all other hopeful EV suppliers. From a delivery standpoint, at least.
https://medium.com/swlh/rivian-has-become-a-top-dog-in-the-electric-vehicle-battle-2dcea44bebc5
['Dylan Hughes']
2020-11-24 20:42:19.860000+00:00
['Transportation', 'Climate Change', 'Environment', 'Sustainability', 'Electric Vehicles']
Title Rivian Become Top Dog Electric Vehicle BattleContent Rivian Become Top Dog Electric Vehicle Battle Amazon drive 10000 Rivian van 2022 100000 2030 Photo AmazonRivian Tesla hold massive lead consumer electric vehicle sale another company firm grip delivery market Rivian you’ve never heard Rivian that’s OK they’re rather lowkey Founded 2009 company didn’t reveal first two product — electric pickup truck SUV — 2017 2019 Rivian received 155 billion funding three company Cox Automotive 350 million Ford 500 million Amazon 700 million Due factor relating COVID19 Ford terminated contract Rivian Amazon didn’t leave however start seeing return investment quite soon long Cox’s investment September 2019 Amazon announced purchase 100000 Rivian van delivery fleet — largest delivery EV order date Amazon plan run 100 renewable energy 2030 Rivian fleet play large role company initially planned 100000 EV delivery van road 2024 hitting road 2020 unlikely COVIDrelated delay recent announcement say 10000 Rivian van delivering Amazon package 2022 two company yet reveal specification van say 150 mile range would classleading 360degree view exterior view shown centerstationed monitor display Rivian expected announce battery supplier Amazon project end year project put Amazon contention largest athome delivery fleet world Currently 30000 vehicle company behind like FedEx 78000 UPS 125000 United States Postal Service 200000 AmazonRivian partnership unique one aside size commitment though UPS ordered 10000 EVs option 10000 startup Arrival begin rolling worldwide within next year FedEx made order 1000 electric van end 2018 company currently battling large fleet contract USPS Companies Tesla Nikola building electric semitrucks meant largefreight hauling retailer would surprising see either company jump smallerscale delivery game line however development electric vehicle specifically battery come slowly surely price still high delivery fleet aren’t already 100 electric industry thing move slow latest reveal Rivian’s van give u tangibility don’t always get Thanks Amazon’s commitment Rivian large lead hopeful EV supplier delivery standpoint leastTags Transportation Climate Change Environment Sustainability Electric Vehicles
3,939
Committed to Success: Hai Hoang, Tech Lead at Planworth
Hai Hoang is a Commit engineer who joined Planworth, an early-stage wealth planning SaaS platform in 2019 as a technical lead. We sat down with him to hear about his journey. Tell us a bit about your background before joining Commit? I spent the first two-and-a-half years of my career at a large tech company, then did my own startup for around two years. It was an on-demand personal assistant app. We launched our product and got to market, but market conditions were bad at the time. Investments dried up and we ended up having to shut down. I went back to the first tech company I worked for, to figure out what I wanted to do next. Then I worked for a few startups, but nothing was really going anywhere. That’s when Commit came into the picture. I was one of the first engineers, part of the founding team. “You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that before making a long term commitment” What drew you to Commit? The people. I had worked with some members of the Commit team at other startups. The people are really fun and they had senior engineers on board. I was attracted to that, because I knew I could learn a ton from them. It was a very good environment for me to start new projects all the time and learn from some of the best. It was clear that Commit’s goal was to minimize risk for engineers. We wanted to offer engineers the opportunity to meet with startup founders and assess what their product was and what their business strategy was before fully jumping in — that really appealed to me, because I had been with failed startups before. How did you get connected with Planworth? I actually had no intention of leaving Commit. I was there from the beginning, I was helping build the company — I thought there was no reason for me to leave. But then Planworth came around, and the product and the team got me very interested. I could see potential that I hadn’t seen with other projects. Plus, it’s in the fintech field, which I’ve always had an interest in but have never been able to dip my toes into. What attracted you to Planworth? I liked the fact that they recognized their product-market fit. That’s something many startups don’t have from the beginning. Most startups have an idea, then they build a product, then they go out and validate it with the market. Planworth built a rough proof-of-concept and got immediate validation. So by the time I joined, it was clear they had found a market, figured out a business model, and had a plan for earning revenue. It was amazing to see. Also, it was great that I had a three-month period working with the founders and the team, before I formally joined, so I really got to know them and their product. What has it been like so far? It’s been very, very good. They’ve given me the autonomy and authority to implement my vision of what I want a team to look like, which I’ve never had the opportunity to do. They gave me that trust. And not just from the management perspective — even the engineering team trusts me to make decisions. It’s a great career opportunity because previously I’ve been a team lead, where you’re running projects, but at Planworth I’m also managing people. How has your time with Commit helped you in this new role? I’ve definitely been able to transfer things I learned at Commit onto my work at Planworth. At Commit I learned about project management, especially figuring out ways to deliver work in a shorter time frame. 
A lot of the technical skill sets I picked up at Commit also set me up for success at Planworth, like devops tricks. I’m still learning a lot at Planworth. Maybe not as much on the engineering side, but on the management side and team lead side. I have Tarsem and James [Planworth’s co-founders] teaching me and coaching me. So I’m learning every day. How has it been working with two non-technical founders? It’s been good. Engineering is very different than what they’re used to, but they’re very open-minded and understanding about the process. That’s one of the characteristics I like about them. They care about scalability, and they care about having clean code and tests and stuff like that. Not all founders do. What would you tell or say to other engineers considering joining Commit? Go for it. And keep an open mind, because every single project is very different. You end up working with different teams and different people. You don’t really know how you get along with people until you work with them, right? To me, Commit offers a really good opportunity to get a feel for that. I think it’s a fantastic model. As a person who comes from the startup world, working on multiple failed startups, it really does mitigate the risk.
https://medium.com/commit-engineering/committed-to-success-hai-hoang-tech-lead-at-planworth-38f8442cebe9
['Beier Cai']
2020-06-14 21:48:45.193000+00:00
['Technology', 'Careers', 'Software Development', 'Startup', 'Entrepreneurship']
Title Committed Success Hai Hoang Tech Lead PlanworthContent Hai Hoang Commit engineer joined Planworth earlystage wealth planning SaaS platform 2019 technical lead sat hear journey Tell u bit background joining Commit spent first twoandahalf year career large tech company startup around two year ondemand personal assistant app launched product got market market condition bad time Investments dried ended shut went back first tech company worked figure wanted next worked startup nothing really going anywhere That’s Commit came picture one first engineer part founding team “You don’t really know get along people work right Commit offer really good opportunity get feel making long term commitment” drew Commit people worked member Commit team startup people really fun senior engineer board attracted knew could learn ton good environment start new project time learn best clear Commit’s goal minimize risk engineer wanted offer engineer opportunity meet startup founder ass product business strategy fully jumping — really appealed failed startup get connected Planworth actually intention leaving Commit beginning helping build company — thought reason leave Planworth came around product team got interested could see potential hadn’t seen project Plus it’s fintech field I’ve always interest never able dip toe attracted Planworth liked fact recognized productmarket fit That’s something many startup don’t beginning startup idea build product go validate market Planworth built rough proofofconcept got immediate validation time joined clear found market figured business model plan earning revenue amazing see Also great threemonth period working founder team formally joined really got know product like far It’s good They’ve given autonomy authority implement vision want team look like I’ve never opportunity gave trust management perspective — even engineering team trust make decision It’s great career opportunity previously I’ve team lead you’re running project Planworth I’m also managing people time Commit helped new role I’ve definitely able transfer thing learned Commit onto work Planworth Commit learned project management especially figuring way deliver work shorter time frame lot technical skill set picked Commit also set success Planworth like devops trick I’m still learning lot Planworth Maybe much engineering side management side team lead side Tarsem James Planworth’s cofounder teaching coaching I’m learning every day working two nontechnical founder It’s good Engineering different they’re used they’re openminded understanding process That’s one characteristic like care scalability care clean code test stuff like founder would tell say engineer considering joining Commit Go keep open mind every single project different end working different team different people don’t really know get along people work right Commit offer really good opportunity get feel think it’s fantastic model person come startup world working multiple failed startup really mitigate riskTags Technology Careers Software Development Startup Entrepreneurship
3,940
How I made the switch to AI research
In 2015, I wanted to help with AI research, but taking the first steps felt daunting. I’d graduated from MIT then spent eight years building web startups. I’d put in my 10,000 hours, gotten funding from Y Combinator and grown a company to thirty people. Moving to research felt like starting over in my career. Was it really a good idea to throw away years of work? A friend told me about South Park Commons (SPC), a new space for people who were taking the first steps on a new path, and introduced me to Ruchi, the founder. Ruchi is super impressive, she was one of the earliest Facebook engineers, and had founded and sold a successful company. She also has a high-bandwidth and disarmingly direct communication style that I found refreshing. Over lunch, Ruchi described South Park Commons as a community in which everyone is starting over. Starting over is in fact the main thing that unifies the group. For example, two current Commons members are Jason, the maintainer of a popular open-source project Quill, who’s been learning to do enterprise sales, and Malcolm, a successful infrastructure engineer who’s starting a fund Strong Atomics to invest in nuclear fusion companies. I joined South Park Commons, blocked off three months to see if I could make progress, and made a plan to teach myself machine learning. Several other SPC members were interested in the space, so we started going through a curriculum of courses and organized a paper reading group. As soon as I got over the fear and took the plunge, things got vastly easier. Six months of focused work later, I had a position at OpenAI.
https://medium.com/south-park-commons/how-i-made-the-switch-to-ai-research-b053b402608
['South Park Commons']
2017-08-03 16:49:23.029000+00:00
['AI', 'Naturallanguageprocessing', 'Artificial Intelligence', 'Machine Learning']
Title made switch AI researchContent 2015 wanted help AI research taking first step felt daunting I’d graduated MIT spent eight year building web startup I’d put 10000 hour gotten funding Combinator grown company thirty people Moving research felt like starting career really good idea throw away year work friend told South Park Commons SPC new space people taking first step new path introduced Ruchi founder Ruchi super impressive one earliest Facebook engineer founded sold successful company also highbandwidth disarmingly direct communication style found refreshing lunch Ruchi described South Park Commons community everyone starting Starting fact main thing unifies group example two current Commons member Jason maintainer popular opensource project Quill who’s learning enterprise sale Malcolm successful infrastructure engineer who’s starting fund Strong Atomics invest nuclear fusion company joined South Park Commons blocked three month see could make progress made plan teach machine learning Several SPC member interested space started going curriculum course organized paper reading group soon got fear took plunge thing got vastly easier Six month focused work later position OpenAITags AI Naturallanguageprocessing Artificial Intelligence Machine Learning
3,941
20 (more) Technologies that will change your life by 2050
20 (more) Technologies that will change your life by 2050 The future will be… weird ? I recently shared an article called “The “Next Big Thing” in Technology : 20 Inventions That Will Change the World”, which got a few dozen thousand hits in the past couple of weeks. This calls for a sequel. The previous 20 technologies were specifically centered on the next 20 years of technology development; but there’s a lot more to unravel when looking beyond the near future, though certainty obviously decreases with time. Below are 20 technologies that will change the world by 2050 and beyond. This date and predictions are understandably vague and arbitrary, and we all know that predictions often fall flat (check my 2020 tech predictions if you don’t believe me). Regardless, the knowledge gained through planning for potential technologies is crucial to the selection of appropriate actions as future events unfold. Above all, articles such as this one act as catalysts to steer the conversation in the right direction. 1. DNA computing Life is far, far more complex than any of the technologies humanity has ever created. As such, it could make sense to use life’s building blocks to create an entirely new type of computational power. Indeed, for all the talks of Artificial Intelligence, nothing beats our mushy insides when it comes to learning and making inferences. DNA computing is the idea that we can use biology instead of silicon to solve complex problems. As a DNA strand links to another, it creates a reaction which modifies another short DNA sequence. Action. Reaction. It’s not a silly idea : most of our computers are built to reflect the very organic way humans think (how else would we grasp computer’s inputs and outputs). Humanity is pretty far from anything usable right now : we’ve only been able to create the most basic Turing machine, which entails creating a set of rules, feeding an input, and getting a specific output based on the defined rules. In real term… well, we managed to play tic-tac-toe with DNA, which is both dispiriting and amazing. More on DNA Computing here [Encyclopaedia Britannica]. 2. Smart Dust Smart dust is a swarm of incredibly tiny sensors which would gather huge amounts of information (light, vibrations, temperature…) over a large area. Such systems can theoretically be put in communication with a server or with analysis systems via a wireless computer network to transmit said information. Potential applications include detection of corrosion in aging pipes before they leak (for example in drinking water… oh, hi Flint), tracking mass movements in cities or even monitoring climate change over large areas. Some of the issues with this technology is the ecological harm these sensors could cause, as well as their potential for being used for unethical behavior. We are also far from something that could be implemented in the near future : it’s very hard to communicate with something this small, and Smart Dust would likely be vulnerable to environmental conditions and microwaves. More on Smart Dust here [WSJ]. 3. 4D printing The name 4D printing can lead to confusion: I am not implying that humanity will be able to create and access another dimension (Only Rubik can do that). Put simply, a 4D-printed product is a 3D-printed object which can change properties when a specific stimulus is applied (submerged underwater, heated, shaken, not stirred…). 
The applications are still being discussed, but some very promising industries include healthcare (pills that activate only if the body reaches a certain temperature), fashion (clothes that become tighter in cold temperature), and home-making (furniture that becomes rigid under a certain stimulus). The key challenge of this technology is obviously finding the relevant components for all types of uses. Some work is being done in this space, but we’re not even close to being customer-ready, having yet to master reversible changes of certain materials. More on 4D Printing here [Wikipedia]. 4. Neuromorphic Hardware Now THIS is real SciFi. Taking a page from biology, physics, mathematics, computer science and electronic engineering, neuromorphic engineering aims to create hardware which copies the neurons in their response to sensory inputs. Whereas DNA Computing aims to recreate computers with organic matter, Neuromorphic Hardware aims to recreate neurons and synapses using silicon. This is especially relevant as we’re seeing an end to the exponential computing power growth predicted by Moore’s law (that’s quantum mechanics for you), and have to find new ways to calculate a bunch of things very quickly. We’re not really sure how far this idea can be taken, but exploring it is, if anything, great for theoretical AI research. Should said research go further and become actionable, you’ll find me knocking on Sarah Connor’s door. More on Neuromorphic Hardware here [Towards Data Science]. 5. Nanoscale 3D printing 3D printing is still a solution looking for a problem. That’s partly because 3D printers are still too expensive for the average Joe, and not sophisticated and quick enough for large-scale manufacturing companies. This may change over the next few decades: researchers have developed a method that uses a laser to ensure that incredibly tiny structures can be 3D-printed much, much faster (X1,000), while still ensuring a good quality of build. This method is called “femtosecond projection TPL”, but I much prefer “Nanoscale 3D printing” (because I’m a technological peasant). Use cases are currently centered around flexible electronics and micro-optics, but quick discoveries around materials (both liquid and solid) lead researchers to think that they will be able to build small but imagination-baffling structures in the near future. One might imagine the medical community could use something like this… More on Nanoscale 3D Printing here [Future Timeline]. 6. Digital Twins As opposed to some of the other techs discussed in this article, this technology may not affect you directly, and is already being implemented (and will continue to be for a long, long time). Essentially, digital twins integrate artificial intelligence, machine learning and software analytics to create a digital replica of physical assets that updates and changes as its physical counterpart changes. Digital twins provide a variety of information throughout an object’s life cycle, and can even help when testing new functionalities of a physical object. With an estimated 35 billion connected objects being installed by 2021, digital twins will exist for billions of things in the near future, if only for the potential billions of dollars of savings in maintenance and repair (that’s a lot of billions). Look out for big news on the matter coming out of the manufacturing, automotive and healthcare industries. Why would I mention this ever-present idea as a technology to look out for in 2050?
Easy : though we are talking about objects now, the future of digital twins rests in the creation of connected cities, and connected humans. More on Digital Twins here [Forbes]. 7. Volumetric displays / Free-space displays If one cuts through the blah blah (of which there is too much in this space), volumetric displays are essentially holograms. There are currently 3 techniques to create holograms, none of which are very impressive : illuminating spinning surfaces (first seen in 1948), illuminating gases (first seen in 1914), or illuminating particles in the air (first seen in 2004). The use of volumetric displays in advertising (the primary focus for this concept) may be either greatly entertaining, or absolutely terrible because of potential impracticabilities. You can imagine which easily by watching Blade Runner 2049). I’m also dubious about the tech’s importance: computers were supposed to kill paper and I still print every single presentation I receive to read it. I don’t see hologram being anything else than a hype-tech attached to other more interesting techs (such as adaptive projectors). More on Volumetric Displays here [Optics & Photonics News]. 8. Brain-Computer interface (BCI) A brain-computer interface, sometimes called a neural-control interface, mind-machine interface, direct neural interface, or brain–machine interface, is a direct communication pathway between an enhanced or wired brain and an external device (If you start reading words like ElectroEncephaloGraphy, you’ve gone too far into the literature). If that sounds like something you’ve heard a lot about recently, it might have a lot to do with Elon Musk and a pig of his... Beyond obvious and needed work in the prosthetic space, it’s the medical aspect which would be most transformative. A chip implemented in the brain could help prevent motion sickness, could detect and diagnose cancer cells and help with the rehabilitation of stroke victims. It could also be used for marketing, entertainment and education purposes. But let’s not get ahead of ourselves : there are currently dozens, if not hundreds of technical challenges to wrestle with before getting anywhere near something the average person could use. First and foremost, we’d need to find the right material that would not corrode and/or hurt the brain after a few weeks, and get a better understanding of how the brain ACTUALLY works. More on brain-computer interface here [The Economist]. 9. Zero-knowledge proof (aka: zero-knowledge succinct non-interactive argument of knowledge) Privacy: ever heard of it? Computer scientists are perfecting a cryptographic tool for proving something without revealing the information underlying the proof. It sounds incredible but not impossible once you wrap your head around the concept and the fact that it’s a bit more complex than saying “c’mon bro, you know I’m good for it”. Allow me to simplify : Bob has a blind friend named Alice and two marbles of different colours, which are identical in shape and size. Alice puts them behind her back and shows one to Bob. She then does it again, either changing the marble or showing the same one again, asking if this is the same as the marble first shown. If Bob were guessing whether it was the same or not, he would have a 50/50 chance of getting it right, so she does it again. And again. And because Bob sees colours, he gets it right each time, and the chance that he guessed lucky diminishes. 
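To put a rough number on that intuition (a quick back-of-the-envelope calculation): a Bob who could not actually tell the marbles apart has a 1/2 chance of answering each round correctly, so his chance of surviving n rounds by luck is (1/2)^n. That is 1/1,024 after 10 rounds and about 1 in 1,048,576 after 20 rounds, which is why Alice's confidence grows so quickly with each repetition.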
That way, Alice knows that Bob knows which marble is the original shown (and its colour), without her ever knowing the colour of any of the marbles. Boom, zero-knowledge proof. ZNP is this concept, applied to digitally to complex algorithms. It’s easy to come up with VERY cool use cases. For example, if an app needs to know that you have enough money to put a transaction through : your bank could communicate that yes, that is the case, without giving an amount. It could also help identify a person without a birth certificate, or allow someone to enter a restricted website without needing to display their date of birth. Yay for privacy. More on zero-knowledge proof here [Wired]. 10. Flying autonomous vehicles This one is easier to grasp as it has been part of the collective imagination for dozens, if not hundreds of years. Cars. But they fly. Obviously, there are a lot of issues with this very sci-fi idea. We’re already struggling to stop people from attacking “classical” autonomous cars, so the jury is still out on whether it will ever come to be. Another issue is the fact that much of our world is built for traditional cars. Roads, buildings, parkings, insurance, licenses… everything would need to be destroyed and remade. It is likely that such cars will never see the light of day unless society crumbles and is rebuilt (2020’s not over yet). There are currently 15 prototypes in development. I’d bet that none of these will ever come to light, except as playthings for the uber-rich. But hey, who doesn’t want to see Zuckerberg live out his mid-life crisis in the skies? More on Flying Autonomous Vehicles here [Forbes]. 11. Smart Robots / Autonomous Mobile Robots This has also been a staple of SciFi for many years, for obvious reasons: imagine mixing robotics with enough Artificial Intelligence to entertain the idea of the digital world becoming physical. Welcome to your tape. Before any of this can ever happen, we will need to improve robotics (robots don’t move so good right now) and create a new branch of AI research to explore a myriad of reactions such a technology would require to be operational. AMRs will also need nice, strong batteries, hence the current research into Lithium–silicon technologies. Though no terminators are in sight, we’re starting to see such autonomous robots in warehouses, where they pick your Amazon purchase, and in the street, where they’ve begun bringing us our groceries. More on Smart Robots here [EDx]. 12. Secure quantum internet As I’ve mentioned in previous articles, quantum computing will allow us to take leaps in the number of calculations a computer can do per second. A by-product of this is the fact that no password will be safe in a quantum world, as it should become possible to try all possible text and number combinations in record time. Modern problems require modern solutions. Researchers at the Delft University of Technology in the Netherlands are working on a quantum Internet infrastructure where communications are coded in the form of qubits and entangled in photons (yes, light) flowing in optical fibers, so as to render them impossible to decrypt without disturbing the network. In everyday word, that means that anyone listening in or hacking the network would disrupt the communication, rendering it unintelligible — data in such a state is, by nature, impossible to observe without altering it. The underlying science is fascinating, and I strongly recommend clicking on the link below to explore it. 
More on Secure Quantum Internet here [Harvard School of Engineering]. 13. Hyper-personalized medicine This is yet another technology which is burgeoning today, but has yet a long way to go. At its heart, hyper-personalised medicine is genetic medicine designed for a single patient, making it possible to treat diseases that were once incurable, or that were too rare to be worth curing. In 2019, a young girl named Mila Makovec, suffering from a rare and fatal genetic brain disease, was offered a tailor-made treatment (named Milasen — cute) to restore the function of the failed gene. Though she is not cured, her condition has stabilised, which is a big win. The development of such personalized drugs is made possible by rapid advances in sequencing and genetic editing : creating a complete human genome sequence has gone from costing $20 million or so in 2006 to less than $500 in 2020. However, creating a drug still requires major resources (a year of development in Mila’s case) and the mobilization of specialized teams. The question of cost therefore risks limiting the generalization of such treatments. More on Hyper-Personalised Medicine here [MIT Technology Review]. 14. Biotech / Cultured / Artificial tissues / bioprinting Bioprinting is the process of creating cellular structures using 3D printing techniques, where cell functions are retained throughout the printing process. Generally, 3D bioprinting uses a layer-by-layer printing method to deposit materials sometimes referred to as bio-inks to create natural biological tissue-like structures which are then used in the fields of medical engineering. A number of challenges pave the road ahead : we don’t know enough about the human body to implement these techniques safely, the price is very high, cells don’t live for very long when printed… the list goes on. And that’s without mentioning all the ethical questions such a technology raises. There are nevertheless so many potential use cases that it’s well worth solving these issues : beyond allowing us to do transplants on amputees, it could also help us create “meatless meat”, leading to a more humane and more ecological meat industry. More on Bioprinting here [Explaining the Future]. 15. Anti-aging drugs Several treatments intended to slow or reverse aging are currently in their testing phase. They block the aging of cells linked to age and reduce the inflammation responsible for the accumulation of toxic substances or degenerative pathologies, such as Alzheimer’s, cancer or cardiovascular diseases. In short, we’re not trying to “cure aging”, but instead seek to improve immune functions in older people. Many studies are ongoing. In June 2019, the American start-up Unity Biotechnology, for example, launched a knee arthritis drug test. The biotech Alkahest, on the other hand, promises to curb cognitive loss by injecting young blood components. Finally, researchers have been testing rapamycin, an immunosuppressant, as an anti-aging treatment for many, many years. The latter shows great promises, as it improves immune functions by as much as 20%. The barriers are many : beyond the scientific costs, political pressure will need to be applied to key players to change the rules of healthcare as we know it. And we know how THAT usually plays out… More on Anti-Aging Drugs here [University of Michigan]. 16. Miniature AI Because of AI’s complexity, the computing power required to train artificial intelligence algorithms and create breakthroughs doubles every 3.4 months. 
In addition, the computers dedicated to these programs require a gigantic consumption of energy. The digital giants are now working to miniaturize AI technology to make it accessible to the general public. Google Assistant and Siri thus integrate voice recognition systems that fit on a smartphone chip. AI is also used in digital cameras, capable of automatically retouching a photo by removing an annoying detail or improving the contrast, for example. Localized AI is better for privacy and would remove any latency in the transfer of information. Obviously, because this space is ever-evolving, it is very difficult to see beyond the next few years of evolution — all we know is that many technical difficulties are still in the way (mathematically, mechanically, spiritually…). More on Miniature AI here [MIT Technology Review]. 17. Hyperloop The fact that Elon Musk makes a second appearance on this list is a testament to his very specific brand of genius. His hyperloop project consists of an underground low-pressure tube in which capsules transporting passengers and/or goods move. Beyond removing air from the tube, friction on the ground is also removed, as the capsules are lifted by an electromagnetic lift system. The capsules are propelled by a magnetic field created by linear induction motors placed at regular intervals inside the tubes. Removing air and ground friction would allow such a transportation method to reach insane speeds: 1,102 km/h versus 885 km/h for planes at full speed (and the hyperloop can reach its top speed much faster than a plane). Other benefits include reduced pollution and noise. However, this technology would require the creation of extensive tunnels, sometimes under cities. The price is fairly prohibitive: $75M per kilometer built. Other issues include making a perfectly straight tunnel, removing ALL air from the tube, and reaching the passengers in case of accidents. This has led some transportation experts to claim that the hyperloop has no future. Regardless, the memes are hilarious. More on Hyperloop here [Tesla]. 18. Space mining / Asteroid mining Asteroids have huge mineral resources. The idea behind space mining is that we could catch these asteroids, extract the minerals (especially the rare ones!), and bring them back to earth to sell them. Planets are also considered relevant in this discussion. How hard could it be to make a lot of money from the final frontier? Turns out, it’s fairly complex. Difficulties include the high cost of space exploration, difficulties finding the right asteroids, and the difficulty of landing on one when it’s moving at high speed (18,000 km/h on average). That’s a lot of difficulties. And that’s without discussing the potential trade and space wars that could result from two nations or companies having their eyes on the same space rock. So far, only the US and… Luxembourg (?) have passed laws in that regard. However, if resources on earth become scarily scarce, and recycling is not an option, it might just become worth it. More on Space Mining here [Financial Times]. 19. Orbital Solar Power An orbital solar power station, solar power satellite or space solar power plant would be an artificial satellite built in high orbit that would use microwave or laser power transmission to send solar energy to a very large antenna on Earth. That energy could then be used instead of conventional and polluting energy sources.
The advantage of placing a solar power plant in orbit is that it would not be affected by day-night cycles, weather and seasons, due to its constant “view” of the Sun. This idea has been around since 1968, but we’ve still got a long way to go. Construction costs are very high, and the technology will not be able to compete with current energy sources unless a way is discovered to reduce the cost of launches (this is where Elon shines again). We could alternatively develop a space industry to build this type of power plant from materials taken from other planets or low gravity asteroids. Alternatively, we could just stop polluting the world, take one for future generation, and switch to less convenient / more expensive sources of energy… More on Orbital Solar Power here [Forbes]. 20. Teleportation of complex organic molecules Honestly, I don’t know enough science to explain this one properly. But I’ll do my best! Teleportation, or the science of disappearing in one place to immediately reappear in another, is something that’s been in the popular imagination for decades now. We’re discussing something a bit simpler here : quantum teleportation is able to share information near instantaneously from one point to another, not matter. We’re not talking about silly fish & chips recipe type of information, we’re talking about the make-up of entire molecules. In the early 2000s, scientists were able to transfer particles of light (with zero mass) over short distances. Further experiments in quantum entanglement led to successful teleportation of the first complete atom. This was followed by the first molecules, consisting of multiple atoms. Logically, then, we could expect the first complex organic molecules such as DNA and proteins to be teleported by 2050. I have no idea what to do with this information. More on Quantum teleportation here [Wikipedia]. Conclusion Technology has a tendency to hold a dark mirror to society, reflecting both what’s great and evil about its makers. It’s important to remember that technology is often value-neutral : it’s what we do with it day in, day out that defines whether or not we are dealing with the “next big thing”. Good luck out there.
https://medium.com/predict/20-more-technologies-that-will-change-your-life-by-2050-a28563a763a3
['Adrien Book']
2020-09-21 18:12:12.183000+00:00
['Next Big Thing', 'Predictions', 'Future', 'Technology', 'AI']
Title 20 Technologies change life 2050Content 20 Technologies change life 2050 future be… weird recently shared article called “The “Next Big Thing” Technology 20 Inventions Change World” got dozen thousand hit past couple week call sequel previous 20 technology specifically centered next 20 year technology development there’s lot unravel looking beyond near future though certainty obviously decrease time 20 technology change world 2050 beyond date prediction understandably vague arbitrary know prediction often fall flat check 2020 tech prediction don’t believe Regardless knowledge gained planning potential technology crucial selection appropriate action future event unfold article one act catalyst steer conversation right direction 1 DNA computing Life far far complex technology humanity ever created could make sense use life’s building block create entirely new type computational power Indeed talk Artificial Intelligence nothing beat mushy inside come learning making inference DNA computing idea use biology instead silicon solve complex problem DNA strand link another creates reaction modifies another short DNA sequence Action Reaction It’s silly idea computer built reflect organic way human think else would grasp computer’s input output Humanity pretty far anything usable right we’ve able create basic Turing machine entail creating set rule feeding input getting specific output based defined rule real term… well managed play tictactoe DNA dispiriting amazing DNA Computing Encyclopaedia Britannica 2 Smart Dust Smart dust swarm incredibly tiny sensor would gather huge amount information light vibration temperature… large area system theoretically put communication server analysis system via wireless computer network transmit said information Potential application include detection corrosion aging pipe leak example drinking water… oh hi Flint tracking mass movement city even monitoring climate change large area issue technology ecological harm sensor could cause well potential used unethical behavior also far something could implemented near future it’s hard communicate something small Smart Dust would likely vulnerable environmental condition microwave Smart Dust WSJ 3 4D printing name 4D printing lead confusion implying humanity able create access another dimension Rubik Put simply 4Dprinted product 3Dprinted object change property specific stimulus applied submerged underwater heated shaken stirred… application still discussed promising industry include healthcare pill activate body reach certain temperature fashion clothes become tighter cold temperature homemaking furniture becomes rigid certain stimulus key challenge technology obviously finding relevant component type us work done space we’re even close customerready yet master reversible change certain material 4D Printing Wikipedia 4 Neuromorphic Hardware real SciFi Taking page biology physic mathematics computer science electronic engineering neuromorphic engineering aim create hardware copy neuron response sensory input Whereas DNA Computing aim recreate computer organic matter Neuromorphic Hardware aim recreate neuron synapsis using silicon especially relevant we’re seeing end exponential computing power growth predicted Moore’s law that’s quantum mechanic find new way calculate bunch thing quickly We’re really sure far idea taken exploring anything great theoretical AI research said research go become actionable you’ll find knocking Sarah Connor’s door Neuromorphic Hardware Towards Data Science 5 Nanoscale 3D printing 3D 
printing still solution looking problem That’s partly 3D printer still expensive average Joe sophisticated quick enough largescale manufacturing company may change next decade researcher developed method us laser ensure incredibly tiny structure 3Dprinted much much faster X1000 still ensuring good quality build method called “femtosecond projection TPL” much prefer “Nanoscale 3D printing” I’m technological peasant Use case currently centered around flexible electronics microoptics quick discovery around material liquid solid lead researcher think able build small imaginationbaffling structure near future One might imagine medical community could use something like this… Nanoscale 3D Printing Future Timeline 6 Digital Twins opposed tech discussed article technology may affect directly already implemented continue long long time Essentially digital twin integrate artificial intelligence machine learning software analytics create digital replica physical asset update change physical counterpart change Digital twin provide variety information throughout object’s life cycle even help testing new functionality physical object estimated 35 billion connected object installed 2021 digital twin exist billion thing near future potential billion dollar saving maintenance repair that’s lot billion Look big news matter coming manufacturing automotive healthcare industry would mention everpresent idea technology look 2050 Easy though talking object future digital twin rest creation connected city connected human Digital Twins Forbes 7 Volumetric display Freespace display one cut blah blah much space volumetric display essentially hologram currently 3 technique create hologram none impressive illuminating spinning surface first seen 1948 illuminating gas first seen 1914 illuminating particle air first seen 2004 use volumetric display advertising primary focus concept may either greatly entertaining absolutely terrible potential impracticability imagine easily watching Blade Runner 2049 I’m also dubious tech’s importance computer supposed kill paper still print every single presentation receive read don’t see hologram anything else hypetech attached interesting tech adaptive projector Volumetric Displays Optics Photonics News 8 BrainComputer interface BCI braincomputer interface sometimes called neuralcontrol interface mindmachine interface direct neural interface brain–machine interface direct communication pathway enhanced wired brain external device start reading word like ElectroEncephaloGraphy you’ve gone far literature sound like something you’ve heard lot recently might lot Elon Musk pig Beyond obvious needed work prosthetic space it’s medical aspect would transformative chip implemented brain could help prevent motion sickness could detect diagnose cancer cell help rehabilitation stroke victim could also used marketing entertainment education purpose let’s get ahead currently dozen hundred technical challenge wrestle getting anywhere near something average person could use First foremost we’d need find right material would corrode andor hurt brain week get better understanding brain ACTUALLY work braincomputer interface Economist 9 Zeroknowledge proof aka zeroknowledge succinct noninteractive argument knowledge Privacy ever heard Computer scientist perfecting cryptographic tool proving something without revealing information underlying proof sound incredible impossible wrap head around concept fact it’s bit complex saying “c’mon bro know I’m good it” Allow simplify Bob blind friend named Alice two 
marble different colour identical shape size Alice put behind back show one Bob either changing marble showing one asking marble first shown Bob guessing whether would 5050 chance getting right Bob see colour get right time chance guessed lucky diminishes way Alice know Bob know marble original shown colour without ever knowing colour marble Boom zeroknowledge proof ZNP concept applied digitally complex algorithm It’s easy come cool use case example app need know enough money put transaction bank could communicate yes case without giving amount could also help identify person without birth certificate allow someone enter restricted website without needing display date birth Yay privacy zeroknowledge proof Wired 10 Flying autonomous vehicle one easier grasp part collective imagination dozen hundred year Cars fly Obviously lot issue scifi idea We’re already struggling stop people attacking “classical” autonomous car jury still whether ever come Another issue fact much world built traditional car Roads building parking insurance licenses… everything would need destroyed remade likely car never see light day unless society crumbles rebuilt 2020’s yet currently 15 prototype development I’d bet none ever come light except plaything uberrich hey doesn’t want see Zuckerberg live midlife crisis sky Flying Autonomous Vehicles Forbes 11 Smart Robots Autonomous Mobile Robots also staple SciFi many year obvious reason imagine mixing robotics enough Artificial Intelligence entertain idea digital world becoming physical Welcome tape ever happen need improve robotics robot don’t move good right create new branch AI research explore myriad reaction technology would require operational AMRs also need nice strong battery hence current research Lithium–silicon technology Though terminator sight we’re starting see autonomous robot warehouse pick Amazon purchase street they’ve begun bringing u grocery Smart Robots EDx 12 Secure quantum internet I’ve mentioned previous article quantum computing allow u take leap number calculation computer per second byproduct fact password safe quantum world become possible try possible text number combination record time Modern problem require modern solution Researchers Delft University Technology Netherlands working quantum Internet infrastructure communication coded form qubits entangled photon yes light flowing optical fiber render impossible decrypt without disturbing network everyday word mean anyone listening hacking network would disrupt communication rendering unintelligible — data state nature impossible observe without altering underlying science fascinating strongly recommend clicking link explore Secure Quantum Internet Harvard School Engineering 13 Hyperpersonalized medicine yet another technology burgeoning today yet long way go heart hyperpersonalised medicine genetic medicine designed single patient making possible treat disease incurable rare worth curing 2019 young girl named Mila Makovec suffering rare fatal genetic brain disease offered tailormade treatment named Milasen — cute restore function failed gene Though cured condition stabilised big win development personalized drug made possible rapid advance sequencing genetic editing creating complete human genome sequence gone costing 20 million 2006 le 500 2020 However creating drug still requires major resource year development Mila’s case mobilization specialized team question cost therefore risk limiting generalization treatment HyperPersonalised Medicine MIT Technology Review 14 Biotech Cultured 
Artificial tissue bioprinting Bioprinting process creating cellular structure using 3D printing technique cell function retained throughout printing process Generally 3D bioprinting us layerbylayer printing method deposit material sometimes referred bioinks create natural biological tissuelike structure used field medical engineering number challenge pave road ahead don’t know enough human body implement technique safely price high cell don’t live long printed… list go that’s without mentioning ethical question technology raise nevertheless many potential use case it’s well worth solving issue beyond allowing u transplant amputee could also help u create “meatless meat” leading humane ecological meat industry Bioprinting Explaining Future 15 Antiaging drug Several treatment intended slow reverse aging currently testing phase block aging cell linked age reduce inflammation responsible accumulation toxic substance degenerative pathology Alzheimer’s cancer cardiovascular disease short we’re trying “cure aging” instead seek improve immune function older people Many study ongoing June 2019 American startup Unity Biotechnology example launched knee arthritis drug test biotech Alkahest hand promise curb cognitive loss injecting young blood component Finally researcher testing rapamycin immunosuppressant antiaging treatment many many year latter show great promise improves immune function much 20 barrier many beyond scientific cost political pressure need applied key player change rule healthcare know know usually play out… AntiAging Drugs University Michigan 16 Miniature AI AI’s complexity computing power required train artificial intelligence algorithm create breakthrough double every 34 month addition computer dedicated program require gigantic consumption energy digital giant working miniaturize AI technology make accessible general public Google Assistant Siri thus integrate voice recognition system holding onto smartphone chip AI also used digital camera capable automatically retouching photo removing annoying detail improving contrast example Localized AI better privacy would remove latency transfer information Obviously space everevolving difficult see beyond next year evolution — know many technical difficulty still way mathematically mechanically spiritually… Miniature AI MIT Technology Review 17 Hyperloop fact Elon Musk make second appearance list testament specific brand genius hyperloop project consists underground lowpressuretube capsule transporting passenger andor good move Beyond removing air tube friction ground also removed capsule lifted electromagnetic lift system capsule propelled magnetic field created linear induction motor placed regular interval inside tube Removing air ground fiction would allow transportation method reach insane speed 1102 kmh versus 885 kmh plane full speed hyperloop reach top speed much faster plane benefit include reduced pollution noise However technology would require creation extensive tunnel sometimes city price fairly prohibitive 75M per kilometer built issue include making perfectly straight tunnel removing air tube reaching passenger case accident led transportation expert claim hyperloop future Regardless meme hilarious Hyperloop Tesla 18 Space mining Asteroid mining Asteroids huge mineral resource idea behind space mining could catch asteroid extract mineral especially rare one bring back earth sell Planets also considered relevant discussion hard could make lot money final frontier Turns it’s fairly complex Difficulties include high cost 
space exploration difficulty finding right asteroid difficulty landing it’s moving high speed 18000 kmh average That’s lot difficulty that’s without discussing potential trade space war could result two nation company eye space rock far US and… Luxembourg passed law regard However resource earth become scarily scarce recycling option might become worth Space Mining Financial Time 19 Orbital Solar Power orbital solar power station solar power satellite space solar power plant would artificial satellite built high orbit would use microwave laser power transmission send solar energy large antenna Earth energy could used instead conventional polluting energy source advantage placing solar power plant orbit would affected daynight cycle weather season due constant “view” Sun idea around since 1968 we’ve still got long way go Construction cost high technology able compete current energy source unless way discovered reduce cost launch Elon shine could alternatively develop space industry build type power plant material taken planet low gravity asteroid Alternatively could stop polluting world take one future generation switch le convenient expensive source energy… Orbital Solar Power Forbes 20 Teleportation complex organic molecule Honestly don’t know enough science explain one properly I’ll best Teleportation science disappearing one place immediately reappear another something that’s popular imagination decade We’re discussing something bit simpler quantum teleportation able share information near instantaneously one point another matter We’re talking silly fish chip recipe type information we’re talking makeup entire molecule early 2000s scientist able transfer particle light zero mass short distance experiment quantum entanglement led successful teleportation first complete atom followed first molecule consisting multiple atom Logically could expect first complex organic molecule DNA protein teleported 2050 idea information Quantum teleportation Wikipedia Conclusion Technology tendency hold dark mirror society reflecting what’s great evil maker It’s important remember technology often valueneutral it’s day day defines whether dealing “next big thing” Good luck thereTags Next Big Thing Predictions Future Technology AI
3,942
Build your first full-stack serverless app with Vue and AWS Amplify
Build flexible, scalable, and reliable apps with AWS Amplify In this tutorial, you will learn how to build a full-stack serverless app using Vue and AWS Amplify. You will create a new project and add a full authorisation flow using the authenticator component. This includes: Please let me know if you have any questions or want to learn more on the above at @gerardsans. Introduction to AWS Amplify Amplify makes developing, releasing and operating modern full-stack serverless apps easy and delightful. Mobile and frontend web developers are being supported throughout the app life cycle via an open source Amplify Framework (consisting of the Amplify libraries and Amplify CLI) and seamless integrations with AWS cloud services, and the AWS Amplify Console. Amplify libraries: in this article we will be using aws-amplify and @aws-amplify/ui-vue . Amplify CLI: command line tool for configuring and integrating cloud services. UI components: authenticator, photo picker, photo album and chat bot. Cloud services: authentication, storage, analytics, notifications, AWS Lambda functions, REST and GraphQL APIs, predictions, chat bots and extended reality (AR/VR). Offline-first support: Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for data reconciliation between offline and online scenarios. By using AWS Amplify, teams can focus on development while the Amplify team enforces best patterns and practices throughout the AWS Amplify stack. Amplify CLI The Amplify CLI provides a set of commands to help with repetitive tasks and automating cloud service setup and provision. Some commands will prompt questions and provide sensible defaults to assist you during their execution. These are some common tasks. Run: amplify init , to set up a new environment. Eg: dev, test, dist. amplify push , to provision local resources to the cloud. amplify status , to list local resources and their current status. The Amplify CLI uses AWS CloudFormation to manage service configuration and resource provisioning via templates. This is a declarative and atomic approach to configuration. Once a template is executed, it will either fail or succeed. Setting up a new project with the Vue CLI To get started, create a new project using the Vue CLI. If you already have it, skip to the next step. If not, install it and create the app using: yarn global add @vue/cli vue create amplify-app Navigate to the new directory and check everything checks out before continuing: cd amplify-app yarn serve Prerequisites Before going forward make sure you have gone through the instructions in our docs to sign up to your AWS Account and install and configure the Amplify CLI. Setting up your Amplify project AWS Amplify allows you to create different environments to define your preferences and settings.
For any new project you need to run the command below and answer as follows: amplify init Enter a name for the project: amplify-app Enter a name for the environment: dev Choose your default editor: Visual Studio Code Please choose the type of app that you’re building javascript What javascript framework are you using vue Source Directory Path: src Distribution Directory Path: dist Build Command: npm run-script build Start Command: npm run-script serve Do you want to use an AWS profile? Yes Please choose the profile you want to use default At this point, the Amplify CLI has initialised a new project and a new folder: amplify. The files in this folder hold your project configuration. <amplify-app> |_ amplify |_ .config |_ #current-cloud-backend |_ backend team-provider-info.json Installing the AWS Amplify dependencies Install the required dependencies for AWS Amplify and Vue using: yarn add aws-amplify @aws-amplify/ui-vue Adding authentication AWS Amplify provides authentication via the auth category which gives us access to AWS Cognito. To add authentication use the following command: amplify add auth When prompted choose: Do you want to use default authentication and security configuration?: Default configuration How do you want users to be able to sign in when using your Cognito User Pool?: Username Do you want to configure advanced settings? No Pushing changes to the cloud By running the push command, the cloud resources will be provisioned and created in your AWS account. amplify push To quickly check your newly created Cognito User Pool you can run amplify status To access the AWS Cognito Console at any time, go to the dashboard at https://console.aws.amazon.com/cognito. Also be sure that your region is set correctly. Your resources have been created and you can start using them! Configuring the Vue application Reference the auto-generated aws-exports.js file that is now in your src folder. To configure the app, open main.ts and add the following code below the last import: import Vue from 'vue' import App from './App.vue' import Amplify from 'aws-amplify'; import '@aws-amplify/ui-vue'; import aws_exports from './aws-exports'; Amplify.configure(aws_exports); Vue.config.productionTip = false new Vue({ render: h => h(App), }).$mount('#app') Using the Authenticator Component AWS Amplify provides UI components that you can use in your app. Let’s add these components to the project In order to use the authenticator component add it to src/App.vue : <template> <div id="app"> <amplify-authenticator> <div> <h1>Hey, {{user.username}}!</h1> <amplify-sign-out></amplify-sign-out> </div> </amplify-authenticator> </div> </template> <script> import { AuthState, onAuthUIStateChange } from '@aws-amplify/ui-components' export default { name: 'app', data() { return { user: { }, } }, created() { // authentication state managament onAuthUIStateChange((state, user) => { // set current user and load data after login if (state === AuthState.SignedIn) { this.user = user; } }) } } </script> You can run the app and see that an authentication flow has been added in front of your app component. This flow gives users the ability to sign up and sign in. To view any users that were created, go back to the Cognito Dashboard at https://console.aws.amazon.com/cognito. Also be sure that your region is set correctly. Alternatively you can also use: amplify console auth Accessing User Data To access the user’s info using the Auth API. This will return a promise. 
import { Auth } from 'aws-amplify'; Auth.currentAuthenticatedUser().then(console.log) Publishing your app To deploy and host your app on AWS, we can use the hosting category. amplify add hosting Select the plugin module to execute: Amazon CloudFront and S3 Select the environment setup: DEV hosting bucket name YOURBUCKETNAME index doc for the website index.html error doc for the website index.html Now, everything is set up & we can publish it: amplify publish Cleaning up Services To wipe out all resources from your project and your AWS Account, you can do this by running: amplify delete Conclusion Congratulations! You successfully built your first full-stack serverless app using Vue and AWS Amplify. Thanks for following this tutorial. You have learnt how to provide an authentication flow using the authenticator component or via the service API and how to use Amplify CLI to execute common tasks including adding and removing services.
https://gerard-sans.medium.com/build-your-first-full-stack-serverless-app-with-vue-and-aws-amplify-9ed7ef9e9926
['Gerard Sans']
2020-09-14 12:08:20.757000+00:00
['JavaScript', 'Aws Amplify', 'Vuejs', 'AWS']
Title Build first fullstack serverless app Vue AWS AmplifyContent Build flexible scalable reliable apps AWS Amplify tutorial learn build fullstack serverless app using Vue AWS Amplify create new project add full authorisation flow using authenticator component includes Please let know question want learn gerardsans Introduction AWS Amplify Amplify make developing releasing operating modern fullstack serverless apps easy delightful Mobile frontend web developer supported throughout app life cycle via open source Amplify Framework consisting Amplify library Amplify CLI seamless integration AWS cloud service AWS Amplify Console Amplify library article using awsamplify awsamplifyuivue article using Amplify CLI command line tool configuring integrating cloud service command line tool configuring integrating cloud service UI component authenticator photo picker photo album chat bot authenticator photo picker photo album chat bot Cloud service authentication storage analytics notification AWS Lambda function REST GraphQL APIs prediction chat bot extended reality ARVR authentication storage analytics notification AWS Lambda function REST GraphQL APIs prediction chat bot extended reality ARVR Offlinefirst support Amplify DataStore provides programming model leveraging shared distributed data without writing additional code data reconciliation offline online scenario using AWS Amplify team focus development Amplify team enforces best pattern practice throughout AWS Amplify stack Amplify CLI Amplify CLI provides set command help repetitive task automating cloud service setup provision command prompt question provide sensible default assist execution common task Run amplify init setup new environment Eg dev test dist setup new environment Eg dev test dist amplify push provision local resource cloud provision local resource cloud amplify status list local resource current status Amplify CLI us AWS CloudFormation manage service configuration resource provisioning via template declarative atomic approach configuration template executed either fail succeed Setting new project Vue CLI get started create new project using Vue CLI already skip next step install create app using yarn global add vuecli vue create amplifyapp Navigate new directory check everything check continuing cd amplifyapp yarn serve Prerequisites going forward make sure gone instruction doc sign AWS Account install configure Amplify CLI Setting Amplify project AWS Amplify allows create different environment define preference setting new project need run command answer follows amplify init Enter name project amplifyapp Enter name environment dev Choose default editor Visual Studio Code Please choose type app you’re building javascript javascript framework using vue Source Directory Path src Distribution Directory Path dist Build Command npm runscript build Start Command npm runscript serve want use AWS profile Yes Please choose profile want use default point Amplify CLI initialised new project new folder amplify file folder hold project configuration amplifyapp amplify config currentcloudbackend backend teamproviderinfojson Installing AWS Amplify dependency Install required dependency AWS Amplify Vue using yarn add awsamplify awsamplifyuivue Adding authentication AWS Amplify provides authentication via auth category give u access AWS Cognito add authentication use following command amplify add auth prompted choose want use default authentication security configuration Default configuration want user able sign using Cognito User Pool 
Username want configure advanced setting Pushing change cloud running push command cloud resource provisioned created AWS account amplify push quickly check newly created Cognito User Pool run amplify status access AWS Cognito Console time go dashboard httpsconsoleawsamazoncomcognito Also sure region set correctly resource created start using Configuring Vue application Reference autogenerated awsexportsjs file src folder configure app open maints add following code last import import Vue vue import App Appvue import Amplify awsamplify import awsamplifyuivue import awsexports awsexports Amplifyconfigureawsexports VueconfigproductionTip false new Vue render h hApp mountapp Using Authenticator Component AWS Amplify provides UI component use app Let’s add component project order use authenticator component add srcAppvue template div idapp amplifyauthenticator div h1Hey userusernameh1 amplifysignoutamplifysignout div amplifyauthenticator div template script import AuthState onAuthUIStateChange awsamplifyuicomponents export default name app data return user created authentication state managament onAuthUIStateChangestate user set current user load data login state AuthStateSignedIn thisuser user script run app see authentication flow added front app component flow give user ability sign sign view user created go back Cognito Dashboard httpsconsoleawsamazoncomcognito Also sure region set correctly Alternatively also use amplify console auth Accessing User Data access user’s info using Auth API return promise import Auth awsamplify AuthcurrentAuthenticatedUserthenconsolelog Publishing app deploy host app AWS use hosting category amplify add hosting Select plugin module execute Amazon CloudFront S3 Select environment setup DEV hosting bucket name YOURBUCKETNAME index doc website indexhtml error doc website indexhtml everything set publish amplify publish Cleaning Services wipe resource project AWS Account running amplify delete Conclusion Congratulations successfully built first fullstack serverless app using Vue AWS Amplify Thanks following tutorial learnt provide authentication flow using authenticator component via service API use Amplify CLI execute common task including adding removing servicesTags JavaScript Aws Amplify Vuejs AWS
3,943
EaaS: Everything-as-a-Service
Traditional Service The easiest way to begin defining what a service is, is to define what it is not. A service is not a product. A service, in the context of business and economics, is a transaction in which no physical goods are exchanged — it is intangible. The consumer does not receive anything tangible or theirs to own from the provider. This is what differentiates services from goods (i.e. products) — which are tangible items traded from producer to consumer during an exchange, where the consumer then owns the good. Products compared to Services Most people are familiar with goods versus services, and many businesses offer a combination of both. For example, many banks offer physical products such as credit cards, and also offer services like financial advice and planning. However, this is a very simple example, and in today’s world — the differentiation between products and services is becoming increasingly blurred. Let’s expand on the banking example by considering the following: A tech consulting firm helps to develop a digital product — a mobile banking application for iOS — for their client in the retail arm of ACME Bank. They’re providing a service (software development) and the result is a tangible product (the app). ACME’s customers pay annual fees to the bank, but in return get both products and services. The mobile app, a digital product, can be used for services — like booking an appointment for financial advice or ordering a product — like a new credit card. The credit card is a physical product, but it is associated with the bank’s lending service. This example is still fairly straightforward, but you can see how products and services are blended to create better experiences for both producers and consumers. And examples like this are happening everywhere — in both traditional and emerging businesses, creating a service ecosystem. Traditionally, the “behind the scenes” work of developing and enabling products and services was performed through many different methods, sometimes in silos: Business Process Engineering, Traditional Project Management, Software Development, ITIL, Lean Six Sigma, etc. Today, in the very customer-centric and employee-centric world, the details of what goes into creating these “experiences” — through products and services — is loosely known as service design. Service Design So we understand services in the traditional sense — a consumer is serviced by the provider. Services, like products, don’t come out of thin air. Like products, services too need to be designed in order to provide the most satisfaction to both the consumer and the provider (i.e. employees). Even for product producers, service design is valuable when considering the experience of everyone involved in that value chain. It bridges the gaps between customer experience design and product design by considering everything in between. Services often involve many moving parts, and service design uses the following monikers to describe service components: People, Processes, and Props. The 3 P’s are common terms used to describe the “building blocks” of Service Design. Service Design, as you may infer from the name, utilizes the same concepts and methodologies from similar disciplines such as User Experience Design, Human-Computer Interaction, as well as Research, Ethnography, and Anthropology. There are methods to follow to ensure that experience is actually at the centre of the design, as opposed to just thinking about customers, or revenue, or brand image, for example. 
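As a rough, purely illustrative sketch (none of these names come from the article; the service, phases, and components are hypothetical), the People/Processes/Props decomposition can be captured as a simple data structure that a team might fill in during a blueprinting workshop:
```python
# Hypothetical sketch: one phase of a service blueprint broken into the 3 P's.
# All names below are placeholders, not taken from the article.
appointment_booking_phase = {
    "people": {
        "front_stage": ["customer", "receptionist"],      # visible to the customer
        "back_stage": ["scheduler", "support engineer"],  # invisible to the customer
    },
    "processes": ["check availability", "confirm booking", "send reminder"],
    "props": ["booking website", "calendar system", "confirmation email"],
}

def blueprint_summary(phase: dict) -> str:
    """Return a one-line summary of a blueprint phase for quick review."""
    return (
        f"{len(phase['people']['front_stage'])} front-stage and "
        f"{len(phase['people']['back_stage'])} back-stage people, "
        f"{len(phase['processes'])} processes, {len(phase['props'])} props"
    )

print(blueprint_summary(appointment_booking_phase))
```
Writing the building blocks down this explicitly is mainly a thinking aid: it forces the team to name the back-stage people, processes, and props that the customer never sees.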
High-Level Approach for Service Design The service design framework is an iterative process and is co-created with the individuals involved in the service delivery including customers. Each phase has inputs and outputs, for example, personas, journey maps, and service blueprints. To expand on the components of a service, let’s take a look at the high-level customer journey below — using a theoretical service for food ordering and pick-up. Very High-Level Journey: Food Order & Pick-up Service What are the building blocks (People, Processes, and Props) that make up the composition of this particular service? We know that it involves some technology, some people, and a physical place — but let’s dig a bit deeper. Building Blocks that enable the Journey above At a high-level, we can map the necessary components for each phase of the journey. Similar to the paradigm of software, services have front-end and back-end components. People, processes, and props that the customer sees, and ones they don’t. This is a very high-level view, and a service blueprint would normally contain many details of the front-end and back-end people, processes, and props involved various versions of journeys. Also, keeping in mind our “stacked dimension” model for describing the abstraction level of services — we can see that People, Props and Processes can loosely translate to things that occur in the Business, Application, and Technology/Physical layers: Business Actors like Employees, Digital Applications, Physical Structures, Hardware, Business Processes, etc. And that’s Service Design in a nutshell. Service Oriented Architecture (SOA) Around the turn of the millennium, a new systems design concept began to rise in popularity. “Service Oriented Architecture” became the craze in many organizations. Service orientation as an approach to architecture builds on similar paradigms as traditional services described above. Similar to how a large business would offer many traditional services, a large software system often has many different functions or purposes. Traditionally, before the emergence of SOA, large-scale systems were monolithic; built on a singular codebase, with shared components, and shared infrastructure, all tightly coupled. Service Orientation, on the other hand, is the concept of breaking up the app into components as “services”, each representing a specific business functionality. They are loosely coupled and working together to form the overall functionality of a large system. Picture monoliths as a large, 3-tier client-server systems: a presentation layer or client such as a graphical user interface, an application layer with lots of logic in the form of methods or functions, and a data access layer connecting the system to the underlying databases. High-Level Comparison of Architectural Styles To add more detail, “services” in an SOA model are discrete, self-contained functions, that communicate with other components over a common protocol. Integration of services with each other, with data stores, etc. is facilitated through service brokers or service buses. A presentation layer like a GUI can consume the service through this layer. Services — because of this — are accessible remotely and can be independently updated and maintained. For example, you could swap out one service for another in a larger application, without taking down the rest of the application. Talk about not putting all of your eggs in one basket! 
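To make that idea concrete, here is a minimal, hypothetical sketch of one such self-contained service: a single business function exposed over HTTP as JSON. The endpoint, function name, and pricing logic are invented for illustration and are not from the article.
```python
# Minimal sketch of a discrete, self-contained "service": one business
# capability behind an HTTP endpoint. A real SOA/microservice deployment
# would add authentication, logging, service discovery, etc.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def quote_shipping_cost(weight_kg: float) -> dict:
    """The single business capability this service owns (illustrative formula)."""
    return {"weight_kg": weight_kg, "cost_usd": round(5.0 + 1.2 * weight_kg, 2)}

class QuoteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /quote?weight=3.5
        query = parse_qs(urlparse(self.path).query)
        weight = float(query.get("weight", ["1"])[0])
        body = json.dumps(quote_shipping_cost(weight)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
```
Because consumers only ever see the endpoint, the logic behind it can be updated, scaled, or swapped independently, which is exactly the property described above.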
A primary goal of service orientation is to promote the re-use of functions, leading to efficiencies and simpler development of new applications which are modular and can consist of existing services. Despite the popularity and rise of SOA, there is not one sole industry standard for implementing it, and instead there are principles, patterns, and approaches to the concept developed by many organizations. A few examples of commonalities in the service-oriented architecture approach include: Services act as producers and consumers. The underlying logic of producer services is abstracted from the consumer — meaning they only see the endpoint. Services are both granular and composable. They are coarse-grained and represent a specific business purpose; not too granular as to reduce re-use and applicability elsewhere. Services can be combined to compose other services. Services are discoverable. They produce metadata which can be accessed over the common communication protocol and then interpreted. If this all sounds familiar — or confusing — it's because the concept of services and SOA are closely related to modern APIs, microservices, and integration standards. Microservices Similar to SOA, "microservices" is also an architectural style for building systems and applications that are loosely coupled. In the examples above, you'll notice that the "functionality" in an SOA model is broken up into smaller parts. However, SOA models often share a single or a few underlying databases or legacy systems, and rely on a single broker layer to pass data around. To build on this concept, microservices go a step further. They are discrete instances where the entire stack of a certain function is isolated, and various microservices are combined to compose a distributed system. High-Level View of a Distributed System composed of Microservices In contrast with SOA, there is no "broker" or "service bus" layer to connect various services. Instead, the microservices communicate over a lightweight protocol (such as HTTP). Just as service orientation provided many benefits in the early-mid 2000s, microservices also provide many benefits in the modern technology landscape. As teams shift to more Agile and DevOps practices, microservices provide a unique advantage in the fact that various smaller teams can independently build product features for a larger application. A key principle of microservices is that they perform only one thing, and perform that thing well. With cloud platforms and containers, teams are able to quickly stand up virtual or containerized infrastructure to build their independent services. This also provides benefits for operations teams: outages in a small service do not bring down the entire system. There are even not-so-tangible benefits, for example, teams having the ability to develop services in a technology stack (language, OS, database, etc.) that they're comfortable with — independent of the larger system. However — as with all things — benefits come with trade-offs. With larger, heterogeneous solutions composed of many microservices with underlying components to meet each use case, comes greater complexity and different architectural concerns.
For example, in traditional monolithic systems (and in some cases SOA implementations) — applications share a common database acting as a system of record. In the microservices model, each instance has its own database combining to create a system of record. This adds great flexibility for teams because a required schema change to one service’s data store doesn’t introduce changes for other teams. With that being said, you can likely imagine a scenario where services read or write data within the same domain, which can lead to challenges with data consistency and integrity. The growth in popularity of microservices and their complexity has led to many innovations in the cloud space, including service meshes or fabrics. We will leave this more advanced topic for another time. Software, Platforms, and Infrastructure delivered “as a Service” Getting back to our familiar, traditional definition of service, we have seen it being applied to the procurement of information technology. Traditionally, (as in pre-2000s) software was complicated to build and manage. And as mentioned above, it still is. But, for application teams building business software, a lot of the management of underlying technologies have been abstracted. Let’s walk through the abstraction layers one more time to illustrate how this all comes together. Abstraction model + Examples of components Think of each of these layers consisting of various components that were traditionally sold as products themselves. You would, at the very least, need to purchase a server which would power the applications you are developing and installing. You would need storage for any persistence of data. You would need hardware for communication with the network. And on it goes up the stack — operating systems, databases, etc. Moore’s Law and other advances in computing hardware have led way to extreme drops in the cost of computing hardware. And with cheaper hardware costs, companies are able to purchase large amounts of hardware and rent it as…you guessed it…a service. IaaS — Infrastructure as a Service Infrastructure as a service is a concept where providers — like Amazon, Microsoft, and Google — own large premises comprised of hardware and infrastructure. They rent virtualized access to these resources through pay-as-you-go subscription models. As mentioned earlier, this adds agility to start-ups and software teams who do not have the capital to purchase their physical resources. At the highest level, this can be compared with renting a home versus purchasing one. Using this analogy, if the start-up decides to “move” — or pivot, dissolve, sell, etc. — they can shut down their infrastructure without any long term costs or losses. The use of IaaS generally provides the same flexibility as if it was your infrastructure. This comes with the same overhead from a managerial and operational perspective — minus managing the premises, power, security, etc. For teams that need a more managed service, there are various “platforms” offered as a service. PaaS — Platform as a Service Platform-as-a-service builds on the IaaS layer, and adds abstractions which can be leveraged by development teams. PaaS hides away the complexities of dealing with operating systems, middlewares, and runtimes — and allows developers and operators to focus on writing and supporting the application, rather than infrastructure. To go further, PaaS can be run on top of any types of hardware. 
For companies not yet at public-cloud maturity, the option of running PaaS on their on-premise hardware is still viable. A platform services team could manage the core PaaS infrastructure, and development teams can focus on building/deploying. The big cloud players offer PaaS solutions on top of their IaaS — Amazon Elastic Beanstalk, Google App Engine, Azure Apps. There are also other offerings like Heroku, Cloud Foundry, and OpenShift — which will run in environments not specific to one provider. The spectrum of responsibility: Purple = fully managed as a service. As applications grow, the value proposition of PaaS solutions begin to wear off, considering scale often leads to unique requirements. The unique requirements in return often lead to infrastructure level challenges, which cannot be addressed through the abstraction of the application platform. SaaS — Software as a Service Finally, we finish off with Software-as-a-Service. SaaS refers to applications which are fully managed by the provider. This means that organizations can purchase software licenses, and immediately get to work without any development, management of infrastructure, or installation — it just works over the internet. However, large SaaS applications are not that simple. Referencing the diagram above, you’ll notice that there is still a bit of green in the SaaS column. Often, customers will need to configure the system to meet their business requirements, add users, add some security configurations, and so on. With that said, companies can begin realizing value very quickly, often without the involvement of IT.
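As a rough way to summarize the responsibility spectrum discussed above (the exact boundaries vary by provider and offering, so treat this as an approximation rather than the article's diagram):
```python
# Rough approximation of the "who manages what" split across delivery models.
# Boundaries differ between providers; this is illustrative only.
LAYERS = ["networking", "storage", "servers", "virtualization",
          "operating system", "middleware/runtime", "application", "data/config"]

RESPONSIBILITY = {
    # layers managed by the provider for each model; the customer owns the rest
    "on-premises": [],
    "IaaS": ["networking", "storage", "servers", "virtualization"],
    "PaaS": ["networking", "storage", "servers", "virtualization",
             "operating system", "middleware/runtime"],
    "SaaS": ["networking", "storage", "servers", "virtualization",
             "operating system", "middleware/runtime", "application"],
}

def customer_managed(model: str) -> list:
    """Return the layers the customer is still responsible for."""
    return [layer for layer in LAYERS if layer not in RESPONSIBILITY[model]]

for model in RESPONSIBILITY:
    print(f"{model:12} customer manages: {', '.join(customer_managed(model)) or 'almost nothing'}")
```
Even with SaaS, the "data/config" row never fully disappears: someone still has to set up users, permissions, and business configuration.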
https://medium.com/swlh/eaas-everything-as-a-service-5c12484b0b4e
['Ryan S.']
2019-06-25 15:33:04.354000+00:00
['Economics', 'Technology', 'Cloud Computing', 'Business', 'Design']
Title EaaS EverythingasaServiceContent Traditional Service easiest way begin defining service define service product service context business economics transaction physical good exchanged — intangible consumer receive anything tangible provider differentiates service good ie product — tangible item traded producer consumer exchange consumer owns good Products compared Services people familiar good versus service many business offer combination example many bank offer physical product credit card also offer service like financial advice planning However simple example today’s world — differentiation product service becoming increasingly blurred Let’s expand banking example considering following tech consulting firm help develop digital product — mobile banking application iOS — client retail arm ACME Bank They’re providing service software development result tangible product app ACME’s customer pay annual fee bank return get product service mobile app digital product used service — like booking appointment financial advice ordering product — like new credit card credit card physical product associated bank’s lending service example still fairly straightforward see product service blended create better experience producer consumer example like happening everywhere — traditional emerging business creating service ecosystem Traditionally “behind scenes” work developing enabling product service performed many different method sometimes silo Business Process Engineering Traditional Project Management Software Development ITIL Lean Six Sigma etc Today customercentric employeecentric world detail go creating “experiences” — product service — loosely known service design Service Design understand service traditional sense — consumer serviced provider Services like product don’t come thin air Like product service need designed order provide satisfaction consumer provider ie employee Even product producer service design valuable considering experience everyone involved value chain bridge gap customer experience design product design considering everything Services often involve many moving part service design us following moniker describe service component People Processes Props 3 P’s common term used describe “building blocks” Service Design Service Design may infer name utilizes concept methodology similar discipline User Experience Design HumanComputer Interaction well Research Ethnography Anthropology method follow ensure experience actually centre design opposed thinking customer revenue brand image example HighLevel Approach Service Design service design framework iterative process cocreated individual involved service delivery including customer phase input output example persona journey map service blueprint expand component service let’s take look highlevel customer journey — using theoretical service food ordering pickup HighLevel Journey Food Order Pickup Service building block People Processes Props make composition particular service know involves technology people physical place — let’s dig bit deeper Building Blocks enable Journey highlevel map necessary component phase journey Similar paradigm software service frontend backend component People process prop customer see one don’t highlevel view service blueprint would normally contain many detail frontend backend people process prop involved various version journey Also keeping mind “stacked dimension” model describing abstraction level service — see People Props Processes loosely translate thing occur Business Application 
TechnologyPhysical layer Business Actors like Employees Digital Applications Physical Structures Hardware Business Processes etc that’s Service Design nutshell Service Oriented Architecture SOA Around turn millennium new system design concept began rise popularity “Service Oriented Architecture” became craze many organization Service orientation approach architecture build similar paradigm traditional service described Similar large business would offer many traditional service large software system often many different function purpose Traditionally emergence SOA largescale system monolithic built singular codebase shared component shared infrastructure tightly coupled Service Orientation hand concept breaking app component “services” representing specific business functionality loosely coupled working together form overall functionality large system Picture monolith large 3tier clientserver system presentation layer client graphical user interface application layer lot logic form method function data access layer connecting system underlying database HighLevel Comparison Architectural Styles add detail “services” SOA model discrete selfcontained function communicate component common protocol Integration service data store etc facilitated service broker service bus presentation layer like GUI consume service layer Services — — accessible remotely independently updated maintained example could swap one service another larger application without taking rest application Talk putting egg one basket primary goal service orientation promote reuse function leading efficiency simpler development new application modular consist existing service Despite popularity rise SOA one sole industry standard implementing instead principle pattern approach concept developed many organization example commonality serviceoriented architecture approach include Services act producer consumer underlying logic producer service abstracted consumer — meaning see endpoint underlying logic producer service abstracted consumer — meaning see endpoint Services granular composable coarsegrained represent specific business purpose granular reduce reuse applicability elsewhere Services combined compose service coarsegrained represent specific business purpose granular reduce reuse applicability elsewhere Services combined compose service Services discoverable produce metadata accessed common communication protocol interpreted sound familiar — confusing — concept service SOA closely related modern APIs microservices integration standard Microservices Similar SOA “microservices” also architectural style building system application loosely coupled example you’ll notice “functionality” SOA model broken smaller part However SOA model often share single underlying database legacy system rely single broker layer pas data around build concept microservices go step discrete instance entire stack certain function isolated various microservices combined compose distributed system HighLevel View Distributed System composed Microservices contrast SOA “broker” “service bus” layer connect various service Instead microservices communicate lightweight protocol HTTP service orientation provided many benefit earlymid 2000s microservices also provide many benefit modern technology landscape team shift Agile DevOps practice microservices provide unique advantage fact various smaller team independently build product feature larger application key principle microservices perform one thing perform thing well cloud platform container team able 
quickly stand virtual containerized infrastructure build independent service also provides benefit operation team outage small service bring entire system even notsotangible benefit example team ability develop service technology stack language OS database etc they’re comfortable — independent larger system However —as thing — benefit come tradeoff larger heterogeneous solution composed many microservices underlying component meet use case come greater complexity different architectural concern example traditional monolithic system case SOA implementation — application share common database acting system record microservices model instance database combining create system record add great flexibility team required schema change one service’s data store doesn’t introduce change team said likely imagine scenario service read write data within domain lead challenge data consistency integrity growth popularity microservices complexity led many innovation cloud space including service mesh fabric leave advanced topic another time Software Platforms Infrastructure delivered “as Service” Getting back familiar traditional definition service seen applied procurement information technology Traditionally pre2000s software complicated build manage mentioned still application team building business software lot management underlying technology abstracted Let’s walk abstraction layer one time illustrate come together Abstraction model Examples component Think layer consisting various component traditionally sold product would least need purchase server would power application developing installing would need storage persistence data would need hardware communication network go stack — operating system database etc Moore’s Law advance computing hardware led way extreme drop cost computing hardware cheaper hardware cost company able purchase large amount hardware rent as…you guessed it…a service IaaS — Infrastructure Service Infrastructure service concept provider — like Amazon Microsoft Google — large premise comprised hardware infrastructure rent virtualized access resource payasyougo subscription model mentioned earlier add agility startup software team capital purchase physical resource highest level compared renting home versus purchasing one Using analogy startup decides “move” — pivot dissolve sell etc — shut infrastructure without long term cost loss use IaaS generally provides flexibility infrastructure come overhead managerial operational perspective — minus managing premise power security etc team need managed service various “platforms” offered service PaaS — Platform Service Platformasaservice build IaaS layer add abstraction leveraged development team PaaS hide away complexity dealing operating system middlewares runtimes — allows developer operator focus writing supporting application rather infrastructure go PaaS run top type hardware company yet publiccloud maturity option running PaaS onpremise hardware still viable platform service team could manage core PaaS infrastructure development team focus buildingdeploying big cloud player offer PaaS solution top IaaS — Amazon Elastic Beanstalk Google App Engine Azure Apps also offering like Heroku Cloud Foundry OpenShift — run environment specific one provider spectrum responsibility Purple fully managed service application grow value proposition PaaS solution begin wear considering scale often lead unique requirement unique requirement return often lead infrastructure level challenge cannot addressed abstraction application platform SaaS — 
Software Service Finally finish SoftwareasaService SaaS refers application fully managed provider mean organization purchase software license immediately get work without development management infrastructure installation — work internet However large SaaS application simple Referencing diagram you’ll notice still bit green SaaS column Often customer need configure system meet business requirement add user add security configuration said company begin realizing value quickly often without involvement ITTags Economics Technology Cloud Computing Business Design
3,944
Quickly
Quickly Read this quickly, faster, faster… Photo by Marc-Olivier Jodoin on Unsplash Read this quickly Faster Faster Hurry up Get in Sit down Don’t think Just do Early bird Don’t break Don’t stop Keep going Faster Get it right Make room Wind it up Never slow Never stop Outta time Outta this Outta that Not enough Need more Give ’em more In a hurry Faster still Never still Can’t breathe Can’t move Can’t decide CRASH! …breathe… Slow It Down. Feel your life. Look around. Capture it with an open palm. And let it wash your spirit.
https://medium.com/passive-asset/quickly-22a105bbeb49
['Kelly Neuer']
2020-12-14 21:20:22.798000+00:00
['Self-awareness', 'Meditation', 'Poetry On Medium', 'Culture', 'Society']
Title QuicklyContent Quickly Read quickly faster faster… Photo MarcOlivier Jodoin Unsplash Read quickly Faster Faster Hurry Get Sit Don’t think Early bird Don’t break Don’t stop Keep going Faster Get right Make room Wind Never slow Never stop Outta time Outta Outta enough Need Give ’em hurry Faster still Never still Can’t breathe Can’t move Can’t decide CRASH …breathe… Slow Feel life Look around Capture open palm let wash spiritTags Selfawareness Meditation Poetry Medium Culture Society
3,945
Serverless ETL using Lambda and SQS
AWS recently introduced a new feature that allows you to automatically pull messages from an SQS queue and have them processed by a Lambda function. I recently started experimenting with this feature to do ETL ("Extract, Transform, Load") on an S3 bucket. I was curious to see how fast, and at what cost, I could process the data in my bucket. Let's see how it went! Note: all the code necessary to follow along can be found at https://github.com/PokaInc/lambda-sqs-etl The goal Our objective here is to load JSON data from an S3 bucket (the "source" bucket), flatten the JSON and store it in another bucket (the "destination" bucket). "Flattening" (sometimes called "relationalizing") will transform the following JSON object: { "a": 1, "b": { "c": 2, "d": 3, "e": { "f": 4 } } } into { "a": 1, "b.c": 2, "b.d": 3, "b.e.f": 4 } Flattening JSON objects like this makes it easier, for example, to store the resulting data in Redshift or rewrite the JSON files to CSV format. Now, here's a look at the source bucket and the data we have to flatten. Getting to know the data Every file in the source bucket is a collection of un-flattened JSON objects:
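As a rough illustration of the flattening step (this is not the code from the linked repository, only a sketch of the same idea), a short recursive helper is enough to produce the dotted-key output shown above:
```python
# Minimal sketch of the "flattening" transformation described above.
# Not the repository's implementation; just an illustration of the idea.
def flatten(obj, parent_key="", sep="."):
    """Turn nested dicts into a single level of dotted keys."""
    flat = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep=sep))
        else:
            flat[new_key] = value
    return flat

print(flatten({"a": 1, "b": {"c": 2, "d": 3, "e": {"f": 4}}}))
# {'a': 1, 'b.c': 2, 'b.d': 3, 'b.e.f': 4}
```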
https://medium.com/poka-techblog/serverless-etl-using-lambda-and-sqs-d8b4de1d1c43
['Simon-Pierre Gingras']
2018-08-13 11:51:01.250000+00:00
['Python', 'S3', 'AWS', 'AWS Lambda', 'Serverless']
Title Serverless ETL using Lambda SQSContent AWS recently introduced new feature allows automatically pull message SQS queue processed Lambda function recently started experimenting feature ETL “Extract Transform Load” S3 bucket curious see fast cost could process data bucket Let’s see went Note code necessary follow along found httpsgithubcomPokaInclambdasqsetl goal objective load JSON data S3 bucket “source” bucket flatten JSON store another bucket “destination” bucket “Flattening” sometimes called “relationalizing” transform following JSON object 1 b c 2 3 e f 4 1 bc 2 bd 3 bef 4 Flattening JSON object like make easier example store resulting data Redshift rewrite JSON file CSV format here’s look source bucket data flatten Getting know data Every file source bucket collection unflattened JSON objectsTags Python S3 AWS AWS Lambda Serverless
3,946
Streaming Real-time data to AWS Elasticsearch using Kinesis Firehose
Explore how we can deliver real-time data streams to the Elasticsearch service using AWS Kinesis Firehose. Elasticsearch is an open-source solution that is used by many companies around the world for analytics. By definition, Elasticsearch is an open-source, RESTful, distributed, indexed search and analytics solution. The first part of the definition is that it is open-source, which means it is a community-driven solution and free to use. Next, it is RESTful, which means all communication and configuration can be done through simple REST HTTP API calls; Elasticsearch provides a feature-rich REST API framework for clients to consume. Next, distributed and indexed: this is where Elasticsearch differentiates itself from a common search solution that we tend to implement on top of our existing databases. Elasticsearch is distributed, which means functionality and stored data are spread across multiple resources, and it uses all of those resources to do its work, which makes it very efficient while also providing high availability. Next, it is indexed, which significantly improves data retrieval. Elasticsearch uses Apache Lucene for indexing, which is known to be an extremely fast indexing solution that makes search and analytics extremely fast. There are many other Elasticsearch concepts we could talk about. But since this article addresses a solution for streaming data into Elasticsearch, and not how to use Elasticsearch, I will not write about those concepts here. If you are a complete beginner with Elasticsearch I recommend you first learn at least the basic concepts before continuing through this article. There are many valuable posts and videos on the internet about Elasticsearch for beginners which I am sure you will be able to find. For this article, we are going to use the AWS Elasticsearch service because it is fully managed by AWS, so we do not need to worry about deployments, security, or building a fail-proof, highly available solution. AWS Kinesis Amazon Kinesis is a service provided by AWS for processing data in real-time. In the scenario we are going to talk about, data arrives at Kinesis as a stream, and Kinesis executes different kinds of functionality on it based on our requirements. There are many other data stream processing solutions in the community as well; Apache Kafka and Apache Spark are some of the examples currently in use. The main reason I chose Kinesis is that our Elasticsearch solution will also be created as an AWS service, and since the servers and failure handling are fully managed by AWS, it is easy for us to focus on the outputs rather than on managing the infrastructure. Before discussing Kinesis, let us first try to understand what data streams are and look at some examples of data streams. Data Streams Data streams are data that are generated and sent continuously from a data source. This data source is called a data producer in the streaming world. The main feature that distinguishes data streams from other forms of data sources is that the data is generated continuously, like a river. So rather than sending data in batches, in streaming we expect that data will be given to us continuously as a stream. Data streams mostly surface in the Big Data world.
You might think that if we inspect the data over a given time frame it looks small, so why are we talking about Big Data for data streams? But because the data is being sent continuously, all the time, it adds up to a very large quantity by the end of the day. Below are some examples of data streams found in the real world. Log files generated by a software application Financial stock market data User interaction data from a web application Device and sensor output from any kind of IoT device With these data streams we can either perform analytics in real time and emit analytical results as the output, or store the data and run analytics on top of it later. The first scenario is used in solutions that require real-time analytics, such as identifying traffic congestion or spotting patterns as they happen. In the second scenario we store the data in an analytical platform and run analytics on top of it afterwards. Elasticsearch is exactly that kind of analytical platform. But how do we handle the data stream coming from our data producers and send it to Elasticsearch? This is where Kinesis enters our use case. As mentioned earlier, Kinesis is a service used to process data streams. So what kinds of solutions does AWS Kinesis provide to handle them? At the time I am writing this article, AWS Kinesis offers four. Kinesis Data Streams — used to collect and process large streams of data records in real time Kinesis Data Firehose — used to deliver real-time streaming data to destinations such as Amazon S3, Redshift, Elasticsearch, etc. Kinesis Data Analytics — used to process and analyze streaming data using standard SQL Kinesis Video Streams — a fully managed service used to stream live video from devices Out of these four solutions, for our use case of loading data into Elasticsearch we are going to use Kinesis Data Firehose. Amazon Kinesis Data Firehose Kinesis Data Firehose is one of the four solutions provided by the AWS Kinesis service. Like the rest of Kinesis, it is fully managed by AWS, which means we do not need to worry about deployment, availability, or security at any level. Kinesis Firehose is used to deliver real-time streaming data to predefined service destinations. At the time of writing, AWS supports four destinations. Amazon S3 — an easy-to-use object storage Amazon Redshift — a petabyte-scale data warehouse Amazon Elasticsearch Service — an open-source search and analytics engine Splunk — an operational intelligence tool for analyzing machine-generated data As the title says, we are going to look at how to load data into the Elasticsearch destination. Before designing and implementing our solution, let us first look at some of the basic concepts of Kinesis Firehose.
Kinesis Data Firehose delivery stream — the main component of Firehose, the delivery stream through which data is sent to our destinations Data producer — the entity that sends records of data to Kinesis Data Firehose; producers are the source of our data streams Record — the data that a producer sends to the delivery stream. In a data stream these records are sent continuously, and each record is usually small, with a maximum size of 1,000 KB Buffer size and buffer interval — the configuration that determines how much buffering happens before records are delivered to the destination In order to process a data stream, data producers should continuously send data to our Firehose. So the next question is: how do these data producers send data to the Firehose? There are several ways. Kinesis Data Firehose PUT APIs — the PutRecord() or PutRecordBatch() API calls send source records to the delivery stream (a small producer using these APIs is sketched below). Amazon Kinesis Agent — a stand-alone Java application that offers an easy way to collect and send source records; the agent has to be installed on the data producer system. AWS IoT — create AWS IoT rules that send data from MQTT messages. CloudWatch Logs — use subscription filters to deliver a real-time stream of log events. CloudWatch Events — create rules that indicate which events are of interest to your application and what automated action to take when a rule matches an event. In our solution we are going to use the demo data stream provided by Kinesis Firehose, which sends data in the following format. {"ticker_symbol":"QXZ", "sector":"HEALTHCARE", "change":-0.05, "price":84.51} Now that we have a basic understanding of the components and services involved, let us begin to design our solution. The scenario is to deliver the data stream produced by our data producers to our analytical platform, the Elasticsearch service. Before implementing the solution we might ask the following questions about our requirements. Does the data need to be transformed before it is stored in Elasticsearch? If so, what kind of transformation is required? Do we need to keep the raw data even after transformation, for future use or as a backup? Let's assume the answer to all three questions is yes. Then we need a way to transform our data into a different format and a place to save the raw data. Fortunately, Kinesis Firehose already provides both of these, using AWS Lambda functions for the transformation. According to the diagram above, a Lambda function transforms the data to match our requirements before it is sent to the Elasticsearch service. In the meantime, the raw data is sent, untransformed, to an AWS S3 bucket, from which lifecycle rules can move it to AWS Glacier or other S3 storage classes according to our requirements.
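Before moving on to the AWS console setup, here is a minimal sketch of what such a data producer could look like in Python with boto3. This is an illustration only: the delivery stream name "my-delivery-stream" is a placeholder, and the record layout simply mimics the demo data format used in this article.

import json
import random
import boto3

# Assumes AWS credentials and a default region are already configured.
firehose = boto3.client("firehose")

def build_record():
    # Mimic the Firehose demo data format used in this article.
    return {
        "ticker_symbol": random.choice(["QXZ", "ABC", "XYZ"]),
        "sector": "HEALTHCARE",
        "change": round(random.uniform(-1, 1), 2),
        "price": round(random.uniform(10, 100), 2),
    }

# Each record must be bytes; a trailing newline keeps records separated downstream.
records = [
    {"Data": (json.dumps(build_record()) + "\n").encode("utf-8")}
    for _ in range(10)
]

response = firehose.put_record_batch(
    DeliveryStreamName="my-delivery-stream",  # placeholder name
    Records=records,
)
print("Failed records:", response["FailedPutCount"])

PutRecordBatch accepts up to 500 records per call, and a real producer would typically retry anything reported in FailedPutCount rather than just printing it.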
Now that the design is ready, let's go ahead and implement the designed solution in AWS. Creating the Elasticsearch service We are creating this Elasticsearch service for development and learning purposes only, so the configuration we select for the domain will not be the most secure and should not be followed for an enterprise solution. First, go to the AWS Elasticsearch service and create a new domain. Since we are using this domain only for learning, select the deployment type Development and testing and click next. Next, provide a domain name and select the instance type t2.small.elasticsearch, because that is the only instance type available in the free tier if you are on a free tier account. You can leave all the other options at their default values. On the next screen we are asked to configure security for our Elasticsearch domain. The recommended setup is VPC access, where the domain sits in a private network and only instances within our VPC can reach it; in our case, however, make it Public access. Further down the same page you will see where the access policy for the domain is set. Since this is only for learning, add Allow open access to the domain, which lets any IP address connect (again, acceptable only because this is for learning). Here we could instead specify an IAM role or specific AWS user accounts. That is it for creating our test Elasticsearch domain; after a couple of minutes it will be up and running, and creating our Elasticsearch service is done. Creating the Kinesis Data Firehose Go to the AWS Kinesis service, select Kinesis Firehose, and create a delivery stream. On the next screen give the stream a name and select the source Direct PUT or other sources. The other option here is to select a Kinesis Data Stream as the source, but in our scenario we do not have one and we send the data to the Firehose directly from our producer. We also have the option to enable server-side encryption for the data we stream; if you want it, enable it and provide the necessary encryption key, but for this article let's keep server-side encryption disabled. Next we are asked whether we need to transform the source records. Enable data transformation here. This prompts us for a Lambda function to handle the transformation, and since we have not created one yet, select Create new. A box opens where AWS offers the available Lambda blueprints. Select the first option, General Kinesis Data Firehose Processing, since we are going to write a custom function. This guides us to the Lambda function creation page. Before creating the Lambda function, first create an IAM role for it with permissions for the AWS Kinesis services. Here we are granting full access to Firehose even though we only need the "firehose:PutRecordBatch" permission; since this is only for learning purposes, full access will do. Make sure to attach the AWSLambdaExecute policy as well, which gives our function execution permission and lets it write logs to the CloudWatch service. After creating the role, go back to the Lambda function creation page, give the function a name, and select the IAM role we created earlier. As mentioned earlier, our data stream will have the following format.
{"ticker_symbol":"QXZ", "sector":"HEALTHCARE", "change":-0.05, "price":84.51} So for this article's purpose let’s assume that we do not need change property to be stored in our Elasticsearch service. Also let’s rename the ticker_sysmbol property to ticker_id to be stored as in Elasticsearch. Below is the lambda function that I have created using node.js in order to fulfill our transformation. As you know you can use any supported language in Lambda to create your function. exports.handler = async (event, context) => { /* Process the list of records and transform them */ const output = event.records.map((record) => { const payload =record.data; const resultPayLoad = { ticker_id : payload.ticker_symbol, sector : payload.sector, price : payload.price, }; return{ recordId: record.recordId, result: 'Ok', data: (resultPayLoad), } }); console.log(`Processing completed. Successful records ${output.length}.`); return { records: output }; }; As mentioned above we are doing a simple transformation of renaming a property and removing a unwanted property. Go ahead and create this Lambda function with the above code. Now go back to our Firehose creation page and select our created Lambda function. This will give us a warning saying that our Lambda function timeout is only 3 seconds and increase it at least for 1 minute. This warning is given because we are dealing with streaming data it may take some time to execute and complete this function and the default 3 seconds timeout will not be enough. We can do that by going to our function’s basic settings. Lambda supports up to 5 minutes of timeout. After that, we have also have an option to convert record format to either Apache Parquet format for Apache ORC format rather than using a JSON format. These converting is done using AWS Glue service by defining our schemas there. But in our scenario, we do not need that functionality. Next, we will be asked for the destination of our data stream. Select Elasticsearch from there and select our domain we created earlier and with the index that we are going to store our data with. Next we have the option of taking backups of our raw data. Here we can either select to backup only failed records or all records for future purposes. Here we need to select a S3 bucket, if you don’t have it already created we can create a new bucket using Create New. We can also append a prefix to the data stream that will be save on the S3 bucket so we can easily categorize our saved data inside the S3 bucket. Kinesis Data Firehose automatically appends the “YYYY/MM/DD/HH/” UTC prefix to delivered S3 files. Apart from that, we can add a custom prefix as well according to our requirements. The next page will display several configurations that we can modify. First is Elasticsearch buffer conditions. Firehose will buffer data before sending them to store in Elasticsearch, we can determine this buffer using two metrics, buffer size and buffer interval. So when either of these conditions fulfilled Kinesis Firehose will deliver our data to Elasticsearch. Since we are going to use Lamda functions to transform our data stream we need to comply with the AWS Lambda function payload limit, which is 6 MB. Next configuration is S3 compression and encryption. These configurations are related to the backup S3 bucket that we are going to use for raw data. Here we can either compress data in order to make them smaller or make them secure by encrypting data before strong in S3. 
Next, error logging is enabled by default, which logs errors to CloudWatch. Lastly, we need to create a new IAM role for the Firehose service itself, so go ahead and create one. With that, all the configuration is done, so go ahead and create the stream. It takes a few seconds for AWS to create the Firehose stream, and its status becomes Active once it is fully created. Now, to test that the system works according to our design, we can go to our stream and choose Test with demo data, which sends demo test data continuously to our Kinesis Firehose. We can first go to the Elasticsearch Kibana dashboard and verify that our data has been loaded with the transformation we applied, and then confirm that all the raw data is available in our S3 bucket as well. At this point we have finished the configuration and made sure it works using the demo stream. That is all I was hoping to cover in this article, but there are many more features provided by both the AWS Kinesis and Elasticsearch services, so make sure to explore them further. Thank you for reading this article. :)
https://medium.com/swlh/streaming-real-time-data-to-aws-elasticsearch-using-kinesis-firehose-74626d0d84f1
['Janitha Tennakoon']
2020-06-03 10:16:44.374000+00:00
['Big Data', 'Kinesis', 'AWS', 'Elasticsearch', 'Data']
Title Streaming Realtime data AWS Elasticsearch using Kinesis FirehoseContent Explore deliver realtime data using data stream Elasticsearch service using AWS Kinesis Firehose Elasticsearch opensource solution used many company around world analytics definition Elasticsearch opensource RESTful distributed indexed search analytics solution first part definition opensource solution mean communitydriven solution free use widely Next RESTful mean communication configuration done simple REST HTTP API call Elasticsearch developed featurerich REST API framework use client order consume Next distributed indexed Elasticsearch get differentiated common search solution tend implement top existing database Elasticsearch distributed mean functionality data stored using multiple resource use resource functionality make efficient meanwhile providing high availability well Next indexed improves significantly data retrieval Elasticsearch us Apache Lucene indexing known extremely fast indexing solution make search analytics extremely fast many concept need talk Elasticsearch since article going address solution stream data Elasticsearch use Elasticsearch try go write concept complete beginner Elasticsearch recommend first learn least basic concept used Elasticsearch continuing article many valuable post video internet Elasticsearch beginner sure able find article going use AWS Elasticsearch service fully managed AWS need worry deployment security making failproof high available solution AWS Kinesis Amazon Kinesis service provided AWS processing data realtime scenario going talk data coming Kinesis stream Kinesis execute kind functionality based requirement many data stream processing solution community well Apache Kafka Apache Spark example used currently community main reason chose Kinesis Elasticseach solution also created AWS service also since server failure fully managed AWS really easy u focus output rather managing infrastructure think going discus Kinesis let u first try understand data stream example data stream Data Streams Data stream data generated sent continuously data source data source called data producer streaming world main feature distinguishes data stream form data source generated continuously like river rather sending data batch streaming expect every time data given u mean stream Data stream mostly surface Big Data world may given time frame inspect data might small referring Big Data data stream talking continuously sending data every time mean large quantity data end day example data stream find real world Log file generated software application Financial stock market data User interaction data web application Device sensor output kind IoT device Thus data stream either analytics realtime provide analytical data output store data analytics top data first scenario used many solution realtime analytics required like identifying traffic congestion identifying different pattern etc second scenario store data analytical platform later analytics top Elasticsearch kind analytical platform perform analytical task handle send data Elasticsearch using data stream getting data producer thus come Kinesis use case mentioned earlier Kinesis service used process data stream kind solution AWS Kinesis provides handle kind data stream currently AWS Kinesis provides four type solution time writing article Kinesis Data Streams — used collect process large stream data record realtime — used collect process large stream data record realtime Kinesis Data Firehose — used deliver realtime streaming data 
destination Amazon S3 Redshift Elasticsearch etc — used deliver realtime streaming data destination Amazon S3 Redshift Elasticsearch etc Kineses Data Analytics — used process analyze streaming data using standard SQL — used process analyze streaming data using standard SQL Kinesis Video Streams — used fully manage service use stream live video device four solution provided see use case load data Elaslticsearch going use Kinesis Data Firehose Amazon Kinesis Data Firehose Kinesis Data Firehose one four solution provided AWS Kinesis service Kinesis service also fully managed AWS mean need worry deployment availability security every level Kinesis Firehose used deliver realtime streaming data predefined service destination writing article AWS support four destination Amazon S3 — easy use object storage Amazon Redshift — petabytescale data warehouse Amazon Elasticsearch Service — opensource search analytics engine Splunk — operational intelligent tool analyzing machinegenerated data mentioned title going look load data Elasticsearch destination implementing designing solution let u first look basic concept Kinesis Firehose Kinesis Data Firehose delivery stream — main component firehose main delivery stream sent destination main component firehose main delivery stream sent destination Data producer — entity sends record data Kinesis Data Firehose main source data stream — entity sends record data Kinesis Data Firehose main source data stream Record — data data producer sends Kinesis Firehose delivery stream data stream record sent continuously Usually record small max value 1000KB — data data producer sends Kinesis Firehose delivery stream data stream record sent continuously Usually record small max value 1000KB Buffer size buffer interval — configuration determine much buffering needed delivering destination order process data stream data producer continuously send data Firehose Thus question data producer send data Firehose several way data producer send data Firehose Kinesis Data Firehose PUT APIs — PutRecord PutRecordBatch API send source record delivery stream Amazon Kinesis Agent — Kinesis Agent standalone Java software application offer easy way collect send source record order load data Kinesis Agent agent available data producer system AWS IoT — Create AWS IoT rule send data MQTT message CloudWatch Logs — going use cloudwatch log use subscription filter deliver realtime stream log event CloudWatch Events — Create rule indicate event interest application automated action take rule match event solution going use demo data stream provided Kinesis Firehose demo data stream data sent following format tickersymbolQXZ sectorHEALTHCARE change005 price8451 Since basic understanding component service going use let u begin design solution scenario deliver data stream going created data producer analytical platform Elasticsearch service implementing solution might asking following question based requirement data need transformed stored Elasticsearch kind transformation required need store raw data even transformation future purpose backup Let’s assume scenario answer three question Yes need functionality transform data different format also save raw data location Fortunately Kinesis Firehose already provides functionality using AWS Lambda function According diagram sending data Elasticsearch service Lambda function transform data according requirement meantime raw data transformation sent AWS S3 bucket using lifecycle rule transfer either AWS Galcier S3 category according requirement since design 
ready let’s go ahead try implement designed solution AWS Creating Elasticsearch service going create Elasticsearch service development learning purpose configuration select domain secure configuration followed enterprise solution First Go AWS Elasticservice service create new domain Since using domain learning select Deployment type Development testing click next Next provide domain name select instance type t2smallelasticsearch instance type available free tier already using free tier account leave option default value without change next screen asked configure security elasticsearch domain recommended use VPC access domain private network instance within VPC network access case make Public access page see setting access policy domain Since learning add Allow open access domain make IP address access using learning purpose specify either IAM role specific AWS user account well creating testing elasticsearch domain couple minute elasticsearch domain running creating Elasticsearch service done Creating Kinesis Data Firehose Go AWS Kinesis service select Kinesis Firehose create delivery stream next screen give stream name select source Direct PUT source option select source Kinesis Data Stream scenario Kinesis data stream send data Firehose directly producer option enable serverside encryption data stream want encryption data enable provide necessary encryption key article let’s keep serverside encryption disabled Next asked whether need transform source record enable data transformation prompt u give Lambda function handle data transformation Since created Lambda function yet select Create new open box AWS prompt u already available lambda blueprint Select first option General Kinesis Data Firehose Processing use case since going custom function guide u Lambda function creation page creating Lambda function first create IAM role lambda function permission AWS Kinesis service providing full access Firehose need permission “firehosePutRecordBatch” since learning purpose let’s give full access Make sure add AWSLambdaExecute permission well give execution permission function well log permission create log cloudwatch service creating role let’s go Lambda function creation page give function name select IAM role created earlier mentioned earlier data stream following format tickersymbolQXZ sectorHEALTHCARE change005 price8451 article purpose let’s assume need change property stored Elasticsearch service Also let’s rename tickersysmbol property tickerid stored Elasticsearch lambda function created using nodejs order fulfill transformation know use supported language Lambda create function exportshandler async event context Process list record transform const output eventrecordsmaprecord const payload recorddata const resultPayLoad tickerid payloadtickersymbol sector payloadsector price payloadprice return recordId recordrecordId result Ok data resultPayLoad consolelogProcessing completed Successful record outputlength return record output mentioned simple transformation renaming property removing unwanted property Go ahead create Lambda function code go back Firehose creation page select created Lambda function give u warning saying Lambda function timeout 3 second increase least 1 minute warning given dealing streaming data may take time execute complete function default 3 second timeout enough going function’s basic setting Lambda support 5 minute timeout also option convert record format either Apache Parquet format Apache ORC format rather using JSON format converting done using AWS Glue 
service defining schema scenario need functionality Next asked destination data stream Select Elasticsearch select domain created earlier index going store data Next option taking backup raw data either select backup failed record record future purpose need select S3 bucket don’t already created create new bucket using Create New also append prefix data stream save S3 bucket easily categorize saved data inside S3 bucket Kinesis Data Firehose automatically appends “YYYYMMDDHH” UTC prefix delivered S3 file Apart add custom prefix well according requirement next page display several configuration modify First Elasticsearch buffer condition Firehose buffer data sending store Elasticsearch determine buffer using two metric buffer size buffer interval either condition fulfilled Kinesis Firehose deliver data Elasticsearch Since going use Lamda function transform data stream need comply AWS Lambda function payload limit 6 MB Next configuration S3 compression encryption configuration related backup S3 bucket going use raw data either compress data order make smaller make secure encrypting data strong S3 Next Error logging enabled default log error Cloudwatch lastly need create new IAM role Firehose service go ahead create new IAM role done configuration go ahead create stream take couple second AWS create firehose stream status stream become Active stream fully created order test system working according design go stream Test demo data send demo test data continuously Kinesis Firehose first go Elasticsearch Kibana dashboard verify data loaded accordingly appropriate transformation done Confirm raw data available S3 bucket well done configuration made sure working using demo stream hoping go using article many functionality provided AWS Kinesis Elasticsearch service make sure explore service Thank reading article Tags Big Data Kinesis AWS Elasticsearch Data
3,947
Basics of React.Js
Once you know JavaScript, you can immediately cause changes in your browser by manipulating the DOM. You can use JavaScript or jQuery to add or remove elements or change values when things are clicked, and build an interactive website. However, DOM manipulation requires a lot of manual steps. If you want to change a displayed number on a website with DOM manipulation, you need to manually reassign the value or text of that element to the new one. If you have a lot of elements that update when a single thing changes, then you need to manually reassign all of them. This is where React.js comes in! Creating A React App Create a react app. npx create-react-app appName cd appName npm start A new page should open up with a spinning atom on it. This means your app is running. Now open it up in the text editor of your choice. Open the src folder and the App.js file. This is the file mainly responsible for showing you the spinning atom page. Inside of the header, the img tag is responsible for the atom, the p tag is responsible for the line of white text, and the 'a' tag is responsible for the blue link at the bottom. Now delete the header. Your file should now only have the following code:

import React from 'react';
import logo from './logo.svg';
import './App.css';

function App() {
  return (
    <div className="App">
    </div>
  );
}

export default App;

You will see a white page. Now let's get started. As you may have guessed from the sample template, what a component renders looks very similar to basic HTML. The main difference is that everything is wrapped in divs. Add some paragraphs inside the div.

<div className="App">
  <p>Paragraph Line</p>
  <p>Paragraph Line</p>
  <p>Paragraph Line</p>
</div>

Now on the page you can see three paragraph lines. Instead of writing the same line three times, however, you can create a component that renders the line and then re-use that component. Functional Component Create a new file called Paragraph.js and enter the following code inside.

import React from "react";

const Paragraph = (props) => {
  return(
    <div>
      <p>Paragraph Line</p>
    </div>
  )
}

export default Paragraph;

This is called a functional component. It is just a function used for displaying information; it has no state or lifecycle (we will learn about these in a second). In our original App.js file, import the Paragraph component from the Paragraph.js file. import Paragraph from './Paragraph.js'; Then change the div to show:

return (
  <div className="App">
    <Paragraph />
    <Paragraph />
    <Paragraph />
  </div>
);

Now you are displaying the Paragraph component three times, and it looks exactly the same as earlier. The other type of component is called a class component. It has a lifecycle, meaning you can make it do things at certain times, such as when the component is first being displayed or about to be removed, and it has a state. Class Component Create a new file called Number.js.

import React from "react";

export default class Number extends React.Component {
  render() {
    return(
      <div>
        <p>1</p>
        <p>2</p>
        <p>3</p>
      </div>
    )
  }
}

Add our new Number class component to our App.js file.

return (
  <div className="App">
    <Paragraph />
    <Paragraph />
    <Paragraph />
    <Number />
  </div>
);

State To give it a state, add the following to it: state = { number: 0 } This gives the state a number value that is currently set to 0. Add a button that, when clicked, tells you information about the state.
import React from "react";

export default class Number extends React.Component {
  state = {
    number: 0
  }

  printNumber = () => {
    console.log(this.state)
  }

  render() {
    return(
      <div>
        <p>1</p>
        <p>2</p>
        <p>3</p>
        <button onClick={this.printNumber()}>Print State</button>
      </div>
    )
  }
}

Firstly, you will see that your state is printed, but you can also make it print a specific variable in the state, in this case the number variable. Change the printNumber function to: printNumber = () => { console.log(this.state.number) } Now it will print the number inside of the state. But you may notice that the number is printed automatically without you clicking the button, and that clicking the button doesn't actually print anything at all. This is because of the way the function is called inside of our button. <button onClick={this.printNumber()}>Print State</button> Right now, the function is being called the second the button is rendered. We want to make the button call the function on click. <button onClick={() => {this.printNumber()}}>Print State</button> The function now does just that. You can also make it so that something is passed into the function when the button is clicked, such as button information if you want a few buttons to run the same function but with different values based on the button (such as a calculator), or when entering information inside of a form. Change the button to: <button id="button" onClick={(event) => {this.printNumber(event)}}>Print State</button> Change the function to:

printNumber = (event) => {
  console.log(this.state.number)
  console.log(event)
  console.log(event.target)
  console.log(event.target.id)
}

Now when pressed, it will print state.number, the event that called it (a click), as well as the specific thing that triggered the event (the button). Like with state above, you can also access the specific attribute you want, such as id or class name, by doing event.target.attribute. You can change state with the setState function. this.setState({ number: this.state.number + 1 }) This will set the number inside of state to be one more than whatever it currently is. Wrap it inside a function:

addOne = () => {
  this.setState({ number: this.state.number + 1 })
  console.log(this.state.number)
}

And add a new button: <button id="button" onClick={(event) => {this.addOne(event)}}>Add Number</button> If you click it multiple times, you see that it prints a number each time and that the number increases each time. However, you will notice that each time it prints the old number first, then adds one, and that this process is repeated each time. It looks like the function is printing first and then adding to the number inside of state, but it should be doing the opposite! This is because setState is asynchronous and takes a small amount of time to finish, while console logging happens immediately. You can, however, make the print happen explicitly after state has finished changing. Change the addOne function to:

addOne = () => {
  this.setState({ number: this.state.number + 1 }, () => {
    console.log(this.state.number)
  })
}

Now it works as you would expect: the number is increased by 1 and the result is printed. Lifecycle You can make a component do things like run functions or change state at specific times. Add a new function that sets the state to something weird.
funkyState = () => {
  this.setState({ number: 999 })
}

We can make this function run as soon as the component is shown:

componentDidMount(){
  this.funkyState()
}

Now if you refresh the page and hit the Print State button, you will see that the number inside of state was instantly set to 999. Forms Create a new file called Form.js.

import React from "react";

export default class Form extends React.Component {
  render() {
    return(
      <div>
        <p>Hello, Name!</p>
        <form>
          Name: <input type="text" value='' />
          <input type="submit" value="Submit" />
        </form>
      </div>
    )
  }
}

Add it to our App.js file.

return (
  <div className="App">
    <Paragraph />
    <Paragraph />
    <Paragraph />
    <Number />
    <Form />
  </div>
);

This is a form where you can write your name and submit it. We want to make it so that the "Hello, Name!" line shows the name that you submit in the name bar. The value of the name bar is set to an empty string so that it doesn't display a value by default. However, this means that no matter what you type in, nothing will display. This is where state comes in. Set up state. state = { name: '', submittedName: '' } You can change the input line so that whenever it changes, it updates the name value inside of state, and its own value is updated to reflect the name inside of state. Add a function to handle the change and update the name input line. handleChange = (event) => { this.setState({ name: event.target.value }) } Name: <input type="text" value={this.state.name} onChange={event => {this.handleChange(event)}} /> This makes it so that whenever there is a change in the name box (such as when you are typing), the handleChange function is called and passed the typing changes you made, updating state.name. And since the value is now set to reflect state.name, whatever you type is also reflected inside of the bar. Change the hello line above to: <p>Hello, {this.state.submittedName}!</p> Right now submittedName is blank, so nothing will show up. Add the following submit function and changes to the form. submit = (event) => { this.setState({ submittedName: this.state.name }) } <form onSubmit={(event) => {this.submit(event)}}> Now enter a name and hit submit. You will see that your submitted name is displayed for a fraction of a second before disappearing. This is because the page refreshes on a submit. You need to add the following line to the submit function: event.preventDefault() This stops the normal auto-refresh.

submit = (event) => {
  this.setState({ submittedName: this.state.name })
  event.preventDefault()
}

Now enter a name and try it again, and it should work as you would expect! Props Information can be passed from one component to another in the form of props. Create two new files called 'sample1.js' and 'sample2.js'. Inside of sample1 enter the following code:

import React from "react";

const Sample1 = (props) => {
  return(
    <div>
      <p>Sample 1</p>
    </div>
  )
}

export default Sample1;

This will be a functional component. Import Sample1 from sample1.js and add it to the bottom of the Form component's div: <Sample1 /> The page will now display the line 'Sample 1'. If you want to pass props to it from the Form component, you do so like this: <Sample1 name={this.state.name} submittedName={this.state.submittedName}/> When you use the imported component, you also add: variableName={variableToPass} The variable name can be anything, although it is best to keep it consistent. To access props inside a functional component, do props.variableName, where variableName is the name given when passing it in.
Change the lines inside of sample1 to:

import React from "react";

const Sample1 = (props) => {
  return(
    <div>
      <p>props.name</p>
      <p>props.submittedName</p>
      <p>{props.name}</p>
      <p>{props.submittedName}</p>
    </div>
  )
}

export default Sample1;

You will see that the first two lines just say the words "props.name" and "props.submittedName", but the third and fourth lines are blank. If you start typing into the bar, the third line will start reflecting whatever you are typing. This is because it shows whatever is inside the name variable that was passed down, which is the name inside the state of the Form component. The fourth line shows submittedName, so it will only show a value after you hit submit and actually set it to something. Notice that the lines change by themselves without you having to manually assign new values to them, unlike if you were using regular HTML/DOM manipulation. Through the use of props and state, things automatically change to match the state whenever the state changes. Accessing props inside a class component is slightly different: you use this.props.variableName, where variableName is the name given when passing it in. Change the lines inside of sample2 to:

import React from "react";

export default class Sample2 extends React.Component {
  render() {
    return(
      <div>
        <p>this.props.name</p>
        <p>this.props.submittedName</p>
        <p>{this.props.name}</p>
        <p>{this.props.submittedName}</p>
      </div>
    )
  }
}

Everything else is basically the same. CSS Inside your folder, you will see that it already came with an 'App.css' file. Inside your App.js file you will see that the file is already imported at the top via the line: import './App.css'; This means any changes to that CSS file will be applied globally. CSS is pretty standard, with the exception that the 'class' attribute is instead called 'className'. If you go back to 'App.js' you will notice that the original div was given the className "App". If you change it to 'class1' and add the following inside of the CSS file:

.class1 {
  color: blue
}

You will see that now everything is no longer centered and that all of the text is blue. This is because the earlier formatting applied to members of the 'App' class. Miscellaneous If you want to run a function or get the value of a variable inside of a normal line, similar to the way you would use ${} inside a template string, here you use curly braces. Open up 'Number.js' again, add the following function to the top of the class: numberFunction = () => { return (9 + 9) } and add the following lines to the bottom of the div: <p>5 + 5</p> <p>{5 + 5}</p> <p>{this.numberFunction()}</p> The first line will just display the text '5 + 5', but the second will actually display 10. The last line similarly displays 18, which is the return value of numberFunction. Congratulations, you now know the basics of React.js! A useful addition to React is Redux, which you can find the basics of in my other article on Redux: https://medium.com/future-vision/redux-in-react-7f1776f2443d. Have fun making frontend applications with your new-found knowledge!
https://medium.com/swlh/basics-of-react-js-92ba04117bc
['Nicky Liu']
2020-01-26 00:11:46.067000+00:00
['Software Engineering', 'Programming', 'Software Development', 'React', 'JavaScript']
Title Basics ReactJsContent know JavaScript immediately cause change browser manipulating DOM use JavaScript jQuery add remove element change value thing clicked create interactive website However DOM manipulation requires excessive step want change displayed number website DOM manipulation need manually reassign value text element new one lot element update changing single thing need manually reassign Reactjs come Creating React App Create react app npx createreactapp appName cd appName npm start new page open spinning atom mean app running open text editor choice Open src folder Appjs file file mainly responsible showing spinning atom page Inside header img tag responsible atom p tag responsible line white text ‘a’ tag responsible blue link bottom delete header page following code import React react import logo logosvg import Appcss function App return div classNameApp div export default App see white page let’s get started may guessed sample template given display similar basic HTML main difference everything wrapped divs Add paragraph’s inside div div classNameApp pParagraph Linep pParagraph Linep pParagraph Linep div page see threeparagraph line Instead putting line three time however create component say line reuse component Functional Component Create new file called Paragraphjs enter following text inside import React react const Paragraph prop return div pParagraph Linep div export default Paragraph called functional component function used displaying information state time hook learn second original Appjs file import Paragraph Paragraphjs file import Paragraph paragraphjs Instead change div show return div classNameApp Paragraph Paragraph Paragraph div displaying Paragraph component three time look exactly earlier type component called class component life cycle meaning make thing certain time component first displayed removed state Class Component Create new file called Numberjs import React react export default class Number extends ReactComponent render return div p1p p2p p3p div Add new number class component appjs file return div classNameApp Paragraph Paragraph Paragraph Number div State give state add following state number 0 give state number value currently set 0 Add button clicked tell information state import React react export default class Number extends ReactComponent state number 0 printNumber consolelogthisstate render return div p1p p2p p3p button onClickthisprintNumberPrint Statebutton div Firstly see state printed also make print specific variable state case specifically number variable Change print number function printNumber consolelogthisstatenumber print number inside state may notice number printed automatically without clicking button clicking button actually doesn’t print anything addition way function called inside button button onClickthisprintNumberPrint Statebutton Right function called second button made want make button call function click button onClick thisprintNumberPrint Statebutton function also make something passed function button clicked button information want button run function different value based button calculator entering information inside form Change button button idbutton onClickevent thisprintNumbereventPrint Statebutton Change function printNumber event consolelogthisstatenumber consolelogevent consolelogeventtarget consolelogeventtargetid pressed print statenumber event called click well specific thing called even button Like state also access specific attribute want id class name eventtargetattribute change state setState 
function thissetState number thisstatenumber 1 set number inside state one whatever currently Wrap inside function addOne thissetState number thisstatenumber 1 consolelogthisstatenumber add new button button idbutton onClickevent thisaddOneeventAdd Numberbutton click multiple time see print number time number increase time However notice time print old number first add one process repeated time look like function printing first adding number inside state opposite setState take small amount time console logging instantaneous however set print happens explicitly state finished changing Change addOne function addOne thissetState number thisstatenumber 1 consolelogthisstatenumber work would expect number increased 1 result printed Lifecycle make component thing like run function change state specific time Add new function set state something weird funkyState thissetState number 999 make function run soon component shown componentDidMount thisfunkyState refresh page hit Print State button see number inside state instantly set 999 Forms Create new file called Formjs import React react export default class Form extends ReactComponent render return div pHello Namep form Name input typetext value input typesubmit valueSubmit form div Add appjs file return div classNameApp Paragraph Paragraph Paragraph Number Form div form write name submit want make “Hello Name” line show name submit name bar value name bar set empty string doesn’t display value default However mean matter type nothing display state come Set state state name submittedName change input line whenever change update namevalue inside state update value reflect name inside state Add function handle change update name input line handleChange event thissetState name eventtargetvalue Name input typetext valuethisstatename onChangeevent thishandleChangeevent make whenever change name box typing call handleChange function pas typing change made updating statename since value set reflect statename whatever type also reflected inside bar Change hello line pHello thisstatesubmittedNamep Right submittedName blank nothing show Add following submit function change form submit event thissetState submittedName thisstatename form onSubmitevent thissubmitevent enter name hit submit see submitted name displayed fraction second disappearing page refreshes submit need add following line submit function eventpreventDefault stop normal autorefresh submit event thissetState submittedName thisstatename eventpreventDefault enter name try work would expect Props Information passed one component another form prop Create two new file called ‘sample1js’ ‘sample2js’ Inside sample1 enter following code import React react const Sample1 prop return div pSample 1p div export default Sample1 functional component Import sample1 add bottom div Sample1 page display line ‘Sample 1’ want pas prop form component Sample1 namethisstatename submittedNamethisstatesubmittedName use imported component also add variableNamevariableToPass variable name anything although best make consistent access prop inside functional component propsvariableName variableName name given passing Change line inside sample1 import React react const Sample1 prop return div ppropsnamep ppropssubmittedNamep ppropsnamep ppropssubmittedNamep div export default Sample1 see first two line say word “propsname” “propssubmittedName” third fourth line blank start typing bar third line start reflecting whatever typing show whatever inside name variable passed name inside state first component second line show 
submittedName show value hit enter actually set something Notice line change without manually assign new value anything unlike using regular HTMLDOM manipulation use prop state thing automatically change match state whenever change state access prop inside class component slightly different use thispropsvariableName variableName name given passing Change line inside sample2 import React react export default class Sample2 extends ReactComponent render return div pthispropsnamep pthispropssubmittedNamep pthispropsnamep pthispropssubmittedNamep div Everything else basically CSS Inside folder see already came ‘Appcss’ file Inside Appjs file see file already imported top via line import Appcss mean change CSS file applied globally CSS pretty standard exception ‘class’ attribute instead called ‘className’ go back ‘Appjs’ notice original div given className “App” change ‘class1’ add following inside CSS file class1 color blue see everything longer centered text blue earlier formatting member ‘app’ class Miscellaneous want run function get value variable inside normal line similar way would normally use string use curly brace Add following function top numberFunction return 9 9 Open ‘Numberjs’ add following two line bottom p5 5p p5 5p pthisnumberFunctionp first line display line ‘5 5’ second actually display 10 last line similar display 18 return value numberFunction Congratulations know basic Reactjs useful addition React Redux find basic article Redux httpsmediumcomfuturevisionreduxinreact7f1776f2443d fun making frontend application newfound knowledgeTags Software Engineering Programming Software Development React JavaScript
3,948
AWS Lambda Event Validation in Python — Now with PowerTools
How can you improve on the already excellent Pydantic validation? Recently, I had the pleasure of contributing a new parser utility to an amazing and relatively new project on GitHub: AWS Lambda Powertools. This repo, which started out Python-oriented (but now supports other languages such as Java, with more to follow), provides an easy-to-use solution for Lambda logging, tracing (with CloudWatch metrics), SSM utilities, and now validation and advanced parsing of incoming AWS Lambda events. The new parser utility will help you achieve next-level validation. It's based on Pydantic and is marked as an optional utility. In order to install it, you will need to specify it like this: pip install aws-lambda-powertools[pydantic] For other, non-parsing uses of the library, such as the logger and metrics (and more), see this excellent blog post. Validation with Pydantic Well, currently, if you followed the guidelines in my previous blog post, you had to write (and now maintain) a handful of schemas for AWS Lambda events such as EventBridge, DynamoDB Streams, Step Functions, and more. Basically, a schema for each AWS event that a Lambda receives. You figured out how to write these Pydantic schemas by either looking at the AWS documentation or printing the event JSON. It's an important process, but it can get tedious quickly. Let's take a look at an EventBridge event: the detail field is a dictionary which describes the user schema, the actual message that we would like to extract, validate, and parse. In order to achieve that, a Pydantic schema for the envelope and the user message can be used, together with a Lambda handler that parses the event (a sketch of this manual approach follows below). It works. You have to write a little bit of code, but it works. The problem with this solution is that you need to maintain an AWS event schema which might get changed by AWS at some point. When it does, the validation schema fails and raises a validation exception at runtime. Not ideal, to say the least. However, it can get better, much better!
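As a rough illustration of that manual approach (not the Powertools parser itself), here is a minimal sketch in Python. The MyMessage fields are made up for this example; only the envelope fields follow the standard EventBridge event structure, and a real schema would mirror whatever your producer actually sends.

from typing import List

from pydantic import BaseModel, Field, ValidationError


class MyMessage(BaseModel):
    # Hypothetical business payload carried in the EventBridge "detail" field.
    name: str
    value: int


class EventBridgeEnvelope(BaseModel):
    # The AWS envelope you have to write and maintain yourself.
    version: str
    id: str
    source: str
    account: str
    time: str
    region: str
    resources: List[str]
    detail_type: str = Field(..., alias="detail-type")
    detail: MyMessage


def handler(event, context):
    try:
        parsed = EventBridgeEnvelope.parse_obj(event)
    except ValidationError:
        # Either your message changed or AWS changed the envelope; it blows up at runtime.
        raise
    message = parsed.detail  # a fully validated MyMessage instance
    return {"received": message.name}

Every envelope field here is something you now own and must keep in sync with AWS, which is exactly the kind of maintenance the new parser utility is meant to take off your hands.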
https://medium.com/cyberark-engineering/aws-lambda-event-validation-in-python-now-with-powertools-431852ac7caa
['Ran Isenberg']
2020-11-18 08:41:52.554000+00:00
['Validation', 'Python', 'AWS Lambda', 'Software Development', 'AWS']
Title AWS Lambda Event Validation Python — PowerToolsContent improve already excellent Pydantic validation Recently pleasure contributing new parser utility code amazing relatively new project Github AWS Lambda Powertools repo started Python oriented support language Java follow provides easy use solution lambda logging tracing cloud watch metric SSM utility validation advanced parsing incoming AWS Lambda event new parser utility help achieve next level validation It’s based Pydantic it’s marked optional utility order install need specify like pip install awslambdapowertoolspydantic non parsing usage library logger metric see excellent blog post Validation Pydantic Well currently followed guideline previous blog post write maintain handful AWS Lambda event Eventbridge DynamoDB stream Step function Basically schema AWS event lambda receives found write Pydantic schema either looking AWS documentation printing event JSON It’s important process get tedious quickly Let’s take look Eventbridge event detail field dictionary describes user schema actual message would like extract validate parse order achieve following Pydantic schema used lambda handler par event look like work write little bit code work problem solution need maintain AWS event schema might get changed AWS point validation schema fails raise validation exception runtime ideal say least However get better much betterTags Validation Python AWS Lambda Software Development AWS
3,949
We May Have Misunderstood Myelin
Myelin is often referred to as an insulator because it drastically increases the speed of action potential propagation; an axon ensheathed in myelin can send signals up to 300x faster than its unmyelinated counterparts. This enhanced conduction speed allows rapid communication between distal parts of a spread-out body plan. For example, a nerve impulse travels a great distance — from your foot to your brain and back to your foot — in a tiny fraction of a second. And this is a highly valuable feature of a living body — your chances of survival in dangerous situations depend largely on your reaction time. For this reason, myelin is a feature found in every vertebrate and may be an evolutionary requirement for the existence of vertebrates in general. Myelin: More Than A Golden Insulator Referring to myelin as an "insulating sheath" evokes an inaccurate equivalence between the brain and a modern electronic device, as if myelin were a piece of non-conductive material that wraps around an electrical wire. But myelin does so much more than function as an inert piece of insulation. As I mentioned before, the brain's glial cells play the vital role of providing nutrients to power-hungry neurons. This is extremely important for oligodendrocytes because the insulating nature of myelin also isolates axons from the extracellular space. Whereas many types of cells can simply collect their own food from the environment, myelinated neurons have no access to the outside world. Oligodendrocytes are in the perfect position to spoon-feed neurons that constantly need to meet the energy demands of their firing axons. Oligodendrocytes traffic RNA across microtubules (which are lit up in this GIF) to maintain myelin through local translation. They also transport energy substrates, like lactate, through myelin into actively firing axons. Image Credit: @Meng2fu The importance of this relationship between oligodendrocytes and neurons cannot be overstated — proper myelination is absolutely necessary for our health. Evidence has linked abnormalities in myelin structure and function to a wide range of diseases. For example,
Multiple sclerosis is caused when the immune system selectively attacks and degrades myelin
Cognitive disturbances in schizophrenia seem to be related to oligodendrocyte and myelin dysfunction
Post-mortem observations have seen that people with major depression show changes in myelinated regions of the brain
Children with autism show an antibody response to myelin, which is similar to the etiology of multiple sclerosis
Veterans with post-traumatic stress disorder (PTSD) actually show increased myelination in parts of the brain
Clearly, oligodendrocytes need to maintain a delicate balance of myelination to ensure neuronal health and brain function. Even beyond disease and dysfunction, myelination is a highly dynamic process that is constantly changing in response to the growth and behavior of an organism. For example, action potential conduction delays remain the same throughout development despite a huge increase in the distance that nerve impulses are required to travel (as the body gets bigger over time).
In addition, extensive piano practice induces large-scale myelination changes in specific brain regions as a result of learning a new motor skill. But how can oligodendrocytes respond so dynamically to the needs of individual axons, the growth of a body, and the behavior of an organism? It had been hypothesized that a form of rapid communication between neurons and oligodendrocytes must be maintained in order to support these dynamic and responsive changes in myelin structure. And indeed, such a structure was recently discovered underneath the many layers of the myelin sheath: a hidden synapse between an axon and its myelin. The Axo-Myelinic Synapse Synapses are considered to be the major form of information processing in the brain. They are defined by Oxford Lexico as “a junction between two nerve cells, consisting of a minute gap across which impulses pass by diffusion of a neurotransmitter.” But synapses aren’t found only between nerve cells. The evidence is mounting that axo-myelinic synapses are formed within every segment of myelin and its underlying axon. These synapses seem to work in exactly the same way as a synapse between two neurons: an action potential causes the fusion of vesicles with the cell membrane, which dumps the contents of the vesicles (neurotransmitters) into the synaptic cleft. Then, protein receptors on the post-synaptic neuron sense the incoming chemical signal and respond accordingly. Action potentials induce the release of neurotransmitter at the synapse, which binds to post-synaptic receptors to induce a response in the receiving neuron. Image Credit: Wikimedia Commons In the case of neuron-to-neuron synapses, neurotransmitters will sometimes induce an action potential in the post-synaptic cell. But this doesn’t really apply to axo-myelinic synapses, because oligodendrocytes don’t produce action potentials. However, there are many other downstream effects that occur when neurotransmitters bind to receptors on any post-synaptic cell. Unlike a computer, brain cells don’t need to use electrical signals to transmit important information; cellular signaling often happens through vastly complicated networks of cascading chemical reactions. In this way, axo-myelinic synapses are a completely untapped therapeutic target that is likely involved in all myelin-related diseases. But practically nothing is known about the function of these newly discovered synapses. Despite the lack of hard evidence, we can make some educated speculations about the role of axo-myelinic communication in our brains. (I won’t dig too deeply into the details here, so if you want more information about the possible molecular mechanisms you can check out my qualifying exam and its associated figures). There are three probable ways in which axo-myelinic synapse activity could affect neurons and oligodendrocytes. We’ve already briefly touched on the first: the activity-dependent delivery of vital nutrients. As we discussed, myelin prevents axons from taking in vital nutrients, so oligodendrocytes deliver the food directly to the neuron. It seems extraordinarily likely that the amount of energy delivered to a given neuron would depend on the activity level of specific axons. And this information would be easily communicated through axo-myelinic synapses (see the top portion of the figure below). Secondly, axo-myelinic synapses are an attractive mechanism to explain how oligodendrocytes determine the thickness of their myelin wrappings.
By using axo-myelinic synapses as a read-out of neuronal activity, an oligodendrocyte could fine tune the structure of each individual myelin sheath to modulate the speed of neuronal signaling (see the bottom portion of the figure below). This is a figure I created to visualize the hypothetical ways in which axo-myelinic synapses might function. Top part of the figure: PKC is immediately activated by low frequency action potential firing which perfuses lactate into myelin through monocarboxylate transporter 1 (MCT) where it is delivered to actively firing axons through MCT2. Bottom part of the figure: High frequency action potentials cause a delayed inhibition of MAPK through myelinic NMDA receptor activation which enhances mRNA translation in myelin. Finally, the interplay of these two mechanisms raises other fascinating possibilities. Because oligodendrocytes myelinate up to 60 different axons, they are in a perfect position to control the activity of an entire neuronal network. By receiving information from hundreds of axo-myelinic synapses, an oligodendrocyte could selectively provide nutrients to certain axons while also changing the structure of individual myelin sheaths. The combination of these abilities provides oligodendrocytes with an immense power to orchestrate the information processing of a group of neurons. An oligodendrocyte could starve out specific neurons by withholding nutrients, or it could alter the myelination on particular axons to synchronize activity across a network. We don’t really know if any of this happens, but the mere possibilities are mind-boggling. Whatever the actual functions of axo-myelinic synapses turn out to be, oligodendrocytes clearly play a much more important role in the brain than they receive credit for.
https://medium.com/medical-myths-and-models/youve-been-misled-about-myelin-d6238691704b
['Ben L. Callif']
2020-02-04 05:50:55.312000+00:00
['Neuroscience', 'The Law Of The Instrument', 'Brain', 'Myelin', 'Biology']
Title May Misunderstood MyelinContent Myelin often referred insulator drastically increase speed action potential propagation axon ensheathed myelin send signal 300x faster unmyelinated counterpart enhanced conduction speed allows rapid communication occur distal part spread body plan example nerve impulse travel great distance — foot brain back foot — tiny fraction second highly valuable feature living body — chance survival dangerous situation depend largely reaction time reason myelin feature found every vertebrate may evolutionary requirement existence vertebrate general Myelin Golden Insulator Referring myelin “insulating sheath” evokes inaccurate equivalence brain modern electronic device myelin piece nonconductive material wrap around electrical wire myelin much function inert piece insulation mentioned brain’s glial cell play vital role providing nutrient powerhungry neuron extremely important oligodendrocyte insulating nature myelin also isolates axon extracellular space Whereas many type cell simply collect food environment myelinated neuron access outside world Oligodendrocytes perfect position spoonfeed neuron constantly need meet energy demand firing axon Oligodendrocytes traffic RNA across microtubule lit GIF maintain myelin local translation also transport energy substrate like lactate myelin actively firing axon Image Credit Meng2fu importance relationship oligodendrocyte neuron cannot understated — proper myelination absolutely necessary health Evidence linked abnormality myelin structure function wide range disease example Multiple sclerosis caused immune system selectively attack degrades myelin caused immune system selectively attack degrades myelin Cognitive disturbance schizophrenia seem related oligodendrocyte myelin dysfunction seem related oligodendrocyte myelin dysfunction Postmortem observation seen people major depression show change myelinated region brain show change myelinated region brain Children autism show antibody response myelin similar etiology multiple sclerosis show antibody response myelin similar etiology multiple sclerosis Veterans posttraumatic stress disorder PTSD actually show increased myelination part brain Clearly oligodendrocyte need maintain delicate balance myelination ensure neuronal health brain function Even beyond disease dysfunction myelination highly dynamic process constantly changing response growth behavior organism example action potential conduction delay remain throughout development despite huge increase distance nerve impulse required travel body get bigger time addition extensive piano practice induces largescale myelination change specific brain region resulting learning new motor skill oligodendrocyte respond dynamically need individual axon growth body behavior organism hypothesized form rapid communication neuron oligodendrocyte must maintained order support dynamic responsive change myelin structure indeed structure recently discovered underneath many layer myelin sheath hidden synapse axon myelin AxoMyelinic Synapse Synapses considered major form information processing brain defined Oxford Lexico “a junction two nerve cell consisting minute gap across impulse pas diffusion neurotransmitter” synapsis aren’t found nerve cell evidence mounting axomyelinic synapsis formed within every segment myelin underlying axon synapsis seem work exactly way synapse two neuron action potential cause fusion vesicle cell membrane dump content vesicle neurotransmitter synaptic cleft protein receptor postsynaptic neuron sense incoming 
chemical signal respond accordingly Action potential induce release neurotransmitter synapse bind postsynaptic receptor induce response receiving neuron Image Credit Wikimedia Commons case neurontoneuron synapsis neurotransmitter sometimes induce action potential postsynaptic cell doesn’t really apply axomyelinic synapsis oligodendrocyte don’t produce action potential However many downstream effect occur neurotransmitter bind receptor postsynaptic cell Unlike computer brain cell don’t need use electrical signal transmit important information cellular signaling often happens vastly complicated network cascading chemical reaction way axomyelinic synapsis completely untapped therapeutic target likely involved myelinrelated disease practically nothing known function newly discovered synapsis Despite lack hard evidence make educated speculation role axomyelinic communication brain won’t dig deeply detail want information possible molecular mechanism check qualifying exam associated figure three probable way axomyelinic synapse activity could affect neuron oligodendrocyte We’ve already briefly touched first activitydependent delivery vital nutrient discussed myelin prevents axon taking vital nutrient oligodendrocyte deliver food directly neuron seems extraordinarily likely amount energy delivered given neuron would depend activity level specific axon information would easily communicated axomyelinic synapsis see top portion figure Secondly axomyelinic synapsis attractive mechanism explain oligodendrocyte determine thickness myelin wrapping using axomyelinic synapsis readout neuronal activity oligodendrocyte could fine tune structure individual myelin sheath modulate speed neuronal signaling see bottom portion figure figure created visualize hypothetical way axomyelinic synapsis might function Top part figure PKC immediately activated low frequency action potential firing perfuses lactate myelin monocarboxylate transporter 1 MCT delivered actively firing axon MCT2 Bottom part figure High frequency action potential cause delayed inhibition MAPK myelinic NMDA receptor activation enhances mRNA translation myelin Finally interplay two mechanism raise fascinating possibility oligodendrocyte myelinate 60 different axon perfect position control activity entire neuronal network receiving information hundred axomyelinic synapsis oligodendrocyte could selectively provide nutrient certain axon also changing structure individual myelin sheath combination ability provides oligodendrocyte immense power orchestrate information processing group neuron oligodendrocyte could starve specific neuron withholding nutrient could alter myelination particular axon synchronize activity across network don’t really know happens mere possibility mindboggling Whatever actual function axomyelinic synapsis turn oligodendrocyte clearly play much important role brain receive credit forTags Neuroscience Law Instrument Brain Myelin Biology
3,950
Kubernetes — Learn Init Container Pattern
Kubernetes — Learn Init Container Pattern Understanding Init Container Pattern With an Example Project Photo by Judson Moore on Unsplash Kubernetes is an open-source container orchestration engine for automating deployment, scaling, and management of containerized applications. A pod is the basic building block of a Kubernetes application. Kubernetes manages pods instead of containers, and pods encapsulate containers. A pod may contain one or more containers, storage, IP addresses, and options that govern how containers should run inside the pod. A pod that contains one container refers to a single container pod, and it is the most common Kubernetes use case. A pod that contains multiple co-related containers refers to a multi-container pod. There are a few patterns for multi-container pods; one of them is the init container pattern. In this post, we will see this pattern in detail with an example project. What are Init Containers Other Patterns Example Project Test With Deployment Object How to Configure Resource Limits When should we use this pattern Summary Conclusion What are Init Containers Init Containers are the containers that should run and complete before the startup of the main container in the pod. This provides a separate lifecycle for the initialization so that it enables separation of concerns in the applications. For example, if you need to install some specific software before you run your application, you can do that installation part in the Init Container of the pod. Init Container Pattern If you look at the above diagram, you can define any number of Init containers, and your main container starts only after all the Init containers are terminated successfully. All the Init containers will be executed sequentially, and if there is an error in an Init container, the pod will be restarted, which means all the Init containers are executed again. So, it's better to design your Init container as simple, quick, and idempotent.
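To make the pattern concrete before the example project, here is a minimal sketch of a pod with one init container, written with the official Kubernetes Python client (pip install kubernetes) rather than a YAML manifest. The image names, the wait command, and the namespace are illustrative assumptions, not details taken from the article.

```python
# Hypothetical sketch of the init container pattern using the official
# Kubernetes Python client; names and images are illustrative only.
from kubernetes import client, config


def build_pod() -> client.V1Pod:
    # Init container: must run to completion before the main container starts.
    wait_for_db = client.V1Container(
        name="wait-for-db",
        image="busybox:1.34",
        command=["sh", "-c", "until nslookup my-db; do echo waiting; sleep 2; done"],
    )
    # Main application container: starts only after every init container succeeds.
    app = client.V1Container(name="app", image="my-app:latest")
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="init-demo"),
        spec=client.V1PodSpec(init_containers=[wait_for_db], containers=[app]),
    )


if __name__ == "__main__":
    config.load_kube_config()  # assumes a working local kubeconfig
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=build_pod())
```

If the init container fails, the kubelet re-runs it (according to the pod's restart policy) before the main container ever starts, which is exactly why the article stresses keeping init logic simple, quick, and idempotent.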
https://medium.com/bb-tutorials-and-thoughts/kubernetes-learn-init-container-pattern-7a757742de6b
['Bhargav Bachina']
2020-09-24 20:23:20.730000+00:00
['Software Engineering', 'DevOps', 'Software Development', 'Docker', 'Kubernetes']
Title Kubernetes — Learn Init Container PatternContent Kubernetes — Learn Init Container Pattern Understanding Init Container Pattern Example Project Photo Judson Moore Unsplash Kubernetes opensource container orchestration engine automating deployment scaling management containerized application pod basic building block kubernetes application Kubernetes manages pod instead container pod encapsulate container pod may contain one container storage IP address option govern container run inside pod pod contains one container refers single container pod common kubernetes use case pod contains Multiple corelated container refers multicontainer pod pattern multicontainer pod init container pattern post see pattern detail example project Init Containers Patterns Example Project Test Deployment Object Configure Resource Limits use pattern Summary Conclusion Init Containers Init Containers container run complete startup main container pod provides separate lifecycle initialization enables separation concern application example need install specific software want run application installation part Init Container pod Init Container Pattern look diagram define n number container Init container main container start Init container terminated successfully init Containers executed sequentially error Init container pod restarted mean Init container executed better design Init container simple quick IdompodentTags Software Engineering DevOps Software Development Docker Kubernetes
3,951
The Free and Easy Way to Improve School Culture
All it takes is 30 minutes and a pair of shoes. Photo by Arek Adeoye on Unsplash What’s the purpose of school? No, really. I want you to think about that question for a few minutes before you read on. Did you come up with an answer? I bet it has something to do with educating kids so that they can grow up into fully functional adults. If it does, then I would pretty much agree with you. My next question is this: are schools serving their purpose? I wish we were at a coffee shop so you could really give me your answer to this question. I know it’s a complex question, and your answer will depend on your own experiences in school, where you live and any number of other factors. I would bet you an avocado toast and an Americano that your answer to the above question is no. Schools are easy to criticize and hard to defend. Everyone feels like schools are failing, students are failing and teachers are failing. If you feel like this, it’s not that you’re wrong. The thing that bothers me though, as a high school teacher and strong proponent of public education, is that while nearly everyone agrees that schools need to do better, it’s rare for people to galvanize their support behind common sense ideas and initiatives that work for students. There is no shortage of federal programs, private software companies, professional development consulting agencies, and whack-a-doo secretaries of education lining up to push products, mandates and curriculum onto schools. Probably some of them are great, or they could be if they weren’t replaced with some other, newer version before the ink has dried on the contract or the check has been cashed. I’m one of the teachers that rolls my eyes when there is a new set of acronyms to learn at the beginning of the school year — RTIs, IEPs, 504s, ESSA. Out with the old and in with the new. At their heart though, the purpose of any of these programs is for kids to feel connected and noticed while they are at school so that they are mentally and emotionally prepared and willing to learn. Walk your way to a better school Here’s my plan to get rid of acronyms, cut spending, improve teacher retention, fight childhood obesity, decrease student stress and anxiety and improve attendance. Oh, did I mention: it’s free! The plan is this: every student should go for a 30-minute walk every day. I teach at a charter school, where I am fortunate to have the freedom to try out crazy ideas like the one above. The original goal of charter schools is to pilot interesting ideas on a small scale to see if they are scalable to larger schools. For the past 30 days, I have been going for walks with a group of 13 students. I make little maps of different neighborhoods within walking distance of my school, pass them out with arrows indicating the directions, and we take off. Sounds crazy, right? Let me convince you why this is a good idea and then explain how feasible it is for any size school in any location. A Pyramid of Benefits Photo by Emma Simpson on Unsplash The first benefit of this practice is improvements to physical health. If you haven’t seen that crazy popular YouTube video about the health impacts of walking for 30 minutes each day, you should first get up and go for a walk and then watch the video. Walking is a low-impact way to ward off obesity, diabetes, hypertension and more. The second benefit of going for daily walks is social engagement. When I started out on this endeavor in early September, I didn’t know any of the students well at all.
As we walked shoulder to shoulder on sidewalks, trails and parking lots, I had great conversations with each and every one of them. I learned about little snippets of their lives that would never come up in a regular class period or even a fifteen-minute check-in with a guidance counselor. I’m not just nosy; this information is helpful in figuring out what motivates a student or troubleshooting their lack of motivation. We also ran into lots of people from the town who interacted with my students. Too often ‘school’ means being contained within the four walls of a classroom. It was great for students to help senior citizens bump their walkers up over the curb and to step off the sidewalk to allow a mother pushing a stroller to pass. These are little things, but I’d never think to address them in the classroom. Students also benefited from socializing with each other in a safe, somewhat controlled setting. Trust me, I wasn’t keyed in listening to every conversation, but they knew that it was not an appropriate setting for certain topics. It was great to hear them talking to each other about learning to drive, getting their first jobs or even giving relationship advice. Students that wouldn’t normally talk or hang out had the opportunity to socialize in a healthy, productive way. A third benefit is the connections that students made between academic content areas and the real world. An example is the daily math I had them do to figure out how long we had to walk. I didn’t give them an algorithm or a worksheet, but I expected them to figure out what time we had to turn around if we were leaving at 1:57 and we wanted to be back at 2:42. When we hoofed it through different neighborhoods, there were great conversations about why one part of town has big mansions and another part of town has small ranch homes. We identified trees and invasive species and noticed wildlife behaviors. A fourth (but probably not final) benefit was the connection to place that occurred as a result of these walks. Our school is in a very car-centric town. We would cover 1–3 miles and students were always amazed to see that we were able to walk to the Chinese food place or past Jacob’s house. Even though they grew up in the town, they hadn’t ever developed a mental map of it. I teach science, so I’m always looking out for interesting plants and one day it made me burst with pride when a student up ahead of me shouted back “Yo, do you want us to turn left at that big sycamore tree?” Every School Can Do This Photo by Randy Fath on Unsplash Before you start listing the many reasons why it would be impossible for every student in every school to go for a walk every day, let’s just pause for a moment to remember that this is the country that expanded westward on foot and also sent a man for a walk on the moon. I think we can overcome some of these obstacles. For schools where permission slips are a headache, consider this: my school sends home one permission slip at the beginning of the year. It was written by our lawyer, so I assume it’s legal. When parents sign it, they give us permission to take kids on any field trips we want to all year long, provided we give them notification. It works out great and avoids the permission slip back and forth that has happened in other schools that I’ve worked in. Maybe some schools are worried about the time intrusion. With so many scheduled classes and trying to fit in electives like band and chorus, where would the time for this come from?
At my school, we are able to award PE credit to students for participating, so that’s one avenue. I have also observed that shifting gears through six or more classes each day is overwhelming for students, which manifests as anxiety. Change the schedule, extend lunch, split a period with a study hall. There are all kinds of creative schedules out there; it is not impossible to find 30 minutes for an activity with such widespread benefits. Staffing may be another issue to consider — one that I believe can be overcome as well. I am a busy teacher with young kids at home. Having the chance to get a little exercise in during the day is a joy! While not all teachers would jump at this chance, many would. There is no preparation or grading required to go for daily walks, and the relationships I form with students make them easier to work with in my regular academic classes. There are probably other obstacles as well, but none are insurmountable, especially considering the payoffs. Get Started Today! Photo by Manasvita S on Unsplash The great thing about this plan is that any school could try it out on a small or large scale immediately. There’s no cost and no risk — only rewards. So students, teachers, parents, administrators: what are you waiting for? Get walking! Don’t work in a school? Don’t care about schools? Weird that you’re reading this article, but that’s fine. Here’s the great thing: You can go for a walk too, and get all of the above benefits as well. When you get back from your walk, take all of that great energy and use it to tell someone else that you really think that the way to fix schools is to start by getting kids and teachers up on their feet. And then maybe when you do bump into me in a coffee shop, and I ask you if schools are serving their purpose, you’ll have a different answer!
https://medium.com/age-of-awareness/the-free-and-easy-way-to-improve-school-culture-644940d26601
['Emily Kingsley']
2020-01-24 02:34:50.374000+00:00
['Society', 'Schools', 'Culture', 'Health', 'Education']
Title Free Easy Way Improve School CultureContent take 30 minute pair shoe Photo Arek Adeoye Unsplash What’s purpose school really want think question minute read come answer bet something educating kid grow fully functional adult would pretty much agree next question school serving purpose wish coffee shop could really give answer question know it’s complex question answer depend experience school live number factor would bet avocado toast Americano answer question Schools easy criticize hard defend Everyone feel like school failing student failing teacher failing feel like it’s you’re wrong thing bother though high school teacher strong proponent public education nearly everyone agrees school need better it’s rare people galvanize support behind common sense idea initiative work student shortage federal program private software company professional development consulting agency whackadoo secretary education lining push product mandate curriculum onto school Probably great could weren’t replaced newer version ink dried contract check cashed I’m one teacher roll eye new set acronym learn beginning school year — RTIs IEPs 504s ESSA old new heart though purpose program kid feel connected noticed school mentally emotionally prepared willing learn Walk way better school Here’s plan get rid acronym cut spending improve teacher retention fight childhood obesity decrease student stress anxiety improve attendance Oh mention it’s free plan every student go 30 minute walk every day teach charter school fortunate freedom try crazy idea like one original goal charter school pilot interesting idea small scale see scalable larger school past 30 day going walk group 13 student make little map different neighborhood within walking distance school pas arrow indicating direction take Sounds crazy right Let convince good idea explain feasible size school location Pyramid Benefits Photo Emma Simpson Unsplash first benefit practice improvement physical health haven’t seen crazy popular youtube video health impact walking 30 minute day first get go walk watch video Walking low impact way ward obesity diabetes hypertension second benefit going daily walk social engagement started endeavor early September didn’t know student well walked shoulder shoulder sidewalk trail parking lot great conversation every one learned little snippet life would never come regular class period even fifteen minute check guidance counselor I’m nosy information helpful figuring motivates student troubleshooting lack motivation also ran lot people town interacted student often ‘school’ mean contained within four wall classroom great student help senior citizen bump walker curb step sidewalk allow mother pushing stroller pas little thing I’d never think address classroom Students also benefited socializing safe somewhat controlled setting Trust wasn’t keyed listening every conversation knew appropriate setting certain topic great hear talking learning drive getting first job even giving relationship advice Students wouldn’t normally talk hang opportunity socialize healthy productive way third benefit connection student made academic content area real world example daily math figure long walk didn’t give algorithm worksheet expected figure time turn around leaving 157 wanted back 242 hoofed different neighborhood great conversation one part town big mansion another part town small ranch home identified tree invasive specie noticed wildlife behavior fourth probably final benefit connection place occurred result walk school carcentric town 
would cover 1–3 mile student always amazed see able walk Chinese food place past Jacob’s house Even grew town hadn’t ever developed mental map teach science I’m always looking interesting plant one day made burst pride student ahead shouted back “Yo want u turn left big sycamore tree” Every School Photo Randy Fath Unsplash start listing many reason would impossible every student every school go walk every day let’s pause moment remember country expanded westward foot also sent man walk moon think overcome obstacle school permission slip headache consider school sends home one permission slip beginning year written lawyer assume it’s legal parent sign give u permission take kid field trip want year long provided give notification work great avoids permission slip back forth happened school I’ve worked Maybe school worried time intrusion many scheduled class trying fit elective like band chorus would time come school able award PE credit student participating that’s one avenue also observed shifting gear six class day overwhelming student manifest anxiety Change schedule extend lunch split period study hall kind creative schedule impossible find 30 minute activity widespreading benefit Staffing may another issue consider — one believe overcome well busy teacher young kid home chance get little exercise day joy teacher would jump chance many would preparation grading required go daily walk relationship form student make easier work regular academic class probably obstacle well none insurmountable especially considering payoff Get Started Today Photo Manasvita Unsplash great thing plan school could try small large scale immediately There’s cost risk — reward student teacher parent administrator waiting Get walking Don’t work school Don’t care school Weird you’re reading article that’s fine Here’s great thing go walk get benefit well get back walk take great energy use tell someone else really think way fix school start getting kid teacher foot maybe bump coffee shop ask school serving purpose you’ll different answerTags Society Schools Culture Health Education
3,952
3 Interviews — 5 Questions. Five basic things I learnt about Python…
1. Is Python a compiled or an interpreted language? The answer is ‘Both’. But this answer most probably won’t get anyone the job until we explain the difference between an interpreted & a compiled language. Why do we need a translator? Humans understand and hence talk human language, something closer to English. Machines talk in binary language, all 1’s & 0’s. That is why we need a translator in between which takes human-readable code, written in high-level programming languages such as Python, and converts it into a form understandable by a computer machine. Now the available translators are of two types, a compiler and an interpreter. What is a compiler? A compiler is a computer program that takes all your code at once and translates it into machine language. The resultant file is an executable that can be run as is. The pro is: this process is fast since it does all the job at once. The con is: this has to be done for every machine all over again. You cannot compile your code on one machine, generate an exe and run it over other machines regardless. What is an interpreter? On the other hand, an interpreter translates your code one instruction at a time. Con: it takes its time, since an error at line 574 means it notifies you, you fix the error and it starts translating again from line 1. Pro: once translated, the generated bytecode file is platform independent. No matter what machine you want this code to run at, take your virtual machine with you and you are good to go because the generated bytecode is going to run on your PVM (Python virtual machine) and not on the actual physical CPU of your machine. Compiled vs Interpreted Now, the answer that might get you the job is: Python does both. When we write Python code and run it, the compiler generates a bytecode file (with .pyc or .pyo extension). We can then take this bytecode and our Python virtual machine and run it on any machine we want seamlessly. The PVM in this case is the interpreter that converts the bytecode to machine code. 2. Is Python Call-by-Value or Call-by-Reference? The answer again is ‘Both’. This is so basic that you can even find it at your first Google search, but knowing the details is important. What is call-by-value? Call-by-value and call-by-reference are the techniques specifying how arguments are passed to a callable (more specifically, a function) by a caller. In a language that follows the call-by-value technique, when passing arguments to a function, a copy of the variable is passed. That means the value that is passed to the function is a new value stored at a new memory address; hence, any changes made to the value passed to the function will only happen for the copy stored at the new address and the original value will remain intact. call-by-value in action What is call-by-reference? In the call-by-reference technique, we pass the memory address of the variable as an argument to the function. This memory address is called a reference. Hence, when a function operates on this value, it is actually operating on the original value stored at the memory address passed as an argument, so the original value is not preserved anymore but changed. call-by-reference in action Python’s call-by-object-reference Python follows a combination of both of these, known as call-by-object-reference. This is a hybrid technique because what is passed is a reference but what happens (in some cases) is more similar to an original value change.
Everything in Python is an object, which means the value is stored at a memory location and the variable we declare is only a container for that memory address. No matter how many times we create a copy of that value, all the variables will still be pointing to the same memory location. Hence, in Python there is no concept of passing a copy of a variable as an argument. In any case, we end up passing the reference (memory location) as an argument to a function. So this is call-by-reference inherently. A quick example to understand this: no matter how many variables we declare to store the value of the integer 2, all of them contain the same memory address because variables in Python are nothing else but containers for memory addresses. Hence, this point is settled: the arguments passed in Python are always references and never values. Whether the original value remains intact or not depends upon the type of data structure. Some of the data structures in Python are mutable, which means you can change their values in place, while some are immutable, which means an effort to change their value will result in a new value stored at a new location and the new reference will be stored in the variable. The examples of mutable objects in Python are list, dict, set, byte array, while the immutable objects are int, float, complex, string, tuple, frozen set [note: immutable version of set], bytes. So if the reference passed to the function was pointing towards a mutable value, it will be changed in place and your container will contain the same memory address it originally had. If the reference passed to the function is of a memory location storing an immutable value, the new value after processing will be stored at a new memory location and the container will be updated to store the address of the new memory location. This is what call-by-object-reference is. As an example, when the variable ‘a’ is referencing an integer and we try to modify its value, ‘a’ starts pointing towards a new memory location since modifying the value of an integer in place is not possible.
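A small, self-contained sketch of this behavior (the names and values are arbitrary examples): passing a mutable list lets the function change the caller's object in place, while "modifying" an immutable int only rebinds the local name to a new object.

```python
# A minimal demonstration of call-by-object-reference.
def mutate(seq, num):
    # The list is mutable: append() changes the caller's object in place.
    seq.append(99)
    # The int is immutable: += rebinds the *local* name to a brand-new object.
    num += 1
    return id(seq), id(num)


items, count = [1, 2], 10
seq_id, num_id = mutate(items, count)

print(items)                # [1, 2, 99] -> the caller sees the in-place change
print(count)                # 10         -> the caller's int is untouched
print(seq_id == id(items))  # True  -> same list object inside and outside
print(num_id == id(count))  # False -> the function ended up with a new int object
```

That is the hybrid the article describes: the reference always goes in, but whether the original value survives depends on the object's mutability.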
https://medium.com/swlh/3-interviews-5-questions-55bd4cae8b9f
['Ramsha Bukhari']
2020-10-28 22:47:15.036000+00:00
['Python', 'Software Engineering', 'Programming', 'Data Visualization', 'Database']
Title 3 Interviews — 5 Questions Five basic thing learnt Python…Content 1 Python compiled interpreted language answer ‘Both’ answer probably won’t get anyone job explain difference interpreted compiled language need translator Humans understand hence talk human language something closer English Machines talk binary language 1’s 0 need translator take human readable code written high level programming language Python convert form understandable computer machine available translator two type compiler interpreter compiler compiler computer program take code translates machine language resultant file executable run pro process fast since job con done every machine cannot compile code one machine generate exe run machine regardless interpreter hand interpreter translates code one instruction time Pro take time since error line 574 mean notifies fix error start translating line 1 Con translated generated bytecode file platform independent matter machine want code run take virtual machine good go generated bytecode going run PVM python virtual machine actual physical CPU machine Compiled v Interpreted answer might get job python write python code run compiler generates bytecode file pyc pyo extension take bytecode python virtual machine run machine want seamlessly PVM case interpreter convert bytecode machine code 2 Python CallbyValue CallbyReference answer ‘Both’ basic even find first Google search knowing detail important callbyvalue Callbyvalue callbyreference technique specifying argument passed callable specifically function caller language follows callbyvalue technique passing argument function copy variable passed mean value passed function new value stored new memory address hence change made value passed function happen copy stored new address original value remain intact callbyvalue action callbyreference callbyreference technique pas memory address variable argument function memory address called reference Hence function operates value actually operating original value stored memory address passed argument original value preserved anymore changed callbyreference action Python’s callbyobjectreference Python follows combination known callbyobjectreference hybrid technique passed reference happens case similar original value change Everything Python object mean value stored memory location variable declare container memory address matter many time create copy value variable still pointing memory location Hence Python concept passing copy variable argument case end passing referencememory location argument function callbyreference inherently quick example understand matter many variable declare store value Interger 2 contain memory address variable Python nothing else container memory address Hence point sorted argument passed Python always reference never value Whether original value remains intact depends upon type data structure data structure Python mutable mean change value place immutable mean effort change value result new value stored new location new reference stored variable example mutable object Python list dict set byte array immutable object int float complex string tuple frozen set note immutable version set byte reference passed function pointing towards mutable value changed place container contain memory address originally reference passed function memory location storing immutable value new value processing stored new memory location container updated store address new memory location callbyobjectreference example variable ‘a’ referencing integer try modify value ‘a’ 
start pointing towards new memory location since modifying value integer place possibleTags Python Software Engineering Programming Data Visualization Database
3,953
Six Steps to a Better Deadlift
Photo by Alora Griffiths on Unsplash The barbell deadlift is a compound exercise that works almost every major muscle group. While it is phenomenal for developing full-body strength, it also has the potential to be dangerous if performed improperly. For all of its benefits, it could be argued that the deadlift’s risks outweigh the rewards if you don’t take the time to execute it with precision technique. So in order to help you maximize those rewards while minimizing the risks, I’ve compiled a checklist of six steps that will walk you through every step of a properly performed deadlift — from start to finish. #1. Find Your Ideal Foot Placement Ankle mobility and limb length will play a factor in what exact stance is best for you, but here are a couple of general guidelines: As for how far underneath the bar your feet should be; if you’re looking straight down, the bar should be right around the middle of the foot. A visual cue is to think about the bar “cutting your foot in half”. As for how far apart your feet should be; try simply performing a few vertical jumps and pay attention to where your feet are landing. Use this stance as a reference point to how far apart you may want to space your feet underneath the bar. #2. Use the “Balloon Analogy” to Brace Your Core Correctly Being told to “brace your core” is a common coaching cue. What is not so common, however, is being told how to brace the right way. The first thing a lot of people think of doing when they’re told to “brace” is to tighten up their abs by sucking in their stomach. This will cause the lower back to round under a load, and is the exact opposite of what you want to do. Instead, learn how to properly brace your core by using the “balloon analogy”. How to do it: Imagine a balloon is between your hands. If you press down on it, it expands 360 degrees. Now apply this same idea to your core by doing the following: Place your hands onto your midsection. Let your thumbs wrap around your sides, going towards your lower back area. Take in a deep, nasal-only breath. You will feel expansion 360 degrees — around the stomach, the sides, and into your lower back. This is your body’s “internal weightlifting belt” turning on. (Which is the same way you should be bracing to get the most out of an actual weightlifting belt.) Remember that the #1 function of the core is to stabilize the spine under load. That stability is achieved with a 360-degree brace. #3. Screw Your Feet Into the Ground Imagine there is a thick piece of paper between your feet and you want to rip it in two. Let’s just assume this is one seriously thick piece of paper, and in order to rip it, you need to generate tension throughout the entire lower body. You can do this by thinking of “screwing your feet into the ground”. While keeping the feet flat on the floor, try to actively pull your feet apart from each other. You’ll find that the tension you’re creating at the feet will also cause the knees to push outwards. This arc of tension will continue all the way up to the hips, glutes, and hamstrings. “Screwing your feet into the ground” is a quick fix to keep the lower body tight from the feet up. #4. Create Tension in the Mid Back and Lats Keep your arms completely straight and reach your hands down your body as far as possible while maintaining an upright posture.
Now reach your hands back behind your body. At this point, you should be feeling tension all throughout the lats and middle back region — this is the same tension that you want to create when you reach down to grab the barbell. Of course, the barbell will prevent you from actually reaching your hands behind your body when you’re putting this to practice during a deadlift, but understanding how to set the lats in this manner is the first step in taking the slack out of the bar (more on that soon), and this position should remain constant throughout the entire duration of the set. Keep in mind that especially in a compound move like the deadlift, if any part of the body is allowed to remain “loose”, it can cause the entire chain to collapse. So being sure to properly pretension both the lower and upper body is critical. If you feel yourself losing tightness anywhere, put the weight down and reset — or discontinue the set. #5. Take the Slack Out of the Bar If the weight on the bar is 225 pounds, think about pulling with around 220 pounds of force as you place your hands onto the bar and pull your body into position. You should hear a “click” coming from either side of the barbell — that’s the sound of the bar being pulled into the weight plates. This will allow you to leverage your bodyweight against the barbell, and from this point you can set your body back into an ideal position to pull — typically a position that has your shoulders nearly directly over the barbell. #6. Leg Press the Floor Away, Lockout With a Neutral Posture, Repeat Once you’re ready to pull, think about performing a maximum effort leg press through the floor. Drive forcefully until you reach the point of lockout, and once you’re there, don’t hyperextend the lumbar spine. You may have seen lifters arch their lower back at the top of a deadlift, but all you need to do is stand straight up. Anything beyond a neutral posture is hyperextension and can excessively compress your lower back. From the lockout, you have the option of either simply dropping the weight to the floor or lowering the bar back down in the same fashion in which you brought it up. Photo by Anastase Maragos on Unsplash In Summary Anything worth doing is worth doing right, and if the barbell deadlift is a staple in your routine, keeping these six steps in mind will help optimize your performance while minimizing your risk of injury. Find your footing, brace your core, create full-body tension, pull the slack out of the bar, and leg press the floor away until lockout: Six steps to a better — and safer — deadlift. Thanks for reading! Have a question? Want something covered in a future article? Let me know in the comments! Click here to be notified whenever a new story is published. — Zack
https://medium.com/in-fitness-and-in-health/six-steps-to-a-better-deadlift-48974962dec2
['Zack Harris']
2020-11-01 17:16:18.134000+00:00
['Health', 'Wellness', 'Fitness', 'Life', 'Self Improvement']
Title Six Steps Better DeadliftContent Photo Alora Griffiths Unsplash barbell deadlift compound exercise work almost every major muscle group phenomenal developing full body strength also potential dangerous performed improperly benefit could argued deadlift’s risk outweigh reward don’t take time execute precision technique order help maximize reward minimizing risk I’ve comprised checklist six step walk every step properly performed deadlift — start finish 1 Find Ideal Foot Placement Ankle mobility limb length play factor exact stance best couple general guideline far underneath bar foot you’re looking straight bar right around middle foot visual cue think bar “cutting foot half” you’re looking straight bar right around middle foot visual cue think bar “cutting foot half” far apart foot try simply performing vertical jump pay attention foot landing Use stance reference point far apart may want space foot underneath bar 2 Use “Balloon Analogy” Brace Core Correctly told “brace core” common coaching cue common however told brace right way first thing lot people think they’re told “brace” tighten ab sucking stomach cause lower back round load exact opposite want Instead learn properly brace core using “balloon analogy” Imagine balloon hand press expands 360 degree apply idea core following Place hand onto midsection Let thumb wrap around side going towards lower back area Take deep nasalonly breath nasalonly breath feel expansion 360 degree — around stomach side lower back body’s “internal weightlifting belt” turning way bracing get actual weightlifting belt Remember 1 function core stabilize spine load stability achieved 360 degree brace 3 Screw Feet Ground Imagine thick piece paper foot want rip two Let’s assume one seriously thick piece paper order rip need generate tension throughout entire lower body thinking “screwing foot ground” keeping foot flat floor try actively pull foot apart You’ll find tension you’re creating foot also cause knee push outwards arc tension continue way hip glute hamstring “Screwing foot ground” quick fix keep lower body tight foot 4 Create Tension Mid Back Lats Keep arm completely straight reach hand body far possible maintaining upright posture reach hand back behind body point feeling tension throughout lat middle back region — tension want create reach grab barbell course barbell prevent actually reaching hand behind body you’re putting practice deadlift understanding set lat manner first step taking slack bar soon position remain constant throughout entire duration set Keep mind especially compound move like deadlift part body allowed remain “loose” cause entire chain collapse sure properly pretension lower upper body critical feel losing tightness anywhere put weight reset — discontinue set 5 Take Slack Bar weight bar 225 pound think pulling around 220 pound force place hand onto bar pull body position hear “click” coming either side barbell — that’s sound bar pulled weight plate allow leverage bodyweight barbell point set body back ideal position pull — typically position shoulder nearly directly barbell 6 Leg Press Floor Away Lockout Neutral Posture Repeat you’re ready pull think performing maximum effort leg press floor Drive forcefully reach point lockout you’re don’t hyperextend lumbar spine may seen lifter arch lower back top deadlift need stand straight Anything beyond neutral posture hyperextension excessively compress lower back lockout option either simply dropping weight floor lowering bar back fashion brought Photo Anastase Maragos Unsplash 
Summary Anything worth worth right barbell deadlift staple routine keeping six step mind help optimize performance minimizing risk injury Find footing brace core create fullbody tension pull slack bar leg press floor away lockout Six step better — safer — deadlift Thanks reading question Want something covered future article Let know comment Click notified whenever new story published — ZackTags Health Wellness Fitness Life Self Improvement
3,954
Learning How to Learn: Powerful mental tools to help you master tough subjects, Diffuse Mode
Learning How to Learn: Powerful mental tools to help you master tough subjects, Diffuse Mode How to make use of your superior diffuse mode a.k.a. your subconsciousness This is a follow-up to the chapter discussing the focused mode. I have a lot of ideas every day, but not enough time to write them all down, so I chose to write the other ideas down before I forget them. I was pretty certain I wouldn’t forget the ideas described in this chapter. So the diffuse mode is somewhat the opposite of the focused mode. It is active in the background or your subconsciousness. The mistake most people and students tend to make when learning is that they don’t spend a lot of time in the diffuse mode. This can cause a dramatic lack of depth in their understanding of subjects. If you teach those students about the simulation hypothesis (see the chapter: 09/14/2019 — Simulation Hypothesis and ‘Good and Evil’), they will use their focused mode to learn every element related to that hypothesis, but won’t spend much time in the diffuse mode. What is the consequence of spending less time in the diffuse mode? Well, they won’t ask themselves those deep and abstract questions like “But how does this hypothesis relate to ethics and morals?” It is a sad trend you get to see in modern education: superficial understanding of material. I much prefer the old days like in Ancient Greece or Rome where even the emperors like Marcus Aurelius knew about philosophy and had a deep understanding of things. How to enter the diffuse mode First of all, you can enter the diffuse mode simply by not focusing on anything (through meditation or mindfulness), but this alone is not effective in terms of creativity and learning new things. In order to command your diffuse mode to learn something in the background, you need to use the focused mode first. For example, you want to know your own personal definition of the meaning of life. What you could start with is simply Googling information (and also storing it long-term), try to answer and view as many perspectives as you can, and then just relax. Do something else like exercising or meditating; the thinking will continue and run in the background. And after 15 minutes or even hours afterward, simply return to the question and you will be surprised how many new and stronger connections were formed in your brain without the conscious ‘you’. This technique can also beautifully be used when taking tests, exams, or doing homework: https://lifehacker.com/improve-your-test-scores-with-the-hard-start-jump-to-e-1790599531 — Improve Your Test Scores With the “Hard Start-Jump to Easy” Technique Diffuse mode and working memory The diffuse mode is not limited to the working memory slots located in your prefrontal cortex, unlike the focused mode, which is. This makes your subconsciousness so much more powerful when used correctly (albeit not as powerful as depicted in those movies). The focused mode also tends to activate old neural pathways that aren’t really that creative, nor are the cerebral distances very long (the distance between two activated neurons, brain regions etc). The diffuse mode can activate but also create new neural pathways that have a much longer cerebral distance than the focused mode can. This allows the diffuse mode to be much more creative but also combine ideas from many different brain regions.
Again, the diffuse mode is not limited to your working memory slots located in your prefrontal cortex, so it can connect and ‘think’ about as many ideas simultaneously as its neural resources allow. Diffuse mode and psychedelics I would really recommend the book ‘How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence’ by Michael Pollan to learn more about the information I am going to say next. So the diffuse mode is mostly active when you don’t focus (using your focused mode). What mostly happens, neurophysiologically speaking, is that the activity in the so-called default mode network increases. This allows for all kinds of connections to be much more active too, not only within one brain region but between brain regions, too. This is why someone taking psychedelics gets to see all kinds of weird hallucinations like seeing faces in inanimate objects (this phenomenon, called pareidolia, happens even without taking psychedelics, but the increased activity from the default mode network just increases the probability of occurrence). People who take psychedelics or meditate have the feeling they have found all kinds of ‘truths’ never thought of before. You could say that when taking psychedelics, you are essentially being aware of your diffuse mode. During sleep, you are experiencing your diffuse mode, too. Diffuse mode and Entropic Learning Model See the chapter 09/11/2019 — Entropic Learning Model for more information. The diffuse mode is just such an important part of our learning and thinking process that I made a separate phase in my learning model to remind myself that after hours of hard thinking, I need to relax to allow my diffuse mode to take over the thinking work. Can the diffuse mode run when you are activating the focused mode? Yes, but only when you switch tasks or ways of thinking. If you are thinking about psychology and get stuck somewhere, switch to a more left-hemispheric mode of thinking like physics or mathematics (the idea that the left and right brain hemispheres are separated from each other, in terms of logic and creativity respectively, is a myth, but brain lateralization or specialization certainly does exist to a certain degree). How long does it take to ‘enter’ the diffuse mode? I am not sure, but as far as I have read, it can take anywhere from 10 minutes to several hours. The thing, however, is that to stay in the diffuse mode, you need to repeat to yourself the images, ideas, questions, etc. from time to time in order to make your diffuse mode think about the subject even if it takes hours. According to research, there seems to be a correlation between having more knowledge and the duration required to switch between focused and diffuse mode effectively. The more knowledge, the faster you can switch between those two modes. Diffuse mode and exponential learning It is important, no matter how much homework you have, to try to switch between the focused mode and diffuse mode. It may feel like it takes a lot more time to finish your homework, but in the long run, you will understand the things much more deeply. This deeper understanding will make it much easier to learn new and related material. You don’t want to end up studying for years and then only using and remembering less than 10%. Imagine how it feels to spend 40 hours a week studying, while knowing in the back of your head that only 4 of these hours were ‘effective’.
Keep this thought alive in the background to motivate yourself to use that diffuse mode from time to time and not to rush your learning. Of course, there are many more techniques to bring your retention closer to that 100%, like the method of loci, spaced repetition, interleaved practice, exercise, nutrition, reducing stress, getting enough sleep (which most students lack), etc. I personally don’t spend 40 hours a week learning (new) things, not only because I don’t have the time for it, but because I don’t really need to. My retention and understanding of material are very close to that 100%, and you might even say above 100% because of all the new ideas I am generating. Those 20 hours a week quickly turn into the equivalent of the 40 hours a week most students put in, and the advantage grows exponentially.
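As a concrete illustration of one of the retention techniques mentioned above, here is a minimal, generic sketch of a spaced-repetition schedule in Python. This is not the author's own method, and the interval parameters are arbitrary assumptions; the point is only that exponentially widening review gaps are what keep retention high at a low weekly time cost.

```python
from datetime import date, timedelta

def review_schedule(first_study: date, n_reviews: int = 5,
                    base_gap_days: float = 1.0, factor: float = 2.5) -> list:
    """Return review dates with exponentially widening gaps (spaced repetition).

    Each review pushes the next one further into the future, so long-term
    retention is maintained with relatively little total study time.
    """
    dates, gap, current = [], base_gap_days, first_study
    for _ in range(n_reviews):
        current = current + timedelta(days=round(gap))
        dates.append(current)
        gap *= factor  # widen the interval after every successful review
    return dates

# Example: the next five review dates for a topic first studied on 16 Sep 2019
for d in review_schedule(date(2019, 9, 16)):
    print(d.isoformat())
```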
https://medium.com/superintelligence/09-16-2019-learning-how-to-learn-powerful-mental-tools-to-help-you-master-tough-subjects-e99684abb8c8
['John Von Neumann Ii']
2019-11-10 20:31:05.116000+00:00
['Neuroscience', 'Learning', 'Education', 'Students', 'Brain']
Title Learning Learn Powerful mental tool help master tough subject Diffuse ModeContent Learning Learn Powerful mental tool help master tough subject Diffuse Mode make use superior diffuse mode aka subconsciousness followup chapter discussing focused mode lot idea every day enough time write chose write idea forget pretty certain won’t forget idea described chapter diffuse mode somewhat opposite focused mode active background subconsciousness mistake people student tend make learning don’t spend lot time diffuse mode cause dramatic lack depth understanding subject teach student simulation hypothesis see chapter 09142019 — Simulation Hypothesis ‘Good Evil’ use focused mode learn every bit element related hypothesis won’t spend much time diffuse mode consequence spending le time diffuse mode Well won’t ask deep abstract question like “But hypothesis relate ethic morals” sad trend get see modern education superficial understanding material much prefer old day like Ancient Greece Rome even emperor like Marcus Aurelius knew philosophy deep understanding thing enter diffuse mode First enter diffuse mode simply focusing anything meditation mindfulness alone effective term creativity learning new thing order command diffuse mode learn something background need use focused mode first example want know personal definition meaning life could start simply Googling information also storing longterm try answer view many perspective relax something else like exercising meditating thinking continue run background 15 minute even hour afterward simply return question surprised many new stronger connection formed brain without conscious ‘you’ technique also beautifully used taking test exam making homework httpslifehackercomimproveyourtestscoreswiththehardstartjumptoe1790599531 — Improve Test Scores “Hard StartJump Easy” Technique Diffuse mode working memory diffuse mode limited working memory slot located prefrontal cortex unlike focused mode make subconsciousness much powerful used correctly albeit powerful depicted movie focused mode also tends activate old neural pathway aren’t really creative cerebral distance long distance two activated neuron brain region etc diffuse mode activate also create new neural pathway much longer cerebral distance focused mode allows diffuse mode much creative also combine idea many different brain region diffuse mode limited working memory slot located prefrontal cortex connect ‘think’ many idea simultaneously neural resource allow Diffuse mode psychedelics would really advise book ‘How Change Mind New Science Psychedelics Teaches Us Consciousness Dying Addiction Depression Transcendence’ Michael Pollan learn information going say next diffuse mode mostly active don’t focus using focused mode mostly happens neurophysiologically seen activity socalled default mode network increase allows kind connection much active within one brain region brain region someone taking psychedelics get see kind weird hallucination like seeing face inanimate object phenomenon called pareidolia happens even without taking psychedelics increased activity default mode network increase probability occurrence People take psychedelics meditate feeling found kind ‘truths’ never thought could say taking psychedelics essentially aware diffuse mode sleep experiencing diffuse mode Diffuse mode Entropic Learning Model See chapter 09112019 — Entropic Learning Model information diffuse mode important part learning thinking process made separate phase learning model remind hour hard thinking need relax allow 
diffuse mode take thinking work diffuse mode run activating focused mode Yes switch task way thinking thinking psychology get stuck somewhere switch lefthemispheric mode thinking like physic mathematics idea left right brain hemisphere separated term logic creativity respectively myth brain lateralization specialization certainly exist certain degree long take ‘enter’ diffuse mode sure far read information little 10 minute even hour thing however stay diffuse mode need repeat image idea question etc time time order make diffuse mode think subject even take hour According research seems correlation knowledge duration required switch focused diffuse mode effectively knowledge faster switch two mode Diffuse mode exponential learning important matter much homework try switch focused mode diffuse mode may feel like take lot time finish homework long run understand thing much deeper deeper understanding make much easier learn new related material don’t want end studying year using remembering le 10 Imagine feel spend 40 hour week studying knowing back head 4 hour ‘effective’ Keep thought alive background motivate use diffuse mode time time rush learning course many technique make retention get closer 100 like method locus spaced repetition interleaved practice exercise nutrition reducing stress getting enough sleep student lack etc personally don’t spend 40 hour week learning new thing don’t time don’t really need retention understanding material close 100 might even say 100 new idea generating little 20 hour week quickly turn equivalent 40 hour week student follow grows exponentiallyTags Neuroscience Learning Education Students Brain
3,955
How to Use Storytelling Conventions to Create Better Visualizations
Story is the best form of communication we have. To the steely-eyed analyst, it may seem superfluous and manipulative — a way to persuade with emotion instead of facts. While it is true that some have used story to mislead and befuddle, to discard it altogether is like blaming shoes for an inability to dunk a basketball. Stories aren’t the problem; false stories are. The goal of the analyst, then, is not to avoid stories, but to tell better ones. Not only is story an effective way to communicate; for the data analyst it is unavoidable, because every presentation of data tells a story whether it is intended or not. If the story isn’t made explicit, the audience will make one up. Take the ubiquitous tabular report as an example… Story: I’m not sure what any of this means but I did work really hard to collect all the data. A visualization project doesn’t succeed by accident. Behind every one is a developer who has mastered the data, the subject matter, and the user stories. No one understands the content better than she does. By comparison, the audience’s vantage point is limited. If left to their own devices, chances are good that they will miss important insights or draw incorrect conclusions. Given that, there is no better person than the visualization developer to provide a point of view on what the data means. If the audience is looking for a story, then it is incumbent on the developer to guide them to the one that is most meaningful while staying true to the data. For a visualization to succeed, the developer must own the role of storyteller. The Story Framework Stories are about ideas. A particular story might be about a detective figuring out who did it, or survivors fighting off a zombie apocalypse, but underneath the fictional facade is a point of view about life. The combination of setting, characters, events, and tension is simply a metaphor about real-world ideas that matter — and are true. The genius of story is that it doesn’t tell you an idea is important; it shows you. When done well, its outcomes seem inevitable and its conclusions are convincing. Few methods can match a great story’s ability to enlighten and persuade. To accomplish this, stories typically follow a framework, or narrative arc, that looks like this… 1. A relatable protagonist whose world is in balance 2. A crisis that knocks their world out of balance making the status quo unacceptable 3. A journey to restore balance that faces progressively escalating opposition 4. A climax where the protagonist must decide to risk everything in order to find balance once again You can see how this plays out in a couple of great movies from the ‘80s… In The Karate Kid, a high schooler named Daniel moves to California with his mom and is doing reasonably well at making new friends (balance), when a bully with a cool red leather jacket and sweet karate moves decides to make Daniel his target (crisis). Daniel is determined to learn karate to defend himself and finds Mr. Miyagi to train him (journey). In the end, Daniel must overcome the bully in a final battle royale for all to witness (climax). In Back to the Future, Marty is a normal kid trying to take his girlfriend to the prom, and also happens to be friends with a mad scientist who discovers time travel (balance). A string of events leads Marty to accidentally travel back in time to when his parents first met, and threatens his future existence by allowing his mom to become enamored with him instead of his dad (crisis). 
Marty then has to orchestrate events so that his mom and dad fall in love (journey), and then get back to the present time using the power from a clock tower struck by lightning (climax). The beauty of this framework is that it takes advantage of a characteristic we all share as humans: the need for order and balance. When something threatens that need, the tension causes us to direct all of our mental, emotional, and physical capacities toward restoring that balance. Sitting idle is not an option; action must be taken. A visualization can likewise use this framework to present information in a more persuasive and compelling way. If a report states facts simply because they exist with no concern for what they mean, then a visualization shows the facts that matter, when they matter, to whom they matter, and what can be done about it. Knowing that a user will act when he believes the status quo is untenable and understands what he can do about it, an effective visualization focuses on the facts that reveal meaningful tension and provide a guided path to the appropriate actions. Let’s look at how each part of the framework applies to visualization design… Scope depth over breadth “A relatable protagonist whose world is in balance” Storytellers understand who their audience is and what they care about, which enables them to create relatable protagonists and a clear picture of what a balanced and desirable life looks like. Good storytellers go deep, not wide. They limit the number of characters and the breadth of the created world to only what can be known intimately. If visualization is a form of storytelling, then the audience is its protagonist and the setting its analytical scope. A successful visual creates a world its audience will immediately recognize as their own, with familiar terminology, organization, and concepts of favorable conditions. Its scope favors depth over breadth. It does not waste space on extraneous topics just because the data is available or previous reports included them, but instead focuses solely on the problem it set out to solve, and solves only that. Exception-based visual cues “A crisis that knocks the protagonist’s world out of balance making the status quo unacceptable” Crisis is the driving force of a story. Without it there is no action, and without action there is no story. If the protagonist lives in a world where everything is as it should be, then why would she do anything to change that? Minor annoyances or moderately-challenging setbacks might lead her to make adjustments, but that doesn’t make for a compelling story. What is compelling is when an event threatens the very essence of life as she knows it. When that happens, action is not optional; it’s a matter of survival. A visualization is likewise defined by action — consequential action, more to the point. Its aim is to convince the viewer that the status quo is unacceptable and that action is mandatory. In the same way a story uses crisis as an impetus for action, a visualization makes crises jump off the screen and compels the viewer to act. It does not allow minor issues to clutter the view, but rather it focuses squarely on the things that will dramatically damage the current state if left unaddressed. In the business world it’s common to see a report full of performance KPIs like sales this year vs the previous year, or market share of a company vs a competitor. In far too many cases, every positive and negative variation is highlighted with green or red like the left side of the chart above. 
While it succeeds in looking like a Christmas tree, it fails at helping the viewer understand what truly matters. In reality, only a few KPI variances have meaningful implications for the overall health of a business; these are called exceptions. An effective visualization is clear on which exceptions impact performance the most, and displays them front and center. Progressively-revealed detail “A journey to restore balance that faces progressively escalating opposition” Every story is a journey. They are sometimes about the protagonist literally getting from point A to point B, but they are always about the protagonist’s journey of personal transformation. No good story leaves its characters how it found them. It may seem that all is well at the beginning of a story, but a major crisis exposes how vulnerable they are. The narrative arc is not about recovering what the crisis took away; it’s about the protagonist growing into a better version of themselves that they didn’t realize was possible before. And just like in real life, it doesn’t happen with one transformational event, but progressively over the course of many events, with each one requiring a little more than the one before it. The heroism that’s always required in the final act would not be possible in act one. It’s the journey in the middle that makes it possible. While a visualization does not usually demand heroic acts from its users, it does acknowledge that they need to go on a journey involving several stages of analysis before they’re ready to act. Few real-world problems are so simple that a single KPI or view could clarify the severity of a situation or the appropriate response. Decision-makers want to go through a progression that starts with high-level performance questions and then moves on to increasingly-detailed questions until a specific opportunity for action is identified. The job of a visualization is to simply mirror this progression. Actionable conclusions “A climax where the protagonist must decide to risk everything in order to find balance once again” In the narrative arc of a story, the protagonist’s transformation is only complete once he irreversibly turns away from who he once was and embraces his new self. Every event, character, decision, and action in the story builds to the moment at the end where he makes a final decision and takes the required action. In a well-crafted story, the end seems inevitable because every previous moment logically led to it, one step at a time. In the same way, a visualization builds toward a final, decisive action from its users. Every choice about what, how, and where to show information is made with this end in mind. Common tabular reports provide information and nothing more. A better visualization provides the necessary insight for making decisions. To do this well, a visualization designer learns what type of information her user base needs for better decision-making, and then figures out how to sequence visuals so that her users can intuitively get to that information as quickly as possible.
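To make the exception-based visual cues idea above concrete, here is a minimal sketch in Python. The KPI names, the numbers, and the 3-point materiality threshold are all hypothetical assumptions, not figures from the article; the point is simply that only variances large enough to matter are surfaced, instead of coloring every positive and negative value.

```python
import pandas as pd

# Hypothetical KPIs: market share (%) this year vs last year
kpis = pd.DataFrame({
    "kpi":       ["Region A share", "Region B share", "Region C share", "Region D share"],
    "last_year": [31.0, 18.5, 12.2, 8.9],
    "this_year": [30.8, 14.1, 12.6, 13.4],
})

THRESHOLD = 3.0  # only variances beyond +/- 3 points are treated as exceptions

kpis["variance"] = kpis["this_year"] - kpis["last_year"]
exceptions = kpis[kpis["variance"].abs() >= THRESHOLD]

# Everything else stays visually quiet; only the exceptions are surfaced,
# sorted so that the most damaging variance leads the view.
print(exceptions.sort_values("variance").to_string(index=False))
```

In a real dashboard, the same filter would drive the color or placement of those few rows while the rest of the table stays neutral.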
https://medium.com/nightingale/how-to-use-storytelling-conventions-to-create-better-visualizations-45177ae517ba
['Dan Gastineau']
2019-06-12 21:43:18.930000+00:00
['Topicsindv', 'Design', 'Storytelling', 'Data', 'Data Visualization']
Title Use Storytelling Conventions Create Better VisualizationsContent Story best form communication steelyeyed analyst may seem superfluous manipulative — way persuade emotion instead fact true used story mislead befuddle discard altogether like blaming shoe inability dunk basketball Stories aren’t problem false story goal analyst avoid story tell better one story effective way communicate data analyst unavoidable every presentation data tell story whether intended story isn’t made explicit audience make one Take ubiquitous tabular report example… Story I’m sure mean work really hard collect data visualization project doesn’t succeed accident Behind every one developer mastered data subject matter user story one understands content better comparison audience’s vantage point limited left device chance good miss important insight draw incorrect conclusion Given better person visualization developer provide point view data mean audience looking story incumbent developer guide one meaningful staying true data visualization succeed developer must role storyteller Story Framework Stories idea particular story might detective figuring survivor fighting zombie apocalypse underneath fictional facade point view life combination setting character event tension simply metaphor realworld idea matter — true genius story doesn’t tell idea important show done well outcome seem inevitable conclusion convincing method match great story’s ability enlighten persuade accomplish story typically follow framework narrative arc look like this… 1 relatable protagonist whose world balance 2 crisis knock world balance making status quo unacceptable 3 journey restore balance face progressively escalating opposition 4 climax protagonist must decide risk everything order find balance see play couple great movie ‘80s… Karate Kid high schooler named Daniel move California mom reasonably well making new friend balance bully cool red leather jacket sweet karate move decides make Daniel target crisis Daniel determined learn karate defend find Mr Miyagi train journey end Daniel must overcome bully final battle royale witness climax Back Future Marty normal kid trying take girlfriend prom also happens friend mad scientist discovers time travel balance string event lead Marty accidentally travel back time parent first met threatens future existence allowing mom become enamored instead dad crisis Marty orchestrate event mom dad fall love journey get back present time using power clock tower struck lightning climax beauty framework take advantage characteristic share human need order balance something threatens need tension cause u direct mental emotional physical capacity toward restoring balance Sitting idle option action must taken visualization likewise use framework present information persuasive compelling way report state fact simply exist concern mean visualization show fact matter matter matter done Knowing user act belief status quo untenable understands effective visualization focus fact reveal meaningful tension provide guided path appropriate action Let’s look part framework applies visualization design… Scope depth breadth “A relatable protagonist whose world balance” Storytellers understand audience care enables create relatable protagonist clear picture balanced desirable life look like Good storyteller go deep wide limit number character breadth created world known intimately visualization form storytelling audience protagonist setting analytical scope successful visual creates world audience immediately 
recognize familiar terminology organization concept favorable condition scope favor depth breadth waste space extraneous topic data available previous report included instead focus solely problem set solve solves Exceptionbased visual cue “A crisis knock protagonist’s world balance making status quo unacceptable” Crisis driving force story Without action without action story protagonist life world everything would anything change Minor annoyance moderatelychallenging setback might lead make adjustment doesn’t make compelling story compelling event threatens essence life know happens action optional it’s matter survival visualization likewise defined action — consequential action point aim convince viewer status quo unacceptable action mandatory way story us crisis impetus action visualization make crisis jump screen compels viewer act allow minor issue clutter view rather focus squarely thing dramatically damage current state left unaddressed business world it’s common see report full performance KPIs like sale year v previous year market share company v competitor far many case every positive negative variation highlighted green red like left side chart succeeds looking like Christmas tree fails helping viewer understand truly matter reality KPI variance meaningful implication overall health business called exception effective visualization clear exception impact performance display front center Progressivelyrevealed detail “A journey restore balance face progressively escalating opposition” Every story journey sometimes protagonist literally getting point point B always protagonist’s journey personal transformation good story leaf character found may seem well beginning story major crisis expose vulnerable narrative arc recovering crisis took away it’s protagonist growing better version didn’t realize possible like real life doesn’t happen one transformational event progressively course many event one requiring little one heroism that’s always required final act would possible act one It’s journey middle make possible visualization usually demand heroic act user concede need go journey involving several stage analysis they’re ready act realworld problem simple single KPI view could clarify severity situation appropriate response Decisionmakers want go progression start highlevel performance question move increasinglydetailed question specific opportunity action identified job visualization simply mirror progression Actionable conclusion “A climax protagonist must decide risk everything order find balance again” narrative arc story protagonist’s transformation complete irreversibly turn away embrace new self Every event character decision action story build moment end make final decision take required action wellcrafted story end seems inevitable every previous moment logically led one step time way visualization build toward final decisive action user Every choice show information made end mind Common tabular report provide information nothing better visualization provides necessary insight making decision well visualization designer learns type information user base need better decisionmaking figure sequence visuals user intuitively get information quickly possibleTags Topicsindv Design Storytelling Data Data Visualization
3,956
We Need a Code of Ethics
We Need a Code of Ethics We’ve been moving too fast for too long and it’s hurting everyone. I keep wondering what we can do to ensure we’re building a better world with the products we make and if we need a code of ethics. So often I see people fall through the cracks because they’re “edge cases” or “not the target audience.” But at the end of the day, we’re still talking about humans. Other professions that can deeply alter someone’s life have a code of ethics and conduct they must adhere to. Take physicians’ Hippocratic oath for example. It’s got some great guiding principles, which we’ll get into below. Violating this code can mean fines or losing the ability to practice medicine. We also build things that can deeply alter someone’s life, so why shouldn’t we have one in tech? While this isn’t a new subject by any stretch, I made a mini one of my own. It’s been boiled down to just one thing and is flexible enough to guide all my other decisions. I even modified it from the Hippocratic Oath: We will prevent harm whenever possible, as prevention is preferable to mitigation. We will also take into account not just the potential for harm, but the harm’s impact. That’s meaningful to me because for years, tech had a “move fast, break things” mentality, which screams immaturity. It’s caused carelessness, metrics-obsessed growth, and worse — I don’t need to belabor that here. We’ve moved fast for long enough; now let’s grow together to be more intentional about both what and how we build. Maybe the new mantra could be “move thoughtfully and prevent harm,” but maybe that isn’t quite as catchy. A practical example Recently, a developer launched a de-pixelizer. Essentially, it can take a pixelated image and approximate what the person’s face might look like. The results were…not great. Setting aside that the AI seems to have been only trained on white faces, we have to consider how a process like this might go wrong and harm people. Imagine that this algorithm makes it into the hands of law enforcement, who then mistakenly identify someone as a criminal. This mistake could potentially ruin someone’s life, so we have to tread very carefully here. Even if the AI achieves 90% accuracy, there’s still a 10% chance it could be wrong. And while the potential for false positives might be relatively low, the impact of those mistakes could have severe consequences. Remember, we aren’t talking about which version of Internet Explorer we should support, we’re talking about someone’s life — we have to be more granular because both the potential and impact of harm are high here. Accident Theory and Preventing Harm Kaelin Burns (Fractal’s Director of Product Management) has this to say about creating products ethically: “When you create something, you also create the ‘accident’ of it. For example, when cars were invented, everyone was excited about the positive impact of getting places faster, but it also created the negative impact of car crashes and injuries. How do you evaluate the upside of a new invention along with the possible negative consequences, especially when they have never happened before? So when you’re creating new technology, I believe you have a responsibility, to the best of your ability, to think through the negative, unintended, and problematic uses of that technology, and to weigh it against the good it can do. 
It becomes particularly challenging when that technology also has the potential to be extremely profitable, but it is even more important in those cases.” If you’re looking for an exercise you can do, try my Black Mirror Brainstorm. In closing In this post, we discussed why we need a code of ethics in our world. I shared one thing that I added to mine. We also talked about the impact of not having one. You also learned about Accident Theory and have a shiny new exercise to try. I’m curious about one thing: What’s one thing you’d include if you made a Hippocratic Oath for tech? Thanks for reading.
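The earlier point about accuracy versus impact can be grounded with some back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions, not numbers from this post: even a small error rate, multiplied by many uses, produces a meaningful count of high-impact mistakes.

```python
# Illustrative assumptions only: a "90% accurate" identification tool in heavy use
accuracy = 0.90            # assumed accuracy of the hypothetical de-pixelizer
lookups_per_year = 50_000  # assumed number of identification attempts per year

expected_mistakes = lookups_per_year * (1 - accuracy)
print(f"Expected misidentifications per year: {expected_mistakes:,.0f}")

# Harm is probability times impact: each mistake here is potentially
# life-altering, so even a "low" error rate carries unacceptable weight.
```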
https://medium.com/thisisartium/we-need-a-code-of-ethics-eaaba6f9394b
['Joshua Mauldin']
2020-07-23 22:54:13.381000+00:00
['Entrepreneurship', 'Design', 'Ethics']
Title Need Code EthicsContent Need Code Ethics We’ve moving fast long it’s hurting everyone keep wondering ensure we’re building better world product make need code ethic often see people fall crack they’re “edge cases” “not target audience” end day we’re still talking human profession deeply alter someone’s life code ethic conduct must adhere Take physicians’ Hippocratic oath example It’s got great guiding principle we’ll get Violating code mean fine losing ability practice medicine also build thing deeply alter someone’s life shouldn’t one tech isn’t new subject stretch made mini one It’s boiled one thing flexible enough guide decision even modified Hippocratic Oath prevent harm whenever possible prevention preferable mitigation also take account potential harm harm’s impact That’s meaningful year tech “move fast break things” mentality scream immaturity It’s caused carelessness metricsobsessed growth worse— don’t need belabor We’ve moved fast long enough let’s grow together intentional build Maybe new mantra could “move thoughtfully prevent harm” maybe isn’t quite catchy practical example Recently developer launched depixelizer Essentially take pixelated image approximate person’s face might look like result were…not great Setting aside AI seems trained white face consider process might go wrong harm people Imagine algorithm make hand law enforcement mistakenly identifies someone criminal mistake could potentially ruin someone’s life tread carefully Even AI achieves 90 accuracy there’s still 10 chance could wrong potential false positive might relatively low impact mistake could severe consequence Remember aren’t talking version Internet Explorer support we’re talking someone’s life — granular potential impact harm high Accident Theory Preventing Harm Kaelin Burns Fractal’s Director Product Management say creating product ethically “When create something also create ‘accident’ example car invented everyone excited positive impact getting place faster also created negative impact car crash injury evaluate upside new invention along possible negative consequence especially never happened you’re creating new technology believe responsibility best ability think negative unintended problematic us technology weigh good become particularly challenging technology also potential extremely profitable even important cases” you’re looking exercise try Black Mirror Brainstorm closing post discussed need code ethic world shared one thing added mine also talked impact one also learned Accident Theory shiny new exercise try I’m curious one thing What’s one thing you’d include made Hippocratic Oath tech Thanks readingTags Entrepreneurship Design Ethics
3,957
Amazon EC2 for Dummies — Virtual Servers on the Cloud
For more information on Instance Types, see the EC2 documentation. 3- Storage Amazon EC2 offers flexible, cost-effective, and easy-to-use data storage options to be used with instances, each having a unique combination of performance and durability. Each option can be used independently or in combination. These storage options are divided into 4 categories: 1- Amazon Elastic Block Store (EBS): EBS is a durable block-level storage option for persistent storage. It is recommended for data requiring granular and frequent updates, such as a database. The data persists on an EBS volume even after the instance has been stopped or terminated, unlike instance store volumes. It’s a network-attached volume, so it can be attached to or detached from an instance at will. More than one EBS volume can be attached to an instance at a time. The EBS encryption feature allows the encryption of data. For backups, EBS provides the snapshot feature, which stores the snapshot on Amazon S3; a snapshot can be used to create a new EBS volume that can then be attached to a new instance. EBS volumes are created in a specific Availability Zone (AZ), automatically replicated within that AZ, and available to all instances in that particular Availability Zone. Amazon EBS provides the following volume types: General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD; each type is either IOPS optimized or throughput optimized. EBS volumes provide the capability to dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes. You continue to pay for the volume used as long as the data persists. 2- Amazon EC2 instance store: It’s a temporary block-level storage option and is available on disks physically attached to the host computer, unlike EBS volumes, which are network-attached. The data on instance store volumes only persists for the lifetime of the instance, and the volumes can’t be detached and attached to other instances like EBS volumes can. Data persists if the instance reboots, but stopping or terminating the instance results in permanent loss of data, since every block of storage in the instance store is reset. It is ideal for data that needs to be stored temporarily, such as caches, buffers, or frequently changing data. The size of the instance store available and the type of hardware used for the instance store volumes are determined by the instance type. Instance store volumes are included in an instance’s usage cost. AMIs created from instances that use instance store volumes don’t preserve the data on those volumes, so it is not present on instances launched from that AMI. Changing the instance type also means the previous instance store volume won’t carry over, and all its data will be lost. 3- Amazon Elastic File System (Amazon EFS): Amazon EFS is scalable file storage that can be used to create a file system and mount it to EC2 instances. It is used as a common data source for workloads and applications running on multiple instances. 4- Amazon Simple Storage Service (Amazon S3): Amazon S3 is object storage that provides access to reliable, fast, and inexpensive data storage infrastructure. It allows you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web. Amazon EC2 uses Amazon S3 for storing AMIs and snapshots of data volumes. To learn more about storage, visit the EC2 documentation. 
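As a rough, hedged illustration of the EBS features described above (encrypted volumes, attachment, and snapshots), a boto3 sketch might look like the following. The region, Availability Zone, and instance ID are placeholders, and error handling is omitted:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create an encrypted 20 GiB General Purpose SSD volume in a specific AZ
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,
    VolumeType="gp3",
    Encrypted=True,
)

# Wait until the volume is available, then attach it to a hypothetical instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)

# Back the volume up: the snapshot is stored in Amazon S3 and can later
# seed a new EBS volume attached to a different instance
snapshot = ec2.create_snapshot(
    VolumeId=volume["VolumeId"],
    Description="Backup of the data volume",
)
print(snapshot["SnapshotId"])
```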
4- Networking By default, all EC2 instances are launched in a default VPC. A Virtual Private Cloud (VPC) enables you to logically separate a section of the AWS cloud and launch your resources in a virtual network defined to suit your needs. It consists of IPv4/IPv6 addresses, internet gateways, route tables, NAT gateways, and public and private subnets, and it helps you design your network and control access to your resources over the internet. You can place resources such as databases in a private subnet to deny any access to them over the internet, or place others, like web servers, in public subnets for global access. Amazon EC2 and Amazon VPC use the IPv4 addressing protocol by default, and this behavior can’t be disabled. An IPv4 CIDR block must be specified when creating a VPC. A public IPv4 address is automatically assigned to an instance when it is launched in the default VPC. These public IP addresses are not associated with a specific AWS account; when disassociated from an instance, they are released back into the public IPv4 address pool and cannot be reused. For a persistent address, AWS offers Elastic IP addresses (EIPs). An EIP is a public IPv4 address that you can allocate to your account until you choose to release it. Amazon also provides a DNS server that resolves Amazon-provided IPv4 DNS hostnames to IPv4 addresses. AWS also provides the Elastic Network Interface (ENI), a logical networking component in a VPC that represents a virtual network card and enables communication between different components. We can create our own network interface, which can be attached to an instance, detached from it, and attached to another instance as we require. Every instance in a VPC has a default network interface, called the primary network interface, which cannot be detached. To learn more about networking, visit the EC2 documentation. 5- Security For building enterprise-level applications, providing security is a must, and AWS provides state-of-the-art security features to prevent threats to customers’ applications. AWS follows the shared responsibility model, which describes security as a responsibility shared between AWS and its customers. Security of the cloud — The protection of the infrastructure that runs AWS services in the AWS Cloud falls under AWS’s responsibility. AWS also provides services that you can use securely. Third-party auditors regularly test and verify the effectiveness of AWS security as part of the AWS Compliance Programs. Security in the cloud — Customer responsibility is determined by the AWS services they use. Customers are responsible for other factors including the sensitivity of data, the company’s requirements, and applicable laws and regulations. For example, securely keeping the private key used for connection to EC2 instances falls under the customer’s responsibility. 
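Continuing that last example of customer-side responsibility, a minimal boto3 sketch for creating a key pair and keeping the private half safe could look like this. The key name, file path, and region are assumptions:

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# AWS keeps only the public key; the private key material is returned just once
key_pair = ec2.create_key_pair(KeyName="my-app-key")  # hypothetical key name

key_path = os.path.expanduser("~/.ssh/my-app-key.pem")
with open(key_path, "w") as f:
    f.write(key_pair["KeyMaterial"])

# Restrict the file so only the owner can read it; keeping this private key
# safe is the customer's side of the shared responsibility model.
os.chmod(key_path, 0o400)
```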
Security is a top priority at AWS, and there are many security features that can help manage the security needs of EC2 and other services. These include: Infrastructure security — this includes network isolation via VPCs and subnets, physical host isolation by virtually isolating EC2 instances on the same host, network control via security groups, which act as a firewall and let you define IP ranges to accept traffic from, NAT gateways to allow instances in private subnets to reach the global internet, AWS Systems Manager Session Manager to access your instances remotely, VPC Flow Logs to monitor the traffic, and EC2 Instance Connect for connecting to your instances using Secure Shell (SSH) without the need to share and manage SSH keys. Interface VPC endpoint — enables you to privately access Amazon EC2 APIs by restricting all network traffic between your VPC and Amazon EC2 to the Amazon network, eliminating any need for internet gateways, NAT devices, or virtual private gateways. Resilience — The AWS global infrastructure is built around AWS Regions and Availability Zones. Regions include multiple (at least 2) Availability Zones that are physically separated and isolated, connected via low-latency, high-throughput, and highly redundant networking. With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Data protection — Data hosted on AWS’s infrastructure is controlled and maintained by AWS, including the security configuration controls for handling customer content and personal data. However, customers are responsible for any personal data that they put in the AWS Cloud. For that, AWS provides encryption at rest for EBS volumes and snapshots and encryption in transit by providing a secure communication channel for remote access to instances. Secure and private connectivity between EC2 instances of all types is provided by AWS. In addition, some instance types automatically encrypt data in transit between instances. IAM — For data protection purposes, AWS recommends that you protect AWS account credentials and set up individual user accounts with AWS Identity and Access Management (IAM) so that each user is given only the permissions necessary to fulfill their job duties. Also, define roles and policies for services to access only the necessary features of a service. Key Pairs — A key pair, consisting of a private key and a public key, is a set of security credentials that you use to prove your identity when connecting to an instance. The public key is stored on EC2, and you store the private key. The private key is used to securely access your instances. To learn more about security, see the EC2 documentation.
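To tie together the network-control points above, here is a hedged boto3 sketch that creates a security group acting as a firewall and allows inbound SSH only from a specific IP range. The VPC ID, group name, and CIDR block are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# A security group acts as a virtual firewall for instances in the VPC
sg = ec2.create_security_group(
    GroupName="web-servers",                      # hypothetical group name
    Description="Allow SSH only from the office network",
    VpcId="vpc-0123456789abcdef0",                # placeholder VPC ID
)

# Accept inbound SSH (TCP port 22) solely from a trusted address range
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "Office network"}],
    }],
)
print(sg["GroupId"])
```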
https://medium.com/analytics-vidhya/amazon-ec2-for-dummies-virtual-servers-on-the-cloud-205ceeb11cd4
['Furqan Butt']
2020-08-29 15:59:16.390000+00:00
['Amazon Web Services', 'AWS', 'Cloud Services', 'Cloud Computing', 'Ec2']
Title Amazon EC2 Dummies — Virtual Servers CloudContent information Instance Types see EC2 documentation 3 Storage Amazon EC2 offer flexible costeffective easytouse data storage option used instance unique combination performance durability option used independently combination storage option divided 4 category 1 Amazon Elastic Block Store EBS EBS durable blocklevel storage option persistent storage recommended data requiring granular frequent update database data persists EBS volume even instance stopped terminated unlike instance store volume It’s networkattached volume attached detached instance one EBS volume attached instance time EBS encryption feature allows encryption data backup EBS provides EBS snapshot feature store snapshot Amazon S3 used create EBS volume attached new instance EBA volume created specific availability zoneAZ automatically replicated within AZ available instance particular availability zone Amazon EBS provides following volume type General Purpose SSD Provisioned IOPS SSD Throughput Optimized HDD Cold HDD type either IOPS optimized throughput optimized EBS volume provide capability dynamically increase size modify provisioned IOPS capacity change volume type live production volume continue pay volume used long data persists 2 Amazon EC2 instance store It’s temporary blocklevel storage option available disk physically attached host computer unlike EBS volume networkattached data instance store volume persist lifetime instance can’t detached attached instance like EBS Data persists case instance reboots stoping terminating instance result permanent loss data since every block storage instance store reset ideal data need stored temporarily cache buffer frequently changing data size instance store available type hardware used instance store volume determined instance type Instance store volume included cost instance’s usage cost AMI’s created instance instance store storage volume don’t preserve data present volume present instance launched AMI Also changing instance type won’t previous instance store volume attached data lost 3 Amazon Elastic File System Amazon EFS Amazon EFS scalable file storage used create file system mount EC2 instance used common data source workload application running multiple instance 4 Amazon Simple Storage Service Amazon S3 Amazon S3 object storage provides access reliable fast inexpensive data storage infrastructure allows store retrieve amount data time within Amazon EC2 anywhere web Amazon EC2 us Amazon S3 storing AMIs snapshot data volume learn storage visit EC2 documentation 4 Networking default EC2 instance launched default VPC Virtual Private Cloud enables logically separate section AWS cloud launch resource virtual network defined per need consists IPv4IPv6 address internet gateway route table NAT gateway public private subnets help designing network controlling access resource internet place resource database private subnet deny access internet place others like web server public subnets global access Amazon EC2 Amazon VPC use IPv4 addressing protocol default behavior can’t disabled IPv4 CIDR block must specified creating VPC public IPv4 address automatically assigned instance launched default VPC public IP address associated specific AWS account disassociated instance released back public IPv4 address pool cannot reused purpose AWS offer Elastic IP address EIP public IPv4 address allocate account choose release DNS Server also provided Amazon resolve Amazonprovided IPv4 DNS hostnames IPv4 address AWS also provides Elastic 
Network InterfaceENI logical networking component VPC represents virtual network card enables communication different component create network interface attached instance detached instance attached another instance require Every instance VPC default network interface called primary network interface cannot detached learn networking visit EC2 documentation 5 Security building enterpriselevel application providing security must AWS provides state art security feature prevent threat customer’s application AWS follows shared responsibility model describes security responsibility AWS customer Security cloud — protection infrastructure run AWS service AWS Cloud fall AWS’s responsibility AWS also provides service use securely Thirdparty auditor regularly test verify effectiveness AWS security part AWS Compliance Programs — protection infrastructure run AWS service AWS Cloud fall AWS’s responsibility AWS also provides service use securely Thirdparty auditor regularly test verify effectiveness AWS security part AWS Compliance Programs Security cloud — Customer responsibility determined AWS service use Customers responsible factor including sensitivity data company’s requirement applicable law regulation example securely keeping private key used connection EC2 instance Customer responsibility Security utmost priority AWS lot security feature provided AWS help manage EC2 service security need include Infrasture security — includes network isolation via VPCs subnets physical host isolation virtually isolating EC2 instance host network control via security group act firewall let define IP range accept traffic NAT gateway allow instance private subnets reach global internet AWS Systems Manager Session Manager access instance remotely instead VPC Flow Logs monitor traffic EC2 Instance Connect connection instance using Secure Shell SSH without need share manage SSH key — includes via VPCs subnets virtually isolating EC2 instance host via act firewall let define IP range accept traffic allow instance private subnets reach global internet access instance remotely instead monitor traffic connection instance using Secure Shell SSH without need share manage SSH key Interface VPC endpoint — enables privately access Amazon EC2 APIs restricting network traffic VPC Amazon EC2 Amazon network eliminate need internet gateway NAT device virtual private gateway — enables privately access Amazon EC2 APIs restricting network traffic VPC Amazon EC2 Amazon network eliminate need internet gateway NAT device virtual private gateway Resilience — AWS global infrastructure built around AWS Regions Availability Zones Regions include multiple least 2 Availability Zones physically separated isolated connected via lowlatency highthroughput highly redundant networking Availability Zones design operate application database automatically failover zone without interruption — AWS global infrastructure built around AWS Regions Availability Zones Regions include multiple least 2 Availability Zones physically separated isolated connected via lowlatency highthroughput highly redundant networking Availability Zones design operate application database automatically failover zone without interruption Data protection — Data hosted AWS’s infrastructure controlled maintained AWS including security configuration control handling customer content personal data However customer responsible personal data put AWS Cloud AWS provides encryption rest EBS volume snapshot encryption transit providing secure communication channel remote access instance 
Secure private connectivity EC2 instance type provided AWS addition instance type automatically encrypt dataintransit instance — Data hosted AWS’s infrastructure controlled maintained AWS including security configuration control handling customer content personal data However customer responsible personal data put AWS Cloud AWS provides EBS volume snapshot providing secure communication channel remote access instance Secure private connectivity EC2 instance type provided AWS addition instance type automatically encrypt dataintransit instance IAM — data protection purpose AWS recommends protect AWS account credential set individual user account AWS Identity Access Management IAM user given permission necessary fulfill job duty Also define role policy service access necessary feature service — data protection purpose AWS recommends protect AWS account credential set individual user account AWS Identity Access Management IAM user given permission necessary fulfill job duty Also define role policy service access necessary feature service Key Pairs — key pair consisting private key public key set security credential use prove identity connecting instance public key stored EC2 store private key Private key used securely access instance learn security see EC2 documentationTags Amazon Web Services AWS Cloud Services Cloud Computing Ec2
3,958
Artificial Intelligence: Synergy or Sinnery?
Will the advent of A.I. allow us to embark upon a complete overhaul of traditional labor structures? This question comes up less frequently than others, and its answer depends wholly on whether we take an optimistic or a pessimistic view. To phrase it another way: A.I. can be seen as the harbinger of an age where humankind can, for the most part, finally unshackle itself from the toils of labor. Conversely, it can also be regarded — and is often regarded in this way — as an enormous threat to employment, set to disrupt almost every industry and cause massive-scale job loss. Assuming an optimistic perspective, it’s certainly an exciting proposition, one that would have to be supplemented with some measure of a universal basic income for everyone or some completely innovative way by which resources can be accumulated by members of society. While it may seem wholly unfeasible to live in a world where humans need no longer work (again, for the most part) and can be set free to pursue their individual endeavors, it is nonetheless a tantalizing prospect. Preparing for a world without work means grappling with the roles work plays in society, and finding potential substitutes. First and foremost, we rely on work to distribute purchasing power: to give us the dough to buy our bread. Eventually, in our distant Star Trek future, we might get rid of money and prices altogether, as soaring productivity allows society to provide people with all they need at near-zero cost. — Ryan Avent, The Guardian Supposing one were to take a pessimistic perspective, the threat of soaring unemployment rates is all too real. We’ve already seen the loss of jobs brought about by automation in the workforce, and A.I. poses the most menacing danger of all. The darkest estimates project the loss of half of all current jobs to automation and A.I. — and even if this is somewhat exaggerated, it certainly seems drastic enough to warrant considering alternative systems of wage disbursement in their entirety.
https://medium.com/hackernoon/a-i-synergy-or-sinnery-3eeb2a2c8d3
['Michael Woronko']
2019-02-25 11:41:00.852000+00:00
['AI', 'Philosophy', 'Technology', 'Artificial Intelligence', 'Elon Musk']
Title Artificial Intelligence Synergy SinneryContent advent AI allow u embark upon complete overhaul traditional labor structure question come le frequently others one answer wholly dependent whether we’d like take optimistic pessimistic view another way phrase — AI seen harbinger age humankind part finally unshackle toil labor Conversely also regarded — often regarded way — enormous threat employment set disrupt almost every industry cause massive scale job loss Assuming optimistic perspective it’s certainly exciting proposition one would supplemented measure universal basic income everyone completely innovative way resource accumulated member society seems wholly unfeasible live world human need longer work part set free pursue individual endeavor nonetheless tantalizing prospect Preparing world without work mean grappling role work play society finding potential substitute First foremost rely work distribute purchasing power give u dough buy bread Eventually distant Star Trek future might get rid money price altogether soaring productivity allows society provide people need nearzero cost — Ryan Avent Guardian Supposing one take pessimistic perspective threat soaring unemployment rate real We’ve already seen loss job brought automation workforce AI pose menacing danger darkest estimate come show loss half current job automation AI — even remotely exaggerated certainly seems drastic enough consider alternative system wage disbursement entiretyTags AI Philosophy Technology Artificial Intelligence Elon Musk
3,959
Big Tech Regulators Are Missing the Point
Facebook’s CEO Mark Zuckerberg at a 2018 Congressional hearing on privacy (Photo by Chip Somodevilla/Getty Images | Source) It has been a tragic saga, for people who are familiar with the ways that social media platforms and companies operate, to watch government regulatory sessions with Big Tech companies. For many young people, this began with U.S. lawmakers’ questioning in Congressional hearings; sessions that revealed the lack of understanding of social media by, frankly, older legislators. However, for those of us who study modern technology and the way that it has mutated capitalism into an entirely new beast, the frustrations with lawyers, government officials, and anyone else who engages in mainstream regulatory discourse continue and intensify. This is primarily because regulators seem not to understand the actual imperatives guiding Big Tech. While they aim at Antitrust, they tip their hand in outlets like the New York Times and say that the case is harder to make than they expected. Because they are not versed in the paradigms that guide Big Tech, they fail to realize why their case is so hard. Companies like Google, Facebook, Apple, Amazon, Microsoft, and the like, have not been motivated by user products for over a decade. They are focused on data and prediction products. The disconnect between this older understanding of how capitalism has worked, and how Shoshana Zuboff’s appropriately named “surveillance capitalism” works currently, is ruining any chance of actually reining in Big Tech. There is an urgent need for deeper understandings of surveillance capitalism and its imperatives in order to truly reveal the danger Big Tech poses to all of us, and move towards substantive regulation. Surveillance Capitalism? Zuboff’s paradigm-shifting work, The Age of Surveillance Capitalism, is a necessary prerequisite read for anybody who dares challenge Big Tech’s hegemonic influence. I’ll detail a few key concepts that motivate the regulatory arguments against Big Tech and best depict why current Antitrust cases will likely fall embarrassingly flat. Facebook is not after Instagram or WhatsApp in order to improve the actual user interfaces or messaging capabilities; they are after these companies to acquire more of your behavioral data to feed into their machine learning prediction algorithms. Primary among these is the idea that companies like Google, Facebook, Amazon, and more, are not in the business of making their user products better. Zuboff calls this old cycle of product improvement the “behavioral reinvestment cycle.” This argues that, in the old days, Google may have used data on how its search bar was being used in order to improve the search bar itself — potentially adding a new feature like search suggestions. This cycle closely mirrors the cycle of capital reinvestment from industrial capitalism, where we can imagine the profits from a company like Ford Motor being reinvested back into its production lines or the cars themselves. This is not how Big Tech companies operate. This point could not be more important. Companies that are playing the surveillance capitalist game are not interested in changing their products to better serve users. The actual products these companies sell are predictions — that’s why Google is in the advertising business; they predict how you are feeling, thinking, and how you may do so in the future in order to give you a perfectly timed and tailored advertisement. You are not the customer for Big Tech companies. 
You are the raw material: you generate behavioral data that they analyze, and they then sell predictions to their actual customers, advertisers. All of this points to the true incentives Big Tech is following, which flow from what Zuboff calls the extraction imperative. Their prediction products improve as they harvest more behavioral data from you. Therefore, there is a strong incentive to extract more data from you — i.e., they want to make you use their platforms more, and in different ways. There is also an incentive under the extraction imperative to simply collect as much data as possible, and this is facilitated by acquiring diverse companies. Facebook is not after Instagram or WhatsApp in order to improve the actual user interfaces or messaging capabilities; they are after these companies to acquire more of your behavioral data to feed into their machine learning prediction algorithms. Under these incentives, companies like Facebook have spent years biding their time and taking flak from privacy scandal after privacy scandal because their entire business relies on gathering more data from you. For example, in 2014, Facebook faced intense privacy backlash after acquiring WhatsApp, and vowed to keep the data from the two apps in separate silos. Almost seven years later, however, in today’s New York Times article on the Antitrust cases, it is taken as common sense that the apps are being integrated. The article states, “In September, 18 months after the initial announcement that the apps would work together, Facebook unveiled the integration of Instagram and its Messenger services. The company anticipates that it may take even longer to complete the technical work for stitching together WhatsApp with its other apps.” Zuboff points out that this is part of a pattern that Big Tech companies have used since the early 2000s: they do something that shocks us and raises privacy concerns, apologize and say they made a mistake and will protect privacy, and then wait long enough until everyone forgets and simply do it anyway. She calls this the “dispossession cycle,” and it is crucial for any regulator trying to understand how these companies operate. How Regulators Should Proceed In light of these ideas that drastically shift how Big Tech is understood, regulators need to commensurately shift their strategies. The narratives that Facebook and Google have become expert at blasting out in blog posts will trump regulators’ narratives unless they, and the public, truly understand what these companies are after. Instead of trying to argue that product-based competition has been harmed by Big Tech snapping up would-be competitors like Instagram or WhatsApp, a better argument must emphasize that prediction product competition is monopolized by acquiring more sources of data. I should strongly note that I do not endorse in the slightest the idea that a market of prediction products is even legitimate. Nor do I wish to imply it doesn’t infringe heavily on human rights.
However, using the language of surveillance capitalism will help regulators take the first step in the argument against Big Tech, and will lead to even stronger critiques that these prediction products — based on enormous and rich streams of behavioral data — infringe on autonomy, as they arguably “know” you so well they can manipulate you. The anti-competitive argument easily follows from recognizing that the competition lies in competing data extraction and predictions, not competing user interfaces or product features. An understanding of surveillance capitalism, and the extraction and prediction imperatives, also counters the typical narratives woven by companies like Facebook and Google. In the same New York Times article, Facebook executives are quoted saying things like, “These transactions were intended to provide better products for the people who use them, and they unquestionably did,” said Jennifer Newstead, Facebook’s general counsel… … Mr. Zuckerberg said Facebook was fighting a far larger ecosystem of competitors that went beyond social networking, including “Google, Twitter, Snapchat, iMessage, TikTok, YouTube and more consumer apps, to many others in advertising.” That is because Facebook and its other apps are used for communication and entertainment, such as streaming video and gaming. These narratives make it seem like Big Tech companies are motivated by the old-school “behavioral reinvestment cycle” described above. They plainly make people think their apps are communication, entertainment, or gaming tools. But this is only what they are on the surface: they are actually tools to make behavioral prediction products for advertisers. The line that these companies “make better products for users” is utilized over and over again. It is a diversionary tactic, and should be recognized as such. Regulators need to be crystal clear in their counter-narratives and call out these diversions. Regulators’ moves are often the only exposure the broader public receives to these issues, so regulators must do better to expose Big Tech’s charades to the general population. Regulators must finally also understand that arguments for privacy are not just based on Big Tech companies knowing where you live, or who your friends are. The true invasion of privacy is that, through prediction, they know how you feel, where you may be going, even what you may think about soon. Our thoughts and feelings are no longer private, and those are what are being fed to advertisers to make you more likely to view or click their ads. In the same way that democracy is often tied to freedom of speech, we need to deeply understand the implications of systems of consolidated power having this knowledge, so that we can move to protect freedom of behavior or freedom of thought. These ideas deserve a longer treatment, but it should suffice to say that they must be the crux of truly motivating why Big Tech is so dangerous. Urgency is Needed, with Caution These ideas scratch the surface of how the understanding of Big Tech companies needs to radically shift in order to motivate any regulatory action and rhetoric that cuts at the core of the actual problems. Without such an understanding, regulators seem doomed to face frustrations and lose the trust of the public through failed action and easy counter-arguments coming from Big Tech. Regulation as an ideology has decayed in the U.S. since the Neoliberal period under Reagan, and is now a partisan issue.
Failed regulatory action will only stymie momentum towards the understanding that a capitalist system can only function if it is regulated. We thus need to spread understanding of surveillance capitalism with urgency, so that everyone understands what’s at stake if Big Tech is left unchecked. We must also be cautious, though, and note that the problem is so contingent on a fairly massive ideological shift that it would likely take something along the lines of a social movement to meet the challenge — something that would take time. In the meantime, those who understand how surveillance capitalism operates must raise their voices and share these ideas as widely as possible. The power and reach of Big Tech’s behavioral extraction and manipulation will only increase with time.
https://medium.com/swlh/big-tech-regulators-are-missing-the-point-240481da2eb8
['Nick Rabb']
2020-12-11 03:53:58.772000+00:00
['Technology', 'Regulation', 'Google', 'Surveillance Capitalism', 'Facebook']
Title Big Tech Regulators Missing PointContent Facebook’s CEO Mark Zuckerberg 2018 Congressional hearing privacy Photo Chip SomodevillaGetty Images Source tragic saga people familiar way social medium platform company operate watch government regulatory session Big Tech company many young people began US lawmakers’ questioning Congressional hearing session revealed lack understanding social medium frankly elder legislator However u study modern technology way mutated capitalism entirely new beast frustration lawyer government official engage mainstream regulatory discourse continue intensify primarily regulator seem understanding actual imperative guiding Big Tech aim Antitrust tip hand journal like New York Times say case harder make expected fail realize versed deep understanding paradigm guide Big Tech case hard Companies like Google Facebook Apple Amazon Microsoft like motivated user product decade focused data prediction product disconnect older understanding capitalism worked Shoshanna Zuboff’s appropriately named “surveillance capitalism” work currently ruining chance actually reigning Big Tech urgent need deeper understanding surveillance capitalism imperative order truly reveal danger Big Tech pose u move towards substantive regulation Surveillance Capitalism Zuboff’s paradigmshifting work Age Surveillance Capitalism necessary prerequisite read anybody dare challenge Big Tech’s hegemonic influence I’ll detail key concept motivate regulatory argument Big Tech best depict current Antitrust case likely fall embarrassingly flat Facebook Instagram WhatsApp order improve actual user interface messaging capability company acquire behavioral data feed machine learning prediction algorithm Primary idea company like Google Facebook Amazon business making user product better Zuboff call old cycle product improvement “behavioral reinvestment cycle” argues old day Google may used user data search bar used order improve search bar — potentially adding new feature like search suggestion cycle closely mirror cycle capital reinvestment industrial capitalism imagine profit company like Ford Motor reinvested back production line car Big Tech company operate point could important Companies playing surveillance capitalist game interested changing product better serve user actual product company sell prediction — that’s Google advertising business predict feeling thinking may future order give perfectly timed tailored advertisement customer Big Tech company raw material generate behavioral data analyze sell prediction actual customer advertiser motivates true incentive Big Tech following follow Zuboff call extraction imperative prediction product improve harvest behavioral data Therefore strong incentive extract data — ie want make use platform different way also incentive extraction imperative simply collect much data possible facilitated acquiring diverse company Facebook Instagram WhatsApp order improve actual user interface messaging capability company acquire behavioral data feed machine learning prediction algorithm incentive company like Facebook spent year biding time taking flak privacy scandal privacy scandal entire business relies gathering data example 2014 Facebook faced intense privacy backlash acquiring WhatsApp vowed keep data two apps separate silo Almost seven year later however today’s New York Times article Antitrust case taken common sense apps integrated article state “In September 18 month initial announcement apps would work together Facebook unveiled integration Instagram 
Messenger service company anticipates may take even longer complete technical work stitching together WhatsApp apps” Zuboff point part pattern Big Tech company used since early 2000’s something shock u raise privacy concern apologize say made mistake protect privacy wait long enough everyone forgets simply anyway call “dispossession cycle” crucial understand regulator trying understand company operate Regulators Proceed light idea drastically shift Big Tech understood regulator need commensurately shift strategy narrative Facebook Google become expert blasting blog post trump regulators’ narrative unless public truly understand company plainly make people think apps communication entertainment gaming tool surface actually tool make behavioral prediction product advertiser Instead trying argue productbased competition harmed Big Tech snapping wouldbe competitor like Instagram WhatsApp better argument must emphasize prediction product competition monopolized acquiring source data strongly note endorse slightest idea market prediction product even legitimate wish imply doesn’t infringe heavily human right However using language surveillance capitalism help regulator take first step argument Big Tech lead even stronger critique prediction product — based enormous rich stream behavioral data — infringe autonomy arguably “know” well manipulate anticompetitive argument easily follows recognizing competition lie competing data extraction prediction competing user interface product feature understanding surveillance capitalism extraction prediction imperative also counter typical narrative woven company like Facebook Google New York Times article Facebook executive quoted saying thing like “These transaction intended provide better product people use unquestionably did” Jennifer Newstead Facebook’s general counsel… … Mr Zuckerberg said Facebook fighting far larger ecosystem competitor went beyond social networking including “Google Twitter Snapchat iMessage TikTok YouTube consumer apps many others advertising” Facebook apps used communication entertainment streaming video gaming narrative make seem like Big Tech company motivated oldschool “behavioral reinvestment cycle” described plainly make people think apps communication entertainment gaming tool surface actually tool make behavioral prediction product advertiser line company “make better product users” utilized diversionary tactic recognized Regulators need crystal clear counternarratives call diversion move regulator often exposure broader public receives issue regulator must better expose Big Tech’s charade general population Regulators must finally also understand argument privacy based Big Tech company knowing live friend true invasion privacy though prediction know feel may going even may think soon thought feeling longer private fed advertiser make likely view click ad way democracy often tied freedom speech need deeply understand implication system consolidated power knowledge move protect freedom behavior freedom thought idea deserve longer treatment suffice say must crux truly motivating Big Tech dangerous Urgency Needed Caution idea scratch surface understanding Big Tech company need radically shift order motivate regulatory action rhetoric cut core actual problem Without understanding regulator seem doomed face frustration lose trust public failed action easy counterargument coming Big Tech Regulation ideology decayed US since Neoliberal period Reagan partisan issue Failed regulatory action stymie momentum towards understanding 
capitalist system function regulated thus need speak urgency towards spreading understanding surveillance capitalism everyone understands what’s stake Big Tech left unchecked Though must also cautious note problem contingent fairly massive ideological shift would likely take something along line social movement meet challenge — something would take time meantime understand surveillance capitalism operates must raise voice share idea widely possible power reach Big Tech’s behavioral extraction manipulation increase timeTags Technology Regulation Google Surveillance Capitalism Facebook
3,960
Docker: my questions from the first day
Images and Docker Hub How do I see the actual Dockerfile on Docker Hub? Amazingly this isn’t a simple thing. Docker Hub really just hosts the images, not the actual Dockerfile used to make them (assuming they were made from a Dockerfile). You can get lucky by heading to the page for the desired image on Docker Hub, and often you will find a link to a GitHub hosted Dockerfile. You can also get some idea about the image if you head to Tags and click on the tag you want, and look at the image history. Where is the actual image on my machine? On your machine, run docker info and look for Docker Root Dir, like mine: Docker Root Dir: /var/lib/docker Liar! I went to that directory and it doesn’t exist! Probably you are on a Mac like me. In that case, a virtual machine image is located at: ~/Library/Containers/com.docker.docker/Data/vms/0 This VM runs behind the scenes under HyperKit and is what actually runs your Docker images. You can enter it with: screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty Then try the directory again: ls /var/lib/docker/ should do it! Press control k to exit. Where on my machine can I see the actual Dockerfile that was used to create the image? You are similarly out of luck! There are some tricks floating around on how to dig through an image’s build history to recover the Dockerfile, but in general the Dockerfile is not shipped with the image when you do a docker pull.
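If all you want is a rough picture of how an image was assembled, the layer metadata gets you most of the way there. Here is a minimal sketch of that approach (it assumes the Docker CLI is installed and the image has already been pulled; the image name is just a placeholder):

import subprocess

def show_build_steps(image: str) -> None:
    # `docker history --no-trunc` prints the command recorded for each layer,
    # which is usually the closest you can get to the original Dockerfile.
    result = subprocess.run(
        ["docker", "history", "--no-trunc", "--format", "{{.CreatedBy}}", image],
        capture_output=True, text=True, check=True,
    )
    # Layers are listed newest-first, so reverse them to read top-down.
    for step in reversed(result.stdout.splitlines()):
        print(step)

show_build_steps("alpine:3.12")  # placeholder image name

The output is reconstructed from layer metadata rather than the original file, so multi-stage or squashed builds will only show part of the story.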
https://medium.com/practical-coding/docker-my-questions-from-the-first-day-bc6af8d2a826
['Oliver K. Ernst']
2020-08-29 23:24:11.529000+00:00
['Coding', 'Development', 'Programming', 'Docker', 'AI']
Title Docker question first dayContent Images Docker Hub see actual docker file Docker Hub Amazingly isn’t simple thing Docker Hub really host image actual Dockerfile used make assuming made Dockerfile get lucky heading page desired image Docker Hub often find link GitHub hosted Dockerfile also get idea image head Tags click tag want look image history actual image machine machine run docker info look Docker Root Dir like mine Docker Root Dir varlibdocker Liar went directory doesn’t exist Probably Mac like case virtual image located LibraryContainerscomdockerdockerDatavms0 image run behind scene HyperKit run Docker image enter image screen LibraryContainerscomdockerdockerDatavms0tty try directory l varlibdocker Press control k exit machine see actual Docker file used create image similarly luck There’s trick floating around search log image built find Dockerfile general shipped image aTags Coding Development Programming Docker AI
3,961
Using Machine Learning to Predict Value of Homes On Airbnb
Introduction Data products have always been an instrumental part of Airbnb’s service. For example, personalized search ranking enables guests to more easily discover homes, and smart pricing allows hosts to set more competitive prices according to supply and demand. However, we have long recognized that it’s costly to make data products: these projects each required a lot of dedicated data science and engineering time and effort. Recently, advances in Airbnb’s machine learning infrastructure have significantly lowered the cost of deploying new machine learning models to production. For example, our ML Infra team built a general feature repository that allows users to leverage high quality, vetted, reusable features in their models. Data scientists have started to incorporate several AutoML tools into their workflows to speed up model selection and performance benchmarking. Additionally, ML Infra created a new framework that will automatically translate Jupyter notebooks into Airflow pipelines. In this post, I will describe how these tools worked together to expedite the modeling process and hence lower the overall development costs for a specific use case of LTV modeling — predicting the value of homes on Airbnb. What Is LTV? Customer Lifetime Value (LTV), a popular concept among e-commerce and marketplace companies, captures the projected value of a user for a fixed time horizon, often measured in dollar terms. At e-commerce companies like Spotify or Netflix, LTV is often used to make pricing decisions like setting subscription fees. At marketplace companies like Airbnb, knowing users’ LTVs enables us to allocate budget across different marketing channels more efficiently, calculate more precise bidding prices for online marketing based on keywords, and create better listing segments. While one can use past data to calculate the historical value of existing listings, we took one step further to predict the LTV of new listings using machine learning. Machine Learning Workflow For LTV Modeling Data scientists are typically accustomed to machine learning related tasks such as feature engineering, prototyping, and model selection. However, taking a model prototype to production often requires an orthogonal set of data engineering skills that data scientists might not be familiar with. Luckily, at Airbnb we have machine learning tools that abstract away the engineering work behind productionizing ML models. In fact, we could not have put our model into production without these amazing tools. The remainder of this post is organized into four topics, along with the tools we used to tackle each task: Feature Engineering: define relevant features. Prototyping and Training: train a model prototype. Model Selection & Validation: perform model selection and tuning. Productionization: take the selected model prototype to production. Feature Engineering Tool used: Airbnb’s internal feature repository — Zipline One of the first steps of any supervised machine learning project is to define relevant features that are correlated with the chosen outcome variable, a process called feature engineering. For example, in predicting LTV, one might compute the percentage of the next 180 calendar dates that a listing is available, or a listing’s price relative to comparable listings in the same market. At Airbnb, feature engineering often means writing Hive queries to create features from scratch.
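As a toy illustration of the first kind of feature (a listing’s availability share over the next 180 days), here is a small pandas sketch; the DataFrame and every column name in it are invented for illustration, not Airbnb’s actual schema:

import pandas as pd

# Invented toy calendar: one row per listing per date, with an availability flag.
calendar = pd.DataFrame({
    "listing_id": [1, 1, 2, 2],
    "date": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-01", "2021-01-02"]),
    "available": [True, False, True, True],
})

# Keep only dates within 180 days of the snapshot date.
snapshot = pd.Timestamp("2021-01-01")
window = calendar[(calendar["date"] >= snapshot) & (calendar["date"] < snapshot + pd.Timedelta(days=180))]

# Fraction of days each listing is available in that window.
pct_available_180d = window.groupby("listing_id")["available"].mean().rename("pct_available_180d")
print(pct_available_180d)

In the workflow described here, this kind of logic would live in Hive queries or Zipline feature definitions rather than in a local pandas script.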
However, this work is tedious and time-consuming, as it requires specific domain knowledge and business logic, which means the feature pipelines are often not easily sharable or even reusable. To make this work more scalable, we developed Zipline — a training feature repository that provides features at different levels of granularity, such as at the host, guest, listing, or market level. The crowdsourced nature of this internal tool allows data scientists to use a wide variety of high quality, vetted features that others have prepared for past projects. If a desired feature is not available, a user can create her own feature with a short feature configuration file. When multiple features are required for the construction of a training set, Zipline will automatically perform intelligent key joins and backfill the training dataset behind the scenes. For the listing LTV model, we used existing Zipline features and also added a handful of our own. In sum, there were over 150 features in our model, including: Location: country, market, neighborhood and various geography features; Price: nightly rate, cleaning fees, price point relative to similar listings; Availability: total nights available, % of nights manually blocked; Bookability: number of bookings or nights booked in the past X days; Quality: review scores, number of reviews, and amenities. (Figure: an example training dataset.) With our features and outcome variable defined, we can now train a model to learn from our historical data. Prototyping and Training Tool used: Machine learning library in Python — scikit-learn As in the example training dataset above, we often need to perform additional data processing before we can fit a model: Data Imputation: we need to check if any data is missing, and whether that data is missing at random. If not, we need to investigate why and understand the root cause. If yes, we should impute the missing values. Encoding Categorical Variables: often we cannot use the raw categories in the model, since the model doesn’t know how to fit on strings. When the number of categories is low, we may consider using one-hot encoding. However, when the cardinality is high, we might consider using ordinal encoding, encoding by the frequency count of each category. In this step, we don’t quite know which set of features is best to use, so writing code that allows us to rapidly iterate is essential. The pipeline construct, commonly available in open-source tools like Scikit-Learn and Spark, is a very convenient tool for prototyping. Pipelines allow data scientists to specify high-level blueprints that describe how features should be transformed, and which models to train. To make it more concrete, consider our LTV model pipeline: at a high level, we use pipelines to specify data transformations for different types of features, depending on whether those features are of type binary, categorical, or numeric. FeatureUnion at the end simply combines the features column-wise to create the final training dataset.
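As a rough sketch of that shape (not Airbnb’s actual code; the column names are invented, and scikit-learn’s ColumnTransformer stands in for the FeatureUnion-plus-selectors construction described above):

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Invented example columns, grouped by type.
numeric_features = ["nightly_rate", "num_reviews", "pct_available_180d"]
categorical_features = ["market", "room_type"]
binary_features = ["instant_bookable"]

preprocessor = ColumnTransformer([
    # Numeric columns: impute missing values, then standardize.
    ("numeric", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric_features),
    # Low-cardinality categoricals: one-hot encode.
    ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_features),
    # Binary flags: impute only.
    ("binary", SimpleImputer(strategy="most_frequent"), binary_features),
])
# Like FeatureUnion in the original pipeline, ColumnTransformer concatenates the
# transformed blocks column-wise into the final training matrix.

An estimator can then be appended as the final step of such a pipeline for model fitting.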
The advantage of writing prototypes with pipelines is that they abstract away tedious data wrangling into reusable data transforms. Collectively, these transforms ensure that data will be transformed consistently across training and scoring, which solves a common problem of data transformation inconsistency when translating a prototype into production. Furthermore, pipelines also separate data transformations from model fitting. While not shown above, data scientists can add a final step to the pipeline to specify an estimator for model fitting. By exploring different estimators, data scientists can perform model selection to pick the model with the best out-of-sample error. Performing Model Selection Tool used: Various AutoML frameworks As mentioned in the previous section, we need to decide which candidate model is the best to put into production. To make such a decision, we need to weigh the tradeoffs between model interpretability and model complexity. For example, a sparse linear model might be very interpretable but not complex enough to generalize well. A tree-based model might be flexible enough to capture non-linear patterns but not very interpretable. This is known as the bias-variance tradeoff. (Figure referenced from Introduction to Statistical Learning with R by James, Witten, Hastie, and Tibshirani.) In applications such as insurance or credit screening, a model needs to be interpretable because it’s important for the model to avoid inadvertently discriminating against certain customers. In applications such as image classification, however, it is much more important to have a performant classifier than an interpretable model. Given that model selection can be quite time-consuming, we experimented with using various AutoML tools to speed up the process. By exploring a wide variety of models, we found which types of models tended to perform best. For example, we learned that eXtreme gradient boosted trees (XGBoost) significantly outperformed benchmark models such as mean response models, ridge regression models, and single decision trees. (Figure: comparing RMSE allows us to perform model selection.) Given that our primary goal was to predict listing values, we felt comfortable productionizing our final model using XGBoost, which favors flexibility over interpretability. Taking Model Prototypes to Production Tool used: Airbnb’s notebook translation framework — ML Automator As we alluded to earlier, building a production pipeline is quite different from building a prototype on a local laptop. For example, how can we perform periodic re-training? How do we score a large number of examples efficiently? How do we build a pipeline to monitor model performance over time? At Airbnb, we built a framework called ML Automator that automagically translates a Jupyter notebook into an Airflow machine learning pipeline. This framework is designed specifically for data scientists who are already familiar with writing prototypes in Python, and who want to take their model to production with limited experience in data engineering. (Figure: a simplified overview of the ML Automator framework; photo credit: Aaron Keys.) First, the framework requires a user to specify a model config in the notebook. The purpose of this model config is to tell the framework where to locate the training table, how many compute resources to allocate for training, and how scores will be computed. Additionally, data scientists are required to write specific fit and transform functions.
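ML Automator is internal to Airbnb and its exact interface isn’t shown here, so the sketch below is only an illustration of that division of labor; the function signatures and the XGBoost hyperparameters are assumptions, and preprocessor refers to the ColumnTransformer sketched earlier:

import xgboost as xgb

def fit(X_train, y_train):
    # Training: run the preprocessing pipeline, then fit the estimator on the result.
    features = preprocessor.fit_transform(X_train)
    model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
    model.fit(features, y_train)
    return model

def transform(model, X_new):
    # Scoring: apply the same transformations, then predict listing value.
    # In production, a function like this gets wrapped as a UDF for distributed batch scoring.
    return model.predict(preprocessor.transform(X_new))

Everything else, such as serialization, scheduling, and distributed scoring, is what the framework itself takes care of, as described next.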
The fit function specifies exactly how training will be done, and the transform function will be wrapped as a Python UDF for distributed scoring (if needed). In our LTV model, the fit function tells the framework that an XGBoost model will be trained, and that data transformations will be carried out according to the pipeline we defined previously. Once the notebook is merged, ML Automator will wrap the trained model inside a Python UDF and create an Airflow pipeline like the one below. Data engineering tasks such as data serialization, scheduling of periodic re-training, and distributed scoring are all encapsulated as a part of this daily batch job. As a result, this framework significantly lowers the cost of model development for data scientists, as if there were a dedicated data engineer working alongside the data scientists to take the model into production! (Figure: a graph view of our LTV Airflow DAG, running in production.) Note: Beyond productionization, there are other topics, such as tracking model performance over time or leveraging an elastic compute environment for modeling, which we will not cover in this post. Rest assured, these are all active areas under development. Lessons Learned & Looking Ahead In the past few months, data scientists have partnered very closely with ML Infra, and many great patterns and ideas arose out of this collaboration. In fact, we believe that these tools will unlock a new paradigm for how to develop machine learning models at Airbnb. First, the cost of model development is significantly lower: by combining disparate strengths from individual tools: Zipline for feature engineering, Pipeline for model prototyping, AutoML for model selection and benchmarking, and finally ML Automator for productionization, we have shortened the development cycle tremendously. Second, the notebook-driven design reduces the barrier to entry: data scientists who are not familiar with the framework have immediate access to a plethora of real-life examples. Notebooks used in production are guaranteed to be correct, self-documenting, and up-to-date. This design drives strong adoption from new users. As a result, teams are more willing to invest in ML product ideas: at the time of this post’s writing, we have several other teams exploring ML product ideas by following a similar approach: prioritizing the listing inspection queue, predicting the likelihood that listings will add cohosts, and automating the flagging of low-quality listings. We are very excited about the future of this framework and the new paradigm it has brought along. By bridging the gap between prototyping and productionization, we can truly enable data scientists and engineers to pursue end-to-end machine learning projects and make our product better.
https://medium.com/airbnb-engineering/using-machine-learning-to-predict-value-of-homes-on-airbnb-9272d3d4739d
['Robert Chang']
2017-07-17 16:07:24.185000+00:00
['Machine Learning', 'Data Science', 'AI', 'Technology', 'Artificial Intelligence']
Title Using Machine Learning Predict Value Homes AirbnbContent Introduction Data product always instrumental part Airbnb’s service However long recognized it’s costly make data product example personalized search ranking enables guest easily discover home smart pricing allows host set competitive price according supply demand However project required lot dedicated data science engineering time effort Recently advance Airbnb’s machine learning infrastructure lowered cost significantly deploy new machine learning model production example ML Infra team built general feature repository allows user leverage high quality vetted reusable feature model Data scientist started incorporate several AutoML tool workflow speed model selection performance benchmarking Additionally ML infra created new framework automatically translate Jupyter notebook Airflow pipeline post describe tool worked together expedite modeling process hence lower overall development cost specific use case LTV modeling — predicting value home Airbnb LTV Customer Lifetime Value LTV popular concept among ecommerce marketplace company capture projected value user fixed time horizon often measured dollar term ecommerce company like Spotify Netflix LTV often used make pricing decision like setting subscription fee marketplace company like Airbnb knowing users’ LTVs enable u allocate budget across different marketing channel efficiently calculate precise bidding price online marketing based keywords create better listing segment one use past data calculate historical value existing listing took one step predict LTV new listing using machine learning Machine Learning Workflow LTV Modeling Data scientist typically accustomed machine learning related task feature engineering prototyping model selection However taking model prototype production often requires orthogonal set data engineering skill data scientist might familiar Luckily Airbnb machine learning tool abstract away engineering work behind productionizing ML model fact could put model production without amazing tool remainder post organized four topic along tool used tackle task Feature Engineering Define relevant feature Define relevant feature Prototyping Training Train model prototype Train model prototype Model Selection Validation Perform model selection tuning Perform model selection tuning Productionization Take selected model prototype production Feature Engineering Tool used Airbnb’s internal feature repository — Zipline One first step supervised machine learning project define relevant feature correlated chosen outcome variable process called feature engineering example predicting LTV one might compute percentage next 180 calendar date listing available listing’s price relative comparable listing market Airbnb feature engineering often mean writing Hive query create feature scratch However work tedious time consuming requires specific domain knowledge business logic mean feature pipeline often easily sharable even reusable make work scalable developed Zipline — training feature repository provides feature different level granularity host guest listing market level crowdsourced nature internal tool allows data scientist use wide variety high quality vetted feature others prepared past project desired feature available user create feature feature configuration file like following multiple feature required construction training set Zipline automatically perform intelligent key join backfill training dataset behind scene listing LTV model used existing Zipline feature also 
added handful sum 150 feature model including Location country market neighborhood various geography feature country market neighborhood various geography feature Price nightly rate cleaning fee price point relative similar listing nightly rate cleaning fee price point relative similar listing Availability Total night available night manually blocked Total night available night manually blocked Bookability Number booking night booked past X day Number booking night booked past X day Quality Review score number review amenity example training dataset feature outcome variable defined train model learn historical data Prototyping Training Tool used Machine learning Library Python — scikitlearn example training dataset often need perform additional data processing fit model Data Imputation need check data missing whether data missing random need investigate understand root cause yes impute missing value need check data missing whether data missing random need investigate understand root cause yes impute missing value Encoding Categorical Variables Often cannot use raw category model since model doesn’t know fit string number category low may consider using onehot encoding However cardinality high might consider using ordinal encoding encoding frequency count category step don’t quite know best set feature use writing code allows u rapidly iterate essential pipeline construct commonly available opensource tool like ScikitLearn Spark convenient tool prototyping Pipelines allow data scientist specify highlevel blueprint describe feature transformed model train make concrete code snippet LTV model pipeline high level use pipeline specify data transformation different type feature depending whether feature type binary categorical numeric FeatureUnion end simply combine feature columnwise create final training dataset advantage writing prototype pipeline abstract away tedious data transformation using data transforms Collectively transforms ensure data transformed consistently across training scoring solves common problem data transformation inconsistency translating prototype production Furthermore pipeline also separate data transformation model fitting shown code data scientist add final step specify estimator model fitting exploring different estimator data scientist perform model selection pick best model improve model’s sample error Performing Model Selection Tool used Various AutoML framework mentioned previous section need decide candidate model best put production make decision need weigh tradeoff model interpretability model complexity example sparse linear model might interpretable complex enough generalize well tree based model might flexible enough capture nonlinear pattern interpretable known BiasVariance tradeoff Figure referenced Introduction Statistical Learning R James Witten Hastie Tibshirani application insurance credit screening model need interpretable it’s important model avoid inadvertently discriminating certain customer application image classification however much important performant classifier interpretable model Given model selection quite time consuming experimented using various AutoML tool speed process exploring wide variety model found type model tended perform best example learned eXtreme gradient boosted tree XGBoost significantly outperformed benchmark model mean response model ridge regression model single decision tree Comparing RMSE allows u perform model selection Given primary goal predict listing value felt comfortable productionizing final model using 
XGBoost favor flexibility interpretability Taking Model Prototypes Production Tool used Airbnb’s notebook translation framework — ML Automator alluded earlier building production pipeline quite different building prototype local laptop example perform periodic retraining score large number example efficiently build pipeline monitor model performance time Airbnb built framework called ML Automator automagically translates Jupyter notebook Airflow machine learning pipeline framework designed specifically data scientist already familiar writing prototype Python want take model production limited experience data engineering simplified overview ML Automator Framework photo credit Aaron Keys First framework requires user specify model config notebook purpose model config tell framework locate training table many compute resource allocate training score computed Additionally data scientist required write specific fit transform function fit function specifies training done exactly transform function wrapped Python UDF distributed scoring needed code snippet demonstrating fit transform function defined LTV model fit function tell framework XGBoost model trained data transformation carried according pipeline defined previously notebook merged ML Automator wrap trained model inside Python UDF create Airflow pipeline like one Data engineering task data serialization scheduling periodic retraining distributed scoring encapsulated part daily batch job result framework significantly lower cost model development data scientist dedicated data engineer working alongside data scientist take model production graph view LTV Airflow DAG running production Note Beyond productionization topic tracking model performance time leveraging elastic compute environment modeling cover post Rest assured active area development Lessons Learned Looking Ahead past month data scientist partnered closely ML Infra many great pattern idea arose collaboration fact believe tool unlock new paradigm develop machine learning model Airbnb First cost model development significantly lower combining disparate strength individual tool Zipline feature engineering Pipeline model prototyping AutoML model selection benchmarking finally ML Automator productionization shortened development cycle tremendously combining disparate strength individual tool Zipline feature engineering Pipeline model prototyping AutoML model selection benchmarking finally ML Automator productionization shortened development cycle tremendously Second notebook driven design reduces barrier entry data scientist familiar framework immediate access plethora real life example Notebooks used production guaranteed correct selfdocumenting uptodate design drive strong adoption new user data scientist familiar framework immediate access plethora real life example Notebooks used production guaranteed correct selfdocumenting uptodate design drive strong adoption new user result team willing invest ML product idea time post’s writing several team exploring ML product idea following similar approach prioritizing listing inspection queue predicting likelihood listing add cohosts automating flagging low quality listing excited future framework new paradigm brought along bridging gap prototyping productionization truly enable data scientist engineer pursue endtoend machine learning project make product betterTags Machine Learning Data Science AI Technology Artificial Intelligence
3,962
Tech Employees Disagree With Their Companies on BLM
Tech Employees Disagree With Their Companies on BLM Nearly 1/3 of Facebook staff members aren’t satisfied with the company’s response Photo by AJ Colores on Unsplash According to a new survey of tech professionals from data company Blind, a significant number of tech professionals at major companies disagree with their company’s response to the Black Lives Matter movement. More troubling, a large number also feel that they can’t discuss their perspectives openly at work. The survey revealed that 30% of Facebook employees disagree or strongly disagree with their company’s stance on Black Lives Matter and the death of George Floyd, as do 20% at Microsoft. More than half (56%) of Facebook staff members don’t feel comfortable raising their opinions on the situation to colleagues, and the same goes for 49% of people working at Google. This is a surprising result, especially for Google. The company usually prides itself on encouraging lively discussion and debate among its staff, using a network of Google-only private chat rooms and affinity groups. These groups often shape the company’s policies. When Google considered inking a deal with the Department of Defense to use its AI capabilities to analyze drone footage, staff members quickly organized using these internal groups and shut the efforts down. In an infamous case, a Googler also published an allegedly sexist memo on the company’s internal websites, which led to a backlash from other staff members who felt no qualms about speaking up. So it’s uncharacteristic for Googlers to feel they have to be reserved about a political and social movement, especially one that seems to fit relatively directly into Google’s “Don’t be Evil” ethos. It’s also unclear why Googlers felt uncomfortable. The company may feel that politically, it can’t comment as directly on the movement as it does on other issues. As a search engine that controls much of the world’s information, Google may feel that it has to remain neutral, even on important movements like #BLM. That’s a liability for a company with socially engaged staff members, and some may be feeling the impact of restrictions on their ability to take a strong stance. Encouragingly, a majority (62%) of African-American staff members agree with their company’s BLM stance and response. But at the same time, only 10% of Black and 20% of Latino staff members felt that their ethnicity was represented in the upper management of their tech company, versus 76% of white respondents. And nearly half of Google and Facebook employees of any ethnicity say that their personal values are represented by upper management. Diversity in tech is a challenging and important issue. Tech companies are clearly still navigating the best ways to respond to movements like Black Lives Matter, and how to find their own role in advancing these causes. Blind’s survey shows that they’re making progress, but have more work to do on this front. But even more importantly than their response to the movement, tech companies need to integrate diversity more directly into the core of the operations. Responding to a movement is one thing — ensuring representation at the upper echelons of a company is another. Tech should continue to evaluate its response to BLM, but should also consider diversity more broadly and continue working towards more inclusivity and representation of people of all ethnicities on boards and in leadership positions.
https://tomsmith585.medium.com/tech-employees-disagree-with-their-companies-on-blm-2207b9b4c4b3
['Thomas Smith']
2020-06-10 14:23:53.221000+00:00
['Google', 'Tech', 'Facebook', 'Diversity', 'Black Lives Matter']
Title Tech Employees Disagree Companies BLMContent Tech Employees Disagree Companies BLM Nearly 13 Facebook staff member aren’t satisfied company’s response Photo AJ Colores Unsplash According new survey tech professional data company Blind significant number tech professional major company disagree company’s response Black Lives Matter movement troubling large number also feel can’t discus perspective openly work survey revealed 30 Facebook employee disagree strongly disagree company’s stance Black Lives Matter death George Floyd 20 Microsoft half 56 Facebook staff member don’t feel comfortable raising opinion situation colleague go 49 people working Google surprising result especially Google company usually pride encouraging lively discussion debate among staff using network Googleonly private chat room affinity group group often shape company’s policy Google considered inking deal Department Defense use AI capability analyze drone footage staff member quickly organized using internal group shut effort infamous case Googler also published allegedly sexist memo company’s internal website led backlash staff member felt qualm speaking it’s uncharacteristic Googlers feel reserved political social movement especially one seems fit relatively directly Google’s “Don’t Evil” ethos It’s also unclear Googlers felt uncomfortable company may feel politically can’t comment directly movement issue search engine control much world’s information Google may feel remain neutral even important movement like BLM That’s liability company socially engaged staff member may feeling impact restriction ability take strong stance Encouragingly majority 62 AfricanAmerican staff member agree company’s BLM stance response time 10 Black 20 Latino staff member felt ethnicity represented upper management tech company versus 76 white respondent nearly half Google Facebook employee ethnicity say personal value represented upper management Diversity tech challenging important issue Tech company clearly still navigating best way respond movement like Black Lives Matter find role advancing cause Blind’s survey show they’re making progress work front even importantly response movement tech company need integrate diversity directly core operation Responding movement one thing — ensuring representation upper echelon company another Tech continue evaluate response BLM also consider diversity broadly continue working towards inclusivity representation people ethnicity board leadership positionsTags Google Tech Facebook Diversity Black Lives Matter
3,963
New research shows why anyone with high blood pressure — nearly half of U.S.
New research shows why anyone with high blood pressure — nearly half of U.S. adults — should seek to lower it. High blood pressure, or hypertension, can accelerate the decline in brain function, including memory, concentration and verbal skills, scientists report today in the journal Hypertension. The cognitive decline occurs whether the hypertension starts early in life or much later. “Effectively treating high blood pressure at any age in adulthood could reduce or prevent this acceleration,” says study author Sandhi Barreto, MD, professor of medicine at the Universidade Federal de Minas Gerais in Brazil. “Collectively, the findings suggest hypertension needs to be prevented, diagnosed and effectively treated in adults of any age to preserve cognitive function.” The findings add to evidence in a feature article I wrote last year defining hypertension and revealing the rising problem:
https://robertroybritt.medium.com/new-research-shows-why-anyone-with-high-blood-pressure-nearly-half-of-u-s-7160828d3470
['Robert Roy Britt']
2020-12-14 23:58:30.427000+00:00
['Blood Pressure', 'Hypertension', 'Health', 'Brain', 'High Blood Pressure']
Title New research show anyone high blood pressure — nearly half USContent New research show anyone high blood pressure — nearly half US adult — seek lower High blood pressure hypertension accelerate decline brain function including memory concentration verbal skill scientist report today journal Hypertension cognitive decline occurs whether hypertension start early life much later “Effectively treating high blood pressure age adulthood could reduce prevent acceleration” say study author Sandhi Barreto MD professor medicine Universidade Federal de Minas Gerais Brazil “Collectively finding suggest hypertension need prevented diagnosed effectively treated adult age preserve cognitive function” finding add evidence feature article wrote last year defining hypertension revealing rising problemTags Blood Pressure Hypertension Health Brain High Blood Pressure
3,964
Correct Code
Correct Code Stephanie Weirich Designs Tools for a Safer World Stephanie Weirich By Jacob Williamson-Rea New cars are packed with helpful technology. Downward-facing cameras help drivers stay within lanes, and adaptive cruise control can brake and accelerate a vehicle based on other drivers’ speeds. Likewise, banks use encryption software that changes your banking information into code that only your bank can use and read, and bank software even analyzes financial markets to make investments. These features are based on software systems that rely on over 100 million lines of code, with separate programs for each component of each system. But as technology evolves, the software behind these systems needs to keep up. Stephanie Weirich, ENIAC President’s Distinguished Professor in Computer and Information Science, aims to make software systems more reliable, maintainable and secure. Her research improves tools that help programmers to determine the correctness of their code, which is applicable to a broad scope of software. Specifically, Weirich researches and improves Haskell, a programming language that places a lot of emphasis on correctness, thanks to its basis in logic and mathematical theories. “People might not realize how much computational power underlies our society,” Weirich says. “Cars, for example, possess very strong correctness requirements as they have become so reliant on computation. If banks mess up their code, it can cause disaster for our financial systems. The security and correctness of these programs is very important.” If a hacker goes after the software behind a less-correct (and thus less-protected) component of a car, such as the brakes, the results could be dangerous and devastating. Not only could the hacker gain control, but because each individual system is interconnected as a whole, the other programs for different components could be prone to errors as well. Similarly, if a driver is using lane-keeping assist and adaptive cruise control on the highway, a bug in a less-correct braking system might tell the adaptive cruise control that the car is braking when in fact it isn’t, which could be deadly. “Automation is everywhere. Cars today are just computers that have a steering wheel instead of a keyboard,” says Antal Spector-Zabusky, one of Weirich’s doctoral students. “It’s very important that these computers are as reliable as possible so that everything functions correctly, along with every other interlocking system, to prevent software from crashing and to ward off hackers.” TRAVELING ACROSS LANGUAGES Programs for embedded systems, like those found in cars, are typically written in a programming language called “C.” Programmers make sure that their software will use data correctly by combining relevant variables into classifications known as “data types.” Types are what allow a programmer to assign rules to all of the different components of a computer program. Weirich focuses on Haskell because she uses it to improve the type system of the language itself, which leads to even more extensive correctness for programmers. She’s making the types more expressive, and as a result, programmers can make better use of the type system to help them develop correct code. For example, you could represent a date, such as February 29, 2019, using three integers: 2, 29 and 2019. However, the non-expressive “integer” type does not capture the relationship between these numbers. 
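The article’s example is about Haskell’s compile-time type system; as a rough analogy only (this is not Weirich’s code, and the class is invented), a Python sketch can at least enforce the same invariant when a value is constructed:

from dataclasses import dataclass

def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

@dataclass(frozen=True)
class Date:
    year: int
    month: int
    day: int

    def __post_init__(self):
        # Reject impossible month/day combinations at construction time.
        if not 1 <= self.month <= 12:
            raise ValueError(f"invalid month: {self.month}")
        days = 29 if (self.month == 2 and is_leap_year(self.year)) else DAYS_IN_MONTH[self.month - 1]
        if not 1 <= self.day <= days:
            raise ValueError(f"invalid date: {self.year}-{self.month:02d}-{self.day:02d}")

Date(2020, 2, 29)  # accepted: 2020 is a leap year
try:
    Date(2019, 2, 29)  # rejected: February 2019 has only 28 days
except ValueError as err:
    print(err)

The article’s point is that a sufficiently expressive type system can encode this relationship in the type itself, so the mistake surfaces when the program is compiled rather than when it runs.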
A more expressive type used for storing dates would flag this value as invalid by encoding the fact that February only has 28 days in non-leap years. While these tools and systems are not directly usable across different languages, the ideas are. For example, Weirich says Mozilla’s Rust language, a new programming language similar to C, draws from research on type systems, such as the type system research in the Haskell community. Wherever they’re implemented, the more expressive the type system, the more it can check complex, intricate relationships between components of a program. By contrast, a less expressive type system might not be able to detect at compile time when such relationships are violated, resulting in errors and incorrect behavior at runtime. Stronger types and better system verification software allow programmers to ensure they’re writing code correctly. Weirich has also worked with Spector-Zabusky to improve Haskell’s compiler, which is what turns the Haskell language into a language used by computers. “Instead of getting rid of bugs afterward, you get rid of them in the first place,” Weirich says. “The idea is that since you’re ruling out bugs at the beginning by how you are defining your types, you might be shortening development time. Also, because you don’t have to implement a wrong program and then redo that program, you’re shortening the maintenance time, because the compiler can help you figure out what part of the code can be changed.” DEEPSPEC Many professors and students in the Department of Computer and Information Science collaborate in a group called Programming Languages @ Penn, or PLClub. This includes Weirich and Spector-Zabusky, who have been working on a project called DeepSpec, a National Science Foundation flagship program, more formally known as “Expeditions in Computing: The Science of Deep Specification.” The DeepSpec project is a collaboration between Penn, MIT, Princeton and Yale. “DeepSpec is examining this question: What does it really take to specify software correctly?” says Spector-Zabusky. “We want to specify software that is used in the real world.” Specifying software is a fundamental part of ensuring that a program is as correct as possible. Specifications range in intensity, all the way from simple specifications, such as ensuring that an application won’t crash when it is used, to deep specifications, which could include ensuring that a complex numerical simulation computes correctly. Weirich’s research directly informs the DeepSpec project, particularly her work with Spector-Zabusky to verify the Haskell compiler. The group aims to develop verification for an entire computer system, including the operating system, hardware code and every other component. This takes correctness properties a step further, or deeper, than types can, resulting in a higher degree of confidence in these systems. (Photo: Professor Stephanie Weirich leads a spirited discussion in CIS 552: Advanced Programming as students use pair programming to work through an exercise based on the topic of the week.) COMPLEXITY IN CS Computer science’s complexity is what originally attracted Weirich to the field. “Everything changes rapidly, and there’s always new stuff,” she says. “Computer science is very broad, so it would be impossible to keep up with every aspect of every field.
It makes more sense to gain expertise in specific areas.” Weirich has been accumulating expertise in statically typed programming languages like Haskell for over twenty years. She continues to do so, and students from all corners of the University, from freshmen to doctoral candidates, benefit tremendously. This semester, Weirich is teaching CIS 552: Advanced Programming to graduate students and select undergraduates. “In Advanced Programming, I demonstrate ideas that are most expressible in the Haskell language,” Weirich says. “I take ideas from my research and get to teach them to people who want to become software developers. This gives them not only a new way to develop code, but also a new perspective on programming.” In CIS 120: Programming Languages and Techniques, which Weirich will teach in spring 2020, she introduces freshmen to computer science through program design. She says she enjoys teaching this course, partly because she sees students progress from battling the difficult content to understanding it. “Overall, undergraduates recognize that so many different fields now rely on computation,” she says. “There’s a big distribution in skill level and understanding, so throughout the semester, it’s rewarding to see that switch to understanding at different points for different students.”
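To make the date example above concrete, here is a minimal sketch in Haskell of how a more expressive type can rule out a value like February 29, 2019. This is not Weirich's research code, and the names (Date, mkDate, daysIn) are illustrative assumptions; it uses a smart constructor that checks the rule when a Date is built, whereas the richer type systems Weirich studies can move this kind of check to compile time.

-- A sketch only: the Date constructor is hidden, so the sole way to build a
-- Date is mkDate, which rejects February 29 in non-leap years.
module Date (Date, mkDate) where

data Month = Jan | Feb | Mar | Apr | May | Jun
           | Jul | Aug | Sep | Oct | Nov | Dec
  deriving (Show, Eq, Enum, Bounded)

data Date = Date { month :: Month, day :: Int, year :: Int }
  deriving Show

isLeapYear :: Int -> Bool
isLeapYear y = (y `mod` 4 == 0 && y `mod` 100 /= 0) || y `mod` 400 == 0

daysIn :: Month -> Int -> Int
daysIn Feb y | isLeapYear y = 29
             | otherwise    = 28
daysIn m _
  | m `elem` [Apr, Jun, Sep, Nov] = 30
  | otherwise                     = 31

-- Smart constructor: impossible dates simply cannot be created.
mkDate :: Month -> Int -> Int -> Maybe Date
mkDate m d y
  | d >= 1 && d <= daysIn m y = Just (Date m d y)
  | otherwise                 = Nothing

-- mkDate Feb 29 2019 == Nothing (2019 is not a leap year)
-- mkDate Feb 29 2020 builds a valid Date (2020 is a leap year)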
https://medium.com/penn-engineering/correct-code-f41cce278ae3
['Penn Engineering']
2020-06-04 19:19:44.622000+00:00
['Coding', 'Computer Science', 'Programming Languages', 'Engineering', 'Science']
Title Correct CodeContent Correct Code Stephanie Weirich Designs Tools Safer World Stephanie Weirich Jacob WilliamsonRea New car packed helpful technology Downwardfacing camera help driver stay within lane adaptive cruise control brake accelerate vehicle based drivers’ speed Likewise bank use encryption software change banking information code bank use read bank software even analyzes financial market make investment feature based software system rely 100 million line code separate program component system technology evolves software behind system need keep Stephanie Weirich ENIAC President’s Distinguished Professor Computer Information Science aim make software system reliable maintainable secure research improves tool help programmer determine correctness code applicable broad scope software Specifically Weirich research improves Haskell programming language place lot emphasis correctness thanks basis logic mathematical theory “People might realize much computational power underlies society” Weirich say “Cars example posse strong correctness requirement become reliant computation bank mess code cause disaster financial system security correctness program important” hacker go software behind lesscorrect thus lessprotected component car brake result could dangerous devastating could hacker gain control individual system interconnected whole program different component could prone error well Similarly driver using lanekeeping assist adaptive cruise control highway bug lesscorrect braking system might tell adaptive cruise control car braking fact isn’t could deadly “Automation everywhere Cars today computer steering wheel instead keyboard” say Antal SpectorZabusky one Weirich’s doctoral student “It’s important computer reliable possible everything function correctly along every interlocking system prevent software crashing ward hackers” TRAVELING ACROSS LANGUAGES Programs embedded system like found car typically written programming language called “C” Programmers make sure software use data correctly combining relevant variable classification known “data types” Types allow programmer assign rule different component computer program Weirich focus Haskell us improve type system language lead even extensive correctness programmer She’s making type expressive result programmer make better use type system help develop correct code example could represent date February 29 2019 using three integer 2 29 2019 However nonexpressive “integer” type capture relationship number expressive type used storing date would flag value invalid encoding fact February 28 day nonleap year tool system directly usable across different language idea example Weirich say Mozilla’s Rust language new programming language similar C draw research type system type system research Haskell community Wherever they’re implemented expressive type system check complex intricate relationship component program contrast le expressive type system might able detect relationship violated program compiled resulting error incorrect behavior runtime Stronger type better system verification software allow programmer ensure they’re writing code correctly Weirich also worked SpectorZabusky improve Haskell’s compiler turn Haskell language language used computer “Instead getting rid bug afterward get rid first place” Weirich say “The idea since you’re ruling bug beginning defining type might shortening development time Also don’t implement wrong program redo program you’re shortening maintenance time compiler help figure part code changed” 
DEEPSPEC Many professor student Department Computer Information Science collaborate group called Programming Languages Penn PLClub includes Weirich SpectorZabusky working project called DeepSpec National Science Foundation flagship program formally known “Expeditions Computing Science Deep Specification” DeepSpec project collaboration Penn MIT Princeton Yale “DeepSpec examining question really take specify software correctly” say SpectorZabusky “We want specify software used real world” specify software fundamental component ensuring program correct possible Specifications range intensity way simple specification ensuring application won’t crash used deep specification could include ensuring complex numerical simulation computes correctly Weirich’s research directly informs DeepSpec project particularly work SpectorZabusky verify Haskell compiler group aim develop computer system verification entire computer system includes operating system hardware code every component take correctness property step deeper type resulting higher degree confidence system Professor Stephanie Weirich lead spirited discussion CIS 552 Advanced Programming student use pair programming work exercise based topic week COMPLEXITY CS Computer science’s complexity originally attracted Weirich field “Everything change rapidly there’s always new stuff” say “Computer science broad would impossible keep every aspect every field make sense gain expertise specific areas” Weirich accumulating expertise statically typed programming language like Haskell twenty year continues student corner University freshman doctoral candidate benefit tremendously semester Weirich teaching CIS 552 Advanced Programming graduate student select undergraduate “In Advanced Programming demonstrate idea expressible Haskell language” Weirich say “I take idea research get teach people want become software developer give new way develop code also new perspective programming” CIS 120 Programming Languages Techniques Weirich teach spring 2020 introduces freshman computer science program design say enjoys teaching course partly see student progress battling difficult content understanding “Overall undergraduate recognize many different field rely computation” say “There’s big distribution skill level understanding throughout semester it’s rewarding see switch understanding different point different students”Tags Coding Computer Science Programming Languages Engineering Science
3,965
Deploying Static Websites To AWS S3 + CloudFront + Route53 Using The TypeScript AWS CDK
Deploying Static Websites To AWS S3 + CloudFront + Route53 Using The TypeScript AWS CDK Dennis O'Keeffe Nov 4 · 4 min read In today’s post, we’re going to walk through a step-by-step deployment of a static website to an S3 bucket that has CloudFront set up as the global CDN. The post is written using the AWS TypeScript CDK. This example is used as a deployment for a static export of a NextJS 10 website. Find the blog post on how to do that here. That being said, this post is aimed at pushing any HTML to S3 to serve as a static website. I simply use the NextJS content to demo the final product and the changes in steps required to get it done. Getting Started We need to set up a new npm project and install the prerequisites. We’ll also create a stacks directory to house our S3 stack and update it to take some custom properties. Updating cdk.json Add the following to the cdk.json file: Setting up context Add the following context values to the cdk.json file: A guide to getting your account ID can be found on the AWS website, but if you are familiar with the AWS CLI then you can use the following. Ensure that you set the account to be a string with the number returned. For more information on context, see the AWS docs. Updating the TypeScript Configuration File In tsconfig.json, add the following: This is a basic TypeScript configuration for the CDK to compile the TypeScript code to JavaScript. Handling the static site stack Open up stacks/s3-static-site-with-cloudfront/index.ts and add the stack definition (a sketch of it appears below, just before the root index.ts example). It was adjusted from the AWS CDK Example to convert things to run as a stack as opposed to a construct. To explain what is happening here: We have an interface StaticSiteProps which allows us to pass an object of arguments domainName and siteSubDomain which will allow us to demo an example. If I were to pass domainName as dennisokeeffe.com and siteSubDomain as s3-cdk-deployment-example then you would expect the website to be available at s3-cdk-deployment-example.dennisokeeffe.com. This is assigned as the variable siteDomain within the class. An ARN certificate certificateArn is created to enable us to use HTTPS. A new CloudFront distribution is created and assigned to distribution. The certificateArn is used to configure the ACM Certificate Reference, and the siteDomain is used here as the name. A new Alias Record is created for our siteDomain value and has the target set to be the new CloudFront Distribution. Finally, we deploy assets from a source ./site-contents, which expects you to have your code source in that folder relative to the stacks folder. In our case, this will not be what we want, and that value will be changed. The deployment also invalidates the objects on the CDN. This may or may not be what you want depending on how your cache-busting mechanisms work. If you have hashed assets and no-cache or max-age=0 for your index.html file (which you should), then you can switch this off. Invalidation costs money. In my case, I am going to adjust the stack code to import path and change the s3deploy.Source.asset('./site-contents') value to become s3deploy.Source.asset(path.resolve(__dirname, '../../../next-10-static-export/out')) (which points to my output directory with the static HTML build assets). This relates to my corresponding blog post on exporting NextJS 10 static websites directly. Note that you will need to add import path = require('path') to the top and install @types/node. Using the StaticSite Stack Back at the root directory in index.ts, let's import the stack and put it to use.
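The post's original listing for this file is not shown above, so here is a hedged sketch of what stacks/s3-static-site-with-cloudfront/index.ts could look like, pieced together from the explanation and from the AWS CDK static-site example the post says it adapted. The construct IDs ('SiteBucket', 'SiteCertificate', 'SiteDistribution', 'SiteAliasRecord', 'DeployWithInvalidation'), the choice to extend cdk.StackProps, and the CDK v1 '@aws-cdk/*' package names are assumptions on my part rather than the author's exact code.

import cdk = require('@aws-cdk/core');
import s3 = require('@aws-cdk/aws-s3');
import s3deploy = require('@aws-cdk/aws-s3-deployment');
import acm = require('@aws-cdk/aws-certificatemanager');
import cloudfront = require('@aws-cdk/aws-cloudfront');
import route53 = require('@aws-cdk/aws-route53');
import targets = require('@aws-cdk/aws-route53-targets');
import path = require('path');

export interface StaticSiteProps extends cdk.StackProps {
  domainName: string;
  siteSubDomain: string;
}

export class StaticSite extends cdk.Stack {
  constructor(parent: cdk.Construct, name: string, props: StaticSiteProps) {
    super(parent, name, props);

    // e.g. s3-cdk-deployment-example.dennisokeeffe.com
    const siteDomain = props.siteSubDomain + '.' + props.domainName;

    // Existing hosted zone for the apex domain (requires account/region in props.env).
    const zone = route53.HostedZone.fromLookup(this, 'Zone', {
      domainName: props.domainName,
    });

    // Bucket configured for static website hosting.
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      bucketName: siteDomain,
      websiteIndexDocument: 'index.html',
      publicReadAccess: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });

    // ACM certificate so the site can be served over HTTPS.
    // CloudFront only accepts certificates issued in us-east-1.
    const certificateArn = new acm.DnsValidatedCertificate(this, 'SiteCertificate', {
      domainName: siteDomain,
      hostedZone: zone,
      region: 'us-east-1',
    }).certificateArn;

    // CloudFront distribution in front of the bucket; the certificate is used as
    // the ACM certificate reference and siteDomain as the alias name.
    const distribution = new cloudfront.CloudFrontWebDistribution(this, 'SiteDistribution', {
      aliasConfiguration: {
        acmCertRef: certificateArn,
        names: [siteDomain],
      },
      originConfigs: [
        {
          customOriginSource: {
            domainName: siteBucket.bucketWebsiteDomainName,
            originProtocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
          },
          behaviors: [{ isDefaultBehavior: true }],
        },
      ],
    });

    // Alias record pointing siteDomain at the CloudFront distribution.
    new route53.ARecord(this, 'SiteAliasRecord', {
      recordName: siteDomain,
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
      zone,
    });

    // Upload the static build output and invalidate the cached objects on the CDN.
    new s3deploy.BucketDeployment(this, 'DeployWithInvalidation', {
      sources: [s3deploy.Source.asset(path.resolve(__dirname, '../../../next-10-static-export/out'))],
      destinationBucket: siteBucket,
      distribution,
      distributionPaths: ['/*'],
    });
  }
}

A sketch of the root index.ts that uses the stack follows; the stack name 'nextjs-static-site' and the context keys ('domain', 'subdomain', 'accountId') are assumptions and should match whatever you actually placed in cdk.json.

import cdk = require('@aws-cdk/core');
import { StaticSite } from './stacks/s3-static-site-with-cloudfront';

const app = new cdk.App();

new StaticSite(app, 'nextjs-static-site', {
  // Hypothetical context keys; align them with your cdk.json "context" block.
  env: {
    account: app.node.tryGetContext('accountId'),
    region: 'us-east-1',
  },
  domainName: app.node.tryGetContext('domain'),
  siteSubDomain: app.node.tryGetContext('subdomain'),
});

app.synth();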
In the above, we simply import the stack, create a new app with the cdk API, then pass that app to the new instance of the StaticSite . If you recall, the constructor for the StaticSite reads constructor(parent: Construct, name: string, props: StaticSiteProps) and so it expects three arguments. The CDK app. The “name” or identifier for the stack. Props that adhere to our StaticSiteProps , so in our case an object that passes the domainName and siteSubDomain . Updating package.json Before deployment, let’s adjust package.json for some scripts to help with the deployment. Now we are ready to roll. Deploying our site Note: you must have your static folder from another project ready for this to work. Please refer to my post on a static site export of NextJS 10 if you would like to follow what I am doing here. To deploy our site, we need to transpile the TypeScript to JavaScript, then run the CDK synth and deploy commands. Note: you’ll need to make sure that your AWS credentials are configured for this to work. I personally use aws-vault. You’ll need to accept the new resources template generated before the deployment will commence. In my particular case, I used the NextJS static site example given from my post on Exporting Static NextJS 10 Websites You can see the final, live deploy at https://nextjs-10-static-example.dennisokeeffe.com. Resources Image credit: Ignat Kushanrev Originally posted on my blog.
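One possible shape for the package.json scripts and deploy flow described above (the script names and exact commands are guesses, not the author's):

{
  "scripts": {
    "build": "tsc",
    "synth": "npm run build && cdk synth",
    "deploy": "npm run build && cdk synth && cdk deploy"
  }
}

With something like this in place, npm run deploy transpiles the TypeScript to JavaScript and then runs the CDK synth and deploy commands, assuming AWS credentials are available in the shell (for example via aws-vault).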
https://medium.com/javascript-in-plain-english/deploying-static-websites-to-aws-s3-cloudfront-route53-using-the-typescript-aws-cdk-8ae66774d1b
["Dennis O'Keeffe"]
2020-11-04 07:05:20.616000+00:00
['S3', 'Nextjs', 'JavaScript', 'Typescript', 'AWS']
Title Deploying Static Websites AWS S3 CloudFront Route53 Using TypeScript AWS CDKContent Deploying Static Websites AWS S3 CloudFront Route53 Using TypeScript AWS CDK Dennis OKeeffe Follow Nov 4 · 4 min read today’s post we’re going walk stepbystep deployment static website S3 bucket CloudFront setup global CDN post written using AWS TypeScript CDK example used deployment static export NextJS 10 website Find blog post said post aimed pushing HTML S3 use static website simply use NextJS content demo final product change step required get done Getting Started need set new npm project install prerequisite We’ll also create stack directory house S3 stack update take custom Updating cdkjson Add following cdkjson file Setting context Add following cdkjson file guide getting account ID found AWS website familiar AWS CLI use following Ensure set account string number returned information context see AWS doc Updating TypeScript Configuration File tsconfigjson add following basic TypeScript configuration CDK compile TypeScript configuration JavaScript Handling static site stack Open stackss3staticsitewithcloudfrontindexts add following adjusted AWS CDK Example convert thing run stack opposed construct explain happening interface StaticSiteProps allows pas object argument domainName siteSubDomain allow u demo example push domainName dennisokeeffecom siteSubDomain s3cdkdeploymentexample would expect website available s3cdkdeploymentexampledennisokeeffecom assigned variable siteDomain within class ARN certificate certificateArn created enable u use http new CloudFront distribution created assigned distribution certificateArn used configure ACM Certificate Reference siteDomain used name new Alias Record created siteDomain value target set new CloudFront Distribution Finally deploy asset source sitecontents expects code source folder relative stack folder case want value changed deployment also invalidates object CDN may may want depending cachebusting mechanism work hashed asset nocache maxage0 indexhtml file switch Invalidation cost money case going adjust code import path change s3deploySourceassetsitecontents value become s3deploySourceassetpathresolvedirname next10staticexportout point output directory static HTML build asset relates corresponding blog post exporting NextJS 10 static website directly Note need add import path requirepath top install typesnode Using StaticSite Stack Back root directory indexts let import stack put use simply import stack create new app cdk API pas app new instance StaticSite recall constructor StaticSite read constructorparent Construct name string prop StaticSiteProps expects three argument CDK app “name” identifier stack Props adhere StaticSiteProps case object pass domainName siteSubDomain Updating packagejson deployment let’s adjust packagejson script help deployment ready roll Deploying site Note must static folder another project ready work Please refer post static site export NextJS 10 would like follow deploy site need transpile TypeScript JavaScript run CDK synth deploy command Note you’ll need make sure AWS credential configured work personally use awsvault You’ll need accept new resource template generated deployment commence particular case used NextJS static site example given post Exporting Static NextJS 10 Websites see final live deploy httpsnextjs10staticexampledennisokeeffecom Resources Image credit Ignat Kushanrev Originally posted blogTags S3 Nextjs JavaScript Typescript AWS
3,966
Building a data lake on AWS using Redshift Spectrum
Building a data lake on AWS using Redshift Spectrum Engineering@ZenOfAI Mar 11 · 5 min read In one of our earlier posts, we talked about setting up a data lake using AWS LakeFormation. Once the data lake is set up, we can use Amazon Athena to query data. Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage. With Athena, there is no need for complex ETL jobs to prepare data for analysis. Today, we will explore querying the data from a data lake in S3 using Redshift Spectrum. This use case makes sense for those organizations that already have significant exposure to using Redshift as their primary data warehouse. Amazon Redshift Spectrum is used to efficiently query and retrieve structured and semi-structured data from files in Amazon S3 without having to load the data into Amazon Redshift tables. Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks, such as predicate filtering and aggregation, down to the Redshift Spectrum layer. How is Amazon Athena different from Amazon Redshift Spectrum? Redshift Spectrum needs an Amazon Redshift cluster and a SQL client that’s connected to the cluster so that we can execute SQL commands, but Athena is serverless. In Redshift Spectrum the external tables are read-only; it does not support insert queries. Athena supports insert queries, which write records into S3. Amazon Redshift cluster To use Redshift Spectrum, you need an Amazon Redshift cluster and a SQL client that’s connected to your cluster so that you can execute SQL commands. The cluster and the data files in Amazon S3 must be in the same AWS Region. The Redshift cluster needs authorization to access the external data catalog in AWS Glue or Amazon Athena and the data files in Amazon S3. Let’s kick off the steps required to get the Redshift cluster going. Create an IAM Role for Amazon Redshift Open the IAM console and choose Roles, then choose Create role. Choose AWS service, and then select Redshift. Under Select your use case, select Redshift — Customizable and then choose Next: Permissions. The Attach permissions policy page appears. Attach the following policies: AmazonS3FullAccess, AWSGlueConsoleFullAccess and AmazonAthenaFullAccess. For Role name, enter a name for your role, in this case redshift-spectrum-role. Choose Create role. Create a Sample Amazon Redshift Cluster Open the Amazon Redshift console. Choose the AWS Region; the cluster and the data files in Amazon S3 must be in the same AWS Region. Select CLUSTERS and choose Create cluster. Cluster configuration: based on the size of the data and the type of data (compressed/uncompressed), select the nodes. Amazon Redshift provides an option to calculate the best configuration of a cluster based on the requirements; choose Calculate the best configuration for your needs. In this case, use dc2.large with 2 nodes. Specify the cluster details. Cluster identifier: the name of the cluster. Database port: port number 5439, which is the default. Master user name: the master user of the DB instance. Master user password: specify the password.
In the Cluster permissions section, select Available IAM roles and choose the IAM role that was created earlier, redshift-spectrum-role, then choose Add IAM role. Select Create cluster and wait till the status is Available. Connect to Database Open the Amazon Redshift console and choose EDITOR. The database name is dev. Create an External Schema and an External Table External tables must be created in an external schema. To create an external schema, run the following command. Please replace the iam_role with the role that was created earlier. create external schema spectrum from data catalog database 'spectrumdb' iam_role 'arn:aws:iam::xxxxxxxxxxxx:role/redshift-spectrum-role' create external database if not exists; Copy data using the following command. The data used here is provided by AWS. Configure the aws cli on your machine and run this command: aws s3 cp s3://awssampledbuswest2/tickit/spectrum/sales/ s3://bucket-name/data/source/ --recursive To create an external table, please run the following command. The table is created in the spectrum schema. create external table spectrum.table_name( salesid integer, listid integer, sellerid integer, buyerid integer, eventid integer, dateid smallint, qtysold smallint, saletime timestamp) row format delimited fields terminated by '\t' stored as textfile location 's3://bucket-name/copied-prefix/'; Now the table is available in Redshift Spectrum. We can analyze the data using SQL queries like so: SELECT * FROM spectrum.rs_table LIMIT 10; Create a Table in Athena using Glue Crawler In case you are just starting out on the AWS Glue crawler, I have explained how to create one from scratch in one of my earlier articles. In this case, I created the rs_table in the spectrumdb database. Comparison between Amazon Redshift Spectrum and Amazon Athena I ran some basic queries in both Athena and Redshift Spectrum. The query elapsed time comparison is as follows. It takes about 3 seconds on Athena compared to about 16 seconds on Redshift Spectrum. The idea behind this post was to get you up and running with a basic data lake on S3 that is queryable on Redshift Spectrum. I hope it was useful. This story is authored by PV Subbareddy. He is a Big Data Engineer specializing in AWS Big Data Services and the Apache Spark Ecosystem.
https://medium.com/zenofai/building-a-data-lake-on-aws-using-redshift-spectrum-6e306089aa04
['Engineering Zenofai']
2020-03-17 11:15:39.768000+00:00
['Software Development', 'Redshift', 'AWS', 'Cloud Computing', 'Athena']
Title Building data lake AWS using Redshift SpectrumContent Building data lake AWS using Redshift Spectrum EngineeringZenOfAI Follow Mar 11 · 5 min read one earlier post talked setting data lake using AWS LakeFormation data lake setup use Amazon Athena query data Athena interactive query service make easy analyze data Amazon S3 using standard SQL Athena serverless infrastructure manage Athena need complex ETL job prepare data analysis Today explore querying data data lake S3 using Redshift Spectrum use case make sense organization already significant exposure using Redshift primary data warehouse Amazon Redshift Spectrum used efficiently query retrieve structured semistructured data file Amazon S3 without load data Amazon Redshift table Amazon Redshift Spectrum resides dedicated Amazon Redshift server independent cluster Redshift Spectrum push many computeintensive task predicate filtering aggregation Redshift Spectrum layer Amazon Athena different Amazon Redshift Spectrum Redshift Spectrum need Amazon Redshift cluster SQL client that’s connected cluster execute SQL command Athena serverless Redshift Spectrum external table readonly support insert query Athena support insert query insert record S3 Amazon Redshift cluster use Redshift Spectrum need Amazon Redshift cluster SQL client that’s connected cluster execute SQL command cluster data file Amazon S3 must AWS Region Redshift cluster need authorization access external data catalog AWS Glue Amazon Athena data file Amazon S3 Let’s kick step required get Redshift cluster going Create IAM Role Amazon Redshift Open IAM console choose Roles choose Create role Choose AWS service select Redshift Select use case select Redshift — Customizable choose Next Permissions Attach permission policy page appears Attach following policy AmazonS3FullAccess AWSGlueConsoleFullAccess AmazonAthenaFullAccess Role name enter name role case redshiftspectrumrole Choose Create role Create Sample Amazon Redshift Cluster Open Amazon Redshift console Choose AWS Region cluster data file Amazon S3 must AWS Region cluster data file Amazon S3 must AWS Region Select CLUSTERS choose Create cluster Cluster Configuration choose Based size data type datacompresseduncompressed select node Amazon Redshift provides option calculate best configuration cluster based requirement choose Calculate best configuration need case use dc2large 2 node Specify Cluster detail Cluster identifier Nameofthecluster Nameofthecluster Database port Port number 5439 default Port number 5439 default Master user name Master user DB instance Master user DB instance Master user password Specify password Cluster permission section select Available IAM role choose IAM role created earlier redshiftspectrumrole choose Add IAM role Select Create cluster wait till status Available Connect Database Open Amazon Redshift console choose EDITOR Database name dev Create External Schema External Table External table must created external schema create external schema run following command Please replace iamrole role created earlier create external schema spectrum data catalog database spectrumdb iamrole arnawsiamxxxxxxxxxxxxroleredshiftspectrumrole create external database exists Copy data using following command data used provided AWS Configure aws cli machine run command aws s3 cp s3awssampledbuswest2tickitspectrumsales s3bucketnamedatasource recursive create external table please run following command table created spectrum create external table spectrumtablename salesid integer listid table available Redshift 
Spectrum analyze data using SQL query like create external table spectrumtablename salesid integer listid integer sellerid integer buyerid integer eventid integer dateid smallint qtysold smallint saletime timestamp row format delimited field terminated stored textfile location s3bucketnamecopiedprefix table available Redshift Spectrum analyze data using SQL query like SELECT spectrumrstable LIMIT 10 Create Table Athena using Glue Crawler case starting AWS Glue crawler explained create one scratch one earlier article case created rstable spectrumdb database Comparison Amazon Redshift Spectrum Amazon Athena ran basic query Athena Redshift Spectrum well query elapsed time comparison follows take 3 second Athena compared 16 second Redshift Spectrum idea behind post get running basic data lake S3 queryable Redshift Spectrum hope useful story authored PV Subbareddy Big Data Engineer specializing AWS Big Data Services Apache Spark EcosystemTags Software Development Redshift AWS Cloud Computing Athena
3,967
Complete Introduction to PySpark-Part 2
Complete Introduction to PySpark-Part 2 Exploratory Data Analysis using PySpark Photo by Markus Spiske on Unsplash Exploratory Data Analysis Exploratory Data Analysis is the most crucial first step whenever we are working with a dataset. It allows us to analyze the data and explore initial findings, such as how many rows and columns there are, what the different columns contain, and so on. EDA is an approach in which we summarize the main characteristics of the data using different methods, mainly visualization. Let’s start EDA using PySpark. If you have not yet installed PySpark, kindly visit the link below and get it configured on your local machine. Importing Required Libraries and Dataset Once we have configured PySpark on our machine, we can use a Jupyter Notebook to start exploring it. In this article, we will perform EDA operations using PySpark on the Boston dataset, which can be downloaded from Kaggle. Let’s start by importing the required libraries and loading the dataset. #importing Required Libraries import findspark findspark.init() import pyspark # only run after findspark.init() from pyspark.sql import SparkSession from pyspark.sql import SQLContext #Creating a pyspark session spark = SparkSession.builder.getOrCreate() #Importing Dataset df = spark.read.csv('Boston.csv', inferSchema=True, header=True) df.show(5) Boston Dataset (Source: By Author) Starting the EDA There are different functions defined in PySpark that we can use for Exploratory Data Analysis; let us explore some of these functions and see how useful they are. Schema The schema is similar to the info() function of a pandas DataFrame. It shows us information about all the columns in the dataset. df.printSchema() Schema (Source: By Author) 2. Describe The describe function is used to display the statistical properties of all the columns in the dataset. It shows us values like the mean, standard deviation, minimum and maximum for the columns. In PySpark, we need to call the show() function every time we need to display the information; it works just like the head() function in pandas. df.describe().show() Statistical Properties (Source: By Author) Similarly, we can use the describe function column-wise as well. df.describe('AGE').show() Describe Column Wise (Source: By Author) 3. Filter The filter function is used to filter the data using different user-defined conditions. Let us see how we can use it. #Filtering data with Indus=7.07 df.filter(df.INDUS==7.07).show() Filter1 (Source: By Author) Similarly, we can use multiple filters in a single line of code. df.filter((df.INDUS==7.07) & (df.MEDV=='High')).show() Filter2 (Source: By Author) 4. GroupBy and Sorting PySpark’s inbuilt functions can be used to group the data according to user requirements and also to sort the data as required. df.groupBy('MEDV').count().show() GroupBy (Source: By Author) df.sort((df.TAX).desc()).show(5) Sorting (Source: By Author) 5. Select & Distinct The select function is used to select particular columns, while the distinct function can be used to select the distinct values of a column. df.select('MEDV').distinct().count() Select and Distinct (Source: By Author) 6. WithColumn The withColumn function is used to create a new column by defining the name of the new column and the expression that produces its values. #Creating New column with values from Age column divided by 2 df.withColumn('HALF_AGE', df.AGE/2.0).
select('AGE','HALF_AGE').show(5) WithColumn (Source: By Author) In this article, we covered some major functions defined in PySpark which we can use for Exploratory Data Analysis and for understanding the data we are working on. Go ahead, try these functions with different datasets, and if you face any problems, let me know in the response section. Before You Go Thanks for reading! If you want to get in touch with me, feel free to reach me at [email protected] or via my LinkedIn profile. You can view my GitHub profile for different data science projects and package tutorials. Also, feel free to explore my profile and read the different articles I have written related to Data Science.
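As a small addendum to the functions covered above, here is a hedged sketch that chains several of them together on the same Boston dataset. The AGE > 50 threshold and the aggregated columns are arbitrary choices for illustration and are not part of the original tutorial.

#Combining filter, groupBy, aggregation and sorting in one chain
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv('Boston.csv', inferSchema=True, header=True)

#Average TAX and row count per MEDV group for older properties
summary = (
    df.filter(df.AGE > 50)
      .groupBy('MEDV')
      .agg(F.count('*').alias('rows'),
           F.round(F.avg('TAX'), 2).alias('avg_tax'))
      .orderBy(F.desc('rows'))
)
summary.show(5)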
https://towardsdatascience.com/complete-introduction-to-pyspark-part-2-135d2f2c13e2
['Himanshu Sharma']
2020-11-13 14:01:44.049000+00:00
['Exploratory Data Analysis', 'Data Analysis', 'Pyspark', 'Python', 'Data Science']
Title Complete Introduction PySparkPart 2Content Complete Introduction PySparkPart 2 Exploratory Data Analysis using PySpark Photo Markus Spiske Unsplash Exploratory Data Analysis Exploratory Data Analysis crucial part begin whenever working dataset allows u analyze data let u explore initial finding data like many row column different column etc EDA approach summarize main characteristic data using different method mainly visualization Let’s start EDA using PySpark yet installed PySpark kindly visit link get configured Local Machine Importing Required Libraries Dataset configured PySpark machine use Jupyter Notebook start exploring article perform EDA operation using PySpark using Boston Dataset downloaded Kaggle Let’s start importing required library loading dataset importing Required Libraries import findspark findsparkinit import pyspark run findsparkinit pysparksql import SparkSession pysparksql import SQLContext Creating pyspark session spark SparkSessionbuildergetOrCreate Importing Dataset df sparkreadcsvBostoncsv inferSchemaTrue headerTrue dfshow5 Boston DatasetSource Author Starting EDA different function defined pyspark use Exploratory Data Analysis let u explore function see useful Schema Schema similar Info function panda dataframe show u information column dataset dfprintSchema SchemaSource Author 2 Describe Describe function used display statistical property column dataset show u value like Mean Median etc column PySpark need call show function every time need display information work like head function python dfdescribeshow Statistical PropertiesSource Author Similarly use describe function columnwise also dfdescribeAGEshow Describe Column WiseSource Author 3 Filter filter function used filter data using different userdefined condition Let u see use accordingly Filtering data Indus707 dffilterdfINDUS707show Filter1Source Author Similarly use multiple filter single line code dffilterdfINDUS707 dfMEDVHighshow Filter2Source Author 4 GroupBy Sorting PySpark inbuilt function used Group data according user requirement also sort data required dfgroupByMEDVcountshow GroupBySource Author dfsortdfTAXdescshow5 SortingSource Author 5 Select Distinct select function used select different column distinct function used select distinct value column dfselectMEDVdistinctcount Select DistinctSource Author 6 WithColumn WithColumn function used create new column providing certain condition new column defining name new column Creating New column value Age column divided 2 dfwithColumnHALFAGE dfAGE20 selectAGEHALFAGE show5 WithColumnSource Author article covered major function defined PySpark use Exploratory Data Analysis Understanding data working Go ahead try function different datasets face problem let know response section Go Thanks reading want get touch feel free reach hmix13gmailcom LinkedIn Profile view Github profile different data science project package tutorial Also feel free explore profile read different article written related Data ScienceTags Exploratory Data Analysis Data Analysis Pyspark Python Data Science
3,968
Getting Your Data Ready for AI
Editor’s Note: Preparing data is a crucial and unavoidable part of any data scientist’s job. In this post writer Kate Shoup takes a closer look at the data bottleneck that affects so many projects, and how to address it. Most people enter the field of data science because “they love the challenge of developing algorithms and building machine learning models that turn previously unusable data into valuable insight,” writes IBM’s Sonali Surange in a 2018 blog post. But these days, Surange notes, “most data scientists are spending up to 80 percent of their time sourcing and preparing data, leaving them very little time to focus on the more complex, interesting and valuable parts of their job.” (There’s that 80% figure again!) This bottleneck in the data-wrangling phase exists for various reasons. One is the sheer volume of data that companies collect — complicated by limited means by which to locate that data later. As organizations “focus on data capture, storage, and processing,” write Limburn and Taylor, they “have too often overlooked concerns such as data findability, classification and governance.” In this scenario, “data goes in, but there’s no safe, reliable or easy way to find out what you’re looking for and get it out again.” Unfortunately, observes Jarmul, the burden of sifting through this so-called data lake often falls on the data science team. Another reason for the data-wrangling bottleneck is the persistence of data silos. Data silos, writes AI expert Edd Wilder-James in a 2016 article for Harvard Business Review, are “isolated islands of data” that make it “prohibitively costly to extract data and put it to other uses.” Some data silos are the result of software incompatibilities — for example, when data for one department is stored on one system, and data for another department is stored on a different and incompatible system. Reconciling and integrating this data can be costly. Other data silos exist for political reasons. “Knowledge is power,” Wilder-James explains, “and groups within an organization become suspicious of others wanting to use their data.” This sense of proprietorship can undermine the interests of the organization as a whole. Finally, silos might develop because of concerns about data governance. For example, suppose that you have a dataset that might be of value to others in your organization but is sensitive in nature. Unless you know exactly who will use that data and for what, you’re more likely to cordon it off than to open it up to potential misuse. In addition to prolonging the data-wrangling phase, the existence of data lakes and data silos can severely hamper your ability to locate the best possible data for an AI project. This will likely affect the quality of your model and, by extension, the quality of the broader organizational effort that your project is meant to support. For example, suppose that your company’s broader organizational effort is to improve customer engagement, and as part of that effort it has enlisted you to design a chatbot. “If you’ve built a model to power a chatbot and it’s working against data that’s not as good as the data your competitor is able to use in their chatbot,” says Limburn, “then their chatbot — and their customer engagement — is going to be better.” Solutions One way to ease the data-wrangling bottleneck is to try to address it up front. Katharine Jarmul champions this approach. 
“Suppose you have an application,” she explains, “and you’ve decided that you want to use activity on your application to figure out how to build a useful predictive model later on to predict what the user wants to do next. If you already know you’re going to collect this data, and you already know what you might use it for, you could work with your developers to figure out how you can create transformations as you ingest the data.” Jarmul calls this prescriptive data science, which stands in contrast to the much more common approach: reactionary data science. Maybe it’s too late in the game for that. In that case, there are any number of data catalogs to help data scientists access and prepare data. A data catalog centralizes information about available data in one location, enabling users to access it in a self-service manner. “A good data catalog,” writes analytics expert Jen Underwood in a 2017 blog post, “serves as a searchable business glossary of data sources and common data definitions gathered from automated data discovery, classification, and cross-data source entity mapping.” According to a 2017 article by Gartner, “demand for data catalogs is soaring as organizations struggle to inventory distributed data assets to facilitate data monetization and conform to regulations.” Examples of data catalogs include the following: Microsoft Azure Data Catalog Alation Catalog Collibra Catalog Smart Data Catalog by Waterline Watson Knowledge Catalog In addition to data catalogs to surface data for AI projects, there are several tools to facilitate other data-science tasks, including connecting to data sources to access data, labeling data, and transforming data. These include the following: Database query tools Data scientists use tools such as SQL, Apache Hive, Apache Pig, Apache Drill, and Presto to access and, in some cases, transform data. Programming languages and software libraries To access, label, and transform data, data scientists employ tools like R, Python, Spark, Scala, and Pandas. Notebooks These programming environments, which include Jupyter, IPython, knitr, RStudio, and R Markdown, also aid data scientists in accessing, labeling, and transforming data.
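To make the idea of prescriptive data science a little more concrete, here is a small hypothetical sketch of a transformation applied at ingest time rather than months later during analysis. The event fields and cleaning rules are invented for illustration; they are not from the article or from Jarmul.

import pandas as pd

def clean_event(event: dict) -> dict:
    """Normalize a raw application event as it is ingested (hypothetical schema)."""
    return {
        "user_id": str(event["user_id"]),
        "action": event["action"].strip().lower(),
        # One canonical timestamp format, so later models agree on time.
        "timestamp": pd.Timestamp(event["timestamp"], tz="UTC").isoformat(),
    }

# Applying the same rule to every incoming record keeps the eventual training
# data consistent, instead of reconciling formats during analysis.
raw = [{"user_id": 42, "action": " Login ", "timestamp": "2020-09-01 12:00:00"}]
clean = pd.DataFrame([clean_event(e) for e in raw])
print(clean)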
https://medium.com/oreillymedia/getting-your-data-ready-for-ai-efdbdba6d0cf
["O'Reilly Media"]
2020-09-24 12:49:30.657000+00:00
['Artificial Intelligence', 'AI', 'Data Science', 'Data', 'Data Scientist']
Title Getting Data Ready AIContent Editor’s Note Preparing data crucial unavoidable part data scientist’s job post writer Kate Shoup take closer look data bottleneck affect many project address people enter field data science “they love challenge developing algorithm building machine learning model turn previously unusable data valuable insight” writes IBM’s Sonali Surange 2018 blog post day Surange note “most data scientist spending 80 percent time sourcing preparing data leaving little time focus complex interesting valuable part job” There’s 80 figure bottleneck datawrangling phase exists various reason One sheer volume data company collect — complicated limited mean locate data later organization “focus data capture storage processing” write Limburn Taylor “have often overlooked concern data findability classification governance” scenario “data go there’s safe reliable easy way find you’re looking get again” Unfortunately observes Jarmul burden sifting socalled data lake often fall data science team Another reason datawrangling bottleneck persistence data silo Data silo writes AI expert Edd WilderJames 2016 article Harvard Business Review “isolated island data” make “prohibitively costly extract data put uses” data silo result software incompatibility — example data one department stored one system data another department stored different incompatible system Reconciling integrating data costly data silo exist political reason “Knowledge power” WilderJames explains “and group within organization become suspicious others wanting use data” sense proprietorship undermine interest organization whole Finally silo might develop concern data governance example suppose dataset might value others organization sensitive nature Unless know exactly use data you’re likely cordon open potential misuse addition prolonging datawrangling phase existence data lake data silo severely hamper ability locate best possible data AI project likely affect quality model extension quality broader organizational effort project meant support example suppose company’s broader organizational effort improve customer engagement part effort enlisted design chatbot “If you’ve built model power chatbot it’s working data that’s good data competitor able use chatbot” say Limburn “then chatbot — customer engagement — going better” Solutions One way ease datawrangling bottleneck try address front Katharine Jarmul champion approach “Suppose application” explains “and you’ve decided want use activity application figure build useful predictive model later predict user want next already know you’re going collect data already know might use could work developer figure create transformation ingest data” Jarmul call prescriptive data science stand contrast much common approach reactionary data science Maybe it’s late game case number data catalog help data scientist access prepare data data catalog centralizes information available data one location enabling user access selfservice manner “A good data catalog” writes analytics expert Jen Underwood 2017 blog post “serves searchable business glossary data source common data definition gathered automated data discovery classification crossdata source entity mapping” According 2017 article Gartner “demand data catalog soaring organization struggle inventory distributed data asset facilitate data monetization conform regulations” Examples data catalog include following Microsoft Azure Data Catalog Alation Catalog Collibra Catalog Smart Data Catalog Waterline Watson Knowledge Catalog 
addition data catalog surface data AI project several tool facilitate datascience task including connecting data source access data labeling data transforming data include following Database query tool Data scientist use tool SQL Apache Hive Apache Pig Apache Drill Presto access case transform data Programming language software library access label transform data data scientist employ tool like R Python Spark Scala Pandas Notebooks programming environment include Jupyter IPython knitr RStudio R Markdown also aid data scientist accessing labeling transforming dataTags Artificial Intelligence AI Data Science Data Data Scientist
3,969
Why We Spend Our Brief Lives Indoors, Alone, and Typing
Why We Spend Our Brief Lives Indoors, Alone, and Typing Or, how I justify teaching my students the dying art of writing I worry about what to tell Kate. For most of my students, knowing how to write well will just be an unfair advantage in whatever career they choose — I tell them it’s like being rich, or pretty. But Kate takes art personally: she loves the Dadaists but also thinks they’re a bit cliqueish and silly; in her emails she quotes Rilke and Hurston. She’s one of those students so passionate about writing and literature it makes me feel, briefly, younger to be around her. It also secretly breaks my heart. I once unprofessionally confessed to Kate my misgivings about training her in an art form as archaic as stained glass. She tells me she still thinks about this all the time. I know better than to blame myself for Kate’s career choice; she was already doomed to this vocation when I met her. You recognize these students less by their intrinsic talent than by their seriousness of purpose: they allow themselves no other options; they’re in it for the long haul. She recently took a year off from studying law, politics, and economics to take nothing but courses in literature: “There is something telling me that if I ever do something to save people from the misery that other people have caused them,” she wrote me, “it will be because of what Ibsen teaches me, and not a class on terrorism.” She worries that by obsessively observing and recording she’s missing out on the experience of being alive. I have assured Kate she is more alive than anyone I know. How can I justify luring guileless young people into squandering their passion and talents on a life of letters? But in this dystopian year 2019, with “the newspapers dying like huge moths,” as Ray Bradbury predicted in the ’50s, “Literature” a niche category among all the Book-Shaped Products© on sale, and “the discourse” limited to 280 characters, how can I justify luring guileless young people into squandering their passion and talents on a life of letters? It’s not just that writing is an unmonetizable profession — Kate knows that much already — I worry it’s obsolete. Late in life, the novelist James Salter despaired of the post-literate civilization he saw already arriving: “The new populations will live in hives of concrete on a diet of film, television, and the internet.” Trying to read something like To the Lighthouse with attention spans stunted by stroboscopic overdoses of Instagram/Twitter/Reddit/Imgur might as well be climbing Kilimanjaro. These very words will likely vanish from your head within hours, driven out by the latest presidential obscenity or Mueller revelation, the next episode of Atlanta, a new Maru video. “They speak about the dumbing of America as a foregone thing, already completed,” wrote Michael Herr, “but, duh, it’s a process, and we haven’t seen anything yet.” That was in 1999, long before what people are calling, with the chilling nonchalance of a fait accompli, the “post-truth era.” Consensual reality is as abandoned as Friendster; everyone now gets to curate their own truths like Spotify playlists. You can convincingly Photoshop a Parkland survivor tearing up the Constitution, or CGI “deepfake” footage of Kat Dennings having sex with you. Journalists routinely get death threats, while the two most widely trusted institutions in America are the police and the military, who can be relied upon to obediently massacre us on command. 
I’m still haunted by an essay speculating that, if the President were proved to have committed impeachable crimes, we would face “an epistemological crisis” in this country: What if his supporters simply declined to buy the evidence? Of course people have always claimed that the culture is in decline, that the age of true art is past, that each generation is more illiterate, vulgar, and stupid than the last. But the constancy of this claim throughout history can obscure, from the limited perspective of a single short lifetime, that it may be true. We lack the historical elevation to tell whether this current darkness is just a passing reactionary spasm, like the McCarthy aberration, or part of a longer, more inexorable Gibbonian decline. “I know Writing isn’t dead and I believe it’ll only be once we all are for good,” Kate wrote me. I just hope the latter date is further away than it sometimes seems. But even an apocalypse needs chroniclers. One of my colleagues says she’s writing for that (possibly brief) interval between the end of the internet and our extinction, when our grandchildren may turn to our words to try to understand what happened. I keep remembering Agnolo di Tura writing, during the Black Death: “so many died that all believed it was the end of the world.” He had buried five children by then, and had every reason to believe it was true; still, he wrote it down. Or maybe it’s not civilization that’s in decline; maybe it’s just me. Talking to Kate also makes me feel older, uncomfortably aware of the distance between her searing idealism and my own guttering disillusion. Anyone who makes the mistake of turning their passion into a vocation gets to watch it turn, like gold transmuting into lead, into a job. You start out motivated by pure, childish things: the pleasure of finding something you do well, of telling stories or making jokes. You’re driven by the same fear that drives magnates and despots: the approaching deadline of mortality, the dreadful urgency to make something to prove you were here. These motives gradually get buried under geological layers of bullshit — reputation, recognition, self-image, money — until every airport bookstore becomes a warped hall of mirrors confronting you with your own insecurity, petty jealousies, and resentment. Posterity is no less absurd an illusion than an afterlife. A friend of mine recently forwarded me a cache of letters by Raymond Chandler, in which he ruminates, like one of his own weltschmerzy heroes, on the vanity of literary striving: “Do I wish to win the Nobel Prize? […] I’d have to go to Sweden and dress up and make a speech. Is the Nobel Prize worth all that? Hell, no.” Just as courage is acting despite your fear, faith is acting despite your despair. Why, then, do we do it — spend so much of our brief time alive in this gorgeous world indoors, alone, and typing? After all my worrying about what to tell Kate, it turned out it was up to her to tell me. “Somehow most people are taught that Art is a way to distract from the terror,” she wrote me, “when in fact I think it is the only way to get through it at all.” In other words, all my arguments for writing’s futility are in fact arguments for its necessity. I was never as idealistic as Kate — or rather, I was never as hopeful; my idealism is too fragile, too easily disappointed. What she and I share is that foolish, ineradicable belief in art and the written word: That there is such a thing as truth, and that it matters when it’s spoken, even if no one listens. 
Beliefs so frail and indefensible, so easily debunked, that you’d almost have to call them articles of faith. And faith is like courage: Just as courage is acting despite your fear, faith is acting despite your despair. The last time I saw Kate we stopped, on a whim, at the Cathedral of Saint John the Divine and discovered the “American Poet’s Corner,” a chapel dedicated to writers. We stood searching its floor for the names of our favorites, the patron saints of our chosen vocation: Poe and Twain, Fitzgerald and O’Connor, Cummings and Plath. The quotation from O’Connor reads: “I can, with one eye squinted, take it all as a blessing.” I’d likened writing to stained glass, an anachronism, but stained glass is more than an artifact in itself — it’s a medium, to make the invisible manifest. The sunlight through the cathedral windows cast a warm pastel glow across the flagstones, lending to those graven words the animating blush of illumination. A few days ago Kate wrote to let me know she’d been accepted to journalism school, with a full scholarship. She wrote: “Looking forward to it all.”
https://humanparts.medium.com/why-we-spend-our-lives-indoors-alone-typing-e3b1a98e6f45
['Timothy Kreider']
2019-04-15 18:00:04.298000+00:00
['Education', 'Creativity', 'Media', 'Culture', 'Writing']
Title Spend Brief Lives Indoors Alone TypingContent Spend Brief Lives Indoors Alone Typing justify teaching student dying art writing worry tell Kate student knowing write well unfair advantage whatever career choose — tell it’s like rich pretty Kate take art personally love Dadaists also think they’re bit cliqueish silly email quote Rilke Hurston She’s one student passionate writing literature make feel briefly younger around also secretly break heart unprofessionally confessed Kate misgiving training art form archaic stained glass tell still think time know better blame Kate’s career choice already doomed vocation met recognize student le intrinsic talent seriousness purpose allow option they’re long haul recently took year studying law politics economics take nothing course literature “There something telling ever something save people misery people caused them” wrote “it Ibsen teach class terrorism” worry obsessively observing recording she’s missing experience alive assured Kate alive anyone know justify luring guileless young people squandering passion talent life letter dystopian year 2019 “the newspaper dying like huge moths” Ray Bradbury predicted ’50s “Literature” niche category among BookShaped Products© sale “the discourse” limited 280 character justify luring guileless young people squandering passion talent life letter It’s writing unmonetizable profession — Kate know much already — worry it’s obsolete Late life novelist James Salter despaired postliterate civilization saw already arriving “The new population live hive concrete diet film television internet” Trying read something like Lighthouse attention span stunted stroboscopic overdoses InstagramTwitterRedditImgur might well climbing Kilimanjaro word likely vanish head within hour driven latest presidential obscenity Mueller revelation next episode Atlanta new Maru video “They speak dumbing America foregone thing already completed” wrote Michael Herr “but duh it’s process haven’t seen anything yet” 1999 long people calling chilling nonchalance fait accompli “posttruth era” Consensual reality abandoned Friendster everyone get curate truth like Spotify playlist convincingly Photoshop Parkland survivor tearing Constitution CGI “deepfake” footage Kat Dennings sex Journalists routinely get death threat two widely trusted institution America police military relied upon obediently massacre u command I’m still haunted essay speculating President proved committed impeachable crime would face “an epistemological crisis” country supporter simply declined buy evidence course people always claimed culture decline age true art past generation illiterate vulgar stupid last constancy claim throughout history obscure limited perspective single short lifetime may true lack historical elevation tell whether current darkness passing reactionary spasm like McCarthy aberration part longer inexorable Gibbonian decline “I know Writing isn’t dead believe it’ll good” Kate wrote hope latter date away sometimes seems even apocalypse need chronicler One colleague say she’s writing possibly brief interval end internet extinction grandchild may turn word try understand happened keep remembering Agnolo di Tura writing Black Death “so many died believed end world” buried five child every reason believe true still wrote maybe it’s civilization that’s decline maybe it’s Talking Kate also make feel older uncomfortably aware distance searing idealism guttering disillusion Anyone make mistake turning passion vocation get watch turn like gold transmuting lead 
job start motivated pure childish thing pleasure finding something well telling story making joke You’re driven fear drive magnate despot approaching deadline mortality dreadful urgency make something prove motif gradually get buried geological layer bullshit — reputation recognition selfimage money — every airport bookstore becomes warped hall mirror confronting insecurity petty jealousy resentment Posterity le absurd illusion afterlife friend mine recently forwarded cache letter Raymond Chandler ruminates like one weltschmerzy hero vanity literary striving “Do wish win Nobel Prize … I’d go Sweden dress make speech Nobel Prize worth Hell no” courage acting despite fear faith acting despite despair — spend much brief time alive gorgeous world indoors alone typing worrying tell Kate turned tell “Somehow people taught Art way distract terror” wrote “when fact think way get all” word argument writing’s futility fact argument necessity never idealistic Kate — rather never hopeful idealism fragile easily disappointed share foolish ineradicable belief art written word thing truth matter it’s spoken even one listens Beliefs frail indefensible easily debunked you’d almost call article faith faith like courage courage acting despite fear faith acting despite despair last time saw Kate stopped whim Cathedral Saint John Divine discovered “American Poet’s Corner” chapel dedicated writer stood searching floor name favorite patron saint chosen vocation Poe Twain Fitzgerald O’Connor Cummings Plath quotation O’Connor read “I one eye squinted take blessing” I’d likened writing stained glass anachronism stained glass artifact — it’s medium make invisible manifest sunlight cathedral window cast warm pastel glow across flagstone lending graven word animating blush illumination day ago Kate wrote let know she’d accepted journalism school full scholarship wrote “Looking forward all”Tags Education Creativity Media Culture Writing
3,970
An Update to Your Fitbit Could Detect a Covid-19 Symptom
An Update to Your Fitbit Could Detect a Covid-19 Symptom You might have a full-featured pulse oximeter sitting on your wrist right now Photo: Adam Birkett/Unsplash One of the scariest things about Covid-19 is that if you get the virus, there’s not a whole lot you can do. Official guidelines say to treat it at home, much as you would a cold or flu — rest, drink fluids, separate yourself from others in your living space, and so on. That’s all well and good, except that Covid-19 has fast developed a reputation for causing otherwise stable patients to crash alarmingly quickly. Doctors tell stories of patients who battle the virus for days or weeks, seem fine, and then in a matter of hours deteriorate and need to be placed on a ventilator — or worse. If you’re treating yourself for Covid-19 at home, how can you know if you’re in the middle of a serious crash? By all accounts, Covid-19 makes many people feel terrible — how can patients outside a hospital setting know when things have gone from merely awful to life-threatening? One potential health tool that’s rapidly emerging is the use of a home pulse oximeter. These simple devices measure the oxygen level in your blood. If it drops below 92%, that’s a concern. If it falls further, you could be in big trouble — some Covid-19 patients have reportedly had levels in the 50% range. Pulse oximeters are especially appealing because Covid-19 has been reported to cause silent hypoxia. In this condition — which seems tailor-made to haunt the dreams of hypochondriacs — a person can walk around with a serious Covid-19 oxygen deficiency and not know about it until it’s too late. There’s only one problem — home pulse oximeters are fast becoming more scarce than toilet paper. I got an Innovo pulse oximeter a year ago to monitor myself during exercise. I paid $23 for the device, and it arrived in two days. This morning, I checked Amazon and could only find one pulse oximeter available to ship in less than a week. They were charging $60. Most wouldn’t ship until mid-May, or later, at any price. If you’re one of the millions of people who wear a Fitbit smartwatch, though, there’s good news. You likely have a full-featured pulse oximeter sitting on your wrist right now. And it could be one firmware update away from potentially saving your life. That Fitbit has been quietly placing pulse oximeters in their watches for years has long been a badly kept industry secret. Users and gadget reviewers alike have noticed the sensors on the back of their Fitbit devices and speculated about their presence and function. A video on my own low-budget YouTube channel speculating about the sensor has 10,000+ views and has received more viewer engagement than many of my other videos. The consensus (which Fitbit ultimately confirmed) was that the company was quietly developing a program to use the sensor for detection of sleep apnea. As early as 2017, Fitbit was hinting at this direction and copped to testing hardware for detecting apnea (a serious condition, the treatment of which will be a projected $6.7 billion industry by 2021). As Fitbit rolled out improved sleep tracking last year, a move toward tracking sleep apnea seemed just over the horizon (I was a beta tester in this program). There are several hurdles to including a pulse oximeter in a consumer wearable. First, there are the technical hurdles. The device has to actually work, and measuring oxygen levels at the wrist is a challenging problem. 
Some data also indicates that it’s especially challenging with people of color, a concerning finding, especially since Covid-19 impacts these communities disproportionately. There are also concerns that a user’s movements could impact the readings — although, silent hypoxia aside, it’s unclear how much a Covid-19-afflicted patient would be moving around. And price is always a concern — smartwatches often cost $150+, putting them out of reach of many vulnerable populations. Rather than treating blood oxygen as another vanity metric to show to life hackers and exercise fanatics, it’s gone the much-harder route of using its sensors to work toward diagnosing an actual medical condition. But beyond the technical challenges, there are also major regulatory hurdles to clear. Telling people their step count (or even their heart rate) is one thing. Diagnosing them with a disease using a consumer device is another entirely. My own best guess is that Fitbit, as an independent company, didn’t have the regulatory connections and pocketbook to stomach a move into the medical device sector. With its announced acquisition by Google, though, Fitbit suddenly has a deep-pocketed corporate parent to navigate the U.S. Food and Drug Administration (and handle the liability from potential device failures) on its behalf. Perhaps because of that backing, Fitbit quietly rolled out blood oxygen level tracking in its app in January and told Gizmodo that it “expect[s] to submit for FDA clearance soon.” Fitbit’s new blood oxygen measurement capability has potentially life-saving implications in the fight against Covid-19. Twenty-eight million people already wear Fitbits, so using them as pulse oximeters could provide monitoring capabilities to a huge swath of people at once (the capability is not available in all Fitbit models, but is present in its newer smartwatches and trackers). Blood oxygen levels are more useful in a diagnostic sense when they’re used to track a trend. What better way to see oxygen level trends than to have an always-on pulse oximeter on your wrist? At the moment, Fitbit only exposes oxygen level data in the sleep-tracking portion of its app. The levels are used to show a general summary of oxygenation status, and users can’t get a specific percentage reading. This is consistent with its original goal of tracking sleep apnea. But that’s likely a firmware and software decision, not a hardware one. A simple firmware update over the air could likely enable full blood oxygen level tracking very easily, since the hardware (and likely the algorithms for processing raw pulse oximeter readings into meaningful data) are already there. So will Fitbit enable this feature? A lot likely depends on regulatory bodies like the FDA. The FDA reportedly did not allow Apple to enable its own blood oxygen level tracking on the Apple watch, another popular device with a stealth pulse oximeter onboard. But it did hint at allowing consumer wearables to monitor for Covid-19, and researchers are forging ahead with studies to evaluate the Apple Watch, Fitbit devices, and Garmin smartwatches for this purpose. At the moment, Fitbit still says its devices shouldn’t be used for medical purposes. And there are the ongoing technical concerns about the devices’ accuracy, especially at the low oxygen levels that indicate danger. Here, though, Fitbit’s cautious approach and years of testing may serve it well. 
Rather than treating blood oxygen as another vanity metric to show to life hackers and exercise fanatics, it’s gone the much-harder route of using its sensors to work toward diagnosing an actual medical condition. That means the company has likely been laying the groundwork for medical device clearance — technically and legally — from day one. That gives it a huge advantage over its competitors, both in terms of regulatory connections and the hardware already baked into millions of its devices. That the condition Fitbit chose to treat, sleep apnea, is characterized by low oxygen levels bodes well, too. The company has likely focused on detecting oxygen accurately at low levels from the beginning. This potentially gives it another major boost over other smart devices, which work best at measuring the high oxygen levels exhibited by healthy users. It may even give Fitbit’s devices an advantage over existing, FDA-cleared pulse oximeters, which likely use more basic software and simpler algorithms to perform their measurements. And for Fitbit’s blood oxygen levels to be useful, they don’t have to be perfect. They just need to show a meaningful trend. Rather than exposing the values as a specific percentage, the company could always give a summary statistic to indicate an overall trend or trajectory — green for “You’re fine,” yellow for “Call your doctor,” and red for “go to the ER.” Critics of Fitbit’s tech also miss the point that its pulse oximeter readings wouldn’t have to stand alone. The company already has detailed knowledge of its users’ bodies, including their height, weight, age, heart rate trends, and overall activity levels, as well as their baseline blood oxygen levels. All this data could be integrated into a risk score for low oxygen levels — the pulse oximeter reading wouldn’t need to stand alone. I don’t have a window into Fitbit’s tech or regulatory teams or into the FDA. But given what I know about the company’s trajectory and hardware, it seems ideally placed to rapidly provide life-saving oxygen monitoring to millions. Doing so would likely require addressing technical and regulatory (not to mention UI and privacy) issues rapidly and taking on some unknown risks. But Fitbit has years of research under its belt. It has proven hardware, trusted by millions of users and medical industry players alike. It appears to have a relationship with the FDA, and the corporate backing, in Google and Alphabet, to intensify that relationship quickly (and address the inevitable liability concerns of fast-tracking a medical device). If any company can bring life-saving pulse oximetry to millions of people overnight, my money is on Fitbit.
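The traffic-light idea sketched in the closing paragraphs is easy to make concrete. The following Python sketch is purely illustrative and is not Fitbit's algorithm or API; the 92% concern threshold comes from the article, while the 88% danger threshold, the smoothing window, and the trend rule are assumptions of mine.

# Hypothetical sketch only; not Fitbit's algorithm or API.
# `readings` is a list of recent SpO2 percentages, oldest first.
def oxygen_status(readings, concern=92.0, danger=88.0):
    """Reduce raw SpO2 readings to a traffic-light summary plus a trend."""
    if not readings:
        return "unknown", 0.0
    recent = readings[-5:]                        # smooth the newest readings
    level = sum(recent) / len(recent)
    baseline = sum(readings[:5]) / len(readings[:5])
    trend = level - baseline                      # negative means oxygenation is falling
    if level < danger or (level < concern and trend < -2.0):
        status = "red: go to the ER"
    elif level < concern:
        status = "yellow: call your doctor"
    else:
        status = "green: you're fine"
    return status, trend

# A gradual decline that crosses the concern threshold:
print(oxygen_status([97, 97, 96, 95, 94, 93, 92, 91, 90, 90]))

The trend term reflects the article's point: a single reading matters less than the trajectory, so even an imperfect sensor can be useful if it tracks change against the wearer's own baseline.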
https://onezero.medium.com/an-update-to-your-fitbit-could-detect-a-covid-19-symptom-a14699667583
['Thomas Smith']
2020-05-13 05:31:01.310000+00:00
['Health', 'Wearables', 'Fitbit', 'Tech', 'Coronavirus']
Title Update Fitbit Could Detect Covid19 SymptomContent Update Fitbit Could Detect Covid19 Symptom might fullfeatured pulse oximeter sitting wrist right Photo Adam BirkettUnsplash One scariest thing Covid19 get virus there’s whole lot Official guideline say treat home much would cold flu — rest drink fluid separate others living space That’s well good except Covid19 fast developed reputation causing otherwise stable patient crash alarmingly quickly Doctors tell story patient battle virus day week seem fine matter hour deteriorate need placed ventilator — worse you’re treating Covid19 home know you’re middle serious crash account Covid19 make many people feel terrible — patient outside hospital setting know thing gone merely awful lifethreatening One potential health tool that’s rapidly emerging use home pulse oximeter simple device measure oxygen level blood drop 92 that’s concern fall could big trouble — Covid19 patient reportedly level 50 range Pulse oximeter especially appealing Covid19 reported cause silent hypoxia condition — seems tailormade haunt dream hypochondriac — person walk around serious Covid19 oxygen deficiency know it’s late There’s one problem — home pulse oximeter fast becoming scarce toilet paper got Innovo pulse oximeter year ago monitor exercise paid 23 device arrived two day morning checked Amazon could find one pulse oximeter available ship le week charging 60 wouldn’t ship midMay later price you’re one million people wear Fitbit smartwatch though there’s good news likely fullfeatured pulse oximeter sitting wrist right could one firmware update away potentially saving life Fitbit quietly placing pulse oximeter watch year long badly kept industry secret Users gadget reviewer alike noticed sensor back Fitbit device speculated presence function video lowbudget YouTube channel speculating sensor 10000 view received viewer engagement many video consensus Fitbit ultimately confirmed company quietly developing program use sensor detection sleep apnea early 2017 Fitbit hinting direction copped testing hardware detecting apnea serious condition treatment projected 67 billion industry 2021 Fitbit rolled improved sleep tracking last year move toward tracking sleep apnea seemed horizon beta tester program several hurdle including pulse oximeter consumer wearable First technical hurdle device actually work measuring oxygen level wrist challenging problem data also indicates it’s especially challenging people color concerning finding especially since Covid19 impact community disproportionately also concern user’s movement could impact reading — although silent hypoxia aside it’s unclear much Covid19afflicted patient would moving around price always concern — smartwatches often cost 150 putting reach many vulnerable population Rather treating blood oxygen another vanity metric show life hacker exercise fanatic it’s gone muchharder route using sensor work toward diagnosing actual medical condition beyond technical challenge also major regulatory hurdle clear Telling people step count even heart rate one thing Diagnosing disease using consumer device another entirely best guess Fitbit independent company didn’t regulatory connection pocketbook stomach move medical device sector announced acquisition Google though Fitbit suddenly deeppocketed corporate parent navigate US Food Drug Administration handle liability potential device failure behalf Perhaps backing Fitbit quietly rolled blood oxygen level tracking app January told Gizmodo “expects submit FDA clearance soon” Fitbit’s new 
blood oxygen measurement capability potentially lifesaving implication fight Covid19 Twentyeight million people already wear Fitbits using pulse oximeter could provide monitoring capability huge swath people capability available Fitbit model present newer smartwatches tracker Blood oxygen level useful diagnostic sense they’re used track trend better way see oxygen level trend alwayson pulse oximeter wrist moment Fitbit expose oxygen level data sleeptracking portion app level used show general summary oxygenation status user can’t get specific percentage reading consistent original goal tracking sleep apnea that’s likely firmware software decision hardware one simple firmware update air could likely enable full blood oxygen level tracking easily since hardware likely algorithm processing raw pulse oximeter reading meaningful data already Fitbit enable feature lot likely depends regulatory body like FDA FDA reportedly allow Apple enable blood oxygen level tracking Apple watch another popular device stealth pulse oximeter onboard hint allowing consumer wearable monitor Covid19 researcher forging ahead study evaluate Apple Watch Fitbit device Garmin smartwatches purpose moment Fitbit still say device shouldn’t used medical purpose ongoing technical concern devices’ accuracy especially low oxygen level indicate danger though Fitbit’s cautious approach year testing may serve well Rather treating blood oxygen another vanity metric show life hacker exercise fanatic it’s gone muchharder route using sensor work toward diagnosing actual medical condition mean company likely laying groundwork medical device clearance — technically legally — day one give huge advantage competitor term regulatory connection hardware already baked million device condition Fitbit chose treat sleep apnea characterized low oxygen level bodes well company likely focused detecting oxygen accurately low level beginning potentially give another major boost smart device work best measuring high oxygen level exhibited healthy user may even give Fitbit’s device advantage existing FDAcleared pulse oximeter likely use basic software simpler algorithm perform measurement Fitbit’s blood oxygen level useful don’t perfect need show meaningful trend Rather exposing value specific percentage company could always give summary statistic indicate overall trend trajectory — green “You’re fine” yellow “Call doctor” red “go ER” Critics Fitbit’s tech also miss point pulse oximeter reading wouldn’t stand alone company already detailed knowledge users’ body including height weight age heart rate trend overall activity level well baseline blood oxygen level data could integrated risk score low oxygen level — pulse oximeter reading wouldn’t need stand alone don’t window Fitbit’s tech regulatory team FDA given know company’s trajectory hardware seems ideally placed rapidly provide lifesaving oxygen monitoring million would likely require addressing technical regulatory mention UI privacy issue rapidly taking unknown risk Fitbit year research belt proven hardware trusted million user medical industry player alike appears relationship FDA corporate backing Google Alphabet intensify relationship quickly address inevitable liability concern fasttracking medical device company bring lifesaving pulse oximetry million people overnight money FitbitTags Health Wearables Fitbit Tech Coronavirus
3,971
Bulletproof Writers: Call for Submissions
Bulletproof Writers has, until now, remained a dormant project of mine, due to my previous reluctance to share it with others and my insistence on trying to do it all on my own. But now, I’ve realized my folly and am opening the publication to new writers. The easiest way to apply is through this link at Smedian, scrolling down to this publication, and then requesting to contribute there. The other way to apply is by sending me a message through Facebook, but as I’m not online there more than once or twice a day, that might take longer to get accepted. *These guidelines have been updated as of June 7, 2020. Submission Requirements The only requirements I have for you as a guest poster to this publication are these: There is no required word length. Make your post as long or as short as it needs to be. Focus on being respectful of your reader’s time, not selling your product or service in the post, and providing value to the readers (including a link to your product in your bio is okay). Please keep your post centered around the theme of ‘writing’ exclusively. At this time, I’m accepting non-fiction pieces only. A short bio at the end of your post with a link to your website or opt-in offer is fine. Keep it to 2–3 sentences with 1–2 links included MAX. I’ll attach a byline of my own to every post, so if you apply to the publication, you must be okay with this. Submissions are okay whether they are previously published or unpublished drafts. Although we prefer unpublished drafts, both are fine as long as the piece you are submitting was published less than 7 days ago. Please do not pull articles from another publication to be published in ours, though, as this is seen as bad etiquette online. Republishing your content is fine! Please change around the tags, photo, and headline to help it get more visibility on Medium, and not be penalized by Medium’s algorithm. Let your article sit for at least 6 months before republishing it. Also, if your piece has been published somewhere else before, please include a note at the bottom of your piece so I can look at it and suggest any changes that could be made to help it stand out as a unique piece for our publication. Attach a photo to your post to help it get more visibility. Use a free site like Unsplash to find these photos and make sure they are royalty-free and openly accessible under a Creative Commons license. Please include photo credit so the editors are aware of where the photo came from, and know that you have the proper rights to use it. Include 3–5 relevant tags on your post. This is not necessary, but again, it will help your post get more visibility! Share it out to your followers. This can be as simple as a quick Tweet or a share on your Facebook profile page. Believe me, anything you can do to increase the visibility of your post helps! Edit your post. Please edit and polish your submission as best as you can. Make sure it’s spaced out nicely and is typo-free. The less work I have to do to publish it, the better! Focus on value first. Look at your post and ask, “Is this valuable? If I were a reader, what would I be getting out of reading this post? What tangible things would I walk away with?” If you brainstorm your post this way, it becomes a lot easier to help your readers reach the results you are promising with your post and that your audience desires. Okay, that was 11 tips. Maybe I’m a little more strict than I thought. Please, if you’re interested in applying, don’t hesitate to do so. 
I’d love to help you grow your audience and reach more writers with your words, and this publication is a great way for me to do so :) Looking forward to reading your submissions! Cheers, Blake P.S. Once you’re accepted as a writer for this publication, please follow this guide to submit your draft & add it to the publication’s queue.
https://medium.com/bulletproof-writers/bulletproof-writers-call-for-submissions-66d47d7f5c1e
['Blake Powell']
2020-06-07 22:26:49.684000+00:00
['Writing Tips', 'Creativity', 'Blogging', 'Art', 'Writing']
Title Bulletproof Writers Call SubmissionsContent Bulletproof Writers remained dormant project mine due previous inability share others try I’ve realized folly opening publication new writer easiest way apply link Smedian scrolling publication requesting contribute way apply sending message Facebook I’m online twice day might take longer get accepted guideline updated June 7 2020 Submission Requirements requirement require guest poster publication required word length Make post long short need Focus respectful reader’s time selling product service post providing value reader including link product bio okay Please keep post centered around theme ‘writing’ exclusively time I’m accepting nonfiction piece short bio end post link website optin offer fine Keep 2–3 sentence 1–2 link included MAX I’ll attach byline every post apply publication must okay Submissions okay whether previously published unpublished draft Although prefer unpublished draft fine long piece submitting published le 7 day ago Please pull article another publication published though seen bad etiquette online Republishing content fine Please change around tag photo headline help get visibility Medium penalized Medium’s algorithm Let article sit least 6 month republishing Also piece published somewhere else please include note bottom piece look suggest change could made help stand unique piece publication Attach photo post help get visibility Use free site like Unsplash find photo make sure royaltyfree openly accessible creative common Please include photo credit editor aware photo came proper right use Include 3–5 relevant tag post necessary help post get visibility Share follower simple quick Tweet share Facebook profile page Believe anything increase visibility post help Edit post Please edit polish submission best Make sure it’s spaced nicely typofree le work publish better Focus value first Look post ask “Is valuable reader would getting reading post tangible thing would walk away from” brainstorm post way becomes lot easier help reader reach result promising post audience desire Okay 11 tip Maybe I’m little strict thought Please you’re interested applying don’t hesitate I’d love help grow audience reach writer word publication great way Looking forward reading submission Cheers Blake PS you’re accepted writer publication please follow guide submit draft add publication’s queueTags Writing Tips Creativity Blogging Art Writing
3,972
What Your Startup is Doing Wrong: Four Small Changes for Big Growth
One of the biggest rites of passage for any founder is the first time they pitch their idea / product / solution to an audience. You’ve worked hard to perfect your code and design, and your demo is debugged and polished. You’re the type of person who works well under pressure, and you haven’t really thought about what you’re going to say specifically. Inspiration always comes to you when you need it! You approach the mic… you pull out your lucky laser pointer… and you begin to passionately talk about “your baby.” This is the dream! But, as you look out into the audience, you see it: the blank, confused expressions, the questioning looks, and the glances at phones and watches. Oh no! “Why aren’t they as excited as I am?” you think. Many early-stage startups struggle to pitch and market their product effectively and efficiently. There’s no judgement in that statement; most of these companies are founded and managed by really brilliant engineers and product-focused teams. This is the reality of young technology startups. Not everyone has the luxury of getting a business degree, hiring a growth marketer, or having access to a network of savvy advisers. But if these companies want to raise capital, make profits, and/or get good PR, they will need to be able to communicate their value succinctly. What are some quick fixes these fledgling companies can change or add to their website/pitch to bring real value? Write a unique value proposition about your solution. Photo by Thought Catalog on Unsplash. 1. Write a Unique Value Proposition. Growth marketers talk a lot about unique value propositions (UVPs), but that’s because they are so important! And while conceptually it sounds easy to add a UVP, it’s actually a hard thing to execute. Your UVP is a statement that describes the benefits of your solution, how you meet your customers’ needs, and what distinguishes you from your competition. It also needs to be displayed prominently on your website, landing page, and marketing campaign. More deeply, in a concise yet evocative statement, you must describe the benefits of your product using The Four U’s: a) Useful: how is this product useful to a customer, and what problem is it solving for them? b) Urgency: why does a customer need your solution right this instant? c) Unique: what about your service is unlike anything else on the market? d) Ultra-specific: can you describe your company without any ambiguity or confusion, such that it leaves a reader without any hesitations or questions about what you are selling? Holy crap! A UVP needs to have A LOT of information in a super tiny space. It may seem impossible, but aim for clarity over creativity initially. As awareness of your product or service builds, you can begin to get clever with your messaging. For example, Netflix now simply states: “Watch what happens next.” This is a hat-tip to their binge-watching reputation. But Netflix can be creative and vague because nearly everyone knows what Netflix does. They’ve achieved product market fit. Unfortunately, your machine learning startup with an esoteric company name and logo will probably need an incredibly ULTRA-SPECIFIC and USEFUL value proposition to keep users on your site. Do you want to quickly test your UVP? Ask strangers who are completely unaware of your website and product to view your home page with your UVP visible. Only allow them to view it for seven seconds. If they can’t tell you what your company does and how it will benefit someone immediately, keep tweaking your UVP. 
Pitch your product in a streamlined, catchy manner. Photo by Kane Reinholdtsen on Unsplash. 2. Craft a concise, catchy pitch. It’s cliché for sure, but can you pitch your company in the time it takes to ride an elevator? You may wonder why people use this analogy, but there are some useful reasons why you want to be able to pitch in 15 seconds. Those who can quickly and accurately describe their product and its benefits are seen as competent and prepared. They demonstrate they know what they’re talking about. You don’t want to lose your audience’s attention, and you don’t want to lead them down a rabbit hole of misinformation and confusion. There are two types of pitches: One is the elevator pitch: a super quick overview, almost a spoken unique value proposition. The second is a meatier pitch: a longer pitch, useful for investor meetings, competitions, candidate interviews, and media events. I like to use this format to craft a one-minute pitch: a) Start with a short, funny anecdote or staggering statistic that relates to why someone would use your solution. You need to get your audience’s attention, and using emotion is a great way to grab someone’s focus. b) Introduce your solution by succinctly stating what you do. c) List the key benefits of your solution. d) Highlight why your company and/or market has a competitive advantage (great team, huge market size, large waitlist of customers, large network or famous connections). e) Close with a final catchy phrase that re-summarizes your solution. Insert clear calls-to-action in your website and pitch. Photo by rawpixel.com on Unsplash. 3. Have a clear call-to-action. “Click here.” “Register.” “Join now.” We’ve all seen these vague buttons on websites. To the person who built the website, it’s obvious what clicking that button will do. But to your audience, your intrepid potential customers, they have no idea what they’re getting into. Companies need to make sure the messaging around their call-to-action (CTA) is crystal clear. Returning to the Netflix example, their sign-up button clearly states: “Join Free for a Month.” You know when you click that button, not only will you be joining Netflix, but you’ll be getting your first month free. No confusion there, right? If you’re simply selling a product online, “Buy Now” always works well. But what about getting people to give you their email address to sign up for a waitlist or newsletter? “Click here to be added to our Beta release launch!” or “Sign up for our otter meme-a-day email!” Make it obvious! Run A/B tests on your messaging, or perform the stranger test I mentioned above. Who are you trying to reach? Understanding this will allow you to tailor your messaging appropriately. Thanks to GIPHY and HBO. 4. Cater to your audience. There’s nothing worse than missing out on a big opportunity because you didn’t do your homework! If you’re pitching to non-technical investors, and you’re a high-tech engineering startup, make sure your presentation has the appropriate amount of technical definitions and background, as well as business-related information. This way, investors can make informed decisions, and you can appear business-savvy and empathetic. If your consumers are teenagers, but your customers are their parents, make sure your website is understandable to both generations. This will help you attract and sell to an appropriate audience. Many people hate old-school frameworks, but it could be incredibly beneficial to sit with your co-founders and conduct an STP exercise (segment. target. 
position.) a) What possible demographic segments could be interested in your solution? b) Which segment will you be targeting initially and why? c) How will you position your solution to get that target’s attention? Remember: know your audience, know your customers, and know your unique value proposition! Know your audience… and audience size. Thanks to GIPHY and HBO. This article was inspired by my first San Francisco pitch competition last week, where my company was one of the few startups to go beyond the tech. We impressed the judges, especially the ones who weren’t data scientists and were looking for investments, with our emphasis on UVP and STP. Big props to my team at PipelineAI: we won the Startup Showcase at The Artificial Intelligence Conference. What is one overlooked business or marketing aspect you have found makes a big difference to focus on early in the life of a startup? Do you prefer focusing on traditional marketing frameworks or new-age growth hacks in a startup’s infancy? Share your thoughts below! Thanks to Thomas Maremaa and John A. Parks for providing invaluable editing direction on this article, Mikail Gündoğdu for GIF advice, and the Tradecraft community for supporting me in my endeavors as a growth writer.
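The article twice recommends running A/B tests on your messaging. As a rough, generic illustration of how such a test is usually evaluated, here is a two-proportion z-test in Python; the visitor and conversion counts are invented, and nothing here is specific to any tool named above.

from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of two CTA variants with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation
    return p_a, p_b, z, p_value

# Hypothetical counts: variant A says "Register", variant B says "Join Free for a Month".
p_a, p_b, z, p = ab_test(conv_a=48, n_a=1000, conv_b=74, n_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")

A p-value below your chosen significance level only means the difference is unlikely to be noise; you would still want enough traffic per variant before trusting the winner.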
https://medium.com/tradecraft-traction/what-your-startup-is-doing-wrong-four-small-changes-for-big-growth-e1a9409392b6
['Jessica Poteet']
2017-09-29 15:10:20.298000+00:00
['Silicon Valley', 'Growth', 'Marketing', 'Business', 'Startup']
Title Startup Wrong Four Small Changes Big GrowthContent One biggest rite passage founder first time pitch idea product solution audience You’ve worked hard perfect code design demo debugged polished You’re type person work well pressure haven’t really thought you’re going say specifically Inspiration always come need approach mic… pull lucky laser pointer… begin passionately talk “your baby” dream look audience see blank confused expression questioning look glance phone watch Oh “Why aren’t excited am” think Many earlystage startup struggle pitch market product effectively efficiently There’s judgement statement company founded managed really brilliant engineer productfocused team reality young technology startup everyone luxury getting business degree hiring growth marketer access network savvy adviser company want raise capital make profit andor get good PR need able communicate value succinctly quick fix fledgling company change add websitepitch bring real value Write unique value proposition solution Photo Thought Catalog Unsplash 1 Write Unique Value Proposition Growth marketer talk lot unique value proposition UVPs that’s important conceptually sound easy add UVP it’s actually hard thing execute UVP statement describes benefit solution meet customers’ need distinguishes competition also need displayed prominently website landing page marketing campaign deeply concise yet evocative statement must describe benefit product using Four U’s Useful product useful customer problem solving b Urgency customer need solution right instant c Unique service unlike anything else market Ultraspecific describe company without ambiguity confusion leaf reader without hesitation question selling Holy crap UVP need LOT information super tiny space may seem impossible aim clarity creativity initially awareness product service build begin get clever messaging example Netflix simply state “Watch happens next” hattip binge watching reputation Netflix creative vague nearly everyone know Netflix They’ve achieved product market fit Unfortunately machine learning startup esoteric company name logo probably need incredibly ULTRASPECIFIC USEFUL value proposition keep user site want quickly test UVP Ask stranger completely unaware website product view home page UVP visible allow view seven second can’t tell want company benefit someone immediately keep tweaking UVP Pitch product streamlined catchy manner Photo Kane Reinholdtsen Unsplash 2 Craft concise catchy pitch It’s cliché sure pitch company time take ride elevator may wonder people use analogy useful reason want able pitch 15 second quickly accurately describe product benefit seen competent prepared demonstrate know they’re talking don’t want lose audience’s attention don’t want lead rabbit hole misinformation confusion two type pitch One elevator pitch super quick overview almost spoken unique value proposition second meatier pitch longer pitch useful investor meeting competition candidate interview medium event like use format craft one minute pitch Start short funny anecdote staggering statistic relates someone would use solution need get audience’s attention using emotion great way grab someone’s focus b Introduce solution succinctly stating c List key benefit solution Highlight company andor market competitive advantage great team huge market size large waitlist customer large network famous connection e Close final catchy phrase resummarizes solution Insert clear callstoaction website pitch Photo rawpixelcom Unsplash 3 clear calltoaction “Click here” 
“Register” “Join now” We’ve seen vague button website person built website it’s obvious clicking button audience intrepid potential customer idea they’re getting Companies need make sure messaging around calltoaction CTA crystal clear Returning Netflix example signup button clearly state “Join Free Month” know click button joining Netflix you’ll getting first month free confusion right you’re simply selling product online “Buy Now” always work well getting people give email address signup waitlist newsletter “Click added Beta release launch” “Sign otter memeaday email” Make obvious Run AB test messaging perform stranger test mentioned trying reach Understanding allow tailor messaging appropriately Thanks GIPHY HBO 4 Cater audience There’s nothing worse missing big opportunity didn’t homework you’re pitching nontechnical investor you’re hightech engineering startup make sure presentation appropriate amount technical definition background well businessrelated information way investor make informed decision appear businesssavvy empathetic consumer teenager customer parent make sure website understandable generation help attract sell appropriate audience Many people hate oldschool framework could incredibly beneficial sit cofounder conduct STP exercise segment target position possible demographic segment could interested solution b segment targeting initially c position solution get target’s attention Remember know audience know customer know unique value proposition Know audience… audience size Thanks GIPHY HBO article inspired first San Francisco pitch competition last week company one startup go beyond tech impressed judge especially one weren’t data scientist looking investment emphasis UVP STP Big prop team PipelineAI Startup Showcase Artificial Intelligence Conference one overlooked business marketing aspect found make big difference focus early life startup prefer focusing traditional marketing framework newage growth hack startup’s infancy Share thought Thanks Thomas Maremaa John Parks providing invaluable editing direction article Mikail Gündoğdu GIF advice Tradecraft community supporting endeavor growth writerTags Silicon Valley Growth Marketing Business Startup
3,973
Jellyfish Reveal More Glowing Secrets & Bacteria Make Purple Sea Snail Dye
NEWSLETTER Jellyfish Reveal More Glowing Secrets & Bacteria Make Purple Sea Snail Dye This Week in Synthetic Biology (Issue #14) Receive this newsletter every Friday morning! Sign up here: https://synbio.substack.com/ Tell me about your research on Twitter. The Crystal Jelly Unveils Its Brightest Protein Yet Aequorea victoria, the crystal jelly, hovers in the waters off the coast of California. Decades ago, Osamu Shimomura noticed that these jellies emit a faint, green light. So he took pieces from one of them, did some experiments, and found the protein responsible for the glow. That protein — GFP — is now used in thousands of labs to light up the insides of microscopic cells. Shimomura shared the 2008 Nobel Prize for that work, along with Martin Chalfie and Roger Tsien, who died in 2016. Now, it looks like the crystal jelly hasn’t given up all of its secrets just yet. In a new study, nine previously unstudied proteins, also from Aequorea victoria and a related species, were reported. Several of the new fluorescent proteins have quirky characteristics, too. One of them is “the brightest GFP homolog yet characterized”, while another protein can respond to both UV and blue light. The scientists even found a couple of purple and blue-pigmented chromoproteins. The findings are further evidence that, in the darkness of the oceans, scores of mysteries remain to be discovered. This work was published Nov. 2 in the open-access journal PLoS Biology. Link Will DNA Replace Grocery Store Barcodes? A standard barcode — think grocery store rectangle, with black-and-white lines — contains 11 digits. Mixing up those digits in every possible way gives about 100 billion possible combinations. That’s a lot, but it’s not nearly as many combinations as what a barcode made from DNA could provide. A new study, published in Nature Communications, reports a molecular, DNA tagging system that could become the future of barcodes. The DNA was dehydrated, which made it more stable, and the sequences were read out in just a few seconds with an Oxford Nanopore MinION, a small, portable DNA sequencer. To facilitate that speed, the authors came up with some clever ways to avoid complex, computational analysis of the DNA signals; they were able to read the barcodes directly from the raw sequence data. This study was published Nov. 3 and is open access. Link Bacteria Produce Tyrian Purple Dye (From Sea Snails!) As early as 1570 BC, the Phoenicians were dyeing fabrics with Tyrian purple. To make the dye required a process so intensive as to be nonsensical; as many as 250,000 sea snails (Bolinus brandaris) had to be smashed into goop to make just one ounce of dye. It was a color reserved for royalty, and literally worth more than its weight in gold. Thank goodness, no more snails need to be smooshed to make Tyrian purple dye. Engineered E. coli bacteria can now make the dye’s predominant chemical, called 6,6'-dibromoindigo. To achieve this, scientists from Seoul National University added several genes to the bacteria: a tryptophan 6-halogenase gene, a tryptophanase gene, and a flavin-containing monooxygenase. That’s a mouth-garbling sentence, but I promise the result is easier to understand: the cells were able to produce 315 mg of 6,6'-dibromoindigo per liter in flasks, using tryptophan — an amino acid — as the chemical precursor. This work was published Nov. 2 in Nature Chemical Biology. Link 79 Different Cas9 Proteins Were Tested. Some Are Wicked Cool Cas9 is maybe the most famous protein on earth. 
It’s like, the Kim Kardashian of the protein world. If there was a magazine for proteins, Cas9 would be on its cover. Oh wait, that already happened. There’s a lot of different Cas9 proteins, but not all of them have been characterized. In a new study, scientists identified, and tested, 79 different Cas9 orthologs — proteins taken from different species, but that have the same function — and figured out how they recognize and cut DNA. Intriguingly, some of the Cas9 proteins only worked at specific temperatures; Cme2 Cas9, for example, “was only robustly active from ~30 °C to 55 °C suggesting the possibility of temperature-controlled DNA search and modification.” This study was published Nov. 2 in Nature Communications, and is open access. Link CRISPR Shuts Down Fertilized Eggs I didn’t know about the birds and the bees until my parents sat me down and told me. But if you’re wondering, a typical pregnancy starts like this: a fertilized egg latches on to the endometrium in the uterus. That activates a flood of genes to turn “on”, including one called leukemia inhibitory factor, or LIF. A new study has figured out a way to cut off fertility — with CRISPR — by targeting LIF and switching it “off”. The reason this is cool is because, well, the CRISPR-Cas9 system is photoactivatable, meaning it can be switched on with an LED. The scientists, from Keio University in Tokyo, think that their work could prove useful in basic science research that probes the molecular signals underpinning this process. The study was published Nov. 2 in the journal PNAS, and is open access. Link
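The barcode comparison in the DNA-tagging item above comes down to simple combinatorics, and a few lines of Python make the gap vivid. The 20- and 30-base tag lengths below are illustrative choices of mine, not figures from the study.

# An 11-digit decimal barcode versus DNA barcodes with 4 bases (A, C, G, T) per position.
digits = 10 ** 11     # ~100 billion combinations for a standard retail barcode
dna_20 = 4 ** 20      # a 20-base DNA tag
dna_30 = 4 ** 30      # a 30-base DNA tag

print(f"11-digit barcode: {digits:.2e} combinations")
print(f"20-base DNA tag:  {dna_20:.2e} combinations")
print(f"30-base DNA tag:  {dna_30:.2e} combinations")

Even a modest 30-base tag offers roughly 10^18 distinct codes, which is why a molecular barcode can dwarf anything an 11-digit number allows.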
https://medium.com/bioeconomy-xyz/jellyfish-reveal-more-glowing-secrets-bacteria-make-purple-sea-snail-dye-37d03a3d58e9
['Niko Mccarty']
2020-11-06 13:39:00.501000+00:00
['Newsletter', 'CRISPR', 'News', 'Science', 'Future']
Title Jellyfish Reveal Glowing Secrets Bacteria Make Purple Sea Snail DyeContent NEWSLETTER Jellyfish Reveal Glowing Secrets Bacteria Make Purple Sea Snail Dye Week Synthetic Biology Issue 14 Receive newsletter every Friday morning Sign httpssynbiosubstackcom Tell research Twitter Crystal Jelly Unveils Brightest Protein Yet Aequorea victoria crystal jelly hovers water coast California Decades ago Osamu Shimomura noticed jelly emit faint green light took piece one experiment found protein responsible glow protein — GFP — used thousand lab light inside microscopic cell Shimomura shared 2008 Nobel Prize work along Martin Chalfie Roger Tsien died 2016 look like crystal jelly hasn’t given secret yet new study nine previously unstudied protein also Aequorea victoria related specie reported Several new fluorescent protein quirky characteristic One “the brightest GFP homolog yet characterized” another protein respond UV blue light scientist even found couple purple bluepigmented chromoproteins finding evidence darkness ocean score mystery remain discovered work published Nov 2 openaccess journal PLoS Biology Link DNA Replace Grocery Store Barcodes standard barcode — think grocery store rectangle blackandwhite line — contains 11 digit Mixing digit every possible way give 100 billion possible combination That’s lot it’s nearly many combination barcode made DNA could provide new study published Nature Communications report molecular DNA tagging system could become future barcodes DNA dehydrated made stable sequence read second Oxford Nanopore MinION small portable DNA sequencer facilitate speed author came clever way avoid complex computational analysis DNA signal able read barcodes directly raw sequence data study published Nov 3 open access Link Bacteria Produce Tyrian Purple Dye Sea Snails early 1570 BC Phoenicians dying fabric Tyrian purple make dye required process intensive nonsensical many 250000 sea snail Bolinus brandaris smashed goop make one ounce dye color reserved royalty literally worth weight gold Thank goodness snail need smooshed make Tyrian purple dye Engineered E coli bacteria make dye’s predominant chemical called 66dibromoindigo achieve scientist Seoul National University added several gene bacteria tryptophan 6halogenase gene tryptophanase gene flavincontaining monooxygenase That’s mouth garbling sentence promise result easier understand cell able produce 315 mg 66dibromoindigo per liter flask using tryptophan — animo acid — chemical precursor work published Nov 2 Nature Chemical Biology Link 79 Different Cas9 Proteins Tested Wicked Cool Cas9 maybe famous protein earth It’s like Kim Kardashian protein world magazine protein Cas9 would cover Oh wait already happened There’s lot different Cas9 protein characterized new study scientist identified tested 79 different Cas9 orthologs — protein taken different specie function — figured recognize cut DNA Intriguingly Cas9 protein worked specific temperature Cme2 Cas9 example “was robustly active 30 °C 55 °C suggesting possibility temperaturecontrolled DNA search modification” study published Nov 2 Nature Communications open access Link CRISPR Shuts Fertilized Eggs didn’t know bird bee parent sat told you’re wondering typical pregnancy start like fertilized egg latch endometrium uterus activates flood gene turn “on” including one called leukemia inhibitory factor LIF new study figured way cut fertility — CRISPR — targeting LIF switching “off” reason cool well CRISPRCas9 system photoactivatable meaning switched LED scientist Keio 
University Tokyo think work could prove useful basic science research probe molecular signal underpinning process study published Nov 2 journal PNAS open access LinkTags Newsletter CRISPR News Science Future
3,974
IBM is Recognized in the 2020 iF Design Awards
On behalf of our design team at IBM Cloud, Data and AI, we’re excited to announce that we’ve won iF Design Awards in the Communications category for IBM AutoAI and IBM Watson Studio Desktop. We are thrilled to see these two products get recognized for their outstanding design work. This year, the iF Design jury, composed of 78 international experts, judged 7,300 products and projects submitted from 56 countries around the world. The iF Design Award is one of the world’s oldest, most celebrated, and most competitive design competitions. This is our third year in a row being recognized by this organization, and the first time that we have seen two of our products get awarded at the same time. It’s truly an achievement and an honor for us, and I’m so proud that our team’s hard work has paid off. What is IBM AutoAI? IBM AutoAI, part of IBM Watson Studio, automates the process of building machine learning models for users such as data scientists. Businesses looking to integrate AI into their practices often struggle to establish the necessary foundation for this technology due to limited resources or a gap in skill sets. The process of understanding how to use AI and generate machine learning models from data sets can take days or weeks. With a distinct emphasis on trust and explainability, IBM AutoAI visualizes this automated machine learning process through each stage of data preparation, algorithm selection, model creation, and data enhancement. The tool is able to teach and empower users to identify and apply the best models for their data in a matter of minutes, helping businesses save time and resources. AutoAI guides users through the process of joining multiple data sources with suggestions and prompts throughout the data preparation experience. Designing for IBM AutoAI One of the primary goals for the design team was making IBM AutoAI understandable for users with varying levels of expertise. It was a challenge for the designers to understand the AI and machine learning technology behind this automated solution, and then to communicate the model creation process in a comprehensive but visually appealing way. The team set out to create a software product that guided the user through these complex technological processes step by step. IBM AutoAI visualizes the entire model creation process through multiple “lenses”, providing transparency to users in a way that they can understand the process to whatever level of detail they need. The design team worked directly with IBM Research to understand the underlying technology and user expectations for this type of tool. The team also interviewed target users and conducted competitive research to increase their domain knowledge in artificial intelligence and better inform their design decisions. Based on deep user research, the designers found that users inherently didn’t trust an automated solution. The design team wanted to avoid this perception of an automated solution as a “black box”, where it is unclear to the user how a result was generated from the information that they input. Throughout the design process, the designers placed emphasis on explaining all steps of the software tool’s process in layman’s terms in order to build confidence and trust with the users. By leveraging the IBM Enterprise Design Thinking framework, the design process also extended to development, content, and offering management teams, which helped create a product more aligned with all stakeholder goals. What is IBM Watson Studio Desktop? 
IBM Watson Studio Desktop is a data science and machine-learning software platform that provides self-service, drag-and-drop data analytics right from the user’s desktop. The software platform’s features include the ability to automate data preparation and modeling, data analysis, enhanced visualizations, and an intuitive interface that doesn’t require coding knowledge. It can integrate with on-premise, cloud, and hybrid-cloud environments. This dashboard offers users a way to explore, prepare, and model their data with simple drag-and-drop features, without needing coding abilities. Data analysis can be a painstaking process, as users need to gather, clean, sort, and sift through the data while working with data scattered across several sources and locations. IBM Watson Studio Desktop is an end-to-end solution that helps businesses get started with the data analysis process faster, giving data scientists all the tools they need to improve their workflow. This product is a desktop version of IBM Watson Studio, a collaborative cloud data analysis platform. Designing for IBM Watson Studio Desktop The design team behind IBM Watson Studio Desktop conducted research on their target users, primarily data scientists, to understand their needs. The designers conducted interviews with sponsor users and corporations as well as on-site user testing. The team found that data scientists primarily worked in isolation, and were looking for a more dynamic, collaborative workflow, where they had all of their tools in one place. The team aimed to design a tool and interface where data scientists were provided with an ecosystem of data analysis tools. They wanted to create a space for their users to collaborate, access all of their needed tools and information at once, and create a cohesive workflow between themselves and their peers. User Experience Journey for IBM Watson Studio Desktop users Another challenge for the UX team was to design and implement all of these capabilities that were originally designed for the cloud version of the software into the desktop version. IBM Watson Studio Desktop was created for users who wanted to work offline as well as in an interface with more narrowed and tailored machine learning capabilities. The team wanted to design a desktop tool that translated well as an extension of the cloud tool, with a user experience that was more simplified and focused, but still familiar to users from the original cloud version. The team designed an interface that used similar design principles, as well as carried over key features from the cloud version that the users wanted to see in this new environment. “IBM Watson Studio Desktop and IBM Watson AutoAI bridge gaps in skills and knowledge and make data analysis and machine learning more accessible for businesses in the modern age. We designed these products with empathy and a user-centered approach, so that our users could confidently integrate AI into their business workflows.” --Alex Swain, Design Principal at IBM Cloud, Data and AI Designing Watson Products As described above, designing software products with AI and machine learning capabilities is a challenging task that requires an in-depth understanding of the field and its challenges. AI has the power to impact businesses on a large scale, and understanding how to take advantage of these capabilities is essential for businesses to succeed and excel with their data strategy. 
Being recognized for the design work behind these products is a true testament to how much user experience can play a role in shaping how these AI technologies impact our lives. Winning Teams IBM AutoAI Design Principal: Alex Swain Design Team: Dillon Eversman, Voranouth Supadulya IBM Watson Studio Desktop
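To make the "algorithm selection" stage described for IBM AutoAI a little more concrete, here is a deliberately generic Python sketch of automated model selection using scikit-learn. It is not IBM's API and says nothing about how AutoAI is actually built; it only illustrates the general idea of a tool searching over candidate algorithms and their settings on the user's behalf.

# Generic illustration of automated algorithm selection; not IBM AutoAI's API.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])

# The search space swaps whole estimators in and out, a crude stand-in for the
# algorithm-selection step an AutoML tool performs for the user.
search_space = [
    {"model": [LogisticRegression(max_iter=1000)], "model__C": [0.1, 1.0, 10.0]},
    {"model": [RandomForestClassifier()], "model__n_estimators": [100, 300]},
]

search = GridSearchCV(pipe, search_space, cv=5, scoring="roc_auc")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))

A real AutoML system adds the parts the article emphasizes, such as automated data preparation, visualization of each stage, and explanations that build trust, but the core loop of trying candidate algorithms and keeping the best one looks much like this sketch.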
https://medium.com/design-ibm/ibm-is-recognized-in-the-2020-if-design-awards-1221123585f8
['Arin Bhowmick']
2020-02-13 05:10:31.955000+00:00
['Machine Learning', 'UX', 'Data Science', 'Design', 'AI']
Title IBM Recognized 2020 Design AwardsContent behalf design team IBM Cloud Data AI we’re excited announce we’ve Design Awards Communications category IBM AutoAI IBM Watson Studio Desktop thrilled see two product get recognized outstanding design work year Design jury comprised 78 international expert judged 7300 product project submitted 56 country around world Design Award one world’s oldest celebrated competitive design competition third year row recognized organization first time seen two product get awarded time It’s truly achievement honor u I’m proud team’s hard work paid IBM AutoAI IBM AutoAI part IBM Watson Studio automates process building machine learning model user data scientist Businesses looking integrate AI practice often struggle establish necessary foundation technology due limited resource gap skill set process understanding use AI generate machine learning model data set take day week distinct emphasis trust explainability IBM AutoAI visualizes automated machine learning process stage data preparation algorithm selection model creation data enhancement tool able teach empower user identify apply best model data matter minute helping business save time resource AutoAI guide user process joining multiple data source suggestion prompt throughout data preparation experience Designing IBM AutoAI One primary goal design team making IBM AutoAI understandable user varying level expertise challenge designer understand AI machine learning technology behind automated solution communicating model creation process comprehensive visually appealing way team set create software product guided user complex technological process step step IBM AutoAI visualizes entire model creation process multiple “lenses” providing transparency user way understand process whatever extent detail need design team worked directly IBM Research understand underlying technology user expectation type tool team also interviewed target user conducted competitive research increase domain knowledge artificial intelligence better inform design decision Based deep user research designer found user inherently didn’t trust automated solution design team wanted avoid perception automated solution “black box” unclear user result generated information input Throughout design process designer placed emphasis explaining step software tool’s process laymen’s term order build confidence trust user leveraging IBM Enterprise Design Thinking framework design process also extended development content offering management team helped create product aligned stakeholder goal IBM Watson Studio Desktop IBM Watson Studio Desktop data science machinelearning software platform provides selfservice draganddrop data analytics right user’s desktop software platform’s feature include ability automate data preparation modeling data analysis enhanced visualization intuitive interface doesn’t require coding knowledge integrate onpremise cloud hybridcloud environment dashboard offer user way explore prepare model data simple drag drop feature without needing coding ability Data analysis painstaking process user need gather clean sort sift data working data scattered across several source location IBM Watson Studio Desktop endtoend solution help business get started data analysis process faster giving data scientist tool need improve workflow product desktop version IBM Watson Studio collaborative cloud data analysis software Designing IBM Watson Studio Desktop design team behind IBM Watson Studio Desktop conducted research target user primarily 
data scientist understand need designer conducted interview sponsor user corporation well onsite user testing team found datascientists primarily worked isolation looking dynamic collaborative workflow tool one place team aimed design tool interface data scientist provided ecosystem data analysis tool wanted create space user collaborate access needed tool information create cohesive workflow peer User Experience Journey IBM Watson Studio Desktop user Another challenge UX team design implement capability originally designed cloud version software desktop version IBM Watson Desktop Studio created user wanted work offline well interface narrowed tailored machine learning capability team wanted design desktop tool translated well extension cloud tool user experience simplified focused still familiar user original cloud version team designed interface used similar design principle well carried key feature cloud version user wanted see new environment “IBM Watson Studio Desktop IBM Watson AutoAI bridge gap skill knowledge make data analysis machine learning accessible business modern age designed product empathy usercentered approach user could confidently integrate AI business workflows” Alex Swain Design Principal IBM Cloud Data AI Designing Watson Products described designing software product AI machine learning capability challenging task requires indepth understanding field challenge AI power impact business large scale understanding take advantage capability essential business succeed excel data strategy recognized design work behind product true testament much user experience play role shaping AI technology impact life Winning Teams IBM AutoAI Design Principal Alex Swain Design Team Dillon Eversman Voranouth Supadulya IBM Watson Studio DesktopTags Machine Learning UX Data Science Design AI
3,975
Where to begin with color in 3D?
Where to begin with color in 3D? Getting started with color in Cinema 4D Lite. Color, shape, and fun in C4D (created by Sarah Healy) I can vividly remember opening 3D Studio Max for the first time. It felt like I had suddenly been handed the controls of the Starship Enterprise, with zero experience of actually navigating through space. I can also remember hastily closing the software package, feeling defeated and retreating back to the comfortable flatness of 2D space. As a designer used to creating in flat 2D space, suddenly having multiple viewports, three dimensions and a camera to contend with is a wee bit overwhelming. In the world of 3D, everything gets a little more complicated — even color. Well, at least I did not find it very intuitive when learning 3D. Here is a starting point with color in Cinema 4D.
https://uxdesign.cc/where-to-begin-with-color-in-3d-3e81f92beb77
['Sarah Healy']
2019-08-21 23:47:24.331000+00:00
['Cinema 4d', 'Colors', 'Education', 'Design', 'Creativity']
Title begin color 3DContent begin color 3D Getting started color Cinema4D Lite Color shape fun C4D created Sarah Healy vividly remember opening 3D Studio Max first time felt like suddenly thrust control starship enterprise zero experience actually navigating space also remember hastily closing software package feeling defeated retreating back comfortable flatness 2D space designer used create one dimension suddenly multiple viewports three dimension camera contend wee bit overwhelming world 3D everything get little complicated — even color Well least find intuitive learning3D starting point color Cinema 4DTags Cinema 4d Colors Education Design Creativity
3,976
I know what a Data Scientist is… but what the heck is a Machine Learning Engineer?!
I know what a Data Scientist is… but what the heck is a Machine Learning Engineer?! Rodney Joyce · Sep 6 · 6 min read “IT” This reply has got me by for the past 20 years when asked by various relatives and friends exactly what it is that I do. It does mean I have to “fix” working computers, install virus scanners, get printers working (throw it away), and fix iTunes for my mum on a regular basis, and generally I am considered an authority on anything that is slightly more technical than average. However, in the last decade (and especially the last 3 years) the technical landscape has shifted exponentially with machine learning now accessible to anyone with a browser, so this answer no longer suffices at dinner parties. Out of interest, the drivers for this are things like access to more data (IoT, faster networks) and the abstraction of the AI tools used by Google et al into elastic cloud services and various others — that is a whole post in itself. Everyone seems to have heard of Data Scientists. It was even labelled “The Sexiest Job of the 21st Century” (google it — I don’t know who said it first). But when I say I am a Machine Learning Engineer I often draw blank looks. (I say “AI” Engineer to the non-technical to get a nod of recognition). So what exactly is the difference between a Data Scientist and a Data Engineer, and what is a Machine Learning Engineer? This too has been discussed to death; however, I read an article that summed it up perfectly. I am also currently working on a project (that shall remain nameless) that highlights the points made in this article perfectly. Do yourself a favour and read this first, then come back here for a real example: https://www.oreilly.com/ideas/data-engineers-vs-data-scientists Some background: There’s a limit in statistics and maths that I hit fairly soon, where I am happy to hand over to someone who specializes in it. I wish I had paid more attention at school during maths class and stopped having so much fun… try telling that to a teenager though! I understand basic stats, I can train a Linear Regression model, I can tell Azure to run AutoML for me and I can hypertune a model using SparkML. I can build a pretty decent app end to end to identify hotdogs or not hotdogs on the edge. But I cannot tell you WHY these params worked better than those ones, WHY the Random Forest resulted in a higher accuracy, or what the best metrics are to use to evaluate the outcome of 100 training runs for an X model. Fortunately, the Data Scientist can, and he loves the complex maths! Finally… a job that is not boring! But… not everyone with a PhD knows how to train their models in parallel using distributed code (I will try not to mention Databricks yet again ;). Most Data Scientists use Pandas/numpy and don’t necessarily know (or care, to be fair) about the potential limitations when it comes to training. Nor do they necessarily care about ordering a beefy 128 Gig GPU machine to run their experiments overnight because it is taking 8 hours to train a model. Suggesting to use PySpark or Dask just gets an irritated look as it detracts from valuable experimenting time. When he is asked to deploy his model as an API driven by a Git commit, with automatic model drift monitoring, the request is met with a disgruntled snort… However… I do, for example, appreciate the beauty of distributed compute and the wonderfully scalable architecture of Spark.
I love a good API and data pipeline as much as the next Data Engineer and can spend hours refactoring code until it passes all the definitions of “Clean Code” (Consultant Tip: If you want to meet your budget and project plan then find the healthy balance between technical perfection and the real value that the code will generate. We are doing all of this for a reason, and it’s not to get the code onto 1 line). I love the concept of CI/CD and I adore simplicity, practicality and optimizing things like cloud services, code, processes and everyday life. Needless to say this does not always go down well with other humans; however, it’s a common trait in Data Engineers and Programmers. So… now that we understand the personalities of the Data Scientist and Data Engineer, let’s put them together, focus on their strengths and make an amazing team that can meet the business requirement as quickly as possible whilst consuming as little time and $$ as possible. Before I go further, obviously there are exceptions to the rule and lucky people (usually without kids) who are in fact able to bridge both roles… we’ll focus on the average here. Even if you can bridge both roles… should you? A healthy team is a diverse team. I’ve seen projects where a Data Engineer is given a complex Machine Learning project and a couple of days to figure it out. Whilst it is possible, I believe this is not a good idea. Data Science and Machine Learning engineering ARE NOT the same thing. I have also seen projects where a Data Scientist is put on a project which involves Big Data (whatever that is) with no data engineering support, and in both cases everyone wonders why they are taking so long to get any results. The project I am on right now is a fantastic example of the article above. We have a Data Scientist (insert any number of PhDs here) and myself as the Machine Learning Engineer/Data Engineer (insert any number of Azure cloud certifications here). As a team we are approaching the problem according to our strengths and, of course, based on what we prefer to do, which is important if you want to retain your staff (did I mention that this “AI” stuff is in hot demand and everyone wants to do it but doesn’t know how?). For example, early on in the project, training a single model (we have over 40) was taking over 10 hours. One option would be to scale up and get a bigger VM, which is the hammer-and-nail approach. These beasts are not cheap, and a 10-hour training session could fail halfway through, forcing the whole process to be repeated. This was the selected approach to get us past that blocker and is working. However, in parallel I am looking at the Data Scientist’s code, rewriting it from Pandas into PySpark (Note: there’s 101 other ways to do this — I am just a Databricks fanboy) and building the system to log experiment results and deploy the models as containerized API microservices, with an Azure function to orchestrate the results asynchronously. Add a near real-time Power BI report, alerting to watch for model drift, and an Azure function to trigger model retraining, and it’s a work of art! Damn I love my job. Together we make an awesome team, as the whole is greater than the sum of the parts. Our roles overlap a lot and I am improving my understanding of stats and ranking better in Kaggle contests. He is learning new ways to improve his workflow and understanding more about data engineering.
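As a rough illustration of the Pandas-to-PySpark rewrite mentioned above, here is a minimal sketch with hypothetical column and file names (it is not the actual project code): the same per-customer aggregation expressed in both libraries, where the PySpark version runs lazily and scales out across a cluster instead of a single beefy VM.

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

# Pandas version: fine on a laptop, struggles once the data no longer fits in memory.
def prep_features_pandas(df: pd.DataFrame) -> pd.DataFrame:
    return (df.groupby("customer_id")
              .agg(total_spend=("amount", "sum"),
                   n_orders=("order_id", "nunique"))
              .reset_index())

# PySpark version: the same logic, but evaluated lazily and distributed across executors.
def prep_features_spark(path: str):
    spark = SparkSession.builder.appName("feature-prep").getOrCreate()
    df = spark.read.parquet(path)            # e.g. the raw transaction data
    return (df.groupBy("customer_id")
              .agg(F.sum("amount").alias("total_spend"),
                   F.countDistinct("order_id").alias("n_orders")))
```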
To summarize: The best result is to understand the roles and challenges unique to a Machine Learning project and to plan appropriately from a time and effort POV: anything is possible with enough time. They share many aspects with standard application development projects and the approach is not too dissimilar. You wouldn’t ask an API engineer to do UX, would you? (Don’t get me started — this happens a lot!). Just put the Data Scientists and the Data Engineers together in a room and let the magic happen… and if you need help putting an ML Ops process in place then get in touch at https://data-driven.com/
https://medium.com/data-driven-ai/i-know-what-a-data-scientist-is-but-what-the-heck-is-a-machine-learning-engineer-7996415ce3c
['Rodney Joyce']
2020-09-06 11:55:25.329000+00:00
['Machine Learning', 'Data Engineering', 'Machine Learning Engineer', 'AI', 'Data Science']
Title know Data Scientist is… heck Machine Learning EngineerContent know Data Scientist is… heck Machine Learning Engineer Rodney Joyce Follow Sep 6 · 6 min read “IT” reply got past 20 year asked various relative friend exactly mean “fix” working computer install virus scanner get printer working throw away fix iTunes mum regular basis generally considered authority anything slightly technical average However last decade especially last 3 year technical landscape shifted exponentially machine learning accessible anyone browser answer longer suffices dinner party interest driver thing like access data IoT faster network abstraction AI tool used Google et al elastic cloud service various others — whole post Everyone seems heard Data Scientists even labelled “The Sexiest Job 21st Century” google — don’t know said first say Machine Learning Engineer often draw blank look say “AI” Engineer nontechnical get nod recognition exactly difference Data Scientist Data Engineer Machine Learning Engineer discussed death however read article summed perfectly also currently working project shall remain nameless highlight point made article perfectly favour read first come back real example httpswwworeillycomideasdataengineersvsdatascientists background There’s limit statistic math hit fairly soon happy hand someone specializes wish paid attention school math class stopped much fun… trying telling teenager though understand basic stats train Linear Regression model tell Azure run AutoML hypertune model using SparkML build pretty decent app end end identify hotdog hotdog edge cannot tell params worked better one Random Forest resulted higher accuracy best metric use evaluate outcome 100 training run X model Fortunately Data Scientist love complex math Finally… job boring But… everyone PHD know train model parallel using distributed code try mention Databricks yet Data Scientists use Pandasnumpy don’t necessarily know care fair potential limitation come training necessarily care ordering beefy 128 Gig GPU machine run experiment overnight taking 8 hour train model Suggesting use PySpark Dask get irritated look detracts valuable experimenting time requested deploy model API driven Git commit automatic model drift monitoring met disgruntled snort… However… example appreciate beauty distributed compute wonderfully scaleable architecture Spark love good API data pipeline much next Data Engineer spend hour refactoring code pass definition “Clean Code” Consultant Tip want meet budget project plan find healthy balance technical perfection real value code generate reason it’s get code onto 1 line love concept CICD adore simplicity practicality optimizing thing like cloud service code process every day life Needless say always go well human however it’s common trait Data Engineers Programmers So… understand personality Data Scientist Data Engineer let’s put together focus strength make amazing team meet business requirement quickly possible whilst consuming little time possible go obviously exception rule lucky people usually without kid fact able bridge roles… we’ll focus average Even bridge roles… healthy team diverse team I’ve seen project Data Engineer given complex Machine Learning project couple day figure Whilst possible believe good idea Data Science Machine Learning engineering thing also seen project Data Scientist put project involves Big Data whatever data engineering support case everyone wonder taking long get result project right fantastic example article Data Scientist insert number PHDs Machine 
Learning EngineerData Engineer insert number Azure cloud certification team approaching problem according strength course based prefer important want retain staff mention “AI” stuff hot demand everyone want doesn’t know example early project training single model 40 taking 10 hour One option would scale get bigger VM hammer nail approach beast cheap halfway 10 hour training session could fail process need repeated selected approach get u past blocker working However parallel looking Data Scientist’s code rewriting Pandas PySpark Note there’s 101 way — Databricks fanboy building system log experiment result deploy model containerized APIs microservices Azure function orchestrate result asynchronously Put near realtime PowerBI report alerting watch model drift Azure function trigger model retraining it’s work art Damn love job Together make awesome team whole greater sum part role overlap lot improving understanding stats ranking better Kaggle contest learning new way improve workflow understanding data engineering summarize best result understand role challenge unique Machine Learning project plan appropriately time effort POV anything possible enough time share many aspect standard application development project approach dissimilar wouldn’t ask API engineer UX would Don’t get started — happens lot put Data Scientists Data Engineers together room let magic happen… need help putting ML Ops process place get touch httpsdatadrivencomTags Machine Learning Data Engineering Machine Learning Engineer AI Data Science
3,977
The Vantage Point of Stars
From a safe vantage point, far from the walrus-death cliffs, the stars hang smiling in the sky. The cosmos between themselves and earth is enough to swallow up their years of smiling light. If I could rise to that expanse could I forget the plunging to rock? The blood swept into the sea — the last heaving breaths of walruses? How lucky the stars for that vast space. They do not have to leap from the sky or find themselves shoved into spaces too small, too restrictive, to do what it is that stars do; quietly they still and twinkle and hold their spot for eons, asking nothing. Walruses ask nothing but a bit of ice or shore. Koalas, ablaze and climbing trees, shrieking. They, too, only need a bit of peace and branches of green. Sure, brush fires are normal. Ice melting; normal. But not like this. The Earth spinning and changing and moving through time as time has asked it to do — forgive us our interference, our human intrusion on the norm, for we really think it is all about us, our needs, our wants from this earth, our take, our taking. When the seas rise up to meet our mistakes, what cliffs, I ask, will we leap from? Will there be trees for us to climb, as the flaming koalas? Will there be a nice lady who will rip off her shirt and snuff out the flames of our sins as they crawl up our legs or will we simply keep running and hope the wind will put them out? The stars won’t shine then. They’ll wink their “I told you so’s,” grateful to be stars and not koalas, stars and not polar bears adrift on melting ice-boats, their furs narrowing at the sides, carcasses that breathe, until, they don’t. A star paints its path across the sky, one last, vast motion of hope, recipient of wish, of prayer, far-removed hope-flung.
https://medium.com/fiddleheads-floss/the-vantage-point-of-stars-1b9ae5dee13b
['Christina M. Ward']
2019-11-25 03:47:29.868000+00:00
['Poetry', 'Environment', 'Climate Change', 'Society', 'Short Story']
Title Vantage Point StarsContent safe vantage point far walrusdeath cliff star hang smiling sky cosmos earth enough swallow year smiling light could rise expanse could forget plunging rock blood swept sea — last heaving breath walrus lucky star vast space leap sky find shoved space small restrictive star quietly still twinkle hold spot eon asking nothing Walruses ask nothing bit ice shore Koalas ablaze climbing tree shrieking need bit peace branch green Sure brush fire normal Ice melting normal like Earth spinning changing moving time time asked — forgive u interference human intrusion norm really think u need want earth take taking sea rise meet mistake cliff ask leap tree u climb flaming koala nice lady rip shirt snuff flame sin crawl leg simply keep running hope wind put star won’t shine They’ll wink “I told so’s” grateful star koala star polar bear adrift melting iceboat fur narrowing side carcass breathe don’t star paint path across sky one last vast motion hope recipient wish prayer farremoved hopeflungTags Poetry Environment Climate Change Society Short Story
3,978
Why is it So Hard to Integrate Machine Learning into Real Business Applications?
You’ve played around with machine learning, learned about the mysteries of neural networks, almost won a Kaggle competition, and now you feel ready to bring all this to real-world impact. It’s time to build some real AI-based applications. But time and again you face setbacks, and you’re not alone. It takes time and effort to move from a decent machine learning model to the next level of incorporating it into a live business application. Why? Having a trained machine learning model is just the starting point. There are many other considerations and components that need to be built, tested and deployed for a functioning application. In the following post I will present a real AI-based application (based on a real customer use case), explain the challenges and suggest ways to simplify development and deployment. Use Case: Online Product Recommendations Targeted product recommendation is one of the most common methods to increase revenue: computers make suggestions based on users’ historical preferences, product-to-product correlations and other factors like location (e.g. proximity to a store), weather and more. Building such solutions requires analyzing historical transactions and creating a model. Then, when applying it to production, you’ll want to incorporate fresh data such as the last transactions the customer made and re-train the model for accurate results. Machine learning models are rarely trained over raw data. Data preparation is required to form feature vectors which aggregate and combine various data sources into more meaningful datasets and identify a clear pattern. Once the data is prepared, we use one or more machine learning algorithms, conduct training and create models or new datasets which incorporate the learnings. For recommendation engines, it is best to incorporate both deep learning (e.g. TensorFlow) to identify which products are bought “together”, and machine learning (e.g. XGBoost) to identify the relations between users and products based on their historical behavior. The results from both models are then combined into a single model serving application. Example pipeline: Real-time product recommendations The serving application accepts a user’s ID, brings additional context from feature and user tables, feeds it into a model and returns a set of product recommendations. Note that serving must be done in real time while the user is still browsing in the application, so it’s always better to cache data and models. On the other hand, recent product purchases or locations may have a significant impact on future customer product choices, so you need to constantly monitor activities and update feature tables and models. An online business requires automation and a CI/CD process applied to machine learning operations, enabling continuous applications. It is important to support auto-scaling and meet demand fluctuations, sustain failures and provide data security, not to mention take regulatory constraints into consideration. The Machine Learning Operational Flow In a typical development flow, developing code or models is just the first step. The biggest effort goes into making each element, including data collection, preparation, training, and serving, production-ready, enabling them to run repeatedly with minimal user intervention. What it takes to turn code or algorithms into a real application The data science and engineering team is required to package the code, address scalability, tune for performance, instrument and automate. These tasks take months today.
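To make the serving step above a little more concrete, here is a purely illustrative sketch; the caches, column names and the dummy model are all assumptions, not the actual pipeline from the diagram. The idea is simply: look up cached context for the user, combine it with fresh activity, score, and return a short list of products fast enough that the user is still browsing.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical caches, refreshed by the (separate) data-preparation and training pipelines.
USER_FEATURES: Dict[str, Dict[str, float]] = {
    "user-42": {"avg_basket_value": 37.5, "days_since_last_purchase": 2.0},
}
RECENT_PURCHASES: Dict[str, List[str]] = {"user-42": ["espresso-beans"]}

@dataclass
class DummyModel:
    """Stand-in for the combined deep-learning + XGBoost model described above."""
    catalog: List[str]

    def predict(self, features: Dict[str, float], recent: List[str]) -> List[str]:
        # A real model would score every catalog item; here we just skip recent buys.
        return [p for p in self.catalog if p not in recent]

def recommend(user_id: str, model: DummyModel, top_k: int = 3) -> List[str]:
    features = USER_FEATURES.get(user_id, {})        # feature-table lookup
    recent = RECENT_PURCHASES.get(user_id, [])       # fresh context: last transactions
    return model.predict(features, recent)[:top_k]   # must return in real time

if __name__ == "__main__":
    model = DummyModel(catalog=["espresso-beans", "grinder", "milk-frother", "mug"])
    print(recommend("user-42", model))               # ['grinder', 'milk-frother', 'mug']
```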
Serverless helps reduce effort significantly by automating many of the above steps, as explained in my previous post Serverless: Can It Simplify Data Science Projects? Other important tools to keep in mind are Kubernetes and KubeFlow, which bring CI/CD and openness to the machine learning world. Read more about them in my post Kubernetes: The Open and Scalable Approach to ML Pipelines. Machine Learning Code Portability and Reproducibility A key challenge is that the same code may run in different environments, including notebooks for experimentation, IDEs (e.g. PyCharm) and containers for running on a cluster or as part of an automated ML workflow engine. In each environment you might have different configurations and use different parameters, inputs or output datasets. A lot of work is spent on moving and changing code, sometimes by different people. Once you run your work, you want to be able to quickly visualize results, compare them with past results and understand which data was used to produce each model. There are vendor-specific solutions for these needs, but you can’t use them if you want to achieve portability across environments. Iguazio works with leading companies to form a cross-platform standard and open implementation for machine learning environments, metadata and artifacts. This allows greater simplicity, automation and portability. Check out this video to learn how you can move from running/testing code in a local IDE to a production-grade automated machine learning pipeline in less than a minute (based on KubeFlow).
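One small, low-tech part of the answer, shown here only as an illustrative sketch and not as the Iguazio or KubeFlow mechanism, is to externalise run parameters so the identical script can run unchanged in a notebook, an IDE, a container or a pipeline step, with only its environment changing.

```python
import os
from dataclasses import dataclass

@dataclass
class RunConfig:
    # All names and defaults here are hypothetical; a pipeline step would override them
    # through environment variables, while a local IDE run just uses the defaults.
    input_path: str = os.getenv("INPUT_PATH", "data/transactions.parquet")
    model_dir: str = os.getenv("MODEL_DIR", "artifacts/")
    epochs: int = int(os.getenv("EPOCHS", "5"))

if __name__ == "__main__":
    cfg = RunConfig()
    print(f"training on {cfg.input_path} for {cfg.epochs} epochs -> {cfg.model_dir}")
```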
https://towardsdatascience.com/why-is-it-so-hard-to-integrate-machine-learning-into-real-business-applications-69603402116a
['Yaron Haviv']
2019-07-08 20:45:27.105000+00:00
['Machine Learning', 'Data Science', 'AI', 'Kubernetes', 'Serverless']
Title Hard Integrate Machine Learning Real Business ApplicationsContent You’ve played around machine learning learned mystery neural network almost Kaggle competition feel ready bring real world impact It’s time build real AIbased application time face setback you’re alone take time effort move decent machine learning model next level incorporating live business application trained machine learning model starting point many consideration component need built tested deployed functioning application following post present real AIbased application based real customer use case explain challenge suggest way simplify development deployment Use Case Online Product Recommendations Targeted product recommendation one common method increase revenue computer make suggestion based users’ historical preference product product correlation factor like location eg proximity store weather Building solution requires analyzing historical transaction creating model applying production you’ll want incorporate fresh data last transaction customer made retrain model accurate result Machine learning model rarely trained raw data Data preparation required form feature vector aggregate combine various data source meaningful datasets identify clear pattern data prepared use one machine learning algorithm conduct training create model new datasets incorporate learning recommendation engine best incorporate deep learning eg TensorFlow identify product bought “together” machine learning eg XGboost identify relation user product based historical behavior result model combined single model serving application Example pipeline Realtime product recommendation serving application accepts user’s ID brings additional context feature user table feed model return set product recommendation Note serving must done realtime user still browsing application always better cache data model hand recent product purchase location may significant impact future customer product choice need constantly monitor activity update feature table model online business requires automation CICD process applied machine learning operation enabling continuous application important support autoscaling meet demand fluctuation sustain failure provide data security mention take regulatory constraint consideration Machine Learning Operational Flow typical development flow developing code model first step biggest effort go making element including data collection preparation training serving productionready enabling run repeatedly minimal user intervention take turn code algorithm real application data science engineering team required package code address scalability tune performance instrument automate task take month today Serverless help reduce effort significantly automating many step explained previous post Serverless Simplify Data Science Projects important tool keep mind Kubernetes KubeFlow bring CICD openness machine learning world Read post Kubernetes Open Scalable Approach ML Pipelines Machine Learning Code Portability Reproducibility key challenge code may run different environment including notebook experimentation IDEs eg PyCharm container running cluster part automated ML workflow engine environment might different configuration use different parameter input output datasets lot work spent moving changing code sometimes different people run work want able quickly visualize result compare past result understand data used produce model vendor specific solution need can’t use want achieve portability across environment Iguazio work leading company 
form cross platform standard open implementation machine learning environment metadata artifact allows greater simplicity automation portability Check video learn move runningtesting code local IDE production grade automated machine learning pipeline le minute based KubeFlowTags Machine Learning Data Science AI Kubernetes Serverless
3,979
10 Extraordinary GitHub Repos for All Developers
10 Extraordinary GitHub Repos for All Developers Interview resources, build your own X, a list of great public APIs, and more Photo by Vishnu R Nair on Unsplash GitHub is the number one platform for sharing all kinds of technologies, frameworks, libraries, and collections of all sorts. But with the sheer mass also comes the problem of finding the most useful repositories. So I have decided to curate this list of ten fantastic repositories that provide great value for all software engineers. All of them have a lot of GitHub stars, underlining their relevance, popularity, and usefulness. Some of them will help you learn new things, some will help you build cool things, and all of them will help you become a better software engineer.
https://medium.com/better-programming/10-extraordinary-github-repos-for-all-developers-939cdeb28ad0
['Simon Holdorf']
2020-03-31 16:07:48.275000+00:00
['Creativity', 'JavaScript', 'Technology', 'Productivity', 'Programming']
Title 10 Extraordinary GitHub Repos DevelopersContent 10 Extraordinary GitHub Repos Developers Interview resource build X list great public APIs Photo Vishnu R Nair Unsplash GitHub number one platform sharing kind technology framework library collection sort sheer mass also come problem find useful repository decided curate list ten fantastic repository provide great value software engineer lot GitHub star underlining relevance popularity usefulness help learn new thing help build cool thing help become better software engineersTags Creativity JavaScript Technology Productivity Programming
3,980
Multi-Object tracking is hard, and maintaining privacy while doing it is even harder!
Tracking in Computer Vision is the task of estimating an object’s trajectory throughout an image sequence. To track an individual object, we need to identify the object from one image to another and recognize it among distractors. There are a number of techniques we can use to remove distractors, such as background subtraction, but we’re primarily interested here in the tracking technique known as tracking by detection. In this paradigm, we first try to detect the object in the image, and then we try to associate the objects we detect in subsequent frames. Distinguishing the target object from distractor objects is then part of an association problem — and this can get complicated! You can think of it like “connecting the dots” — which is exponentially more challenging when there are many dots representing many different objects in the same scene. For example, if we want to track a specific car in a parking lot, it’s not enough just to have a really good car detector; we need to be able to tell the car of interest apart from all the other cars in the image. To do so, we might compute some appearance features that allow us to identify the same car from image to image. Alternatively, we can try to track all the other cars, too — turning the problem into a Multi-Object Tracking task. This approach enables more accurate tracking by detection with weaker appearance models, and it allows us to track every object of a category without choosing a single target a priori. In these figures, we’re trying to track two red dots, simultaneously detected over 4 consecutive frames (t=1,2,3,4). But with only the position and time of detection as information, there are two different sets of trajectories that are acceptable solutions. The two dots may cross paths while maintaining a straight motion, as in the left image, or they may avoid each other by turning in opposite directions, as in the image on the right. If we were only interested in tracking one of the two dots, the second one would act as a distractor, potentially causing a tracking error. What are the current approaches? Multi-object tracking is still a largely unsolved and active area of research, and there’s an extensive literature covering different approaches to it. Since 2014, there has even been a standard benchmark in the field, called the Multiple Object Tracking (MOT) Challenge, which maintains datasets that researchers and individuals can use to benchmark their algorithms. We’ll discuss a few common approaches here and present them in a simplified way, but this is far from an exhaustive list. For more, we suggest the following survey from 2017 by Laura Leal-Taixé et al. Kalman Filtering and Hungarian algorithm One of the simplest approaches is to try matching detections between adjacent frames, which can be formulated as an assignment problem. In its simplest form, for each object detected at time t, a matching distance is computed with each object detected at time t+1. The matching distance can be a simple intersection-over-union between bounding boxes, or it could include an appearance model to be more robust. An optimization algorithm called the Hungarian algorithm is then used to find the assignment solution that minimizes the sum of all the matching distances. In addition, since most of the objects we are trying to track are moving, rather than comparing the new detection’s position to the track’s most recent known location, it works better to use the track’s position history to predict where the object is going.
In order to integrate the different uncertainties from this kinematic model and the noise from the detector, a filtering framework is often used, such as a Kalman Filter or Particle Filter. A more complex but straightforward extension to this approach is to search for an optimal solution over a higher number of frames. One possible way is to use a hierarchical model. For example, we can compute small tracklets between adjacent frames over short, non-overlapping segments, and then try to match tracklets between consecutive segments. The Hungarian algorithm can be used again if we can come up with a good distance-matching function between tracklets. Multi Hypothesis Another possible approach is to maintain, for each original detection as a starting point, a graph of possible trajectories. Detections are represented by nodes in the tree and each path on that tree is a track hypothesis. Two hypotheses that share a detection are in conflict, and the problem can then be reformulated as finding an independent set that maximizes a confidence score. Let’s imagine the simple case above, where two objects are being detected during three consecutive frames. Each node corresponds to a detection, and the nodes are vertically aligned with the frame they have been detected in. An edge between two nodes corresponds to a possible association, and the number next to the edge measures the matching distance between detections (a lower value means the two detections are more similar). If the dissimilarity between two detections is above a threshold, it is common to consider the association impossible altogether. This is why here, there is no edge between nodes E and C. Node D, however, could be associated either to B or E in the next frame, and the decision will be made using the matching distance. Therefore, each path on this graph corresponds to a track hypothesis, and here it should be easy to see that the optimal solution is obtained with two tracks: A–>B–>C (ABC) and D–>E–>F (DEF). There is, however, another acceptable solution: AEF and DBC. However, the track hypothesis DBF prevents any other complete trajectory starting from A, as track hypotheses, in order to be compatible, must not share any node, and from node E we can only go to F. The figure below is a new graph representing each track hypothesis with a node. There is also an edge between two nodes if the track hypotheses are in conflict, that is, if they share one or more detections. For example, there is an edge between nodes ABC and DBF, as they share the detection B. But hypotheses ABC and DEF are not linked with an edge, and so they are compatible. The idea is to list all the independent sets in this graph, which would give us all the possible solutions to our association problem, and there are efficient algorithms in graph theory that allow us to do just that. Here the independent sets are: {ABC, DEF} and {AEF, DBC}. We just need to choose now between these two solutions. We can sum all the matching distances in a track hypothesis to get the track hypothesis cost, and sum all the track hypothesis costs in a set to get the set’s cost.
{ABC, DEF}: ABC cost = 0.1 + 0.1 = 0.2; DEF cost = 0.1 + 0.1 = 0.2; set cost = 0.2 + 0.2 = 0.4
{AEF, DBC}: AEF cost = 5 + 0.1 = 5.1; DBC cost = 5 + 0.1 = 5.1; set cost = 5.1 + 5.1 = 10.2
{ABC, DEF}, with a cost of 0.4, is then retained as the optimal solution. If you want to know more about Multi-Hypothesis Tracking, a more detailed description of an implementation by Chanho et al. can be read here.
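For readers who want to check the arithmetic, here is a tiny sketch in plain Python that scores the two candidate hypothesis sets using only the edge costs stated in the toy example (it is not a general multi-hypothesis tracker):

```python
# Edge costs from the toy example (only the edges whose costs are given in the text).
edge_cost = {("A", "B"): 0.1, ("B", "C"): 0.1,
             ("D", "E"): 0.1, ("E", "F"): 0.1,
             ("A", "E"): 5.0, ("D", "B"): 5.0}

def track_cost(track: str) -> float:
    """Sum the matching distances along one track hypothesis, e.g. 'ABC'."""
    return sum(edge_cost[(a, b)] for a, b in zip(track, track[1:]))

candidate_sets = [("ABC", "DEF"), ("AEF", "DBC")]   # the two independent sets found above
for s in candidate_sets:
    print(s, round(sum(track_cost(t) for t in s), 1))        # -> 0.4 and 10.2
best = min(candidate_sets, key=lambda s: sum(track_cost(t) for t in s))
print("optimal:", best)                                      # -> ('ABC', 'DEF')
```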
Network flow formulation The data association problem can also be formulated as a network flow. A unit of flow from the source to the sink represents a track, and the global optimal solution is the flow configuration that minimizes the overall cost. Intuitively, finding the best tracking solution as an association between objects can be seen as solving the K disjoint paths problem on a graph, where nodes are detections and edge weights are the affinity between two detections. This is another one of the well-studied optimization problems in graph theory. Going back to our previous association problem, we can add imaginary starting and destination points, respectively S and T. We are now looking for the two shortest paths from S to T that do not share any nodes. Again the solution will be SABCT and SDEFT. Another way to look at it is to imagine the node S as a source that sends a flow through the network. Each edge has a capacity (here it is 1 because we want non-overlapping trajectories) and a cost, and solving the association problem becomes equivalent to minimizing the overall cost for a given amount of flow. For instance, here we are trying to send 2 units of flow from the source (S) to the sink (T). One unit will go through ABC, the other through DEF, for a total cost of 0.4. But we could also have sent the same amount of flow (2) by sending one unit through DBC and another one through AEF, except the cost would be 10.2, and so {SABCT, SDEFT} is retained as the optimal solution. Again, for a more detailed description of an implementation of network flow for tracking, you can find an example here by Zhang et al. Why is this so hard? Researchers have made some incredible advances in object detection and recognition in the past few years, thanks in large part to the emergence of Deep Learning. Now it’s possible to detect and classify hundreds of different objects in a single image with very high accuracy. But multi-object tracking is still extremely challenging, due to a number of problems: Occlusions: In crowded or other complex scene settings, it’s very common that an object of interest will have its trajectory partially occluded, either by an element of the background (fixed environment/scene), like a pole or a tree, or by another object. A multi-object tracking algorithm needs to account for the possibility that an object may disappear and later reappear in an image sequence, to be able to re-associate that object to its prior trajectory.
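To ground the simplest approach from the Hungarian-algorithm section in code, here is a minimal sketch of one frame-to-frame association step: build a cost matrix from bounding-box overlap and solve the assignment with SciPy. The boxes and the gating threshold are invented for illustration, and a real tracker would add the motion prediction and filtering discussed above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

tracks_t   = np.array([[10, 10, 50, 50], [100, 100, 140, 140]], dtype=float)  # boxes at time t
detections = np.array([[12, 11, 52, 51], [98, 103, 139, 142]], dtype=float)   # boxes at time t+1

cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks_t])    # low cost = good match
rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm: minimises total cost
for r, c in zip(rows, cols):
    if cost[r, c] < 0.7:                          # gate out implausible associations
        print(f"track {r} -> detection {c} (cost {cost[r, c]:.2f})")
```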
https://medium.com/numina/multi-object-tracking-is-hard-and-maintaining-privacy-while-doing-it-is-even-harder-c288ccbc9c40
['Raphael Viguier']
2019-11-08 17:24:11.177000+00:00
['Privacy By Design', 'Engineering', 'Tracking', 'Computer Vision', 'Algorithms']
Title MultiObject tracking hard maintaining privacy even harderContent Tracking Computer Vision task estimating object’s trajectory throughout image sequence track individual object need identify object one image another recognize among distractors number technique use remove distractors background subtraction we’re primarily interested tracking technique known tracking detection paradigm first try detect object image try associate object detect subsequent frame Distinguishing target object distractor object part association problem — get complicated think like “connecting dots” — exponentially challenging many dot representing many different object scene example want track specific car parking lot it’s enough really good car detector need able tell apart car interest car image might compute appearance feature allow u identify car image image Alternatively try track car — turning problem MultiObject Tracking task approach enables accurate tracking detection weaker appearance model allows u track every object category without choosing single target priori figure we’re trying track two red dot simultaneously detected 4 consecutive frame t1234 position time detection information two different set trajectory acceptable solution two dot may cross path maintaining straight motion like left image may avoid turning opposite direction like image right interested tracking one two dot second one would act distractor potentially causing tracking error current approach still largely unsolved active area research there’s extensive literature covering different approach multiobject tracking Since 2014 even standard benchmark field called Multiple Object Tracking MOT Challenge maintains datasets researcher individual use benchmark algorithm We’ll discus common approach present simplified way far exhaustive list suggest following survey 2017 LealTaixé Laura et al Kalman Filtering Hungarian algorithm One simplest approach try matching detection adjacent frame formulated assignment problem simplest form object detected time matching distance computed object detected time t1 matching distance simple intersectionoverunion bounding box could include appearance model robust optimization algorithm called Hungarian algorithm used find assignment solution minimizes sum matching distance addition since object trying track moving rather comparing new detection’s position track recent known location work better use track position history predict object going order integrate different uncertainty kinematic model noise detector filtering framework often used Kalman Filter Particle Filter complex straightforward extension approach search optimal solution higher number frame One possible way use hierarchical model example compute small tracklets adjacent frame short nonoverlapping segment try match tracklets consecutive segment Hungarian algorithm used come good distancematching function tracklets Multi Hypothesis Another possible approach maintain original detection starting point graph possible trajectory Detections represented node tree path tree track hypothesis Two hypothesis share detection conflict problem reformulated finding independent set maximizes confidence score Let’s imagine simple case two object detected three consecutive frame node corresponds detection node vertically aligned frame detected edge two node corresponds possible association number next edge measure matching distance detection lower value mean two detection similar common dissimilarity two detection threshold consider association completely 
impossible edge node E C Node however could associated either B E next frame decision made using matching distance Therefore path graph corresponds track hypothesis easy see optimal solution obtained two track A–B–C ABC D–E–F DEF however another acceptable solution AEF DBC However track hypothesis DBF prevents complete trajectory starting track hypothesis order compatible must share node node E go F figure new graph representing track hypothesis node also edge two node track hypothesis conflict share one detection example edge node ABC DBF share detection B hypothesis ABC DEF linked edge compatible idea list independent set graph would give u possible solution association problem efficient algorithm graph theory allow u independent set ABC DEF AEF DBC need choose two solution sum matching distance track hypothesis get track hypothesis cost sum track hypothesis cost set get set cost ABC DEF ABCCost010102 DEFCost010102 ABC DEF Cost020204 AEF DBC AEFCost50151 DBCCost50151 AEF DBCCost5151102 ABC DEF cost 04 retained optimal solution want know MultiHypothesis Tracking detailed description implementation Chanho et al read Network flow formulation data association problem also formulated network flow unit flow source sink represents track global optimal solution flow configuration minimizes overall cost Intuitively finding best tracking solution association object seen solving K disjoint path graph node detection edge weight affinity two detection another one wellstudied optimization problem graph theory Going back previous association problem add imaginary starting destination point respectively looking two shortest path share node solution SABCT SDEFT Another way look imagine node source sends flow network edge capacity 1 want non overlapping trajectory cost solving association problem becomes equivalent minimizing overall cost given amount flow instance trying send 2 unit flow sink tank One unit go ABC DEF total cost 04 could also sent amount flow 2 sending one unit DBC another one AEF except cost would 102 SABCT SDEFT retained optimal solution detailed description implementation Network flow tracking find example Zhang et al hard Researchers made incredible advance object detection recognition past year thanks large part emergence Deep Learning it’s possible detect classify hundred different object single image high accuracy multiobject tracking still extremely challenging due number problem Occlusions crowded complex scene setting it’s common object interest would trajectory partially occluded either element background fixed environmentscene like pole tree another object multiobject tracking algorithm need account possibility object may disappear later reappear image sequence able reassociate object prior trajectoryTags Privacy Design Engineering Tracking Computer Vision Algorithms
3,981
Everything you need to know about color
Color evokes emotion, sparks excitement, and grabs attention. Color can help draw your eye where you want it on anything from a poster or billboard to an email in your inbox. Color can even influence your mood. Did you know the color psychology behind a red and yellow combination makes you hungry? It’s no wonder well-known fast-food chains like McDonald’s and KFC use red and yellow colors in their logos. Color theory is a set of principles for creating harmonious color combinations. It’s a mixture of science and art. Understanding the fundamentals of color theory and where color comes from is important for any designer. Once you master it, you’ll know how to create the best color combinations for your graphic and web design projects. If you don’t believe color has an impact on your design, then take a look at this example. It’s the exact same illustration; the only difference is the colors. Which one is pleasing to look at and which makes your eyes want to explode?
https://uxdesign.cc/everything-you-need-to-know-about-color-d921c07c8b0b
['Monica Galvan']
2020-10-17 20:58:27.235000+00:00
['UI', 'Visual Design', 'Design', 'Creativity', 'UX']
Title Everything need know colorContent Color evokes emotion spark excitement grab attention Color help draw eye want anything poster billboard email inbox Color even influence mood know color psychology behind red yellow combination make hungry It’s wonder wellknown fastfood chain like McDonald’s KFC use red yellow color logo Color theory set principle creating harmonious color combination It’s mixture science art Understanding fundamental color theory color come important know designer master you’ll know create best color combination graphic web design project don’t believe color impact design take look example It’s exact illustration difference color one pleasing look make eye want explodeTags UI Visual Design Design Creativity UX
3,982
Replicating a Human Pilot’s Ability to Visually Detect Aircraft
QUT researchers have used a complex maths model to develop an algorithm that enables unmanned aerial vehicles (UAV) to replicate a human pilot’s ability to visually detect aircraft at a range of more than 2km. Professor Jason Ford, who was awarded the inaugural Australian Defence Industry Award of Academic of the Year in 2019, said developing the visual detection system had tackled the key barrier to fully achieving the global commercial market of unmanned aerial vehicles. “We’ve been working on this problem for 10 years and over that time 50 people or more have been involved in this project,” said Ford, a chief investigator with the QUT Centre for Robotics. “We are leading the world in solving the extremely challenging problem of replicating the role of a pilot’s eye. “Imagine you’re observing something from a cockpit and it’s hidden against the clouds. If you watch it over a period of time, you build up confidence something is there. “The algorithm does the same.” The advisory for human pilots is that they will need at least 11.4 seconds to commence an avoidance manoeuvre once they visually detect another plane or other aerial vehicle. In the past decade, the system has evolved through a range of testing including on aircraft and on UAVs. The QUT researchers developed the algorithm based on a mathematical model called the Hidden Markov Model (HMM). HMMs were developed in the 1960s and allow people to predict unknown, or hidden, variables from observed information. Professor Jason Ford had completed his PhD on HMMs, and has developed techniques to work with weak measurements by using a combination of measure theory, control theory, and information theory. Image: QUT. Ford said although most people outside of the maths community would not have heard of HMMs, they would have benefited from their many applications in economics, neurobiology and telecommunications, with examples ranging from DNA sequencing to the speech recognition systems used by smartphone digital assistants. The algorithm used in the UAV object detection system was developed by Ford, Dr Tim Molloy, Dr Jasmin Martin and others. “The algorithm boosts the weak signal while reducing the surround signal noise,” Ford said. Professor Jason Ford (front) led the development of an algorithm that enables unmanned aerial vehicles (UAV) to replicate a human pilot’s ability to visually detect aircraft at a range of more than 2km. Ford said one of the major challenges in developing the sense-and-avoid system for unmanned aerial aircraft was to make it small enough to be able to be carried on a UAV. The breakthrough is the latest step after a series of related research projects in the past decade, including the Smart Skies Project and Project ResQu in collaboration with Boeing Australia and Insitu Pacific. Testing commenced in 2010 with flights to collect data to start working on the project, and in early 2014 a breakthrough proof-of-concept flight proved a system in a UAV was able to detect another aircraft using vision while in flight. “Boeing and Insitu Pacific have valued the ongoing collaboration with QUT and Professor Ford’s team,” said Brendan Williams, Associate Technical Fellow, Airspace Integration for The Boeing Company.
“The algorithm has been evaluated and matured in regular flight tests, with strong positive results, and we are looking to transitioning its use as a baseline technology in regular Beyond Visual Line of Sight operations.” Since then, the research has focussed on improving the performance, size and cost of the technology to improve the commercial feasibility of the system. Ford said the ultimate aim of this research was to enable UAVs to be more easily used in general airspace for commercial applications.
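The article does not publish the QUT algorithm itself, but the general HMM filtering idea it builds on can be sketched in a few lines. The following is purely illustrative, with an invented two-state model, transition probabilities and per-frame likelihoods: it only shows how weak per-frame evidence can accumulate into growing confidence that a target is present, the "watch it over a period of time" behaviour Ford describes.

```python
import numpy as np

# Two hidden states: 0 = "no aircraft in this patch", 1 = "aircraft present".
transition = np.array([[0.999, 0.001],     # targets appear rarely...
                       [0.001, 0.999]])    # ...and tend to persist once present

def forward_step(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One HMM filtering update: predict with the transition model,
    weight by the per-frame measurement likelihood, then renormalise."""
    predicted = transition.T @ belief
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([0.999, 0.001])          # start almost certain there is no target
weak_evidence = np.array([0.50, 0.55])     # any single frame barely favours "present"
for _ in range(120):                       # roughly a few seconds of video frames
    belief = forward_step(belief, weak_evidence)
print(belief)                              # belief in "present" has grown substantially
```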
https://medium.com/thelabs/replicating-a-human-pilots-ability-to-visually-detect-aircraft-d9594913934a
['Qut Science']
2020-10-26 05:18:36.379000+00:00
['Machine Learning', 'Technology', 'Engineering', 'Business', 'AI']
Title Replicating Human Pilot’s Ability Visually Detect AircraftContent QUT researcher used complex math model develop algorithm enables unmanned aerial vehicle UAV replicate human pilot’s ability visually detect aircraft range 2km Professor Jason Ford awarded inaugural Australian Defence Industry Award Academic Year 2019 said developing visual detection system tackled key barrier fully achieving global commercial market unmanned aerial vehicle “We’ve working problem 10 year time 50 people involved project” said Ford chief investigator QUT Centre Robotics “We leading world solving extremely challenging problem replicating role pilot’s eye “Imagine you’re observing something cockpit it’s hidden cloud watch period time build confidence something “The algorithm same” advisory human pilot need least 114 second commence avoidance manoeuvre visually detect another plane aerial vehicle past decade system evolved range testing including aircraft UAVs QUT researcher developed algorithm based mathematical model called Hidden Markov Model HMM HMMs developed 1960s allow people predict unknown hidden variable observed information Professor Jason Ford completed PhD HMMs developed technique work weak measurement using combination measure theory control theory information theory Image QUT Ford said although people outside math community would heard HMMs would benefited many application economics neurobiology telecommunication example broad DNA sequencing speech recognition system used smartphone digital assistant algorithm used UAV object detection system developed Ford Dr Tim Molloy Dr Jasmin Martin others “The algorithm boost weak signal reducing surround signal noise” Ford said Professor Jason Ford front led development algorithm enables unmanned aerial vehicle UAV replicate human pilot’s ability visually detect aircraft range 2km Ford said one major challenge developing senseandavoid system unmanned aerial aircraft make small enough able carried UAV breakthrough latest step series related research project past decade including Smart Skies Project Project ResQu collaboration Boeing Australia Insitu Pacific Testing commenced 2010 flight collect data start working project early 2014 breakthrough proofofconcept flight proved system UAV able detect another aircraft using vision flight “Boeing Insitu Pacific valued ongoing collaboration QUT Professor Ford’s team” said Brendan Williams Associate Technical Fellow Airspace Integration Boeing Company “The algorithm evaluated matured regular flight test strong positive result looking transitioning use baseline technology regular Beyond Visual Line Sight operations” Since research focussed improving performance size cost technology improve commercial feasibility system Ford said ultimate aim research enable UAVs easily used general airspace commercial applicationsTags Machine Learning Technology Engineering Business AI
3,983
How to create an irresistible offer
Know Your Audience Whether common sense or trite, I feel like I’m beating a dead horse whenever I bring this up. Which is why I sometimes gloss over it. But it must be said, because you’re not going to rack up sales with a tone-deaf offer. You must know who your offer is for. You must know who your offer is for. (Click To Tweet) It’s like if someone came to Music Entrepreneur HQ and pitched a guest post about the environment (oh wait, this actually happened!). Sorry, though many musicians are environmentally conscious, trying to sell them your recycling services is going to prove an uphill battle. What are musicians interested in? Growing their fan base. Getting listeners for their music. Bringing a crowd to their shows. And so on. There might be an opportunity to sneak in some tips about reducing their carbon footprint in an offer that covers one or more of the topics just mentioned. But it would be best to assume no opportunity, because you want your content to be focused and targeted. Who is interested in recycling services? That’s what you’d want to figure out before pitching your offer. In like manner, if you wish to create an irresistible offer, you must know your audience and what their needs are. If you can, go and ask them now.
https://medium.com/datadriveninvestor/how-to-create-an-irresistible-offer-674a71ea7482
['David Andrew Wiebe']
2020-12-29 16:52:59.815000+00:00
['Business', 'Creativity', 'Entrepreneurship', 'Product', 'Freelancing']
Title create irresistible offerContent Know Audience Whether common sense trite feel like I’m beating dead horse whenever bring sometimes glance must said you’re going rack sale tonedeaf offer must know offer must know offer Click Tweet It’s like someone came Music Entrepreneur HQ pitched guest post environment oh wait actually happened Sorry though many musician environmentally conscious trying sell recycling service going prove uphill battle musician interested Growing fan base Getting listener music Bringing crowd show might opportunity sneak tip reducing carbon footprint offer cover one topic mentioned would best assume opportunity want content focused targeted interested recycling service That’s you’d want figure pitching offer like manner wish create irresistible offer must know audience need go ask nowTags Business Creativity Entrepreneurship Product Freelancing
3,984
5 Cities Where Man And Nature Collide
5 Cities Where Man And Nature Collide Bringing a little bit of wildness back into our most civilised spaces New York City and the last of the city’s green space, Central Park (by Jermaine Ee on Unsplash) As cities continue to expand and our urban sprawl pushes further and further into what was once forests, fields and savannah, we are increasingly coming into conflict with nature. Many animals are turning this to their advantage and while some have roamed our cities for centuries, others are taking their first tentative steps into our cities. Sometimes the consequences are adorable, sometimes they can be deadly, but either way, these animals are changing how we interact with the world around us. Leopards — Mumbai, India A leopard spotted at night Mumbai, India is one of the most densely populated areas on the planet, with a population density of 32,000 people per square kilometre. It’s home to 19.75 million people and also at least 40 leopards. Bordering the Sanjay Gandhi National Park, the city of Mumbai has virtually no buffer zone between the urban sprawl and the park, and so the leopards, seeking an easy meal, are known to venture into the city at night. Lacking adequate trash infrastructure, waste piles up in the streets of Mumbai, attracting, among other things, stray dogs. There are 30 million stray dogs in India and 95,000 in Mumbai alone, and a sizeable minority of them carry the rabies virus. The leopards, however, are helping to combat both them and the virus. As the apex predator in this unique eco-system, the leopards hunt the dogs, viewing them as a far easier meal than the deer commonly found in the park. At least 40% of their diet is thought to be dogs, and it’s been speculated that around 1,500 dogs are required per year to sustain a leopard population of this size. Each year they likely prevent 1,000 bites from stray dogs, preventing around 90 rabies cases in a country that’s one of the worst affected in the world by the rabies virus, thanks largely to its stray dog population. While attacks on humans from the leopards are very rare, they do happen, often with fatal results. As Mumbai continues to expand, the risk will continue to increase, but efforts are being made to combat this. Initiatives aimed at educating people on how to stay safe in areas where there are known to be leopards have proven incredibly effective in reducing attacks, and those leading the charge in leopard conservation are confident that with proper care, both humans and leopards can flourish. Raccoons — Toronto, Canada A raccoon in the city (by Den Trushtin on Unsplash) Often referred to as the raccoon capital of the world, Toronto, Canada, has one of the highest raccoon populations in the world, with approximately one raccoon per twenty-nine people. Famously intelligent and nimble-fingered, raccoons will eat just about anything. It wasn’t until 2002 that raccoons colonised Toronto in such numbers, when the city introduced organic trash bins, without too much consideration as to how the raccoons living in the forests on the outskirts of the city would view this new, easy food source. Today the city still struggles with raccoons, thanks in large part to how adaptable they are to city life. All attempts to control them and stop them stealing food have so far failed. The Native Americans understood raccoons as clearly as we do today, with the word ‘raccoon’ coming from the Powhatan word ‘aroughcum’, meaning ‘animal that scratches with its hands’. 
The Aztecs were a little more to the point, calling them ‘mapachitli’, which means ‘one who takes everything with its hands’. Hyenas — Harar, Ethiopia The Hyena Man of Harar (by Gill Penny from Flickr) For at least five hundred years, the people of Harar, Ethiopia, have been feeding the hyenas that live on the outskirts of the city in the caves of the Hakim Mountain. The hyenas make their home alongside the tombs of important religious leaders in the Islamic faith, and the people of Harar came to view the hyenas there as a symbol of luck. They feed them porridge, butter and goat meat, and as long as the hyenas continue to eat, the city is said to have good fortune. When they are not being given their annual feast of porridge and meat, though, they roam the garbage dumps of the city, eating anything and everything they can find. As one of Africa’s largest predators, they have a huge appetite, which may be what led one family to feed them scraps of raw meat. Over the years, the hyenas have learned to come when called and, for 40 years, one man fed a pack of them. He also trained his son to feed them, and the son is famous today for feeding them from his mouth. He places a stick between his teeth with a piece of meat on the end for the hyenas to eat. He, in turn, is teaching his own son to do the same, while his sister has also entered the family business of feeding hyenas. The feeding of the hyenas has become one of the city’s most famous tourist attractions, and adventurous explorers can even feed the hyenas from their own mouths. As Harar continues to grow, however, the family worries that these visitors may be pushed out and that one of the most unlikely human-animal encounters, one with a rich history, could come to an end. Foxes — London, UK Urban Fox — London Over 10,000 foxes live in London, accounting for 14% of the UK’s total population. Highly adaptable creatures, who can eat just about anything, foxes found a natural home amidst the gloomy streets of London. Their diet differs from that of their country-dwelling cousins: they eat an even split of household trash and meat. Their favourite, and the most beneficial food for us, appears to be rats, and they’re noted as being a significant factor in keeping London’s rat population to a minimum. There are as many as 18 foxes per square kilometre in London, which has led to them taking up residence in some interesting places. In 2011, while the UK’s tallest building, the Shard, was being constructed, a fox nicknamed Romeo took up residence on the 72nd floor, now the open-air viewing gallery and the highest point accessible to the public. Romeo survived by eating scraps left by the construction workers. Referred to, quite aptly, as ‘a resourceful little chap’, Romeo was later caught by animal rescue workers and was released onto the streets of Bermondsey in London. Cats — Istanbul, Turkey One of many cats of Istanbul The history of cats in Istanbul goes back a long way. Originally, they came to the city as ships’ cats, which had been tasked with keeping the rat population down aboard ships during long sea voyages. When the Ottoman Empire took Istanbul (then Constantinople) in 1453, they brought with them a unique perspective on the city’s feline inhabitants. Cats have a special place in Islam, with one reportedly saving Muhammed’s life from a snake. As a reward, he blessed all cats to always land on their feet. Another story tells of Muhammed cutting the sleeve of his robe so that the cat sleeping there would not be disturbed. 
This love of cats translated into a deep respect and close bond with them in Istanbul, the new capital of the Ottoman Empire, and cat populations only continued to grow as the people there fed and cared for them. Today the cats of Istanbul number at least 125,000 and are famously affectionate and tame, despite largely being street cats. As a bonus, they keep rat populations in the city in check too, which in previous centuries meant a reduced number of cases of the plague. Both the government and the residents, as well as tourists, feed the huge population and, as a result, Istanbul has become famous as a cat lovers’ paradise, with virtually nowhere in the city that the felines haven’t made their own, not even the famous Hagia Sofia, which has been home to a cat called Gli for 16 years. Wild Cities Ultimately, whatever the animals that roam our cities, we must learn to live with them. Cities are expanding at an unprecedented rate and, for the wildlife whose homes we destroy in order to expand, there is often no choice but to move into our urban spaces. By working with wildlife organisations and learning about the creatures we share our cities with, we may be able to bring a little bit of wildness back into our most civilised spaces.
https://medium.com/age-of-awareness/5-cities-where-man-and-nature-collide-ba9c5f03a32c
['Danny Kane']
2020-08-04 00:50:02.259000+00:00
['Environment', 'Cities', 'Nature', 'Culture', 'Society']
Title 5 Cities Man Nature CollideContent 5 Cities Man Nature Collide Bringing little bit wildness back civilised space New York City last city’s green space Central Park Jermaine Ee Unsplash city continue expand urban sprawl push forest field savannah increasingly coming conflict nature Many animal turning advantage roamed city century others taking first tentative step city Sometimes consequence adorable sometimes deadly either way animal changing interact world around u Leopards — Mumbai India leopard spotted night Mumbai India one densely populated area plan population density 32000 people per square kilometre It’s home 1975 million people also least 40 leopard Bordering Sanjay Gandhi National Park city Mumbai virtually buffer zone urban sprawl park leopard seeking easy meal known venture city night Lacking adequate trash infrastructure waste pile street Mumbai attracting among thing stray dog 30 million stray dog India 95000 Mumbai alone sizeable minority carry rabies virus Leopards however helping combat virus apex predator unique ecosystem leopard hunt dog viewing far easier meal deer commonly found park least 40 diet thought dog it’s speculated around 1500 dog required per year sustain leopard population size year likely prevent 1000 bite stray dog preventing around 90 rabies case country that’s one worst affected world rabies virus thanks largely stray dog population attack human leopard rare happen often fatal result Mumbai continues expand risk continue increase effort made combat Initiatives aimed educating people stay safe area known leopard proven incredibly effective reducing attack leading charge leopard conservation confident proper care human leopard flourish Raccoons — Toronto Canada raccoon city Den Trushtin Unsplash Often referred raccoon capital world Toronto Canada one highest raccoon population world approximately one raccoon per twentynine people Famously intelligent nimblefingered raccoon eat anything wasn’t 2002 raccoon colonised Toronto number city introduced organic trash bin without much consideration raccoon living forest outskirt city would view new easy food source Today city still struggle raccoons’ thanks large part adaptable city life attempt control stop stealing food far failed Native Americans understood raccoon clearly today word ‘raccoon’ coming Powhatan word ‘aroughcum’ meaning ‘animal scratch hand Aztecs little point calling ‘mapachitli’ mean ‘one take everything hands’ Hyenas — Harar Ethiopia Hyena Man Harar Gill Penny Flickr least five hundred year people Harar Ethiopia feeding Hyenas live outskirt city cave Hakim Mountain hyena make home alongside tomb important religious leader Islamic faith people Harar came view hyena came seen symbol luck feed porridge butter goat meat long hyena continue eat city said good fortune fed porridge meat annually though roam garbage dump city eating anything everything find one Africa’s largest predator huge appetite may led one family feed scrap raw meat year hyena learned come called 40 year one man fed pack also trained son feed who’s famous today feeding mouth place stick teeth piece meat end hyena eat turn teaching son sister also entered family business feeding hyena feeding Hyenas become one city’s famous tourist attraction adventurous explorer even feed Hyenas mouth Harar continues grow however family worry visitor may pushed one unlikely humananimal encounter rich history could come end Foxes — London UK Urban Fox — London 10000 fox live London accounting 14 UK’s total population Highly adaptable 
creature eat anything fox found natural home amidst gloomy street London diet differs countrydwelling cousin eat even split household trash meat favourite beneficial food u appears rat they’re noted significant factor keeping London’s rat population minimum many 18 fox per square kilometre London led taking residence interesting place 2011 UK’s tallest building Shard constructed fox nicknamed Romeo took residence 72nd floor openair viewing gallery highest accessible point public Romeo survived eating scrap left construction worker Referred quite aptly ‘a resourceful little chap’ Romeo later caught animal rescue worker released onto street Bermondsey London Cats — Istanbul Turkey One many cat Istanbul history cat Istanbul go back long way Originally came city ship cat tasked keeping rat population aboard ship long sea voyage Ottoman Empire took Istanbul Constantinople 1453 brought unique perspective city’s feline inhabitant Cats special place Islam one reportedly saving Muhammed’s life snake reward blessed cat always land foot Another story tell Muhammed cutting sleeve robe cat sleeping would disturbed love cat translated deep respect close bond Istanbul new capital Ottoman Empire cat population continued grow people fed cared Today cat Istanbul number least 125000 famously affectionate tame despite largely street cat bonus keep rat population city check previous century meant reduced number case plague government resident well tourist feed huge population result Istanbul become famous cat lover paradise virtually nowhere city feline haven’t made even famous Hagia Sofia home cat called Gli 16 year Wild Cities Ultimately whatever animal roams city must learn live Cities expanding unprecedented rate wildlife whose home destroy expand often choice move urban space working wildlife organisation learning creature share city may able bring little bit wildness back civilised spacesTags Environment Cities Nature Culture Society
3,985
Standard Cognition Uses Rockset to Deliver Data APIs and Real-Time Metrics for Vision AI
Standard Cognition Uses Rockset to Deliver Data APIs and Real-Time Metrics for Vision AI Walk into a store, grab the items you want, and walk out without having to interact with a cashier or even use a self-checkout system. That’s the no-hassle shopping experience of the future you’ll get at the Standard Store, a demonstration store showcasing the AI-powered checkout pioneered by Standard Cognition. The company makes use of computer vision to remove the need for checkout lines of any sort in physical retail locations. Their autonomous checkout system only requires easy-to-install overhead cameras, with no other sensors or RFID tags needed on shelves or merchandise. Standard uses the camera information in its computer vision platform to generate locations of individuals in the store (a type of in-store GPS) and track what items they pick up from the shelves. Shoppers simply exit the store with their items and get sent a receipt for their purchases. Employing computer vision to deliver a no-touch checkout experience requires that Standard efficiently handle large volumes of data from many sources. Aside from video data from each camera-equipped store, Standard deals with other data sets such as transactional data, store inventory data that arrive in different formats from different retailers, and metadata derived from the extensive video captured by their cameras. As is common with fast-growing markets, Standard’s data and analytics requirements are constantly evolving. Adding external data sources, each with a different schema, can require significant effort building and maintaining ETL pipelines. Testing new functionality on their transactional data store is costly and can impact production. Ad hoc queries to measure the accuracy of the checkout process in real time are not possible with traditional data architectures. To overcome these challenges and support rapid iteration on the product, the Standard engineering team relies on Rockset for their prototyping and internal analytics. Schemaless Ingest for Running Experiments Standard builds their production systems to access the streams of events they collect through a number of backend APIs, and the team is continually adding new API endpoints to make more data available to developers. Rockset plays a key role in prototyping APIs that will eventually be productionized and offers several advantages in this regard. When in the experimental phase, quick schema changes are required when analyzing their data. Rockset does not require schema definition for ingest, but still allows users to run fast SQL queries against the raw data using a very flexible schema-on-read approach. Using Rockset as their prototyping platform, Standard engineers can quickly experiment with different functions on the data. Standard also uses Rockset for fast prototyping because it can be readily accessed as a fully managed cloud service. Engineers simply connect to various data sources and ingest and query the data without having to manage servers or databases. Compared to the alternative of prototyping on their transactional data store, Standard’s cost of experimentation with Rockset is low. Ad Hoc Analysis of Operational Metrics Standard is constantly monitoring operational metrics from retailer partners, and their own demonstration store, to improve the efficiency and precision of their systems. Of particular importance in computer-vision-aided checkout is the accuracy of the transactions. 
Were shoppers charged for the correct number of items? How accurate were the AI models compared to human-resolved events? The engineering team pulls together multiple data sets (event streams from the stores, data from vendors, store inventory information, and debug logs) to generate accuracy metrics. They stream all this data into Rockset, which allows Standard to run ad hoc queries to join across data sets and analyze metrics in real time, rather than wait for asynchronous data lake jobs. An Environment for Rapid Prototyping and Real-Time Analytics Standard incorporates Rockset into their development flow for rapid prototyping and real-time analytics purposes. They bring in transactional data and various third-party data sets, typically in CSV or Parquet format and each with its own custom schema, using the Rockset Write API for ingestion whenever new data is available. For feature prototyping, engineers build an experimental API, using the Rockset Node.js client, that is refined over multiple iterations. Once a feature is mature, it is converted to a serverless function, using Google Cloud Functions, in their online production system in order to present data as an API to developers. This flow allows the engineering team to move quickly, with no infrastructure required, when developing new functionality. Standard productionizes several endpoints a day using this methodology. In the real-time analytics scenario, data from disparate sources (structured data managed by Standard and unstructured third-party data) is loaded into Rockset. Once ingested into Rockset, engineers can immediately perform SQL queries to measure and analyze operational metrics. Rockset offers the Standard team an ideal environment for ad hoc queries, allowing engineers to bring in and query internal and external data sets in real time without having to worry about indexing the data for performance. Constantly Improving Checkout Accuracy and Product at Standard Standard’s Rockset environment allows the team greater speed and simplicity when developing new features and verifying the accuracy of their AI models. In a nascent market where correctness of the computer vision platform will be crucial in gaining adoption of its automated checkout system, the ability to constantly improve accuracy and product functionality gives Standard an important edge. “The team at Standard is always looking to increase the accuracy of the computer vision platform and add new features to the product. We need to be able to drive product improvements from conception to production rapidly, and that involves being able to run experiments and analyze real-time metrics quickly and simply,” says Tushar Dadlani, computer vision engineering manager at Standard Cognition. “Using Rockset in our development environment gives us the ability to perform ad hoc analysis without a significant investment in infrastructure and performance tuning. We have over two thirds of our technical team using Rockset for their work, helping us increase the speed and agility with which we operate.” As Standard continues to evolve its AI-powered autonomous checkout offering, the team hopes to bring even more data into its platform in the future. Standard will extend the same rapid development model, enabled by Rockset, to incorporating new types of data into its analysis. Its next project will introduce user behavior event streams into its analysis, using Rockset’s SQL engine to join across the multiple data sets being analyzed.
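To make the "ad hoc queries to join across data sets" idea concrete, here is a rough sketch of what such an accuracy query could look like. Every collection and column name below (store_events, resolved_transactions, item_count and so on) is invented for illustration rather than Standard's or Rockset's actual schema, and the snippet deliberately avoids naming any specific client library call: run_sql stands in for whatever query function a given deployment exposes.

# Hypothetical ad hoc accuracy query, sketched as a SQL string.
# Collection and column names are invented; the SQL dialect details may also differ.
ACCURACY_SQL = """
SELECT
    e.store_id,
    COUNT(*)                                           AS checkout_events,
    SUM(CASE WHEN e.item_count = r.item_count
             THEN 1 ELSE 0 END)                        AS correct_item_counts,
    100.0 * SUM(CASE WHEN e.item_count = r.item_count
                     THEN 1 ELSE 0 END) / COUNT(*)     AS accuracy_pct
FROM
    store_events e
    JOIN resolved_transactions r ON e.transaction_id = r.transaction_id
WHERE
    e.event_time > CURRENT_TIMESTAMP() - INTERVAL 1 DAY
GROUP BY
    e.store_id
ORDER BY
    accuracy_pct ASC
"""

def accuracy_report(run_sql):
    """Print a per-store accuracy summary for the last day of checkouts.

    `run_sql` is expected to take a SQL string and return an iterable of row dicts;
    wiring it up to a real query client is left to the reader.
    """
    for row in run_sql(ACCURACY_SQL):
        print(f"store {row['store_id']}: "
              f"{row['accuracy_pct']:.1f}% of {row['checkout_events']} checkouts matched")

Handing accuracy_report a real query function would then surface, store by store, how often the vision system's item counts agreed with the human-resolved transactions.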
https://medium.com/rocksetcloud/standard-cognition-uses-rockset-to-deliver-data-apis-and-real-time-metrics-for-vision-ai-a080180352c7
['Kevin Leong']
2020-01-31 21:56:22.117000+00:00
['Real Time Analytics', 'Computer Vision', 'AI', 'Data', 'API']
Title Standard Cognition Uses Rockset Deliver Data APIs RealTime Metrics Vision AIContent Standard Cognition Uses Rockset Deliver Data APIs RealTime Metrics Vision AI Kevin Leong Follow Jan 31 · 5 min read Walk store grab item want walk without interact cashier even use selfcheckout system That’s nohassle shopping experience future you’ll get Standard Store demonstration store showcasing AIpowered checkout pioneered Standard Cognition company make use computer vision remove need checkout line sort physical retail location autonomous checkout system requires easytoinstall overhead camera sensor RFID tag needed shelf merchandise Standard us camera information computer vision platform generate location individual storea type instore GPSand track item pick shelf Shoppers simply exit store item get sent receipt purchase Employing computer vision deliver notouch checkout experience requires Standard efficiently handle large volume data many source Aside video data cameraequipped store Standard deal data set transactional data store inventory data arrive different format different retailer metadata derived extensive video captured camera common fastgrowing market Standard’s data analytics requirement constantly evolving Adding external data source different schema require significant effort building maintaining ETL pipeline Testing new functionality transactional data store costly impact production Ad hoc query measure accuracy checkout process real time possible traditional data architecture overcome challenge support rapid iteration product Standard engineering team relies Rockset prototyping internal analytics Schemaless Ingest Running Experiments Standard build production system access stream event collect number backend APIs team continually adding new API endpoint make data available developer Rockset play key role prototyping APIs eventually productionized offer several advantage regard experimental phase quick schema change required analyzing data Rockset require schema definition ingest still allows user run fast SQL query raw data using flexible schemaonread approach Using Rockset prototyping platform Standard engineer quickly experiment different function data Standard also us Rockset fast prototyping readily accessed fully managed cloud service Engineers simply connect various data source ingest query data without manage server database Compared alternative prototyping transactional data store Standard’s cost experimentation Rockset low Ad Hoc Analysis Operational Metrics Standard constantly monitoring operational metric retailer partner demonstration store improve efficiency precision system particular importance computervisionaided checkout accuracy transaction shopper charged correct number item accurate AI model compared humanresolved event engineering team pull together multiple data setsevent stream store data vendor store inventory information debug logsto generate accuracy metric stream data Rockset allows Standard run ad hoc query join across data set analyze metric real time rather wait asynchronous data lake job Environment Rapid Prototyping RealTime Analytics Standard incorporates Rockset development flow rapid prototyping realtime analytics purpose bring transactional data various thirdparty data set typically CSV Parquet format custom schema using Rockset Write API ingestion whenever new data available feature prototyping engineer build experimental API using Rockset Nodejs client refined multiple iteration feature mature converted serverless function using Google Cloud 
Functions online production system order present data API developer flow allows engineering team move quickly infrastructure required developing new functionality Standard productionizes several endpoint day using methodology realtime analytics scenario data disparate sourcesstructured data managed Standard unstructured thirdparty datais loaded Rockset ingested Rockset engineer immediately perform SQL query measure analyze operational metric Rockset offer Standard team ideal environment ad hoc query allowing engineer bring query internal external data set real time without worry indexing data performance Constantly Improving Checkout Accuracy Product Standard Standard’s Rockset environment allows team greater speed simplicity developing new feature verifying accuracy AI model nascent market correctness computer vision platform crucial gaining adoption automated checkout system ability constantly improve accuracy product functionality give Standard important edge “The team Standard always looking increase accuracy computer vision platform add new feature product need able drive product improvement conception production rapidly involves able run experiment analyze realtime metric quickly simply” say Tushar Dadlani computer vision engineering manager Standard Cognition “Using Rockset development environment give u ability perform ad hoc analysis without significant investment infrastructure performance tuning two third technical team using Rockset work helping u increase speed agility operate” Standard continues evolve AIpowered autonomous checkout offering team hope bring even data platform future Standard extend rapid development model enabled Rockset incorporating new type data analysis next project introduce user behavior event stream analysis using Rockset’s SQL engine join across multiple data set analyzedTags Real Time Analytics Computer Vision AI Data API
3,986
Let’s Go for a Walk
Let’s Go for a Walk Why a daily walk is as important to me as brushing my teeth Photo by Elijah Hail on Unsplash I always wanted to be a runner. I grew up in Los Angeles, where there were a lot of runners in their tiny, 80s short-shorts and sweatbands. They looked so powerful and elegant. One of the most exciting moments of my young life was sitting out one night with all the neighbors at the side of the road and watching an Olympian run by with the torch just before the 1984 Los Angeles Olympic games began. I still remember that figure flying by us so elegantly, torch held high, hardly breaking a sweat. I wanna be like that, I thought. But even as a child, I had issues with running. For one thing, I had serious asthma, perpetually aggravated by the thick layer of smog that covered the city in the 80s. I also had a lot of joint issues, despite my youth, and running caused my knees to ache unbearably within minutes. I often defaulted to walking — less elegant, but far more enjoyable, I soon discovered, when my dad asked me to start joining him for his morning walks. I don’t remember us talking very much — I’ve always had a hard time finding things to talk about with my dad — and so we often walked in silence. I remember feeling safe out there in those early mornings, before anyone expected me to get dressed and organize my homework and do all the things I was supposed to do. There was a freedom out there: me, just walking around like there was nothing better to do. I also remember being entranced by the sights we passed by. I loved looking at people’s yards — especially those who had done a lot of landscaping. I found the flowers so beautiful, and wondered what kinds of little passageways were behind the hedges. I loved looking at the trees and often stopped to touch their bark. And I was in heaven when we went down the street to the beautiful park near the freeway entrance that felt like a private little woodland filled with twisting pathways.
https://medium.com/wilder-with-yael-wolfe/lets-go-for-a-walk-eb4c5b8ea541
['Yael Wolfe']
2020-11-16 17:15:23.980000+00:00
['Walking', 'Nature', 'Outdoors', 'Mental Health', 'Health']
Title Let’s Go WalkContent Let’s Go Walk daily walk important brushing teeth Photo Elijah Hail Unsplash always wanted runner grew Los Angeles lot runner tiny 80 shortshorts sweatband looked powerful elegant One exciting moment young life sitting one night neighbor side road watching Olympian run torch 1984 Los Angeles Olympic game began still remember figure flying u elegantly torch held high hardly breaking sweat wanna like thought even child issue running one thing serious asthma perpetually aggravated thick layer smog covered city 80 also lot joint issue despite youth running caused knee ache unbearably within minute often defaulted walking — le elegant far enjoyable soon discovered dad asked start joining morning walk don’t remember u talking much — I’ve always hard time finding thing talk dad — often walked silence remember feeling safe early morning anyone expected get dressed organize homework thing supposed freedom walking around like nothing better also remember entranced sight passed loved looking people’s yard — especially done lot landscaping found flower beautiful wondered kind little passageway behind hedge loved looking tree often stopped touch bark heaven went street beautiful park near freeway entrance felt like private little woodland filled twisting pathwaysTags Walking Nature Outdoors Mental Health Health
3,987
Yearly Review: Most Read Stories of 2020
Yearly Review: Most Read Stories of 2020 Focusing on business, productivity, and writing As it’s natural, at this time of the year, to look back at some of the highlights that truly stand out, I thought I’d do the same for some of my top pieces on Medium. I truly gave a lot of energy to this platform over the past year, and do not regret any of that. For one, it has made me a better writer. It also challenged me to start new things (such as a new business, a few columns). Overall, it also has taught me a lot about the fine balance between writing for myself and for the audience. Optimising my talents and strengths, whilst also keeping creativity and fun in the picture — such a hard one to balance. I thought I’d look back at the 10 most successful pieces of 2020, and provide an honest opinion (and speculation) on what worked and why they got so much love (I am using a combination of reads and views). One thing is for certain, I am looking to come back in 2021 with a more honest approach to what I can take on, and how much I can commit to the different things I am juggling. Less is more, for real this time. In this piece, I dive deep into some stats as well as using my intuition to provide some lessons for fellow writers.
https://medium.com/the-business-of-wellness/yearly-review-most-read-stories-of-2020-aa42fe724a90
['Fab Giovanetti']
2020-12-30 09:49:53.852000+00:00
['Headlines', 'Business', 'Writing', 'Creativity', 'Writing Tips']
Title Yearly Review Read Stories 2020Content Yearly Review Read Stories 2020 Focusing business productivity writing it’s natural time year look back highlight truly stand thought I’d top piece Medium truly gave lot energy platform past year regret one made better writer also challenged start new thing new business column Overall also taught lot fine balance writing audience Optimising talent strength whilst also keeping creativity fun picture — hard one balance thought I’d look back 10 successful piece 2020 provide honest opinion speculation worked got much love using combination read view One thing certain looking come back 2021 honest approach take much commit different thing juggling Less real time piece dive deep stats well using intuition provide lesson fellow writersTags Headlines Business Writing Creativity Writing Tips
3,988
Data science for weather forecast: how to prove a funny theory
What are we trying to do? Before describing what this experiment is all about, I need to give you some context. My colleague Aouss Sbai (co-author of this article) and I were looking for a fun project to work on. So we asked our mentor Plamen Nedeltchev (Distinguished Engineer at Cisco) if he had anything in stock and he shared with us that he had a theory about the weather in San Jose, CA. He told us that he was able to predict if the summer was going to be hot or not solely based on the temperatures of the 19th, 20th and 21st of May. He asked us to prove it. You read that correctly, predict the average weather of 3 months based on 3 days. Frankly, we did not really believe in it and approached this task with great scepticism. Nevertheless, we got started and tried to understand how we could go about proving such an original statement. Data collection The first hurdle In any data science problem, the starting point of anything is data. What data do we want exactly? Remember our objective: predict the average summer temperature following the temperatures of 3 days in May (19th, 20th and 21st) for the city of San Jose. So we started looking for databases or archives of historical temperature data. Guess what? There were none that were either complete enough or simply available to us 😬. The only source of information we found was on a website, the old Farmer’s Almanac, which listed the average temperatures of each day since 1945. Amazing, the job is done then! Well, not exactly… This is the page corresponding to the 23rd of March, 2019. We have access to the mean temperature, which is exactly what we need. But when it comes to calculating the average temperature of the summer of each year since 1945, this becomes much more tedious (~4500 days). There was no way we would visit a different page for each day and manually gather this data in an excel sheet… So we decided to automate this task and write a script for it! 🤖 Automation with a web scraping script Basically, the idea is that the script visits each page independently and looks for the data we want, calculates the average temperature of the summer, captures the temperatures of the 3 days in May, and repeats the process for each year starting from 1945. But how exactly can we do that? This is the URL of the page you just saw above. As you can see, it is specified in the URL the city and the date you want to access. So we could tell the script which URL to visit, for each day we were interested in. But here comes another issue. Once the script is on the page, how does it detect the temperature that we want? Well, as you might know, each webpage is written in HTML format, which means that each element that you see on screen belongs to a specific HTML tag. And, luckily for us, each page of that website was structured in the exact same way. So the only thing that we needed to do is identify in which HTML tag was the daily mean temperature stored and tell the script to fetch that specific value. (For those interested, we used the python library Beautiful Soup) The script was then able to do all the nasty calculations for us and return for each year the average summer temperature and the individual temperatures of the 3 days of May, all bundled in a nice Excel sheet 📝. what our dataset looks like now (temperatures are in Fahrenheit). The first 3 columns are the 3 days of May, and the last one is the average summer temperature But (there’s always a “but”), that was not enough. 
In fact, when you think about it, each line of our Excel sheet represented 1 year (average temp of summer + 3 days of May). So, even if we went back to 1945, that represented only 73 lines… which is far too little data to pretend to do any sort of reliable analysis or prediction (a couple of hundred would be much better). So we decided to repeat the exact same process for 4 other cities in Northern California around San Jose, which were subject to the same type of weather but were far enough not to have redundant data (taking San Francisco, for instance, which is by the sea, would have biased everything, and taking Milpitas, which is in San Jose’s suburbs, wouldn’t have added any relevant data). We now have 370 measurements, which is not ideal, but sufficient to start doing some analysis. Let the analysis begin! Data Transformation Now let’s try to simplify our dataset to make it easier to analyze. To start things off, we pulled the Excel file data into Alteryx, a data science tool to create end-to-end data pipelines. This will help us prepare and analyze the data all along the experiment. Ingested Data: we decided to add 2 columns which indicated the city and the year of the measurement. We aimed to visualize the data using Tableau, which is one of the most commonly used Business Intelligence (BI) tools. Hence, we needed to transform the data into a format that is easily and efficiently consumed by Tableau. It is worth mentioning that we scraped the data in a format that was already structured, and, therefore, very little data cleaning was required. We merely reordered and reformatted some columns and checked that there were no null values.
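For readers who want a feel for what a scraper like the one described above might look like, here is a minimal sketch using requests and Beautiful Soup. The URL template and CSS selector are placeholders (the real almanac pages would need to be inspected in a browser first), and niceties such as caching, retries and rate limiting are left out.

import statistics
from datetime import date, timedelta

import requests
from bs4 import BeautifulSoup

# Placeholder URL template and selector -- inspect the actual page to fill these in.
URL_TEMPLATE = "https://example-almanac.test/weather/history/CA/{city}/{day:%Y-%m-%d}"
MEAN_TEMP_SELECTOR = "td.mean-temperature"

def daily_mean_temp(city: str, day: date) -> float:
    """Fetch one day's history page and pull out the mean temperature."""
    response = requests.get(URL_TEMPLATE.format(city=city, day=day), timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    cell = soup.select_one(MEAN_TEMP_SELECTOR)  # raises below if the selector is wrong
    return float(cell.get_text(strip=True).replace("°F", ""))

def summer_average(city: str, year: int) -> float:
    """Average the daily means over June, July and August."""
    day = date(year, 6, 1)
    temps = []
    while day <= date(year, 8, 31):
        temps.append(daily_mean_temp(city, day))
        day += timedelta(days=1)
    return statistics.mean(temps)

def may_predictors(city: str, year: int) -> list:
    """The three 'predictor' days: May 19th, 20th and 21st."""
    return [daily_mean_temp(city, date(year, 5, d)) for d in (19, 20, 21)]

From there, collecting the roughly 4,500 rows described above is just a loop over years and cities, writing each result out to a CSV or Excel sheet.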
https://towardsdatascience.com/data-science-for-weather-forecast-how-to-prove-a-funny-theory-f005ea2d1efe
['Julien Emery']
2019-04-06 18:29:28.919000+00:00
['Weather', 'Technology', 'Business Intelligence', 'Data Science', 'Data Visualization']
Title Data science weather forecast prove funny theoryContent trying describing experiment need give context colleague Aouss Sbai coauthor article looking fun project work asked mentor Plamen Nedeltchev Distinguished Engineer Cisco anything stock shared u theory weather San Jose CA told u able predict summer going hot solely based temperature 19th 20th 21st May asked u prove read correctly predict average weather 3 month based 3 day Frankly really believe approached task great scepticism Nevertheless got started tried understand could go proving original statement Data collection first hurdle data science problem starting point anything data data want exactly Remember objective predict average summer temperature following temperature 3 day May 19th 20th 21st city San Jose started looking database archive historical temperature data Guess none either complete enough simply available u 😬 source information found website old Farmer’s Almanac listed average temperature day since 1945 Amazing job done Well exactly… page corresponding 23rd March 2019 access mean temperature exactly need come calculating average temperature summer year since 1945 becomes much tedious 4500 day way would visit different page day manually gather data excel sheet… decided automate task write script 🤖 Automation web scraping script Basically idea script visit page independently look data want calculates average temperature summer capture temperature 3 day May repeat process year starting 1945 exactly URL page saw see specified URL city date want access could tell script URL visit day interested come another issue script page detect temperature want Well might know webpage written HTML format mean element see screen belongs specific HTML tag luckily u page website structured exact way thing needed identify HTML tag daily mean temperature stored tell script fetch specific value interested used python library Beautiful Soup script able nasty calculation u return year average summer temperature individual temperature 3 day May bundled nice Excel sheet 📝 dataset look like temperature Fahrenheit first 3 column 3 day May last one average summer temperature there’s always “but” enough fact think line Excel sheet represented 1 year average temp summer 3 day May even went back 1945 represented 73 lines… far little data pretend sort reliable analysis prediction couple hundred would much better decided repeat exact process 4 city northern California around San Jose subject type weather far enough redundant data taking San Francisco instance sea would biased everything taking Milpitas San Jose suburb wouldn’t added relevant data 370 measurement ideal sufficient start analysis Let analysis begin Data Transformation let’s try simplify dataset make easier analyze start thing pulled Excel file data Alteryx data science tool create endtoend data pipeline help u prepare analyze data along experiment Ingested Data decided add 2 column indicated city year measurement aimed visualize data using Tableau one commonly used Business Intelligence BI tool Hence needed transform data format easily efficiently consumed Tableau worth mentioning scraped data format already structured therefore little data cleaning required merely reordered reformatted column checked null valuesTags Weather Technology Business Intelligence Data Science Data Visualization
3,989
The new equation for ultimate AI energy efficiency.
The new equation for ultimate AI energy efficiency. Part V of our series, “Real Perspectives on Artificial Intelligence” features Rick Calle, AI business development lead for M12, Microsoft’s venture fund. How energy-intensive is the AI infrastructure today? And what does that mean for the future of discipline? Rick leads AI business development for M12, Microsoft’s venture fund. He works at the intersection of AI algorithms, hardware computing efficiency, and novel AI use cases. During his time with Qualcomm’s AI Research, he worked with the team that launched Qualcomm’s AI Engine into over 100 different models of AI-enabled mobile phones. Today’s AI algorithms, software and hardware combined are 10X to 100X more energy-intensive than they should be. In light of Microsoft’s recent announcement of its carbon negative commitment, my challenge to the industry is clear: let’s improve AI hardware and software so that we don’t overheat our planet. The computing industry is always optimizing for speed and innovation, but not necessarily considering the lifetime energy cost of that speed. I saw an inflection point around 2012 when the progression of AI hardware and algorithmic capabilities began to deviate from Moore’s law. Prior to that, most AI solutions were running on one, maybe two processors with workloads tracking to Moore’s law. A steady progression of workloads from the Perceptron in 1958 to systems like Bidirectional LSTM neural networks for speech recognition in the mid-2000s. Training AI models with multiple GPUs changed everything. After Alex Krizhevsky and team designed the AlexNet model with two GPUs in 2012, the computing power and electrical energy involved in training AI models took off at an entirely different pace: over 100X compounding every two years. Theirs was certainly not the first Convolutional Neural Network (CNN), but their “SuperVision” entry swept the field, winning the 2012 ImageNet competition by a huge margin. The next year nearly all competitors used CNNs and trained with multiple processors! Fast forward to 2019, and quickly developing innovative neural networks for Natural Language Processing may require hundreds or thousands of distributed GPUs — like self-attention encoder-decoder models that employ Neural Architecture Search (NAS) methods. According to a recent University of Massachusetts Amherst study, the amount of CO2 emitted from energy generation plants to power the computation involved in creating a new state-of-the-art AI model, was the equivalent of five automobile lifetime’s worth of CO2 emissions. If that’s what it takes to train only one new AI model, you can see that it is just not compatible with prioritization of sustainability. I believe we can incentivize the AI industry to make a change in the overall lifetime energy budget for AI workloads, and identify startups that are already committed to this cause. Where do you see the biggest opportunities for the highest impact energy savings? My colleagues and I think it’s joint optimization of three things: energy-efficient AI hardware, co-designed efficient AI algorithms and AI-aware computer networks. The challenge is that the energy consumption of AI models is likely the last thing an AI algorithm developer is thinking about (unless they’re focused on mobile phones). Usually the early optimizations are foremost around performance. AI engineers often think: “what’s my peak accuracy” and “how fast can I train the model” — both of which need faster computing and more energy. 
I support a new success metric to help incentivize the AI industry and startups to reduce energy and CO2 emissions at data center scale. We need to shift the focus to higher throughput and lower lifetime total cost of ownership of a system for given computing workloads. I stress “system” because often hardware marketing metrics forget to mention the energy cost of extra processors, memory, and networks required for an AI training system. Success Metric = Workload Throughput ÷ [ ($ Cost of System) + ($ Cost of Lifetime Energy of System) ] Throughput measures how fast we can compute the required AI algorithms. In the phraseology of the late Harvard Business School Professor Clayton Christensen, workload throughput is the “job” that matters at the end of the day. Not peak Floating Point Operations Per Second (FLOPS) which are magical, mystical marketing numbers only loosely related to getting the computational “job” done. The denominator of this ratio is the computing hardware cost plus the lifetime energy cost of operating that hardware including cooling and any extra network and processors required. With this new ratio, AI designers have far more degrees of freedom to optimize software, hardware and algorithms. For example, the power consumption of an AI chip itself — whether it is 50 watts or 450 watts — doesn’t matter as much. The lifetime energy consumption of many chips to deliver a certain workload throughput is what matters most. If we can maximize this success ratio, then by definition energy and CO2 emissions are reduced as well. Why change the “performance” mindset that has been the status quo for so long? AI has an existential problem. As its models continue to get larger, more computationally complex, and more accuracy is desired to reach human performance levels, the energy required to train those models increases exponentially. At some point if things continue as they have, researchers won’t be able to get enough computers or energy to create the new AI algorithms we want. I’m really worried about that potentially stalling AI innovation. Not many research labs can string together 4,000 leading-edge processors and run them for weeks. They just don’t have the resources to deploy exascale computers. So at some point — without change — we have the potential to reach a ceiling of innovation. I’d hate to see another AI winter. If our AI industry innovates around the success metric, then we will benefit from AI that is more compatible with sustainability, yet meets performance goals with lower lifetime energy hardware, more efficient AI algorithms and lower energy infrastructure.
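To make the proposed ratio concrete, here is a small worked example. The prices, wattages, throughput figures and electricity rate below are all invented placeholders rather than benchmarks of any real system; the point is only to show how lifetime energy shifts the comparison.

def success_metric(throughput, system_cost_usd, avg_power_watts,
                   lifetime_hours, usd_per_kwh=0.10):
    """Workload throughput divided by (hardware cost + lifetime energy cost)."""
    lifetime_energy_kwh = avg_power_watts / 1000.0 * lifetime_hours
    lifetime_energy_cost = lifetime_energy_kwh * usd_per_kwh
    return throughput / (system_cost_usd + lifetime_energy_cost)

# Invented systems: B is pricier and slightly slower, but far more energy efficient.
three_years = 3 * 365 * 24  # assumed service lifetime, in hours

system_a = success_metric(throughput=1000, system_cost_usd=100_000,
                          avg_power_watts=20_000, lifetime_hours=three_years)
system_b = success_metric(throughput=900, system_cost_usd=120_000,
                          avg_power_watts=6_000, lifetime_hours=three_years)

print(f"system A: {system_a:.5f} units of work per lifetime dollar")
print(f"system B: {system_b:.5f} units of work per lifetime dollar")
# With these made-up numbers, B comes out slightly ahead despite its lower peak
# throughput, because its lifetime electricity bill is roughly a third of A's.

The same comparison done on peak FLOPS alone would have picked system A every time, which is exactly the blind spot the metric is meant to remove.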
https://the-engine.medium.com/the-new-equation-for-ultimate-ai-energy-efficiency-119eccafb38c
['The Engine']
2020-06-23 16:19:38.710000+00:00
['AI', 'Artificial Intelligence', 'Climate Change', 'Energy', 'Computing']
Title new equation ultimate AI energy efficiencyContent new equation ultimate AI energy efficiency Part V series “Real Perspectives Artificial Intelligence” feature Rick Calle AI business development lead M12 Microsoft’s venture fund energyintensive AI infrastructure today mean future discipline Rick lead AI business development M12 Microsoft’s venture fund work intersection AI algorithm hardware computing efficiency novel AI use case time Qualcomm’s AI Research worked team launched Qualcomm’s AI Engine 100 different model AIenabled mobile phone Today’s AI algorithm software hardware combined 10X 100X energyintensive light Microsoft’s recent announcement carbon negative commitment challenge industry clear let’s improve AI hardware software don’t overheat planet computing industry always optimizing speed innovation necessarily considering lifetime energy cost speed saw inflection point around 2012 progression AI hardware algorithmic capability began deviate Moore’s law Prior AI solution running one maybe two processor workload tracking Moore’s law steady progression workload Perceptron 1958 system like Bidirectional LSTM neural network speech recognition mid2000s Training AI model multiple GPUs changed everything Alex Krizhevsky team designed AlexNet model two GPUs 2012 computing power electrical energy involved training AI model took entirely different pace 100X compounding every two year certainly first Convolutional Neural Network CNN “SuperVision” entry swept field winning 2012 ImageNet competition huge margin next year nearly competitor used CNNs trained multiple processor Fast forward 2019 quickly developing innovative neural network Natural Language Processing may require hundred thousand distributed GPUs — like selfattention encoderdecoder model employ Neural Architecture Search NAS method According recent University Massachusetts Amherst study amount CO2 emitted energy generation plant power computation involved creating new stateoftheart AI model equivalent five automobile lifetime’s worth CO2 emission that’s take train one new AI model see compatible prioritization sustainability believe incentivize AI industry make change overall lifetime energy budget AI workload identify startup already committed cause see biggest opportunity highest impact energy saving colleague think it’s joint optimization three thing energyefficient AI hardware codesigned efficient AI algorithm AIaware computer network challenge energy consumption AI model likely last thing AI algorithm developer thinking unless they’re focused mobile phone Usually early optimization foremost around performance AI engineer often think “what’s peak accuracy” “how fast train model” — need faster computing energy support new success metric help incentivize AI industry startup reduce energy CO2 emission data center scale need shift focus higher throughput lower lifetime total cost ownership system given computing workload stress “system” often hardware marketing metric forget mention energy cost extra processor memory network required AI training system Success Metric Workload Throughput ÷ Cost System Cost Lifetime Energy System Throughput measure fast compute required AI algorithm phraseology late Harvard Business School Professor Clayton Christensen workload throughput “job” matter end day peak Floating Point Operations Per Second FLOPS magical mystical marketing number loosely related getting computational “job” done denominator ratio computing hardware cost plus lifetime energy cost operating hardware including cooling 
extra network processor required new ratio AI designer far degree freedom optimize software hardware algorithm example power consumption AI chip — whether 50 watt 450 watt — doesn’t matter much lifetime energy consumption many chip deliver certain workload throughput matter maximize success ratio definition energy CO2 emission reduced well change “performance” mindset status quo long AI existential problem model continue get larger computationally complex accuracy desired reach human performance level energy required train model increase exponentially point thing continue researcher won’t able get enough computer energy create new AI algorithm want I’m really worried potentially stalling AI innovation many research lab string together 4000 leadingedge processor run week don’t resource deploy exascale computer point — without change — potential reach ceiling innovation I’d hate see another AI winter AI industry innovates around success metric benefit AI compatible sustainability yet meet performance goal lower lifetime energy hardware efficient AI algorithm lower energy infrastructure Tags AI Artificial Intelligence Climate Change Energy Computing
3,990
How To Find Peace In The Eye Of The Stress Storm Around You
We Can Get Away Without Going Away “People try to get away from it all — to the country, to the beach, to the mountains. You always wish that you could too. Which is idiotic: you can get away from it anytime you like. By going within. Nowhere you can go is more peaceful — more free of interruptions — than your own soul…An instant’s recollection and there it is: complete tranquility.” — Marcus Aurelius, “Meditations”, Gregory Hays Translation The words above were likely written around 180 AD, but the idea of “getting away” is still often mouthed by stressed workers in the present day. How many times have you wished you could get away? Ideas of a calm beach and umbrella drink may float in your daydreams. Ancient Romans likely did the same thing. However, one thing the emperor couldn’t do was “get away” — his chaotic times and life allowed no time for a lavish vacation. As Donald Robertson explains in his book “How to Think Like A Roman Emperor”, Marcus dealt with nonstop stressful personal and job-related issues. He lost 7 of his 13 children prematurely. One of his friends attempted to dethrone him by armed rebellion, and a letter from his own wife may have started the attempt. The Antonine Plague ravaged the empire. It’s thought to have killed nearly 10% of Rome’s 75 million people at the time. A “friendly” German tribe rebelled and attacked the empire, forcing Marcus to live most of his late and sickly life at Spartan-like battle camps. Now, this is a stressful life. However, Marcus never turned into one of those horrific Roman tyrants you see portrayed in movies. So, how did he do it? As Marcus himself points out in his journal, he found a way to get away into his own mind. The journal he carried, which became Meditations, was his get away. As he himself mentioned, it only took an instant to escape to “complete tranquility”. While he didn’t leave a detailed explanation, the emperor shows he could escape his chaotic world without physically leaving. This is the ultimate discovery for us in the present day. We don’t have to jump on a plane and go somewhere to escape the ever-present stress. A peaceful get away and place for renewal is much closer than we can ever imagine.
https://medium.com/mind-cafe/how-to-find-peace-in-the-eye-of-the-stress-storm-around-you-f5db9fdfe298
['Erik Brown']
2020-12-26 14:54:39.962000+00:00
['Health', 'Philosophy', 'Mindfulness', 'Psychology', 'Self Improvement']
Title Find Peace Eye Stress Storm Around YouContent Get Away Without Going Away “People try get away — country beach mountain always wish could idiotic get away anytime like going within Nowhere go peaceful — free interruption — soul…An instant’s recollection complete tranquility” — Marcus Aurelius “Meditations” Gregory Hayes Translation word likely written 180 AD idea “getting away” often mouthed stressed worker present day many time wished could get away Ideas calm beach umbrella drink may float daydream Likely ancient Romans thing However one thing emperor couldn’t “get away” — chaotic time life allowed time lavish vacation Donald Robertson explains book “How Think Like Roman Emperor” Marcus dealt nonstop stressful personal jobrelated issue lost 7 13 child prematurely One friend attempted dethrone armed rebellion letter wife may started attempt Antonine Plague ravaged empire It’s thought killed nearly 10 Rome’s 75 million people time “friendly” German tribe rebelled attacked empire forcing Marcus live late sickly life Spartanlike battle camp stressful life However Marcus never turned one horrific Roman tyrant see portrayed movie Marcus point journal found way get away mind journal carried became Meditations get away mentioned took instance escape “complete tranquility” didn’t leave detailed explanation emperor show could escape chaotic world without physically leaving ultimate discovery u present day don’t jump plane go somewhere escape everpresent stress peaceful get away place renewal much closer ever imagineTags Health Philosophy Mindfulness Psychology Self Improvement
3,991
Write First with Your Head, Then Revise with Your Heart
By Caroline Donahue — THE BOOK DOCTOR After how long January feels, February seems to flash past in a quick five minutes. In the midst of this zooming month, I have begun revising my novel. So far, the daunting steps of reading the first draft over several times and considering whether the structure is really working have been completed. As I waded through scenes that took me years to write, only to make judgements on whether they are, as Heidi Klum used to say on Project Runway, “In” or “Out,” I was struck with the realization that a first draft and later ones have entirely different intentions. A first draft is for the head. We need to understand a lot of things through writing a first draft. To be clear, you may write many sections of a book many times in a first draft. But for the sake of clarity, I think of a first draft as the first time you’ve written a piece all the way through to the end and have been able to type THE END, if only to delete it immediately afterward. This first draft is to understand what happens in the book. This is true for fiction as well as memoir. Anything with a narrative. We are writing that first draft to understand the scope, to know what is part of the story, who the players are, what they are like, and what span of their lives we will witness as readers. There are a lot of decisions to make in this first draft, and our head works very hard to make them. First person or third? Multiple perspectives? Multiple timelines or chronological? Past tense or the increasingly trendy present? A first draft is often accompanied by lists of decisions to make. I have even assigned my clients these sorts of lists when they get lost in the weeds. But now that I have pushed the boat away from the dock and am floating in the middle of the water, far away from shore in my revision, I have come to understand that the second draft is no longer the draft of the head. I know what happens and who is involved, my tense, my POV, and the span of time covered. So what is this new draft for? This, my friends, is when the heart gets involved. If you simply tell a story point by point but without any emotion or atmosphere or without engaging the senses, you’ll never get into your reader’s heart. My editor and mentor said when I gave him the end of my first draft that I now had the foundation and structure of a book, much like the frame and roof of a house. But now, I need to connect the electricity and decorate the book so the reader feels at home in it. And this is when you need to start re-reading what you’ve written with your feelings at the front. Try to distract your head with tasks that will keep it away from the action. Have it format the draft or make sure the line spacing is even while you get on with the real work of making the book come to life. Look at all the characters from an emotional perspective rather than a factual one. Instead of asking what the character does for a living, for example, ask how she feels about it. Does she love her work or hate it? Or, even better, does she love it even while she is going broke because it doesn’t cover all her bills? There we go… the heart has been engaged. Read through anything you write a first time to see if it makes sense, if it is logical. This is an essential part of revision; however, you’re not finished once you’ve determined that everything makes sense.
The review that puts a hand on your heart and asks if you are feeling anything when you are reading it is the one that will make the difference between a fine book that people forget about quickly and one that they text their friends in the middle of the night to tell them that they absolutely must read it. I recently read a book twice in a row because I connected so completely to the emotional level of the story, even though I found a couple of factual errors that could have been easily avoided. A reader will forgive small breaks in logic, but she won’t forgive a lack of feeling. So, as you consider your writing this month, make sure you know where your heart stands on the way it’s going. As writers, our ability with language makes us susceptible to staying in our heads, but I hope I’ve convinced you that the key to a masterpiece is in the heart. Originally published at https://thewildword.com on February 27, 2020.
https://thewildwordmagazine.medium.com/write-first-with-your-head-then-revise-with-caroline-donahue-95cd1f784109
['The Wild Word Magazine']
2020-03-06 16:27:52.294000+00:00
['Writing Tips', 'Creativity', 'Writing']
Title Write First Head Revise HeartContent Caroline Donahue — BOOK DOCTOR long January feel February seems flash past quick five minute midst zooming month begun revising novel far daunting step reading first draft several time considering whether structure really working completed waded scene took year write make judgement whether Heidi Klum used say Project Runway “In” “Out” struck realization first draft later one entirely different intention first draft head need understand lot thing writing first draft clear may write many section book many time first draft sake clarity think first draft first time you’ve written piece way end able type END delete immediately afterward first draft understand happens book true fiction well memoir Anything narrative writing first draft understand scope know part story player like span life witness reader lot decision make first draft head work hard make First person third Multiple perspective Multiple timeline chronological Past tense increasingly trendy present first draft often accompanied list decision make even assigned client sort list get lost weed pushed boat away dock floating middle water far away shore revision come understand second draft longer draft head know happens involved tense POV span time covered new draft friend heart get involved simply tell story point point without emotion atmosphere without engaging sens you’ll never get reader’s heart editor mentor said gave end first draft foundation structure book much like frame roof house need connect electricity decorate book reader feel home need start rereading you’ve written feeling front Try distract head task keep away action format draft make sure line spacing even get real work making book come life Look character emotional perspective rather factual one Instead asking character living example ask feel love work hate even better love even going broke doesn’t cover bill go…the heart engaged Read anything write first time see make sense logical essential part revision however you’re finished you’ve determined everything make sense review put hand heart asks feeling anything reading one make difference fine book people forget quickly one text friend middle night tell absolutely must read recently read book twice row connected completely emotional level story even though found couple factual error could easily avoided reader forgive small break logic won’t forgive lack feeling consider writing month make sure know heart stand way it’s going writer ability language make u susceptible staying head hope I’ve convinced key masterpiece heart Originally published httpsthewildwordcom February 27 2020Tags Writing Tips Creativity Writing
3,992
3 Web Technologies Killed by Google
AngularJS AngularJS is perhaps the first relevant JavaScript framework to appear. It was released by Google in 2010 — at a time when the most prominent JavaScript library was jQuery. Instead of just a library like jQuery, AngularJS, also known as Angular 1, is a whole framework that brought the MVVM concept to the world of front-end development. In 2016, the Angular we know today was released. According to Wappalyzer, many large websites still use AngularJS for their front-end — but support will be discontinued next year. The technology behind AngularJS is simply outdated, because modern frameworks like React, Vue, and Angular all use a CLI by now. This allows us to write code in, for example, React.js that would not work in a browser — in React’s case, it is the JSX syntax that is converted by the CLI into classic JS & HTML for the production version. AngularJS, on the other hand, reminds us very much of Vue.js when we use it without a CLI. Instead of converting the code we write for production, we write everything directly in our HTML and JS files. This includes the so-called directives, which we implement as HTML attributes: data-ng-repeat="item in items" Without the JavaScript code provided by AngularJS, the browser could not do anything with these attributes — a classic example of client-side rendering. But the trend is more and more towards server-side rendering and static pages, where our JavaScript data structures are converted to HTML that can be rendered in the browser. Where Angular has the so-called Angular Universal to render a page on the server side, for AngularJS this possibility seems to be missing. Working without a CLI and simply importing the library over a CDN and writing code like jQuery is not that complicated. Still, CLIs have become an integral part of the developer community — regardless of the framework or library, because it makes sense to have TypeScript, linting, and transcompiling support. Without a CLI, however, this is virtually unthinkable. As of December 2021, long-term support for AngularJS will end.
https://medium.com/javascript-in-plain-english/killed-by-google-aa2c71c324cf
['Louis Petrik']
2020-11-15 12:27:39.013000+00:00
['Web Framework', 'Software Development', 'Google', 'Web Development', 'Cloud Computing']
Title 3 Web Technologies Killed GoogleContent AngularJS AngularJS perhaps first relevant JavaScript framework appear released Google 2010 — time prominent JavaScript library jQuery Instead library like jQuery AngularJS also known Angular 1 whole framework brought MVVM concept world frontend development 2016 Angular know today released According Wappalyzer many large website still use AngularJS frontend — support discontinued next year technology behind AngularJS simply outdated — modern framework like React Vue Angular use CLI allows u write code example Reactjs would work browser — Reacts case JSX syntax converted CLI classic JS HTML production version AngularJS hand reminds u much Vuejs use without CLI Instead converting code write production write everything directly HTML JS file socalled directive implement HTML attribute datangrepeat item item Without JavaScript code provided AngularJS browser could anything attribute — classic example clientside rendering trend towards serversiderendering static page JavaScript data structure converted HTML rendered browser Angular socalled Angular Universal render page serverside AngularJS possibility seems missing Working without CLI simply importing library CDN writing code like jQuery complicated Still CLIs become integral part developer community — regardless framework library make sense TypeScript Linting transcompiling support Without CLI however virtually unthinkable December 2021 AngularJS stop long term supportTags Web Framework Software Development Google Web Development Cloud Computing
3,993
When You Feel like You Can’t Live up to Your Own Writing
When You Feel like You Can’t Live up to Your Own Writing Late night self-reflections of a writer on book deadline A page from my journal that night. I was rolling around in bed at 2 am. My mind was racing with outstanding tasks, unwritten paragraphs, and self-doubt. Together with my co-author John Fitch I’m in the final stages of writing a book about the importance of Time Off. And we are on a rapidly approaching deadline to hand the draft over to our editor. Many authors I talked to in the past told me that they are a mental mess in the days before handing off their manuscript, but I hoped I’d be immune to this. Especially given the topic we are writing about. But there I was, unable to sleep because my mind couldn’t stop worrying about the book. And at a deeper layer, worrying about the worrying. So I did the only thing I could think of at the moment to calm down my mind. I got up and put my thoughts to paper, writing the following words in my notebook to reassure myself of what I am doing. It helped me a lot. Maybe if you are in a similar situation, it can help you too.
https://maxfrenzel.medium.com/when-you-feel-like-you-cant-live-up-to-your-own-writing-93e1d94f0cf6
['Max Frenzel']
2019-10-25 01:28:29.617000+00:00
['Personal Growth', 'Self', 'Creativity', 'Journaling', 'Writing']
Title Feel like Can’t Live WritingContent Feel like Can’t Live Writing Late night selfreflections writer book deadline page journal night rolling around bed 2 mind racing outstanding task unwritten paragraph selfdoubt Together coauthor John Fitch I’m final stage writing book importance Time rapidly approaching deadline hand draft editor Many author talked past told mental mess day handing manuscript hoped I’d immune Especially given topic writing unable sleep mind couldn’t stop worrying book deeper layer worrying worrying thing could think moment calm mind got put thought paper writing following word notebook reassure helped lot Maybe similar situation help tooTags Personal Growth Self Creativity Journaling Writing
3,994
Data Exploration and Analysis Using Python
Data Exploration and Analysis Using Python Simple ways to make your data talk

Data exploration is a key aspect of data analysis and model building. Without spending significant time on understanding the data and its patterns, one cannot expect to build efficient predictive models. Data exploration takes a major chunk of time in a data science project, comprising data cleaning and preprocessing. In this article, I will explain the various steps involved in data exploration through simple explanations and Python code snippets. The key steps involved in data exploration are:

> Load data
> Identify variables
> Variable analysis
> Handling missing values
> Handling outliers
> Feature engineering

Load data and Identify variables: Data sources can vary from databases to websites. Data sourced is known as raw data. Raw data cannot be directly used for model building, as it will be inconsistent and not suitable for prediction. It has to be treated for anomalies and missing values. Variables can be of different types such as character, numeric, categorical, and continuous.

Variable Type

Identifying the predictor and target variable is also a key step in model building. Target is the dependent variable and predictor is the independent variable based on which the prediction is made. Categorical or discrete variables are those that cannot be mathematically manipulated. They are made up of fixed values such as 0 and 1. On the other hand, continuous variables can be interpreted using mathematical functions like finding the average or sum of all values. You can use a series of Python commands to understand the types of variables in your dataset.

#Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

#Load the data
titan=pd.read_csv("../input/titan.csv")

#get an overview of the data
titan.head()
titan.tail()
titan.sample(10)

#identify variable type
titan.dtypes
titan.info()
titan.describe()

Variable Analysis: Variable analysis can be done in three ways: univariate analysis, bivariate analysis, and multivariate analysis.

Variable Analysis

Univariate analysis is used to highlight missing and outlier values. Here each variable is analysed on its own for range and distribution. Univariate analysis differs for categorical and continuous variables. For categorical variables, you can use a frequency table to understand the distribution of each category. For continuous variables, you have to understand the central tendency and spread of the variable. It can be measured using mean, median, mode, etc. It can be visualized using a box plot or histogram.

#Understand various summary statistics of the data
include =['object', 'float', 'int']
titan.describe(include=include)
titan.describe()

#Get count of values in a categorical variable
titan.survived.value_counts()
titan.age.hist(figsize=(10,5))

Histogram

Bivariate Analysis is used to find the relationship between two variables. Analysis can be performed for a combination of categorical and continuous variables. A scatter plot is suitable for analyzing two continuous variables. It indicates the linear or non-linear relationship between the variables. Bar charts help to understand the relation between two categorical variables. Certain statistical tests are also used to effectively understand the bivariate relationship; the SciPy library has extensive modules for performing these tests in Python, as the sketch below illustrates.
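As a minimal sketch (not from the original article) of such a test, here is a chi-square test of independence between two categorical columns of the same Titanic-style titan DataFrame used above; the column names are taken from the snippets in this article.

#Chi-square test of independence between two categorical variables
import pandas as pd
from scipy import stats

#assumes titan has already been loaded as shown earlier
contingency = pd.crosstab(titan['survived'], titan['pclass'])

#returns the test statistic, p-value, degrees of freedom and expected frequencies
chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(chi2, p_value, dof)

#a small p-value suggests survival and passenger class are not independent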
Bivariate Analysis

Matplotlib and Seaborn libraries can be used to plot different relational graphs that help visualize the bivariate relationship between different types of variables.

Scatter Plot

iris = sns.load_dataset("iris")
sns.relplot(x = 'sepal_length', y = 'petal_length', hue='species',data = iris)

relplot = sns.catplot(x="pclass", hue="who", col="survived", data=titan, kind="count", height=4, aspect=.7);
relplot

Handling Missing Values: Missing values in the dataset can reduce model fit. They can lead to a biased model as the data cannot be analysed completely. Behavior and relationships with other variables cannot be deduced correctly. They can lead to wrong predictions or classifications. Missing values may occur due to problems in data extraction or data collection, which can be categorized as MCAR, MAR, and NMAR.

Missing Values

Missing values can be treated by deletion, mean/mode/median imputation, KNN imputation, or using prediction models.

Handling Missing Values

You can visually analyse the missing data using a library called Missingno in Python.

import missingno as msno
msno.bar(titan)
msno.heatmap(titan)

np.mean(titan['age'])
from scipy import stats
stats.mode(titan['embarked'])

#make a working copy before imputing (defined here so the snippet runs on its own)
titancopy = titan.copy()
titancopy['age'].fillna(29,inplace=True)
titancopy['embarked'].fillna("S", inplace=True)

Handling Outliers: Outliers can occur naturally in the data or can be due to data entry errors. They can drastically change the results of the data analysis and statistical modeling. Outliers are easily detected by visualization methods, like box plot, histogram, and scatter plot. Outliers are handled like missing values by deleting observations, transforming them, binning or grouping them, treating them as a separate group, or imputing values.

Box Plot

import plotly.express as px
fig = px.box(titan,x='survived',y='age', color='pclass')
fig.show()

px.box(titan, y='age')
px.box(titan,x='survived',y='fare', color='pclass')

#Adding trendline to the data
x=iris.sepal_length
y=iris.petal_width
plt.scatter(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x,p(x),"y--")
plt.show()

Feature Engineering: Feature engineering is the process of extracting more information from existing data. Feature selection can also be part of it. Two common techniques of feature engineering are variable transformation and variable creation. In variable transformation, an existing variable is transformed using certain functions. For example, a number can be replaced by its logarithmic value. Another technique is to create a new variable from an existing variable. For example, breaking a date field in the format dd/mm/yy into date, month, and year columns.

Variable Transformation

titancopy = titan.copy()

#variable transformation
titancopy['alive'].replace({'no':0,'yes':1}, inplace=True)

#Convert boolean to integer
titancopy["alone"]=titancopy["alone"].astype(int)

Two other data transformation techniques are encoding categorical variables and scaling continuous variables to normalize the data. This depends on the model that is used for evaluation, as some models accept categorical variables. Irrelevant features can decrease the accuracy of the model. Feature selection can be done automatically or manually. A correlation matrix is used to visualize how the features are related to each other or to the target variable.
Correlation Matrix

titancopy.corr()

plt.figure(figsize=(10,10))
corr = titan.corr()
ax = sns.heatmap(
    corr,
    vmin=-1, vmax=1, center=0,
    cmap=sns.diverging_palette(20, 220, n=200),
    square=True,
    annot=True
)
ax.set_xticklabels(
    ax.get_xticklabels(),
    rotation=45,
    horizontalalignment='right'
)
ax.set_yticklabels(
    ax.get_yticklabels(),
    rotation=45,
);

The scikit-learn library provides a few good classes such as SelectKBest to select a specific number of features from the given dataset. The tree-based classifiers in the same library can be used to get feature importance scores. This covers some of the key steps involved in data exploration. Each of these steps can be reiterated depending on the size of the data and the requirements of the model. Data scientists spend the maximum amount of time in data preprocessing as data quality directly impacts the success of the model. All the code snippets shown here are executed in the Exploratory Data Analysis and Visualization Kaggle notebook.
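As a follow-up sketch (not part of the original notebook), the two feature-selection ideas mentioned above can look roughly like this; the chosen predictor columns are assumptions based on the titan DataFrame used throughout the article.

#Univariate selection with SelectKBest and tree-based importance scores
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import ExtraTreesClassifier

#a few numeric predictors and the target, with missing rows dropped for simplicity
data = titan[['pclass', 'age', 'fare', 'survived']].dropna()
X = data.drop(columns='survived')
y = data['survived']

#score each feature against the target and keep the best k
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
print(dict(zip(X.columns, selector.scores_)))

#feature importance scores from a tree-based ensemble classifier
model = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print(dict(zip(X.columns, model.feature_importances_)))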
https://towardsdatascience.com/data-exploration-and-analysis-using-python-e564473d7607
['Raji Rai']
2020-06-12 18:54:51.038000+00:00
['Python', 'Data Analysis', 'Data Science', 'Data Visualization', 'Data Exploration']
Title Data Exploration Analysis Using PythonContent Data Exploration Analysis Using Python Simple way make data talk Data exploration key aspect data analysis model building Without spending significant time understanding data pattern one cannot expect build efficient predictive model Data exploration take major chunk time data science project comprising data cleaning preprocessing article explain various step involved data exploration simple explanation Python code snippet key step involved data exploration Load data Identify variable Variable analysis Handling missing value Handling outlier Feature engineering Load data Identify variable Data source vary database website Data sourced known raw data Raw data cannot directly used model building inconsistent suitable prediction treated anomaly missing value Variable different type character numeric categorical continuous Variable Type Identifying predictor target variable also key step model building Target dependent variable predictor independent variable based prediction made Categorical discrete variable cannot mathematically manipulated made fixed value 0 1 hand continuous variable interpreted using mathematical function like finding average sum value use series Python code understand type variable dataset Import required library import panda pd import numpy np import matplotlibpyplot plt import seaborn sn Load data titanpdreadcsvinputtitancsv get overview data titanhead titantail titansample10 identify variable type titandtypes titaninfo titandescribe Variable Analysis Variable analysis done three way univariate analysis bivariate analysis multivariate analysis Variable Analysis Univariate analysis used highlight missing outlier value variable analysed range distribution Univariate analysis differs categorical continuous variable categorical variable use frequency table understand distribution category continuous variable understand central tendency spread variable measured using mean median mode etc visualized using box plot histogram Understand various summary statistic data include object float int titandescribeincludeinclude titandescribe Get count value categorical variable titansurvivedvaluecounts titanagehistfigsize105 Histogram Bivariate Analysis used find relationship two variable Analysis performed combination categorical continuous variable Scatter plot suitable analyzing two continuous variable indicates linear nonlinear relationship variable Bar chart help understand relation two categorical variable Certain statistical test also used effectively understand bivariate relationship Scipy library extensive module performing test Python Bivariate Analysis Matplotlib Seaborn library used plot different relational graph help visualizing bivariate relationship different type variable Scatter Plot iris snsloaddatasetiris snsrelplotx sepallength petallength huespeciesdata iris relplot snscatplotxpclass huewho colsurvived datatitan kindcount height4 aspect7 relplot Handling Missing Values Missing value dataset reduce model fit lead biased model data cannot analysed completely Behavior relationship variable cannot deduced correctly lead wrong prediction classification Missing value may occur due problem data extraction data collection categorized MCAR MAR NMAR Missing Values Missing value treated deletion meanmodemedian imputation KNN imputation using prediction model Handling Missing Values visually analyse missing data using library called Missingno Python import missingno msno msnobartitan msnoheatmaptitan npmeantitanage scipy 
import stats statsmodetitanembarked titancopyagefillna29inplaceTrue titancopyembarkedfillnaS inplaceTrue Handling Outliers Outliers occur naturally data due data entry error drastically change result data analysis statistical modeling Outliers easily detected visualization method like boxplot histogram scatter plot Outliers handled like missing value deleting observation transforming binning grouping treating separate group imputing value Box Plot import plotlyexpress px fig pxboxtitanxsurvivedyage colorpclass figshow pxboxtitan yage pxboxtitanxsurvivedyfare colorpclass Adding trendline data xirissepallength yirispetalwidth pltscatterx z nppolyfitx 1 p nppoly1dz pltplotxpxy pltshow Feature Engineering Feature engineering process extracting information existing data Feature selection also part Two common technique feature engineering variable transformation variable creation variable transformation existing variable transformed using certain function example number replaced logarithmic value Another technique create new variable existing variable example breaking date field format ddmmyy date month year column Variable Transformation titancopy titancopy variable transformation titancopyalivereplaceno0yes1 inplaceTrue Convert boolean integer titancopyalonetitancopyaloneastypeint Two data transformation technique encoding categorical variable scaling continuous variable normalize data depends model used evaluation model accept categorical variable Irrelevant feature decrease accuracy model Feature selection done automatically manually correlation matrix used visualize feature related target variable Correlation Matrix titancopycorr pltfigurefigsize1010 corr titancorr ax snsheatmap corr vmin1 vmax1 center0 cmapsnsdivergingpalette20 220 n200 squareTrue annotTrue axsetxticklabels axgetxticklabels rotation45 horizontalalignmentright axsetyticklabels axgetyticklabels rotation45 scikitlearn library provides good class SelectBest select specific number feature given dataset treebased classifier library used get feature importance score cover key step involved data exploration step reiterated depending size data requirement model Data scientist spend maximum amount time data preprocessing data quality directly impact success model code snippet shown executed Exploratory Data Analysis Visualization Kaggle notebookTags Python Data Analysis Data Science Data Visualization Data Exploration
3,995
The One Year Plan For Cracking Coding Interviews
The One Year Plan For Cracking Coding Interviews About my hustle before cracking interviews. It took me one year to go from a noob programmer to someone decent enough to crack coding interviews for getting internships and gaining experience. I still have a long way to go, but the first step to being a good programmer is working in the real world and getting experience, which can be best gained by internships. And if you want an internship, you have to crack the interview first. Which brings us to this blog. Photo by Jordan Whitfield on Unsplash I have broken down my one-year plan, which I diligently followed, and will hopefully help you with your planning if you are in the starting stage. Prerequisite: Knowing the basics and syntax of one programming language. Most students tend to know Java, C, or Python from their colleges/highschools. You can stick to the one you are comfortable with from these three, but if C is your preferred language, I would recommend you to switch to C++. My first language was C, which made me switch to C++. I learned Java on the side, enjoyed it more, and decided to practice competitive coding in Java, and so every interview I have ever cracked was by using Java. I had zero experience in python, but after joining Facebook, all of the code I have written as an intern is in Python. So my point is, there is no superior language amongst these three, try not to worry about which one to choose. Just pick one, crack interviews in that one, and you can learn the rest on the go depending on where you get placed. Here’s the plan: The month-specific blogs that are released so far have been linked below, and the rest are coming soon. Month 1: Big O, Arrays and Strings: Read it here Month 2: Linked Lists: Read it here Month 3: Stacks and Queues: Read it here Month 4: Trees and Tries: Read it here Month 5: Hashmap, Dictionary, HashSet Month 5: Graphs Month 6: Recursion and Dynamic Programming Month 7: Sorting and Searching Month 8: Reading(about system design, scalability, PM questions, OS, threads, locks, security basics, garbage collection, etc. basically expanding your knowledge in whatever field required, depending on your target role) Month 9, 10, 11, 12: A mix of medium and hard questions in your preferred website. Practice by participating in contests, focusing on topics that you are weak at, mock interviews, etc. Source — forbes.com Here’s how I approach every topic in each month — Let’s say you are in month 4, and focusing on trees. You need to first understand what trees are, different types of trees, and be able to define class Node and Tree. You then need to be able to perform basic operations like adding, finding, and deleting an element, pre-order, in-order, post-order, and level-by-level traversal. Lastly, you practice different tree questions available on Hackerrank, Leetcode, or a website of your choice. You should target the easy questions first, and once you are comfortable, move on to medium and hard. The last 4 months are for solving a mix of different questions, via contests or otherwise, which is necessary because when you are practicing tree questions, you know you have to use a tree. But if you are given a random question, how will you know a tree would be the best approach? Also, always look for the most optimal solution in forums after solving it yourself. 
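To make the month-4 material above concrete, here is a minimal sketch of a binary tree with the four traversals mentioned. I practiced in Java, so treat this Python version and its names as purely illustrative.

from collections import deque

class Node:
    #a binary tree node with a value and optional left/right children
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def preorder(node):
    #root, left, right
    if not node:
        return []
    return [node.val] + preorder(node.left) + preorder(node.right)

def inorder(node):
    #left, root, right
    if not node:
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)

def postorder(node):
    #left, right, root
    if not node:
        return []
    return postorder(node.left) + postorder(node.right) + [node.val]

def level_order(root):
    #level-by-level traversal using a queue (BFS)
    result, queue = [], deque([root] if root else [])
    while queue:
        node = queue.popleft()
        result.append(node.val)
        queue.extend(child for child in (node.left, node.right) if child)
    return result

#tiny example tree: 1 at the root, 2 and 3 as children
root = Node(1, Node(2), Node(3))
print(preorder(root), inorder(root), postorder(root), level_order(root))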
You have an entire month, and if you manage to dedicate 40–70 hours a week, you’ll be able to master trees in such a way that if a tree question is thrown at you in an interview, you’ll be able to mostly solve it since you trained your mind to think that way with intense practice. If you are a student, dedicating this much time is definitely doable, even with side projects, homework, etc. Your grades might take a hit (my As became Bs in that one semester (months 9, 10, 11, 12) when I was dedicating over 8 hours a day to competitive coding) but it was worth it. You should also try to build projects or do research on the side while preparing. Some people learn better by participating in contests on CodeForces, CodeChef, etc. while others prefer practicing questions. Again, there is no benefit of one over the other; do what you personally prefer. I do not believe in practicing particular topics for a particular company; some websites claim to have a set of questions dedicated to a particular company, e.g., cracking the Google interview. I think the goal should be to be a better developer overall; focusing on just a few topics that Google tends to test candidates on may not be the best way to follow. Interviewers also judge you based on your LinkedIn, resume, past experiences, courses taken, GitHub, degrees and certifications, projects, research papers, etc. Practicing competitive coding does not guarantee a job, but it does guarantee you’ll be able to crack technical interview rounds most of the time, and you’ll also be a better developer overall, which might help you when you build projects. Lastly, don’t stop. It may seem easy at first when you are motivated, but that fuel dies in a month or so. Keep your goal in mind. Of course, it’s going to be hard, but the only ones who make it are those who stick to the plan. You can edit the plan if you need to, but once done, stick to it, even on your lazy days, even when you have a college fest or a party to attend, even when you are sleepy. Like I said, the ones who succeed are the ones who *stick to the plan*. This sums up my schedule at a high level. I plan on digging deep, and my next blog will only focus on month 1 (Big O, Arrays and Strings), the one after that will be month 2, and so on. I hope this was helpful, let me know if you want me to also write about any other topic on the side, or if you have any queries. I’d appreciate it if you could ask your questions on Instagram since I prefer to keep LinkedIn for professional opportunities, but either is fine. Thanks! Signing off! Anjali Viramgama Incoming Software Developer at Microsoft LinkedIn | Instagram
https://towardsdatascience.com/the-one-year-plan-for-competitive-coding-6af53f2f719c
['Anjali Viramgama']
2020-12-13 21:58:57.485000+00:00
['Competitive Programming', 'Google', 'Facebook', 'Coding', 'Technology']
Title One Year Plan Cracking Coding InterviewsContent One Year Plan Cracking Coding Interviews hustle cracking interview took one year go noob programmer someone decent enough crack coding interview getting internship gaining experience still long way go first step good programmer working real world getting experience best gained internship want internship crack interview first brings u blog Photo Jordan Whitfield Unsplash broken oneyear plan diligently followed hopefully help planning starting stage Prerequisite Knowing basic syntax one programming language student tend know Java C Python collegeshighschools stick one comfortable three C preferred language would recommend switch C first language C made switch C learned Java side enjoyed decided practice competitive coding Java every interview ever cracked using Java zero experience python joining Facebook code written intern Python point superior language amongst three try worry one choose pick one crack interview one learn rest go depending get placed Here’s plan monthspecific blog released far linked rest coming soon Month 1 Big Arrays Strings Read Month 2 Linked Lists Read Month 3 Stacks Queues Read Month 4 Trees Tries Read Month 5 Hashmap Dictionary HashSet Month 5 Graphs Month 6 Recursion Dynamic Programming Month 7 Sorting Searching Month 8 Readingabout system design scalability PM question OS thread lock security basic garbage collection etc basically expanding knowledge whatever field required depending target role Month 9 10 11 12 mix medium hard question preferred website Practice participating contest focusing topic weak mock interview etc Source — forbescom Here’s approach every topic month — Let’s say month 4 focusing tree need first understand tree different type tree able define class Node Tree need able perform basic operation like adding finding deleting element preorder inorder postorder levelbylevel traversal Lastly practice different tree question available Hackerrank Leetcode website choice target easy question first comfortable move medium hard last 4 month solving mix different question via contest otherwise necessary practicing tree question know use tree given random question know tree would best approach Also always look optimal solution forum solving entire month manage dedicate 40–70 hour week you’ll able master tree way tree question thrown interview you’ll able mostly solve since trained mind think way intense practice student dedicating much time definitely doable even side project homework etc grade might take hit became Bs one semestermonth 910 11 12 dedicating 8 hour day competitive coding worth also try build project research side preparing people learn better participating contest CodeForces CodeChef etc others prefer practicing question benefit one personally prefer believe practicing particular topic particular company website claim set question dedicated particular company eg cracking Google interview think goal better developer overall focusing topic Google tends test candidate may best way follow Interviewers also judge based LinkedIn Resume past experience course taken Github degree certification project research paper etc Practicing competitive coding guarantee job guarantee you’ll able crack technical interview round time you’ll also better developer overall might help build project Lastly don’t stop may seem easy first motivated fuel dy month Keep goal mind course it’s going hard one make stick plan edit plan need done stick even lazy day even college fest party attend even sleepy Like said one 
succeed one stick plan sum schedule high level plan digging deep next blog focus month 1Big Arrays string one month 2 hope helpful let know want also write topic side query I’d appreciate could ask question Instagram since prefer keep LinkedIn professional opportunity either fine Thanks Signing Anjali Viramgama Incoming Software Developer Microsoft LinkedIn InstagramTags Competitive Programming Google Facebook Coding Technology
3,996
How Facebook and Google uses Machine Learning at their best
“Machine learning will automate jobs that most people thought could only be done by people.” ~Dave Waters Hello everyone , so today I will like to tell you how the most famous companies Facebook and Google uses Machine Learning at their best to ease their tasks and do cool stuffs that earlier was thought to be impossible. What is Machine Learning? Have you ever wondered that how we learn or how from our birth our brain learns each and every bit, be it our parents and friend’s faces or riding bicycles or learning mathematical formulas etc. It is because our brain observes whatever we see and makes patterns (which we call as experiences )and by analysing these patterns it predicts and makes further decisions. Due to this decision making power of our brain unlike computers the developers thought that why not we give prediction and decision making power to the computers which would add extra stars to the computers speed ie now computers can take decisions and do predictions as fast as they do calculations. So to achieve this they brought the concept of Machine learning which is to make some programs that based on the data provided to the computers (similar to experiences of our brain) can predict results or target values. In this way these programs can basically help machines learn. “Machine intelligence is the last invention that humanity will ever need to make.” ~Nick Bostrom Now talking about the companies, Google and Facebook took a great advantage of the Machine Learning and not only reduced their work load that was earlier done by humans but also proved to be the smartest and most lucrative and innovative companies for the clients. How Google uses Machine Learning? Google has declared itself a machine learning-first company. Google is the master of all. It takes advantage of machine learning algorithms and provides customers with a valuable and personalized experience. Machine learning is already embedded in its services like Gmail, Google Search and Google Maps. Google services, for example, the image search and translation tools use sophisticated machine learning. This allows the computer to see, listen and speak in much the same way as humans do. Much wow! Gmail As you all know that our social, promotional and primary mails are separated in different boxes . This is filtered through Google as it labels the email accordingly. This is where machine learning plays a crucial part. The user intervention is used to tune to its threshold and when a user marks a message in a consistent direction, Gmail itself performs a real-time increment to its threshold and that’s how Gmail learns for future and later uses those results for categorization. Smart replies: This is really a smart move made by Google. Now, with the help of this feature, you can reply instantly in a second. With the suggested replies given by Gmail. ‘Smart Replies’ and ‘Smart Compose’ are indeed the best products that Google has given to its customers. These are powered by machine learning and will offer suggestions as you type. This is also a major reason why Google stands as one of the leading companies today. Also, it is not just in English. It will bring support in four new languages: Spanish, French, Italian and Portuguese. Google Search and Google Maps This also employs machine learning and while you start typing in the search box it automatically anticipates what you are looking for. It then provides suggested search terms for the same. 
These suggestions are showcased because of past searches (recommendations), trends (what everyone is looking for), or your present location. For example, bus traffic delays: hundreds of major cities around the world, thousands of people traveling, and one machine that is learning and informing. Google gets all the real-time data on bus locations and forecasts it in a jiffy. So, now you don’t have to wait long hours for your bus. With the combination of time, distance traveled, and individual events as datasets, it is now possible for Google to provide predictions. Now, there is no need to rely on bus schedules provided by public transportation agencies. With the help of your location, day of the week, and time of day, your estimated time of arrival (ETA) can be understood. The best invention ever made for students is Google Search, and no one will deny it. Google Search and Google Maps use machine learning too and help people in their day-to-day tasks. You can check the awesomeness of Google's machine learning by just going to Google and typing or speaking just "weather"; without asking anything else, it will automatically show you the whole weather report for your area. This is because of machine learning.

Google Assistant

It helps one with everyday tasks, be it household chores or a deal worth crores. The Google Assistant makes it easy for you to search for nearby restaurants when it’s raining heavily, helps you to buy movie tickets while on the go, and finds the nearest theatre from your place. Also, it helps you to navigate to the theater. In short, you don’t have to worry when you have a smartphone, because Google takes care of everything. This is all done due to the strong machine learning algorithms used by Google.

Google Translate

The world is migrating. Leaving the rest of the world aside, at least 24 languages are spoken in India itself, with over 13 different scripts and 720 dialects. If we talk about the whole world, there are roughly 6,500 spoken languages today. Can’t thank Google enough, because we’ve all used Google Translate at some point (I hope you travel a lot too). The best part is that it’s free, fast, and generally accurate. Its translation of words, sentences, and paragraphs has helped many to decode and understand. It is true that it is not 100% accurate when it comes to larger blocks of text or for some languages, but it can provide people with a general meaning to make the understanding less complex. All this is possible because of Statistical Machine Translation (SMT). So, no matter how much you hate mathematics or statistics, you will have to thank and love it. This is a process where computers analyze millions of existing translated documents from the web to learn vocabulary and look for patterns in a language. After that, Google translates it. It then picks the most statistically probable translation when asked to translate a new bit of text.

Speech Recognition: Ok, Google.

The speech recognition feature enables the user to convert audio to text by applying powerful neural network models in an easy-to-use API. Currently, the API recognizes 120 languages and their variants to support the global user base. Through this, voice command-and-control can be enabled and audio can be transcribed from call centers. Also, real-time data can be processed. From streaming to prerecorded audio, speech recognition has mastered it all, and all credit can be given to Google’s machine learning technology.
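To make one of the ideas above tangible, here is a deliberately tiny, hypothetical sketch of ranking search suggestions by how often a query was searched before. It only illustrates the concept; it is not how Google Search actually works, and the queries and counts are made up.

#toy autocomplete: rank past queries that match the typed prefix by frequency
from collections import Counter

past_searches = Counter({
    "weather today": 42,
    "weather tomorrow": 17,
    "web development": 9,
    "bus schedule": 30,
})

def suggest(prefix, k=3):
    #keep past queries that start with the prefix, most frequent first
    matches = {q: n for q, n in past_searches.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(suggest("we"))
#['weather today', 'weather tomorrow', 'web development']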
Reverse Image Search Google’s Search by Image is a feature that uses reverse image search and allows users to search for related images just by uploading an image or image URL. Google accomplishes this by analyzing the submitted picture and constructing a mathematical model of it using advanced algorithms. It is then compared with billions of other images in Google’s databases before returning matching and similar results. When available, Google also uses metadata about the image such as description. Reverse image search is a content-based image retrieval (CBIR) query technique that involves providing the CBIR system with a sample image that it will then base its search upon; in terms of information retrieval, the sample image is what formulates a search query. Image search creates categories that you might be looking for. With the image search, it becomes easy to search for similar images. It also helps to find the websites that contain these images and the other sizes of the picture you searched with. Google Adsense With the help of machine learning, Google keeps track of the users’ search history. With the help of that history, it recommends the advertisement to the user as now its aware of its target market. It’s heavily based on the search history data and machine learning helps Google to achieve this. It created a win-win situation. With Google AdSense, the website owners earn money from their online content and AdSense works by matching text and display ads to the site based on the content and the visitors. There are many more examples such as Google Music, Google Photos ,Google Adwords etc. which makes great use of Machine Learning and that is the reason that Google has the most number of users in most of the field as it not only makes our work easier but also solves the problem smartly. How Facebook uses Machine Learning? Machine Learning is the vital aspect of Facebook. It would not even be possible to handle 2.4 billion users while providing them the best service without using Machine Learning! Let’s take an example. It is mind-boggling how Facebook can guess the people you might be familiar with in real life using “People You May Know”. And they are right most of the time!!! Well, this magical effect is achieved by using Machine Learning algorithms that analyze your profile, your interests, your current friends and also their friends and various other factors to calculate the people you might potentially know. That’s only one example ,other aspects are the Facebook News Feed, Facial Recognition system, Targeted Advertising on your page, etc. which we would look below. Facial Recognition Facial Recognition is among the many wonders of Machine Learning on Facebook. It might be trivial for you to recognize your friends on social media (even under that thick layer of makeup!!!) but how does Facebook manage it? Well, if you have your “tag suggestions” or “face recognition” turned on in Facebook (this means you have provided permission for Facial Recognition), then the Machine Learning System analyses the pixels of the face in the image and creates a template which is basically a string of numbers. But this template is unique for every face (sort of a facial fingerprint!) and can be used to detect that face again in another face and suggest a tag. So now the question is, What is the use of enabling Facial Recognition on Facebook? 
Well, in case any newly uploaded photo or video on Facebook includes your face but you haven’t been tagged, the Facial Recognition algorithm can recognize your template and send you a notification. Also, if another user tries to upload your picture as their Facebook profile picture (maybe to get more popular!), then you can be notified immediately. Facial Recognition in conjugation with other accessibility options can also inform people with visual impairments if they are in a photo or video. Textual Analysis While you may believe photos are the most important on Facebook (especially your photos!), the text is equally as important. And there is a lot of text on Facebook!!! To understand and manage this text in the correct manner, Facebook uses DeepText which is a text engine based on deep learning that can understand thousands of posts in a second in more than 20 languages with as much accuracy as you can! But understanding a language-based text is not that easy as you think! In order to truly understand the text, DeepText has to understand many things like grammar, idioms, slang words, context, etc. For example: If there is a sentence “I love Apple” in a post, then does the writer mean the fruit or the company? Most probably it is the company (Except for Android users!) but it really depends on the context and DeepText has to learn this. Because of these complexities, and that too in multiple languages, DeepText uses Deep Learning and therefore it handles labeled data much more efficiently than traditional Natural Language Processing models. Targeted Advertising Did you just shop for some great clothes at Myntra and then saw their ads on your Facebook page? Or did you just like a post by Lakme and then magically see their ad also? Well, this magic is done using deep neural networks that analyze your age, gender, location, page likes, interests, and even your mobile data to profile you into select categories and then show you ads specifically targeted towards these categories. Facebook also partners with different data collection companies like Epsilon, Acxiom, Datalogix, BlueKai, etc. and also uses their data about you to accurately profile you. For Example, Suppose that the data collected from your online interests, field of study, shopping history, restaurant choices, etc. profiles you in the category of young fashionista according to the Facebook deep neural networks algorithm. Then the ads you are shown will likely cater to this category so that you get the most relevant and useful ads that you are most likely to click. (So that Facebook generates more revenue of course!) In this way, Facebook hopes to maintain a competitive edge against other high-tech companies like Google who is also fighting to obtain our short attention spans!!! Language Translation Facebook is less a social networking site and more a worldwide obsession! There are people all over the world that use Facebook but many of them also don’t know English. So what should you do if you want to use Facebook but you only know Hindi? Never fear! Facebook has an in-house translator that simply converts the text from one language to another by clicking the “See Translation” button. And in case you wonder how it translates more or less accurately, well Facebook Translator uses Machine Learning of course! 
The first click on the “See Translation” button for some text (suppose it’s Beyonce’s posts) sends a translation request to the server, and then that translation is cached by the server for other users (who also require translation for Beyonce’s posts in this example). The Facebook translator accomplishes this by analyzing millions of documents that are already translated from one language to another and then looking for the common patterns and basic vocabulary of the language. After that, it picks the most accurate translation possible based on educated guesses that mostly turn out to be correct. For now, all languages are updated monthly so that the ML system is up to date on new slang and sayings!

News Feed

The Facebook News Feed was one addition that everybody hated initially but now everybody loves!!! And if you are wondering why some stories show up higher in your Facebook News Feed and some are not even displayed, well, here is how it works! Different photos, videos, articles, links or updates from your friends, family or businesses you like show up in your personal Facebook News Feed according to a complex system of ranking that is managed by a Machine Learning algorithm. The rank of anything that appears in your News Feed is decided by three factors. Your friends, family, public figures or businesses that you interact with a lot are given top priority. Your feed is also customized according to the type of content you like (Movies, Books, Fashion, Video games, etc.). Also, posts that are quite popular on Facebook with lots of likes, comments and shares have a higher chance of appearing on your Facebook News Feed. So these are some of the cool ways these large companies benefit from Machine Learning. So now I will take your leave with some interesting words from Amy Stapleton, co-founder of Chatables, and Dr. Dave Waters, director of Paetoro. “We are entering a new world. The technologies of machine learning, speech recognition, and natural language understanding are reaching a nexus of capability. The end result is that we’ll soon have artificially intelligent assistants to help us in every aspect of our lives.” ~Amy Stapleton “Predicting the future isn’t magic, it’s artificial intelligence.” ~Dave Waters Thank you for reading!!
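Returning to the News Feed ranking described above: the three factors (affinity with the poster, your preference for the content type, and overall popularity) can be combined into a toy score. This is a purely hypothetical sketch with made-up weights and numbers, not Facebook's actual algorithm.

#toy three-factor feed ranking: affinity, content-type preference, popularity
def feed_score(post, user):
    affinity = user["affinity"].get(post["author"], 0.0)
    type_pref = user["type_preference"].get(post["type"], 0.0)
    popularity = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    #weights are arbitrary; friends and family dominate, popularity contributes a little
    return 0.5 * affinity + 0.3 * type_pref + 0.2 * popularity / 1000.0

user = {
    "affinity": {"close_friend": 1.0, "brand_page": 0.2},
    "type_preference": {"video": 0.9, "article": 0.4},
}
posts = [
    {"author": "brand_page", "type": "article", "likes": 900, "comments": 50, "shares": 20},
    {"author": "close_friend", "type": "video", "likes": 12, "comments": 3, "shares": 0},
]

#higher score first: the close friend's small video outranks the far more popular brand post
for post in sorted(posts, key=lambda p: feed_score(p, user), reverse=True):
    print(post["author"], round(feed_score(post, user), 3))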
https://ushivam4u.medium.com/how-facebook-and-google-uses-machine-learning-at-their-best-f43453f6109d
['Shivam Prasad Upadhyay']
2020-10-20 08:03:31.862000+00:00
['Machine Learning', 'Speech Recognition', 'Facebook', 'Google', 'Facial Recognition']
Title Facebook Google us Machine Learning bestContent “Machine learning automate job people thought could done people” Dave Waters Hello everyone today like tell famous company Facebook Google us Machine Learning best ease task cool stuff earlier thought impossible Machine Learning ever wondered learn birth brain learns every bit parent friend’s face riding bicycle learning mathematical formula etc brain observes whatever see make pattern call experience analysing pattern predicts make decision Due decision making power brain unlike computer developer thought give prediction decision making power computer would add extra star computer speed ie computer take decision prediction fast calculation achieve brought concept Machine learning make program based data provided computer similar experience brain predict result target value way program basically help machine learn “Machine intelligence last invention humanity ever need make” Nick Bostrom talking company Google Facebook took great advantage Machine Learning reduced work load earlier done human also proved smartest lucrative innovative company client Google us Machine Learning Google declared machine learningfirst company Google master take advantage machine learning algorithm provides customer valuable personalized experience Machine learning already embedded service like Gmail Google Search Google Maps Google service example image search translation tool use sophisticated machine learning allows computer see listen speak much way human Much wow Gmail know social promotional primary mail separated different box filtered Google label email accordingly machine learning play crucial part user intervention used tune threshold user mark message consistent direction Gmail performs realtime increment threshold that’s Gmail learns future later us result categorization Smart reply really smart move made Google help feature reply instantly second suggested reply given Gmail ‘Smart Replies’ ‘Smart Compose’ indeed best product Google given customer powered machine learning offer suggestion type also major reason Google stand one leading company today Also English bring support four new language Spanish French Italian Portuguese Google Search Google Maps also employ machine learning start typing search box automatically anticipates looking provides suggested search term suggestion showcased past search Recommendations trend everyone looking present location example — Bus traffic delay — Hundreds major city around world thousand people traveling One machine learning informing Google get realtime data bus location forecast jiffy don’t wait long hour bus combination time distance traveled individual event datasets possible Google provide prediction need rely bus schedule provided public transportation agency help location day week time day estimated time arrival ETA understood best invention ever done student Google search one deny Google Search Google Maps use machine learning help people day day task check awesomeness Google Machine Learning going Google typing speaking weather without asking anything automatically tell whole weather report area cause Machine Learning Google Assistant help one assist everyday task household chore deal worth crore Google Assistant make easy search nearby restaurant it’s raining heavily help buy movie ticket go find nearest theatre place Also help navigate theater short don’t worry smartphone Google take care everything done due strong machine learning algorithm used google Google Translate world migrating Leave rest 
world least 24 language spoken India 13 different script 720 dialect Well talk world roughly 6500 spoken language world today Can’t thank Google enough cause we’ve used Google Translate point hope travel lot tooThe best it’s free fast generally accurate translation word sentence paragraph helped many decode understand true 100 accurate come larger block text language provide people general meaning make understanding le complex possible Statistical Machine Translation SMT matter much hate mathematics statistic thank love process computer analyze million existing translated document web learn vocabulary look pattern language Google translates pick statistically probable translation asked translate new bit text Speech Recognition Ok Google speech recognition feature enables user convert audio text applying powerful neural network model easytouse API Currently API recognizes 120 language variant support global user base voice commandandcontrol enabled audio transcribed call center Also processing realtime data done Starting streaming prerecorded audio speech recognition mastered credit given Google’s machine learning technology Reverse Image Search Google’s Search Image feature us reverse image search allows user search related image uploading image image URL Google accomplishes analyzing submitted picture constructing mathematical model using advanced algorithm compared billion image Google’s database returning matching similar result available Google also us metadata image description Reverse image search contentbased image retrieval CBIR query technique involves providing CBIR system sample image base search upon term information retrieval sample image formulates search query Image search creates category might looking image search becomes easy search similar image also help find website contain image size picture searched Google Adsense help machine learning Google keep track users’ search history help history recommends advertisement user aware target market It’s heavily based search history data machine learning help Google achieve created winwin situation Google AdSense website owner earn money online content AdSense work matching text display ad site based content visitor many example Google Music Google Photos Google Adwords etc make great use Machine Learning reason Google number user field make work easier also solves problem smartly Facebook us Machine Learning Machine Learning vital aspect Facebook would even possible handle 24 billion user providing best service without using Machine Learning Let’s take example mindboggling Facebook guess people might familiar real life using “People May Know” right time Well magical effect achieved using Machine Learning algorithm analyze profile interest current friend also friend various factor calculate people might potentially know That’s one example aspect Facebook News Feed Facial Recognition system Targeted Advertising page etc would look Facial Recognition Facial Recognition among many wonder Machine Learning Facebook might trivial recognize friend social medium even thick layer makeup Facebook manage Well “tag suggestions” “face recognition” turned Facebook mean provided permission Facial Recognition Machine Learning System analysis pixel face image creates template basically string number template unique every face sort facial fingerprint used detect face another face suggest tag question use enabling Facial Recognition Facebook Well case newly uploaded photo video Facebook includes face haven’t tagged Facial Recognition algorithm 
recognize template send notification Also another user try upload picture Facebook profile picture maybe get popular notified immediately Facial Recognition conjugation accessibility option also inform people visual impairment photo video Textual Analysis may believe photo important Facebook especially photo text equally important lot text Facebook understand manage text correct manner Facebook us DeepText text engine based deep learning understand thousand post second 20 language much accuracy understanding languagebased text easy think order truly understand text DeepText understand many thing like grammar idiom slang word context etc example sentence “I love Apple” post writer mean fruit company probably company Except Android user really depends context DeepText learn complexity multiple language DeepText us Deep Learning therefore handle labeled data much efficiently traditional Natural Language Processing model Targeted Advertising shop great clothes Myntra saw ad Facebook page like post Lakme magically see ad also Well magic done using deep neural network analyze age gender location page like interest even mobile data profile select category show ad specifically targeted towards category Facebook also partner different data collection company like Epsilon Acxiom Datalogix BlueKai etc also us data accurately profile Example Suppose data collected online interest field study shopping history restaurant choice etc profile category young fashionista according Facebook deep neural network algorithm ad shown likely cater category get relevant useful ad likely click Facebook generates revenue course way Facebook hope maintain competitive edge hightech company like Google also fighting obtain short attention span Language Translation Facebook le social networking site worldwide obsession people world use Facebook many also don’t know English want use Facebook know Hindi Never fear Facebook inhouse translator simply convert text one language another clicking “See Translation” button case wonder translates le accurately well Facebook Translator us Machine Learning course first click “See Translation” button text Suppose it’s Beyonce’s post sends translation request server translation cached server user also require translation Beyonce’s post example Facebook translator accomplishes analyzing million document already translated one language another looking common pattern basic vocabulary language pick accurate translation possible based educated guess mostly turn correct language updated monthly ML system date new slang saying News Feed Facebook News Feed one addition everybody hated initially everybody love wondering story show higher Facebook News Feed even displayed well work Different photo video article link update friend family business like show personal Facebook News Feed according complex system ranking managed Machine Learning algorithm rank anything appears News Feed decided three factor friend family public figure business interact lot given top priority feed also customized according type content like Movies Books Fashion Video game etc Also post quite popular Facebook lot like comment share higher chance appearing Facebook News Feed thesis cool stuff thesis large company benefitted Machine Learning take leave telling interesting word cofounder Chatables Amy Stapleton director Paetoro Dr Dave Waters entering new world technology machine learning speech recognition natural language understanding reaching nexus capability end result we’ll soon artificially intelligent assistant help 
u every aspect lives” Amy Stapleton Predicting future isn’t magic it’s artificial intelligence” Dave Waters Thank readingTags Machine Learning Speech Recognition Facebook Google Facial Recognition
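The facial-recognition flow summarized above comes down to turning each face into a numeric template (a "facial fingerprint") and comparing templates. As a rough illustration only — not Facebook's actual system — the following Python sketch compares a hypothetical 128-dimensional face template against stored ones with cosine similarity and suggests a tag when the match is close enough; the function names, the 0.8 threshold, and the template size are all assumptions made for this example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def suggest_tag(new_face: np.ndarray, templates: dict, threshold: float = 0.8):
    """Compare a new face template against stored ones and return the
    best-matching user only if the similarity clears the threshold."""
    best_user, best_score = None, -1.0
    for user, template in templates.items():
        score = cosine_similarity(new_face, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None

# Toy example with random 128-dimensional "templates" (illustrative only).
rng = np.random.default_rng(0)
stored = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = stored["alice"] + 0.05 * rng.normal(size=128)  # a slightly noisy re-capture
print(suggest_tag(query, stored))  # expected to print 'alice'
```

In a real pipeline the templates would come from a trained face-embedding network rather than random vectors; only the comparison-and-threshold step is sketched here.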
3,997
Product Seven
Product Seven The seven steps necessary to produce a successful product. Seven steps are necessary to properly execute a product. This process does not guarantee success. Other variables such as team/personnel, financials, time, and so on must also be considered. The first step is the idea. It is essential to move from mind to an available product as efficiently as possible. When an idea is new, it has the most energy. This energy will fade over time and ultimately the idea will become obsolete. If action is not taken, it allows opportunity for others to act. There is a thought that every idea is given to or held by at least two individuals. Be the one who acts. He who hesitates is lost. The second step is design. Manifesting an idea into something tangible is design. Facilitation is key, so as not to extinguish the spark of the idea. Ideation is also essential and should be pushed to its breaking point. There is no repercussion for pushing an idea to impossibility. Once the limits have been pushed, the design may comfortably be accepted within the confines of possibility. The third step is the (creative/technical) handoff. Now is the time for reality to ground the dreamer. A compromise between what is ideal and what is realistic must be made. This may require going back one or even two steps. This is not a failure and should not be frowned upon. It is extremely important not to move forward to the fourth step until both creative and technical parties have an aligned vision. The fourth step is development. The technical team must be left alone and not interrupted during this phase. Every interruption is time lost. Enough time lost is a failed product. If clarification is needed, address it immediately, but do not interrupt the flow. If the third step was adequately performed, the technical team can be allowed isolation. This is ideal. The fifth step is testing. An unbiased third party must now examine the work of the (creative and technical) team as it stands. All observations and discoveries are welcomed in this stage. Resolve and reconcile everything that comes to light before moving forward. The sixth step is production. The product must be consumable by the population. Production must move like the gears of a clock. Any hiccup must immediately be addressed. Reliability is a key aspect of efficient production. The seventh step is availability. The product must be presented to the population appropriately. If not immediately intuitive, it must be explained eloquently. The product can and ultimately will die in this state. What is important is that the product has a successful life cycle before eventually dying. The integrity of the original idea must be retained. Do not milk the product to its dying breath. There is no honor in this. Instead, start anew with an idea, and begin again.
https://uxdesign.cc/product-seven-d4fa0b6ec131
['Daniel Soucek']
2018-06-12 05:00:37.953000+00:00
['Development', 'User Experience', 'Design', 'Software Development', 'Product Design']
Title Product SevenContent Product Seven seven step necessary produce successful product Seven step necessary properly execute product process guarantee success variable teampersonnel financials time variable must also considered first step idea essential move mind available product efficiently possible idea new energy energy fade time ultimately idea become obsolete action taken allows opportunity others act thought every idea given held least two individual one act hesitates lost second step design Manifesting idea something tangible design Facilitation key extinguish spark idea Ideation also essential pushed it’s breaking point repercussion pushing idea impossibility limit pushed design may comfortably accepted within confines impossibility third step creativetechnical handoff time reality ground dreamer compromise ideal realistic must made may require going back one even two step failure frowned upon extremely important move forward fourth step creative technical party aligned vision fourth step development technical team must left alone interrupted phase Every interruption time lost Enough time lost failed product clarification needed address immediately interrupt flow third step adequately performed technical team allowed isolation ideal fifth step testing unbiased third party must examine work creative technical team stand observation discovery welcomed stage Resolve reconcile everything come light moving forward sixth step production product must consumable population Production must move gear clock hiccup must immediately addressed Reliability key aspect efficient production seventh step available product must presented population appropriately immediately intuitive must explained eloquently product ultimately die state important product successful life cycle eventually dying integrity original idea must retained milk product it’s dying breath honor Instead start new idea begin againTags Development User Experience Design Software Development Product Design
3,998
iOS 14: Apple Finally Listened
What’s new in iOS 14? The main changes that have been made to the upcoming version of iOS are ‘quality of life’ improvements that help reduce clutter and give more information at a glance. 1. Widgets For Android users, widgets have been a useful feature for years. If you want to look at your reminders quickly, you can just unlock your phone and look at the Reminders widget, rather than having to open the Reminders app. If you want to see your shopping list, you can place a widget on your home screen so it’s easily accessible. The lack of a proper widget system has, in my opinion, been a flaw in iOS for years. The closest thing we have had to Android’s widgets in previous years is the leftmost page on your home menu. All of the widgets look the same, and you have to scroll down if you want to find the widget for a specific app. Image: Apple Newsroom. iOS 14 brings widgets to a whole new level. The leftmost page on your home-screen has transformed into a tile-based menu where all the widgets are separated into smaller, but more accessible positions. The widgets stand out from one another, and can be dragged onto your home-screen, with the ability to position them in the space of any two-by-two area of app space. You can also drag widgets on top of each other, giving you the option to scroll through widgets — a smart feature that will stop your home-screen from having too many widgets. This is known as ‘Smart Stack’. Apple has also implemented a feature that allows Smart Stacks to automatically scroll during certain times of day, based on your activity. For instance, you could wake up and find the Apple News widget is currently being displayed, but by 11am the widget has switched to the Reminders option. In the evening, the widget may have updated to show you a summary of your exercise for the day, or perhaps a show you can watch. The ‘Widget Gallery’ is a menu that can also be used to drag widgets onto your home-screen. The gallery gives the user the option to change the size of widgets to fit different areas of the home-screen. You could have a widget that is the size of a two-by-two area of apps, a two-by-four area, or even a four-by-four area of space. I am very much looking forward to seeing how widgets will make my iPhone a simpler, more informative space for when I’m on the go.
https://medium.com/swlh/ios-14-apple-finally-listened-68e2f27db47c
['Joe Mccormick']
2020-06-27 20:50:19.305000+00:00
['Design', 'Software Development', 'Business', 'iOS', 'Apple']
Title iOS 14 Apple Finally ListenedContent What’s new iOS 14 main change made upcoming version iOS ‘quality life’ improvement help reduce clutter give information glance 1 Widgets Android user widget useful feature year want look reminder quickly unlock phone look reminder’s widget rather open reminder app want see shopping list place widget homepage it’s easily accessible lack proper widget system flaw iOS year opinion closest thing Android’s widget previous year leftmost page home menu widget look scroll want find widget specific app Image Apple Newsroom iOS 14 brings widget whole new level leftmost page homescreen transformed tilebased menu widget separated smaller accessible position widget stand one another dragged onto homescreen ability position space twobytwo area app space also drag widget top giving option scroll widget — smart feature stop homescreen many widget known ‘Smart Stack’ Apple also implemented feature allows Smart Stacks automatically scroll certain time day based activity instance could wake find Apple News widget currently displayed 11am widget switched page reminder option evening widget may updated show summary exercise day perhaps show watch ‘Widget Gallery’ menu also used drag widget onto homescreen gallery give user option change size widget fit different area homescreen could widget size twobytwo area apps twobyfour area even fourbyfour area space much looking forward seeing widget make iPhone simple informative space I’m goTags Design Software Development Business iOS Apple
3,999
Why Small Data is a Big Deal
Why Small Data is a Big Deal Here’s how small data can make a big impact. Big Data in the News When we hear about Artificial Intelligence in the news, it’s usually about some shiny new breakthrough built using big data — things like Tesla’s self-driving cars, OpenAI’s text generators, or Neuralink’s brain-computer interfaces. Small Data in Real Life However, as with much of what we read in the news, these are outliers. In reality, most AI projects look completely different. Most of us aren’t trying to create superintelligence, we’re just trying to optimize KPIs, whether it’s churn, attrition, sales, traffic, or any of a million other metrics. And to optimize KPIs, you don’t need big data. The most common sources of data for KPI optimization are day-to-day business tools like Hubspot, Salesforce, Google Analytics, or even Typeform. Unless you’re one of the outliers, the exports from these tools likely wouldn’t qualify as big data. If you do work with big data, then all the more power to you. You’ll potentially be able to create even more accurate models, but it’s not a pre-requisite to adding value to an organization. Small Data for Object Detection When we hear about “big data,” it’s often in the same breath as “object detection.” Indeed, the norm for object detection models has been training on massive amounts of data. Until now. Researchers at the University of Waterloo released a paper discussing ‘Less Than One’-Shot Learning, or the ability for a model to accurately recognize more objects than the number of examples it was trained on. A popular dataset for computer vision experiments is called MNIST, which is made of 60,000 training images of handwritten digits from 0 to 9. A previous experiment by MIT researchers showed that it was possible to “distill” this huge 60,000-image dataset down to just 10 images, carefully engineered to be equal in information to the full set, achieving almost the same accuracy. LO-Shot Learning In the new LO-Shot paper, researchers figured out they could create images that blended multiple digits together, which are fed into the model with “soft” labels, much as a centaur has partial features of both a man and a horse. In the context of MNIST, this refers to the idea of digits sharing some features, such as a digit of 3 looking somewhat like an 8, a tiny bit like a 0, but nothing like a 1 or a 7. Thus, training images can be used to understand more objects than are even in the training set. Astonishingly, it seems there’s virtually no limit to this concept, meaning that carefully engineered soft labels could encode any number of categories. “With two points, you can separate a thousand classes or 10,000 classes or a million classes.” (source) More Than Theory The paper demonstrates this concept with kNN (k-nearest neighbors), a classifier that partitions the feature space into decision regions and lends itself to a clear graphical interpretation. By creating tiny synthetic datasets with carefully engineered soft labels, the kNN algorithm was able to detect more classes than there were data points. Limitations While this approach worked astonishingly well when applied to the visual, interpretable kNN algorithm, it may not carry over to complicated and opaque neural networks. The previous example of “data distillation” also doesn’t work as well, since it requires that you start with a very large dataset. The Implications More than just a fascination, this research has important implications for the AI industry, namely in reducing data requirements.
Intense data requirements make it extremely expensive to train AI models. For instance, GPT-3 cost upwards of $4 million to train. There are also concerns that inference, or making predictions with the model, is too expensive for researchers. This is more of a reflection of the extreme size and computing requirements of GPT-3, a 175-billion parameter model, than anything else. The status quo for current State-of-the-Art models is to train on as much data as possible. GPT-3, a language model, took that to the extreme by training on essentially the entire Internet. With the new LO-Shot Learning breakthrough, it may one day be possible to accomplish SOTA with just a few data points, resulting in extremely lightweight and efficient models.
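To make the soft-label idea above concrete, here is a minimal Python sketch — not the paper's code — of a distance-weighted kNN-style classifier whose prototypes carry soft label distributions, so that two prototypes can jointly separate three classes. The prototype positions and label mixtures are invented for illustration.

```python
import numpy as np

def soft_label_knn_predict(x, prototypes, soft_labels, k=2):
    """Predict a class for point x from prototypes that carry soft labels.

    prototypes:  (n, d) array of prototype coordinates
    soft_labels: (n, c) array; row i is a probability distribution over c classes
    The prediction is the argmax of the distance-weighted sum of soft labels,
    loosely following the 'less than one'-shot kNN construction.
    """
    dists = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)        # closer prototypes count more
    class_scores = weights @ soft_labels[nearest]  # blend the soft labels
    return int(np.argmax(class_scores))

# Two 1-D prototypes whose soft labels jointly encode three classes (toy example).
prototypes = np.array([[0.0], [1.0]])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # prototype at 0.0: mostly class 0, partly class 1
    [0.0, 0.4, 0.6],   # prototype at 1.0: mostly class 2, partly class 1
])
for x in [np.array([-0.2]), np.array([0.5]), np.array([1.2])]:
    print(x[0], "->", soft_label_knn_predict(x, prototypes, soft_labels))
# Points near 0 map to class 0, points near 1 to class 2, and the midpoint to class 1.
```

The design choice doing the work is that labels are probability distributions rather than one-hot vectors, which is what lets fewer points encode more classes than there are examples.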
https://medium.com/datadriveninvestor/why-small-data-is-a-big-deal-83c17e118785
['Obviously Ai']
2020-12-27 16:18:59.046000+00:00
['Data Analysis', 'Artificial Intelligence', 'Data', 'AI', 'Data Science']
Title Small Data Big DealContent Small Data Big Deal Here’s small data make big impact Big Data News hear Artificial Intelligence news it’s usually shiny new breakthrough built using big data — thing like Tesla’s selfdriving car OpenAI’s text generator Neuralink’s braincomputer interface Small Data Real Life However much read news outlier reality AI project look completely different u aren’t trying create superintelligence we’re trying optimize KPIs whether it’s churn attrition sale traffic million metric optimize KPIs don’t need big data common source data KPI optimization daytoday business tool like Hubspot Salesforce Google Analytics even Typeform Unless you’re one outlier export tool likely wouldn’t qualify big data work big data power You’ll potentially able create even accurate model it’s prerequisite adding value organization Small Data Object Detection hear “big data” it’s often breath “object detection” Indeed norm object detection model training massive amount data Researchers University Waterloo released paper discussing ‘Less One’Shot Learning ability model accurately recognize object number example trained popular dataset computer vision experiment called MNIST made 60000 training image handwritten digit 0 9 previous experiment MIT researcher showed possible “distill” huge 60000image dataset 10 image carefully engineered equal information full set achieving almost accuracy LOShot Learning new LOShort paper researcher figured could create image blended multiple digit together fed model “soft” label like man horse partial feature centaur context MNIST refers idea digit sharing feature digit 3 looking somewhat like 8 tiny bit like 0 nothing like 1 7 Thus training image used understand object even training set Astonishingly seems there’s virtually limit concept meaning carefully engineered soft label could encode number category “With two point separate thousand class 10000 class million classes” source Theory paper demonstrates concept kNN knearest neighbor approach classifies object via separable plane along feature ax graphical approach creating tiny synthetic datasets carefully engineered soft label kNN algorithm able detect class data point Limitations approach worked astonishingly well applied visual interpretable kNN algorithm approach may work complicated opaque neural network previous example “data distillation” also doesn’t work well since requires start large dataset Implications fascination research important implication AI industry namely reducing data requirement Intense data requirement make extremely expensive train AI model instance GPT3 cost upwards 4 million train also concern inference making prediction model expensive researcher reflection extreme size computing requirement GPT3 175billion parameter model anything else status quo current StateoftheArt model train much data possible GPT3 language model took extreme training essentially entire Internet new LOShot Learning breakthrough may one day possible accomplish SOTA data point resulting extremely lightweight efficient modelsTags Data Analysis Artificial Intelligence Data AI Data Science