Unnamed: 0
int64
0
192k
title
stringlengths
1
200
text
stringlengths
10
100k
url
stringlengths
32
885
authors
stringlengths
2
392
timestamp
stringlengths
19
32
tags
stringlengths
6
263
info
stringlengths
45
90.4k
1,300
Six Keys to Better Jobs, Wider Prosperity
Six Keys to Better Jobs, Wider Prosperity MIT Work of the Future report finds growing workplace inequities that can, and must be addressed By Peter Dizikes, MIT News Office Decades of technological change have polarized the earnings of the American workforce, helping highly educated white-collar workers thrive, while hollowing out the middle class. Yet present-day advances like robots and artificial intelligence do not spell doom for middle-tier or lower-wage workers, since innovations create jobs as well. With better policies in place, more people could enjoy good careers even as new technology transforms workplaces. That’s the conclusion of the final report from MIT’s Task Force on the Work of the Future, which summarizes over two years of research on technology and jobs. The report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” was released today [November 17], and the task force is hosting an online conference on November 18, the “ AI & the Future of Work Congress” to explain the research. At the core of the task force’s findings: A robot-driven jobs apocalypse is not on the immediate horizon. As technology takes jobs away, it provides new opportunities; about 63 percent of jobs performed in 2018 did not exist in 1940. Rather than a robot revolution in the workplace, we are witnessing a gradual tech evolution. At issue is how to improve the quality of jobs, particularly for middle- and lower-wage workers, and ensure there is greater shared prosperity than the U.S. has seen in recent decades. “The sky is not falling, but it is slowly lowering,” says David Autor, the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force. “We need to respond. The world is gradually changing in very important ways, and if we just keep going in the direction we’re going, it is going to produce bad outcomes.” That starts with a realistic understanding of technological change, say the task force leaders. The task force aimed “to move past the hype about what [technologies] might be here, and now we’re looking at what can we feasibly do to move things forward for workers,” says Elisabeth Beck Reynolds, executive director of the task force as well as executive director of the MIT Industrial Performance Center. “We looked across a range of industries and examined the numerous factors — social, cognitive, organizational, economic — that shape how firms adopt technology.” “We want to inject into the public discourse a more nuanced way of talking about technology and work,” adds David Mindell, task force co-chair, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT. “It’s not that the robots are coming tomorrow and there’s nothing we can do about it. Technology is an aggregate of human choices.” The report also addresses why Americans may be concerned about work and the future. It states: “Where innovation fails to drive opportunity, it generates a palpable fear of the future: the suspicion that technological progress will make the country wealthier while threatening the people’s livelihoods. This fear exacts a high price: political and regional divisions, distrust of institutions, and mistrust of innovation itself. The last four decades of economic history give credence to that fear.” “Automation is transforming our work, our lives, our society,” says MIT President L. Rafael Reif, who initiated the formation of the task force in 2017. “Fortunately, the harsh societal consequences that concern us all are not inevitable. How we design tomorrow’s technologies, and the policies and practices we build around them, will profoundly shape their impact.” Reif adds: “Getting this right is among the most important and inspiring challenges of our time — and it should be a priority for everyone who hopes to enjoy the benefits of a society that’s healthy and stable, because it offers opportunity for all.” Six Takeaways The task force, an Institute-wide group of scholars and researchers, spent over two years studying work and technology in depth. The final report presents six overarching conclusions and a set of policy recommendations. The conclusions: 1) Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether. Over the last several decades, technology has significantly changed many workplaces, especially through digitization and automation, which have replaced clerical, administrative, and assembly-line workers across the country. But the overall percentage of adults in paid employment has largely risen for over a century. In theory, the report states, there is “no instrinsic conflict between technological change, full employment, and rising earnings.” In practice, however, technology has polarized the economy. White-collar workers — in medicine, marketing, design, research, and more — have become more productive and richer, while middle-tier workers have lost out. Meanwhile, there has been growth in lower-paying service-industry jobs where digitization has little impact — such as food services, janitors, and drivers. Since 1978, aggregate U.S. productivity has risen by 66 percent, while compensation for production and nonsupervisory workers has risen by only 10 percent. Wage gaps also exist by race and gender, and cities do not provide the “ escalator “ to the middle class they once did. While innovations have replaced many receptionists, clerks, and assembly-line workers, they have simultaneously created new occupations. Since the middle of the 20th century, the U.S. has seen major growth in the computer industry, renewable energy, medical specialties, and many areas of design, engineering, marketing, and health care. These industries can support many middle-income jobs as well — while the services sector keeps growing. As the task force leaders state in the report, “The dynamic interplay among task automation, innovation, and new work creation, while always disruptive, is a primary wellspring of rising productivity. Innovation improves the quantity, quality, and variety of work that a worker can accomplish in a given time. This rising productivity, in turn, enables improving living standards and the flourishing of human endeavors.” However, a bit ruefully, the authors also note that “in what should be a virtuous cycle, rising productivity provides society with the resources to invest in those whose livelihoods are disrupted by the changing structure of work.” But this has not come to pass, as the distribution of value from these jobs has been lopsided. In the U.S., lower-skill jobs only pay 79 percent as much when compared to Canada, 74 percent compared to the U.K., and 57 percent compared to Germany. “People understand that automation can make the country richer and make them poorer, and that they’re not sharing in those gains,” Autor says. “We think that can be fixed.” 2) Momentous impacts of technological change are unfolding gradually. Time and again, media coverage about technology and jobs focuses on dramatic scenarios in which robots usurp people, and we face a future without work. But this picture elides a basic point: Technologies mimicking human actions are difficult to build, and expensive. It is generally cheaper to simply hire people for those tasks. On the other hand, technologies that augment human abilities — like tools that let doctors make diagnoses — help those workers become more productive. Apart from clerical and assembly-line jobs, many technologies exist in concert with workers, not as a substitute for them. Thus, workplace technology usually involves “augmentation tasks more than replacement tasks,” Mindell says. The task force report surveys technology adoption in industries including insurance, health care, manufacturing, and autonomous vehicles, finding growth in “narrow” AI systems that complement workers. Meanwhile, technologists are working on difficult problems like better robotic dexterity, which could lead to more direct replacement of workers, but such advances at a high level are further off in the future. “That’s what technological adoption looks like,” Mindell says. “It’s uneven, it’s lumpy, it goes in fits and starts.” The key question is how innovators at MIT and elsewhere can shape new technology to broad social benefit. 3) Rising labor productivity has not translated into broad increases in income because societal institutions and labor market policies have fallen into disrepair. While the U.S. has witnessed a lot of technological innovation in recent decades, it has not seen as much policy innovation, particularly on behalf of workers. The polarizing effects of technology on jobs would be lessened if middle- and lower-income workers had relatively better support in other ways. Instead, in terms of pay, working environment, termination notice time, paid vacation time, sick time, and family leave, “less-educated and low-paid U.S. workers fare worse than comparable workers in other wealthy industrialized nations,” the report notes. The adjusted gross hourly earnings of lower-skill workers in the U.S. in 2015 averaged $10.33, compared to $24.28 in Denmark, $18.18 in Germany, and $17.61 in Australia. “It’s untenable that the labor market has this growing gulf without shared prosperity,” Autor says. “We need to restore the synergy between rising productivity and improvements in labor market opportunity.” He adds: “We’ve had real institutional failure, and it’s within our hands to change it. … That includes worker voice, minimum wages, portable benefits, and incentives that cause companies to invest in workers.” Looking ahead, the report cautions, “If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market.” The task force argues instead for institutional innovations that complement technological change. 4) Improving the quality of jobs requires innovation in labor market institutions. The task force contends the U.S. needs to modernize labor policies on several fronts, including restoring the federal minimum wage to a reasonable percentage of the national median wage and, crucially, indexing it to inflation. The report also suggests upgrading unemployment insurance in several ways, including: using very recent earnings to determine eligibility or linking eligibility to hours worked, not earnings; making it easier to receive partial benefits in case of events like loss of a second job; and dropping the requirement that people need to seek full-time work to receive benefits, since so many people hold part-time positions. The report also observes that U.S. collective bargaining law and processes are antiquated. The authors argue that workers need better protection of their current collective bargaining rights; new forms of workplace representation beyond traditional unions; and legal protections allowing groups to organize that include home-care workers, farmworkers, and independent contractors. 5) Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills. Technological advancement may often be incremental, but changes happen often enough that workers’ skills and career paths can become obsolete. The report emphasizes that U.S. workers need more opportunities to add new skills — whether through the community college system, online education, company-based retraining, or other means. The report calls for making ongoing skills development accessible, engaging, and cost-effective. This requires buttressing what already works, while advancing new tools: blended online and in-person offerings, machine-supervised learning, and augmented and virtual reality learning environments. The greatest needs are among workers without four-year college degrees. “We need to focus on those who are between high school and the four-year degree,” Reynolds says. “There should be pathways for those people to increase their skill set and make it meaningful to the labor market. We really need a shift that makes this a high priority.” 6) Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges. The rate of new-job creation over the last century is heavily driven by technological innovation, the report notes, with a considerable portion of that stemming from federal investment in R&D, which has helped produce many forms of computing and medical advances, among other things. As of 2015, the U.S. invested 2.7 percent of its GDP in R&D, compared to 2.9 percent in Germany and 2.1 percent in China. But the public share of that R&D investment has fallen from 40 percent in 1985 to 25 percent in 2015. The task force calls for a recommitment to this federal support. “Innovation has a key role in job creation and growth,” Autor says. Given the significance of innovation to job and wealth creation, the report calls for increased overall federal research funding; targeted assistance that helps small- and medium-sized businesses adopt technology; policies creating a wider geographical spread of innovation in the U.S.; and policies that enhance investment in workers, not just capital, including the elimination of accelerated capital depreciation claims, and an employer training tax credit that functions like the R&D tax credit. Global Issues, U.S. Suggestions In addition to Reynolds, Autor, and Mindell, MIT’s Task Force on the Work of the Future consisted of a group of 18 MIT professors representing all five Institute schools and the MIT Schwarzman College of Computing; a 22-person advisory board drawn from the ranks of industry leaders, former government officials, and academia; a 14-person research board of scholars; and over 20 graduate students. The task force also consulted with business executives, labor leaders, and community college leaders, among others. The final document includes case studies from specific firms and sectors as well, and the Task Force is publishing nearly two dozen research briefs that go into the primary research in more detail. The task force observed global patterns at play in the way technology is adopted and diffused through the workplace, although its recommendations are focused on U.S. policy issues. “While our report is very geared toward the U.S. in policy terms, it clearly is speaking to a lot of trends and issues that exist globally,” Reynolds said. “The message is not just for the U.S. Many of the challenges we outline are found in other countries too, albeit to lesser degrees. As we wrote in the report, ‘the central challenge ahead, indeed the work of the future, is to advance labor market opportunity to meet, complement, and shape technological innovations.’” The task force intends to circulate ideas from the report among policymakers and politicians, corporate leaders and other business managers, and researchers, as well as anyone with an interest in the condition of work in the 21st century. “I hope people are receptive,” Reynolds adds. “We have made forceful recommendations that tie together different policy areas — skills, job quality, and innovation. These issues are critical, particularly as we think about recovery and rebuilding in the age of COVID-19. I hope our message will be picked up by both the public sector and private sector leaders, because both of those are essential to forge the path forward.”
https://medium.com/mit-initiative-on-the-digital-economy/six-keys-to-better-jobs-wider-prosperity-fab2a7c6ed79
['Mit Ide']
2020-11-23 00:35:05.624000+00:00
['Automation', 'Robots', 'Future Of Work', 'Productivity', 'AI']
Title Six Keys Better Jobs Wider ProsperityContent Six Keys Better Jobs Wider Prosperity MIT Work Future report find growing workplace inequity must addressed Peter Dizikes MIT News Office Decades technological change polarized earnings American workforce helping highly educated whitecollar worker thrive hollowing middle class Yet presentday advance like robot artificial intelligence spell doom middletier lowerwage worker since innovation create job well better policy place people could enjoy good career even new technology transforms workplace That’s conclusion final report MIT’s Task Force Work Future summarizes two year research technology job report “The Work Future Building Better Jobs Age Intelligent Machines” released today November 17 task force hosting online conference November 18 “ AI Future Work Congress” explain research core task force’s finding robotdriven job apocalypse immediate horizon technology take job away provides new opportunity 63 percent job performed 2018 exist 1940 Rather robot revolution workplace witnessing gradual tech evolution issue improve quality job particularly middle lowerwage worker ensure greater shared prosperity US seen recent decade “The sky falling slowly lowering” say David Autor Ford Professor Economics MIT associate head MIT’s Department Economics cochair task force “We need respond world gradually changing important way keep going direction we’re going going produce bad outcomes” start realistic understanding technological change say task force leader task force aimed “to move past hype technology might we’re looking feasibly move thing forward workers” say Elisabeth Beck Reynolds executive director task force well executive director MIT Industrial Performance Center “We looked across range industry examined numerous factor — social cognitive organizational economic — shape firm adopt technology” “We want inject public discourse nuanced way talking technology work” add David Mindell task force cochair professor aeronautics astronautics Dibner Professor History Engineering Manufacturing MIT “It’s robot coming tomorrow there’s nothing Technology aggregate human choices” report also address Americans may concerned work future state “Where innovation fails drive opportunity generates palpable fear future suspicion technological progress make country wealthier threatening people’s livelihood fear exacts high price political regional division distrust institution mistrust innovation last four decade economic history give credence fear” “Automation transforming work life society” say MIT President L Rafael Reif initiated formation task force 2017 “Fortunately harsh societal consequence concern u inevitable design tomorrow’s technology policy practice build around profoundly shape impact” Reif add “Getting right among important inspiring challenge time — priority everyone hope enjoy benefit society that’s healthy stable offer opportunity all” Six Takeaways task force Institutewide group scholar researcher spent two year studying work technology depth final report present six overarching conclusion set policy recommendation conclusion 1 Technological change simultaneously replacing existing work creating new work eliminating work altogether last several decade technology significantly changed many workplace especially digitization automation replaced clerical administrative assemblyline worker across country overall percentage adult paid employment largely risen century theory report state “no instrinsic conflict technological change full employment rising earnings” practice however technology polarized economy Whitecollar worker — medicine marketing design research — become productive richer middletier worker lost Meanwhile growth lowerpaying serviceindustry job digitization little impact — food service janitor driver Since 1978 aggregate US productivity risen 66 percent compensation production nonsupervisory worker risen 10 percent Wage gap also exist race gender city provide “ escalator “ middle class innovation replaced many receptionist clerk assemblyline worker simultaneously created new occupation Since middle 20th century US seen major growth computer industry renewable energy medical specialty many area design engineering marketing health care industry support many middleincome job well — service sector keep growing task force leader state report “The dynamic interplay among task automation innovation new work creation always disruptive primary wellspring rising productivity Innovation improves quantity quality variety work worker accomplish given time rising productivity turn enables improving living standard flourishing human endeavors” However bit ruefully author also note “in virtuous cycle rising productivity provides society resource invest whose livelihood disrupted changing structure work” come pas distribution value job lopsided US lowerskill job pay 79 percent much compared Canada 74 percent compared UK 57 percent compared Germany “People understand automation make country richer make poorer they’re sharing gains” Autor say “We think fixed” 2 Momentous impact technological change unfolding gradually Time medium coverage technology job focus dramatic scenario robot usurp people face future without work picture elides basic point Technologies mimicking human action difficult build expensive generally cheaper simply hire people task hand technology augment human ability — like tool let doctor make diagnosis — help worker become productive Apart clerical assemblyline job many technology exist concert worker substitute Thus workplace technology usually involves “augmentation task replacement tasks” Mindell say task force report survey technology adoption industry including insurance health care manufacturing autonomous vehicle finding growth “narrow” AI system complement worker Meanwhile technologist working difficult problem like better robotic dexterity could lead direct replacement worker advance high level future “That’s technological adoption look like” Mindell say “It’s uneven it’s lumpy go fit starts” key question innovator MIT elsewhere shape new technology broad social benefit 3 Rising labor productivity translated broad increase income societal institution labor market policy fallen disrepair US witnessed lot technological innovation recent decade seen much policy innovation particularly behalf worker polarizing effect technology job would lessened middle lowerincome worker relatively better support way Instead term pay working environment termination notice time paid vacation time sick time family leave “lesseducated lowpaid US worker fare worse comparable worker wealthy industrialized nations” report note adjusted gross hourly earnings lowerskill worker US 2015 averaged 1033 compared 2428 Denmark 1818 Germany 1761 Australia “It’s untenable labor market growing gulf without shared prosperity” Autor say “We need restore synergy rising productivity improvement labor market opportunity” add “We’ve real institutional failure it’s within hand change … includes worker voice minimum wage portable benefit incentive cause company invest workers” Looking ahead report caution “If technology deploy labor institution today designed last century see similar effect recent decade downward pressure wage skill benefit increasingly bifurcated labor market” task force argues instead institutional innovation complement technological change 4 Improving quality job requires innovation labor market institution task force contends US need modernize labor policy several front including restoring federal minimum wage reasonable percentage national median wage crucially indexing inflation report also suggests upgrading unemployment insurance several way including using recent earnings determine eligibility linking eligibility hour worked earnings making easier receive partial benefit case event like loss second job dropping requirement people need seek fulltime work receive benefit since many people hold parttime position report also observes US collective bargaining law process antiquated author argue worker need better protection current collective bargaining right new form workplace representation beyond traditional union legal protection allowing group organize include homecare worker farmworkers independent contractor 5 Fostering opportunity economic mobility necessitates cultivating refreshing worker skill Technological advancement may often incremental change happen often enough workers’ skill career path become obsolete report emphasizes US worker need opportunity add new skill — whether community college system online education companybased retraining mean report call making ongoing skill development accessible engaging costeffective requires buttressing already work advancing new tool blended online inperson offering machinesupervised learning augmented virtual reality learning environment greatest need among worker without fouryear college degree “We need focus high school fouryear degree” Reynolds say “There pathway people increase skill set make meaningful labor market really need shift make high priority” 6 Investing innovation drive new job creation speed growth meet rising competitive challenge rate newjob creation last century heavily driven technological innovation report note considerable portion stemming federal investment RD helped produce many form computing medical advance among thing 2015 US invested 27 percent GDP RD compared 29 percent Germany 21 percent China public share RD investment fallen 40 percent 1985 25 percent 2015 task force call recommitment federal support “Innovation key role job creation growth” Autor say Given significance innovation job wealth creation report call increased overall federal research funding targeted assistance help small mediumsized business adopt technology policy creating wider geographical spread innovation US policy enhance investment worker capital including elimination accelerated capital depreciation claim employer training tax credit function like RD tax credit Global Issues US Suggestions addition Reynolds Autor Mindell MIT’s Task Force Work Future consisted group 18 MIT professor representing five Institute school MIT Schwarzman College Computing 22person advisory board drawn rank industry leader former government official academia 14person research board scholar 20 graduate student task force also consulted business executive labor leader community college leader among others final document includes case study specific firm sector well Task Force publishing nearly two dozen research brief go primary research detail task force observed global pattern play way technology adopted diffused workplace although recommendation focused US policy issue “While report geared toward US policy term clearly speaking lot trend issue exist globally” Reynolds said “The message US Many challenge outline found country albeit lesser degree wrote report ‘the central challenge ahead indeed work future advance labor market opportunity meet complement shape technological innovations’” task force intends circulate idea report among policymakers politician corporate leader business manager researcher well anyone interest condition work 21st century “I hope people receptive” Reynolds add “We made forceful recommendation tie together different policy area — skill job quality innovation issue critical particularly think recovery rebuilding age COVID19 hope message picked public sector private sector leader essential forge path forward”Tags Automation Robots Future Work Productivity AI
1,301
Data Visualization with Python HoloViz: Interactive Plots and Widgets in Jupyter
The Goal of the Visualization With data representing a ground “truth” of binary classification, and predicted values (floats ranging from 0 to 1.0), I’m going to put together a dashboard in order to: Generate Hard Predictions Show a confusion matrix Evaluate the Classifier through the AUC curve and a Precision-Recall Curve The “Data” Basically, I created an artificial set of binary categories (85% / 15%), threw random data at each bin, and then set the values between [0,1]. This should look a lot like the result of a binary classifier from scikit-learn. Generating Hard Predictions This is the key part of the interactive portion of this visualization. At the top of the dashboard, there will be a handy slider whose values will represent cut-off values (above that value, assume category 1, below, assume 0). By default, I initialize the cutoff value to maximize the F1-Score. In theory, if we could have perfect precision and recall, this quantity should be 1. As the slider moves through the various cutoff values, the rest of the visualization should convey changes in the various other metrics. One of the best ways to capture this relationship is through a confusion matrix, or a 2x2 table showing the results of the prediction against the actual values. The Confusion Matrix To achieve this visual element, I’ll be using the hv.HeatMap plot, along with some tricks to make it behave. Getting customized axes and tick marks proved to be rather difficult, so instead, I’ll also use hv.Labels to make it explicitly clear what the confusion matrix is showing: A Confusion Matrix! Hopefully the diagonal has the big values. The tricky part here was disabling the axes and positioning things correctly. The heat map is really a 2x2, with ranges (0,1) on both x and y. So in order to place something in the top left quadrant, you need to refer to it with a tuple corresponding to (x,y): (0,1,VAL) where VAL is the actual value of the heat map or the corresponding label. I created two lists and used a map to sort things in the right order. AUC and Precision-Recall Curves The code for generating these curves is pretty simple as I’ll continue leaning on HoloViews: # for the AUC, we need only plot our FP vs TP hv.Curve(data[:,[3,1]]). \ opts(xrotation=45, xlabel='False Positive', ylabel='True Positive') AUC Curve… My fake data is a little too easy to “classify” # for the PR curve, we need only plot recall vs precision hv.Curve(data[:,[6,5]]). \ opts(xlim=(0,1), ylim=(0,1)) Very Precision, Much Recall. A Layout and Putting it Together The individual components are pretty easy to slap together, but now I’ll bring it all into a single view. The slider will now iterate through the various cut-off values while the rest of the plots update. In this way, we can see different F1 Scores, changes to the confusion matrix, and where on the PR curve each cutoff will land. For this section, I’ll be introducing two complications. The big one, wrapping everything in a class, is a practice I use to keep things organized. With the second, I’ll be using a layout instead of running each widget in a separate cell. The layouts that come with panel are fairly simple and do a good job of letting you track widgets as you add them. For this dashboard, I’ll often refer to a widget by it’s position in the layout rather than directly: Hey would you look at that, an interactive dashboard! The class code shouldn’t be too horrible. There are basically five components: (During Initialization) Defining all of the data I’ll be using. I try to pre-calculate everything I’ll need beforehand (or optimize its calculation) to smooth out the user experience. (During Initialization) Initialize the widgets with default values or some initial plot. (During Initialization) Assign watchers. Because everything is wrapped in a class, the actual callbacks can come later. Define callbacks. Remember, these are the functions that are triggered by interacting with some widget and subsequently, modify other widgets or layouts. Create plotting functions. These functions basically create a plot when called. During a callback, plots will get created for each change induced by the watchers. The actual plotting code should be fairly straightforward. Personally, the hardest part of plotting is getting the display options to look right. HoloViews does a pretty good job laying out the options either in the docstrings or the help function, invoked with: hv.help(hv.Curve) # or any hv plot You may notice under the ### layouts ### section, I actually use several layouts. You can use a pn.gridspec to make one super layout, but I find it’s simplest to think in rows and columns. The pn.layout.row and pn.layout.column also do a great job at centering and dealing with margins, saving a lot of headache. Using these also makes referring to or updating widgets in those layouts a lot easier. Lastly, I want to point out that if you intend to work primarily in a notebook, you do not need to use classes or layouts. Widgets in different cells will still update as long as the linking code (callback/watcher) is working properly. Again, follow along with the notebook at: https://github.com/ernestk-git/holoviz_extended/blob/master/Panel_Interactive.ipynb
https://towardsdatascience.com/data-visualization-with-python-holoviz-plotting-4848e905f2c0
['Ernest Kim']
2019-09-05 14:27:35.617000+00:00
['Data Science', 'Data Visualization', 'Python', 'Dashboard', 'Bokeh']
Title Data Visualization Python HoloViz Interactive Plots Widgets JupyterContent Goal Visualization data representing ground “truth” binary classification predicted value float ranging 0 10 I’m going put together dashboard order Generate Hard Predictions Show confusion matrix Evaluate Classifier AUC curve PrecisionRecall Curve “Data” Basically created artificial set binary category 85 15 threw random data bin set value 01 look lot like result binary classifier scikitlearn Generating Hard Predictions key part interactive portion visualization top dashboard handy slider whose value represent cutoff value value assume category 1 assume 0 default initialize cutoff value maximize F1Score theory could perfect precision recall quantity 1 slider move various cutoff value rest visualization convey change various metric One best way capture relationship confusion matrix 2x2 table showing result prediction actual value Confusion Matrix achieve visual element I’ll using hvHeatMap plot along trick make behave Getting customized ax tick mark proved rather difficult instead I’ll also use hvLabels make explicitly clear confusion matrix showing Confusion Matrix Hopefully diagonal big value tricky part disabling ax positioning thing correctly heat map really 2x2 range 01 x order place something top left quadrant need refer tuple corresponding xy 01VAL VAL actual value heat map corresponding label created two list used map sort thing right order AUC PrecisionRecall Curves code generating curve pretty simple I’ll continue leaning HoloViews AUC need plot FP v TP hvCurvedata31 optsxrotation45 xlabelFalse Positive ylabelTrue Positive AUC Curve… fake data little easy “classify” PR curve need plot recall v precision hvCurvedata65 optsxlim01 ylim01 Precision Much Recall Layout Putting Together individual component pretty easy slap together I’ll bring single view slider iterate various cutoff value rest plot update way see different F1 Scores change confusion matrix PR curve cutoff land section I’ll introducing two complication big one wrapping everything class practice use keep thing organized second I’ll using layout instead running widget separate cell layout come panel fairly simple good job letting track widget add dashboard I’ll often refer widget it’s position layout rather directly Hey would look interactive dashboard class code shouldn’t horrible basically five component Initialization Defining data I’ll using try precalculate everything I’ll need beforehand optimize calculation smooth user experience Initialization Initialize widget default value initial plot Initialization Assign watcher everything wrapped class actual callback come later Define callback Remember function triggered interacting widget subsequently modify widget layout Create plotting function function basically create plot called callback plot get created change induced watcher actual plotting code fairly straightforward Personally hardest part plotting getting display option look right HoloViews pretty good job laying option either docstrings help function invoked hvhelphvCurve hv plot may notice layout section actually use several layout use pngridspec make one super layout find it’s simplest think row column pnlayoutrow pnlayoutcolumn also great job centering dealing margin saving lot headache Using also make referring updating widget layout lot easier Lastly want point intend work primarily notebook need use class layout Widgets different cell still update long linking code callbackwatcher working properly follow along notebook httpsgithubcomernestkgitholovizextendedblobmasterPanelInteractiveipynbTags Data Science Data Visualization Python Dashboard Bokeh
1,302
Demystifying Uncertainty Principle
I used MATLAB software to plot the curves shown in figures 1, 2 and 3 When both slits are open (fig. 2), it looks like the electron can hit the screen by going either from slit 1 or slit 2. It is understood that opening the second slit increases the number of electrons, and hence the probability of electrons striking the screen, but when a plot is drawn between the probability of hitting the screen when both slits are open versus the distance along the screen then there were regions where we get higher number of electrons (or higher probability than the previous case) but at other regions, we get lower number of electrons. In some regions, the probability of electrons striking the screen came out to be zero. This is quite intimidating. In Fig. 2 shown above, the points where the plot touches the horizontal axis are the points of zero probability. Feynman’s Water Wave Slit Experiment | Source: The Feynman Lectures on Physics Vol. 1 Feynman’s Explanation In fact, Feynman wrote, “The double-slit experiment contains all the mysteries of quantum mechanics.” It is observed that the pattern we got when both slits are open is similar to the plot when waves (shown above) were taken instead of particles. This concludes that electrons also interfere and behave as waves as well. This arises the regions of zero probability (destructive interference) when both slits were open. That means those electrons would have hit the screen in the zero probability region if only one slit was open and don’t strike the same region when both slits are open but here, another problem arises — Electron gun was firing one electron at a time so how does an electron know how many slits were open? “It seems as if, somewhere on their journey from source to screen, the particles (electrons) acquire information about both slits”, Hawking wrote in ‘Grand Design’. Many possible explanations were given by many physicists to explain this quantum behavior. Feynman said that the electrons take every possible path connecting those two points i.e., from source to screen, an electron can take a straight line path or a path to Mars and comes back on Earth to strike the screen. It looks like a science-fiction movie but it isn’t. According to him, when both slits are open, the electron goes through slit 1 and interferes with the path in which it goes through slit 2 which causes interference. I have a different perspective to comprehend this phenomenon: When both the slits are open then, electron divides into two halves which go through each slit, and interfere with each other (Interfering with itself doesn’t produce any effect). When only one slit is open, it won’t split into two. Now, the same question arises: how does an electron know about the slits? The answer is simple but intriguing. It’s because of the uncertainty in the position of an electron. It can acquire information about the slits before it reaches there. Consider that the electron is about to enter slit 1 but due to uncertainty in its position it could be anywhere near this slit’s vicinity. For example, the electron may be behind the slit or it may be crossing the slit with a certain velocity or it may be about to enter the slit. So, before the electron reaches slit 1, it can get the information about the slit and can act accordingly. Figure 3: When electrons are observed The Need for Uncertainty Principle You might be thinking that we should make an apparatus that can mark the slit whenever an electron goes through that particular slit and then we can say that the electron goes through either slit 1 or slit 2. Feynman pointed out such an apparatus. We can put the lights near the slits so, when the sound from the buzzer is heard, we see a flash either near slit 1 or slit 2. But when Feynman made such an apparatus, it changes the outcome and he got a different plot (Fig. 3) which shows no interference pattern. It seemed like observing the electron changes its behavior. Actually, the photon which is emitted from the light source hits the electron and gets scattered. This photon gives an impulse to the electron which changes its path in such a way that the electron didn’t strike the region where it supposed to when there was no light source and hence, it shows no interference pattern. Now, to decrease the impulse given to the electron by the photon, the momentum of the photon should be decreased. The momentum of the photon(p) is inversely proportional to its wavelength(λ): p ∝ 1/ λ. When the wavelength becomes more than the separation of the slits, the impulse gets decreased, and the plot obtained looks similar to the plot (Fig. 2) when no light was used. But by decreasing the momentum or increasing the wavelength, the flash becomes a big fuzzy when the light was scattered from electron, and it’s hard to distinguish from which slit the flash is coming. Therefore, this experiment concludes that if we try to decrease, without disturbing the outcome, p (or increase λ) then, x (position) increases, and vice-versa. It means, if we are successful in locating the position of a microscopic particle (like an electron) then we can’t tell how much fast or slow the particle is going (uncertainty in velocity or momentum Δp) or if we successfully measure its velocity then we are unable to find its position in space (uncertainty in position Δx). We can’t know both things simultaneously with great certainty. So, it’s hard to build such an apparatus that can locate the electron without disturbing the result. Fortunately, Quantum Mechanics’ laws don’t apply for macroscopic objects (like soccerball) otherwise we’d see the soccer ball moving in the zig-zag path when we kick it. But these laws successfully explain the phenomena where other laws failed to explain like the photoelectric effect. To save the existence of Quantum Mechanics and explain this absurdity, Heisenberg suggested there should be some limitation to make the laws consistent, and gave the Uncertainty Principle. Since Quantum Mechanics is such a powerful theory used in many upcoming technologies like Quantum Computing, we need uncertainty principle to comprehend quantum behavior of atomic and sub-atomic particles. We Are Not Sub-Atomic Particles! Heisenberg’s Uncertainty Principle applies to every particle but it can’t be observed on a macroscopic level because λ is very large and hence, uncertainty in position is very small. In the TV scripted series, Genius, Albert Einstein, and Niels Bohr were talking to each other about the proof of Quantum Mechanics while walking. There was a moment when they were crossing the road and Einstein intentionally threw himself toward a car but Bohr pulled him back before the car hit him. When Bohr asked him to be more careful from the next time, Albert smiled hysterically and said, “Why should I? Why should either of us? According to you if that automobile was a particle but we didn’t see it, it wouldn’t have been there at all. We would be perfectly safe”. To his defense, Bohr replied, “That principle is applied only to sub-atomic particles and automobiles are not sub-atomic particles”. In conclusion, Heisenberg’s Uncertainty Principle might save the perilous existence of Quantum Mechanics in the future but it won’t save you from the car so, be careful! The Uncertainty Principle | Genius
https://medium.com/swlh/the-soul-of-quantum-mechanics-2dc215b390da
['Dishant Varshney']
2020-06-25 03:25:50.207000+00:00
['Physics', 'Knowledge', 'Science', 'Future', 'Quantum Mechanics']
Title Demystifying Uncertainty PrincipleContent used MATLAB software plot curve shown figure 1 2 3 slit open fig 2 look like electron hit screen going either slit 1 slit 2 understood opening second slit increase number electron hence probability electron striking screen plot drawn probability hitting screen slit open versus distance along screen region get higher number electron higher probability previous case region get lower number electron region probability electron striking screen came zero quite intimidating Fig 2 shown point plot touch horizontal axis point zero probability Feynman’s Water Wave Slit Experiment Source Feynman Lectures Physics Vol 1 Feynman’s Explanation fact Feynman wrote “The doubleslit experiment contains mystery quantum mechanics” observed pattern got slit open similar plot wave shown taken instead particle concludes electron also interfere behave wave well arises region zero probability destructive interference slit open mean electron would hit screen zero probability region one slit open don’t strike region slit open another problem arises — Electron gun firing one electron time electron know many slit open “It seems somewhere journey source screen particle electron acquire information slits” Hawking wrote ‘Grand Design’ Many possible explanation given many physicist explain quantum behavior Feynman said electron take every possible path connecting two point ie source screen electron take straight line path path Mars come back Earth strike screen look like sciencefiction movie isn’t According slit open electron go slit 1 interferes path go slit 2 cause interference different perspective comprehend phenomenon slit open electron divide two half go slit interfere Interfering doesn’t produce effect one slit open won’t split two question arises electron know slit answer simple intriguing It’s uncertainty position electron acquire information slit reach Consider electron enter slit 1 due uncertainty position could anywhere near slit’s vicinity example electron may behind slit may crossing slit certain velocity may enter slit electron reach slit 1 get information slit act accordingly Figure 3 electron observed Need Uncertainty Principle might thinking make apparatus mark slit whenever electron go particular slit say electron go either slit 1 slit 2 Feynman pointed apparatus put light near slit sound buzzer heard see flash either near slit 1 slit 2 Feynman made apparatus change outcome got different plot Fig 3 show interference pattern seemed like observing electron change behavior Actually photon emitted light source hit electron get scattered photon give impulse electron change path way electron didn’t strike region supposed light source hence show interference pattern decrease impulse given electron photon momentum photon decreased momentum photonp inversely proportional wavelengthλ p ∝ 1 λ wavelength becomes separation slit impulse get decreased plot obtained look similar plot Fig 2 light used decreasing momentum increasing wavelength flash becomes big fuzzy light scattered electron it’s hard distinguish slit flash coming Therefore experiment concludes try decrease without disturbing outcome p increase λ x position increase viceversa mean successful locating position microscopic particle like electron can’t tell much fast slow particle going uncertainty velocity momentum Δp successfully measure velocity unable find position space uncertainty position Δx can’t know thing simultaneously great certainty it’s hard build apparatus locate electron without disturbing result Fortunately Quantum Mechanics’ law don’t apply macroscopic object like soccerball otherwise we’d see soccer ball moving zigzag path kick law successfully explain phenomenon law failed explain like photoelectric effect save existence Quantum Mechanics explain absurdity Heisenberg suggested limitation make law consistent gave Uncertainty Principle Since Quantum Mechanics powerful theory used many upcoming technology like Quantum Computing need uncertainty principle comprehend quantum behavior atomic subatomic particle SubAtomic Particles Heisenberg’s Uncertainty Principle applies every particle can’t observed macroscopic level λ large hence uncertainty position small TV scripted series Genius Albert Einstein Niels Bohr talking proof Quantum Mechanics walking moment crossing road Einstein intentionally threw toward car Bohr pulled back car hit Bohr asked careful next time Albert smiled hysterically said “Why either u According automobile particle didn’t see wouldn’t would perfectly safe” defense Bohr replied “That principle applied subatomic particle automobile subatomic particles” conclusion Heisenberg’s Uncertainty Principle might save perilous existence Quantum Mechanics future won’t save car careful Uncertainty Principle GeniusTags Physics Knowledge Science Future Quantum Mechanics
1,303
The Singularity of Knowledge
Synergy Beyond imagination, the human mind is also gifted with the ability to decipher patterns — to make sense out of nonsense. Especially, we seem to love common denominators. So much so that the biggest breakthroughs in science have revolved around unification as much as they have around discovery. Routinely, we’ve made paradigm-shattering discoveries by simply tying loose ends together, and we continue to operate under this ambition (it can be said that our next target in line is dark matter). The greatest minds in history have understood this need for unification to be the ultimate prerogative. Some, like Nikola Tesla, had subsequently failed in their connecting of certain dots while others, like James Clerk Maxwell, had become famous for it. The problem is that it’s not easy. Far from it. As clever as we are, we’ve compartmentalized our systems of knowledge into such distinct and divided segments of study that it’s near impossible for one student to embark upon two opposing streams of belief, something that had been the norm only a hundred years ago. The noösphere promises us a rekindling of this comprehensive approach to understanding our world. With its synergetic potential and it’s touch-point responsiveness, it holds the ability to take all that we’ve chopped up and bring it back together, even if for a moment, just to see if anything blends together comfortably, anything that we hadn’t had, or couldn’t have had, previously considered. Because, and this is the main point to digest, the noösphere is able to do something that we ourselves have a hard time doing. It can discern and catalogue, cross-boundaries and synthesize streams of information. It can employ numerous algorithms that would take us an absurdly long time to match in terms of efficacy. Sounds like A.I. doesn’t it? It doesn’t necessarily have to be, though artificial intelligence will certainly be an integral part of its picture, as it currently is. The noösphere is the environ. We are the data points. Twitter lets political discourse unfold in real time. Instagram lets people share their experiences with a taste of immediacy. TikTok, well, it may serve useful in some respect one day. Quora, Reddit, Wikipedia. All far from perfect, but we’re getting there. Once we’re able to communicate faster and better and once we’re able to contextualize and idealize more comprehensively than ever before, we’ll see the connecting of a new array of dots that we hadn’t previously thought possible. Knowledge will come together, under a real singularity, and harmonize itself to a point whereby we’ll have as comprehensive of an outlook as we can imagine. Whatever this really means (and it may mean many very different things), it will be the milestone of our civilization. Technologically, socially, environmentally, astronomically, biologically — information will reach the apex of interconnectedness; in so doing, we’ll have the most informed understanding that there can possibly be (correlating to our rate of new discoveries) at any given time. Our segregation of various fields of study will no longer be isolating; our subjective experiences and insights will no longer be so subjective; our vision will no longer be obstructed by division. The singularity of knowledge — it’s already happening, but it’s about to speed up to rates we won’t even realize until we’re able to look back on it. Our only obligation, it seems, is to nurture this process rather than standing back and watching it unfold on its own under the presumption of a far-and-away singularity that we don’t have enough time or imaginative power to consider. In essence, we are the singularity.
https://medium.com/predict/the-singularity-of-knowledge-5b60b04892a6
['Michael Woronko']
2020-12-02 15:20:37.627000+00:00
['Philosophy', 'Technology', 'Future', 'Knowledge', 'Science']
Title Singularity KnowledgeContent Synergy Beyond imagination human mind also gifted ability decipher pattern — make sense nonsense Especially seem love common denominator much biggest breakthrough science revolved around unification much around discovery Routinely we’ve made paradigmshattering discovery simply tying loose end together continue operate ambition said next target line dark matter greatest mind history understood need unification ultimate prerogative like Nikola Tesla subsequently failed connecting certain dot others like James Clerk Maxwell become famous problem it’s easy Far clever we’ve compartmentalized system knowledge distinct divided segment study it’s near impossible one student embark upon two opposing stream belief something norm hundred year ago noösphere promise u rekindling comprehensive approach understanding world synergetic potential it’s touchpoint responsiveness hold ability take we’ve chopped bring back together even moment see anything blend together comfortably anything hadn’t couldn’t previously considered main point digest noösphere able something hard time discern catalogue crossboundaries synthesize stream information employ numerous algorithm would take u absurdly long time match term efficacy Sounds like AI doesn’t doesn’t necessarily though artificial intelligence certainly integral part picture currently noösphere environ data point Twitter let political discourse unfold real time Instagram let people share experience taste immediacy TikTok well may serve useful respect one day Quora Reddit Wikipedia far perfect we’re getting we’re able communicate faster better we’re able contextualize idealize comprehensively ever we’ll see connecting new array dot hadn’t previously thought possible Knowledge come together real singularity harmonize point whereby we’ll comprehensive outlook imagine Whatever really mean may mean many different thing milestone civilization Technologically socially environmentally astronomically biologically — information reach apex interconnectedness we’ll informed understanding possibly correlating rate new discovery given time segregation various field study longer isolating subjective experience insight longer subjective vision longer obstructed division singularity knowledge — it’s already happening it’s speed rate won’t even realize we’re able look back obligation seems nurture process rather standing back watching unfold presumption farandaway singularity don’t enough time imaginative power consider essence singularityTags Philosophy Technology Future Knowledge Science
1,304
FrankMocap — New SOTA for Fast 3D Pose Estimation
FrankMocap is a new state-of-the-art neural network for 3D body and hand movement recognition that was recently developed and published by researchers at Facebook Artificial Intelligence Research (FAIR). Egocentric Hand Motion Capture. Source: FAIR Github The model accepts video footage from one RGB camera as input. At the output, the model gives the predicted body and arm poses. FrankMocap's main goal is to make it easier to access 3D posture estimation techniques. FrankMocap processes predictions at 9.5 frames per second on inference. At the same time, the system bypasses analogs in the accuracy of predictions.
https://medium.com/swlh/frankmocap-sota-3d-pose-estimation-87b679419e74
['Mikhail Raevskiy']
2020-12-13 21:12:46.706000+00:00
['Machine Learning', 'Data Science', 'Deep Learning', 'Artificial Intelligence', 'AI']
Title FrankMocap — New SOTA Fast 3D Pose EstimationContent FrankMocap new stateoftheart neural network 3D body hand movement recognition recently developed published researcher Facebook Artificial Intelligence Research FAIR Egocentric Hand Motion Capture Source FAIR Github model accepts video footage one RGB camera input output model give predicted body arm pose FrankMocaps main goal make easier access 3D posture estimation technique FrankMocap process prediction 95 frame per second inference time system bypass analog accuracy predictionsTags Machine Learning Data Science Deep Learning Artificial Intelligence AI
1,305
Tools for using Kubernetes
Tools for using Kubernetes Tools for a team of any level to realize a container architecture. Kubernetes, the container orchestration tool originally developed by Google, has become a defacto for Agile and DevOps teams. With the advance of ML, Kubernetes has become even more important for an organization. Here, we have summed up a list of tools which can be used to realize a container architecture for different phases and maturity levels for enterprise organizations. Kubectl The most important area for Devops is command line. Kubectl is the command line tool for Kubernetes that controls the Kubernetes cluster manager. Under Kubectl, there are several subcommands for more precise cluster management control, such as converting files between different API versions, or executing container commands. It is also the basis of many other tools in the ecosystem. kuttle: kubectl wrapper — Kubernetes wrapper for sshuttle kubectl sudo — kubernetes cmd with the security privileges of another user mkubectx — single command across for all your selected kubernetes contexts Kubectl-debug — Debugging the pod by a new container with troubleshooting tools pre-installed Minikube The next important area is development. Minikube is a great Kubernetes tool for development and testing. Minikube is used by teams to get started on and build POCs using Kubernetes. It can be used to run a single-node Kubernetes cluster locally for development and testing. There are plenty of Kubernetes features supported on Minikube, including DNS, NodePorts, ConfigMaps and Secrets, Dashboards, Container Runtime (Docker, rkt, and CRI-O), enabling CNI’s, and ingress. This step-by-step guide for a quick and easy installation. KubeDirector Once the team has build extensively, it will need to scale out the clusters. It brings Enterprise level capabilities for Kubernetes. KubeDirector uses standard Kubernetes facilities of custom resources and API extensions to implement stateful scaleout application clusters. This approach enables transparent integration with user/resource management and existing clients and tools. Prometheus Each team has a need for operational metrics to define operational efficiency and ROI. Prometheus can be leveraged for providing alerting and monitoring infrastructure to Kubernetes native applications. Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. Prometheus provides the infrastructure but for metric analytics, dashboards and monitoring graphs, Grafana is used on top of Prometheus. Skaffold Once the team has spent time building a repeatable process for containerization with metrics and alerting, having CI/CD becomes the next phase of development. Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. It helps the team to iterate on the application source code locally then deploy to local or remote Kubernetes clusters. Skaffold handles the workflow for building, pushing and deploying your application. It also provides building blocks and describes customization for a CI/CD pipeline. CI/CD will require test automation as well. The test-infra repository contains tools and configuration files for the testing and automation needs of the Kubernetes project. KubeFlow Once the products gather huge amounts of data, data pipelines and data products could be build for these applications. Kubeflow is a Cloud Native platform for machine learning based on Google’s internal machine learning pipelines.
https://medium.com/acing-ai/tools-for-using-kubernetes-84d47a73ef2e
['Vimarsh Karbhari']
2020-06-11 13:29:53.152000+00:00
['Containers', 'Data Science', 'Artificial Intelligence', 'Technology', 'Data Engineering']
Title Tools using KubernetesContent Tools using Kubernetes Tools team level realize container architecture Kubernetes container orchestration tool originally developed Google become defacto Agile DevOps team advance ML Kubernetes become even important organization summed list tool used realize container architecture different phase maturity level enterprise organization Kubectl important area Devops command line Kubectl command line tool Kubernetes control Kubernetes cluster manager Kubectl several subcommands precise cluster management control converting file different API version executing container command also basis many tool ecosystem kuttle kubectl wrapper — Kubernetes wrapper sshuttle kubectl sudo — kubernetes cmd security privilege another user mkubectx — single command across selected kubernetes context Kubectldebug — Debugging pod new container troubleshooting tool preinstalled Minikube next important area development Minikube great Kubernetes tool development testing Minikube used team get started build POCs using Kubernetes used run singlenode Kubernetes cluster locally development testing plenty Kubernetes feature supported Minikube including DNS NodePorts ConfigMaps Secrets Dashboards Container Runtime Docker rkt CRIO enabling CNI’s ingres stepbystep guide quick easy installation KubeDirector team build extensively need scale cluster brings Enterprise level capability Kubernetes KubeDirector us standard Kubernetes facility custom resource API extension implement stateful scaleout application cluster approach enables transparent integration userresource management existing client tool Prometheus team need operational metric define operational efficiency ROI Prometheus leveraged providing alerting monitoring infrastructure Kubernetes native application Prometheus Cloud Native Computing Foundation project system service monitoring system collect metric configured target given interval evaluates rule expression display result trigger alert condition observed true Prometheus provides infrastructure metric analytics dashboard monitoring graph Grafana used top Prometheus Skaffold team spent time building repeatable process containerization metric alerting CICD becomes next phase development Skaffold command line tool facilitates continuous development Kubernetes application help team iterate application source code locally deploy local remote Kubernetes cluster Skaffold handle workflow building pushing deploying application also provides building block describes customization CICD pipeline CICD require test automation well testinfra repository contains tool configuration file testing automation need Kubernetes project KubeFlow product gather huge amount data data pipeline data product could build application Kubeflow Cloud Native platform machine learning based Google’s internal machine learning pipelinesTags Containers Data Science Artificial Intelligence Technology Data Engineering
1,306
Stories on ILLUMINATION-Curated — All Volumes
Archives of Collections — Volumes Stories on ILLUMINATION-Curated — All Volumes Easy access to curated and outstanding stories Photo by Syed Hussaini on Unsplash ILLUMINATION-Curated is a unique collection, consists of edited and high-quality stories. Our unique publication hosts outstanding and curated stories from experienced and accomplished writers of Medium. We compile and distribute stories submitted to ILLUMINATION-Curated daily. Our top writers make a great effort to create outstanding stories, and we help them develop visibility for their high-quality content. The purpose of this story is to keep volumes in a single link for easy access. As a reference material, we also provide a link to all editorial resources of ILLUMINATION-Curated in this post. Our readers appreciate the distribution lists covering stories submitted to ILLUMINATION-Curated daily. The daily volumes make it easy to access the articles and discover our writers. Some readers are closely following specific writers that they found in these circulated lists. This archive version can be a useful resource for researchers and those who are studying specific genres. We cover over 100 topics. This story allows our new writers to explore stories of our experienced writers and connect with them quickly and meaningfully. ILLUMINATION-Curated strives for cross-pollination. Writers learn from each other by collaborating. Our writers do not compete; instead, they enhance and extend each other’s messages. Customised Image courtesy of Dew Langrial 07 December 2020 06 December 2020 05 December 2020 04 December 2020 03 December 2020 02 December 2020 01 December 2020 30 November 2020 29 November 2020 28 November 2020 27 November 2020 26 November 2020 25 November 2020 24 November 2020 23 November 2020 22 November 2020 21 November 2020 20 November 2020 19 November 2020 18 November 2020 17 November 2020 16 November 2020 15 November 2020 14 November 2020 13 November 2020 12 November 2020 11 November 2020 10 November 2020 09 November 2020 08 November 2020 07 November 2020 06 November 2020 05 November 2020 04 November 2020 03 November 2020 02 November 2020 01 November 2020 30 October 2020 29 October 2020 28 October 2020 27 October 2020 26 October 2020 25 October 2020 24 October 2020 23 October 2020 22 October 2020 21 October 2020 20 October 2020 19 October 2020 18 October 2020 17 October 2020 16 October 2020 15 October 2020 14 October 2020 13 October 2020 12 October 2020 11 October 2020 10 October 2020 09 October 2020 08 October 2020 07 October 2020 06 October 2020 05 October 2020 04 October 2020 03 October 2020 02 October 2020 01 October 2020 30 September 2020 29 September 2020 28 September 2020 27 September 2020 26 September 2020 25 September 2020 24 September 2020 23 September 2020 22 September 2020 21 September 2020 20 September 2020 19 September 2020 Editorial Resources About ILLUMINATION Curated
https://medium.com/illumination-curated/stories-on-illumination-curated-627b289571b4
[]
2020-12-08 18:04:45.367000+00:00
['Business', 'Technology', 'Self Improvement', 'Science', 'Writing']
Title Stories ILLUMINATIONCurated — VolumesContent Archives Collections — Volumes Stories ILLUMINATIONCurated — Volumes Easy access curated outstanding story Photo Syed Hussaini Unsplash ILLUMINATIONCurated unique collection consists edited highquality story unique publication host outstanding curated story experienced accomplished writer Medium compile distribute story submitted ILLUMINATIONCurated daily top writer make great effort create outstanding story help develop visibility highquality content purpose story keep volume single link easy access reference material also provide link editorial resource ILLUMINATIONCurated post reader appreciate distribution list covering story submitted ILLUMINATIONCurated daily daily volume make easy access article discover writer reader closely following specific writer found circulated list archive version useful resource researcher studying specific genre cover 100 topic story allows new writer explore story experienced writer connect quickly meaningfully ILLUMINATIONCurated strives crosspollination Writers learn collaborating writer compete instead enhance extend other’s message Customised Image courtesy Dew Langrial 07 December 2020 06 December 2020 05 December 2020 04 December 2020 03 December 2020 02 December 2020 01 December 2020 30 November 2020 29 November 2020 28 November 2020 27 November 2020 26 November 2020 25 November 2020 24 November 2020 23 November 2020 22 November 2020 21 November 2020 20 November 2020 19 November 2020 18 November 2020 17 November 2020 16 November 2020 15 November 2020 14 November 2020 13 November 2020 12 November 2020 11 November 2020 10 November 2020 09 November 2020 08 November 2020 07 November 2020 06 November 2020 05 November 2020 04 November 2020 03 November 2020 02 November 2020 01 November 2020 30 October 2020 29 October 2020 28 October 2020 27 October 2020 26 October 2020 25 October 2020 24 October 2020 23 October 2020 22 October 2020 21 October 2020 20 October 2020 19 October 2020 18 October 2020 17 October 2020 16 October 2020 15 October 2020 14 October 2020 13 October 2020 12 October 2020 11 October 2020 10 October 2020 09 October 2020 08 October 2020 07 October 2020 06 October 2020 05 October 2020 04 October 2020 03 October 2020 02 October 2020 01 October 2020 30 September 2020 29 September 2020 28 September 2020 27 September 2020 26 September 2020 25 September 2020 24 September 2020 23 September 2020 22 September 2020 21 September 2020 20 September 2020 19 September 2020 Editorial Resources ILLUMINATION CuratedTags Business Technology Self Improvement Science Writing
1,307
7 Painless Writing Tips That Make a Powerful Impact
1. Give me that title* “What does an algorithm know about creating intriguing titles?” I snorted when I first discovered the headline-analyser co-schedule somewhere in the ether. As an experiment, I ran my most and least successful blog titles through the search bar and was horrified to discover co-schedule judged them accurately. It’s actually a useful tool. OK, so it won’t get your puns and it’s not infallible, but it does analyse your title for a metrics you’re probably not aware of: Sentiment: decides if you’re being optimistic or a killjoy; titles with a positive sentiment typically do best. Keywords: measure your choices against words most commonly searched for by inquisitive Googlers. Length analysis: tells you off for being too wordy or too terse; there’s both an optimal character- and word-count for clicks. Word balance: takes a cold, hard look at the readability and attractiveness of your title. *Meghan Trainor, ‘Title’ When 2. becomes one* I had no idea how often I was writing “very embarrassed” when I meant “mortified” or “anxious and upset” when I meant “traumatised”. And while I’m on the subject, “I won’t” will always feel more natural than “I will not”. Two words are rarely better than one: be concise, make an impact. *The Spice Girls, ‘When Two Becomes One’ 3. I’d do anything for love (But I won’t do that)* I’ve said it once and I’ll say it again (and again until the end of time): go through your writing and remove every excess “that”. You’ll find an alarming amount and you’ll thank me. *Meat Loaf, ‘I’d Do Anything for Love’ 4. I came in like a wrecking ball* Begin with ferocity. It takes 2 minutes of rewriting or swapping sentences around to make sure your first line is a dagger to the heart. Edit with ferocity. “I stood there and watched while the building burned down to the ground” becomes “I watched the building burn”. Fire. Literally. *Miley Cirus, ‘Wrecking Ball’ 5. Let’s twist Again* I’ve read advice to “never use common phrases or cliches”. It’s a (partial) lie; you should absolutely use both, just remember to give them your own twist — however small. “through thick and extremely thin” will always be more interesting than “through thick and thin”. *Chubby Checker, ‘Let’s Twist Again’ 6. Why. do. we. crucify ourselves?* You never need to apologise for what you’re saying. This isn’t because you’re always right (you’re not) but because qualifiers make your work less impactful and we should believe what we write. You don’t need to say “I believe, in my humble opinion, xyz”: we know it’s in your opinion because you’re writing it — and we’re under no illusions you’re humble. *Tori Amos, ‘Crucify’ 7. Stop! In the name of love* This is my favourite all-time tip because it means doing absolutely nothing. After you finish writing, stop. Breathe. Step away from the laptop. Drink tea. Cuddle the cat. It doesn’t matter how long you’ve been writing, no one can get away without editing after a break to clear their head. I should know: I once published an article that began “I once lived in a house in a house.” Sounds magical but is misleading. *The Hollies, ‘Stop in the Name of Love’
https://medium.com/blankpage/7-painless-writing-tips-that-make-a-powerful-impact-13c39d8ea7c7
['Jessica A']
2020-12-21 15:54:05.601000+00:00
['Writing', 'Writing Tips', 'Writing Life', 'Creativity', 'Creative Process']
Title 7 Painless Writing Tips Make Powerful ImpactContent 1 Give title “What algorithm know creating intriguing titles” snorted first discovered headlineanalyser coschedule somewhere ether experiment ran least successful blog title search bar horrified discover coschedule judged accurately It’s actually useful tool OK won’t get pun it’s infallible analyse title metric you’re probably aware Sentiment decides you’re optimistic killjoy title positive sentiment typically best Keywords measure choice word commonly searched inquisitive Googlers Length analysis tell wordy terse there’s optimal character wordcount click Word balance take cold hard look readability attractiveness title Meghan Trainor ‘Title’ 2 becomes one idea often writing “very embarrassed” meant “mortified” “anxious upset” meant “traumatised” I’m subject “I won’t” always feel natural “I not” Two word rarely better one concise make impact Spice Girls ‘When Two Becomes One’ 3 I’d anything love won’t I’ve said I’ll say end time go writing remove every excess “that” You’ll find alarming amount you’ll thank Meat Loaf ‘I’d Anything Love’ 4 came like wrecking ball Begin ferocity take 2 minute rewriting swapping sentence around make sure first line dagger heart Edit ferocity “I stood watched building burned ground” becomes “I watched building burn” Fire Literally Miley Cirus ‘Wrecking Ball’ 5 Let’s twist I’ve read advice “never use common phrase cliches” It’s partial lie absolutely use remember give twist — however small “through thick extremely thin” always interesting “through thick thin” Chubby Checker ‘Let’s Twist Again’ 6 crucify never need apologise you’re saying isn’t you’re always right you’re qualifier make work le impactful believe write don’t need say “I believe humble opinion xyz” know it’s opinion you’re writing — we’re illusion you’re humble Tori Amos ‘Crucify’ 7 Stop name love favourite alltime tip mean absolutely nothing finish writing stop Breathe Step away laptop Drink tea Cuddle cat doesn’t matter long you’ve writing one get away without editing break clear head know published article began “I lived house house” Sounds magical misleading Hollies ‘Stop Name Love’Tags Writing Writing Tips Writing Life Creativity Creative Process
1,308
How to Cultivate Patience, the Ancient Virtue We All Need Right Now
How to Cultivate Patience, the Ancient Virtue We All Need Right Now The way we live now discourages patience. It’s time to reprioritize this lost virtue. Two days before the Associated Press declared him the winner of the 2020 presidential election, Joe Biden tried to settle his nation’s rattled nerves. “[Democracy] sometimes requires a little patience,” he remarked. “Stay calm . . . the process is working.” For many, it wasn’t working fast enough. Every hour that passed seemed to turn up the tension and frustration of the U.S. electorate. Protests and counterprotests broke out. After just a few days of waiting, America seemed poised to lose its collective shit. Contrast this state of affairs with the 2000 contest between George W. Bush and Al Gore, which remained in limbo for five weeks following Election Day. If you can’t imagine today’s America putting up with that kind of delay, experts can’t either. “Patience is a character strength that our society has definitely neglected,” says Sarah Schnitker, PhD, an associate professor of psychology at Baylor University. “Over the past 20 years in particular, as our technology has advanced at a very fast pace, I think it’s changed our expectations about when and how much we should have to wait as well as our general ideas about suffering.” Much of Schnitker’s research has centered on patience. She says that many of history’s great philosophers, from Aristotle to Thomas Aquinas, regarded patience as one of humanity’s noblest attributes. Likewise, most of the major Eastern and Western religions — from Judaism and Christianity to Islam and Buddhism — describe patience as a fundamental virtue to be admired and cultivated. “Patience is a character strength that our society has definitely neglected.” But since the Industrial Revolution ushered in a new era of speed, production, and consumption, patience has lost its appeal, Schnitker says. “Our culture is all about quick wins and solving problems fast,” she adds. “If you’re patient, there’s this misconception that you’re kind of a doormat — that patience is not something we think of as winners having.” There are economic, political, and environmental reasons to believe that America’s disdain for patience will eventually cost it (and the world) dearly. But setting aside those concerns, patience also seems to be really important when it comes to mental health and well-being, Schnitker says. “It’s positively associated with life satisfaction, with hope, with self-esteem, and with regulated behavior, and it’s negatively associated with loneliness, depression, and anxiety,” she says. Patience can alleviate the pressure to advance and achieve that many of us feel so urgently, and patience may replace the shallow gratifications that many of us now demand — and often come to depend on — from the stuff we buy, watch, and otherwise consume. “I think that this year — both with the pandemic and with the political situation — has shown us that we need to develop more patience,” Schnitker says. Fortunately, there are some evidence-backed ways to do that. Understanding what patience looks like Situations that demand patience tend to come in three types. “There’s daily hassle patience,” Schnitker says. This type includes waiting in line at the store, waiting for a web browser to load, and other quotidian sources of delay or frustration. The next type she terms “hardship patience,” which refers to open-ended situations like living with an illness or enduring other sources of persistent concern or uncertainty. Finally, there’s “interpersonal patience,” which is the type a person requires when dealing with an obstreperous child, an obnoxious coworker, or some other difficult person. Speaking with Elemental the day before Biden was announced as the winner, Schnitker said, “The current moment is interesting because the election really involves all three types of patience. It’s waiting for an outcome, and maybe it’s dealing with relatives who don’t agree with you, and it’s also dealing with thoughts about long-term polarization and the need to find more unity.” She says that “patience” (like the word “patient”) is derived from the Latin word for suffering. And people who possess patience are those who are able to endure something unpleasant without letting it influence their emotions or behavior. Spend some time thinking about that definition, and you begin to realize how central patience (or its opposite) is to anxiety, depression, anger, and other negative emotional states as well as to compulsive behavior. All of these ills are tightly bound up with an inability to tolerate a person or a situation. It could even be said that the current moment’s fixation with happiness — with finding more of it and making it last — is driven in part by impatience; we don’t want to have to wait long for our next moment of joy or pleasure or bliss. Why are we all so impatient these days? Again, Schnitker says that many elements of contemporary life prioritize speed and ease over patience and endurance. “We are all about instant gratification, and I think the advertising and technology industries push us in this direction,” she says. Whatever it is that a person wants — food, entertainment, information, stuff, sex, money, enlightenment — the fastest route to each is continually pitched to us as the best route despite evidence to the contrary. Haste and urgency, for example, are associated with stress and arousal. “When we speed everything up — when we have this feeling of go go go — that’s all sympathetic nervous activity,” says Peter Payne, a researcher at Dartmouth College who studies meditative movement and the health benefits of practices such as qigong and tai chi. While sympathetic nervous system activity is fine in moderation, chronic overactivity of this system is associated with anxiety, depression, headaches, poor sleep, and diseases of the heart, gut, and immune system. Rushing all the time seems to promote this kind of overactivity and its many detriments. “It’s positively associated with life satisfaction, with hope, with self-esteem, and with regulated behavior, and it’s negatively associated with loneliness, depression, and anxiety.” Impatience may also rob people of experiences that give life meaning. Researchers have found that effort seems to be an essential ingredient in satisfaction, contentment, and other positive emotions. “A lot of happiness lies in the doing, not in the having done,” says Barbara Fredrickson, a distinguished professor of psychology at the University of North Carolina. She says that the expenditure of effort can contribute to a sense of purpose, meaning, and interconnectedness — all of which are sources of self-esteem and other positive states. The message here is not that everything fast or easy is bad. Rather, it’s that fast and easy are not always optimal. When people lose the ability to be patient, they may also be losing access to the things that make life most satisfying and enjoyable while also raising their risks for all the health problems associated with stress. How to cultivate patience The more people exercise their patience muscles, the stronger those muscles become. “There are a lot of ways to practice waiting in life, and doing this can really help us build up our patience,” Schnitker says. For example, whenever you encounter a wait — whether it’s in line at the store or sitting in traffic — those are good opportunities to practice patience. “Not using that time to reach for our phones and check our social or news feeds — I think can really help,” she says. To her point, research from Temple University has found that frequent smartphone use is associated with both heightened impatience and impulsivity. During periods of waiting or frustration, Schnitker says it can be helpful to practice a technique known as cognitive reappraisal or “reframing,” which basically means looking at something as an opportunity rather than as a hardship. “When people are able to reframe what could be considered a threat or a source of suffering as a useful challenge, we know that helps,” she says. “So if you tell yourself that patience is good for my mental health and I need to develop it, then you can reframe those periods of waiting as great opportunities to help yourself.” She says that reframing is also helpful when dealing with people who get on your nerves or during situations that entail extended periods of waiting. “So with this election, I could tell myself that this waiting should restore some of my faith in the system because it’s showing me that we care about our democracy and making sure everyone’s vote counts,” she says. In interpersonal contexts, reframing could entail changing your thoughts from “this person is so annoying” to “being around this person is an opportunity for me to practice my patience.” It could also entail making an effort to see the situation from another person’s point of view. Finally, Schnitker says that mindfulness training and similar forms of meditation are helpful because they pump up your awareness of your own thoughts and feelings. It’s this awareness that allows you to make helpful tweaks — to your habits and also to your appraisals of people and situations — that will bolster your patience. “Right now, we don’t have a lot of cultural narratives that help us make sense of waiting or suffering,” she says. Rediscovering and reprioritizing patience may be one way to create more-helpful narratives — and to push back against so much that feels wrong with the world today.
https://elemental.medium.com/how-to-cultivate-patience-the-ancient-virtue-we-all-need-right-now-afd144abb507
['Markham Heid']
2020-11-12 06:32:23.877000+00:00
['The Nuance', 'Patience', 'Lifestyle', 'Mental Health', 'Health']
Title Cultivate Patience Ancient Virtue Need Right NowContent Cultivate Patience Ancient Virtue Need Right way live discourages patience It’s time reprioritize lost virtue Two day Associated Press declared winner 2020 presidential election Joe Biden tried settle nation’s rattled nerve “Democracy sometimes requires little patience” remarked “Stay calm process working” many wasn’t working fast enough Every hour passed seemed turn tension frustration US electorate Protests counterprotests broke day waiting America seemed poised lose collective shit Contrast state affair 2000 contest George W Bush Al Gore remained limbo five week following Election Day can’t imagine today’s America putting kind delay expert can’t either “Patience character strength society definitely neglected” say Sarah Schnitker PhD associate professor psychology Baylor University “Over past 20 year particular technology advanced fast pace think it’s changed expectation much wait well general idea suffering” Much Schnitker’s research centered patience say many history’s great philosopher Aristotle Thomas Aquinas regarded patience one humanity’s noblest attribute Likewise major Eastern Western religion — Judaism Christianity Islam Buddhism — describe patience fundamental virtue admired cultivated “Patience character strength society definitely neglected” since Industrial Revolution ushered new era speed production consumption patience lost appeal Schnitker say “Our culture quick win solving problem fast” add “If you’re patient there’s misconception you’re kind doormat — patience something think winner having” economic political environmental reason believe America’s disdain patience eventually cost world dearly setting aside concern patience also seems really important come mental health wellbeing Schnitker say “It’s positively associated life satisfaction hope selfesteem regulated behavior it’s negatively associated loneliness depression anxiety” say Patience alleviate pressure advance achieve many u feel urgently patience may replace shallow gratification many u demand — often come depend — stuff buy watch otherwise consume “I think year — pandemic political situation — shown u need develop patience” Schnitker say Fortunately evidencebacked way Understanding patience look like Situations demand patience tend come three type “There’s daily hassle patience” Schnitker say type includes waiting line store waiting web browser load quotidian source delay frustration next type term “hardship patience” refers openended situation like living illness enduring source persistent concern uncertainty Finally there’s “interpersonal patience” type person requires dealing obstreperous child obnoxious coworker difficult person Speaking Elemental day Biden announced winner Schnitker said “The current moment interesting election really involves three type patience It’s waiting outcome maybe it’s dealing relative don’t agree it’s also dealing thought longterm polarization need find unity” say “patience” like word “patient” derived Latin word suffering people posse patience able endure something unpleasant without letting influence emotion behavior Spend time thinking definition begin realize central patience opposite anxiety depression anger negative emotional state well compulsive behavior ill tightly bound inability tolerate person situation could even said current moment’s fixation happiness — finding making last — driven part impatience don’t want wait long next moment joy pleasure bliss impatient day Schnitker say many element contemporary life prioritize speed ease patience endurance “We instant gratification think advertising technology industry push u direction” say Whatever person want — food entertainment information stuff sex money enlightenment — fastest route continually pitched u best route despite evidence contrary Haste urgency example associated stress arousal “When speed everything — feeling go go go — that’s sympathetic nervous activity” say Peter Payne researcher Dartmouth College study meditative movement health benefit practice qigong tai chi sympathetic nervous system activity fine moderation chronic overactivity system associated anxiety depression headache poor sleep disease heart gut immune system Rushing time seems promote kind overactivity many detriment “It’s positively associated life satisfaction hope selfesteem regulated behavior it’s negatively associated loneliness depression anxiety” Impatience may also rob people experience give life meaning Researchers found effort seems essential ingredient satisfaction contentment positive emotion “A lot happiness lie done” say Barbara Fredrickson distinguished professor psychology University North Carolina say expenditure effort contribute sense purpose meaning interconnectedness — source selfesteem positive state message everything fast easy bad Rather it’s fast easy always optimal people lose ability patient may also losing access thing make life satisfying enjoyable also raising risk health problem associated stress cultivate patience people exercise patience muscle stronger muscle become “There lot way practice waiting life really help u build patience” Schnitker say example whenever encounter wait — whether it’s line store sitting traffic — good opportunity practice patience “Not using time reach phone check social news feed — think really help” say point research Temple University found frequent smartphone use associated heightened impatience impulsivity period waiting frustration Schnitker say helpful practice technique known cognitive reappraisal “reframing” basically mean looking something opportunity rather hardship “When people able reframe could considered threat source suffering useful challenge know helps” say “So tell patience good mental health need develop reframe period waiting great opportunity help yourself” say reframing also helpful dealing people get nerve situation entail extended period waiting “So election could tell waiting restore faith system it’s showing care democracy making sure everyone’s vote counts” say interpersonal context reframing could entail changing thought “this person annoying” “being around person opportunity practice patience” could also entail making effort see situation another person’s point view Finally Schnitker say mindfulness training similar form meditation helpful pump awareness thought feeling It’s awareness allows make helpful tweak — habit also appraisal people situation — bolster patience “Right don’t lot cultural narrative help u make sense waiting suffering” say Rediscovering reprioritizing patience may one way create morehelpful narrative — push back much feel wrong world todayTags Nuance Patience Lifestyle Mental Health Health
1,309
How to Survive Life in the NICU
Life in the NICU can be stressful for baby and family When I was pregnant with my son and living on the antenatal unit (for moms with high-risk pregnancies), I was given a tour of the NICU. I went on this tour out of curiosity, knowing my son would not end up there as he was being born via c-section at 37 weeks. I thought the NICU (neonatal intensive care unit) was only for preemies. Not my child. Boy was I wrong. Shortly after my son was born he decided to stop breathing. He was quickly whisked away to the NICU to be assessed by a team of highly trained doctors and nurses. Still recovering from my general anaesthetic, I woke up a few hours later to find out my son was hooked up to the CPAP — a Darth Vader type respiratory device, to help my son breathe. When he was six hours old, I was wheeled on a stretcher to the side of his incubator and was only able to see his toes. No touching allowed. The next morning I began my 14 day vigil, being glued to the side of his incubator, carefully watching the monitors beeping away his vital signs. While the early days were a complete daze, I eventually found my footing and figured out the routine of the NICU. It is this routine, and lessons learned, I want to share with you. It’s okay to be overwhelmed. Everything about the NICU is intimidating. From the masked medical staff, locked doors, hushed voices and incubators holding the smallest babies you’ve ever seen, it is a lot to take in. Even on your last day, you can still feel as though you are in another world. And that’s because you are. The NICU is a unique area in the hospital, caring for the tinniest patients. Ask questions. Repeatedly. You are sleep deprived, emotional and have just been through childbirth. There is no way you can possibly remember all the information being thrown at you. So don’t be afraid to ask a lot of questions and get the medical staff to write down important information for you. Some key questions to ask: What time does the doctor/nurse practitioner do their daily rounds? Is it the same time every day or does it vary? Are you able to talk to this person during their rounds and ask questions specific to the care of your baby? What is the nurse/baby ratio for the nurse caring for your baby? The amount of babies in the nurse’s care will depend on the medical conditions of the babies (very ill babies have a nurse fully dedicated to them). When is shift change for the nurses? This is important to know so you can check in with the new nurse at the beginning of his/her shift and learn if there are any updates in your child’s care plan for the next 12 hours. Where are the breast pump supplies kept? This includes the breast pump, bottles, sterilization equipment and other supplies. Where can you pump? At baby’s bedside (which is ideal as seeing your baby will increase your milk supply)? A quiet room? What should you do with your milk after it’s pumped? Usually there is a consistent place to put your milk, with stickers with your baby’s information. 3. Talk to other parents. I know you just want to curl up in a ball right now and be left alone. But trust me. Talking to other parents helps. A lot. I learned so much about life in the NICU by talking to other moms. This included where to store any food people brought me, where to get free coffee (some larger hospitals have stocked kitchens for parents), what questions to ask the nurse and, most importantly, someone to talk to who knew what I was going through. This is a huge benefit as your friends and family likely don’t understand why you won’t leave your baby’s side. 4. Get outside. Even if it’s just for 15 minutes. You need fresh air, a short walk, and a break from the NICU. This is so important for your mental health. These short breaks will also give you the energy to continue. I tried to take a break every four hours. 5. Sleep. Preferably in a real bed. This was a big mistake I made. I thought I needed to be with my son around the clock. It was exhausting. Even though we had a room at the Ronald McDonald House, I never slept more than four hours at a time, afraid if I was away from my son something would happen. Don’t be like me. Try to get at least six hours uninterrupted sleep each night. And yes, you can give the medical staff your cell phone number and tell them to call you if there is a problem. 6. Breathe. Just breathe. Take it day by day. Don’t start thinking too far in advance. Just be in the moment and know you are doing the best you can for your child. To learn more about patient advocacy visit my website www.learnpatientadvocacy.com.
https://cynthialockrey.medium.com/how-to-survive-life-in-the-nicu-636fbecde267
['Cynthia Lockrey']
2018-07-17 17:59:11.028000+00:00
['Pregnancy', 'Nicu', 'Parenting', 'Health', 'Mental Health']
Title Survive Life NICUContent Life NICU stressful baby family pregnant son living antenatal unit mom highrisk pregnancy given tour NICU went tour curiosity knowing son would end born via csection 37 week thought NICU neonatal intensive care unit preemie child Boy wrong Shortly son born decided stop breathing quickly whisked away NICU assessed team highly trained doctor nurse Still recovering general anaesthetic woke hour later find son hooked CPAP — Darth Vader type respiratory device help son breathe six hour old wheeled stretcher side incubator able see toe touching allowed next morning began 14 day vigil glued side incubator carefully watching monitor beeping away vital sign early day complete daze eventually found footing figured routine NICU routine lesson learned want share It’s okay overwhelmed Everything NICU intimidating masked medical staff locked door hushed voice incubator holding smallest baby you’ve ever seen lot take Even last day still feel though another world that’s NICU unique area hospital caring tinniest patient Ask question Repeatedly sleep deprived emotional childbirth way possibly remember information thrown don’t afraid ask lot question get medical staff write important information key question ask time doctornurse practitioner daily round time every day vary able talk person round ask question specific care baby nursebaby ratio nurse caring baby amount baby nurse’s care depend medical condition baby ill baby nurse fully dedicated shift change nurse important know check new nurse beginning hisher shift learn update child’s care plan next 12 hour breast pump supply kept includes breast pump bottle sterilization equipment supply pump baby’s bedside ideal seeing baby increase milk supply quiet room milk it’s pumped Usually consistent place put milk sticker baby’s information 3 Talk parent know want curl ball right left alone trust Talking parent help lot learned much life NICU talking mom included store food people brought get free coffee larger hospital stocked kitchen parent question ask nurse importantly someone talk knew going huge benefit friend family likely don’t understand won’t leave baby’s side 4 Get outside Even it’s 15 minute need fresh air short walk break NICU important mental health short break also give energy continue tried take break every four hour 5 Sleep Preferably real bed big mistake made thought needed son around clock exhausting Even though room Ronald McDonald House never slept four hour time afraid away son something would happen Don’t like Try get least six hour uninterrupted sleep night yes give medical staff cell phone number tell call problem 6 Breathe breathe Take day day Don’t start thinking far advance moment know best child learn patient advocacy visit website wwwlearnpatientadvocacycomTags Pregnancy Nicu Parenting Health Mental Health
1,310
A Self-Editing Checklist From an Editor-in-Chief
In newsrooms, editors often talk about the text that writers file using a hygiene metaphor: “Clean” copy is grammatically correct, solidly written, and generally needs only light editing to be publishable. If you’re working with an editor, filing clean copy will make them love you — and want to work with you more. If you’re publishing directly, it’s even more important that your copy is spotless! The best way to make sure you file (or self-publish) the crispest, cleanest copy possible is to create your own process of self-editing — catching errors, fact-checking, and smoothing the language. My favorite way to self-edit is to examine my article with a series of different “lenses.” Think of the machine an optometrist uses to check your vision: She’ll swap in different lenses for you to look through, one by one. Similarly, you can look at your writing with “lens” after “lens.” You might first read it with a data-accuracy lens, for example, and then reread it with a lens on how the quotes flow. If you know you have a tendency to overuse the passive voice, read it over with a passive voice lens, making sentences more active as you go through. (Personally, I always make sure to read with a wordiness lens — deleting needless adjectives and clauses to make every sentence simpler and more succinct.) The Self-editing checklist Think of these questions each as a “lens” to look at the story through. Not every lens will apply to every story. And make sure to create lenses that account for your own writing habits and tics. Did I tell the right story? What is my story focus/theory/angle? Is it clearly and succinctly stated at the top of the story? So what? Why should readers care about this story? Have I told it the right way? Is the story clear? Compelling? Engaging? Is this the best “lede” for the story? Why? (Your lede, the first lines of a story, should essentially tell the story, either in anecdotal or straight form.) Does the “nut graf” (the paragraph explaining what readers are in store for) clearly and directly lay out the story’s focus/theory/angle, tell the who/what/where/when/why/how, and show the reader why they should care about it? the story’s focus/theory/angle, tell the who/what/where/when/why/how, and show the reader why they should care about it? Do the quotes help tell the story? Are they vivid and colorful, and do they express emotions as necessary? Do they tell dull information that would be better paraphrased? Are they presented well, with clear transitions and setups? Does every scene, detail, and anecdote function to help the reader understand the story? (No matter how fascinating the scene is or how eloquent the quote, if the answer is no, cut it.) Would more details or visual descriptions help bring the story to life? Does the piece provide adequate context? Have you included history, previous news, supporting statistics, data, explanations? Are expert voices included where necessary, and are their comments useful in telling the story? Is the last line or “kicker” structured for maximum impact? Does it relate back to the lede, or the story focus, or does it look forward? Is everything true, and are all the necessary perspectives included? Develop your own system for “skeptical editing”: Double-check all names, facts, dates, spellings, quotes. Are numbers, statistics, and data clear and accurate? Is additional data needed to substantiate the story? Weed out assumptions and vague statements. Make sure terms are explained, acronyms spelled out on first use. Check the background of every source or person cited in the story and for each ask: Are they credible? What is their agenda? What biases do they bring? Whose perspective is missing from the story? How might you include that missing perspective? What are the factual holes in the story? Instead of “writing around” them, do the reporting or research to fill them. Are the mechanics correct? Check spelling, punctuation, and style. Check that the story is the length it needs to be. Check for passive voice, gerunds, wordiness, clichés, or whatever your grammatical crutches are. Get rid of fancy words when simple ones will do. Does the story do enough “hand-holding” for the reader? Is its logic easy to follow? Are the transitions clear and does the story flow sensibly? Does it sound okay? (Or more to the point, does the story sound awesome?) Read your copy aloud to see if the story flows. Listen to the language. Make sure there’s a mix of shorter and longer sentences and that each sentence is clear and straightforward. Is the story due now?
https://medium.com/creators-hub/a-self-editing-checklist-from-an-editor-in-cheif-e55abb475e61
['Indrani Sen']
2020-11-02 10:42:36.997000+00:00
['Writing', 'Creativity', 'Editing', 'Tips For Writers', 'Resources']
Title SelfEditing Checklist EditorinChiefContent newsroom editor often talk text writer file using hygiene metaphor “Clean” copy grammatically correct solidly written generally need light editing publishable you’re working editor filing clean copy make love — want work you’re publishing directly it’s even important copy spotless best way make sure file selfpublish crispest cleanest copy possible create process selfediting — catching error factchecking smoothing language favorite way selfedit examine article series different “lenses” Think machine optometrist us check vision She’ll swap different lens look one one Similarly look writing “lens” “lens” might first read dataaccuracy lens example reread lens quote flow know tendency overuse passive voice read passive voice lens making sentence active go Personally always make sure read wordiness lens — deleting needle adjective clause make every sentence simpler succinct Selfediting checklist Think question “lens” look story every lens apply every story make sure create lens account writing habit tic tell right story story focustheoryangle clearly succinctly stated top story reader care story told right way story clear Compelling Engaging best “lede” story lede first line story essentially tell story either anecdotal straight form “nut graf” paragraph explaining reader store clearly directly lay story’s focustheoryangle tell whowhatwherewhenwhyhow show reader care story’s focustheoryangle tell whowhatwherewhenwhyhow show reader care quote help tell story vivid colorful express emotion necessary tell dull information would better paraphrased presented well clear transition setup every scene detail anecdote function help reader understand story matter fascinating scene eloquent quote answer cut Would detail visual description help bring story life piece provide adequate context included history previous news supporting statistic data explanation expert voice included necessary comment useful telling story last line “kicker” structured maximum impact relate back lede story focus look forward everything true necessary perspective included Develop system “skeptical editing” Doublecheck name fact date spelling quote number statistic data clear accurate additional data needed substantiate story Weed assumption vague statement Make sure term explained acronym spelled first use Check background every source person cited story ask credible agenda bias bring Whose perspective missing story might include missing perspective factual hole story Instead “writing around” reporting research fill mechanic correct Check spelling punctuation style Check story length need Check passive voice gerund wordiness clichés whatever grammatical crutch Get rid fancy word simple one story enough “handholding” reader logic easy follow transition clear story flow sensibly sound okay point story sound awesome Read copy aloud see story flow Listen language Make sure there’s mix shorter longer sentence sentence clear straightforward story due nowTags Writing Creativity Editing Tips Writers Resources
1,311
Writing is My Bridge
Writing is My Bridge How I use writing to balance my mind. @oplattner unsplash.com When I was in high school, I wanted to be a writer. I didn’t know why. I lost the battle of choosing college majors with my parents because I just couldn’t explain what I intuitively knew: Writing was my salvation. We often ask people we meet: “Are you a creative person?”, “Are you an analytical person?” We don’t realize that so many of us are both. I grew up in the Asian culture of overachievement in science and mathematics. That means the analytical side of me flourished while the creative side suppressed. Creativity is frowned upon by strict Asian parents as the gateway to disobedience. It wasn’t until I quit my Wall Street technology job that I realized what was lacking in my life. Up until then, my life had been dedicated solely to analytical pursuits that I forgot to take care of my emotional needs and creative needs. The cost of that was a couple of years of anxiety and depression. It took years of reevaluating myself, my connections and my life to really unleash the emotional and the creative side of myself again. The motivation was the birth of my son. Following my son’s amazing development from infancy to toddlerhood allowed me to peek into my own childhood. It reminded me of the humanity, the creativity and the sensitive self that existed in me from the beginning. For once, to be a better mother to my son, I had to take a leap of faith. I had to come back completely to the essence of myself. I had to make my own life fulfilling by balancing out all my needs: analytical needs, emotional needs, and creative needs. Making a career change is never easy. For me, the trigger was the deadening feeling of working on a piece of data analysis code and not loving it anymore. It was hard to accept when things are simply not enough. I felt guilty. I had worked very hard at what I “supposedly” did best. I loved it for many years. I was given great opportunities. But, I just wasn’t excited about it all anymore. I felt like the wife stuck in a dead marriage with the guy who all the neighborhood ladies wanted as a husband. The one thing about motherhood is that: it’s fast, it’s furious and it waits for no one. I had no time, energy nor the strength to fight with myself about the decisions I made. I just did it all. I changed loads upon loads of diapers. I reveled in my “free time” as my infant son stared up at me from his baby blanket. I laminated printouts for his activities. I read parenting books. I set up playdates. I learned to discipline him. It felt like a huge tidal wave. I surfed it without having any knowledge of how to do it from the start. Then, one night the truth hits me like a ton of bricks. What would my ideal job be now that I don’t have a career safety net? I couldn’t answer the question. So, I started to write. I wrote about parenting issues. I journaled. I researched. Then, I wrote some more. Pretty soon, I started a blog. Then, I learned all about SEO, Wordpress, Pinterest, Instagram, Twitter, and Facebook. I learned about taking engaging photographs. I learned to create memes for my audience. I learned to skip Photoshop and go directly to Canva. I learned to check my grammar. I’m still learning every day. It’s exhilarating to get years of materials out. Through the process, I slowly opened up my creative funnel. The thing about the creative funnel is that once you turn it on, it’s hard to turn it off. The other day, I came across a piece of data visualization while researching freelance writing jobs. It was mesmerizing to me. I wanted to critique the analysis and get my hands on that dataset. There you go, my friends! For me, the only way back to being a balanced individual is to write my way back to my emotional, creative and analytical self. If writing isn’t a bridge, I don’t know what is. Writing ties together my left brain and my right brain. — Picture from Pexels.com It’s a bridge that connects my left brain and my right brain. It’s a bridge that opens up the possibility of having a career that is not limited to one profession. It leads me to my new path of pursuing many different projects across a variety of fields. Writing brings everything together. — original Do you want to hear about my latest projects? Ask me after I get through analyzing my first dataset in three years.
https://medium.com/jun-wu-blog/writing-is-my-bridge-d37dbcf9cb1d
['Jun Wu']
2019-11-28 00:14:28.123000+00:00
['Creativity', 'Writing Tips', 'Writing', 'Blogging', 'Writing On Medium']
Title Writing BridgeContent Writing Bridge use writing balance mind oplattner unsplashcom high school wanted writer didn’t know lost battle choosing college major parent couldn’t explain intuitively knew Writing salvation often ask people meet “Are creative person” “Are analytical person” don’t realize many u grew Asian culture overachievement science mathematics mean analytical side flourished creative side suppressed Creativity frowned upon strict Asian parent gateway disobedience wasn’t quit Wall Street technology job realized lacking life life dedicated solely analytical pursuit forgot take care emotional need creative need cost couple year anxiety depression took year reevaluating connection life really unleash emotional creative side motivation birth son Following son’s amazing development infancy toddlerhood allowed peek childhood reminded humanity creativity sensitive self existed beginning better mother son take leap faith come back completely essence make life fulfilling balancing need analytical need emotional need creative need Making career change never easy trigger deadening feeling working piece data analysis code loving anymore hard accept thing simply enough felt guilty worked hard “supposedly” best loved many year given great opportunity wasn’t excited anymore felt like wife stuck dead marriage guy neighborhood lady wanted husband one thing motherhood it’s fast it’s furious wait one time energy strength fight decision made changed load upon load diaper reveled “free time” infant son stared baby blanket laminated printout activity read parenting book set playdates learned discipline felt like huge tidal wave surfed without knowledge start one night truth hit like ton brick would ideal job don’t career safety net couldn’t answer question started write wrote parenting issue journaled researched wrote Pretty soon started blog learned SEO Wordpress Pinterest Instagram Twitter Facebook learned taking engaging photograph learned create meme audience learned skip Photoshop go directly Canva learned check grammar I’m still learning every day It’s exhilarating get year material process slowly opened creative funnel thing creative funnel turn it’s hard turn day came across piece data visualization researching freelance writing job mesmerizing wanted critique analysis get hand dataset go friend way back balanced individual write way back emotional creative analytical self writing isn’t bridge don’t know Writing tie together left brain right brain — Picture Pexelscom It’s bridge connects left brain right brain It’s bridge open possibility career limited one profession lead new path pursuing many different project across variety field Writing brings everything together — original want hear latest project Ask get analyzing first dataset three yearsTags Creativity Writing Tips Writing Blogging Writing Medium
1,312
3 Books to Improve Your Coding Skills
Code Complete by Steve McConnell When I finished this book, I was surprised by why nobody had explained such basic but crucial things to me until now. You might be asking, “What are they?” Let me bring to you a few examples. For instance, declaring and initializing a variable only in a place where it is going to be used. There is no need to declare a variable and only assign it somewhere in the code. The variable should have the least visible scope possible. The benefit of this is that code readability improves a lot and your teammates will be thankful for that. Another example is how to use if conditions efficiently. They are simple, but they can reduce code readability dramatically. Check the following example: The example above has too many nested if conditions, making it hard to follow and test the logic. While learning programming, we focus on how the if condition works and when to use it. But nobody tells us how it could be misused. The book gives some advice for this case: Avoid too many nested blocks, consider splitting the code into functions, and check if the switch..case statement is suitable (if the language supports it). Those and many other examples are covered in this book.
https://medium.com/better-programming/3-books-to-improve-your-coding-skills-afa67621192
['Dmytro Khmelenko']
2020-10-01 17:35:54.293000+00:00
['Professional Growth', 'Software Development', 'Books', 'Software Engineering', 'Programming']
Title 3 Books Improve Coding SkillsContent Code Complete Steve McConnell finished book surprised nobody explained basic crucial thing might asking “What they” Let bring example instance declaring initializing variable place going used need declare variable assign somewhere code variable least visible scope possible benefit code readability improves lot teammate thankful Another example use condition efficiently simple reduce code readability dramatically Check following example example many nested condition making hard follow test logic learning programming focus condition work use nobody tell u could misused book give advice case Avoid many nested block consider splitting code function check switchcase statement suitable language support many example covered bookTags Professional Growth Software Development Books Software Engineering Programming
1,313
What Miley Cyrus Did To Win Over a Booing Crowd
To be honest, I was just as offended as any Chris Cornell fan when I saw that Miley Cyrus was taking on what may be the most technically difficult song he sang — Say Hello To Heaven. Probably like most people in the audience, who even had a clue who Miley is, I had already made up my mind about her covering a classic Temple of the Dog song. It was going to suck. Thinking that she’d butcher the song the same way as she did with Nirvana's Smells Like Teen Spirit years earlier, I clicked on the YT video that captured her performance at a tribute show in honor of the late Chris Cornell. Judging from the other related videos of performances that seem to take place at the same show, I can only assume that she was the only woman and the one that reduced the average age of the performers by at least 20 years. It’s safe to say that she stood out. You know when you look at a list and it says “What doesn’t fit?”. I know I was the one that didn’t fit. — Miley Cyrus Most of the remaining giants of the grunge era were there to pay their respect and perform with members of Temple of the Dog, Soundgarden, and Audioslave. I could not for the life of me figure out why the hell a country-pop star would attempt (or even want to) join that gang. A few seconds into the video I realized just how far off she was from her comfort zone when she entered the stage. Hearing her being called up felt like a bit of a joke, and that’s probably why she was met with confusing chuckles, impulsive booing, and half-ass claps. She walked up firmly while awkwardly mumbling something about blowing the surprise due to her mic going live before she entered the stage. Miley inquired “Shall we do this?” as if she wanted to get this over with as soon as possible. Maybe she just wanted to start singing asap to counteract the initial response she got. She knew that that the audience wasn’t going to welcome her, and she had already accepted it in spite of the discomfort. The musicians synced and started playing the song. I had a hard time recognizing her with an outfit that covered more than it revealed. Ironically, that, her hairstyle and her awkwardness gave her a grungy presence. She focused on the band and the music purposefully and looked away from the prejudiced looks of the audience. Once she opened her mouth to sing, everybody just shut up. But jaws were left dropped. Her singing was impeccable in spite of not hitting those extra high notes that only Chris could pull off in the 90s. A lot of my covers I kind of customize for myself, and that one was just in its original form and I really did all his little ad-libs and runs, and it was just a really intense experience as a performer. — Miley Cyrus It was also apparent, that she had broadened her lower vocal registry, enabling her to effortlessly hit unusual depth, unlike many mainstream female vocalists. Compared to the other vocalists that night, she was technically the best and damn close to the singing abilities of Chris. Not only did she look different, but she also moved and sounded differently. Gone was the crazy choreography, which had been replaced with intuitive movements to the music that would power her vocals. She had reinvented herself, and it couldn’t get more grungy than that. The rawness and the freshness of her performance were far more persuasive and it made the imperfection perfect. She knew what the audience would value the most. That was one of the moments where you realize it’s not about you. It’s about that audience, and when you’re in a room with what is unifying people is their love for Chris Cornell and his talent, it really changed the way that I was performing. — Miley Cyrus In an interview, where Howard Stern addresses this particular performance and asks Miley about the experience, she stated that the performance sounded nothing like it did at soundcheck because she was so moved by the amount of love that was there for Chris Cornell. It really didn’t come from me, that performance. — Miley Cyrus In the end, Miley had struck an emotional chord with everyone who saw that performance. She showed with her performance that she could sing his pain in a way that would make the fans relive his past performances one last time. And that’s what tributes are for.
https://medium.com/illumination/what-miley-cyrus-did-to-win-over-a-booing-crowd-28e2a702954b
['Sara Kiani']
2020-12-28 14:39:37.294000+00:00
['Music', 'Leadership', 'Change', 'Marketing', 'Culture']
Title Miley Cyrus Win Booing CrowdContent honest offended Chris Cornell fan saw Miley Cyrus taking may technically difficult song sang — Say Hello Heaven Probably like people audience even clue Miley already made mind covering classic Temple Dog song going suck Thinking she’d butcher song way Nirvanas Smells Like Teen Spirit year earlier clicked YT video captured performance tribute show honor late Chris Cornell Judging related video performance seem take place show assume woman one reduced average age performer least 20 year It’s safe say stood know look list say “What doesn’t fit” know one didn’t fit — Miley Cyrus remaining giant grunge era pay respect perform member Temple Dog Soundgarden Audioslave could life figure hell countrypop star would attempt even want join gang second video realized far comfort zone entered stage Hearing called felt like bit joke that’s probably met confusing chuckle impulsive booing halfass clap walked firmly awkwardly mumbling something blowing surprise due mic going live entered stage Miley inquired “Shall this” wanted get soon possible Maybe wanted start singing asap counteract initial response got knew audience wasn’t going welcome already accepted spite discomfort musician synced started playing song hard time recognizing outfit covered revealed Ironically hairstyle awkwardness gave grungy presence focused band music purposefully looked away prejudiced look audience opened mouth sing everybody shut jaw left dropped singing impeccable spite hitting extra high note Chris could pull 90 lot cover kind customize one original form really little adlibs run really intense experience performer — Miley Cyrus also apparent broadened lower vocal registry enabling effortlessly hit unusual depth unlike many mainstream female vocalist Compared vocalist night technically best damn close singing ability Chris look different also moved sounded differently Gone crazy choreography replaced intuitive movement music would power vocal reinvented couldn’t get grungy rawness freshness performance far persuasive made imperfection perfect knew audience would value one moment realize it’s It’s audience you’re room unifying people love Chris Cornell talent really changed way performing — Miley Cyrus interview Howard Stern address particular performance asks Miley experience stated performance sounded nothing like soundcheck moved amount love Chris Cornell really didn’t come performance — Miley Cyrus end Miley struck emotional chord everyone saw performance showed performance could sing pain way would make fan relive past performance one last time that’s tribute forTags Music Leadership Change Marketing Culture
1,314
Freelance Writer? How to Know When It’s Time to Fire a Client
1.) If a freelance writing client tries to tell you how to run your business, they might not be a good fit for your writing services. Some clients think you should adjust your rates based on their needs. This isn’t a smart way to run your writing business. Your rates are what they are based upon your expertise, writing talent, and demand for your services. If a client doesn’t want to pay your rates or thinks you should adjust your business’ modus operandi to suit the state of their business, it might be time to cut them loose as a client. 2.) If one of your freelance writing clients thinks it’s okay to talk down to you or berate you in discussions about their content creation needs/requirements, this is a clear sign it’s time to fire them as a client. You’re running a business. How you run your business isn’t up for discussion or debate. If a client doesn’t respect you enough to speak to you as a fellow businessperson, they’re not worthy of having access to your writing services. There are plenty of other business owners around the world who would be thrilled to have an experienced freelance writer at their disposal. Terminate your business relationship with a client who belittles you and focus your efforts on replacing them with more profitable writing clients. 3.) If a freelance writing client thinks you should drop all your other work just to attend to their last-minute request for content, and then wants you to offer a reduced rate because they’re a regular client, you might want to think twice about whether they’re adding to your business’ bottom line. You started your writing business to turn a profit, not to be held hostage to the whims of penny-pinching clients who think you’re at their beck-and-call. Focus on connecting with clients who appreciate your talents, are willing to pay top rates to have access to your services, and who understand they need to pay extra if they want a last-minute piece of content. You might even want to consider offering access to your services on a retainer business model to ensure your regular clients have easy access to your services, yet still allowing you to turn a profit.
https://medium.com/publishous/freelance-writer-how-to-know-when-its-time-to-fire-a-client-738ad9d5ec88
['George J. Ziogas']
2020-08-31 09:54:12.940000+00:00
['Entrepreneurship', 'Business', 'Writing', 'Work', 'Freelancing']
Title Freelance Writer Know It’s Time Fire ClientContent 1 freelance writing client try tell run business might good fit writing service client think adjust rate based need isn’t smart way run writing business rate based upon expertise writing talent demand service client doesn’t want pay rate think adjust business’ modus operandi suit state business might time cut loose client 2 one freelance writing client think it’s okay talk berate discussion content creation needsrequirements clear sign it’s time fire client You’re running business run business isn’t discussion debate client doesn’t respect enough speak fellow businessperson they’re worthy access writing service plenty business owner around world would thrilled experienced freelance writer disposal Terminate business relationship client belittles focus effort replacing profitable writing client 3 freelance writing client think drop work attend lastminute request content want offer reduced rate they’re regular client might want think twice whether they’re adding business’ bottom line started writing business turn profit held hostage whim pennypinching client think you’re beckandcall Focus connecting client appreciate talent willing pay top rate access service understand need pay extra want lastminute piece content might even want consider offering access service retainer business model ensure regular client easy access service yet still allowing turn profitTags Entrepreneurship Business Writing Work Freelancing
1,315
State of Managed Kubernetes 2020
EKS vs. AKS vs. GKE from a Developer’s Perspective In February of 2019, just a few months after AWS announced the GA release of EKS to join Azure’s AKS and GCP’s GKE, I wrote up a comparison of these services as part of the first edition of the open-source Kubernetes book. Since then, Kubernetes adoption exploded, and the managed Kubernetes offering from all the major cloud providers became standardized. According to Cloud Native Computing Foundation (CNCF)’s most recent survey released in March 2020, Kubernetes usage in production jumped from 58% to 78% with managed Kubernetes services from AWS and GCP leading the pack. Container Management Usage from CNCF 2019 Survey From my personal experience working with Kubernetes, the most notable difference from 2019 to now has been the feature parity across the clouds. The huge lead that GKE once enjoyed has been largely reduced, and in some cases, surpassed by other providers. Since there are plenty of resources comparing each service offerings and price differences (e.g. learnk8s.io, stackrox.com, parkmycloud.com), I’m going to focus on personal experiences using these services in development and production as a developer in this article. Amazon EKS Considering AWS’s dominance on the cloud, it’s not surprising to see huge usage numbers for EKS and kops. The obvious advantage for existing AWS customers is to move workloads from EC2 or ECS to EKS with minimal modification to other services. However, in terms of managed Kubernetes features, I generally found EKS to lag GKE and AKS. There is a public roadmap on Github for all AWS container services (ECS, ECR, Fargate, and EKS), but the general impression I get from AWS is a push for more serverless offerings (e.g. Lambda, Fargate) more so than container usage. That isn’t to say that support from Amazon hasn’t been amazing nor do I think EKS is not Amazon’s priority. In fact, EKS provides a financially-backed SLA to encourage enterprise usage (Update Jun 15, 2020 — as for 5/19/20, AKS also provides financially backed SLA). With EKS making RBAC and Pod Security Policies mandatory, it beats out GKE and AKS in terms of base-level security. Finally, now that GKE is also charging $0.10/hour per master node management, the pricing differences between the two clouds are even more negligible with reserved instances and other enterprise agreements in place. Like many other AWS services, EKS provides a large degree of flexibility in terms of configuring your cluster. On the other hand, this flexibility also means the management burden falls on the developer. For example, EKS provides support for Calico CNI for network policies but requires the users to install and upgrade them manually. Kubernetes logs can be exported to CloudWatch, but it’s off by default and leaves it up to the developer to deploy a logging agent to collect application logs. Finally, upgrades are also user-initiated with the responsibility of updating master service components (e.g. CoreDNS, kube-proxy, etc) falling on the developer as well. Deploying Worker Nodes is a Separate Step than Provisioning a Cluster — Image from AWS The most frustrating part with EKS was the difficulty in creating a cluster for experimentation. In production, most of the concerns above are solved with Terraform or CloudFormation. But when I wanted to simply create a small cluster to try out new things, using the CLI or the GUI often took a while to provision, only to realize that I missed a setting or IAM roles later in the process. I found eksctl to be the most reliable method of creating a production-ready EKS cluster until we perfected the Terraform configs. The eksworkshop website also provides excellent guides for common cluster setup operations such as deploying the Kubernetes dashboard, standing up an EFK stack for logging, as well as integrating with other AWS services like X-Ray and AppMesh. Overall, until EKS introduced managed node groups with Kubernetes 1.14 and above, I found the management burden on EKS fairly high, especially in the beginning. AWS is quickly catching up to the competition, but EKS is still not the best place to start for new users. Azure AKS Surprisingly, AKS surpassed GKE in terms of providing support for newer Kubernetes versions (preview for 1.18 on AKS vs 1.17 on GKE as of June 2020). Also, AKS remains as the only service to not charge for control plane usage. Like EKS, master node upgrades must be initiated by the developer, but EKS takes care of underlying system upgrades. Personally, I have not used AKS in production, so I can’t comment on technical or operational challenges. However, as of 5/19/2020, AKS not only provides a financially backed SLA (99.95% with Availability Zones), but also made it an optional feature to allow for unlimited free clusters (Updated Jun 15, 2020). Still, Azure’s continued investment in Kubernetes is apparent through contributions to Helm (Microsoft acquired Deis who created Helm) as it graduated from CNCF. As Azure continues to close the gap with AWS, I expect AKS usage to grow with increasing support to address community concerns. Packaging Applications — CNCF Survey Google Cloud GKE While Google’s decision to begin charging for control plane usage for non-Anthos clusters stirred some frustrations among the developer community, GKE undoubtedly remains the king of managed Kubernetes in terms of features, support, and ease of use. For new users unfamiliar with Kubernetes, the GUI experience of creating a cluster and default logging and monitoring integration via Stackdriver makes it easy to get started. GKE is also the only service to provide a completely automated master and node upgrade process. With the introduction of cluster maintenance windows, node upgrades can occur in a controlled environment with minimal overhead. Node auto-repair support also reduces management burdens on the developers. Similar to many GCP products, GKE’s excellent managed environment does mean that customization may be difficult or sometimes impossible. For example, GKE installs kube-dns by default, and to use CoreDNS, you need to hack around the kube-dns settings. Likewise, if Stackdriver does not suit your needs for logging and monitoring, then you’ll have to uninstall those agents and manage other logging agents yourself. Still, my experiences with GKE have been generally pleasant, and even considering the price increase, I still recommend GKE over EKS and AKS. The more exciting part with GKE is the growing number of services built on top such as managed Istio and Cloud Run. Managed service mesh and a serverless environment for containers will continue to lower the bar for migration to cloud and microservices architecture. While GCP lags AWS and Azure in terms of overall cloud market share, it still holds its lead for Kubernetes in 2020. Google Cloud Service Platform — GCP Blog Resources
https://medium.com/swlh/state-of-managed-kubernetes-2020-4be006643360
['Yitaek Hwang']
2020-06-16 14:57:37.830000+00:00
['Kubernetes', 'Azure', 'Google Cloud Platform', 'AWS', 'Gke']
Title State Managed Kubernetes 2020Content EKS v AKS v GKE Developer’s Perspective February 2019 month AWS announced GA release EKS join Azure’s AKS GCP’s GKE wrote comparison service part first edition opensource Kubernetes book Since Kubernetes adoption exploded managed Kubernetes offering major cloud provider became standardized According Cloud Native Computing Foundation CNCF’s recent survey released March 2020 Kubernetes usage production jumped 58 78 managed Kubernetes service AWS GCP leading pack Container Management Usage CNCF 2019 Survey personal experience working Kubernetes notable difference 2019 feature parity across cloud huge lead GKE enjoyed largely reduced case surpassed provider Since plenty resource comparing service offering price difference eg learnk8sio stackroxcom parkmycloudcom I’m going focus personal experience using service development production developer article Amazon EKS Considering AWS’s dominance cloud it’s surprising see huge usage number EKS kops obvious advantage existing AWS customer move workload EC2 ECS EKS minimal modification service However term managed Kubernetes feature generally found EKS lag GKE AKS public roadmap Github AWS container service ECS ECR Fargate EKS general impression get AWS push serverless offering eg Lambda Fargate container usage isn’t say support Amazon hasn’t amazing think EKS Amazon’s priority fact EKS provides financiallybacked SLA encourage enterprise usage Update Jun 15 2020 — 51920 AKS also provides financially backed SLA EKS making RBAC Pod Security Policies mandatory beat GKE AKS term baselevel security Finally GKE also charging 010hour per master node management pricing difference two cloud even negligible reserved instance enterprise agreement place Like many AWS service EKS provides large degree flexibility term configuring cluster hand flexibility also mean management burden fall developer example EKS provides support Calico CNI network policy requires user install upgrade manually Kubernetes log exported CloudWatch it’s default leaf developer deploy logging agent collect application log Finally upgrade also userinitiated responsibility updating master service component eg CoreDNS kubeproxy etc falling developer well Deploying Worker Nodes Separate Step Provisioning Cluster — Image AWS frustrating part EKS difficulty creating cluster experimentation production concern solved Terraform CloudFormation wanted simply create small cluster try new thing using CLI GUI often took provision realize missed setting IAM role later process found eksctl reliable method creating productionready EKS cluster perfected Terraform configs eksworkshop website also provides excellent guide common cluster setup operation deploying Kubernetes dashboard standing EFK stack logging well integrating AWS service like XRay AppMesh Overall EKS introduced managed node group Kubernetes 114 found management burden EKS fairly high especially beginning AWS quickly catching competition EKS still best place start new user Azure AKS Surprisingly AKS surpassed GKE term providing support newer Kubernetes version preview 118 AKS v 117 GKE June 2020 Also AKS remains service charge control plane usage Like EKS master node upgrade must initiated developer EKS take care underlying system upgrade Personally used AKS production can’t comment technical operational challenge However 5192020 AKS provides financially backed SLA 9995 Availability Zones also made optional feature allow unlimited free cluster Updated Jun 15 2020 Still Azure’s continued investment Kubernetes apparent contribution Helm Microsoft acquired Deis created Helm graduated CNCF Azure continues close gap AWS expect AKS usage grow increasing support address community concern Packaging Applications — CNCF Survey Google Cloud GKE Google’s decision begin charging control plane usage nonAnthos cluster stirred frustration among developer community GKE undoubtedly remains king managed Kubernetes term feature support ease use new user unfamiliar Kubernetes GUI experience creating cluster default logging monitoring integration via Stackdriver make easy get started GKE also service provide completely automated master node upgrade process introduction cluster maintenance window node upgrade occur controlled environment minimal overhead Node autorepair support also reduces management burden developer Similar many GCP product GKE’s excellent managed environment mean customization may difficult sometimes impossible example GKE installs kubedns default use CoreDNS need hack around kubedns setting Likewise Stackdriver suit need logging monitoring you’ll uninstall agent manage logging agent Still experience GKE generally pleasant even considering price increase still recommend GKE EKS AKS exciting part GKE growing number service built top managed Istio Cloud Run Managed service mesh serverless environment container continue lower bar migration cloud microservices architecture GCP lag AWS Azure term overall cloud market share still hold lead Kubernetes 2020 Google Cloud Service Platform — GCP Blog ResourcesTags Kubernetes Azure Google Cloud Platform AWS Gke
1,316
🌟Introducing Dash Cytoscape🌟
Now you can create beautiful and powerful network mapping applications entirely in Python, no JavaScript required! Dash Cytoscape introduces the latest additions to our ever-growing family of Dash components. Built on top of Cytoscape.js, the Dash Cytoscape library brings Cytoscape’s capabilities into the Python ecosystem. Best of all, it’s open-sourced under an MIT license and available today on PyPI. Simply run pip install dash-cytoscape to get started! You can also find the complete source code in the Github repository, and view the documentation in the Dash Cytoscape User Guide. In this post, we will: Provide some background on the Cytoscape project. Show you how Dash’s declarative layout, elements and styling help you build an intuitive and intelligent application. Explain how Dash callbacks power your application’s interactivity. Introduce you to the customizable styling available with Dash Cytoscape, including an online style editor that you can use as an interactive playground for your style and layout ideas. Illustrate how to visualize large social networks using Dash Cytoscape. Share our vision of integrating with other Python bioinformatics and computational biology tools. Standing on the shoulders of giants This project would not be possible without the amazing work done by the Cytoscape Consortium, an alliance of universities and industry experts working to make network visualization accessible for everyone. The Cytoscape project has been available for some time both as Java software and as a JavaScript API; they are maintained in their Github organization. The library can also be used in React through the recently released react-cytoscapejs library, which allows the creation of Cytoscape components that can be easily integrated in your React projects. Dash Cytoscape extends the latter by offering a Pythonic, callbacks-ready, declarative interface that is ready to be integrated in your existing Dash projects, or used as a standalone component to interactively display your graphs. A familiar and declarative interface Powerful built-in layouts Picking the right layout for your graph is essential to helping viewers understand your data. This highly customizable feature is now fully available in Dash, and can be easily specified using a dictionary. The original Cytoscape.js includes many great layouts for displaying your graph in the way it should be viewed. You can choose to display your nodes in a grid, in a circle, as a tree, or using physics simulations. In fact, you can even choose the exact number of rows or columns for your grid, the radius of your circle, or the temperature and cooling factor of your simulation. For example, to display your graph with a fixed grid of 25 rows, you can simply declare: dash_cytoscape.Cytoscape( id='cytoscape', elements=[...], layout={ 'name': 'grid', 'rows': 25 } ) Find the full example here. Intuitive and clear element declaration Creating nodes with Dash Cytoscape is straightforward: You make a dictionary in which you specify the data associated with the node (i.e., the ID and the display label of your node), and, optionally, the default position. To add an edge between two nodes, you give the ID of the source node and the target node, and specify how you want to label the nodes. Group all elements (nodes and edges) inside a list, and you are ready to go! In a nutshell, here’s how you would create a basic graph with two nodes: dash_cytoscape.Cytoscape( id='cytoscape', layout={'name': 'preset'}, elements=[ {'data': {'id': 'one', 'label': 'Node 1'}, 'position': {'x': 50, 'y': 50}}, {'data': {'id': 'two', 'label': 'Node 2'}, 'position': {'x': 200, 'y': 200}}, {'data': {'source': 'one', 'target': 'two', 'label': 'Node 1 to 2'}} ] ) If you already have an adjacency list, you can easily format the data to be accepted by Cytoscape, and display in your browser with about 70 lines of code: Displaying over 8000 edges and their associated nodes with a concentric layout. This uses the Stanford Google+ Dataset. Beautiful and customizable styling Cytoscape provides a range of styling options through a familiar CSS-like interface. You get to specify the exact color, pixel size and opacity of your elements. You can choose the shape of your nodes from over 20 options, including circular, triangular, and rectangular, as well as non-traditional content for your nodes (e.g. displaying an image by adding a URL, or adding a pie chart inside a circular node). The edges can be curved or straight, and it is even possible to add arrows at the middle or end-point. To add a style to your stylesheet, you simply need to specify which group of elements you want to modify with a selector, and input the properties you want to modify as keys. For example, if you want nodes 15 pixels wide by 15 pixels high, styled opaque with a custom gray color, you add the following dictionary to your stylesheet: { 'selector': 'node', 'style': { 'opacity': 0.9, 'height': 15, 'width': 15, 'background-color': '#222222' } } The selector can be a type of element (i.e., a node or edge) or be a certain class (which you can specify). It can also have a certain ID or match certain conditions (e.g., node height is over a certain threshold). Using the online style editor In order to help the community get acquainted with the library, we created an online style editor application that lets you interactively modify the stylesheet and layout of sample graphs. This tool will help you learn how to use the style properties and quickly prototype new designs. Best of all, it displays the stylesheet in a JSON format so that you can simply copy and paste it into your Cytoscape app! Try it out here. Create your on style and save the JSON. The source code is in usage-advanced.py. Familiar Dash callbacks Dash Callbacks are used to make your Dash apps interactive. They are fired whenever the input you define is modified, such as when the user clicks a button or drags a slider inside your UI. The callback functions are computed on the server side, which enables the use of optimized and heavy-duty libraries such as Scipy and Numpy. Use Dash callbacks with dash-cytoscape to update the underlying elements , the layout , or the stylesheet of the graph. For more, see our documentation chapter on callbacks. Additionally, you can use a collection of user-interaction events as an input to your callbacks. They are triggered whenever the user interacts with the graph itself; in other words, when they hover, tap, or select an element or a group of elements. You can choose to input the entire JSON description of the element object (including its connected edges, its parents, its children, and the complete stylesheet), or only the data contained withing the object. To see what is being output, you can assign the following simple callback to your graph: @app.callback(Output('html-div-output', 'children'), [Input('cytoscape', 'tapNodeData')]) def displayTapNodeData(data): return json.dumps(data, indent=2) This will output the formatted JSON sent by your Cytoscape component into an html.Div field. To read more about event callbacks and how to use them for user interaction, check out our user guide chapter on Cytoscape events. Try out the demo here. You can find the source code in usage-events.py. Visualizing large social networks One way you might want to use Dash Cytoscape is to visualize large social networks. Visualizing large network graphs with thousands or millions of nodes can quickly become overwhelming. In this example, we use Dash Cytoscape with Dash callbacks to interactively explore a network by clicking on nodes of interest. This graph displays the Google+ social social network from the Stanford Large Network Dataset collection. Dynamically expand your graphs Start with a single node (representing a Google+ user) and explore all of its outgoing (i.e. all of the users they are following) or incoming edges (i.e. all of their followers). Try out the demo here. You can find the source code in usage-elements.py. Fast and reactive styling When mapping large networks, strategic styling can help enhance understanding of the data. Leveraging the rendering speed and scalability of Cytoscape.js, we can easily create callbacks that update the stylesheet of large graphs using Dash components such as dropdown menus and input fields, or that update upon clicking a certain node. In this example, we display 750 edges of Google+ users and focus on a particular user by clicking on a specific node. The callback updates the stylesheet by appending a selector that colors the selected ID in purple, and the parents and children in different colors that you specify. Our user guide chapter on styling covers the basics to get you started. Try out the demo here. You can find the source code in usage-stylesheet.py. Integrating with other libraries The release of Dash Cytoscape brings the capabilities of Cytoscape.js into the Python world, opening up the possibility of integrating with a wide range of excellent graph and network libraries in Python. For example, Biopython is a collection of open-source bioinformatics Python tools. In around 100 lines of code, we wrote a parser capable of generating cytoscape elements from Biopython’s Phylo objects. The parser is generic enough that it can be directly integrated in your bioinformatics workflow, and enables you to quickly create interactive phylogeny apps, all in a familiar and pythonic environment. View the phylogeny demo in the docs. Interactively explore your phylogeny trees. The elements are automatically generated from a biopython’s Phylo object, which can be initiated from a wide range of data format. Dash Cytoscape is the first step to provide deeper Dash integration with Biopython and well known graph libraries such as NetworkX and Neo4j. To wrap up Today, Plotly.py is widely used for exploratory analysis and Dash is a powerful analytics solution in the scientific community. Recently, researchers published a paper on CRISPR in Nature and built their machine learning platform using Dash. There is an obvious need for powerful and user-friendly visualizations tools in Python, and network visualization is not an exception. We are planning to fully leverage the resources available in Python to make Cytoscape useful for more network scientists and computational biologists, as well as the broader scientific community. Dash Cytoscape is a work in progress, and we encourage you to help us improve it and make it accessible to more people. Contribute to documentation, improve its compatibility with other libraries, or add functionalities that make it easier to use. Head over to our GitHub repository to get started! We are currently working on multiple improvements, including support for NetworkX, integration with Biopython, and object-oriented declaration for elements, styles and layouts. Check out those issues to keep track of the progress, or to support us through your contributions! If you wish to use this library in a commercial setting, please see our on-premise offerings, which not only guarantee technical support, but also support our open-source initiatives, including Dash Cytoscape itself.
https://medium.com/plotly/introducing-dash-cytoscape-ce96cac824e4
[]
2019-02-05 22:24:00.717000+00:00
['Python', 'Plotly', 'Data Science', 'Data Visualization', 'Dash']
Title 🌟Introducing Dash Cytoscape🌟Content create beautiful powerful network mapping application entirely Python JavaScript required Dash Cytoscape introduces latest addition evergrowing family Dash component Built top Cytoscapejs Dash Cytoscape library brings Cytoscape’s capability Python ecosystem Best it’s opensourced MIT license available today PyPI Simply run pip install dashcytoscape get started also find complete source code Github repository view documentation Dash Cytoscape User Guide post Provide background Cytoscape project Show Dash’s declarative layout element styling help build intuitive intelligent application Explain Dash callback power application’s interactivity Introduce customizable styling available Dash Cytoscape including online style editor use interactive playground style layout idea Illustrate visualize large social network using Dash Cytoscape Share vision integrating Python bioinformatics computational biology tool Standing shoulder giant project would possible without amazing work done Cytoscape Consortium alliance university industry expert working make network visualization accessible everyone Cytoscape project available time Java software JavaScript API maintained Github organization library also used React recently released reactcytoscapejs library allows creation Cytoscape component easily integrated React project Dash Cytoscape extends latter offering Pythonic callbacksready declarative interface ready integrated existing Dash project used standalone component interactively display graph familiar declarative interface Powerful builtin layout Picking right layout graph essential helping viewer understand data highly customizable feature fully available Dash easily specified using dictionary original Cytoscapejs includes many great layout displaying graph way viewed choose display node grid circle tree using physic simulation fact even choose exact number row column grid radius circle temperature cooling factor simulation example display graph fixed grid 25 row simply declare dashcytoscapeCytoscape idcytoscape element layout name grid row 25 Find full example Intuitive clear element declaration Creating node Dash Cytoscape straightforward make dictionary specify data associated node ie ID display label node optionally default position add edge two node give ID source node target node specify want label node Group element node edge inside list ready go nutshell here’s would create basic graph two node dashcytoscapeCytoscape idcytoscape layoutname preset element data id one label Node 1 position x 50 50 data id two label Node 2 position x 200 200 data source one target two label Node 1 2 already adjacency list easily format data accepted Cytoscape display browser 70 line code Displaying 8000 edge associated node concentric layout us Stanford Google Dataset Beautiful customizable styling Cytoscape provides range styling option familiar CSSlike interface get specify exact color pixel size opacity element choose shape node 20 option including circular triangular rectangular well nontraditional content node eg displaying image adding URL adding pie chart inside circular node edge curved straight even possible add arrow middle endpoint add style stylesheet simply need specify group element want modify selector input property want modify key example want node 15 pixel wide 15 pixel high styled opaque custom gray color add following dictionary stylesheet selector node style opacity 09 height 15 width 15 backgroundcolor 222222 selector type element ie node edge certain class specify also certain ID match certain condition eg node height certain threshold Using online style editor order help community get acquainted library created online style editor application let interactively modify stylesheet layout sample graph tool help learn use style property quickly prototype new design Best display stylesheet JSON format simply copy paste Cytoscape app Try Create style save JSON source code usageadvancedpy Familiar Dash callback Dash Callbacks used make Dash apps interactive fired whenever input define modified user click button drag slider inside UI callback function computed server side enables use optimized heavyduty library Scipy Numpy Use Dash callback dashcytoscape update underlying element layout stylesheet graph see documentation chapter callback Additionally use collection userinteraction event input callback triggered whenever user interacts graph word hover tap select element group element choose input entire JSON description element object including connected edge parent child complete stylesheet data contained withing object see output assign following simple callback graph appcallbackOutputhtmldivoutput child Inputcytoscape tapNodeData def displayTapNodeDatadata return jsondumpsdata indent2 output formatted JSON sent Cytoscape component htmlDiv field read event callback use user interaction check user guide chapter Cytoscape event Try demo find source code usageeventspy Visualizing large social network One way might want use Dash Cytoscape visualize large social network Visualizing large network graph thousand million node quickly become overwhelming example use Dash Cytoscape Dash callback interactively explore network clicking node interest graph display Google social social network Stanford Large Network Dataset collection Dynamically expand graph Start single node representing Google user explore outgoing ie user following incoming edge ie follower Try demo find source code usageelementspy Fast reactive styling mapping large network strategic styling help enhance understanding data Leveraging rendering speed scalability Cytoscapejs easily create callback update stylesheet large graph using Dash component dropdown menu input field update upon clicking certain node example display 750 edge Google user focus particular user clicking specific node callback update stylesheet appending selector color selected ID purple parent child different color specify user guide chapter styling cover basic get started Try demo find source code usagestylesheetpy Integrating library release Dash Cytoscape brings capability Cytoscapejs Python world opening possibility integrating wide range excellent graph network library Python example Biopython collection opensource bioinformatics Python tool around 100 line code wrote parser capable generating cytoscape element Biopython’s Phylo object parser generic enough directly integrated bioinformatics workflow enables quickly create interactive phylogeny apps familiar pythonic environment View phylogeny demo doc Interactively explore phylogeny tree element automatically generated biopython’s Phylo object initiated wide range data format Dash Cytoscape first step provide deeper Dash integration Biopython well known graph library NetworkX Neo4j wrap Today Plotlypy widely used exploratory analysis Dash powerful analytics solution scientific community Recently researcher published paper CRISPR Nature built machine learning platform using Dash obvious need powerful userfriendly visualization tool Python network visualization exception planning fully leverage resource available Python make Cytoscape useful network scientist computational biologist well broader scientific community Dash Cytoscape work progress encourage help u improve make accessible people Contribute documentation improve compatibility library add functionality make easier use Head GitHub repository get started currently working multiple improvement including support NetworkX integration Biopython objectoriented declaration element style layout Check issue keep track progress support u contribution wish use library commercial setting please see onpremise offering guarantee technical support also support opensource initiative including Dash Cytoscape itselfTags Python Plotly Data Science Data Visualization Dash
1,317
The Danger of Humanizing Algorithms
The Danger of Humanizing Algorithms Misleading terminology can be dangerous. Machines are actually not learning Photo by Michael Dziedzic on Unsplash. To many, 2016 marked the year when artificial intelligence (AI) came of age. AlphaGo triumphed against the world’s best human Go players, demonstrating the almost inexhaustible potential of artificial intelligence. Programs playing board games with superhuman skills like AlphaGo or AlphaZero have created unparalleled hype surrounding AI, and this has only been fueled by big data availability. In this context, it is not surprising that the public, business, and scientific interest in machine learning are unchecked. These programs can go further than beating a human player, going so far as to invent new and ingenious gameplay. They learn from data, identify patterns, and make decisions based on these patterns. Depending on the application, decision-making occurs without or with only minimal human intervention. Since data production is a continuous process, machine learning solutions adapt autonomously, learning from new information and previous operations. In 2016, AlphaGo used a total of 300,000 games as training data to achieve its excellent results. Every guide out there about how to implement machine learning applications will tell you that you need a clear vision of the problem it has to solve. In many cases, the machine learning applications are faster, more accurate, and time-saving, therefore — among other benefits — shortening time-to-market. However, it will only address this specific problem with the data given. But is this learning in correspondence to the way humans learn? No, it is not. Not even remotely.
https://medium.com/better-programming/the-danger-of-humanizing-algorithms-a9a0e1a5c8e6
['The Unlikely Techie']
2020-08-19 14:11:04.343000+00:00
['Machine Learning', 'Programming', 'Data Science', 'AI', 'Artificial Intelligence']
Title Danger Humanizing AlgorithmsContent Danger Humanizing Algorithms Misleading terminology dangerous Machines actually learning Photo Michael Dziedzic Unsplash many 2016 marked year artificial intelligence AI came age AlphaGo triumphed world’s best human Go player demonstrating almost inexhaustible potential artificial intelligence Programs playing board game superhuman skill like AlphaGo AlphaZero created unparalleled hype surrounding AI fueled big data availability context surprising public business scientific interest machine learning unchecked program go beating human player going far invent new ingenious gameplay learn data identify pattern make decision based pattern Depending application decisionmaking occurs without minimal human intervention Since data production continuous process machine learning solution adapt autonomously learning new information previous operation 2016 AlphaGo used total 300000 game training data achieve excellent result Every guide implement machine learning application tell need clear vision problem solve many case machine learning application faster accurate timesaving therefore — among benefit — shortening timetomarket However address specific problem data given learning correspondence way human learn even remotelyTags Machine Learning Programming Data Science AI Artificial Intelligence
1,318
My Internships at Optimizely
I’ve been very lucky to be an intern at Optimizely for two summers now, as part of their talented Business Systems Engineering team. This team’s mission is to enable Optimizely to take action on its own internal data. Considering Optimizely’s mission is to let our customers take action on their data, it only makes sense that we have methods to make data-driven decisions ourselves. The Business System team builds rock-solid performant data pipelines that take messy, raw data streams and transform them into a neat homogeneous OLAP schema in the Data Warehouse. You might think, why does Optimizely need a Data Warehouse or a Business Systems team? As a startup, it makes sense that we purchase or subscribe to a product like Zendesk, a helpdesk system, rather than hire a team of engineers to build our own. Optimizely uses a myriad of these products, but this creates a problem: useful data about our customers is siloed inside these products. There is no easy way to gain insights about our customers across all the systems we use. Furthermore, most external systems we subscribe to do not have the functionality to write complicated queries that an analyst would need to perform. A Data Warehouse allows us to elegantly solve both of these problems. Being data-driven is so important to us, that there are TVs around the office with charts which are powered by the Data Warehouse. Optimizely Engineering is insistent that interns become part of the team as another engineer, not simply an “intern.” This is unlike most other places, where an intern is assigned work to do in isolation. Intern work at Optimizely is code reviewed just like every other engineer, and is (hopefully) eventually pushed into production. Interns follow the same engineering processes at Optimizely that full-time engineers follow. I worked on several exciting and high impact projects during my tenure here. One of the most impactful projects I completed was overhauling the Zendesk Data Pipeline. Historically, this pipeline caused a great deal of grief to the team with frequent failures affecting the ability to monitor Success service stats in real time. I re-wrote it using a clean object-oriented structure, new API endpoints, and extending the functionality to track SLAs from our Success staff. Tracking these over time is critical to Optimizely’s Customer Success team, especially as the company prepares to roll out an exciting new initiative in the near future. Another project I worked on was implementing a RESTful API called SpeedFeed that is being used to interview full-time candidates, in a take-home assignment. The SpeedFeed API assignment more closely represents the daily work of a data engineer compared to a traditional phone screen. This project enabled our hiring team to evaluate the interview candidates in a brand new way! I also worked on building several new data pipelines. Two of these included Google Cloud and Amazon Web Service costing, that allow Optimizely to track hosting costs at a granular level. Another one was for Docebo, our e-course management system, that allows analysts to answer a plethora of important questions about customer engagement with our education platform. Optimizely also enables engineer creativity to be showcased during Hackathons. I worked on two small projects during a special intern hackday. The first project came from an insight I gained using the Data Warehouse. I determined, by using the Levenshtein string distance function, that many customers likely misspelled their email addresses when signing up for Optimizely. To solve this, we integrated with Mailcheck.js, which offers suggestions for email misspellings. A second project involved increasing the security of our product, by integrating with Castle.io, which detects suspicious login activity. We know a security incident can end the whole company, which is why we try to be as proactive as possible, for example, by adding 2-Step Verification. Overall, I had an excellent two summers interning at Optimizely. There are plenty of fun activities available in the Bay Area. Summer intern trips included an SF Giants Game, miniature golf, escape the room, group volunteering activities and weekend hiking excursions. I strongly recommend any great, aspiring software engineer to intern here at Optimizely. Optinauts have a bright future ahead, with a world-class engineering team coming up with brilliant solutions that delight our customers. If this sounds interesting to you, check out our careers page. Optimizely Interns at a San Francisco Giants Game
https://medium.com/engineers-optimizely/my-internships-at-optimizely-417aad8572f4
['Ryan Smith']
2016-08-17 22:29:50.242000+00:00
['Software Engineering', 'Optimizely', 'San Francisco', 'Data Engineering', 'Internships']
Title Internships OptimizelyContent I’ve lucky intern Optimizely two summer part talented Business Systems Engineering team team’s mission enable Optimizely take action internal data Considering Optimizely’s mission let customer take action data make sense method make datadriven decision Business System team build rocksolid performant data pipeline take messy raw data stream transform neat homogeneous OLAP schema Data Warehouse might think Optimizely need Data Warehouse Business Systems team startup make sense purchase subscribe product like Zendesk helpdesk system rather hire team engineer build Optimizely us myriad product creates problem useful data customer siloed inside product easy way gain insight customer across system use Furthermore external system subscribe functionality write complicated query analyst would need perform Data Warehouse allows u elegantly solve problem datadriven important u TVs around office chart powered Data Warehouse Optimizely Engineering insistent intern become part team another engineer simply “intern” unlike place intern assigned work isolation Intern work Optimizely code reviewed like every engineer hopefully eventually pushed production Interns follow engineering process Optimizely fulltime engineer follow worked several exciting high impact project tenure One impactful project completed overhauling Zendesk Data Pipeline Historically pipeline caused great deal grief team frequent failure affecting ability monitor Success service stats real time rewrote using clean objectoriented structure new API endpoint extending functionality track SLAs Success staff Tracking time critical Optimizely’s Customer Success team especially company prepares roll exciting new initiative near future Another project worked implementing RESTful API called SpeedFeed used interview fulltime candidate takehome assignment SpeedFeed API assignment closely represents daily work data engineer compared traditional phone screen project enabled hiring team evaluate interview candidate brand new way also worked building several new data pipeline Two included Google Cloud Amazon Web Service costing allow Optimizely track hosting cost granular level Another one Docebo ecourse management system allows analyst answer plethora important question customer engagement education platform Optimizely also enables engineer creativity showcased Hackathons worked two small project special intern hackday first project came insight gained using Data Warehouse determined using Levenshtein string distance function many customer likely misspelled email address signing Optimizely solve integrated Mailcheckjs offer suggestion email misspelling second project involved increasing security product integrating Castleio detects suspicious login activity know security incident end whole company try proactive possible example adding 2Step Verification Overall excellent two summer interning Optimizely plenty fun activity available Bay Area Summer intern trip included SF Giants Game miniature golf escape room group volunteering activity weekend hiking excursion strongly recommend great aspiring software engineer intern Optimizely Optinauts bright future ahead worldclass engineering team coming brilliant solution delight customer sound interesting check career page Optimizely Interns San Francisco Giants GameTags Software Engineering Optimizely San Francisco Data Engineering Internships
1,319
It’s Not Microservices, It’s You
It’s Not Microservices, It’s You Microservices are just one example of a technology trend that has been hyped up to be the answer to everybody’s problems Photo by You X Ventures on Unsplash The hype creates a dynamic of inflated expectations and excitement with business representatives and software engineers alike. In companies and teams where decision pushing is commonplace this often leads to rushed decisions, which likely ends in frustration and disappointment. Business representatives, software engineers and other technical specialists should be freely exchanging ideas, discussing risks and doubts as well as challenging each other with strong mutual respect. Creating this culture takes effort, a sense of personal responsibility and proactiveness from all involved. And I can guarantee you, no architecture or new technology will create long term success if the culture within the company is dominated by a small group of individuals who don’t understand that being a leader is to listen. Teams that rush into microservices can get burned in many different ways. When re architecting an application into smaller, loosely-coupled pieces of software that communicate with each other over a network, teams suddenly have to deal with the fallacies of distributed computing and decentralized data management. There’s a multitude of articles that explore these complexities in greater detail, so I won’t replicate that effort. I can say that underestimating these complexities often results in fragile architectures, scaling issues and substantial rework. Mastering them takes preparation, planning and experience. We cannot overcome the fact there will be a learning curve, as there really is no substitute for experience. That being said, the chances of success can be greatly increased through the right preparation and planning. A key aspect in that regard is estimation. In this blog I want to share an estimation technique that I’ve found to be helpful to channel the excitement around technology trends and break through unrealistic expectations by providing clarity on effort as well as associated complexity, risks and unknowns. This enables the right conversations between business representatives, software engineers and other technical specialists about trends like microservices before committing prematurely. How to do work estimation right No matter how hard you try, estimation will never be perfect. We cannot predict the future and foresee what we don’t know. The fact that we cannot be perfect when estimating, does not mean it doesn’t have value. Complexity, uncertainty, and risk are all factors that influence confidence and therefore also influence the estimated effort. But most estimation techniques whether it’s hours, ideal days, story points or t-shirt sizing, only focus on the effort and don’t provide means to also express confidence. Which is a shame since it’s a big part of the value that is achieved through estimation. Because work that the team doesn’t feel confident about is much more likely to cause problems. One way of making confidence transparent having the team that will be delivering the work perform range estimation. A range estimation technique I’ve had good results with is the 50/90 estimation technique. When using 50/90 estimation, every piece of work is estimated twice: The first estimate represents a “aggressive, but possible” (ABP) estimate where there’s 50% certainty of completing the work within that time. The second estimate represents a “highly probable” (HP) estimate where there’s 90% certainty of completing the work within that time. A narrow range, with the ABP and HP estimates being fairly close together, means the team is confident in the work. A wide range, where the ABP and HP estimates are far apart, means the team is not confident in the work based on current information and knowledge. When dealing with a wide range, the team should discuss the complexities, risks or unknowns they foresee and if these can be reduced or mitigated. Some examples it’s lead-time, dependencies on other teams, known bottlenecks, development complexity of technologies that are not known. When these things are outside the span of control of the team, they’re still relevant. Doesn’t mean it becomes their responsibility to fix, it’s their responsibility to make it transparent so it can either be acted upon or explicitly accepted. There are three additional rules that ensure you get the most value out of the estimation process and the decision making process: Make sure that the estimation is done by the engineers that are performing the work. Involving others with prior expertise is certainly valuable, but they should assume a role where they use their experiences in a coaching capacity with the goal to eliminate blind spots in the estimations. Prevent estimations performed by one person to reduce cognitive bias and in a group setting make sure everyone involved provides input. Which can be challenging in groups with very vocal or overpowering individuals if not addressed. A simple trick often applied in story point estimation is having everybody present their estimates at the exact same time to eliminate the possibility of people being influenced. This is crucial to reduce the cognitive bias of individuals and extract the best out of the team by sparking the right conversations. Communicate estimations in bandwidth form, with the identified risks and possible follow up actions to reduce or mediate risks. The 50/90 estimation technique offers a formula for compounding all the range estimates back into a single number. With the estimation results available, it’s time to decide what to do with the identified risks. Often it is possible to reduce or mitigate them. A common example is building a proof of concept to better understand a particular problem or challenge. The improved understanding or reduced blind spot should improve estimate accuracy. Investing additional time and resources in reducing or mitigating risks might not always be possible or worth the effort. This is fine, as long as residual risks are explicitly accepted. Preparation is half the victory Instead of telling you if microservices are the right choice I shared a technique that enables your team to come to their own conclusion. Which is how it should be as nobody understands your circumstances better. A key takeaway is that estimation is just a tool. The real goal is using that valuable information to bring everyone together and have honest conversations about the value and the realistic cost of applying/using microservices. Doing so gives a much better starting point than someone who watched a few tech talks on how Netflix migrated to microservices and feels that success can be replicated overnight. It’s very possible that some teams come to the conclusion they don’t need microservices or that they don’t feel confident enough given the complexities and learning curve when viewed in context of time constraints. Companies and teams often have multiple improvements going one in parallel so priorities have to be set. In that case, consider starting with a monolith that is designed to be modular. This enables that application to be broken up into microservices later if their situation changes and the additional complexity of microservices can be justified. This upholds the KISS and YAGNI principles and avoids over engineering while also being mindful that the situation might change in the future. Those who have completely disregarded monoliths would do well to remember that Netflix started as a monolith before transitioning to microservices at a later stage. And while we all have the belief that we are building the next unicorn with massive scale requirements just around the corner, it’s likely that some of us are wrong.
https://medium.com/swlh/its-not-microservices-it-s-you-8f2431dc50ff
['Oskar Uit De Bos']
2020-08-21 12:27:54.857000+00:00
['Software Development', 'Software Engineering', 'Microservices']
Title It’s Microservices It’s YouContent It’s Microservices It’s Microservices one example technology trend hyped answer everybody’s problem Photo X Ventures Unsplash hype creates dynamic inflated expectation excitement business representative software engineer alike company team decision pushing commonplace often lead rushed decision likely end frustration disappointment Business representative software engineer technical specialist freely exchanging idea discussing risk doubt well challenging strong mutual respect Creating culture take effort sense personal responsibility proactiveness involved guarantee architecture new technology create long term success culture within company dominated small group individual don’t understand leader listen Teams rush microservices get burned many different way architecting application smaller looselycoupled piece software communicate network team suddenly deal fallacy distributed computing decentralized data management There’s multitude article explore complexity greater detail won’t replicate effort say underestimating complexity often result fragile architecture scaling issue substantial rework Mastering take preparation planning experience cannot overcome fact learning curve really substitute experience said chance success greatly increased right preparation planning key aspect regard estimation blog want share estimation technique I’ve found helpful channel excitement around technology trend break unrealistic expectation providing clarity effort well associated complexity risk unknown enables right conversation business representative software engineer technical specialist trend like microservices committing prematurely work estimation right matter hard try estimation never perfect cannot predict future foresee don’t know fact cannot perfect estimating mean doesn’t value Complexity uncertainty risk factor influence confidence therefore also influence estimated effort estimation technique whether it’s hour ideal day story point tshirt sizing focus effort don’t provide mean also express confidence shame since it’s big part value achieved estimation work team doesn’t feel confident much likely cause problem One way making confidence transparent team delivering work perform range estimation range estimation technique I’ve good result 5090 estimation technique using 5090 estimation every piece work estimated twice first estimate represents “aggressive possible” ABP estimate there’s 50 certainty completing work within time second estimate represents “highly probable” HP estimate there’s 90 certainty completing work within time narrow range ABP HP estimate fairly close together mean team confident work wide range ABP HP estimate far apart mean team confident work based current information knowledge dealing wide range team discus complexity risk unknown foresee reduced mitigated example it’s leadtime dependency team known bottleneck development complexity technology known thing outside span control team they’re still relevant Doesn’t mean becomes responsibility fix it’s responsibility make transparent either acted upon explicitly accepted three additional rule ensure get value estimation process decision making process Make sure estimation done engineer performing work Involving others prior expertise certainly valuable assume role use experience coaching capacity goal eliminate blind spot estimation Prevent estimation performed one person reduce cognitive bias group setting make sure everyone involved provides input challenging group vocal overpowering individual addressed simple trick often applied story point estimation everybody present estimate exact time eliminate possibility people influenced crucial reduce cognitive bias individual extract best team sparking right conversation Communicate estimation bandwidth form identified risk possible follow action reduce mediate risk 5090 estimation technique offer formula compounding range estimate back single number estimation result available it’s time decide identified risk Often possible reduce mitigate common example building proof concept better understand particular problem challenge improved understanding reduced blind spot improve estimate accuracy Investing additional time resource reducing mitigating risk might always possible worth effort fine long residual risk explicitly accepted Preparation half victory Instead telling microservices right choice shared technique enables team come conclusion nobody understands circumstance better key takeaway estimation tool real goal using valuable information bring everyone together honest conversation value realistic cost applyingusing microservices give much better starting point someone watched tech talk Netflix migrated microservices feel success replicated overnight It’s possible team come conclusion don’t need microservices don’t feel confident enough given complexity learning curve viewed context time constraint Companies team often multiple improvement going one parallel priority set case consider starting monolith designed modular enables application broken microservices later situation change additional complexity microservices justified upholds KISS YAGNI principle avoids engineering also mindful situation might change future completely disregarded monolith would well remember Netflix started monolith transitioning microservices later stage belief building next unicorn massive scale requirement around corner it’s likely u wrongTags Software Development Software Engineering Microservices
1,320
Pandora Boxchain: Monthly Digest, September
Starting this month and going forward we plan to publish a monthly digest with the most interesting updates for the Pandora Boxchain project. Research and Development: The main focus of our research and development activities in September was on Prometheus consensus and a layer 1 network based on it, which will become the hosting layer for our high-load computing protocol at the level 2. We call this 1st layer of network “BOX”, and the second “PAN”; together they ensemble Pandora Boxchain. Overall directions of research of Prometheus consensus and BOX network development were related to: Formal verification of the parts of the consensus algorithm utilising special insertion modelling methodology, developed by Prof. Litichevsky and Prof. Hilbert; designing the improvements to the consensus algorithms; development of consensus prototypes in Python; improvements to the existing Rust implementation of the network node. The results of these activities were the following: Implemented prototype of Prometheus consensus in Python. We’ve started working on verification mechanisms which will protect the network from accepting faulty or malicious blocks and transactions. This includes wrongly signed, sent repeatedly, sent at a wrong time, in a wrong round, with wrong links to previous blocks or transactions. Started working on formal verification of gossips mechanism for Prometheus consensus. This mechanism allows validators to communicate and reach consensus on when a block was skipped either maliciously or by network problems. If one of the validators doesn’t see a block from the previous validator in a validation queue he sends negative gossip and other validators respond either with a block if they know about it or with negative gossip confirming block absence in DAG. Implemented so named “timestepper mechanism” that allows to perform tricky tests by manipulating simulation of time and seeing how nodes react to complex scenarios. Switched to Elliptic Curve cryptography in Prometheus consensus. It is much more space efficient than RSA and allows us to calculate close to real-world memory and performance overhead. Performed analysis of Dfinity, Casper, Tezos consensuses and made an investigations on BLS, Schnorr signatures. Created a new ordering and merging algorithms for Prometheus consensus. Worked with the gossip system: initialization of sending negative and positive state events, gossip transaction, checking of a full system of gossip systems work. Made mempool updates: improvements of the mempool functionality for working with gossip transactions, adding new transactions to the block, implementation of the penalty case, which arises if, within the framework of mempool, a negative and positive state from one author was discovered. In this case, the validator will immediately write out the penalty (even write out to himself) and immediately transit into the block. Created and tested tx_by_hash() storing — storage system for all transactions within the {tx_hash: tx} structure or {transaction hash: transaction}. This mechanism is based on DAG and includes methods for adding and searching transactions in this structure. Additionally to R&D activities in Prometheus and BOX Network we were also performing further development of our Proof of Computing Work protocol and existing PAN testnet: Developed a prototype of the electron.js application with built-in box proxy. Based on this prototype we will develop The Pandora Market desktop application with a description of the use cases. Studied Proof of Computing Work algorithm from the point of view of Markov chains. There are three types of nodes in Proof of Cognitive Work: worker (performs computations), validator (validates computations), arbiter (resolves conflicts through arbitration). Through the accumulation of reputation or by not following the protocol (i.e. Byzantine behaviour) the nodes in Pandora network migrate between these states. This makes Markov chain an interesting framework to study behaviour in Pandora’s ecosystem. The questions of this study are: what are the steady states in this ecosystem given different parameters of the model, and under which conditions ecosystem exits in a steady state, such that it functions as designed. Events: September, 5th ➡️ Pandora Boxchain Meetup in Berlin On our second Pandora Boxchain meetup in Berlinblockchain and AI enthusiasts joined together after the Zero Knowledge Summit at Mindspace. Andrey Sobol presented our research on ‘Randomness in PoS’. After Sergey Korostelyov shed some light on how decentralised distributed network technology can help build decentralized AI systems. His presentation is available here. September, 7th — 9th➡️#ETHBerlin Hackathon Our team created a reference implementation of the ERC1329- Inalienable Reputation Token during #ETHBerlin Hackathon. This ERC proposes the standard in creating inalienable reputation tokens. Take a look at first version of ERC1329 and join the discussion on #Ethereum GitHub. September, 9th ➡️ Blockchain Startup Pitch We took part in Blockchain Startup Pitch that took place during Berlin Blockchain Week. The team presented the project to blockchain savvy audience and had a great discussion with the community. September, 7th-11th ➡️ Blоckchain Cruise Maxim Orlovsky, the founder of Pandora Boxchain, took part in Blockchain Cruise on The Mediterranean sea together with Charlie Lee, Bobby Lee, Jimmy Song, Brock Pierce, Tone Vays and other influential changemakers in the blockchain space. He presented the results of joint academic research and technical engineering, revealing a new type of PoS consensus Prometheus, that supersedes PoW in all main aspects. His presentation: “PoS Consensus: can it be as much censorship-resistant and secure as PoW?” is available for all on slideshare.net. September 22–23 ➡️Baltics Honeybadger Conference 2018 At the end of the month our Team attended the Baltic Honeybadger 2018 Conference, where we had a lot of discussions regarding Pandora Boxchain technologies with Bitcoin community & developers, including Adam Back, Peter Todd, Giacomo Zucco, Matt Corallo, Eric Voskuil, Alex Petrov and others. We post updates on Research and Development achievements and upcoming Events on socials. Our Communities in Social Networks are really strong and active, and they enlarged greatly during September. In only one month we gained over 5000 users in Facebook and more than 1200 followers followed as on Twitter. Join our communities and be a part of Pandora Boxchain.
https://medium.com/pandoraboxchain/pandora-boxchain-monthly-digest-with-the-most-interesting-updates-and-news-in-september-502520a07a87
['Olha Rymar']
2018-10-25 12:48:11.334000+00:00
['Decentralized', 'Blockchain', 'AI', 'Artificial Intelligence']
Title Pandora Boxchain Monthly Digest SeptemberContent Starting month going forward plan publish monthly digest interesting update Pandora Boxchain project Research Development main focus research development activity September Prometheus consensus layer 1 network based become hosting layer highload computing protocol level 2 call 1st layer network “BOX” second “PAN” together ensemble Pandora Boxchain Overall direction research Prometheus consensus BOX network development related Formal verification part consensus algorithm utilising special insertion modelling methodology developed Prof Litichevsky Prof Hilbert designing improvement consensus algorithm development consensus prototype Python improvement existing Rust implementation network node result activity following Implemented prototype Prometheus consensus Python We’ve started working verification mechanism protect network accepting faulty malicious block transaction includes wrongly signed sent repeatedly sent wrong time wrong round wrong link previous block transaction Started working formal verification gossip mechanism Prometheus consensus mechanism allows validators communicate reach consensus block skipped either maliciously network problem one validators doesn’t see block previous validator validation queue sends negative gossip validators respond either block know negative gossip confirming block absence DAG Implemented named “timestepper mechanism” allows perform tricky test manipulating simulation time seeing node react complex scenario Switched Elliptic Curve cryptography Prometheus consensus much space efficient RSA allows u calculate close realworld memory performance overhead Performed analysis Dfinity Casper Tezos consensus made investigation BLS Schnorr signature Created new ordering merging algorithm Prometheus consensus Worked gossip system initialization sending negative positive state event gossip transaction checking full system gossip system work Made mempool update improvement mempool functionality working gossip transaction adding new transaction block implementation penalty case arises within framework mempool negative positive state one author discovered case validator immediately write penalty even write immediately transit block Created tested txbyhash storing — storage system transaction within txhash tx structure transaction hash transaction mechanism based DAG includes method adding searching transaction structure Additionally RD activity Prometheus BOX Network also performing development Proof Computing Work protocol existing PAN testnet Developed prototype electronjs application builtin box proxy Based prototype develop Pandora Market desktop application description use case Studied Proof Computing Work algorithm point view Markov chain three type node Proof Cognitive Work worker performs computation validator validates computation arbiter resolve conflict arbitration accumulation reputation following protocol ie Byzantine behaviour node Pandora network migrate state make Markov chain interesting framework study behaviour Pandora’s ecosystem question study steady state ecosystem given different parameter model condition ecosystem exit steady state function designed Events September 5th ➡️ Pandora Boxchain Meetup Berlin second Pandora Boxchain meetup Berlinblockchain AI enthusiast joined together Zero Knowledge Summit Mindspace Andrey Sobol presented research ‘Randomness PoS’ Sergey Korostelyov shed light decentralised distributed network technology help build decentralized AI system presentation available September 7th — 9th➡️ETHBerlin Hackathon team created reference implementation ERC1329 Inalienable Reputation Token ETHBerlin Hackathon ERC proposes standard creating inalienable reputation token Take look first version ERC1329 join discussion Ethereum GitHub September 9th ➡️ Blockchain Startup Pitch took part Blockchain Startup Pitch took place Berlin Blockchain Week team presented project blockchain savvy audience great discussion community September 7th11th ➡️ Blоckchain Cruise Maxim Orlovsky founder Pandora Boxchain took part Blockchain Cruise Mediterranean sea together Charlie Lee Bobby Lee Jimmy Song Brock Pierce Tone Vays influential changemakers blockchain space presented result joint academic research technical engineering revealing new type PoS consensus Prometheus supersedes PoW main aspect presentation “PoS Consensus much censorshipresistant secure PoW” available slidesharenet September 22–23 ➡️Baltics Honeybadger Conference 2018 end month Team attended Baltic Honeybadger 2018 Conference lot discussion regarding Pandora Boxchain technology Bitcoin community developer including Adam Back Peter Todd Giacomo Zucco Matt Corallo Eric Voskuil Alex Petrov others post update Research Development achievement upcoming Events social Communities Social Networks really strong active enlarged greatly September one month gained 5000 user Facebook 1200 follower followed Twitter Join community part Pandora BoxchainTags Decentralized Blockchain AI Artificial Intelligence
1,321
5 Steps to Being Traditionally Published
You hear pretty often that publishing is dead. Or that it’s impossible to actually sell a first book anymore. Or that you can’t do it, without a huge platform (or some other equally improbable necessity.) The truth is, though, that it’s always been hard to have a book traditionally published. And it’s no more impossible today than it ever has been. It’s really just numbers. There are way more people who want to be published than there are opportunities to be published. High demand equals high difficulty. So, even though the writer hires the agent, and sells to the editor, and the publisher — the agent, editor, publisher have so many opportunities to do their work that they don’t have to really hustle for clients. People are scrambling, begging them to take most of the proceeds of each book sale. That’s just the way of the world. That being said, though, if traditional publishing is your goal (and let’s be honest, it’s a goal for a lot of writers), it’s not impossible. Here’s how you do it: Write a really good book, all the way through to the end. If you’re writing fiction, this is a must. No agent will look at you if you don’t have a complete novel, so no traditional publisher will have the chance to consider your work. With non-fiction you can write a proposal instead of the whole book. If you’re a first time writer, though, it wouldn’t hurt to write your whole book even for non-fiction. Regardless, the work you turn into agents (proposal or finished draft) needs to shine. While you’re writing, start building an email list. I wish that someone had given me this advice when I sold my first novel to a traditional publisher. I had eighteen months between when my book sold and when it was released. My best use of that time would have been building an email list, but I didn’t know that. No one told me. So, I’m telling you. You’ll have an easier time appealing to an agent (and a publisher) if you have a solid foundation of readers already. It goes without saying, you’ll have an easier time selling your book if you have people waiting to buy it. If you can get to 10,000 on your email list, it will make a difference. If you can get to 100,000 — suddenly you’re not 1 in a million anymore. You’re more rare, which means that agents and publishers will be more anxious to compete for your business. The truth is, you probably don’t even need an agent or publisher if you build your list to that level. You certainly won’t have trouble attracting one or the other or both if that’s what you want. Look for an agent. This requires you to learn how to write a solid query letter. A query letter is a one-page sales letter that tries to entice the agent to request your manuscript or proposal. Most large traditional publishers require you to have an agent before they’ll look at your work. Write the next book. And so on. I guess the real truth is that it’s much harder to get your first finished book published than your second, and harder for the second than the third, etc. Writing is a skill that takes practice to master. If you keep going, you increase your chances exponentially. And if you quit, you build a block wall that your chances can’t overcome. You aren’t competing against every single writer trying to get a book published. I have a friend who used to read an agent’s slush pile. He says that a full 90 percent of the queries that come in are instant rejects — because the writing isn’t there. You’re only competing against the books that are publish-ready. You need to make sure your book is publish-ready. Do what it takes to get there. That means a lot of practice, a lot of reading, a lot of writing. Maybe hiring an editor. Maybe taking some classes. It means behaving like a professional writer. If you stick with it and keep improving, you’ll get there. By time you do, though, you might find that you don’t really want to be there anymore. The world of books is changing way faster than the world of publishing has kept up with.
https://shauntagrimes.medium.com/5-steps-to-being-traditionally-published-415a4996cf38
['Shaunta Grimes']
2019-05-31 11:41:28.385000+00:00
['Work', 'Publishing', 'Writing', 'Self', 'Creativity']
Title 5 Steps Traditionally PublishedContent hear pretty often publishing dead it’s impossible actually sell first book anymore can’t without huge platform equally improbable necessity truth though it’s always hard book traditionally published it’s impossible today ever It’s really number way people want published opportunity published High demand equal high difficulty even though writer hire agent sell editor publisher — agent editor publisher many opportunity work don’t really hustle client People scrambling begging take proceeds book sale That’s way world said though traditional publishing goal let’s honest it’s goal lot writer it’s impossible Here’s Write really good book way end you’re writing fiction must agent look don’t complete novel traditional publisher chance consider work nonfiction write proposal instead whole book you’re first time writer though wouldn’t hurt write whole book even nonfiction Regardless work turn agent proposal finished draft need shine you’re writing start building email list wish someone given advice sold first novel traditional publisher eighteen month book sold released best use time would building email list didn’t know one told I’m telling You’ll easier time appealing agent publisher solid foundation reader already go without saying you’ll easier time selling book people waiting buy get 10000 email list make difference get 100000 — suddenly you’re 1 million anymore You’re rare mean agent publisher anxious compete business truth probably don’t even need agent publisher build list level certainly won’t trouble attracting one that’s want Look agent requires learn write solid query letter query letter onepage sale letter try entice agent request manuscript proposal large traditional publisher require agent they’ll look work Write next book guess real truth it’s much harder get first finished book published second harder second third etc Writing skill take practice master keep going increase chance exponentially quit build block wall chance can’t overcome aren’t competing every single writer trying get book published friend used read agent’s slush pile say full 90 percent query come instant reject — writing isn’t You’re competing book publishready need make sure book publishready take get mean lot practice lot reading lot writing Maybe hiring editor Maybe taking class mean behaving like professional writer stick keep improving you’ll get time though might find don’t really want anymore world book changing way faster world publishing kept withTags Work Publishing Writing Self Creativity
1,322
How Boredom Can Help You Be More Productive and Creative
How Boredom Can Help You Be More Productive and Creative We need to understand the importance of boredom Photo by Anastasia Shuraeva on Pexels In a study, participants were told to sit in a room doing nothing for 15 minutes. In that room, there was also a button which would electrically shock them. They could choose to click that button at their own will. 67% of men and 25% of women pressed that button. It goes to show how much we hate boredom. We hate it so much that we can choose pain over boredom. So, we try to avoid it as much as we can. Avoiding boredom has never been easier because we’ve got so many things to do. We try to keep doing something in our free time, whether it is scrolling through social media, watching a badass movie, or binge-watching shows on Netflix. That makes sense because boredom is boring.
https://medium.com/curious/how-boredom-can-help-you-be-more-productive-and-creative-8a923db650af
['Binit Acharya']
2020-10-16 05:40:30.707000+00:00
['Personal Development', 'Self Improvement', 'Life', 'Creativity', 'Productivity']
Title Boredom Help Productive CreativeContent Boredom Help Productive Creative need understand importance boredom Photo Anastasia Shuraeva Pexels study participant told sit room nothing 15 minute room also button would electrically shock could choose click button 67 men 25 woman pressed button go show much hate boredom hate much choose pain boredom try avoid much Avoiding boredom never easier we’ve got many thing try keep something free time whether scrolling social medium watching badass movie bingewatching show Netflix make sense boredom boringTags Personal Development Self Improvement Life Creativity Productivity
1,323
Weekly Prompt: 28-31.12
Ahoy!!! 2020 is coming to an end…Oh no!!! Let’s all panic over the fact that we feel like we didn’t achieve much this year. Let’s publish content about how terrible these 12 months have been…Let’s complain and be negative, because that’s what we’re expected to do around this time, right? Well, not us, folks, not us. We will use these last few days of 2020 we have left to reflect on our internal world. To hell with everything that happens externally…There’s only so much we can do about it, there’s only so much we can control! I am more concerned about what goes on within. What we’re focusing on. What we’re repeating in our heads. You know, the good stuff. I know I’ve been living in my head a lot lately and so I want to ground myself through the following prompts. I hope you’re onboard with the idea of some more self reflection! It will be rewarding in the end (and you know it!): Monday: Tell me about a “wow” or “oh yeah” moment in your life. When you came to a realization and something made complete sense. What happened? Where were you? What did you realize/learn? Tuesday: What do you wish you were bold or brave enough to do? Wednesday: What does having purpose mean to you? Thursday (poetry challenge): In detail, write a poem about how you would like people to feel after interacting with you. That’s it for now, dear friends! Hope you have enjoyed the ride KTHT took you on in 2020 and are excited for new projects and exciting challenges in 2021 :) Thank you for your time, as always. A big, fat NAMAS’CRAY!
https://medium.com/know-thyself-heal-thyself/weekly-prompt-28-31-12-5476823302d9
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-28 16:36:29.277000+00:00
['Short Story', 'Writing', 'Energy', 'Newsletterandprompts', 'Creativity']
Title Weekly Prompt 283112Content Ahoy 2020 coming end…Oh Let’s panic fact feel like didn’t achieve much year Let’s publish content terrible 12 month been…Let’s complain negative that’s we’re expected around time right Well u folk u use last day 2020 left reflect internal world hell everything happens externally…There’s much there’s much control concerned go within we’re focusing we’re repeating head know good stuff know I’ve living head lot lately want ground following prompt hope you’re onboard idea self reflection rewarding end know Monday Tell “wow” “oh yeah” moment life came realization something made complete sense happened realizelearn Tuesday wish bold brave enough Wednesday purpose mean Thursday poetry challenge detail write poem would like people feel interacting That’s dear friend Hope enjoyed ride KTHT took 2020 excited new project exciting challenge 2021 Thank time always big fat NAMAS’CRAYTags Short Story Writing Energy Newsletterandprompts Creativity
1,324
Why You Should Care About Joe Rogan’s Bowel Habits
Unless you’re a protein guzzling gym monkey, the thought of your entire diet consisting of this one food group - which national guidelines tell us should only make up ten to thirty-five percent of our diet - probably seems like a recipe made for gastric disaster. And if Joe Rogan’s experience of the diet so far is anything to go by, then you wouldn’t be far wrong. On a recent Instagram post Rogan gave us all the sordid details in typically comedic fashion: “Carnivore diet update; the good and the bad. Let’s start with the bad. There’s really only one “bad” thing, and that thing is diarrhoea. I’m not sure diarrhoea is an accurate word for it, like I don’t think a shark is technically a fish. It’s a different thing, and with regular diarrhoea I would compare it to a fire you see coming a block or two away and you have the time to make an escape, whereas this carnivore diet is like out of nowhere the fire is coming through the cracks, your doorknob is red hot, and all hope is lost. I haven’t shit my pants yet, but I’ve come to accept that if I keep going with this diet it’s just a matter of time before we lose a battle, and I fill my undies like a rainforest mudslide overtaking a mountain road. It’s that bad. It seems to be getting a little better every day, so there’s that to look forward to, but as of today I trust my butthole about as much as I trust a shifty neighbour with a heavy Russian accent that asks a lot of personal questions.” As funny as this post might seem (does toilet humour ever get old?), if we look past the punchlines we can see that Rogan is describing some relatively serious bowel related side effects to the carnivore diet. Side effects that anyone who is considering trying out this latest exclusionary diet should take note of if they want to avoid a prolonged period of solitary confinement to their lavatory. Not all of us have a cushy enough lifestyle whereupon frequent toilet trips would only be a minor grievance. I imagine that running your own podcast, as Rogan does, in your own studio, on your own timetable, gives you a certain amount of freedom when it comes to your toilet trips. The same couldn’t be said for your average nine to five office worker, or a worker in any other amount of relatively run of the mill jobs. I worked part-time as a shop assistant in a high street clothing store to get a bit of extra money while I was at university and I can distinctly remember the painful squirming of having to hold in a number one for longer than was comfortable, due to my boss not wanting the staff to have more than one toilet trip while they were on the cash register. I can’t begin to imagine what kind of pain I would have gone through if I’d been doing the carnivore diet whilst I worked there and I started getting some Rogan-esque bowel trouble whilst I was confined to that shop floor.
https://antonypinol.medium.com/why-you-should-care-about-joe-rogans-bowel-habits-6a8460fbc2c2
['Antony Pinol']
2020-01-16 09:11:47.521000+00:00
['Diet', 'Wellness', 'Health', 'Self Improvement', 'Self']
Title Care Joe Rogan’s Bowel HabitsContent Unless you’re protein guzzling gym monkey thought entire diet consisting one food group national guideline tell u make ten thirtyfive percent diet probably seems like recipe made gastric disaster Joe Rogan’s experience diet far anything go wouldn’t far wrong recent Instagram post Rogan gave u sordid detail typically comedic fashion “Carnivore diet update good bad Let’s start bad There’s really one “bad” thing thing diarrhoea I’m sure diarrhoea accurate word like don’t think shark technically fish It’s different thing regular diarrhoea would compare fire see coming block two away time make escape whereas carnivore diet like nowhere fire coming crack doorknob red hot hope lost haven’t shit pant yet I’ve come accept keep going diet it’s matter time lose battle fill undies like rainforest mudslide overtaking mountain road It’s bad seems getting little better every day there’s look forward today trust butthole much trust shifty neighbour heavy Russian accent asks lot personal questions” funny post might seem toilet humour ever get old look past punchlines see Rogan describing relatively serious bowel related side effect carnivore diet Side effect anyone considering trying latest exclusionary diet take note want avoid prolonged period solitary confinement lavatory u cushy enough lifestyle whereupon frequent toilet trip would minor grievance imagine running podcast Rogan studio timetable give certain amount freedom come toilet trip couldn’t said average nine five office worker worker amount relatively run mill job worked parttime shop assistant high street clothing store get bit extra money university distinctly remember painful squirming hold number one longer comfortable due bos wanting staff one toilet trip cash register can’t begin imagine kind pain would gone I’d carnivore diet whilst worked started getting Roganesque bowel trouble whilst confined shop floorTags Diet Wellness Health Self Improvement Self
1,325
A Great Feeling
And today, we will start with my dear cousin. Call her Jubilate Mashauri. I call her cousin. Simply because we have a lineage relationship that brings us closer. I met her when I was 11 years old and she was 12 years old. Just a one year difference. (But she still wants a ‘shkamoo’ from me!) She was in class six, I was in class five. A beautiful, charming and open creature. I was playing football when she just popped in grandma’s gate with her younger sister tightly held on her left hand. Seeing her for the first moment left me heading for a quick shower. Ha h! Silly me. She seemed interested with my character, and I got interested to know her. Soon enough did I learn how related we were. In her course of stay at grandma’s place, we became close and loving relatives — playing, having fun and getting close to one another. We spent much time learning of our characters and personalities in a nutshell, kiddish way I can say. Most especially, the ‘what’ and ‘why’ we wanted to become in the near future. I don’t quite remember if she mentioned any of her plans to me, but I know what I always mentioned to her! “I want to be a Lawyer! Yes, a lawyer.” I cannot forget that. It was deep in my bloodline. Now, I bet you know what happens when ka-boy child and a girl child get so close ha? Words start to develop. Fashionable silly words and jokes of love arise! (Chuckles) Spare me Jubilate, ha ha! Let’s forget about that, anyways. [ It was foolish age. Not that important. ] So, the holidays passed, I got back home, she kept schooling and life went on. But one thing had attached us in common. The dream to attend Agape Lutheran Junior Seminary. You a’ll know how these ambitions develop with kids. Passion. A connected passion embolden in character.
https://medium.com/the-chat-post/a-great-feeling-3aaff6dc4195
['Mark Malekela']
2020-01-09 01:46:27.741000+00:00
['Cousins', 'Life Lessons', 'People', 'Candid Chat', 'Storytelling']
Title Great FeelingContent today start dear cousin Call Jubilate Mashauri call cousin Simply lineage relationship brings u closer met 11 year old 12 year old one year difference still want ‘shkamoo’ class six class five beautiful charming open creature playing football popped grandma’s gate younger sister tightly held left hand Seeing first moment left heading quick shower Ha h Silly seemed interested character got interested know Soon enough learn related course stay grandma’s place became close loving relative — playing fun getting close one another spent much time learning character personality nutshell kiddish way say especially ‘what’ ‘why’ wanted become near future don’t quite remember mentioned plan know always mentioned “I want Lawyer Yes lawyer” cannot forget deep bloodline bet know happens kaboy child girl child get close ha Words start develop Fashionable silly word joke love arise Chuckles Spare Jubilate ha ha Let’s forget anyways foolish age important holiday passed got back home kept schooling life went one thing attached u common dream attend Agape Lutheran Junior Seminary a’ll know ambition develop kid Passion connected passion embolden characterTags Cousins Life Lessons People Candid Chat Storytelling
1,326
Why Startups Need Affordable Help
Why Startups Need Affordable Help Teaching Startup is trying to reach both near and far away places It’s the worst kept secret in business, but entrepreneurs generally don’t have a lot of disposable cash. This is not a blanket personal assessment, just a business rule. It doesn’t matter if the founder is a kid with a few hundred dollars of saved-up birthday cash or a multi-millionaire who can fund a new enterprise with pocket money. When a business starts, it starts with capital, and that capital is finite and has to pay for everything. Sometimes including rent and meals for the founder. The vast majority of new business founders don’t come into the business with a blank check. Most founders don’t struggle over how much equity they’re willing to part with for that $500K Shark Tank investment. So it kills me when I see founders spend money they don’t have on things they don’t need — things that aren’t going to be immediately helpful in furthering their progress. I founded Teaching Startup — a newsletter and app with answers for entrepreneurs — to provide help to every founder. But I know every founder isn’t made of money, so I made it affordable, $10 a month. And I know it takes time for that help to be realized, so I threw in a free trial. I also know it’s not for everyone, so I made it a cancel anytime, no commitment deal. Some of my favorite feedback is the feedback I get from founders in Africa, India, South America, and other far-away (to me) corners of the world where the money doesn’t flow like it does in Silicon Valley or New York. What they’re spending is worth so much more than $10 a month, so if they find value in the product, I know the product is valuable here in the US. Or in the UK, or Australia, or all those other places Teaching Startup members come from. So if you’re in one of those far away places, let me offer to cut the cost for you. If you’re in one of those places where the startup money doesn’t flow, here, there, or anywhere, use invite code FARAWAY before the end of 2020 and we’ll lower the price of Teaching Startup to $6.99 a month for all of 2021. Even if you aren’t in one of those places, use invite code NEAR and I’ll give you your first month for $5. If Teaching Startup winds up not being right for you, no worries. In both cases, you get up to 30 days free to figure that out. And if that still doesn’t fit your budget, talk to us, and we’ll see what we can do. Good advice and the right answers don’t have to cost $300 an hour. We’re here to prove that.
https://jproco.medium.com/why-startups-need-affordable-help-c6f13ea06025
['Joe Procopio']
2020-12-09 13:45:42.778000+00:00
['Careers', 'Entrepreneurship', 'Business', 'Startup', 'Education']
Title Startups Need Affordable HelpContent Startups Need Affordable Help Teaching Startup trying reach near far away place It’s worst kept secret business entrepreneur generally don’t lot disposable cash blanket personal assessment business rule doesn’t matter founder kid hundred dollar savedup birthday cash multimillionaire fund new enterprise pocket money business start start capital capital finite pay everything Sometimes including rent meal founder vast majority new business founder don’t come business blank check founder don’t struggle much equity they’re willing part 500K Shark Tank investment kill see founder spend money don’t thing don’t need — thing aren’t going immediately helpful furthering progress founded Teaching Startup — newsletter app answer entrepreneur — provide help every founder know every founder isn’t made money made affordable 10 month know take time help realized threw free trial also know it’s everyone made cancel anytime commitment deal favorite feedback feedback get founder Africa India South America faraway corner world money doesn’t flow like Silicon Valley New York they’re spending worth much 10 month find value product know product valuable US UK Australia place Teaching Startup member come you’re one far away place let offer cut cost you’re one place startup money doesn’t flow anywhere use invite code FARAWAY end 2020 we’ll lower price Teaching Startup 699 month 2021 Even aren’t one place use invite code NEAR I’ll give first month 5 Teaching Startup wind right worry case get 30 day free figure still doesn’t fit budget talk u we’ll see Good advice right answer don’t cost 300 hour We’re prove thatTags Careers Entrepreneurship Business Startup Education
1,327
How I Got Out Of My Head And Into The World
The truth will set you free I found this the hardest. I didn’t lie, instead, I actively avoided any situation where I might have to come clean about losing my dad. I didn’t want anyone to pity me, so I alienated myself from everyone. It took me far too long to realise that most importantly, being honest with others about your situation will help you to come to terms with it. The first time I told someone how I felt, it cascaded from my lips as a jumbled string of letters. It didn’t sound right. They had existed as disconnected and painful words tumbling around in my mind like dirty laundry for so long. I repeated it until it became a simple sentence and my mind began to feel clearer as a result. Take a deep breath It took me a few attempts using apps like Headspace. At first, I used it to help me sleep when my insomnia was bad. The first few times I tried, I got so frustrated with the narrator telling me to focus on my breath when I couldn’t do it, that I’d end up in a fit of tears and give up. I was desperate to sleep. I came back to it again and again. I gave up every time. Then one day, something clicked. I made it through a 5-minute session and ended up feeling enveloped in a warm, fuzzy feeling. I was relaxed and for the first time in months, my mind was quiet. The best things are wild and free I used to stay inside for fear that I might cry if someone asked how I was, I can’t bear to be emotional in front of people, let alone in public. It’s so easy to fall into the trap of thinking that you should hide indoors when you’re feeling low. Take yourself for a walk, find a beautiful park, forest or trail. Get in between the trees and walk through fields. The fresh air and the open space will make you feel at ease and the movement will help clear the fuzzy cloud in your head. You’ll go home feeling brighter than you did before. Progress not perfection When I first started going to the gym, I had no idea what in the world I was trying to achieve. I used whatever machines I felt like using and picked up any weight. I would work-out until I could barely walk and left feeling high on life. I had dabbled in strength training, Muay-Thai and dance but never actually stuck to one particular thing to see any improvement. When I was depressed I had no concept of time. I could sink hours focusing on negative thoughts of the past or the future but was never firmly in the present. When I trained for a specific deadlifting goal or some other target, I was immersed in the present moment while trying to hit every rep and set. Strength training brought me firmly back to the present moment, no matter where my head was at the time. I had a sense of focus. I use the concepts of strength training which centre around progress not perfection and I apply them to my daily life. I now accept that things take time and that’s okay. A small achievement every day. That’s all it takes.
https://medium.com/age-of-awareness/how-i-went-from-negative-to-positive-in-a-year-5e59320fecb5
['Tiffany Kee']
2020-11-21 10:44:21.140000+00:00
['Mental Health', 'Happiness', 'Grief', 'Health', 'Self']
Title Got Head WorldContent truth set free found hardest didn’t lie instead actively avoided situation might come clean losing dad didn’t want anyone pity alienated everyone took far long realise importantly honest others situation help come term first time told someone felt cascaded lip jumbled string letter didn’t sound right existed disconnected painful word tumbling around mind like dirty laundry long repeated became simple sentence mind began feel clearer result Take deep breath took attempt using apps like Headspace first used help sleep insomnia bad first time tried got frustrated narrator telling focus breath couldn’t I’d end fit tear give desperate sleep came back gave every time one day something clicked made 5minute session ended feeling enveloped warm fuzzy feeling relaxed first time month mind quiet best thing wild free used stay inside fear might cry someone asked can’t bear emotional front people let alone public It’s easy fall trap thinking hide indoors you’re feeling low Take walk find beautiful park forest trail Get tree walk field fresh air open space make feel ease movement help clear fuzzy cloud head You’ll go home feeling brighter Progress perfection first started going gym idea world trying achieve used whatever machine felt like using picked weight would workout could barely walk left feeling high life dabbled strength training MuayThai dance never actually stuck one particular thing see improvement depressed concept time could sink hour focusing negative thought past future never firmly present trained specific deadlifting goal target immersed present moment trying hit every rep set Strength training brought firmly back present moment matter head time sense focus use concept strength training centre around progress perfection apply daily life accept thing take time that’s okay small achievement every day That’s takesTags Mental Health Happiness Grief Health Self
1,328
How to create better Email Signatures
Most people who send emails don’t spend time on their email signatures, which is a real missed opportunity. Your email signatures are opportunities for you to make clear who you are, to stand out. Make sure people can reach you and not only by email. This email signature can contain more information about you personally but also as a business. So, if you’re putting your name and a point or two of contact information in your signature, you’re not taking full advantage of the opportunity to connect and engage with the people you’re emailing. So what should you put into your signature? It depends on what you want to achieve and on your personal preference. Here are some suggestions as you create your own: First and Last Name Affiliation Info Secondary Contact Information Social Media Icons Call to Action Disclaimer or Legal Requirements Photo or Logo Let’s individually look at each point. 1. First and Last name I don’t think this point needs any explanation. You should always put your full name in the email signature. 2. Affiliation Info Closely following your name should be your affiliation information. Your affiliations could mean your job title, your company or organization, and even your department. Providing this information provides more context about the conversation and your role in it. Suppose it’s a recognizable organization. This helps you get the attention of your readers, so they take your message seriously. 3. Secondary Contact Information Secondary contact information is essential, too, so that the recipient knows how else to contact you. This might include a secondary Email, a phone number, or even a Fax if that’s still used. This might also be an opportunity for you to promote your website. 4. Social Media Icons Your social media platforms are the primary way of representing you in the modern era. Your brand is majorly exposed to these profiles, and they need to be followed. You can tell a lot about a person by what they post and how they portray themselves. That’s why it’s a great idea to include links to your social media pages in your email signature. It not only reinforces your brand, but it also helps people find new ways to contact and follow you. This can even drive online traffic to your content if you post the links online on your profiles. So if you do include social icons in your signature, make sure you’re keeping your social profiles up-to-date. Even if you have a presence on many social media sites, though, try to cap the number of icons to five or six. Focus on the accounts that matter most to growing your business or building your brand. 5. Call to Action One of the most important things to include in your email signature is a call to action (CTA). The best email signature CTAs are simple, up-to-date, non-pushy, and in line with your email style, making them appear more like a post-script and less like a sales pitch. Links to videos can be especially noticeable because, in some email clients like Gmail, a video’s thumbnail will show up underneath your signature. 6. Industry Disclaimer or Legal Requirements Some industries, such as legal, financial, and insurance, have specific email usage guidelines and etiquette to protect private information from being transmitted. 7. Photo or Logo An image is a great choice to spice up your email signature. If you want a personal touch so that recipients you’ve never met can associate your name with your face, consider using a professional photo in your signature.
https://medium.com/build-back-business/how-to-create-better-email-signatures-7786410aef2a
['Bryan Dijkhuizen']
2020-12-05 10:34:53.359000+00:00
['Email', 'Work', 'Entrepreneurship', 'Business', 'Marketing']
Title create better Email SignaturesContent people send email don’t spend time email signature real missed opportunity email signature opportunity make clear stand Make sure people reach email email signature contain information personally also business you’re putting name point two contact information signature you’re taking full advantage opportunity connect engage people you’re emailing put signature depends want achieve personal preference suggestion create First Last Name Affiliation Info Secondary Contact Information Social Media Icons Call Action Disclaimer Legal Requirements Photo Logo Let’s individually look point 1 First Last name don’t think point need explanation always put full name email signature 2 Affiliation Info Closely following name affiliation information affiliation could mean job title company organization even department Providing information provides context conversation role Suppose it’s recognizable organization help get attention reader take message seriously 3 Secondary Contact Information Secondary contact information essential recipient know else contact might include secondary Email phone number even Fax that’s still used might also opportunity promote website 4 Social Media Icons social medium platform primary way representing modern era brand majorly exposed profile need followed tell lot person post portray That’s it’s great idea include link social medium page email signature reinforces brand also help people find new way contact follow even drive online traffic content post link online profile include social icon signature make sure you’re keeping social profile uptodate Even presence many social medium site though try cap number icon five six Focus account matter growing business building brand 5 Call Action One important thing include email signature call action CTA best email signature CTAs simple uptodate nonpushy line email style making appear like postscript le like sale pitch Links video especially noticeable email client like Gmail video’s thumbnail show underneath signature 6 Industry Disclaimer Legal Requirements industry legal financial insurance specific email usage guideline etiquette protect private information transmitted 7 Photo Logo image great choice spice email signature want personal touch recipient you’ve never met associate name face consider using professional photo signatureTags Email Work Entrepreneurship Business Marketing
1,329
Why Your Startup Isn’t Cashflow Positive Until You Make A Living Wage
You work really hard for years building your company. And you’re burning through your savings as you build your company. Picture: Depositphotos Finally, finally you get your company to cash flow positive. “Thank goodness,” you say to yourself. “We’re finally free of needing more money. “Now, the business is self-sustaining. We can just invest the profits of the business back into the company.” So it’s a rude shock when you realize that your company isn’t truly profitable even though your company is cash flow positive. How can this be? Cash flow positive means you don’t need money any more, right? Let me tell you about my friend Mark. I met Mark a couple of years ago. He has a really cool company that he and his business partner started. They received some angel funding that helped them, but they are truly bootstrapping. I love their business and their business model. Their product is unique. And, slowly but surely, Mark’s company has gained traction. I tell Mark the same thing every time I see him. “When are you and your partner going start taking a salary?” Mark’s answer is the same each time, “When we’re profitable.” I wish I could get Mark to change his mind, but I haven’t been successful, yet. You’re only truly profitable when you and your company are cash flow positive. Congratulations. Your company is profitable, but you’re still draining your bank account. Guess what? You haven’t achieved true profitability yet. You’ve achieved true profitability when you are no longer draining your personal bank account of money. Somehow, there is this misconception that your investors expect you to starve while you build your company. Nothing could be further from the truth. Experienced investors know that it’s important for you to make a living wage. In other words, your investors want you to have a big enough salary, so you don’t have to worry about paying the bills each month. Now, I’m not saying that you should pay yourself a huge salary. That doesn’t make sense. However, I am saying that you should, as soon possible, pay yourself enough money, so you can pay your bills. Why you should pay yourself a living wage. I recommend to every entrepreneur I work with that they pay themselves something as soon as possible. Just pay yourself something. Even $100 per month is okay if that’s all you feel comfortable with. The benefits of paying yourself a small salary go beyond the small amount of money you will make. Let me explain why, except this time I will use a negative example. There was another entrepreneur I worked with named “James”. James didn’t pay himself a salary. We were going through our bi-weekly review of his company. The revenue was growing, and the company should have been cash flow positive. In fact, cash from operations was growing, so it didn’t make any sense why James’ net cash position was dropping. Then James gave me the answer. “My wife wants me to repay the second mortgage on our home.” “But there’s no loan on the books,” I said. “You can’t just take money out of the company. You have shareholders.” ‘You don’t understand. We have to pay off that mortgage.” “I understand what you want to do, but you can’t do it that way. You’re embezzling money from your company.” You don’t want to put yourself in the position where you will be tempted to do the wrong thing. I instantly knew I would have to stop working with James because James was embezzling money from his company. I was bummed. I had been working with James for a while. And James’ company had gotten to a nice amount of revenue and was cash flow positive. James was going to blow it big time if he didn’t change his thought process. James wouldn’t change his mind, so I told James our business relationship was over. I’m not saying that not paying yourself a salary will result in you doing what James did. Mark is proof of that. But why put yourself in a bad financial position? Instead start the discipline of paying yourself a small salary. You can start with as small an amount as you want, then… Start paying yourself more as the health of your business improves, then… Keep increasing the amount you are paying yourself more until you are paying yourself a living wage. Disciplined cash management leads to better results for your company. I am a big believer in being what I call “Appropriately Frugal” when you are the CEO. Simply put, being Appropriately Frugal means that you spend money on the important stuff for your business, and save money on everything you can save money on. However, you and your employees are not the area to save money on. You do want to attract the best employees. Then you will need to pay them appropriately. Again, I’m not saying that you should pay your employees crazy salaries. But I am saying that you should pay your employees market rate. Pay your employees as much as you can if you can’t afford market rate: Your employees will feel appreciated, and… Your employees will not have to worry about their finances. The idea that your employees will accept less than market rate because you are just starting only makes sense if they can afford it. Otherwise… You will not retain your employees, or… You will not hire the best employees, or… You will not hire any employees at all. So why aren’t you willing to pay yourself if you are willing to pay your employees? Why are you any different than your employees? You are the most important asset your company has. You will not be at your best if you are constantly worrying about how you are going to pay your bills each month. Just remember that true profitability comes when your company AND you are cash flow positive. And Mark, I know you’re reading this post. I hope you have decided to pay yourself something. If you haven’t decided to pay yourself something, then I hope this post helps to change your mind. For more, read: www.brettjfox.com/are-you-being-appropriately-frugal-and-why-its-so-important/
https://brett-j-fox.medium.com/why-your-startup-isnt-cashflow-positive-until-you-make-a-living-wage-50468d573227
['Brett Fox']
2019-10-31 00:19:57.187000+00:00
['Entrepreneurship', 'Business', 'Startup', 'Venture Capital', 'Technology']
Title Startup Isn’t Cashflow Positive Make Living WageContent work really hard year building company you’re burning saving build company Picture Depositphotos Finally finally get company cash flow positive “Thank goodness” say “We’re finally free needing money “Now business selfsustaining invest profit business back company” it’s rude shock realize company isn’t truly profitable even though company cash flow positive Cash flow positive mean don’t need money right Let tell friend Mark met Mark couple year ago really cool company business partner started received angel funding helped truly bootstrapping love business business model product unique slowly surely Mark’s company gained traction tell Mark thing every time see “When partner going start taking salary” Mark’s answer time “When we’re profitable” wish could get Mark change mind haven’t successful yet You’re truly profitable company cash flow positive Congratulations company profitable you’re still draining bank account Guess haven’t achieved true profitability yet You’ve achieved true profitability longer draining personal bank account money Somehow misconception investor expect starve build company Nothing could truth Experienced investor know it’s important make living wage word investor want big enough salary don’t worry paying bill month I’m saying pay huge salary doesn’t make sense However saying soon possible pay enough money pay bill pay living wage recommend every entrepreneur work pay something soon possible pay something Even 100 per month okay that’s feel comfortable benefit paying small salary go beyond small amount money make Let explain except time use negative example another entrepreneur worked named “James” James didn’t pay salary going biweekly review company revenue growing company cash flow positive fact cash operation growing didn’t make sense James’ net cash position dropping James gave answer “My wife want repay second mortgage home” “But there’s loan books” said “You can’t take money company shareholders” ‘You don’t understand pay mortgage” “I understand want can’t way You’re embezzling money company” don’t want put position tempted wrong thing instantly knew would stop working James James embezzling money company bummed working James James’ company gotten nice amount revenue cash flow positive James going blow big time didn’t change thought process James wouldn’t change mind told James business relationship I’m saying paying salary result James Mark proof put bad financial position Instead start discipline paying small salary start small amount want then… Start paying health business improves then… Keep increasing amount paying paying living wage Disciplined cash management lead better result company big believer call “Appropriately Frugal” CEO Simply put Appropriately Frugal mean spend money important stuff business save money everything save money However employee area save money want attract best employee need pay appropriately I’m saying pay employee crazy salary saying pay employee market rate Pay employee much can’t afford market rate employee feel appreciated and… employee worry finance idea employee accept le market rate starting make sense afford Otherwise… retain employee or… hire best employee or… hire employee aren’t willing pay willing pay employee different employee important asset company best constantly worrying going pay bill month remember true profitability come company cash flow positive Mark know you’re reading post hope decided pay something haven’t decided pay something hope post help change mind read wwwbrettjfoxcomareyoubeingappropriatelyfrugalandwhyitssoimportantTags Entrepreneurship Business Startup Venture Capital Technology
1,330
Leo Orenstein, Senior Controls Engineer: “If you want to do something challenging, go for autonomous trucking.”
This month’s employee spotlight is on our Senior Software Engineer, Leo Orenstein, who is designing control code for Starsky Robotics trucks. As the controller is a safety-critical multibody vehicle weighing over 40 tons and over 20 meters long that is supposed to operate autonomously on a public highway, there is no doubt it’s a hard problem to solve. Leo says he is enjoying every single part of it and looking for more people who are not afraid of challenges to join the team. Leo, let’s talk first about your role at Starsky. What are you and your team working on? I’m on the Planning and Controls team at Starsky. What we are doing is taking very high-level context behaviors such as “keep driving”, “change lanes” or “pull over because something has happened incorrectly” and turning them into specific commands that a self-driving truck can actually follow. The output will be “turn the steering wheel 13.48 degrees right” or “press the throttle 18.9 degrees”. In other words, we take these pretty abstract ideas and translate them into a language that our computer hardware system can understand and follow. It’s a two-step process. It starts with Planning to identify these abstract things and break them down into tasks that are more detailed but still not quite reconcilable. Then Controls helps turn them into real commands. I’ve been doing both Planning and Controls, bouncing between them, depending on what’s more critical at the time. Right now, I’ve been working more on the path planning side, and I find it incredibly interesting. It’s a relatively new field as opposed to Controls which is pretty well-established as it has existed for about 70 years now. Path planning has more liberty and is more open for experimentation. How big is your team now? There are six people on the Planning and Controls team at the moment, and we are hoping to recruit another team member by the end of the year. You have hands-on experience of working in many different industries, including Oil and Gas, Transportation, Mining, and Aviation. What was the most interesting job you had before Starsky? I was working at General Electric’s research center and that was a really interesting job because it was very diverse, and that was where I gained experience in so many different fields. There was this thing that we used to say to each other back then: “If you don’t like what you’re working on, don’t worry. It’s going to change soon.” It did change a lot. For example, in the same month, I went to an offshore oil rig, a sugar cane plant, and an iron ore mining facility, because I was working on all these different projects. It was intense, but I enjoyed that variety. The work itself was interesting enough, but I especially liked working on different subjects, going from one to the other and quickly switching between them. Each project was unique. Industries, companies and their problems were completely different, and every time I managed to find the right solutions for them, it felt great. As a person who has worked in both large corporations and small start-ups, can you compare these two experiences? I’m definitely a start-up person. I have little question about this now. I like the agility of a start-up. I know this is a cliché, but it’s true. I believe in the idea of trying things. If you have an idea, try it. If it doesn’t work out, find something else and then try again. At large corporations, you have cycles. Let’s say, we start working on a project. Three months in, we know it won’t work. However, it has funding for the whole year and it’s in scope. So, even though we know it won’t work, we keep trying because that’s the plan. I find this dreadful. Of course, start-ups have their own issues too. For instance, whatever was working when there were 10 people in a company is not going to work when there are 20. It’s not going to work again when there are 50, and if a company doesn’t realize that, the issue becomes quite pronounced. Besides that, it’s not a secret that big companies have more well-established processes. Sometimes it’s enough to just click on a button and have magic happen. Not a lot of magic happens in a start-up. If something is being done, you probably either know who’s doing it or going to be doing it yourself. I like working on lots of different things as this is the only way to actually get to know your product and understand how the magic is made. “If you have an idea, try it. If it doesn’t work out, find something else and then try again.” How has Starsky helped you in your professional development, and what advice would you give to prospective Starsky candidates? Before I joined Starsky, I thought I was a decent coder. Then I figured out I was wrong. From a technical perspective, Starsky is a really great place to learn. The company has a very open, collaborative environment and the best culture for learning. It basically says, “if you don’t know things, that’s okay, let’s find out together.” It’s part of Starsky’s DNA. So, if you are joining the autonomous driving field from another industry, go for Starsky. We understand that no one knows all the answers, and we are willing to work with new people to ramp up our collective knowledge. That being said, trucks are the hardest control problem I ever faced. It’s a very complex system. Even for human drivers, it’s a difficult thing to operate. There are many external factors affecting it and a lot of things can go wrong, so you need to be very precise. For instance, we can all of a sudden get a gust of crosswind. It’s almost impossible to predict it and quite hard to measure it, and just as sudden as it appeared, it may go away. However the truck cannot allow this to push it to the side. So, you need to figure out a way to overcome all these changes and make sure that the truck still responds well. What’s great is that this is not a research project. We often say to each other: “Is there a simpler way of getting it done?” That’s because we are building an actual product rather than just trying to find a theoretical solution. So, we are looking for people who care a lot about turning things into reality. If you do care, if you are ready to push the boundaries, and if you want to do something challenging, then go for autonomous trucking. “We are building an actual product rather than just trying to find a theoretical solution.” What do you find the most challenging in developing autonomous driving systems? Safety is the most challenging part. In general, the more well-defined a problem is, the more feasible and easier it is to solve. With a safety-critical system like an autonomous truck operating on a public highway, it’s like trying to solve a problem where absolutely anything can go wrong. So, you have to take a very disciplined safety-engineering approach and make sure you are covering all your bases. You need to find out all the failure cases, document them and implement safety mechanisms for all these scenarios. Even if your algorithm works 99.99 percent of the time, it will still be failing once a day. So, you need to make sure that the whole system is really bulletproof. Can you share a few interesting facts about yourself to let people know you better? I like to cook a lot, and I actually went to cooking classes for about a year at the time I was doing my master’s. I was studying, working and doing cooking classes. That was pretty intense. The breaking point was when someone asked me to open a restaurant with them. The guy had a restaurant space and asked me to open a brewery in it. I did the math and decided that it would be too much risk for me, so I passed on that opportunity. That’s pretty much when I left cooking, as I figured out that I love it as a hobby. My wife tells me that the only thing that can really get me mad is getting something wrong when I’m cooking. I’m a very chill guy, but if I get the recipe wrong, I get crazy mad for the whole day. Also, on a personal note, I’m having a baby soon. And I really appreciate how supportive of that Starsky has been. Not only do we have parental leave, but people truly understand the importance of that. I know that some companies don’t really care — even though you’re having a baby, you have to deliver a product in the first place. It’s more like taking parental leave but being on Slack while doing it. At Starsky, you are not simply getting the leave, but you are actually encouraged to enjoy it and bond with your family. *** If you want to join the Starsky team and help us get unmanned trucks on the road, please apply here.
https://medium.com/starsky-robotics-blog/leo-orenstein-if-you-want-to-do-something-challenging-go-for-autonomous-trucking-944897d999ba
['Starsky Team']
2019-10-22 17:48:26.542000+00:00
['Autonomous Cars', 'Careers', 'Startup', 'Self Driving Cars', 'Engineering']
Title Leo Orenstein Senior Controls Engineer “If want something challenging go autonomous trucking”Content month’s employee spotlight Senior Software Engineer Leo Orenstein designing control code Starsky Robotics truck controller safetycritical multibody vehicle weighing 40 ton 20 meter long supposed operate autonomously public highway doubt it’s hard problem solve Leo say enjoying every single part looking people afraid challenge join team Leo let’s talk first role Starsky team working I’m Planning Controls team Starsky taking highlevel context behavior “keep driving” “change lanes” “pull something happened incorrectly” turning specific command selfdriving truck actually follow output “turn steering wheel 1348 degree right” “press throttle 189 degrees” word take pretty abstract idea translate language computer hardware system understand follow It’s twostep process start Planning identify abstract thing break task detailed still quite reconcilable Controls help turn real command I’ve Planning Controls bouncing depending what’s critical time Right I’ve working path planning side find incredibly interesting It’s relatively new field opposed Controls pretty wellestablished existed 70 year Path planning liberty open experimentation big team six people Planning Controls team moment hoping recruit another team member end year handson experience working many different industry including Oil Gas Transportation Mining Aviation interesting job Starsky working General Electric’s research center really interesting job diverse gained experience many different field thing used say back “If don’t like you’re working don’t worry It’s going change soon” change lot example month went offshore oil rig sugar cane plant iron ore mining facility working different project intense enjoyed variety work interesting enough especially liked working different subject going one quickly switching project unique Industries company problem completely different every time managed find right solution felt great person worked large corporation small startup compare two experience I’m definitely startup person little question like agility startup know cliché it’s true believe idea trying thing idea try doesn’t work find something else try large corporation cycle Let’s say start working project Three month know won’t work However funding whole year it’s scope even though know won’t work keep trying that’s plan find dreadful course startup issue instance whatever working 10 people company going work 20 It’s going work 50 company doesn’t realize issue becomes quite pronounced Besides it’s secret big company wellestablished process Sometimes it’s enough click button magic happen lot magic happens startup something done probably either know who’s going like working lot different thing way actually get know product understand magic made “If idea try doesn’t work find something else try again” Starsky helped professional development advice would give prospective Starsky candidate joined Starsky thought decent coder figured wrong technical perspective Starsky really great place learn company open collaborative environment best culture learning basically say “if don’t know thing that’s okay let’s find together” It’s part Starsky’s DNA joining autonomous driving field another industry go Starsky understand one know answer willing work new people ramp collective knowledge said truck hardest control problem ever faced It’s complex system Even human driver it’s difficult thing operate many external factor affecting lot thing go wrong need precise instance sudden get gust crosswind It’s almost impossible predict quite hard measure sudden appeared may go away However truck cannot allow push side need figure way overcome change make sure truck still responds well What’s great research project often say “Is simpler way getting done” That’s building actual product rather trying find theoretical solution looking people care lot turning thing reality care ready push boundary want something challenging go autonomous trucking “We building actual product rather trying find theoretical solution” find challenging developing autonomous driving system Safety challenging part general welldefined problem feasible easier solve safetycritical system like autonomous truck operating public highway it’s like trying solve problem absolutely anything go wrong take disciplined safetyengineering approach make sure covering base need find failure case document implement safety mechanism scenario Even algorithm work 9999 percent time still failing day need make sure whole system really bulletproof share interesting fact let people know better like cook lot actually went cooking class year time master’s studying working cooking class pretty intense breaking point someone asked open restaurant guy restaurant space asked open brewery math decided would much risk passed opportunity That’s pretty much left cooking figured love hobby wife tell thing really get mad getting something wrong I’m cooking I’m chill guy get recipe wrong get crazy mad whole day Also personal note I’m baby soon really appreciate supportive Starsky parental leave people truly understand importance know company don’t really care — even though you’re baby deliver product first place It’s like taking parental leave Slack Starsky simply getting leave actually encouraged enjoy bond family want join Starsky team help u get unmanned truck road please apply hereTags Autonomous Cars Careers Startup Self Driving Cars Engineering
1,331
Finally, an intuitive explanation of why ReLU works
One may be inclined to point out that ReLUs cannot extrapolate; that is, a series of ReLUs fitted to resemble a sine wave from -4 < x < 4 will not be able to continue the sine wave for values of x outside of those bounds. It’s important to remember, however, that it’s not the goal of a neural network to extrapolate, the goal is to generalize. Consider, for instance, a model fitted to predict house price based on number of bathrooms and number of bedrooms. It doesn’t matter if the model struggles to carry the pattern to negative values of number of bathrooms or values of number of bedrooms exceeding five hundred, because it’s not the objective of the model. (You can read more about generalization vs extrapolation here.) The strength of the ReLU function lies not in itself, but in an entire army of ReLUs. This is why using a few ReLUs in a neural network does not yield satisfactory results; instead, there must be an abundance of ReLU activations to allow the network to construct an entire map of points. In multi-dimensional space, rectified linear units combine to form complex polyhedra along the class boundaries. Here lies the reason why ReLU works so well: when there are enough of them, they can approximate any function just as well as other activation functions like sigmoid or tanh, much like stacking hundreds of Legos, without the downsides. There are several issues with smooth-curve functions that do not occur with ReLU — one being that computing the derivative, or the rate of change, the driving force behind gradient descent, is much cheaper with ReLU than with any other smooth-curve function. Another is that sigmoid and other curves have an issue with the vanishing gradient problem; because the derivative of the sigmoid function gradually slopes off for larger absolute values of x. Because the distributions of inputs may shift around heavily earlier during training away from 0, the derivative will be so small that no useful information can be backpropagated to update the weights. This is often a major problem in neural network training. Graphed in Desmos. On the other hand, the derivative of the ReLU function is simple; it’s the slope of whatever line the input is on. It will reliably return a useful gradient, and while the fact that x = 0 {x < 0} may sometimes lead to a ‘dead neuron problem’, ReLU has still shown to be, in general, more powerful than not only curved functions (sigmoid, tanh) but also ReLU variants attempting to solve the dead neuron problem, like Leaky ReLU. ReLU is designed to work in abundance; with heavy volume it approximates well, and with good approximation it performs just as well as any other activation function, without the downsides.
https://medium.com/analytics-vidhya/if-rectified-linear-units-are-linear-how-do-they-add-nonlinearity-40247d3e4792
['Andre Ye']
2020-09-02 17:36:24.981000+00:00
['Machine Learning', 'AI', 'Artificial Intelligence', 'Data Science', 'Towards Data Science']
Title Finally intuitive explanation ReLU worksContent One may inclined point ReLUs cannot extrapolate series ReLUs fitted resemble sine wave 4 x 4 able continue sine wave value x outside bound It’s important remember however it’s goal neural network extrapolate goal generalize Consider instance model fitted predict house price based number bathroom number bedroom doesn’t matter model struggle carry pattern negative value number bathroom value number bedroom exceeding five hundred it’s objective model read generalization v extrapolation strength ReLU function lie entire army ReLUs using ReLUs neural network yield satisfactory result instead must abundance ReLU activation allow network construct entire map point multidimensional space rectified linear unit combine form complex polyhedron along class boundary lie reason ReLU work well enough approximate function well activation function like sigmoid tanh much like stacking hundred Legos without downside several issue smoothcurve function occur ReLU — one computing derivative rate change driving force behind gradient descent much cheaper ReLU smoothcurve function Another sigmoid curve issue vanishing gradient problem derivative sigmoid function gradually slope larger absolute value x distribution input may shift around heavily earlier training away 0 derivative small useful information backpropagated update weight often major problem neural network training Graphed Desmos hand derivative ReLU function simple it’s slope whatever line input reliably return useful gradient fact x 0 x 0 may sometimes lead ‘dead neuron problem’ ReLU still shown general powerful curved function sigmoid tanh also ReLU variant attempting solve dead neuron problem like Leaky ReLU ReLU designed work abundance heavy volume approximates well good approximation performs well activation function without downsidesTags Machine Learning AI Artificial Intelligence Data Science Towards Data Science
1,332
Google’s New Accessibility Projects
Google has recently unveiled 3 separate efforts to bring technology to those with disabilities to help make their daily lives easier and more accessible. The three projects are Project Euphonia, which aims to help those with speech impairments; Live Relay, which assists anyone who is hard of hearing; and Project Diva, which aims to give autonomy and independence to people with the help of Google Assist. More than 15% of people in the United States live with a disability, and that number is only expected to grow in the years ahead as we grow older and start living longer. There has never been a better time to try to harness the power of our technology to help make the lives of the disabled more comfortable and fulfilling. Project Euphonia Project Euphonia aims to help those with speech difficulties caused by cerebral palsy, autism, and other developmental disorders, as well as neurologic conditions like ALS (amyotrophic lateral sclerosis), stroke, MS (multiple sclerosis), Parkinson’s Disease, or traumatic brain injuries. Google’s aim with Project Euphonia is to use the power of AI to help computers understand speech that is impaired with improved accuracy, and then, in turn, use those computers to make sure everyone using the service can be understood. Google has partnered with the ALS Residence Initiative and the ALS Therapy Development Institute to record voices of men and women with ALS, and have worked on optimizing algorithms that can help to transcribe and recognize their words more reliably. Live Relay Live Relay was set up with the goal of bringing voice calls to those who are deaf or hard of hearing. By using a phone’s own speech recognition and text-to-speech software, users will be able to let the phone listen and speak on their behalf, making it possible to speak to someone who is deaf or hard of hearing. Google also plans to integrate real-time translation into their Live Relay software, allowing anyone in the world to speak to one another regardless of any language barrier. Project Diva Project Diva helps those who are nonverbal or suffer from limited mobility to give Google Assistant commands without needing to use their voice, but instead by using an external switch device. The device is a small box into which an assistive button is plugged. The signal coming from the button is then converted by the box into a command sent to the Google Assistant. For now, Project Diva is limited to single-purpose buttons, but they are currently devising a system that makes use of RFID tags which they can then associate with certain specific commands. This article was originally published on RussEwell.co
https://russewell.medium.com/googles-new-accessibility-projects-bb5968546c1b
['Russ Ewell']
2019-11-05 14:40:18.294000+00:00
['Russ Ewell', 'Disability', 'Artificial Intelligence', 'Google', 'Technology']
Title Google’s New Accessibility ProjectsContent Google recently unveiled 3 separate effort bring technology disability help make daily life easier accessible three project Project Euphonia aim help speech impairment Live Relay assist anyone hard hearing Project Diva aim give autonomy independence people help Google Assist 15 people United States live disability number expected grow year ahead grow older start living longer never better time try harness power technology help make life disabled comfortable fulfilling Project Euphonia Project Euphonia aim help speech difficulty caused cerebral palsy autism developmental disorder well neurologic condition like ALS amyotrophic lateral sclerosis stroke MS multiple sclerosis Parkinson’s Disease traumatic brain injury Google’s aim Project Euphonia use power AI help computer understand speech impaired improved accuracy turn use computer make sure everyone using service understood Google partnered ALS Residence Initiative ALS Therapy Development Institute record voice men woman ALS worked optimizing algorithm help transcribe recognize word reliably Live Relay Live Relay set goal bringing voice call deaf hard hearing using phone’s speech recognition texttospeech software user able let phone listen speak behalf making possible speak someone deaf hard hearing Google also plan integrate realtime translation Live Relay software allowing anyone world speak one another regardless language barrier Project Diva Project Diva help nonverbal suffer limited mobility give Google Assistant command without needing use voice instead using external switch device device small box assistive button plugged signal coming button converted box command sent Google Assistant Project Diva limited singlepurpose button currently devising system make use RFID tag associate certain specific command article originally published RussEwellcoTags Russ Ewell Disability Artificial Intelligence Google Technology
1,333
Using AI to detect Cat and Dog pictures, with Tensorflow & Keras. (3)
Pre-Trained convnet: The number one reason for our data not reaching the heights of accuracy is the lack of data we have to train our system with. If Deep Learning is the new electricity then data is its fuel. Thus to help us in our endeavor we will break our system into two parts. The convolutional block and the classifier block. The convolutional block will contain all our neural network components before the “Flatten” portion of our code. We will be using a pre-trained convolutional base called the InceptionV3 architecture. The model was trained on 1.4 million images and thus has no shortage of the proverbial fuel. Process of switching classifiers. Analyzing the model: Create a new block of code anywhere in our previous notebook, within the block write: from tensorflow.keras.applications.inception_v3 import InceptionV3 #import InceptionV3 conv_base = InceptionV3(weights='imagenet', include_top=False, input_shape=(64, 64, 3)) for layer in conv_base.layers: layer.trainable = False We first import the InceptionV3convolutional base and set that as the conv_base. We will reconfigure the model to have our input shape of (64,64,3). We will also freeze our convbase’s trainability as we want to keep the information stored within the convbase and only train the classifier portion. Next, under the above code, type: print(conv_base.summary()) to get a view of the convolutional base. You should get the following output: Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 64, 64, 3)] 0 _________________________________________________________________ block1_conv1 (Conv2D) (None, 64, 64, 64) 1792 _________________________________________________________________ block1_conv2 (Conv2D) (None, 64, 64, 64) 36928 _________________________________________________________________ block1_pool (MaxPooling2D) (None, 32, 32, 64) 0 _________________________________________________________________ block2_conv1 (Conv2D) (None, 32, 32, 128) 73856 _________________________________________________________________ block2_conv2 (Conv2D) (None, 32, 32, 128) 147584 _________________________________________________________________ block2_pool (MaxPooling2D) (None, 16, 16, 128) 0 _________________________________________________________________ block3_conv1 (Conv2D) (None, 16, 16, 256) 295168 _________________________________________________________________ block3_conv2 (Conv2D) (None, 16, 16, 256) 590080 _________________________________________________________________ block3_conv3 (Conv2D) (None, 16, 16, 256) 590080 _________________________________________________________________ block3_pool (MaxPooling2D) (None, 8, 8, 256) 0 _________________________________________________________________ block4_conv1 (Conv2D) (None, 8, 8, 512) 1180160 _________________________________________________________________ block4_conv2 (Conv2D) (None, 8, 8, 512) 2359808 _________________________________________________________________ block4_conv3 (Conv2D) (None, 8, 8, 512) 2359808 _________________________________________________________________ block4_pool (MaxPooling2D) (None, 4, 4, 512) 0 _________________________________________________________________ block5_conv1 (Conv2D) (None, 4, 4, 512) 2359808 _________________________________________________________________ block5_conv2 (Conv2D) (None, 4, 4, 512) 2359808 _________________________________________________________________ block5_conv3 (Conv2D) (None, 4, 4, 512) 2359808 _________________________________________________________________ block5_pool (MaxPooling2D) (None, 2, 2, 512) 0 ================================================================= Total params: 14,714,688 Trainable params: 0 Non-trainable params: 14,714,688 _________________________________________________________________ None As you can see the architecture is made up of convolutional 2d blocks and maxpooling2d blocks which is no different from our own code in part 2. The main difference is that they have trained more data on the convolutional base and thus required more layers. Developing our model: We will know to change our previous model architecture to : network = models.Sequential() network.add(conv_base) network.add(layers.Flatten()) network.add(layers.Dense(256, kernel_regularizer=regularizers.l2(0.001))) network.add(layers.LeakyReLU()) network.add(layers.Dense(1,activation='sigmoid')) The rest of the model block will stay the same. Notice, we added our conv_base block just like any other layer. Now before we run the block I must warn you that it will take a substantial amount of time due to the large nature of the conv_base. Now, if you’re willing to wait, go ahead and run the model! Graphical analysis of our model: Finally, we can run our image block from part two to see the accuracy we achieved with this method: A sad fail Unfortunately, it seems using pre-trained models doesn’t help in our case. This is most likely due to the lack of data used to optimizer our classifier. This can be fixed by using more images as well as using larger images. Interestingly, removing the following piece of code: for layer in conv_base.layers: layer.trainable = False grants an increase of accuracy to 96%. 96% Accuracy rate Thus implying the data used to train InceptionV3 does not “coincide” with our data. Furthermore, running it for 500 epochs which would take 3 hours will grant an increase in accuracy to 98%. Next Time: Next time, we will use our model to create a visual-based password cracker! ***GitHubCode***: https://github.com/MoKillem/CatDogVanillaNeuralNetwork/blob/main/CNN_CATS_96%25.ipynb
https://codewebduh.medium.com/using-ai-to-detect-cat-and-dog-pictures-with-tensorflow-keras-3-f24698c574c6
['Maxamed Sarinle']
2020-11-25 09:15:40.323000+00:00
['Keras', 'Artificial Intelligence', 'Python', 'Convolution Neural Net', 'TensorFlow']
Title Using AI detect Cat Dog picture Tensorflow Keras 3Content PreTrained convnet number one reason data reaching height accuracy lack data train system Deep Learning new electricity data fuel Thus help u endeavor break system two part convolutional block classifier block convolutional block contain neural network component “Flatten” portion code using pretrained convolutional base called InceptionV3 architecture model trained 14 million image thus shortage proverbial fuel Process switching classifier Analyzing model Create new block code anywhere previous notebook within block write tensorflowkerasapplicationsinceptionv3 import InceptionV3 import InceptionV3 convbase InceptionV3weightsimagenet includetopFalse inputshape64 64 3 layer convbaselayers layertrainable False first import InceptionV3convolutional base set convbase reconfigure model input shape 64643 also freeze convbase’s trainability want keep information stored within convbase train classifier portion Next code type printconvbasesummary get view convolutional base get following output Layer type Output Shape Param input1 InputLayer None 64 64 3 0 block1conv1 Conv2D None 64 64 64 1792 block1conv2 Conv2D None 64 64 64 36928 block1pool MaxPooling2D None 32 32 64 0 block2conv1 Conv2D None 32 32 128 73856 block2conv2 Conv2D None 32 32 128 147584 block2pool MaxPooling2D None 16 16 128 0 block3conv1 Conv2D None 16 16 256 295168 block3conv2 Conv2D None 16 16 256 590080 block3conv3 Conv2D None 16 16 256 590080 block3pool MaxPooling2D None 8 8 256 0 block4conv1 Conv2D None 8 8 512 1180160 block4conv2 Conv2D None 8 8 512 2359808 block4conv3 Conv2D None 8 8 512 2359808 block4pool MaxPooling2D None 4 4 512 0 block5conv1 Conv2D None 4 4 512 2359808 block5conv2 Conv2D None 4 4 512 2359808 block5conv3 Conv2D None 4 4 512 2359808 block5pool MaxPooling2D None 2 2 512 0 Total params 14714688 Trainable params 0 Nontrainable params 14714688 None see architecture made convolutional 2d block maxpooling2d block different code part 2 main difference trained data convolutional base thus required layer Developing model know change previous model architecture network modelsSequential networkaddconvbase networkaddlayersFlatten networkaddlayersDense256 kernelregularizerregularizersl20001 networkaddlayersLeakyReLU networkaddlayersDense1activationsigmoid rest model block stay Notice added convbase block like layer run block must warn take substantial amount time due large nature convbase you’re willing wait go ahead run model Graphical analysis model Finally run image block part two see accuracy achieved method sad fail Unfortunately seems using pretrained model doesn’t help case likely due lack data used optimizer classifier fixed using image well using larger image Interestingly removing following piece code layer convbaselayers layertrainable False grant increase accuracy 96 96 Accuracy rate Thus implying data used train InceptionV3 “coincide” data Furthermore running 500 epoch would take 3 hour grant increase accuracy 98 Next Time Next time use model create visualbased password cracker GitHubCode httpsgithubcomMoKillemCatDogVanillaNeuralNetworkblobmainCNNCATS9625ipynbTags Keras Artificial Intelligence Python Convolution Neural Net TensorFlow
1,334
Design Patterns: Factory
It’s a factory The factory is a creational pattern that can be used to retrieve instances of objects, without having to new them up directly in calling code. At my job, I find I’m using creational patterns constantly; and most of the time it’s a factory. In class-based programming, the factory method pattern is a creational pattern that uses factory methods to deal with the problem of creating objects without having to specify the exact class of the object that will be created. This is done by creating objects by calling a factory method-either specified in an interface and implemented by child classes, or implemented in a base class and optionally overridden by derived classes-rather than by calling a constructor. From Wikipedia This pattern is very related to the strategy pattern — at least as far as I’m concerned. In the previous post on the strategy pattern we learned that you can use multiple implementations of a single interface as differing “strategies”. In the post, we were deciding based on some pretend run time situation of which strategy to use: ILogger logger = null; if (args[0] == "file") logger = new FileLogger(); // create a file logger because the consumer specified it in some way. else logger = new ConsoleLogger(); // create a console logger as the fall back strategy. The above could be an example of the application choosing a strategy based on some run time input (the value in args[0] ). Why is the snippet a problem? It probably won’t be the first time it happens, and when your codebase is very simple. As your codebase evolves however, and you get perhaps more places where you would want to instantiate a ILogger, and more ILoggers get added, you start needing to update more and more code. What do I mean by that? Well, imagine you added this “if/else” logger logic to 50 additional files. That if/else logic now exists in 50 files! Every time a “branch” occurs in code, that makes the code harder to understand. This may be only one simple 4 line set of instructions, with a simple to follow branch, but what if this same sort of situation were throughout your codebase, applying to more than just an ILogger ? What if, even worse, you add a MsSqlLogger , and a MongoLogger to your possibilities of loggers, now you have an if/else branch to update in a hypothetical 50 files; that's no good! How can we avoid some of this hassle? The factory method to the rescue! Implementation We’ll be using the same ILogger strategy and implementation from the previous post as a base line. The few additions are: public enum LoggerType { Console, File } public interface ILoggerFactory { ILogger GetLogger(LoggerType loggerType); } That’s it for the “ abstraction “ part of our factory. Now the implementation: public class LoggerFactory : ILoggerFactory { public ILogger GetLogger(LoggerType loggerType) { switch (loggerType) { case LoggerType.Console: return new ConsoleLogger(); case LoggerType.File: return new FileLogger(); default: throw new ArgumentException($"{nameof(loggerType)} was invalid."); } } } and a (bad) example of how to use it (since we aren’t for this example using dependency injection like we should in the real world): static void Main(string[] args) { ILoggerFactory loggerFactory = new LoggerFactory(); ILogger logger = null; logger = loggerFactory.GetLogger(LoggerType.Console); logger.Log($"Doot doot, this should be a {nameof(ConsoleLogger)}. {logger.GetType()}"); logger = loggerFactory.GetLogger(LoggerType.File); logger.Log($"Doot doot, this should be a {nameof(FileLogger)}. {logger.GetType()}"); } Reasons to use this pattern How does the previous section actually help us? If you recall, in our hypothetical scenario our original “if/else” branching logic occurred in 50 files. We needed to then add two additional strategies, meaning we needed to update 50 files. How did the factory help us? Well now, that branching logic is completely contained within the factory implementation itself. We simply add our MsSql and Mongo values to our enum, and add two new case statements to our factory implementation - a total of 2 files updated, rather than 50. This not only saves us a ton of time, it help ensure that we don’t miss making updates in any of our 50 files. One additional thought is the factory itself is very testable. It’s easy to test all the “logic” that’s involved with choosing the correct strategy, because all of that logic is completely contained within the factory itself, rather than across 50 files! References
https://medium.com/swlh/design-patterns-factory-b5d0417bb086
['Russell Hammett Jr.', 'Kritner']
2020-03-01 11:03:36.151000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Software Design Patterns', 'Design Patterns']
Title Design Patterns FactoryContent It’s factory factory creational pattern used retrieve instance object without new directly calling code job find I’m using creational pattern constantly time it’s factory classbased programming factory method pattern creational pattern us factory method deal problem creating object without specify exact class object created done creating object calling factory methodeither specified interface implemented child class implemented base class optionally overridden derived classesrather calling constructor Wikipedia pattern related strategy pattern — least far I’m concerned previous post strategy pattern learned use multiple implementation single interface differing “strategies” post deciding based pretend run time situation strategy use ILogger logger null args0 file logger new FileLogger create file logger consumer specified way else logger new ConsoleLogger create console logger fall back strategy could example application choosing strategy based run time input value args0 snippet problem probably won’t first time happens codebase simple codebase evolves however get perhaps place would want instantiate ILogger ILoggers get added start needing update code mean Well imagine added “ifelse” logger logic 50 additional file ifelse logic exists 50 file Every time “branch” occurs code make code harder understand may one simple 4 line set instruction simple follow branch sort situation throughout codebase applying ILogger even worse add MsSqlLogger MongoLogger possibility logger ifelse branch update hypothetical 50 file thats good avoid hassle factory method rescue Implementation We’ll using ILogger strategy implementation previous post base line addition public enum LoggerType Console File public interface ILoggerFactory ILogger GetLoggerLoggerType loggerType That’s “ abstraction “ part factory implementation public class LoggerFactory ILoggerFactory public ILogger GetLoggerLoggerType loggerType switch loggerType case LoggerTypeConsole return new ConsoleLogger case LoggerTypeFile return new FileLogger default throw new ArgumentExceptionnameofloggerType invalid bad example use since aren’t example using dependency injection like real world static void Mainstring args ILoggerFactory loggerFactory new LoggerFactory ILogger logger null logger loggerFactoryGetLoggerLoggerTypeConsole loggerLogDoot doot nameofConsoleLogger loggerGetType logger loggerFactoryGetLoggerLoggerTypeFile loggerLogDoot doot nameofFileLogger loggerGetType Reasons use pattern previous section actually help u recall hypothetical scenario original “ifelse” branching logic occurred 50 file needed add two additional strategy meaning needed update 50 file factory help u Well branching logic completely contained within factory implementation simply add MsSql Mongo value enum add two new case statement factory implementation total 2 file updated rather 50 save u ton time help ensure don’t miss making update 50 file One additional thought factory testable It’s easy test “logic” that’s involved choosing correct strategy logic completely contained within factory rather across 50 file ReferencesTags Programming Software Engineering Software Development Software Design Patterns Design Patterns
1,335
OMG. Buffer Lost HALF Its Social Media Traffic This Year. What Does It Mean?
Social media marketing software team admits they’re failing at social media marketing. Buffer, a company I’ve considered one of the leaders in social media with a massive presence (think top 1%, unicorn status) made a shocking announcement this week. In a post on the their blog, Buffer author Kevan Lee plainly states, “We as a Buffer marketing team — working on a product that helps people succeed on social media — have yet to figure out how to get things working on Facebook (especially), Twitter, Pinterest, and more.” Somehow, some way, Buffer has lost nearly half its social referral traffic over the last year. The bottom seems to be falling out across Facebook, Twitter, LinkedIn and Google+: Now, the figures are shocking, but Buffer’s openness about them is par for the course. They’ve long been trailblazers in corporate transparency, even publishing all of their salaries to the web. The Buffer team is running some experiments to try to determine the cause of this huge loss in social referral traffic, but I have a few ideas of my own on it: 1. It Could Be Instrumentation Error Facebook Mobile (which is essentially 80% of Facebook’s) apparently doesn’t add UTM parameters. This means that social traffic could potentially be mischaracterized as direct. Google Analytics certainly doesn’t have a huge incentive to make Facebook, Twitter or other social networks look great, so they have no incentive to straighten this out. 2. The 72% Drop in Google+ Traffic Seems Reasonable Without Having Done Anything “Wrong” The biggest drop Buffer has seen (by far) was in their Google+ traffic, which is down 72% over the last year. We all know Google+ has had one foot in the grave so long I wouldn’t even include it in a calculation of average traffic losses. I checked WordStream’s own analytics and discovered that our Google+ referral numbers are actually similar to Buffer’s, despite the fact I’ve personally maintained an active presence on Google+ both personally and for the company. I’d be willing to bet that other companies are seeing similar results on Google+. It’s just not as active as it once was. 3. We’re Drowning in Crap Content Organic social is so ridiculously competitive now, with an ever-increasing volume of content going after the same finite number of people’s attention. Even when you’re exceptional, the pool of other exceptional content creators is growing. As Rand Fishkin said, “Buffer’s content in 2013/14 was revolutionary and unique. It’s stayed good, but competition has figured out some of what made them special.” Perhaps readers are tiring of “super transparency” as a content marketing style. It’s actually a bit humbling that even companies like Buffer, whom so many of us look to for strategy on creating and promoting remarkable content, are also struggling with this. 4. Facebook/Twitter Ads Are Super Important WordStream’s own Facebook traffic grows every month at a really good clip, but yes, we’re spending money on Facebook Ads. Sure, it’s a bummer that all of social isn’t free, but what the heck — sometimes it’s nice be able to fix a problem by throwing money at it (it’s a pretty easy solution, actually). Organic Facebook reach is just really pathetic now. If your only plan for getting people from Facebook to your website is to post things on your Page, you’re going to fail. It doesn’t matter how awesome your content is… Facebook just doesn’t want to show it organically anymore. The Newsfeed is too busy. The good news is that if you’re posting quality content and focusing on engagement, your Facebook ads can be super cheap. 5. Organic Social is a Bit of a Hamster Wheel With declining organic reach, there’s less of a “snowball effect” like what you typically see in SEO, where a steady amount of effort produces increasing returns every month. You have to work really, really hard on a continuous basis at organic social to move the needle even a little. You pretty much have to double your efforts to double results, which is pretty hard to do when you’re already as big as Buffer. In short, I don’t think Buffer’s plummeting organic social traffic is the result of any lack of creativity or effort on their part. I reject Kevan Lee’s conclusions to that effect, as they’re obviously brilliant people and didn’t get where they are by sucking at social. Personally, I think it has more to do with external factors and their need to adapt to them. In fact, I first thought, “What?! They don’t have a social media manager?” But then almost immediately afterward I said to myself, “Don’t hire one now… put that money into your social ads budget instead.” Best of luck to Buffer as they try to figure out their internal number, and kudos to them for sharing them in such an honest and forthright way. The whole industry will learn from their experience. What do you think of Buffer’s traffic loss and their potential reasons for it? Share your thoughts in the comments. Image credit: Business Insider About The Author Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
https://medium.com/marketing-and-entrepreneurship/omg-buffer-lost-half-its-social-media-traffic-this-year-what-does-it-mean-bc567b4f05c7
['Larry Kim']
2017-04-30 16:25:59.447000+00:00
['Marketing', 'Twitter', 'Facebook', 'Social Media', 'Social Media Marketing']
Title OMG Buffer Lost HALF Social Media Traffic Year MeanContent Social medium marketing software team admits they’re failing social medium marketing Buffer company I’ve considered one leader social medium massive presence think top 1 unicorn status made shocking announcement week post blog Buffer author Kevan Lee plainly state “We Buffer marketing team — working product help people succeed social medium — yet figure get thing working Facebook especially Twitter Pinterest more” Somehow way Buffer lost nearly half social referral traffic last year bottom seems falling across Facebook Twitter LinkedIn Google figure shocking Buffer’s openness par course They’ve long trailblazer corporate transparency even publishing salary web Buffer team running experiment try determine cause huge loss social referral traffic idea 1 Could Instrumentation Error Facebook Mobile essentially 80 Facebook’s apparently doesn’t add UTM parameter mean social traffic could potentially mischaracterized direct Google Analytics certainly doesn’t huge incentive make Facebook Twitter social network look great incentive straighten 2 72 Drop Google Traffic Seems Reasonable Without Done Anything “Wrong” biggest drop Buffer seen far Google traffic 72 last year know Google one foot grave long wouldn’t even include calculation average traffic loss checked WordStream’s analytics discovered Google referral number actually similar Buffer’s despite fact I’ve personally maintained active presence Google personally company I’d willing bet company seeing similar result Google It’s active 3 We’re Drowning Crap Content Organic social ridiculously competitive everincreasing volume content going finite number people’s attention Even you’re exceptional pool exceptional content creator growing Rand Fishkin said “Buffer’s content 201314 revolutionary unique It’s stayed good competition figured made special” Perhaps reader tiring “super transparency” content marketing style It’s actually bit humbling even company like Buffer many u look strategy creating promoting remarkable content also struggling 4 FacebookTwitter Ads Super Important WordStream’s Facebook traffic grows every month really good clip yes we’re spending money Facebook Ads Sure it’s bummer social isn’t free heck — sometimes it’s nice able fix problem throwing money it’s pretty easy solution actually Organic Facebook reach really pathetic plan getting people Facebook website post thing Page you’re going fail doesn’t matter awesome content is… Facebook doesn’t want show organically anymore Newsfeed busy good news you’re posting quality content focusing engagement Facebook ad super cheap 5 Organic Social Bit Hamster Wheel declining organic reach there’s le “snowball effect” like typically see SEO steady amount effort produce increasing return every month work really really hard continuous basis organic social move needle even little pretty much double effort double result pretty hard you’re already big Buffer short don’t think Buffer’s plummeting organic social traffic result lack creativity effort part reject Kevan Lee’s conclusion effect they’re obviously brilliant people didn’t get sucking social Personally think external factor need adapt fact first thought “What don’t social medium manager” almost immediately afterward said “Don’t hire one now… put money social ad budget instead” Best luck Buffer try figure internal number kudos sharing honest forthright way whole industry learn experience think Buffer’s traffic loss potential reason Share thought comment Image credit Business Insider Author Larry Kim CEO Mobile Monkey founder WordStream connect Twitter Facebook LinkedIn InstagramTags Marketing Twitter Facebook Social Media Social Media Marketing
1,336
7 Food-Related Hacks That You Must Implement to Lose Weight Easily
7 Food-Related Hacks That You Must Implement to Lose Weight Easily These hacks are based on psychology, biology and hormonal science. Photo by Brooke Lark on Unsplash Food is one of the most powerful natural stimulants out there — seeing, smelling or even thinking about which is enough to activate a craving and feeling of hunger in human bodies. It has been proven that any time a person sees food, particularly unhealthy, junk and calorie-rich food, the reward centres in his or her brain light up making them feel hungrier. A hunger that is always accompanied by an increase in your heart rate and sometimes even makes you drool a little. This happens usually because your body is priming you to eat, whether you are planning to do so or not. In a 2006 study, jars of Hershey’s kisses were placed on the desks of some secretaries in an organisation. Half of the secretaries received them in a transparent jar while the other half in an opaque one. The ones who could see the chocolates staring them in the face all day ate 71% more than those who couldn’t see them. The phenomenon associated with this study is called food cue reactivity. According to which, if you see the food your body would crave to eat it. This happening doesn’t only apply to food, but it also implies to anything that we have learnt to associate with food in the past. Therefore, certain smells, sights, images and locations can trigger food cravings depending on our learned associations with them as well. Think about how this is already happening in your own life on almost a daily basis, for example, you decided to eat clean and healthy today but your coworker or friend brought burgers or you were casually watching tv and a pizza hut commercial came on. You would immediately crave the food, your will power would go down and your mouth will be like - damn, I haven't had pizza in ages and next thing we know you have already ordered it. Unless we are aware of how food cues influence our decision, we on a daily basis can end up consuming calories we had no intention of eating without even paying much attention to them.
https://medium.com/in-fitness-and-in-health/7-food-related-hacks-that-you-must-implement-to-lose-weight-easily-1cc53835f4d1
['Shruti']
2020-12-18 15:06:54.740000+00:00
['Food', 'Health', 'Self', 'Fitness', 'Psychology']
Title 7 FoodRelated Hacks Must Implement Lose Weight EasilyContent 7 FoodRelated Hacks Must Implement Lose Weight Easily hack based psychology biology hormonal science Photo Brooke Lark Unsplash Food one powerful natural stimulant — seeing smelling even thinking enough activate craving feeling hunger human body proven time person see food particularly unhealthy junk calorierich food reward centre brain light making feel hungrier hunger always accompanied increase heart rate sometimes even make drool little happens usually body priming eat whether planning 2006 study jar Hershey’s kiss placed desk secretary organisation Half secretary received transparent jar half opaque one one could see chocolate staring face day ate 71 couldn’t see phenomenon associated study called food cue reactivity According see food body would crave eat happening doesn’t apply food also implies anything learnt associate food past Therefore certain smell sight image location trigger food craving depending learned association well Think already happening life almost daily basis example decided eat clean healthy today coworker friend brought burger casually watching tv pizza hut commercial came would immediately crave food power would go mouth like damn havent pizza age next thing know already ordered Unless aware food cue influence decision daily basis end consuming calorie intention eating without even paying much attention themTags Food Health Self Fitness Psychology
1,337
A Light at the End of the Covid Tunnel
Hang in there, Covid-19 vaccines are coming soon. (Image credit: Tama66 via pixabay) We’ve learned a lot since I last wrote about Covid-19 in May. Wear a mask. They work. There is no excuse, there is no debate. Dozens of studies on the subject all support the same conclusion: higher rates of mask wearing leads to lower rates of new infections, and less deaths. It’s quite simple: the virus is primarily spread by respiratory droplets in the air. Wearing a mask protects other people by blocking your aerosol particles, and protects yourself by filtering out other people’s. While any mask is far better than nothing, the data shows multi-layer cloth masks are more effective filters than thin single-layer ones (like bandanas or neck gaiters). Lots of new research suggests that vitamin D may be helpful in protecting you against Covid-19. Comparisons between mild and severe cases show that vitamin D deficiency is a common underlying factor in critically ill hospitalized patients. Correlation doesn’t necessarily imply causation, but vitamin D is also known to be important four our immune system. Your body can make its own by getting some sun, but not so much during winter quarantines. Given the likely upside in reducing Covid severity, and lack of any downsides, I think everyone should start taking vitamin D supplements as a precaution. Don’t go crazy, 1000–2000 IUs per day is fine, taken with meals to help absorption. We now have at least two extremely promising vaccine candidates, and more still coming down the pipeline! The Pfizer/BioNTech study enrolled 43,000 participants, half of which got a real vaccine and half of which received a placebo. After a first-stage analysis, there were 170 confirmed cases of Covid-19. Of all the people who got sick, 95% of them (162/170) were in the placebo group. It’s a similar story for the Moderna trial: 30,000 participants, 95 confirmed cases, and 90 out of the 95 people who contracted Covid were in the unvaccinated group. These are amazingly good results. The data shows that getting these vaccines will significantly reduce your chances of contracting COVID-19, and if you do get it, will significantly reduce your odds of dying from it. I know that some people will be understandably cautious about getting such a new vaccine. The politicization of everything in our lives has unfortunately undermined the public’s confidence in this vaccine development process. But let me make this as clear as possible — you can trust the scientific method. These clinical studies have strictly designed trial protocols, approved by the FDA/NIH, with predefined rules about how the statistical analysis will be conducted. The results are in, and the math doesn’t lie. Within the next few weeks, you’ll be hearing about these vaccines getting “FDA emergency use authorization.” In short, this means that after rigorous safety and efficacy testing in the clinical trials, the data demonstrates such a compelling potential benefit to public health that it outweighs any potential unknown risks. I know that some people will be understandably concerned about the possibility of these vaccines having dangerous side effects. We have good news on that front as well: from the tens of thousands of volunteers who have received vaccines since July, there have been no adverse reactions beyond the standard fatigue and aches you might get from a flu shot. (By the way, get your flu shot too.) Some will argue that we have no way of knowing whether rare, unintended, side effects might emerge over time. While it’s true that these sorts of long-term studies are still ongoing, we have to weigh this uncertainty against the data we already do have, which is how dangerous and deadly Covid-19 is. Even if you are fortunate to have a mild case, an alarming number of “Covid long-haulers” report suffering long-term health complications even after recovering from the viral infection. These lingering symptoms include chronic fatigue, lung damage, heart inflammation, and mental fog. The bottom line is that vaccination will give you the opportunity to prepare your body to defend itself against these worst outcomes. If I could get either of these vaccines tomorrow, I would do so without any hesitation. As a scientist, I can assure you that the light at the end of the tunnel is now visible! Millions of doses of these vaccines have already been manufactured, in anticipation of achieving the promising clinical data we now have. The speed at which we have achieved these vaccines should be viewed not with suspicion or with fear, but with pride at witnessing one of the greatest scientific achievements of our lives. The most at-risk groups, like frontline health care workers, will start getting vaccinated by the end of this year. For the rest of us, vaccines should be available to the general public through the Spring of 2021. I would bet on a return to near-normal by next summer! In the weeks following Thanksgiving in the US, more than 2,000 Americans are dying every single day. Case numbers, hospitalizations, and deaths are on the rise almost everywhere. It’s never been more important for everyone to remain vigilant. I know we’re all tired of this pandemic. It has caused incalculable societal disruption, economic devastation, isolation, suffering, and loss. Many of us have endured the sadness of missing our friends and families, and putting our lives on hold for almost an entire year. But if you’ve been careful enough to avoid getting coronavirus so far — don’t let all of these sacrifices be for nothing. Hang on just a little longer. Wear a mask, take your vitamins, avoid indoor gatherings, and get a vaccine as soon as you can.
https://medium.com/politically-speaking/a-light-at-the-end-of-the-covid-tunnel-f6eef3e8cbe5
['Chris Calvey']
2020-12-07 20:42:44.449000+00:00
['Society', 'Politics', 'News', 'Coronavirus', 'Pandemic']
Title Light End Covid TunnelContent Hang Covid19 vaccine coming soon Image credit Tama66 via pixabay We’ve learned lot since last wrote Covid19 May Wear mask work excuse debate Dozens study subject support conclusion higher rate mask wearing lead lower rate new infection le death It’s quite simple virus primarily spread respiratory droplet air Wearing mask protects people blocking aerosol particle protects filtering people’s mask far better nothing data show multilayer cloth mask effective filter thin singlelayer one like bandana neck gaiter Lots new research suggests vitamin may helpful protecting Covid19 Comparisons mild severe case show vitamin deficiency common underlying factor critically ill hospitalized patient Correlation doesn’t necessarily imply causation vitamin also known important four immune system body make getting sun much winter quarantine Given likely upside reducing Covid severity lack downside think everyone start taking vitamin supplement precaution Don’t go crazy 1000–2000 IUs per day fine taken meal help absorption least two extremely promising vaccine candidate still coming pipeline PfizerBioNTech study enrolled 43000 participant half got real vaccine half received placebo firststage analysis 170 confirmed case Covid19 people got sick 95 162170 placebo group It’s similar story Moderna trial 30000 participant 95 confirmed case 90 95 people contracted Covid unvaccinated group amazingly good result data show getting vaccine significantly reduce chance contracting COVID19 get significantly reduce odds dying know people understandably cautious getting new vaccine politicization everything life unfortunately undermined public’s confidence vaccine development process let make clear possible — trust scientific method clinical study strictly designed trial protocol approved FDANIH predefined rule statistical analysis conducted result math doesn’t lie Within next week you’ll hearing vaccine getting “FDA emergency use authorization” short mean rigorous safety efficacy testing clinical trial data demonstrates compelling potential benefit public health outweighs potential unknown risk know people understandably concerned possibility vaccine dangerous side effect good news front well ten thousand volunteer received vaccine since July adverse reaction beyond standard fatigue ache might get flu shot way get flu shot argue way knowing whether rare unintended side effect might emerge time it’s true sort longterm study still ongoing weigh uncertainty data already dangerous deadly Covid19 Even fortunate mild case alarming number “Covid longhaulers” report suffering longterm health complication even recovering viral infection lingering symptom include chronic fatigue lung damage heart inflammation mental fog bottom line vaccination give opportunity prepare body defend worst outcome could get either vaccine tomorrow would without hesitation scientist assure light end tunnel visible Millions dos vaccine already manufactured anticipation achieving promising clinical data speed achieved vaccine viewed suspicion fear pride witnessing one greatest scientific achievement life atrisk group like frontline health care worker start getting vaccinated end year rest u vaccine available general public Spring 2021 would bet return nearnormal next summer week following Thanksgiving US 2000 Americans dying every single day Case number hospitalization death rise almost everywhere It’s never important everyone remain vigilant know we’re tired pandemic caused incalculable societal disruption economic devastation isolation suffering loss Many u endured sadness missing friend family putting life hold almost entire year you’ve careful enough avoid getting coronavirus far — don’t let sacrifice nothing Hang little longer Wear mask take vitamin avoid indoor gathering get vaccine soon canTags Society Politics News Coronavirus Pandemic
1,338
What is Deep Learning and How Deep Learning Works?
By the technology develops, we’re getting into a new era that full of human-like robots. It wasn’t possible even to imagine that is coming 100 years ago. Since the increase in technology has exponential growth, advanced robots will occur in a much shorter period. What is Artificial Intelligence? It’s better to understand the AI (Artificial Intelligence) concept before jumping into Deep Learning. We can define AI as a system that analyzes its environment and takes actions to maximize its chance of success without being explicitly programmed. As we got a brief idea about AI, let’s inspect some subfields of AI roughly. Natural Language Processing Machine Learning Neural Networks Robotics But, What about Deep Learning? Where is it? Well, deep learning is not directly a subfield of AI. It is a subfield of some subfields since it uses multiple fields in its application. For the sake of visualizing what we’ve talked about, look at the following Venn Diagram: So, What Is Deep Learning? Deep learning is a method of artificial intelligence that mimics the functioning of the human brain while interpreting data and generating patterns for use in making decisions. Deep Learning is a subset of machine learning, as shown in the Venn Diagram above. Deep Learning has networks capable of learning from data that is mostly unsupervised. Deep Learning also is known as a deep neural network or deep neural learning. Deep Learning also uses a subfield of AI, which is Neural Networks. A Perceptron is a single layer neural network; a multi-layer perceptron is called Neural Networks. What the heck is The Perceptron? Neuron vs. Perceptron — Image from IF Inteligencia Futura A Perceptron is nothing but an artificial neuron. A Neuron takes the input signals and processes it then outputs a signal; A Perceptron takes inputs processes it and then outputs a processed data. The picture above may seem a bit complicated and rough, so let’s inspect it in more detail. How The Perceptrons works The diagram above visualizes how they work quite clearly. In Fact, the formula at the right explains everything. A Perceptron takes an input matrix and multiplies (application of the dot product) it with each different weight matrix then, sums all of these dot products. After that, an activation function applies to that result summation. In our world, most of the things are not linear; so, the activation function we use is mostly non-linear. In Fact, what I’ve explained above is not entirely correct. Mostly, we also add a bias term before applying the activation function. That addition of bias term allows us the shift our activation function. Notice that, while we’re programming this algorithm, we’re specifically using an activation function that coded by another programmer, or we’re coding our activation function from scratch. Why Is Non-Linearity Important? As we clearly can see from the picture above, If we use a linear function to separate the triangles between the circles, it can’t be said that we’re doing a good job. But, If we use a non-linear function to do that task, we can easily separate them without waiving anything. Let’s say that we have a classification algorithm that classifies if a picture we provide is a dog’s or cat’s image. Assume the triangles in the graph above is represents the cats, the circles represent the dogs. If we use a linear activation function, our algorithm won’t be successful enough to classify 6 pictures correctly in total. Since 3 triangles and 3 circles are not located where they supposed to be. But if we use a non-linear activation function, that failure disappears! We’re able to separate them properly with a non-linear function. How does the Learning Process start? The machine needs to know what’s right and what’s wrong to learn correctly. In other words, we need to feed it with some data. Let’s get back to our dog and cat classification example. So, we’re feeding our deep learning algorithm with some data that includes cat and dog pictures. While we’re feeding with these data, we’re providing correct answers to the machine. The machine changes these weight matrices during this learning process. So, we don’t have constant weight matrices all the time. Then, we’re providing random pictures to classify whether it’s a cat or dog. How accurate results we get mainly depends on the amount of data we provide to the machine. The activation function and bias we use also affect the accuracy of our deep learning algorithm. To understand more about programming, AI, and technology, follow the upcoming posts. Any comments, or if you have any questions, write it in the comment!
https://medium.com/ai-in-plain-english/what-is-deep-learning-and-how-deep-learning-works-6f055125633d
['Cagri Ozarpaci']
2020-06-22 21:31:59.814000+00:00
['Deep Learning', 'AI', 'Artificial Intelligence', 'Computer Science', 'Data Science']
Title Deep Learning Deep Learning WorksContent technology develops we’re getting new era full humanlike robot wasn’t possible even imagine coming 100 year ago Since increase technology exponential growth advanced robot occur much shorter period Artificial Intelligence It’s better understand AI Artificial Intelligence concept jumping Deep Learning define AI system analyzes environment take action maximize chance success without explicitly programmed got brief idea AI let’s inspect subfields AI roughly Natural Language Processing Machine Learning Neural Networks Robotics Deep Learning Well deep learning directly subfield AI subfield subfields since us multiple field application sake visualizing we’ve talked look following Venn Diagram Deep Learning Deep learning method artificial intelligence mimic functioning human brain interpreting data generating pattern use making decision Deep Learning subset machine learning shown Venn Diagram Deep Learning network capable learning data mostly unsupervised Deep Learning also known deep neural network deep neural learning Deep Learning also us subfield AI Neural Networks Perceptron single layer neural network multilayer perceptron called Neural Networks heck Perceptron Neuron v Perceptron — Image Inteligencia Futura Perceptron nothing artificial neuron Neuron take input signal process output signal Perceptron take input process output processed data picture may seem bit complicated rough let’s inspect detail Perceptrons work diagram visualizes work quite clearly Fact formula right explains everything Perceptron take input matrix multiplies application dot product different weight matrix sum dot product activation function applies result summation world thing linear activation function use mostly nonlinear Fact I’ve explained entirely correct Mostly also add bias term applying activation function addition bias term allows u shift activation function Notice we’re programming algorithm we’re specifically using activation function coded another programmer we’re coding activation function scratch NonLinearity Important clearly see picture use linear function separate triangle circle can’t said we’re good job use nonlinear function task easily separate without waiving anything Let’s say classification algorithm classifies picture provide dog’s cat’s image Assume triangle graph represents cat circle represent dog use linear activation function algorithm won’t successful enough classify 6 picture correctly total Since 3 triangle 3 circle located supposed use nonlinear activation function failure disappears We’re able separate properly nonlinear function Learning Process start machine need know what’s right what’s wrong learn correctly word need feed data Let’s get back dog cat classification example we’re feeding deep learning algorithm data includes cat dog picture we’re feeding data we’re providing correct answer machine machine change weight matrix learning process don’t constant weight matrix time we’re providing random picture classify whether it’s cat dog accurate result get mainly depends amount data provide machine activation function bias use also affect accuracy deep learning algorithm understand programming AI technology follow upcoming post comment question write commentTags Deep Learning AI Artificial Intelligence Computer Science Data Science
1,339
[DeNA TechCon 2020 ライブ配信] DeNA データプラットフォームにおける自由と統制のバランス
in In Fitness And In Health
https://medium.com/dena-analytics-blog/dena-techcon-2020-%E3%83%A9%E3%82%A4%E3%83%96%E9%85%8D%E4%BF%A1-dena-%E3%83%87%E3%83%BC%E3%82%BF%E3%83%97%E3%83%A9%E3%83%83%E3%83%88%E3%83%95%E3%82%A9%E3%83%BC%E3%83%A0%E3%81%AB%E3%81%8A%E3%81%91%E3%82%8B%E8%87%AA%E7%94%B1%E3%81%A8%E7%B5%B1%E5%88%B6%E3%81%AE%E3%83%90%E3%83%A9%E3%83%B3%E3%82%B9-5075ebd9ea03
[]
2020-03-26 05:32:13.382000+00:00
['Bigquery', 'Analytics', 'Kubernetes', 'Data Engineering', 'Technology']
Title DeNA TechCon 2020 ライブ配信 DeNA データプラットフォームにおける自由と統制のバランスContent Fitness HealthTags Bigquery Analytics Kubernetes Data Engineering Technology
1,340
How Green Technologies Of The Future Are Being Built In Singapore
Also known as The Lion City, The Garden City and the Little Red Dot, Singapore is famous for its significant achievements in innovation, favorable tax system, recognized universities and great life quality. However, Singapore also ranks at the very top of the world’s most densely populated independent territories right after Macau and Monaco. And with insignificant amount of natural resources of its own, the country needs a strong, forward-thinking take on energy and innovation in order to power its fast-growing community. A few weeks ago, I was fortunate to visit Singapore and have a chat with Nilesh Y. Jadhav, the Senior Scientist and Program Director at the Energy Research Institute at NTU Singapore. We discussed Singapore’s energy situation and how his work at NTU involves running EcoCampus, the soon-to-be world’s greenest campus where they develop and test green technologies of the future. First off, on February 20th the The Government of Singapore introduced carbon tax on large direct emitters that received approval from stakeholders and observers. Many believe that the tax was well-timed and could boost Singapore’s economy. #1–What do you think of the new tax and how would you describe rest of the country’s energy policy? Nilesh Y. Jadhav The new tax was a great move to level the playing field for renewables. However, looking at the wider picture, I would say that Singapore has been quite prudent in its take on energy. Unlike many other countries including the U.S., Singapore doesn’t believe in subsidising energy. Regardless of scarcity of land and dense population, the country aims to be energy resilient and sustainable. During the past two decades, Singapore moved from oil fired to gas fired electricity generation which drastically improved the country’s carbon footprint. 95% of Singapore’s electricity today comes from natural gas. Now, for the resilience of gas infrastructure and to build solid electricity security, the country is implementing LNG storage capabilities. #2 — What about renewable energy? In Singapore, we have quite limited renewable energy options. Our wind speed and tidal currents aren’t sufficient. And geothermal energy is not viable as well. Hence, so far the only option for renewable energy has been solar. Thanks to our location close to equator, we get quite good solar irradiation in Singapore (an average of 1,580 kWh/m2/year) which is about 30% more than Germany. However, the tropical climate does make our weather a bit unpredictable with clouds and the rain. This leads to variability in solar generation. I have read that you are testing a large floating solar farm in a water reservoir? Has it been successful? It was indeed successfully installed in one of the reservoirs and they are collecting project data to see if this approach is economically and technologically feasible. We did an analysis on the solar potential and discovered that even if we were to cover all the rooftops and water reservoirs in Singapore with solar panels, we would only meet 10% of all the country’s energy needs with solar. Another way that the country is supporting renewable energy is through financing models and economic development by anchoring the clean energy ecosystem in Singapore. Notably, a major solar module manufacturer viz. Renewable Energy Corporation (REC) chose Singapore to setup it’s largest fully-integrated solar module manufacturing facility that is expected to produce 190,000 solar modules per month. Singapore Government has committed to over 350MW of solar installations by 2020. #3 — As an avid solar energy enthusiast, do you think that in the future solar energy could be the world’s leading source of renewable energy? I do believe that. Many things are happening in the right direction — each year, the price has been falling rapidly (in year 2016 only, solar PV module prices have slashed by more than 20%). Yet again, it’s linked to geographic and climatic conditions, for example in Denmark solar will probably be second or third best because there’s remarkably more wind. For countries such as China and India I think solar is the most efficient option. However, at the current price of solar and energy storage it won’t be able to supply the base load, but the second tier. The Hive Building at NTU , Singapore #4 — When did you join NTU and what initiated the birth of EcoCampus? Fun fact is that I used to work for Shell — some people call it the “dark side” of energy. Then, about five years ago I decided to shift my career to clean energy and sustainability and that led me to a position at NTU in the Energy Research Institute (ERI@N). I started off with assisting in developing technology roadmaps for The the Singapore Government for solar, smart grid and energy efficient buildings. Two years later we started the EcoCampus initiative, a living laboratory for innovative clean technology solutions. The most important characteristic of the EcoCampus initiative is that each technology needs to be demonstrated on the campus apart from the R&D work. This really adds the biggest value to the company as often times, it’s difficult to find the first adopter of cutting edge technologies. As a small country we are resource scarce, but being open to collaborations and to the market, we can give the companies access to high-quality research, testing and access to the Asian market. Among many others, Siemens needed a place to testbed their technology that they wanted to bring from the US to Singapore. So they landed at #5 — Tell us a little bit more about the projects that have been built in EcoCampus? Today, six of the projects are successfully completed. One of them, developed together with Engie, is an app for energy conservation through user behaviour. We tested it with the students and the whole campus staff. They interacted with the facility managers in order to save energy via the app. We involved professors from the Sociology and Economics Departments in order to add some great gamification elements and make people want to use it. Thanks to this solution, we would be able to save about 5% of energy on the campus through behaviour change. Right now, we are working on the second version called PowerZee, which will be used in other universities all over Singapore and the world. Find out more at “App Allows Students To Reduce Uni’s Power Bills”. #6— Could you share some numbers on how much money the green approach will help you save money at NTU? Yes, certainly. Our goal is to reduce 35% in energy, water and waste intensity by 2020. This should leave us with around 5–10 million Singapore dollars of savings per year. This goal is also in line with Singapore’s commitment in the Paris Agreement which states 36% reduction in carbon intensity. Due to our unordinary mix of energy, the saving in energy are directly linked to carbon savings. #7 — One of your key research fields is Energy Information and Analytics. Could you talk about some of these projects? Most certainly. With more than 200 buildings on the campus we are able to collect a lot of data. We are using smart meters and BMS (building management system) for collecting all the data. We are tracking everything from energy efficiency to consumption patterns. For example, during the holidays the energy consumption in the campus decreases significantly. Thanks to the data we can negotiate our energy bill. Along with analytics we also do data simulation and modelling on energy use of different buildings. #8 — What kind of data do you use for the weather predictions? In fact, we can’t really rely on external data, so we are using the two weather stations that we have on the campus and another one in another campus nearby with great solar surveillance cameras. #9—And what about the data visualisations side. Do you do it in-house? As researchers we are fans of using open source software, so we do most of the data modelling and visualisations ourselves, but we also work with companies such as IES to develop the virtual campus data platform with advanced simulation capabilities. For example, there is an on-going project for making our professors accountable for the energy consumption of their departments. The idea is that the professors have a fixed energy budget and if they manage to save energy they can keep the rest of the budget for research and if they overspend, they need to explain their actions. Gardens By The Bay, Singapore #10 —That is an interesting approach. What other developments are taking place in Singapore? I would say that the most exciting developments are happening in energy efficiency and smart buildings. One of the sustainability goals for Singapore is that by 2020, 80% of the Singapore’s buildings need to be green certified. At the moment it’s little bit over 20%. Interestingly, it’s not only buildings or campuses, but entire city districts that are becoming green. There are research and policy efforts in Singapore to push further towards zero energy and even positive energy buildings. There is one great research project that we call a Smart Multi Energy System (SMES). It combines thermal, electrical and gas energy sources and they are being optimised based on the availability of each energy source at any point of time. It enables you to play with the grid in real time offering enhanced demand response opportunities. Once this project finishes it can be deployed at any industrial site that has different energy sources and it will help to save up to 20% of all costs. What was supposed to be a 20 minute interview, lasted over an hour. Thank you, Nilesh, and thank you all for reading. We also discussed autonomous vehicles, wireless charging and food waste so I certainly encourage you to ask additional questions should you have any. Just write me at [email protected] or add them to the comments.
https://medium.com/planet-os/how-the-green-technologies-of-the-future-are-being-built-in-singapore-8baeb64546f1
['Annika Ljaš']
2017-03-16 12:37:36.504000+00:00
['Renewable Energy', 'Singapore', 'Climate Change', 'Energy', 'Environment']
Title Green Technologies Future Built SingaporeContent Also known Lion City Garden City Little Red Dot Singapore famous significant achievement innovation favorable tax system recognized university great life quality However Singapore also rank top world’s densely populated independent territory right Macau Monaco insignificant amount natural resource country need strong forwardthinking take energy innovation order power fastgrowing community week ago fortunate visit Singapore chat Nilesh Jadhav Senior Scientist Program Director Energy Research Institute NTU Singapore discussed Singapore’s energy situation work NTU involves running EcoCampus soontobe world’s greenest campus develop test green technology future First February 20th Government Singapore introduced carbon tax large direct emitter received approval stakeholder observer Many believe tax welltimed could boost Singapore’s economy 1–What think new tax would describe rest country’s energy policy Nilesh Jadhav new tax great move level playing field renewables However looking wider picture would say Singapore quite prudent take energy Unlike many country including US Singapore doesn’t believe subsidising energy Regardless scarcity land dense population country aim energy resilient sustainable past two decade Singapore moved oil fired gas fired electricity generation drastically improved country’s carbon footprint 95 Singapore’s electricity today come natural gas resilience gas infrastructure build solid electricity security country implementing LNG storage capability 2 — renewable energy Singapore quite limited renewable energy option wind speed tidal current aren’t sufficient geothermal energy viable well Hence far option renewable energy solar Thanks location close equator get quite good solar irradiation Singapore average 1580 kWhm2year 30 Germany However tropical climate make weather bit unpredictable cloud rain lead variability solar generation read testing large floating solar farm water reservoir successful indeed successfully installed one reservoir collecting project data see approach economically technologically feasible analysis solar potential discovered even cover rooftop water reservoir Singapore solar panel would meet 10 country’s energy need solar Another way country supporting renewable energy financing model economic development anchoring clean energy ecosystem Singapore Notably major solar module manufacturer viz Renewable Energy Corporation REC chose Singapore setup it’s largest fullyintegrated solar module manufacturing facility expected produce 190000 solar module per month Singapore Government committed 350MW solar installation 2020 3 — avid solar energy enthusiast think future solar energy could world’s leading source renewable energy believe Many thing happening right direction — year price falling rapidly year 2016 solar PV module price slashed 20 Yet it’s linked geographic climatic condition example Denmark solar probably second third best there’s remarkably wind country China India think solar efficient option However current price solar energy storage won’t able supply base load second tier Hive Building NTU Singapore 4 — join NTU initiated birth EcoCampus Fun fact used work Shell — people call “dark side” energy five year ago decided shift career clean energy sustainability led position NTU Energy Research Institute ERIN started assisting developing technology roadmaps Singapore Government solar smart grid energy efficient building Two year later started EcoCampus initiative living laboratory innovative clean technology solution important characteristic EcoCampus initiative technology need demonstrated campus apart RD work really add biggest value company often time it’s difficult find first adopter cutting edge technology small country resource scarce open collaboration market give company access highquality research testing access Asian market Among many others Siemens needed place testbed technology wanted bring US Singapore landed 5 — Tell u little bit project built EcoCampus Today six project successfully completed One developed together Engie app energy conservation user behaviour tested student whole campus staff interacted facility manager order save energy via app involved professor Sociology Economics Departments order add great gamification element make people want use Thanks solution would able save 5 energy campus behaviour change Right working second version called PowerZee used university Singapore world Find “App Allows Students Reduce Uni’s Power Bills” 6— Could share number much money green approach help save money NTU Yes certainly goal reduce 35 energy water waste intensity 2020 leave u around 5–10 million Singapore dollar saving per year goal also line Singapore’s commitment Paris Agreement state 36 reduction carbon intensity Due unordinary mix energy saving energy directly linked carbon saving 7 — One key research field Energy Information Analytics Could talk project certainly 200 building campus able collect lot data using smart meter BMS building management system collecting data tracking everything energy efficiency consumption pattern example holiday energy consumption campus decrease significantly Thanks data negotiate energy bill Along analytics also data simulation modelling energy use different building 8 — kind data use weather prediction fact can’t really rely external data using two weather station campus another one another campus nearby great solar surveillance camera 9—And data visualisation side inhouse researcher fan using open source software data modelling visualisation also work company IES develop virtual campus data platform advanced simulation capability example ongoing project making professor accountable energy consumption department idea professor fixed energy budget manage save energy keep rest budget research overspend need explain action Gardens Bay Singapore 10 —That interesting approach development taking place Singapore would say exciting development happening energy efficiency smart building One sustainability goal Singapore 2020 80 Singapore’s building need green certified moment it’s little bit 20 Interestingly it’s building campus entire city district becoming green research policy effort Singapore push towards zero energy even positive energy building one great research project call Smart Multi Energy System SMES combine thermal electrical gas energy source optimised based availability energy source point time enables play grid real time offering enhanced demand response opportunity project finish deployed industrial site different energy source help save 20 cost supposed 20 minute interview lasted hour Thank Nilesh thank reading also discussed autonomous vehicle wireless charging food waste certainly encourage ask additional question write aljashplanetoscom add commentsTags Renewable Energy Singapore Climate Change Energy Environment
1,341
The 7 best business books of 2020 (that I read)
The 7 Best Business Books of 2020 (That I Read) Of the 46 books I read in 2020, these are the best for entrepreneurs and managers. Photo by CHUTTERSNAP on Unsplash It was a tough year for most of us. As an entrepreneur in the hospitality area, my business was on the verge of the abyss. One thing that helped us to survive were ideas from a book I read some time ago. As much as it is good to keep hope and some optimism, we should be cautious about prospects. Many of the problems faced this year will not magically disappear at midnight of 31/12/2020. Therefore, I am listing to you the 7 best books for entrepreneurs, managers, and business-related professionals. One of the few positive points of this year was that, with my business closed during part of the year, I could read more. After 46 books (you can see my reviews for all of them on my Goodreads page, feel free to add me there too), these are the 7 I recommend for you to read in 2021. Put them on your bucket list, you will not regret it. The 1-Page Marketing Plan: Get New Customers, Make More Money, And Stand out From The Crowd During my years at the business school, we had plenty of Marketing classes. Most of them used material from superstar-authors like Philip Kotler and Michael Porter. I will not deny that it was useful and even inspiring, but they focus more on the big-corporation game instead of the new and smaller business. This book, written by Allan Dib, resolved many knowledge gaps I had about marketing for startups and small businesses. If you are an entrepreneur with a new project in mind, it should be on your must-read list. The pages have the perfect combination of being simultaneously entertaining and informative. Allan Dib’s writing-style makes it easier to assimilate the content by using trivial, everyday examples. The paragraphs about building a mailing list felt like an anchor on me, to the point I questioned why I didn’t build my company mail list three years ago. If you are a big-corp marketing manager, maybe this book will not be the most useful for you. But possibly, after reading it, you will end up wanting to start your own business. Unlimited Memory: How to Use Advanced Learning Strategies to Learn Faster, Remember More and be More Productive It would not be an exaggeration to say that this is one of the most practical non-fiction books that I ever read. Kevin Horsley has very good credentials: a 2nd place in a world memory contest, for example. This book is all about mnemonics — tools that help us remember certain facts or large amounts of information. One many think that in the age of Google, to improve our memory performance is a waste of time. Nothing more distant from the truth: even to use google, you need to know what you want to know. Besides, to remember all the names and tastes of your co-workers and clients may impress everyone and bring competitive advantages. In 136 pages, Kevin Horsley delivers to us methods like The Journey, The Car, The Body, The Peg, and Mind-Mapping. Down-to-earth, workable techniques that, after you learn, may look like you are even cheating. The first two methods (The Journey and The Body) sounded to me almost like magic, since in a few minutes you can store in perfect sequence a considerable amount of information. If you think that your struggle with explaining the financial numbers to your partners is because of poor memory, I bet you are wrong. You were just never shown how powerful your memory really can be. High Output Management Another masterpiece from an author with respectable credentials. Andrew Grove was the 3rd employee of Intel — after both founders- later its CEO and turned to be one of the most legendary Silicon Valley executives. High Output Management is originally from 1983, so it could be considered old for our current standards, but it is not. In fact, this is probably what makes this book a must-read. Andrew Grove does not save words or is afraid of crushing sensibilities. He, straightforwardly, wrote what he really applied over his brilliant career. In another article, I listed 9 management lessons I took from his writings. Teachings like: How to understand why your team is not achieving good results. For every indicator, have a counter indicator. This one is especially helpful when defining your business goals, as explained here. This one is especially helpful when defining your business goals, as explained here. How monitoring should be done. Answer correctly to “Do you have a minute?” and do not lose your talents. It is a book with great, priceless lessons for anyone managing a team, either a single summer intern or a multinational with thousands of employees. Never Split the Difference If you are a budget hotel or hostel entrepreneur in Eastern Europe, stop reading this article right now. I don’t want a potential competitor having such a competitive edge as the one provided by Never Split the Difference. And I am talking seriously because this book even helped me to cut the expenses of my company during the COVID-19 crisis! The author, Chris Voss, served as one of the main FBI negotiators in dozens of crises, not only in the USA but also abroad. Like in the Philippines, where he negotiated with the members from Abu Sayyaf, an ISIS-affiliated terrorist organization. With all his career expertise, he translates in 274 pages some brilliant insights, useful in multiple sorts of negotiations. There are important lessons to avoid the fight-or-flight mindset that eventually makes the parts lose. When I read a good book, I take notes, but only from the most important points. From Never Split the Difference I took almost 5 pages of notes in A4 paper. This should put into perspective how many remarkable are there. So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love If you are familiar with the concept of deep work, likely you are also acquainted with the name of Cal Newport. This is another world-class publication from this young MIT Ph.D. Filled with great career examples, used as the basis for the careful development of his conclusion: the common-belief that passion should be the driver of a career change or choice is a bad idea. Do not think this is the only takeaway from the almost 300 pages. Another interesting concept, the theory of career capital, may sound simple once you understand it, but often we neglect it in an era of lifestyle-design experts giving poor advice. This theory foundation is that, instead of over discovering a true calling, one should master rare and valuable skills. Then you use these skills to build career capital. Later you invest this career capital to gain control over what you do and how you do it. Only then you identify and act on a life-changing mission. As the author settles: This philosophy is less sexy than the fantasy of dropping everything to go live among the monks in the mountains, but it’s also a philosophy that has been shown time and again to actually work. Another excellent point made is against multitasking, as I explained in another article. Starting Your Own Business Far From Home: What (Not) to Do When Opening a Company in Another State, Country, or Galaxy Disclaimer: the author of this book is also the author of this article, but you still can check dozens of reviews from verified readers on the Amazon page. Four years ago I dropped a promising career just after a promotion, to follow the dream of opening my own business. It was not easy, especially because it was a tourist hospitality business and we faced one of the worst crises in history in this sector during 2020. As an additional obstacle, I opened this company in a country totally different from my culture and with a language that I barely speak (at the beginning). But both I and my business survived. The lessons I learned, the mistakes I made, and the solutions I found are all in this book. Because there is no better year to launch an entrepreneurial venture than 2021 — plenty of cash-strapped but promising business are for sales around, and if you look carefully, there will be excellent opportunities waiting for a risk-taking entrepreneur. Moral Letters to Lucilius — Volume 1 It may surprise you I am listing a book with almost two thousand years among the best business-readings of 2020. But Moral Letters to Lucilius is one of the most brilliant survival manuals that I ever saw. The first time I heard about Seneca was by reference in one of Nassim Taleb’s books. After reading it, he turned to be y favorite Roman philosopher for a reason: All his letters and manuscripts have timeless advice about human nature, negotiation, and even physical exercise. Who would imagine that an ancient Latin philosopher made burpees in the morning? Contrary to common (and often justified) preconceptions, it is an ancient book that is outstanding but also a pleasure to read. Or should I say “a joy”, since the term pleasure is not very welcome for stoics? Bottom line: one of the best books I read, ever.
https://medium.com/datadriveninvestor/the-7-best-business-books-of-2020-that-i-read-637422f177ff
['Levi Borba']
2020-12-29 17:38:38.621000+00:00
['Management', 'Entrepreneurship', 'Business', 'Startup', 'Money']
Title 7 best business book 2020 readContent 7 Best Business Books 2020 Read 46 book read 2020 best entrepreneur manager Photo CHUTTERSNAP Unsplash tough year u entrepreneur hospitality area business verge abyss One thing helped u survive idea book read time ago much good keep hope optimism cautious prospect Many problem faced year magically disappear midnight 31122020 Therefore listing 7 best book entrepreneur manager businessrelated professional One positive point year business closed part year could read 46 book see review Goodreads page feel free add 7 recommend read 2021 Put bucket list regret 1Page Marketing Plan Get New Customers Make Money Stand Crowd year business school plenty Marketing class used material superstarauthors like Philip Kotler Michael Porter deny useful even inspiring focus bigcorporation game instead new smaller business book written Allan Dib resolved many knowledge gap marketing startup small business entrepreneur new project mind mustread list page perfect combination simultaneously entertaining informative Allan Dib’s writingstyle make easier assimilate content using trivial everyday example paragraph building mailing list felt like anchor point questioned didn’t build company mail list three year ago bigcorp marketing manager maybe book useful possibly reading end wanting start business Unlimited Memory Use Advanced Learning Strategies Learn Faster Remember Productive would exaggeration say one practical nonfiction book ever read Kevin Horsley good credential 2nd place world memory contest example book mnemonic — tool help u remember certain fact large amount information One many think age Google improve memory performance waste time Nothing distant truth even use google need know want know Besides remember name taste coworkers client may impress everyone bring competitive advantage 136 page Kevin Horsley delivers u method like Journey Car Body Peg MindMapping Downtoearth workable technique learn may look like even cheating first two method Journey Body sounded almost like magic since minute store perfect sequence considerable amount information think struggle explaining financial number partner poor memory bet wrong never shown powerful memory really High Output Management Another masterpiece author respectable credential Andrew Grove 3rd employee Intel — founder later CEO turned one legendary Silicon Valley executive High Output Management originally 1983 could considered old current standard fact probably make book mustread Andrew Grove save word afraid crushing sensibility straightforwardly wrote really applied brilliant career another article listed 9 management lesson took writing Teachings like understand team achieving good result every indicator counter indicator one especially helpful defining business goal explained one especially helpful defining business goal explained monitoring done Answer correctly “Do minute” lose talent book great priceless lesson anyone managing team either single summer intern multinational thousand employee Never Split Difference budget hotel hostel entrepreneur Eastern Europe stop reading article right don’t want potential competitor competitive edge one provided Never Split Difference talking seriously book even helped cut expense company COVID19 crisis author Chris Voss served one main FBI negotiator dozen crisis USA also abroad Like Philippines negotiated member Abu Sayyaf ISISaffiliated terrorist organization career expertise translates 274 page brilliant insight useful multiple sort negotiation important lesson avoid fightorflight mindset eventually make part lose read good book take note important point Never Split Difference took almost 5 page note A4 paper put perspective many remarkable Good Can’t Ignore Skills Trump Passion Quest Work Love familiar concept deep work likely also acquainted name Cal Newport another worldclass publication young MIT PhD Filled great career example used basis careful development conclusion commonbelief passion driver career change choice bad idea think takeaway almost 300 page Another interesting concept theory career capital may sound simple understand often neglect era lifestyledesign expert giving poor advice theory foundation instead discovering true calling one master rare valuable skill use skill build career capital Later invest career capital gain control identify act lifechanging mission author settle philosophy le sexy fantasy dropping everything go live among monk mountain it’s also philosophy shown time actually work Another excellent point made multitasking explained another article Starting Business Far Home Opening Company Another State Country Galaxy Disclaimer author book also author article still check dozen review verified reader Amazon page Four year ago dropped promising career promotion follow dream opening business easy especially tourist hospitality business faced one worst crisis history sector 2020 additional obstacle opened company country totally different culture language barely speak beginning business survived lesson learned mistake made solution found book better year launch entrepreneurial venture 2021 — plenty cashstrapped promising business sale around look carefully excellent opportunity waiting risktaking entrepreneur Moral Letters Lucilius — Volume 1 may surprise listing book almost two thousand year among best businessreadings 2020 Moral Letters Lucilius one brilliant survival manual ever saw first time heard Seneca reference one Nassim Taleb’s book reading turned favorite Roman philosopher reason letter manuscript timeless advice human nature negotiation even physical exercise would imagine ancient Latin philosopher made burpees morning Contrary common often justified preconception ancient book outstanding also pleasure read say “a joy” since term pleasure welcome stoic Bottom line one best book read everTags Management Entrepreneurship Business Startup Money
1,342
How we built the CyberSift Attack Map
Recently we launched a small site called the “CyberSift Attack Map”hosted athttp://attack-map.cybersift.io.Any one involved in the InfoSec industry will be instantly familiar with the site: It’s basically a map of attacks which either trip some rule in a signature based IPS such as SNORT, or land in a honeypot. In this article we’ll list some of the libraries and techniques we used to build the site for any devs out there who are interested. Backend We used the pythonFlask microframework, work … Read more at David’s Blog
https://medium.com/david-vassallos-blog-posts/how-we-built-the-cybersift-attack-map-5c05fb2a5b9d
['David Vassallo']
2018-07-09 14:12:33.729000+00:00
['JavaScript', 'Web Apps', 'Python']
Title built CyberSift Attack MapContent Recently launched small site called “CyberSift Attack Map”hosted athttpattackmapcybersiftioAny one involved InfoSec industry instantly familiar site It’s basically map attack either trip rule signature based IPS SNORT land honeypot article we’ll list library technique used build site devs interested Backend used pythonFlask microframework work … Read David’s BlogTags JavaScript Web Apps Python
1,343
The Scientific Guide To Not Freezing Your Ass off This Winter
Two Septembers ago, a South Dakota snowstorm caught me off guard. I packed light — too light — for a trip to the Black Hills, to participate in the Buffalo Roundup and Arts Festival at Custer State Park. Huddled in the bed of a pickup truck in the middle of a thundering herd of buffalo, wearing every article of clothing I had and still cold all the way down in my bones, I swore I’d never be unprepared for the conditions again. This winter, as the ongoing pandemic makes it unsafe to gather indoors, you may find yourself spending more time outside if you want to do any socializing, braving low temperatures and less-than-ideal weather in many parts of the country. You may not have plans to race across the frigid prairie chasing buffalo, but even if you’re just having some backyard beers with your friends, the same concepts apply: Preparation is key, clothing choice is all-important, and understanding the science of warmth can help you hang onto it. Your body is constantly producing — and losing — heat “For us to have our metabolism, our cells being alive, [that] takes energy,” says Christopher Minson, PhD, a thermal physiologist at the University of Oregon. “The byproduct of metabolism is heat, and that’s why we have a body temperature.” But as your body constantly generates heat, it also needs to get rid of it to not overheat, and there are three primary ways that happens: conduction, convection, and evaporation. Conduction happens through contact with surfaces. If your body temperature is higher than the things around you, you’ll lose heat when you touch those things. “Different materials have different conductivity,” Minson says. “Metal, for instance, really conducts a lot. You’ll lose a lot of heat to a metal surface, vs. plastic or something else. Even wood is much better at not conducting heat.” Here’s the first piece of advice: If you’re planning an outdoor event, consider the furniture. If you skip metal folding chairs in favor of seats made of wood, fabric, or plastic, conduction will decrease and you and your guests will automatically stay warmer. As for convection, “If you’re standing in a current — whether it’s water or wind, that’s convection,” says Minson. As air moves around you, it pulls heat away from your body. The calmer the current, the warmer you’ll stay, but this one is a bit more complicated in the context of safe socializing, because when it comes to reducing Covid-19 transmission, airflow is your friend. So, rather than erecting a tent (which isn’t guaranteed to lower risk of infection), find other ways to keep the wind from whipping away your warmth. You can try an umbrella — it can act almost like your own personal enclosure — to reduce heat loss through convection, or other accessories, like a tight-knit balaclava or wind shell jacket, that cut the wind. And don’t forget your feet: “Running shoes are well-ventilated so your foot doesn’t overheat,” Minson says, which is great while you’re exercising, but less helpful when you’re just hanging out. “Wear something that covers your shoes to keep air from getting in, or choose shoes that keep wind out. Leather is a good choice.” When thinking about what to wear, keep in mind the third, and possibly most important, method of heat loss: evaporation. What really determines how comfortable you’ll be isn’t the layers you wear, but what they’re made of, and what’s between them. Your body is constantly producing moisture, and the evaporation of that moisture is the foundation of the human thermoregulation system. Sweat evaporating off your skin cools you down in summer. In winter, the cooling effect is a lot less desirable, but you still need to get rid of the dampness. While a totally impermeable outer layer might keep the wind out, it could also lock your moisture in, and that’s not necessarily a good thing. “The idea is you need a balance, especially if you’re moving around and generating some heat,” Minson says. “You need some ability to lose the water vapor from your skin. If you wear a plastic bag, there’s no ability for humidity to escape from your body.” If you’re moving a lot and generating a lot of warmth, you could start to overheat. And when you stop moving, all that moisture will eventually cool down, making you much colder. All this to say: ventilation is vital. Trapping warm air When you’re preparing to endure low temperatures, it can be tempting to don layer after layer, imagining that the more clothes you wear, the warmer you’ll be. But (as I can attest, after shivering through a day in South Dakota despite wearing everything from my suitcase), that’s not always true. Technically, what really determines how comfortable you’ll be isn’t the layers you wear, but what they’re made of, and what’s between them. “Fundamentally, what keeps you warm is air,” explains Michael Cattanach, global product director for Polartec, a Massachusetts-based company that makes synthetic thermal fabric for outdoor apparel. “It’s about keeping pockets of air next to your body and using fabrics that trap air and keep layers of air together.” “Fundamentally, what keeps you warm is air.” Remember, your body is constantly giving off heat. When you wear clothes that trap still air (but not moisture) against your skin, the air absorbs that heat and you stay warmer. Leggings with a thermal grid pattern, for example, leave more room for air than something like skin-tight spandex, and will therefore keep you warmer. And heated air will remain around your body much longer if you insulate. Just like the insulation we use in our houses prevents heat loss, your clothing creates a barrier that keeps heat from escaping. The art of layering is about quality, not quantity Cattanach’s formula for a foolproof clothing system includes three layers: “something next to the skin to manage sweat and moisture, a second layer that’s insulating, then something with weather protection on the surface.” The base layer is arguably the most important of the three, and should be fitted but not constricting. If you think you may break a sweat, or plan to be sitting by a fire that may eventually make you overly warm, for instance, go for a fabric that’s moisture-wicking; synthetics and synthetic/natural blends are a good choice. “[Wool] is the original smart fiber, and can absorb and release moisture. Since basically the beginning of time, it’s existed to keep a mammal warm, cool, safe and comfortable.” In terms of all-natural fibers, cotton is comfortable as long as your skin stays dry, but won’t do you any favors once you sweat and create moisture. Wool, on the other hand, can keep you warm even if you sweat a bit, and release that moisture to prevent overheating. “It’s the original smart fiber, and can absorb and release moisture. Since basically the beginning of time, it’s existed to keep a mammal warm, cool, safe and comfortable”, says Clara Parkes, the New York Times bestselling author of Knitlandia and Vanishing Fleece. She points out that we’re mammals, too, so wool’s a natural choice to keep us warm. A thin and soft Merino wool base layer can help keep you comfortable without overheating, and a thicker wool sweater on top will let you use the principle of trapped air to your advantage. “Wool is great for insulation because each fiber can have 18 or 20 curvatures per inch,” Parkes explains. “They’re like coiled springs, always pushing away from one another and creating space. The thicker the coils, the more still air is trapped in the fabric, and the higher its insulation abilities will be.” Thick, rough sweaters are especially warm because the fibers are “jumbled and chaotic,” holding lots of air, says Parkes. For a less prickly layer against your skin, Merino works well because it’s a high-curvature fiber, trapping a disproportionate amount of air, despite its thinness. The potential drawbacks to wool — and the reasons many lean toward newer, synthetic fleeces — are that it’s heavy and daunting to wash. But the latter shouldn’t stop you from buying that impossibly warm sweater, says Parkes. “If you’re at all nervous about it, just do a hand wash, and treat it like you would your hair,” she says. “It’s chemically identical. A sink full of warm water, a quick dip, and an air dry is all it takes.” What you do matters almost as much as what you wear I called this a guide for staying warm while enjoying backyard beers, but actually, when it comes to staying warm, an alcoholic beverage can work against you. “One of the most profound systems we have for heat loss or conservation is the simple dilation and constriction of our skin,” says Minson. When you’re cold, the skin constricts, sending blood flow back toward the core. When you’re hot, blood vessels just under the skin dilate, releasing heat. Unfortunately, alcohol is a vasodilator. When you first start to drink, you may feel warmer thanks to the blood rising to the surface. But it won’t last long — all that escaping heat through conduction and convection will cool you off quick. That’s not to say your choices are to abstain or freeze — but if you’re going to be drinking, try to make up for that heat loss by raising your metabolic rate. “Move a little more,” says Minson. “If you start feeling cold, just get up and do some squats. You may look ridiculous, but you’ll stay warmer.” The other thing you can do to hack your metabolic rate is with what you eat and drink, Minson adds. “More protein and fats will raise your metabolic rate.” In other words, don’t skimp on the charcuterie. You can train your brain to tolerate the cold Thermoregulation is a physical science, but there’s a major psychological component to staying warm, too. Humans are super adaptable creatures, and as the winter wears on, we really do grow accustomed to being cold. “In November when it’s 42 degrees outside, you’re going to feel chilly because you’re not used to it,” Minson says. “When March rolls around, you’re used to it. It might be the same temperature, but your brain adapts.” Hence my misery during that South Dakota September snow: My brain wasn’t in winter mode yet. You can make that adaptation happen sooner, he adds, through what basically amounts to exposure therapy. Bundle up a little less than you think you really need to, and force yourself to power through the discomfort. A caveat: No one’s suggesting you go out and get frostbite in the name of brain hacking. If you start to shiver, your core temperature is actually dropping and it’s time to add another layer. If you can’t get back up to feeling comfortable fairly quickly, it’s probably time to dissolve the hangout and call it a night. But within reason, Minson says, it’s okay to embrace the cold. “It’s about being in a cold environment and being like, ‘Okay, I’m aware of the cold but I don’t feel cold.’ It’s losing the fear and realizing you can handle it. We really can hack our brains and feel more comfortable in the cold.”
https://elemental.medium.com/the-scientific-guide-to-not-freezing-your-ass-off-this-winter-27620cb5b47
['Kate Morgan']
2020-12-14 06:32:49.746000+00:00
['Winter', 'Outdoors', 'Coronavirus', 'Pandemic', 'Weather']
Title Scientific Guide Freezing Ass WinterContent Two Septembers ago South Dakota snowstorm caught guard packed light — light — trip Black Hills participate Buffalo Roundup Arts Festival Custer State Park Huddled bed pickup truck middle thundering herd buffalo wearing every article clothing still cold way bone swore I’d never unprepared condition winter ongoing pandemic make unsafe gather indoors may find spending time outside want socializing braving low temperature lessthanideal weather many part country may plan race across frigid prairie chasing buffalo even you’re backyard beer friend concept apply Preparation key clothing choice allimportant understanding science warmth help hang onto body constantly producing — losing — heat “For u metabolism cell alive take energy” say Christopher Minson PhD thermal physiologist University Oregon “The byproduct metabolism heat that’s body temperature” body constantly generates heat also need get rid overheat three primary way happens conduction convection evaporation Conduction happens contact surface body temperature higher thing around you’ll lose heat touch thing “Different material different conductivity” Minson say “Metal instance really conduct lot You’ll lose lot heat metal surface v plastic something else Even wood much better conducting heat” Here’s first piece advice you’re planning outdoor event consider furniture skip metal folding chair favor seat made wood fabric plastic conduction decrease guest automatically stay warmer convection “If you’re standing current — whether it’s water wind that’s convection” say Minson air move around pull heat away body calmer current warmer you’ll stay one bit complicated context safe socializing come reducing Covid19 transmission airflow friend rather erecting tent isn’t guaranteed lower risk infection find way keep wind whipping away warmth try umbrella — act almost like personal enclosure — reduce heat loss convection accessory like tightknit balaclava wind shell jacket cut wind don’t forget foot “Running shoe wellventilated foot doesn’t overheat” Minson say great you’re exercising le helpful you’re hanging “Wear something cover shoe keep air getting choose shoe keep wind Leather good choice” thinking wear keep mind third possibly important method heat loss evaporation really determines comfortable you’ll isn’t layer wear they’re made what’s body constantly producing moisture evaporation moisture foundation human thermoregulation system Sweat evaporating skin cool summer winter cooling effect lot le desirable still need get rid dampness totally impermeable outer layer might keep wind could also lock moisture that’s necessarily good thing “The idea need balance especially you’re moving around generating heat” Minson say “You need ability lose water vapor skin wear plastic bag there’s ability humidity escape body” you’re moving lot generating lot warmth could start overheat stop moving moisture eventually cool making much colder say ventilation vital Trapping warm air you’re preparing endure low temperature tempting layer layer imagining clothes wear warmer you’ll attest shivering day South Dakota despite wearing everything suitcase that’s always true Technically really determines comfortable you’ll isn’t layer wear they’re made what’s “Fundamentally keep warm air” explains Michael Cattanach global product director Polartec Massachusettsbased company make synthetic thermal fabric outdoor apparel “It’s keeping pocket air next body using fabric trap air keep layer air together” “Fundamentally keep warm air” Remember body constantly giving heat wear clothes trap still air moisture skin air absorbs heat stay warmer Leggings thermal grid pattern example leave room air something like skintight spandex therefore keep warmer heated air remain around body much longer insulate like insulation use house prevents heat loss clothing creates barrier keep heat escaping art layering quality quantity Cattanach’s formula foolproof clothing system includes three layer “something next skin manage sweat moisture second layer that’s insulating something weather protection surface” base layer arguably important three fitted constricting think may break sweat plan sitting fire may eventually make overly warm instance go fabric that’s moisturewicking synthetic syntheticnatural blend good choice “Wool original smart fiber absorb release moisture Since basically beginning time it’s existed keep mammal warm cool safe comfortable” term allnatural fiber cotton comfortable long skin stay dry won’t favor sweat create moisture Wool hand keep warm even sweat bit release moisture prevent overheating “It’s original smart fiber absorb release moisture Since basically beginning time it’s existed keep mammal warm cool safe comfortable” say Clara Parkes New York Times bestselling author Knitlandia Vanishing Fleece point we’re mammal wool’s natural choice keep u warm thin soft Merino wool base layer help keep comfortable without overheating thicker wool sweater top let use principle trapped air advantage “Wool great insulation fiber 18 20 curvature per inch” Parkes explains “They’re like coiled spring always pushing away one another creating space thicker coil still air trapped fabric higher insulation ability be” Thick rough sweater especially warm fiber “jumbled chaotic” holding lot air say Parkes le prickly layer skin Merino work well it’s highcurvature fiber trapping disproportionate amount air despite thinness potential drawback wool — reason many lean toward newer synthetic fleece — it’s heavy daunting wash latter shouldn’t stop buying impossibly warm sweater say Parkes “If you’re nervous hand wash treat like would hair” say “It’s chemically identical sink full warm water quick dip air dry takes” matter almost much wear called guide staying warm enjoying backyard beer actually come staying warm alcoholic beverage work “One profound system heat loss conservation simple dilation constriction skin” say Minson you’re cold skin constricts sending blood flow back toward core you’re hot blood vessel skin dilate releasing heat Unfortunately alcohol vasodilator first start drink may feel warmer thanks blood rising surface won’t last long — escaping heat conduction convection cool quick That’s say choice abstain freeze — you’re going drinking try make heat loss raising metabolic rate “Move little more” say Minson “If start feeling cold get squat may look ridiculous you’ll stay warmer” thing hack metabolic rate eat drink Minson add “More protein fat raise metabolic rate” word don’t skimp charcuterie train brain tolerate cold Thermoregulation physical science there’s major psychological component staying warm Humans super adaptable creature winter wear really grow accustomed cold “In November it’s 42 degree outside you’re going feel chilly you’re used it” Minson say “When March roll around you’re used might temperature brain adapts” Hence misery South Dakota September snow brain wasn’t winter mode yet make adaptation happen sooner add basically amount exposure therapy Bundle little le think really need force power discomfort caveat one’s suggesting go get frostbite name brain hacking start shiver core temperature actually dropping it’s time add another layer can’t get back feeling comfortable fairly quickly it’s probably time dissolve hangout call night within reason Minson say it’s okay embrace cold “It’s cold environment like ‘Okay I’m aware cold don’t feel cold’ It’s losing fear realizing handle really hack brain feel comfortable cold”Tags Winter Outdoors Coronavirus Pandemic Weather
1,344
Predicting Heart Disease With a Neural Network
Predicting Heart Disease With a Neural Network Predict the probability of getting heart disease with a Python neural network Photo by Kendal on Unsplash In these times of coronavirus, many hospitals are short-staffed and in dire straits. A lack of staff causes many problems. Not all patients can be treated, doctors are excessively tired and risk not taking appropriate precautions. Once a doctor gets sick, staff reductions accelerate, and so on. This leads us to consider the importance of technology in the medical field. One of the most promising branches of technology today is artificial intelligence (AI). Today, we’re going to talk about implementing artificial neural networks in the field of medicine. More specifically, we will create a neural network that predicts the probability of having heart disease. Disclaimer: If you look at my previous work, you will see that this not the first time I have written about AI for medical purposes. I want to be clear that this is not a scientifically rigorous study — it’s just a way of implementing AI to solve real-world problems. Having said that, let’s start!
https://medium.com/better-programming/predicting-heart-disease-with-a-neural-network-a48d2ce59bc5
['Tommaso De Ponti']
2020-04-24 16:17:59.096000+00:00
['Programming', 'AI', 'Neural Networks', 'Python', 'Machine Learning']
Title Predicting Heart Disease Neural NetworkContent Predicting Heart Disease Neural Network Predict probability getting heart disease Python neural network Photo Kendal Unsplash time coronavirus many hospital shortstaffed dire strait lack staff cause many problem patient treated doctor excessively tired risk taking appropriate precaution doctor get sick staff reduction accelerate lead u consider importance technology medical field One promising branch technology today artificial intelligence AI Today we’re going talk implementing artificial neural network field medicine specifically create neural network predicts probability heart disease Disclaimer look previous work see first time written AI medical purpose want clear scientifically rigorous study — it’s way implementing AI solve realworld problem said let’s startTags Programming AI Neural Networks Python Machine Learning
1,345
Code Samples from TFCO — TensorFlow Constrained Optimization
Includes Code Samples from TFCO — TensorFlow Constrained Optimization The above article models business functions which is equivalent to modelling the conceptual structure of the system. It is always good to model the business process (BPMN) because that is the standardised way of modeling the system. Business Functions model the category of operations of the system routine. In order to work with Deep Learning Libraries, I have created an article that showcases about TensorFlow Constrained Optimization (TFCO) which works similar to Boxing and Unboxing technique as explained above in the article. In this example, I have provided a class which assigns the responsibilities to the TensorFlow operations defined in the example. The example here uses Recall constraints which recalls the data objects based on a Hinge Loss. A Recall is a metric that is equivalent to TPR (True Positive Rate). Recalling a data object implies assessing the correctness measure of the object’s existence. The constraint optimization problem is defined within a Class using an Object Oriented Programming fashion. Each constraint of the class is defined in a method as a tensor totally relying on Object Constraint Language (OCL) like syntax. Implying, each method returns a tensor of unit variable for single constraint. The TFCO process takes in one input data point similar to the two data points structure taken by a DEA model. The Data Management Units (DMUs) are similar to weights accepted the TFCO in this model but there is a Characteristic Loss Function as explained below. Google Research’s TensorFlow Constrained Optimization is a Python Library for performing Machine Learning based Optimizations. In this article, I have taken an example from Recall constraint, which characterises features in the data and minimizes the rejection of objects represented in the data. Hinge Loss Hinge Loss is represented as [0, 1 — y(x)], this implies in the entropy calculation, it does not consider those labels which are true predictions whereas those objects which are classified false are considered for false. A minimization algorithm is performed to reduce the false positives. The problem is rate minimization problem, where the constraints are defined and the hinge loss is defined. Defining the Objective # we use hinge loss because we need to capture those that are not classified correctly and minimize that loss def objective(self): predictions = self._predictions if callable(predictions): predictions = predictions() return tf.compat.v1.losses.hinge_loss(labels=self._labels, logits=predictions) The objective here is hinge loss with labels representing the true positives and false positives. Defining the Constraints The constraints are defined such that the recall value is at least the lower bound which is mentioned in the problem. In Convex Optimization Case, the constraints are represented as ≥ 0. def constraints(self): # In eager mode, the predictions must be a nullary function returning a # Tensor. In graph mode, they could be either such a function, or a Tensor # itself. predictions = self._predictions if callable(predictions): predictions = predictions() # Recall that the labels are binary (0 or 1). true_positives = self._labels * tf.cast(predictions > 0, dtype=tf.float32) true_positive_count = tf.reduce_sum(true_positives) recall = true_positive_count / self._positive_count # The constraint is (recall >= self._recall_lower_bound), which we convert # to (self._recall_lower_bound - recall <= 0) because # ConstrainedMinimizationProblems must always provide their constraints in # the form (tensor <= 0). # # The result of this function should be a tensor, with each element being # a quantity that is constrained to be non-positive. We only have one # constraint, so we return a one-element tensor. return self._recall_lower_bound - recall def proxy_constraints(self): # In eager mode, the predictions must be a nullary function returning a # Tensor. In graph mode, they could be either such a function, or a Tensor # itself. predictions = self._predictions if callable(predictions): predictions = predictions() # Use 1 - hinge since we're SUBTRACTING recall in the constraint function, # and we want the proxy constraint function to be convex. Recall that the # labels are binary (0 or 1). true_positives = self._labels * tf.minimum(1.0, predictions) true_positive_count = tf.reduce_sum(true_positives) recall = true_positive_count / self._positive_count # Please see the corresponding comment in the constraints property. return self._recall_lower_bound - recall The Full Example Problem of Recall Constraint class ExampleProblem(tfco.ConstrainedMinimizationProblem): def __init__(self, labels, predictions, recall_lower_bound): self._labels = labels self._predictions = predictions self._recall_lower_bound = recall_lower_bound # The number of positively-labeled examples. self._positive_count = tf.reduce_sum(self._labels) @property def num_constraints(self): return 1 # we use hinge loss because we need to capture those that are not classified correctly and minimize that loss def objective(self): pass def constraints(self): pass def proxy_constraints(self): pass problem = ExampleProblem( labels=constant_labels, predictions=predictions, recall_lower_bound=recall_lower_bound, ) Visualization of Constant Input Data for which the Recall is calculated *Please Note: In this case the problem is originating from the data Recall Calculated using Hinge Loss for the Provided Input Data Distribution Constrained average hinge loss = 1.185147 Constrained recall = 0.845000 In the article shown above we do not have ever changing data, using existing data we calculate the input data weights in order to predict those samples that produce lowest recall. The predictions from one constrained optimization model is sent to the next model that runs on different loss. This way we can model how those two objects communicate each other. I’ll leave it up to you guys to decide if Azure ML Studio or AWS Deep Racer can be used to build Machine Learning Models using these ideas. References
https://medium.com/nerd-for-tech/code-samples-from-tfco-tensorflow-constrained-optimization-17acdf4913e
['Aswin Vijayakumar']
2020-11-04 16:03:26.433000+00:00
['Artificial Intelligence', 'Python', 'TensorFlow', 'Constrained Optimization', 'Machine Learning']
Title Code Samples TFCO — TensorFlow Constrained OptimizationContent Includes Code Samples TFCO — TensorFlow Constrained Optimization article model business function equivalent modelling conceptual structure system always good model business process BPMN standardised way modeling system Business Functions model category operation system routine order work Deep Learning Libraries created article showcase TensorFlow Constrained Optimization TFCO work similar Boxing Unboxing technique explained article example provided class assigns responsibility TensorFlow operation defined example example us Recall constraint recall data object based Hinge Loss Recall metric equivalent TPR True Positive Rate Recalling data object implies assessing correctness measure object’s existence constraint optimization problem defined within Class using Object Oriented Programming fashion constraint class defined method tensor totally relying Object Constraint Language OCL like syntax Implying method return tensor unit variable single constraint TFCO process take one input data point similar two data point structure taken DEA model Data Management Units DMUs similar weight accepted TFCO model Characteristic Loss Function explained Google Research’s TensorFlow Constrained Optimization Python Library performing Machine Learning based Optimizations article taken example Recall constraint characterises feature data minimizes rejection object represented data Hinge Loss Hinge Loss represented 0 1 — yx implies entropy calculation consider label true prediction whereas object classified false considered false minimization algorithm performed reduce false positive problem rate minimization problem constraint defined hinge loss defined Defining Objective use hinge loss need capture classified correctly minimize loss def objectiveself prediction selfpredictions callablepredictions prediction prediction return tfcompatv1losseshingelosslabelsselflabels logitspredictions objective hinge loss label representing true positive false positive Defining Constraints constraint defined recall value least lower bound mentioned problem Convex Optimization Case constraint represented ≥ 0 def constraintsself eager mode prediction must nullary function returning Tensor graph mode could either function Tensor prediction selfpredictions callablepredictions prediction prediction Recall label binary 0 1 truepositives selflabels tfcastpredictions 0 dtypetffloat32 truepositivecount tfreducesumtruepositives recall truepositivecount selfpositivecount constraint recall selfrecalllowerbound convert selfrecalllowerbound recall 0 ConstrainedMinimizationProblems must always provide constraint form tensor 0 result function tensor element quantity constrained nonpositive one constraint return oneelement tensor return selfrecalllowerbound recall def proxyconstraintsself eager mode prediction must nullary function returning Tensor graph mode could either function Tensor prediction selfpredictions callablepredictions prediction prediction Use 1 hinge since SUBTRACTING recall constraint function want proxy constraint function convex Recall label binary 0 1 truepositives selflabels tfminimum10 prediction truepositivecount tfreducesumtruepositives recall truepositivecount selfpositivecount Please see corresponding comment constraint property return selfrecalllowerbound recall Full Example Problem Recall Constraint class ExampleProblemtfcoConstrainedMinimizationProblem def initself label prediction recalllowerbound selflabels label selfpredictions prediction selfrecalllowerbound recalllowerbound number positivelylabeled example selfpositivecount tfreducesumselflabels property def numconstraintsself return 1 use hinge loss need capture classified correctly minimize loss def objectiveself pas def constraintsself pas def proxyconstraintsself pas problem ExampleProblem labelsconstantlabels predictionspredictions recalllowerboundrecalllowerbound Visualization Constant Input Data Recall calculated Please Note case problem originating data Recall Calculated using Hinge Loss Provided Input Data Distribution Constrained average hinge loss 1185147 Constrained recall 0845000 article shown ever changing data using existing data calculate input data weight order predict sample produce lowest recall prediction one constrained optimization model sent next model run different loss way model two object communicate I’ll leave guy decide Azure ML Studio AWS Deep Racer used build Machine Learning Models using idea ReferencesTags Artificial Intelligence Python TensorFlow Constrained Optimization Machine Learning
1,346
Entrepreneurs: If You’re Looking for Podcasts in 2020, Pick These
Stuff You Should Know For random knowledge, SYSK is the place to go. This award-winning podcast comes from the writers over at HowStuffWorks and is consistently ranked in the top charts. Every Tuesday, Thursday, and Saturday, Josh Clark and Charles W. “Chuck” Bryant educate listeners on different topics. No matter the topic, they always cross-connect with pop culture. Want to learn how going to the moon works? How yawning works? What prison food is like? After lots of time listening, you’ll end up feeling like you’ve completed a degree in Out-Of-Left-Field Things. Business Wars There are fascinating stories behind many of the household-name companies and products that we all know. Business Wars host David Brown takes you through the audible journeys that brought many of these companies and products to what they are today. Grasp the details of how Evan Spiegel grew Snapchat to go head-to-head with Facebook, or listen to the battle in the chocolate market between Hershey and Mars. The use of great sound effects and creative narration by this Wondery podcast makes the listening experience comparable to watching a documentary. Reply All For tales that keep you listening, tune in to Reply All. Focused on how people shape the internet and how the internet shapes people, hosts PJ Vogt and Alex Goldman have lively discussions about random yet intriguing situations and dig deep. One episode, The Snapchat Thief, is about how the identity of a Snapchat account hacker was investigated and (spoiler alert) eventually found. Another episode, called Adam Pisces and the $2 Coke, is about the occurrence of a flood of strange Domino’s Pizza orders. Each segment is about 30 to 45 minutes long, a good length for the average commute. How I Built This with Guy Raz Chances are, you’ve at least heard about HIBT. Produced by NPR, this is a podcast about the stories behind the movements built by entrepreneurs, innovators, and idealists. Each weekly episode is 30 to 60 minutes of conversation between host Guy Raz and a notable guest. You can hear about the origins of Atari (and Chuck E. Cheese) from Nolan Bushnell himself, and about how Sara Blakely founded Spanx. You can listen to Drybar’s Alli Webb, or to Haim Saban’s story about Power Rangers. If you want to learn about the in-depth process and interesting hurdles that go hand-in-hand with groundbreaking success, you’ll enjoy this. Every Little Thing Similar to Stuff You Should Know, ELT is a goldmine for random facts. As the host, Flora Lichtman takes you through some of the most pressing questions out there. How are new stamp designs created? What are dogs saying when they bark, and why do auctioneers talk so fast? How do you make that pumpkin spice flavor we all know? This podcast also has a wide variety of invited guest speakers. In one segment you can hear from an airline pilot, and another you can learn from a microbiologist. If you’re someone who likes to learn something new every day, these segments have you covered. Syntax.fm If you happen to be a hardcore tech geek or want to get accustomed to tech lingo, you’ll love Syntax.fm. The hosts, Scott Tolinski and Wes Bos, teach web development for a living, so they have a wide range of tech fluency, from JavaScript to CSS to React to WordPress. Although niche, these are topics that influence the work of many. They have unique segments like the Spooky Stories episodes, through which you can hear about moderately-disastrous tech-related incidents. They also discuss more general topics, like design foundations for developers and how to get better at solving problems. Episodes are light-hearted and full of awesome info. The Pitch If you’re a fan of Shark Tank, you will enjoy tuning in to The Pitch. The show, hosted by Josh Muccio, features entrepreneurs who are in need of venture funding and pitch investors, live. The goal is to give listeners an authentic look into what it’s really like to get involved with venture capital. You’ll hear from one entrepreneur per episode, so you’ll get into the details. You’ll hear stories about new businesses, post-pitch pivoting, and will even get to follow folks through their journey months after their pitch.
https://medium.com/swlh/entrepreneurs-if-youre-looking-for-podcasts-in-2020-pick-these-15e4b613006b
['Ben Scheer']
2020-01-05 10:33:13.551000+00:00
['Business', 'Startup', 'Podcast', 'Technology', 'Productivity']
Title Entrepreneurs You’re Looking Podcasts 2020 Pick TheseContent Stuff Know random knowledge SYSK place go awardwinning podcast come writer HowStuffWorks consistently ranked top chart Every Tuesday Thursday Saturday Josh Clark Charles W “Chuck” Bryant educate listener different topic matter topic always crossconnect pop culture Want learn going moon work yawning work prison food like lot time listening you’ll end feeling like you’ve completed degree OutOfLeftField Things Business Wars fascinating story behind many householdname company product know Business Wars host David Brown take audible journey brought many company product today Grasp detail Evan Spiegel grew Snapchat go headtohead Facebook listen battle chocolate market Hershey Mars use great sound effect creative narration Wondery podcast make listening experience comparable watching documentary Reply tale keep listening tune Reply Focused people shape internet internet shape people host PJ Vogt Alex Goldman lively discussion random yet intriguing situation dig deep One episode Snapchat Thief identity Snapchat account hacker investigated spoiler alert eventually found Another episode called Adam Pisces 2 Coke occurrence flood strange Domino’s Pizza order segment 30 45 minute long good length average commute Built Guy Raz Chances you’ve least heard HIBT Produced NPR podcast story behind movement built entrepreneur innovator idealist weekly episode 30 60 minute conversation host Guy Raz notable guest hear origin Atari Chuck E Cheese Nolan Bushnell Sara Blakely founded Spanx listen Drybar’s Alli Webb Haim Saban’s story Power Rangers want learn indepth process interesting hurdle go handinhand groundbreaking success you’ll enjoy Every Little Thing Similar Stuff Know ELT goldmine random fact host Flora Lichtman take pressing question new stamp design created dog saying bark auctioneer talk fast make pumpkin spice flavor know podcast also wide variety invited guest speaker one segment hear airline pilot another learn microbiologist you’re someone like learn something new every day segment covered Syntaxfm happen hardcore tech geek want get accustomed tech lingo you’ll love Syntaxfm host Scott Tolinski Wes Bos teach web development living wide range tech fluency JavaScript CSS React WordPress Although niche topic influence work many unique segment like Spooky Stories episode hear moderatelydisastrous techrelated incident also discus general topic like design foundation developer get better solving problem Episodes lighthearted full awesome info Pitch you’re fan Shark Tank enjoy tuning Pitch show hosted Josh Muccio feature entrepreneur need venture funding pitch investor live goal give listener authentic look it’s really like get involved venture capital You’ll hear one entrepreneur per episode you’ll get detail You’ll hear story new business postpitch pivoting even get follow folk journey month pitchTags Business Startup Podcast Technology Productivity
1,347
My Life Without Bread
The changes have been so profound, I feel like a completely different person. It makes me wonder if most of the modern problems that people suffer from aren’t caused by our diets. Mine certainly was. For three years now, I’ve eaten like this and here’s what’s happened: I’ve lost weight. In total, I’ve lost about 40 lbs. I could probably lose a little more, but even if I don’t, I’m still way better off than I was. I’ve maintained it. It’s been maintainable because, after a while, my cravings for these foods disappeared. It’s the norm for me now. It’s not something I’m “sticking to,” it’s just how I eat. Photo by Author: You can see how my face was bloated and puffy. My joints no longer hurt. The joint of my right middle finger used to be enlarged. I thought it was the onset of arthritis. At night my right hand would stiffen into a painful claw that I’d have to work to loosen every morning. I couldn’t wear my wedding ring, not because my finger was too fat, but because it wouldn’t go over my knuckle. I also had pain in my shoulders that made taking a sweater off over my head difficult and my knees ached, just walking up the stairs. I took Advil daily, to combat the pain. All of that pain and inflammation has disappeared and only returns when I eat sugar. I can run up the stairs. Now I can easily pop up and down the stairs instead of lumbering, huffing, and puffing. Which is great considering that I make my living running after toddlers. My mood swings have disappeared. I used to get quite irritated over small things. Now my moods are stable. I’m more easygoing. I am calmer and more approachable. I’m sure everyone is thankful for that. I look healthier and younger and I’m starting to like the way I look for the first time in my life. In the last three years, since I’ve become genuinely healthier, I’ve finally begun to like the way I look. I’m not perfect, but when I look in the mirror, I like what I see. I feel like I’m 35-years-old. I definitely don’t feel “my age.” When I think about how old I am, from the inside out, I feel about the same as I did when I was 35. Possibly better, because I had an undiagnosed heart condition and I was always fatigued back then. I have mental energy. I have the mental energy to get everything done in my day. I can concentrate better, remember things easier and I don’t need a nap every afternoon.
https://medium.com/illumination/my-life-without-bread-f791f18cc2a9
['Erin King']
2020-08-15 18:32:55.329000+00:00
['Diet', 'Food', 'Health', 'Self', 'Books']
Title Life Without BreadContent change profound feel like completely different person make wonder modern problem people suffer aren’t caused diet Mine certainly three year I’ve eaten like here’s what’s happened I’ve lost weight total I’ve lost 40 lb could probably lose little even don’t I’m still way better I’ve maintained It’s maintainable craving food disappeared It’s norm It’s something I’m “sticking to” it’s eat Photo Author see face bloated puffy joint longer hurt joint right middle finger used enlarged thought onset arthritis night right hand would stiffen painful claw I’d work loosen every morning couldn’t wear wedding ring finger fat wouldn’t go knuckle also pain shoulder made taking sweater head difficult knee ached walking stair took Advil daily combat pain pain inflammation disappeared return eat sugar run stair easily pop stair instead lumbering huffing puffing great considering make living running toddler mood swing disappeared used get quite irritated small thing mood stable I’m easygoing calmer approachable I’m sure everyone thankful look healthier younger I’m starting like way look first time life last three year since I’ve become genuinely healthier I’ve finally begun like way look I’m perfect look mirror like see feel like I’m 35yearsold definitely don’t feel “my age” think old inside feel 35 Possibly better undiagnosed heart condition always fatigued back mental energy mental energy get everything done day concentrate better remember thing easier don’t need nap every afternoonTags Diet Food Health Self Books
1,348
Python Dash Data Visualization Dashboard Web App Template
In this tutorial, I will share a sample template for the data visualization web app dashboard using Python Dash which will look like below. This is a sample template that can be used or extended to create dashboards quickly using Python Dash and connecting the correct data sources. Prior background with Python and Dash will be good to grasp the article. I will run through the code in the article and share the link of GitHub code for anyone to use. import dash import dash_bootstrap_components as dbc import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State import plotly.express as px Import the relevant libraries. pip install any missing libraries. The template uses Dash bootstrap, Dash HTML and Dash core components. ‘dbc’ are dash boostrap components, ‘dcc’ are dash core components and ‘html’ are dash html components The layout consists of the sidebar and main content page The app is initialized as: app = dash.Dash(external_stylesheets=[dbc.themes.BOOTSTRAP]) app.layout = html.Div([sidebar, content]) Sidebar consists of Parameters header and controls. sidebar = html.Div( [ html.H2('Parameters', style=TEXT_STYLE), html.Hr(), controls ], style=SIDEBAR_STYLE, ) Below are all the controls of the sidebar which consist of a dropdown, range slider, checklist, and radio buttons. One can extend to add their own. controls = dbc.FormGroup( [ html.P('Dropdown', style={ 'textAlign': 'center' }), dcc.Dropdown( id='dropdown', options=[{ 'label': 'Value One', 'value': 'value1' }, { 'label': 'Value Two', 'value': 'value2' }, { 'label': 'Value Three', 'value': 'value3' } ], value=['value1'], # default value multi=True ), html.Br(), html.P('Range Slider', style={ 'textAlign': 'center' }), dcc.RangeSlider( id='range_slider', min=0, max=20, step=0.5, value=[5, 15] ), html.P('Check Box', style={ 'textAlign': 'center' }), dbc.Card([dbc.Checklist( id='check_list', options=[{ 'label': 'Value One', 'value': 'value1' }, { 'label': 'Value Two', 'value': 'value2' }, { 'label': 'Value Three', 'value': 'value3' } ], value=['value1', 'value2'], inline=True )]), html.Br(), html.P('Radio Items', style={ 'textAlign': 'center' }), dbc.Card([dbc.RadioItems( id='radio_items', options=[{ 'label': 'Value One', 'value': 'value1' }, { 'label': 'Value Two', 'value': 'value2' }, { 'label': 'Value Three', 'value': 'value3' } ], value='value1', style={ 'margin': 'auto' } )]), html.Br(), dbc.Button( id='submit_button', n_clicks=0, children='Submit', color='primary', block=True ), ] ) I am using Dash Boostrap Layout for the layout of the main content page https://dash-bootstrap-components.opensource.faculty.ai/docs/components/layout/ The main content page has a header and then divided into 4 rows. The first row has 4 cards, the second row has 3 figures, the third row has one figure and the fourth row has 2 figures. content = html.Div( [ html.H2('Analytics Dashboard Template', style=TEXT_STYLE), html.Hr(), content_first_row, content_second_row, content_third_row, content_fourth_row ], style=CONTENT_STYLE ) Following is the first row containing 4 cards. content_first_row = dbc.Row([ dbc.Col( dbc.Card( [ dbc.CardBody( [ html.H4(id='card_title_1', children=['Card Title 1'], className='card-title', style=CARD_TEXT_STYLE), html.P(id='card_text_1', children=['Sample text.'], style=CARD_TEXT_STYLE), ] ) ] ), md=3 ), dbc.Col( dbc.Card( [ dbc.CardBody( [ html.H4('Card Title 2', className='card-title', style=CARD_TEXT_STYLE), html.P('Sample text.', style=CARD_TEXT_STYLE), ] ), ] ), md=3 ), dbc.Col( dbc.Card( [ dbc.CardBody( [ html.H4('Card Title 3', className='card-title', style=CARD_TEXT_STYLE), html.P('Sample text.', style=CARD_TEXT_STYLE), ] ), ] ), md=3 ), dbc.Col( dbc.Card( [ dbc.CardBody( [ html.H4('Card Title 4', className='card-title', style=CARD_TEXT_STYLE), html.P('Sample text.', style=CARD_TEXT_STYLE), ] ), ] ), md=3 ) ]) More reference on dash cards can be found here The following is the second row have 2 columns with figures. content_second_row = dbc.Row( [ dbc.Col( dcc.Graph(id='graph_1'), md=4 ), dbc.Col( dcc.Graph(id='graph_2'), md=4 ), dbc.Col( dcc.Graph(id='graph_3'), md=4 ) ] ) Following is the third row with one column with a figure. content_third_row = dbc.Row( [ dbc.Col( dcc.Graph(id='graph_4'), md=12, ) ] ) The following is the final row with two columns with figures. content_fourth_row = dbc.Row( [ dbc.Col( dcc.Graph(id='graph_5'), md=6 ), dbc.Col( dcc.Graph(id='graph_6'), md=6 ) ] ) Example of a sample callback for a graph. This can be extended to use data sources and figures of anyone’s choice. @app.callback( Output('graph_1', 'figure'), [Input('submit_button', 'n_clicks')], [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'), State('radio_items', 'value') ]) def update_graph_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value): print(n_clicks) print(dropdown_value) print(range_slider_value) print(check_list_value) print(radio_items_value) fig = { 'data': [{ 'x': [1, 2, 3], 'y': [3, 4, 5] }] } return fig Example of a sample callback for Card. This can be extended to have dynamic text displayed on cards. @app.callback( Output('card_title_1', 'children'), [Input('submit_button', 'n_clicks')], [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'), State('radio_items', 'value') ]) def update_card_title_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value): print(n_clicks) print(dropdown_value) print(range_slider_value) print(check_list_value) print(radio_items_value) # Sample data and figure return 'Card Tile 1 change by call back' @app.callback( Output('card_text_1', 'children'), [Input('submit_button', 'n_clicks')], [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'), State('radio_items', 'value') ]) def update_card_text_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value): print(n_clicks) print(dropdown_value) print(range_slider_value) print(check_list_value) print(radio_items_value) # Sample data and figure return 'Card text change by call back' CSS for the components. The sidebar is position fixed ( scrolling the page does not change the sidebar). Width, margin-right, and margin-left are in terms of percentage in order for the webpage to dynamically change size according to the size. # the style arguments for the sidebar. SIDEBAR_STYLE = { 'position': 'fixed', 'top': 0, 'left': 0, 'bottom': 0, 'width': '20%', 'padding': '20px 10px', 'background-color': '#f8f9fa' } # the style arguments for the main content page. CONTENT_STYLE = { 'margin-left': '25%', 'margin-right': '5%', 'top': 0, 'padding': '20px 10px' } TEXT_STYLE = { 'textAlign': 'center', 'color': '#191970' } CARD_TEXT_STYLE = { 'textAlign': 'center', 'color': '#0074D9' } Github repository for the template source code. You can find the dash_template.py file in the ‘src’ folder. Run this and can check the web app page on http://127.0.0.1:8085/ Few reference links https://plotly.com/python/plotly-express/
https://medium.com/analytics-vidhya/python-dash-data-visualization-dashboard-template-6a5bff3c2b76
['Ishan Mehta']
2020-06-11 16:04:20.493000+00:00
['Plotly', 'Python', 'Data Science', 'Dashboard Design', 'Data Visualization']
Title Python Dash Data Visualization Dashboard Web App TemplateContent tutorial share sample template data visualization web app dashboard using Python Dash look like sample template used extended create dashboard quickly using Python Dash connecting correct data source Prior background Python Dash good grasp article run code article share link GitHub code anyone use import dash import dashbootstrapcomponents dbc import dashcorecomponents dcc import dashhtmlcomponents html dashdependencies import Input Output State import plotlyexpress px Import relevant library pip install missing library template us Dash bootstrap Dash HTML Dash core component ‘dbc’ dash boostrap component ‘dcc’ dash core component ‘html’ dash html component layout consists sidebar main content page app initialized app dashDashexternalstylesheetsdbcthemesBOOTSTRAP applayout htmlDivsidebar content Sidebar consists Parameters header control sidebar htmlDiv htmlH2Parameters styleTEXTSTYLE htmlHr control styleSIDEBARSTYLE control sidebar consist dropdown range slider checklist radio button One extend add control dbcFormGroup htmlPDropdown style textAlign center dccDropdown iddropdown option label Value One value value1 label Value Two value value2 label Value Three value value3 valuevalue1 default value multiTrue htmlBr htmlPRange Slider style textAlign center dccRangeSlider idrangeslider min0 max20 step05 value5 15 htmlPCheck Box style textAlign center dbcCarddbcChecklist idchecklist option label Value One value value1 label Value Two value value2 label Value Three value value3 valuevalue1 value2 inlineTrue htmlBr htmlPRadio Items style textAlign center dbcCarddbcRadioItems idradioitems option label Value One value value1 label Value Two value value2 label Value Three value value3 valuevalue1 style margin auto htmlBr dbcButton idsubmitbutton nclicks0 childrenSubmit colorprimary blockTrue using Dash Boostrap Layout layout main content page httpsdashbootstrapcomponentsopensourcefacultyaidocscomponentslayout main content page header divided 4 row first row 4 card second row 3 figure third row one figure fourth row 2 figure content htmlDiv htmlH2Analytics Dashboard Template styleTEXTSTYLE htmlHr contentfirstrow contentsecondrow contentthirdrow contentfourthrow styleCONTENTSTYLE Following first row containing 4 card contentfirstrow dbcRow dbcCol dbcCard dbcCardBody htmlH4idcardtitle1 childrenCard Title 1 classNamecardtitle styleCARDTEXTSTYLE htmlPidcardtext1 childrenSample text styleCARDTEXTSTYLE md3 dbcCol dbcCard dbcCardBody htmlH4Card Title 2 classNamecardtitle styleCARDTEXTSTYLE htmlPSample text styleCARDTEXTSTYLE md3 dbcCol dbcCard dbcCardBody htmlH4Card Title 3 classNamecardtitle styleCARDTEXTSTYLE htmlPSample text styleCARDTEXTSTYLE md3 dbcCol dbcCard dbcCardBody htmlH4Card Title 4 classNamecardtitle styleCARDTEXTSTYLE htmlPSample text styleCARDTEXTSTYLE md3 reference dash card found following second row 2 column figure contentsecondrow dbcRow dbcCol dccGraphidgraph1 md4 dbcCol dccGraphidgraph2 md4 dbcCol dccGraphidgraph3 md4 Following third row one column figure contentthirdrow dbcRow dbcCol dccGraphidgraph4 md12 following final row two column figure contentfourthrow dbcRow dbcCol dccGraphidgraph5 md6 dbcCol dccGraphidgraph6 md6 Example sample callback graph extended use data source figure anyone’s choice appcallback Outputgraph1 figure Inputsubmitbutton nclicks Statedropdown value Staterangeslider value Statechecklist value Stateradioitems value def updategraph1nclicks dropdownvalue rangeslidervalue checklistvalue radioitemsvalue printnclicks printdropdownvalue printrangeslidervalue printchecklistvalue printradioitemsvalue fig data x 1 2 3 3 4 5 return fig Example sample callback Card extended dynamic text displayed card appcallback Outputcardtitle1 child Inputsubmitbutton nclicks Statedropdown value Staterangeslider value Statechecklist value Stateradioitems value def updatecardtitle1nclicks dropdownvalue rangeslidervalue checklistvalue radioitemsvalue printnclicks printdropdownvalue printrangeslidervalue printchecklistvalue printradioitemsvalue Sample data figure return Card Tile 1 change call back appcallback Outputcardtext1 child Inputsubmitbutton nclicks Statedropdown value Staterangeslider value Statechecklist value Stateradioitems value def updatecardtext1nclicks dropdownvalue rangeslidervalue checklistvalue radioitemsvalue printnclicks printdropdownvalue printrangeslidervalue printchecklistvalue printradioitemsvalue Sample data figure return Card text change call back CSS component sidebar position fixed scrolling page change sidebar Width marginright marginleft term percentage order webpage dynamically change size according size style argument sidebar SIDEBARSTYLE position fixed top 0 left 0 bottom 0 width 20 padding 20px 10px backgroundcolor f8f9fa style argument main content page CONTENTSTYLE marginleft 25 marginright 5 top 0 padding 20px 10px TEXTSTYLE textAlign center color 191970 CARDTEXTSTYLE textAlign center color 0074D9 Github repository template source code find dashtemplatepy file ‘src’ folder Run check web app page http1270018085 reference link httpsplotlycompythonplotlyexpressTags Plotly Python Data Science Dashboard Design Data Visualization
1,349
Failed Predictions for 2020 We Wish Came True
It’s always interesting to wonder how much our ancestors, predecessors, and younger selves knew where they were going. But equally fascinating, in my opinion, are those bold predictions from the past that hit completely wide of the mark. Not only is it a neat insight into the way the minds of the past considered their place in human history, but it serves as a reminder that no matter our achievements, we can never gauge our societal momentum with any real exactness. 2020 has been an eventful and chaotic year, so I figured I would turn my ear back to the voices of the past and delve into a little alternate history. It seems that those voices had a lot of ideas about what 2020 in particular might look like. Prophecies aren’t interesting, so I made sure to stick to considered, thoughtful predictions made by futurologists, writers, engineers, scientists, and other trend forecasters. These are the unmet expectations, forgotten dreams, and unrequited wishes for the 2020 that never was. A 26-Hour Work Week Photo by You X Ventures on Unsplash This one is probably the most disappointing. In 1968, physicist Herman Kahn and futurist Anthony J. Weiner predicted that by 2020 the average American would be working 26 hours per week- about 1370 per year. It was a pretty bold prediction considering the average American worked approximately 37 hours per week in 1968. And it speaks to the optimism of the Post-War period that envisioned a future of linear progress and continuous economic growth. As it stands, the average American now works roughly 35 hours per week according to the Bureau of Labor Statistics- and that figure varies according to factors such as gender, age and marital status (the average for men is 41 hours for instance). It also doesn’t include “side hustles” which many modern Americans increasingly feel the need to support themselves with. The U.S also has a relatively high figure of 11% of its employees that work over 50 hours per week (according to the OECD). Sadly, the idea of a 26-hour work week seems less realistic now than it did in the 1960s- and not just for Americans. But if any country is going to get close to making it less of a fantasy, it will be a progressive nation like Denmark, Norway, or The Netherlands. Humans Will Land On Mars Photo by Nicolas Lobos on Unsplash Although this prediction is, ultimately, wrong- it’s not far off. The idea that we would send human beings to Mars by 2020 is something I remember growing up with, in fact. Humans setting foot on Mars by the early 21st century was a recurring promise in the books and documentaries I consumed as a kid. In a 1997 issue of WIRED, Peter Leyden and Peter Schwartz gave 2020 as the year we would finally succeed in sending a manned spacecraft to the Red Planet. We’re on our way, having successfully landed several robotic craft (such as probes, rovers, and landers), but current estimates for a manned mission put it a good decade hence. What’s most interesting about Leyden and Schwartz’s prediction however, is not that we would reach Mars by 2020, but that we would do so as part of a “joint effort supported by virtually all nations on the planet”. They describe four astronauts from a multinational team beaming images of the Martian landscape back to 11 billion people- which is also interesting, as the most recent United Nations estimates for the world population (as of September 2020) sit at 7.8 billion, with 10 billion not expected until 2057. The beaming of those images are an important part of the prediction though, and tell us that this was as much a prediction about sociology as it was scientific discovery. The images that never were beamed to us this year have an emotional weight to them. Leyden and Schwartz envisioned the 2020 Mars landing as being a turning point in history, a triumph of global cooperation that would put an end to an Earth divided by nations and give rise to a more collective mindset. “The images from Mars drive home another point: We’re one global society, one human race. The divisions we impose on ourselves look ludicrous from afar. The concept of a planet of warring nations, a state of affairs that defined the previous century, makes no sense.” It’s poignant to think that this, rather than our technical capabilities, has proven to be the most unrealistic aspect of their prediction. It makes me think of classic science fiction from the Cold War era (think Gene Roddenberry’s Star Trek or Poul Anderson’s Tau Zero) in which a future spacefaring Earth always had a single identity. Nation-states were gone, but cultural identities were never lost. Ethnic and religious conflicts were seen as archaic. Although it may seem far away right now, there is hope in the idea that through technology we can achieve social progress. The Death of Nationalism Photo by Jørgen Håland on Unsplash This one ties in quite nicely to the previous prediction. If you think about it, they’re essentially the same: through advances in technology, we can overcome national and ethnic divides, and come together as one. In 1968, political science professor Ithiel de Sola Pool confidently proclaimed that “By the year 2018 nationalism should be a waning force in the world,” due to our enhanced capabilities for translation and communication. While it’s true that the internet has facilitated a more interconnected world, our technical innovations haven’t brought about the greater empathy de Sola Pool hoped for. Quite the opposite, in fact. Trump, Brexit, Bolsonaro, Erdoğan, Orbán, the Front National, and the Alternative für Deutschland were and are driven by a viciously-xenophobic, fervently anti-intellectual brand of populist nationalism. The question that remains is whether de Sola Pool’s prediction was wrong entirely or whether it was simply premature. If we are to think of human history in terms of Hegelian Dialectics, then the process of nationalism’s erasure could very well be underway. It’s just not a smooth and linear process. Rather, it’s a messy, generational progression of “two steps forward, one step back”. The French Revolution deposed a tyrannical monarchy but led to a little something known as The Terror, and from that chaos emerged a new tyrant in the form of Napoleon- a political opportunist who derailed the very liberty he professed to love. It was a good half-century before the fruits of the Revolution came to bear insofar as individual liberty was concerned. By that same token, the rise of Trump, Brexiteers, and those like them could be the last fightback of populist nationalism as the world moves inexorably to a more interconnected and interdependent future. The more they swing in one direction, the likelier it is that the next generation of policymakers will move to compensate. My point being, we won’t know for definite that de Sola Pool was off the mark until many years hence. Hyper-Intelligent Apes Will Be Our Personal Slaves Photo by Margaux Ansel on Unsplash No, I’m not kidding. During my research for this article, this was the prediction for 2020 that seemed to crop up the most in my internet searches. Probably because people can’t quite believe that this was a serious prediction for the world in which we now live. In 1967 The Futurist published an article that stated “By the year 2020, it may be possible to breed intelligent species of animals, such as apes, that will be capable of performing manual labor.” According to the writer this included everything from vacuuming the house to pruning the rosebushes, and even driving our cars. These apes, which would be specially-bred and trained as chauffeurs, would supposedly reduce the amount of car crashes. Now I’ve never seen a chimp drive a car outside of a circus, so I can’t attest as to whether or not they would be more adept at spotting potential hazards on the road than we are. But these aren’t just any old apes- the article implies they’re a kind of super-ape, bred for specific purposes in the same manner as dogs. Alas these apes don’t exist, but the basic idea that by 2020 we will use our enhanced technology to find new uses for animals is not incorrect. Scientists and mechanical engineers at Singapore’s Nanyang Technological University have recently experimented with the creation of “cyborg insects”, successfully implanting electrodes into the leg-muscles of beetles in order to control how they move. These remote-control bugs- far cheaper than robots of the same size- can theoretically be put to a number of uses- from espionage to search-and-rescue. It’s not as impressive as a baboon trying to scrub dried oatmeal from a breakfast bowl, but it’s in the spirit of things. Telepathy & Teleportation Photo by David Clode on Unsplash Perhaps the most surprising aspect of this prediction is not so much that it exists, but that it was made as recently as 2014. Michael J. O’Farrell, founder of The Mobile Institute and veteran of the tech industry, proclaimed in the 2014 book Shift 2020 that both telepathy and teleportation will have been made possible by the current year. This breakthrough was supposed to have been achieved through a process known as “nanomobility”. O’Farrell writes that “By 2020, I predict people will become incubators for personally controlled and protected Embodied Application Platforms and Body Area Networks, with a primary source-code Physical State and hyper-interactive genetically reproduced Virtual States. All states would host a mass of molecular-sized web-servers; IP domains and AP transport protocols capable of self-sustaining replication, atomically powered quantum computing and persona-patented commerce. I have coined the phrase nanomobility to capture and describe this new uncharted state.” So what’s the modern reality of telepathy and teleportation? Well the truth is that they simply don’t exist- at least, not in the way we typically imagine these concepts. The closest we’ve gotten to telepathy is electro-encephalography (EEG), in which a device not dissimilar in shape to a swimming cap is outfitted with large electrodes and placed upon the scalp of the subject. These electrodes record electrical activity which is then interpreted by a computer. Scientists have used this interface to both send signals from the brain and receive electrical pulses in turn. Volunteers have been able to transmit brain activity to each other, to computer software, and even to animals- with one volunteer able to stimulate the motor area of a sedated rat’s brain in order to get it to move its tail. The closest scientists have come to something resembling teleportation is a process known as quantum teleportation, which is less an act of transportation so much as it is communication. Quantum information has been proven capable of transmitting from one place to another. In 2014, researchers at the Technical University Delft reported having teleported the information of two entangled quantumbits three meters apart. These breakthroughs may not have impacted our everyday lives in the way that the futurists’ hoped, but they are nonetheless extraordinary accomplishments that we can only hope will serve as part of a greater journey of discovery.
https://medium.com/predict/failed-predictions-for-2020-we-wish-came-true-7dba84a76bea
['Michael J. Vowles']
2020-12-11 22:43:19.768000+00:00
['History', 'Future', 'Technology', 'Science', '2020']
Title Failed Predictions 2020 Wish Came TrueContent It’s always interesting wonder much ancestor predecessor younger self knew going equally fascinating opinion bold prediction past hit completely wide mark neat insight way mind past considered place human history serf reminder matter achievement never gauge societal momentum real exactness 2020 eventful chaotic year figured would turn ear back voice past delve little alternate history seems voice lot idea 2020 particular might look like Prophecies aren’t interesting made sure stick considered thoughtful prediction made futurologists writer engineer scientist trend forecaster unmet expectation forgotten dream unrequited wish 2020 never 26Hour Work Week Photo X Ventures Unsplash one probably disappointing 1968 physicist Herman Kahn futurist Anthony J Weiner predicted 2020 average American would working 26 hour per week 1370 per year pretty bold prediction considering average American worked approximately 37 hour per week 1968 speaks optimism PostWar period envisioned future linear progress continuous economic growth stand average American work roughly 35 hour per week according Bureau Labor Statistics figure varies according factor gender age marital status average men 41 hour instance also doesn’t include “side hustles” many modern Americans increasingly feel need support US also relatively high figure 11 employee work 50 hour per week according OECD Sadly idea 26hour work week seems le realistic 1960s Americans country going get close making le fantasy progressive nation like Denmark Norway Netherlands Humans Land Mars Photo Nicolas Lobos Unsplash Although prediction ultimately wrong it’s far idea would send human being Mars 2020 something remember growing fact Humans setting foot Mars early 21st century recurring promise book documentary consumed kid 1997 issue WIRED Peter Leyden Peter Schwartz gave 2020 year would finally succeed sending manned spacecraft Red Planet We’re way successfully landed several robotic craft probe rover lander current estimate manned mission put good decade hence What’s interesting Leyden Schwartz’s prediction however would reach Mars 2020 would part “joint effort supported virtually nation planet” describe four astronaut multinational team beaming image Martian landscape back 11 billion people also interesting recent United Nations estimate world population September 2020 sit 78 billion 10 billion expected 2057 beaming image important part prediction though tell u much prediction sociology scientific discovery image never beamed u year emotional weight Leyden Schwartz envisioned 2020 Mars landing turning point history triumph global cooperation would put end Earth divided nation give rise collective mindset “The image Mars drive home another point We’re one global society one human race division impose look ludicrous afar concept planet warring nation state affair defined previous century make sense” It’s poignant think rather technical capability proven unrealistic aspect prediction make think classic science fiction Cold War era think Gene Roddenberry’s Star Trek Poul Anderson’s Tau Zero future spacefaring Earth always single identity Nationstates gone cultural identity never lost Ethnic religious conflict seen archaic Although may seem far away right hope idea technology achieve social progress Death Nationalism Photo Jørgen Håland Unsplash one tie quite nicely previous prediction think they’re essentially advance technology overcome national ethnic divide come together one 1968 political science professor Ithiel de Sola Pool confidently proclaimed “By year 2018 nationalism waning force world” due enhanced capability translation communication it’s true internet facilitated interconnected world technical innovation haven’t brought greater empathy de Sola Pool hoped Quite opposite fact Trump Brexit Bolsonaro Erdoğan Orbán Front National Alternative für Deutschland driven viciouslyxenophobic fervently antiintellectual brand populist nationalism question remains whether de Sola Pool’s prediction wrong entirely whether simply premature think human history term Hegelian Dialectics process nationalism’s erasure could well underway It’s smooth linear process Rather it’s messy generational progression “two step forward one step back” French Revolution deposed tyrannical monarchy led little something known Terror chaos emerged new tyrant form Napoleon political opportunist derailed liberty professed love good halfcentury fruit Revolution came bear insofar individual liberty concerned token rise Trump Brexiteers like could last fightback populist nationalism world move inexorably interconnected interdependent future swing one direction likelier next generation policymakers move compensate point won’t know definite de Sola Pool mark many year hence HyperIntelligent Apes Personal Slaves Photo Margaux Ansel Unsplash I’m kidding research article prediction 2020 seemed crop internet search Probably people can’t quite believe serious prediction world live 1967 Futurist published article stated “By year 2020 may possible breed intelligent specie animal ape capable performing manual labor” According writer included everything vacuuming house pruning rosebush even driving car ape would speciallybred trained chauffeur would supposedly reduce amount car crash I’ve never seen chimp drive car outside circus can’t attest whether would adept spotting potential hazard road aren’t old ape article implies they’re kind superape bred specific purpose manner dog Alas ape don’t exist basic idea 2020 use enhanced technology find new us animal incorrect Scientists mechanical engineer Singapore’s Nanyang Technological University recently experimented creation “cyborg insects” successfully implanting electrode legmuscles beetle order control move remotecontrol bug far cheaper robot size theoretically put number us espionage searchandrescue It’s impressive baboon trying scrub dried oatmeal breakfast bowl it’s spirit thing Telepathy Teleportation Photo David Clode Unsplash Perhaps surprising aspect prediction much exists made recently 2014 Michael J O’Farrell founder Mobile Institute veteran tech industry proclaimed 2014 book Shift 2020 telepathy teleportation made possible current year breakthrough supposed achieved process known “nanomobility” O’Farrell writes “By 2020 predict people become incubator personally controlled protected Embodied Application Platforms Body Area Networks primary sourcecode Physical State hyperinteractive genetically reproduced Virtual States state would host mass molecularsized webservers IP domain AP transport protocol capable selfsustaining replication atomically powered quantum computing personapatented commerce coined phrase nanomobility capture describe new uncharted state” what’s modern reality telepathy teleportation Well truth simply don’t exist least way typically imagine concept closest we’ve gotten telepathy electroencephalography EEG device dissimilar shape swimming cap outfitted large electrode placed upon scalp subject electrode record electrical activity interpreted computer Scientists used interface send signal brain receive electrical pulse turn Volunteers able transmit brain activity computer software even animal one volunteer able stimulate motor area sedated rat’s brain order get move tail closest scientist come something resembling teleportation process known quantum teleportation le act transportation much communication Quantum information proven capable transmitting one place another 2014 researcher Technical University Delft reported teleported information two entangled quantumbits three meter apart breakthrough may impacted everyday life way futurists’ hoped nonetheless extraordinary accomplishment hope serve part greater journey discoveryTags History Future Technology Science 2020
1,350
Minds In Their Prime
I fall in love with minds that are better than mine The kind of minds that in this world are hard to find And when you find them you better take the time To revel in their imagery the bridges they build, so hard to define And were those minds to all fall into line And cogitate at their utmost prime Imagine the world we could create, the states sublime Immaculate wonder, the universe refined
https://medium.com/poets-unlimited/minds-in-their-prime-ace425c8f71c
['Aarish Shah']
2017-09-16 03:21:57.878000+00:00
['Inspiration', 'Writing', 'Poetry', 'Creativity', 'Photography']
Title Minds PrimeContent fall love mind better mine kind mind world hard find find better take time revel imagery bridge build hard define mind fall line cogitate utmost prime Imagine world could create state sublime Immaculate wonder universe refinedTags Inspiration Writing Poetry Creativity Photography
1,351
Reporters: Big Tech is Slowly Killing Journalism
A movement is growing to try to save the news business by reining in the power of tech giants Google and Facebook, which together control 60% of digital advertising. Watchdog groups accuse the companies of profiting off the work of journalists while undercutting the ad revenue that pays their salaries. The Senate Judiciary Committee recently held a hearing on the subject of big data and privacy. Laura Bassett, a freelance journalist formerly with the Huffington Post, testified at that hearing. She said Google and Facebook should be broken up — or at least, heavily regulated. “They’re basically a country, they’re that powerful. Not only do they have the power to tip elections and control what kind of news they’re putting at the top of their feeds, but they’re also killing journalists, financially,” Bassett said. “So, it’s just creating a real problem when one or two companies has the power to cripple the free press as we know it.” More than 2,500 reporters have been laid off so far this year. A study from the University of North Carolina Chapel Hill last year found about 1,800 local newspapers have gone out of business since 2004–20% of the total industry. The decline began many years ago when sites like Craigslist reduced newspaper revenues by about 40% by rendering classified ads obsolete. Freelance reporter John Stanton, formerly of Buzzfeed, also submitted testimony at the hearing. He said he sees the widespread layoffs of reporters as a threat to communities and democracy — leaving “news deserts” with little-to-no reporting on government corruption and a host of local issues, positive and negative. He urged Facebook and Google to be better corporate citizens and devise a way to ensure content providers get paid. “While they’re not governmental entities, they do have a responsibility — given that they now kind-of control the way that people consume news — to not put profits above the ability to have a vibrant, thriving news culture,” Stanton said. Brian O’Kelley, a tech entrepreneur who invented the system that underpins digital advertising, also testified at the hearing. He said big news sites should band together and stop allowing digital firms to handle their ad sales — thus forcing advertisers off Facebook and Google, and back to the news sites themselves. “They can just click the box and turn it off and stop working with all these programmatic advertising companies,” O’Kelley said. “And because it is funding some of their business right now, turning it off and switching to something else feels scary — even if it is the right decision in the medium term.” O’Kelley said part of the solution may be a federal law patterned after one in California, giving consumers the power to limit the ways websites collect their personal data and browser history.
https://medium.com/save-journalism/reporters-big-tech-is-slowly-killing-journalism-29d2b56fa097
['Save Journalism']
2019-05-28 14:51:54.909000+00:00
['Local News', 'Advertising', 'Journalism', 'Google', 'Big Tech']
Title Reporters Big Tech Slowly Killing JournalismContent movement growing try save news business reining power tech giant Google Facebook together control 60 digital advertising Watchdog group accuse company profiting work journalist undercutting ad revenue pay salary Senate Judiciary Committee recently held hearing subject big data privacy Laura Bassett freelance journalist formerly Huffington Post testified hearing said Google Facebook broken — least heavily regulated “They’re basically country they’re powerful power tip election control kind news they’re putting top feed they’re also killing journalist financially” Bassett said “So it’s creating real problem one two company power cripple free press know it” 2500 reporter laid far year study University North Carolina Chapel Hill last year found 1800 local newspaper gone business since 2004–20 total industry decline began many year ago site like Craigslist reduced newspaper revenue 40 rendering classified ad obsolete Freelance reporter John Stanton formerly Buzzfeed also submitted testimony hearing said see widespread layoff reporter threat community democracy — leaving “news deserts” littletono reporting government corruption host local issue positive negative urged Facebook Google better corporate citizen devise way ensure content provider get paid “While they’re governmental entity responsibility — given kindof control way people consume news — put profit ability vibrant thriving news culture” Stanton said Brian O’Kelley tech entrepreneur invented system underpins digital advertising also testified hearing said big news site band together stop allowing digital firm handle ad sale — thus forcing advertiser Facebook Google back news site “They click box turn stop working programmatic advertising companies” O’Kelley said “And funding business right turning switching something else feel scary — even right decision medium term” O’Kelley said part solution may federal law patterned one California giving consumer power limit way website collect personal data browser historyTags Local News Advertising Journalism Google Big Tech
1,352
Artificial Intelligence on Cyber Security and Pandemic in 2020
As we come to realize after a long and trying time in 2020 with the effect of the pandemic 2020. It became an understanding that the use of technology became more fundamentally important to us with lockdown being implemented. Hence with the development of artificial intelligence. Artificial intelligence (AI) is evolving — literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI. Photo by Michael Dziedzic on Unsplash While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.” Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages, and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks — for instance, spotting road signs — and researchers can spend months working out how to connect them so they work together seamlessly. In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases. So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says. The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck. In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting some of its code to create slight variations of the best algorithms. These “children” get added to the population, while older programs get culled. The cycle repeats. The system creates thousands of these populations at once, which lets it churn through tens of thousands of algorithms a second until it finds a good solution. The program also uses tricks to speed up the search, like occasionally exchanging algorithms between populations to prevent any evolutionary dead ends, and automatically weeding out duplicate algorithms. Artificial Intelligence on Cyber Security There is currently a big debate raging about whether Artificial Intelligence (AI) is a good or bad thing in terms of its impact on human life. With more and more enterprises using AI for their needs, it’s time to analyze the possible impacts of the implementation of AI in the cybersecurity field. The positive uses of AI for cybersecurity Biometric logins are increasingly being used to create secure logins by either scanning fingerprints, retinas, or palm prints. This can be used alone or in conjunction with a password and is already being used in most new smartphones. Large companies have been the victims of security breaches that compromised email addresses, personal information, and passwords. Cybersecurity experts have reiterated on multiple occasions that passwords are extremely vulnerable to cyber attacks, compromising personal information, credit card information, and social security numbers. These are all reasons why biometric logins are a positive AI contribution to cybersecurity. AI can also be used to detect threats and other potentially malicious activities. Conventional systems simply cannot keep up with the sheer number of malware that is created every month, so this is a potential area for AI to step in and address this problem. Cybersecurity companies are teaching AI systems to detect viruses and malware by using complex algorithms so AI can then run pattern recognition in software. AI systems can be trained to identify even the smallest behaviors of ransomware and malware attacks before it enters the system and then isolate them from that system. They can also use predictive functions that surpass the speed of traditional approaches. Systems that run on AI unlock potential for natural language processing which collects information automatically by combing through articles, news, and studies on cyber threats. This information can give insight into anomalies, cyber attacks, and prevention strategies. This allows cybersecurity firms to stay updated on the latest risks and time frames and build responsive strategies to keep organizations protected. AI systems can also be used in situations of multi-factor authentication to provide access to their users. Different users of a company have different levels of authentication privileges which also depend on the location from which they’re accessing the data. When AI is used, the authentication framework can be a lot more dynamic and real-time and it can modify access privileges based on the network and location of the user. Multi-factor authentication collects user information to understand the behavior of this person and decide about the user’s access privileges. To use AI to its fullest capabilities, it must be implemented by the right cybersecurity firms that are familiar with its functioning. Whereas in the past, malware attacks could occur without leaving any indication on which weakness it exploited, AI can step in to protect the cybersecurity firms and their clients from attacks even when multiple skilled attacks are occurring. Drawbacks and limitations of using AI for cybersecurity The benefits outlined above are just a fraction of the potential of AI in helping cybersecurity, but some limitations are preventing AI from becoming a mainstream tool used in the field. To build and maintain an AI system, companies would require an immense amount of resources including memory, data, and computing power. Additionally, because AI systems are trained through learning data sets, cybersecurity firms need to get their hands on many different data sets of malware codes, non-malicious codes, and anomalies. Obtaining all of these accurate data sets can take a really long time and resources which some companies cannot afford. Another drawback is that hackers can also use AI themselves to test their malware and improve and enhance it to potentially become AI-proof. In fact, AI-proof malware can be extremely destructive as they can learn from existing AI tools and develop more advanced attacks to be able to penetrate traditional cybersecurity programs or even AI-boosted systems. Solutions to AI limitations Knowing these limitations and drawbacks, it’s obvious that AI is a long way from becoming the only cybersecurity solution. The best approach in the meantime would be to combine traditional techniques with AI tools, so organizations should keep these solutions in mind when developing their cybersecurity strategy: Employ a cybersecurity firm with professionals who have experience and skills in many different facets of cybersecurity. Have your cybersecurity team test your systems and networks for any potential gaps and fix them immediately. Use filters for URLs to block malicious links that potentially have a virus or malware. Install firewalls and other malware scanners to protect your systems and have these constantly updated to match redesigned malware. As the potential of AI is being explored to boost the cybersecurity profile of a corporation, it is also being developed by hackers. Since it is still being developed and its potential is far from reach, we cannot yet know whether it will one day be helpful or detrimental for cybersecurity. In the meantime, organizations must do as much as they can with a mix of traditional methods and AI to stay on top of their cybersecurity strategy. Artificial Intelligence in COVID- 19 Pandemic It can be understood during the COVID-19 pandemic, health care professionals and researchers have been confined mostly to using local and national datasets to study the impact of comorbidities, pre-existing medication use, demographics, and various interventions on disease course. Multiple organizations are running an initiative to accelerate global collaborative research on COVID-19 through access to high-quality, real-time multi-center patient datasets. The National Science Foundation has provided funding to develop the Records Evaluation for COVID-19 Emergency Research (RECovER) initiative. They are using the technology to find trends and data connections to help better understand and treat COVID-19, with a special emphasis on the impact existing medications have on COVID-19. This approach allows a health care professional or researcher to identify patterns in patient responses to drugs, select or rank the predictions from our platform for drug repurposing, and evaluate their responses over time. This will help with COVID-19 and other potential pandemics. Artificial Intelligence can inform public health decision-making amid the pandemic. A new model for predicting COVID-19’s impact using artificial intelligence (AI) dramatically outperforms other models, so much so that it has attracted the interest of public health officials across the country. While existing models to predict the spread of a disease already exists, few, if any, incorporate AI, which allows a model to make predictions based on observations of what is actually happening — for example, increasing cases among specific populations — as opposed to what the model’s designers think will happen. With the use of AI, it is possible to discover patterns hidden in data that humans alone might not recognize. AI is a powerful tool, so it only makes sense to apply it to one of the most urgent problems the world faces,” says Yaser Abu-Mostafa (Ph.D. ’83), professor of electrical engineering and computer science, who led the development of the new CS156 model (so-named for the Caltech computer science class where it got its start). The researchers evaluate the accuracy of the model by comparing it to the predictions of an ensemble model built by the Centers for Disease Control and Prevention from 45 major models from universities and institutes across the country. Using 1,500 predictions as points of comparison with the CDC ensemble, the researchers found that the CS156 model was more accurate than the ensemble model 58 percent of the time as of November 25. Abu-Mostafa is currently expanding the CS156 model based on feedback from public health officials in the hope that it can be a lifesaving tool to guide policy decisions. This model is being modified to allow public health officials to predict how various interventions — like mask mandates and safer-at-home orders — affect control of the spread of the disease. Armed with those predictions, public health officials would be better able to evaluate which interventions are most likely to help. At the end of it all, it is an undeniable fact that AI is at the center of a new enterprise to build computational models of intelligence. The main assumption is that intelligence (human or otherwise) can be represented in terms of symbol structures and symbolic operations which can be programmed in a digital computer. There is much debate as to whether such an appropriately programmed computer would be a mind, or would merely simulate one, but AI researchers need not wait for the conclusion to that debate, nor for the hypothetical computer that could model all of the human intelligence, however, we cannot deny the fact of the contribution of AI on cybersecurity and public health.
https://medium.com/change-becomes-you/artificial-intelligence-on-cyber-security-and-pandemic-in-2020-2a03f01f9756
['Antoine Blodgett']
2020-12-08 07:17:46.199000+00:00
['Covid 19', 'AI', 'Cybersecurity', 'Artificial Intelligence', 'Tech']
Title Artificial Intelligence Cyber Security Pandemic 2020Content come realize long trying time 2020 effect pandemic 2020 became understanding use technology became fundamentally important u lockdown implemented Hence development artificial intelligence Artificial intelligence AI evolving — literally Researchers created software borrows concept Darwinian evolution including “survival fittest” build AI program improve generation generation without human input program replicated decade AI research matter day designer think one day could discover new approach AI Photo Michael Dziedzic Unsplash people taking baby step took giant leap unknown” say Risto Miikkulainen computer scientist University Texas Austin involved work “This one paper could launch lot future research” Building AI algorithm take time Take neural network common type machine learning used translating language driving car network loosely mimic structure brain learn training data altering strength connection artificial neuron Smaller subcircuits neuron carry specific task — instance spotting road sign — researcher spend month working connect work together seamlessly recent year scientist sped process automating step program still rely stitching together readymade circuit designed human mean output still limited engineers’ imagination existing bias Quoc Le computer scientist Google colleague developed program called AutoMLZero could develop AI program effectively zero human input using basic mathematical concept high school student would know “Our ultimate goal actually develop novel machine learning concept even researcher could find” say program discovers algorithm using loose approximation evolution start creating population 100 candidate algorithm randomly combining mathematical operation test simple task image recognition problem decide whether picture show cat truck cycle program compare algorithms’ performance handdesigned algorithm Copies top performer “mutated” randomly replacing editing deleting code create slight variation best algorithm “children” get added population older program get culled cycle repeat system creates thousand population let churn ten thousand algorithm second find good solution program also us trick speed search like occasionally exchanging algorithm population prevent evolutionary dead end automatically weeding duplicate algorithm Artificial Intelligence Cyber Security currently big debate raging whether Artificial Intelligence AI good bad thing term impact human life enterprise using AI need it’s time analyze possible impact implementation AI cybersecurity field positive us AI cybersecurity Biometric logins increasingly used create secure logins either scanning fingerprint retina palm print used alone conjunction password already used new smartphones Large company victim security breach compromised email address personal information password Cybersecurity expert reiterated multiple occasion password extremely vulnerable cyber attack compromising personal information credit card information social security number reason biometric logins positive AI contribution cybersecurity AI also used detect threat potentially malicious activity Conventional system simply cannot keep sheer number malware created every month potential area AI step address problem Cybersecurity company teaching AI system detect virus malware using complex algorithm AI run pattern recognition software AI system trained identify even smallest behavior ransomware malware attack enters system isolate system also use predictive function surpass speed traditional approach Systems run AI unlock potential natural language processing collect information automatically combing article news study cyber threat information give insight anomaly cyber attack prevention strategy allows cybersecurity firm stay updated latest risk time frame build responsive strategy keep organization protected AI system also used situation multifactor authentication provide access user Different user company different level authentication privilege also depend location they’re accessing data AI used authentication framework lot dynamic realtime modify access privilege based network location user Multifactor authentication collect user information understand behavior person decide user’s access privilege use AI fullest capability must implemented right cybersecurity firm familiar functioning Whereas past malware attack could occur without leaving indication weakness exploited AI step protect cybersecurity firm client attack even multiple skilled attack occurring Drawbacks limitation using AI cybersecurity benefit outlined fraction potential AI helping cybersecurity limitation preventing AI becoming mainstream tool used field build maintain AI system company would require immense amount resource including memory data computing power Additionally AI system trained learning data set cybersecurity firm need get hand many different data set malware code nonmalicious code anomaly Obtaining accurate data set take really long time resource company cannot afford Another drawback hacker also use AI test malware improve enhance potentially become AIproof fact AIproof malware extremely destructive learn existing AI tool develop advanced attack able penetrate traditional cybersecurity program even AIboosted system Solutions AI limitation Knowing limitation drawback it’s obvious AI long way becoming cybersecurity solution best approach meantime would combine traditional technique AI tool organization keep solution mind developing cybersecurity strategy Employ cybersecurity firm professional experience skill many different facet cybersecurity cybersecurity team test system network potential gap fix immediately Use filter URLs block malicious link potentially virus malware Install firewall malware scanner protect system constantly updated match redesigned malware potential AI explored boost cybersecurity profile corporation also developed hacker Since still developed potential far reach cannot yet know whether one day helpful detrimental cybersecurity meantime organization must much mix traditional method AI stay top cybersecurity strategy Artificial Intelligence COVID 19 Pandemic understood COVID19 pandemic health care professional researcher confined mostly using local national datasets study impact comorbidities preexisting medication use demographic various intervention disease course Multiple organization running initiative accelerate global collaborative research COVID19 access highquality realtime multicenter patient datasets National Science Foundation provided funding develop Records Evaluation COVID19 Emergency Research RECovER initiative using technology find trend data connection help better understand treat COVID19 special emphasis impact existing medication COVID19 approach allows health care professional researcher identify pattern patient response drug select rank prediction platform drug repurposing evaluate response time help COVID19 potential pandemic Artificial Intelligence inform public health decisionmaking amid pandemic new model predicting COVID19’s impact using artificial intelligence AI dramatically outperforms model much attracted interest public health official across country existing model predict spread disease already exists incorporate AI allows model make prediction based observation actually happening — example increasing case among specific population — opposed model’s designer think happen use AI possible discover pattern hidden data human alone might recognize AI powerful tool make sense apply one urgent problem world faces” say Yaser AbuMostafa PhD ’83 professor electrical engineering computer science led development new CS156 model sonamed Caltech computer science class got start researcher evaluate accuracy model comparing prediction ensemble model built Centers Disease Control Prevention 45 major model university institute across country Using 1500 prediction point comparison CDC ensemble researcher found CS156 model accurate ensemble model 58 percent time November 25 AbuMostafa currently expanding CS156 model based feedback public health official hope lifesaving tool guide policy decision model modified allow public health official predict various intervention — like mask mandate saferathome order — affect control spread disease Armed prediction public health official would better able evaluate intervention likely help end undeniable fact AI center new enterprise build computational model intelligence main assumption intelligence human otherwise represented term symbol structure symbolic operation programmed digital computer much debate whether appropriately programmed computer would mind would merely simulate one AI researcher need wait conclusion debate hypothetical computer could model human intelligence however cannot deny fact contribution AI cybersecurity public healthTags Covid 19 AI Cybersecurity Artificial Intelligence Tech
1,353
Microservices and AWS App Mesh
The Application is similar to the one described in the Part 1. I just added few lines of extra code to get the response from microservice bookingapp-movie. Refer my GitHub Repo for bookingapp-home for the python code I used. From Part 1, I have my ECS cluster ready with 3 tasks, bookingapp-home, bookingapp-movie and bookingapp-redis all 3 tasks have service discovery configured and resolving to the endpoints properly. Let’s assume that our application is working fine and we want to rollout new code changes only to bookingapp-movie microservice. We can rollout the changes using rolling update strategy, but if we face any issue in the new code, all the traffic will get impacted. To do a rollout of new changes safely, we can use canary model i.e. route 75% traffic to the old bookingapp-movie service and 25% to the new bookingapp-moviev2 service, if we don’t observe any issue, send 50% to new bookingapp-moviev2 service and eventually send all traffic to the new service. By this method, by changing the simple weight parameter we can safely rollout new code changes without any impact. Create new service in AWS ECS I have cloned my GitHub Repo bookingapp-moviev2 and created a new docker image and pushed it to docker hub. I am going to create a new task called bookingapp-moviev2 using the new docker image and bring up a service moviev2 and add it to the ALB. Add the container bookingapp-moviev2:latest and create the task definition. Now create a service moviev2 for the task. Add the new service to the ALB and service discovery enabled as moviev2.internal-bookingapp.com. I have the autoscaling configured for this service as well. Finally review and save the service. Now you will be having 4 services in place. home → from bookingapp-home task movie → from bookingapp-movie task moviev2 → from bookingapp-moviev2 task (running modified code of bookingapp-movie) redis → from bookingapp-redis task. ECS services The sample application has a ALB in front of it, and the ALB listens on port 80 and the backend is configured based on URL paths. /home → bookingapp-home-tg → refers home service → bookingapp-home task. /movie → bookingapp-movie-tg → refers movie service → bookingapp-movie task. /moviev2 → bookingapp-moviev2-tg → refers moviev2 service → bookingapp-moviev2 task. /redis → bookingapp-redis-tg → refers home service → bookingapp-redis task. When you see the canary deployment architecture diagram shown above, you will see that the home service contacts movie service (endpoint movie.internal-bookingapp.com). I successfully made some code changes and created a new service for movie called moviev2 (endpoint moviev2.internal-bookingapp.com). Now moviev2 service is in place but requests are not going there. Let’s see how we can replace movie service with moviev2 using canary deployment model with the help of AWS App Mesh. AWS App Mesh Now the good part about App Mesh is you don’t have to change anything in your application code to use it. Let’s create the necessary stuffs in AWS App Mesh. Create a mesh for our application — bookingapp. Create mesh Create virtual node for all our services. Now create for home service, listener on port 5000 as we exposed the same port from the container. Leave the backend empty for now, we need to update it later once we create virtual services. Now repeat the same for other services also, bookingapp-movie and bookingapp-moviev2. Create virtual services for all of our services. Make sure the service name is same as the one you created in ECS service discovery. Create the same for other services as well. We have a total of 4 virtual services now. After we create services, we need to add backend to the home-virtual-node because home service has to contact movie service. Virtual Nodes → home-virtual-node → Edit Create a virtual router only for movie service. As mentioned above, virtual router will route the traffic based on the routes we listed. In the route section, specify route type ass http, target as virtual node movie-virtual-node and moviev2-virutal-node and weight as you wish with match as /movie. That is the path we use to access the service in the container. Create the virtual route. Now add the virtual router to the service movie.internal-bookingapp.com. Let’s pause and understand the flow here, when the traffic comes to service movie.internal-bookingapp.com, it will reach the envoy proxy and service movie.internal-bookingapp.com has a provider called movie-virtual-router, so it will route the traffic there, virtual router has 2 routes as 50% weight each, so each request will go to each virtual route, one virtual route points to virtual node movie-virtual-node which maps to AWS CloudMap Service movie. AWS CloudMap service resolves to IP and forwards the request. This is how the overall traffic flow happens using AWS App Mesh. Now update the task definitions to use App Mesh. On the ECS cluster go to task definitions of bookingapp-home and create a new revision. Enable App Mesh and provide all the necessary details. Click Apply and the proxy configuration will be auto populated. After you apply you will now see the envoy container added to the container section. Click create to create the new task definition version. Repeat the same for other task definition also bookingapp-movie and bookingapp-moviev2. Now, update services to use the latest task definition. In the ECS cluster go to service tab, select movie service and update. Make sure you select Force new deployment check box and deploy the service. Repeat the same for moviev2 and home service as well. Wait for the fargate to pull latest container images and bring it up. Once instances are up, make sure it is added to ALB target groups respectively. Simple curl request to ALB /home path equally distributes the load to both the services (movie.internal-bookingapp.com and moviev2.internal-bookingapp.com). Now I confirmed that my moviev2 service works fine with 50% of the traffic. We can now increase the traffic from 50% to 80% and see the traffic distribution. Traffic is mostly routed to moviev2 service still around 10–20% approx routed to movie service based on weight. Now we can simply assign 100% weight for moviev2 service and eventually stop the fargate instances and delete the ALB target group. Closing Notes Using AWS App Mesh we can easily integrate our existing services without any code changes on our application stack and we can deploy the code changes on the fly just by increasing a simple weight parameter in the App Mesh Route rules. Very easy to revert the deployment to old code by just switching the weight parameter to 100% for old service.
https://deepanmurugan.medium.com/microservices-and-aws-app-mesh-f4c7cab9ddca
[]
2020-12-30 16:50:19.078000+00:00
['Microservices', 'App Mesh', 'AWS', 'Aws Ecs', 'Docker']
Title Microservices AWS App MeshContent Application similar one described Part 1 added line extra code get response microservice bookingappmovie Refer GitHub Repo bookingapphome python code used Part 1 ECS cluster ready 3 task bookingapphome bookingappmovie bookingappredis 3 task service discovery configured resolving endpoint properly Let’s assume application working fine want rollout new code change bookingappmovie microservice rollout change using rolling update strategy face issue new code traffic get impacted rollout new change safely use canary model ie route 75 traffic old bookingappmovie service 25 new bookingappmoviev2 service don’t observe issue send 50 new bookingappmoviev2 service eventually send traffic new service method changing simple weight parameter safely rollout new code change without impact Create new service AWS ECS cloned GitHub Repo bookingappmoviev2 created new docker image pushed docker hub going create new task called bookingappmoviev2 using new docker image bring service moviev2 add ALB Add container bookingappmoviev2latest create task definition create service moviev2 task Add new service ALB service discovery enabled moviev2internalbookingappcom autoscaling configured service well Finally review save service 4 service place home → bookingapphome task movie → bookingappmovie task moviev2 → bookingappmoviev2 task running modified code bookingappmovie redis → bookingappredis task ECS service sample application ALB front ALB listens port 80 backend configured based URL path home → bookingapphometg → refers home service → bookingapphome task movie → bookingappmovietg → refers movie service → bookingappmovie task moviev2 → bookingappmoviev2tg → refers moviev2 service → bookingappmoviev2 task redis → bookingappredistg → refers home service → bookingappredis task see canary deployment architecture diagram shown see home service contact movie service endpoint movieinternalbookingappcom successfully made code change created new service movie called moviev2 endpoint moviev2internalbookingappcom moviev2 service place request going Let’s see replace movie service moviev2 using canary deployment model help AWS App Mesh AWS App Mesh good part App Mesh don’t change anything application code use Let’s create necessary stuff AWS App Mesh Create mesh application — bookingapp Create mesh Create virtual node service create home service listener port 5000 exposed port container Leave backend empty need update later create virtual service repeat service also bookingappmovie bookingappmoviev2 Create virtual service service Make sure service name one created ECS service discovery Create service well total 4 virtual service create service need add backend homevirtualnode home service contact movie service Virtual Nodes → homevirtualnode → Edit Create virtual router movie service mentioned virtual router route traffic based route listed route section specify route type as http target virtual node movievirtualnode moviev2virutalnode weight wish match movie path use access service container Create virtual route add virtual router service movieinternalbookingappcom Let’s pause understand flow traffic come service movieinternalbookingappcom reach envoy proxy service movieinternalbookingappcom provider called movievirtualrouter route traffic virtual router 2 route 50 weight request go virtual route one virtual route point virtual node movievirtualnode map AWS CloudMap Service movie AWS CloudMap service resolve IP forward request overall traffic flow happens using AWS App Mesh update task definition use App Mesh ECS cluster go task definition bookingapphome create new revision Enable App Mesh provide necessary detail Click Apply proxy configuration auto populated apply see envoy container added container section Click create create new task definition version Repeat task definition also bookingappmovie bookingappmoviev2 update service use latest task definition ECS cluster go service tab select movie service update Make sure select Force new deployment check box deploy service Repeat moviev2 home service well Wait fargate pull latest container image bring instance make sure added ALB target group respectively Simple curl request ALB home path equally distributes load service movieinternalbookingappcom moviev2internalbookingappcom confirmed moviev2 service work fine 50 traffic increase traffic 50 80 see traffic distribution Traffic mostly routed moviev2 service still around 10–20 approx routed movie service based weight simply assign 100 weight moviev2 service eventually stop fargate instance delete ALB target group Closing Notes Using AWS App Mesh easily integrate existing service without code change application stack deploy code change fly increasing simple weight parameter App Mesh Route rule easy revert deployment old code switching weight parameter 100 old serviceTags Microservices App Mesh AWS Aws Ecs Docker
1,354
The World’s Happiest Countries
The World’s Happiest Countries Vast differences in well-being exist between the happiest and least happy nations. Finland maintained its status as the world’s happiest country, while the United States slipped a notch to № 19, according to the latest annual World Happiness Report, released March 20, 2019. Here’s how some of the 156 countries placed, based on Gallup Polls, as analyzed by the United Nations Sustainable Development Solutions Network: The report — which should be taken with at least a few grains of salt given that it relies on somewhat unreliable self-reporting, and that it reflects averages that don’t speak to any specific individuals’ well-being — revealed several trends. One that jumped out at researchers who analyzed the data: Happiness in the United States, among both adults and adolescents, has generally declined and is lower now than at the turn of the millennium, the researchers said. Smartphones and other digital technology may be playing a role, but are not the sole cause. “The compulsive pursuit of substance abuse and addictive behaviors is causing severe unhappiness.” “This year’s report provides sobering evidence of how addictions are causing considerable unhappiness and depression in the US,” said Jeffrey Sachs, director of the Sustainable Development Solutions Network. “Addictions come in many forms, from substance abuse to gambling to digital media. The compulsive pursuit of substance abuse and addictive behaviors is causing severe unhappiness. Government, business, and communities should use these indicators to set new policies aimed at overcoming these sources of unhappiness.” The report indicates that the main factors separating the happiest countries from the least happy are income per capita, social support, healthy life expectancy, freedom, perception of corruption, and … Generosity. The report finds support for other research suggesting that volunteering time and donating money to help others brings happiness to the giver. “The world is a rapidly changing place,” said John Helliwell, a professor emeritus in economics at the University of British Columbia and co-editor of the report. “How communities interact with each other whether in schools, workplaces, neighborhoods or on social media has profound effects on world happiness.” Other broad trends revealed in the report, which is based on a three-year average of the survey data (the most recent period being 2016–2018): Among the 20 countries where happiness grew the most between 2005 and today, “10 are in Central and Eastern Europe, five are in sub-Saharan Africa, and three in Latin America.” The 10 countries with the biggest declines in happiness “typically suffered some combination of economic, political, and social stresses,” the report states. The five largest drops since 2005: Yemen, India, Syria, Botswana and Venezuela. Average overall world happiness has fallen in recent years, driven by the sustained downward trend in India and the growing population there. Researchers see “a widespread recent upward trend in negative affect, comprising worry, sadness and anger, especially marked in Asia and Africa, and more recently elsewhere.” Image: Unsplash/Anthony Ginsbrook My own ongoing Happiness Survey (you can take it here — full results to be reported later this year) has yielded some preliminary, non-scientific results related to individual happiness. So far, those who report being the happiest also most strongly agree with these statements, on average: I’m physically healthy. I’m mentally healthy. I have a great relationship with a significant other. I’m close with my family. I enjoy my work/career. I laugh a lot. However, I suggest interpreting both sets of results with caution, if for no other reason than this simple fact: Defining happiness is a challenge itself.
https://medium.com/luminate/the-worlds-happiest-countries-f31e88cba993
['Robert Roy Britt']
2019-03-21 00:57:22.666000+00:00
['Happiness', 'Health', 'Life', 'Wellbeing', 'Science']
Title World’s Happiest CountriesContent World’s Happiest Countries Vast difference wellbeing exist happiest least happy nation Finland maintained status world’s happiest country United States slipped notch № 19 according latest annual World Happiness Report released March 20 2019 Here’s 156 country placed based Gallup Polls analyzed United Nations Sustainable Development Solutions Network report — taken least grain salt given relies somewhat unreliable selfreporting reflects average don’t speak specific individuals’ wellbeing — revealed several trend One jumped researcher analyzed data Happiness United States among adult adolescent generally declined lower turn millennium researcher said Smartphones digital technology may playing role sole cause “The compulsive pursuit substance abuse addictive behavior causing severe unhappiness” “This year’s report provides sobering evidence addiction causing considerable unhappiness depression US” said Jeffrey Sachs director Sustainable Development Solutions Network “Addictions come many form substance abuse gambling digital medium compulsive pursuit substance abuse addictive behavior causing severe unhappiness Government business community use indicator set new policy aimed overcoming source unhappiness” report indicates main factor separating happiest country least happy income per caput social support healthy life expectancy freedom perception corruption … Generosity report find support research suggesting volunteering time donating money help others brings happiness giver “The world rapidly changing place” said John Helliwell professor emeritus economics University British Columbia coeditor report “How community interact whether school workplace neighborhood social medium profound effect world happiness” broad trend revealed report based threeyear average survey data recent period 2016–2018 Among 20 country happiness grew 2005 today “10 Central Eastern Europe five subSaharan Africa three Latin America” 10 country biggest decline happiness “typically suffered combination economic political social stresses” report state five largest drop since 2005 Yemen India Syria Botswana Venezuela Average overall world happiness fallen recent year driven sustained downward trend India growing population Researchers see “a widespread recent upward trend negative affect comprising worry sadness anger especially marked Asia Africa recently elsewhere” Image UnsplashAnthony Ginsbrook ongoing Happiness Survey take — full result reported later year yielded preliminary nonscientific result related individual happiness far report happiest also strongly agree statement average I’m physically healthy I’m mentally healthy great relationship significant I’m close family enjoy workcareer laugh lot However suggest interpreting set result caution reason simple fact Defining happiness challenge itselfTags Happiness Health Life Wellbeing Science
1,355
How I “Sanity Check” Financials For a B2C Business Idea
Hypothetically, let’s consider a B2C software app that grows primarily through paid acquisition (advertising) as an example. In this scenario, it has a “freemium” model wherein the basic functionality is free but heavier usage customers have to pay via subscription. It’s not a marketplace or a service that gets better with more users, so the only “real” value a customer provides to the business is revenue. There’s three immediate KPIs that are important to contemplate: Customer Acquisition Cost: How much it costs to acquire customers. Conversion Rate: What % of customers convert into paying customers. Paying Customer Value: How much value paying customers generate in revenue. This can seem a little abstract to consider without any data, but, it’s possible to inject a sense of reality by using a framework. By researching the “closest” competitors to the idea or components of the business model in question, and getting a feel for their KPIs, there’s an initial baseline to work with. At this point, I can whirl up a basic spreadsheet and start to populate it. Here’s a link to it. In the top left, in green, I have the major KPI variables. Changing these affects the rest of the spreadsheet. My “research” returned these figures: CAC: $10 CR: 10% PCV (6M)*: $150 *I drop off revenue in months 7–12 to roughly account for churn. So, once input these figures, it populates the other cells with data. In the example above, I started off by spending $1,000 in advertising in Month 1 as a one-off injection of capital, and reinvested the returns through to Month 12. This generated “gross profit” for the year of $1,528.91, and $17,677.71 over two years, which can be used to fund operating expenditure: But, all we have here is a baseline. Now, it’s time to probe further by “playing around” with the KPIs using a logical basis. This is unique in each circumstance, so I try to calibrate them realistically and with merit. Otherwise, the process falls apart. For example, if I’m able to offer my product in a market that is untapped by competitors, I could make the assumption CAC will be lower at varying degrees and explore how that changes the financials. If my product delivers more value to the customer than the competition, I could make the assumption PCV (6M) will be higher in varying degrees and explore how that changes the financials. This is where a unique value proposition can really shine through. The reverse is also true — catering for the “unknown” and factors that are “overlooked” or “underestimated”. What happens if CAC is doubled — $20? What happens if the Conversion Rate is halved — 5%? What happens if PCV (6M) is 50% less — $100? Doing this helps me build “quick narratives” around the business model. Not just by changing numbers randomly, but by deliberately adjusting them depending upon what I believe to be the strengths and weaknesses of the business in that unique context. It’s possible to “get a feel” for what KPIs are most likely to deviate based upon the research, and to roughly what degree. This can be used to map out a minimum and maximum threshold for each KPI. You can also use a sensitivity analysis chart to consume this data better visually, where Conversion Rate is on the x-axis and PCV is on the y-axis: Image supplied by author. In the above example, the KPI range where a business can hit and “win” (green) is larger than the red area (where it “loses”).
https://medium.com/founders-hustle/how-i-sanity-check-financials-for-a-b2c-business-idea-37877cdf0dc9
['Martin Delaney']
2020-12-16 09:04:41.102000+00:00
['Leadership', 'Entrepreneurship', 'Business', 'Startup', 'Founders']
Title “Sanity Check” Financials B2C Business IdeaContent Hypothetically let’s consider B2C software app grows primarily paid acquisition advertising example scenario “freemium” model wherein basic functionality free heavier usage customer pay via subscription It’s marketplace service get better user “real” value customer provides business revenue There’s three immediate KPIs important contemplate Customer Acquisition Cost much cost acquire customer Conversion Rate customer convert paying customer Paying Customer Value much value paying customer generate revenue seem little abstract consider without data it’s possible inject sense reality using framework researching “closest” competitor idea component business model question getting feel KPIs there’s initial baseline work point whirl basic spreadsheet start populate Here’s link top left green major KPI variable Changing affect rest spreadsheet “research” returned figure CAC 10 CR 10 PCV 6M 150 drop revenue month 7–12 roughly account churn input figure populates cell data example started spending 1000 advertising Month 1 oneoff injection capital reinvested return Month 12 generated “gross profit” year 152891 1767771 two year used fund operating expenditure baseline it’s time probe “playing around” KPIs using logical basis unique circumstance try calibrate realistically merit Otherwise process fall apart example I’m able offer product market untapped competitor could make assumption CAC lower varying degree explore change financials product delivers value customer competition could make assumption PCV 6M higher varying degree explore change financials unique value proposition really shine reverse also true — catering “unknown” factor “overlooked” “underestimated” happens CAC doubled — 20 happens Conversion Rate halved — 5 happens PCV 6M 50 le — 100 help build “quick narratives” around business model changing number randomly deliberately adjusting depending upon believe strength weakness business unique context It’s possible “get feel” KPIs likely deviate based upon research roughly degree used map minimum maximum threshold KPI also use sensitivity analysis chart consume data better visually Conversion Rate xaxis PCV yaxis Image supplied author example KPI range business hit “win” green larger red area “loses”Tags Leadership Entrepreneurship Business Startup Founders
1,356
The Dark Side of Attending an Elite College
Whenever I meet up with friends from Penn or high school classmates that attended other top schools, the conversation always turns to a familiar topic: What would life be like if I hadn’t attended a top college? As someone who was obsessed with college admissions in high school, I fully bought into the myth that higher education at an elite school would make life easier. Better job opportunities, amazing connections and alumni network, and a sense of confidence that we would carry for the rest of our lives. And to be honest — all of this was largely true. I had multiple six figure job offers in consulting and finance upon graduation. I was able to peek behind the curtain and examine the lives of the true global elite. And regardless of my work experience, college is still a major talking point in most interviews. But all of this obscured the true opportunity cost of attending an elite school. And based on conversations with hundreds of similar grads, psychologists, and even professors teaching at elite colleges, there seems to be a general consensus that there are enormous hidden costs associated with a top school. Setting aside the obvious (and very real) risk of accumulating hundreds of thousands of dollars in student debt, graduating from a top school can prove detrimental in the following ways: Career Options We are consistently told that elite schools open doors to elite jobs. This is true but it glosses over the fact that these jobs are unappealing in nature to most people, whether they could get them or not. It also neglects to mention that these “elite jobs” will become your only options if you want to “maintain your upward trajectory.” You pretty much have a few options when you graduate from an elite school: Become an investment banking analyst Become a management consultant Work for an established tech company in Silicon Valley (an option that has become more common over the last 5 years) If you are unsure of what to do, you might look at academia or traditional higher prestige graduate school. You can become a doctor, lawyer or perhaps work for a fledgling startup, if money is not an issue. At most schools, people pursuing law and medicine usually have long desired to work in these fields. I can’t count how many friends became doctors (especially surgeons) because they were decent at math and science, didn’t want to work in engineering, and were to risk adverse to look into other options. Similarly, law school has become the go to place for well to do, intelligent (yet aimless) grads from top schools. As one acquaintance told me at a recent party, “It’s three whole years of substantial studying but a pretty decent break from having to get a real job. Plus it will get my parents off of my back.” It’s three whole years of substantial studying but a pretty decent break from having to get a real job. Plus it will get my parents off of my back. These are the same people who are bored out of their minds when I talk about anything remotely related to the legal field. Clearly, they do not want to be lawyers. But if you’re a humanities major who isn’t sure what you want to do with your life, this becomes an attractive option. Many of these students have family that will gladly pay for any graduate degree. Even the ones drowning in debt from their undergraduate education might pick this path. After all, you’re already in too deep — both financially and mentally. There is also the killing of positive career ambitions. I value entrepreneurship, but rarely see people who are willing to take the risk to do something different, despite having great ideas with potentially monumental impact. Mind you — I went to school with a number of friends in Wharton. You would think that of all school that a few business savvy entrepreneurs might emerge, Wharton would certainly be a major producer of business. But this is rarely the case. Why? Because while many of these students were once adventurous risk takers, they become heavily risk adverse, to the point where they would rather work a job they hate, in a city they can barely afford, and hang out with people they can hardly stand — just so that they don’t have to feel the humiliation of removing Goldman Sachs from their LinkedIn title. Stress/Increased Sensitivity to What Other People Think This one is perhaps the most insidious and it starts in early on in college. You begin to believe that taking on enormous amounts of stress and even doing things that are unethical (or even illegal) are all just “part of the game.” As long as you can pull it all together by your 8 a.m. class or 9 a.m. interview, it doesn’t matter that you haven’t slept in three days or that your partner on a group project has figured out a way to sabotage another group’s project, to give you guys an edge when graded on a curve. Not only do many of these people need a general wake up call — telling them they need to look at the bigger picture and focus on living a healthier lifestyle — many are in immediate need of help. I learned this the hard way when I got an email over winter break at Penn informing me that a student in my Spanish class had jumped to her death from a building in Philadelphia. The death of my classmate gained national news attention because she was pretty, smart, an athlete, and seemingly had everything going for her. In all these cases, a false sense of inadequacy seemed to be at the root of the problem. Could there have been other mental health issues unrelated to attending an elite school? Of course. But based on my personal knowledge of these people, I find it hard to believe that their hyper critical environment played no role. Freedom While a healthy work life balance and mental health are crucial issues, perhaps the worst of the unspoken dark sides of attending a top college is the loss of freedom many people experience. If you really dig into why most people wanted to go to a top school, the answer is pretty much the same — they wanted some sort of freedom. This could be financial freedom for someone who wants a better life, personal freedom for those who never felt accepted, or even the freedom to impact change and make a positive difference in the world. But unless you can move past your degree, make choices based on what you need and not what you think others expect, and ultimately reject the perceptions of your family, friends, and coworkers (no easy task), your elite college degree will only serve as an ever tightening noose and will ultimately hinder you from finding happiness.
https://medium.com/escaping-the-9-to-5/the-dark-side-of-attending-an-elite-college-c92d1b6c3ccb
['Casey Botticello']
2020-05-06 02:21:12.830000+00:00
['Mental Health', 'Entrepreneurship', 'Business', 'Education', 'Finance']
Title Dark Side Attending Elite CollegeContent Whenever meet friend Penn high school classmate attended top school conversation always turn familiar topic would life like hadn’t attended top college someone obsessed college admission high school fully bought myth higher education elite school would make life easier Better job opportunity amazing connection alumnus network sense confidence would carry rest life honest — largely true multiple six figure job offer consulting finance upon graduation able peek behind curtain examine life true global elite regardless work experience college still major talking point interview obscured true opportunity cost attending elite school based conversation hundred similar grad psychologist even professor teaching elite college seems general consensus enormous hidden cost associated top school Setting aside obvious real risk accumulating hundred thousand dollar student debt graduating top school prove detrimental following way Career Options consistently told elite school open door elite job true gloss fact job unappealing nature people whether could get also neglect mention “elite jobs” become option want “maintain upward trajectory” pretty much option graduate elite school Become investment banking analyst Become management consultant Work established tech company Silicon Valley option become common last 5 year unsure might look academia traditional higher prestige graduate school become doctor lawyer perhaps work fledgling startup money issue school people pursuing law medicine usually long desired work field can’t count many friend became doctor especially surgeon decent math science didn’t want work engineering risk adverse look option Similarly law school become go place well intelligent yet aimless grad top school one acquaintance told recent party “It’s three whole year substantial studying pretty decent break get real job Plus get parent back” It’s three whole year substantial studying pretty decent break get real job Plus get parent back people bored mind talk anything remotely related legal field Clearly want lawyer you’re humanity major isn’t sure want life becomes attractive option Many student family gladly pay graduate degree Even one drowning debt undergraduate education might pick path you’re already deep — financially mentally also killing positive career ambition value entrepreneurship rarely see people willing take risk something different despite great idea potentially monumental impact Mind — went school number friend Wharton would think school business savvy entrepreneur might emerge Wharton would certainly major producer business rarely case many student adventurous risk taker become heavily risk adverse point would rather work job hate city barely afford hang people hardly stand — don’t feel humiliation removing Goldman Sachs LinkedIn title StressIncreased Sensitivity People Think one perhaps insidious start early college begin believe taking enormous amount stress even thing unethical even illegal “part game” long pull together 8 class 9 interview doesn’t matter haven’t slept three day partner group project figured way sabotage another group’s project give guy edge graded curve many people need general wake call — telling need look bigger picture focus living healthier lifestyle — many immediate need help learned hard way got email winter break Penn informing student Spanish class jumped death building Philadelphia death classmate gained national news attention pretty smart athlete seemingly everything going case false sense inadequacy seemed root problem Could mental health issue unrelated attending elite school course based personal knowledge people find hard believe hyper critical environment played role Freedom healthy work life balance mental health crucial issue perhaps worst unspoken dark side attending top college loss freedom many people experience really dig people wanted go top school answer pretty much — wanted sort freedom could financial freedom someone want better life personal freedom never felt accepted even freedom impact change make positive difference world unless move past degree make choice based need think others expect ultimately reject perception family friend coworkers easy task elite college degree serve ever tightening noose ultimately hinder finding happinessTags Mental Health Entrepreneurship Business Education Finance
1,357
Microplastic Pollution In Our Soil
Microplastic Pollution In Our Soil Microplastic pollution isn’t just a marine pollution problem. Here‘s what we need to know. Photo by Noah Buscher on Unsplash When we think about microplastic pollution, we often think about the ocean. After all, we usually truck our plastic trash off for recycling or the landfill. That should keep it there, right? Sadly, recent research studies have found that microplastic pollution is a growing concern in farm soil. Thanks to these scientists, we’re now aware that microplastic can enter plants and impede the growth of plants. This means that animals that eat plants consume the plastic in these plants too. Obviously, that includes us. I know it isn’t exactly news, we’re already breathing, drinking, and eating microplastic through seafood. Still, now it confirmed that plastic is everywhere, even in fruits and vegetables! In the United States and Europe, we deposit 107,000 to 730,000 tonnes of microplastic on agricultural lands annually, which could be more than two times the amount that enters the ocean (93,000 to 236,000 tons). Where did all these microplastics come from? Sources and causes Sewage sludge A year ago, I wrote about microfiber pollution and how it’s affecting our marine environment. In my research, I came across how plastic microfibers from our clothes shed when we wash and dry them. I said clothes but it really includes any type of fabric made with synthetic fibers. Anything polyester, acrylic, nylon, or spandex is plastic in the form of textiles. They’re commonly used to make sweaters, fleece jackets, sheets, quilts, soft toys, rugs, upholstery, and etc. Every time we wash and dry these things, tiny plastic particles break off and go into the drain. At the water treatment plant, filters catch the bigger microplastics while the rest enters the waterways. Microplastics are plastic particles smaller than 5mm/0.2 inches. In contrast, microfibers are less than 10 micrometers (0.001 mm/about 0.00004″). Water treatment plants are mostly unable to catch microfibers, but they can catch microplastic. These microplastics end up in sewage sludge which is commonly used as fertilizer at farms. In Europe and the US, we apply 50% of sewage sludge as fertilizers on agricultural lands, essentially dumping tons of microplastic onto farmlands year after year. In the US, the annual tonnage dumped is approximately 21249 metric tons. Slow-release fertilizers, coated seed, and plastic mulches Besides sewage sludge, we’ve also introduced microplastic directly into farm soils in the form of plastic-encapsulated slow-release fertilizers and plastic-coated seeds. The plastic coatings were meant to protect seeds from bacteria and diseases. These are significant sources of plastic pollution. A 2019 European Chemicals Agency report placed the annual plastic released onto agricultural lands at 10,000 metric tons for slow-release fertilizers, and 500 metric tons for coated seeds. In addition, some farmers use plastic mulches in place of mulch to keep moisture and warmth in the soil and to suppress weeds. Since the 1950s, farmers have also started using plastic in place of glass for their greenhouses. These plastics are difficult to recycle and dispose of. They’re often burnt or piled in a corner of their farms where they slowly break down into smaller bits of microplastic. Naturally, all these sources of plastic breaks down into microplastic that contaminates whatever grows from the soil. Rain It’s raining plastic! Microplastic has been detected in high concentrations air and rain samples in major cities like London and Paris, but studies have found them in the arctic and remote areas all over Europe and the US too. To find out the extent of plastic pollution over protected areas in the US, Janice Brahney, an assistant professor at Utah State University, conducted a study. She collected atmospheric dust samples and rainwater from 11 National Parks and Wilderness areas in the western US. It’s raining plastic over at Bryce Canyon, Utah. Photo by Mark Boss on Unsplash They found microplastic in 98% of the samples and estimated the number of plastic particles deposited over the area to be the equivalent of 123 to 300 million water bottles. The biggest source of this microplastic pollution came from synthetic textiles from clothing, carpet, tents and climbing ropes, etc. Microbeads accounted for 30% of the observed plastic, but they aren’t the microbeads in personal products. The scientists think they might be broken off from paint and coatings. Consequences of microplastic pollution in soil What does plastic pollution in the soil mean to us? Mary Beth Kirkham, a plant physiologist and professor at Kansas State University, conducted an interesting experiment. She grew wheat plants contaminated with microplastics, cadmium, and both microplastics and cadmium. Cadmium is a very toxic cancer-causing metal commonly released into the environment through car batteries and tires. She then compared the growth of these plants to plants grown without these contaminants. More than two weeks later, the plants grown with microplastic turned yellow and wilted. Plants grown only with cadmium-contaminated soil did better, so the plant growth problem was due to microplastics. Worse, plants grown with soil contaminated with both cadmium and microplastic contained a higher level of cadmium. This is an indication that microplastics act as a vector for cadmium to enter the plant. Similar effects have been observed by scientists all over the world. Alters soil characteristics In a study conducted in Germany, researchers added different types of microplastic to the soil in different concentrations. Then they studied the microplastics’ effect on soil structure and function, water holding capacity, and microbial activity. They used 4 different types (polyacrylic fibers, polyamide beads, polyester fibers, and polyethylene fragments) of microplastic commonly found in the environment. And added them in different amounts up to just 2% in concentration. (Plastic has been detected in soil in concentrations up to 7%.) Though the full impact of microplastic soil contamination still needs to be studied, the results from this study show that microplastic affects fundamental soil characteristics. In the words of the scientists, “microplastics are relevant long-term anthropogenic stressors and drivers of global change in terrestrial ecosystems.” Contaminates vegetables and fruits A group of Italian researchers has detected the presence of microplastic in a variety of supermarket produce like apples, carrots, and lettuce. Apples were the most contaminated, while lettuce was the least contaminated. The scientists think that the perennial nature of fruit trees allow more plastic to accumulate. Another study done by Chinese researchers found that plants contaminated with nanoplastics don’t grow as well and have lower chlorophyll content. They found evidence of nanoplastics bioaccumulating in plants and concluded that microplastic pollution can affect agricultural sustainability and food safety. Bioaccumulates up the food chain — plastic and toxins The natural question at this point is, what happens to animals (and humans) that consume these plants? In studies conducted on rats, scientists learned that microplastics can accumulate in the gut, liver, and kidneys, disrupt the metabolism of energy and fat, and cause oxidative stress. The smaller the microplastic, the quicker and easier it passed into the rat’s tissues and organs. The horror! Consider Professor Kirkham’s experiment which demonstrated that microplastic can increase the chemical contamination of plants, and the problem becomes worse. Owing to the characteristic of plastic, it’s easy for microorganisms and pollutants (like lead and pesticides) to bind to its surface. While we don’t understand the full effects of these contaminated particles on the human body yet, both microplastic and its contaminants can bioaccumulate as we go up the food chain. For instance, the microplastic enters the plants, cows eat the plant in copious amounts. Over time, microplastics and toxins that entered the plant bioaccumulates. By the time we consume the beef, the plastic and toxin content would be elevated. Now what? The more I read about plastic pollution, the more evident it is that what I know is just the tip of this nasty iceberg. I’m grateful for hardworking scientists studying climate change and plastic pollution. The solution to microplastic pollution, if there’s even one, has to be a collective effort. No single country, individual, or profession can solve this problem. Absolutely everyone has to chip in. As a consumer, there are limits to what we can do, but as usual, I’ll suggest the following: Vote for leaders who know about and propose comprehensive climate change solution (a comprehensive solution will include plastic pollution too) Listen to and learn from the scientists Talk about the plastic and climate issues to everyone who’s willing to listen Make lifestyle changes to reduce plastic use A note about synthetic fibers Previously, I was in two minds about synthetic fibers. Surely recycled polyester clothes are good? Plastic down-recycled into stuffings and rugs seems to be a good use of plastic too, but now I’m thinking twice about it. After all, microplastic from synthetic textiles (including stuffings and rugs) is a very significant global source of pollution in the environment — land, water, air… it’s everywhere. However, suggesting a wardrobe change is extremely irresponsible if we don’t address our overconsumption of clothes. People may start buying too many natural-fiber clothes and that would tax natural resources. A better way is to buy secondhand natural-fiber clothes, reduce our polyester clothes use, and go for a small but high-quality wardrobe rather than a tonne of plasticky clothes.
https://medium.com/thoughts-economics-politics-sustainability/microplastic-pollution-in-our-soil-9772d639d96f
['Julie X']
2020-09-08 20:35:29.830000+00:00
['Sustainability', 'Microplastic Pollution', 'Climate Change', 'Environment', 'Plastic Pollution']
Title Microplastic Pollution SoilContent Microplastic Pollution Soil Microplastic pollution isn’t marine pollution problem Here‘s need know Photo Noah Buscher Unsplash think microplastic pollution often think ocean usually truck plastic trash recycling landfill keep right Sadly recent research study found microplastic pollution growing concern farm soil Thanks scientist we’re aware microplastic enter plant impede growth plant mean animal eat plant consume plastic plant Obviously includes u know isn’t exactly news we’re already breathing drinking eating microplastic seafood Still confirmed plastic everywhere even fruit vegetable United States Europe deposit 107000 730000 tonne microplastic agricultural land annually could two time amount enters ocean 93000 236000 ton microplastics come Sources cause Sewage sludge year ago wrote microfiber pollution it’s affecting marine environment research came across plastic microfibers clothes shed wash dry said clothes really includes type fabric made synthetic fiber Anything polyester acrylic nylon spandex plastic form textile They’re commonly used make sweater fleece jacket sheet quilt soft toy rug upholstery etc Every time wash dry thing tiny plastic particle break go drain water treatment plant filter catch bigger microplastics rest enters waterway Microplastics plastic particle smaller 5mm02 inch contrast microfibers le 10 micrometer 0001 mmabout 000004″ Water treatment plant mostly unable catch microfibers catch microplastic microplastics end sewage sludge commonly used fertilizer farm Europe US apply 50 sewage sludge fertilizer agricultural land essentially dumping ton microplastic onto farmland year year US annual tonnage dumped approximately 21249 metric ton Slowrelease fertilizer coated seed plastic mulch Besides sewage sludge we’ve also introduced microplastic directly farm soil form plasticencapsulated slowrelease fertilizer plasticcoated seed plastic coating meant protect seed bacteria disease significant source plastic pollution 2019 European Chemicals Agency report placed annual plastic released onto agricultural land 10000 metric ton slowrelease fertilizer 500 metric ton coated seed addition farmer use plastic mulch place mulch keep moisture warmth soil suppress weed Since 1950s farmer also started using plastic place glass greenhouse plastic difficult recycle dispose They’re often burnt piled corner farm slowly break smaller bit microplastic Naturally source plastic break microplastic contaminates whatever grows soil Rain It’s raining plastic Microplastic detected high concentration air rain sample major city like London Paris study found arctic remote area Europe US find extent plastic pollution protected area US Janice Brahney assistant professor Utah State University conducted study collected atmospheric dust sample rainwater 11 National Parks Wilderness area western US It’s raining plastic Bryce Canyon Utah Photo Mark Boss Unsplash found microplastic 98 sample estimated number plastic particle deposited area equivalent 123 300 million water bottle biggest source microplastic pollution came synthetic textile clothing carpet tent climbing rope etc Microbeads accounted 30 observed plastic aren’t microbeads personal product scientist think might broken paint coating Consequences microplastic pollution soil plastic pollution soil mean u Mary Beth Kirkham plant physiologist professor Kansas State University conducted interesting experiment grew wheat plant contaminated microplastics cadmium microplastics cadmium Cadmium toxic cancercausing metal commonly released environment car battery tire compared growth plant plant grown without contaminant two week later plant grown microplastic turned yellow wilted Plants grown cadmiumcontaminated soil better plant growth problem due microplastics Worse plant grown soil contaminated cadmium microplastic contained higher level cadmium indication microplastics act vector cadmium enter plant Similar effect observed scientist world Alters soil characteristic study conducted Germany researcher added different type microplastic soil different concentration studied microplastics’ effect soil structure function water holding capacity microbial activity used 4 different type polyacrylic fiber polyamide bead polyester fiber polyethylene fragment microplastic commonly found environment added different amount 2 concentration Plastic detected soil concentration 7 Though full impact microplastic soil contamination still need studied result study show microplastic affect fundamental soil characteristic word scientist “microplastics relevant longterm anthropogenic stressor driver global change terrestrial ecosystems” Contaminates vegetable fruit group Italian researcher detected presence microplastic variety supermarket produce like apple carrot lettuce Apples contaminated lettuce least contaminated scientist think perennial nature fruit tree allow plastic accumulate Another study done Chinese researcher found plant contaminated nanoplastics don’t grow well lower chlorophyll content found evidence nanoplastics bioaccumulating plant concluded microplastic pollution affect agricultural sustainability food safety Bioaccumulates food chain — plastic toxin natural question point happens animal human consume plant study conducted rat scientist learned microplastics accumulate gut liver kidney disrupt metabolism energy fat cause oxidative stress smaller microplastic quicker easier passed rat’s tissue organ horror Consider Professor Kirkham’s experiment demonstrated microplastic increase chemical contamination plant problem becomes worse Owing characteristic plastic it’s easy microorganism pollutant like lead pesticide bind surface don’t understand full effect contaminated particle human body yet microplastic contaminant bioaccumulate go food chain instance microplastic enters plant cow eat plant copious amount time microplastics toxin entered plant bioaccumulates time consume beef plastic toxin content would elevated read plastic pollution evident know tip nasty iceberg I’m grateful hardworking scientist studying climate change plastic pollution solution microplastic pollution there’s even one collective effort single country individual profession solve problem Absolutely everyone chip consumer limit usual I’ll suggest following Vote leader know propose comprehensive climate change solution comprehensive solution include plastic pollution Listen learn scientist Talk plastic climate issue everyone who’s willing listen Make lifestyle change reduce plastic use note synthetic fiber Previously two mind synthetic fiber Surely recycled polyester clothes good Plastic downrecycled stuffing rug seems good use plastic I’m thinking twice microplastic synthetic textile including stuffing rug significant global source pollution environment — land water air… it’s everywhere However suggesting wardrobe change extremely irresponsible don’t address overconsumption clothes People may start buying many naturalfiber clothes would tax natural resource better way buy secondhand naturalfiber clothes reduce polyester clothes use go small highquality wardrobe rather tonne plasticky clothesTags Sustainability Microplastic Pollution Climate Change Environment Plastic Pollution
1,358
Solving “Container Killed by Yarn For Exceeding Memory Limits” Exception in Apache Spark
Introduction Apache Spark is an open-source framework for distributed big-data processing. Originally written in Scala, it also has native bindings for Java, Python, and R programming languages. It also supports SQL, Streaming Data, Machine Learning, and Graph Processing. All in all, Apache Spark is often termed as Unified analytics engine for large-scale data processing. If you have been using Apache Spark for some time, you would have faced an exception which looks something like this: Container killed by YARN for exceeding memory limits, 5 GB of 5GB used The reason can either be on the driver node or on the executor node. In simple words, the exception says, that while processing, spark had to take more data in memory that the executor/driver actually has. There can be a few reasons for this which can be resolved in the following ways: Your data is skewed, which means you have not partitioned the data properly during processing which resulted in more data to process for a particular task. In this case, you can examine your data and try a custom partitioner that uniformly partitions the dataset. Your Spark Job might be shuffling a lot of data over the network. Out of the memory available for an executor, only some part is allotted for shuffle cycle. Try using efficient Spark API's like reduceByKey over groupByKey etc, if not already done. Sometimes, shuffle can be unavoidable though. In that case, we need to increase memory configurations which we will discuss in further points If the above two points are not applicable, try the following in order until the error is resolved. Revert any changes you might have made to spark conf files before moving ahead. Increase Memory Overhead Memory Overhead is the amount of off-heap memory allocated to each executor. By default, memory overhead is set to the higher value between 10% of the Executor Memory or 384 mb. Memory Overhead is used for Java NIO direct buffers, thread stacks, shared native libraries, or memory-mapped files. The above exception can occur on either driver or executor node. Wherever the error is, try increasing the overhead memory gradually for that container only (driver or executor) and re-run the job. Maximum recommended memoryOverhead is 25% of the executor memory Caution: Make sure that the sum of the driver or executor memory plus the driver or executor memory overhead is always less than the value of yarn.nodemanager.resource.memory-mb i.e. spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mb You have to change the property by editing the spark-defaults.conf file on the master node. sudo vim /etc/spark/conf/spark-defaults.conf spark.driver.memoryOverhead 1024 spark.executor.memoryOverhead 1024 You can specify the above properties cluster-wide for all the jobs or you can also pass it as a configuration for a single job like below spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --conf spark.driver.memoryOverhead=512 --conf spark.executor.memoryOverhead=512 <path/to/jar> If this doesn’t solve your problem, try the next point Reducing the number of Executor Cores If you have a higher number of executor cores, the amount of memory required goes up. So, try reducing the number of cores per executor which reduces the number of tasks that can run on the executor, thus reducing the memory required. Again, change the configuration of driver or executor depending on where the error is. sudo vim /etc/spark/conf/spark-defaults.conf spark.driver.cores 3 spark.executor.cores 3 Similar to the previous point, you can specify the above properties cluster-wide for all the jobs or you can also pass it as a configuration for a single job like below: spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --executor-cores 5--driver-cores 4 <path/to/jar> If this doesn’t work, see the next point Increase the number of partitions If there are more partitions, the amount of memory required per partition would be less. Memory usage can be monitored by Ganglia. You can increase the number of partitions by invoking .repartition(<num_partitions>) on RDD or Dataframe No luck yet? Increase executor or driver memory. Increase Driver or Executor Memory Depending on where the error has occurred, increase the memory of the driver or executor Caution: spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mb sudo vim /etc/spark/conf/spark-defaults.conf spark.executor.memory 2g spark.driver.memory 1g Just like other properties, this can also be overridden per job spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --executor-memory 2g --driver-memory 1g <path/to/jar> Most likely by now, you should have resolved the exception. If not, you might need more memory-optimized instances for your cluster! Happy Coding! Reference: https://aws.amazon.com/premiumsupport/knowledge-center/emr-spark-yarn-memory-limit/
https://medium.com/analytics-vidhya/solving-container-killed-by-yarn-for-exceeding-memory-limits-exception-in-apache-spark-b3349685df16
['Chandan Bhattad']
2019-11-01 04:46:53.356000+00:00
['Spark', 'Big Data', 'Distributed Systems', 'Data Engineering', 'Apache Spark']
Title Solving “Container Killed Yarn Exceeding Memory Limits” Exception Apache SparkContent Introduction Apache Spark opensource framework distributed bigdata processing Originally written Scala also native binding Java Python R programming language also support SQL Streaming Data Machine Learning Graph Processing Apache Spark often termed Unified analytics engine largescale data processing using Apache Spark time would faced exception look something like Container killed YARN exceeding memory limit 5 GB 5GB used reason either driver node executor node simple word exception say processing spark take data memory executordriver actually reason resolved following way data skewed mean partitioned data properly processing resulted data process particular task case examine data try custom partitioner uniformly partition dataset Spark Job might shuffling lot data network memory available executor part allotted shuffle cycle Try using efficient Spark APIs like reduceByKey groupByKey etc already done Sometimes shuffle unavoidable though case need increase memory configuration discus point two point applicable try following order error resolved Revert change might made spark conf file moving ahead Increase Memory Overhead Memory Overhead amount offheap memory allocated executor default memory overhead set higher value 10 Executor Memory 384 mb Memory Overhead used Java NIO direct buffer thread stack shared native library memorymapped file exception occur either driver executor node Wherever error try increasing overhead memory gradually container driver executor rerun job Maximum recommended memoryOverhead 25 executor memory Caution Make sure sum driver executor memory plus driver executor memory overhead always le value yarnnodemanagerresourcememorymb ie sparkdriverexecutormemory sparkdriverexecutormemoryOverhead yarnnodemanagerresourcememorymb change property editing sparkdefaultsconf file master node sudo vim etcsparkconfsparkdefaultsconf sparkdrivermemoryOverhead 1024 sparkexecutormemoryOverhead 1024 specify property clusterwide job also pas configuration single job like sparksubmit class orgapachesparkexamplesWordCount master yarn deploymode cluster conf sparkdrivermemoryOverhead512 conf sparkexecutormemoryOverhead512 pathtojar doesn’t solve problem try next point Reducing number Executor Cores higher number executor core amount memory required go try reducing number core per executor reduces number task run executor thus reducing memory required change configuration driver executor depending error sudo vim etcsparkconfsparkdefaultsconf sparkdrivercores 3 sparkexecutorcores 3 Similar previous point specify property clusterwide job also pas configuration single job like sparksubmit class orgapachesparkexamplesWordCount master yarn deploymode cluster executorcores 5drivercores 4 pathtojar doesn’t work see next point Increase number partition partition amount memory required per partition would le Memory usage monitored Ganglia increase number partition invoking repartitionnumpartitions RDD Dataframe luck yet Increase executor driver memory Increase Driver Executor Memory Depending error occurred increase memory driver executor Caution sparkdriverexecutormemory sparkdriverexecutormemoryOverhead yarnnodemanagerresourcememorymb sudo vim etcsparkconfsparkdefaultsconf sparkexecutormemory 2g sparkdrivermemory 1g like property also overridden per job sparksubmit class orgapachesparkexamplesWordCount master yarn deploymode cluster executormemory 2g drivermemory 1g pathtojar likely resolved exception might need memoryoptimized instance cluster Happy Coding Reference httpsawsamazoncompremiumsupportknowledgecenteremrsparkyarnmemorylimitTags Spark Big Data Distributed Systems Data Engineering Apache Spark
1,359
An Honest Conversation With My Mum Looking Back At My Eating Disorder
An Honest Conversation With My Mum Looking Back At My Eating Disorder Refinery29 UK Follow Mar 29 · 8 min read By Eve Simmons Photographed by Eylul Aslan I am a strong woman and it’s all thanks to my mother, a staunch feminist who spent the majority of her 20s reclaiming the night and her 30s dressing her baby daughter in anything other than pink dresses. The first sentence I ever learned was “more food please”. A little further down the line I learned how to ask (politely) for seconds, whenever I wanted them. Following the unwritten rule of feminism, the word ‘diet’ was forbidden. So when I developed a tormenting, tyrannical eating disorder at the age of 22 my mum was, understandably, shocked. As was I — not to mention anyone who had ever shared a “shall we order one of everything?” meal with me. I was living in my north London family home at the time, having just landed my first job in fashion journalism as an intern. A combination of a mild identity crisis, slotting myself into the skinny model set and an anxious disposition led to me clutching for some sense of control, when all else felt uncontrollable. The rise of clean eating was a convenient curse. Manipulating and later, restricting, my diet was the focus I’d been looking for. It took all of two months for Mum to notice and march me to the doctor’s surgery. And it took her all of five months to come to the heartbreaking realisation that this was something she couldn’t fix. Now, five years on, we’ve just completed my third Eating Disorders Awareness Week as a fully recovered and functioning adult. It’s only now, after starting my own eating disorder support website and having written a book on the subject, that I’ve begun to read stories from parents, carers and other loved ones, and come to terms with what my disorder must have been like for my nearest and dearest. Despite her unwavering love and support, I know that my mother — like every mother who has ever lived — still harbours a pernicious guilt. And given the enormous portion of pudding she now slops on my plate at family dinners, I know she’s terrified it’ll happen again. Her words, spoken in one particularly poignant family therapy session, still linger. “It was my job to protect you. And I couldn’t. I’ll never forgive myself for that.” It’s been four years since we had that conversation and we haven’t spoken in great detail about it since. I’ve been petrified to bring it up — hearing her utter those words was hard enough the first time. Now, I want to relieve her of those feelings. So last week, as we tucked into an apple tart, I attempted to do just that. Eve: This tart is lovely. Remember when I never used to eat tart? Mum: The day the doctors told us that you had to go into hospital. You were terrified. They said they were going to take you in and you just stared at me as if to say, Please, just make it better. And I knew if it were down to me, I wouldn’t be able to do it. I remember looking at you and saying, ‘I think you have to do what they’re saying. You have to go into hospital.’ It was heartbreaking. Eve: God yeah, I’m so sorry Mum. And then there was the time you bought me a collection of teeny tiny chocolate bars. Mum: I was so petrified of overwhelming you. My approach was always ‘slowly slowly’, so I would collect little boxes of raisins and mini nuts and put them in your handbag, thinking, hoping, you might get tempted. Then a few weeks later I was putting your clothes away and found everything I’d bought unopened, stuffed at the back of your cupboard. I just sat on the stairs and cried. Eve: Well that’s nice and depressing. Look at me now though! [said through mouthful of pastry] Mum: Well exactly. The one thing I always said about both my children was that they loved their food. You both grew up with healthy appetites and adored your food. And I loved watching it. You’d eat wholeheartedly. Then suddenly, you didn’t. Eve: And it’s especially weird considering I’m your child…and my brother’s sister. Mum: Yes. I did wonder how on Earth it could happen to us…and where I went wrong. Obviously I blamed myself, because I always blame myself. Eve: [Teary] But you know Mum, from what I’ve learned in the past few years about this illness, sometimes there really is no explanation. It just happens, just like any other illness. Mum: I know, I know. But while I know that I couldn’t have necessarily prevented it, as a mother what hurts is not being able to make it better. You grew up generally listening to what I said. I always hoped that I’d had a positive influence on what you thought and you’d come to me expecting answers. The worst moment was when I realised no matter what I did, I couldn’t make it better this time. Eve: When did you realise that? Mum: The day the doctors told us that you had to go into hospital. You were terrified. They said they were going to take you in and you just stared at me as if to say, Please, just make it better. And I knew if it were down to me, I wouldn’t be able to do it. I remember looking at you and saying, ‘I think you have to do what they’re saying. You have to go into hospital.’ It was heartbreaking. Eve: Did you ever think about what would happen if…the…worst… Mum: I didn’t let myself think about it. I couldn’t bear to. That’s why I knew you had to go to hospital — as scary as it was. Your brother was living in the US, my husband had been dead for a decade. You were my…everything. I wasn’t losing another person I loved. Eve: I guess being in hospital sheltered me from whatever was going on at home — and how you were dealing with it. Mum: I was absolutely frantic. Leaving you there was one of the hardest things I’ve ever had to do. I spent hours on end on the phone to the hospital, trying to find out what was going on and make sure you were seeing a professional, rather than being isolated in your room. I knew that few of the staff had professional training and a lot of them had actually come from working in prisons. And that’s how they treated you — like prisoners. Eve: I couldn’t have got through it without that. But it wasn’t too bad in the end, food-wise. As soon as I started eating — because I didn’t have a choice — it became less scary and I was able to eat pretty much everything quite quickly. Mum: Not from where I was sitting. You had good days and bad days. If you were ever stressed out or upset or worried, you wouldn’t eat much and then your weight would drop, just like that. I came to see you after you’d been in hospital for a month and you took off your jumper — and I could see all your bones. I was with your brother and he was so shocked, he couldn’t speak for an hour after we left. Eve: That’s so weird because I remember feeling like I was getting better at that point — and that I looked okay. Mum: [Raises eyebrows] You didn’t that day. But then I started to see that you still had fight in you. The hospital was so horrid that you pledged to do whatever you could to get out of there — and you did. You fought to escape so you could tell the story — like a true Simmons. Eve: Here’s an uncomfortable question. Despite always teaching me that all food was good food, did my illness make you question your own eating habits? Mum: Well, for the past 10 years I’ve lived with inflammatory bowel disease and have had to eat very small portions, otherwise I could be in agonising pain. And I know that’s something you picked up on. There were times when I’d force myself to eat a bigger meal to set a good example and end up awake all night, writhing in pain. But I was confident in the knowledge that I never had a problem with food. I never even worried about size growing up, like so many girls my age. Eve: What? Never? Mum: Nope. I was always quite curvy but didn’t ever obsess over it. I didn’t get on well [with my mother] so I rejected everything she did — including diets. Oh and [giggling] when it comes to exercise, I think I’ve done about five sit-ups in my whole life. Eve: Yes, we never were [a family] for exercise, were we? Mum: No, which is why I thought it was the weirdest thing when I saw you doing sit-ups on your bedroom floor when you became ill. It just wasn’t us — it wasn’t you. Eve: See, how can you feel guilty when you couldn’t have possibly passed anything on to me? Mum: Because mothers always blame themselves don’t they? And I’m convinced it’s something to do with the early death of your father — him being ill with cancer for so long — and I’ll always carry guilt that you didn’t have the carefree childhood I felt you should have had. Whether it was my fault or not. You were the good girl who never complained and I always felt that there would be a time when the anxiety would catch up with you. And I was right, it did. Eve: Maybe. But who knows why it happened. It isn’t anyone’s fault. And at least there’s something good to come out of it — it’s given me a sense of purpose, of passion. Mum: Absolutely. And for that I am immensely proud of you. I think the way you help other people is wonderful. You want to stop people going through what you did — what we all did. Eve: But you were worried about me writing about it at first! Mum: Yes, because I know how journalism works. And I knew that the moment you spoke out, you’d always be ‘the girl with the eating disorder’. I worried that you’d become so consumed with it all, you wouldn’t have a chance to pursue other opportunities and experiences. Eve: But if anything it’s given me more experiences. Mum: You’re right. And — having been so private and not told anyone — I realised that my daughter had been so brave in speaking about it to help others, I ought to do the same too. Eve: As I sit here shovelling spoonfuls of apple tart into my mouth, can you honestly, seriously tell me that you still worry about my relationship with food? Mum: It’s something I’ll always think could rear its ugly head again. Just like any illness. As your mother, I don’t think I’ll ever stop worrying about that. If you or someone you love is struggling with an eating disorder, please call on 0808 801 0677. Support and information is available 365 days a year.
https://medium.com/refinery29/an-honest-conversation-with-my-mum-looking-back-at-my-eating-disorder-b8894dd82751
[]
2020-03-29 19:01:00.889000+00:00
['Wellness', 'Living', 'Eating Disorders', 'Health']
Title Honest Conversation Mum Looking Back Eating DisorderContent Honest Conversation Mum Looking Back Eating Disorder Refinery29 UK Follow Mar 29 · 8 min read Eve Simmons Photographed Eylul Aslan strong woman it’s thanks mother staunch feminist spent majority 20 reclaiming night 30 dressing baby daughter anything pink dress first sentence ever learned “more food please” little line learned ask politely second whenever wanted Following unwritten rule feminism word ‘diet’ forbidden developed tormenting tyrannical eating disorder age 22 mum understandably shocked — mention anyone ever shared “shall order one everything” meal living north London family home time landed first job fashion journalism intern combination mild identity crisis slotting skinny model set anxious disposition led clutching sense control else felt uncontrollable rise clean eating convenient curse Manipulating later restricting diet focus I’d looking took two month Mum notice march doctor’s surgery took five month come heartbreaking realisation something couldn’t fix five year we’ve completed third Eating Disorders Awareness Week fully recovered functioning adult It’s starting eating disorder support website written book subject I’ve begun read story parent carers loved one come term disorder must like nearest dearest Despite unwavering love support know mother — like every mother ever lived — still harbour pernicious guilt given enormous portion pudding slop plate family dinner know she’s terrified it’ll happen word spoken one particularly poignant family therapy session still linger “It job protect couldn’t I’ll never forgive that” It’s four year since conversation haven’t spoken great detail since I’ve petrified bring — hearing utter word hard enough first time want relieve feeling last week tucked apple tart attempted Eve tart lovely Remember never used eat tart Mum day doctor told u go hospital terrified said going take stared say Please make better knew wouldn’t able remember looking saying ‘I think they’re saying go hospital’ heartbreaking Eve God yeah I’m sorry Mum time bought collection teeny tiny chocolate bar Mum petrified overwhelming approach always ‘slowly slowly’ would collect little box raisin mini nut put handbag thinking hoping might get tempted week later putting clothes away found everything I’d bought unopened stuffed back cupboard sat stair cried Eve Well that’s nice depressing Look though said mouthful pastry Mum Well exactly one thing always said child loved food grew healthy appetite adored food loved watching You’d eat wholeheartedly suddenly didn’t Eve it’s especially weird considering I’m child…and brother’s sister Mum Yes wonder Earth could happen us…and went wrong Obviously blamed always blame Eve Teary know Mum I’ve learned past year illness sometimes really explanation happens like illness Mum know know know couldn’t necessarily prevented mother hurt able make better grew generally listening said always hoped I’d positive influence thought you’d come expecting answer worst moment realised matter couldn’t make better time Eve realise Mum day doctor told u go hospital terrified said going take stared say Please make better knew wouldn’t able remember looking saying ‘I think they’re saying go hospital’ heartbreaking Eve ever think would happen if…the…worst… Mum didn’t let think couldn’t bear That’s knew go hospital — scary brother living US husband dead decade my…everything wasn’t losing another person loved Eve guess hospital sheltered whatever going home — dealing Mum absolutely frantic Leaving one hardest thing I’ve ever spent hour end phone hospital trying find going make sure seeing professional rather isolated room knew staff professional training lot actually come working prison that’s treated — like prisoner Eve couldn’t got without wasn’t bad end foodwise soon started eating — didn’t choice — became le scary able eat pretty much everything quite quickly Mum sitting good day bad day ever stressed upset worried wouldn’t eat much weight would drop like came see you’d hospital month took jumper — could see bone brother shocked couldn’t speak hour left Eve That’s weird remember feeling like getting better point — looked okay Mum Raises eyebrow didn’t day started see still fight hospital horrid pledged whatever could get — fought escape could tell story — like true Simmons Eve Here’s uncomfortable question Despite always teaching food good food illness make question eating habit Mum Well past 10 year I’ve lived inflammatory bowel disease eat small portion otherwise could agonising pain know that’s something picked time I’d force eat bigger meal set good example end awake night writhing pain confident knowledge never problem food never even worried size growing like many girl age Eve Never Mum Nope always quite curvy didn’t ever ob didn’t get well mother rejected everything — including diet Oh giggling come exercise think I’ve done five situps whole life Eve Yes never family exercise Mum thought weirdest thing saw situps bedroom floor became ill wasn’t u — wasn’t Eve See feel guilty couldn’t possibly passed anything Mum mother always blame don’t I’m convinced it’s something early death father — ill cancer long — I’ll always carry guilt didn’t carefree childhood felt Whether fault good girl never complained always felt would time anxiety would catch right Eve Maybe know happened isn’t anyone’s fault least there’s something good come — it’s given sense purpose passion Mum Absolutely immensely proud think way help people wonderful want stop people going — Eve worried writing first Mum Yes know journalism work knew moment spoke you’d always ‘the girl eating disorder’ worried you’d become consumed wouldn’t chance pursue opportunity experience Eve anything it’s given experience Mum You’re right — private told anyone — realised daughter brave speaking help others ought Eve sit shovelling spoonful apple tart mouth honestly seriously tell still worry relationship food Mum It’s something I’ll always think could rear ugly head like illness mother don’t think I’ll ever stop worrying someone love struggling eating disorder please call 0808 801 0677 Support information available 365 day yearTags Wellness Living Eating Disorders Health
1,360
Which framework is better: Angular.js, React.js, or Vue.js?
Before I answer, if you‘re reading this article to pick a framework “to learn”, don’t. Read this article instead. If you want to pick a framework to use (in an actual project), you may proceed :)
https://medium.com/edge-coders/which-framework-is-better-angular-js-react-js-or-vue-js-77c67d00d410
['Samer Buna']
2019-01-29 22:50:39.831000+00:00
['React', 'Programming', 'JavaScript', 'Angularjs', 'Vuejs']
Title framework better Angularjs Reactjs VuejsContent answer you‘re reading article pick framework “to learn” don’t Read article instead want pick framework use actual project may proceed Tags React Programming JavaScript Angularjs Vuejs
1,361
The Inspirational Fiction Books that Changed Me More Than Self-Help
The Inspirational Fiction Books that Changed Me More Than Self-Help The last book changed me for a reason you wouldn’t expect. Photo by Justin on Unsplash Escapism helped me cope with a year plagued by its bitter reality. I escaped the fear of being stuck inside by exercising outdoors. I escaped the fear of negative thoughts by minimizing my news intake. I escaped the fear of the world’s certain uncertainty by diving into the undemanding world of inspirational fiction. Inspirational fiction allowed me to live in a world where hope and prosperity run rampant. They introduced me to characters with which I could empathize, characters with which I could grow for the entirety of the 200-or-so pages. These stories built up my faith muscles and reminded me that there is still positivity; still good; still happiness circulating around the world. The books below can do the same for you. They can pluck you out of the four walls you’ve been staring at for the last year and introduce you to a world of potential. By reading them, you will feel an unmatched delight near impossible to gain by simply reading self-help books. Inspirational fiction allows you to feel the help in action, not just be told it. (Note: These are not affiliate links. I just wanted to make it easy for you to go and purchase some darn good reading material that has the possibility of uplifting your day)
https://medium.com/mind-cafe/the-inspirational-fiction-books-that-changed-me-more-than-self-help-2e3bc6e7e0be
['Jordan Gross']
2020-12-29 16:39:31.658000+00:00
['Life Lessons', 'Inspiration', 'Self Improvement', 'Creativity', 'Books']
Title Inspirational Fiction Books Changed SelfHelpContent Inspirational Fiction Books Changed SelfHelp last book changed reason wouldn’t expect Photo Justin Unsplash Escapism helped cope year plagued bitter reality escaped fear stuck inside exercising outdoors escaped fear negative thought minimizing news intake escaped fear world’s certain uncertainty diving undemanding world inspirational fiction Inspirational fiction allowed live world hope prosperity run rampant introduced character could empathize character could grow entirety 200orso page story built faith muscle reminded still positivity still good still happiness circulating around world book pluck four wall you’ve staring last year introduce world potential reading feel unmatched delight near impossible gain simply reading selfhelp book Inspirational fiction allows feel help action told Note affiliate link wanted make easy go purchase darn good reading material possibility uplifting dayTags Life Lessons Inspiration Self Improvement Creativity Books
1,362
Finland’s New Free AI Courses
Finland’s New Free AI Courses How to get a certificate and take advantage of the course by Elements AI. Photo by Arttu Päivinen on Unsplash Besides being the home of Santa Claus, Finland is known as a tech leader, even ahead of the US, according to the UNDP. Indeed, tech operations constitute “over 50% of all Finnish exports.” We even owe technologies like Linux and the first web browser to Finland. Today, Finland is keeping up its tech legacy with its free Elements of AI online course. Overview Elements of AI is a set of two online courses made by Reaktor and the University of Helsinki, combining theory and practice with teaching as many people as possible about AI. The two courses are titled Introduction to AI and Building AI . The course is well on its way to achieving its mission of making AI accessible, as over 550,000 people have already signed up, as of writing. Introduction to AI The first course is split into six chapters: What is AI? AI problem solving Real-world AI Machine learning Neural networks Implications Screenshot of “Elements of AI” course progress section, captured by the author. The course is very well designed, with simple explanations, nice visualizations, and exercises at the bottom of most chapters to solidify your learning. Both courses feature a “Course Progress” ribbon to show you how you’re progressing through the course and to keep you motivated. Building AI The second section will take around 50 hours and is split into five chapters: Getting started with AI Dealing with uncertainty Machine learning Neural networks Conclusion This time, the exercises are more in-depth and practical, so they’ll be more challenging than before. Be sure to check out the community below if you get stuck. Community Elements of AI come with an awesome, highly active community at Spectrum, where you can discuss and ask questions about each chapter. As of writing, the community has almost 8,000 members with whom you can ask questions and study. I’ve found it’s an invaluable resource to make sure I truly understand the material. Best of all, it’s free! Certificate You can purchase a certificate, upon completion, for each course, for just 50 Euros. This will be a shareable certificate that would make a great addition to any CV or LinkedIn, although it’s totally optional, and the course is normally free. The Final Project For the final project, you’re expected to demonstrate your skills and creativity. While it’s not required, it’s a great opportunity to put your skills to practice and share with a community of thousands of other learners. Elements of AI gives a lot of inspiration and ideas for final projects, such as “Sources Checker” — a bot that checks the sources of news articles online. Other ideas include noise pollution forecasting, predicting stock criteria like growth and reliability, matching ideas and doers, automating applications to relevant jobs, making expert recommendations, assessing financial risk, recommending healthy meals, and many more. Perhaps my favorite idea is the “AI credit-risk management for social lending” project, which uses AI to predict credit risk. Models like these are already being used in the real world. For instance, the micro-loan company Creditt uses Obviously.AI’s API to score customer profiles and find out how much to credit users.
https://medium.com/towards-artificial-intelligence/finlands-new-free-ai-courses-b75c1d53ac84
['Frederik Bussler']
2020-12-10 19:07:29.138000+00:00
['AI', 'Artificial Intelligence', 'Learning', 'Data Science', 'Education']
Title Finland’s New Free AI CoursesContent Finland’s New Free AI Courses get certificate take advantage course Elements AI Photo Arttu Päivinen Unsplash Besides home Santa Claus Finland known tech leader even ahead US according UNDP Indeed tech operation constitute “over 50 Finnish exports” even owe technology like Linux first web browser Finland Today Finland keeping tech legacy free Elements AI online course Overview Elements AI set two online course made Reaktor University Helsinki combining theory practice teaching many people possible AI two course titled Introduction AI Building AI course well way achieving mission making AI accessible 550000 people already signed writing Introduction AI first course split six chapter AI AI problem solving Realworld AI Machine learning Neural network Implications Screenshot “Elements AI” course progress section captured author course well designed simple explanation nice visualization exercise bottom chapter solidify learning course feature “Course Progress” ribbon show you’re progressing course keep motivated Building AI second section take around 50 hour split five chapter Getting started AI Dealing uncertainty Machine learning Neural network Conclusion time exercise indepth practical they’ll challenging sure check community get stuck Community Elements AI come awesome highly active community Spectrum discus ask question chapter writing community almost 8000 member ask question study I’ve found it’s invaluable resource make sure truly understand material Best it’s free Certificate purchase certificate upon completion course 50 Euros shareable certificate would make great addition CV LinkedIn although it’s totally optional course normally free Final Project final project you’re expected demonstrate skill creativity it’s required it’s great opportunity put skill practice share community thousand learner Elements AI give lot inspiration idea final project “Sources Checker” — bot check source news article online idea include noise pollution forecasting predicting stock criterion like growth reliability matching idea doer automating application relevant job making expert recommendation assessing financial risk recommending healthy meal many Perhaps favorite idea “AI creditrisk management social lending” project us AI predict credit risk Models like already used real world instance microloan company Creditt us ObviouslyAI’s API score customer profile find much credit usersTags AI Artificial Intelligence Learning Data Science Education
1,363
Why you should never agree to use teleportation
Why you should never agree to use teleportation Spoiler: because it’ll probably kill you…at least for a little while. If you’ve seen any sort of science fiction movie — you’ve probably come across the notion of teleportation. The ability to instantly be transported from one side of the planet — to the other. Imagine a world where you could be in Paris for breakfast, Buenos Aires for lunch, and the newest restaurant on the moon for dinner. Pure fantasy right? It may have been fantasy…until 2018 anyway. Scientists in China successfully teleported a photon from Earth onto a satellite 300 miles away. This moved the concept of teleportation from being impossible to simply being a herculean endeavour. Before we start tasting that freshly baked French bread each morning — we first need to work out how to teleport larger particles, small inanimate objects, “lesser” forms of life, and finally humans. That is to say nothing of the seemingly astronomical amount of computing power and transmission bandwidth we will need to be capable of harnessing in order to teleport a human. One day, a century or two from now, this technology will be mature. The question then arises — should you use a transporter, or will it mean your instant death with your life being taken over by a doppelganger? How do you know that whoever steps into the transporter is the same person who steps out? Let us consider four ways in which a transporter might work, and whether that would mean that “you” come out the other end or a copy. Facsimile Body Transmission Mind Transmission Wormholes Facsimile Your body is scanned by the teleporter in your lounge room and deconstructed. You are reprinted at the destination with new “ink”. Whilst atomically (and genetically) identical — the person at the destination would be a copy as the base materials used are different “instances” of those elements. You, of course, are dead — and will stay dead. To demonstrate with another example — imagine transporting a house from point A to point B using this method. The house in point A has been destroyed, and while the bricks being printed in Point B look identical — they are mere copies. Body Transmission Your body is scanned, and deconstructed into its constituent “Lego blocks” (read: atoms). These same blocks are then fed through some sort of pipe (or via quantum entanglement) and drop out at the destination — where they are reassembled into yourself. Unlike the previous example, the very same atoms in the original you have made it to the destination. In this scenario — you were definitely killed but were you brought back to life and consciousness. Or was a new instance of your consciousness that was “booted up”? Does it even matter if it’s a different instance of consciousness? Mind Transmission Your body is scanned. A replica is reprinted at the destination — including all the data in your brain (memories, facts, relationships, and neural pathways). The electrochemical impulses that course through your brain are transmitted (similar to a data file over Bluetooth or wi-fi) and into your new brain. This way, while the body is new, the original “spark of life” has been transmitted over to Point B. The consciousness of the individual may have effectively just blanked out (as you would under a coma or deep sleep) for a few milliseconds. Wormholes The teleportation device creates and opens a wormhole under your feet that creates a tunnel through space-time, with the other end of the wormhole terminating at your destination. In this way — you and your atoms remain wholly intact, and you effectively walk through a door or get onto a slide which takes you to where you need to go. This solution saves you from any death and preserves the continuity of your consciousness.
https://medium.com/predict/why-you-should-never-agree-to-use-teleportation-cec3a3de58f2
['Kesh Anand']
2019-06-26 20:20:59.707000+00:00
['Consciousness', 'Future', 'Science Fiction', 'Technology', 'Science']
Title never agree use teleportationContent never agree use teleportation Spoiler it’ll probably kill you…at least little you’ve seen sort science fiction movie — you’ve probably come across notion teleportation ability instantly transported one side planet — Imagine world could Paris breakfast Buenos Aires lunch newest restaurant moon dinner Pure fantasy right may fantasy…until 2018 anyway Scientists China successfully teleported photon Earth onto satellite 300 mile away moved concept teleportation impossible simply herculean endeavour start tasting freshly baked French bread morning — first need work teleport larger particle small inanimate object “lesser” form life finally human say nothing seemingly astronomical amount computing power transmission bandwidth need capable harnessing order teleport human One day century two technology mature question arises — use transporter mean instant death life taken doppelganger know whoever step transporter person step Let u consider four way transporter might work whether would mean “you” come end copy Facsimile Body Transmission Mind Transmission Wormholes Facsimile body scanned teleporter lounge room deconstructed reprinted destination new “ink” Whilst atomically genetically identical — person destination would copy base material used different “instances” element course dead — stay dead demonstrate another example — imagine transporting house point point B using method house point destroyed brick printed Point B look identical — mere copy Body Transmission body scanned deconstructed constituent “Lego blocks” read atom block fed sort pipe via quantum entanglement drop destination — reassembled Unlike previous example atom original made destination scenario — definitely killed brought back life consciousness new instance consciousness “booted up” even matter it’s different instance consciousness Mind Transmission body scanned replica reprinted destination — including data brain memory fact relationship neural pathway electrochemical impulse course brain transmitted similar data file Bluetooth wifi new brain way body new original “spark life” transmitted Point B consciousness individual may effectively blanked would coma deep sleep millisecond Wormholes teleportation device creates open wormhole foot creates tunnel spacetime end wormhole terminating destination way — atom remain wholly intact effectively walk door get onto slide take need go solution save death preserve continuity consciousnessTags Consciousness Future Science Fiction Technology Science
1,364
This is why your read-eval-print-loop is so amazing
One of the things that makes the tech community so special is that we are always looking for ways to work more efficiently. Everyone has their favorite set of tools which makes them run better. As a professional UI dev, the Chrome DevTools and the Node.js read-eval-print-loop (REPL) became my favorite tools early on. I noticed that they enabled me to work more efficiently and allowed me to learn new things more quickly. The three phases of the REPL process This actually made me curious to investigate why this tool is so useful. I could easily find plenty of blog posts which explained what REPLs are and how to use them, for example here or here. But this post here is dedicated to the why (as in why are REPLs such a great tool for developers). “The number one reason that schools move away from Java as a teaching language is the high bars to Hello-world programs.” — Stuart Halloway What is a REPL? REPL stands for read-evaluate-print-loop and this is basically all there is to it. Your application runtime is in a specific state and the REPL helps you to interact with it. The REPL will read and evaluate the commands and print the result and then go back to the start to read your next input. The evaluate step might change your runtime. This process can be seen as an interview with your application to query its current state. In other words, the REPL makes your runtime more tangible and allows you to test hypotheses about it. According to Stuart Halloway, the absence of a REPL in Java is the most significant reason why schools started to move to other languages to teach programming. Some people even use the REPL to write better unit tests. Do I already use a REPL (-like tool) today? This basic explanation might have reminded you of some tools which you use every day. If you know and use one of the following tools, the answer is “yes”: The dev tools of your browser (like Chrome DevTools) Your terminal/shell Jupyter Notebooks The REPL process in Clojure Repl.it, jsfiddle.net, or jsbin.com Online regex validators Why is the REPL so helpful? This question kept me up at night because I didn’t understand what makes us inefficient in the first place. I started to research some common psychological effects and tried to link them to my daily interactions with the REPL. Here are my top three hypotheses: Being in the flow Flow is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. (source) I think all of us are familiar with this state, it makes us extremely productive and time flies basically. Unfortunately, it’s fairly easy to “lose” the flow, for example when you get interrupted or when you have to wait for some period. I learned this can happen very fast: Researchers found out that one second is about the limit for the user’s flow of thought to stay uninterrupted. The REPL doesn’t need to compile or deploy your code. This leads to a very short response time (<100ms). Thus, you are able to test your hypotheses without losing the flow. This is what we want to avoid (source: XKCD) Positive Reinforcement Positive reinforcement involves the addition of a reinforcing stimulus following a behavior that makes it more likely that the behavior will occur again. (source) This is the effect that appeals the most to me. Your brain learns to favor certain actions when they were rewarded in the past. This reward could be a bonus from your boss after an outstanding month or a simple “Great job!” from your skiing instructor. Every time your REPL experiment succeeds and you solved a puzzle/problem, your brain feels rewarded as well! This also takes place when you code in a common IDE. But the REPL responds way faster and allows you to iterate more often. So, more experiments lead to more reinforcement. This effect makes you use the REPL more often and keeps your eye on the ball (instead of distracting yourself by checking for emails). Digital Amnesia The tendency to forget information that can be found readily online by using Internet search engines. (source) I have to admit, I often mix Java, Python and JavaScript syntax, because that information can be found all over the internet. I would ask myself “Do I need to use add(), append() or push() to add a new element to an array in JavaScript?”. Thus for me, an example of this effect is recalling method names of API and language references. In the REPL, I can see the available functions immediately with autocomplete: The code-completion feature of the Node.js REPL The great thing is, this works beyond the standard objects of programming languages. This works for all frameworks and modules, which makes the REPL more mighty than your IDE! There’s no need to compare the version numbers of modules and API references anymore: “Truth can only be found in one place: the code.” – Robert C. Martin, Clean Code I hope this article helped you to understand how your brain works and how the REPL can help you to be more productive. I’m curious to see if you agree with my hypotheses or if you know more tools to be a more efficient developer. Update 2/13/2019: I’ve also written a blog post about the usage of REPLs in Cloud Foundry Environments. Check out this video by DJ Adams if you’d like to see the REPL in action :)
https://medium.com/free-code-camp/this-is-why-your-read-eval-print-loop-is-so-amazing-cf0362003983
[]
2019-02-13 17:29:32.137000+00:00
['Programming', 'JavaScript', 'Psychology', 'Tech', 'Productivity']
Title readevalprintloop amazingContent One thing make tech community special always looking way work efficiently Everyone favorite set tool make run better professional UI dev Chrome DevTools Nodejs readevalprintloop REPL became favorite tool early noticed enabled work efficiently allowed learn new thing quickly three phase REPL process actually made curious investigate tool useful could easily find plenty blog post explained REPLs use example post dedicated REPLs great tool developer “The number one reason school move away Java teaching language high bar Helloworld programs” — Stuart Halloway REPL REPL stand readevaluateprintloop basically application runtime specific state REPL help interact REPL read evaluate command print result go back start read next input evaluate step might change runtime process seen interview application query current state word REPL make runtime tangible allows test hypothesis According Stuart Halloway absence REPL Java significant reason school started move language teach programming people even use REPL write better unit test already use REPL like tool today basic explanation might reminded tool use every day know use one following tool answer “yes” dev tool browser like Chrome DevTools terminalshell Jupyter Notebooks REPL process Clojure Replit jsfiddlenet jsbincom Online regex validators REPL helpful question kept night didn’t understand make u inefficient first place started research common psychological effect tried link daily interaction REPL top three hypothesis flow Flow mental state operation person performing activity fully immersed feeling energized focus full involvement enjoyment process activity source think u familiar state make u extremely productive time fly basically Unfortunately it’s fairly easy “lose” flow example get interrupted wait period learned happen fast Researchers found one second limit user’s flow thought stay uninterrupted REPL doesn’t need compile deploy code lead short response time 100ms Thus able test hypothesis without losing flow want avoid source XKCD Positive Reinforcement Positive reinforcement involves addition reinforcing stimulus following behavior make likely behavior occur source effect appeal brain learns favor certain action rewarded past reward could bonus bos outstanding month simple “Great job” skiing instructor Every time REPL experiment succeeds solved puzzleproblem brain feel rewarded well also take place code common IDE REPL responds way faster allows iterate often experiment lead reinforcement effect make use REPL often keep eye ball instead distracting checking email Digital Amnesia tendency forget information found readily online using Internet search engine source admit often mix Java Python JavaScript syntax information found internet would ask “Do need use add append push add new element array JavaScript” Thus example effect recalling method name API language reference REPL see available function immediately autocomplete codecompletion feature Nodejs REPL great thing work beyond standard object programming language work framework module make REPL mighty IDE There’s need compare version number module API reference anymore “Truth found one place code” – Robert C Martin Clean Code hope article helped understand brain work REPL help productive I’m curious see agree hypothesis know tool efficient developer Update 2132019 I’ve also written blog post usage REPLs Cloud Foundry Environments Check video DJ Adams you’d like see REPL action Tags Programming JavaScript Psychology Tech Productivity
1,365
Creating Good UX for Better AI
Creating Good UX for Better AI How to design a product that benefits both the user and the AI model As you’ve probably noticed, Machine Learning and Artificial Intelligence are here to stay and will continue to disrupt the market. Many products have inherently integrated AI functions (i.e., Netflix’s suggestions, Facebook’s auto-tagging, Google’s question answering), and by 2024, 69% of the manager’s routine workload, will be automated, as Gartner forecasts. A lot of work has been done around designing products that make AI accessible for users, but what about designing a product that improves the AI model? How does UX approach the development of better AI? I’ve always been very excited about AI, and for the past couple of months, I’ve been working on the Product Management and UX of several highly technical and advanced AI products. In my experience, bridging the gap between the science behind Machine Learning(ML) and the end-user is a real challenge, but it’s crucial and valuable. Humans have a huge responsibility when it comes to teaching the different models — it can either turn into something great or go horribly wrong. In this article, I will focus on the two sides of an AI product, and then combine them into one approach that will benefit both the end-user and the ML model. So, first, let’s focus on the two sides of the experience: User-centered design Model-centered design After becoming familiar with these, I’ll combine them into one Machine Learning Experience — Model-User Design. User-Centered Design — Creating a good product User-centered design is the shared goal of everyone interested in UX. If the product is centered around a real user’s needs, it is far more likely to create a product-market fit and generate happy customers. AI is pretty new to people. Many people are afraid of it for many reasons — from giving false predictions to taking away their jobs (not to mention their lives, but that’s some Terminator stuff). That’s why creating a good experience for the user is crucial. There are a couple of tools we can use in order to create a good experience in AI products. We’ll cover some of them, including finding the right problem to solve in order to provide value, how to explain the model running “under the hood”, keeping the user involved in the learning process and preparing for mistakes. Find a good problem to solve The basic rule of product-market fit, which applies to all other products, applies to AI. For the product to succeed, a real problem needs to be solved. If we create the most complicated state-of-the-art AI product that predicts the flying route of a fly, that would be a great model, but no problem is being solved and no value is being created. AI should add value to users and optimize the way they work. “The only reason your product should exist is to solve someone’s problem.” — Kevin Systrom, Co-Founder of Instagram Explainability Explainable AI explains what AI does to the user. The user has the right to understand why the algorithm predicted something. Explaining the why creates a more reliable connection and a feeling of trust. There are many examples such as product content suggestions on Netflix and YouTube — “Because you liked X:”, or “Based on your watch history:”. These sentences make you understand why Netflix suggested Ozark — because you watched Breaking Bad! You should also be aware that it’s not just about the experience, but that it’s a regulation ‘thing’. GDPR includes the right of an individual to ask for a human review of the AI’s prediction, to understand if the algorithm has made a mistake. Control & User feedback We should keep in mind that the model doesn’t always know what’s best for the user, and that users should feel they have the power to affect the model and “teach” it. For example — create opportunities for the user to provide feedback if the prediction is right or not. These types of messages enable feedback from the user, which will eventually help the prediction improve. Prepare for mistakes An AI algorithm won’t be 100% correct all the time. That’s why the algorithm should be able to project its confidence in a prediction —if a prediction isn’t very confident, the user should know about it and take it with a grain of salt. Also, be ready to handle mistakes and errors. The user is more likely to accept mistakes in AI if they are followed with an explanation of why the model came to its prediction (as mentioned before — explainability!). This statement should also be followed by information on how to improve the model in the future. It’s really important to remember AI has a huge impact on people’s lives. That’s why AI models’ predictions and mistakes have a colossal effect on people’s lives — wrong predictions may be highly offensive to the user (e.g., Google’s horrible false classification) or cause physical damage and even death (e.g., accidents made by self-driving cars). Model-Centered Design — Creating a good AI Now that we’re aligned about what user-centered design is, let’s talk about how to make the design centered around the ML model — how to improve the model and make the learning process as efficient and beneficial as possible. When we talked about user-centered design, our goal was to make the model understand the user. Now, let’s try to make sure the user understands the model. To make this generic and straightforward, let’s establish a very high-level flow of the machine learning process: In order to think about Machine Learning Experience, let’s forget for a second what we know about user interface components. Let’s talk about the process and how it meets humans. Training a model The training part of the ML model is essentially taking a lot of data and uploading it so that the algorithm can learn from it. Let’s say we want to train a model to identify lemurs in pictures. A training process can include uploading 1,000 images, some labelled and some not. Then, waiting for the model to learn. At the end of the process the model will be trained and can identify a lemur! As users, we’d like to make sure the algorithm learned. That’s why it’s important to visualize and clarify the training process — things like the accuracy of the model, the number of epochs that it took for it to learn, etc. Also, if we want to make sure the model works as we want it to, we can move to inference phase. Inference In this part, we’d like to test the understanding of the model. Inferring, to put it in very simple words, is pressing the “run” button on the AI model, with a given input. If we take the lemur example from before, at this point, we would upload a picture and check that the model understands what a lemur is and what isn’t. After seeing the result, the user should have the ability to provide feedback, so the model will learn and improve. Monitoring In order to make sure the model is performing well, monitoring is needed. It’s essential to understand the relevant metrics in order to monitor the model well. For a deeper understanding of the subject, I highly recommend reading this article: Model-User Design — Creating a good AI Product Now, when we know both sides of the AI-Product equation, we’re able to identify the guidelines for creating a good AI product: When thinking about the product’s users, we need to take into consideration the ML researcher who will feed and train the algorithm. With that in mind, we have some key takeaways: Quality Control — Help the user understand the model To give good predictions and provide an actual value, the top motivation for the ML researcher is to make sure the algorithm is as accurate as possible. For that to happen, we need the user to have comprehensive understanding of the model’s inputs and outputs. e.g., users should understand the importance of labelling training data and giving feedback to the predictions. The better users understand the important metrics of the model, the better they’ll be able to improve the model and get better results. In other words, in order to improve the model, users need to understand the “needs” of the model. Feedback Feedback Feedback — Help the model understand the user In order to improve the model, it’s important to make the user’s feedback as intuitive as possible and make it a big part of the user flow. There’s only so much an algorithm can understand about human needs without actual human input (imagine expecting a baby to learn how to speak without teaching it what’s right and what’s wrong). Make it personal Making users feel like they’re taking an active part in a product’s functioning is highly beneficial, for two reasons: If the users feel their contribution is making the model’s improvement, they will be much more invested. The more the users feel the model knows them and understands their needs, the more they will enjoy the effects of the model, get precise predictions, and trust the model. Extra reading on the subject can be found on this great post about the IKEA effect: Learn from the best (inputs) It’s a shared motivation for the model to learn from the best quality of input. A good design can encourage the user to upload high-quality inputs and remark when and why low-quality inputs aren’t good enough. e.g., a message saying the input image’s quality is too low in a way that the user understands and “believes”, therefore, wants to upload better images.
https://medium.com/beyondminds/creating-good-ux-for-better-ai-fefae1d9ac2f
['Omri Lachman']
2020-10-01 07:50:46.432000+00:00
['AI', 'Artificial Intelligence', 'Technology', 'UX', 'Machine Learning']
Title Creating Good UX Better AIContent Creating Good UX Better AI design product benefit user AI model you’ve probably noticed Machine Learning Artificial Intelligence stay continue disrupt market Many product inherently integrated AI function ie Netflix’s suggestion Facebook’s autotagging Google’s question answering 2024 69 manager’s routine workload automated Gartner forecast lot work done around designing product make AI accessible user designing product improves AI model UX approach development better AI I’ve always excited AI past couple month I’ve working Product Management UX several highly technical advanced AI product experience bridging gap science behind Machine LearningML enduser real challenge it’s crucial valuable Humans huge responsibility come teaching different model — either turn something great go horribly wrong article focus two side AI product combine one approach benefit enduser ML model first let’s focus two side experience Usercentered design Modelcentered design becoming familiar I’ll combine one Machine Learning Experience — ModelUser Design UserCentered Design — Creating good product Usercentered design shared goal everyone interested UX product centered around real user’s need far likely create productmarket fit generate happy customer AI pretty new people Many people afraid many reason — giving false prediction taking away job mention life that’s Terminator stuff That’s creating good experience user crucial couple tool use order create good experience AI product We’ll cover including finding right problem solve order provide value explain model running “under hood” keeping user involved learning process preparing mistake Find good problem solve basic rule productmarket fit applies product applies AI product succeed real problem need solved create complicated stateoftheart AI product predicts flying route fly would great model problem solved value created AI add value user optimize way work “The reason product exist solve someone’s problem” — Kevin Systrom CoFounder Instagram Explainability Explainable AI explains AI user user right understand algorithm predicted something Explaining creates reliable connection feeling trust many example product content suggestion Netflix YouTube — “Because liked X” “Based watch history” sentence make understand Netflix suggested Ozark — watched Breaking Bad also aware it’s experience it’s regulation ‘thing’ GDPR includes right individual ask human review AI’s prediction understand algorithm made mistake Control User feedback keep mind model doesn’t always know what’s best user user feel power affect model “teach” example — create opportunity user provide feedback prediction right type message enable feedback user eventually help prediction improve Prepare mistake AI algorithm won’t 100 correct time That’s algorithm able project confidence prediction —if prediction isn’t confident user know take grain salt Also ready handle mistake error user likely accept mistake AI followed explanation model came prediction mentioned — explainability statement also followed information improve model future It’s really important remember AI huge impact people’s life That’s AI models’ prediction mistake colossal effect people’s life — wrong prediction may highly offensive user eg Google’s horrible false classification cause physical damage even death eg accident made selfdriving car ModelCentered Design — Creating good AI we’re aligned usercentered design let’s talk make design centered around ML model — improve model make learning process efficient beneficial possible talked usercentered design goal make model understand user let’s try make sure user understands model make generic straightforward let’s establish highlevel flow machine learning process order think Machine Learning Experience let’s forget second know user interface component Let’s talk process meet human Training model training part ML model essentially taking lot data uploading algorithm learn Let’s say want train model identify lemur picture training process include uploading 1000 image labelled waiting model learn end process model trained identify lemur user we’d like make sure algorithm learned That’s it’s important visualize clarify training process — thing like accuracy model number epoch took learn etc Also want make sure model work want move inference phase Inference part we’d like test understanding model Inferring put simple word pressing “run” button AI model given input take lemur example point would upload picture check model understands lemur isn’t seeing result user ability provide feedback model learn improve Monitoring order make sure model performing well monitoring needed It’s essential understand relevant metric order monitor model well deeper understanding subject highly recommend reading article ModelUser Design — Creating good AI Product know side AIProduct equation we’re able identify guideline creating good AI product thinking product’s user need take consideration ML researcher feed train algorithm mind key takeaway Quality Control — Help user understand model give good prediction provide actual value top motivation ML researcher make sure algorithm accurate possible happen need user comprehensive understanding model’s input output eg user understand importance labelling training data giving feedback prediction better user understand important metric model better they’ll able improve model get better result word order improve model user need understand “needs” model Feedback Feedback Feedback — Help model understand user order improve model it’s important make user’s feedback intuitive possible make big part user flow There’s much algorithm understand human need without actual human input imagine expecting baby learn speak without teaching what’s right what’s wrong Make personal Making user feel like they’re taking active part product’s functioning highly beneficial two reason user feel contribution making model’s improvement much invested user feel model know understands need enjoy effect model get precise prediction trust model Extra reading subject found great post IKEA effect Learn best input It’s shared motivation model learn best quality input good design encourage user upload highquality input remark lowquality input aren’t good enough eg message saying input image’s quality low way user understands “believes” therefore want upload better imagesTags AI Artificial Intelligence Technology UX Machine Learning
1,366
Hierarchical Clustering on Categorical Data in R
Dissimilarity Matrix Arguably, this is the backbone of your clustering. Dissimilarity matrix is a mathematical expression of how different, or distant, the points in a data set are from each other, so you can later group the closest ones together or separate the furthest ones — which is a core idea of clustering. This is the step where data types differences are important as dissimilarity matrix is based on distances between individual data points. While it is quite easy to imagine distances between numerical data points (remember Eucledian distances, as an example?), categorical data (factors in R) does not seem as obvious. In order to calculate a dissimilarity matrix in this case, you would go for something called Gower distance. I won’t get into the math of it, but I am providing a links here and here. To do that I prefer to use daisy() with metric = c("gower") from the cluster package . #----- Dummy Data -----# # the data will be sterile clean in order to not get distracted with other issues that might arise, but I will also write about some difficulties I had, outside the code library(dplyr) # ensuring reproducibility for sampling set.seed(40) # generating random variable set # specifying ordered factors, strings will be converted to factors when using data.frame() # customer ids come first, we will generate 200 customer ids from 1 to 200 id.s <- c(1:200) %>% factor() budget.s <- sample(c("small", "med", "large"), 200, replace = T) %>% factor(levels=c("small", "med", "large"), ordered = TRUE) origins.s <- sample(c("x", "y", "z"), 200, replace = T, prob = c(0.7, 0.15, 0.15)) area.s <- sample(c("area1", "area2", "area3", "area4"), 200, replace = T, prob = c(0.3, 0.1, 0.5, 0.2)) source.s <- sample(c("facebook", "email", "link", "app"), 200, replace = T, prob = c(0.1,0.2, 0.3, 0.4)) ## day of week - probabilities are mocking the demand curve dow.s <- sample(c("mon", "tue", "wed", "thu", "fri", "sat", "sun"), 200, replace = T, prob = c(0.1, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2)) %>% factor(levels=c("mon", "tue", "wed", "thu", "fri", "sat", "sun"), ordered = TRUE) # dish dish.s <- sample(c("delicious", "the one you don't like", "pizza"), 200, replace = T) # by default, data.frame() will convert all the strings to factors synthetic.customers <- data.frame(id.s, budget.s, origins.s, area.s, source.s, dow.s, dish.s) #----- Dissimilarity Matrix -----# library(cluster) # to perform different types of hierarchical clustering # package functions used: daisy(), diana(), clusplot() gower.dist <- daisy(synthetic.customers[ ,2:7], metric = c("gower")) # class(gower.dist) ## dissimilarity , dist Done with a dissimilarity matrix. That’s very fast on 200 observations, but can be very computationally expensive in case you have a large data set. In reality, it is quite likely that you will have to clean the dataset first, perform the necessary transformations from strings to factors and keep an eye on missing values. In my own case, the dataset contained rows of missing values, which nicely clustered together every time, leading me to assume that I found a treasure until I had a look at the values (meh!). Clustering Algorithms You could have heard that there is k-means and hierarchical clustering. In this post, I focus on the latter as it is a more exploratory type, and it can be approached differently: you could choose to follow either agglomerative (bottom-up) or divisive (top-down) way of clustering. Agglomerative clustering will start with n clusters, where n is the number of observations, assuming that each of them is its own separate cluster. Then the algorithm will try to find most similar data points and group them, so they start forming clusters. In contrast, divisive clustering will go the other way around — assuming all your n data points are one big cluster and dividing most dissimilar ones into separate groups. If you are thinking which one of them to use, it is always worth trying all the options, but in general, agglomerative clustering is better in discovering small clusters, and is used by most software; divisive clustering — in discovering larger clusters. I personally like having a look at dendrograms — graphical representation of clustering first to decide which method I will stick to. As you will see below, some of dendrograms will be pretty balanced , while others will look like a mess. # The main input for the code below is dissimilarity (distance matrix) # After dissimilarity matrix was calculated, the further steps will be the same for all data types # I prefer to look at the dendrogram and fine the most appealing one first - in this case, I was looking for a more balanced one - to further continue with assessment #------------ DIVISIVE CLUSTERING ------------# divisive.clust <- diana(as.matrix(gower.dist), diss = TRUE, keep.diss = TRUE) plot(divisive.clust, main = "Divisive") #------------ AGGLOMERATIVE CLUSTERING ------------# # I am looking for the most balanced approach # Complete linkages is the approach that best fits this demand - I will leave only this one here, don't want to get it cluttered # complete aggl.clust.c <- hclust(gower.dist, method = "complete") plot(aggl.clust.c, main = "Agglomerative, complete linkages") Assessing clusters Here, you will decide between different clustering algorithms and a different number of clusters. As it often happens with assessment, there is more than one way possible, complemented by your own judgement. It’s bold and in italics because your own judgement is important — the number of clusters should make practical sense and they way data is divided into groups should make sense too. Working with categorical variables, you might end up with non-sense clusters because the combination of their values is limited — they are discrete, so is the number of their combinations. Possibly, you don’t want to have a very small number of clusters either — they are likely to be too general. In the end, it all comes to your goal and what you do your analysis for. Conceptually, when clusters are created, you are interested in distinctive groups of data points, such that the distance between them within clusters (or compactness) is minimal while the distance between groups (separation) is as large as possible. This is intuitively easy to understand: distance between points is a measure of their dissimilarity derived from dissimilarity matrix. Hence, the assessment of clustering is built around evaluation of compactness and separation. I will go for 2 approaches here and show that one of them might produce nonsense results: Elbow method: start with it when the compactness of clusters, or similarities within groups are most important for your analysis. Silhouette method: as a measure of data consistency, the silhouette plot displays a measure of how close each point in one cluster is to points in the neighboring clusters. In practice, they are very likely to provide different results that might be confusing at a certain point— different number of clusters will correspond to the most compact / most distinctively separated clusters, so judgement and understanding what your data is actually about will be a significant part of making the final decision. There are also a bunch of measurements that you can analyze for your own case. I am adding them to the code itself. # Cluster stats comes out as list while it is more convenient to look at it as a table # This code below will produce a dataframe with observations in columns and variables in row # Not quite tidy data, which will require a tweak for plotting, but I prefer this view as an output here as I find it more comprehensive library(fpc) cstats.table <- function(dist, tree, k) { clust.assess <- c("cluster.number","n","within.cluster.ss","average.within","average.between", "wb.ratio","dunn2","avg.silwidth") clust.size <- c("cluster.size") stats.names <- c() row.clust <- c() output.stats <- matrix(ncol = k, nrow = length(clust.assess)) cluster.sizes <- matrix(ncol = k, nrow = k) for(i in c(1:k)){ row.clust[i] <- paste("Cluster-", i, " size") } for(i in c(2:k)){ stats.names[i] <- paste("Test", i-1) for(j in seq_along(clust.assess)){ output.stats[j, i] <- unlist(cluster.stats(d = dist, clustering = cutree(tree, k = i))[clust.assess])[j] } for(d in 1:k) { cluster.sizes[d, i] <- unlist(cluster.stats(d = dist, clustering = cutree(tree, k = i))[clust.size])[d] dim(cluster.sizes[d, i]) <- c(length(cluster.sizes[i]), 1) cluster.sizes[d, i] } } output.stats.df <- data.frame(output.stats) cluster.sizes <- data.frame(cluster.sizes) cluster.sizes[is.na(cluster.sizes)] <- 0 rows.all <- c(clust.assess, row.clust) # rownames(output.stats.df) <- clust.assess output <- rbind(output.stats.df, cluster.sizes)[ ,-1] colnames(output) <- stats.names[2:k] rownames(output) <- rows.all is.num <- sapply(output, is.numeric) output[is.num] <- lapply(output[is.num], round, 2) output } # I am capping the maximum amout of clusters by 7 # I want to choose a reasonable number, based on which I will be able to see basic differences between customer groups as a result stats.df.divisive <- cstats.table(gower.dist, divisive.clust, 7) stats.df.divisive Look, average.within, which is an average distance among observations within clusters, is shrinking, so does within cluster SS. Average silhouette width is a bit less straightforward, but the reverse relationship is nevertheless there. See how disproportional the size of clusters is. I wouldn’t rush into working with incomparable number of observations within clusters. One of the reasons, the dataset can be imbalanced, and some group of observations will outweigh all the rest in the analysis — this is not good and is likely to lead to biases. stats.df.aggl <-cstats.table(gower.dist, aggl.clust.c, 7) #complete linkages looks like the most balanced approach stats.df.aggl Notice how more balanced agglomerative complete linkages hierarchical clustering is compared on the number of observations per group. # --------- Choosing the number of clusters ---------# # Using "Elbow" and "Silhouette" methods to identify the best number of clusters # to better picture the trend, I will go for more than 7 clusters. library(ggplot2) # Elbow # Divisive clustering ggplot(data = data.frame(t(cstats.table(gower.dist, divisive.clust, 15))), aes(x=cluster.number, y=within.cluster.ss)) + geom_point()+ geom_line()+ ggtitle("Divisive clustering") + labs(x = "Num.of clusters", y = "Within clusters sum of squares (SS)") + theme(plot.title = element_text(hjust = 0.5)) So, we’ve produced the “elbow” graph. It shows how the within sum of squares — as a measure of closeness of observations : the lower it is the closer the observations within the clusters are — changes for the different number of clusters. Ideally, we should see a distinctive “bend” in the elbow where splitting clusters further gives only minor decrease in the SS. In the case of a graph below, I would go for something around 7. Although in this case, one of a clusters will consist of only 2 observations, let’s see what happens with agglomerative clustering. # Agglomerative clustering,provides a more ambiguous picture ggplot(data = data.frame(t(cstats.table(gower.dist, aggl.clust.c, 15))), aes(x=cluster.number, y=within.cluster.ss)) + geom_point()+ geom_line()+ ggtitle("Agglomerative clustering") + labs(x = "Num.of clusters", y = "Within clusters sum of squares (SS)") + theme(plot.title = element_text(hjust = 0.5)) Agglomerative “elbow” looks similar to that of divisive, except that agglomerative one looks smoother — with “bends” being less abrupt. Similarly to divisive clustering, I would go for 7 clusters, but choosing between the two methods, I like the size of the clusters produced by the agglomerative method more — I want something comparable in size. # Silhouette ggplot(data = data.frame(t(cstats.table(gower.dist, divisive.clust, 15))), aes(x=cluster.number, y=avg.silwidth)) + geom_point()+ geom_line()+ ggtitle("Divisive clustering") + labs(x = "Num.of clusters", y = "Average silhouette width") + theme(plot.title = element_text(hjust = 0.5)) When it comes to silhouette assessment, the rule is you should choose the number that maximizes the silhouette coefficient because you want clusters that are distinctive (far) enough to be considered separate. The silhouette coefficient ranges between -1 and 1, with 1 indicating good consistency within clusters, -1 — not so good. From the plot above, you would not go for 5 clusters — you would rather prefer 9. As a comparison, for the “easy” case, the silhouette plot is likely to look like the graph below. We are not quite, but almost there. ggplot(data = data.frame(t(cstats.table(gower.dist, aggl.clust.c, 15))), aes(x=cluster.number, y=avg.silwidth)) + geom_point()+ geom_line()+ ggtitle("Agglomerative clustering") + labs(x = "Num.of clusters", y = "Average silhouette width") + theme(plot.title = element_text(hjust = 0.5)) What the silhouette width graph above is saying is “the more you break the dataset, the more distinctive the clusters become”. Ultimately, you will end up with individual data points — and you don’t want that, and if you try a larger k for the number of clusters — you will see it. E.g., at k=30, I got the following graph: So-so: the more you split, the better it gets, but we can’t be splitting till individual data points (remember that we have 30 clusters here in the graph above, and only 200 data points). Summing it all up, agglomerative clustering in this case looks way more balanced to me — the cluster sizes are more or less comparable (look at that cluster with just 2 observations in the divisive section!), and I would go for 7 clusters obtained by this method. Let’s see how they look and check what’s inside. The dataset consists of 6 variables which need to be visualized in 2D or 3D, so it’s time for a challenge! The nature of categorical data poses some limitations too, so using some pre-defined solutions might get tricky. What I want to a) see how observations are clustered, b) know is how observations are distributed across categories, thus I created a) a colored dendrogram, b) a heatmap of observations count per variable within each cluster. library("ggplot2") library("reshape2") library("purrr") library("dplyr") # let's start with a dendrogram library("dendextend") dendro <- as.dendrogram(aggl.clust.c) dendro.col <- dendro %>% set("branches_k_color", k = 7, value = c("darkslategray", "darkslategray4", "darkslategray3", "gold3", "darkcyan", "cyan3", "gold3")) %>% set("branches_lwd", 0.6) %>% set("labels_colors", value = c("darkslategray")) %>% set("labels_cex", 0.5) ggd1 <- as.ggdend(dendro.col) ggplot(ggd1, theme = theme_minimal()) + labs(x = "Num. observations", y = "Height", title = "Dendrogram, k = 7") # Radial plot looks less cluttered (and cooler) ggplot(ggd1, labels = T) + scale_y_reverse(expand = c(0.2, 0)) + coord_polar(theta="x") # Time for the heatmap # the 1st step here is to have 1 variable per row # factors have to be converted to characters in order not to be dropped clust.num <- cutree(aggl.clust.c, k = 7) synthetic.customers.cl <- cbind(synthetic.customers, clust.num) cust.long <- melt(data.frame(lapply(synthetic.customers.cl, as.character), stringsAsFactors=FALSE), id = c("id.s", "clust.num"), factorsAsStrings=T) cust.long.q <- cust.long %>% group_by(clust.num, variable, value) %>% mutate(count = n_distinct(id.s)) %>% distinct(clust.num, variable, value, count) # heatmap.c will be suitable in case you want to go for absolute counts - but it doesn't tell much to my taste heatmap.c <- ggplot(cust.long.q, aes(x = clust.num, y = factor(value, levels = c("x","y","z", "mon", "tue", "wed", "thu", "fri","sat","sun", "delicious", "the one you don't like", "pizza", "facebook", "email", "link", "app", "area1", "area2", "area3", "area4", "small", "med", "large"), ordered = T))) + geom_tile(aes(fill = count))+ scale_fill_gradient2(low = "darkslategray1", mid = "yellow", high = "turquoise4") # calculating the percent of each factor level in the absolute count of cluster members cust.long.p <- cust.long.q %>% group_by(clust.num, variable) %>% mutate(perc = count / sum(count)) %>% arrange(clust.num) heatmap.p <- ggplot(cust.long.p, aes(x = clust.num, y = factor(value, levels = c("x","y","z", "mon", "tue", "wed", "thu", "fri","sat", "sun", "delicious", "the one you don't like", "pizza", "facebook", "email", "link", "app", "area1", "area2", "area3", "area4", "small", "med", "large"), ordered = T))) + geom_tile(aes(fill = perc), alpha = 0.85)+ labs(title = "Distribution of characteristics across clusters", x = "Cluster number", y = NULL) + geom_hline(yintercept = 3.5) + geom_hline(yintercept = 10.5) + geom_hline(yintercept = 13.5) + geom_hline(yintercept = 17.5) + geom_hline(yintercept = 21.5) + scale_fill_gradient2(low = "darkslategray1", mid = "yellow", high = "turquoise4") heatmap.p Having a heatmap, you see the how many observations fall into each factor level within initial factors (variables we’ve started with). The deeper blue corresponds to a higher relative number of observations within a cluster. In this one, you can also see that day of the week / basket size have almost the same amount of customers in each bin— it might mean those are not definitive for the analysis and might be omitted.
https://towardsdatascience.com/hierarchical-clustering-on-categorical-data-in-r-a27e578f2995
['Anastasia Reusova']
2019-03-26 16:08:19.820000+00:00
['Data Science', 'Clustering', 'Segmentation', 'Visualization']
Title Hierarchical Clustering Categorical Data RContent Dissimilarity Matrix Arguably backbone clustering Dissimilarity matrix mathematical expression different distant point data set later group closest one together separate furthest one — core idea clustering step data type difference important dissimilarity matrix based distance individual data point quite easy imagine distance numerical data point remember Eucledian distance example categorical data factor R seem obvious order calculate dissimilarity matrix case would go something called Gower distance won’t get math providing link prefer use daisy metric cgower cluster package Dummy Data data sterile clean order get distracted issue might arise also write difficulty outside code librarydplyr ensuring reproducibility sampling setseed40 generating random variable set specifying ordered factor string converted factor using dataframe customer id come first generate 200 customer id 1 200 id c1200 factor budget samplecsmall med large 200 replace factorlevelscsmall med large ordered TRUE origin samplecx z 200 replace prob c07 015 015 area samplecarea1 area2 area3 area4 200 replace prob c03 01 05 02 source samplecfacebook email link app 200 replace prob c0102 03 04 day week probability mocking demand curve dows samplecmon tue wed thu fri sat sun 200 replace prob c01 01 02 02 01 01 02 factorlevelscmon tue wed thu fri sat sun ordered TRUE dish dish samplecdelicious one dont like pizza 200 replace default dataframe convert string factor syntheticcustomers dataframeids budget origin area source dows dish Dissimilarity Matrix librarycluster perform different type hierarchical clustering package function used daisy diana clusplot gowerdist daisysyntheticcustomers 27 metric cgower classgowerdist dissimilarity dist Done dissimilarity matrix That’s fast 200 observation computationally expensive case large data set reality quite likely clean dataset first perform necessary transformation string factor keep eye missing value case dataset contained row missing value nicely clustered together every time leading assume found treasure look value meh Clustering Algorithms could heard kmeans hierarchical clustering post focus latter exploratory type approached differently could choose follow either agglomerative bottomup divisive topdown way clustering Agglomerative clustering start n cluster n number observation assuming separate cluster algorithm try find similar data point group start forming cluster contrast divisive clustering go way around — assuming n data point one big cluster dividing dissimilar one separate group thinking one use always worth trying option general agglomerative clustering better discovering small cluster used software divisive clustering — discovering larger cluster personally like look dendrograms — graphical representation clustering first decide method stick see dendrograms pretty balanced others look like mess main input code dissimilarity distance matrix dissimilarity matrix calculated step data type prefer look dendrogram fine appealing one first case looking balanced one continue assessment DIVISIVE CLUSTERING divisiveclust dianaasmatrixgowerdist dis TRUE keepdiss TRUE plotdivisiveclust main Divisive AGGLOMERATIVE CLUSTERING looking balanced approach Complete linkage approach best fit demand leave one dont want get cluttered complete agglclustc hclustgowerdist method complete plotagglclustc main Agglomerative complete linkage Assessing cluster decide different clustering algorithm different number cluster often happens assessment one way possible complemented judgement It’s bold italic judgement important — number cluster make practical sense way data divided group make sense Working categorical variable might end nonsense cluster combination value limited — discrete number combination Possibly don’t want small number cluster either — likely general end come goal analysis Conceptually cluster created interested distinctive group data point distance within cluster compactness minimal distance group separation large possible intuitively easy understand distance point measure dissimilarity derived dissimilarity matrix Hence assessment clustering built around evaluation compactness separation go 2 approach show one might produce nonsense result Elbow method start compactness cluster similarity within group important analysis Silhouette method measure data consistency silhouette plot display measure close point one cluster point neighboring cluster practice likely provide different result might confusing certain point— different number cluster correspond compact distinctively separated cluster judgement understanding data actually significant part making final decision also bunch measurement analyze case adding code Cluster stats come list convenient look table code produce dataframe observation column variable row quite tidy data require tweak plotting prefer view output find comprehensive libraryfpc cstatstable functiondist tree k clustassess cclusternumbernwithinclusterssaveragewithinaveragebetween wbratiodunn2avgsilwidth clustsize cclustersize statsnames c rowclust c outputstats matrixncol k nrow lengthclustassess clustersizes matrixncol k nrow k fori c1k rowclusti pasteCluster size fori c2k statsnamesi pasteTest i1 forj seqalongclustassess outputstatsj unlistclusterstatsd dist clustering cutreetree k iclustassessj ford 1k clustersizesd unlistclusterstatsd dist clustering cutreetree k iclustsized dimclustersizesd clengthclustersizesi 1 clustersizesd outputstatsdf dataframeoutputstats clustersizes dataframeclustersizes clustersizesisnaclustersizes 0 rowsall cclustassess rowclust rownamesoutputstatsdf clustassess output rbindoutputstatsdf clustersizes 1 colnamesoutput statsnames2k rownamesoutput rowsall isnum sapplyoutput isnumeric outputisnum lapplyoutputisnum round 2 output capping maximum amout cluster 7 want choose reasonable number based able see basic difference customer group result statsdfdivisive cstatstablegowerdist divisiveclust 7 statsdfdivisive Look averagewithin average distance among observation within cluster shrinking within cluster SS Average silhouette width bit le straightforward reverse relationship nevertheless See disproportional size cluster wouldn’t rush working incomparable number observation within cluster One reason dataset imbalanced group observation outweigh rest analysis — good likely lead bias statsdfaggl cstatstablegowerdist agglclustc 7 complete linkage look like balanced approach statsdfaggl Notice balanced agglomerative complete linkage hierarchical clustering compared number observation per group Choosing number cluster Using Elbow Silhouette method identify best number cluster better picture trend go 7 cluster libraryggplot2 Elbow Divisive clustering ggplotdata dataframetcstatstablegowerdist divisiveclust 15 aesxclusternumber ywithinclusterss geompoint geomline ggtitleDivisive clustering labsx Numof cluster Within cluster sum square SS themeplottitle elementtexthjust 05 we’ve produced “elbow” graph show within sum square — measure closeness observation lower closer observation within cluster — change different number cluster Ideally see distinctive “bend” elbow splitting cluster give minor decrease SS case graph would go something around 7 Although case one cluster consist 2 observation let’s see happens agglomerative clustering Agglomerative clusteringprovides ambiguous picture ggplotdata dataframetcstatstablegowerdist agglclustc 15 aesxclusternumber ywithinclusterss geompoint geomline ggtitleAgglomerative clustering labsx Numof cluster Within cluster sum square SS themeplottitle elementtexthjust 05 Agglomerative “elbow” look similar divisive except agglomerative one look smoother — “bends” le abrupt Similarly divisive clustering would go 7 cluster choosing two method like size cluster produced agglomerative method — want something comparable size Silhouette ggplotdata dataframetcstatstablegowerdist divisiveclust 15 aesxclusternumber yavgsilwidth geompoint geomline ggtitleDivisive clustering labsx Numof cluster Average silhouette width themeplottitle elementtexthjust 05 come silhouette assessment rule choose number maximizes silhouette coefficient want cluster distinctive far enough considered separate silhouette coefficient range 1 1 1 indicating good consistency within cluster 1 — good plot would go 5 cluster — would rather prefer 9 comparison “easy” case silhouette plot likely look like graph quite almost ggplotdata dataframetcstatstablegowerdist agglclustc 15 aesxclusternumber yavgsilwidth geompoint geomline ggtitleAgglomerative clustering labsx Numof cluster Average silhouette width themeplottitle elementtexthjust 05 silhouette width graph saying “the break dataset distinctive cluster become” Ultimately end individual data point — don’t want try larger k number cluster — see Eg k30 got following graph Soso split better get can’t splitting till individual data point remember 30 cluster graph 200 data point Summing agglomerative clustering case look way balanced — cluster size le comparable look cluster 2 observation divisive section would go 7 cluster obtained method Let’s see look check what’s inside dataset consists 6 variable need visualized 2D 3D it’s time challenge nature categorical data pose limitation using predefined solution might get tricky want see observation clustered b know observation distributed across category thus created colored dendrogram b heatmap observation count per variable within cluster libraryggplot2 libraryreshape2 librarypurrr librarydplyr let start dendrogram librarydendextend dendro asdendrogramagglclustc dendrocol dendro setbrancheskcolor k 7 value cdarkslategray darkslategray4 darkslategray3 gold3 darkcyan cyan3 gold3 setbrancheslwd 06 setlabelscolors value cdarkslategray setlabelscex 05 ggd1 asggdenddendrocol ggplotggd1 theme thememinimal labsx Num observation Height title Dendrogram k 7 Radial plot look le cluttered cooler ggplotggd1 label scaleyreverseexpand c02 0 coordpolarthetax Time heatmap 1st step 1 variable per row factor converted character order dropped clustnum cutreeagglclustc k 7 syntheticcustomerscl cbindsyntheticcustomers clustnum custlong meltdataframelapplysyntheticcustomerscl ascharacter stringsAsFactorsFALSE id cid clustnum factorsAsStringsT custlongq custlong groupbyclustnum variable value mutatecount ndistinctids distinctclustnum variable value count heatmapc suitable case want go absolute count doesnt tell much taste heatmapc ggplotcustlongq aesx clustnum factorvalue level cxyz mon tue wed thu frisatsun delicious one dont like pizza facebook email link app area1 area2 area3 area4 small med large ordered geomtileaesfill count scalefillgradient2low darkslategray1 mid yellow high turquoise4 calculating percent factor level absolute count cluster member custlongp custlongq groupbyclustnum variable mutateperc count sumcount arrangeclustnum heatmapp ggplotcustlongp aesx clustnum factorvalue level cxyz mon tue wed thu frisat sun delicious one dont like pizza facebook email link app area1 area2 area3 area4 small med large ordered geomtileaesfill perc alpha 085 labstitle Distribution characteristic across cluster x Cluster number NULL geomhlineyintercept 35 geomhlineyintercept 105 geomhlineyintercept 135 geomhlineyintercept 175 geomhlineyintercept 215 scalefillgradient2low darkslategray1 mid yellow high turquoise4 heatmapp heatmap see many observation fall factor level within initial factor variable we’ve started deeper blue corresponds higher relative number observation within cluster one also see day week basket size almost amount customer bin— might mean definitive analysis might omittedTags Data Science Clustering Segmentation Visualization
1,367
My Agoraphobic Life
My name is Heather, and I have a problem I live with agoraphobia, and it keeps me bound to geographical and emotional areas.Agoraphobia directly translated means “Fear of the market place.” Interesting. I dream of a life where I could enjoy the marketplace or any public place. But I can’t. Not yet, anyway. It started when… Many therapists are confident my agoraphobia is a result of childhood sexual trauma. Makes sense. I did live with ongoing sexual abuse between ages six and 14. Those years, I spent most of my life looking over my shoulder and gauging my perpetrator’s true intentions. But I got out of that situation, and I feel pretty healed. I think my agoraphobia is the result of a medical problem. I was 18 years old, and nursing my firstborn when suddenly, my heart started pounding wildly in my chest. My friend drove me to the hospital, where doctors and nurses acted very suspicious that I was on drugs. The male nurse leaned over my shoulder and whispered into my ear, “Would it be alright if I undress you with my hands?” Then, they left me hooked up to a heart monitor for a few hours. My heart rate fluctuated around 230 beats per minute. That is fast. Too fast, they told me as they quickly pushed a bit of adenosine into my IV. I remember the feeling of bricks on my chest, and my vision fading into one pin-point of light that reminded me of turning off a tube-television. I lost consciousness for a moment. The doctors said this was likely an isolated episode. Still, I was traumatized. I was constantly aware of my heartbeat. Was it too fast or too slow? Would tonight be the night it stops altogether? I was what my first therapist would call hypervigilant before she diagnosed me with panic disorder. It was a good call, but no amount of therapy or medication seemed to help. The panic grew into insomnia, and it didn’t take long for me to lose the desire to leave my home. I couldn’t locate the source of this panic. It wasn’t a tangible thing that I could hit or run from. What I could do is retreat into my zone. Defining the Zone When people learn that I suffer from agoraphobia, I think they imagine me cowering in a dark corner of my hoarder house and mumbling to myself. This is not the case at all. I have built my life in such a way that I can actually live it. Here’s a little known fact: There is a somewhat secret border surrounding Los Angeles’ Jewish communities called an eruv. This line is made up of walls, hills, and partially by a thin string. The string, somewhat like a fishing line, is secured inconspicuously between existing poles. Orthodox Jews aren’t permitted to push, pull, or carry outside of their home on Sabbath. So rather than stay inside, the eruv expands the idea of “home.” Not that these people live in the street, but the eruv marks a common vs. public area where they are safe to conduct necessary activities like carrying their child to temple or pushing grandma in a wheelchair. They are safe to do so without fear of sin. They are safe. I have done something similar to contain my fear and expand my home. I selected a house that I love and filled it full of comfort. I don’t have a cowering corner, but I do have a little Harry Potter closet beneath the stairs if I need to be in a small space. I can walk to the park with my children. My doctor, the grocery store, hospital, pharmacy, and library are all within steps of each other. I have picked a coffee shop and a restaurant where the people aren’t threatening, and the environment is cozy. This is my zone. I am never far from professional help, should I need it, and I am safe. Yet somehow, this system is imperfect. I have been married for ten years, and have never been to my inlaws’ home because it is eight hours away. Not even once. My relationships suffer. Goals. Photo by Matic Kozinc on Unsplash Relationships Living with agoraphobia makes it nearly impossible to forge new relationships. The second I set foot out of my zone, I am overcome with panic — my heart rate increases, I am suddenly starved for air, my head is dizzy, and I am positive I will die. Not figuratively. I am suddenly faced with a few choices: Fight it, run from it, or retreat. So I choose to retreat, and when I do, I forgo meeting new people. Getting around is a bit of a problem. I drive, like most everybody. But I can’t drive out of the zone, and I definitely cannot use the freeway. Try that out sometime in Southern, California. Even as a passenger outside of the zone, I get the urge to jump out of a moving car. Let’s be real clear- I don’t want to die, and I am not suicidal. Since I do want to live, I have to be child-locked into a car. The rough part is, not everyone wants to be my chauffeur. Especially when I’m in the back seat having an entire come apart. And public transportation? Nope. I can’t do it. For this reason, my circle has become extremely small. I have friends. Like…two friends. No, seriously. Their names are Tanya and Amy. I used to have many, but a friendship with me lacks a particular quid-pro-quo element common to typical relationships. You might want me to drop by, for instance. Only I can’t. Not unless you want to meet me at my coffee shop, where it is safe. The friends I do have know they need to come to my house, and any excursion may end with me begging to go back home. And they love me anyway. Photo by Joseph Pearson on Unsplash Making new friends can be hard on me too. Once a person finds out about my situation, they want to fix me. Whether it be through prayer or multi-level-marketing snake oil that they think I should buy. Strangers are sincere in their desire to fix me, I guess. I just feel like they should try to understand me first. Am I really that broken? In life, though, sometimes stuff happens — unavoidable stuff like funerals, weddings, and grandbabies. These are milestones that you can’t miss — even an agoraphobic mess such as myself. So I get into a car, or a plane and grit my teeth, white-knuckling it the whole way. There is not enough Xanax in the entire world to make me go if I didn’t have to. It isn’t all sadness and tears, though. I can promise that Tanya and Amy are genuinely my friends. And my husband, boy, does he ever love me. They are an integral part of my tribe, along with my family. My tribe respects my self-imposed boundaries and sees my panic attacks as just some quirky thing that comes with loving me. They understand I am not trying to have a one-sided relationship. My health suffers a little. My most recent bloodwork showed my Vitamin D to be a whopping 13. So I spend more time writing on my balcony or at my park. The future My future is brighter than it has ever been. No one should feel sorry for me, because I have progressed in leaps and bounds. I’ve come from not leaving my home, to creating a zone. Every day, I try to push those boundaries out, just a little. It doesn’t take long for a bit of work to become a lot of progress. The world isn’t trying to hurt me. It is merely waiting for me to become part of it.
https://heathermonroe.medium.com/my-agoraphobic-life-af38d326ea22
['Heather Monroe']
2019-10-22 23:03:18.003000+00:00
['Self-awareness', 'Community', 'Mental Health', 'Self Improvement', 'Abuse']
Title Agoraphobic LifeContent name Heather problem live agoraphobia keep bound geographical emotional areasAgoraphobia directly translated mean “Fear market place” Interesting dream life could enjoy marketplace public place can’t yet anyway started when… Many therapist confident agoraphobia result childhood sexual trauma Makes sense live ongoing sexual abuse age six 14 year spent life looking shoulder gauging perpetrator’s true intention got situation feel pretty healed think agoraphobia result medical problem 18 year old nursing firstborn suddenly heart started pounding wildly chest friend drove hospital doctor nurse acted suspicious drug male nurse leaned shoulder whispered ear “Would alright undress hands” left hooked heart monitor hour heart rate fluctuated around 230 beat per minute fast fast told quickly pushed bit adenosine IV remember feeling brick chest vision fading one pinpoint light reminded turning tubetelevision lost consciousness moment doctor said likely isolated episode Still traumatized constantly aware heartbeat fast slow Would tonight night stop altogether first therapist would call hypervigilant diagnosed panic disorder good call amount therapy medication seemed help panic grew insomnia didn’t take long lose desire leave home couldn’t locate source panic wasn’t tangible thing could hit run could retreat zone Defining Zone people learn suffer agoraphobia think imagine cowering dark corner hoarder house mumbling case built life way actually live Here’s little known fact somewhat secret border surrounding Los Angeles’ Jewish community called eruv line made wall hill partially thin string string somewhat like fishing line secured inconspicuously existing pole Orthodox Jews aren’t permitted push pull carry outside home Sabbath rather stay inside eruv expands idea “home” people live street eruv mark common v public area safe conduct necessary activity like carrying child temple pushing grandma wheelchair safe without fear sin safe done something similar contain fear expand home selected house love filled full comfort don’t cowering corner little Harry Potter closet beneath stair need small space walk park child doctor grocery store hospital pharmacy library within step picked coffee shop restaurant people aren’t threatening environment cozy zone never far professional help need safe Yet somehow system imperfect married ten year never inlaws’ home eight hour away even relationship suffer Goals Photo Matic Kozinc Unsplash Relationships Living agoraphobia make nearly impossible forge new relationship second set foot zone overcome panic — heart rate increase suddenly starved air head dizzy positive die figuratively suddenly faced choice Fight run retreat choose retreat forgo meeting new people Getting around bit problem drive like everybody can’t drive zone definitely cannot use freeway Try sometime Southern California Even passenger outside zone get urge jump moving car Let’s real clear don’t want die suicidal Since want live childlocked car rough part everyone want chauffeur Especially I’m back seat entire come apart public transportation Nope can’t reason circle become extremely small friend Like…two friend seriously name Tanya Amy used many friendship lack particular quidproquo element common typical relationship might want drop instance can’t unless want meet coffee shop safe friend know need come house excursion may end begging go back home love anyway Photo Joseph Pearson Unsplash Making new friend hard person find situation want fix Whether prayer multilevelmarketing snake oil think buy Strangers sincere desire fix guess feel like try understand first really broken life though sometimes stuff happens — unavoidable stuff like funeral wedding grandbabies milestone can’t miss — even agoraphobic mess get car plane grit teeth whiteknuckling whole way enough Xanax entire world make go didn’t isn’t sadness tear though promise Tanya Amy genuinely friend husband boy ever love integral part tribe along family tribe respect selfimposed boundary see panic attack quirky thing come loving understand trying onesided relationship health suffers little recent bloodwork showed Vitamin whopping 13 spend time writing balcony park future future brighter ever one feel sorry progressed leap bound I’ve come leaving home creating zone Every day try push boundary little doesn’t take long bit work become lot progress world isn’t trying hurt merely waiting become part itTags Selfawareness Community Mental Health Self Improvement Abuse
1,368
Dispelling Three Common Myths of Machine Learning Personalization
Photo by Glenn Carstens-Peters on Unsplash Before we get all worked up about the future of AI and the inevitable singularity, we should be clear about what exactly machine learning personalization (MLP) is. It turns out that what it is and what it does is probably not what you thought. In what follows, I’ll try to explain and dispel three of the most common myths I see when reading and discussing MLP with academics and practitioners. Keep in mind that these myths apply to collaborative filtering and hybrid approaches to personalized recommendations, which rely on behavioral big data and make up most of what we see deployed in industry today. Myth 1: MLP Works by Predicting Your Needs, Preferences, and Desires This misconception is understandable as we generally view persons to have both inner desires, needs, and preferences and outer-facing behavior. If we use the word personalization, then we might assume we are referring to one’s inner world of needs and preferences — that unique, narrative soup of personal history, values, goals, desires, and wants that make you, you. But we aren’t and we can’t. Many academic articles and in patents for recommender systems by companies like Google and IBM make this mistake. For example, a highly cited paper by Basu et al. (1998) states: This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. Or another, more recent example from the first sentence of Yeomans et al. (2019): Computer algorithms are increasingly being used to predict people’s preferences and make recommendations. I can very quickly tell you why this cannot possibly be how MLP works. Preferences don’t buy things. Needs don’t click ads. Desires don’t churn. People do these things and these things are recorded in the form of observable behaviors. The training data used in MLP is really just a thin slice of your observed behavior, behavior which is afforded by the design of the app or device and happens to be measured. Smart designers and data scientists can collect the right kind of measurements to make the inferential leap from observed behavior to mental state fairly accurate, however. Another reason why MLP can’t predict your preferences is because we have no method for actually knowing what your “true” preferences are. Do we mean your considered, conscious, verbally-reported preferences, or those unconscious goal-directed preferences we share with our evolutionary ancestors? Without a ground truth, we cannot compute a loss function and we therefore cannot optimize the parameters of the predictive model to minimize this cost function. Despite this discrepancy, many in industry and academia have seemingly fallen into the trap of radical behaviorism, whether they’re aware of it or not. Here’s a more concrete example. When you train a machine learning model, there is no outcome column labeled “needs” or “interests” and a list of possible discrete values such a variable might have, such as “toilet paper” or “trip to Italy.” Instead the outcome column will simply say “Buy” or “Add” or “Churn” and the value for these columns will be typically either a 1/0. These are all very narrowly defined behaviors that are the result of a near infinity of prior mental states. But, strictly speaking, mental states are not equivalent to behaviors. (See Daniel Dennett’s Inverted Spectrum thought experiment for a nice example of why the “meaning” of behaviors is over-determined.) Conflating a behavior with a mental state is sloppy thinking and, at worst, scares laypeople into believing that machines can predict their thoughts. MLP should not be mistaken for a predictive theory of mind. Myth 2: Personalized Recommendations are Unique to You This myth might take the most unpacking, and there are several angles to this, but I will focus here on just a couple. In many cases, the predictions and recommendations are based on models trained (optimized) on aggregate data that may not even include any of your personal data. If you can be said to receive a “personalized” recommendation or prediction at all, then it is only because recommendations were not made using a pre-set list and given to all at once. This is roughly how advertising was done prior to the Internet, when everyone saw the same billboards and newspaper ads. Let’s call this the “naive view” of MLP. Many people seem to believe that if each row in a data table is assigned a prediction (instead of globally assigning one to everyone, say based on a global average), then that prediction is personalized. But this is an extremely thin understanding of personalization. The concept of personalization deserves a much richer examination. A New Taxonomy For Evaluating Personalization I suggest we instead think of personalization using a dual taxonomy of properties of either the 1) data or 2) the model, or some combination of both. Data used in MLP can be broken down into input and output properties. For example, a personalized recommendation uses X% behavioral input features, or is the proportion of input features classified as personal data under the GDPR. It stands to reason that personalization requires personal data, and personal data are defined differently according to the particular legal regime (GDPR or CCPA or FTC’s Fair Information Practice Principles, for instance). Conversely, we might quantify the degree of personalization by reference to output data properties, such as the uniqueness of values. A more personalized prediction means that fewer people share the same recommendation. Surely a system that resulted in everyone getting the same recommendation isn’t really personalized (or is it?). This will of course depend on the size of ranked lists, the inventory size, and the number of users you’re recommending for. Substantive vs. Procedural Personalization Another interesting way of viewing personalization is by considering whether it is substantive or procedural. I’ve borrowed this idea from political philosophy, where scholars debate whether procedural or substantive justice if preferable. I’ll sidestep those thorny questions for now. Substantive personalization refers to properties of the output (e.g., all unique outputs), irrespective of the process which led to this output. Procedural personalization refers to the process (e.g., all rows are input to the same process), irrespective of the particular output such a process might generate. This distinction is useful because there might be cases where we have a highly homogenous group of data subjects and, even though we have trained a model on each unique data subject’s data, we end up with the same (or very similar) output recommendations. From an outsider’s perspective, it might seem like we haven’t personalized our recommendations since nearly every data subject got the same recommendation. But we could reply by saying our recommendations were procedurally personalized. Various Other Ways We Might Conceptualize Personalization The following list is neither exhaustive nor mutually exclusive. We might decide that our personalized recommendations are personalized because they combine personal data (defined under the GDPR) and each user receives a unique recommendation, for example. With this in mind, we might also classify a prediction or recommendation as personalized based on properties of the model used to generate it. At the most basic level, learning an individual model for each user using just that user’s own data would seem to be very personalized, though not practical for most organizations. Most data controllers simply don’t collect this much behavioral data (…yet). One consequence of this view is that users with the exact same profiles would get the exact same model. Again, this would be an example of input-based procedural, not substantive, personalization. Another approach we might take is to quantify personalization as the uniqueness of model parameter values. So if our models have different parameter values, then the resulting predictions are personalized, even if the results are the same. This would represent input-based substantive personalization. Currently, most industry models trained on aggregate data wouldn’t satisfy this criterion. Or we might quantify personalization as the type of model used to generate the prediction: maybe some will get a neural network, while others get a random forest. Perhaps some users do not care so much about the “best possible prediction” and a linear regression would be preferable to a deep neural network (it would also be more explainable…). As long as data subjects were input into a unique procedure for assigning the particular model, we might call this model-based procedural personalization, even if the resulting recommendations were all similar (perhaps because we only can recommend a small set of items). Finally, maybe a personalized prediction means that the model generating a personalized recommendation for you had a unique set of input features. This will increasingly occur as data subjects under the GDPR opt-out of specific forms of data collection (e.g., certain kinds of tracking cookies or GPS locations). The behavioral data you permit the data controller to collect may mean you need different models based on different feature sets, determined by regulatory pressures. We could classify this case as an instance of input-based substantive personalization. Myth 3: MLP Knows You Better Than You Know Yourself (or Your Friends) Be wary when researchers and industry claim that their MLP systems “outperform humans.” In some cases, the researchers may have artificially reduced the scope of the prediction context to make it more amenable to a machine. Doing this contextual sleight of hand can make MLP seem more powerful and accurate than it really is, especially when predictive performance is evaluated. For example, Yeomans et al. (2019) compared the predictions by friends and spouses to those from a simple collaborative filtering (CF) system for predicting a focal user’s ratings of jokes. The study found a basic CF system was able to predict more accurately than a friend or spouse. Yet, the experiment included a set of 12 jokes pre-selected by the researchers. The much more difficult problem of selecting 12 jokes from a nearly infinite set of possible jokes in all cultures and languages was left to the humans. In essence, the researchers had already personalized a list of jokes to each subject in the study, given their linguistic background, country of origin, and current location. Once narrowed to such a small recommendation space, the algorithm’s performance appears quite impressive, but nevertheless hides the fact the hardest task had already been done by humans. A similar argument can be made for personalization on e-commerce sites: by going to a website, a person has already self-selected into a group who would be interested in products offered by the website. Consequently, when we hear impressive accuracy or recall scores, we need to keep in mind how specific and narrow the prediction context is.
https://medium.com/datadriveninvestor/three-myths-surrounding-machine-learning-personalization-9b1a7133e6db
['Travis Greene']
2020-04-17 10:17:41.935000+00:00
['AI', 'Advertising', 'Marketing', 'Data Science', 'Machine Learning']
Title Dispelling Three Common Myths Machine Learning PersonalizationContent Photo Glenn CarstensPeters Unsplash get worked future AI inevitable singularity clear exactly machine learning personalization MLP turn probably thought follows I’ll try explain dispel three common myth see reading discussing MLP academic practitioner Keep mind myth apply collaborative filtering hybrid approach personalized recommendation rely behavioral big data make see deployed industry today Myth 1 MLP Works Predicting Needs Preferences Desires misconception understandable generally view person inner desire need preference outerfacing behavior use word personalization might assume referring one’s inner world need preference — unique narrative soup personal history value goal desire want make aren’t can’t Many academic article patent recommender system company like Google IBM make mistake example highly cited paper Basu et al 1998 state paper present inductive learning approach recommendation able use rating information form information artifact predicting user preference another recent example first sentence Yeomans et al 2019 Computer algorithm increasingly used predict people’s preference make recommendation quickly tell cannot possibly MLP work Preferences don’t buy thing Needs don’t click ad Desires don’t churn People thing thing recorded form observable behavior training data used MLP really thin slice observed behavior behavior afforded design app device happens measured Smart designer data scientist collect right kind measurement make inferential leap observed behavior mental state fairly accurate however Another reason MLP can’t predict preference method actually knowing “true” preference mean considered conscious verballyreported preference unconscious goaldirected preference share evolutionary ancestor Without ground truth cannot compute loss function therefore cannot optimize parameter predictive model minimize cost function Despite discrepancy many industry academia seemingly fallen trap radical behaviorism whether they’re aware Here’s concrete example train machine learning model outcome column labeled “needs” “interests” list possible discrete value variable might “toilet paper” “trip Italy” Instead outcome column simply say “Buy” “Add” “Churn” value column typically either 10 narrowly defined behavior result near infinity prior mental state strictly speaking mental state equivalent behavior See Daniel Dennett’s Inverted Spectrum thought experiment nice example “meaning” behavior overdetermined Conflating behavior mental state sloppy thinking worst scare laypeople believing machine predict thought MLP mistaken predictive theory mind Myth 2 Personalized Recommendations Unique myth might take unpacking several angle focus couple many case prediction recommendation based model trained optimized aggregate data may even include personal data said receive “personalized” recommendation prediction recommendation made using preset list given roughly advertising done prior Internet everyone saw billboard newspaper ad Let’s call “naive view” MLP Many people seem believe row data table assigned prediction instead globally assigning one everyone say based global average prediction personalized extremely thin understanding personalization concept personalization deserves much richer examination New Taxonomy Evaluating Personalization suggest instead think personalization using dual taxonomy property either 1 data 2 model combination Data used MLP broken input output property example personalized recommendation us X behavioral input feature proportion input feature classified personal data GDPR stand reason personalization requires personal data personal data defined differently according particular legal regime GDPR CCPA FTC’s Fair Information Practice Principles instance Conversely might quantify degree personalization reference output data property uniqueness value personalized prediction mean fewer people share recommendation Surely system resulted everyone getting recommendation isn’t really personalized course depend size ranked list inventory size number user you’re recommending Substantive v Procedural Personalization Another interesting way viewing personalization considering whether substantive procedural I’ve borrowed idea political philosophy scholar debate whether procedural substantive justice preferable I’ll sidestep thorny question Substantive personalization refers property output eg unique output irrespective process led output Procedural personalization refers process eg row input process irrespective particular output process might generate distinction useful might case highly homogenous group data subject even though trained model unique data subject’s data end similar output recommendation outsider’s perspective might seem like haven’t personalized recommendation since nearly every data subject got recommendation could reply saying recommendation procedurally personalized Various Ways Might Conceptualize Personalization following list neither exhaustive mutually exclusive might decide personalized recommendation personalized combine personal data defined GDPR user receives unique recommendation example mind might also classify prediction recommendation personalized based property model used generate basic level learning individual model user using user’s data would seem personalized though practical organization data controller simply don’t collect much behavioral data …yet One consequence view user exact profile would get exact model would example inputbased procedural substantive personalization Another approach might take quantify personalization uniqueness model parameter value model different parameter value resulting prediction personalized even result would represent inputbased substantive personalization Currently industry model trained aggregate data wouldn’t satisfy criterion might quantify personalization type model used generate prediction maybe get neural network others get random forest Perhaps user care much “best possible prediction” linear regression would preferable deep neural network would also explainable… long data subject input unique procedure assigning particular model might call modelbased procedural personalization even resulting recommendation similar perhaps recommend small set item Finally maybe personalized prediction mean model generating personalized recommendation unique set input feature increasingly occur data subject GDPR optout specific form data collection eg certain kind tracking cooky GPS location behavioral data permit data controller collect may mean need different model based different feature set determined regulatory pressure could classify case instance inputbased substantive personalization Myth 3 MLP Knows Better Know Friends wary researcher industry claim MLP system “outperform humans” case researcher may artificially reduced scope prediction context make amenable machine contextual sleight hand make MLP seem powerful accurate really especially predictive performance evaluated example Yeomans et al 2019 compared prediction friend spouse simple collaborative filtering CF system predicting focal user’s rating joke study found basic CF system able predict accurately friend spouse Yet experiment included set 12 joke preselected researcher much difficult problem selecting 12 joke nearly infinite set possible joke culture language left human essence researcher already personalized list joke subject study given linguistic background country origin current location narrowed small recommendation space algorithm’s performance appears quite impressive nevertheless hide fact hardest task already done human similar argument made personalization ecommerce site going website person already selfselected group would interested product offered website Consequently hear impressive accuracy recall score need keep mind specific narrow prediction context isTags AI Advertising Marketing Data Science Machine Learning
1,369
These Allusions Are Real
Photo by Julius Drost on Unsplash How would you feel if someone referred to you as “Scrooge”? Or, how would you react if you were speaking and a listener said, “Your nose is getting longer”? Finally, what would you say if someone called you “The Scarecrow”? In each case, you would probably be offended — and rightfully so. After all, the first person is comparing you to the miserly employer in the Charles Dickens’ novel entitled The Christmas Carol. The second person is calling you a liar by referring to the classic children’s story “Pinocchio” by Carlo Lorenzini. And the third person is saying you need a brain, like Dorothy’s friend in The Wizard of Oz by Frank Baum. No, the purpose of this essay is not to teach you how to trade literary insults, but to emphasize the use of allusions. An allusion is an indirect reference to a well-known person, place, or event from history, from mythology, from literature, or from other works of art. Allusions are often used for three reasons: to catch the reader’s attention, to provide a short but vivid description, and to make a strong connection. Photo by Matt Popovich on Unsplash To Catch the Reader’s Attention. People who write newspaper and magazine headlines use allusions frequently to catch the reader’s attention. For instance, articles about Daylight Savings Time might allude to the Biblical verse “Let there be light” (Genesis 1:3). Stories of betrayal might refer to William Shakespeare’s line in Julius Caesar: “Et Tu, Brutus.” And situations that defy logic might be described as a “Catch 22,” after the 1961 novel by Joseph Heller. One more obvious example is the title of this essay which alludes to the homonym “illusion,” which, like a mirage, is not real. To Provide a Short but Vivid Description. Speakers and authors often use allusions as a shortcut. Instead of having to describe how cheap someone is, the speaker or author can just say the person is a “Scrooge.” Then, the listener or reader who is familiar with The Christmas Carol will immediately understand the comparison. Photo by JC Gellidon on Unsplash One example of an allusion that appears every spring involves the National Collegiate Athletic Association’s basketball tournament. Certain schools — like Duke, Michigan, and Kansas — are traditional powerhouses, and they usually qualify for the tournament each year. Other schools, however, seldom make it to the tournament. As a result, when these schools unexpectedly qualify, sportswriters across the country refer to them as “Cinderella” teams. “Cinderella,” of course, is the fairy tale about the young housemaid who wasn’t even expected at the ball. Yet, when she arrived in a beautiful dress and glass slippers, she attracted the attention of the handsome prince. When these Cinderella teams eventually lose, the allusion is extended. The sportswriters will write that the clock has struck midnight, and these teams have to return to reality. To Make a Strong Connection. As a writer, you, too, may want to use an allusion occasionally to make a strong connection with your reader. If you want to emphasize an extremely important day in your life, for instance, you might refer to it as “D-day.” This allusion applies to the World War II Allied invasion that liberated France from German occupation and served as a major turning point in the War (June 6, 1944). Or, if you want to describe a particular failure in your life, you may call it your “Waterloo,” a reference to Napoleon Bonaparte’s final defeat in Belgium on June 18, 1815. An allusion is similar to an inside joke between the writer and the reader. Thus, before you use an allusion, you should be reasonably sure that your intended reader will understand it. If, for instance, your reader is young and not interested in history, references to D-day and Waterloo will not be understood or appreciated. But, if your reader is young and familiar with popular music, you could introduce a story about failure by alluding to the Britney Spears’ song “Oops, I Did It Again.” If you use an allusion, do you have to document the source? No. If you’re simply referring to a person, place, event, or work of art, no documentation is necessary. Thus, allusions can add life to your writing without making you feel as if you’re writing a research paper. Photo by John Ruddock on Unsplash As a baseball fan, I am tempted to conclude this essay by saying this is the “bottom of the ninth,” an allusion to the last inning of a typical game. However, since this may be the first time some of you have ever thought about using allusions in your writing, I’d rather refer to the beginning of the game. Thus, as the umpire says right after the playing of the national anthem, “Play Ball!”
https://jimlabate.medium.com/these-allusions-are-real-b28af318100d
['Jim Labate']
2019-06-20 11:01:01.227000+00:00
['Literary', 'Writing Prompts', 'Writing', 'Imagination', 'Creativity']
Title Allusions RealContent Photo Julius Drost Unsplash would feel someone referred “Scrooge” would react speaking listener said “Your nose getting longer” Finally would say someone called “The Scarecrow” case would probably offended — rightfully first person comparing miserly employer Charles Dickens’ novel entitled Christmas Carol second person calling liar referring classic children’s story “Pinocchio” Carlo Lorenzini third person saying need brain like Dorothy’s friend Wizard Oz Frank Baum purpose essay teach trade literary insult emphasize use allusion allusion indirect reference wellknown person place event history mythology literature work art Allusions often used three reason catch reader’s attention provide short vivid description make strong connection Photo Matt Popovich Unsplash Catch Reader’s Attention People write newspaper magazine headline use allusion frequently catch reader’s attention instance article Daylight Savings Time might allude Biblical verse “Let light” Genesis 13 Stories betrayal might refer William Shakespeare’s line Julius Caesar “Et Tu Brutus” situation defy logic might described “Catch 22” 1961 novel Joseph Heller One obvious example title essay alludes homonym “illusion” like mirage real Provide Short Vivid Description Speakers author often use allusion shortcut Instead describe cheap someone speaker author say person “Scrooge” listener reader familiar Christmas Carol immediately understand comparison Photo JC Gellidon Unsplash One example allusion appears every spring involves National Collegiate Athletic Association’s basketball tournament Certain school — like Duke Michigan Kansas — traditional powerhouse usually qualify tournament year school however seldom make tournament result school unexpectedly qualify sportswriter across country refer “Cinderella” team “Cinderella” course fairy tale young housemaid wasn’t even expected ball Yet arrived beautiful dress glass slipper attracted attention handsome prince Cinderella team eventually lose allusion extended sportswriter write clock struck midnight team return reality Make Strong Connection writer may want use allusion occasionally make strong connection reader want emphasize extremely important day life instance might refer “Dday” allusion applies World War II Allied invasion liberated France German occupation served major turning point War June 6 1944 want describe particular failure life may call “Waterloo” reference Napoleon Bonaparte’s final defeat Belgium June 18 1815 allusion similar inside joke writer reader Thus use allusion reasonably sure intended reader understand instance reader young interested history reference Dday Waterloo understood appreciated reader young familiar popular music could introduce story failure alluding Britney Spears’ song “Oops Again” use allusion document source you’re simply referring person place event work art documentation necessary Thus allusion add life writing without making feel you’re writing research paper Photo John Ruddock Unsplash baseball fan tempted conclude essay saying “bottom ninth” allusion last inning typical game However since may first time ever thought using allusion writing I’d rather refer beginning game Thus umpire say right playing national anthem “Play Ball”Tags Literary Writing Prompts Writing Imagination Creativity
1,370
2020 AI Open-Source Software and Mission-Critical Platforms
As we are approaching the end of an unusual year, a difficult 2020 with a global pandemic and high unemployment that has disrupted the lives of so many people, I’m reflecting on some of the positives that we can take from this year. In my world of technology and open-source software, innovation didn’t stop; in fact, we can argue that there was an increase in productivity by having millions of people working from home, reducing commute times, travel, and unnecessary meetings. Software innovations are happening in the open; yes, this year, again, most of the latest innovations are open-source software projects built with one or many other open-source software components. Augmented reality, virtual reality, autonomous cars, artificial intelligence (AI), machine learning (ML), deep learning (DL), and more are all growing as open-source software. Needless to say, all programming languages and frameworks are open-source, too. Open-source building blocks such as Python, Tensorflow, and Pytorch to name a few, are powering the latest innovations. I like to keep an eye on the growth of the different open registries and repositories. GitHub has surpassed 100 million repositories and more than 50 million users this year. NPM, where JavaScript/Node.js open-source packages are available, surpassed 1.4 million packages; Nuget for open-source .NET code surpassed 220,000 packages; and Python packages available in PyPI surpassed 270,000 [1] The number of open-source projects in the AI and Data space is growing exponentially. It is now hard to create categories to classify all the open-source software available in this space, from libraries, frameworks, databases, and automation to directly infused AI and tooling. With a growing number of open-source software to create AI applications, we also have an increase in real-life use cases. Businesses across industries are adopting AI to address real business challenges and opportunities. Healthcare providers using ML and DL for faster and better diagnoses, telcos using AI to optimize network performance, the financial services industry reducing fraud, and generating better predictions are just a few examples of use cases we see now every day across every industry vertical. There are many more examples to add for insurance, transportation, government, and the utility industries. One common denominator across these important industries is that all have mission-critical applications with very valuable data running on mission-critical platforms. Traditionally known as mainframes, IBM Z and IBM LinuxONE platforms host the most crucial business functions in all of these industries. For decades, they have continued to improve their technology in high-speed transaction processing, capacity for very large volumes of transactions, best-in-class security, and second-to-none resiliency. In the banking industry, 44 of the top 50 global banks are running IBM Z and 2/3 of the Fortune 100 use IBM Z or LinuxONE. This is an impressive coverage that tells us that our daily lives are supported by these mission-critical platforms. All of this mainframe information brings us back to AI. When enterprises need AI applications in the best platform for I/O intensive transactions of structured or unstructured data, there is an ideal mission-critical platform; when AI applications need high-performant access to storage and databases, there is an ideal mission-critical platform; when AI applications need to secure data in transit, at rest and in use with confidential computing, there is an ideal mission-critical platform; when AI applications need a resilient platform that provides 99.999% availability or more, there is an ideal mission-critical platform designed to deliver on all of these criteria. Mainframes are this ideal mission-critical platform that can tightly integrate AI/ML/DL applications with data and core business systems that reside in the same platform. In other words, they provide a secure high-performance environment to bring AI, ML, and DL to existing transactional applications and deliver real-time insights and predictions. The ecosystem of open-source software for IBM Z and LinuxONE (s390x processor architecture) continues to grow. I believe it is at its best in 2020, and I have great hopes for the upcoming 2021 to be a year of continuous growth in the open-source software ecosystem for this mission-critical platform. The most popular open-source software for AI has only existed for a few years. As we are coming to the end of this difficult 2020, we see that it has been a strengthening year for many open-source projects. Tensorflow and PyTorch are used more than ever, and a number of open-source projects are becoming very popular, for example, Egeria, Pandas, Jupyter Notebook, Elyra, ONNX, Kubeflow, and others that I hope will continue to grow and be available across all platforms in 2021. Open source is not a trend; it is here stronger than ever. We are going to continue to see innovation and enhancements in the AI and Data open-source ecosystem. The data that resides in mission-critical platforms such as IBM Z and LinuxONE is a valuable asset for businesses and can be used for creative AI solutions. AI open-source software and mission-critical platforms introduce exciting possibilities in 2021 and beyond. The LF AI & Data landscape explores open source projects in Artificial Intelligence and Data and their respective sub-domains [1] Source: Nov 2, 2020 www.modulecounts.com [2] Free image by iXimus from Pixabay
https://medium.com/ibm-data-ai/2020-ai-open-source-software-and-mission-critical-platforms-ecdc69475193
['Javier Perez']
2020-12-08 20:26:46.597000+00:00
['Open Source', 'AI', 'Artificial Intelligence', 'Mainframe', 'Mission Critical']
Title 2020 AI OpenSource Software MissionCritical PlatformsContent approaching end unusual year difficult 2020 global pandemic high unemployment disrupted life many people I’m reflecting positive take year world technology opensource software innovation didn’t stop fact argue increase productivity million people working home reducing commute time travel unnecessary meeting Software innovation happening open yes year latest innovation opensource software project built one many opensource software component Augmented reality virtual reality autonomous car artificial intelligence AI machine learning ML deep learning DL growing opensource software Needless say programming language framework opensource Opensource building block Python Tensorflow Pytorch name powering latest innovation like keep eye growth different open registry repository GitHub surpassed 100 million repository 50 million user year NPM JavaScriptNodejs opensource package available surpassed 14 million package Nuget opensource NET code surpassed 220000 package Python package available PyPI surpassed 270000 1 number opensource project AI Data space growing exponentially hard create category classify opensource software available space library framework database automation directly infused AI tooling growing number opensource software create AI application also increase reallife use case Businesses across industry adopting AI address real business challenge opportunity Healthcare provider using ML DL faster better diagnosis telco using AI optimize network performance financial service industry reducing fraud generating better prediction example use case see every day across every industry vertical many example add insurance transportation government utility industry One common denominator across important industry missioncritical application valuable data running missioncritical platform Traditionally known mainframe IBM Z IBM LinuxONE platform host crucial business function industry decade continued improve technology highspeed transaction processing capacity large volume transaction bestinclass security secondtonone resiliency banking industry 44 top 50 global bank running IBM Z 23 Fortune 100 use IBM Z LinuxONE impressive coverage tell u daily life supported missioncritical platform mainframe information brings u back AI enterprise need AI application best platform IO intensive transaction structured unstructured data ideal missioncritical platform AI application need highperformant access storage database ideal missioncritical platform AI application need secure data transit rest use confidential computing ideal missioncritical platform AI application need resilient platform provides 99999 availability ideal missioncritical platform designed deliver criterion Mainframes ideal missioncritical platform tightly integrate AIMLDL application data core business system reside platform word provide secure highperformance environment bring AI ML DL existing transactional application deliver realtime insight prediction ecosystem opensource software IBM Z LinuxONE s390x processor architecture continues grow believe best 2020 great hope upcoming 2021 year continuous growth opensource software ecosystem missioncritical platform popular opensource software AI existed year coming end difficult 2020 see strengthening year many opensource project Tensorflow PyTorch used ever number opensource project becoming popular example Egeria Pandas Jupyter Notebook Elyra ONNX Kubeflow others hope continue grow available across platform 2021 Open source trend stronger ever going continue see innovation enhancement AI Data opensource ecosystem data resides missioncritical platform IBM Z LinuxONE valuable asset business used creative AI solution AI opensource software missioncritical platform introduce exciting possibility 2021 beyond LF AI Data landscape explores open source project Artificial Intelligence Data respective subdomains 1 Source Nov 2 2020 wwwmodulecountscom 2 Free image iXimus PixabayTags Open Source AI Artificial Intelligence Mainframe Mission Critical
1,371
Artificial Intelligence in Construction — TechVirtuosity
[Copyright : Pop Nukoonrat] © 123RF.com Revolutionizing Construction Construction and the methods we use are crucial to our success in modern architecture. We build houses and massive structures using our computers and we harness the that processing power to create new solutions. But artificial intelligence in construction takes things to a whole new level! It’s a tool that can help us push the boundaries further and it can do a lot to the industry as a whole. So then why haven’t we seen more innovation? The Construction Industry is Stagnating This isn’t to say that there hasn’t been a lot of improvements throughout the years, but construction has remained slower to adapt. In the past we often assumed that productivity equaled larger machines and that theory worked for a while. But now days we need something more than bigger machines, we need smarter machines and solutions. And while several other industries such as retail, medical and businesses in general have expanded, construction has fallen a bit behind. We simply need to adapt and use more technology. But what if we had more artificial intelligence in construction? Would this technology help lead us to a utopia? How Artificial Intelligence Helps Construction While it’s still early on some parts, AI has proven to show some promise in reducing costs. There is also software out there known as building information modeling, or BIM for short. AI can be trained to help suggest improvements and build solutions early on. It can also be used in risk management/mitigation, by providing safer alternatives. Construction robots are becoming more popular along with 3D printing, but add AI to it and we have a new advantage. AI can do the things that are too risky for us to risks our lives with. While using AI in this way is still new and very early, there’s a multitude of other areas it can help in. Of course, being a young technology also brings on risks for those using the technology early on… The Early Risks Involved Artificial intelligence in construction is a great solution but more technology also brings on different risks that also need to be considered. Anytime technology is involved we can typically also inherent the risk of getting hacked. If construction software or robots were to get hacked it could jeopardize an entire project. The argument is that it’s safer to have physical workers doing the actual work than to have robots or AI trying to take over. This is only partially true though. Construction steadily accounts for 20%+ of yearly deaths at work. AI poses the risk of hackers but as it stands the death toll is high right now, without the involvement in these life saving technologies. Hacking risks aside, AI isn’t perfect and does make mistakes too! This isn’t always the case but it’s important to recognize that mistakes happen with new technologies. The costs to implement AI could cost more money if it’s done wrong. But it’s not all bad! [Copyright : Kittipong Jirasukhanont] © 123RF.com Machine Learning can Mitigate Risks Machine learning is an important aspect of using artificial intelligence in construction. It allows a program to continually test and essentially “learn”. We’ve seen AI used in the field of medicine with success in the recent past which shows promise. But machine learning gives us some control. But first, in case you didn’t know what machine learning is… It’s a method used to teach an AI how to accomplish something. It is given parameters to gauge success and failure, a way to remember the results and a way to improve the numbers. Think of it like a race car, if it crashes it fails, if it completes the course it succeeds. Machine learning can take it a step further though. It can take that concept of the race car and find the most optimal way to complete the course, and that’s why it can be more productive than humans. We use machine learning to run thousands of trials and errors to succeed. This makes the AI more capable than a human which is why it can benefit construction. A single person can only try so many times, whereas an AI can run tests or simulations thousands of times endlessly. Construction can benefit from an AI that can continually learn the most efficient and safest way to build or solve problems. Machine learning can also incorporate previous knowledge in its tests, improving the outcome or immediate start of the training. As the AI grows it’ll also show us improved ways of doing things. Smart AIs Equal Smarter Solutions The future demands solutions that are safe, cheaper and ultimately faster. Construction is always a time sensitive task because a lot of it happens outside! The weather impacts it just as much as the efficiency of the workers doing it. AI provides solutions that are faster and cheaper which means there is less risk of weather delaying a project. The faster a task is completed the less likely something will go wrong, in theory. AI can help with many different areas by… Providing a solution to labor shortages through the use of software automation and solutions. Reducing risks to safety by spotting flaws or creating safer alternatives. Actively monitoring work environments and regulations. Can be used to collaborate on building plans while making smart suggestions (BIM software mentioned above). Providing analytics and statistics online for clients or workers. [Copyright : Preechar Bowonkitwanchai] © 123RF.com Artificial Intelligence in Construction Should be Embraced We talked about a lot of the areas that AI can help with but it needs to be given the opportunity. While some companies are already using this technology, there are still many more that are not quite there. AI can be used to help keep track of projects and to reassure clients. Whether it’s the city contracting the work or a variety of other businesses. Clients could benefit from seeing the progress online. AI has a lot of potential and even the user interface could have AI to learn what clients find most useful to view online. In the end, AI is going to be in the future of the construction industry. Whether we want it or not, it will help push innovation forward. But what do you think? Should we use more AI or avoid using those technologies in construction? Drop a comment below!
https://medium.com/swlh/artificial-intelligence-in-construction-techvirtuosity-124de131f26
['Brandon Santangelo']
2019-09-21 17:14:25.582000+00:00
['Construction', 'AI', 'Artificial Intelligence', 'Technology', 'Machine Learning']
Title Artificial Intelligence Construction — TechVirtuosityContent Copyright Pop Nukoonrat © 123RFcom Revolutionizing Construction Construction method use crucial success modern architecture build house massive structure using computer harness processing power create new solution artificial intelligence construction take thing whole new level It’s tool help u push boundary lot industry whole haven’t seen innovation Construction Industry Stagnating isn’t say hasn’t lot improvement throughout year construction remained slower adapt past often assumed productivity equaled larger machine theory worked day need something bigger machine need smarter machine solution several industry retail medical business general expanded construction fallen bit behind simply need adapt use technology artificial intelligence construction Would technology help lead u utopia Artificial Intelligence Helps Construction it’s still early part AI proven show promise reducing cost also software known building information modeling BIM short AI trained help suggest improvement build solution early also used risk managementmitigation providing safer alternative Construction robot becoming popular along 3D printing add AI new advantage AI thing risky u risk life using AI way still new early there’s multitude area help course young technology also brings risk using technology early on… Early Risks Involved Artificial intelligence construction great solution technology also brings different risk also need considered Anytime technology involved typically also inherent risk getting hacked construction software robot get hacked could jeopardize entire project argument it’s safer physical worker actual work robot AI trying take partially true though Construction steadily account 20 yearly death work AI pose risk hacker stand death toll high right without involvement life saving technology Hacking risk aside AI isn’t perfect make mistake isn’t always case it’s important recognize mistake happen new technology cost implement AI could cost money it’s done wrong it’s bad Copyright Kittipong Jirasukhanont © 123RFcom Machine Learning Mitigate Risks Machine learning important aspect using artificial intelligence construction allows program continually test essentially “learn” We’ve seen AI used field medicine success recent past show promise machine learning give u control first case didn’t know machine learning is… It’s method used teach AI accomplish something given parameter gauge success failure way remember result way improve number Think like race car crash fails completes course succeeds Machine learning take step though take concept race car find optimal way complete course that’s productive human use machine learning run thousand trial error succeed make AI capable human benefit construction single person try many time whereas AI run test simulation thousand time endlessly Construction benefit AI continually learn efficient safest way build solve problem Machine learning also incorporate previous knowledge test improving outcome immediate start training AI grows it’ll also show u improved way thing Smart AIs Equal Smarter Solutions future demand solution safe cheaper ultimately faster Construction always time sensitive task lot happens outside weather impact much efficiency worker AI provides solution faster cheaper mean le risk weather delaying project faster task completed le likely something go wrong theory AI help many different area by… Providing solution labor shortage use software automation solution Reducing risk safety spotting flaw creating safer alternative Actively monitoring work environment regulation used collaborate building plan making smart suggestion BIM software mentioned Providing analytics statistic online client worker Copyright Preechar Bowonkitwanchai © 123RFcom Artificial Intelligence Construction Embraced talked lot area AI help need given opportunity company already using technology still many quite AI used help keep track project reassure client Whether it’s city contracting work variety business Clients could benefit seeing progress online AI lot potential even user interface could AI learn client find useful view online end AI going future construction industry Whether want help push innovation forward think use AI avoid using technology construction Drop comment belowTags Construction AI Artificial Intelligence Technology Machine Learning
1,372
The Overlooked Conservative Case for Reining in Big Tech
The Overlooked Conservative Case for Reining in Big Tech Democrats aren’t the only ones ready to rewrite the antitrust rules for internet platforms Photo: SOPA Images/Getty Images Never in world history has one sector of the global economy risen to such global dominance, so fast, as Big Tech has in the past 20 years. In 2000, Amazon was an online bookseller, Apple was still an underdog, Google was a scrappy startup with little revenue, and Facebook didn’t exist. Today, along with Microsoft, they are the world’s five most valuable companies, and their decisions carry a level of global influence rivaled only by nation-states. They exert control over what we can say, how we can say it, what we buy, and what we read, and they wield unilateral power over the countless smaller businesses that rely on their platforms. Until about five years ago, a prevailing 21st-century view was that the internet sector was so dynamic that upstarts could come along at any point and depose the giants: Just look at how Google and Apple blew past Microsoft, or how Facebook conquered MySpace. That view is no longer tenable, as the top platforms’ network effects, lock-in, access to data, diversification of business lines, and ability to buy or copy rivals has given them advantages that now appear nearly insurmountable. The relevant business question is no longer, “Will they stay on top?”, but rather, “What markets will they conquer next?” (The one competitive threat that still looms is that China-based giants could outmaneuver them with products such as WeChat and TikTok. But the Trump administration’s crackdown on Chinese tech has abruptly curtailed that threat domestically, and India’s crackdown has mitigated it in the largest non-aligned market.) What to do about that concentration of power, if anything, is a question that has rapidly grown in urgency. There is an emerging consensus that antitrust action in some form is warranted, including among Republicans who are naturally skeptical of government intervention in markets. But there has been little clarity or agreement as to what form that action should take — until now. The Pattern We finally have a blueprint for regulating Big Tech. Or rather, two blueprints. Undercurrents Under-the-radar trends, stories, and random anecdotes worth your time. Facebook and Twitter are taking some precautionary measures ahead of the U.S. election. The most interesting came from Twitter, which announced on Friday that it will take three previously untried steps to pump the brakes on misinformation and polarizing content, starting October 20. First, it will default to a quote-tweet when you go to retweet something, encouraging you to stop and think about what you want to add to the conversation rather than simply amplifying a viewpoint. Second, it will stop surfacing tweets from people you don’t follow in your feed or notifications. Finally, it will only show trending topics that come with editorial context. You can read its full announcement here. Facebook, for its part, announced an indefinite ban on political ads starting after November 3, along with other measures aimed at thwarting misinformation around who won the election or incitements to violence in its wake. The most interesting came from Twitter, which announced on Friday that it will take three previously untried steps to pump the brakes on misinformation and polarizing content, starting October 20. First, it will default to a quote-tweet when you go to retweet something, encouraging you to stop and think about what you want to add to the conversation rather than simply amplifying a viewpoint. Second, it will stop surfacing tweets from people you don’t follow in your feed or notifications. Finally, it will only show trending topics that come with editorial context. You can read its full announcement here. Facebook, for its part, announced an indefinite ban on political ads starting after November 3, along with other measures aimed at thwarting misinformation around who won the election or incitements to violence in its wake. Cambridge Analytica didn’t unduly influence Brexit, a U.K. commission concluded, wrapping a three-year investigation into the political consultancy’s use of Facebook data in the campaign. The Financial Times reports that probe found that the methods used by a Cambridge Analytica affiliate were “in the main, well recognised processes using commonly available technology,” and that the resulting targeting of voters was not uniquely effective. The report was taken as vindication by some who felt the Cambridge Analytica scandal was overblown all along. Some privacy advocates were quick to reply that the real scandal was always more about how the data was gathered and obtained than how it affected election outcomes. (Both can be true; I made a version of this argument in 2018.) Headlines of the week Five Years of Tech Diversity Reports — and Little Progress — Sara Harrison, Wired How Excel may have caused loss of 16,000 Covid tests in England — Alex Hern, The Guardian QAnon high priest was just trolling away as a Citigroup tech executive — William Turton and Joshua Brustein, Bloomberg
https://onezero.medium.com/the-overlooked-conservative-case-for-reining-in-big-tech-5d1942d79a26
['Will Oremus']
2020-10-10 12:55:38.805000+00:00
['Pattern Matching', 'Antitrust', 'Facebook', 'Technology', 'Apple']
Title Overlooked Conservative Case Reining Big TechContent Overlooked Conservative Case Reining Big Tech Democrats aren’t one ready rewrite antitrust rule internet platform Photo SOPA ImagesGetty Images Never world history one sector global economy risen global dominance fast Big Tech past 20 year 2000 Amazon online bookseller Apple still underdog Google scrappy startup little revenue Facebook didn’t exist Today along Microsoft world’s five valuable company decision carry level global influence rivaled nationstates exert control say say buy read wield unilateral power countless smaller business rely platform five year ago prevailing 21stcentury view internet sector dynamic upstart could come along point depose giant look Google Apple blew past Microsoft Facebook conquered MySpace view longer tenable top platforms’ network effect lockin access data diversification business line ability buy copy rival given advantage appear nearly insurmountable relevant business question longer “Will stay top” rather “What market conquer next” one competitive threat still loom Chinabased giant could outmaneuver product WeChat TikTok Trump administration’s crackdown Chinese tech abruptly curtailed threat domestically India’s crackdown mitigated largest nonaligned market concentration power anything question rapidly grown urgency emerging consensus antitrust action form warranted including among Republicans naturally skeptical government intervention market little clarity agreement form action take — Pattern finally blueprint regulating Big Tech rather two blueprint Undercurrents Undertheradar trend story random anecdote worth time Facebook Twitter taking precautionary measure ahead US election interesting came Twitter announced Friday take three previously untried step pump brake misinformation polarizing content starting October 20 First default quotetweet go retweet something encouraging stop think want add conversation rather simply amplifying viewpoint Second stop surfacing tweet people don’t follow feed notification Finally show trending topic come editorial context read full announcement Facebook part announced indefinite ban political ad starting November 3 along measure aimed thwarting misinformation around election incitement violence wake interesting came Twitter announced Friday take three previously untried step pump brake misinformation polarizing content starting October 20 First default quotetweet go retweet something encouraging stop think want add conversation rather simply amplifying viewpoint Second stop surfacing tweet people don’t follow feed notification Finally show trending topic come editorial context read full announcement Facebook part announced indefinite ban political ad starting November 3 along measure aimed thwarting misinformation around election incitement violence wake Cambridge Analytica didn’t unduly influence Brexit UK commission concluded wrapping threeyear investigation political consultancy’s use Facebook data campaign Financial Times report probe found method used Cambridge Analytica affiliate “in main well recognised process using commonly available technology” resulting targeting voter uniquely effective report taken vindication felt Cambridge Analytica scandal overblown along privacy advocate quick reply real scandal always data gathered obtained affected election outcome true made version argument 2018 Headlines week Five Years Tech Diversity Reports — Little Progress — Sara Harrison Wired Excel may caused loss 16000 Covid test England — Alex Hern Guardian QAnon high priest trolling away Citigroup tech executive — William Turton Joshua Brustein BloombergTags Pattern Matching Antitrust Facebook Technology Apple
1,373
There Are 3 Big Misconceptions About Medium Going Around
There Are 3 Big Misconceptions About Medium Going Around Don’t let them confuse you. I just wrote a piece about how each Medium writer should do their own legwork when it comes to finding their way on the platform. And it’s true, the more you learn by yourself, the better. There are, however, many misconceptions going around about Medium, especially on outside forums, which can seriously impede a writer’s progress on this journey. And that’s not good. When it comes to using Medium to further your writing career, any misunderstanding can set you back a lot when not corrected in time. So let’s get into it. A few Medium concepts a lot of people have been getting wrong It’s time to stop the confusion once and for all. 1. Fans and followers are not the same thing Sometimes you’ll see a successful Medium writer talking about how the stat she pays more attention to is her number of fans, so you think she must be obsessed with her follower count, right? Wrong! On Medium, followers and fans are NOT the same thing. We don’t use those two words interchangeably because they are completely different concepts. A follower is a person who went into your profile and clicked on the “follow” button. This person is more likely to receive your content on her feed because she is actively indicating to Medium that she likes your writing and would like to see more from you. When you click on Follow, you become a follower A fan is simply someone who claps for your story. Everyone who claps for one of your stories becomes a fan, it doesn’t matter if they gave you 1 clap or 50. It also doesn’t matter if they’re following you or not. Clicking on the clap button makes you a fan. Therefore, sometimes your followers will be your fans because they clapped for your story, but not every one of your fans will necessarily be a follower, they can be just people who came across your article and happened to like it. You can go to your profile to see who follows you. To check your fans, go to your stats page. It will show you total fans for the month (nº of people who clapped for your stories) and number of fans per story. 2. Publications, Medium magazines, member features and curation are not the same thing Publications Anyone on Medium can create a publication. Just go on your round profile picture on the top right corner, click “publications,” then “create new.” I’ve created one. It’s called Mariposa and it’s awesome. There are Medium publications of all sizes, each with its own editors and catering to its own specific niche. Each publication has its own submission guidelines and rules to accept writers. I haven’t yet come across a publication which doesn’t feature their “how to submit” page in a very obvious place on their homepage. If you wish to submit, read and follow the instructions carefully. Medium Magazines (or Collections) These are especially put together by Medium following a theme. Some of the most recent ones were: “Can we Talk?”; “For the Record”; “Office Politics” and “Reasonable Doubt.” The good news is that Medium will occasionally send out emails to its Partner Program Members with specific calls for submissions, but unlike publications, these magazines or collections don’t have easily accessible guidelines or submissions open year-round. All you can do, really, is keep checking your inbox. Member Feature Story These are the stories Medium editors pick to feature at the homepage. When you click on one of them, it will have the nice “Member Feature Story” up there near the title. There’s no way to submit or apply for those. All you should do is to write and post a story as usual, then hope Medium editors will see it and like it enough to want to feature it. If they do, you’ll get a notification by email. It never happened to me, but other writers who had their pieces featured have confirmed that this is the process. Update: when I wrote this story, I was under the impression that Member Feature meant featured stories BY members, which meant you’d have to be a member to have a story features. As I have recently learned, Member Feature means the story is feature TO the members, which means the writer herself doesn’t have to be a member to be featured. Curation (or Story Distributed by Curators) Curation is the term for having your story picked by the Medium curators to be distributed under one or more tags. Getting a story curated means it will show up on thousands of people’s feeds, including those who don’t follow you. Getting curated is also an endorsement of your story by Medium editors. It means they have read it and found it worthy of sharing. Medium now notifies writers when their stories are curated. You can also know if a story has been curated when you look at individual story stats and see something like this: Medium makes it pretty obvious when you’re curated. Any story posted behind the paywall can get curated, whether you post them on publications or just on your profile, just make sure you keep the box for the Partner Program checked when you publish. It looks like this: For more detailed insights into curation, make sure to read Shannon Ashley’s piece on the subject here. 3. There’s no “normal” when it comes to Medium — each writer has her own journey Another common misconception about Medium is the idea that you can predict how your experience is going to be like (or how much money you’re going to make) based on the experiences of others. Because each voice here is unique, each writer is going to have a different experience. You can ask however many questions you want. Is it normal that I haven’t been curated yet? How much can I expect to make in my first week? Is it normal to only get 3 claps on your first story? These questions don’t even make sense. Or they do, only they all have the same answer: when it comes to Medium, there is no normal. Some writers sign up on Medium with Facebook, and bring along their friends as their first audience. Some writers sign up for Medium and start with a 0 follower count. Some will submit to publications, some won’t. Some will get accepted, some won’t. Some will have well-received stories, some won’t. What’s normal? All of it is. We’re all unique people, with unique voices and a particular way to experience the platform. There’s no way to predict how your experience is going to be like based on someone else’s. You can achieve similar results by taking similar steps, but please, don’t get attached to comparisons, and forget the idea that there is a “normal.” You make your own Medium journey. You make your own normal.
https://medium.com/sunday-morning-talks/there-are-3-big-misconceptions-about-medium-going-around-3f63e090f3c3
['Tesia Blake']
2019-02-28 17:33:33.395000+00:00
['Medium', 'Writing', 'Self', 'Creativity', 'Writing Tips']
Title 3 Big Misconceptions Medium Going AroundContent 3 Big Misconceptions Medium Going Around Don’t let confuse wrote piece Medium writer legwork come finding way platform it’s true learn better however many misconception going around Medium especially outside forum seriously impede writer’s progress journey that’s good come using Medium writing career misunderstanding set back lot corrected time let’s get Medium concept lot people getting wrong It’s time stop confusion 1 Fans follower thing Sometimes you’ll see successful Medium writer talking stat pay attention number fan think must obsessed follower count right Wrong Medium follower fan thing don’t use two word interchangeably completely different concept follower person went profile clicked “follow” button person likely receive content feed actively indicating Medium like writing would like see click Follow become follower fan simply someone clap story Everyone clap one story becomes fan doesn’t matter gave 1 clap 50 also doesn’t matter they’re following Clicking clap button make fan Therefore sometimes follower fan clapped story every one fan necessarily follower people came across article happened like go profile see follows check fan go stats page show total fan month nº people clapped story number fan per story 2 Publications Medium magazine member feature curation thing Publications Anyone Medium create publication go round profile picture top right corner click “publications” “create new” I’ve created one It’s called Mariposa it’s awesome Medium publication size editor catering specific niche publication submission guideline rule accept writer haven’t yet come across publication doesn’t feature “how submit” page obvious place homepage wish submit read follow instruction carefully Medium Magazines Collections especially put together Medium following theme recent one “Can Talk” “For Record” “Office Politics” “Reasonable Doubt” good news Medium occasionally send email Partner Program Members specific call submission unlike publication magazine collection don’t easily accessible guideline submission open yearround really keep checking inbox Member Feature Story story Medium editor pick feature homepage click one nice “Member Feature Story” near title There’s way submit apply write post story usual hope Medium editor see like enough want feature you’ll get notification email never happened writer piece featured confirmed process Update wrote story impression Member Feature meant featured story member meant you’d member story feature recently learned Member Feature mean story feature member mean writer doesn’t member featured Curation Story Distributed Curators Curation term story picked Medium curator distributed one tag Getting story curated mean show thousand people’s feed including don’t follow Getting curated also endorsement story Medium editor mean read found worthy sharing Medium notifies writer story curated also know story curated look individual story stats see something like Medium make pretty obvious you’re curated story posted behind paywall get curated whether post publication profile make sure keep box Partner Program checked publish look like detailed insight curation make sure read Shannon Ashley’s piece subject 3 There’s “normal” come Medium — writer journey Another common misconception Medium idea predict experience going like much money you’re going make based experience others voice unique writer going different experience ask however many question want normal haven’t curated yet much expect make first week normal get 3 clap first story question don’t even make sense answer come Medium normal writer sign Medium Facebook bring along friend first audience writer sign Medium start 0 follower count submit publication won’t get accepted won’t wellreceived story won’t What’s normal We’re unique people unique voice particular way experience platform There’s way predict experience going like based someone else’s achieve similar result taking similar step please don’t get attached comparison forget idea “normal” make Medium journey make normalTags Medium Writing Self Creativity Writing Tips
1,374
A Look Behind the Mask
Not all of these characteristics need be present to constitute an abusive relationship, and there are certainly others that were likely not mentioned. Although abuse follows a similar pattern, it is important to note they can manifest in certain individualized behaviors. Understanding our personal experience is key to moving forward and planning a safe exit. As victims, we can use our knowledge and awareness of our partners’ behavior patterns during our unique cycle of violence. From this, we can determine indicators of upcoming episodes and plan suitable responses to keep us safe. In this way, we learn to adapt in order to survive. But eventually as time goes on, the cycle of violence becomes shorter, faster, and more intense. Breaking the cycle of abuse means breaking the denial that something is wrong. It means that we must forfeit the illusion of what we have accepted our lives to be. It means we have to gain a conscious awareness that we are actually being abused. It means we have to finally let go of the fairytale that turned into a nightmare. No one wants to face that pain, no one. Healing takes time, and it hurts, especially at the beginning. If you are considering that now is the time for you to leave, more than likely you have been depressed, felt trapped, or even felt like death was your only way out. I am here today as living proof that you can survive this. You are brave and strong. Look what you have already endured. We must accept this disillusionment. It is only when we are at the end of the road that we can truly begin to heal. If you feel your life is in eminent danger or you are being threatened or physically harmed, call local law enforcement for immediate assistance. Please reach out to your local domestic violence shelter or call the national hotline at 1–800–799–7233 to begin safety planning and to obtain information on domestic violence restraining and protective orders.
https://medium.com/we-are-warriors/behind-the-mask-profiling-a-narcissistic-abuser-dbbbfe972104
['Samantha Clarke']
2019-05-12 11:12:41.186000+00:00
['Mental Health', 'Wellness', 'Domestic Violence', 'Abuse', 'Narcissism']
Title Look Behind MaskContent characteristic need present constitute abusive relationship certainly others likely mentioned Although abuse follows similar pattern important note manifest certain individualized behavior Understanding personal experience key moving forward planning safe exit victim use knowledge awareness partners’ behavior pattern unique cycle violence determine indicator upcoming episode plan suitable response keep u safe way learn adapt order survive eventually time go cycle violence becomes shorter faster intense Breaking cycle abuse mean breaking denial something wrong mean must forfeit illusion accepted life mean gain conscious awareness actually abused mean finally let go fairytale turned nightmare one want face pain one Healing take time hurt especially beginning considering time leave likely depressed felt trapped even felt like death way today living proof survive brave strong Look already endured must accept disillusionment end road truly begin heal feel life eminent danger threatened physically harmed call local law enforcement immediate assistance Please reach local domestic violence shelter call national hotline 1–800–799–7233 begin safety planning obtain information domestic violence restraining protective ordersTags Mental Health Wellness Domestic Violence Abuse Narcissism
1,375
How to Implement Logging in Your Python Application
Enter Python’s Logging Module Fortunately, the importance of logging is not a new phenomenon. Python ships with a ready-made logging solution as part of the Python standard library. It solves all the aforementioned problems with using print . For example: Automatically add context, such as line number and timestamps to logs. It’s possible to update our logger at runtime by passing a configuration file to the app. It is easy to customise the log severity and configure different logging levels for different environments Let’s try it out and set up a very basic logger: Running this gives: INFO:__main__:Getting some docs... INFO:__main__:Doc count 2 INFO:__main__:Finished Easy peasy! Here, we have imported the logging module from the Python standard library. We then updated the default basic log level to log INFO messages. Next, logger = logging.getLogger(__name__) instantiates our logging instance. Finally, we passed an event to the logger with a log level of INFO by calling logger.info("") . At first glance, this output might appear suspiciously similar to using print() . Next, we’ll expand our example logger to demonstrate some of the more powerful features that the Python standard logging module provides. Log levels We can configure the severity of the logs being output and filter out unimportant ones. The module defines five constants throughout the spectrum, making it easy to differentiate between messages. The numeric values of logging levels are given in the following table: Logging levels from Python’s documentation. It’s important not to flood your logs with lots of messages. To achieve concise logs, we should be careful to define the correct log level for each event: logger.critical("Really bad event" ) logger.error("An error") logger.warning("An unexpected event") logger.info("Used for tracking normal application flow") logger.debug("Log data or variables for developing") I tend to use the debug level to log the data being passed around the app. Here is an example of using three different log levels in the few lines of code responsible for sending events to Kafka: Formatting logs The default formatter of the Python logging module doesn’t provide a great amount of detail. Fortunately, it is easy to configure the log format to add all the context we need to produce super-useful log messages. For example, here we add a timestamp and the log level to the log message: formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s') It’s best practice to add as much context as possible to your logs. This can easily be achieved by adding structured data to the log message’s metadata. For example, you may have scaled your application to run with multiple workers. In this case, it might be important to know which worker was logging each event when you’re debugging, so let’s add a worker ID context to the log metadata: # Create the log formatter formatter = logging.Formatter( '%(asctime)s - %(worker)s %(levelname)s - %(message)s') handler.setFormatter(formatter) logger.info('Querying database for docs...', extra={'worker': 'id_1'}) The output becomes: 2020-09-02 22:06:18,170 - id_1 - INFO - Querying database for docs... Log handlers Now that we have perfectly formatted logs being fired at us from all over our application code, we need to consider where those logs are ending up. By default, the logs are being written to stdout , but Python’s logging module provides us with the functionality to push logs to alternative locations. For example, to save logs to the example.log file on disk: # create a file handler handler = logging.FileHandler('example.log') handler.setLevel(logging.INFO) There are several types of handlers that can be used. For the complete list, see the documentation for handlers. It is also possible to define custom logging handlers for different use cases. For example, this library defines a log handler for pushing logs to Slack! To summarise. We’ve set up the Python standard logging module and configured it to log to different locations with custom log formats. You can find the final code for the example logger below:
https://medium.com/better-programming/how-to-implement-logging-in-your-python-application-1730315003c4
['Leo Brack']
2020-09-09 14:33:01.296000+00:00
['Programming', 'Software Development', 'Python', 'Startup', 'Data Science']
Title Implement Logging Python ApplicationContent Enter Python’s Logging Module Fortunately importance logging new phenomenon Python ship readymade logging solution part Python standard library solves aforementioned problem using print example Automatically add context line number timestamps log It’s possible update logger runtime passing configuration file app easy customise log severity configure different logging level different environment Let’s try set basic logger Running give INFOmainGetting doc INFOmainDoc count 2 INFOmainFinished Easy peasy imported logging module Python standard library updated default basic log level log INFO message Next logger logginggetLoggername instantiates logging instance Finally passed event logger log level INFO calling loggerinfo first glance output might appear suspiciously similar using print Next we’ll expand example logger demonstrate powerful feature Python standard logging module provides Log level configure severity log output filter unimportant one module defines five constant throughout spectrum making easy differentiate message numeric value logging level given following table Logging level Python’s documentation It’s important flood log lot message achieve concise log careful define correct log level event loggercriticalReally bad event loggererrorAn error loggerwarningAn unexpected event loggerinfoUsed tracking normal application flow loggerdebugLog data variable developing tend use debug level log data passed around app example using three different log level line code responsible sending event Kafka Formatting log default formatter Python logging module doesn’t provide great amount detail Fortunately easy configure log format add context need produce superuseful log message example add timestamp log level log message formatter loggingFormatterasctimes levelnames message It’s best practice add much context possible log easily achieved adding structured data log message’s metadata example may scaled application run multiple worker case might important know worker logging event you’re debugging let’s add worker ID context log metadata Create log formatter formatter loggingFormatter asctimes worker levelnames message handlersetFormatterformatter loggerinfoQuerying database doc extraworker id1 output becomes 20200902 220618170 id1 INFO Querying database doc Log handler perfectly formatted log fired u application code need consider log ending default log written stdout Python’s logging module provides u functionality push log alternative location example save log examplelog file disk create file handler handler loggingFileHandlerexamplelog handlersetLevelloggingINFO several type handler used complete list see documentation handler also possible define custom logging handler different use case example library defines log handler pushing log Slack summarise We’ve set Python standard logging module configured log different location custom log format find final code example logger belowTags Programming Software Development Python Startup Data Science
1,376
Veganism in 5 Easy Steps
Photo Credit: Creatv-Eight--Unsplash Veganism in 5 Easy Steps A simple guide to eating more eco-friendly This is not another one of those preachy “convert to veganism” articles. This is for people who are sincerely unaware of what is happening to animals on a daily basis. This is for the people who say, “I love animals,” but continue to eat them turning a blind eye to the injustice their eating is doing to these sentient beings, our bodies, and the planet. It was twenty years ago when I first learned about the dangers of cow’s milk and the effect dairy products have on humans bodies, ie, extra mucous, skin problems, inflammation to name a few. It was 15 years ago that I watched the documentary,” Super Size Me’’ and gave up McDonald’s indefinitely. My 4-year-old son had never had McDonald’s and went to a friend whose parents took him there without me knowing for pancakes. He literally threw up the food. It is not fit for human consumption, and when the body is not used to it, it will reject it. This is what many kids are eating day after day. It was a decade ago that I took a class called, “Food and Mental Health” taught by a Naturopath in Seattle. I was a meat-eater and a dairy consumer at this point. I had given up “red meat” for the most part and thought it was better to use ground turkey, but had no idea where the turkeys came from. I didn’t think twice about my “healthier” alternative to red meat. I was fortunate in this class as I learned about these turkeys all stuffed together in their metal cages. I saw footage of chickens and turkeys being debeaked so they wouldn’t peck each other to death and them being so “plumped” up with hormones they could no longer walk. I watched them try to lurch themselves over other dead birds, while they sat in piles of their own excrement. I saw footage showing the baby male chickens being dumped in a huge garbage bin alive with their soft yellow down to later be incinerated as they were no use to the factory farm as they couldn’t lay eggs to be sold. Photo by Jason Leung on Unsplash I won’t continue, but you can imagine scenarios like this across the board within the meat and dairy industries and if you still think dairy is okay because it doesn’t kill the animal, listen to the cries of the baby cows and mothers before slaughter and when being taken away to be bottle-fed while the mother is milked so that humans can use her milk?! It doesn’t make much sense when you really stop to think about it. We harm innocent creatures for a taste or flavor that goes well with our nightly glass of wine or our dinner out, but put no connection to the sentient beings we are massacring on a daily basis. But, what will I eat? For those of you still with me, with curiosity, what does this meat-free life look like? What will I eat? What will my family eat? How will I get my protein? I’ve got you. Below are some resources to get started on your journey. Do you have to be perfect? No, just a beginning would be learning about the animals we say we care so much about. Watch a documentary, start following vegetarian and vegan recipe bloggers, Give up meat one day a week (Meatless Mondays is a thing). Give up meat and dairy for a month (Veganuary). You won’t die and you might just like it! What is the worst thing that can happen when you incorporate more whole foods into your diet and maybe trying something that you traditionally wouldn’t have?
https://medium.com/illumination/veganism-in-5-easy-steps-cb1bece2173c
['Melissa Steussy']
2020-12-07 04:11:29.308000+00:00
['Veganism', 'Health', 'Vegan', 'Plant Based', 'Animal Rights']
Title Veganism 5 Easy StepsContent Photo Credit CreatvEightUnsplash Veganism 5 Easy Steps simple guide eating ecofriendly another one preachy “convert veganism” article people sincerely unaware happening animal daily basis people say “I love animals” continue eat turning blind eye injustice eating sentient being body planet twenty year ago first learned danger cow’s milk effect dairy product human body ie extra mucous skin problem inflammation name 15 year ago watched documentary” Super Size Me’’ gave McDonald’s indefinitely 4yearold son never McDonald’s went friend whose parent took without knowing pancake literally threw food fit human consumption body used reject many kid eating day day decade ago took class called “Food Mental Health” taught Naturopath Seattle meateater dairy consumer point given “red meat” part thought better use ground turkey idea turkey came didn’t think twice “healthier” alternative red meat fortunate class learned turkey stuffed together metal cage saw footage chicken turkey debeaked wouldn’t peck death “plumped” hormone could longer walk watched try lurch dead bird sat pile excrement saw footage showing baby male chicken dumped huge garbage bin alive soft yellow later incinerated use factory farm couldn’t lay egg sold Photo Jason Leung Unsplash won’t continue imagine scenario like across board within meat dairy industry still think dairy okay doesn’t kill animal listen cry baby cow mother slaughter taken away bottlefed mother milked human use milk doesn’t make much sense really stop think harm innocent creature taste flavor go well nightly glass wine dinner put connection sentient being massacring daily basis eat still curiosity meatfree life look like eat family eat get protein I’ve got resource get started journey perfect beginning would learning animal say care much Watch documentary start following vegetarian vegan recipe blogger Give meat one day week Meatless Mondays thing Give meat dairy month Veganuary won’t die might like worst thing happen incorporate whole food diet maybe trying something traditionally wouldn’t haveTags Veganism Health Vegan Plant Based Animal Rights
1,377
Abbott’s Rapid-Response Covid-19 Test; Is the Approval Good News?
Abbott’s Rapid-Response Covid-19 Test; Is the Approval Good News? Unanswered questions may impede the rollout What if millions of people could get a quick, reliable test and find out if they are Covid-19 carriers? The Food and Drug Administration granted emergency-use authorization to Abbott Laboratories for a $5 rapid-response Covid-19 test. Reliable high-frequency testing may present the world with a viable path forward. A widely available test would help kids get back to school safely and allow workers to return to the office. A rapid test might enable us to eat inside a restaurant, take a vacation, or go to a football game. Is BinaxNOW Covid-19 Antigen Card the solution we have all been waiting for? Maybe, but we need answers to some critical questions before we hop on a cruise ship. BinaxNow’s emergency use authorization is approved for use in symptomatic patients in a healthcare setting. But the coordinated release of a free digital health app along with Abbott’s claim to be able to test “millions of people per day” acknowledges this test will be used beyond its limited approval. The “who, what, when, where, and how” of BinaxNow utilization must be addressed. Abbott’s BinaxNOW Covid-19 Ag Card is about the size of a credit card and doesn’t require added equipment. Photo: Abbott Laboratories What Covid-19 tests are available now? There are three categories of Covid-19 tests. Each works in different ways to detect evidence of SARS-CoV-2 infection. Antibody testing detects a past infection and potential immunity. Molecular testing(PCR) detects genetic material from the virus to determine if someone has the virus right now. Antigen testing detects the fragmented pieces of the virus that trigger an immune response. Like PCR testing, antigen testing is used to detect an active infection but can be done much faster. The recently FDA-approved rapid testing BinaxNow uses antigen detection. BinaxNow is a step in the right direction Rapid-responding tests are certainly a positive step in the right direction. Getting reliable results as fast as possible will help us reopen our economies and stop the pandemic spread. Here are the valuable BinaxNow features: Fast results. Abbott’s rapid antigen test provides results within 15 minutes. Accurate results. The test is highly accurate when testing symptomatic patients within seven days of the onset of symptoms. The data reported to the FDA shows a sensitivity of 97.1% and a specificity of 98.5%. Pain-free nasal swab. This technology does not require the tickle-your-brain deep nasopharyngeal swab like the PCR tests. A simple, painless nose swab is used to collect the testing specimen. No instrumentation required. This test does not require a medical practice to purchase expensive or complicated equipment. The lack of capital investment makes it ideal for CLIA-waived point-of-care testing. NAVICA™ app. Abbott released a complimentary digital health tool to pair with the new COVID-19 antigen test to facilitate use. Let’s tap the breaks on BinaxNow Before we get too excited, we need to understand the limitations of this specific rapid antigen testing technology. This specific test has a few problems and unanswered questions. Who performs the test? BinaxNow is only approved for clinical use by health care professionals. The Abbott press release makes it clear this test is not approved for use by the general public outside of a healthcare provider’s oversight. The press release states millions of tests can be done per day. If these tests require a healthcare provider, then infrastructure for a scalable rollout of “millions of tests per day” needs to be implemented. Antigen testing has limitations. Antigen tests look for pieces of the virus. They are less accurate than traditional molecular PCR testing, which looks for the virus’s genetic material. BinaxNow is not FDA-approved as a screening test. The test is meant to be used only on people with symptoms of COVID-19 and within seven days of the onset of their symptoms. The accuracy of asymptomatic patients is unpublished. The FDA approved this test based on a study of 102 symptomatic patients. The results show a sensitivity of 97% and a specificity of 98%. These patients, who were within seven days of the onset of symptoms, would have had high levels of viral shedding. These numbers indicate BinaxNow is an accurate way to test sick people, but how effective is it when testing asymptomatic individuals? 5. BinaxNow will be used off-label on asymptomatic people. The entire world has been waiting for a low-cost test. BinaxNow can and will be used legally off-label on asymptomatic individuals. Health professionals need to know the accuracy beyond the reported specificity and sensitivity in symptomatic patients suspected of having Covid-19. Before off-label use occurs, we must know how to interpret the results. 6. The NAVICA™ app creates a blurry line between screening and diagnostics. The NAVICA™ Press Release makes an excellent case “to help facilitate easier access to organizations and other locations where people gather.” If BinaxNow is limited to the symptomatic individuals within seven days of the onset of symptoms, the app would have limited utility. The creation of NAVICA™ reveals Abbott Lab’s is counting on the widespread use of their rapid antigen test. If so, we need to know the accuracy of testing asymptomatic individuals. 7. The economics of BinaxNow is unclear. The Abbott press release highlights the test will cost $5, but this test is not a direct-to-consumer product. What will this test actually cost, and who is paying for it? Doctors and hospitals purchase tests through a supply chain and then bill a third-party payer for the cost. BinaxNow is not being released as a direct-to-consumer product. Essential questions for an effective, scalable, and rapid roll-out must be answered before medical offices, hospitals, and consumer lab companies can begin to offer this potentially game-changing testing option. Here are the practical questions for medical office integration: What is the appropriate Current Procedural Terminology (CPT code)? There are currently two approved antigen testing codes (86328 and 86769). Will BinaxNow use one of these or a new one? Will Medicare, Medicaid, and private insurance companies honor and reimburse for BinaxNow? What is the rate of reimbursement for the CPT code? The reimbursement rates must justify the costs. Medical practices have to evaluate the financial impact of any new technology. If BinaxNow costs $5/unit and Medicaid reimburses $4, then a medical office will not be able to afford to offer the service. Medical practices will be highly motivated to provide rapid testing to their patients. Without a fair reimbursement rate, practices may find themselves testing their way to bankruptcy. Rapid antigen testing through BinaxNow could be a game-changing technology. As with many things in Operation Warp Speed, we are missing a nationally coordinated strategic plan. Binaxnow will be welcomed by the public and the medical community, but we deserve to know how well it works as a screening test and who is going to pay for it. —
https://medium.com/beingwell/abbotts-rapid-response-covid-19-test-is-the-approval-good-news-2b27c0b536b3
['Dr Jeff Livingston']
2020-09-01 21:59:56.722000+00:00
['Covid 19', 'Health', 'Testing', 'Coronavirus', 'Pandemic']
Title Abbott’s RapidResponse Covid19 Test Approval Good NewsContent Abbott’s RapidResponse Covid19 Test Approval Good News Unanswered question may impede rollout million people could get quick reliable test find Covid19 carrier Food Drug Administration granted emergencyuse authorization Abbott Laboratories 5 rapidresponse Covid19 test Reliable highfrequency testing may present world viable path forward widely available test would help kid get back school safely allow worker return office rapid test might enable u eat inside restaurant take vacation go football game BinaxNOW Covid19 Antigen Card solution waiting Maybe need answer critical question hop cruise ship BinaxNow’s emergency use authorization approved use symptomatic patient healthcare setting coordinated release free digital health app along Abbott’s claim able test “millions people per day” acknowledges test used beyond limited approval “who how” BinaxNow utilization must addressed Abbott’s BinaxNOW Covid19 Ag Card size credit card doesn’t require added equipment Photo Abbott Laboratories Covid19 test available three category Covid19 test work different way detect evidence SARSCoV2 infection Antibody testing detects past infection potential immunity Molecular testingPCR detects genetic material virus determine someone virus right Antigen testing detects fragmented piece virus trigger immune response Like PCR testing antigen testing used detect active infection done much faster recently FDAapproved rapid testing BinaxNow us antigen detection BinaxNow step right direction Rapidresponding test certainly positive step right direction Getting reliable result fast possible help u reopen economy stop pandemic spread valuable BinaxNow feature Fast result Abbott’s rapid antigen test provides result within 15 minute Accurate result test highly accurate testing symptomatic patient within seven day onset symptom data reported FDA show sensitivity 971 specificity 985 Painfree nasal swab technology require tickleyourbrain deep nasopharyngeal swab like PCR test simple painless nose swab used collect testing specimen instrumentation required test require medical practice purchase expensive complicated equipment lack capital investment make ideal CLIAwaived pointofcare testing NAVICA™ app Abbott released complimentary digital health tool pair new COVID19 antigen test facilitate use Let’s tap break BinaxNow get excited need understand limitation specific rapid antigen testing technology specific test problem unanswered question performs test BinaxNow approved clinical use health care professional Abbott press release make clear test approved use general public outside healthcare provider’s oversight press release state million test done per day test require healthcare provider infrastructure scalable rollout “millions test per day” need implemented Antigen testing limitation Antigen test look piece virus le accurate traditional molecular PCR testing look virus’s genetic material BinaxNow FDAapproved screening test test meant used people symptom COVID19 within seven day onset symptom accuracy asymptomatic patient unpublished FDA approved test based study 102 symptomatic patient result show sensitivity 97 specificity 98 patient within seven day onset symptom would high level viral shedding number indicate BinaxNow accurate way test sick people effective testing asymptomatic individual 5 BinaxNow used offlabel asymptomatic people entire world waiting lowcost test BinaxNow used legally offlabel asymptomatic individual Health professional need know accuracy beyond reported specificity sensitivity symptomatic patient suspected Covid19 offlabel use occurs must know interpret result 6 NAVICA™ app creates blurry line screening diagnostics NAVICA™ Press Release make excellent case “to help facilitate easier access organization location people gather” BinaxNow limited symptomatic individual within seven day onset symptom app would limited utility creation NAVICA™ reveals Abbott Lab’s counting widespread use rapid antigen test need know accuracy testing asymptomatic individual 7 economics BinaxNow unclear Abbott press release highlight test cost 5 test directtoconsumer product test actually cost paying Doctors hospital purchase test supply chain bill thirdparty payer cost BinaxNow released directtoconsumer product Essential question effective scalable rapid rollout must answered medical office hospital consumer lab company begin offer potentially gamechanging testing option practical question medical office integration appropriate Current Procedural Terminology CPT code currently two approved antigen testing code 86328 86769 BinaxNow use one new one Medicare Medicaid private insurance company honor reimburse BinaxNow rate reimbursement CPT code reimbursement rate must justify cost Medical practice evaluate financial impact new technology BinaxNow cost 5unit Medicaid reimburses 4 medical office able afford offer service Medical practice highly motivated provide rapid testing patient Without fair reimbursement rate practice may find testing way bankruptcy Rapid antigen testing BinaxNow could gamechanging technology many thing Operation Warp Speed missing nationally coordinated strategic plan Binaxnow welcomed public medical community deserve know well work screening test going pay —Tags Covid 19 Health Testing Coronavirus Pandemic
1,378
Is Technology Destabilizing Reality?
Yes and no. Nature is destabilizing reality. Nature (Conservation of a Circle) (NASA) Is technology destabilizing reality? Yes and no. Nature is destabilizing reality, for sure. How do we know this? The constant (re) circulation in Nature destabilizes everything. It looks like (has to look like) (can only look like) this: Nature Reality Technology Where, reality is stabilized, and, also destabilized, by the conservation of a circle. Conservation of a circle. Explaining the genesis of technology (the basis for, both, and-or, either, zero, and-or, one). Zero and-or One (Both and-or Either) Eliminating (exposing) the redundancy present in any ‘gate’ (and-or) (if-then). If-then. And-or. Corrupting our ‘understanding’ of a circuit. And, therefore, then, eventually, disrupting, everything we ‘know’ about (are relying on in) technology Circuit. And there it is. Technology. Reality. Nature stabilizing, and, also, destabilizing, both. Conservation of the circle is the core (only) dynamic in Nature (reality included).
https://medium.com/the-circular-theory/is-technology-destabilizing-reality-d45a51bcde92
['Ilexa Yardley']
2019-08-03 16:07:41.363000+00:00
['Society', 'Quantum Computing', 'Culture', 'Digital Transformation', 'Books']
Title Technology Destabilizing RealityContent Yes Nature destabilizing reality Nature Conservation Circle NASA technology destabilizing reality Yes Nature destabilizing reality sure know constant circulation Nature destabilizes everything look like look like look like Nature Reality Technology reality stabilized also destabilized conservation circle Conservation circle Explaining genesis technology basis andor either zero andor one Zero andor One andor Either Eliminating exposing redundancy present ‘gate’ andor ifthen Ifthen Andor Corrupting ‘understanding’ circuit therefore eventually disrupting everything ‘know’ relying technology Circuit Technology Reality Nature stabilizing also destabilizing Conservation circle core dynamic Nature reality includedTags Society Quantum Computing Culture Digital Transformation Books
1,379
How to Maintain a State of Creative ‘Flow’
Josh Waitzkin, chess prodigy and author of The Art Of Learning, once described a conversation he had with skiing legend Billy Kidd, in which he asked Kidd about the three most important turns on a ski run: … the three most important turns of the ski run are the last three before you get on the lift. And it’s a subtle point. That’s when the slope is leveled off, there’s less challenge. Most people are very sloppy. They’re taking the weight off the muscles they’ve been using. They have bad form. The problem with that is that on the lift ride up, they’re unconsciously internalizing bad body mechanics. As Billy points out, if your last three turns are precise, you’re internalizing precision on the lift ride up. And so it goes with flow. When we walk away from our work drained, dazed, and confused, we internalize those feelings. That all-nighter where you worked until you literally couldn’t anymore? It may have yielded production, but the brain drain you felt when you walked away followed you back to your desk the next day. The bitter gambler will always tell you the same story, “If it wasn’t for that last hand, I’d be rich!” But the gambler who laughs all the way home after doubling her money knows it’s because she walked away before her luck ran out. Hemingway knew that flow wasn’t a ghost to be strangled to death on every chance encounter. He walked away from his typewriter while he still had gas in the tank and inspiration on his side. Many artists and entrepreneurs think of beating their head against the wall in search of inspiration as a rite of passage. But Hemingway never allowed those feelings to enter his workspace. He walked away long before those feelings of brain drain could be internalized. This helped him return to his work knowing exactly where to start again. I always worked until I had something done, and I always stopped when I knew what was going to happen next. That way I could be sure of going on the next day. —Ernest Hemingway There is still a strong undercurrent in our society, particularly amongst entrepreneurs, that continues to celebrate and glamorize the grind. Just like with conversations around how much sleep is best to have each night, there is an unspoken competition around who can stay in the pressure cooker, working the longest and the hardest. But anyone can learn how to outlast the others. The real discipline comes from walking away before you’re cooked. It takes a cool, Hemingway-like confidence to tell the muses, “We’ve worked enough today. I’m sure I’ll see you around tomorrow.” So the question remains, for athletes, creatives, writers, producers, and thinkers alike: When you find your flow today, will you have the discipline to walk away before it’s all gone?
https://medium.com/s/story/how-to-master-the-flow-state-one-simple-yet-difficult-trick-56854fca9109
['Corey Mccomb']
2018-09-11 20:19:01.331000+00:00
['Life Lessons', 'Inspiration', 'Personal Development', 'Creativity', 'Productivity']
Title Maintain State Creative ‘Flow’Content Josh Waitzkin chess prodigy author Art Learning described conversation skiing legend Billy Kidd asked Kidd three important turn ski run … three important turn ski run last three get lift it’s subtle point That’s slope leveled there’s le challenge people sloppy They’re taking weight muscle they’ve using bad form problem lift ride they’re unconsciously internalizing bad body mechanic Billy point last three turn precise you’re internalizing precision lift ride go flow walk away work drained dazed confused internalize feeling allnighter worked literally couldn’t anymore may yielded production brain drain felt walked away followed back desk next day bitter gambler always tell story “If wasn’t last hand I’d rich” gambler laugh way home doubling money know it’s walked away luck ran Hemingway knew flow wasn’t ghost strangled death every chance encounter walked away typewriter still gas tank inspiration side Many artist entrepreneur think beating head wall search inspiration rite passage Hemingway never allowed feeling enter workspace walked away long feeling brain drain could internalized helped return work knowing exactly start always worked something done always stopped knew going happen next way could sure going next day —Ernest Hemingway still strong undercurrent society particularly amongst entrepreneur continues celebrate glamorize grind like conversation around much sleep best night unspoken competition around stay pressure cooker working longest hardest anyone learn outlast others real discipline come walking away you’re cooked take cool Hemingwaylike confidence tell mus “We’ve worked enough today I’m sure I’ll see around tomorrow” question remains athlete creatives writer producer thinker alike find flow today discipline walk away it’s goneTags Life Lessons Inspiration Personal Development Creativity Productivity
1,380
GAN — CycleGAN (Playing magic with pictures)
In addition, the set of images are not paired, i.e. we do not have the real images corresponding to the same locations where Monet painted the pictures. CycleGAN learns the style of his images as a whole and applies it to other types of images. CycleGAN The concept of applying GAN to an existing design is very simple. We can treat the original problem as a simple image reconstruction. We use a deep network G to convert image x to y. We reverse the process with another deep network F to reconstruct the image. Then, we use a mean square error MSE to guide the training of G and F. However, we are not interested in reconstructing images. We want to create y resembling certain styles. In GAN, a discriminator D is added to an existing design to guide the generator network to perform better. D acts as a critic between the training samples and the generated images. Through this criticism, we use backpropagation to modify the generator to produce images that address the shortcoming identified by the discriminator. In this problem, we introduce a discriminator D to make sure Y resemble Van Gogh paintings. Network design CycleGAN transfers pictures from one domain to another. To transform pictures between real images and Van Gogh paintings. We build three networks. A generator G to convert a real image to a Van Gogh style picture. to convert a real image to a Van Gogh style picture. A generator F to convert a Van Gogh style picture to a real image. to convert a Van Gogh style picture to a real image. A discriminator D to identify real or generated Van Gogh pictures. For the reverse direction, we just reverse the data flow and build an additional discriminator Dx to identify real images.
https://jonathan-hui.medium.com/gan-cyclegan-6a50e7600d7
['Jonathan Hui']
2018-07-28 15:09:53.586000+00:00
['Deep Learning', 'Artificial Intelligence', 'Computer Vision', 'Data Science', 'Machine Learning']
Title GAN — CycleGAN Playing magic picturesContent addition set image paired ie real image corresponding location Monet painted picture CycleGAN learns style image whole applies type image CycleGAN concept applying GAN existing design simple treat original problem simple image reconstruction use deep network G convert image x reverse process another deep network F reconstruct image use mean square error MSE guide training G F However interested reconstructing image want create resembling certain style GAN discriminator added existing design guide generator network perform better act critic training sample generated image criticism use backpropagation modify generator produce image address shortcoming identified discriminator problem introduce discriminator make sure resemble Van Gogh painting Network design CycleGAN transfer picture one domain another transform picture real image Van Gogh painting build three network generator G convert real image Van Gogh style picture convert real image Van Gogh style picture generator F convert Van Gogh style picture real image convert Van Gogh style picture real image discriminator identify real generated Van Gogh picture reverse direction reverse data flow build additional discriminator Dx identify real imagesTags Deep Learning Artificial Intelligence Computer Vision Data Science Machine Learning
1,381
Stop Trying to be Original
This week I had the privilege of attending the International Boys’ Schools Coalition annual conference. The topic of the conference was the arts. How can we engage boys in the arts and integrate the arts meaningfully into our students’ experiences to help them succeed personally, academically, socially, and emotionally? It was an inspiring few days that filled me with ideas to bring back to my own school, and it got me thinking about the difference between creativity and originality. Several times I heard presenters offer the disclaimer that what they were presenting “wasn’t original.” Some said that they were offering ideas they have adapted from other sources. Others explained that they had done something in their classes that felt very creative, and then they learned lots of other teachers do similar things. They seemed to think that this diminished their “originality,” even though they were combining ideas in a new way or came up with an idea with no knowledge that others shared it. Further, they seemed to feel that this perceived lack of originality diminished their personal creativity. The first session where I heard a presenter make a self-deprecating remark about his lack of originality was a session about how we can teach students to think creatively. The presenter demonstrated an exercise he does with his classes and then explained that after doing this activity for several years, he learned about a famous art teacher who had been something very similar back in the 1960s. The presenter implied that, because someone else had had this idea before him, he wasn’t being original. Maybe he wasn’t, although he didn’t even know about this other teacher from the 1960s, but that fact seemed totally beside the point to me. Whether or not he was original, he was undoubtedly creative. Originality is not the same thing as creativity. We could debate whether it is even possible for anything to be original. To conflate being creative with being original is to make creative thought impossible for most mere mortals, and so, I humbly suggest, that we don’t make that mistake. Creative thinking requires that we take what we know — things we have learned through experience, through reading and study, through witnessing the lives of those around us — and apply our own unique perspective to those things to produce something that makes our individual way of seeing understandable to others. Being creative is not inventing new ideas out of thin air. None of us exists in a vacuum. We live in a rich social context that informs our thoughts. We are shaped by the world around us. Sometimes we are conscious of this shaping, and sometimes we are not. Even if you were raised by wolves with no human contact, this would be true. Thus, being creative is seeing old ideas new ways or combining two or more existing ideas in ways that are unexpected, surprising, and interesting. Take, for example, Kurt Vonnegut’s masterpiece Slaughterhouse-Five, which celebrates its fiftieth anniversary this year. It is a stunning work of creative genius. In it, Vonnegut combines his first-hand experience in war, anti-war satire, and science fiction in the form of both space aliens and time travel. There are many war stories, but how many have aliens? There are many science fiction books, but how many are satires? There are many books with time travel, but how many also comment on the author’s lived experience? What makes Slaughterhouse-Five unique is a combination of familiar genres. He created something unique and exciting by mixing ingredients that are not usually paired together, much like a chef creating a new dish. This is the genius of his creativity. The structure of the novel is incredibly complex and, at a quick glance, it seems unlike anything else I’ve ever read. But on closer inspection, I see that it is a frame story, a structure at least as old as 1001 Nights, which dates back to the ninth century. Vonnegut took an existing structure and used it creatively, with the first and final chapters narrated in the first person by Vonnegut himself, speaking directly to the reader, and the interior chapters narrated primarily from the third person point-of-view describing the life of Billy Pilgrim, a character who is “unstuck in time.” Because of Billy’s strange experience of time, the novel is told out of sequence, jumping from present to past and, at one point, to the future. As creative as Vonnegut is in conveying Billy’s “unstuck” nature through a story divorced from linear narrative (I could go on and on about the patterns he employs in what at first seems like a random smattering of events), this is hardly the only story ever written where the events are conveyed out of chronological order. It’s not that Vonnegut has done something totally original, but rather that he has executed a concept with such skill that we feel as if we’re experiencing something brand new. Lest you misunderstand, my comments are not meant as a criticism of Vonnegut. Not in the least. Slaughterhouse-Five is one of my favorite novels of all time (not something an English teacher can say lightly). It’s one of few books that becomes more interesting with each rereading, not because it’s original, but because it is creative. And isn’t that good news for the rest of us creative-types? If the standard for being creative is originality, we can’t possibly begin to measure up. We can’t learn or teach originality. Originality requires divine intervention. But if the standard of creativity is finding new angles and new combinations, we can practice ways of seeing and we can look at the world with curiosity and wonder, always seeking to connect the dots between disparate areas of our experience and knowledge. To be creative is to be interested in everything, to be hungry for information, to be willing to try new things. Being creative is not just about spending hours in an art studio or at your computer. Being creative is a way of life.
https://dianevmulligan.medium.com/stop-trying-to-be-original-e3fa4179cad0
['Diane Vanaskie Mulligan']
2019-07-01 00:06:45.386000+00:00
['Authenticity', 'Advice and Opinion', 'Writing', 'Kurt Vonnegut', 'Creativity']
Title Stop Trying OriginalContent week privilege attending International Boys’ Schools Coalition annual conference topic conference art engage boy art integrate art meaningfully students’ experience help succeed personally academically socially emotionally inspiring day filled idea bring back school got thinking difference creativity originality Several time heard presenter offer disclaimer presenting “wasn’t original” said offering idea adapted source Others explained done something class felt creative learned lot teacher similar thing seemed think diminished “originality” even though combining idea new way came idea knowledge others shared seemed feel perceived lack originality diminished personal creativity first session heard presenter make selfdeprecating remark lack originality session teach student think creatively presenter demonstrated exercise class explained activity several year learned famous art teacher something similar back 1960s presenter implied someone else idea wasn’t original Maybe wasn’t although didn’t even know teacher 1960s fact seemed totally beside point Whether original undoubtedly creative Originality thing creativity could debate whether even possible anything original conflate creative original make creative thought impossible mere mortal humbly suggest don’t make mistake Creative thinking requires take know — thing learned experience reading study witnessing life around u — apply unique perspective thing produce something make individual way seeing understandable others creative inventing new idea thin air None u exists vacuum live rich social context informs thought shaped world around u Sometimes conscious shaping sometimes Even raised wolf human contact would true Thus creative seeing old idea new way combining two existing idea way unexpected surprising interesting Take example Kurt Vonnegut’s masterpiece SlaughterhouseFive celebrates fiftieth anniversary year stunning work creative genius Vonnegut combine firsthand experience war antiwar satire science fiction form space alien time travel many war story many alien many science fiction book many satire many book time travel many also comment author’s lived experience make SlaughterhouseFive unique combination familiar genre created something unique exciting mixing ingredient usually paired together much like chef creating new dish genius creativity structure novel incredibly complex quick glance seems unlike anything else I’ve ever read closer inspection see frame story structure least old 1001 Nights date back ninth century Vonnegut took existing structure used creatively first final chapter narrated first person Vonnegut speaking directly reader interior chapter narrated primarily third person pointofview describing life Billy Pilgrim character “unstuck time” Billy’s strange experience time novel told sequence jumping present past one point future creative Vonnegut conveying Billy’s “unstuck” nature story divorced linear narrative could go pattern employ first seems like random smattering event hardly story ever written event conveyed chronological order It’s Vonnegut done something totally original rather executed concept skill feel we’re experiencing something brand new Lest misunderstand comment meant criticism Vonnegut least SlaughterhouseFive one favorite novel time something English teacher say lightly It’s one book becomes interesting rereading it’s original creative isn’t good news rest u creativetypes standard creative originality can’t possibly begin measure can’t learn teach originality Originality requires divine intervention standard creativity finding new angle new combination practice way seeing look world curiosity wonder always seeking connect dot disparate area experience knowledge creative interested everything hungry information willing try new thing creative spending hour art studio computer creative way lifeTags Authenticity Advice Opinion Writing Kurt Vonnegut Creativity
1,382
How Self-Driving Vehicles Think: Navigating Double-Parked Cars
Written by Rachel Zucker, Software Engineer, and Shiva Ghose, Staff Software Engineer Every day, San Franciscans drive through six-way intersections, narrow streets, steep hills, and more. While driving in the city, we check mirrors, follow the speed limit, anticipate other drivers, look for pedestrians, navigate crowded streets, and more. For many of us who have been driving for years, we do these so naturally, we don’t even think about it. At Cruise, we’re programming hundreds of cars to consider, synthesize, and execute all these automatic human driving actions. In SF, each car encounters construction, cyclists, pedestrians, and emergency vehicles up to 46 times more frequently than in suburban environments, and each car learns how to maneuver around these aspects of the city every day. To give you an idea of how we’re tackling these challenges, we’re introducing a “How Self-Driving Vehicles Think” series. Each post will highlight a different aspect of teaching our vehicles to drive in one of the densest urban environments. In our first edition, we’re going to discuss how our Cruise self-driving vehicles handle double-parked vehicles (DPVs). How Cruise autonomous vehicles maneuver around double-parked vehicles Every self-driving vehicle “thinks” about three things: Perception: Where am I and what is happening around me? Planning: Given what’s around me, what should I do next? Controls: How should I go about doing what I planned? One of the most common scenarios we encounter — that requires the sophisticated application of all three of these elements — is driving around double-parked vehicles. On average in San Francisco, the probability of encountering a double-parked vehicle is 24:1 compared to a suburban area. The Cruise fleet typically performs anywhere between 200 to 800 oncoming maneuvers each day! Since double-parked vehicles are extremely common in cities, Cruise cars must be equipped to identify and navigate around them as part of the normal traffic flow. Here is how we do it. Perception Recognizing whether a vehicle is double-parked requires synthesizing a number of cues at once, such as: How far the vehicle is pulled over towards the edge of the road The appearance of brake and hazard lights The last time we saw it move Whether we can see around it to identify other cars or obstacles How close we are to an intersection We also use contextual cues like the type of vehicle (i.e. delivery trucks, who double-park frequently), construction activity, and scarcity of nearby parking. To enable our cars to identify double-parked vehicles, we collect the same information as humans. Our perception software extracts what cars around the Cruise autonomous vehicle (AV) are doing using camera, lidar, and radar images: Cameras provide the appearance and indicator light state for vehicles, and road features (such as safety cones or signage) Lidars provide distance measurements Radars provide speeds All three sensors contribute to identifying the orientation and type of vehicle. Using advanced computer vision techniques, the AV processes the raw sensor returns to identify discrete objects: “human,” “vehicle,” “bike,” etc. By tracking cars over time, the AV infers which maneuver the driver is making. The local map provides context for the scene, such as parking availability, the type of road, and lane boundaries. But to make the final decision — is a car double-parked or not — the AV needs to weigh all these factors against one another. This task is perfectly suited for machine learning. The factors are all fed into a trained neural network, which outputs the probability that any given vehicle is double-parked. In particular, we use a recurrent neural network (RNN) to solve this problem. RNNs stand out from other machine-learning implementations because they have a sense of “memory.” Each time it is rerun (as new information arrives from the sensors), the RNN includes its previous output as an input. This feedback allows it to observe each vehicle over time and accumulate confidence on whether it is double-parked or not. Planning & Controls Getting from A to B without hitting anything is a pretty well known problem in robotics. Comfortably getting from A to B without hitting anything is what we work on in the Planning and Controls team. Comfortable isn’t just defined by how quickly we accelerate or turn, it also means behaving like a predictable and reasonable driver. Having a car drive itself means we need our vehicles’ actions to be easily interpretable by the people around us. Easy-to-understand (i.e. human-like) behavior in this case comes from identifying DPVs and reacting to them in a timely manner. Once we know that a vehicle in front of us is not an active participant in the flow of traffic, we can start formulating a plan to get around the vehicle. Often times, we try to lane change around or route away from the obstacle. If that is not possible or desirable, we try to generate a path that balances how long we are in an oncoming lane with our desire to get around the DPV. Every time the car plans a trajectory around a double-parked vehicle, the AV needs to consider where the obstacle is, what other drivers are doing, how to safely bypass the obstacle, and what the car can and cannot perceive. Here, we’re navigating around a double-parked truck in the rain, with other vehicles approaching in the oncoming lane. During this maneuver, the AV yielded right-of-way to the two vehicles, which in turn were going around a double-parked vehicle in their own lane. Every move we plan takes into account the actions of the road users around us, and how we predict they will respond to our actions. With a reference trajectory planned out, we are ready to make the AV execute a maneuver. There are many ways to figure out the optimal actions to perform in order to execute a maneuver (for example, Linear Quadratic Control); however, we also need to be mindful of the constraints of our vehicle, such as how quickly we can turn the steering wheel or how quickly the car will respond to a given input. To figure out the optimal way to execute a trajectory given these constraints, we use Model Predictive Control (MPC) for motion planning. Under the hood, MPC algorithms use a model of how the system behaves (in this case, how we have learned the world around us will evolve and how we expect our car to react) to figure out the optimal action to take at each step. Finally, these instructions are sent down to the controllers, which govern the movement of the car. Putting it all together, we get: In this example, after yielding to the cyclist, we see an oncoming vehicle allowing us to complete our maneuver around the double-parked truck. It is important to recognize these situations and complete the maneuver so we support traffic flow. San Francisco is famously known to be difficult to drive in, but we at Cruise cherish the opportunity to learn from the city and make it safer. With its mid-block crosswalks, narrow streets, construction zones, and steep hills, San Francisco’s complex driving environment allows us to iterate and improve quickly, so we can achieve our goal of making roads safer. Over the coming months, we look forward to sharing more “How Self-Driving Vehicles Think” highlights from our journey. If you’re interested in joining engineers from over 100 disciplines who are tackling one of the greatest engineering challenges of our generation, join us.
https://medium.com/cruise/double-parked-vehicles-4f5ac8fc05a9
['Rachel Zucker']
2020-02-13 18:33:29.853000+00:00
['Software Engineering', 'San Francisco', 'Self Driving Cars', 'Engineering', 'Robotics']
Title SelfDriving Vehicles Think Navigating DoubleParked CarsContent Written Rachel Zucker Software Engineer Shiva Ghose Staff Software Engineer Every day San Franciscans drive sixway intersection narrow street steep hill driving city check mirror follow speed limit anticipate driver look pedestrian navigate crowded street many u driving year naturally don’t even think Cruise we’re programming hundred car consider synthesize execute automatic human driving action SF car encounter construction cyclist pedestrian emergency vehicle 46 time frequently suburban environment car learns maneuver around aspect city every day give idea we’re tackling challenge we’re introducing “How SelfDriving Vehicles Think” series post highlight different aspect teaching vehicle drive one densest urban environment first edition we’re going discus Cruise selfdriving vehicle handle doubleparked vehicle DPVs Cruise autonomous vehicle maneuver around doubleparked vehicle Every selfdriving vehicle “thinks” three thing Perception happening around Planning Given what’s around next Controls go planned One common scenario encounter — requires sophisticated application three element — driving around doubleparked vehicle average San Francisco probability encountering doubleparked vehicle 241 compared suburban area Cruise fleet typically performs anywhere 200 800 oncoming maneuver day Since doubleparked vehicle extremely common city Cruise car must equipped identify navigate around part normal traffic flow Perception Recognizing whether vehicle doubleparked requires synthesizing number cue far vehicle pulled towards edge road appearance brake hazard light last time saw move Whether see around identify car obstacle close intersection also use contextual cue like type vehicle ie delivery truck doublepark frequently construction activity scarcity nearby parking enable car identify doubleparked vehicle collect information human perception software extract car around Cruise autonomous vehicle AV using camera lidar radar image Cameras provide appearance indicator light state vehicle road feature safety cone signage Lidars provide distance measurement Radars provide speed three sensor contribute identifying orientation type vehicle Using advanced computer vision technique AV process raw sensor return identify discrete object “human” “vehicle” “bike” etc tracking car time AV infers maneuver driver making local map provides context scene parking availability type road lane boundary make final decision — car doubleparked — AV need weigh factor one another task perfectly suited machine learning factor fed trained neural network output probability given vehicle doubleparked particular use recurrent neural network RNN solve problem RNNs stand machinelearning implementation sense “memory” time rerun new information arrives sensor RNN includes previous output input feedback allows observe vehicle time accumulate confidence whether doubleparked Planning Controls Getting B without hitting anything pretty well known problem robotics Comfortably getting B without hitting anything work Planning Controls team Comfortable isn’t defined quickly accelerate turn also mean behaving like predictable reasonable driver car drive mean need vehicles’ action easily interpretable people around u Easytounderstand ie humanlike behavior case come identifying DPVs reacting timely manner know vehicle front u active participant flow traffic start formulating plan get around vehicle Often time try lane change around route away obstacle possible desirable try generate path balance long oncoming lane desire get around DPV Every time car plan trajectory around doubleparked vehicle AV need consider obstacle driver safely bypass obstacle car cannot perceive we’re navigating around doubleparked truck rain vehicle approaching oncoming lane maneuver AV yielded rightofway two vehicle turn going around doubleparked vehicle lane Every move plan take account action road user around u predict respond action reference trajectory planned ready make AV execute maneuver many way figure optimal action perform order execute maneuver example Linear Quadratic Control however also need mindful constraint vehicle quickly turn steering wheel quickly car respond given input figure optimal way execute trajectory given constraint use Model Predictive Control MPC motion planning hood MPC algorithm use model system behaves case learned world around u evolve expect car react figure optimal action take step Finally instruction sent controller govern movement car Putting together get example yielding cyclist see oncoming vehicle allowing u complete maneuver around doubleparked truck important recognize situation complete maneuver support traffic flow San Francisco famously known difficult drive Cruise cherish opportunity learn city make safer midblock crosswalk narrow street construction zone steep hill San Francisco’s complex driving environment allows u iterate improve quickly achieve goal making road safer coming month look forward sharing “How SelfDriving Vehicles Think” highlight journey you’re interested joining engineer 100 discipline tackling one greatest engineering challenge generation join usTags Software Engineering San Francisco Self Driving Cars Engineering Robotics
1,383
AWS CLI— Know its Applications and Benefits
AWS CLI — Edureka Amazon Web Services(AWS) is the market leader and top innovator in the field of cloud computing. It helps companies with a wide variety of workloads such as game development, data processing, warehousing, archive, development and many more. But, there is more to AWS than just the eye-catching browser console. It’s time that you check out Amazon’s Command Line Interface — AWS CLI. Before digging in, let’s take a look at the topics covered in this article. What Is AWS CLI ? Uses of AWS CLI Installing AWS CLI How to use AWS CLI? What is AWS CLI? AWS Command Line Interface(AWS CLI) is a unified tool using which, you can manage and monitor all your AWS services from a terminal session on your client. Although most AWS services can be managed through the AWS Management Console or via the APIs, there is a third way that can be very useful: the Command Line Interface (AWS CLI). AWS has made it possible for Linux, MacOS, and Windows users to manage the main AWS services from a local terminal session’s command line. So, with a single step installation and minimal configuration, you can start using all of the functionalities provided by the AWS Management Console using the terminal program. That would be: Linux shells: You can use command shell programs like bash, tsch and zsh to run commands in operating systems like Linux, macOS, or Unix You can use command shell programs like bash, tsch and zsh to run commands in operating systems like Linux, macOS, or Unix Windows Command Line: On Windows, you can run commands in PowerShell or in the Windows command prompt On Windows, you can run commands in PowerShell or in the Windows command prompt Remotely: You can run commands on Amazon EC2 instances through a remote terminal such as PuTTY or SSH. You can even use AWS Systems Manager to automate operational tasks across your AWS resources Apart from this, it also provides direct access to AWS services public APIs. In addition to the low-level API equivalent commands, the AWS CLI offers customization for several services. This article will tell you everything that you need to know to get started with the AWS Command Line Interface and to use it proficiently in your daily operations. Uses of AWS CLI Listed below are a few reasons which are compelling enough to get you started with AWS Command Line Interface. Easy Installation Before AWS CLI was introduced, the installation of toolkits like old AWS API involved too many complex steps. Users had to set up multiple environment variables. But the installation of AWS Command Line Interface is quick, simple and standardized. Saves Time Despite being user-friendly, AWS Mangement Console is quite a hassle sometimes. Suppose you are trying to find a large Amazon S3 folder. You have to log in to your account, search for the right S3 bucket, find the right folder and look for the right file. But with AWS CLI, if you know the right command the entire tasks will take just a few seconds. Automates Processes AWS CLI gives you the ability to automate the entire process of controlling and managing AWS services through scripts. These scripts make it easy for users to fully automate cloud infrastructure. Supports all Amazon Web Services Prior to AWS CLI, users needed a dedicated CLI tool for just the EC2 service. It worked properly, but it didn’t let users control other Amazon Web Services, like for instance the AWS RDS (Relational Database Service). But, AWS CLI lets you control all the services from one simple tool. So now that we have understood what AWS CLI is let’s get started with the installation process. Installing AWS CLI AWS Command Line Interface can be installed in three ways: Using pip Using a virtual environment Using a bundled installer In this article, we will see how to install AWS CLI using pip. Prerequisites Python 2 version 2.6.5+ or Python 3 version 3.3+ Windows, Linux, macOS, or Unix Operating System Installing the AWS CLI Using pip The common way to install AWS CLI is using pip. pip is a package management system which is used to install and manage software packages written in Python. Step 1: Install pip (on Ubuntu OS) $ sudo apt install python3-pip Step 2: Install CLI $ pip install awscli --upgrade --user Step 3: Check installation $ aws --version Once you are sure that AWS CLI is successfully installed, you need to configure it to start accessing your AWS Services through AWS CLI. Configure AWS CLI Step 4: Use the below command to configure AWS CLI $ aws configure AWS Access Key ID [None]: AKI************ AWS Secret Access Key [None]: wJalr******** Default region name [None]: us-west-2 Default output format [None]: json As a result of the above command, the AWS CLI will prompt you for four pieces of information. The first two are required. Those are your AWS Access Key ID and AWS Secret Access Key, which serve as your account credentials. The other information that you will need is region and output format, which you can leave as default for time being. NOTE: You can generate new credentials within AWS Identity and Access Management (IAM) if you do not already have them. All set! You are ready to start using AWS CLI now. Let’s check out how powerful AWS CLI can be with help of few basic examples. How to use AWS CLI? Suppose you have got some services running on AWS and you made it happen using the AWS Management console. The exact same work can be done, but with a whole lot less effort using Amazon Command Line Interface. Here’s a demonstration, Let’s say you want to launch an Amazon Linux instance from EC2. If you wish to use AWS Management Console, to launch an instance, you’ll need to: Load the EC2 Dashboard Click Launch Instance Select AMI and instance types of choice Set network, life cycle behavior, IAM, and user data settings on the Configure Instance Details page Select storage volumes on the Add Storage page Add tags on the Add Tags page Configure a security group on the Configure Security Group page Finally, review and launch the instance And, don’t forget the pop up where you’ll confirm your key pair and then head back to the EC2 Instance dashboard to get your instance data. This doesn’t sound that bad, but imagine doing it all when working with a slow internet connection or if you have to launch multiple instances of different variations multiple times. It would take a lot of time and effort, wouldn’t it? Now, let’s see how to do the same task by using AWS CLI. Step 1: Creating a new IAM user using AWS CLI Let’s see how to create a new IAM group and a new IAM user & then add the user to the group using AWS Command Line Interface First, use create-group to create a new IAM group $ aws iam create-group --group-name mygroup Use create-user to create a new user $ aws iam create-user --user-name myuser Then add the user to the group using add-user-to-group command $ aws iam add-user-to-group --user-name myuser --group-name myiamgroup Finally, assign a policy (which is saved in a file) to the user by using command put-user-policy $ aws iam put-user-policy --user-name myuser --policy-name mypoweruserole --policy-document file://MyPolicyFile.jso If you want to create a set of access keys for an IAM user, use the command create-access-key $ aws iam create-access-key --user-name myuser Step 2: Launching Amazon Linux instance using AWS CLI Just like when you launch an EC2 instance using AWS Management Console, you need to create a key pair and security group before launching an instance Use the command create-key-pair to create a key pair & use –query option to pipe your key directly into a file $ aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem Then create a security group and add rules to the security group $ aws ec2 create-security-group --group-name mysecurityg --description "My security group" $ aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 3389 --cidr 203.0.113.0/24 Finally, launch an EC2 instance of your choice using the command run-instance $ aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-903004f8 There appears to be a lot of commands, but you can achieve the same result by combining all these commands into one and then save it as a script. That way you can modify and run the code whenever necessary, instead of starting from the first step, like when using AWS Management Console. This can drop a five-minute process down to a couple of seconds. So, now you know how to use AWS CLI to create an IAM user and launch an EC2 instance of your choice. But AWS CLI can do much more. So folks, that’s an end to this article on AWS CLI. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of AWS.
https://medium.com/edureka/aws-cli-9614bf69292d
['Vishal Padghan']
2020-09-10 10:02:58.196000+00:00
['Amazon Web Services', 'AWS', 'Cloud Computing', 'Aws Certification', 'Aws Cli']
Title AWS CLI— Know Applications BenefitsContent AWS CLI — Edureka Amazon Web ServicesAWS market leader top innovator field cloud computing help company wide variety workload game development data processing warehousing archive development many AWS eyecatching browser console It’s time check Amazon’s Command Line Interface — AWS CLI digging let’s take look topic covered article AWS CLI Uses AWS CLI Installing AWS CLI use AWS CLI AWS CLI AWS Command Line InterfaceAWS CLI unified tool using manage monitor AWS service terminal session client Although AWS service managed AWS Management Console via APIs third way useful Command Line Interface AWS CLI AWS made possible Linux MacOS Windows user manage main AWS service local terminal session’s command line single step installation minimal configuration start using functionality provided AWS Management Console using terminal program would Linux shell use command shell program like bash tsch zsh run command operating system like Linux macOS Unix use command shell program like bash tsch zsh run command operating system like Linux macOS Unix Windows Command Line Windows run command PowerShell Windows command prompt Windows run command PowerShell Windows command prompt Remotely run command Amazon EC2 instance remote terminal PuTTY SSH even use AWS Systems Manager automate operational task across AWS resource Apart also provides direct access AWS service public APIs addition lowlevel API equivalent command AWS CLI offer customization several service article tell everything need know get started AWS Command Line Interface use proficiently daily operation Uses AWS CLI Listed reason compelling enough get started AWS Command Line Interface Easy Installation AWS CLI introduced installation toolkits like old AWS API involved many complex step Users set multiple environment variable installation AWS Command Line Interface quick simple standardized Saves Time Despite userfriendly AWS Mangement Console quite hassle sometimes Suppose trying find large Amazon S3 folder log account search right S3 bucket find right folder look right file AWS CLI know right command entire task take second Automates Processes AWS CLI give ability automate entire process controlling managing AWS service script script make easy user fully automate cloud infrastructure Supports Amazon Web Services Prior AWS CLI user needed dedicated CLI tool EC2 service worked properly didn’t let user control Amazon Web Services like instance AWS RDS Relational Database Service AWS CLI let control service one simple tool understood AWS CLI let’s get started installation process Installing AWS CLI AWS Command Line Interface installed three way Using pip Using virtual environment Using bundled installer article see install AWS CLI using pip Prerequisites Python 2 version 265 Python 3 version 33 Windows Linux macOS Unix Operating System Installing AWS CLI Using pip common way install AWS CLI using pip pip package management system used install manage software package written Python Step 1 Install pip Ubuntu OS sudo apt install python3pip Step 2 Install CLI pip install awscli upgrade user Step 3 Check installation aws version sure AWS CLI successfully installed need configure start accessing AWS Services AWS CLI Configure AWS CLI Step 4 Use command configure AWS CLI aws configure AWS Access Key ID None AKI AWS Secret Access Key None wJalr Default region name None uswest2 Default output format None json result command AWS CLI prompt four piece information first two required AWS Access Key ID AWS Secret Access Key serve account credential information need region output format leave default time NOTE generate new credential within AWS Identity Access Management IAM already set ready start using AWS CLI Let’s check powerful AWS CLI help basic example use AWS CLI Suppose got service running AWS made happen using AWS Management console exact work done whole lot le effort using Amazon Command Line Interface Here’s demonstration Let’s say want launch Amazon Linux instance EC2 wish use AWS Management Console launch instance you’ll need Load EC2 Dashboard Click Launch Instance Select AMI instance type choice Set network life cycle behavior IAM user data setting Configure Instance Details page Select storage volume Add Storage page Add tag Add Tags page Configure security group Configure Security Group page Finally review launch instance don’t forget pop you’ll confirm key pair head back EC2 Instance dashboard get instance data doesn’t sound bad imagine working slow internet connection launch multiple instance different variation multiple time would take lot time effort wouldn’t let’s see task using AWS CLI Step 1 Creating new IAM user using AWS CLI Let’s see create new IAM group new IAM user add user group using AWS Command Line Interface First use creategroup create new IAM group aws iam creategroup groupname mygroup Use createuser create new user aws iam createuser username myuser add user group using addusertogroup command aws iam addusertogroup username myuser groupname myiamgroup Finally assign policy saved file user using command putuserpolicy aws iam putuserpolicy username myuser policyname mypoweruserole policydocument fileMyPolicyFilejso want create set access key IAM user use command createaccesskey aws iam createaccesskey username myuser Step 2 Launching Amazon Linux instance using AWS CLI like launch EC2 instance using AWS Management Console need create key pair security group launching instance Use command createkeypair create key pair use –query option pipe key directly file aws ec2 createkeypair keyname mykeypair query KeyMaterial output text mykeypairpem create security group add rule security group aws ec2 createsecuritygroup groupname mysecurityg description security group aws ec2 authorizesecuritygroupingress groupid sg903004f8 protocol tcp port 3389 cidr 2030113024 Finally launch EC2 instance choice using command runinstance aws ec2 runinstances imageid ami09ae83da98a52eedf count 1 instancetype t2micro keyname MyKeyPair securitygroupids sg903004f8 appears lot command achieve result combining command one save script way modify run code whenever necessary instead starting first step like using AWS Management Console drop fiveminute process couple second know use AWS CLI create IAM user launch EC2 instance choice AWS CLI much folk that’s end article AWS CLI wish check article market’s trending technology like Artificial Intelligence DevOps Ethical Hacking refer Edureka’s official site look article series explain various aspect AWSTags Amazon Web Services AWS Cloud Computing Aws Certification Aws Cli
1,384
React and MobX — Lessons Learned. Get started with MobX as your state…
Observables What allows that to happen is the use of observables. Quite simply, an observable adds to an existing data structure the possibility of being “observed” by someone. It is similar to the pattern design of Pub/Sub or Mediator, where part A asks to be notified when something happens in part B, but here, in addition to all of this happening automatically (without the necessity of “subscribing”), what is observed is the value in itself, rather than callbacks created by you. The use of decorators in MobX is optional: It is just a way to write a little less code and maintain the current structure. Every decorator has a corresponding function. To enable decorators, you may need to change the Babel settings. An example of that is the use of decorators @observable and @observer : Please note that without having to write anything specific, the observer will react alone when the observable name changes its value. Even though you have a lot of complex observables, MobX internally only records what is being used by you in the render method. Cool, right? Very easy and straightforward. Previous example without the decorator: You can find more examples of proceedings with or without the decorator in the documentation.
https://medium.com/better-programming/react-and-mobx-lessons-learned-427a8e223c93
['Caio Vaccaro']
2020-10-01 18:22:36.884000+00:00
['Mobx', 'React', 'Programming', 'JavaScript', 'Development']
Title React MobX — Lessons Learned Get started MobX state…Content Observables allows happen use observables Quite simply observable add existing data structure possibility “observed” someone similar pattern design PubSub Mediator part asks notified something happens part B addition happening automatically without necessity “subscribing” observed value rather callback created use decorator MobX optional way write little le code maintain current structure Every decorator corresponding function enable decorator may need change Babel setting example use decorator observable observer Please note without write anything specific observer react alone observable name change value Even though lot complex observables MobX internally record used render method Cool right easy straightforward Previous example without decorator find example proceeding without decorator documentationTags Mobx React Programming JavaScript Development
1,385
A Tale of a Journey Across Low-Code
Last year I had just landed my first job as Software Developer at Signify (the former Philips Lighting) and, after a few weeks, a colleague asked me if I wanted to go to a No-Code conference with him. “A conference with everything paid? Nice!” I ran to my laptop and started my desk-research on what Low-Code was. It kind of reminded me of the MIT App Inventor, but wider, more feature-complete. It triggered my curiosity: “Is this the future of the job?”. I did not expect that I was about to embark on a one-year-long exploration, that would have exposed me to external vendors, to other departments, and could have potentially changed the way we develop in our company. In this article, I am going to describe the exploration process we followed, what were our expectations and learnings, and what we see looking forward. How and Why it started Low Code Platforms: Software that provides an environment to create end-to-end web or mobile applications and lift infrastructure through graphical user interfaces and (possibly) traditional programming. In the continuous strive to improve our process and technology competencies, to increase our productivity and reduce the time to get from an idea to a prototype, some colleagues started looking into the Low-Code world. The choice for this technology came from past experiences in different companies in which the technology was successfully adopted, and its impressive presence at the Gartner’s Conference in 2018. Gartner Magic Quadrant for Enterprise High-Productivity Application Platform as a Service, 2019, from DZone There were two main desires: the ability to quickly build prototypes that could be easily integrated with the existing backend infrastructures, and the ability to co-create with UX/UI designers and let them use the tools to co-create with the customers. Desk research depicts Low-Code in conflicting ways, from the future of the development to a disaster. A lot depends on the context, and on how these tools fit in the company culture. From many to few There are many Low-Code platforms in the market. Trying them all would take just too much time. Low-Code selection process We started by walking around some conferences, talking with employees, partners, customers. At first, we were impressed: big audiences, from half a thousand people to a few thousands, a big number of applications running on the platforms, speed and agility repeated in almost every keynote. Later, we realized that there was something missing: many demos and claims were quite generic, we left each conference without a real feeling of what is possible and how. Something more was needed. We picked some of the vendors from the Quadrant and got in touch with them. We asked for some technical information (Does your platform support REST calls? And OAuth2?) and a short face-to-face demo. Not the nicest process, but it already started highlighting some differences: Native Low(No)-Code platforms (created to do that and that only) and platforms that are evolving towards Low-Code : the former with greater flexibility and complexity, the latter with some flexibility on top of their earlier scope. We did not have an application scope in mind, as we usually don’t have when we come up with a new idea. So we picked the first category. (created to do that and that only) and platforms that are : the former with greater flexibility and complexity, the latter with some flexibility on top of their earlier scope. We did not have an application scope in mind, as we usually don’t have when we come up with a new idea. So we picked the first category. “Citizen Developer”, “Citizen Developer and Developer”, “Developer” platforms: ranging from the most graphical/blocks-oriented and less flexible, to the ones that seemed more like graphical coding. Citizen developer is a recurrent expression when looking at Low-Code, and it represents an application developer without a software background. Given the complexity we were looking for, the platforms for the “Citizen Developer and Developer” category and the “Developer” one suited better. We chose two platforms and moved to the next step: development. Hands-On: some premises So, we picked two platforms: the fun could start. The attention points: The feeling : we really wanted to feel the platforms . As a bunch of developers, we wanted to do some training, read some documentation, and bend the technology to our needs. We absolutely did not want any consultant developing for us or sitting next to us daily to help us develop. Most developers learn one/two frameworks per year, and it’s highly uncommon to have consultants help you do that. Why would we treat Low-Code differently? : we really wanted to feel the platforms As a bunch of developers, we wanted to do some training, read some documentation, and bend the technology to our needs. We or sitting next to us daily to help us develop. Most developers learn one/two frameworks per year, and it’s highly uncommon to have consultants help you do that. Why would we treat Low-Code differently? The community : we wanted to join the community. Relying on the platform technical support is nice, but on the day-to-day development, you need a community. If I am stuck writing JS code, I know the solution is on StackOverflow. Is there a SO for Low-Code? : we wanted to join the community. Relying on the platform technical support is nice, but on the day-to-day development, you need a community. If I am stuck writing JS code, I know the solution is on StackOverflow. The learning curve : we wanted to perceive the learning curve. If we would have adopted the platform, we would have brought on board as many colleagues as possible. How much time would that have taken? : we wanted to perceive the learning curve. If we would have adopted the platform, we would have brought on board as many colleagues as possible. How much time would that have taken? Flexibility : what can we do on these platforms? How far can we take our application? : what can we do on these platforms? Ease of design: can we give the platform to designers and let them put the text box in the right position instead of sending JPG designs to the developers? How cool would that be! can we and let them put the text box in the right position instead of sending JPG designs to the developers? How cool would that be! Best practices: if the prototype application becomes a product, can best practices (peer review, testing, …) be enforced? Hands-On: Planned vs Realized
https://medium.com/swlh/a-tale-of-a-journey-across-low-code-248facb897f7
['Massimo Tumolo']
2020-06-27 14:12:58.502000+00:00
['Innovation', 'Software Development', 'Development', 'Technology', 'Productivity']
Title Tale Journey Across LowCodeContent Last year landed first job Software Developer Signify former Philips Lighting week colleague asked wanted go NoCode conference “A conference everything paid Nice” ran laptop started deskresearch LowCode kind reminded MIT App Inventor wider featurecomplete triggered curiosity “Is future job” expect embark oneyearlong exploration would exposed external vendor department could potentially changed way develop company article going describe exploration process followed expectation learning see looking forward started Low Code Platforms Software provides environment create endtoend web mobile application lift infrastructure graphical user interface possibly traditional programming continuous strive improve process technology competency increase productivity reduce time get idea prototype colleague started looking LowCode world choice technology came past experience different company technology successfully adopted impressive presence Gartner’s Conference 2018 Gartner Magic Quadrant Enterprise HighProductivity Application Platform Service 2019 DZone two main desire ability quickly build prototype could easily integrated existing backend infrastructure ability cocreate UXUI designer let use tool cocreate customer Desk research depicts LowCode conflicting way future development disaster lot depends context tool fit company culture many many LowCode platform market Trying would take much time LowCode selection process started walking around conference talking employee partner customer first impressed big audience half thousand people thousand big number application running platform speed agility repeated almost every keynote Later realized something missing many demo claim quite generic left conference without real feeling possible Something needed picked vendor Quadrant got touch asked technical information platform support REST call OAuth2 short facetoface demo nicest process already started highlighting difference Native LowNoCode platform created platform evolving towards LowCode former greater flexibility complexity latter flexibility top earlier scope application scope mind usually don’t come new idea picked first category created platform former greater flexibility complexity latter flexibility top earlier scope application scope mind usually don’t come new idea picked first category “Citizen Developer” “Citizen Developer Developer” “Developer” platform ranging graphicalblocksoriented le flexible one seemed like graphical coding Citizen developer recurrent expression looking LowCode represents application developer without software background Given complexity looking platform “Citizen Developer Developer” category “Developer” one suited better chose two platform moved next step development HandsOn premise picked two platform fun could start attention point feeling really wanted feel platform bunch developer wanted training read documentation bend technology need absolutely want consultant developing u sitting next u daily help u develop developer learn onetwo framework per year it’s highly uncommon consultant help would treat LowCode differently really wanted feel platform bunch developer wanted training read documentation bend technology need sitting next u daily help u develop developer learn onetwo framework per year it’s highly uncommon consultant help would treat LowCode differently community wanted join community Relying platform technical support nice daytoday development need community stuck writing JS code know solution StackOverflow LowCode wanted join community Relying platform technical support nice daytoday development need community stuck writing JS code know solution StackOverflow learning curve wanted perceive learning curve would adopted platform would brought board many colleague possible much time would taken wanted perceive learning curve would adopted platform would brought board many colleague possible much time would taken Flexibility platform far take application platform Ease design give platform designer let put text box right position instead sending JPG design developer cool would let put text box right position instead sending JPG design developer cool would Best practice prototype application becomes product best practice peer review testing … enforced HandsOn Planned v RealizedTags Innovation Software Development Development Technology Productivity
1,386
How Many Startups Can You Manage At Once?
“I’m managing three companies,” “Jason” said to me. It was our first conversation, and warning lights started going off. Picture: Depositphotos “Tell me about the three companies?” I said. Jason said, “The first company is a software company which is doing about $5 million in revenue. The second company is a SaaS company doing about $1 million in revenue. And the third company is my law practice.” “Interesting,” I responded. “How many direct reports do you have across the three companies?” Jason said, “Let me think about that.” Then after a very long pause, he said, “17.” “Wow. That’s a lot.” You’ll need to leverage yourself if you’re going to manage multiple startups. “Think of your multiple companies like one big company,” I said to Jason. “Each of the companies are then like divisions of the main company, run by you. “Ideally, you want to develop infrastructure in each division (the individual companies), so that you’re free to manage all three divisions at once. The only way this is going to happen is if you reduce the number of direct reports you have.” Jason nodded his head in agreement. “I know. 17 is too many. How many should I reduce it to?” “My magic number is seven,” I said. “Things usually break down for most people when they get above seven direct reports. “Normally, you’d have a management team for each of the businesses. My bet is this hasn’t been built out yet.” Jason quickly realized the true problem he had. “I don’t really have the teams I need, so they can manage the more junior people. I’m having to do that myself.” You’ll need great teams at each startup. “I’m not surprised,” I said. “What you’re going through is normal if you were managing just one startup. “It’s pretty common that somewhere between $1 million and $10 million in revenue, you end up building out your management team. In your case, you have to build out two management teams, maybe three.” “I get it,” Jason said. Fortunately, Jason truly did get it. Over the next several months, Jason recruited the management teams he needed. Slowly, but surely, the number of direct reports Jason had dropped to the magic number of seven. In addition, Jason divested his law practice. Now, Jason at least had a manageable problem. However, you’ll need to remain vigilant to keep your leverage. About six months later, Jason said, “I’m worried again because my direct reports are up to eleven. I know what I need to do.” That’s the challenge for you as a CEO, regardless of whether you’re running multiple companies or one company. You’ll need to start anticipating when you’ll need more senior managers, and your senior managers will need more mangers, to maintain your leverage and their leverage. In short, it’s a never ending battle. The best CEOs plan ahead, so they are constantly recruiting or building their management talent pool. It takes discipline to pull this off. And you need to teach your team to have the same discipline. No matter how hard you try, one of the startups will demand most of your attention. Jason successfully got his direct reports down to seven again. Then, the inevitable happened. Jason had the high class problem where one of the companies growth went hyper, doubling in revenue each year. And, of course, eighty percent of his time went to running the hyper growth startup. That’s okay. As long as you have follow the rule of seven, keep you and your team focused on recruiting top talent, then you can keep managing multiple businesses as you scale. For more, read: https://www.brettjfox.com/what-are-the-five-skills-you-need-to-be-a-great-ceo
https://medium.com/swlh/how-many-startups-can-you-manage-at-once-c05227e86ad6
['Brett Fox']
2020-12-30 05:55:54.156000+00:00
['Leadership', 'Entrepreneurship', 'Business', 'Startup', 'Venture Capital']
Title Many Startups Manage OnceContent “I’m managing three companies” “Jason” said first conversation warning light started going Picture Depositphotos “Tell three companies” said Jason said “The first company software company 5 million revenue second company SaaS company 1 million revenue third company law practice” “Interesting” responded “How many direct report across three companies” Jason said “Let think that” long pause said “17” “Wow That’s lot” You’ll need leverage you’re going manage multiple startup “Think multiple company like one big company” said Jason “Each company like division main company run “Ideally want develop infrastructure division individual company you’re free manage three division way going happen reduce number direct report have” Jason nodded head agreement “I know 17 many many reduce to” “My magic number seven” said “Things usually break people get seven direct report “Normally you’d management team business bet hasn’t built yet” Jason quickly realized true problem “I don’t really team need manage junior people I’m myself” You’ll need great team startup “I’m surprised” said “What you’re going normal managing one startup “It’s pretty common somewhere 1 million 10 million revenue end building management team case build two management team maybe three” “I get it” Jason said Fortunately Jason truly get next several month Jason recruited management team needed Slowly surely number direct report Jason dropped magic number seven addition Jason divested law practice Jason least manageable problem However you’ll need remain vigilant keep leverage six month later Jason said “I’m worried direct report eleven know need do” That’s challenge CEO regardless whether you’re running multiple company one company You’ll need start anticipating you’ll need senior manager senior manager need manger maintain leverage leverage short it’s never ending battle best CEOs plan ahead constantly recruiting building management talent pool take discipline pull need teach team discipline matter hard try one startup demand attention Jason successfully got direct report seven inevitable happened Jason high class problem one company growth went hyper doubling revenue year course eighty percent time went running hyper growth startup That’s okay long follow rule seven keep team focused recruiting top talent keep managing multiple business scale read httpswwwbrettjfoxcomwhatarethefiveskillsyouneedtobeagreatceoTags Leadership Entrepreneurship Business Startup Venture Capital
1,387
Former Google CEO Eric Schmidt: Let’s Start a School for A.I.
Former Google CEO Eric Schmidt: Let’s Start a School for A.I. Uncle Sam might want you… to code. If you’re interested in becoming a technologist for the federal government, former Google CEO Eric Schmidt wants to teach you how to code. According to OneZero, Schmidt has partnered up with former U.S. Secretary of Defense Robert O. Work to create a school for folks who want to become government coders. This U.S. Digital Service Academy would operate like a regular school, offering coursework and degree tracks, and focus on cutting-edge technology subjects such as cybersecurity and artificial intelligence (A.I.). As OneZero points out, the federal government is very interested in technologists who can craft new innovations in A.I. “We are engaged in an epic race for A.I. supremacy,” the publication quotes Rick Perry, secretary of the Department of Energy, as telling an NSCAI conference in 2019. “As I speak, China and Russia are striving to overtake us. Neither of these nations shares our values or our freedoms.” Despite that urging, however, the U.S. government has “fallen short” when it comes to actually funding artificial intelligence research, according to a report issued by NSCAI: “AI is only as good as the infrastructure behind it. Within DoD in particular this infrastructure is severely underdeveloped.” But the U.S. Digital Service Academy isn’t a done deal; first, Congress must approve NSCAI’s recommendation that the university be created. Then, it would actually need to be built, staffed, accredited, and launched. In order to fulfill the vision presented by Schmidt, the school would also need to forge partnerships with a variety of private companies and public institutions, in order to give students the necessary internships and other opportunities. And even if all those goals are met, the U.S. Digital Service Academy would need to persuade young technologists to opt for it over other schools that are specializing in A.I. instruction, including Stanford and MIT. Over the past several years, Eric Schmidt has shaped himself as an expert and advisor on U.S. technology policy. Last year, for example, he suggested that the U.S. government’s attempts to restrict hiring from China wouldn’t do this country’s technology industry any good. “I think the China problem is solvable with the following insight: we need access to their top scientists,” he told the audience, according to Bloomberg. He also added that “common frameworks” such as Google’s TensorFlow benefit from input from scientists and researchers in other countries. The U.S. Digital Service Academy is clearly his latest attempt to try to guide policy and discussion. If he can actually get it off the ground, though, it could provide yet another venue for technologists to learn intensely valuable A.I. and machine learning skills.
https://medium.com/dice-insights/former-google-ceo-eric-schmidt-lets-start-a-school-for-a-i-1a709e61e22b
['Nick Kolakowski']
2020-07-31 13:01:01.564000+00:00
['Artificial Intelligence', 'Google', 'Eric Schmidt', 'Education', 'Machine Learning']
Title Former Google CEO Eric Schmidt Let’s Start School AIContent Former Google CEO Eric Schmidt Let’s Start School AI Uncle Sam might want you… code you’re interested becoming technologist federal government former Google CEO Eric Schmidt want teach code According OneZero Schmidt partnered former US Secretary Defense Robert Work create school folk want become government coder US Digital Service Academy would operate like regular school offering coursework degree track focus cuttingedge technology subject cybersecurity artificial intelligence AI OneZero point federal government interested technologist craft new innovation AI “We engaged epic race AI supremacy” publication quote Rick Perry secretary Department Energy telling NSCAI conference 2019 “As speak China Russia striving overtake u Neither nation share value freedoms” Despite urging however US government “fallen short” come actually funding artificial intelligence research according report issued NSCAI “AI good infrastructure behind Within DoD particular infrastructure severely underdeveloped” US Digital Service Academy isn’t done deal first Congress must approve NSCAI’s recommendation university created would actually need built staffed accredited launched order fulfill vision presented Schmidt school would also need forge partnership variety private company public institution order give student necessary internship opportunity even goal met US Digital Service Academy would need persuade young technologist opt school specializing AI instruction including Stanford MIT past several year Eric Schmidt shaped expert advisor US technology policy Last year example suggested US government’s attempt restrict hiring China wouldn’t country’s technology industry good “I think China problem solvable following insight need access top scientists” told audience according Bloomberg also added “common frameworks” Google’s TensorFlow benefit input scientist researcher country US Digital Service Academy clearly latest attempt try guide policy discussion actually get ground though could provide yet another venue technologist learn intensely valuable AI machine learning skillsTags Artificial Intelligence Google Eric Schmidt Education Machine Learning
1,388
TikTok’s Most Recent Viral Trend Is Headed To Broadway
The so- called Ratatouille musical is based on the the 2007 Disney-Pixar film that tells the story of Remy, a talented French rat with an impressive palate, who learns to cook from old TV shows and cookbooks made by a famous human chef, Auguste Gusteau, whose motto is that “anyone can cook”. It all started when an audio from Tiktok User @e_jaccs started to make rounds on TikTok. The now viral audio is of user @e_jaccs singing about how Remy, the rodent protagonist of the film, was the rat of our dreams. The Audio soon started to gain traction and now has over 18.5 thousand videos using it. The Musical, organised by production company Seaview, will supposedly star credited Broadway performers. Proceeds from ticket sales to the digital event will raise money for the Actors Fund. The musicals origination and creation, however is not accredited to one individual.Unlike traditional broadway shows, the composition of the Ratatouille musical is one that relies on the collaboration of strangers over the internet to create canonical music and lyrics. The power and populariy of tiktok as a free creagtive space has birthed a collaborative enivromnet like no other. Hundreds of fans of the pixar movie along with musicl theatre lovers have built off eachother’s songs and ideas to create professional running orders and song instrumentation. Tiktok being the home to creatives from evry relm and genre has allowed there to be collaboriom regarding every step of the process including the set and playbill designs. The cast of the Broadway version has not yet been announced and it is still unclear whether it will be staged inside a physical Broadway theatre or from the homes of the actors chosen. Disney has historically used many of its tales such as Beauty and the Beast and The Lion King for musical adaptations but has clarified that it will not be doing the same for ratatouille saying, “ Disney does not have development plans for the title”. While that still may be true, Disney hasnot caused any trouble with the fan made adaptation of the film In fact the company has even given its blessing to the production saying, “we love when our fans engage with Disney stories” and “we thank all of the online theater makers for helping to benefit the Actors Fund in this unprecedented time of need”. It has also been confirmed that the creators of songs to be used in the musical will be credited and “compensated”. You can buy Tickets to the musical at Today Tix !
https://medium.com/illumination/tiktoks-most-recent-viral-trend-is-headed-to-broadway-72ca9d457539
['Zo Sajjad']
2020-12-28 16:33:17.027000+00:00
['Pop Culture', 'Broadway', 'Tik Tok', 'Startup', 'Music']
Title TikTok’s Recent Viral Trend Headed BroadwayContent called Ratatouille musical based 2007 DisneyPixar film tell story Remy talented French rat impressive palate learns cook old TV show cookbook made famous human chef Auguste Gusteau whose motto “anyone cook” started audio Tiktok User ejaccs started make round TikTok viral audio user ejaccs singing Remy rodent protagonist film rat dream Audio soon started gain traction 185 thousand video using Musical organised production company Seaview supposedly star credited Broadway performer Proceeds ticket sale digital event raise money Actors Fund musical origination creation however accredited one individualUnlike traditional broadway show composition Ratatouille musical one relies collaboration stranger internet create canonical music lyric power populariy tiktok free creagtive space birthed collaborative enivromnet like Hundreds fan pixar movie along musicl theatre lover built eachother’s song idea create professional running order song instrumentation Tiktok home creatives evry relm genre allowed collaboriom regarding every step process including set playbill design cast Broadway version yet announced still unclear whether staged inside physical Broadway theatre home actor chosen Disney historically used many tale Beauty Beast Lion King musical adaptation clarified ratatouille saying “ Disney development plan title” still may true Disney hasnot caused trouble fan made adaptation film fact company even given blessing production saying “we love fan engage Disney stories” “we thank online theater maker helping benefit Actors Fund unprecedented time need” also confirmed creator song used musical credited “compensated” buy Tickets musical Today Tix Tags Pop Culture Broadway Tik Tok Startup Music
1,389
Using Map Bearings and Trigonometry to Style Custom Mapbox GL Draw Tools
As a team that builds tools for the often-complicated world of Urban Planning in NYC, we run into a number of unique engineering challenges typically related to web mapping. For our newest application, Applicant Maps, we built our own custom draw tools through combining mapbox-gl-draw line and symbol layers. Users are able to draw five different “annotations” on their project map — such as our Parallel Measurement tool, which we created by placing a custom symbol on both ends of a line. Our Parallel Measurement Tool which consists of a line and symbols on both ends Symbol layers in Mapbox GL are point/marker layers on which developers can define their own icon image. Our custom arrow symbols ➤ are PNG files we created specifically for our annotations. We set the location of the arrows to match the coordinates of the line. We then rotate the arrows using the bearing of the line, lineBearing , which is the angle of a line from true north. Here is our symbol layer, startArrowLayer , which was placed at the first coordinate of our line. const { coordinates } = lineFeature.geometry; const lineBearing = bearing(coordinates[0], coordinates[1]); const startArrowLayer = { type: 'symbol', source: { type: 'geojson', data: { type: 'Feature', geometry: { type: 'Point', coordinates: lineFeature.geometry.coordinates[0], }, properties: { rotation: lineBearing + 180, }, }, }, layout: { 'icon-image': 'arrow', 'icon-size': 0.04, 'icon-rotate': { type: 'identity', property: 'rotation', }, 'icon-anchor': 'top', 'icon-rotation-alignment': 'map', 'icon-allow-overlap': true, 'icon-ignore-placement': true, }, }; Learn more about how to style layers with the Mapbox GL Style Specification. And check out how we build the entire annotation in this JavaScript file. Centerline Annotation A drawing displaying the new centerline tool from a meeting we had with planners I ran into an interesting engineering issue while building the Centerline annotation tool. Planners from our Technical Review Division wanted this tool to consist of an arrow as well as a custom centerline icon. I built the tool to mirror that of the Parallel Measurement annotation shown above, by placing an arrow on one end of the line and our centerline symbol on the other end. It was easy enough to replicate the code we used for the Parallel Measurement tool, and replace the startArrowLayer with the centerlineLayer . And there it was! All I had left to do were a couple of minor styling changes: resize the icon and move it a little further away from the line. While the size modification only required a simple fixed value change, offsetting the icon ended up being a little more complicated. Dynamic Offsetting with Trigonometry Mapbox GL’s icon-translate property allows developers to offset an icon relative to its anchor (the location where the point is originally placed) based on fixed x and y values. Because our users can draw a line in any direction, a fixed offset would produce something like this: Example of a fixed offset [10, 0] with icon-translate Similarly to how we used lineBearing to calculate the rotation of arrows, we can use this same angle to calculate a dynamic offset for our centerline icons and avoid the above situation. After console logging the lineBearing of several lines in different directions, I created this graphical depiction. I drew in the x and y input values that would translate the icon, an example of the line bearing (represented by 45°), and the distance between the initial location of the icon and the offset location (represented by c). In Mapbox GL, a negative y value implies a translation UP, and a positive y value implies a translation DOWN While we have to calculate new x and y values every time the line is drawn, there are two variables that are always known: (1) the distance in pixels that the icon should travel from the end of the line, which I called c , and (2) the angle of the line from true north, or the lineBearing , represented by ɵ. Revisiting my trigonometry days, I then calculated x and y using the pythagorean theorem and the equation of the tangent. Using the substitution method, I was able to isolate y , remove x , and produce an equation with just the lineBearing (ɵ) and c . I then plugged this new y value into the pythagorean theorem in order to find x . Note: I had to convert the lineBearing to radians before finding its tangent and a double asterisk ** represents exponents in JavaScript. const radiansBearing = (lineBearing * Math.PI) / 180; let x = null; let y = null; y = Math.sqrt((c ** 2) / ((Math.tan(radiansBearing) ** 2) + 1)); x = Math.sqrt((c ** 2) - (y ** 2)); I now had formulas for the x and y values needed to situate the icon correctly on the map. In Mapbox GL, a positive x value means a translation to the RIGHT, and a negative x value means a translation to the LEFT. A positive y value means a translation DOWN, and a negative y value means a translation UP. In order to assure that the icon was being translated appropriately based on the quadrant where the line existed, I had to set some of the x and y values to negative. Depending on the quadrant, the x and y values will need to be made negative or positive. Quadrant 1: the icon will be translated right and up [+x, -y]. Quadrant 2: right and down [+x, +y]. Quadrant 3: left and down [-x, +y]. Quadrant 4: left and up [-x, -y]. icon-translate is a weird property. It’s defined by Mapbox as: “Distance that the icon’s anchor is moved from its original placement. Positive values indicate right and down, while negative values indicate left and up.” As mentioned earlier, the anchor is the location where the point was originally placed by the user. So while we are physically translating the icon away from the line (the line will not move unless the user explicitly moves it), icon-translate is measuring the translation as a movement of the anchor not a movement of the icon. Therefore, I had to set the x and y values to the opposite of what I initially expected. if (lineBearing > 0 && lineBearing < 90) { // quadrant I x = -x; } else if (lineBearing < -90) { // quadrant II y = -y; } else if (lineBearing > 90 && lineBearing < 180) { // quadrant IV y = -y; x = -x; } I then added these x and y values to the icon-translate paint property on the centerline symbol layer. const centerlineLayer = { type: 'symbol', source: { type: 'geojson', data: { type: 'Feature', geometry: { type: 'Point', coordinates: lineFeature.geometry.coordinates[0], }, }, }, layout: layoutCenterline, paint: { 'icon-translate': [ x, y, ], }, }; The offset distance will now be the same regardless of the direction of the line. And that’s how we were able to create this cool centerline annotation on our maps!
https://medium.com/nyc-planning-digital/using-map-bearings-and-trigonometry-to-style-custom-mapbox-gl-draw-tools-455123abb68c
['Taylor Mcginnis']
2019-03-29 19:00:18.288000+00:00
['Design', 'Ember', 'Mapbox', 'Engineering', 'Nyc Planning Labs']
Title Using Map Bearings Trigonometry Style Custom Mapbox GL Draw ToolsContent team build tool oftencomplicated world Urban Planning NYC run number unique engineering challenge typically related web mapping newest application Applicant Maps built custom draw tool combining mapboxgldraw line symbol layer Users able draw five different “annotations” project map — Parallel Measurement tool created placing custom symbol end line Parallel Measurement Tool consists line symbol end Symbol layer Mapbox GL pointmarker layer developer define icon image custom arrow symbol ➤ PNG file created specifically annotation set location arrow match coordinate line rotate arrow using bearing line lineBearing angle line true north symbol layer startArrowLayer placed first coordinate line const coordinate lineFeaturegeometry const lineBearing bearingcoordinates0 coordinates1 const startArrowLayer type symbol source type geojson data type Feature geometry type Point coordinate lineFeaturegeometrycoordinates0 property rotation lineBearing 180 layout iconimage arrow iconsize 004 iconrotate type identity property rotation iconanchor top iconrotationalignment map iconallowoverlap true iconignoreplacement true Learn style layer Mapbox GL Style Specification check build entire annotation JavaScript file Centerline Annotation drawing displaying new centerline tool meeting planner ran interesting engineering issue building Centerline annotation tool Planners Technical Review Division wanted tool consist arrow well custom centerline icon built tool mirror Parallel Measurement annotation shown placing arrow one end line centerline symbol end easy enough replicate code used Parallel Measurement tool replace startArrowLayer centerlineLayer left couple minor styling change resize icon move little away line size modification required simple fixed value change offsetting icon ended little complicated Dynamic Offsetting Trigonometry Mapbox GL’s icontranslate property allows developer offset icon relative anchor location point originally placed based fixed x value user draw line direction fixed offset would produce something like Example fixed offset 10 0 icontranslate Similarly used lineBearing calculate rotation arrow use angle calculate dynamic offset centerline icon avoid situation console logging lineBearing several line different direction created graphical depiction drew x input value would translate icon example line bearing represented 45° distance initial location icon offset location represented c Mapbox GL negative value implies translation positive value implies translation calculate new x value every time line drawn two variable always known 1 distance pixel icon travel end line called c 2 angle line true north lineBearing represented ɵ Revisiting trigonometry day calculated x using pythagorean theorem equation tangent Using substitution method able isolate remove x produce equation lineBearing ɵ c plugged new value pythagorean theorem order find x Note convert lineBearing radian finding tangent double asterisk represents exponent JavaScript const radiansBearing lineBearing MathPI 180 let x null let null Mathsqrtc 2 MathtanradiansBearing 2 1 x Mathsqrtc 2 2 formula x value needed situate icon correctly map Mapbox GL positive x value mean translation RIGHT negative x value mean translation LEFT positive value mean translation negative value mean translation order assure icon translated appropriately based quadrant line existed set x value negative Depending quadrant x value need made negative positive Quadrant 1 icon translated right x Quadrant 2 right x Quadrant 3 left x Quadrant 4 left x icontranslate weird property It’s defined Mapbox “Distance icon’s anchor moved original placement Positive value indicate right negative value indicate left up” mentioned earlier anchor location point originally placed user physically translating icon away line line move unless user explicitly move icontranslate measuring translation movement anchor movement icon Therefore set x value opposite initially expected lineBearing 0 lineBearing 90 quadrant x x else lineBearing 90 quadrant II else lineBearing 90 lineBearing 180 quadrant IV x x added x value icontranslate paint property centerline symbol layer const centerlineLayer type symbol source type geojson data type Feature geometry type Point coordinate lineFeaturegeometrycoordinates0 layout layoutCenterline paint icontranslate x offset distance regardless direction line that’s able create cool centerline annotation mapsTags Design Ember Mapbox Engineering Nyc Planning Labs
1,390
What I Wish Someone Told Me When I Had My First Abnormal Pap
What I Wish Someone Told Me When I Had My First Abnormal Pap The mantra I didn’t know I needed. Photo by Gemma Chua-Tran on Unsplash The day I got my first abnormal Pap results is in the top 10 worst days of my life. I was at work sitting at my desk at work when I got a call from an unknown number. I answered and was told I had atypical squamous cells and was positive for HPV. I knew zero about what this meant, but I knew it wasn’t what I wanted to hear. I immediately walked back to my desk, told my boss I was leaving, drove home, and cried. I called my mom after some google searches proclaiming, “I think I have cancer”. Growing up female, we all learn the dreaded pap smears will come. They are scary at first but become routine. What we don’t do though is educate young females what it will be like if they are abnormal. Because here is a scary statistic according to Dr Hugh DePaulo, “as many as one in 10 pap smears come back abnormal nationwide”. I repeat — one in ten. That is a lot of abnormal pap smears daily in a population of over 328 million people. This also means there is a lot of fear if we don’t start talking about what an abnormal pap is and what it means and does not mean about you. Here is what I wish someone told me the second I received the bad news about my abnormal pap. It’s very unlikely to be cancer — don’t think the worst. I just wish someone would have used those exact words, immediately. When I was given my results, a lot of medical jargon was used, jargon that sounded like cancer. It was not until two more doctors appointments, and a procedure later, did I finally just straight up ask, “Do I have cancer?”. Only then was I given the answer clear and straight, “No, you do not have cancer” and could finally breathe again. By the time I had heard it though, I had shed plenty of tears and lost many nights of sleep over the matter, so I am here to tell you that an abnormal pap does not equate to cancer. And only 1% of abnormal pap smears ever do. Let your abnormal pap stay just that, a pap smear that is not normal. One you and the doctors are going to look more into. Do not let the scary medical terms and talk of changing cells make you fear the worst. It is not the C-word until it is the C-word. And when it is, you will be told it. You are not alone — women of all ages, including people you know have been through this. I felt like I was the only 20-something-year-old who had ever gotten this news. I had never heard from a friend or a family member that they had an abnormal pap. I had never seen one on TV. I immediately felt alone. I immediately felt dirty. I immediately felt like something was very wrong with me. Then that same day, similar stories started coming out of the wood-works. My mom told me she had an abnormal pap after I was born. My roommate told me she had gone through a similar experience a few years back. A good friend let me know she had her first abnormal pap that year too. Again, 1 in 10 women go through this — we just don’t talk about it. We don’t post our abnormal results on our Instagram reels, nor bring it up at girls' brunch. But I bet you if you are brave enough to ask, you will find so many women who will share they have been through the same exact thing. You are not alone in this experience. You will never know what “caused” this — so don’t waste your energy digging through your past. My thoughts immediately spiralled into my sexual history. I felt like my abnormal past must be due to something I did or something I had not done in my past. I went through thoughts of every male I had ever slept with and thought maybe him? Maybe he gave it to me! I thought of the HPV vaccine I had as a child and thought, goshdarnit, it must be that. I even thought about the sexual abuse I had been through — my trauma. I was sure my past negative experiences manifested this into my body. This is where all that unhealed pain was going to be showing up. Here is the kicker though: I, nor anyone, will ever know the root of these abnormal cells in our bodies. Sure, we can jump to conclusions, but our bodies are miraculous things beyond our understanding. Often beyond scientists understandings as well. Do not waste your precious energy, trying to find the root of these abnormal cells. Waste your precious energy instead on healing. Control what you can — and let your body and doctors take care of the rest. Similarly to how you will never know the cause of the abnormal pap, now that you know they are there, you can’t control them either. You likely cannot control the medical treatment you will receive or how long it may take to get that desired “normal” pap again. The good news, however, is that you can control many things about your body, such as what you put into it and the tools you use to heal. You also can maintain your mindset and stress level while you go through this experience. Hello, meditation, yoga, and lots of sleep. I google-searched a lot. I wanted to find the cure and control these cells deep inside my body. If someone would have told me to stick some herbs up my vagina, and this abnormal pap talk would be over — I probably would have. But the way out of this situation is through it… and through it depends on you and your specific circumstance. Lean into your doctor. Do your research, so you feel empowered about your body and choices. Control your stress levels. And let go of the rest. Your mental health will thank you later. You stressing about your pap smear (now yearly) will only cause anxiety— twice. I wish someone would have told me what a journey an abnormal pap is. There are paps and re-paps and procedures and waiting. For me, it was a three-year journey to finally hear those magic words: your pap is normal. According to CDC guidelines on Pap smears, the recommendation for those with normal pap results is every 3 years. Unfortunately, but necessarily, that changes to every year for those with abnormal results. Thus, you get to stress about hearing about your pap results every single year. But don’t worry about it twice! Stress about the results, sure but don’t stress about making the appointment or the appointment itself. Yes, the process is not fun, but it is to make sure you are healthy. It is to make sure it does not become the big scary C word. And you know what a leading factor of cancer is? Stress. The number one thing you can do to support yourself through this is to manage that.
https://medium.com/fearless-she-wrote/what-i-wish-someone-told-me-when-i-had-my-first-abnormal-pap-10b9042f7fc1
['Alexandra Ringer']
2020-09-20 02:44:25.617000+00:00
['Mental Health', 'Women', 'Health', 'Self', 'Medicine']
Title Wish Someone Told First Abnormal PapContent Wish Someone Told First Abnormal Pap mantra didn’t know needed Photo Gemma ChuaTran Unsplash day got first abnormal Pap result top 10 worst day life work sitting desk work got call unknown number answered told atypical squamous cell positive HPV knew zero meant knew wasn’t wanted hear immediately walked back desk told bos leaving drove home cried called mom google search proclaiming “I think cancer” Growing female learn dreaded pap smear come scary first become routine don’t though educate young female like abnormal scary statistic according Dr Hugh DePaulo “as many one 10 pap smear come back abnormal nationwide” repeat — one ten lot abnormal pap smear daily population 328 million people also mean lot fear don’t start talking abnormal pap mean mean wish someone told second received bad news abnormal pap It’s unlikely cancer — don’t think worst wish someone would used exact word immediately given result lot medical jargon used jargon sounded like cancer two doctor appointment procedure later finally straight ask “Do cancer” given answer clear straight “No cancer” could finally breathe time heard though shed plenty tear lost many night sleep matter tell abnormal pap equate cancer 1 abnormal pap smear ever Let abnormal pap stay pap smear normal One doctor going look let scary medical term talk changing cell make fear worst Cword Cword told alone — woman age including people know felt like 20somethingyearold ever gotten news never heard friend family member abnormal pap never seen one TV immediately felt alone immediately felt dirty immediately felt like something wrong day similar story started coming woodwork mom told abnormal pap born roommate told gone similar experience year back good friend let know first abnormal pap year 1 10 woman go — don’t talk don’t post abnormal result Instagram reel bring girl brunch bet brave enough ask find many woman share exact thing alone experience never know “caused” — don’t waste energy digging past thought immediately spiralled sexual history felt like abnormal past must due something something done past went thought every male ever slept thought maybe Maybe gave thought HPV vaccine child thought goshdarnit must even thought sexual abuse — trauma sure past negative experience manifested body unhealed pain going showing kicker though anyone ever know root abnormal cell body Sure jump conclusion body miraculous thing beyond understanding Often beyond scientist understanding well waste precious energy trying find root abnormal cell Waste precious energy instead healing Control — let body doctor take care rest Similarly never know cause abnormal pap know can’t control either likely cannot control medical treatment receive long may take get desired “normal” pap good news however control many thing body put tool use heal also maintain mindset stress level go experience Hello meditation yoga lot sleep googlesearched lot wanted find cure control cell deep inside body someone would told stick herb vagina abnormal pap talk would — probably would way situation it… depends specific circumstance Lean doctor research feel empowered body choice Control stress level let go rest mental health thank later stressing pap smear yearly cause anxiety— twice wish someone would told journey abnormal pap pap repaps procedure waiting threeyear journey finally hear magic word pap normal According CDC guideline Pap smear recommendation normal pap result every 3 year Unfortunately necessarily change every year abnormal result Thus get stress hearing pap result every single year don’t worry twice Stress result sure don’t stress making appointment appointment Yes process fun make sure healthy make sure become big scary C word know leading factor cancer Stress number one thing support manage thatTags Mental Health Women Health Self Medicine
1,391
Kesamaan Perilaku antara React dan Vue
Easy read, easy understanding. A good writing is a writing that can be understood in easy ways Follow
https://medium.com/easyread/persamaan-perilaku-antara-react-dan-vue-f16ae8644e98
['Alif Irfan Anshory']
2019-01-21 15:42:24.964000+00:00
['React', 'JavaScript', 'Front End Development', 'Web Development', 'Vuejs']
Title Kesamaan Perilaku antara React dan VueContent Easy read easy understanding good writing writing understood easy way FollowTags React JavaScript Front End Development Web Development Vuejs
1,392
4 Ways To Help Your Employees Build Their Confidence
Do you have a high achieving performer on your team that is talented, hard-working, and intelligent; but remains silent in group meetings and freezes in crucial calls? Freezing happens, but for some, freezing is a frequent obstacle to professional well-being. Every employee wants to feel seen, heard, and celebrated in the workplace, but for some, sharing ideas, thoughts, and accomplishments creates a total body and mind shut down. They could be experiencing “destructive perfectionism.” Brené Brown defined this kind of perfectionism in her book, The Gifts of Imperfection, as ‘a self-destructive and addictive belief system that fuels this primary thought: ‘If I look perfect, live perfectly, and do everything perfectly, I can avoid or minimize the painful feelings of shame, judgment and blame.’ While that keeps them safe, it prevents them from showing up with vulnerability and courage to step into feeling confident and connected in the workplace. Here are four suggestions to create a supportive and connected environment for employees to thrive: 1. Improv Exercises to Get Out of the Head & Into the Body Curio specializes in creating expert-led interactive virtual experiences to help employees get out of their heads and into their bodies through creative, low-pressure improv exercises. These moments give employees permission to relax and show up unrehearsed. 2.Quieting the Loud Inner Dialogue A relaxed mind feels at ease to connect authentically in a casual conversation or a board room. Some inner dialogues are so critical and loud, it leads to constant overdrive for the mind, and it drowns out all other thoughts. Mindfulness is not about clearing the mind. Mindfulness techniques slow down and quiet the chatter, leaving room for present connection with the self and others. Choose a breath awareness meditation, shifting awareness from the stream of thoughts to the breath. 3.Body-based Relaxation Techniques The body goes into fight, flight, freeze mode when it feels that a situation is a high risk. For someone with ‘destructive perfectionism’, the body sees many moments (like meetings, calls, interviews, presentations) as another risk to be seen as imperfect. The threat feels high, so they shut down. Choose a body scan meditation, shifting awareness to the physical body. 4.Rewiring Through Journaling Popular Psychologist Dr. Nicole LePera @The.Holistic.Psychologist has an incredible free resource called the Future Self Journal that takes you through a daily writing practice of rewiring thought patterns and creating new pathways to help achieve new habits and mindset. Try sharing these techniques with your team or adding a new well-being activity to your employee health programs. Creating a supportive and open environment builds more creative and collaborative teams.
https://medium.com/joincurio/4-ways-to-help-your-employees-build-their-confidence-573ddcf64a81
['Melissa Schwartz']
2020-11-23 15:01:10.665000+00:00
['Leadership', 'Mental Health', 'Culture', 'Mindfulness', 'Creativity']
Title 4 Ways Help Employees Build ConfidenceContent high achieving performer team talented hardworking intelligent remains silent group meeting freeze crucial call Freezing happens freezing frequent obstacle professional wellbeing Every employee want feel seen heard celebrated workplace sharing idea thought accomplishment creates total body mind shut could experiencing “destructive perfectionism” Brené Brown defined kind perfectionism book Gifts Imperfection ‘a selfdestructive addictive belief system fuel primary thought ‘If look perfect live perfectly everything perfectly avoid minimize painful feeling shame judgment blame’ keep safe prevents showing vulnerability courage step feeling confident connected workplace four suggestion create supportive connected environment employee thrive 1 Improv Exercises Get Head Body Curio specializes creating expertled interactive virtual experience help employee get head body creative lowpressure improv exercise moment give employee permission relax show unrehearsed 2Quieting Loud Inner Dialogue relaxed mind feel ease connect authentically casual conversation board room inner dialogue critical loud lead constant overdrive mind drowns thought Mindfulness clearing mind Mindfulness technique slow quiet chatter leaving room present connection self others Choose breath awareness meditation shifting awareness stream thought breath 3Bodybased Relaxation Techniques body go fight flight freeze mode feel situation high risk someone ‘destructive perfectionism’ body see many moment like meeting call interview presentation another risk seen imperfect threat feel high shut Choose body scan meditation shifting awareness physical body 4Rewiring Journaling Popular Psychologist Dr Nicole LePera TheHolisticPsychologist incredible free resource called Future Self Journal take daily writing practice rewiring thought pattern creating new pathway help achieve new habit mindset Try sharing technique team adding new wellbeing activity employee health program Creating supportive open environment build creative collaborative teamsTags Leadership Mental Health Culture Mindfulness Creativity
1,393
The Amazing Benefits of Being in Nature
The Amazing Benefits of Being in Nature Better health. Lower stress. Enhanced creativity. Sheer joy. It’s all out there, and it doesn’t take long. When my son recently announced he wanted to try fishing, I jumped all over it, dug out my old fly rod, and we headed out beyond the city and suburbs to a stretch of river reputed to have some nice trout. We didn’t catch a thing, and we’ve had little luck on multiple, marvelous return trips. See, it’s not just about the fish. The Lower Salt River near Phoenix near sunset. Photo by Robert Roy Britt At our favorite little stretch of river, red rock walls rise gloriously from the surprisingly verdant desert canyon, poking into predawn clouds one morning, glowing like fire one evening. The flutter of water lapping over rocks is interrupted by the sharp squawk of a heron. A bald eagle swoops down to outfish us in a real live David Attenborough moment. There’s no cell reception. The mind drifts like the laziest sections of river. Thoughts come unexpectedly, or not at all. The next riffle beckons. We breathe deep and move on. “Nature holds the key to our aesthetic, intellectual, cognitive and even spiritual satisfaction,” said E.O. Wilson, the Pulitzer Prize-winning Harvard biologist. That’s what I mean to say. And after decades of accumulating evidence, science suggests he’s onto something. From hiking in the wilderness to living near urban green spaces, experiences with nature are linked to everything from better physical health and longer life to improved creativity, lower stress levels and outright happiness. One new study even suggests brief interludes in natural green spaces should be prescribed, like a nature pill, for people who are stressed. With the number of people around the globe living in urban areas expected to grow from 54 percent in 2015 to 66 percent by 2050, preserving or creating green space will be a key to overall human well-being. We know all this intuitively. It’s why so many vacations center around camping, hiking or putting toes in the sand. We crave a connection with nature from deep in our primordial beings. And for good reason. The Good of Green In 2012, a group of backpackers set out on a multi-day excursion into the wild, with no phones or other electronics. Before departing, they took a test measuring creativity and problem-solving ability. After four days in the wild, the test was given again. Scores were up by 50 percent, from an average of 4.14 correct answers out of 10 before the hike to 6.08. Like many psychology studies, this one could not prove cause and effect. It could not determine whether the improvement owed to nature itself, or if the disengagement from technology boosted scores, or if the physical activity perhaps played a role. But the researchers — University of Kansas psychologists Ruth and Paul Atchley and David Strayer of the University of Utah — shared their intuition at the time: “Our modern society is filled with sudden events (sirens, horns, ringing phones, alarms, television, etc.) that hijack attention,” they wrote in the journal PLOS ONE. “By contrast, natural environments are associated with gentle, soft fascination, allowing the executive attentional system to replenish.” “Spending time in, or living close to, natural green spaces is associated with diverse and significant health benefits.” Other studies by then had already shown that the benefits of green space, however they accrue, are not reserved for the likes of Marlin Perkins or Bear Grylls. Any bit of green seems to help. Back in 2006, research led by Jolanda Maas, a behavioral scientist now at Vrije University Amsterdam, found that the amount of green space within a roughly 2-mile radius “had a significant relation to perceived general health.” The conclusion was based on actual measurements of greenery compared to questionnaires filled out at doctor’s offices by 250,782 people in the Netherlands. Maas and her colleagues did a similar study in 2009, looking instead at morbidity data. Of 24 diseases considered, the prevalence of 15 was lower for people living in areas with more green space. “The relation was strongest for anxiety disorder and depression,” they reported in the Journal of Epidemiology & Community Health. Other research has shown that a room with a garden view and other access to green space can reduce stress and pain among hospital patients, boosting their immune systems and aiding recovery. Likewise, gardening can reduce stress, one small study found in 2011. Interestingly, it outdid reading as a destresser. In the test, 30 people were made to perform a stressful task, then spent 30 minutes outside gardening or indoors reading. Levels of cortisol, a hormone released by stress, were measured repeatedly, and the subjects were asked about their mood before and after. “Gardening and reading each led to decreases in cortisol during the recovery period, but decreases were significantly stronger in the gardening group,” the scientists wrote in the Journal of Health Psychology. “Positive mood was fully restored after gardening, but further deteriorated during reading.” Fast forward to last year, when the benefits of nature on physical health were spelled out in a broad review of studies that involved data on more than 290 million people in 20 countries. “We found that spending time in, or living close to, natural green spaces is associated with diverse and significant health benefits,” said lead author Caoimhe Twohig-Bennett of the University of East Anglia in England. “It reduces the risk of type II diabetes, cardiovascular disease, premature death, and preterm birth, and increases sleep duration. People living closer to nature also had reduced diastolic blood pressure, heart rate and stress,” as measured by cortisol levels, Twohig-Bennett said. Nature or Nurture? There’s an important caveat to many of these studies: Being outdoors often means being active. Whether backpacking, gardening or simply walking briskly through an urban park, the subjects of studies like these may also be engaging in what other scientists call “moderate physical activity,” which even in small doses is known to improve mood, boost cognitive ability, benefit physical health and up the odds of living longer. “People living near green space likely have more opportunities for physical activity and socializing,” Twohig-Bennett said, acknowledging the struggle to determine cause-and-effect. The science indeed remains inconclusive on whether it’s nature itself or the physical activity associated with being in nature that brings health benefits, said Douglas Becker, a grad student at the University of Illinois who just published a study on the effects of nature on health care costs. “Although it is strongly suggestive of both of those things… proximity and contact with nature leading to improved health outcomes and being around nature promoting physical activity,” Becker told me. Becker examined health and environmental data from nearly all of the 3,103 counties in the continental U.S. He found that counties with more forests and shrublands had lower Medicare costs per person. The difference was not tiny. Each 1 percent of a county’s land covered in forest was associated with $4.32 in savings per person per year, on average. Becker kindly did some additional math that I don’t fully understand, but it adds up to a boatload of money: “If you multiply that by the number of Medicare fee-for-service users in a county and by the average forest cover and by the number of counties in the U.S., it amounts to about $6 billion in reduced Medicare spending every year nationally,” Becker said. So. Plant more trees, right? Well… The analysis, to be detailed in the May 2019 issue of the journal Urban Forestry and Urban Greening, does not prove that having more trees and shrubs directly lowers health care costs, Becker said. Rather, it’s one more bit of evidence pointing to possible proof that green spaces (especially forests, he notes) are good for our health. “Being in sight of nature does indeed confer benefits,” he said. Twohig-Bennett added another potential factor, gleaned from her review of the literature, suggesting trees may have as-yet unrecognized value in promoting well-being. “Exposure to a diverse variety of bacteria present in natural areas may also have benefits for the immune system and reduce inflammation,” she said, pointed out that research has suggested there may be benefits to “forest bathing,” a popular therapy in Japan that involves just walking or even lying down in a forest. “Much of the research from Japan suggests that phytoncides — organic compounds with antibacterial properties — released by trees could explain the health-boosting properties of forest bathing,” Twohig-Bennett said. The jury is still out on this therapy, but “our study shows that perhaps they have the right idea,” she said. All in Your Head In 2015, researchers at Stanford University added to evidence there are distinct benefits to nature itself, not just the walking that might get you there and back. They looked at the effects of hiking in a natural area (oak trees and shrubs) versus hiking in an urban setting (along a four-lane road). Before and after the hikes, they asked the participants a bunch of questions, and, importantly, they measured participants’ heart rates and respiration and did brain scans. There were no notable differences in the physiology of the two groups after their hikes, the researchers reported in the Proceedings of the National Academy of Science. But those who hiked in nature had, afterward, less activity in a part of the brain called the subgenual prefrontal cortex. That’s where we ruminate repeatedly on negative emotions. Less is good. “It demonstrates the impact of nature experience on an aspect of emotion regulation — something that may help explain how nature makes us feel better,” said lead author Gregory Bratman, then a graduate student at the university. Bratman’s co-author, Stanford psychology professor James Gross, took the interpretation a step further, looking at the flip side of all this: “These findings are important because they are consistent with, but do not yet prove, a causal link between increasing urbanization and increased rates of mental illness.” And apparently, it’s never too soon to start an immersion in nature. People who grew up in greener surroundings have up to a 55 percent lower risk of mental disorders as adults, according to a study of nearly 1 million Danes published earlier this year in the US journal Proceedings of the National Academy of Sciences. “There is increasing evidence that the natural environment plays a larger role for mental health than previously thought,” said study leader Kristine Engemann of Aarhus University. “With our dataset, we show that the risk of developing a mental disorder decreases incrementally the longer you have been surrounded by green space from birth and up to the age of 10.” Educators have long recognized the benefits of nature on childhood well-being. And as science increasingly supports the premise, the number of nature-based preschools and so-called “forest kindergartens” in the US has grown 60 percent or more in each of the past two years. More and more children are getting almost their entire early education in the great outdoors. Nature Pill? How much time do you need to spend in nature to see benefits? While few would argue that more isn’t better, it doesn’t take much, a new study finds. Slipping away for just 20–30 minutes to sit or stroll in a natural environment reduces levels of cortisol, the stress hormone, according to a small study published April 4, 2019 in the journal Frontiers in Psychology. Researchers had 36 urban dwellers take a break for 10 minutes or more, three times a week over eight weeks, and go to a place that “made them feel like they’ve interacted with nature.” Importantly, the volunteers were instructed not to do any aerobic exercise during the breaks and to avoid reading, conversations and using their phones. The stress-reducing efficiency of the outings was greatest among those who spent 20 to 30 minutes in their happy places, the researchers concluded. “We know that spending time in nature reduces stress, but until now it was unclear how much is enough, how often to do it, or even what kind of nature experience will benefit us,” said the lead author of the paper, MaryCarol Hunter of the University of Michigan. “Our study shows that for the greatest payoff, in terms of efficiently lowering levels of the stress hormone cortisol, you should spend 20 to 30 minutes sitting or walking in a place that provides you with a sense of nature.” Hunter and her colleagues suggest healthcare practitioners could prescribe a “nature pill” based on this finding. Combined with exercise, good sleep and a good diet, a nature pill — or whatever you prefer to call it — could be viewed as a pillar of science-based well-being. For my son and I, trekking to our favorite fishing hole every day isn’t practical. But there’s a hiking trail that starts not far from our home, leading out into the desert and up a mountain. We’ll be out there.
https://medium.com/luminate/the-amazing-benefits-of-being-in-nature-e998d93f51a0
['Robert Roy Britt']
2019-04-19 12:54:12.689000+00:00
['Nature', 'Happiness', 'Health', 'Wellbeing', 'Science']
Title Amazing Benefits NatureContent Amazing Benefits Nature Better health Lower stress Enhanced creativity Sheer joy It’s doesn’t take long son recently announced wanted try fishing jumped dug old fly rod headed beyond city suburb stretch river reputed nice trout didn’t catch thing we’ve little luck multiple marvelous return trip See it’s fish Lower Salt River near Phoenix near sunset Photo Robert Roy Britt favorite little stretch river red rock wall rise gloriously surprisingly verdant desert canyon poking predawn cloud one morning glowing like fire one evening flutter water lapping rock interrupted sharp squawk heron bald eagle swoop outfish u real live David Attenborough moment There’s cell reception mind drift like laziest section river Thoughts come unexpectedly next riffle beckons breathe deep move “Nature hold key aesthetic intellectual cognitive even spiritual satisfaction” said EO Wilson Pulitzer Prizewinning Harvard biologist That’s mean say decade accumulating evidence science suggests he’s onto something hiking wilderness living near urban green space experience nature linked everything better physical health longer life improved creativity lower stress level outright happiness One new study even suggests brief interlude natural green space prescribed like nature pill people stressed number people around globe living urban area expected grow 54 percent 2015 66 percent 2050 preserving creating green space key overall human wellbeing know intuitively It’s many vacation center around camping hiking putting toe sand crave connection nature deep primordial being good reason Good Green 2012 group backpacker set multiday excursion wild phone electronics departing took test measuring creativity problemsolving ability four day wild test given Scores 50 percent average 414 correct answer 10 hike 608 Like many psychology study one could prove cause effect could determine whether improvement owed nature disengagement technology boosted score physical activity perhaps played role researcher — University Kansas psychologist Ruth Paul Atchley David Strayer University Utah — shared intuition time “Our modern society filled sudden event siren horn ringing phone alarm television etc hijack attention” wrote journal PLOS ONE “By contrast natural environment associated gentle soft fascination allowing executive attentional system replenish” “Spending time living close natural green space associated diverse significant health benefits” study already shown benefit green space however accrue reserved like Marlin Perkins Bear Grylls bit green seems help Back 2006 research led Jolanda Maas behavioral scientist Vrije University Amsterdam found amount green space within roughly 2mile radius “had significant relation perceived general health” conclusion based actual measurement greenery compared questionnaire filled doctor’s office 250782 people Netherlands Maas colleague similar study 2009 looking instead morbidity data 24 disease considered prevalence 15 lower people living area green space “The relation strongest anxiety disorder depression” reported Journal Epidemiology Community Health research shown room garden view access green space reduce stress pain among hospital patient boosting immune system aiding recovery Likewise gardening reduce stress one small study found 2011 Interestingly outdid reading destresser test 30 people made perform stressful task spent 30 minute outside gardening indoors reading Levels cortisol hormone released stress measured repeatedly subject asked mood “Gardening reading led decrease cortisol recovery period decrease significantly stronger gardening group” scientist wrote Journal Health Psychology “Positive mood fully restored gardening deteriorated reading” Fast forward last year benefit nature physical health spelled broad review study involved data 290 million people 20 country “We found spending time living close natural green space associated diverse significant health benefits” said lead author Caoimhe TwohigBennett University East Anglia England “It reduces risk type II diabetes cardiovascular disease premature death preterm birth increase sleep duration People living closer nature also reduced diastolic blood pressure heart rate stress” measured cortisol level TwohigBennett said Nature Nurture There’s important caveat many study outdoors often mean active Whether backpacking gardening simply walking briskly urban park subject study like may also engaging scientist call “moderate physical activity” even small dos known improve mood boost cognitive ability benefit physical health odds living longer “People living near green space likely opportunity physical activity socializing” TwohigBennett said acknowledging struggle determine causeandeffect science indeed remains inconclusive whether it’s nature physical activity associated nature brings health benefit said Douglas Becker grad student University Illinois published study effect nature health care cost “Although strongly suggestive things… proximity contact nature leading improved health outcome around nature promoting physical activity” Becker told Becker examined health environmental data nearly 3103 county continental US found county forest shrublands lower Medicare cost per person difference tiny 1 percent county’s land covered forest associated 432 saving per person per year average Becker kindly additional math don’t fully understand add boatload money “If multiply number Medicare feeforservice user county average forest cover number county US amount 6 billion reduced Medicare spending every year nationally” Becker said Plant tree right Well… analysis detailed May 2019 issue journal Urban Forestry Urban Greening prove tree shrub directly lower health care cost Becker said Rather it’s one bit evidence pointing possible proof green space especially forest note good health “Being sight nature indeed confer benefits” said TwohigBennett added another potential factor gleaned review literature suggesting tree may asyet unrecognized value promoting wellbeing “Exposure diverse variety bacteria present natural area may also benefit immune system reduce inflammation” said pointed research suggested may benefit “forest bathing” popular therapy Japan involves walking even lying forest “Much research Japan suggests phytoncides — organic compound antibacterial property — released tree could explain healthboosting property forest bathing” TwohigBennett said jury still therapy “our study show perhaps right idea” said Head 2015 researcher Stanford University added evidence distinct benefit nature walking might get back looked effect hiking natural area oak tree shrub versus hiking urban setting along fourlane road hike asked participant bunch question importantly measured participants’ heart rate respiration brain scan notable difference physiology two group hike researcher reported Proceedings National Academy Science hiked nature afterward le activity part brain called subgenual prefrontal cortex That’s ruminate repeatedly negative emotion Less good “It demonstrates impact nature experience aspect emotion regulation — something may help explain nature make u feel better” said lead author Gregory Bratman graduate student university Bratman’s coauthor Stanford psychology professor James Gross took interpretation step looking flip side “These finding important consistent yet prove causal link increasing urbanization increased rate mental illness” apparently it’s never soon start immersion nature People grew greener surroundings 55 percent lower risk mental disorder adult according study nearly 1 million Danes published earlier year US journal Proceedings National Academy Sciences “There increasing evidence natural environment play larger role mental health previously thought” said study leader Kristine Engemann Aarhus University “With dataset show risk developing mental disorder decrease incrementally longer surrounded green space birth age 10” Educators long recognized benefit nature childhood wellbeing science increasingly support premise number naturebased preschool socalled “forest kindergartens” US grown 60 percent past two year child getting almost entire early education great outdoors Nature Pill much time need spend nature see benefit would argue isn’t better doesn’t take much new study find Slipping away 20–30 minute sit stroll natural environment reduces level cortisol stress hormone according small study published April 4 2019 journal Frontiers Psychology Researchers 36 urban dweller take break 10 minute three time week eight week go place “made feel like they’ve interacted nature” Importantly volunteer instructed aerobic exercise break avoid reading conversation using phone stressreducing efficiency outing greatest among spent 20 30 minute happy place researcher concluded “We know spending time nature reduces stress unclear much enough often even kind nature experience benefit us” said lead author paper MaryCarol Hunter University Michigan “Our study show greatest payoff term efficiently lowering level stress hormone cortisol spend 20 30 minute sitting walking place provides sense nature” Hunter colleague suggest healthcare practitioner could prescribe “nature pill” based finding Combined exercise good sleep good diet nature pill — whatever prefer call — could viewed pillar sciencebased wellbeing son trekking favorite fishing hole every day isn’t practical there’s hiking trail start far home leading desert mountain We’ll thereTags Nature Happiness Health Wellbeing Science
1,394
A Guide to Medium Curation
I love the word curation. It makes me think of museums, of course. But also, it makes me think of the art of pulling together disparate things and figuring out how they fit together. Curation is the biggest buzz word in the community of Medium writers right now. Several times a day, questions about curation pop up in Facebook groups I belong to for Medium writers. What is it? Why does it happen? How does it happen? What if it doesn’t happen? Is there some kind of magic bean involved? I thought I’d see if I could curate a post that answers some of those questions. (Cute, right?) Let’s start with a definition. Curation is when Medium’s elevators — the people tasked with checking out posts and ‘elevating’ them by curating them into the platform’s many topics — choose a post to share more widely with readers. When a post is curated, it shows up on the page for the topic or topics that it has been curated in. It can also be distributed to Medium members who follow those topics. For instance, this post of mine was recently curated into the writing topic. I can see that it’s been curated because the word ‘writing’ appears above it on my stats page. And when I click on ‘details’ I can see it there, too. If it was curated into more than one topic, all of the topics would show up on the detailed stats page. And, when I click on that little boxed word ‘writing’ I’m taken to the topics page, where I can see my post listed. You don’t have to do anything to submit your post for curation. Elevators automatically look at posts and decide whether or not to curate them. Some posts are passed without being looked at, due to time constraints on Medium’s part. I’m not sure what causes this or how posts are sorted into this category. I do know that if my posts were regularly getting that note, I would work toward figuring out how to stop that from happening by increasing the quality of my posts over time. The way a post is actually shared is called distribution. When a post link shows up at the bottom of the post you’re reading or you get an email or text notification about a new post — that’s Medium distributing your post to readers. Medium will share this particular post by distributing it to some readers who follow the ‘writing’ topic, as well as people who follow me and people who follow the publication I posted this article in. If people read and respond to it, they’ll distribute it more. Medium offers guidelines for curation. Medium cares most about the quality of the post. Is your writing clear? Is it grammatically correct and free of errors? Is it an interesting read? They also like to see posts with clear headlines, subheads, and photographs that are properly cited. At least as important as all of that technical stuff is this: Medium strives to be ad-free. They charge their members a monthly subscription, and those readers expect an ad-free reading experience. If you have affiliate links or less-than-subtle calls to action (say, to join your email list), or you are selling something in your post, it is unlikely to be curated even though you’re not breaking any rules. Medium allows sign up forms, for instance, in posts that are part of the Medium Partnership Program (behind the paywall.) They are just less likely to curate those posts. Medium also allows affiliate links in posts behind the paywall, as long as you use a disclosure letting readers know that you’ve used those links. But, again, they are unlikely to curate those posts. And Medium is also unlikely to curate any post that looks like it’s part of a series that it not their own. If you write a weekly series, for instance, those posts aren’t against any rule, but Medium will probably not curate them into their topics. Medium rarely curates posts that are about writing on Medium, by the way. I do not expect this post to be curated. Medium does a good job of letting you know whether you’ve been curated. On the detailed stats page for each of your posts, you’ll see a message like this if a post has been curated: If your post was not curated, you’ll see a message like this: Medium does not curate every post. Sometimes well-written posts that meet all the criteria are passed over. However if you’re finding that most of your posts aren’t being curated, here are a few ideas. First — not being curated is not the end of the world. Your post is still made available to your followers. It’s still comes up in searches or if someone flips through posts in a tag you’ve used. You’ll likely get less traffic if your post isn’t curated — but your post hasn’t been shipped off to Siberia. Take a hard look at the quality of your writing. Your posts should be clearly written and as free from grammatical and spelling errors as possible. Large chunks of narrative are hard to read online (and in print, actually), so make sure you’re breaking your posts up with lots of white space. Use subheads in your text, to help with the white space and add to the reading experience. Bullet points help with this as well. Make sure that you’re digging deep enough in our work. If you’re writing something that has been said lots of times, by lots of writers, and not adding anything new to the conversation, that could be why your posts are not curated. Medium suggests asking for peer feedback on your writing and that’s a good idea. It might be tough to hear, but if you’re not being curated it could be because your writing isn’t up to par. That doesn’t mean you’re a bad writer or that you should quit. It means that you should read a lot and implement what you learn into your work. Medium is a unique platform that lets you publish while you’re learning. Take advantage of that. Use proper formatting. Medium has let us know that they like a clear headline written with title case (most of the words capitalized, no end punctuation.) They also like a subhead that gives more information and is written in sentence case (just like it sounds, like a sentence that starts with a capital and ends with punctuation.) They like an interesting photograph at the top of your post that’s properly cited. They even have a built in way to do that. If your posts are not being curated and you’re not following these basic formatting guidelines, that could be why. Make sure you’re not advertising. This one impacted me quite a lot. I sometimes use affiliate links in my posts and building my email list is very important to me. Having multiple income streams is always high on my priority list. I had to decide which posts I wanted to optimize for Medium curation. For those posts, I don’t include any email sign-up forms or affiliate links. For a while I had a link in the bio I put at the bottom of each of my posts that I finally realized was keeping me from being curated more often. When I took that link out, my curation rate increased. Write stand-alone posts. Medium is unlikely to curate a post that feels like it is part of a series. I happen to be the kind of writer who really enjoys writing in series. Sometimes I just write my series and realize that Medium isn’t going to help me promote those posts as much as some of my other work. Other times, I try to keep the fact that I’m writing a series more subtle. If I want my posts to be curated, I don’t name the series, for instance. I try to make the post feel like someone could read it by itself and not feel lost. I might post those under a tag in my own publications, to make them stand out as a series. Or call it a series in my own promotion efforts (for instance, when I post my links to Facebook or my email list.) But the actual post needs to read as complete all by itself if I want it to have a chance at curation. Don’t rely on curation as your own form of promotion. You do not have control over whether or not Medium curates your posts, beyond making sure that you meet their guidelines. Meeting those guidelines is not a guarantee. One of the best things you can do is focus on the things you can control. Promote your own posts via your social media channels. Start to build an email list, so that you can distribute your posts to readers on your own. Make a Medium publication for your posts so that you can use Medium’s ‘letters’ feature to reach out to followers. Also remember that not every post is a great fit for Medium. For instance, I’ve found that reviews, recipes, and tactile how-to articles don’t gain my traction here. I’ve also written some posts here that didn’t get much Medium-specific traffic, but ranked on Google (which brings readers, but usually not much Medium income.) I’ve started moving some of those posts to Hubpages, where SEO and Google ranking matters more, to see how they do there.
https://medium.com/the-write-brain/a-guide-to-medium-curation-7d5be2dd97db
['Shaunta Grimes']
2019-11-28 22:09:17.054000+00:00
['Medium', 'Freelancing', 'Writing', 'Money', 'Creativity']
Title Guide Medium CurationContent love word curation make think museum course also make think art pulling together disparate thing figuring fit together Curation biggest buzz word community Medium writer right Several time day question curation pop Facebook group belong Medium writer happen happen doesn’t happen kind magic bean involved thought I’d see could curate post answer question Cute right Let’s start definition Curation Medium’s elevator — people tasked checking post ‘elevating’ curating platform’s many topic — choose post share widely reader post curated show page topic topic curated also distributed Medium member follow topic instance post mine recently curated writing topic see it’s curated word ‘writing’ appears stats page click ‘details’ see curated one topic topic would show detailed stats page click little boxed word ‘writing’ I’m taken topic page see post listed don’t anything submit post curation Elevators automatically look post decide whether curate post passed without looked due time constraint Medium’s part I’m sure cause post sorted category know post regularly getting note would work toward figuring stop happening increasing quality post time way post actually shared called distribution post link show bottom post you’re reading get email text notification new post — that’s Medium distributing post reader Medium share particular post distributing reader follow ‘writing’ topic well people follow people follow publication posted article people read respond they’ll distribute Medium offer guideline curation Medium care quality post writing clear grammatically correct free error interesting read also like see post clear headline subhead photograph properly cited least important technical stuff Medium strives adfree charge member monthly subscription reader expect adfree reading experience affiliate link lessthansubtle call action say join email list selling something post unlikely curated even though you’re breaking rule Medium allows sign form instance post part Medium Partnership Program behind paywall le likely curate post Medium also allows affiliate link post behind paywall long use disclosure letting reader know you’ve used link unlikely curate post Medium also unlikely curate post look like it’s part series write weekly series instance post aren’t rule Medium probably curate topic Medium rarely curate post writing Medium way expect post curated Medium good job letting know whether you’ve curated detailed stats page post you’ll see message like post curated post curated you’ll see message like Medium curate every post Sometimes wellwritten post meet criterion passed However you’re finding post aren’t curated idea First — curated end world post still made available follower It’s still come search someone flip post tag you’ve used You’ll likely get le traffic post isn’t curated — post hasn’t shipped Siberia Take hard look quality writing post clearly written free grammatical spelling error possible Large chunk narrative hard read online print actually make sure you’re breaking post lot white space Use subhead text help white space add reading experience Bullet point help well Make sure you’re digging deep enough work you’re writing something said lot time lot writer adding anything new conversation could post curated Medium suggests asking peer feedback writing that’s good idea might tough hear you’re curated could writing isn’t par doesn’t mean you’re bad writer quit mean read lot implement learn work Medium unique platform let publish you’re learning Take advantage Use proper formatting Medium let u know like clear headline written title case word capitalized end punctuation also like subhead give information written sentence case like sound like sentence start capital end punctuation like interesting photograph top post that’s properly cited even built way post curated you’re following basic formatting guideline could Make sure you’re advertising one impacted quite lot sometimes use affiliate link post building email list important multiple income stream always high priority list decide post wanted optimize Medium curation post don’t include email signup form affiliate link link bio put bottom post finally realized keeping curated often took link curation rate increased Write standalone post Medium unlikely curate post feel like part series happen kind writer really enjoys writing series Sometimes write series realize Medium isn’t going help promote post much work time try keep fact I’m writing series subtle want post curated don’t name series instance try make post feel like someone could read feel lost might post tag publication make stand series call series promotion effort instance post link Facebook email list actual post need read complete want chance curation Don’t rely curation form promotion control whether Medium curate post beyond making sure meet guideline Meeting guideline guarantee One best thing focus thing control Promote post via social medium channel Start build email list distribute post reader Make Medium publication post use Medium’s ‘letters’ feature reach follower Also remember every post great fit Medium instance I’ve found review recipe tactile howto article don’t gain traction I’ve also written post didn’t get much Mediumspecific traffic ranked Google brings reader usually much Medium income I’ve started moving post Hubpages SEO Google ranking matter see thereTags Medium Freelancing Writing Money Creativity
1,395
Exploring Important Feature Repressions in Deep One-Class Classification
ICLR 2021 Photo by niculcea florin on Unsplash The data we routinely collect contains only a small amount of anomalous data. This is a very pleasing fact of normal life :-) Only a few defective products are encountered on factory production lines, and medical data on rare cases are presented in papers in medical societies as new discoveries. In other words, collecting anomalous data is a very costly task. It is obvious to anyone that it is more reasonable to train only normal data to detect anomalous cases than to spend a lot of money to collect various patterns. This learning method with training only normal cases dataset is called one-class classification, as various anomalous cases aimed to be excluded from the actual cases. In this story, Learning and Evaluating Representations for Deep One-class Classification, by Google Cloud AI, is presented. This is published as a technical paper of ICLR 2021. In this paper, the two-stage framework for deep one-class classification, composed of state-of-the-art self-supervised representation learning followed by generative or discriminative one-class classifiers[]. The major contribution of this paper is the proposal of a novel distribution-augmented contrastive learning method. The framework does not only learn a better representation, but it also permits building one-class classifiers that are more faithful to the target task. They even made the code available for everyone on their Github! Let’s see how they achieved that. I will explain only the essence of DROC, so those who want to know more should click on DROC paper. What does this paper say? In this paper, anomaly detection approach with a two-stage framework for Deep Representation One-class Classification (DROC) is proposed. In the first stage, training in a deep neural network of self-supervised learning to obtain a high-level representation of the data, a mapping f to a generic high-level latent representation is obtained; then, in the second stage, the mapping f obtained in the first stage is used to map the data to the latent space and OC- Applies to traditional one-class classifiers such as SVM and KDE [Sohn et al.,2020]. Fig. 1 Overview of two-stage framework for building a deep one-class classifier. (a) In the first stage, learning representations from one-class training distribution using self-supervised learning methods, and (b) in the second stage, training one-class classifiers using learned representations. Where is the novelty in this paper? ・In order to modify Contrastive learning [Cheng et al., 2020] suitable for one-class classification, the author proposes a distribution augmentation contrast learning method. Specifically, the system learns by identifying the type of augmentation applied to the data using geometric transformations of the image [Gidaris et al., 2018], horizontal flips and rotations (0,90,180,270). This allows dealing with images with outliers (anomalous data) that are rotated. Optimize a self-supervised loss function that minimizes the distance between samples from the same image using different data augmentation functions and maximizes the distance between samples from different images using the same augmentation function. This reduces the uniformity across the hypersphere and allows for separation from outliers. ・The idea that ``the less uniformity, the better for One-class Classification’' is wrong!! A fundamental trade-off between the amount of information and the uniformity of representation was identified. It is often thought that ``the lower the uniformity, the better for One-class Classification’’, but DistAug has effectively shown that this is in fact not true. What is contrastive learning? Contrastive learning [Cheng et al., 2020, Le-Khac et al., 2020] is an algorithm that formulates a task to find similarities and dissimilarities in ML models. It first learns a general representation of an image on an unlabeled dataset, and then fine-tunes it on a small dataset of labeled images for a specific classification task. Using this approach, a machine learning model can be trained to classify similar and dissimilar images. The SimCLR framework [Cheng et al., 2020] is a powerful network that uses Contrastive learning to learn representations by maximizing the agreement between different augmented views of the same data example via Contrastive learning in the latent space. SimCLR learns representations by maximizing the agreement between different augmented views of the same data example via Contrastive learning in the latent space. For more details, I refer you to the excellent description by Aakash Nain and Thalles Silva. Fig. 2 Simple framework for contrastive learning of visual representations. Distribution-augmented contrastive learning In some cases, training deep one-class classifier results in a degenerate solution that maps all data into a single representation, which is called hypersphere collapse [Ruff et al., 2018]. The authors propose distribution-augmented contrastive learning, with the motivation of reducing uniformity across the hypersphere to allow separation from outliers. As shown in Figure 3, DistAug is used to increase the number of images. It’s not only learning to identify different instances from the original distribution, but also identifies the type of augmentation, such as rotation, to identify instances from different distributions. Fig. 3 Distribution-augmented contrastive learning Distribution augmentation (DistAug) Fig. 4 (a) When representations are uniform, isolating outliers is hard. (b) Reducing uniformity makes the boundary between inlier and outlier clear. (c ) Distribution augmentation allows inlier distribution more compact. Distribution augmentation (DistAug) training is a distribution augmentation approach for one-class contrast learning inspired by RotNet [Golan et al., 2018]. It does not model the training data distribution, but rather the sum of the training data augmented by rotation augmentations such as rotations and horizontal flips. As shown in figure 4, to isolate outliers, it is more effective to increase the distribution as in (c ) than to decrease the uniformity as in (b). The authors make it clear that their argument is not ``less uniformity is better for OCC.’’ Results From the table, we can see that the Distribution Augmentation contrastive learning method has improved the performance of previous studies in experimental tests of detection and localization in both object and texture categories. This experiment shows that methods that rely on geometric transformations are particularly effective in detecting anomalies in the “object” category, since they learn to represent visual objects. Experimental results using the MVTech dataset Figures 5 and 6 show the visualization of the localization of the defects appearing in the industrial products in the MVTec dataset. All the following figures show, from left to right, the defect input data of the test set, the ground-truth mask, and the heatmap visualization of the localization. Fig. 5 Visualization of localization using MVTech dataset Fig. 6 Visualization of localization using MVTech dataset Reference [Chen et al., 2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. ``A simple framework for contrastive learning of visual representations,’’ arXiv, abs/2002.05709, 2020 [Github][Sohn et al.,2020] Sohn, Kihyuk, C. Li, Jinsung Yoon, Minho Jin and T. Pfister. ``Learning and Evaluating Representations for Deep One-class Classification,’’ ArXiv, abs/2011.02578, 2020 [Gidaris et al., 2018] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. ``Unsupervised representation learning by predicting image rotations,’’ In Sixth International Conference on Learning Representations, 2018. [Ruff et al., 2018] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Muller, and Marius Kloft. ``Deep one-class classification,’’ In ¨ International conference on machine learning, pages 4393–4402, 2018 [Golan et al., 2018] Izhak Golan and Ran El-Yaniv. ``Deep anomaly detection using geometric transformations,’’ In Advances in Neural Information Processing Systems, pages 9758–9769, 2018. Past Paper Summary List Deep Learning method 2020: [DCTNet] Uncertainty Learning 2020: [DUL] Anomaly Detection 2020: [FND] One-Class Classification 2019: [DOC] 2020: [DROC] Image Segmentation 2018: [UOLO] 2020: [ssCPCseg] Image Clustering 2020: [DTC]
https://medium.com/swlh/exploring-important-feature-repressions-in-deep-one-class-classification-droc-d04a59558f9e
['Makoto Takamatsu']
2020-12-19 14:23:05.870000+00:00
['Machine Learning', 'Anomaly Detection', 'Artificial Intelligence', 'Computer Vision', 'Deep Learning']
Title Exploring Important Feature Repressions Deep OneClass ClassificationContent ICLR 2021 Photo niculcea florin Unsplash data routinely collect contains small amount anomalous data pleasing fact normal life defective product encountered factory production line medical data rare case presented paper medical society new discovery word collecting anomalous data costly task obvious anyone reasonable train normal data detect anomalous case spend lot money collect various pattern learning method training normal case dataset called oneclass classification various anomalous case aimed excluded actual case story Learning Evaluating Representations Deep Oneclass Classification Google Cloud AI presented published technical paper ICLR 2021 paper twostage framework deep oneclass classification composed stateoftheart selfsupervised representation learning followed generative discriminative oneclass classifier major contribution paper proposal novel distributionaugmented contrastive learning method framework learn better representation also permit building oneclass classifier faithful target task even made code available everyone Github Let’s see achieved explain essence DROC want know click DROC paper paper say paper anomaly detection approach twostage framework Deep Representation Oneclass Classification DROC proposed first stage training deep neural network selfsupervised learning obtain highlevel representation data mapping f generic highlevel latent representation obtained second stage mapping f obtained first stage used map data latent space OC Applies traditional oneclass classifier SVM KDE Sohn et al2020 Fig 1 Overview twostage framework building deep oneclass classifier first stage learning representation oneclass training distribution using selfsupervised learning method b second stage training oneclass classifier using learned representation novelty paper ・In order modify Contrastive learning Cheng et al 2020 suitable oneclass classification author proposes distribution augmentation contrast learning method Specifically system learns identifying type augmentation applied data using geometric transformation image Gidaris et al 2018 horizontal flip rotation 090180270 allows dealing image outlier anomalous data rotated Optimize selfsupervised loss function minimizes distance sample image using different data augmentation function maximizes distance sample different image using augmentation function reduces uniformity across hypersphere allows separation outlier ・The idea le uniformity better Oneclass Classification’ wrong fundamental tradeoff amount information uniformity representation identified often thought lower uniformity better Oneclass Classification’’ DistAug effectively shown fact true contrastive learning Contrastive learning Cheng et al 2020 LeKhac et al 2020 algorithm formulates task find similarity dissimilarity ML model first learns general representation image unlabeled dataset finetunes small dataset labeled image specific classification task Using approach machine learning model trained classify similar dissimilar image SimCLR framework Cheng et al 2020 powerful network us Contrastive learning learn representation maximizing agreement different augmented view data example via Contrastive learning latent space SimCLR learns representation maximizing agreement different augmented view data example via Contrastive learning latent space detail refer excellent description Aakash Nain Thalles Silva Fig 2 Simple framework contrastive learning visual representation Distributionaugmented contrastive learning case training deep oneclass classifier result degenerate solution map data single representation called hypersphere collapse Ruff et al 2018 author propose distributionaugmented contrastive learning motivation reducing uniformity across hypersphere allow separation outlier shown Figure 3 DistAug used increase number image It’s learning identify different instance original distribution also identifies type augmentation rotation identify instance different distribution Fig 3 Distributionaugmented contrastive learning Distribution augmentation DistAug Fig 4 representation uniform isolating outlier hard b Reducing uniformity make boundary inlier outlier clear c Distribution augmentation allows inlier distribution compact Distribution augmentation DistAug training distribution augmentation approach oneclass contrast learning inspired RotNet Golan et al 2018 model training data distribution rather sum training data augmented rotation augmentation rotation horizontal flip shown figure 4 isolate outlier effective increase distribution c decrease uniformity b author make clear argument le uniformity better OCC’’ Results table see Distribution Augmentation contrastive learning method improved performance previous study experimental test detection localization object texture category experiment show method rely geometric transformation particularly effective detecting anomaly “object” category since learn represent visual object Experimental result using MVTech dataset Figures 5 6 show visualization localization defect appearing industrial product MVTec dataset following figure show left right defect input data test set groundtruth mask heatmap visualization localization Fig 5 Visualization localization using MVTech dataset Fig 6 Visualization localization using MVTech dataset Reference Chen et al 2020 Ting Chen Simon Kornblith Mohammad Norouzi Geoffrey Hinton simple framework contrastive learning visual representations’’ arXiv abs200205709 2020 GithubSohn et al2020 Sohn Kihyuk C Li Jinsung Yoon Minho Jin Pfister Learning Evaluating Representations Deep Oneclass Classification’’ ArXiv abs201102578 2020 Gidaris et al 2018 Spyros Gidaris Praveer Singh Nikos Komodakis Unsupervised representation learning predicting image rotations’’ Sixth International Conference Learning Representations 2018 Ruff et al 2018 Lukas Ruff Robert Vandermeulen Nico Goernitz Lucas Deecke Shoaib Ahmed Siddiqui Alexander Binder Emmanuel Muller Marius Kloft Deep oneclass classification’’ ¨ International conference machine learning page 4393–4402 2018 Golan et al 2018 Izhak Golan Ran ElYaniv Deep anomaly detection using geometric transformations’’ Advances Neural Information Processing Systems page 9758–9769 2018 Past Paper Summary List Deep Learning method 2020 DCTNet Uncertainty Learning 2020 DUL Anomaly Detection 2020 FND OneClass Classification 2019 DOC 2020 DROC Image Segmentation 2018 UOLO 2020 ssCPCseg Image Clustering 2020 DTCTags Machine Learning Anomaly Detection Artificial Intelligence Computer Vision Deep Learning
1,396
Modularizing the logic of your Vue.js Application
As an application grows, it is, unfortunately, common to see poorly designed components, with a lot of duplicate code, business logic scattered across methods, complex logic embedded in the templates, and so on. The components become large, brittle, and hard to change and test. The application becomes increasingly hard to evolve, sometimes reaching a point where the developers are eager to start from scratch, preferring a costly and risky rewrite than handling the current application state. It doesn’t have to be that way. We can and should do better. In this article, we will discuss moving the bulk of the application’s business logic into a functional core that will be easy to reuse, easy to test and change, and which will lead to smaller, leaner, and more maintainable components. We will pick up from where we left in our previous article, so you might want to check that first if you still haven’t. Interfaces and Functional Modules instead of Classes When we discussed adopting TypeScript in Vue.js applications, we took a somewhat unconventional route. Instead of modeling our data around classes, we have defined very lean interfaces to add type-annotations to our data. We have only used the fields that make up our objects in the interfaces — we have not mentioned methods or any operation over the data yet. This article does not aim at doing an in-depth debate about Functional vs. Object-Oriented programming paradigms. Both have pros and cons, but I tend to prefer a functional style, because it is easier to follow and to test, in my opinion. Thus, we will use a functional approach to build our application core, and we will try to show how it leads to a modular, testable, and reusable codebase. We will continue developing the simplified Invoice application that we started in the previous article. Planning the app functionality Before we jump right into the code, let’s talk about what functionalities we need in our application. In a real scenario, we would probably receive the requirements from a task description developed by a product team, or, if working on a side-project that we fully control, we would define that ourselves. For our simple app, we will need ways to create and manipulate invoices. This will involve adding, removing, and changing line items, selecting products, and setting rates and quantities. We will also need a way to instantiate User and Product objects easily. As we did for the types definitions, we want a modular way of building these functionalities. Building our modules We will put our modules inside a modules directory under src . We will split the functionality into as many files as it is sensible to do, grouping related functionality into single modules. Let’s start with the User and Product modules: User module Product module These two modules are very simple and similar, but they serve as a container for all the functionality related to users or products we might need down the road. Even though it looks that we are repeating code, we should not try to unify these create functions in any way — that would cause coupling between unrelated concepts and would make the code harder to change. Notice how we have defined default values for all the parameters. This will allow us to call the create functions without passing arguments and still have a valid object of the appropriate type. One thing you might be concerned about the code above is that we are listing all of the fields as individual parameters. We only have a couple of arguments in each of the create function, but the number of parameters could grow a lot as we make our models more complex. We will ignore it for now, but we will revisit this when we discuss defining a clear application boundary in a future article. Even though we have declared the LineItem interface in the same file as the Invoice , we will use a separate file for the Invoice and LineItem modules. We could group the invoice and the line item modules using a directory, but we will keep it simple and flat for now. You can use any folder structure that suits your particular situation. The lineItem module will be pretty simple as well: LineItem module Let’s move on to the Invoice module now. It will be a more complex module, so we are going to stub out the functions before implementing them. Invoice module stub Developing the Invoice module with TDD When we modify the line items in an invoice, by adding, removing, or changing a line item, we have to recalculate the invoice total. This is critical data in our application — we cannot afford to have the wrong amount calculated for the invoice — so we should test the invoice module thoroughly. With our modular core logic, it will be pretty straightforward to add tests. When we scaffolded this app, we didn’t add any of the unit test features available, but vue-cli makes it very easy to add plugins to existing projects. We will use jest to write our tests, and we can add it to our project by running: $ vue add unit-jest That will take care of installing and configuring jest to work in a Vue project. Let’s write a few tests for our Invoice module. Invoice module tests These tests are a little bit lengthy, but they are easy to follow. We start by ensuring that our create function in the invoice module returns an empty invoice. Then we move on to test the other parts of our Invoice module. We have added a testData function to help creating objects used in the tests. In a production-grade application, we would add more tests, especially to cover edge cases, making sure our module would work in every possible scenario. But for this article, this is good enough. We should now run these tests: Failing tests As expected, the tests fail because we haven’t implemented our functions yet. Let’s do that now. Invoice module implementation We have created two helper functions to avoid repeating code. The first one was the calculateTotal function. It takes the invoice and returns the total amount. It does so by first calculating the subtotal for each line item, using a new function we have added to the LineItem module, then summing all the line item totals. Let's see what the LineItem module looks like now. Adding the calculateLineTotal function to the LineItem module The calculateLineTotal function is very simple. It just multiplies the rate by the quantity. Still, having it in a separate function makes our code easier to follow and easier to change. Back to the invoice module, we can see that the setLineItem helper function takes an invoice and a list of line items and then returns an updated invoice with the given line items and the calculated total amount. With these helper functions in place, implementing our public functions is very simple — they just need to generate the new list of line items (based on the operation) and use the helper functions to return an updated invoice. And now our tests pass! Tests now succeed Using the modules in a Vue component Let’s rewrite our createInvoice method in the HelloWorld.vue component, just to have a taste of how we use our modules in a component. Again, this is a contrived example, but it already looks better than before. We now have the objects with the appropriate type from the modules’ create functions (instead of having just the type inference). In a more realistic scenario, the user would be the authenticated user; the product would come from some selector that reads from a product list; the rate and quantity would be set in the UI using inputs; and it would be possible to add/remove/update line items directly in the UI. We will build those components in the next article. Wrapping up At this point, we can have a fair degree of confidence that our invoice related logic is working. We should probably add some more tests, but we have a great baseline to develop our invoice application. We have built a solid functional core for our application logic. We are not spreading the business rules across components and, when the time comes to wire this functionality up with the UI, the components will end up being a skinny layer to connect the user actions to our core modules. Let me know what you think of this approach in the comments! Shameless Plug: If you liked this article and there are openings in your company, I’m currently looking for a job as a Senior Full Stack Engineer. You can check my Linkedin and drop me a line at vinicius0026 at gmail dot com if you think I’m a good fit. Cheers! 😃
https://medium.com/swlh/modularizing-the-logic-of-your-vue-js-application-5b920e17c25e
['Vinicius Teixeira']
2020-06-02 16:16:51.759000+00:00
['JavaScript', 'Typescript', 'Vuejs']
Title Modularizing logic Vuejs ApplicationContent application grows unfortunately common see poorly designed component lot duplicate code business logic scattered across method complex logic embedded template component become large brittle hard change test application becomes increasingly hard evolve sometimes reaching point developer eager start scratch preferring costly risky rewrite handling current application state doesn’t way better article discus moving bulk application’s business logic functional core easy reuse easy test change lead smaller leaner maintainable component pick left previous article might want check first still haven’t Interfaces Functional Modules instead Classes discussed adopting TypeScript Vuejs application took somewhat unconventional route Instead modeling data around class defined lean interface add typeannotations data used field make object interface — mentioned method operation data yet article aim indepth debate Functional v ObjectOriented programming paradigm pro con tend prefer functional style easier follow test opinion Thus use functional approach build application core try show lead modular testable reusable codebase continue developing simplified Invoice application started previous article Planning app functionality jump right code let’s talk functionality need application real scenario would probably receive requirement task description developed product team working sideproject fully control would define simple app need way create manipulate invoice involve adding removing changing line item selecting product setting rate quantity also need way instantiate User Product object easily type definition want modular way building functionality Building module put module inside module directory src split functionality many file sensible grouping related functionality single module Let’s start User Product module User module Product module two module simple similar serve container functionality related user product might need road Even though look repeating code try unify create function way — would cause coupling unrelated concept would make code harder change Notice defined default value parameter allow u call create function without passing argument still valid object appropriate type One thing might concerned code listing field individual parameter couple argument create function number parameter could grow lot make model complex ignore revisit discus defining clear application boundary future article Even though declared LineItem interface file Invoice use separate file Invoice LineItem module could group invoice line item module using directory keep simple flat use folder structure suit particular situation lineItem module pretty simple well LineItem module Let’s move Invoice module complex module going stub function implementing Invoice module stub Developing Invoice module TDD modify line item invoice adding removing changing line item recalculate invoice total critical data application — cannot afford wrong amount calculated invoice — test invoice module thoroughly modular core logic pretty straightforward add test scaffolded app didn’t add unit test feature available vuecli make easy add plugins existing project use jest write test add project running vue add unitjest take care installing configuring jest work Vue project Let’s write test Invoice module Invoice module test test little bit lengthy easy follow start ensuring create function invoice module return empty invoice move test part Invoice module added testData function help creating object used test productiongrade application would add test especially cover edge case making sure module would work every possible scenario article good enough run test Failing test expected test fail haven’t implemented function yet Let’s Invoice module implementation created two helper function avoid repeating code first one calculateTotal function take invoice return total amount first calculating subtotal line item using new function added LineItem module summing line item total Lets see LineItem module look like Adding calculateLineTotal function LineItem module calculateLineTotal function simple multiplies rate quantity Still separate function make code easier follow easier change Back invoice module see setLineItem helper function take invoice list line item return updated invoice given line item calculated total amount helper function place implementing public function simple — need generate new list line item based operation use helper function return updated invoice test pas Tests succeed Using module Vue component Let’s rewrite createInvoice method HelloWorldvue component taste use module component contrived example already look better object appropriate type modules’ create function instead type inference realistic scenario user would authenticated user product would come selector read product list rate quantity would set UI using input would possible addremoveupdate line item directly UI build component next article Wrapping point fair degree confidence invoice related logic working probably add test great baseline develop invoice application built solid functional core application logic spreading business rule across component time come wire functionality UI component end skinny layer connect user action core module Let know think approach comment Shameless Plug liked article opening company I’m currently looking job Senior Full Stack Engineer check Linkedin drop line vinicius0026 gmail dot com think I’m good fit Cheers 😃Tags JavaScript Typescript Vuejs
1,397
First Confirmed Case of Coronavirus Reinfection Doesn’t Mean We’re All Doomed
First Confirmed Case of Coronavirus Reinfection Doesn’t Mean We’re All Doomed The case is one in 23 million Photo: Li Zhihua/China News Service/Getty Images Scientists in Hong Kong reported today the first confirmed case of reinfection with SARS-CoV-2, the virus that causes Covid-19. Since the beginning of the pandemic, there have been concerns about long-term immunity to the novel coronavirus, and several possible cases of reinfection were reported in the media. But until now, none were confirmed scientifically. The question has always been whether reports of a person testing positive, recovering from the virus and testing negative, and then testing positive again weeks or months later is because of faulty testing, “dead” viral RNA lingering in the body, a reemergence of the same infection, or a genuine instance of reinfection. The Hong Kong report is the first to use genetic testing to confirm the two cases in the same person were caused by slightly different strains of the virus. According to a manuscript leaked by South China Morning Post reporter Lilian Cheng on Twitter, the patient, a 33-year-old man with no preexisting conditions, first got sick in March, presenting with a cough, sore throat, fever, and headache. He tested positive for SARS-CoV-2 on March 26 and was monitored in the hospital for two weeks (standard protocol for patients in Hong Kong regardless of disease severity) until he was discharged on April 14 following two negative tests. The second time he tested positive was after returning to Hong Kong from Spain on August 15, when he was screened at the airport as part of standard reentry procedures. This time, however, he was completely asymptomatic and never developed a cough, fever, or any other signs of Covid-19. Scientists at the University of Hong Kong sequenced the genome of the virus from the tests taken in March and August and discovered that they differed in several key areas, indicating that they were two different strains of SARS-CoV-2. Specifically, 24 nucleotides — the “building blocks” that make up the virus’s RNA — were different between the two infections. The August strain was a variation of the virus known to be circulating primarily in western Europe, suggesting the man was reinfected while abroad. While the news of a legitimate reinfection is worrying, virologists and immunologists took to Twitter to reassure people that this doesn’t mean we’re all doomed. In fact, scientists have been expecting reinfection to occur all along. Akiko Iwasaki, PhD, a professor of immunobiology at Yale University, tweeted, “This is no cause for alarm — this is a textbook example of how immunity should work.” Virus-specific antibodies created by the immune system are central to the question of immunity, and varying reports have emerged over the past few months about the quantity, quality, and duration of antibodies produced in response to SARS-CoV-2. The vast majority of people who’ve recovered from Covid-19 do develop antibodies to the virus, and typically the more severe the infection, the more antibodies they produce, providing them with protection against reinfection. However, according to the leaked manuscript, which is under review at the academic journal Clinical Infectious Diseases, the Hong Kong patient had no detectable antibodies after his first infection. It’s possible this man had a very mild initial case of Covid-19 or an abnormal immune response that resulted in fewer antibodies being produced. Either way, the absence of antibodies after the man’s first infection could explain how he became infected with a different strain a second time. In contrast, a preprint study published earlier this month and covered in the New York Times reported that three people who tested positive for antibodies were spared in a large outbreak that infected 104 people on a fishing boat from Seattle. Given that most people do develop antibodies to the virus, Angela Rasmussen, PhD, a virologist at Columbia University, tweeted that the Hong Kong case “doesn’t have major implications for immunity since most people DO have IgG [antibodies] after recovering from infection.” The fact that the second case was asymptomatic is also a good sign because it suggests that there is some protection (perhaps from T cells) that made the second infection less severe. “While immunity was not enough to block reinfection, it protected the person from disease,” Iwasaki tweeted. What’s more, the man developed a robust antibody response after the second infection. Finally, it’s important to remember that this is one confirmed case of reinfection out of the more than 23 million cases of Covid-19 worldwide, and one in 23 million is pretty good odds. As Rasmussen pointed out on Twitter, “How many people were screened to find this single case of reinfection? There’s no indication that this is anything other than a rare case of someone getting reinfected after not developing immunity to the first infection.”
https://coronavirus.medium.com/first-confirmed-case-of-coronavirus-reinfection-doesnt-mean-we-re-all-doomed-85bde2ab9e72
['Dana G Smith']
2020-08-24 22:12:30.261000+00:00
['Hong Kong', 'Immunity', 'Health', 'Covid 19', 'Coronavirus']
Title First Confirmed Case Coronavirus Reinfection Doesn’t Mean We’re DoomedContent First Confirmed Case Coronavirus Reinfection Doesn’t Mean We’re Doomed case one 23 million Photo Li ZhihuaChina News ServiceGetty Images Scientists Hong Kong reported today first confirmed case reinfection SARSCoV2 virus cause Covid19 Since beginning pandemic concern longterm immunity novel coronavirus several possible case reinfection reported medium none confirmed scientifically question always whether report person testing positive recovering virus testing negative testing positive week month later faulty testing “dead” viral RNA lingering body reemergence infection genuine instance reinfection Hong Kong report first use genetic testing confirm two case person caused slightly different strain virus According manuscript leaked South China Morning Post reporter Lilian Cheng Twitter patient 33yearold man preexisting condition first got sick March presenting cough sore throat fever headache tested positive SARSCoV2 March 26 monitored hospital two week standard protocol patient Hong Kong regardless disease severity discharged April 14 following two negative test second time tested positive returning Hong Kong Spain August 15 screened airport part standard reentry procedure time however completely asymptomatic never developed cough fever sign Covid19 Scientists University Hong Kong sequenced genome virus test taken March August discovered differed several key area indicating two different strain SARSCoV2 Specifically 24 nucleotide — “building blocks” make virus’s RNA — different two infection August strain variation virus known circulating primarily western Europe suggesting man reinfected abroad news legitimate reinfection worrying virologist immunologist took Twitter reassure people doesn’t mean we’re doomed fact scientist expecting reinfection occur along Akiko Iwasaki PhD professor immunobiology Yale University tweeted “This cause alarm — textbook example immunity work” Virusspecific antibody created immune system central question immunity varying report emerged past month quantity quality duration antibody produced response SARSCoV2 vast majority people who’ve recovered Covid19 develop antibody virus typically severe infection antibody produce providing protection reinfection However according leaked manuscript review academic journal Clinical Infectious Diseases Hong Kong patient detectable antibody first infection It’s possible man mild initial case Covid19 abnormal immune response resulted fewer antibody produced Either way absence antibody man’s first infection could explain became infected different strain second time contrast preprint study published earlier month covered New York Times reported three people tested positive antibody spared large outbreak infected 104 people fishing boat Seattle Given people develop antibody virus Angela Rasmussen PhD virologist Columbia University tweeted Hong Kong case “doesn’t major implication immunity since people IgG antibody recovering infection” fact second case asymptomatic also good sign suggests protection perhaps cell made second infection le severe “While immunity enough block reinfection protected person disease” Iwasaki tweeted What’s man developed robust antibody response second infection Finally it’s important remember one confirmed case reinfection 23 million case Covid19 worldwide one 23 million pretty good odds Rasmussen pointed Twitter “How many people screened find single case reinfection There’s indication anything rare case someone getting reinfected developing immunity first infection”Tags Hong Kong Immunity Health Covid 19 Coronavirus
1,398
How to manage files in Google Drive with Python
As a Data Analyst, most of the time I need to share my extracted data to my product manager/stakeholder and Google Drive is always my first choice. One major issue over here is I have to do it on weekly or even daily basis, which is very boring. All of us hate repetitive tasks, including me. Fortunately, Google provides API for most of its service. We are going to use Google Drive API and PyDrive to manage our files in Google Drive. Using Google Drive API Before going into coding, you should get Google Drive API access ready. I have wrote an article on how to get your Google Service Access through Client ID. You should be able to get JSON file that contain the secret key to access your Google Drive. Getting Started with PyDrive Installing PyDrive We will use the python package manager to install PyDrive pip install pydrive Connecting to Google Drive PyDrive has made the authentication very easy with just 2 lines of code. You have to rename the JSON file to “client_secrets.json” and place it in the same directory with your script. gauth.LocalWebserverAuth() will fire up the browser and ask for your authentication. Choose the google account you want to access and authorize the app. drive = GoogleDrive(gauth) create a Google Drive object to handle file. You will be using this object to list and create file. Listing and uploading file in Google Drive Line 1 to line 4 will get you the list of files/folders in your Google Drive. It will also give you the detail of those files/folders. We capture the file ID of the folder you would like to upload files to. In this case, To Share is the folder I would upload the files to. File ID is important as Google Drive uses file ID to specific the location instead of using file path. drive.CreateFile() accepts metadata(dict.) as input to initialize a GoogleDriveFile. I initialized a file with "mimeType" : "text/csv" and "id" : fileID . This id will specific where the file will be uploaded to. In this case, the file will be uploaded to the folder To Share . file1.SetContentFile("small_file.csv") will open the specified file name and set the content of the file to the GoogleDriveFile object. At this moment, the file is still not uploaded. You will need file1.Upload() to complete the upload process. Accessing files in folders How if you would like to upload files into folder inside a folder? Yes, again you would need the File ID! You can use the ListFile to get the files but this time change the root to file ID . file_list = drive.ListFile({'q': "'<folder ID>' in parents and trashed=false"}).GetList() Now we can get into folder picture inside the folder To Share . Other than uploading files to Google Drive, we can delete them too. First, create a GoogleDriveFile with the specified file ID. Use Trash() to move file to trash. You can also use Delete() to delete the file permanently. Now you have learnt how to manage your Google Drive files with Python. I hope this article is useful to you. Do drop me a comment if I made any mistake or typo. You can view the complete script in my Github. Cheers!
https://towardsdatascience.com/how-to-manage-files-in-google-drive-with-python-d26471d91ecd
['June Tao Ching']
2020-09-07 15:28:50.569000+00:00
['Python', 'Google', 'Google Drive', 'Data Science', 'Programming']
Title manage file Google Drive PythonContent Data Analyst time need share extracted data product managerstakeholder Google Drive always first choice One major issue weekly even daily basis boring u hate repetitive task including Fortunately Google provides API service going use Google Drive API PyDrive manage file Google Drive Using Google Drive API going coding get Google Drive API access ready wrote article get Google Service Access Client ID able get JSON file contain secret key access Google Drive Getting Started PyDrive Installing PyDrive use python package manager install PyDrive pip install pydrive Connecting Google Drive PyDrive made authentication easy 2 line code rename JSON file “clientsecretsjson” place directory script gauthLocalWebserverAuth fire browser ask authentication Choose google account want access authorize app drive GoogleDrivegauth create Google Drive object handle file using object list create file Listing uploading file Google Drive Line 1 line 4 get list filesfolders Google Drive also give detail filesfolders capture file ID folder would like upload file case Share folder would upload file File ID important Google Drive us file ID specific location instead using file path driveCreateFile accepts metadatadict input initialize GoogleDriveFile initialized file mimeType textcsv id fileID id specific file uploaded case file uploaded folder Share file1SetContentFilesmallfilecsv open specified file name set content file GoogleDriveFile object moment file still uploaded need file1Upload complete upload process Accessing file folder would like upload file folder inside folder Yes would need File ID use ListFile get file time change root file ID filelist driveListFileq folder ID parent trashedfalseGetList get folder picture inside folder Share uploading file Google Drive delete First create GoogleDriveFile specified file ID Use Trash move file trash also use Delete delete file permanently learnt manage Google Drive file Python hope article useful drop comment made mistake typo view complete script Github CheersTags Python Google Google Drive Data Science Programming
1,399
How to Find the Time to Pursue Your Passions
How to Find the Time to Pursue Your Passions Building a meaningful side hustle while working a 9-to-5 Photo by Blake Cheek on Unsplash While my girlfriend and I were getting ready for bed yesterday, she rolled over and quietly asked me, “What time are you getting up tomorrow?”. I could already tell where this conversation was headed. “5:30", I responded. To be honest, I’m not sure why she continues to ask me this question because I get up at the same time every single day. Nonetheless, she picked up her phone to check the time: 11 pm. I could tell she was doing some mental math in her head, silently adding up home much sleep I’ll get if I do get up at 5:30. After coming to the answer, she then tried to reason with me. “I think you should sleep until 6:30. That’s still an hour and a half before you have to work — think of everything you could get done!” She was right — sometimes I do actually need a little more sleep. But the other part of her argument is the exact reason I get up at 5:30 in the first place. “Yeah, but if I don’t get up at 5:30, I won't have enough time to get everything done that I want to throughout the day.” Getting up early allows me to focus on the things I want to focus on when I’m at my best. Trying to write an article after a full day of work is like beating my head against the wall — it never ends well. Getting up early creates an opportunity to give my best effort to the things I care about most. Before the stress of the day begins, I can devote 2.5 hours to myself — journaling, reading, writing, and exercising. And when I finally log onto my 9-to-5 job at 8 am, I feel like I've already accomplished so much. The Biggest Source of Underutilized Time I get it, finding the time to pursue what you love is hard. You’re swamped with work, family, exercise, or (let’s be honest) Netflix. The truth is, if you can’t find time to work, you won’t ever be able to pursue what you want. Most of my evenings after a long day are filled with exercise, family time, dinner, and a little relaxation. One of the last things I want to do is work some more. But the mornings are the complete opposite. They’re some of my most creative and where 80% of my work is done. The biggest source of underutilized time is in the morning. No one actually likes getting up early, but there’s a reason why some of the most successful people in the world do it. Benjamin Franklin once said, “The early morning has gold in its mouth.”. And Aristotle, the famous greek philosopher said this about the mornings: “It is well to be up before daybreak, for such habits contribute to health, wealth, and wisdom.” Success requires work. And if you can’t find the time to work, you won’t be able to build anything successfully. If you don’t want to put in the work at night, trying waking up early. Sure, you may have to go to sleep a bit earlier, but I guarantee that you’ll feel invigorated and more creative than you ever have before. Photo by Chris Curry on Unsplash Waking Up Early Makes You Less Tired “Okay, I get it,” my girlfriend responded after I told her why I get up at 5:30. “I just want to make sure you’re getting enough sleep.” This is one of the things I love about her — she’s constantly worried about me. I reassured her that I get plenty of sleep, and then followed up with something I hadn’t even recognized before, it just kind of spilled out of me. “When I sleep in, I end up feeling more tired, not less.” I know, it sounds counterintuitive. but bear with me. Let’s do the math: To bed at 11 pm Wake up at 5:30 am Total sleep time: 6.5 hours (if I fall asleep at 11 on the dot) According to the CDC, the average adult needs at least 7 hours of sleep per night. Based on the above, I’m nearly spot on. More often than not, when we’re “tired” it’s due to oversleeping, not undersleeping. Sure, maybe you could say I need a little more sleep. But when I get up early knowing that I get to work on something I love — I spring out of bed. It’s addicting starting my day by working towards the person I want to become and building something ridiculously cool.
https://medium.com/change-your-mind/how-to-find-the-time-to-pursue-your-passions-283d7b2dbb33
['Devin Arrigo']
2020-12-07 15:11:39.524000+00:00
['Advice', 'Writing', 'Self', 'Success', 'Creativity']
Title Find Time Pursue PassionsContent Find Time Pursue Passions Building meaningful side hustle working 9to5 Photo Blake Cheek Unsplash girlfriend getting ready bed yesterday rolled quietly asked “What time getting tomorrow” could already tell conversation headed “530 responded honest I’m sure continues ask question get time every single day Nonetheless picked phone check time 11 pm could tell mental math head silently adding home much sleep I’ll get get 530 coming answer tried reason “I think sleep 630 That’s still hour half work — think everything could get done” right — sometimes actually need little sleep part argument exact reason get 530 first place “Yeah don’t get 530 wont enough time get everything done want throughout day” Getting early allows focus thing want focus I’m best Trying write article full day work like beating head wall — never end well Getting early creates opportunity give best effort thing care stress day begin devote 25 hour — journaling reading writing exercising finally log onto 9to5 job 8 feel like Ive already accomplished much Biggest Source Underutilized Time get finding time pursue love hard You’re swamped work family exercise let’s honest Netflix truth can’t find time work won’t ever able pursue want evening long day filled exercise family time dinner little relaxation One last thing want work morning complete opposite They’re creative 80 work done biggest source underutilized time morning one actually like getting early there’s reason successful people world Benjamin Franklin said “The early morning gold mouth” Aristotle famous greek philosopher said morning “It well daybreak habit contribute health wealth wisdom” Success requires work can’t find time work won’t able build anything successfully don’t want put work night trying waking early Sure may go sleep bit earlier guarantee you’ll feel invigorated creative ever Photo Chris Curry Unsplash Waking Early Makes Less Tired “Okay get it” girlfriend responded told get 530 “I want make sure you’re getting enough sleep” one thing love — she’s constantly worried reassured get plenty sleep followed something hadn’t even recognized kind spilled “When sleep end feeling tired less” know sound counterintuitive bear Let’s math bed 11 pm Wake 530 Total sleep time 65 hour fall asleep 11 dot According CDC average adult need least 7 hour sleep per night Based I’m nearly spot often we’re “tired” it’s due oversleeping undersleeping Sure maybe could say need little sleep get early knowing get work something love — spring bed It’s addicting starting day working towards person want become building something ridiculously coolTags Advice Writing Self Success Creativity