Predicting Demand at Hundreds of Stores with Multi-Task Learning and Good Features
At Afresh, we are building technology that helps grocers reduce food waste and increase their profitability through better store-level forecasting, ordering, and operations for fresh food. Every year, one-third of the food produced globally goes to waste. In the United States, 40 percent of all food waste occurs at the retail level, with the highest occurrence in fresh food departments. Our goal is to apply technology to optimize the supply chain and make fresh food more accessible. Fresh food is complex, and the myriad variables that can impact demand require equally complex modeling. We are always looking for new features to feed into our machine learning models to improve the performance and overall robustness of our system. Every day, we recommend ordering decisions for tens of thousands of store-items (produce, meat, etc.) at each of many, many grocery stores. Part of this system involves building a large multi-task demand forecasting model for all the store-items in a department (think produce, meat, etc.) using many features, including these categorical features: item_id (which item?), store_id (which store?), and store_item_id (which store-item? a cross-product of store_id x item_id). Sales patterns vary by item, by store, and indeed by each store-item, so these are all key features. An important property of our demand forecasting model is that it generalizes to new stores (and new items) that were not seen at training time. But what if a new store opens, making store_id and store_item_id unavailable in the training set when the model is built? To address this, we can represent the store and department using new features like the size in sqft of the department, or geographic information like city, longitude, and latitude. 
One problem with these new features is that they tend to be constant over time, and the machine learning model may suss them out as semi-categorical features (e.g. 2,386 sqft must be store_id = 27), which has similar disadvantages to using the identifiers. The upshot is that forecasting on a new store, not found in the training set, with a previously unseen sqft value, may degrade the forecast quality for items in the new store. Another idea is to use a rolling (28 day) average of department sales (dept_sales) as a proxy for store_id. The rolling average has the advantage of changing (naturally jittering) day by day, with store department sales criss-crossing each other over time, while still carrying a lot of information about the sales nature of the department. Here is a chart of sales at 6 stores over an almost 2 year period. Note how an individual store department’s rolling average sales vary over time, and how different store department sales criss-cross each other. Our demand forecasting model consists of an ensemble of XGBoost gradient boosted tree (GBT) and deep neural network (DNN) models. Within the XGBoost trees, here are the 4 most branched-upon (of many!) features, which stands in as a heuristic for feature importance: dom (day-of-month), woy (week-of-year), store_id, store_item_id. Clearly store_id and store_item_id are very important features. Replacing them with dept_sales (department sales) in a newly trained model yields these 3 most branched-upon features: dom (day-of-month), woy (week-of-year), dept_sales. So dept_sales looks to be a good stand-in for store identifiers in the model, at least in terms of how the model weights it. But what about overall model performance? We can measure this by relative L1 error, and we want the ratio of forecast sales to actual test-data sales to be close to 1.0 (a perfect forecast). 
We also don’t want the standard deviation of the store-item ratios to vary much (all store-item ratios at 1.0, giving a stdev of 0.0, is best!). Per the table and figure, replacing store identifiers with dept_sales actually improved model performance slightly, though not by more than the margin of error needed to declare an actual, significant decrease in relative L1 error. But it clearly improved robustness by allowing new stores, not found in the training set, to be forecast on. When this model with dept_sales was tested on items at a new store, it performed well, though not quite as well as on the old store-items. In summary, the dept_sales feature is an important addition to our arsenal for forecasting demand, which in turn leads to better ordering at the store level, and significant savings when it comes to food waste. We are growing our team rapidly and are looking for passionate, talented engineers to help us solve more problems like this. If you are interested in joining, take a look at our current openings.
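A 28-day rolling department-sales feature like the one described above can be sketched in a few lines of pandas. This is a minimal sketch: the frame, column names, and toy values are assumptions for illustration, not Afresh’s actual schema.

```python
import pandas as pd

# Toy daily sales for two stores; columns are illustrative, not Afresh's schema.
sales = pd.DataFrame({
    "store_id": [1] * 30 + [2] * 30,
    "date": list(pd.date_range("2020-01-01", periods=30)) * 2,
    "daily_dept_sales": list(range(30)) + list(range(100, 130)),
})

# 28-day rolling average of department sales, computed per store.
# min_periods=1 means a newly opened store gets a value from day one,
# which is exactly the generalization-to-new-stores property we want.
sales["dept_sales"] = (
    sales.groupby("store_id")["daily_dept_sales"]
    .transform(lambda s: s.rolling(window=28, min_periods=1).mean())
)
```

Because the rolling average only needs recent sales, it is available for a brand-new store after a single day of trading, unlike store_id or store_item_id.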
https://medium.com/afresh-engineering/predicting-demand-at-hundreds-of-stores-with-multi-task-learning-and-good-features-626bbd88ab20
['Afresh Engineering']
2020-12-18 00:43:31.897000+00:00
['Big Data', 'Machine Learning', 'Supply Chain', 'AI', 'Engineering']
Pompeo’s Wife Diagnosed with COVID-19 in the Middle of the Month
A columnist covering political developments in the Greater China region, technology and gadgets, the media industry, parenting, and other interesting topics.
https://medium.com/@frederickyeung-59743/%E8%93%AC%E4%BD%A9%E5%A5%A7%E5%A6%BB%E5%AD%90%E6%9C%88%E4%B8%AD%E7%A2%BA%E8%A8%BA%E6%96%B0%E5%9E%8B%E8%82%BA%E7%82%8E-735db2bb94c4
['C Y S']
2020-12-25 01:25:21.600000+00:00
['USA', 'Government']
Towards Data Science
Welcome to the 3rd article of the “A Journey through XGBoost” series. Up to now, we have completed 2 milestones. So far, we have discussed how to set up the system to run XGBoost on our own computers, done a classification task with XGBoost, and created a small (but useful) web app to communicate our results with end-users who don’t have much technical knowledge. If you are here for the very first time, I recommend reading the first two articles of the “A Journey through XGBoost” series. Here are the links. More specifically, today we will do a regression task on the “Boston house-prices” dataset with XGBoost. Here are the topics we discuss today: Form a regression problem; Build an XGBoost regression model (Scikit-learn compatible API); Describe the RMSE and R-squared metrics; Create prediction error and residuals plots; Explain XGBoost Regressor hyperparameters; XGBoost’s objective function; Apply L2 regularization to our XGBoost model. The Boston house-prices dataset The “Boston house-prices” dataset is a built-in dataset in Scikit-learn. To access the data, all you need to do is call the load_boston() function and assign it to a variable called data, which is a Python object. Then we call various properties of that object to get X (feature matrix), y (target vector), and the column names. When we write the code, you will see how to do that. For now, just look at the structure and variable information of the “Boston house-prices” dataset. The first 5 rows of the “Boston house-prices” dataset (Image by author) The following image contains the variable information of the dataset returned by the Pandas DataFrame info() method. Variable information on the “Boston house-prices” dataset (Image by author) The dataset has no missing values. All the values are numerical. So, no preprocessing step is required and the dataset is ready to use. 
This dataset has 506 observations and 13 features (not including the “target” column). The target column (which can be accessed using the target attribute of data) contains the median house prices. To find more information about this dataset, please read its documentation. Define the problem Based on CRIM, ZN, …, LSTAT, we want to predict the median house prices of given houses (new instances). This is a regression task because the model predicts a continuous-valued output (house prices are continuous values). The algorithm we use to solve this regression problem is XGBoost, and the XGBoost model for regression is called XGBRegressor. So, we will build an XGBoost model for this regression problem and evaluate its performance on test data (unseen data/new instances) using the Root Mean Squared Error (RMSE) and R-squared (R², the coefficient of determination). We will also use various graphical techniques such as the prediction error plot, the residuals plot, and the distribution of residuals to evaluate the regression model and verify its assumptions. Let’s get hands-on experience by writing the Python code to build an XGBoost regression model on the “Boston house-prices” dataset. Building the XGBoost regression model Here, we use the XGBoost Scikit-learn compatible API. “Scikit-learn compatible” means that you can use the Scikit-learn .fit() / .predict() paradigm and almost all other Scikit-learn classes with XGBoost. Here is the code. (Code Snippet-1) The output of the above code segment is: The output of Code Snippet-1 Note how I have used several print() functions to print multiple values and graphs in the same output. To create the prediction error and residuals plots, I have used only two lines of code. Here, I have taken advantage of the Yellowbrick Python library. Yellowbrick is an excellent machine learning visualization library. 
I recommend using it because you can make fancy visualizations with just one or two lines of code, and it is fully compatible with Scikit-learn. If you’re interested, read its documentation. Our model returns an RMSE value of 2.86 (in y-units) for the test data. Is that a good value? To find out, let’s look at some statistical measures of the target column (house prices). Statistical measures of the target column With a standard deviation of 9.197 and a mean of 22.53, the RMSE value (2.86) that we got is very good. The smaller the RMSE value, the better the model. On average, the price predictions of our model are 2.86 units away from the actual values. The R² value for our model is 0.89. It means that 89% of the variability observed in house prices is captured by our model, and the other 11% is due to other factors and, of course, randomness! As graphical methods, we can use the prediction error plot, the residuals plot and the distribution of residuals to evaluate our regression model and verify its assumptions. Prediction error plot: We can see that most of the points are on a straight line. We can compare this plot against the 45-degree line, where the prediction exactly matches the actual value. In general, the predictions follow the actual house prices. Residuals plot: We cannot see any pattern between predictions and residuals, so we can verify that the residuals are uncorrelated or independent. This is a good sign for our model. Distribution of residuals plot: By looking at this plot, we can verify that the residuals (actual values minus predicted values) are approximately normally distributed. 
The model we created follows the standard regression assumptions. Now, it’s time to describe the XGBRegressor() hyperparameters. We have specified 7 hyperparameters. max_depth=3: Here, XGBoost uses decision trees as base learners. By setting max_depth=3, each tree is grown to a maximum depth of 3 and stops there. n_estimators=100: There are 100 trees (individual models) in the ensemble. objective=’reg:squarederror’: The name of the loss function used in our model. reg:squarederror is the standard option for regression in XGBoost. You may also use reg:linear, which gives the same result, but reg:linear is now deprecated and will be removed in a future version. booster=’gbtree’: This is the type of base learner the model uses in every round of boosting. ‘gbtree’ is the XGBoost default base learner. With booster=’gbtree’, the XGBoost model uses decision trees, which is the best option for non-linear data. n_jobs=2: Use 2 cores of the processor to do the parallel computations needed to run XGBoost. random_state=1: Controls the randomness involved in creating trees. You may use any integer. 
By specifying a value for random_state, you will get the same result across different executions of your code. learning_rate=0.05: Shrinks the contribution of each tree in every round of boosting. Decreasing learning_rate helps prevent overfitting. XGBoost’s objective function The objective function of XGBoost measures how far a prediction is from the actual value. Our goal is to find a model that gives the minimum value of the objective function. XGBoost’s objective function consists of two parts: the loss function and the regularization term. Mathematically, this can be represented as: obj = Σ_i l(y_i, ŷ_i) + Σ_k Ω(f_k), the loss summed over predictions plus the regularization summed over trees. XGBoost’s objective function (Image by author) The loss function For a regression problem, the loss function is the Mean Squared Error (MSE). For a classification problem, the loss function is the Log Loss. The hyperparameter values for the common loss functions in XGBoost are: reg:squarederror or reg:linear — for regression; binary:logistic — for binary classification; multi:softprob — for multi-class classification. The regularization term Regularization can be defined as the “control on model complexity”. We need to create a model considering both accuracy and simplicity. The regularization term is a penalty term to prevent overfitting the model. The main difference between XGBoost and other tree-based models is that XGBoost’s objective function includes a regularization term. The regularization parameters in XGBoost are: gamma: The default is 0. Values of less than 10 are standard. Increasing the value prevents overfitting. reg_alpha: L1 regularization on leaf weights. Larger values mean more regularization and help prevent overfitting. The default is 0. reg_lambda: L2 regularization on leaf weights. Increasing the value prevents overfitting. 
The default is 1. Apply L2 regularization to our XGBoost model Now, we apply L2 regularization to the XGBoost model we created above. We will try 4 different values for the reg_lambda hyperparameter, which we can do with a simple for loop. (Code Snippet-2) The output is: L2 regularization effect on our XGBoost model Here, we can notice that as the value of lambda increases, the RMSE increases and the R-squared value decreases. Summary So far, we have completed 3 milestones of the XGBoost series. Today, we performed a regression task with XGBoost’s Scikit-learn compatible API. As we did in the classification problem, we can also perform regression with XGBoost’s non-Scikit-learn-compatible API. In the next article, I will discuss how to perform cross-validation with XGBoost. Stay tuned for the updates! Thanks for reading! This tutorial was designed and created by Rukshan Pramoditha, the author of the Data Science 365 blog. Read my other articles at https://rukshanpramoditha.medium.com 2021–03–11
https://towardsdatascience.com/a-journey-through-xgboost-milestone-3-a5569c72d72b
['Rukshan Pramoditha']
2021-03-11 13:40:58.270000+00:00
['Regression', 'Xgboost', 'Data Science', 'Supervised Learning', 'Machine Learning']
Losing a Word
Losing a Word Pic by Marquitta Spagnolo (CC0) on Pixy Losing a word is like losing someone Except that a word can be replaced Whereas someone usually cannot Is there a word that cannot be replaced in any language or dialect I propose the word love Imagine life without it I like you I like you very much I adore you I more than that you I’m in something with you There’s a word missing How about using crave I crave you It’s awful I’m smitten with you It’s not enough I’m besotted with you Oh my God I cherish you Come on I worship you It’s better but no I idolize you Same thing I treasure you You and everyone else I prize you How much am I worth You see There’s a word missing I’d feel like I’ve lost someone if I couldn’t use and mean the word love Patrick M. Ohana
https://medium.com/illumination/losing-a-word-b99059b4b9d3
['Patrick M. Ohana']
2020-12-18 07:24:15.135000+00:00
['Illumination', 'Word', 'Love', 'Poetry', 'Meditation']
Mama Monologues — Day 25 of Self Quarantine
My husband and kiddo are currently in the shower and this is the first moment I’ve been “alone” in our home in about 23 days. What am I doing with this glorious time, you ask? Sitting in silence, having myself a little mini vacay… well sort of. Toddlers aren’t quiet, especially when they have water to play with (and a Dada to scream at — so fun). And I’m still sitting in my PJs from last night (it’s currently 6:19PM). But I will take what I can get, and live my best life in this moment (meaning I will sit by my favorite window and think and drink my Rose Cider with my essential oils going, while writing what comes to my whirlwind mind). These days are so strange. I know we all feel it. And I honestly can’t believe I haven’t been alone in our home in over 3 weeks. It’s not that my husband hasn’t been willing to keep our kiddo entertained, it’s just that life’s been crazy — everything has been absolutely mind boggling and mind blowingly crazy. And while I’m unnecessarily explaining my thoughts (I do that… sometimes I hate that I do it, but other times it brings connection, which I’m down for — so here I am), I’ll also say that my “best life” isn’t usually away from my people. It’s not usually away from my perfectly precious (and a little bit psycho — if you sang the song as you read that, we can absolutely be friends) toddler and dreamy hubby. But today… oh yes, today I am just praying my husband can tolerate my daughter’s bossy-ness just a wee bit longer than normal so I can have this moment of almost-solitude. These are crazy days, and my need to be away from my favorite people is okay. Hear me — YOUR need to have a moment, to take some space — It’s OKAY. It doesn’t make me or you ungrateful or anything else. It makes us human. It makes us someones trying to wade through a crazy time in history and adjust to about 47 major life changes in the last several weeks (okay, I’m exaggerating… call me #dramaqueen — it’s fine). 
I’m still thankful and grateful for SO MANY THINGS, despite the struggles we all are facing …. despite the struggles I am personally facing right now. But gosh, I needed to sit in kind-of-silence tonight for 11 minutes. And that’s okay. And it’s okay for you, too. Sending love and light, Aly
https://medium.com/@alytracy/mama-monologues-day-25-of-self-quarantine-10ea00a42168
['Aly Tracy']
2020-04-06 23:43:10.304000+00:00
['Selfquarantine', 'Stories', 'Mothers', 'Motherhood', 'Moments']
There Should Be Exit Interviews for Breakups
If we have them for work, why don’t we have them for relationships? Photo by Dang Nghia on Unsplash Some people might argue that the ending of a romantic relationship is not the same as leaving a job, but I’d say they’re comparable. You can easily compare being fired to being dumped. In both cases, you didn’t want the relationship to end. Also, if you leave your job for a better job or because you’re seeking something more fulfilling, it’s a breakup in your professional life and is similar to a personal breakup. When you choose to leave someone, you’re either leaving the relationship for someone else who’s better for you or simply because it’s not a good fit for you anymore. The purpose of the exit interview is to learn from both sides. This is your chance to talk in a safe space about what worked for you and what didn’t. In the workplace, your answers may be able to help your employer better serve the next employee. In a relationship, you can help your ex be better for the next person. The important part of the exit interview is that both parties know this is a professional, mature conversation. This is not the time to get angry or point fingers. In the most effective cases, both people exchange information for the benefit of clarifying what went wrong in the relationship. An exit interview is the perfect opportunity to share what you liked and didn’t like about your partner. Both sides should treat this as a truthful moment and nobody should be trying to hurt feelings. Try to use this as a learning experience. In the best case scenario, you both would leave with no confusion or assumptions about what went wrong. You’ll get whatever closure you need and be able to move on. Unfortunately, it’s so difficult to be honest in a breakup. Most often, people lie. If a man wants to break up with a woman, he’ll usually say “It has nothing to do with you. I just need to work on myself,” but this doesn’t help anybody. 
If he broke up with her because he thinks she’s vain or she never opened up emotionally, she deserves to know that. She might not have been aware that she was closed off, so the feedback can only help her for future relationships. Him being direct with her will also clear up any confusion she might’ve had over what caused the breakup. People who get dumped often struggle to get over the breakup because they keep wondering what they did wrong. An exit interview would provide that closure. If they knew exactly why they were left, it can quicken the healing process and most likely make them more self-aware and conscious of anything they may need to work on. It can prevent them from continuing to make the same mistakes over and over again. It’s hard being self-aware, especially when dating. We try to be on our best behavior, but if nobody tells us the behavior isn’t working for them, how do we know we need to make a change? In a perfect world, people would be able to sit down with each other after a breakup and just be honest about what ended it. Improving our communication skills and becoming more self-aware will only help us be better versions of ourselves for the next relationship. I think it could be very beneficial for both parties.
https://maclinlyons.medium.com/there-should-be-exit-interviews-for-breakups-2ae75194669d
['Maclin Lyons']
2019-04-28 20:57:48.330000+00:00
['Breakups', 'Relationships', 'Love', 'Self Improvement', 'Dating']
How to Work Through Your Creative Constipation
How to Work Through Your Creative Constipation Photo by Steve Johnson on Unsplash Writing great content can be difficult. It’s usually hard to start writing and I don’t always have ideas for my next article. I used to make the mistake of focusing only on the content of what I wanted to create. That was a misconception, though. In the past few weeks, I’ve applied a technique that has greatly helped me to create new content. Most importantly, I needed to focus on the process of writing, instead of the content. The process I created for myself makes maximal use of the way the brain works to come up with the best results. I found that it worked. Here’s how this process can also work for you. Use your brain as you’re meant to use it The approach I’ve used in the last couple of weeks draws on three essential qualities of how our brains work: Focused mode of thinking: logical and detailed thinking. This includes working on the title, structure and content of the article; Flow: getting into a state of “deep work” that immerses you fully and helps you to create your best work. This can also be described as “hyper-focus”; Diffused mode of thinking: essential for connecting the dots, solving problems and linking different pieces of information together. You can use all three of these qualities to your advantage if you’re smart about it. Focus and flow To come up with your best work, you need to focus on what you are doing. In the focused mode of thinking, you build your storyline, set up the structure of your article and also make sure your grammar is correct. This all requires logic, planning and structure. Ideally, you want to make sure you can focus for at least 20 to 30 minutes without being disturbed. 
When working for a longer period on something you’re good at and love doing, it’s likely that you will experience flow. Flow is the sweet spot where you need to be during your creative process. This happens because your brain is fully engaged in what you’re doing and does not have any capacity to process anything else. Your worries will vanish and words will flow naturally. After all, worries also take up computing power of your brain, which is unavailable in a flow state. Mihaly Csikszentmihalyi, a professor in the field of positive psychology, defined flow as: …being completely involved in an activity for its own sake. The ego falls away. Time flies. Every action, movement, and thought follow inevitably from the previous one, like playing jazz. Your whole being is involved, and you’re using your skills to the utmost Photo by Gordon Williams on Unsplash Creating optimal conditions for flow Flow will not come automatically. Flow requires you to be fully immersed in what you’re doing. Hence, any distractions will be detrimental. Fortunately, there are many prerequisites for flow that you can influence quite easily. First of all, you need to make sure that you will not get disturbed for at least half an hour. Pick an environment that works best for you. Personally, for example, I need to be alone to concentrate. Having a smartphone in the room is generally also not a good idea. So get rid of the smartphone and kill all social media sessions on your computer. Social media are poison to your creativity and will suck you in. Moreover, the constant pop-ups will get you out of flow each time and it will take you 10 to 20 minutes to get to the same level of flow again. Ideally, before you start writing you’ve already finished your background research and set up the structure of your article. You will generally not get into a flow with these activities. Reading articles with previously unknown information is like learning. 
When you’re learning, you’re not yet fully adept in that specific field. To achieve flow, however, you need to be reasonably good at what you do already. By dividing your work into segments, you can still get into a flow state for part of the work. As a bare minimum, you should at least split your work into two segments: tasks you’re good at and tasks you’re not yet good at. Moreover, when you’ve set up the structure of your article, already include short sentences with the outcome of your background research, including backlinks. That way, you don’t need to go back to the posts a second time. Once you’ve set everything up, you can focus on the process of writing. Set a timer for 25 to 30 minutes. Write without interruption during this timeframe to get into flow. Longer timeframes are of course also possible, but by setting a shorter timeframe you can motivate yourself better. I at least need this, as I’m not always the paragon of discipline when it comes to writing. Photo by Ed O’Neil on Unsplash This way of working is also called the “Pomodoro technique”, named after a tomato-shaped timer. The method was developed in the 1980s by Francesco Cirillo. Instead of focusing on the content of the work, it is a simple technique that lets you focus on the writing process itself. After you’ve followed the process and spent 25 to 30 minutes on writing, you can reward yourself with a small break. This break also has a function, as you’ll discover next. Solving problems by procrastinating When we’re writing, we mainly use the focused mode of our brain. In this mode, we process details and work in a logical and structured way towards our intended goal. This is essentially what we do optimally in a flow state. However, when we need to fit information together creatively and learn new things, we also need our diffused mode of thinking. For the diffused mode to kick in you need to take a break, rest or sleep. 
Your brain keeps processing different pieces of information and working on your problems in the background. This is essential to come up with your best work and to break that writer’s block. So sometimes it’s best to just throw in the towel for a while if you can’t seem to figure out how to best get your thoughts on paper. Your brain will of course not start working on a problem if you haven’t immersed yourself in it first. Therefore, if you want to write a creative article it’s best to start with 30 to 60 minutes of focused work. It doesn’t have to be your best work at this stage; it’s more important to start. Then take a break and procrastinate. Your brain will solve your problems for you while you’re in the shower. Only after you’ve set your work aside are you able to think about the bigger picture. Taking a break will also help you with revising your text. As revising is a vital part of writing, it is important that you do this with your best and most creative insights. Not only do you want to make sure your grammar and sentence structure are okay, but you also want to assess your article as a whole. Does it convey the right message? Does the article flow and is it easy to read? After you’ve taken a break, it’s easier to see the big picture. When you take some distance from what you’ve written, it’s easier to see what can be improved. By doing this after a break, you make full use of your brain’s diffused mode of thinking. The process The thought of coming up with a new idea and writing a new piece can feel daunting. When I feel this “fear of writing”, or writer’s block, I generally prefer to start binge-watching instead. However, by focusing on the process of writing, I have a clear idea of when I will work and for how long. The idea of “built-in” breaks and rewards is also a tremendous help. In summary, the process to optimally use the power of your brain is: Set up the structure of the article or chapter you are writing. 
The structure includes the general idea, title and subheadings. Note that perfection is not the goal in this step; Conduct your background research. Write down useful information in the structure you’ve set up in step 1 with backlinks to the relevant articles. That way, you don’t have to go back to the website for a second time; Write for 25 to 30 minutes without any interruption. Fully take advantage of your flow state. Pick your work environment well. Eliminate distractions such as social media; Take a break for 5 to 10 minutes. Let the diffused mode of thinking kick in to let you see the bigger picture; Revise your article. Not only check your grammar. Also, make sure that your article conveys the right message. With this article, I hope I’ve inspired you to give this method a chance. Let me know what you think in the comments below!
https://writingcooperative.com/how-to-work-through-your-creative-constipation-2b4a86eca608
['Martin Van Duyse']
2020-01-27 22:06:01.538000+00:00
['Focus', 'Writing Tips', 'Writing', 'Flow', 'Writers Block']
A Gentle Introduction to Graph Embeddings
Model Many researchers have studied how knowledge graph embedding models work. TransE (Bordes et al., 2013), RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) will be introduced in this section. TransE If you are familiar with word2vec (Mikolov et al., 2013), you can think of TransE (Bordes et al., 2013) as similar in spirit. Given a subject entity (aka head), a relation and an object entity (aka tail), the object entity embedding should be close to the subject entity embedding plus the relation embedding if the triple holds. Otherwise, the subject entity plus the relation should be far away from the object entity. Word2vec Sample: King + Woman ~= Queen (source) RESCAL RESCAL (Nickel et al., 2011) uses multiple matrices to represent the relations among entities. Assuming the total number of entities is n and the total number of relations is m, the total number of parameters is n x n x m. If there is no relation between an entity i and an entity j, the value is set to zero. Matrices of Entity (E) and Relation (R) (Nickel et al., 2011) One of the challenges of RESCAL (Nickel et al., 2011) is scalability. Since the matrices store a relation between every subject entity and every object entity, a huge number of parameters is introduced. DistMult DistMult (Yang et al., 2015) is similar to RESCAL (Nickel et al., 2011) except for the number of parameters. Instead of using full matrices, Yang et al. reduce the number of relation parameters by using only a diagonal matrix (i.e. restricted matrices). This requires fewer parameters for training: RESCAL can need ten to a hundred times more parameters than DistMult. DistMult enjoys a low number of parameters (the same as TransE) while achieving superior performance. Computationally, DistMult is similar to TransE, except that DistMult uses a multiplicative interaction while TransE uses an additive one. 
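The translational intuition behind TransE can be sketched in a few lines of NumPy. This is a hedged illustration with hand-picked toy embedding values, not the trained model from the paper:

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE plausibility: a triple (head, relation, tail) is plausible
    when head + relation is close to tail; lower distance is better."""
    return np.linalg.norm(head + relation - tail)

# Toy 3-dimensional embeddings with hand-picked values (illustrative only)
king = np.array([1.0, 0.2, 0.0])
female_counterpart = np.array([-0.1, 0.8, 0.0])
queen = np.array([0.9, 1.0, 0.0])
apple = np.array([-2.0, 0.5, 3.0])

# The true triple scores a much smaller distance than the corrupted one
assert transe_score(king, female_counterpart, queen) < \
       transe_score(king, female_counterpart, apple)
```

In the actual paper the embeddings are learned with a margin-based ranking loss over corrupted triples; the sketch only shows the additive interaction that the text later contrasts with DistMult's multiplicative one.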
One of the problems of DistMult is that it can only model symmetric relations, which makes it unsuitable for general knowledge graphs, since it simplifies each relation to a diagonal matrix. ComplEx To handle both symmetric and antisymmetric relations, Trouillon et al. (2016) proposed using complex-valued embeddings (with both real and imaginary parts). A relation R is symmetric if sRo implies oRs even when s does not equal o, where s is a subject entity, R is a relation and o is an object entity; if sRo and oRs can only both hold when s equals o, the relation is antisymmetric. Example of Complex Number (source) The scoring function is similar to DistMult's, as a diagonal matrix is used to score the vectors. The real part of the embeddings captures the symmetric part of a relation, while the antisymmetric part is handled by the imaginary embeddings.
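The difference between the two scoring functions can be made concrete with a small sketch (arbitrary toy vectors, not trained embeddings): DistMult's trilinear product is unchanged when head and tail are swapped, while ComplEx's conjugated product is not.

```python
import numpy as np

def distmult_score(h, r, t):
    """DistMult: trilinear product with a diagonal relation matrix."""
    return np.sum(h * r * t)

def complex_score(h, r, t):
    """ComplEx: real part of the trilinear product with the conjugated tail."""
    return np.real(np.sum(h * r * np.conj(t)))

# Arbitrary 2-dimensional complex embeddings (illustrative values only)
h = np.array([0.5 + 0.1j, -0.3 + 0.7j])
r = np.array([0.2 - 0.4j, 0.9 + 0.1j])
t = np.array([-0.6 + 0.2j, 0.4 - 0.5j])

# DistMult (here applied to the real parts) cannot tell (h, r, t) from (t, r, h)
assert np.isclose(distmult_score(h.real, r.real, t.real),
                  distmult_score(t.real, r.real, h.real))

# ComplEx can: swapping head and tail changes the score
assert abs(complex_score(h, r, t) - complex_score(t, r, h)) > 1e-6
```

The asymmetry comes entirely from the conjugation together with the imaginary part of the relation embedding; with a purely real r, ComplEx reduces to DistMult.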
https://medium.com/towards-artificial-intelligence/a-gentle-introduction-to-graph-embeddings-c7b3d1db0fa8
['Edward Ma']
2020-03-20 03:20:25.969000+00:00
['Machine Learning', 'Artificial Intelligence', 'Python', 'Data Science', 'Graph Neural Networks']
The Sopranos episode review — 4.3 — Christopher
Original air date: September 29, 2002 Director: Tim Van Patten Writer: Michael Imperioli (story by Michael Imperioli and Maria Laurino) Rating: 7/10 Given that this episode focuses mostly on the Italian-American obsession with Christopher Columbus, and his increasingly controversial legacy, I was pretty uninterested in this episode at the start. Tony and the others are just not having it as local Native American groups protest the holiday, while AJ learns in school about how bad Columbus was. It's just not the most interesting stuff, and I've honestly never really understood this obsession. But then the episode takes a turn, in like the last scene. In the car with Silvio and Christopher, Tony really quite beautifully articulates the American dream of working for what you earn, and not blaming others for your failures. You can go ahead and say it's an incredibly privileged point of view and I wouldn't disagree, but it's great writing and acting that captures this character so well. It elevates what is otherwise one of the worst episodes of the series so far.
https://medium.com/as-vast-as-space-and-as-timeless-as-infinity/the-sopranos-episode-review-4-3-christopher-92f087f375d5
['Patrick J Mullen']
2021-03-07 00:37:45.883000+00:00
['Drama', 'Tv Reviews', 'TV', 'HBO', 'The Sopranos']
Data Quality Automation With Apache Spark
Monitoring data quality is crucial to understanding customers and providing them with a great product experience. Data quality informs decision-making in business and drives product development. For example, one of People.ai's features is capturing all activity from Sales and Marketing. We analyze activities ingested from a user's email inbox, calendar, and CRM system and display actionable insights to help sales and marketing teams take the best next action. As our system rapidly scaled, we began to see abnormal numbers for certain users, such as 70 hours of activity in a single day. This number seemed unrealistic (unless you time travel with a Time-Turner). When we manually investigated the problem, we didn't find any bugs; the algorithm worked as expected. However, we did identify several edge cases that were impacting the reports, and identifying them helped us improve our model. Our goal in monitoring activity data quality at People.ai is to identify outliers and edge cases without customer involvement and to improve our platform to provide the best experience for every user. We set out on a journey to build a rigorous quality assurance system that verifies data at every stage of the pipeline. Over the last three years, we have iterated our data quality validation flow from manual investigations and ad-hoc queries, to automated tests in CircleCI, to a fully automated Apache Spark pipeline. Semi-manual checks. In the early days, we manually investigated edge cases by running ad-hoc scripts and queries in a Jupyter notebook. 
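As a toy illustration of the kind of sanity rule that catches a "70 hours in a day" anomaly (the record fields and threshold here are hypothetical, not People.ai's actual schema):

```python
def find_time_outliers(daily_activity, max_hours_per_day=24.0):
    """Flag per-user daily records whose total activity exceeds the
    number of hours physically available in a day."""
    return [rec for rec in daily_activity if rec["hours"] > max_hours_per_day]

# Hypothetical daily aggregates for two users
records = [
    {"user": "alice", "day": "2019-07-01", "hours": 6.5},
    {"user": "bob", "day": "2019-07-01", "hours": 70.0},  # the anomaly
]
outliers = find_time_outliers(records)
assert [rec["user"] for rec in outliers] == ["bob"]
```

In production such a rule would run over aggregated activity data rather than an in-memory list, but the shape of the check is the same.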
To illustrate, below is pseudo-code of a test that verifies whether we have filtered out too many emails sent by users during intake and analysis: analyzed_emails_count = len([entry for entry in data if entry['outbound']]); total_emails_count = len([entry for entry in data if entry['from'] == user_info.user_email]); emails_mismatch = 100 - analyzed_emails_count * 100 / total_emails_count; assert emails_mismatch <= 5, 'Filtered emails value exceeds allowed threshold of 5 percent'. To define a threshold for the test, we analyzed data derived from emails sent by our users. The analysis showed that, on average, we filter out five percent or less of sent emails; these are usually non-work-related emails. We can therefore use five percent as a threshold to verify whether we missed any edge cases that would cause over-filtering. If the number of filtered emails is greater than the expected threshold, we can look for root causes, such as various aliases a user might use that we failed to detect. Those queries became a part of our data quality analysis routine. But as we started to grow our customer base, the data became too large to check manually. Validation used to take more than 20 hours of work a week. Next came automation with CircleCI. In this phase, we wrote down all the checks we had performed manually as unit tests and built them into the pipeline. We used those tests to verify activity data for every new user upon registration. Streamlining semi-automated testing flow with CircleCI Registration of a new user would trigger a "data quality checking job" in CircleCI. If at least one of the data quality tests failed, an internal ticket was created to investigate the root cause before the system marked the user account as ready. We saw three main challenges with this approach: This flow involved a manual step to review failed tests and identify a root cause. User registration was not complete until the issues were fixed. 
Whenever a test failed, we only had 48 hours to fix new user registration. Tests ran for new users upon registration only; we wanted the data validation tests in place at every stage of the pipeline, every time we ingest activities for existing users as well. It didn't take long for us to realize that running data quality tests for each user in CircleCI was not scalable, took too much time to identify root causes, and overloaded the production database. This led us to Apache Spark. Fully Automated Data Quality Apache Spark Job Streaming As our customer base grew, we saw a dramatic increase in the amount of data we ingested from users' mailboxes, calendars, and CRM systems. Tests in CircleCI set up to run at the user level were no longer scalable. To improve the flow automated with CircleCI, we replicated all the tests in a scheduled Spark job in Databricks. One of the advantages of Apache Spark is that it executes code in parallel across many different machines. By introducing Spark jobs, we moved from running CircleCI tests upon user registration only to running a weekly job that evaluates all data at the user level. From a technical standpoint, we chose Databricks mainly for its job scheduling capability. We also wanted cluster configuration simple enough not to require DevOps engineer involvement; Databricks makes this possible by providing Apache Spark as a hosted solution. We also automated the investigations our support engineering team usually performed manually to check for all possible root causes, which eliminated the manual step we had in the previous flow. As of right now, we have numerous tests in place to validate data quality at the user level, scheduled to run weekly. Our goal is to run the Spark job daily and eventually embed data validation checks into the pipeline. 
Data Quality Pipeline The Spark Data Quality Pipeline The Spark pipeline includes three jobs: an ETL layer, a data quality checking layer, and a reporting layer. The ETL layer involves a Spark job that extracts a snapshot from multiple production databases, checks and corrects data type inconsistencies, and moves the transformed data into a Hive table. This step is important to keep the data clean and avoid production database overload. The data quality checking layer contains the code with a set of rules for checking and measuring data quality. We apply these rules to measure data quality metrics along the following dimensions: Data completeness. Verify whether there was an unexpected loss of data during activity intake or due to over-filtering. Data accuracy and integrity. A set of rules to identify outliers and edge cases. Data consistency and stability. Verify that data is accurate, consistent, and stable for all users and that the numbers we report stay within an expected range. Lastly, the reporting layer includes a data dashboard that displays the state of data quality across the board, and email notifications sent to the support engineering team whenever data anomalies or outliers are detected, such as a failure to detect email blasts. Data dashboard that shows the quality of job title data detected by People.ai algorithms for people with whom sales reps engage The chart above shows whether People.ai detected a person's job title correctly (such as Senior Account Manager), failed to capture the correct job title, or detected an incomplete job title. For example, an incomplete job title might be "Director" without a department, or "Business Development" without seniority. Data Quality Validation is Our Responsibility, Not the Customer's Data quality is a critical component of our system. 
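The three dimensions above lend themselves to a small rule-engine pattern: named rules applied to per-user statistics, with failures collected for the reporting layer. The sketch below is illustrative only; the rule names, stat fields, and thresholds are assumptions, not the production Spark code:

```python
# Each rule returns True when the data passes the check
def completeness(stats):
    # No more than 5% of sent emails should be filtered out
    return stats["filtered_pct"] <= 5.0

def accuracy(stats):
    # Activity hours must fit within a real day
    return stats["max_daily_hours"] <= 24.0

def stability(stats):
    # Week-over-week activity counts should stay within an expected range
    return 0.5 <= stats["activity_ratio_vs_last_week"] <= 2.0

RULES = {"completeness": completeness, "accuracy": accuracy, "stability": stability}

def run_checks(user_stats):
    """Return the names of the rules that failed for one user's weekly stats."""
    return [name for name, rule in RULES.items() if not rule(user_stats)]

# Hypothetical weekly stats for one user: over-filtered and with a time outlier
stats = {"filtered_pct": 12.0, "max_daily_hours": 70.0,
         "activity_ratio_vs_last_week": 1.1}
assert run_checks(stats) == ["completeness", "accuracy"]
```

In a Spark job the same rule functions would be applied per user across the whole snapshot, and the list of failures would feed the dashboard and email alerts.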
At People.ai, data quality is placed at the same level of importance as integration testing is for software development. We found the best way to validate activity data quality is to ask the data itself what does not work and why. Finding an edge case and addressing it early helps us provide a smoother user onboarding process and a better customer experience. Activity data quality validations should be automated as much as possible. Our goal is to embed validations into the data pipeline code, but in a manner that allows them to be changed effortlessly. Depending on the criticality of the data and validation, we want our pipeline either to fail completely or to flag and report the issue and continue processing. This has been our journey so far at People.ai. We continue to learn and grow over time to ensure that our customers can have full faith in the quality of the data we deliver.
https://medium.com/people-ai-engineering/data-quality-automation-with-apache-spark-ac87cbbf3c37
['Tanya Lutsaievska']
2019-08-01 20:00:05.743000+00:00
['Data Quality', 'Databricks', 'Apache Spark', 'Data Science', 'Software Development']
Responsive Testing & Cypress
So, how do we manage to have a scalable responsive solution? For that purpose, we'll use the Cypress API through a script. As with everything, this is no more than an alternative. It is not intended to be the ultimate solution, but, after having tried it, I believe it is a strong starting point. The aforementioned article concluded that the right path is to have a default configuration in the cypress.json file and modify some of its settings according to the parameters we establish. This is where we set ourselves apart and expand the solution. We want to have N resolutions (meaning N available devices to emulate) and N userAgents (in other words, N emulated operating systems). Moreover, we want to be able to set iOS apart from Android. We want to be able to provide even more specific test runs and regression tests: iPhone-X, iOS 13.3.1. Commands The script will allow us to set <device> and <osVersion> as parameters; and, if necessary (for example, if we run the script on continuous integration), we can define the <record> parameter, which will, in turn, set the record key stored as an environment variable. It will also allow us to set the <open> parameter so we can use the Cypress runner. Hands on! The script logic always lets us execute the suites. As long as the <device> parameter exists in the devices.js file, the script will expect us to set the <osVersion> corresponding to that kind of device (iOS or Android). For example, we can set "iPhoneX". Once the device has been defined, the script will search within the osVersions defined in the os.js file corresponding to iOS devices. For example, we can set "13.3.1". If the version we entered doesn't exist in the list, the script will fall back to the default version of that OS. node cy-start.js -- -d iphoneX -osV 13.3.1 -o From the Cypress runner, we can see the following environment variables are available: device, osVersion, osType. 
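The lookup-and-fallback logic described above — find the device, resolve its OS type, then fall back to that OS's default version when the requested one is missing — can be sketched as follows. For brevity the sketch is in Python with inline stand-ins for the devices.js and os.js tables; the real script is a Node.js file, and the device entries and viewports here are hypothetical:

```python
# Illustrative stand-ins for the devices.js and os.js lookup tables
DEVICES = {
    "iphoneX": {"osType": "iOS", "viewport": (375, 812)},
    "pixel2": {"osType": "Android", "viewport": (411, 731)},
}
OS_VERSIONS = {
    "iOS": {"versions": ["12.0", "13.3.1"], "default": "13.3.1"},
    "Android": {"versions": ["9", "10"], "default": "10"},
}

def resolve_config(device, os_version=None):
    """Resolve the emulated device and OS version, falling back to the
    OS default when the requested version is not in the list."""
    if device not in DEVICES:
        raise ValueError(f"Unknown device: {device}")
    os_type = DEVICES[device]["osType"]
    os_info = OS_VERSIONS[os_type]
    if os_version not in os_info["versions"]:
        os_version = os_info["default"]  # fall back to the OS default
    return {"device": device, "osType": os_type, "osVersion": os_version,
            "viewport": DEVICES[device]["viewport"]}

assert resolve_config("iphoneX", "13.3.1")["osVersion"] == "13.3.1"
assert resolve_config("iphoneX", "99.0")["osVersion"] == "13.3.1"  # fallback
```

The resolved values correspond to the device, osVersion, and osType environment variables the script hands to the Cypress runner.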
We can take advantage of those variables for the suite names. When running the example script, we'll see that the tests were executed at the specified version and resolution.
https://medium.com/flux-it-thoughts/responsive-testing-cypress-72d2be68690b
['Gustavo Miguens']
2020-12-17 14:13:50.242000+00:00
['Cypress', 'QA', 'Automation', 'Technology', 'English']
three missed (friendship) connections
1 ) The Girl Who Was Friends with the Spy Kids (2002). There are a lot of things I remember about you, except the most crucial: your name. We met at your neighbor’s high school graduation party. I thought you were one of the high schoolers at first, since you looked and sounded like a model, standing at least 5'9 in heels with your hair beautifully styled and a low, sultry voice that carried across the room. But no, you were 12 years old, just like me. And you were on a first name basis with the Spy Kids. Your dad was rich as fuck, some big shot attorney with enough money to turn your backyard into a traditional Japanese garden, complete with a giant koi pond. Your bedroom was a miniature version of Paris, filled with furniture and clothes from some fancy shop called Oilily that I had never heard of but immediately agreed was fabulous, based on how impressive your room was. You were so damn cool, but your handwriting was terrible. When I got home and pulled out the post it note that you had given me with your email address written on it, I couldn’t decipher a single letter, so I was never able to get back in touch with you. I hope you’re doing well and living a fabulous life. You’ll probably never read this, but in case you do, please hit me up. I’d love to know if you’re still friends with those kids. 2 ) The Girl on My Flight from Mumbai to Amsterdam (2002). Flying from the United States to India (and vice versa) is a marathon of a journey. Though nowadays, there are some nonstop flights that go straight from Chicago to the motherland, in my experience, there have always been multiple layovers of various lengths in cities I have never returned to since. 
I wouldn’t go so far as to say I have face blindness, but I’ve never been great at remembering people’s faces after the first time I meet them if they don’t leave a significant impression on me, so it was lucky that you remembered mine when we ran into each other in the Mumbai airport bathroom, and that we had met at the Banga Mela convention that my Bengali community in St. Louis had hosted a few years back. Because I was growing up in a small white town in the middle of nowhere, you were the first Indian kid who was my age that I ever really had a chance to connect with, so I feel bad that I remember so little about you; I can’t recall your name, how you looked, where you lived, or even what we talked about, only that it made the long flight to Amsterdam not only bearable, but fun. Even at that young age, I had an awareness of how special and serendipitous it was to reunite with someone who recognized me thousands of miles away from home in an airport bathroom, of all places. It’s too bad I didn’t have the foresight to write down your email or any other way to contact you after we split ways to get on our separate flights back to the States. Since I didn’t get the chance to visit India again for seven years and it’s been eleven years since even then, I hope that you at least have had the chance to visit and explore it more often than I have. I hope sometime in the near future I’ll be able to make it over there again and maybe run into you again — with the hope that you’ll recognize me once more. 3 ) Hannah and Matthew (2010). I think about both of you a lot. Though the details are fuzzy now, I still remember the most important moments of what could have changed everything about my study abroad experience in London back in 2010 during my junior year of college. Those little experiences don’t seem so important when they’re happening, but without a doubt — within 24 hours to a week, you’ll start regretting it. 
We were in line, the three of us, waiting to get into the student fair. Both of you were English, though I can no longer remember if either of you were from London or from elsewhere. I think your names were Hannah and Matthew, but even that, I'm not sure of. Hannah, I think you might have been from a London suburb. I'm pretty sure I remember you saying you were Jewish. We talked about many little such details in the 45 minutes or so we were waiting in line together. Before we scattered, we did at least save each other's numbers in our cell phones. But these were pre-smartphone days and the texts and minutes I could send through my dinky old phone were being paid for by my uncle, who only topped it up with the bare minimum of minutes and texts. So in the three months I lived in London, I never once got in touch with either of you. Never reached out, never made plans. Nobody in my course was particularly friendly, and I had little money to spare for going out with the mostly affluent kids who were in the program with me, so I spent most of my time alone — either exploring the city by myself or spending too much time scrolling through Livejournal in my dorm room. Had I reached out, perhaps things would have been a little different. Perhaps I would have made two good friends; perhaps I wouldn't, but my biggest regret is that I didn't try. In the months after I returned to the US, I often thought about both of you; whether the two of you had kept in touch, if you ever became close enough to date, or if you ended up not getting along after a second meeting and didn't even become friends. Three years after London, when I returned to the UK for my Fulbright, I thought of both of you once again, and I wanted to see if I could pull your phone numbers from my old UK cell phone, but unfortunately, I was unable to, as I had lost the charger. I hope you both stayed in touch. 
I’ll never know, of course; but in my head, I like to imagine that you became the best of friends, and maybe something more.
https://medium.com/@mspriyankabose/three-missed-friendship-connections-3e69a0bc0367
['Priyanka Bose']
2020-12-19 03:46:08.775000+00:00
['Culture', 'Self', 'Creative Non Fiction', 'Creative Writing', 'Personal Essay']
SPLIT NFTs coming SOON
Hey guys, we've got some exciting news here today. The designer who will be creating our SPLIT NFTs has done previous work with Gorillaz (yes, the music group) AND Metallica. He's currently doing some work for $CFI (cyberfi) and is well established in the #DeFi space! We're super thrilled to be offering a pair of SPLIT NFTs he's making for us. As mentioned in the previous Medium post, we will be hosting an auction for these NFTs next week. The winners of this auction will not only have a badass collectible; the NFT will also count as a ticket into a private TG group Baba and I run. This group has some of the top-end crypto influencers in it, and it's the same group that was in the private sale allocation. This means that if you win the NFT auction, you will get guaranteed access to future Baba x Penn projects. We're super excited to unveil these in the coming days and pumped to be working with such a talented designer!
https://medium.com/split-network/split-nfts-coming-soon-52550b7df596
[]
2020-12-24 02:22:24.678000+00:00
['Uniswap', 'Defi', 'Altcoins', 'Ethereum', 'Bitcoin']
Which AFC teams will make the NFL playoffs?
Photo by HENCE THE BOOM on Unsplash There's a real litany of sport on at the moment — as someone based in the UK who currently has the 'luxury' of not working, I've loved keeping my phone on aeroplane mode and watching the full NBA playoff and NFL games in the mornings. It might only be 3 weeks into the NFL season, but I've seen enough. I've seen the past, the present and the future. Here are the AFC standings after the conclusion of the regular season. You heard it here first! AFC Playoffs The lucky horseshoe and the lucky number 7 combine for the Indianapolis Colts as they sneak into the playoffs. For the first time since the motor car was invented, the Patriots won't win the AFC East (ok it was only 2008, but it feels like an eternity). It's going to be tight for the AFC North title. The Bengals will win some games, but not many. The AFC South is a three-horse race, but one horse (The Texans) only has 3 legs. The Chiefs might go 16-0 en route to the AFC West title. The Las Vegas Raiders will miss out due to head-to-head record vs the Patriots and the Colts. Here's a breakdown of how each division will go. AFC East The Bills look set to win their first AFC East title, and have their first home playoff game, since 1995. Josh Allen's ability to overthrow wide-open receivers in the end zone is more than made up for by his scrambling ability and Sean McDermott and Brian Daboll's offensive scheming in Buffalo. The Pats' defense is stifling and they're starting to build a strong running attack, but I think they'll lose enough games to fall into second place. The Dolphins and the Jets are both garbage. The Jets more so #tankfortrevor … Buffalo Bills New England Patriots Miami Dolphins New York Jets AFC South This is the only division in the AFC that I had any real trouble with. Although the Texans are 0-3, they had the hardest opening 3-game schedule I've ever seen, and the Titans are 3-0 but with a total winning margin of 6 points. 
In fact, by the end of week 6, they could well both be 3-3. That being said, I think the Texans will have to win both matchups against the Titans to get to a winning record and to win the division, and despite a shaky start by Philip Rivers and the Colts, they'll push the Titans hard. The Titans get it done. Tennessee Titans Indianapolis Colts Houston Texans Jacksonville Jaguars AFC North The AFC North teams get to play the NFC East and the AFC South. Lucky them! I'm writing this after watching Lamar be largely ineffective against a weak Kansas City defense in week 3, but I've no doubt they'll get to 13 wins without too much fuss. The Steelers will push them close, but I think the Ravens just have enough. Baltimore Ravens Pittsburgh Steelers Cleveland Browns Cincinnati Bengals AFC West So we don't need to talk about who's going to win this division. Frankly, it's hard to see who has a chance of beating the Chiefs. 16-0, anyone? Before the season, I thought that the Raiders (along with the Colts) could benefit from the extended playoff format and nick the 7th seed, but I think head-to-head record against the Patriots, and the Colts, may be their undoing. The Broncos have a QB crisis and although Chargers rookie Justin Herbert looks good, neither of them are likely to challenge for the playoff spots. Kansas City Chiefs Las Vegas Raiders LA Chargers Denver Broncos Concluding notes There you have it. The AFC playoffs are locked in. Feel free to hibernate until the playoffs. There are a couple of things I'm not certain about. Which way will the AFC South go? The head-to-head games will most likely decide this division Who out of the Patriots, Colts and Raiders (and maybe, just maybe, the Texans) will take the last two playoff spots? Will = probably, possibly, might, almost certainly won't… This article was originally posted on Chris's blog @ https://chrismiles.co/which-teams-will-make-the-playoffs-from-the-afc/
https://medium.com/top-level-sports/which-afc-teams-will-make-the-nfl-playoffs-151231a1856
['Chris Miles']
2020-10-01 09:47:10.853000+00:00
['USA', 'Sports Journalists', 'Football', 'Sports', 'NFL']
How to spend Christmas 2020 — Quarantine style
Christmas is supposed to be a holiday all about family, friends and getting together. Unfortunately, 2020 had some other plans in mind. With the COVID19 crisis still looming over us, Christmas 2020 is going to be unique in every single way. Photo by Annie Spratt on Unsplash All around me I hear friends and family discussing their Christmas plans. The yearly big family dinners, friendly get-togethers and cosy evenings near the fireplace are not really on the agenda anymore. But this doesn’t mean that Christmas should be boring this year!
https://medium.com/@lottem17/how-to-spend-christmas-2020-quarantine-style-f6936432ccbb
['Lotte Meijer']
2020-12-22 20:12:16.245000+00:00
['Quarantine', 'Books', 'Games', 'Christmas', 'Activity']
Shree Mukilan Pari on Combating Health Illiteracy at Home and Abroad
Photo by Bill Oxford on Unsplash We all grasp the importance of “traditional” literacy, which we understand to mean one’s ability to read and understand the written word in whatever language we speak. Literacy is a foundational skill, one that contributes to a more productive economy with better outcomes for people at every rung of the socioeconomic ladder. This type of literacy is undoubtedly important. However, it’s not the only form of literacy we need to concern ourselves with. Others are just as fundamental. I’d like to focus on one particular type of literacy today: health literacy, or the ability of healthcare consumers (every single person alive, to be clear) to grasp health-related information and make informed decisions around care for themselves and others. The importance of health literacy is impossible to overstate. Unfortunately, health literacy just isn’t talked about in the same way, or to the same degree, as traditional literacy. This silence has real-world consequences, some tragic. I’ll be devoting my career to cultivating a conversation around health literacy, joining a growing group of medical professionals and patient advocates (among many others) who recognize that the status quo can’t continue. I hope to be joined by others — maybe even you. To truly understand why this issue is so important, we first need to understand what health literacy really means and how it empowers healthcare consumers. We should also acknowledge what’s being done here in the United States and around the world to advance the cause of health literacy. My own vision for a more empowered patient population aligns with these efforts, and I’m honored to share it here. What Is Health Literacy? First, let’s get down to basics and answer the question that most likely brought you here in the first place: what is health literacy, really? 
The Centers for Disease Control defines two closely related components of health literacy: Personal health literacy : “The degree to which individuals have the ability to find, understand, and use information and services to inform health-related decisions and actions for themselves and others.” : “The degree to which individuals have the ability to find, understand, and use information and services to inform health-related decisions and actions for themselves and others.” Organizational health literacy: “The degree to which organizations equitably enable individuals to find, understand, and use information and services to inform health-related decisions and actions for themselves and others.” The United States’ foremost health authority, in other words, envisions a role for both individuals and organizations — companies, public agencies, nonprofit entities — in attaining and promoting health literacy. We all have a stake in our collective ability to navigate the numbingly complex healthcare system and make informed decisions for ourselves and others. As this ability improves, so do health outcomes. Health literacy is an acquired skill or competency, and a holistic one at that. It’s also not a skill or competency that’s equally distributed within a population as diverse as ours. The U.S. Department of Health and Human Services says that “health literacy requires knowledge from many topic areas, including the body, healthy behaviors, and the workings of the health system…[and is] influenced by the language we speak; our ability to communicate clearly and listen carefully; and our age, socioeconomic status, cultural background, past experiences, cognitive abilities, and mental health.” That’s quite a list. You’d be forgiven for thinking it all but forecloses the possibility of eradicating health illiteracy in our lifetimes — that, like poverty or racism, it’s all but intractable. You’d be wrong about that. 
A fully empowered population of healthcare consumers is within the realm of possibility and closer to reality than most of us assume. To understand why it’s so important that we continue to work toward this goal, we need to understand the dire consequences of health illiteracy in the here and now. The Causes and Consequences of Health Illiteracy As I wrote, better health literacy really does lead to better health outcomes. The reverse, unfortunately, is also true: poor health literacy correlates with poor health outcomes. To say it another way, health illiteracy does not promote optimal health outcomes. Health illiteracy reinforces existing inequities in healthcare delivery, access, and understanding, leading to preventable and potentially tragic outcomes. This problem stares us in the face; it’s impossible to ignore. According to the U.S. Department of Health and Human Services’ National Action Plan to Improve Health Literacy, nearly 90% of adult healthcare consumers in the U.S. “have difficulty using the everyday health information that is routinely available in our health care facilities, retail outlets, media, and communities.” In other words, about nine in ten Americans struggle to make informed healthcare decisions due to a lack of intelligible information. This struggle stems in part from the myriad and often conflicting sources through which healthcare consumers receive this information, intelligible or otherwise. The National Action Plan to Improve Health Literacy identifies more than a dozen such sources in an admittedly less-than-comprehensive list: Discussions with friends and family TV, radio, and newspapers Schools Libraries Websites and social media Doctors, dentists, nurses, physician assistants, pharmacists, and other health professionals Health educators Public health officials Nutrition and medicine labels Product pamphlets Safety warnings In the United States and around the world, health literacy is closely correlated with other inequities. 
Here at home, the National Action Plan to Improve Health Literacy identifies segments of the population that are more likely to struggle with limited health literacy: Adults over the age of 65 years Racial and ethnic groups other than White Recent refugees and immigrants People with less than a high school degree or GED People with incomes at or below the poverty level Non-native speakers of English Sadly, we can get more specific about the consequences of health illiteracy because we have ample data to draw upon. The National Action Plan to Improve Health Literacy describes several: Lower utilization of preventive services Poorer management of chronic conditions, such as diabetes, high blood pressure, and HIV/AIDS Poorer self-reported health Higher rates of preventable hospital visits and admissions Higher risk of medication errors Higher overall risk of mortality The social and economic costs of these undesirable outcomes are staggering: between $106 and $236 billion U.S. dollars each year, according to HHS estimates. Combating Health Illiteracy in the Developing World: A Plan for Healthier Rural Communities If you can believe it, health illiteracy is an even more urgent issue in the developing world. I’ve been fortunate enough to work on this issue in rural India, where health illiteracy affects hundreds of millions of people. As in the United States, the root causes of health illiteracy in India are complicated. But one big issue that I’m working to change is the persistent problem of villagers not paying attention to health issues and warning signs because they haven’t been told that it’s important to do so and lack access to affordable care. For example, the village my father grew up in is not well-equipped to deal with medical emergencies. The nearest hospital is quite far away and is quite expensive for the average villager. This discourages residents from getting regular checkups to ensure they are in good health and seeking care when they feel sick. 
Many diseases and conditions go unnoticed, leading to preventable injury and death down the road. I hope to change this through a nonprofit focused on health literacy. This organization would educate these villagers about the importance of regular check-ups and general awareness of health issues and risks. It would host multiple free medical clinics each year, where a doctor comes to the village and sees patients for a full day. Villagers would be able to visit the clinic, get full checkups or consultations for specific complaints, and obtain specialist referrals where further investigation is warranted. This model will change the perception of medicine in rural India and make it more accessible to everyone, regardless of socioeconomic position. Eventually, the hope and expectation is that residents of rural India will be more aware of medical issues — more health-literate — and take their health more seriously. If successful, there’s no reason not to expand the model to other villages and help others. Health Illiteracy Affects Us All, And We All Must Do Our Part Directly or indirectly, health illiteracy affects us all. Despite our best efforts to date, we’ve yet to eradicate health illiteracy. In many parts of the world, we’ve yet to make much of a dent in the problem at all. I have hope that this situation is slowly but steadily changing for the better. When I look at the efforts of those who share my vision for a more empowered and educated patient population, I see signs of momentum. As members of my generation take on more responsibility for healthcare development and delivery, it’ll increasingly fall to us to maintain this momentum. And, hopefully, to accelerate it. We can’t do it alone. Those of us who’ve chosen to devote our careers to improving health outcomes around the world might share a powerful desire to leave humanity stronger and more resilient than before and a determination not to be deterred by setbacks — setbacks that will, inevitably, come. 
Our number is comparatively small, however. For every one of us, there are hundreds of people who haven’t devoted their lives to this cause. That’s okay. We don’t expect to win over the population of the entire planet. We wouldn’t want to, because then who’d be left to do all the other jobs that need doing? We do need support from our friends, family, and communities, though. We have their backs, and we respectfully ask them to return the favor. Because health illiteracy affects us all, and we all need to do our part in the fight against it. Do you see evidence of health illiteracy in your own family or community? What do you see being done to address it?
https://medium.com/@shreemukilanpari/shree-mukilan-pari-on-combating-health-illiteracy-at-home-and-abroad-35c1fdee0415
['Shree Mukilan Pari']
2020-12-23 23:53:15.785000+00:00
['Health', 'Health Literacy', 'Healthy Lifestyle', 'Healthcare']
Get Display Media, The Canvas API, and Recording Video in the Browser
Issue no1: Make a screen recording with a video medallion in the top-right corner.

Solution: If you have multiple video sources, say a screen-capture media track for the main video and a camera capture for the medallion video, the only way to combine the two into one output video is to paint both videos onto a Canvas and use the Canvas.captureStream() API to get a video stream that can feed the final output video.

Example code (high level):

/**
 * Prompts the user for all necessary streams and displays them on video elements
 * @param mainVideo - HTMLVideoElement
 * @param medallionVideo - HTMLVideoElement
 * @returns {Promise<MediaStream[]>}
 */
const capturePresentation = async (
  mainVideo: HTMLVideoElement,
  medallionVideo: HTMLVideoElement
) => {
  const videoConstraints = { video: true };
  const audioConstraints = { audio: true };

  // Capture video inputs
  const screen = await navigator.mediaDevices.getDisplayMedia(videoConstraints);
  const camera = await navigator.mediaDevices.getUserMedia(videoConstraints);

  // Display them on video elements
  mainVideo.srcObject = screen;
  medallionVideo.srcObject = camera;

  // Both getDisplayMedia and getUserMedia can capture sound; however, I found
  // it easier to reason about when the audio is captured and stored separately
  const audio = await navigator.mediaDevices.getDisplayMedia(audioConstraints);

  // Return the 3 streams that we will later combine with a MediaRecorder
  return [screen, camera, audio];
};

/**
 * Paints the input sources onto the output canvas
 * @param mainVideo
 * @param medallionVideo
 * @param canvasElement
 */
const paintOnCanvas = (
  mainVideo: HTMLVideoElement,
  medallionVideo: HTMLVideoElement,
  canvasElement: HTMLCanvasElement
) => {
  const FPS = 30;
  const ctx = canvasElement.getContext("2d");
  let myTimeout;

  const draw = () => {
    // Clear the canvas before writing to it
    ctx.clearRect(0, 0, canvasElement.width, canvasElement.height);
    // Clear any pending timer to avoid an ugly closure from the setTimeout
    clearTimeout(myTimeout);

    ctx.drawImage(mainVideo, 0, 0);

    // Some additional math is needed to position the medallion into
    // dx, dy, dw, dh depending on how you want to place it in the geometry
    // of the output video; I leave that to the latitude of the implementer.
    // For a 300x200 medallion in the top-right corner:
    const [dx, dy, dw, dh] = [canvasElement.width - 300, 0, 300, 200];
    ctx.drawImage(
      medallionVideo,
      0,
      0,
      medallionVideo.videoWidth,
      medallionVideo.videoHeight,
      dx,
      dy,
      dw,
      dh
    );

    // setTimeout is used on purpose instead of requestAnimationFrame;
    // I will explain why in Issue no2 below
    myTimeout = setTimeout(draw, 1000 / FPS);
  };

  myTimeout = setTimeout(draw, 1000 / FPS);
};

The reasons I used a timeout instead of the fancier requestAnimationFrame or requestVideoFrameCallback are the following: requestAnimationFrame fires far too often for screen capture, so on large canvases (say, bigger than 3000x2000 pixels) it heats up your CPU and GPU to the point that it consumes all resources and spins up the cooling fans. Timeouts are gentler on the GPU because they result in fewer paint operations. Also be mindful of the size of the canvas; there are sensible tradeoffs that still let you capture 1080p HD. A timeout also gives you more fine-grained control over when the draw function fires than requestAnimationFrame does. Its downsides are well understood, but for this purpose it worked better for me.

Issue no2: When I record the screen of an application with getDisplayMedia and the application goes fullscreen, and I try to paint that application's screen onto a Canvas element, the Canvas captures only the frames from before and after fullscreen. 
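The positioning math that the code above leaves "to the latitude of the implementer" can be sketched as a small pure helper. This is my own illustrative function (name, signature, and the optional margin parameter are not from the article), assuming a fixed medallion size placed in the top-right corner:

```javascript
// Hypothetical helper: computes the destination rectangle [dx, dy, dw, dh]
// for a medallion in the top-right corner of the canvas, with an optional
// margin from the edges.
const medallionRect = (canvasWidth, medallionWidth, medallionHeight, margin = 0) => {
  const dx = canvasWidth - medallionWidth - margin; // flush against the right edge
  const dy = margin;                                // flush against the top edge
  return [dx, dy, medallionWidth, medallionHeight];
};

// For a 1920px-wide canvas and a 300x200 medallion:
// const [dx, dy, dw, dh] = medallionRect(1920, 300, 200);
```

The resulting [dx, dy, dw, dh] tuple plugs directly into the second drawImage call.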
Solution: Both requestAnimationFrame and requestVideoFrameCallback have a serious bug (if someone can point me to where to open a bug report about this with Chrome and other browsers, please do): when the application being recorded goes into fullscreen and you try to paint it into a Canvas to compose a video, the callbacks on these methods simply stop firing. I'm not sure whether it is caused by some security policy, but my experience after spending a few days on this is that the only way to record the screen of an application in fullscreen is to use a timeout. This issue is not documented anywhere, and unless there is some security policy I have missed, I hope I'm doing the community a favor by making it discoverable. I'll investigate via a bug report with Chrome and link it back here when time allows. Issue no3: When I try to captureStream from a Canvas that has painted images from a different domain, captureStream produces a Blob of size 0; the stream capture fails silently. Solution: I encountered this when capturing the stream from a canvas onto which I was painting images from another domain. The drawImage operation worked without a problem, but the captured stream produced a blob that was 0 bytes in size. It took many hours of hard debugging to understand that the canvas was “tainted”; I discovered this when I tried to capture a Blob from the Canvas directly rather than the stream, and there the browser complained about the tainting. A tainted canvas is one that has pixels drawn from sources whose origin differs from the origin of the page hosting the canvas element. For this to work, go to your CloudFront distribution, or whatever CDN you use to serve media assets, and enable CORS. It will work like a charm thereafter. 
The most confusing thing was that the paint operation on the Canvas element worked fine, but capturing the stream failed without any error or warning message whatsoever; it can take hours to realize what the root cause is.
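Concretely, "enabling CORS" means the CDN must answer image requests with an Access-Control-Allow-Origin header that covers the recording page's origin, and the page must opt in by setting img.crossOrigin = "anonymous" on the Image before drawing it onto the canvas. A sketch of the server side, assuming an Apache origin with mod_headers enabled (the matched extensions and allowed origin are illustrative, not from the article):

```apache
# Let the recording page read back the pixels it paints onto the canvas
<IfModule mod_headers.c>
  <FilesMatch "\.(png|jpe?g|gif|webp)$">
    Header set Access-Control-Allow-Origin "https://your-app.example"
  </FilesMatch>
</IfModule>
```

On CloudFront specifically, the equivalent is attaching a response-headers policy that emits the same header.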
https://medium.com/@motiooon/get-display-media-the-canvas-api-and-recording-video-in-the-browser-dd8aaa9a6bfe
['Gabriel G Baciu']
2020-12-14 04:18:33.031000+00:00
['Video', 'JavaScript', 'Canvas', 'Getdisplaymedia', 'Getusermedia']
Using a Reverse Proxy Behind CloudFlare to Solve vuejs Cross-Origin Problems
Using a Reverse Proxy Behind CloudFlare to Solve vuejs Cross-Origin Problems

Background: A vuejs project is deployed on fake-a.com and needs to call an API hosted on fake-b.com, with URLs like fake-b.com/api/items. Both domains, fake-a.com and fake-b.com, are resolved through CloudFlare ("CF" for short).

Goal: Enable a reverse proxy in Apache on fake-a.com, and use it to map requests sent to fake-a.com/api/items onto fake-b.com/api/items. This "tricks" the browser into treating the calls as same-origin, allowing the vuejs project on fake-a.com to use the API.

Implementation: First, make sure the modules Apache needs for reverse proxying are enabled. If they are not, the following commands from the article "How To Use Apache as a Reverse Proxy with mod_proxy on Ubuntu 16.04" will enable them:

sudo a2enmod proxy
sudo a2enmod proxy_http
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests

After running these commands, remember to restart Apache.

Next, edit the Apache vhost conf file on fake-a.com, for example /etc/apache2/sites-available/000-default.conf. Its contents look something like this:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    <Directory /var/www/html/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <IfModule mod_dir.c>
        DirectoryIndex index.html index.htm
    </IfModule>
</VirtualHost>

⚠️ Note: because fake-a.com hosts only this one project, this vhost file has none of the ServerName configuration that appears in other tutorials. If you need to set up multiple vhosts, add it yourself; this article does not cover that.

In the vhost, only the following two lines need to be added:

ProxyPass /api http://fake-b.com/api
ProxyPassReverse /api http://fake-b.com/api

The complete vhost afterwards should look like this:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ProxyPass /api http://fake-b.com/api
    ProxyPassReverse /api http://fake-b.com/api
    <Directory /var/www/html/>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <IfModule mod_dir.c>
        DirectoryIndex index.html index.htm
    </IfModule>
</VirtualHost>

⚠️ Note: many reverse-proxy tutorials add one more directive, ProxyPreserveHost, i.e.:

ProxyPreserveHost On
ProxyPass /api http://fake-b.com/api
ProxyPassReverse /api http://fake-b.com/api

If you set this directive, the proxied requests will be intercepted by CF's security mechanism and trigger an "Error 1000: DNS points to prohibited IP" error. See this user report for details. Done.

One more problem that often needs solving in vuejs projects is the 404 Page Not Found shown after a page refresh. My preferred fix is to place an .htaccess file in the site's root directory with this content:

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.html$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.html [L]
</IfModule>

This configuration could also be written into the vhost config, but .htaccess is more flexible. Tip: remember to enable Apache's rewrite module with a2enmod rewrite, and restart Apache afterwards.
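With the proxy in place, the only change needed on the vuejs side is to call the API through a relative path, so the browser resolves it against fake-a.com and never issues a cross-origin request. A minimal sketch (fetchItems is a hypothetical helper of mine, not from the article; the /api/items endpoint is the one described above):

```javascript
// With the Apache ProxyPass rule in place, the request below targets
// fake-a.com, and Apache forwards it to fake-b.com behind the scenes.
const API_BASE = "/api";

// Resolving the relative path against the page origin shows why the
// browser now treats the call as same-origin:
const resolved = new URL(`${API_BASE}/items`, "http://fake-a.com/").href;
// resolved is "http://fake-a.com/api/items"

// Hypothetical caller, e.g. invoked from a vuejs component:
const fetchItems = () => fetch(`${API_BASE}/items`).then((res) => res.json());
```

No CORS headers, preflights, or browser workarounds are involved, because from the browser's point of view there is only one origin.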
https://medium.com/@wangyiguo/%E7%BB%8F%E7%94%B1-cloudflare-%E5%AE%9E%E7%8E%B0%E5%8F%8D%E5%90%91%E4%BB%A3%E7%90%86%E8%A7%A3%E5%86%B3-vuejs-%E8%B7%A8%E5%9F%9F%E9%97%AE%E9%A2%98%E7%9A%84%E5%AE%9E%E8%B7%B5-38bd74f3a141
['Max Wang']
2020-12-23 19:57:05.029000+00:00
['Vuejs', 'Cloudflare', 'Proxy']
Take the Desire Paths. Make Them Better.
While the idea of shopping by chat may feel foreign to many in the U.S., it’s native behavior to many in China. Separately, there’s the trend to “chatbot all the things.” Here, messaging can be seen as a way for users to interact with a business however they choose, and if the chatbot works, users build their desire path on-the-fly. And a third messaging-related example is chat-based customer support services (like Intercom). When users feel the marked paths leave them feeling lost with no clear way to go, these services give them a paved path toward getting answers. Smartphones — desire path engines. Smartphones exploded in popularity because they paved the path that previous connected devices were only lightly facilitating — that is, browsing the Internet. Before the iPhone, using OTA messaging services, taking and sharing photos, etc., in feature phones like the Blackberry Curve, while possible, was clunky and painful. The experience for users felt bolted-on instead of native. The iPhone (and Android) paved the desire path for using mobile Internet by making it the prime function of the phone. Once that path was paved — access and functionality on mobile — smartphone platforms became the wide open greens that facilitate a near limitless potential for desire paths. Taking it even further, you can see apps as paved desire paths for what users would otherwise do on a browser. And today, as smartphones get more sensors, faster connections, and more processing power, the frontier of mobile grows. Twitter and Instagram — streams of constraints and desire. Twitter is a study in constraints and desire. Hashtags and at replies were early desire paths that users carved into the Twittersphere that Twitter ultimately made native to the platform. Image-sharing is another example. Twitter didn’t build in this functionality so users created the desire paths — sharing links to image-sharing services to work around the problem. 
That desire path got paved when Twitter brought the functionality in-house but it arguably happened too late — a separate image sharing service had paved it better. Meet Instagram. Or look at Twitter’s character limit. While it’s not going away, Twitter users can easily circumvent the character limits using images. If you want to share more than 140 characters — say you want to share a quote from an article or a book or you wrote a note on your phone — simply do a screenshot on your phone and share that. Shifting to Instagram. Here’s a platform where image sharing is primary and text secondary. Many social media influencers (e.g. celebrities) have amassed huge Instagram followings and use Instagram as their primary channel. So what do these influencers do when they have something to say on Instagram? Like on Twitter, they screenshot their phones — either a written note or whatever — and share that⁴. Benedict Evans has pointed to screenshots on phones as desire paths to fill unmet needs within apps. (Benedict Evans likens screenshots as a catch-all for lots of emerging desire paths) Other examples of Instagram paving desire paths abound — from collages to carousels to (arguably) co-opting Snapchat’s story functionality. Google. Google is a desire path paving machine. Where do you want to go? Google takes you there — “10 blue links” that get you from Point A to B in milliseconds. Google is well-known for paving desire paths. No search required: Google auto-completes the query. For example, Google has started surfacing the answers right in the search engine results page requiring no further work on the part of users — as is the case with Knowledge Graph cards. In some cases, Google surfacing quotes from top results for a query has dramatically impacted the downstream websites from which the answers came. 
Another example of Google paving a desire path is when the answer to a query surfaces right there in the auto-suggest box — before you even execute the search! The Internet is a massive green space across which desire paths are being cut all the time, in apps, devices, and websites. What other examples can you think of? Desire paths and web design. For a long time web developers had no way to see how users behaved on their site. Outcomes could be measured using analytical services (e.g. Google Analytics), but actually seeing where users are going and intuiting what they’re trying to do in any given situation was near impossible. That’s why FullStory session replay is so powerful — it reveals where the “paved sidewalk” (a web site’s layout, app UX/UI, whatever) isn’t working as intended, as well as visualizing, literally, where users are trying to go. Using session replay to see user flows helps developers and designers improve them. It’s a little known fact that the original code name for FullStory was CowPaths. Separately from surfacing user desire paths, FullStory also helps with the desire paths of designers, product managers, and engineers — by allowing them to search user interactions for whatever behavior they need to understand. Which users click a certain button? Use a certain feature? Abandon their cart or get lost in your knowledge base? Just search it. Just as Google search is a desire path paving engine for bringing users to the world’s information, FullStory provides that power for an organization’s user interactions on their web app. With or without a service like FullStory, understanding a user’s desire paths on your site can make the difference between success and failure. No business is immune to the constraints on time and attention for users — getting them from point A to point B on your site so they can do whatever it is they’re trying to do — well, that’s our job as builders of products and services. And it helps to mind the desire paths. 
The desire path you’re on. We all make plans, but life happens, anyway. Desire paths result. Instead of relegating ourselves to plodding down the same desire paths we’ve walked countless times before, perhaps it’s time to make them better — whether in business, in design, or in life. What desire paths do you take? What paths have you paved? Where is the path now on taking you?
https://medium.com/fullstory-blog/paving-the-desire-paths-24aa3b002a6a
['Justin Owings']
2017-04-17 23:11:55.182000+00:00
['UX', 'Web Design', 'Developer', 'Web Development', 'Design']
Role and Importance of Relationship Management
Relationship management is the process of maintaining bonds, guided by strategies around what a business delivers to the market, to its audience, or to other businesses. It works by focusing on the quality of the deliverables offered to the audience. It helps an organization or business grow in the market while building trust and healthy bonds with people, and it nurtures those bonds into better future relationships. Visit: why is honesty important Benefits of Relationship Management: Trust in the quality of the product or service being offered. Increased market value of the brand. Higher revenue through strong relations. These opportunities contribute to the expansion of the brand. More opportunities through better relations. An expanded network that works as great advertisement. Relationship management conveys a feeling of loyalty from the business side of the market. Visit: how to improve negotiation skills Importance of Relationship Management It creates a strong understanding between a business and its audience/market. Relations are created within the market and even inside the brand as well. Better understanding results in clearer communication of requirements. The quality of the product improves through a better understanding of market demands. The brand gains loyalty from different parts of the market and from other businesses as well. Relations are also managed within organizations. Product deployment costs are reduced. Project objectives are communicated wisely, reducing extra costs. Visit: emotion management skills Some Important Facts Adoption of relationship management services with tech integration, such as cloud facilities, has risen from 12% to 87%. In today's market, around 91% of businesses consider relationship management a crucial part of success. Of these, around 74% of businesses have better access to market data thanks to tech advances. 
Tips for Relationship Management: Become transparent with your business Deals Look for trust-building Strategies Making your deliverables more Credible Increase the Quality of what you deliver to meet Market needs Focus on improving the market value of Brand Be the one to initiate a conversation Push yourselves to maintain relations, even if you don’t see any reason to do so Personality Development and Relationship Management Increased Focus, Helps you to look for precise information about market and audience Recognizing opportunities for expanding business Managing timings, with new relations, form strong bonds Good Communication makes better relationships Honesty in business deals Visit: personality development skills Originally published at https://www2.slideshare.net on December 10, 2020.
https://medium.com/@strengthstheatre-com/role-and-importance-of-relationship-management-b09d7361ac30
[]
2020-12-10 10:53:56.257000+00:00
['Relationship Management', 'Career Development', 'Personal Development', 'Self Development', 'Personality Development']
TODAY’S QUESTION IS SPECIAL
RESULTS AND THE COMMENT OF THE DAY Yesterday’s Question: How many constellations can you point out/recognize/name without any help? Comment: “Because I think 0 is embarassing 🙈” by drishitmitra who answered “1”. See the results and comments here: https://steemit.com/crowdini/@crowdini/question-2018-08-20 ======= NOTE: Comments are published immediately but votes are not posted until the post is 30 minutes old. If you don’t see your vote after 30 minutes, wait a minute, refresh the page and check again. Make sure you check to see that your vote was counted. You can tell if your vote counted by seeing the circle with the up arrow to the left of the post earnings is green. You can also go to https://steemd.com/@crowdini (replace crowdini with your steemit username) and look at the right column for your vote. You can always manually upvote at any time. If your vote was not counted, fill out this form and we will figure out why ASAP. ======= YESTERDAY’S RESULT Yesterday’s Question: How many constellations can you point out/recognize/name without any help? Majority Answer: 2 TODAY’S QUESTION Do you have set motivational phrases you say to yourself?
https://medium.com/daily-emails/todays-question-is-special-6ab7e322f68a
[]
2018-08-21 14:07:42.513000+00:00
['Crowdini', 'Questions', 'Games', 'Steemit', 'Steem']
Improving Mobile App UX/UI
We designers often liken a project launch to that of a rocket or an airplane. Going along with this metaphor, the website is the demonstration of the rocket, whereas the app is the launch control panel. Which is where the principal difference lies. Imagine someone who knows nothing about piloting being put in charge of a cockpit. Will they be able to tell at a glance how to take off and fly? If you give them a simple and comprehensive set of instructions, if the control panel is intuitive, they will. This is exactly what the purpose of UX/UI is: to help the user quickly find their way and accomplish their goal. The most popular things are the ones that make life easier. So today’s motto of the designer and developer is “Let’s make it simpler and easier!” (Aircraft control panels still look so complicated because web designers haven’t gotten their hands on them.) Let’s see how we can simplify and improve things so that users can just “get in, push a button, and take off.” 1. Functional minimalism Apps normally serve to achieve simple tasks. They shouldn’t exhaust users with multiple actions and distract them from their goal. Functional minimalism is a “no-frills” principle. This applies to everything: contents, number of actions, design elements. Whereas a website can afford to have lots of contents and attention-grabbing design features, for an app it is a kiss of death. There’s too little visual space, so it must be strictly business. But “no frills” doesn’t mean it should be boring, simplistic, or monotonous. Here’s what it means: minimum cognitive load; minimum actions; logical actions and transitions; simple actions; simple design. Mobile app design is based on these very principles of functional minimalism. Laconic graphics, the presence of negative space between elements, a simple and minimal color palette. Any design element must be functionally justified. 2. 
Order, consistency, predictability Order is the organized presenting of all the necessary things on the screen. It’s a structured hierarchy of contents, a logical sequence of transitioning from the entry point to the app’s main goal. The less the user has to think about what they need to do, the better. Let the interface guide them. This implies a simple and easy way with no distractions. Not everyone is as attentive as designers, so don’t make them search for the things they need or try to figure out where everything is. Navigation should be consistent and understandable. Making the user lose their way between screens is absolutely inexcusable. So don’t hide navigation elements or move them during transitions. If you need to, you can use visual hints. 3. Interaction Ways of holding the phone To program navigation, you must understand how users interact with their mobile devices. Different app tasks require different ways of interacting. It’s important to determine which ways will be used for your specific task and remember that people hold their phones differently depending on the situation. There are three main ways: one-handed, using one thumb; two-handed, using one thumb; and two-handed, using two thumbs. Here’s a helpful diagram by Steven Hoober: How people hold and interact with mobile phones An app should be easy to use with one hand and on a big screen. Here are interactions with one hand: Methods of holding a touchscreen phone with one hand Gestures Navigating an app through gestures has many advantages both for the user and the developer. The gestures must be natural, logical, and relevant to the task at hand. The most popular finger gestures include: touching pressing holding double click dragging tapping swiping scrolling two-finger scrolling pinching and spreading Users are used to certain models of interaction, so it’s best to utilize familiar patterns. There should be no surprises here. 
Animation Animation livens up the interface and helps interaction like nothing else. It must be strictly functional and never too long or intrusive. Gamification The best way of motivating the user to use your app is adding game mechanics, as long as it is appropriate and relevant to the objectives of both the app and the audience. 4. Everything must be close at hand! Or rather, at the fingertips. The main actions should draw the users’ attention through prominent positioning of an icon or button. Menu options should be large enough to be easily selectable. Elements can’t be too small or cluttered too close together. Not all people have slim fingers and perfect motor skills, so keep that in mind. Fields for entering data, forms to be filled out, and selections to be made should all be positioned in the lower part of the screen. The information needed to move forward must be located within the thumb zone. 5. Blur method This method enables you to see what the user will see at first glance and check whether the accents have been placed properly. You can use the blur effect or just put on wrong prescription glasses — whatever lets you see the elements fuzzily. Let’s look at an example of a blurred fill-out form: It looks like you need to select an option and then press the yellow button. The original, however, looks like this: This means the accents are all wrong. The important buttons are obscured, and the accent is on the wrong button. The user is disoriented. They automatically press the yellow button and go to a wrong destination. The blur method is useful for checking whether the interface guides the user to the right place. It’s an easy way to separate the main features from the secondary ones, identify the missing accents and monotony. By identifying a mistake, we can emphasize the most important features and visually tone down the secondary ones. 
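A quick way to try the blur method without the wrong prescription glasses is to paste a temporary rule into the browser's devtools. The selector and blur radius below are just a starting point, not a prescription:

```css
/* Temporarily blur the whole UI to judge where the visual accents fall */
body {
  filter: blur(6px);
}
```

If the primary action still reads as the obvious next step while blurred, the accents are right; if a secondary element dominates, the hierarchy needs work.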
To make sure important interface elements are legible and visible even in low light, we must utilize the principles of contrast. Top Secrets of Contrast in Design 6. Locality The locality principle is similar to the general principle of interaction, i.e. the accessibility of elements, except it has more to do with visual and psychological perception. Any result-oriented action must take place close to the suggestion to take that action. Designers are often tempted to place an element wherever they can find the space for it. But if we’re talking about any action, then the element that helps perform that action must logically be close, and the closer the better! And let’s not forget about the progress of events. Say you need to place an “Add” button in a playlist. It seems logical to place it at the bottom of the list: But what happens when the user creates a long playlist? The button will drop to the bottom and won’t be visible the next time the app is opened. The principle of locality states that an important element must not be moved to where it’s no longer visible. 7. Avoid long registration forms Large forms should be divided across several screens to be filled out step-by-step. Forms must only require the information that is absolutely necessary. To speed up and simplify the filling-out process, you can use autosuggest, “forward” and “back” thumb buttons, and avoid the scrollbar. 8. Avoid dropdown lists Dropdown lists are not a good idea in an app. They require lots of clicking and scrolling, which is annoying and time-consuming. Long lists don’t fit on the screen and can be disorienting. It’s best to find an alternative to dropdowns. A limited choice can be represented by icons, while a long list can utilize typeahead. 9. Text All in-app texts should be brief, simple, and legible. legibility To be easily legible without zooming in, text must be in contrast against the background and be no smaller than 11pt. 
Legibility significantly depends on the font, line spacing, and kerning.

brevity

Don’t try to cram as much text as possible onto the screen. The less text there is, the more comprehensible it becomes. This also applies to user-entered text. It’s best to use autosuggest in forms to minimize the amount of information that is entered manually.

replacing text with images

Wherever possible, text should be replaced with visual cues. For example, instead of written instructions, use a short video or simple illustrations.

10. Testing

You should always test your project to receive independent user feedback and make sure your app is used the way its designers and developers meant it to be used.
https://medium.com/outcrowd/improving-mobile-app-ux-ui-c4658620448e
['Erik Messaki']
2020-09-23 09:41:16.745000+00:00
['Web Development', 'UI Design', 'Mobile Apps', 'Mobile App Development', 'Web Design']
Don’t Stick Only To Bootstrap Use Your Own Framework
REASONS

1- You Can’t Change the Bootstrap Way

Bootstrap grids always work on a 12-column grid system, and you can’t change that. Maybe you would like to work on a 24-column system, or maybe you want 20px of padding in every grid you create (you can do this in Bootstrap by giving another class name to your row, but that is still an extra thing to add), or maybe you want every image inserted into your columns to be centered automatically. The examples go on, but the main idea is that you can tell your own framework how to behave, and you can even change that behavior for another project according to your needs. The col-3 effect when you use a 24-column system on 8 divs: as you can see above, while you can’t create 8 equal divs with Bootstrap, you can do it with your own framework. Let’s think about another Bootstrap class: “btn”. When you give this class name to an element, it will behave like a button. Here are the class names of buttons and their results: Bootstrap buttons on Bootstrap. As you see, you need to give 2 class names: one for the button itself and another for the background and text color. But what if you want a different background and a different hover effect on the button? You need to customize it, and usually, when someone creates a button for a web page, he or she uses the same style for every button on the page. Instead of giving 3 class names to an element, you can create a custom button for your page and use it with only one class name, thanks to your framework.

2- You Know What Is Behind Those Classes

You know exactly what CSS code has been written inside the classes of your own framework, so you understand them perfectly. But in Bootstrap, for many parts, you will only know the result of a class, not how it is achieved. For instance, did you know that the grid row in Bootstrap has -15px left and right margins? It is there for a purpose, but you can’t know how every Bootstrap class works. 
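To make the idea concrete, here is a rough sketch of what such framework rules could look like in plain CSS. All class names (.row, .col-3-of-24, .btn-main) and values are invented for illustration; they are not Bootstrap classes:

```css
/* A 24-column flexbox grid: eight .col-3-of-24 divs fill one row exactly,
   every row gets 20px padding, and images center themselves. */
.row {
  display: flex;
  flex-wrap: wrap;
  padding: 20px;
}
.col-3-of-24 {
  flex: 0 0 12.5%;   /* 3 of 24 columns = 12.5% of the row width */
  max-width: 12.5%;
}
.col-3-of-24 img {
  display: block;
  margin: 0 auto;    /* centered automatically, no extra class needed */
}

/* One class instead of "btn btn-primary": shape, colors, and the
   hover effect all live behind a single name. */
.btn-main {
  display: inline-block;
  padding: 8px 16px;
  border: none;
  border-radius: 4px;
  background: #2c7be5;
  color: #fff;
  cursor: pointer;
}
.btn-main:hover {
  background: #1a5fc8;
}
```

In the HTML, a button then needs only class="btn-main", and the whole look can be changed in one place.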
3- No More Complicated HTML

While using Bootstrap you will see that some elements carry too many class names, which causes confusion. Sometimes, in CSS, beginners select an element by its class names, like .col-3.mt-5.p-3. Do you see what is wrong here? Exactly! What if you change mt-5 to mt-4? Every time you change a Bootstrap class on an element, you need a change in your CSS too. A better solution is to give a specific class name to that element, but that also means an extra class. With your own framework, you decide all the features of your grid system up front. By doing that, your HTML stays clean: you can give one specific class name to an element and apply changes through that class name.

4- Different Layouts for Different Purposes

For me, the biggest weakness of Bootstrap is that it has only one way of doing things. Let’s say you created your own framework and want to start another project, but the framework doesn’t exactly fit it. You don’t need to create another framework: just copy your old one, make small changes, and your second framework is ready in minutes. You can name the second framework something like blog_layout.css so you will know it is for blogs. You can create as many frameworks as you want. That is the power of using your own framework.

5- You Can Still Use Bootstrap

Even though we criticize Bootstrap here, you can still use it, because recreating everything that is in Bootstrap is really hard. Besides, many Bootstrap classes will still be useful for your project. Just combine the power of Bootstrap and your own framework and build pages even faster than before.
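As a small, hypothetical illustration of point 3 (the class name .profile-card and its values are made up): instead of targeting a stack of utility classes, the element gets one semantic class, so changing the spacing never breaks a selector:

```css
/* Before: the HTML reads <div class="col-3 mt-5 p-3"> and the stylesheet
   must target .col-3.mt-5.p-3. Swap mt-5 for mt-4 in the HTML and this
   selector silently stops matching. */

/* After: the HTML reads <div class="profile-card"> and every decision
   lives here, under one name. */
.profile-card {
  width: 25%;         /* what col-3 expressed */
  margin-top: 3rem;   /* what mt-5 expressed */
  padding: 1rem;      /* what p-3 expressed */
}
```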
https://medium.com/@YunusAybey/dont-stick-only-to-bootstrap-use-your-own-framework-8252686a2e69
['Yunus Emre Aybey']
2019-08-19 13:21:44.277000+00:00
['HTML', 'CSS', 'Bootstrap', 'Framework', 'Bootstrap 4']
How To Manage Your Money Mindset Effectively!
I am sure you’ll agree that the crux of human endeavor and achievement, and in many a case the scaling of astonishing heights, is the result of mindset. And it’s this mindset, and where you let it live on your personal spectrum, that determines your progress, success, or failure. Did you know that one flavor of this mindset is the “money mindset”? That’s simply your set of beliefs about money and how you approach your earnings, savings, and spending in your life. If you are someone living “paycheck to paycheck”, you have nurtured a scarcity mindset. And if you are geared up to jump on opportunities that create wealth and use it effectively, you have nurtured an abundance mindset. Where do you believe your money mindset lies? Believe it or not, how you perceive money and treat it has a lot to do with your experiences with it during your childhood and youth. And most of us operate from this frame of mind throughout our lives. Depending on the generation you belong to, money can be central to your existence or just an enabler of your lifestyle. The question is not whether you feel you have enough money to fulfill your needs; it’s about how you determine your needs and ensure you have enough to lead the life you want to. Know that you can overcome any limiting beliefs you may have, to pursue a life of abundance and opportunity. It’s all within your control, and yours alone! A scarcity mindset always tells you that you will not have enough, and hence you conduct yourself in that fear with everything you do. With an abundance mindset, you cultivate positivity with every step you take, and you can get creative when backed into a corner. You believe you will succeed, find that job or launch that business, find those customers, and reach your goals. It’s not whether you have the resources to make it happen that matters. It’s the attitude that you will find your opportunities and bring them to fruition that works in your favor. 
Scarcity breeds fear, and in that fear, you may hold on to the bare minimum you have rather than take the risk of allowing new opportunities into your life. Sometimes, you have to let go of the rope to ensure you cross the chasm and land on safe ground. Otherwise, you will be left hanging from a rope that eventually stops swinging, suspended right over the chasm, with no energy left to hold on. With a mindset telling you that you will have more than you need, and that it will find its way to you provided you keep looking, you are bound to win over worry and focus your energies on the right things. Focus on abundance, and that is what will find its way into your life. Focus on lack and scarcity, and it will always be so. The choice, my friend, is yours to make. Financial decisions demand rational tools to stay on track (budgeting, for instance, helps with planning), even though a tool like a budget can itself spring from a scarcity mindset. It is still needed; cultivate it for discipline more than anything else. Be prudent in your use of finances. Don’t get swayed by emotions, and when you are, let discipline guide you back on track. Focus on your needs and wants, and don’t let the world dictate what you should do with your money. It’s personal to you; treat it with that respect. Never compare yourself with another, but compare with where you were yesterday. Cultivate a positive money mindset that pushes you to look for and focus on opportunities rather than obstacles, tells you that no situation is beyond repair, allows you to be open to help, whether receiving or offering it, and ensures you see progress in every step. Focus on your goals and what it means to achieve them, be it to live debt free, own your home, invest in things you cherish, or spend on people you love. They are the reason you will want to pursue abundance, and they are also what will keep you motivated to do so. 
Money can be what you want it to be for you; a source of stress, fear, and scarcity or an enabler of opportunities, solution to your problems, and a tool in your hands to change the world. True wealth depends on your money mindset and it’s not what you have in your pocket or checkbook, but what you have between your ears!
https://medium.com/@rajeevmudumba/how-to-manage-your-money-mindset-effectively-555eff55977a
['Rajeev Mudumba']
2020-12-21 18:15:46.769000+00:00
['Financial Planning', 'Money', 'Money Mindset', 'Mindset', 'Finance']
4 Reasons an Online Business Should Spend on Customer Support
Editor’s Note: Lucy Benton is a marketing specialist and business consultant who contributes to Awriter. Today she joins us to explain how online businesses are passing up major profits by skimping on customer support, and how to avoid that mistake. Every online business has two major goals: sell products or services to customers, and support their continuous use. The first goal increases profits, while the second one works to retain existing clients and attract new ones. Exceptional customer support is a great way to accomplish the second goal because it helps you to meet the expectations of your market, and goes beyond that to building loyalty and improving your reputation. Surveys have indicated that many online businesses fail to answer 50% of inquiries fielded to them by existing customers, so it’s clear that some companies neglect customer support, perhaps to avoid the costs of maintaining support representatives. But on average, it’s 7 times more expensive to acquire a new customer than to keep an existing one. That’s a very big difference, and so are the losses that result from poor customer support. According to NewVoiceMedia, businesses in the U.S. lose out on $41 billion in revenue every year thanks to this systemic oversight. In this post, we will analyze several reasons why an online business should invest in customer support, along with key takeaways that will help you build a better support team and avoid an unnecessary dip in profits.

Reason #1: positive customer experience is more valuable than price

Online surveys continue to find that customers regard service and support as more important factors than price. For instance, a 2014 customer experience study by American Express suggested that more than 60 percent of customers were willing to spend more with an organization that provides better service than competitors. According to the same study, only 5 percent of online shoppers reported that customer support ever exceeded their expectations. 
It stands to reason that online marketers are losing $41 billion per year as a result. The statistics don’t lie: when online businesses show customers that they are valued and appreciated, the feelings go both ways. Eventually, people develop loyalty to a company based on good experiences. The takeaway: customers are willing to pay more if they are valued by a company. Make an active effort to treat customers as people, address their needs and concerns, and improve relationships with them.

Reason #2: exceptional customer support builds awareness of your brand

Mark Zuckerberg calls a trusted referral “The Holy Grail of advertising,” and there are good reasons for that. First, word of mouth is one of the most powerful influencers on purchasing decisions. It’s human nature to seek out testimony and judge based on the experience of others. Second, more than 80 percent of online customers trust the opinions of their family members and friends about the products and services of particular companies, according to the Nielsen Global Trust in Advertising Report. As such, it makes sense that content generated by your clients will always be more effective than advertising. The reason is simple: people trust the experiences of previous customers, and as we have already established, good customer support makes happy customers! The takeaway: people will remember your brand if they see positive feedback and comments about it, along with testimonies of good experiences with your customer service representatives.

Reason #3: excellent customer support appeals to new customers

Not so long ago, many businesses exclusively cared about closing a sale. As long as customers signed on the dotted line, it wasn’t very important how they were persuaded to do so. But things have changed a lot since then. In the information age, and with hundreds of competitors to choose from, your target demographic has a lot of expectations for online businesses. 
It wants more than a service; it wants a personalized experience to justify choosing you over anyone else. In the competitive field of online academic writing tutorials, for instance, clients prefer services that emphasize and adapt to personal writing styles over generic by-the-book critiques. Meeting these expectations by adapting to current and prospective clients goes a long way toward improving the position of a company on the market, so excellent customer support is appealing to customers who want something more unique. The takeaway: internet customers seek out businesses that will adapt to the needs of individual clients, and to attract leads, your business should make every effort to accommodate unique needs.

Reason #4: effective customer support reduces issues

Unfortunately, it’s impossible to do online business without experiencing problems with customers. No matter how hard you try to make them happy, some problems are always going to arise. But this hard truth of business should not stop you from continuing to raise the bar. Here’s why: while you can’t guarantee a lack of problems, you can ensure that they never become serious enough to damage the reputation of your company. If customers know that potential issues will be handled properly even before they make a purchase, they will feel more confident and comfortable. And, by mitigating the chance of a social media crisis, you take a solid precaution against bad publicity tanking your business over a simple mistake. The takeaway: good customer support is a way to reduce and prevent problems, making potential customers feel more confident. An ounce of prevention is worth a pound of cure, and your business should be as focused on preventing hard feelings as it is on resolving them.
https://medium.com/online-marketing-institute/4-reasons-an-online-business-should-spend-on-customer-support-cc78c87d6cc1
[]
2017-07-12 18:02:48.770000+00:00
['Online Marketing', 'Internet Marketing', 'Social Media Marketing', 'Business Strategy', 'Customer Experience']
State Machines in Solidity
This article discusses using State Machines as a convenient way of enforcing workflow in Solidity, the de facto smart contract language for the Ethereum blockchain. A State Machine is only ever in one state and is moved between states by changing inputs or by input events. In the case of a Smart Contract in Solidity, a message is sent from a different account (an external account or another contract) to a contract function to cause a change in state. If the input is valid for the current state, the State Machine will move to a new state.

Background

During the development and testing of Datona Labs’ Solidity Smart-Data-Access-Contract (S-DAC) templates, we often use state machines to encompass the workflow. The example for this article models a workflow between two parties who ask questions and provide answers.

Roles and Actions in our example UML State Machine Diagram

Here is the UML State Machine diagram for our example (Example Question And Answer workflow between Data Owner and Data Requester). The round-cornered rectangles represent states, the arrowed lines are transitions, and the transition labels are the trigger events (from the specified sources) that cause each transition to take place. See UML State Machines for more information.

Solidity State Machine Model

The State Machine is moved between the states defined in the solution contract by the transition functions. Here is the partially developed Solidity contract:

contract QnAsm100 is ... {
    enum States { AwaitQuestion, GotQuestion, AwaitAnswer, GotAnswer }
    States public state = States.AwaitQuestion;
    ... 
    modifier onlyState(States expected) {
        require(state == expected, "Not permitted in this state");
        _;
    }

    function setQuestion() public onlyState(States.AwaitQuestion) {
        state = States.GotQuestion;
    }

    function getQuestion() public onlyState(States.GotQuestion) {
        state = States.AwaitAnswer;
    }

    function setAnswer() public onlyState(States.AwaitAnswer) {
        state = States.GotAnswer;
    }

    function getAnswer() public onlyState(States.GotAnswer) {
        state = States.AwaitQuestion;
    }
}

It can easily be seen that the current state (checked by the function modifier onlyState, which ensures that state transitions are only performed in the correct state), the new states, and the transition functions map directly onto the UML State Machine diagram. We call the party who owns the data the Data Owner. This may be different from the contract owner. The party interested in using the data is called the Data Requester.

Inherited Support Contracts

Since these roles are common themes in our contracts at Datona Labs, during contract development we can use base classes for these roles and for each of the other roles common to our domain. This technique actually hijacks inheritance as a substitute for composition, for the convenience of the author. It may be acceptable for test code, but we don’t recommend it for production code. The DataOwner base class encompasses generic data owner operations, such as a constructor and a function modifier (onlyDataOwner), as illustrated below:

contract DataOwner {
    address private dataOwner;

    constructor(address account) public {
        dataOwner = account;
    }

    modifier onlyDataOwner() {
        require(msg.sender == dataOwner, "Must be Data Owner");
        _;
    }
    ...
}

Other functions, such as changeDataOwner(account), may be provided to assist rapid development of trial contracts, but none are needed in this case. The DataRequester and other role base classes are similar. 
We can add the function modifiers from the base role classes to the solution contract:

contract QnAsm100 is DataOwner, DataRequester, ... {
    ...
    function getAccount(address account) internal view returns (address) {
        return (account != address(0)) ? account : msg.sender;
    }

    constructor(address dataOwner, address dataRequester)
        DataOwner(getAccount(dataOwner))
        DataRequester(getAccount(dataRequester))
        public {
        ...
    }

    function setQuestion() public onlyDataRequester ...
    function getQuestion() public onlyDataOwner ...
    function setAnswer() public onlyDataOwner ...
    function getAnswer() public onlyDataRequester ...
}

The DataOwner and DataRequester base classes must be initialised during the solution contract constructor. The account (AKA address) for either party or both parties may be supplied as an argument to the solution contract. If an account is not supplied (that is, it has a value of 0), then the msg.sender account (i.e. the contract owner) is used. We have used the function getAccount to aid the readability of the constructor above. It does not make sense for the DataOwner and DataRequester to be the same account, so we must also test for this condition within the constructor code:

constructor(address dataOwner, address dataRequester)
    DataOwner(getAccount(dataOwner))
    DataRequester(getAccount(dataRequester))
    public {
    require(dataOwner != dataRequester, "Data Owner must not "
            "be Data Requester");
}

Storing data on the blockchain

We need to store the Data Requester’s question while waiting for the Data Owner to fetch it, and the Data Owner’s answer while waiting for the Data Requester to fetch it. In this example, we store both the question and answer in a common data string in the contract:

contract QnAsm100 is ... {
    string data;
    ...
    function setQuestion(string memory question) ... {
        ...
        data = question;
    }

    function getQuestion() ... returns (string memory question) {
        ...
        question = data;
    }

    function setAnswer(string memory answer) ... {
        ... 
        data = answer;
    }

    function getAnswer() ... returns (string memory answer) {
        ...
        answer = data;
    }
}

We can use a common data string because the State Machine ensures that only one use of the string (question or answer) is required at any one time, and we are encouraged to minimise the use of storage because it is so expensive (see below for just how expensive it is).

Hierarchical State Machine

One of the issues with the current solution is that there is no way to terminate the contract and release any outstanding funds back to the contract owner. Assuming that this could be done at any time, the solution can implement the following Hierarchical State Machine design (see UML State Machines for more information), in which the Contract Owner has the power to terminate at any time. This is easily implemented in our solution contract by inheriting from the ContractOwner base class (hijacking inheritance as a substitute for composition again), which automatically records the contract owner and provides an onlyContractOwner modifier (in a similar manner to DataOwner, above):

contract QnAsm100 is ContractOwner ... {
    ....
    function terminate() public onlyContractOwner {
        selfdestruct(msg.sender);
    }
}

The Solidity selfdestruct() function returns the outstanding ether to the given account; here it is invoked via the terminate function, which may be called by the contract owner at any time.

Testing by Contract

In order to test the solution contract, we can deploy it on a blockchain (a testnet is fine). However, the solution contract constructor has 2 parameters: the DataOwner account and the DataRequester account. To assist automated testing, we can create proxy accounts which perform the actions of the DataOwner and the DataRequester. 
Here is an example of the DataOwner proxy account:

contract ProxyDataOwner {
    QnAsm100 qnaStateMachine;

    function setStateMachine(QnAsm100 _qnaStateMachine) public {
        qnaStateMachine = _qnaStateMachine;
    }

    function setAnswer(string memory answer) public {
        qnaStateMachine.setAnswer(answer);
    }

    function getQuestion() public returns (string memory question) {
        return qnaStateMachine.getQuestion();
    }
}

The DataRequester proxy contract is similar. It provides setQuestion and getAnswer functions. The test contract itself creates the proxy accounts and then the solution contract:

import "StringLib.sol"; // equal etc

contract TestQnAsm100 {
    using StringLib for string;

    ProxyDataOwner public proxyDataOwner = new ProxyDataOwner();
    ProxyDataRequester public proxyDataRequester = new ProxyDataRequester();
    QnAsm100 public qnaStateMachine =
        new QnAsm100(address(proxyDataOwner), address(proxyDataRequester));

    constructor() public {
        proxyDataOwner.setStateMachine(qnaStateMachine);
        proxyDataRequester.setStateMachine(qnaStateMachine);
    }

    function testQnA(string memory question, string memory answer) public {
        // send and check question
        proxyDataRequester.setQuestion(question);
        string memory actual = proxyDataOwner.getQuestion();
        require(question.equal(actual), "question not equal");

        // send and check answer
        proxyDataOwner.setAnswer(answer);
        actual = proxyDataRequester.getAnswer();
        require(answer.equal(actual), "answer not equal");
    }
}

The example above provides a public function testQnA which sends the given question via the proxyDataRequester to the solution contract. Then it recovers the question from the proxyDataOwner and confirms that it is valid. The same operation takes place in the other direction, with the given answer being supplied to the solution contract via the proxyDataOwner, then being extracted from the proxyDataRequester and checked. Other tests are recommended to ensure that the State Machine behaves as expected. 
Gas Consumption

The gas consumption to continually ask a 20-character question and immediately get a 20-character answer is approximately 85,000 gas per sequence (see: gas consumption for a 20-character string question and answer sequence). The gas consumption is greater the first time because space for the data string is allocated in storage; the answer simply updates the already allocated data string. The gas consumption also depends on the length of the string (see: gas consumption for 0, 32 and 64 character string question and answer sequences). A non-empty string consumes far more gas than an empty string, because the address of the string has to be allocated as well as the actual string characters.

Alternative Solution Using Events

If you elect for your solution contract to emit events (see Solidity Events) whenever it receives a question or answer, then the State Machine can be simplified. However, the events can only be handled by the front-end DApp code, not by another contract. This causes the testing strategy to require front-end testing. The solution contract will also require at least these modifications:

contract QnAsm100 ... {
    ...
    event Question(
        address indexed _from,
        address indexed _to,
        bytes32 _value
    );
    ...
    function setQuestion(string memory question) public
        onlyDataRequester onlyState(States.AwaitQuestion) {
        reportSet(question);
        emit Question(msg.sender, dataOwner(), bytes32of(question));
        state = States.AwaitAnswer;
    }
    ...
}

State Machine Issues

The biggest issue with State Machines is both their greatest asset and their greatest weakness, and is illustrated by this example. The workflow must be followed as designed; no deviation is possible. They are modal, by design. In this case, if the Data Requester wishes to ask a second question, they cannot proceed until they have the answer to the first question! 
This would be very clunky in practice and on the edge of unusable. There are various ways to solve that here, for example by deploying multiple contracts (very expensive) or by employing queues of questions and answers, but the real fault here lies in the fundamental design of this particular State Machine. A superior design in this case may simply be a bi-directional interactive mode where questions and answers can flow freely from one role to the other. Conclusions State Machines are an ideal way to control workflow in Solidity contracts. State Machines are easy to test and that is just as true in Solidity as in most other languages. Hierarchical State Machine design should be employed to ensure adequate control of, for instance, terminating the State Machine. If the contracts are used directly by people, the State Machines should be carefully designed to avoid modality. Solidity provides the useful function modifier feature which is ideal for clearly implementing checking the current state and other requirements before changing the state of the State Machine. The gas consumption of the State Machine is small compared with the cost of storing data. Maybe storing the data off-chain would be better… Bio Jules Goddard is Co-founder of Datona Labs, who provide smart contracts to protect your digital information from abuse. website — twitter — discord
https://medium.com/coinmonks/state-machines-in-solidity-9e2d8a6d7a11
['Jules Goddard']
2020-08-21 13:45:21.113000+00:00
['Blockchain Technology', 'Ethereum', 'State Machine', 'Solidity', 'Testing']
10 Best Big Data and Hadoop Tutorials, Books, and Courses to learn in 2021
by javinpaul · Sep 15

Hello guys, if you are looking to learn Big Data and Hadoop and want some excellent books, courses, and tutorials to start with, then you have come to the right place. In the past, I have shared some free Big Data and Hadoop courses as well as the best Big Data courses, and in this article, I am going to share some of the best resources to learn Big Data and Hadoop, including tutorials, books, and online courses. You can use these resources to learn both Big Data in general and Hadoop in particular, at a time and place convenient to you.

Best Big Data and Hadoop Books, Courses, and Tutorials

So, what are we waiting for? Let’s dive into the best books, courses, and tutorials to learn Big Data and Hadoop in depth. Here is the list of the best online resources to learn Big Data:

1. The Ultimate Hands-On Hadoop (udemy.com)

An excellent course to learn Hadoop online. It’s very comprehensive and covers Hadoop, MapReduce, HDFS, Spark, Pig, Hive, HBase, MongoDB, Cassandra, Flume, and more: over 25 technologies. This is one of the best Big Data and Hadoop courses on Udemy. Created by Frank Kane, this course really is the ultimate tutorial for learning Hadoop in depth. If you can pick just one course from this list, choose this one. It’s not free, but it is worth its price, and you can get it for just $10 in the Udemy sales that happen every now and then. Here is the link to join this course: The Ultimate Hands-On Hadoop
https://medium.com/javarevisited/10-best-big-data-and-hadoop-tutorials-books-and-courses-to-learn-in-2020-aaca8cfccb80
[]
2020-12-14 06:39:53.342000+00:00
['Big Data', 'Books', 'Hadoop', 'Tutorial', 'Data Science']
Top 10 Best Free WordPress Themes.
Getting Started with WordPress Themes: Many CMSs (content management systems) are available, and among them all, WordPress is the world’s largest. According to the latest statistics, 39.6% of websites are built using WordPress. The WordPress content management system offers nearly 3,500 GPL-licensed themes, spanning both free and premium options. In this article, we have picked the top 10 best free WordPress themes for 2021.

Top 10 Best Free WordPress Themes for 2021:

1. Blogger

Blogger is a very lightweight theme that gives your website a good chance of ranking well on Google. It is beautiful to look at, and the resulting website or blog is simple and easy to browse, which helps attract more views. Highlight Features: Compatible with Elementor and Gutenberg page builders. Fast and lightweight. Responsive design. Pre-designed header and footer. AMP ready. Multiple demos. Supports WooCommerce and blogs. Free theme.

2. Astra

Astra is an all-in-one free theme. It is fully optimized and useful for all types of blogs: personal, business, professional, and WooCommerce. The Astra theme has Google-optimized fonts and is one of the top 10 best free WordPress blog themes. It is very useful for online stores, such as WooCommerce websites, and works well with all types of page builders in WordPress. Highlight Features: WooCommerce ready. Responsive design. Compatible with all types of page builders. Speedy customizer. A large number of starter templates. Beautiful page layouts. Different headers and footers, including a transparent header. Dedicated sidebar. Loads in just 0.5s. 50kb in size.

3. OceanWP

OceanWP is very similar to the Astra theme. It is a lightweight, fast-loading free theme. 
Just like Astra, it supports all types of blogs: WooCommerce, personal, professional, and business. It is one of the top 10 best free WordPress blog themes. The OceanWP theme is compatible with all popular page builders, such as Elementor, Beaver Builder, Divi, Brizy, and Visual Composer, and it has a real-time translation facility. Highlight Features: AMP ready. E-commerce ready. Responsive layout. RTL language support. Fast and lightweight. Compatible with all types of page builders. Multipurpose theme. SEO friendly.

4. Neve

Neve delivers Google-like fast websites: its loading speed is about 1 second, giving it some of the best speed performance among free themes. Neve is also very lightweight, at 28kb for a WordPress install. Using Neve, we can create professional websites within minutes, customize the header and footer with drag and drop, and choose from flexible layout options. Highlight Features: Loading speed of 1 second. Lightweight, at 28kb. Drag-and-drop customization. Adjustable layout options. Lightning fast and quick to respond. AMP ready. Mobile friendly. 80+ starter sites available. Seamless integration with all the page builders.

5. Ashe

Ashe is one of the top 10 best free themes for WordPress. Ashe is mainly used for magazine-style blogs and is well suited to lifestyle blogs covering fitness, baking, travel, beauty, and so on. The theme is mobile friendly and highly responsive, and with Ashe we can design with ease.

6. Primer

Primer is one of the top clean free themes. This theme is designed by GoDaddy. It has a full-screen header image with an editable custom logo, a responsive layout, and RTL language support. Highlight Features: Customizable fonts. Customizable colors. Responsive layout. Multi-column layout. Social links menu. Available in 26 languages. 
- RTL language support.
- WooCommerce ready.
Primer Theme
7. Satori
Satori is a highly customizable free WordPress theme, most often used for personal and restaurant blogs. The theme is built to work with WooCommerce plugins.
Highlight Features:
- Full-width header on the home page.
- Well suited for restaurant blogs.
- WooCommerce ready.
- Multi-widget ready.
Satori Theme
8. Video Blog
The name itself indicates that this is a video-friendly free theme. It has a responsive design and a nice header image, and it is compatible with all page builders. It is a user-friendly free theme.
Highlight Features:
- Responsive layout.
- SEO optimized.
- Video friendly.
- Compatible with all page builders.
- Multipurpose use.
Video Blog Theme
9. Author
The name itself indicates that this theme is mainly useful for authors, publishers, and writers. It is compatible with the Gutenberg post editor and adapts to all types of layouts. It features a two-column layout, with content on the right side and navigation menus in the left sidebar.
Highlight Features:
- Accessibility.
- Responsive layout.
- Compatible with Gutenberg.
- Two-column layout.
- Good magazine support.
Author Theme
10. Foodica
Foodica is one of the top 10 best free themes for 2021. It is specially designed for restaurant blogs and comes with a comfortable, responsive, flat design, so you can grow your business with ease.
Highlight Features:
- WooCommerce integration.
- Custom widgets.
- Featured slider support.
- Responsive and Retina ready.
- Compatible with Gutenberg.
To the best of my knowledge, the ten above are the best free themes. Kindly share the article if you liked it, and follow my blog to read more articles. More Articles
https://medium.com/@rajeshgudipudi5/top-10-best-free-wordpress-themes-9ab85ee08d14
['Gudipudi Rajesh']
2021-06-08 10:11:13.278000+00:00
['Free Theme', 'Themes', 'Top 10', 'Wordpress Themes']
Is Christmas a Good Holiday?
Thoughts on an American tradition. For most of us born after the 1950s, our lives have been steeped in the consumerist ethic. It is nearly synonymous with American culture, and nearly impossible to escape. It can be summarized like this: Having things and being good are equivalent. Our lives are organized around consuming, possessing, and managing the things we acquire. From youth until death, many of us define ourselves by what we have acquired, what we want to acquire, and what we are ready to discard. Youths are surrounded by toys and equipment for games, mid-life workers by homes, furnishings, cars, and older folks by canes, jewelry, and the fruits of their lifelong labors. Benchmarks in our development are marked by ceremonies, like birthdays, where useful or entertaining things are handed to us by friends and family. Our holidays are defined by buying, giving, and receiving gifts, and offer a day or two of relief from work in order to enjoy those things. It is important to acknowledge that these cultural events emerged from more spiritual, non-materialistic practices, but those older reasons, like honoring deities, ritualizing adulthood, or preserving community, have been hijacked by corporate advertisement. It is now an affront if a birthday or select holiday passes without the exchange of gifts. Acts of kindness, or food, can suffice for acquaintances, but we reserve our most extravagant expenditures for those closest to us. Cost is the metric by which a gift is primarily evaluated, in money or time, whereas thoughtfulness is a more abstract estimation, relevant but secondary. The more we spend, the more we show our care. It is thoughtful to buy someone a basket of oranges, for the taste and nutrition, but it is considered truly special to buy someone a new car, even if the old one runs just fine. A gift can mean so much more than just what it is, certainly. It can signal love, affection, respect, and approval, and historically has served that purpose. 
An anthropologist might give a more complete account of the role gifts play in knitting together communities, mending mistrustful relations, or creating meaning in a human life. Gifts have been necessary goods, like shoes for bare feet, coats for cold backs, weapons for starving hunters, or little indulgences like candy and sweets. Ultimately, we give gifts because we care. It is important to note that giving gifts is not the only expression of care, though it is perhaps the most tangible. But when the act of giving becomes compulsory, and the requirements defined by commercials and advertisements, the underlying sentiment is diminished. How we give, what we give, and when are now functions of commercial media. And when the prime beneficiaries of the economic activity of buying and giving gifts are large corporations, the underlying sentiment can be perverted and exploited for profit. How many people have been successfully convinced by television commercials that “every kiss begins with Kay,” as though the truest expression of love is the gifting of blood diamonds? The magnitude of the multi-decade advertisement campaign is too large to account for, but its effects can be felt at every level of society. There may not be a better example at the intersection of advertisement-induced compulsory consumerism and human care than American Christmas. Even Christians would admit it is now less about Jesus Christ and more about the toys in Santa Claus’ sack. Americans care for each other, for the most part, and are convinced the best way to signal that is by unleashing their wallets in malls, department stores, or online. They believe this because they have been conditioned by media for half a century that spending money is, dollar for dollar, the best way to show care. Christmas has been established as the season to do this, a gluttonous celebration of giving and getting material things. 
Jewelry proves you care for your spouse, toys for your kids, trinkets and tchotchkes for your friends and coworkers, and what you receive is a measure of how much people care for you. Our personal relationships often depend on the quality of gifts. A person might become offended if all they receive from their partner is a ‘coupon’ for dinner and a back rub. Or if their gift is not fully appreciated by its receiver, however odd it might be. Grandma might not appreciate a dart gun, mom would scoff at a gift card for Arby’s, and brother might never play with the Lego set he got when he really wanted dance shoes. The best defense against giving an ill-fitting gift, we are convinced, is to spend more money. In order to spend more money, we need to have more money, and we express that most consequentially with our votes. The economy is perennially the most important concern for Americans, surpassing concerns about deficiencies in healthcare, education, and character. Imagine an American politician who runs on a platform of spending less, earning less, having less, conservation, managing our personal and national finances responsibly, acting with humility in our personal and private lives, taking the bus, riding a bike, donating to charity, eating more fruits and vegetables, eating less sugar and meat, practicing community farming, promoting the arts, and favoring diplomacy over resource-procuring wars abroad, and you will have imagined a loser. There is no appeal in that life when Americans have been conditioned by years of corporate advertisements to believe that a life well-lived is filled with things, and that filling other people’s lives with things is the best expression of care. When politicians cannot win election running on an anti-consumerist platform, then consumerism is the dominant ethic in society. Americans believe consumerism is the best way to organize society. They believe the ideal state for a human life is one of affluence and abundance of things. 
To reject that claim is to offend American sensibilities, because it is a rejection of the American Dream. The imperative of acquisition shapes our thoughts, aspirations, and actions. We plan our lives around acquiring, we define success as acquiring, we reward ourselves with acquisitions, and we vote out any person who stands in our way. American Christmas traditions exemplify this dominant ethic, best articulated by the Santa Claus myth. From our earliest years, we are conditioned to believe our good behavior will be rewarded by Santa Claus with toys. You better not pout, you better not cry, you better watch out. That chant establishes the imperative of acquisition that becomes the cornerstone of our childhood morality: If I am good, then I will get things, and it is good to get things. This becomes more sinister as we age and impart that rudimentary belief on our estimation of ourselves and others. That childhood ethic evolves into: If I have things, then I must be good. If they have more, then they must be better. The next rational step for anyone wanting to be better is to then try and get more things. The way we define “good, right, just” is by counting our assets and trinkets. This is not controversial. People who have more things are celebrated as exemplars of the American Dream, regardless of their personal conduct, idolized and given freedom from accountability. The ethic runs so deep in our culture and history, perpetuated by our communities, families, friends, and by the Santa Claus myth, that it is difficult to notice or appreciate how much it shapes our thoughts and actions. Our work is devoted to acquisition. Our relationships are often determined by the perceived quality of the gifts we exchange. Our politics is enthralled by consumerism so much that the contest between left and right, socialist and capitalist, is really only about whether or not material things should be distributed equally or unequally. 
As Americans, we are preoccupied with the questions, ‘what should I get,’ and ‘who should get what?’ We ignore the deeper questions about what kind of people we have become and what kind of people we want to be. The last ten years, and especially the last few months, have shattered the confidence I once had in our culture. Divided, bitter, materialistic, superficial, greedy — these traits have come to define me as much as my countrymen, and that is so discouraging. Our politics reflects our national character, and that should chill the spine of any patriot. I know each of us is more than that, but we are limited by the agenda set by the consumerist ethic and reinforced daily by media to benefit very wealthy people. Equating our self-worth with our possessions is so natural to us that our salt-of-the-earth virtues, like curiosity, open-mindedness, thriftiness, and modesty, give way to intransigence, vanity, envy, and dissatisfaction. Resentment tears our communities apart, while those most ravaged by years of commercial media conditioning turn to fascists, liars, and frauds to alleviate feelings of inadequacy and impoverishment. The real impoverishment is not of possessions, but of vision. It has disturbed me how much my attitude in the face of unemployment and financial stress has been shaped by the consumerist ethic. The vision I’ve conjured for my life is one of fancy cars, gadgets, and wealth; it looks an awful lot like a television commercial. The vision I know I should be forming is one of virtue, values, and meaningful projects. I want to be defined by what I do, but I can’t help but define myself by what I have, or by what I don’t, as though my priorities are designed by corporate marketing offices. So many people, myself included, tragically come to believe that they are bad people, undeserving of happiness and respect, because they do not have enough things. 
The consumerist ethic burdens a whole class of American citizens who judge themselves not by the content of their character, but by the content of their bank accounts. In times of scarcity, hunger, and economic uncertainty, that devastates morale and exacerbates feelings of alienation — poisons to the health of a nation. Disappointed in my own materialism, and by the entrenchment of the consumerist ethic in American life, I yearn for alternatives. There is no clear indication my country or peers value me for anything other than my economic productivity, and many clear indicators that I am only worth what material things I can produce, acquire, and gift. Commendation for achievements, enthusiasm from police and other services, educational opportunities, even whether or not I have access to healthcare, are all dependent on my perceived ability to generate wealth and acquire things. My values, virtues, and aptitudes are irrelevant. I know my family and friends still feel obligated to care, because of blood relation or history, but very few show me respect. I am unsuccessful in their minds, because I am poor, or curmudgeonly, or ambivalent about Christmas. That characterization might be true or false, I really can’t see their minds or motives. I am most likely projecting those suspicions on to them, externalizing the way I feel about myself — unsuccessful, undeserving, unrespectable. It is another sign I am captive to the consumerist ethic just as much as everyone else. And because of that, I am eager to change. To break away will take determination and commitment, and there are options for making a change this holiday season. We can: Teach our children that the aim of our morality should be justice, and that justice is not measured by material wealth (significantly alter the Santa Claus myth, or get rid of it). Being good and being surrounded by stuff are not equivalent. 
Reflect on what kind of people we want to be, what kind of society we want to have, and what kind of vision we craft for the future. Be honest about who we are. Understand there is a difference between ‘need’ and ‘want,’ and be aware of the potency of corporate advertisements to warp our priorities, dreams, and self-esteem. Only gift food, objects of necessity, handmade trinkets, or books. Nothing superfluous, nothing luxurious. Refrain from celebrating a holiday if we are not practicing members of its associated religion. Religious people, recommit to the spiritual and community significance of those holy days and reclaim them from corporate marketing departments. I am hoping for myself, and for my country, that we can bring any number of these into practice. I hope we can overthrow the tyranny of the consumerist ethic in our lives and politics, reforge our community bonds, and craft a nation committed to the cause of justice over the cause of wealth. I hope it’s not a fool’s hope, and I hope Santa isn’t so sacred as to be immune from scrutiny. Happy Holidays!
https://medium.com/@zackaryjohn/is-christmas-a-good-holiday-64a1b23a2577
['Zackary John']
2020-12-21 00:24:12.243000+00:00
['Consumerism', 'Capitalism', 'Politics', 'Christmas']
Nope, Nothing Strange Going on Here…
For the month of July I am doing a daily feature that highlights ongoing writing challenges, provides a handy Daily Tip for all Medium writers, provides opportunity for collaboration on the ILLUMINATION Publication, and extends a warm and heartfelt welcome to new writers. Welcome New Writers! In June I highlighted existing stories of new writers, but this month I am offering writers, both new and not-so-new, a chance to: Write your own new story! This feature has a list of writing prompts, and we want to see your writing. Pen a response to any of the prompts, tag me, Timothy Key, and also mention if you happen to be one of the new writers on the publication. I will include your stories in an upcoming version of the “Your July Challenge” piece. The onus is not only on you, however; I will be attempting to respond to one challenge or prompt for each day as well. I wouldn’t want to ask you to do something I wouldn’t be willing to do. Image by skeeze from Pixabay Regardless of how long you have been with the publication, be it a minute, or since day one, we have a whole team of editors and other writers that are ready to support you and promote your work. All any of us ask in return is that you do the same. Keep on reading to see how you can engage with others, but first: The Daily Tip! Did you know that you can add photos and tags to your responses to other people’s stories? Medium considers each response to be a story in itself and offers you all the features that are available in the regular story editor when you make comments. Adding a photo, a hyperlink, or an embedded video is a great way to bring some attention to your responses. Create your writer bio One of the best ways we can get to know you is for you to provide a writer’s bio. When you do, Dr Mehmet Yildiz will highlight it in his daily bulletin and it will be added to our collection of writer bios. 
Check out the rest here: Daily distribution list Another fantastic way to get engaged with other writers on the ILLUMINATION publication is by checking out Dr Mehmet Yildiz‘s daily distribution list. In it he includes all of the stories published in the last 24 hours. When you write something and publish on ILLUMINATION, your article will appear there too! Here is a link to the bulletin: Let’s Connect If you want to connect with me or any of the editors or writers, please consider joining the Illumination Slack workgroup. You can request an invitation to Slack by contacting Dr Mehmet Yildiz from this link. Please type “Request for Slack” in the heading, as Dr. Yildiz receives many other requests from this link. I am always willing to answer questions and provide information for anyone. The other editors and writers are as well. Slack is the best way to interact if you have questions. If you need some tips to get up and running on Slack, this article is a good starting point: Curation Stars The environment of encouragement and support can be substantiated just by looking at the many writers that are seeing their stories get traction in the way of being curated by Medium. Here is a great article giving tips and pointers, as well as links to curated stories of Illumination writers: Today’s Challenge and Response Today I write in response to an idea generated by Infiniti who suggested this great prompt: Something Weird Happened… And here is my response: Upcoming Challenges In the coming days I will be tackling each of these challenges, and I am listing them here to help you get your writing pump primed as well: Desiree Driesenaar has another one, which I think originated with Holly Jahangiri. It is to create an Abecedarian poem. I think the topic is open, although Desiree chose to use life perspective. I can’t resist mentioning that Desiree Driesenaar called me “Rasheed” when she posted this link as a comment on my daily feature. I will take that as a compliment… of sorts. ;-) B. A. 
Cumberlidge. has suggested two new prompts. The first asks new writers to make some observations on what could be better about Illumination. The idea is to make positive suggestions rather than poke holes. The next is a weighty one, which asks us: when can we start trying to solve the issues that matter? Dr Mehmet Yildiz has a new challenge today: Create at least one testimonial for another writer whom you admire. He offers this one as an example: And in a mildly egocentric move, here is one that B. A. Cumberlidge. wrote about me. Some of it is even true! Holly Jahangiri tagged me in a poem chain that originated here with Martin Rushton: And Holly’s response: Rasheed Hooda challenges us to describe ourselves in an acronym of the first letters of our name: And I am actively looking for new challenges! Rear view mirror Here is a list of the challenges I have completed: Week 1 In the first week of July I responded to challenges issued by Dr Mehmet Yildiz, Tree Langdon ♾️, B. A. Cumberlidge., Holly Jahangiri, Rasheed Hooda (a couple of them had two challenges to total seven). Read all of them in a Your July Challenge compilation article! Week 2 — This Week Desiree Driesenaar, who wonders what verb you would be? Sherry McGuinn tells us about the hole in her pants, and asks us if we are feeling the same way?
https://medium.com/illumination/nope-nothing-strange-going-on-here-23a358842623
['Timothy Key']
2020-07-10 12:11:01.250000+00:00
['Inspiration', 'Mental Health', 'Leadership', 'Writing', 'Reading']
Beginners Guide to Functional Programming With Scala
Below is a compilation of a few distinctions and advantages of Scala. To help highlight the fundamental differences, there are code snippets of the featured Scala code and its equivalent Java code (where applicable). Functional programming is extremely concise: implementations that take several lines of code in imperative languages can often be written in a single line in functional languages such as Scala. The following two examples show the difference between Java and Scala in object creation. Java is an imperative language where all programming logic is explicitly written line by line. Scala, on the other hand, is more declarative: many implementation details are hidden, and the programmer simply states conditions on the outcomes, which higher-order functions then accomplish. For example, reading an input file in Scala can be implemented in a single line because of this declarative nature, while it takes several lines in Java. Digging deeper into higher-order functions, a very powerful one that Scala supports is the map function. Map transforms one collection into another by applying a given function or condition to every element. The example below uses a user-defined function, but you can also use an anonymous function, which will be shown soon. The equivalent Java code requires iteration with a typical loop structure. If you’ve worked with Java, you’ve likely faced thread-safety issues, but Scala is designed to reduce these risks. Scala and Java both support immutability and mutability; however, Scala favors immutable objects by default, which keeps a whole class of potential bugs from creeping into the program. The following Scala code doubles list1 and stores the new collection in another variable. The map call uses an anonymous function this time, defined by the short notation ‘_’, which nicely cuts down a few more lines of code. 
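The code snippets the article refers to are not reproduced in this text, so here is a minimal, self-contained Scala sketch of the same ideas. The names (`Person`, `readAll`, `double`) and the file path are illustrative choices of mine, not from the original article:

```scala
// Object creation: a case class gives you a constructor, accessors,
// equals, hashCode, and toString in a single line. The equivalent
// Java class would need dozens of lines of boilerplate.
case class Person(name: String, age: Int)

// Declarative file reading: one expression, no explicit loop.
// (Wrapped in a def so nothing runs until you call it with a real path.)
def readAll(path: String): List[String] =
  scala.io.Source.fromFile(path).getLines().toList

// map with a user-defined function: the whole collection is
// transformed without writing the iteration yourself.
def double(n: Int): Int = n * 2
val numbers = List(1, 2, 3)
val doubled = numbers.map(double) // List(2, 4, 6)
```

In Java, the `doubled` computation would typically be a `for` loop that appends to a new `ArrayList`; in Scala the loop is hidden inside `map`.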
The equivalent Java code directly modifies the existing list because of mutability. Unlike Java, Scala supports a feature called lazy evaluation, where the computation of an expression is delayed until it is accessed for the first time. This feature makes Scala very efficient because intensive tasks don’t have to be executed right away; they are deferred until they are needed. In this example, the variable bigtask is declared on the first line but not yet computed. Only when bigtask is accessed at the end is it actually computed and its value returned.
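Since the snippets referenced by the article are not included here, the following sketch illustrates both points; the names mirror the article's `list1` and `bigtask`, while the values are illustrative:

```scala
// Immutable transformation with an anonymous function: '_' stands
// for the current element. list1 itself is never modified.
val list1 = List(1, 2, 3)
val list2 = list1.map(_ * 2) // List(2, 4, 6); list1 is unchanged

// Lazy evaluation: the right-hand side is NOT computed at this point.
lazy val bigtask: Int = {
  println("computing...") // printed only on the first access
  (1 to 1000).sum
}

// Only this first access triggers the computation; the result is
// then cached for any later accesses of bigtask.
println(bigtask)
```

A plain `val` would run the block immediately at declaration; `lazy val` defers it and memoizes the result, which is what makes it safe to declare expensive work up front.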
https://medium.com/@nehasudini/beginners-guide-to-functional-programming-with-scala-554e04caa7da
['Neha Sudini']
2020-11-11 01:32:47.682000+00:00
['Programming', 'Scala', 'Development', 'Functional Programming', 'Scala Programming']
Is Reincarnation Actually Real?
Is Reincarnation Actually Real? By Deepak Chopra, MD Do you believe in reincarnation, and if so, does it matter? According to a 2018 Pew Research survey, 33% of Americans say they believe in reincarnation, yet it is beyond the range of ordinary polling to ask why this belief exists. In an age of faith, both East and West, a person’s daily life was deeply influenced by a religion’s teaching about the afterlife. Questions of sin and redemption, karmic retribution, heavens and hells, and journeys through other bodies such as those of animals — these were pressing concerns for many centuries. Now in modern secular society, the question of surviving the extinction of the physical body has been channeled into belief versus science. We don’t ask if God finds us worthy to go to heaven so much as how credible a near-death experience might be according to the best research. The scheme of belief versus science is something of a false divide, however. There has been credible research on reincarnation, which would surprise most people, including scientists. Pioneering studies were conducted by Ian Stevenson, chairman of the psychiatry department at the University of Virginia Medical School, who began investigating the phenomenon of young children who say they recall a past life. Hundreds of such cases were looked into with the aim of verifying whether the person they remembered being actually existed. Stevenson traveled the world closely examining children’s memories and matching them to specific individuals, and not only were many validated, but some children even bore physical signs of injuries sustained when their previous incarnation died. After Stevenson’s death in 2007, the research was continued by another U. of V. psychiatrist, Jim Tucker, who presents some fascinating statistics in two books. In an online article that summarizes some of the more startling numbers:
· Around 20% of young children claim to have memories of the time between death and birth.
· 60% of children who claim to remember past lives are male.
· Roughly 70% of such children remember an unnatural or violent death.
· The average time spent between lifetimes is 16 months.
· Such reports occur in general in children between the ages of 2 and 6, after which the phenomenon of remembering a past life wanes.
There has been no serious questioning of the validity of this research, and Tucker explains reincarnation in terms of natural phenomena. “Quantum physics indicates that our physical world may grow out of our consciousness. That’s a view held not just by me, but by a number of physicists as well.” Without a doubt there’s a need in contemporary physics to account for consciousness in the universe. No physical explanation has been satisfactory in the past. People casually assume that as life evolved and became more complex, the primitive brains of lower species evolved into the massive brain of Homo sapiens. The physical evidence for that is unassailable. Yet no one has described why and how any brain is related to the mind. Brain cells do nothing so different from any other cell that their activity should produce a three-dimensional world complete with sights and sounds from an organ the texture of cold oatmeal that is totally dark and silent inside. To overcome this huge gap in our understanding of reality, two trends have cropped up in physics — one is panpsychism, the notion that the universe contains traits of mind or proto-mind the way it contains matter and energy, the other the notion that information is at the root of mind, again with the assumption that the cosmos had this property from the very outset 13.8 billion years ago. Panpsychism and information theory are fashionable, but no one knows if they are valid explanations of mind or Band-Aids applied to keep physics patched together. Without settling the unknowable future, one thing is clear. 
After decades of stubbornly insisting that only physical data are needed to explain everything about creation, some scientists are assigning validity to human perceptions — this is where the trail to reincarnation begins. I’m not referring to a full-blown leap into the arms of life after death. Instead, words like harmony, beauty, balance, and orderliness are acceptable in describing mathematics. Since mathematics is the fundamental language of physics, applying human terms, and subjective terms at that, to numbers is a radical step (despite the fact that mathematicians have spoken personally about the beauty of numbers for centuries). A similar shift can be observed in evolution, where Darwin’s theory resulted over time in making evolutionary studies a matter of data and statistical distributions. The rigor of modern Darwinism may be a fig leaf to cover the obvious flaw in evolutionary studies — namely that no experiments on evolution can be conducted, since evolution either took place long ago or is proceeding now at a creepingly slow rate. Suddenly in recent decades so-called “soft” inheritance has broken the lockstep of rigid Darwinism. “Soft” inheritance holds that genes do not have to mutate to create evolutionary traits, as “hard” inheritance insists upon — after all, living things are born with a complement of genes that are fixed for life. Thanks to a new field called epigenetics, it has become evident that a creature’s life experiences can actually be passed on to future generations via genetic markers that influence how DNA is triggered and regulated. Instead of an on-off switch, DNA operates more like a rheostat. Epigenetics may explain as much or more about the rise of species as the discovery of DNA itself. I’ve skimmed through radical shifts in scientific thought to arrive at the real significance of reincarnation. What Nature presents, from the level of subatomic particles to the level of DNA, is an endless recycling. 
Just as physics tells us matter and energy cannot be destroyed, only transformed, the same is thought to apply to information and, going a step further, to consciousness. Everything in Nature is about endless transformation, and in the cosmic recycling bin, ingredients are not simply jumbled and rejumbled like balls in a Bingo cage. Instead, as viewed in human perception, Nature exhibits evolution through three linked processes: memory, creativity, and imagination. Memory keeps the past intact, allowing older forms to contribute to new ones. Creativity allows for novelty so that recycling isn’t mere repetition of the same forms over and over. Imagination allows for invisible possibilities to take shape, either in the mind or the physical world. If everything in Nature is recycling under the influence of memory, creativity, and imagination, it seems very likely that human consciousness participates in the same recycling. Or to put it another way, if human consciousness doesn’t recycle/reincarnate, we’d be outside a process that includes everything else in the universe but us. Is that really probable? The argument for the probability of reincarnation, added to the research on children’s memories of past lives, is very persuasive, so the future of reincarnation looks bright. No one can credibly call it a mere belief or superstition or a holdover from the age of faith. But a probability is weaker than a certainty, and no one should plan on their next incarnation without a stronger argument, perhaps strong enough to approach certainty. That’s the enticing possibility we’ll discuss in the next post. (to be cont.) Deepak Chopra MD, FACP, founder of The Chopra Foundation and co-founder of The Chopra Center for Wellbeing, is a world-renowned pioneer in integrative medicine and personal transformation, and is Board Certified in Internal Medicine, Endocrinology and Metabolism. 
He is a Fellow of the American College of Physicians and a member of the American Association of Clinical Endocrinologists. Chopra is the author of more than 85 books translated into over 43 languages, including numerous New York Times bestsellers. His latest books are The Healing Self, co-authored with Rudy Tanzi, Ph.D., and Quantum Healing (Revised and Updated): Exploring the Frontiers of Mind/Body Medicine. Chopra hosts two new podcasts, Infinite Potential and Daily Breath, available on iTunes or Spotify. www.deepakchopra.com
https://deepakchopra.medium.com/is-reincarnation-actually-real-3c0521376a41
['Deepak Chopra']
2019-07-15 14:53:01.502000+00:00
['Reincarnation', 'Consciousness', 'Epigenetics', 'Limiting Beliefs']
2020 Is the Year I Turn My Side Hustle into a Business [Plus 4 Must-Have Tools for Entrepreneurs]
2020 Is the Year I Turn My Side Hustle into a Business [Plus 4 Must-Have Tools for Entrepreneurs] Emily Bezak Feb 21, 2020·6 min read Photo by Danielle MacInnes on Unsplash Originally published on emilybezak.com Disclosure: Some of the links in this blog contain affiliate links. This means, if you click the link and decide to make a purchase, I will earn a commission at no additional cost to you. I recommend these products and services because I use them for my own business. I love the month of January. Every year, I look forward to buying a crisp, new planner, decluttering my Google Drive folders, and setting goals for the next twelve months. This year, one of my new year’s resolutions is to start a blog. After all, I am a writer and my favorite thing to write is a blog post. So this should be easy, right? I’ve spent the first four weeks of 2020 agonizing over what to write for my first blog post. Of course, I can’t possibly write a blog without an offer for my readers or some kind of call-to-action (CTA). Oh, and I should definitely plan an affiliate linking strategy to start monetizing this domain before I even put a single post in draft. And, I should also definitely wait until my new website launches and I have a real brand. Maybe I need to hire my own freelance writer? UGH. As a writer and marketer, I am the absolute worst person to market my business and write my blog. It keeps falling off my to-do list month after blog-less month. When I mentioned this problem to my uncle over Christmas, he said I’m like a plumber with clogged pipes. Gross, but true. So here I am, finally putting a draft together. Let’s talk about what I’ve done in the past few weeks to turn my side hustle as a freelance writer into a real, live business and the powerful tools I’ve added to my daily life to get there. I named my LLC. 
Tool #1: ZenBusiness, an online service to form my business With tax season in full swing in a few days, I should be getting my finalized paperwork for setting up my LLC from the State of Pennsylvania. I used ZenBusiness to file my LLC. Not only was it easy and quick, but the information that was provided throughout the process also helped me make informed decisions confidently. Unsurprisingly, it took me a while to settle on a name. I even took to Twitter to ask for advice, but got only a few suggestions. The more I researched the topic, the clearer it became that I should keep it simple: I named my business Emily Bezak, LLC. There are a few good reasons for this boring business name: It was hard enough to change my name when I got married in 2016, so my name is unlikely to change in the near future. Using my personal name removes the complexity of tying my name back to the brand and vice-versa in marketing efforts. And, although I love to write, I might find my business focus shift over time. I don’t want to nail myself down in the early stages and have to needlessly form multiple LLCs. I hired my first employee. Tool #2: Trello, a free, online tool to manage tasks and projects This task was even harder than naming my business. I’ve been a freelance writer and marketing consultant for businesses small and large since I was in college, but I really started seriously building my side hustle over the past two years. And with each passing month, I’ve added more and more assignments, clients — and ultimately, tasks — to my growing to-do lists outside of my full-time job. Emails, coordinating conference calls, proposals, version controlling edits, meeting notes, project and campaign trackers — oh, and of course writing and doing the actual work. On top of that, I’m on the Point Park University Alumni Board of Directors and manage the social media accounts for TEDxPittsburgh. Whew! We’re all busy, I know.
But over the past few months, I’ve felt like I could only tread water. I had to decide if I wanted to give up and take a break or reevaluate how I conduct business to make sure I could still see my family, sleep, and function on a daily basis. I chose the latter. After reading multiple articles — and following The Gig Mindset weekly newsletter by Paul Estes — I finally pulled the trigger and reached out to my alma mater to recruit a virtual marketing assistant to help me start filling the holes in the Swiss cheese that was my to-do list to take my business to the next level. Here are a few of the tasks we’ve worked on together so far: Research for a few future blog posts for my website Comparing podcast programs for one of my most trendsetting clients Creating content to start collecting testimonials for rebranding my website There are many projects and tasks to do each week. And I use Trello, a free project management system, to assign tasks to my team with a due date, task owner, and notes. I have a Board for each client and our own marketing and training, and then I break down each project into a Card with individual tasks. It feels so good moving tasks to the Done Card! I got organized. Tool #3: GSuite, the business version of Gmail that includes a custom domain One quiet afternoon in December, my personal inbox went from a handful of emails flagged for a follow-up to over 100 unread messages in a matter of hours. Of course, these emails weren’t all business-related. But the daily sifting and organizing of emails into various folders was not helping me feel more organized. I was constantly refreshing, deleting, filing, and flagging for follow-ups. I wasn’t getting anywhere. I use Squarespace for my website, and I was able to add GSuite to my account for only a few bucks a month. My email address now has my professional domain, and my business emails are out of my personal inbox. (Want to chat? Shoot me a message in my new inbox at [email protected].)
Bonus: My assistant can log in to my business email without seeing my Amazon Prime orders! I installed a backup editor. Tool #4: Grammarly, a Chrome plugin that checks grammar and rates tone Have you ever sent an email draft to your work BFF to double-check how you’re coming across to the recipient? Or play-acted an important meeting to negotiate your workload or salary? Well, keep doing those things. To be clear, nothing I say here means you should stop. Your work BFF understands context, timing, and office politics. Plus, work BFFs are the best. There’s also a lot to be said for writing an email in the heat of the moment, saving it as a draft, and walking away. Speaking of work friends, one of mine pointed me in the direction of the Grammarly Chrome extension that checks spelling and grammar and rates the tone of your writing. For the record, this blog has been rated as formal, confident, and informative. It has also rated some of the sections of this blog as gloomy and disapproving. At first, I was trying to write away from that tone. But as I developed the story, I realized it’s 100-percent accurate — particularly the parts I wrote about how I haven’t been marketing my business or blogging for myself. So disappointing. For emails, blog posts, and social media comments, this tool has been a lifesaver. Final Thoughts I plan to post regularly about my business, writing, and marketing best practices from myself and a few guest contributors a few times per month. There will also be articles featuring my two English Bulldogs, current events, or living in Pittsburgh! If there are any ideas or topics you’d like covered, please comment below. I look forward to learning and growing with you. Let’s do this!
https://medium.com/@emilybezak/2020-is-the-year-i-turn-my-side-hustle-into-a-business-plus-4-must-have-tools-for-entrepreneurs-587c3d8dc1ae
['Emily Bezak']
2020-02-21 18:53:30.058000+00:00
['Writer', 'Business Tools', 'Small Business Marketing', 'Side Hustle', 'Women In Business']
Applying Netflix DevOps Patterns to Windows
Baking Windows with Packer By Justin Phelps and Manuel Correa Customizing Windows images at Netflix was a manual, error-prone, and time-consuming process. In this blog post, we describe how we improved the methodology, which technologies we leveraged, and how this has improved service deployment and consistency. Artisan Crafted Images In the Netflix full cycle DevOps culture, the team responsible for building a service is also responsible for deploying, testing, infrastructure, and operation of that service. A key responsibility of Netflix engineers is identifying gaps and pain points in the development and operation of services. Though the majority of our services run on Linux Amazon Machine Images (AMIs), there are still many services critical to the Netflix Playback Experience running on Windows Elastic Compute Cloud (EC2) instances at scale. We looked at our process for creating a Windows AMI and discovered it was error-prone and full of toil. First, an engineer would launch an EC2 instance and wait for the instance to come online. Once the instance was available, the engineer would use a remote administration tool like RDP to log in to the instance to install software and customize settings. This image was then saved as an AMI and used in an Auto Scaling group to deploy a cluster of instances. Because this process was time-consuming and painful, our Windows instances were usually missing the latest security updates from Microsoft. Last year, we decided to improve the AMI baking process. The challenges with service management included: stale documentation, OS updates, high cognitive overhead, and a lack of continuous testing. Scaling Image Creation Our existing AMI baking tool Aminator does not support Windows, so we had to leverage other tools.
We had several goals in mind when trying to improve the baking methodology: configuration as code, leveraging Spinnaker for continuous delivery, and eliminating toil. Configuration as Code The first part of our new Windows baking solution is Packer. Packer allows you to describe your image customization process as a JSON file. We make use of the amazon-ebs Packer builder to launch an EC2 instance. Once online, Packer uses WinRM to copy files and run PowerShell scripts against the instance. If all of the configuration steps are successful, then Packer saves a new AMI. The configuration file, referenced scripts, and artifact dependency definitions all live in an internal git repository. We now have the software and instance configuration as code. This means changes can be tracked and reviewed like any other code change. Packer requires specific information for your baking environment and extensive AWS IAM permissions. In order to simplify the use of Packer for our software developers, we bundled Netflix-specific AWS environment information and helper scripts. Initially, we did this with a git repository and Packer variable files. There was also a special EC2 instance where Packer was executed via Jenkins jobs. This setup was better than manually baking images, but we still had some ergonomic challenges. For example, it became cumbersome to ensure users of Packer received updates. The last piece of the puzzle was finding a way to package our software for installation on Windows. This would allow for reuse of helper scripts and infrastructure tools without requiring every user to copy that solution into their Packer scripts. Ideally, this would work similarly to how applications are packaged in the Aminator process. We solved this by leveraging Chocolatey, the package manager for Windows. Chocolatey packages are created and then stored in an internal artifact repository. This repository is added as a source for the choco install command.
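A Packer template of the kind described here might look something like the following minimal sketch. The region, instance type, AMI filter, script paths, package name, and internal Chocolatey source URL are all illustrative placeholders, not Netflix's actual configuration:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "instance_type": "m5.large",
      "source_ami_filter": {
        "filters": { "name": "Windows_Server-2019-English-Full-Base-*" },
        "owners": ["amazon"],
        "most_recent": true
      },
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "user_data_file": "scripts/enable-winrm.ps1",
      "ami_name": "example-windows-bake-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "powershell",
      "scripts": ["scripts/customize-os.ps1"]
    },
    {
      "type": "powershell",
      "inline": [
        "choco install example-service -y --source https://artifacts.example.net/chocolatey"
      ]
    }
  ]
}
```

Checking a template like this (together with the referenced PowerShell scripts) into a git repository is what turns the image customization into reviewable code: the amazon-ebs builder launches the instance, the powershell provisioners run over WinRM, and a successful run is saved as a new AMI.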
This means we can create and reuse packages that help integrate Windows into the Netflix ecosystem. Leverage Spinnaker for Continuous Delivery To make the baking process more robust, we decided to create a Docker image that contains Packer, our environment configuration, and helper scripts. Downstream users create their own Docker images based on this base image. This means we can update the base image with new environment information and helper scripts, and users get these updates automatically. The base Dockerfile thus allows updates of Packer, helper scripts, and environment configuration to propagate through the entire Windows baking process. With their new Docker image, users launch their Packer baking jobs using Titus, our container management system. The Titus job produces a property file as part of a Spinnaker pipeline. The resulting property file contains the AMI ID and is consumed by later pipeline stages for deployment. Running the bake in Titus removed the single EC2 instance limitation, allowing for parallel execution of the jobs. Now each change in the infrastructure is tested, canaried, and deployed like any other code change. This process is automated via a Spinnaker pipeline: Example Spinnaker pipeline showing the bake, canary, and deployment stages. In the canary stage, Kayenta is used to compare metrics between a baseline (current AMI) and the canary (new AMI). The canary stage will determine a score based on metrics such as CPU, threads, latency, and GC pauses. If this score is within a healthy threshold, the AMI is deployed to each environment. Running a canary for each change and testing the AMI in production allows us to capture insights into the impact of Windows updates, script changes, web server configuration tuning, and more. Eliminate Toil Automating these tedious operational tasks allows teams to move faster. Our engineers no longer have to manually update Windows, Java, Tomcat, IIS, and other services.
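The base-image pattern described above can be sketched as a Dockerfile. Everything concrete below — the base OS, the pinned Packer version, the directory layout, and the registry name in the usage note — is an assumption for illustration, not the actual Netflix build:

```dockerfile
# Hypothetical base image: bundles Packer with org-wide environment
# configuration and helper scripts so downstream teams inherit updates
# automatically when the base image is rebuilt.
FROM amazonlinux:2

# Install a pinned Packer release (version is a placeholder).
RUN yum install -y unzip curl && \
    curl -fsSL -o /tmp/packer.zip \
      https://releases.hashicorp.com/packer/1.7.10/packer_1.7.10_linux_amd64.zip && \
    unzip /tmp/packer.zip -d /usr/local/bin && \
    rm /tmp/packer.zip

# Bake in shared environment definitions and helper scripts.
COPY environments/ /bake/environments/
COPY scripts/ /bake/scripts/

WORKDIR /bake
ENTRYPOINT ["packer"]
```

A downstream team would then build its own image `FROM internal-registry.example.net/windows-bake-base:latest` (a hypothetical registry path), add its Packer template on top, and run the container as a Titus job; rebuilding against the latest base picks up new environment information and helper scripts with no action from the team.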
We can easily test server tuning changes, software upgrades, and other modifications to the runtime environment. Every code and infrastructure change goes through the same testing and deployment pipeline. Reaping the Benefits Changes that used to require hours of manual work are now easy to modify, test, and deploy. Other teams can quickly deploy secure and reproducible instances in an automated fashion. Services are more reliable, testable, and documented. Changes to the infrastructure are now reviewed like any other code change. This removes unnecessary cognitive load and documents tribal knowledge. Removing toil has allowed the team to focus on other features and bug fixes. All of these benefits reduce the risk of a customer-affecting outage. Adopting the Immutable Server pattern for Windows using Packer and Chocolatey has paid big dividends.
https://netflixtechblog.com/applying-netflix-devops-patterns-to-windows-2a57f2dbbf79
['Netflix Technology Blog']
2019-08-22 17:01:01.284000+00:00
['Windows', 'Docker', 'AWS', 'DevOps']
HackYourFuture’s Positive Integration Stories: Maya and Sacha
The focus on immigration is politicised to such an extent that the debate is out of proportion — sometimes it is hard to grasp what the discussion is all about. At HackYourFuture, we believe that immigration is neither positive nor negative in itself, but dependent on what society does to make it work. Immigration policy affects not only the lives of refugees in Denmark, but also those of foreign researchers working here, families who have been living here for many years, and people who cannot live here with their partner. On the occasion of the general election, we would like to share a positive integration story about some of the people who, despite the tight foreign immigration policy, create a life for themselves in Denmark and benefit society. Maya and Sacha Maya and Sacha are from Syria and have been living in Denmark for a few years. They are both in their twenties and have very different educational backgrounds when we meet them at HackYourFuture: Maya is still in high school and wants to become a doctor, and Sacha already has a Bachelor’s degree in IT from Syria. She is determined to study Robotics Technology for her Master’s degree. Maya and Sacha are ambitious and focused on their future careers and both have become skilful programmers and found relevant student jobs, even before completing HackYourFuture’s educational programme. But there is one thing that makes their situations very different from each other: the type of temporary protection status they have been granted. This has a big influence on their access to the Danish educational system. Different kinds of asylum and protection Maya has been granted §7.1 Convention Status, which gives her a 2-year temporary residence permit in preparation for permanent residence. This allows her to study at university on equal terms with Danish students.
Sacha has been granted §7.3 General Temporary Protection Status, which gives her only a one-year temporary residence permit with no preparation for permanent residence and without access to education. Sacha is worried — she is determined to continue her studies in Denmark, but she doesn’t know how to get access to the Danish educational system. The §7.3 General Temporary Protection Status targets Syrians who don’t have an individual asylum motive, but have fled their country of origin because of the general situation. This type of asylum is granted to many Syrian women since their cause for fleeing is not military service, as it is for many Syrian men. Indirectly, this discriminates against Syrian women, as they are not allowed access to education in Denmark on equal terms with Syrian men. Access to education always makes sense As mentioned above, Sacha was determined to study for an MA in Robotics Technology, and luckily she received a scholarship to The University of Southern Denmark. When Maya completes the 1-year educational programme GIF (a high school induction programme for refugees and immigrants), she realises that she wants to continue with programming and decides to study for a BA in Software Technology at The Technical University of Denmark instead of becoming a doctor. But Sacha and Maya’s access to education in Denmark is limited by the new Finance Act. In short, the new restrictions mean that the focus has shifted from integration to return — refugees and immigrants are not meant to be integrated, but sent back. This means that not all refugees have access to education in Denmark, depending on the type of temporary protection status that has been granted. Maya and Sacha are young, talented women who have learned Danish, built a strong professional and social network, and found well-paid student jobs, all in a short amount of time.
The fact that they might not be able to study in Denmark while living here is incomprehensible — and the fact that the IT industry is hungry for more women makes it even more counter-productive. Whether the immigration policy focuses on integration or return, access to free education always makes sense. This is just one example of how people shape their lives through hard work, determination and integrity. We focus on the positive side of integration, but sadly, we experience just as many stories where the Danish immigration policy prevents immigrants and refugees from sharing their skills and contributing to Danish society. All names have been changed for privacy reasons.
https://medium.com/@hackyourfuturecopenhagen/hackyourfutures-positive-integration-stories-e8f2fae78586
['Hackyourfuture Copenhagen']
2019-06-17 13:14:43.591000+00:00
['Denmark', 'Copenhagen', 'Refugees', 'Self Reliance', 'Integration']
What Is Nothing?
What Is Nothing? Gravity, science says, is a force; energy responsible for our ability to touch, walk, and fall with our bodies on the earth. Up there on the moon, the ball with a dark-sided mystery yet to be uncovered; right there in the space that holds the stars, sun, moon, and our beloved earth, gravity takes on a new face as it decreases with distance and, in the absence of an additional force, causes masses to float in nothingness. But without nothing, would we be something? And if nothing can be nothing to us, then isn’t nothing also something? We’ve all heard the stories; humans with near-death experiences attesting to the existence of the AFTER-LIFE. They tell of out-of-body experiences and encounters with bright lights. Still, one must ask, are their experiences truly something or nothing given life by something? What could give life to something in this form of nothingness? Christopher Timmermann, in 2018, explored Near-Death Experience and its similarities with the psychedelic substance, N,N-Dimethyltryptamine or DMT. The potency of fantasia, as DMT is commonly called, surpasses that of the common LSD and magic mushrooms, as its intense effect kicks in early, triggering a temporary altered state of consciousness. ‘Near-death experiences (NDEs)…’ Christopher and his team of eight medics write, ‘are complex subjective experiences, which have been previously associated with the psychedelic experience and more specifically with the experience induced by the potent serotonergic, DMT. Potential similarities between both subjective states have been noted previously, including the subjective feeling of transcending one’s body and entering an alternative realm, perceiving and communicating with sentient ‘entities’ and themes related to death and dying.
Result: we found significant relationships between the NDE scores and DMT-induced ego-dissolution and mystical-type experiences.’ Following scientific speculations that the spirit molecule, DMT, is released from the pineal gland during birth and death, one, then, must wonder whether something indeed exists in the nothingness of death. And so if the mind is being influenced by this compound, then they have indeed experienced nothing but imaginations of a subconscious design. A puzzling question then follows: can our unconscious subconscious be desirous of something when we are in nothing? Then there are those who give vivid accounts of events around their environment when they live as observers in spirit forms. Those who tell the doctors exactly what they wore, and did, and said as they performed surgery on them while under general anaesthesia. Is that, too, nothing? When we say “it’s nothing” to questions about our stressors, we hide nothing in something, giving it an unspoken, yet understood identity. At night, the clouds turn into nothingness. Still, this nothingness is occupied by air; and starry reflections; and human activities in the sky. When, at night, we ferry into the unknown corners of sleep; where neurons light the brain that paralyzes our body; a state of deep sleep where, upon wakefulness, dreams go forgotten. Where do we go? Into nothingness? Or is the mere act of being asleep just something for nothing to exist, thus giving meaning to nothing? When Socrates said ‘knowing that I know nothing’, we believe that he meant it to be a saying that grooms our intellectual abilities; a saying that drives us to strip all confidence in our knowledge and open up space for learning from the world. The ultimate act of curiosity. But what does it mean to know nothing? Isn’t it just a desire to know something? Everything maybe. Because, of course, even in curiosity, we don’t deny that we are empty heads; that we hold no knowledge.
Isn’t it so that as we search curiously for something, by stripping away pride and picking the eyes of children, we search, not with nothing, but, indeed, with something? An intention. If our perception of the world is shaped by the things we are exposed to, then it means we carve something out of nothing. - Nothing is the formation of all things. When we fear, we live in nothing; the dark empty space where our minds are dragged by unseen weights. When we fail, we are gripped by nothing; the emptiness of regret and contemplation from fear which is of itself nothing. When we laugh and cry, we fill the nonexistence of those emotions with something that births them. When we say we are hopeful and choose to work with faith, we fill the absence (which indeed is nothing) with a picture of our desire. The past is something but the future is nothing. We consult the past to predict the future, one that will become nothing to us when we cease to become something. And this brings me to death, our ultimate nothingness. Dead humans are yet to emerge from the grave to confirm the existence of something after death. Our current ideas of what is are tangled speculations romanced by our individual beliefs in the aftermath of earthly existence. When we say ‘rest in peace’ we merely want to believe that something indeed exists in this nothingness that we all meet in the end. Well, maybe something does exist in this nothingness. We, after all, purportedly lose up to 21 grams once biologically dead. So, maybe the grams lost are the passage of our spirits and if the spirit exists after death, perhaps something indeed exists, but in nothingness.
https://medium.com/@etashelinto/what-is-nothing-d0e270566938
['Etashe Linto']
2020-12-17 07:40:08.905000+00:00
['Universe', 'NASA', 'Dreams', 'Sleeping', 'Strange Note']
It only takes 1 moment of clarity
to reinvent yourself and your reality Hello, it’s me, the resilience guy, talking to you from his penthouse at the crossing of the two main streets of Rotterdam, the biggest harbor in today's world. Listening to the countdown of the TOP2000 of all time on digital radio while my caretaker bakes an apple pie. It smells like home. The smell of apple pie puts real estate brokers and their clients in a buying mood. I already had to sell this magnificent place, along with all the adaptations and comforts installed for people with a severe physical challenge like me. I lost my yesterday's life many times before. I had to let go of life securities many times. More often, I simply needed a paradigm shift. Connected to all things but not attached to material things. Yesterday it became clear that even during the holidays, Dutch society will stay at a standstill in many ways. Yes, just like the rest of the world at the same time. All amusement and leisure destinations have to remain closed, and at our homes, we are only allowed to invite three guests. All work needs to be executed from our home offices, and maybe we will undergo a curfew. Even more dramatic measures are on the drawing board. Hold up This was the sad part of the story. While thinking about past events, I had this epiphany. It changed the whole paradigm of my situation towards my lockdown surroundings. Of course, the February and March pandemic outbreak news had struck me in the face too, and the global hypnotic pandemic caught me too. The ongoing hypnotic avalanche of news facts and expert opinions and all kinds of predictions turned me into a virus specialist. I changed into an apathetic, mentally paralyzed talk show consumer incapable of seeing the challenge in this COVID-19 global hostage situation. Darwin once stated that only those species survive in time that are capable of adapting for the better. I suddenly realized that since this crazy pandemic, I rolled into a leveled playing field with my surroundings.
In this troubled society, a leveled playing field is finally developing from my perspective. I am used to living in lockdown. I am used to not being able to do most daily activities and adventures. No, I am not happy that people can’t do their daily routines, but the current situation shows me that I was mentally healthy, fit, and ready for the recent events and restrictions. Therefore I can follow my purpose: to help and assist entrepreneurs, and of course people in general, to cope with their fears and insecurities. Step One: Acceptance. Fear and insecurity are quite normal in these abnormal times. But from this point, you can be sure you can transform yourself into a better version of yourself. Why? Because I can do it too. This new insight inspired me to write this blog. You see me, caught in my powered wheelchair, stuck behind my adapted workstation. Lately, I was forced to let go of all certainties, just like you. I lost my business but already started something new. The only choice I had was to reinvent myself, and you can do that too! So I am happy that life is allowing me to reinvent myself within a leveled playing field. Working online? I did it my whole life! Staying at home? I did it much more than I wished for. Not able to travel freely? Check! Losing every material thing, I know that feeling. Still, as long as you are your authentic you, you can be real to somebody else who needs inspiration, resilience, and motivation to make their major step in self-realization. Life is unfair. Life is unjust; life is a bag full of shit sometimes! But in the end, even shit is fertilizer! I lived in isolation all my life. I know I can help you to do whatever it takes. Gain back what you lost. Reframe your situation and build back a better You.
https://medium.com/@joostjosephnauta/it-only-takes-1-moment-of-clarity-fd80dbdd18fd
['Joost Joseph Nauta']
2020-12-23 09:41:38.440000+00:00
['Lockdown', 'Overcoming Fear', 'Pandemic', 'Mental Health', 'Resilience']
I Think I’m Finally Clean: Taylor Swift, Sexual Assault, and Coming Forward
I Think I’m Finally Clean: Taylor Swift, Sexual Assault, and Coming Forward I think when I first came to understand sexual assault and harassment, I was 11 or 12. My family and I were walking in Chicago together. We had just gotten up to enjoy time around the city, and were heading to breakfast to enjoy a good morning together. We had just crossed the street, cars honking and people chattering on, the cold air biting at us, when we heard a man. He was no older than my mom, his hair shaggy and unkempt. He was screaming bloody murder at my mom, shouting about things I couldn’t even fathom at that age. I was shaken. I couldn’t understand why people would say such ruthless and inappropriate things, especially to people they don’t even know. My mom took it in stride, ignored him, and kept us walking. I had no idea how to react to this situation so I kept walking, pondering how this wasn’t affecting her. It wasn’t until I grew older that I realized she had probably dealt with this longer than I had been alive. All women have to, whether we want to or not. Around the same age, I discovered Taylor Swift’s music. My dad had previously played it in the car, letting me learn the entirety of Love Story and jamming out with me to You Belong With Me. We all enjoyed her music and kept listening to her new albums. We watched as she made the transition from country to pop, and the criticism she subsequently received from the general public. People called her basic, overplayed, annoying, a serial-dater. They called her all the things every single girl has ever heard in their life. She was never taken seriously as an artist, just a pop sensation with a teenage girl fanbase. She grew bigger and bigger, and my own interest crept up. I remember getting tickets to the 1989 tour for Christmas. I cried to my mom after I received them, and remember being excited to see her live. It was one of the best gifts I had ever received.
I remember planning my outfit for months, thinking about seeing her in person and squealing at the idea of hearing my favorite songs played (which were Blank Space and Shake It Off by the way, I understand these are basic but oh well) in a huge stadium. When the day finally came to see her in person, I remember excitedly doing my hair and wearing the best outfit an 11-year-old could wear. I was so excited. I remember getting to my seats and watching, awestruck at the number of people in the stadium. There were thousands of people, all excitedly chattering, waiting for Taylor to come out and perform. I remember looking at my mom and thanking her quietly for taking me. When Taylor first came out with the visionary Welcome To New York as her first performance, I remember I gasped. She ruled the stage. Everyone was following her, watching her every move, as she glided around it, singing her heart out. I was so starstruck by her. I remember copying her every move at the concert, standing up and cheering when she put her fun songs on. The one song I remember hearing but not really clicking with was the song Clean. Clean was the last song on 1989, a slow song, singing of healing after a bad time in life. I was not nearly as excited for this song; I prayed that it would end soon, so that we could get back to the fun music. I wanted to listen to New Romantics and 22 and The Story of Us. I didn’t want to listen to this slow and repetitive song. When I first learned of the Taylor Swift sexual assault case, I was 15. The COVID-19 pandemic had already come and taken away the end of my freshman year, leaving me stuck. I felt as if my life was spinning out of control. I couldn’t figure out how to ground myself to the reality that I was facing, which was growing grimmer and grimmer. Nothing seemed to be helping me, and I fell harder into a cycle of overthinking and obsessive behavior. The one thing I could say that remained consistent was my adoration of Taylor Swift.
With the end of the 1989 era, and her disappearance years, Reputation and Lover did not have a profound effect on me. Though I still listened to both albums, I could not shake the feeling that something was wrong; her heart wasn’t in the right place. Learning of Scooter Braun’s plans to keep Swift from getting her masters outraged me and brought with it an urge to protect her. When her documentary, Miss Americana, came out, I didn’t originally plan on watching it. Though I wanted to, the stresses of the year were building and I felt like I was constantly worried about something. The limited time I had to do practically anything surely was an explanation as to why I didn’t watch it. However, I think there may be another reason as to why I didn’t watch it. I was 12 when I was first looked at inappropriately. I was in the gym working out. I had been in workout clothes, a pair of leggings and a tank-top. I remember doing squats and turning around to see an old man watching me. I was uncomfortable. I didn’t know this man. I had never met him in my life, but he somehow believed that he had the right to look at my body like this. I was an underdeveloped 12-year-old. There was nothing to my body that was appealing. Even if my body was developed, it gave him no right to stare at me like I was only a piece of meat. Men did this constantly to me, and as I began to notice, to others as well. When I finally came around to watching Miss Americana, we were well into quarantine. A few months had gone by, I had sat by my bed thinking about everything, wondering when I was going to be able to get out and see my friends. The next time I saw my friend was early summer. She came over and we sat, socially distanced, just talking about life. I suggested we watch the documentary because we were both huge Taylor fans. Watching the documentary, I wasn’t expecting to feel so much.
Her talking of her eating disorder made me ache for her, and hearing how isolated she felt really did hurt me. I could relate so much to her, and my own empathy felt as if it was tearing me apart. When she talked about her sexual assault case, I was horrified. I had not known about the case, and was disgusted that she had to go through that, and by the man who had committed that act. David Mueller was a former DJ who had been fired after accusations that he had sexually harassed Swift. He sued her for defamation, claiming she had caused him to lose his job, and in return she countersued for $1. She ended up winning the case. The one year anniversary of the trial was shown on screen, and I realized that she was finally going to talk about it. She had kept fairly quiet about it except when going to court, so seeing her acknowledge it was as heartbreaking as it was empowering. Swift said, “A year ago I was not playing in a sold out stadium in Tampa, I was in a courtroom in Denver, Colorado. This is the day the jury sided in my favor and said that they believed me.” The version of Clean that she played afterward struck me. Clean has never been one of my favorite songs. I thought it was beautiful, but I always wanted to move on to the more exciting songs. The ones that made me jump and laugh and be excited and finally get to enjoy myself after a hard year. When I was 16, I went to a party at the beach with my friends. It was loud and obnoxious; kids were drinking and vaping, having the time of their lives. Partying had never quite been my style. I didn’t particularly hate it or enjoy it. I was mostly neutral about it. After coming off of one of the longest days of my life at the beach, I wanted to just relax and enjoy myself with friends. While I was talking with my friends, a boy came up to me and put his arm around me and my other friend. We were talking about school, where we were from, and how old we all were. I answered his questions politely. 
The arm around me made me uncomfortable, but not enough for me to say anything. As we kept talking, he continued to get more and more touchy, moving his hand and arm until it was pressed up against my boob. I tried to move away from his touch more and more. He also tried to open up my shorts, pressing his hand down, but my friends, luckily, noticed. One of them yanked me away, and another took my hand and walked away with me. She asked me if I was okay, and I said I was fine. After calming down for a few seconds I walked back over to my friends and noticed he was gone. One of them told me that he had said “the hot one had left.” I don’t know what my reaction was supposed to be. Had that really just happened? I didn’t think it was something that could happen to me. Of course I knew that it was likely something would occur, but I think when it’s actually happening, and when it ends, you’re frozen. I didn’t know how to react. How to say no. How to say I wasn’t comfortable with this. How to say that I wasn’t interested. A lot of people don’t have the support I have from my friends. My friends have never questioned me, have never made me feel crazy about this, have never done anything to victim-blame me. A lot of people don’t have this. People, especially those who are a part of the online crowd, can be incredibly toxic, shouting about how it was our fault for doing something to provoke the perpetrator. It shouldn’t matter what I wore. It shouldn’t matter that I went to a party. It shouldn’t matter that there was drinking. Consent is important. No one should have been touching me like that without my permission, especially since I had never met this boy before in my life. Now that I’ve rewatched Miss Americana and become even more attached to Taylor Swift after her releases of Folklore and Evermore, I understand a lot of what she felt that day. I think most women can. I do understand that my situation could have been a lot worse. I could’ve been drunk. I could’ve been alone. 
But it wasn’t. I’m grateful to everyone who was there that night, and infinitely grateful that no one found me intoxicated and alone, because I shudder to think what could have happened. Women shouldn’t have to worry about this. Women shouldn’t have to be worried that someone will take advantage of them, or prey upon their body when they are unaware or unwilling to commit the sexual act. Women shouldn’t have to carry pepper spray and a rape whistle around. Women shouldn’t have to do these things to keep themselves protected. Instead of acting like the woman needs to learn to protect herself, start asking yourself why men are not taught better. Teach your boys not to touch people without their consent. Teach your boys to ask for permission. Teach your boys to take no for an answer. Teach your boys that it is not a shameful thing to be rejected. Teach your boys to be better. Looking back on everything that happened, I’ve come to realize that a lot of my own contemplation over the situation came from the fact that I was victim-blaming myself. I kept wondering if this instance even counted. Was it even bad enough? Did I suffer enough? Did it really matter when there are bigger instances occurring, with women actually being raped? I’ve come to realize that I shouldn’t be asking myself those questions. One person’s suffering should not rule out another’s. Yes, some people do have it worse. But no instance of harassment and attempted assault should be glossed over. In fact, I think it’s important that I do address that this happened to me. Listening to Clean for the first time after that experience was incredibly hard. “Gone was any trace of you, I think I am finally clean.” Hearing those words coming from Taylor felt like she was writing down what I couldn’t. My body is my own. It is not anybody else’s to take and shape however they want. It is my own. I am the one who can make the decision as to whether I want to rid myself of that or not. 
I am the one who made the decision to become clean. “We don’t fight for our own happy endings. We fight to say you can’t. We fight for accountability. We fight to establish precedent. We fight because we pray we’ll be the last ones to feel this kind of pain.” — Chanel Miller, Know My Name.
https://medium.com/@caroline.feagin/i-think-when-i-first-came-to-understand-sexual-assault-and-harassment-i-was-11-or-12-2f862d16fa3d
[]
2021-09-15 19:32:30.303000+00:00
['Harassment', 'Rape Culture', 'Taylor Swift', 'Feminism', 'Sexual Assault']
SHAME OF THE SCIENCE WORLD
SHAME OF THE SCIENCE WORLD Thalidomide is a drug whose story begins in the 1950s and 1960s, and the research behind it was done in deeply unethical ways. In Nazi Germany, doctors who wanted to find a vaccine against typhoid deliberately infected innocent people in the concentration camps with typhoid and then experimented on them. Meanwhile, hundreds of people died in the camps. Later, some of these doctors worked at the pharmaceutical company Chemie Grünenthal, where they discovered the thalidomide compound. A drug called Contergan was produced with this substance, whose pharmacological effects were not fully known due to insufficient research. The medication can activate or suppress the immune system, so it was thought usable for treatment. Later, the anti-vomiting and calming effects of the drug were discovered. A calming effect? Calming was the effect most needed in wartime. Therefore, many people used this drug. The antiemetic (anti-vomiting) effect was also attractive for pregnant women. One in seven Americans used it regularly, and the demand for thalidomide in European markets was much higher. It was the only non-barbiturate sedative available at the time; barbiturates are sedative drugs that cause serious side effects. For this reason thalidomide, which was first used in Germany, attracted great attention and was marketed in 46 countries by 1960. It was sold just like aspirin. Australian obstetrician William McBride described its antiemetic effect in 1960 and advised pregnant women to use the drug. Pregnant women started taking this over-the-counter drug. But then things changed. The same obstetrician, William McBride, began associating thalidomide with birth defects in 1961. A German newspaper reported that 161 babies had been badly affected by thalidomide. People who used it started to experience muscle aches, weakness, and peripheral nervous system disorders. 
In late 1962, as miscarriages and birth defects increased, the drug began to be banned in every country where it was sold. But by then it was too late. As we said earlier, thalidomide was widely available without a prescription. Meanwhile, the company that manufactured the drug denied the reports that it was harmful. The company lied to all the doctors who said that this thalidomide-containing drug was dangerous. Even as its negligence cost thousands of people their health, and eventually threatened the company’s own existence, the company kept insisting, for the sake of money, that the medicine was safe. Experiments on chick embryos later revealed that thalidomide prevents the embryo from forming blood vessels as it develops, adversely affecting bone development. After that, the company resisted a little longer, but finally its production stopped completely. In the end, only two countries largely escaped the thalidomide disaster: the US and Turkey. Turkey is the only country where no cases were seen, because in America, although Dr. Frances Kelsey refused to grant legal approval, some 2,000 doses were still used under the name of “introduction samples”; therefore, a number of cases were observed in America. In Turkey, the veterinarian Prof. Dr. Sureyya Tahsin Aygun warned the Turkish Ministry of Health in accordance with Turkey’s laws, and the drug was never used in any way. Therefore, while thalidomide caused some 90,000 deaths and more than 100,000 birth defects around the world, no such case was demonstrated in Turkey.
https://medium.com/predict/shame-of-the-science-world-322d696787d1
['Recep Suluker']
2020-12-27 00:54:46.743000+00:00
['Science', 'Future', 'Technology', 'Disaster', 'Disability']
All you need to know about The Brilliant Hanging Lake in Colorado Rockies
Trail Overview, Trail Tips, Hike Essentials, the Permit & Shuttle System, Campgrounds, its Geology. Humans have altered most of the natural habitats on our planet. Only a few of them have managed to remain unscathed, primarily because they are too hard to reach. The Hanging Lake in Glenwood Canyon in the Colorado Rockies is one such blessed place. Perched at an elevation of over 7000 ft in the Southern Rocky Mountains, this tranquil lake is a National Natural Landmark. The Hanging Lake Trail is one of Colorado’s most visited and beautiful short hikes. It is located in the White River National Forest near Glenwood Springs, Colorado, just 3 hours of driving distance from the Denver airport. The trail takes you along Dead Horse Creek and through the rocky Glenwood Canyon. We visited the Hanging Lake on my husband’s birthday 🎂. For us, all the important dates are marked by either a visit to a National Park or doing an adventure activity. (We went skydiving on my birthday 😎). Honestly, the beauty of Hanging Lake exceeded my expectations. The view at the top is simply indescribable. The trail itself is perfect with the creek, trees, rock formations, and little bridges to cross along the way. The brilliantly turquoise Hanging Lake is as transparent as air. You can distinctly see trout, tree logs, living roots, the lake bed and all the microscopic life that contributes to the magical colour of the lake. Table of Contents Hanging Lake — A Geological Wonder Trail Overview Trail Tips Hike Essentials The Permit & the Shuttle System Campgrounds Good to Know Facts 1. Hanging Lake — A Geological Wonder The fragile shoreline of the Hanging Lake is the result of millions of years of deposits of dissolved limestone over rocks and logs. Technically, these deposits are known as travertine. The source of the limestone is the Glenwood Canyon walls; melting ice from the top of the canyon seeped into the canyon walls and dissolved the limestone. As a result, the travertine system formed here. 
The lake itself was formed by a geologic fault, which caused the lake bed to drop away from the gently sloping valley above it. Water flows into the lake from Bridal Veil Falls. Over time, the flowing water has deposited carbonates, including travertine, on the rocks and logs. The mix of minerals, carbonates and travertine gives the lake its ethereal depth and colour. 2. Trail Overview The Journey It’s a short but strenuous hike 🏃‍♀. It gains approx 1200 feet of altitude in little over a mile. There are plenty of benches along the way. You can take as many breaks as you feel like. The trail winds through the Glenwood Canyon. That means you will be climbing alongside tall rocky walls. Because of the canyon walls and trees, almost the entire hike is shaded 😃 which kind of compensates for its continuous heart-pumping ascent. The Dead Horse Creek crosses the trail multiple times. There are wooden bridges (I guess there are 7 of those) to cross the creek. No doubt, this trail is very well maintained. The hanging gardens 🌿, trees 🌲, chirping birds 🦅, a creek, mini waterfalls 💦, wooden bridges; all contribute to the picturesque scenery. We must have stopped at least 10 times on the trail to click pictures 😀. The whole trail is visually enchanting, making the hike a very enjoyable one. No surprise, it’s a very popular one. Just before reaching the Hanging Lake, there is a short section of really steep climb. To make the climb easier, a staircase is carved in the rocks and a handrail is provided for support. If you can, take a minute to breathe in the expansive view of the canyon from this handrail section. You will soon reach a fork: on your right 👉 is the Hanging Lake, and straight ahead is the Spouting Rock. The Destination You will be amazed by the colours and character of the Hanging Lake. This lake does have a personality 😃. There is a boardwalk on the periphery of the lake which helps you observe and appreciate the lake from three sides. 
But the boardwalk diminishes the feeling of a lake hanging in the mountains. The Bridal Veil Falls pour water into the lake. These falls are fed by the water coming from the Spouting Rock above them. Finally, the last stop of the trail — Spouting Rock. It is just a 10 min walk from the lake. Spouting Rock has 3 falls, 2 of which appear to be coming straight out of the rock wall 😯. You can walk behind them. Kids love it. 3. Trail Tips The Hanging Lake is one of the most popular, if not the most popular, hikes in the Colorado Rockies, so avoid holiday weekends. Unless you are a pro hiker, it’s best not to hike in winter, early spring and late fall. The rocky section can become icy and very slippery. Take your time and don’t pass folks on the handrail section. It’s best to hold the hands of kids in the handrail section. The cardinal rule for hiking in the wilderness is — Leave No Trace. Pack out your plastic water bottles and other trash. Please don’t feed the wildlife (I know you will be tempted to) 🐿, as it makes them dependent upon an unnatural food source (especially fingers). From the shuttle stop 🚐 it is about a ten-minute walk on the paved Glenwood Canyon Bike Trail to the Hanging Lake trailhead, where permits are checked before one is allowed to get on the trail. At the trailhead, there are clean restrooms, a water fountain and trash cans. There are no facilities elsewhere on the trail. Use them when you can 🙂. The shuttle system functions like the trains in Switzerland: they leave right on time ⏰ and do not wait for anyone. Show up a little early and be ready to go. On average, it takes between 2–4 hours to complete this 2.4-mile trail. Plan accordingly 📝. 4. Hike Essentials Carry more water 💧 than you think you would need, especially during the summer months. Unless you are Bear Grylls. Always carry some snacks 🍫 along when you go for a hike. They help to keep your energy levels up. 
As this hike entails a solid knee workout, carrying a trekking pole would be wise. It helps decrease the impact on your knees by spreading it across your arms. Or you can be resourceful (like me) and find a sturdy wooden stick that serves the purpose 😃. At some places, the creek flows over the trail, more like little streams. Thus, it would be wise to carry a change of socks. Frankly, wearing waterproof hiking boots may be overkill 🤔. Lastly, the usual suspects — sunscreen, a hat 👒 and a great camera 📸 with ample battery and memory. 5. The Permit & The Shuttle System The brilliant blue-green colours of the lake are akin to an exotic beach. If only there was some open space to lounge around it. Hanging Lake is one of the biggest unchanged travertine systems in the Colorado Rockies. Another magnificent example of a travertine system is the Mammoth Hot Springs in Yellowstone National Park 👌, though they look very different in comparison to the travertine system here. Mammoth Hot Springs is amazingly beautiful. To know more, read my article on Must-See Places in Yellowstone NP. Unquestionably, the lake has become ultra-popular in the last few years 💁‍♀. The increase in footfall has become a threat to the fragile ecosystem of the Hanging Lake 😞. Therefore the authorities decided to regulate the number of people visiting. Also, the trailhead is located right off I-70 adjacent to the Colorado River, making it impossible to expand the parking lot. The Pre Permit Era In the pre-permit era (before Feb 2019), finding parking at the Hanging Lake trailhead was a big deal. For instance, we had to wait for 4 hours to find a spot in a not so busy season 😔. To give you a perspective of how things worked when the permit system was not in place, I’ll share my story, briefly. We visited at the tail end of the busy season and that’s why I did not think that parking would be a problem for us. So we reached the Hanging Lake parking lot by 12 noon. 
But to my surprise, the parking lot was full, so the ranger asked us to check back in some time 😧. We went to the nearest rest area (about a 15 min drive) and kept checking for parking every 45 mins 🤦‍♀. Finally, we got a spot on our 4th round, around 4 PM. We wasted 4 hours of our precious vacation time for a parking spot; can you believe it? I would rather spend the money 💰 on a ticket than wait for 4 hours (with no guarantee) for parking. The Post Permit Era In the post-permit era, you will have to shell out $12 per person. It includes a permit to hike and a shuttle ride to the trailhead. You can make a reservation online or purchase the permit at the window. When booking online 📱, you will be asked to choose a date and a time slot. If for some reason you miss the allotted time slot, talk to the staff at the trailhead. They may let you go depending on the trail capacity at that time. If you wish to bike to the Hanging Lake 🚲, you will have to get a bike permit. Both permits cost the same. You can also pedal on the Glenwood Canyon Recreation Trail. Keep in mind ☝ that bikes are not allowed on the shuttle or the trail. Get your reservations done sooner rather than later, as only 615 permits are issued for one day. This system certainly ensures that only a limited number of people are at the lake at a given time. While the shuttle services are available only during the peak season, that is, May 1 — Oct 31, the permits to visit the Hanging Lake are required year-round. Honestly, I do feel that the permit fee could be a little less. The shuttle starts from the Hanging Lake Welcome Center. The welcome centre is right next to the Glenwood Springs Rec Center. 6. Campgrounds close to the Hanging Lake Though camping is not allowed on the Hanging Lake trail, there are many campgrounds in the vicinity. Here are some worthy options: The Avalanche Creek Site Campground 🏕 It is an hour’s drive from the Hanging Lake trailhead. It’s close to the Maroon Bells trailhead as well. 
No reservations are required. Click here for more details. Redstone Campground 🏕 Only 37 miles from the Hanging Lake. Reservations are required. Click here to know more. Bogen Flats Campground along the Crystal River 🏕 45 miles from the Hanging Lake. Reservations are required. Click here for details. 7. Good To Know Facts Frozen Bridal Veil Falls. No pets are allowed on this trail. You can check for pet boarding options. There is no cell service available on the Hanging Lake trail. Fishing is not allowed in the Hanging Lake. You can fish in the Colorado River at the trailhead if you want to. Touching or drinking Hanging Lake’s water is strictly prohibited, as is standing under, behind or on top of the waterfalls and walking on the fallen tree logs within the lake. This is a no-drone area. It’s best to go in the morning, as you will have very little company 😄. If you plan to go hiking in Rocky Mountain National Park, you can refer to the Guide to the Most Rewarding Hike in Colorado Rockies. For those of you who love hiking in National Parks, read about some of the stunning hikes in Zion NP and Bryce NP. For night sky gazers, read about my superb experience in Natural Bridges National Monument. Happy Hiking and Happy Travels. Ciao!
https://medium.com/must-visit-spots-and-walks-in-yellowstone-national/all-you-need-to-know-about-the-brilliant-hanging-lake-in-colorado-rockies-1fc34f1c2d06
['Sanjeevnie Syngal']
2019-09-15 22:37:35.438000+00:00
['Colorado', 'Travel Writing', 'Travel', 'Hiking', 'Traveling']
Use the Tube!
In the midst of last week’s heatwave I twice went into town to meet with friends — once with a change onto the Victoria line, and the other keeping to my local Northern line only. My last foray had been March 18th (the day Boris pulled the plug on the educational system by cancelling the exams with no consultation or planning), when I already felt rather exposed and slightly nervous about my foolhardiness on the journey home, especially when a woman sat down right next to me despite other available seats; already my sense of safe space was overriding my Londoner acceptance of zero personal space when commuting. Both journeys last week admittedly were not in rush hour, but I am now confident about my next journey, when the Allbright Club hosts a coffee morning to celebrate reopening their doors. Having spoken to my neighbours at our weekly Friday night drinks in front of our houses, I realise that I am the trailblazer, and so thought it timely to share WHY I feel confident using the tube to get round town again. The familiarity of descending into the station with the new commuter habit of pausing to hook on your mask (home made cotton, so rewashable — no need to contribute to the new menace of discarded surgical masks, the new normal equivalent of the plastic bag lying in the gutters and catching on twigs in bushes) is already becoming second nature and feels strangely comforting & reassuring. Smiling at the TfL staff (and trying to make sure it reaches your eyes so they can see it!), you start the familiar journey, nod at other mask-wearing commuters all spaced seats apart, and settle into your usual habit of reading a book/magazine or paper, making notes or, most likely, scrolling your phone for games. 
As you whizz up to your stop (and compare the speed of your journey to the recent trips out of town to wild swimming spots, where the traffic to get onto the A3 seems to have recently returned to gridlock most afternoons), you also appreciate the escape from the whole family all working at home (and that you’ve escaped responsibility for yet another lockdown lunch!). Ascending the escalator back out onto the street, but this time in a central London postcode, you help yourself to the hand sanitiser (especially gratefully when you realise you picked up a pre-Covid handbag on the way out and so have none of your own with you) and emerge blinking into the new normal, feeling hopeful that London really is open again…. Let the real life networking return!!!
https://medium.com/@BarefootBronnie/use-the-tube-91d66f9ff55a
['Bronwen Gray']
2020-07-02 17:33:21.326000+00:00
['London', 'Travel Tips', 'Masks', 'Lockdown Diary', 'Commuting']
Implementing Multi-Language Search in Gatsby v2 — Powered Projects
Hello, my name is Alex. I work at humansee labs. We create cool web projects in the shortest time using the JAMstack approach, React and Gatsby. Now, I’ll show you how we implement multi-language search in our projects. If you decide to get a search plugin from the Gatsby Plugin Library, you will find that there are not too many options in case you need a multi-language search. Thus we had nothing to do except write our own search plugin. We started by finding a search engine that could help us and chose lunr.js. After a few days, the npm multi-language search module was written, and it’s available on npm as gatsby-plugin-lunr — any pull requests are welcome! :-) In this article I will show how to add the multi-language search functionality to the Gatsby v2 boilerplate project using our plugin. It’s even faster than I can run 100 yards. First of all, you need to create a Gatsby project. You could do all the steps by yourself, but I recommend using Gatsby starters. In this tutorial I will use gatsby-starter-default with Gatsby v2. Install Boilerplate The Gatsby boilerplate is installed. Let’s add our search plugin. I use gatsby-plugin-lunr. The list of supported languages is at the link. Add the Search Plugin To add the plugin, open the gatsby-config.js file in the root directory and add the following code in the plugin section: In the plugin, I use mapPagesUrls, so add this code to the top of gatsby-config.js: The search plugin indexes static files such as .json or .md. In my example, I will use files with the .md extension. Add .md Files Support Let’s add support for .md files to our project. To do so, open gatsby-config.js and add the following plugins: After adding the plugins, install these modules via npm or yarn. Now we have support for .md files, but still no content. I recommend creating a templates folder in the root directory. Files in this folder are pure React components that will be initialized with the content that we have in the data folder with .md extensions. 
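The gatsby-config.js snippets referenced above did not survive into this copy of the article. As a rough sketch only (the option names follow my recollection of the gatsby-plugin-lunr README and should be verified against it; the fields, languages and resolvers below are illustrative, not the author's exact code), a multi-language configuration could look like:

```javascript
// gatsby-config.js — hypothetical sketch of a gatsby-plugin-lunr entry.
// Verify option names against the plugin's README before using.
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-lunr',
      options: {
        // One lunr index is built per language listed here.
        languages: [{ name: 'en' }, { name: 'fr' }],
        // Document fields to index; `store: true` keeps the value in the
        // index so it can be shown in results, `boost` raises its weight.
        fields: [
          { name: 'title', store: true, attributes: { boost: 20 } },
          { name: 'content' },
          { name: 'url', store: true },
        ],
        // How to extract those fields from Markdown nodes at build time.
        resolvers: {
          MarkdownRemark: {
            title: node => node.frontmatter.title,
            content: node => node.rawMarkdownBody,
            url: node => node.fields.url,
          },
        },
      },
    },
  ],
}
```

The serialized index is then written out during `gatsby build` and loaded by the browser at query time.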
In the templates folder, create the Page.js file and add the following code: This React component will create a page from the content that we have in our .md file. Let’s create index.md in the data directory: One small step remains before we finish. We still have to let Gatsby know how to create pages. To do so, open gatsby-node.js and add the following code: Add the Search React Component Go to the components folder and create a Search.js file: Now add our Search component to Header.js in the components folder: Our Layout.js component should look like this: Finish Now, everything is done, but it’s not working yet. The reason lies in how the index files are created: they are generated when you build the Gatsby project. Run gatsby build. After the build has run successfully, restart the gatsby develop server. That’s all; now our search component works as we expected. Try to input “React” in the search field. The full code for this article is available on GitHub; issues and any proposals are welcome :-) Thanks for reading!
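The Search.js code is also missing from this copy. Independent of lunr itself, the core lookup such a component performs (mapping search hits back to the stored page data before rendering result links) can be sketched in plain JavaScript; `resultsToPages` and `naiveSearch` are hypothetical helper names for illustration, not part of the plugin:

```javascript
// Hypothetical helpers (not part of gatsby-plugin-lunr) showing the core
// lookup a Search component performs before rendering result links.

// `results` has the shape index.search(query) returns: [{ ref: '...' }, ...]
// `store` maps each ref to the fields stored at index time ({ title, url }).
function resultsToPages(results, store) {
  return results
    .map(({ ref }) => store[ref])
    .filter(Boolean) // drop hits with no stored entry
}

// A naive substring search over the store, handy for exercising the UI
// without building a real lunr index.
function naiveSearch(query, store) {
  const q = query.toLowerCase()
  return Object.values(store).filter(page =>
    page.title.toLowerCase().includes(q)
  )
}
```

In the real component, the refs come from the language-specific lunr index built by the plugin, and the mapped pages are rendered as a list of links.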
https://medium.com/humanseelabs/gatsby-v2-with-a-multi-language-search-plugin-ffc5e04f73bc
['Alex Duda']
2018-07-25 06:50:35.592000+00:00
['Reactjs', 'JavaScript', 'Gatsbyjs', 'Multi Language Search', 'Jamstack']
Building a Recommendation System with PySpark MlLib — Part 1
Recommendation Systems and their implementation on the distributed computation framework Spark Source: Marutitech This article is co-authored by Ubaid-ur-Rehman and Zakriya Ali Sabir Recommendation systems have come a long way since their inception in the early 1990s. Tapestry, built at Xerox PARC in 1992, was designed to recommend documents from newsgroups, and its authors introduced the term collaborative filtering as they used social collaboration to help users cope with a large volume of documents; in 1994, P. Resnick and his team at MIT built the GroupLens recommender in the same spirit. Since then, recommendation systems and the approaches used for their creation have undergone serious evolution, thanks to competitions like the Netflix Prize. Firstly, let’s realize the importance of recommendation systems. Importance of Recommendation Systems To realize the importance of recommendation systems, consider that McKinsey published a report stating 35% of Amazon’s revenue is generated by its recommendation engine. Also, in 2006 Netflix hosted a competition worth $1 million for improving their recommendation engine by 10%; it took three years to complete. Results of the Netflix Prize competition can be seen here. The significance of recommendation systems is evident from the facts and figures stated above. Besides, there is another important factor which increases their significance: nowadays, with the humongous amount of data and information available over the internet, it becomes a hassle for the user to filter out the exact information that he/she wants. Recommender Systems as Mutualists Source: Google A recommendation system serves a “mutualism” relationship between two parties. Mutualism is a relationship in which both involved parties benefit from maintaining the relationship, just like the relationship between a hummingbird and flowers. 
How does this work in this context? Let’s digest this with an example: suppose you are a visitor to an e-commerce website and you buy an item there. The website starts recommending you similar items based on your behavior at that particular website. Let’s say their recommendation system is 100% accurate (only in ideal scenarios) and will only recommend items that you are highly interested in. What will happen is that you will start visiting their website more often, generating more traffic for them and ultimately revenue. But how will this benefit you as a user? Imagine crawling the whole web without any assistance to get to your required item. In this era of overwhelming information on the web, it is very difficult to distill your required information. Therefore, a good recommender system, along with adding revenue for the provider, is a very good friend of the user and helps save time in this very fast-paced world. Recommendation System Techniques Before moving on to the algorithm explanation and implementation part, let’s first discuss different approaches to building a recommender system. Source: VUB Artificial Intelligence Lab The flow chart above displays the basic techniques for implementing a recommender system from an eagle eye’s perspective. In this article, we will be explaining an algorithm that is built on top of the collaborative filtering technique; therefore, we will discuss only collaborative filtering here. As for content-based algorithms, this type of filter does not involve other users, only ourselves: based on what we like, the algorithm simply picks items with similar content to recommend to us. Content similarity is one of the metrics we can use when calculating similarity between users or items. Perhaps we will deep-dive into content-based algorithms in future articles.
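Before the Spark implementation, the collaborative-filtering intuition above can be made concrete with a tiny plain-Python sketch. The ratings, function names and neighbor-based approach are illustrative only; real systems such as Spark MLlib typically use matrix factorization (ALS) rather than this naive user-to-user scheme:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sparse rating dicts {item: rating}."""
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(target, others, k=2):
    """Score items the target hasn't rated, weighted by the similarity of
    the k most similar users (a minimal user-based CF sketch)."""
    neighbors = sorted(
        ((cosine_similarity(target, u), u) for u in others),
        key=lambda pair: pair[0],
        reverse=True,
    )[:k]
    scores = {}
    for sim, user in neighbors:
        for item, rating in user.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * rating
    # Items sorted by predicted interest, best first.
    return sorted(scores, key=scores.get, reverse=True)
```

With toy data, a user who rated the same items similarly to you "votes" for the items you haven't seen yet, which is exactly the social-collaboration idea Tapestry introduced.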
https://medium.com/data-science-simplified/building-a-recommendation-system-with-pyspark-mllib-part-1-836ada3620ed
['Muhammad Faheem Sharif']
2019-08-27 12:39:42.259000+00:00
['Collaborative Filtering', 'Recommendation Engine', 'Data Science', 'Apache Spark', 'Machine Learning']
Sales forecasting in retail: what we learned from the M5 competition
Our review of recurrent issues encountered in a sales forecasting project, and how we handled them for the M5 competition. TL;DR This article sums up our learnings from the M5 sales forecasting competition, which consisted in predicting future sales in several Walmart stores. We will walk you through our solution and discuss the following topics: What machine learning model worked the best for this task? Which features had the biggest predictive power? How to tackle a dataset with intermittent sales? How to deal with an extended forecasting horizon? How to ensure model robustness with an appropriate cross-validation? Using machine learning to solve retailers’ business challenges Accurate sales forecasting is critical for retail companies to produce the required quantity at the right time. But even if avoiding waste and shortage is one of their main concerns, retailers still have a lot of room for improvement. At least, that’s what people working at Walmart think, as they launched an open data science challenge in March 2020 — the M5 competition — to see how they could enhance their forecasting models. The competition aimed at predicting future sales at the product level, based on historical data. More than 5000 teams of data lovers and forecasting experts discussed for months the methods, features and models that would work best to address this well-known machine learning problem. These debates highlighted some recurring issues encountered in almost all forecasting projects. And even more importantly, they brought out a wide variety of approaches to tackle them. This article aspires to summarize some key insights that emerged from the challenge. At Artefact, we believe in learning by doing, so we decided to take a shot and code our own solution to illustrate it. Now let’s go through the whole forecasting pipeline and stop along the way to understand what worked and what failed. 
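One of the questions listed above, cross-validation, is worth a concrete sketch: with time series, a shuffled k-fold leaks future information into training, so a rolling-origin scheme keeps each validation window strictly after its training period. This is a generic illustration (the function name and fold layout are ours, not the competition code):

```python
def rolling_origin_splits(n_days, horizon, n_folds):
    """Yield (train_end, valid_days) pairs where each validation window of
    length `horizon` lies strictly after its training period (days < train_end),
    mimicking how the model will actually be used at prediction time."""
    splits = []
    for fold in range(n_folds):
        valid_end = n_days - fold * horizon
        valid_start = valid_end - horizon
        if valid_start <= 0:  # not enough history left for training
            break
        splits.append((valid_start, range(valid_start, valid_end)))
    return splits[::-1]  # chronological order, earliest fold first
```

Each fold trains on everything before `train_end` and validates on the `horizon` days that follow, so the validation setup matches the deployment setup and no future data leaks backward.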
Problem statement Hierarchical time series forecasting The dataset contains 5-year historical sales, from 2011 to 2016, for various products and stores. Some additional information is provided, such as sell prices and calendar events. Data is hierarchically organized: stores are located in 3 states, and products are grouped by categories and sub-categories. Our task is to predict sales for all products in each store, on the days right after the available dataset. This means that 30,490 forecasts have to be made for each day in the prediction horizon. This hierarchy will guide our modeling choices, because interactions within product categories or stores contain very useful information for prediction purposes. Indeed, items in the same stores and categories might have similar sales evolution, or on the contrary they could cannibalize each other. Therefore, we are going to describe each sample by features that capture these interactions, and prioritize machine learning based approaches over traditional forecasting ones, to consider this information when training the model. Two main challenges: intermittent values and an extended prediction horizon At this stage, you might think that it is a really common forecasting problem. You’re right and that’s why it is interesting: it can relate to a wide range of other projects, even if each industry has its own characteristics. However, this challenge has 2 important specificities that will make the task more difficult than expected. The first one is that the time series we are working with have a lot of intermittent values, i.e. long periods of consecutive days with no sales, as illustrated on the plot below. This could be due to stock-outs or limited shelf space in stores. In any case, this complicates the task, since the error will skyrocket if sales are predicted at a regular level while the product is off the shelves. The second one comes from the task itself, and more precisely from the size of the prediction horizon. 
Competitors are required to generate forecasts not only for the next week, but for a 4-week period. Would you rather rely on the weather forecast for the next day or for 1 month from now? The same goes for sales forecasting: an extended prediction horizon makes the problem more complex as uncertainty increases with time. Feature engineering — Modeling sales’ driving factors Now that we have understood the task at hand, we can start to compute features modeling all phenomena that might affect sales evolution. The objective here is to describe each triplet Day x Product x Store by a set of indicators that capture the effects of factors such as seasonality, trends or pricing. Seasonality Rather than using the sales date directly as a predictor, it is usually more relevant to decompose it into several features to characterize seasonality: year, month, week number, day of the week… The latter is particularly insightful because the problem has a strong weekly periodicity: sales volumes are bigger on the weekends, when people spend more time in supermarkets. Calendar events such as holidays or NBA finals also have a strong seasonal impact. One feature has been created for each event, with the following values: Negative values for the 15 days before the event (-15 to -1) 0 on the D-day Positive values for the 15 days following the event (1 to 15) No value on periods more than 15 days away from the event The idea is to model the seasonal impact not only on the D-day, but also before and after. For example, a product that will be offered a lot as a Christmas present will experience a sales peak on the days before and a drop right after. Trends Recent trends also provide useful information on future sales and are modeled using lag features. A lag is the value of the target variable shifted by a certain period. For any specific item in a given store, the 1-week lag value would be the sales made one week ago for this particular item and store. 
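The lag idea just described can be sketched in plain Python for a single daily series (a simplified illustration, not the competition code, which works on the full Day x Product x Store table):

```python
def lag_feature(sales, k):
    """Shift a daily sales series by k days.

    Row t of the result holds the sales observed k days earlier;
    the first k rows have no history, so they are None.
    """
    return [None] * k + sales[:-k] if k > 0 else list(sales)

daily_sales = [3, 0, 5, 2, 4]
print(lag_feature(daily_sales, 1))  # [None, 3, 0, 5, 2]
print(lag_feature(daily_sales, 2))  # [None, None, 3, 0, 5]
```

With pandas, the same shift is a one-liner (`groupby(...).shift(k)` per store-item), but the toy version makes the mechanics explicit.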
Different shift values can be considered, and the average of several lags is computed as well, to get more robust predictors. Lags can also be calculated on aggregated sales to capture more global trends, for example at the store level or at the product category level. Pricing A product’s price can change from one store to another, and even from one week to another within the same store. These variations strongly influence sales and should therefore be described by some features. Rather than absolute prices, relative price differences between relevant products are more likely to explain sales evolutions. That’s why the following predictors have been computed: Relative difference between the current price of an item and its historical average price, to highlight promotional offers’ impact. Relative price difference with the same item sold in other stores, to understand whether or not the store has an attractive price. Relative price difference with other items sold in the same store and same product category, to capture some cannibalization effects. Categorical variables encoding Categorical variables such as the state, the store, the product name or its category also hold a significant predictive power. This information has to be encoded into features to help the model leverage the dataset hierarchy. One-hot encoding is not an option here because some of these categorical variables have a very high cardinality (3049 distinct products). Instead, we have used an ordered target encoding, which means that each observation is encoded by the average sales of past observations having the same categorical value. The dataset is ordered by time for this task to avoid data leakage. All categorical variables and some of their combinations have been encoded with this method. This results in very informative features, the best one being the encoding of the product and store combination. If you wish to experiment with other encoders, you can find a wide range of methods here. 
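The ordered target encoding described above — each observation encoded by the average sales of earlier observations sharing its categorical value — can be sketched as follows (a simplified illustration, not the competition code):

```python
def ordered_target_encode(categories, sales, prior=0.0):
    """Encode each row by the mean target of earlier rows with the same category.

    Rows are assumed to be sorted by time, so the encoding only ever looks
    backwards -- this is what prevents target leakage.
    """
    sums, counts, encoded = {}, {}, []
    for cat, y in zip(categories, sales):
        if counts.get(cat, 0) > 0:
            encoded.append(sums[cat] / counts[cat])
        else:
            encoded.append(prior)  # no history yet: fall back to a prior
        sums[cat] = sums.get(cat, 0.0) + y
        counts[cat] = counts.get(cat, 0) + 1
    return encoded

enc = ordered_target_encode(["a", "a", "b", "a"], [2.0, 4.0, 10.0, 6.0])
print(enc)  # [0.0, 2.0, 0.0, 3.0]
```

The fourth row gets 3.0 because the only past "a" observations are 2.0 and 4.0; its own value of 6.0 is never used to encode itself.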
One LightGBM model per store Now that we have our features, let’s move on to the modeling part. LightGBM is one of the great winners of the M5 competition as it has been used by most of the top solutions. This is also the model we have chosen, due to its high speed and low memory usage. As it is a tree-based method, it should be able to learn different patterns for each store, based on their specificities. However, we observed that training a model with samples from only one store resulted in a slight increase in forecast accuracy. Therefore, we have decided to train 10 different LightGBM models, one per store, which also had the advantage of reducing the overall training time. Tweedie loss to handle intermittent values Different possible strategies can be used to deal with the intermittent values issue. Some participants decided to create 2 separate models: one to predict whether or not the product will be available on a specific day, and a second one to forecast sales. Like many others, we have chosen another option, which is to rely on an objective function adapted to the problem: the tweedie loss. Without going into the mathematical details, let’s try to understand why this loss function is appropriate for our problem, by comparing sales distribution in the training data and the tweedie distribution: They look quite similar and both have values concentrated around 0. Setting the tweedie loss as an objective function will basically force the model to maximize the likelihood of that distribution and thus predict the right amount of 0s. Besides, this loss function comes with a parameter — whose values range from 1 to 2 — that can be tuned to fit the distribution of the problem at hand: Based on our dataset distribution, we can expect the optimal value to be between 1 and 1.5, but to be more precise we will tune that parameter later with cross-validation. 
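To make the tweedie objective a bit less abstract, here is the standard unit tweedie deviance for a power parameter between 1 and 2, written out as a plain-Python helper (for intuition only — in practice you would simply set the tweedie objective and its variance power in the LightGBM parameters rather than implement anything yourself):

```python
def tweedie_deviance(y, mu, p=1.5):
    """Unit tweedie deviance for 1 < p < 2.

    y  : observed sales (may be 0, which the tweedie handles gracefully)
    mu : predicted mean (must be > 0)
    p  : variance power; p -> 1 approaches Poisson, p -> 2 approaches gamma
    """
    assert 1.0 < p < 2.0 and mu > 0
    return 2.0 * (
        y ** (2.0 - p) / ((1.0 - p) * (2.0 - p))
        - y * mu ** (1.0 - p) / (1.0 - p)
        + mu ** (2.0 - p) / (2.0 - p)
    )

# A perfect prediction has (near-)zero deviance; errors are penalized.
print(abs(tweedie_deviance(2.0, 2.0)) < 1e-9)  # True
print(tweedie_deviance(0.0, 0.5) > 0)          # True: a non-zero prediction on a zero-sales day costs something
```

Note that the y = 0 case is perfectly well defined, which is exactly what makes this loss comfortable with long runs of zero sales.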
This objective function is also available for other gradient boosting models such as XGBoost or CatBoost, so it’s definitely worth trying if you’re dealing with intermittent values. How to forecast 28 days in advance? Making the most out of lag features As explained above, lag features are sales shifted by a given period of time. Thus, their values depend on where you stand in the forecasting horizon. The sales made on a particular day D can be considered as a 1-day lag if you’re predicting one day ahead, or as a 28-day lag if you’re predicting 28 days ahead. The following diagram illustrates this point: This concept is important for understanding what features will be available at prediction time. Here, we are on day D and we would like to forecast sales for the next 28 days. If we want to use the same model — and thus the same features — to make predictions for the whole forecasting horizon, we can only use lags that are available to predict all days between D+1 and D+28. This means that if we use the 1-day lag feature to train the model, that variable will also have to be filled for predictions at D+2, D+3, … and D+28, whereas it refers to dates in the future. Still, lags are probably the features with the biggest predictive power, so it’s important to find a way to make the most out of this information. We have considered 3 options to get around this problem; let’s see how they performed. Option 1: One model for all weeks The first option is the most obvious one. It consists of using the same model to make predictions for all weeks in the forecasting horizon. As we just explained, it comes with a huge constraint: only features available for predicting at D+28 can be used. Therefore, we have to get rid of all the information given by the 27 most recent lags. That is a shame, as the most recent lags are also the most informative ones, so we have considered another option. 
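Option 1's constraint can be made concrete with a tiny helper (hypothetical, for illustration only): predicting day D+h with a lag of k days requires k ≥ h, so a single model covering the whole 28-day horizon can only use lags of at least 28 days.

```python
def usable_lags(candidate_lags, max_horizon=28):
    """Keep only the lags that will still be observable when predicting
    every day up to `max_horizon` days ahead with a single model."""
    return [k for k in candidate_lags if k >= max_horizon]

print(usable_lags([1, 7, 14, 28, 35, 42]))  # [28, 35, 42]
```

Everything shorter than 28 days — including the highly informative 1-day and 1-week lags — is discarded, which is precisely the information loss the next two options try to recover.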
Option 2: Weekly models This alternative consists of training a different LightGBM model for each week. On the diagram above, every model is learning from the most recent possible lags with respect to the constraint imposed by its prediction horizon. Following the same logic as the previous option, it means that each model can leverage all lags except those that are newer than the farthest day to predict. More precisely: Model 1 makes forecasts for days 1–7, relying on all lags except the 6 most recent ones. Model 2 makes forecasts for days 8–14, relying on all lags except the 13 most recent ones. Model 3 makes forecasts for days 15–21, relying on all lags except the 20 most recent ones. Model 4 makes forecasts for days 22–28, relying on all lags except the 27 most recent ones just like in option 1. This method allowed us to better capitalize on lag information for the first 3 weeks and thus improved our solution’s forecast accuracy. It was worth it because it was a Kaggle competition, but for an industrialized project, questions of complexity, maintenance and interpretability should also come into consideration. Indeed, this option could be computationally expensive and if we are aiming at a country-scale rollout, it would require maintaining hundreds of models in production. In that case, it would be necessary to evaluate if the performance increment is large enough to justify this more complex implementation. Option 3: Recursive modeling The last option also uses weekly models, but this time in a recursive way. Recursive modeling means that predictions generated for a given week will be used as lag features for the following weeks. This happens sequentially: we first make forecasts for the first week by using all lags except the 6 most recent ones. Then, we predict week 2 by using our previous predictions as 1-week lags, instead of excluding more lags like in option 2. 
By repeating the same process, we always get recent lags available, even for weeks 3 and 4, which allows us to leverage this information to train the models. This method is worth testing, but keep in mind that it is quite unstable as errors spread from week to week. If the first-week model makes significant errors, these errors will be taken as the truth by the next model, which will then inevitably perform poorly, and so on. That’s why we decided to stick with option 2, which seems more reliable. Ensuring model robustness with an appropriate cross-validation Why cross-validation is critical for time series In any machine learning project, adopting an appropriate cross-validation strategy is critical to correctly simulate out-of-sample accuracy, thoroughly select hyper-parameters and avoid over-fitting. When it comes to forecasting, this has to be done carefully because there is a temporal dependency between observations that must be preserved. In other words, we want to prevent the model from looking into the future when we train it. The validation period during which the model is tested also has a greater importance when dealing with time series. Model performance and the optimal set of hyper-parameters can vary a lot depending on the period over which the model is trained and tested. Therefore, our objective is to find which parameters are the most likely to maximize performance not over a random period, but over the period that we want to forecast, i.e. the next 4 weeks. Adapting the validation process to the problem at hand To achieve that goal, we have selected 5 validation sets that were relevant to the prediction period. The diagram below shows how they are distributed over time. For each cross-validation fold, the model is trained with various combinations of parameters on the training set and evaluated on the validation set using the root mean squared error. 
Folds 1, 2 and 3 aim at identifying parameters that would have maximized performance over recent periods, basically over the last 3 months. The problem is that these 3 months might have different specificities than the upcoming period that we are aiming to forecast. For example, let’s imagine that stores launched a huge promotional season over the last few months, and that it just stopped today. These promotions would probably impact the model’s behavior, but it would be risky to rely only on these recent periods to tune it because this is not representative of what is going to happen next. To mitigate this risk, we have also included folds 4 and 5, which correspond to the forecasting period respectively shifted by 1 and 2 years. These periods are likely to be similar because the problem has a strong yearly seasonality, which is often true in retail. In case we had a different periodicity, we could choose any cross-validation strategy that makes more business sense. In the end, we have selected the combination of hyper-parameters with the lowest error over the 5 folds to train the final model. Results The different techniques mentioned above allowed us to reach a 0.59 weighted RMSSE — the metric used on Kaggle — which is equivalent to a weighted forecast accuracy of 82.8%. The chart below sums up the incremental performance generated by each step: These figures are indicative: the incremental accuracy also depends on the order in which each step is implemented. Key takeaways We have learned a lot from this challenge thanks to participants’ shared insights and we hope it gave you food for thought as well. Here are our key takeaways: Work on a small but representative subset of data to iterate quickly. Be super careful about data leakage in the feature engineering process: make sure that all the features you compute will be available at prediction time. 
Select a model architecture that allows you to leverage lags as much as possible, but also keep in mind complexity considerations if you plan to go to production. Set up a cross-validation strategy adapted to your business problem to correctly evaluate your experiments’ performance. Thanks a lot for reading up to now and don’t hesitate to reach out if you have any comment on the topic! You can visit our blog here to learn more about our machine learning projects.
https://medium.com/artefact-engineering-and-data-science/sales-forecasting-in-retail-what-we-learned-from-the-m5-competition-445c5911e2f6
['Maxime Lutel']
2021-02-03 13:09:43.608000+00:00
['Retail', 'Data Science', 'Forecasting', 'Lightgbm', 'Machine Learning']
Two States of the Union
The State of the Union is one of the few times in any given year when most people hear a public speech, much less read think pieces and rhetorical criticism about a speech. The contemporary “state of the union” address is a century old—meaning the presidential tradition of delivering what might be an epideictic (praise and blame) speech is a key part of what we’ve come to expect from the office. When I listened to this year’s speech, I was struck by the emotional whipsaw of it. The highest possible highs trading off with the lowest lows. This seemed quite different from past addresses I have written about. The general format of the State of the Union address involves the president being introduced by the Speaker of the House (Trump chose to launch right in this year—procedural style be gone), followed by a list of highlights and something like the phrase “the state of our union is strong.” This phrase is a cornerstone of any of these addresses, much like any Spider-Man film will include someone saying “with great power comes great responsibility.” The middle of the speech is usually a boring list of policy priorities. My prior research suggests that the middle of the speech is intentionally boring to lead into a positive ending. This year’s State of the Union, however, violated my expectations in some surprising ways (e.g., the reproductive rights section of the speech). Using some fairly basic methods in sentiment analysis, I’ve created a rhetorical critique of the speech. Sentiment analysis is a form of emotion A.I. that uses text analysis and language processing to assess the affect of subjective information. It can be used with any kind of text, and is often used in marketing and customer service in relation to survey responses and online posts, but it provides an interesting look at the rhetoric and language used in speeches as well. The code I used to build this project, and the base CSV needed to reproduce it, are available on Github. 
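Sentiment analysis at its most basic reduces to counting lexicon hits; here is a toy sketch of the idea in Python (a generic illustration only — the author's actual code and lexicon live in the linked GitHub repository and are not reproduced here, and the tiny word lists below are invented for the example):

```python
# A toy sentiment scorer: score = (positive hits - negative hits) / word count.
# The two word sets below are purely illustrative, not a real sentiment lexicon.
POSITIVE = {"great", "strong", "hope", "proud", "best"}
NEGATIVE = {"crisis", "fail", "worst", "threat", "decline"}

def sentiment(text):
    """Return a score in [-1, 1]: positive above 0, negative below."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("The state of our union is strong."))  # positive score
print(sentiment("We face a crisis and a threat."))     # negative score
```

Real analyses use richer lexicons with graded weights (and handle negation), but scoring a speech sentence by sentence with something like this is what produces the "emotional whipsaw" curve described above.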
I am not interested in facts so much as style here. Facts are useful and important, but they miss the real power of the epideictic address to assign praise and blame. Line by line debate is ineffective in these cases because affect always wins out over argument in speeches. It just doesn’t matter how fast or how strong the facts are.
https://medium.com/s/story/two-states-of-the-union-3b58814f096f
['Dan Faltesek']
2019-02-10 15:41:33.709000+00:00
['Rhetoric', 'State Of The Union', 'Data Science']
Hello, (Machine Learning) World!
In a former post, I showed how to set up some basic tools for Machine Learning. Now it's time for our Hello, World!. In any language or platform, it's quite common to start learning with a simple program that prints out "Hello, World!". The Machine Learning — especially Neural Networks — equivalent to that is the MNIST, which is a dataset with annotated data on handwritten digits. An example of a number "5" in the MNIST dataset. If you've never ever trained a model or written any code, you might wanna start with that. And this is what this post is about. If you already know how to train a model, there will soon be a post here on how to import it to Core ML and use your model in an iOS app. As I said in the previous post, many iOS developers have started thinking about Machine Learning since the launch of Core ML and that's great! Nonetheless, it is sometimes a little overwhelming to come into a completely new world with so many new things appearing every day. It's hard to decide where to start. If you follow along, by the end of this post you will have trained your own model for handwritten digit recognition. What we will do: Review the necessary tools Get the dataset Build a very simple model Train it on our data Test it and check how well it performs Think about what could make it better Our toolbox 🛠🗜⚙️ If you need help installing the tools, you can take a look at this post. Basically, we will use Python 2 as the programming language and we will also use keras in order to get the dataset and build the model — it will also help with some preprocessing. In order to make things a little more organised, I'm also using virtualenv and virtualenvwrapper to keep track of the versions and to avoid conflicts with the rest of the system; and I'm using jupyter to write the code together with the text explaining each step. Those last tools are optional though. Getting the data Keras makes it very easy for us to get the MNIST data for this first experiment. 
Once you have keras installed, all you need to do is import the mnist dataset helper and load it: Preprocessing Then, we will need some preprocessing in order to pass our data to the model. Basically, what we will do is change the shape of X_train and X_test; change their type; and normalize them. We will also change y_train and y_test a bit so they can be encoded as one-hot vectors (i.e. an array with 10 positions where each position represents a digit from 0–9): Creating the model Our first model will be very simple, so all we need is to import Sequential from keras.models and both Dense and Flatten from keras.layers. Once we have them we can create and compile our model: Training and validating Now that we have our model compiled, it's time for us to train it. We will be using a batch size of 32 and we will train for 10 epochs. X_test and y_test will be used as our validation data: That very simple model can achieve 92% accuracy. That's not bad for a first time, is it? But we can definitely do better. Where can we go from here? This was a very simple model and it's intended to make you less afraid of training a model and actually having a hands-on experience with machine learning. There are many directions you can take from this tutorial. You can decide to create a better model by using a convolutional neural network, which is commonly used for problems that involve image processing and computer vision. You can also decide that 92% is fine for an MVP and try to convert your model into an .mlmodel file and use it in your iOS app. If you choose to do that, you will find some very interesting challenges on how to prepare the data in iOS for your model and how to interpret the results. That's it for now! I hope you had a good time creating and training your first neural network! I will soon write a post on how to import your model to Xcode using coremltools. Until then, if you have any questions, comments, suggestions, etc. 
you can comment here below or reach me at twitter.
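The preprocessing described above — scaling pixel values into 0–1 and one-hot encoding the labels — looks roughly like this in dependency-free Python (a sketch of the idea only; the post itself does it on numpy arrays, with keras utilities handling the one-hot step):

```python
def normalize(pixels):
    """Scale 0-255 grayscale values into the 0.0-1.0 range."""
    return [p / 255.0 for p in pixels]

def one_hot(digit, num_classes=10):
    """Encode a digit 0-9 as a one-hot vector of length 10."""
    vec = [0] * num_classes
    vec[digit] = 1
    return vec

# A fake 2x2 "image" stands in for a 28x28 MNIST digit here.
image = [0, 128, 255, 64]
print(normalize(image))  # [0.0, 0.50196..., 1.0, 0.25098...]
print(one_hot(5))        # [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```

Normalization keeps the inputs in a range the network trains well on, and the one-hot labels match the 10-unit output layer the model ends with.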
https://medium.com/cocoaacademymag/hello-machine-learning-world-28ed7167dee6
['Emannuel Carvalho']
2018-01-08 20:20:15.690000+00:00
['Coreml', 'Machine Learning', 'iOS', 'Beginner', 'Hello World']
About Eureka Gear
Eureka Gear recommends the best sports, fitness, health, outdoor and travel gear at www.eurekagear.com. We are passionate about promoting sports, fitness, the outdoors, adventure and travel, by making it quick and easy for our readers to find the best gear. Eureka Gear is just starting its journey. Take the first steps with us. Gear Guides Eureka Gear might be new, but we already have a number of gear guides, with many more on the way, and we list the best Amazon Prime Day discounts, Black Friday sales and Cyber Monday deals each year. Follow Eureka Gear We’d love to hear how Eureka Gear helped you experience a #eurekamoment.
https://medium.com/@eurekagear/about-eureka-gear-26f152a8107f
['Eureka Gear']
2020-12-15 04:14:35.126000+00:00
['Health', 'Gear Review', 'Fitness Equipment', 'Fitness', 'Outdoors']
CI/CD Pipeline with Cloud Build and Composer (with Terraform)
Hey Sometimes I use some Google tutorial to do some training. But I like to automate (yes, I know you know!). So, let's talk about CI/CD for data processing in GCP. I'm going to use this tutorial: In summary: This tutorial describes how to set up a continuous integration/continuous deployment (CI/CD) pipeline for processing data by implementing CI/CD methods with managed products on Google Cloud. Data scientists and analysts can adapt the methodologies from CI/CD practices to help to ensure high quality, maintainability, and adaptability of the data processes and workflows. Looks good, huh? We will use 5 things here: Terraform — The tutorial is hands-on, but I will "transpose" it to Terraform. 🤓 Cloud Build — Similar to Jenkins, where we will create the pipelines, triggers, … Cloud Composer — It's a managed Apache Airflow in GCP. We will use it to define the steps of the workflow, like starting the data processing, testing and checking results. Dataflow to run a job in Apache Beam as a sample. There's also Cloud Source Repositories, which is the "GitHub" from Google (but reeeeeeaaaly far away from GitHub). All the code can be found here: CI/CD Repository First thing, we need to have a user with "Owner" permission in some folder (I will not create this at root level; there's a way to create it in some specific folder. And I know Owner is not the best way to grant permission, but this is for test purposes). You can get the list of folders in GCP with this command: gcloud resource-manager folders list --organization=<Your Org ID> Cool! Now update the terraform.tfvars file (I'm using Terraform version 0.13.6) in the bootstrap folder. The file is really simple! From here, please note that this is a PAID test. Some resources will charge you, so remember to delete the project when finished. :) Run the Terraform steps: terraform init terraform plan (Good to review, right?) terraform apply You should see something like this: Plan: 54 to add, 0 to change, 0 to destroy. 
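Among the resources in that plan are the two Cloud Build triggers mentioned below. In Terraform, such a trigger looks roughly like this (an illustrative sketch only — the variable names and trigger details here are hypothetical and are not copied from the repository):

```hcl
resource "google_cloudbuild_trigger" "plan" {
  project     = var.cloudbuild_project_id # hypothetical variable name
  description = "Run the plan pipeline on every non-master branch"

  trigger_template {
    repo_name    = "gcp-cicd"  # the CSR repository cloned below
    branch_name  = "^master$"
    invert_regex = true        # fire on every branch EXCEPT master
  }

  filename = "cloudbuild.yaml" # build steps are defined in the repo itself
}
```

The `invert_regex` trick is one common way to express "everything except master", which matches the plan-on-any-branch behavior we will test in a moment.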
The apply process should take 30 minutes. Just go get some coffee. The output should return this: Take note of the Cloudbuild project and csr_repo.id. You should be ready to go! If you go to Cloud Build, you will see 2 triggers: Composer is now created also: Let's test our plan trigger. So, just to understand: any commit to a branch that is not "master" will execute the plan trigger. Let's see. First let's clone the CSR repository (go outside of our code that you cloned before): gcloud source repos clone gcp-cicd --project=<CloudBuild Project ID> Now change to a different branch (I will use plan) and copy everything inside source-code from our previous repo into this one (change the command according to your actual path). git checkout -b plan cp -rf ../gcp-cicd-terraform/source-code/* . git add -A git commit -m "First Commit" git push --set-upstream origin plan If you check your Cloud Build page, you will see the plan started: If you open it you can see all steps and information: In the Airflow UI, you can see DAG information: And in Dataflow the Job Graph: So, what happened? This: A developer commits code changes to the Cloud Source Repositories. Code changes trigger a test build in Cloud Build. Cloud Build builds the self-executing JAR file and deploys it to the test JAR bucket on Cloud Storage. Cloud Build deploys the test files to the test-file buckets on Cloud Storage. Cloud Build sets the variable in Cloud Composer to reference the newly deployed JAR file. Cloud Build tests the data-processing workflow Directed Acyclic Graph (DAG) and deploys it to the Cloud Composer bucket on Cloud Storage. The workflow DAG file is deployed to Cloud Composer. Cloud Build triggers the newly deployed data-processing workflow to run. Cool, our process is now working in plan/test!! Now we can just apply to the prod pipeline! For this article, I will do a manual deployment to production by running the Cloud Build production deployment build. 
The production deployment build follows these steps: Copy the WordCount JAR file from the test bucket to the production bucket. Set the Cloud Composer variables for the production workflow to point to the newly promoted JAR file. Deploy the production workflow DAG definition on the Cloud Composer environment and run the workflow. There are ways to automate these steps with Cloud Functions or even during the plan pipeline, but the idea here is just to understand a simple way. So, first thing, we need to get the JAR filename to update our trigger. Let's use the gcloud command: gcloud composer environments run <COMPOSER_ENV_NAME> \ --location <COMPOSER_REGION> variables -- \ --get dataflow_jar_file_test 2>&1 | grep -i '.jar' Now that we have this, let's update the Apply trigger with this value. Go to Cloud Build and edit the apply trigger (change the "_DATAFLOW_JAR_FILE_LATEST" to the result from before): Now let's run the trigger (just run): Let's check: Now we have the DAG deployed to Composer. You can see it if you go to the Airflow UI: Let's just run the job. In the Airflow UI, just click on "Trigger Dag" Now you can go to Dataflow and check the job: And that's it! You now have a CI/CD pipeline that you can use for data processing, or any other kind of process. To destroy the resources, simple: just go inside the bootstrap folder and run: terraform destroy I hope you like this! As always, feel free to reach out, provide feedback, anything!! Stay safe, folks!
https://medium.com/marcelo-marques/ci-cd-pipeline-with-cloud-build-and-composer-with-terraform-379a05a4ca09
['Marcelo Marques']
2021-04-25 17:30:28.103000+00:00
['Gcp', 'Technology', 'Ci Cd Pipeline', 'Google Cloud Platform', 'Terraform']
Game Design with Singleton Pattern
Game Design with Singleton Pattern There can only be one, so make it count This is the 6th entry in the Game Design with Programming Patterns series looking at the game design side of programming. Try the example experiments in the interactive supplement! Photo by Charles Deluvio on Unsplash What Is The Pattern? Singleton pattern objects have two properties: they are limited to a single instance and provide global access to their methods and data. Once created, any subsequent reference to a Singleton object will be that one object — it never creates more than one instance. In addition, any other part of the program can use the object because it is globally accessible. The Singleton's openness lets it provide data and services throughout the rest of the program. As a consequence, hastily applied Singletons create more problems than they solve, leading to Singletons that have more information or behavior than reasonable. Avoid creating unwanted problems by thinking carefully about when singular uniqueness and access are needed. How I Used It The singleton will track the clicks inside and outside of the scene in the UI at the bottom left. Clicking on cubes in this scene makes them jump. The nature of the Singleton made the most sense to me as a statistic tracker. When the singleton scene is entered, a fairly simple stat-tracking singleton instance is created. It counts the mouse clicks both inside and outside of the scene as well as the total seconds the application has been running. The singleton can be told to reset the click count by the button at the bottom and will resume tracking after that reset. Once the singleton scene has been loaded, the object will continue tracking the clicks and time from anywhere. Storing these stats in the singleton keeps them persistent and global. Design Impressions Creating a singleton in Unity is easy. 
Deciding when to create one is hard. A Singleton can be the best place to provide dependable utility and information to the rest of our game. Narrowing a Singleton’s focus to metadata like clicks or time counting lets it operate outside of the game space while still connecting to objects within. The Singleton can hold references to services or methods related to itself, and should try to keep other objects from relying too heavily on it. This serves as a basic guideline to avoid bad Singleton scenarios where they become too entangled with other objects and systems. Going Forward Singleton objects should be used to address consistent and unchanging demands over the progression of the game, and they should reliably serve those demands no matter what. They are most suited for systems that are independent of specific games. Systems like a game’s rendering, debug, or stat tracking systems all inform how a good Singleton is constructed, that is, by not containing any critical logical functions. They should instead hold complementary processes and information. Holding references to other objects, handling player inputs, or containing “universal” information about a game are all good candidates for Singleton objects. Limited to that external domain, the Singleton pattern finds its place in the implementation details and not in the actual game design. Previous: Prototype Next: State Code: https://github.com/jasonzli/game-programming-study Reference: Game Programming Patterns, Nystrom, Robert 2014 http://gameprogrammingpatterns.com/singleton.html
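The pattern's two defining properties — a single instance and global access — hold outside of Unity as well. Here is a minimal Python sketch of the idea (an illustration of the general pattern, not the article's Unity/C# stat tracker; overriding `__new__` is just one common Python idiom for it):

```python
class StatTracker:
    """A minimal singleton: every 'construction' returns the same instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.clicks = 0  # state lives on the single instance
        return cls._instance

    def click(self):
        self.clicks += 1

    def reset(self):
        self.clicks = 0

a = StatTracker()
b = StatTracker()  # no second instance is created
a.click()
print(a is b)      # True
print(b.clicks)    # 1 -- both names point at the same tracker
```

Because any code can call `StatTracker()` and reach the same object, the click count is both persistent and globally accessible — exactly the properties the article leans on, and exactly why a Singleton's scope is worth keeping narrow.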
https://medium.com/dev-genius/game-design-with-singleton-pattern-21685f7a43bb
['Jason Zhen Li']
2020-10-06 00:43:38.054000+00:00
['Game Design', 'Game Development', 'Indiedev', 'Software Development', 'Game Programming']
Do You Want to Submit An Article?
Submissions to this publication, Writing from the Catholic Abbey to the Secular World, are welcome. The requirement is that writing must be respectful to the Catholic community reading it. Writers do not have to be Catholic, and the point of view can be challenging to the Catholic position as well. Stories must focus on topics of concern to Catholics that encourage dialogue and help all readers deepen their understanding of their faith and their role in the world. The point of this publication is to publish material that mainline Catholic publications do not. Therefore, the key restriction is against saccharine pieces (e.g., "Every morning I thank Jesus for making the sun even though I am dying of cancer."). Challenging, relevant pieces such as A Catholic Take on Conversion Therapy are welcome. Critical pieces such as Twitter Is No Place for Catholics are also welcome. A good knowledge of Catholic teachings and of the writings of saints and Catholic writers is a plus.

I am looking for:

Material that shows you have a genuine interest in writing to the Catholic community, even if you are not Catholic or do not agree with Catholic teaching. Take time to research what Catholics believe and show a genuine respect for your readers.

Stories that challenge Catholic teaching on controversial topics in a way that encourages people to investigate why they believe what they do are most welcome. "Should a Catholic Bake a Cake for a Gay Wedding, in Light of Jesus' Teaching about Following the Law in the Sermon on the Mount?" Inspired by the "Engaging the Powers" series by Walter Wink. "My Confirmation class was so boring I never returned to Church and I left Catholicism immediately after the ceremony." (That was actually spoken to me by a former Catholic who thankfully did not attend my Confirmation program.)
Stories that inspire Catholics and others are also welcome: "When I was ready to give up, I learned that God was not in the good times, but was teaching me through the bad times," would be a good topic.

What is not accepted:

Nothing about the occult, pagan, or spiritualist practices: no articles on tarot cards, witchcraft, astrology, etc.

Whining pieces or articles that contain false or unverified information cannot be accepted, e.g., "Catholic people must be stupid to believe what they do, and they make my life miserable."

Stories designed to demoralize or attack the Catholic community, or designed to get them to reject Catholic teaching, also cannot be accepted.

Finally, all stories must conform to Medium's terms of service.

When writing for Catholics, how you phrase your position is key. As the old joke goes: "The monk who smoked while he prayed was castigated; the monk who prayed while he smoked was honored."

Payment is through the Medium system.
https://medium.com/writings-from-the-catholic-abbey-to-the-secular/do-you-want-to-submit-an-article-aa1a3fe44fb2
['Rj Carr']
2019-11-08 01:36:02.105000+00:00
['Catholic']
IMDB Television Show Data Analysis: Part 2
TV Show Analysis from IMDB. Which genres are popular over time? Is there ageism in TV shows? Is there gender bias in the rating of shows? Are there more new shows now than before? To view the story in its original form, click here.

This post is Part 2 of the IMDB TV show analysis. Part 1 of the analysis, which deals with overrated/underrated TV shows, consistency of TV shows, and whether shows were canceled early or went too far, can be found here. In this post, I primarily work on answering 4 questions: which genres have become more or less popular over time, whether the proportion of new TV shows has changed over time, how lead age has changed over time, and whether there is gender bias in voting on shows, more specifically whether shows targeted towards females are voted higher or lower than those targeted towards males.

Introduction

I begin by looking at some exploratory plots for the number of shows over time. In Fig 1 below, it can be seen that the number of shows grew year over year until 2017. Interestingly, after 2017 there has been a small decrease in the number of shows. Until around 1990 the growth trend was roughly linear, but afterwards the number of shows grew at an increasing rate. This is not so surprising given the increased accessibility of television as well as the rise of streaming services like Netflix, Amazon Prime, Hulu, etc.

Shows in which genres have increased or decreased over time?

Here, I wanted to look at which genres have changed in popularity, in terms of their proportion of total TV shows in a given period. In this dataset, each TV show title can have up to 3 genres. In Fig 2a below, the arrow plot shows how genres have changed prior to and post 2000. Reality-TV increased from around 1% of total shows before 2000 to over 12% after 2000. Documentary and Comedy shows increased as well, although Comedy was high to begin with. The Family, Drama, and Music genres decreased in proportion.
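The genre-share numbers above can be reproduced with a few lines of pandas. This is a sketch on a toy table; the real IMDB column names and genre encoding may differ:

```python
import pandas as pd

# Toy stand-in for the IMDB titles table (column names are illustrative).
shows = pd.DataFrame({
    "title": ["A", "B", "C", "D"],
    "start_year": [1995, 1998, 2005, 2012],
    "genres": ["Comedy,Drama", "Drama", "Reality-TV,Comedy", "Reality-TV"],
})
shows["era"] = shows["start_year"].lt(2000).map({True: "pre-2000", False: "post-2000"})

# One row per (show, genre): each title can carry up to 3 genres.
long = shows.assign(genre=shows["genres"].str.split(",")).explode("genre")

# Genre share = titles carrying the genre / total titles in that era.
counts = long.groupby(["era", "genre"]).size().unstack(fill_value=0)
share = counts.div(shows.groupby("era").size(), axis=0)
print(share.round(2))
```

Because a title can carry several genres, the shares in a row can sum to more than 1; each cell is the fraction of that era's titles tagged with the genre.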
In Fig 2b below, I compare only the largest show decades in the pre- and post-2000 groups, i.e., the decades of 1990 and 2010. We see similar patterns for Reality-TV, Comedy, Family, Drama, and Music as in Fig 2a. However, in this comparison, we can see Documentary also going down. The Animation and Game-Show genres also decreased as a proportion of total shows.

Has the proportion of new shows changed over time?

Next, I look at whether more new shows are coming up over time or whether older shows (that started in previous years) are dominating in numbers. For this, we can use either TV show data or TV episode data. TV show data weighs each show equally, not according to its number of episodes, while TV episode data captures more of what is on television in terms of time. A new show or new episode is defined as a TV show in its first year, or an episode that airs in the first year of the show's inception. So, for instance, for Breaking Bad, 2008 is the year of inception and hence all episodes appearing in 2008 are new episodes. This also happened to be Season 1 in this case, though this is not always guaranteed: the same season can be spread across multiple years, and two seasons can air in the same year. For episodes airing in 2009, Breaking Bad is not counted as a new show anymore, and its episodes are also not new episodes.

In Fig 3 below, we can see the proportion of new shows and episodes over time. The new show proportion is more stable and for the most part has stayed in the range of 45–55%. New episodes, on the other hand, have varied a great amount. Except for the period between the early 1980s and early 1990s, the new episode proportion is much lower than the new show proportion. This indicates that, outside that period, first-year shows tended to have fewer episodes than non-first-year shows. To confirm this, in Fig 4 I plot mean episodes over time, grouping shows into first-year and non-first-year shows.
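The "new show" and "new episode" definitions above can be sketched directly in pandas (toy data; in the real IMDB dumps these fields come from joining the episode and title tables, and the column names here are illustrative):

```python
import pandas as pd

# Toy episode table: show, the show's inception year, and each episode's air year.
episodes = pd.DataFrame({
    "show": ["BB", "BB", "BB", "XF", "XF"],
    "show_start_year": [2008, 2008, 2008, 1993, 1993],
    "air_year": [2008, 2008, 2009, 1994, 1994],
})

# An episode is "new" if it airs in its show's inception year.
episodes["is_new"] = episodes["air_year"] == episodes["show_start_year"]
new_episode_share = episodes.groupby("air_year")["is_new"].mean()

# A show counts as "new" only in its first year; one row per show per year.
shows = episodes.drop_duplicates(["show", "air_year"])
new_show_share = shows.groupby("air_year")["is_new"].mean()
print(new_episode_share)
print(new_show_share)
```

Under this definition, a show's episodes airing in its inception year are all new, while episodes (and the show itself) in any later year are not.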
Fig 4 shows that first-year shows do indeed have a lower mean number of episodes than non-first-year shows, except in the period noted above. Generally, one would expect first-year shows to have fewer episodes on average, because that is when a lot of shows are canceled: if a show is not good, it is often canceled early, which brings down the average for first-year shows. It can also be due to TV mini-series, as they are only 1 season long and tend to have fewer episodes than regular TV series.

Finally, another way of looking at how old TV shows are at a particular time is to plot the mean difference between episode year and show start year. I use this difference to define a season number, since the usual season numbering can change during a year. Again, this can be visualized using both episode and TV show data, as shown in Fig 5 below. The average episode age has been on an almost steadily increasing trend, and in 2019 the average episode is in its 6th season (i.e., 5 years old). The average TV show, on the other hand, peaked around 1990 and, most recently in 2019, is in its 4th season (i.e., 3 years old). The reason for this discrepancy follows from Fig 4: since TV shows in their first season tend to have fewer episodes on average, longer seasons are over-represented in the episode data. In a sense, TV show data only counts one episode per TV show per year.

Has the lead age changed over time?

In this section, I look at whether lead actor/actress age has changed over time. We often hear or read about ageism in Hollywood, and it has previously been examined for movies using the IMDB dataset here. In Fig 6 below, we can see some evidence of ageism in that the median lead's age fell continuously from the mid 1980s to the late 2000s. Normally, as a TV show becomes older by one year, the lead's age also increases, so in theory, with no new shows, the median lead's age should increase.
However, in Fig 3 above, we noticed that around 45–50% of shows in any given year are new. Since the late 2000s, we see an increase in median age, though we need to add a caveat about actor reuse, as the same actors are often part of multiple shows over the years. So, these trends need to be observed over a longer duration of time. Next, in Fig 7, I compare the difference between lead actor and lead actress age over time. There is a fairly constant gap of 5–6 years between median lead actor and actress age over the years. In the 2010s, the gap seems to be narrowing a little.

Is there gender bias in IMDB ratings?

Before looking at the evidence for gender bias in IMDB ratings, I first look at the number of TV shows and episodes and their share by lead's gender in Fig 8 below. The year axis for TV shows is the start year of the show, while for episodes it is the airing year of the episode. Based on both of these measures, it can be seen that the proportion of TV shows with a female lead has been increasing since the 1960s. It is interesting that based on the episode data, shows with female leads are close to 50%, but based on TV show data, the share is around 40%. This suggests that either shows with female leads tend to have more episodes, or there is a discrepancy between the leads defined for a TV show and those in its respective episodes; the latter can happen for shows where each episode has different leads. In Fig 9 below, I plot the average number of episodes per TV show, grouped by lead gender over time (TV show start year). We can see that shows with female leads do indeed tend to have a higher number of episodes.

Finally, I look at whether there is evidence for gender bias in IMDB ratings. Previously, it has been argued that men sabotage ratings for shows aimed at women, and that TV shows targeted towards a female audience tended to have lower ratings on IMDB.
Here, I control for a host of other variables, like genres, number of seasons, number of episodes, number of votes, and TV show start year, and look for evidence of this bias. To identify shows targeted towards females or males, I created a proxy variable for female-oriented and male-oriented shows by looking at the ordering of actors, which the IMDB dataset provides. A female-oriented show was classified as any show with a female lead actor and either both of the top 2 actors female or most of the top 4 actors (i.e., >= 3) female. Similarly, a male-oriented show was classified by applying the same rule with male actors. TV shows that did not fit either classification were classified as gender neutral. This rule was tuned to replicate the sets of female-oriented shows identified here. Although this rule is not perfect, it removes a lot of false positives (shows incorrectly classified as female-oriented just because the lead actor is female).

I ran a linear regression model on ratings, controlling for a bunch of variables in addition to whether the show was gender neutral, female oriented, or male oriented. Restricting to TV shows with at least 1,000 votes, shows targeted towards females are rated 0.19 points lower than male-oriented shows, and neutral shows are 0.11 points lower. Looking at TV shows with at least 5,000 votes, female-oriented shows are rated 0.26 points lower, while neutral shows are 0.16 points lower. Finally, for shows with at least 20,000 votes, female-oriented shows are rated 0.34 points lower and neutral shows are 0.17 points lower. All of these effects are statistically significant at the 99% confidence level.

Fig 10 is an interactive box plot (with points denoting TV shows) grouped by the gender classification discussed above. This graph only includes shows with at least 5,000 votes and a median episode vote count greater than 250.
It can be seen that male-oriented shows are on average rated higher than neutral and female-oriented shows, confirming some of the gender bias in ratings for female-oriented shows. Note: for the interactive plot above, hover over the points to see additional information (or tap if on mobile). Box selection allows zooming in, and double-clicking resets the zoom to default.

This concludes Part 2 of the IMDB TV show analysis. The code used for this analysis (and Part 1) can be found here. Part 1 of the analysis, which deals with overrated/underrated TV shows, consistency of TV shows, and whether shows were canceled early or went too far, can be found here.
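The gender-bias regression described above can be sketched as follows. This is not the author's actual model or data; it runs on synthetic data with a simulated 0.3-point rating penalty for female-oriented shows, just to show the mechanics of controlling for other variables:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the controls described in the text.
df = pd.DataFrame({
    "log_votes": rng.normal(8, 1, n),
    "num_seasons": rng.integers(1, 10, n),
    "start_year": rng.integers(1990, 2020, n),
    "orientation": rng.choice(["male", "female", "neutral"], n),
})
# Simulate ratings with a -0.3 point penalty for female-oriented shows.
df["rating"] = (
    7.0
    + 0.1 * df["log_votes"]
    - 0.3 * (df["orientation"] == "female")
    + rng.normal(0, 0.3, n)
)

# One-hot encode orientation, with male-oriented shows as the baseline.
X = pd.get_dummies(df.drop(columns="rating"), columns=["orientation"])
X = X.drop(columns="orientation_male")
model = LinearRegression().fit(X, df["rating"])
coef = dict(zip(X.columns, model.coef_))
print(coef["orientation_female"], coef["orientation_neutral"])
```

The fitted `orientation_female` coefficient recovers the simulated penalty, read relative to the male-oriented baseline, which is how the 0.19/0.26/0.34-point gaps above should be interpreted.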
https://towardsdatascience.com/imdb-television-show-data-analysis-part-2-39ebf47977ff
['Hitesh Sabnani']
2020-06-29 14:14:38.310000+00:00
['Data Science', 'TV Shows', 'TV Series', 'Data Analysis', 'Data Visualization']
Writing copy for an industry you know nothing about
Breaking into an industry you know nothing about can be unnerving. I know, because I've had to do it many times. But it does get easier over time. Here are some ways that have helped me write copy for new industries.

1. Remember that it's not your job to be the industry expert

As a copywriter, your line of expertise is selling. It's not your job to know more about the industry than your client does. And really, if that were the case, it would not bode well for the client. Understand that copy is the result of bridging your expertise in selling with your client's specialised knowledge of their respective industry. This doesn't mean you don't have to do your research. You still need to cross-reference what you know with what is working in that industry in terms of the approach and implementation of copy.

2. Cherish the opportunity because it makes you better

Yes, specialisation can be an asset. But diversification makes you a far superior copywriter. Why? Because it allows you to cross-pollinate knowledge from across industries. As you expose yourself to different verticals, you'll start to notice "holes" in the way they do things. True, established practices are there for a reason: they've been tried and tested, and found effective. But these practices can also act as blinders to new ways of doing things. If you can enrich your copy with external ideas, and make it work better, it will give you and your client an important competitive advantage.

3. Consult with your client

Your client has been in the industry for a while. They've tried things and had their fair share of hits and misses. Use that knowledge. Get your client to tell you about what they've tried, what has worked, what has failed, and what they've learned from these experiments. Most of the time, you'll find that the failures were due to minor missteps that can easily be fixed. Use that to your advantage. Propose the fix and then test again.
If you get a positive result, it's a fantastic ROI because the fix required minimal effort. And if it flops, no harm is done because the opportunity cost was minimal.

4. Build a swipe file

While it's not your job to be the industry expert, it's definitely your responsibility to know as much as possible about how copy works in said industry. Your first step in that direction is to find out what others are doing. Build a swipe file with copy from that industry. Sign up to email lists of competitors. Read their blogs. Keep detailed notes on anything you notice: CTAs (calls to action), copy length, build-up, sequence, format, pain points, selling points. Remember that successful copy is built on elements that already exist in the world. It's like making music. A great tune uses the exact same elements as a crappy tune. It's the assembly that makes the difference.

Do you have anything to add?

I've been doing this for almost two decades, but not a day goes by that I don't discover a new or better way of doing something. So even if you're just starting out, you might know something that I don't. If that's the case, I'd love to hear it. Seriously. Leave a comment and tell me.
https://medium.com/heycopywriter/writing-copy-for-an-industry-you-know-nothing-about-b2e0a319ec04
['Cedric Debono']
2021-04-01 09:19:26.588000+00:00
['Freelance', 'Freelancing', 'Copywriter', 'Clients', 'Copywriting']
Orange
when it struck we were forced inside stillness became us with nothing to busy us our homes were our diaries filled with resistance uncertainty then conviction we had to commit to being safe so we wrote lists of things needed we wanted our minds wandered to the past and one another we connected withdrew made decisions were silent in thought on the emptiness of corners we choose to clean or ignore with no structures to place the value of living I discovered my home was unsafe the daily forgiveness for wrong I had committed was trauma not love my hope for the world which was vacuumed was dance and my children bound to gratitude for nature joy in creativity and imagination was snug between rocks I listened to the ocean and let it in.
https://medium.com/@naomifolb/orange-fd94d7e81333
['Naomi Folb']
2020-12-18 19:57:44.793000+00:00
['Resilience', 'Mindfulness', 'Poetry', 'Narcissism', 'Creativity']
With Series Seed 2 Round, Climate Tech Leaders Advance the Future of Solar
This month Swift Solar is closing a Series Seed-2 financing round, with expected new investments totaling $9.6M. In this Seed-2 funding round, we’re excited to welcome a diverse group of investors who are leaders in their fields and—more importantly—compassionate people who share our mission. This group includes experienced founders like our lead investor Sid Sijbrandij (CEO/co-founder of GitLab and founder of OpenBook Ventures), dedicated climate tech and deep tech investors like Good Growth Capital, Safar Partners, Climate Capital, Jack Fuchs, and Sierra Peterson, and crypto and finance experts like James Fickel (co-lead), Jonathan Lin, Grant Hummer, and Vitalik Buterin (creator of Ethereum). Swift Solar is all about building a stronger foundation for the future of clean energy. Our new funding will help us assemble the best team to tackle one of the biggest challenges facing humanity today. There’s no time to waste.

A better solar panel

Our mission is to create a world where all energy is clean energy, and our approach is simple—build a better solar panel. Solar photovoltaic (PV) power has gone from very expensive to very cheap in the last 40 years, dropping from ~$10/kWh in 1975—corresponding to a monthly utility bill of over $8,000 for a typical U.S. household—to less than $0.10/kWh in 2020. Today solar is the most affordable source of electricity in many parts of the world. It’s also the most equitable and abundant energy source on Earth and one of our most important tools for mitigating climate change. But that’s not the end of the story. The lower the cost of solar energy, the more it can contribute to the climate fight. And the most direct way to make solar energy more affordable ($/kWh) is to make solar panels more affordable, more efficient, and easier to install ($/W). At Swift, we’re working on a new solar PV technology—metal halide perovskites—that could fundamentally outperform today’s silicon and thin-film technologies in many ways.
Perovskites use lower-cost and more-abundant raw materials (and less material overall), simpler and higher-throughput manufacturing equipment, and less energy during production. They can be made on ultra-lightweight and flexible plastic substrates, enabling solar panels with unparalleled power densities (W/kg) that can be rolled up like a tarp, installed like wallpaper, and even upgraded periodically. And they enable multijunction (tandem) cell structures that could reach efficiencies of over 40%, breaking through the ~30% barrier of current technology and providing more W per $ spent on installation. With this combination of higher efficiency, power density, and flexibility—all in an affordable package—Swift Solar is creating new opportunities for solar power. Our technology could keep drones and airships in the sky for months at a time, recharge electric cars, trucks, and buses, power homes, and even run portable irrigation pumps in rural India. Swift’s first products are tailor-made for powering stratospheric flight. Our ultra-lightweight solar product—recently recognized by a joint R&D 100 Award with NREL—increases payload capacity and flight endurance for high-altitude pseudo-satellites (HAPS) and UAV platforms. Looking forward 5–10 years, our perovskite tandems should be more efficient, more affordable, and more scalable than any PV product on the market. If we’re successful, Swift will ultimately provide the most affordable source of zero-carbon electricity in the world.

Solar for the next generation

To be clear, even as a founder working on a new PV technology, I believe we should be deploying silicon solar panels far and wide today. Climate change leaves no time for delay—no time to wait for a new technology. But the climate challenge won’t be done tomorrow, and I believe that someday the baton in the solar technology race will pass from silicon to perovskites. Our goal is to allow that handoff to happen as soon as possible.
Perovskite technology represents new possibilities for the next generation. Sunlight is the most abundant power source in our solar system, and PV is the most direct way to convert light into useful energy. If this PV technology reaches its full potential, it could serve humanity for thousands of years to come—no matter what planet or space station we end up on. It makes no sense to say “good enough” and stop building today. If you want to help build the future of solar power, I invite you to join us. We’re always looking for growth-minded people who value hard work, curiosity, and inclusiveness. You can apply to join the Swift team at swiftsolar.com/careers.

Joel Jean
CEO & Co-founder, Swift Solar

Check out the full press release and follow our progress on LinkedIn and Twitter!
https://medium.com/swift-solar/with-series-seed-2-round-climate-tech-leaders-advance-the-future-of-solar-7757df56d34c
['Joel Jean']
2020-12-31 05:29:18.375000+00:00
['Photovoltaic', 'Perovskite', 'Fundraising', 'Startup', 'Solar Energy']
Why the Recent ELLE UK Cover Sends a Powerful Message
A few days ago, ELLE UK debuted its January 2021 issue, and it was certainly not what most people expected. The cover featured South Sudanese model Aweng Ade-chuol locked in a kiss with her beautiful wife Alexa while wearing matching polka dot outfits. They say a picture is worth a thousand words, and this was perhaps the best way to show the world that they remain in love even in the face of needless homophobic attacks. In the interview, Aweng opens up about mental health and the heavy backlash from her community when she married Alexa last December. Aweng hails from South Sudan, where same-sex marriage is constitutionally banned. South Sudan imposes ten years of imprisonment for this and refuses to recognize same-sex unions. The majority of South Sudanese maintain traditional or Christian beliefs, and religion is still very influential. Aweng spoke of how she found the whole situation baffling, not really understanding why her community wished her death simply because she married a woman. She spoke of how the situation saddened her, saying, "We got married and the whole world, literally the whole of my community, were wishing that I passed, in a way… a few months later, I attempted suicide." She said that, subconsciously, she felt drained by the fact that she had gotten married and her community had turned on her. A year later, the community is still discussing her marriage, pondering why she dared to marry a woman. The South Sudanese media have had a lot of unkind words to say about her marriage. She opened up about how she still finds the situation difficult because she has no control over what people say and what newspapers and tabloids publish in Sudan. It was confusing to her how her marriage seemed to be all they could talk about, especially given the recent political climate in the country.
She had no idea that her marriage announcement would turn out to be a "whole thing." The situation was deeply saddening for her, because it was meant to be the happiest day of her life, but the haters couldn't let her enjoy it. She is, however, grateful for the support and respect she has received from other Sudanese girls since coming out. This goes to show that not everyone in the country is against LGBTQ+ rights, and there are other girls and women who understand and empathize with what Aweng is going through. "It was beautiful to see how people react to having someone who validates who they are." She explained how this support left her feeling a responsibility to speak up for the LGBTQ+ community in South Sudan. "At first I did, but then I realized that I'm in my twenties. I wish I could say, Let me hold the torch for the LGBTQ+ Sudanese community, but it is a lot to handle and I am only human, I am learning myself."

The cover picture in itself tells a story of defiance against homophobic abuse. The pictures in the interview make a bold statement that they won't be backing down. The fact that ELLE decided to celebrate this love story by having a same-sex couple passionately embrace on the cover is a milestone in itself. We need more covers and stories that allow everyone to share their truth with the world. We need to speak more about how homophobia affects our mental health. As a young queer Ethiopian girl, this candid cover was like an early Christmas gift to me. I realized someone understands and is going through the same things. It motivated me to go on with my journey; sometimes all it takes is to see that you are not alone.
https://medium.com/matthews-place/why-the-recent-elle-uk-cover-sends-a-powerful-message-d427570314a6
[]
2020-12-04 17:40:54.971000+00:00
['Civil Rights', 'Africa', 'Sudan', 'Elle', 'LGBTQ']
How to Build Custom Transformers in Scikit-Learn
FunctionTransformer

Let's start simple with a great tool for on-the-fly transformations: FunctionTransformer. FunctionTransformer can be used for everything from applying a predefined function to a feature, to selecting specific columns in your feature set. The basic idea is that FunctionTransformer accepts a function (you can also pass an inverse function) and applies it to the data via a fit_transform method. This makes it a great tool for uncomplicated transformations that can be encapsulated in a simple function; you can almost think of it as the "lambda function" of scikit-learn preprocessing. We demonstrate a few use cases below.

Selecting Features: Here we use FunctionTransformer to select two of the thirteen features in the full Boston Housing dataset.

Simple Transformations: We were able to select the features we wanted, but perhaps we'd like to scale the values of the 'CRIM' (crime rate) feature. We could use StandardScaler, but instead we'll apply a log scale to demonstrate how to use FunctionTransformer to apply simple functions on the fly.
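Since the code screenshots from the original post don't survive in this text, here is a sketch of both use cases. The values below are made-up stand-ins for a few Boston Housing columns (CRIM, RM, TAX), not the real dataset:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import FunctionTransformer

# Made-up sample of three Boston-Housing-style features.
X = pd.DataFrame({
    "CRIM": [0.03, 2.31, 9.97],   # per-capita crime rate
    "RM": [6.5, 5.9, 6.1],        # average rooms per dwelling
    "TAX": [296, 222, 666],       # property-tax rate
})

# Use case 1: select a subset of features.
select = FunctionTransformer(lambda df: df[["CRIM", "RM"]].copy())
X_sel = select.fit_transform(X)

# Use case 2: log-scale the crime rate, passing log1p/expm1 as an
# invertible function pair.
log_crim = FunctionTransformer(np.log1p, inverse_func=np.expm1)
X_sel["CRIM"] = log_crim.fit_transform(X_sel["CRIM"].to_numpy())
print(X_sel)
```

Both transformers expose the usual fit/transform interface, so they can be dropped straight into a Pipeline; and because `log_crim` was given an inverse function, `log_crim.inverse_transform` recovers the original crime rates.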
https://medium.datadriveninvestor.com/how-to-build-custom-transformers-in-scikit-learn-edd65951b2e8
['Jake Miller Brooks']
2020-12-08 15:41:49.334000+00:00
['Python', 'Data Science', 'Feature Engineering', 'Machine Learning', 'Scikit Learn']
Novell to be Purchased by Attachmate for $2.2bn
Finally ending the long-speculated rumors, Attachmate is acquiring software maker Novell for $2.2 billion. The buyout, which will take Novell private, has been a source of speculation for months, since Elliott Associates announced its intention to buy Novell in March 2010 for $2Bn. Since that announcement, revenues continued to lag year-over-year each quarter through Novell's fiscal year end, October 31, 2010. "The company started looking at strategic alternatives including a sale after rejecting a $2 billion takeover offer in March from shareholder Elliot Associates LP as inadequate. Attachmate, whose owners also include Francisco Partners and Thoma Bravo, said Novell products will complement a portfolio that includes other technology assets", BusinessWeek reports. Not surprisingly, Novell stock is up today, currently 36 cents over its close last Friday, although it likely won't exceed the $6.10/share price that Attachmate has agreed to pay for Novell. Attachmate, a comparatively unknown company, has about 900 employees worldwide compared to Novell's 3,500, and generated $300MM in revenues last year, compared to Novell's revenues of approximately 3–4x that amount annually. Novell's earnings have been down repeatedly in the last few years, with some speculating that the economy, especially weakness in the financial sector, in which Novell's security and identity products are popular, has dealt the company blows it has struggled to recover from. Shira Ovide at the WSJ notes that Elliott may have forced this transaction to go through, but didn't get rich on the deal. Of course, the rumors are flying in the Linux community already about what this may mean for the open source software platform, especially since Microsoft has also tossed in half a billion for yet-to-be-named technology assets.
https://medium.com/connected-well/novell-to-be-purchased-by-attachmate-for-2-2bn-74afcee6da2a
['Robert Merrill']
2016-03-27 20:25:15.184000+00:00
['Acquisition', 'Attachmate', 'Business']
I Was Denied Birth Control
My healthcare provider prioritized a maybe-baby over my bodily autonomy.

Last summer, my medical practitioner refused to re-insert an IUD on the grounds that I might be pregnant soon. If that sentence didn't make sense to you, you're not alone. It didn't make sense to me either. I went in on a Tuesday afternoon and had my perfectly good Paraguard (copper) IUD taken out and a Mirena (hormonal/plastic) IUD put in. I'd had the Paraguard for 4 years and loved it, but because of my PCOS and recent potentially hormone- or peri-menopause-related symptoms, the nurse suggested swapping. Three days later, I got out of bed to pee, and when I wiped, there it was. My new IUD had slipped out of my uterus without a pinch or poke. If I hadn't glanced at the toilet paper, I would likely have flushed it without ever knowing it was gone. Perhaps it had been placed incorrectly, or my body had rejected it, but either way I was suddenly without my care-free birth control. I called right away and talked to the advice nurse. They had an opening that morning, but I wasn't available, so I scheduled for the following Monday after work. I wasn't given any special instructions. They also didn't advise me what to do with the expelled IUD, which lived on my bathroom counter for the next couple of weeks. It was now a $500 piece of garbage, but it felt strange to just throw it away. I left work early the next Monday and drove across town to my appointment. When I checked in, they asked me to give a urine sample. "I literally just had my IUD fall out Friday," I answered, "I can't be pregnant." She nodded, and I figured that was the end of it. The nurse I spoke with on Friday had given no instructions about not going about my life as normal over the weekend or refraining from sexual activity. "Have you had sex since Friday?" The question was asked casually as the nurse put away her blood pressure cuff.
“Yes,” I replied, “on Saturday.” “Did you use a condom?” “No…” I replied, reluctantly. It had been so long since I’d used one with my long-term partner that I didn’t even consider it. Besides, I was getting my IUD re-placed in less than two days, so I thought I’d be covered. “We’re going to need a urine sample,” she insisted. “I don’t understand,” I told her uneasily. “A urine sample won’t tell you anything. You can’t detect anything on a pregnancy test from sex that happened 2 days ago.” The next answer was a forced smile, and a reply that she would send the Nurse Practitioner in to talk to me. What happened next floored me. The NP came in and told me she wouldn’t be able to insert the IUD today, because I might be pregnant. My mind started spinning… pregnant two days after sex? “I can’t insert the IUD because it would terminate the pregnancy.” I was dumbfounded. Surely these medical professionals knew that you can’t be pregnant two days after having sex? Even if an egg had been released and somehow a sperm had reached it, it would be a week before that fertilized egg implanted in the wall of my uterus. I stared at her wordlessly as I tried to untangle what she was saying. I couldn’t reconcile the idea of being denied contraception because a baby may come into existence in the future. Didn’t that defeat the whole purpose of having birth control in the first place? I knew that not all of the mechanisms by which the Mirena prevents pregnancy are known, but one that IS known is that it keeps the lining of the uterus thin so an egg can’t implant. “So, if the IUD was already inserted and next month an egg got fertilized but couldn’t implant, preventing pregnancy, that would be okay. How is this any different?” I asked. I was being told that because I had sex and wanted the IUD placed 2 days later, if the process or the IUD itself prevented implantation, that egg’s inability to implant was being considered a termination rather than a prevented pregnancy.
“I just can’t do it, because it could potentially terminate a pregnancy if you’re pregnant,” she replied. “So that’s not my decision to make, it’s just not something you do here?” I could already feel that I was defeated. I wasn’t getting an IUD that day. Her reply was that it was “complicated” and that it would be “a mess” for me and the office to deal with. She ended by admitting she just wasn’t comfortable putting it in. I am still unsure what was complicated about it. The logic behind refusing me the procedure seemed inconsistent at best. If I’d said I used a condom, it seems like they would have done it even though condoms fail. I also never figured out how it would be “a mess” to deal with, and I didn’t argue or question her further because it was obviously a waste of both of our time. The chances of my being pregnant at that appointment were zero. The chances of my uterus containing a fertilized egg were minuscule, based on my medical history and statistics in general. If, by some freakish miracle, that egg implanted and I became pregnant, I would not have continued the pregnancy and that should be my decision. I tried not to be rude. I asked about permanent sterilization because it seemed like a better option than ever going through this trouble again. I told her it would have been nice if the nurse I had talked to on Friday had mentioned anything about not having sex. I walked out of the office that day with hot rage and frustration under my skin. Hours later, I was still unsettled and filled with that kind of impotent anger you can’t do much of anything about. The disrespect of my time and the fact that I’d left work early and lost two hours of pay, the fact that I’d have to do it all again in two weeks was irritating. The idea of having to use some other form of birth control for the next two weeks was mildly annoying. Neither of these felt nearly as unsettling as the crawling feeling that my bodily autonomy had been taken away from me.
I live in a very liberal-leaning town, in a country where birth control and abortion are legal, and I was denied birth control because I’d had sex too recently. My struggles pale in comparison to women who aren’t able to get birth control at all. Trying to imagine what it would be like if this minor inconvenience was a life-or-death situation is impossible and heart-wrenching. Beyond abortion bans are a myriad of other reproductive rights issues. When women get mad about being told what to do with our bodies, it is because we face being stripped of power in this way all the time. This was one instance, one doctor visit, in a lifetime of stolen autonomy. I ended up not getting the IUD reinserted. I called my insurance company the next morning to confirm it was covered, and had my tubes removed last October, but not before a consultation appointment where the male doctor asked if I was “sure” I didn’t want to have “a bunch more” kids. I am lucky that I had access to that option, I am lucky to have good insurance coverage. I no longer have to worry about whether my rights will be trampled by the idea of a future maybe-baby.
https://medium.com/rachael-writes/i-was-denied-birth-control-f991c92b8b56
['Rachael Hope']
2019-06-03 22:08:25.491000+00:00
['Womens Health', 'Women', 'Feminism', 'Health', 'Womens Rights']
How a homeless high school dropout became CEO of a $1 billion company
Good News Notes: “Taihei Kobayashi has gone from sleeping on the streets of Tokyo to heading a technology startup whose market value topped $1 billion. His rags-to-riches story is among the most remarkable to emerge from a small-cap stock boom that’s minting fortunes in Japan. Kobayashi’s company, which helps startups and other firms to design and create new businesses and products, went public in July and its shares have since more than tripled. It’s an outcome that few could have imagined two decades ago. As Kobayashi tells it, his parents kicked him out at 17 when he quit a prestigious high school to focus on his band. He played music during the day and mostly slept outdoors, using cardboard boxes to keep warm during freezing winter nights. He was homeless for a year and a half. A series of encounters got him off the streets and eventually into a job as a software engineer. He was one of the core members in establishing the predecessor to the company now known as Sun* Inc., pronounced Sun Asterisk, in Vietnam in 2012. He’s now Sun*’s chief executive officer. ‘The winters were cold,’ Kobayashi, 37, said of his experience on the streets. ‘There may have been times when things felt like hell. But I’ve overcome those times.’ According to Kobayashi, his parents wouldn’t accept his decision to drop out of high school. They had made financial plans to allow him to get a university education, he said. Attempts to contact Kobayashi’s parents were unsuccessful. ‘They told me to leave, so I left, and that was that,’ he said. ‘I wanted to live my life doing what I enjoyed.’ Kobayashi ended up spending two winters on the streets of the Shinjuku and Shibuya districts of Tokyo. Mostly Outdoors ‘I might have died,’ he said. ‘I slept anywhere I could,’ he said. ‘About 80% of the time it was somewhere outside.’ Yushi Fukagawa, a close friend since Kobayashi’s school days who currently works at Sun*, recalls the time the entrepreneur became homeless. 
‘I didn’t think too much of it,’ Fukagawa said. But ‘my parents seemed worried.’ At 19, a manager of a live-music club took pity on Kobayashi, offering him a job and saying he could crash at the club. He did so for about six years. Eventually, Kobayashi decided it was time to move on. First, he made some money trading music records online. Then he came across a job advertisement that didn’t require any qualifications or experience. All you had to do was take a test, it said.” View the whole story here: https://www.bloomberg.com/news/articles/2020-12-10/he-went-from-homeless-musician-to-ceo-of-a-1-billion-company
https://medium.com/@tonycowger/how-a-homeless-high-school-dropout-became-ceo-of-a-1-billion-company-b2369838ffb1
['Tony Cowger']
2020-12-21 12:17:22.226000+00:00
['News', 'CEO', 'Home', 'High School', 'Homeless']
Do you feel the pandemic is messing your plans? You probably haven’t heard of Gour Saraff
A story of meat and guns. And resilience When Mr. Saraff moved to Ukraine, the country was in a deep economic crisis. In 1991, Ukraine had just become independent from the Soviet Union. That meant chaos. And hunger. For many, many people. The Ukrainian Hryvnia was losing value, making essential items extremely expensive. Despite that, Gour stayed in Kiev, unsure of what the future would bring and how he would face the uncertainty. One day, walking in the street, he came across a long queue. So he queued too, just to find out what people were in line for. After a while, he could see through the door a butcher carving meat off a hook with a large knife. Nobody could choose what or how much they wanted. Where everyone else saw a lack of choice and, for some of them, just a cut of bad meat, he saw opportunity. He saw the future: a production and distribution line. Demand was clearly high and the market was completely untapped. He called up two friends, told them about his idea for the business, and looked for potential partners to start a collaboration. At the time there were no free online resources, no privileged business courses to teach him venture capital, no free cash-flow templates. Nothing! Just vision and talent. He identified the customers with the highest demand — supermarkets and restaurants — and later expanded the product lines into poultry and vegetables, reaching a 60% share of the high-quality product market in the blink of an eye. One day, several men broke into the office carrying Kalashnikovs and pointed them at him and the staff. They were terrorised! The ‘gentlemen’ stole everything, filing cabinets and computers. They even took the chairs and desks, leaving them with barely anything. Only uncertainty. Recovering the business wasn’t easy, and the message of the attack was quite clear: getting it all back was mission impossible. So he left Ukraine and went home, sliding into a midlife crisis.
A few years later, while reading a newspaper, an article caught his attention: someone was writing about this business, which somebody else had taken over in the meantime, generating jobs and opportunities. Selling meat products to supermarkets and restaurants in Ukraine. How funny! His first reaction was sadness, as he felt he had lost the opportunity to build a future for that country. Perhaps he gave up too early and someone else was braver. But later he felt positive. Power of Mindset! He realised his idea was so successful that it actually ignited a successful business for the country. He learnt that every individual has the possibility to transform the world, by succeeding or failing. If he could restart, he would consider a few things for the success of the business. And this is where you should listen if you are starting a new business too: Select the ‘right’ country. But how do you do that? Well, first, it should be safe for yourself and your business. Consider the cost of living. When you start a new business, it is going to be a bootstrapped, cash-generating business. Nobody is going to fund you (or steal from you) until you already have some income. Business always takes longer than what you planned. Nowadays you can easily find a list of countries sorted by safety and costs. An advantage of modern times! Go there!! Only by going to that place can you make your own experience and find out what exactly they need. Does what you read and researched have the same taste in person? Make a clear business plan to take you through the journey and avoid disappointments. Start the business! Do not overanalyse the data, do not overthink! Data changes so fast, it could be obsolete tomorrow! The best experience you can get is by making mistakes! Even if you don’t make money, you are going to build instinct. What could happen once you have tried?
In the best scenario, you do well, you make money and scale up your business, until you expand to other countries and decide that you don’t want to work for anybody else. In the worst case, you come back with a learning curve that nobody else has! You have the instinct to navigate a world where nothing is clear. Of course, sometimes only after your business has been expropriated. In both cases, the country gets transformed, and regardless of whether you lose or win, the country always wins and it will be grateful to you! Now, what are you waiting for?
https://medium.com/@francdambrosio/do-you-feel-the-pandemic-is-messing-your-plans-you-probably-havent-heard-of-gour-saraff-fafe2f2bde8b
['Francesco Dambrosio']
2020-11-11 21:44:58.065000+00:00
['Resilience', 'Leadership', 'Meat', 'Crisis', 'Guns']
Frequently Asked Questions About the Instagram Chatbot | Boss.Direct
https://medium.com/boss-direct/%D1%87%D0%B0%D1%81%D1%82%D0%BE-%D0%B7%D0%B0%D0%B4%D0%B0%D0%B2%D0%B0%D0%B5%D0%BC%D1%8B%D0%B5-%D0%B2%D0%BE%D0%BF%D1%80%D0%BE%D1%81%D1%8B-%D0%BE-%D1%87%D0%B0%D1%82-%D0%B1%D0%BE%D1%82%D0%B5-%D0%B2-%D0%B8%D0%BD%D1%81%D1%82%D0%B0%D0%B3%D1%80%D0%B0%D0%BC-boss-direct-b91345e39d29
['Чат-Бот В Инстаграм']
2020-12-22 15:44:13.130000+00:00
['Продвижение Инстаграм', 'Chatbots', 'Instagram Marketing', 'Чат Боты', 'Продажи В Инстаграм']
Let’s Build a MERN Stack E-Commerce Web App
Let’s Build a MERN Stack E-Commerce Web App Let’s build a simple E-Commerce website using the MERN stack (MongoDB, Express, React and Node) where users can add items, pay and order. Image by Roberto Cortese on Unsplash Hello friends! So, I am starting a new article series based on the MERN stack, and this article is the first part of that series. This series will be completely focused on the MERN stack (MongoDB, Express, React and Node). Previously, I made two series, a social media website and a job search website, but both of them were built on the Django framework and we used the Django templating engine to create the frontend for those applications. But now we are using full-stack JavaScript to design and develop our applications. This means we would be using Node, Express and MongoDB to design the REST APIs and then we would use those APIs in our React frontend. So, it would be very beneficial since it would teach you the concepts of REST APIs and help you integrate these frameworks. So, in this first part, we would talk about the basics of the project and also set the project up. Basically, it would be a simple E-Commerce website. It would not have all the bells and whistles of a complete modern E-Commerce website since this is aimed at learning and understanding how everything actually works. We can surely add features on top of this project to make it better. We would keep our design simple and minimal on the frontend side. We would not be dealing with CSS much, as our focus would be on understanding how we deal with APIs on the frontend and on the basics. We would use React Bootstrap to design our React frontend minimally. We aim to make a working e-commerce website where everything functions correctly. So, the features we would be having in the application that we would be building are:- Authentication using JSON Web Tokens (JWT). Option to add, edit, view and delete all the items in our store.
Option to add items or remove items from the cart. Display the total bill of the cart and update it as soon as the cart is updated by the user. Using Local Storage to store the JWT so that we only allow logged-in users to buy items. Option to pay and checkout, thus creating an order and emptying the cart. So, these are the basic features we would be having in our application. Now, let’s get familiar with the tech stack we are going to use for building this application. Frontend — On the frontend side, we would be using React as the frontend library. We would use Redux for state management. We would use the React Bootstrap library for basic designing of the interface. Backend — For the backend side, we would be using the Express library on top of Nodejs. We would use MongoDB as the NoSQL database to store our data as documents in JSON format. We would use mongoose to connect to our MongoDB database. We would create REST APIs with Express and use these endpoints in the React frontend to interact with our backend part. To learn more about creating REST APIs using Express, Node and MongoDB, check out this article which deals with it simply and elegantly. This tutorial would really help you understand REST APIs, and you will learn how to build them easily. Also, if you are new to React, this simple article will be really great for you to get started with React. This article details building a simple Todo app using React. This would be good for understanding CRUD (Create, Read, Update and Delete) principles. So, we now have an overview of what we are going to build, and we would now like to start building the project. First of all, we would need to download Nodejs on our system since it would then allow us to use NPM (Node Package Manager). If you have not already downloaded it, here is the link to download it. After downloading and installing Nodejs into the system, we are ready to start building the project.
So, let’s open up the terminal and move into the folder of our choice where we would like to create our project. Then we would create a new folder with any name of our choice to store all the project files. I named my folder ‘E-Commerce’. Then move into the created folder and type the following command in the terminal to start a new Node project there: npm init It would then ask a series of questions. We can choose any name for our package, give any description of our choice, and put our name in the author section. We change the entry point from index.js to server.js as we are going to name our entry file server.js instead of index.js. It will work like a server, so naming it as such seems more reasonable. We leave all other fields blank. When we confirm at the end, it would create a package.json file in that folder. Open the package.json file in the code editor of your choice. I use VS Code for this purpose. We would now need to install certain dependencies using npm, which would then automatically be added as dependencies in our package.json file. Here is the package.json file with all of the dependencies we would be needing for this project as of now. We would add some dependencies later on when we need to do so. Actually, you can copy the dependencies and dev dependencies from the package.json file and update your file. Then we can run npm install to install all the dependencies listed in the package.json file. After you have installed these dependencies, let’s first understand the significance of the packages which we just installed. bcrypt — We will be authenticating users in our application, so we would need to store the passwords of our users in our database. It is never recommended to store plain-text passwords since they can be compromised easily, so we use the bcrypt library to hash passwords before we save them. We would go into more detail on how it works when we actually use it.
concurrently — This package helps us to run two processes at the same time, so we would be able to run both our server and client without having to use two separate terminals to do so. config — This is a simple package which helps us to store important data like secret keys and database URIs in a separate JSON file, and it allows us to access it easily within any file. express — This is the library which we would use on top of Node to build our REST APIs. jsonwebtoken — This helps us to create JWTs for authentication purposes. mongoose — This helps us to establish a connection between MongoDB and our Express app. validator — It helps us to validate a few things such as emails. It is a small package and is useful for validation. nodemon — It keeps our server running and reruns the server as soon as any changes are detected, so we do not need to restart the server manually for changes to take effect. We have added a few scripts too to make it easier for us to run the server and client. Let’s have a look at them:- start — It uses node to run the server.js file. It would need a restart to pick up changes. server — It uses nodemon to run the server.js file, which detects changes and restarts the server automatically. client — Running this command runs the client. We use a prefix to let it know that we want to first move into the client folder and then run the command. dev — It uses concurrently to run both the server and client at the same time. So, now let us create a server.js file in the root directory. Let’s start building our server.js file. We would start by doing all the required imports of the various libraries we would need in this file. const express = require('express'); const mongoose = require('mongoose'); const path = require('path'); const config = require('config'); We would then create our express app and set it up for use in our application.
const app = express(); app.use(express.json()); Next, we will set up our server file to serve static content which will be generated from the React app in production. This will only work in the production environment. if(process.env.NODE_ENV === 'production') { app.use(express.static('client/build')); app.get('*', (req, res) => { res.sendFile(path.resolve(__dirname,'client','build','index.html')); }); } Next, we configure our server file to connect to the MongoDB database and then start running the server to listen to our requests on port 4000. const dbURI = config.get('dbURI'); const port = process.env.PORT || 4000; mongoose.connect(dbURI, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true }) .then((result) => app.listen(port)) .catch((err) => console.log(err)); As you can see, we have used config to get our database URI. We define a port variable to use any port value present in the environment variable, as in production, but in development we will use port 4000. We would then connect to our database using mongoose, and after we successfully connect to the database, we start listening to requests on the port, i.e. the server is up and running. This is the server.js file which we have built till now:- We will also create a new folder named config in the root directory. We would then create a new file named default.json inside the config folder. We would then store our important keys and secrets inside this file. We do so in key-value pairs. { "dbURI": "YOUR_DATABASE_URI" } We would then create different folders to keep our routes, controllers and models files. Doing so will reduce the clutter and keep our code readable and maintainable. So, we would deal with all these things in the next parts. We would dedicate a part to authentication to understand it properly. Next, we would deal with the models, routes and controllers related to Items, Cart and Orders in a separate part.
Completing all that would mostly sum up the backend part of the series, and we would then move on to the frontend part after finishing those parts. It is going to be an exciting series, and I believe we will all learn something new and productive which will help us develop our skills. I hope you all understood this first part we dealt with in this article. I hope you all are excited about the upcoming parts! Click here to go to the second part of the tutorial series, where we deal with the models. If you are excited about learning Django, I have two really good series which would help you learn and build something practical using Django. They are 5-part and 6-part series respectively. Click here to access the GitHub repository for this complete project. In these projects, we explore various features of Django. We also use AJAX in the social media project and Social Login features (Google, Github, LinkedIn) in the job search project. Have a look at them, as these are exciting projects that you can do using Django.
https://javascript.plainenglish.io/build-an-e-commerce-website-with-mern-stack-part-1-setting-up-the-project-eecd710e2696
['Kumar Shubham']
2021-06-12 02:19:30.845000+00:00
['JavaScript', 'Mongodb', 'React', 'Expressjs', 'Nodejs']
Why do dentists start to feel isolated in their practice? How do we fix this?
You are an amazing dentist! You went to a really good undergraduate program and then went on to finish a 4-year dental program. You learned so much, and even though you were a little nervous the day that you graduated, you were ready for what lay ahead. You started work as an associate, or you took on the task of practice ownership. You got busy. Super busy… So freaking busy… Days melted into weeks, weeks into months, and now a few years have gone by. Today, at this moment, do you feel isolated in your practice? Do you work in a silo? Do you have a mentor, someone you can reach out to? Wasn’t continuing education supposed to help you stay up to date? But has it really worked for you? Have you even taken enough courses to be compliant, and if you have, have you really retained anything? Not to fear, we fixed continuing education. We know it’s a crazy bold statement, but we believe that connections are the foundation of an amazing dental education. You don’t learn anything in isolation. Learning is the ultimate social activity. You learn from others just as much as they learn from you. We literally programmed (coded from scratch, no WordPress or some mish-mash of plugins here) an education platform to solve this problem. http://core.edropin.com is home to Education Tracks, which are sets of courses bundled together that focus on one particular goal. For example, our first track helps you get your entire 3-yr. Category-1 CE requirement in Toronto (AGD PACE) complete in one go. 24 CE Credits for only $250, which is less than $10.50/CE, including meals and with amazing speakers. More importantly, JOINING A TRACK is totally free. That’s right, you don’t even have to buy the ticket to be part of the community. By joining, you get direct access to all the discussions and the potential opportunity to meet mentors and expand your professional connections. See, I told you we fixed continuing education.
Now you get to learn with others, share ideas, discuss topics and truly engage with people both in-person and virtually. These are the people you can reach out to when you feel isolated at any point in your dental career. Seriously, don’t miss out on the conversation and an incredible and free networking opportunity today. Join now at http://core.edropin.com I look forward to greeting you there. Cheers, Saj
https://medium.com/@edropin/why-dentists-start-to-feel-isolated-in-their-practice-how-do-we-fix-this-8f757caa13fc
['Saj Arora']
2019-12-20 06:30:59.451000+00:00
['Connection', 'Sad', 'Continuing Education', 'Depressed', 'Dentist']
Psychedelics and the Hero’s Journey
“The privilege of a lifetime is being who you are.” — Joseph Campbell I am watching a single drop of water. Lights bounce and dance off of dark corners while the musicians onstage pour their heart into their strings, their keys, their drums. The crowd moves as one, sinuous and fluid-like, to the beat of the music pouring from the speakers. I am standing too close to the speakers, as usual, pressed up against the raised platform that forms the stage, my spirit moving in time, a cell in rhythm with its whole, an organism formed of many twisting limbs like some ancient goddess embodied. But I am not listening to the music. I am watching a single drop of water. A row of water bottles lines the stage, a convenient place to set a drink down where it won’t be disturbed. Someone has spilled one and a droplet sits on the edge of the stage, a bead of moisture encapsulating a raised bump of black paint. It, too, is caught up in the movement of the music — the bass from the speaker is thumping so loud that the droplet vibrates along. I am fascinated, mind bent from both LSD and MDMA, and that dancing drop of water is the most beautiful thing I’ve ever seen. There is no separation, it tells me. All is connected, even down to the smallest droplet of water, all of us wiggling along in to the music that is life. My soul opens up, not for the first or the last time. I’ve always been a very closed-off sort of individual, afraid that the smallest thing I say or do will cause the meltdown of my entire life, and that same fear prevented me from experiencing the world in its most simple and divine state. To open myself up was to approach fear head-on. It wasn’t until psychedelics entered my life that I was able to even look that fear in the face, let alone confront it and move past it into the oneness of everything. I experienced ego death that night, dancing along with that droplet of water. 
“Death” might be a misnomer, because ego death is not a true death — it is a rebirth of the mind and soul into a new phase of being, a recognition of your true nature as a part of the cosmic whole. But ego death is not just the purview of psychedelics, though they make the path to it a little more direct. If you’ve ever read a particularly thrilling book, you know that the death of the ego isn’t always a terrifying thing — it can instead bring you to other worlds, put you into the shoes of others. “The cave you fear to enter holds the treasure you seek.” — Joseph Campbell The Hero’s Journey Joseph Campbell is one of my all-time favorite people — “The Hero with a Thousand Faces” was his seminal work, a book about the single story that is told throughout recorded history. That story is simple: Something upsets the balance of a person’s life and they must go on a journey to right that balance, along the way confronting their deepest and darkest fears and growing into a new iteration of themselves — hence, “hero”. Ever seen The Matrix or Lord of the Rings or Star Wars? All of these ring true with the echoes of the hero’s journey. The first time I ever heard of this concept was in a high school Creative Writing class, taught by a teacher I will forever thank. We studied the phases of the journey and then watched The Matrix, writing down each phase as it happened to Neo. But the journey is not just a set of steps, a few plot points to create a well-told story. The journey is had in confronting your fears, in slaying your own metaphorical dragons and becoming something greater than just yourself — a true “hero”, if you will. When I was a teenager, this idea didn’t really resonate with me. I was socially anxious, overly emotional, egotistical, and a people-pleaser. My entire life was wrapped up in simultaneously trying to appease the wishes of others while remaining at heart a rebellious, angry individual who knew none of this made her happy.
But the idea of confronting my fears didn’t seem particularly desirable at that time in my life — I was just trying to make it to the next day without killing myself. It wasn’t until I was in college that I began to realize how much my own life had in common with Joseph Campbell’s ideas. My biggest challenge, the treasure that lies at the heart of my fear, has always been overcoming my traumatic past, and as I began to confront the most hated parts of myself, I experienced growth like no other. LSD helped quite a bit — it enabled me to realize how much I truly detested myself and begin to make changes in my life — but as I became more focused on my own “hero’s journey” I realized that psychedelics were only a small part of it. It is more about attitude and taking what comes at you than pushing yourself towards huge epiphanies. So I began my work. By “my work” I mean my work on myself, my work on my own fragmented and broken heart. The work of waking up every single day, meditating, saying my affirmations, writing my gratitude journal, reading personal development books, and pouring my extra energy out into exercise (mostly my hula hoop). I’ve done this work for years. Not consistently, not every single day, but over time, discipline developed. Over time, my monsters seemed less scary, less intelligent, and more like big stupid dinosaurs than some kind of Lovecraftian horror. Whereas before, entering the “cave you fear to enter” (as Campbell puts it) was an intense, terrifying thing — now that cave is my haven, my place to relax my mind and allow my creativity to flow. My Continuing Journey For so many years, I’ve put off starting my career as a writer because of my mental health. I’ve pushed back my goal of being published in favor of taking care of my mind — which I do not regret one iota.
However, I’m now twenty-seven years old with no real future ahead of me and I’m tired of working the daily grind to achieve what I could do with a cup of coffee and a few hours behind a laptop every day. The “cave” that I fear to enter will become my haven. These monsters I fight will become my hard-won words of wisdom that I pass on to you all, letter by letter. The thousands of books I’ve read will become my army, generals passing on strategies in whispers as I write. My hero’s journey is continuing. That is the biggest thing that Joseph Campbell taught me — in this long saga of life, we all have many phases to pass through to become our true selves. It is only through struggle and suffering that we learn who we are at our deepest level. Once we learn who we are, we can then portray that to the world more accurately and use that to further ourselves as both humans and working cogs in this ever-moving machinery of society. I intend to honor the lessons taught to me by both Campbell’s work and my own inward journey with psychedelics. There’s an oft-quoted aphorism in the psychedelic community — “When you’ve heard the message, hang up the phone.” It means that once you’ve absorbed what you need to learn, psychedelics become a tool like any other to enact change in those areas of your life. Use the ideas and all of the wisdom you have gained over your years of existence to push yourself to the next level of your soul. Don’t lose yourself so much in the beauty that you become blind to the work you need to do on yourself. Honor this wisdom, and you will become a hero, too.
https://sameripley.medium.com/psychedelics-and-the-heros-journey-6f1600256b19
['Sam Ripples']
2019-05-05 15:13:18.859000+00:00
['Mental Health', 'Spirituality', 'Psychedelics']
If a man in the morning hear the right way, he may die in the evening without regret.
If a man in the morning hear the right way, he may die in the evening without regret. Daily three quotes from China classics The Master said, “If a man in the morning hear the right way, he may die in the evening without regret.” — The Analects, Confucius Allowing men to take their course He whose boldness appears in his daring (to do wrong, in defiance of the laws) is put to death; he whose boldness appears in his not daring (to do so) lives on. Of these two cases the one appears to be advantageous, and the other to be injurious. But When Heaven’s anger smites a man, Who the cause shall truly scan? On this account the sage feels a difficulty (as to what to do in the former case). It is the way of Heaven not to strive, and yet it skilfully overcomes; not to speak, and yet it is skilful in (obtaining a reply); does not call, and yet men come to it of themselves. Its demonstrations are quiet, and yet its plans are skilful and effective. The meshes of the net of Heaven are large; far apart, but letting nothing escape. — Tao Te Ching, Lao Zi Love “Mendicants, these four people are found in the world. What four? Firstly, a person meditates spreading a heart full of love to one direction, and to the second, and to the third, and to the fourth. In the same way above, below, across, everywhere, all around, they spread a heart full of love to the whole world — abundant, expansive, limitless, free of enmity and ill will. They enjoy this and like it and find it satisfying. If they abide in that, are committed to it, and meditate on it often without losing it, when they die they’re reborn in the company of the gods of Brahmā’s Host. The lifespan of the gods of Brahma’s Host is one eon. An ordinary person stays there until the lifespan of those gods is spent, then they go to hell or the animal realm or the ghost realm. But a disciple of the Buddha stays there until the lifespan of those gods is spent, then they’re extinguished in that very life. 
This is the difference between an educated noble disciple and an uneducated ordinary person, that is, when there is a place of rebirth. Furthermore, a person meditates spreading a heart full of compassion … rejoicing … equanimity to one direction, and to the second, and to the third, and to the fourth. In the same way above, below, across, everywhere, all around, they spread a heart full of equanimity to the whole world — abundant, expansive, limitless, free of enmity and ill will. They enjoy this and like it and find it satisfying. If they abide in that, are committed to it, and meditate on it often without losing it, when they die they’re reborn in the company of the gods of streaming radiance. The lifespan of the gods of streaming radiance is two eons. … they’re reborn in the company of the gods replete with glory. The lifespan of the gods replete with glory is four eons. … they’re reborn in the company of the gods of abundant fruit. The lifespan of the gods of abundant fruit is five hundred eons. An ordinary person stays there until the lifespan of those gods is spent, then they go to hell or the animal realm or the ghost realm. But a disciple of the Buddha stays there until the lifespan of those gods is spent, then they’re extinguished in that very life. This is the difference between an educated noble disciple and an uneducated ordinary person, that is, when there is a place of rebirth. These are the four people found in the world.” — Aṅguttara Nikāya 4.125, Buddha
https://medium.com/china-three/if-a-man-in-the-morning-hear-the-right-way-he-may-die-in-the-evening-without-regret-1a49e716222e
['Jian Xu']
2020-10-19 10:38:26.381000+00:00
['Quotes', 'Philosophy', 'Culture', 'China', 'Religion']
Looking to Increase Buy in for Racial Equity? Read This.
New Research Shows that Talking About Race and Class Together is Key With US election results finalized, the demographic data show that Black voters overwhelmingly supported Biden, while 57% of white voters chose the incumbent president. Why the huge disparity? Professor Ian Haney López, a specialist in race and racism, sheds light on this situation. In an interview with Next Economy Now, Professor López discusses the results of his two-year research project about race and class political messaging. López’s team conducted national polling and focus groups to test different combinations of campaign narratives around economic and racial justice. The researchers found that a combined message of economic prosperity and racial equality moved the most voters to support working together across racial lines. However, when those messages were separated, voters were not as convinced to support either a message of “colorblind economic populism” or a racial division message. López gave a real-world example of this finding that he experienced when speaking to a national trade union. Trade union leaders and members were majority white, and when López asked if racism was a problem in the union, they laughed and became disengaged. He went on to explain to the trade union the history of how rich elites have used racial division to keep working-class people from organizing together. He explained how the future economic health of the union and their families hinged on the economic health of fellow Black workers. After this lesson, trade union members were eager to talk about racism in the union and wanted to know how to organize with Black workers. When the message connected racial justice with economic justice for people of all colors, white people were motivated to work toward a future that benefits all. López explains that this strategic separation of race and class is nothing new. 
He references Bacon’s Rebellion as an early example of elites cracking down on solidarity between working-class white and Black enslaved people. In 1676, British aristocrat Nathaniel Bacon united with discontented people in the Virginia Colony against Governor William Berkeley. Professor Dale Craig Tatum writes in the Journal of Black Studies that Bacon joined in solidarity with “farmers, former indentured servants, indentured servants, and enslaved African Americans who wanted to create a more egalitarian society.” Although the rebellion failed, the elites still felt threatened by solidarity between white and Black poor people. To combat this new unification, landed elites gave free land to poor whites and intensified chattel slavery. Tatum writes that elites bought “the support of working class Whites by offering them a few crumbs” and convinced the working-class whites that “they were partners in the political system when they were not.” In order to gain racial-equity buy-in, López’s research shows that race discussions must be combined with class discussions. In an organization, class is about distributing opportunities for growth and development based upon visibly transparent and fair workforce development systems, structures, and programs. As Matthew Syed discusses in his book Rebel Ideas: The Power of Diverse Thinking, teams of diverse ‘rebels’ outperform teams of ‘clones.’ Solidarity between all races will make our economy stronger overall. We will be able to come up with a larger range of useful solutions for our complex problems when different perspectives are included. Despite hundreds of years of divisive messaging and tactics, we now know that diversity helps, rather than hurts, our society. López’s research pinpoints the specific message that convinces people that this is true. To learn more about diversity as the key to progress, check out our article Why Cognitive Diversity Can Save Lives and Lead to Team Success. 
We at Small World Solutions know that building diverse teams and teaching team members how to communicate across their differences increases the team’s overall effectiveness. One of our first steps in this process is to help team members realize that cultivating diversity is essential for them to achieve complex project goals. While Professor López’s research focuses on political messaging, we leverage science-based communication tactics tailored to the government and private sectors. After team members become aware of diversity’s importance, they are motivated to participate in the process, and begin to deepen their interpersonal intelligence abilities — what we call the New IQ.
https://medium.com/@j-brucestewartphd/looking-to-increase-buy-in-for-racial-equity-read-this-de6b9c66386e
['Small World Solutions Group']
2020-12-15 02:45:28.933000+00:00
['Team Building', 'Diverse Teams', 'Network Science', 'Cognitive Diversity', 'Inclusive Diversity']
Monster Finance Public Sale Whitelist Is Now Open!
We are pleased to announce that the Monster Finance whitelisting process will start on the 8th of June, 2021 at 4PM UTC. The project is centered around the larger community of sports enthusiasts across the globe with the sole mission of redefining sports fandom in a Decentralized, Incentivized and Tokenized manner. We have received strong backing from key players in different sports, and the excitement level within the team and Monster’s immediate community is at an all-time high. After completing the private sale a few weeks ago, the team is hard at work on the V1 launch of the MONSTER Platform. As we move closer to the launch of the MONSTER platform, the much-anticipated public sale is part of our strategy to bring Monster closer to the community in an exclusive and rewarding way. The whitelist will be open for at least 5 days. What is Monster? Monster is creating a decentralized ecosystem for sport. At MONSTER, we seek to financially reward the most diehard fans in a ‘first of its kind’ Decentralized finance platform and sporting community which distributes wealth to the diehard fans. MONSTER brings together the traditional benefits of Decentralized finance with the love of sports in a single, user-friendly ecosystem built on the Binance Smart Chain. If you want to find out more about MONSTER, kindly refer to this page. Some key features in the Monster Playbook include MonsterBet (Daily Fantasy Sports Betting), MonsterPad (Earn allocations in new projects based on your Sports IQ (SIQ), and not on the size of your wallet), and MonsterScout (Tokenized contracts for up-and-coming athletes). The Whitelist Process. Join Telegram group. Follow on Twitter. Retweet the pinned tweet and tag 3 friends. Complete the form on the website and submit. AIRDROP Because of the high interest in our public sale, not everyone will be able to participate. To compensate and reward all early supporters we are hosting an airdrop. This is a guaranteed airdrop and not a lottery. 
The total airdrop will be 2,000,000 MNSTR tokens, distributed to ticket holders in proportion to each holder’s share of the total tickets. By completing the whitelisting process, you are automatically guaranteed an airdrop TICKET. For each referral you get an extra ticket. Tickets To increase the size of your airdrop as well as your chances in the public sale lottery you can collect more tickets. If someone else signs up using your unique referral link you get awarded an extra ticket. You can find your unique referral link after you successfully submit the whitelist application. STAY CONNECTED Follow our official social media accounts to stay up to date. Twitter: https://twitter.com/Monsterfinance_ Telegram Group: https://t.me/mnstrfinance Telegram Channel: https://t.me/monsterann Important Be careful of fake Telegram groups trying to impersonate Monster. The token is currently not purchasable anywhere. Always verify that you are in our official channel or group.
https://medium.com/@monsterfinance/monster-finance-public-sale-whitelist-is-now-open-9163f4184b8
['Monster Finance']
2021-06-08 14:32:03.519000+00:00
['Airdrop', 'Binance Smart Chain', 'NBA Playoffs', 'Tokenization', 'Defi']
Advantages and Disadvantages of Outsourcing Disaster Recovery
Having a disaster recovery strategy is crucial for your business to keep running in case of a disaster leading to data loss. After all, you cannot risk losing your valuable business data, as it is essential for restoring your business. Data loss can affect productivity, cause customer loss, and even lead to the failure of the whole business. Server disaster recovery can be set up in-house or outsourced to a third party. Both approaches have their advantages and disadvantages. Here you will learn the pros and cons of outsourcing disaster recovery. But before that, here is a brief on the difference between the two disaster recovery approaches. If you set up disaster recovery in-house, you will have to invest in infrastructure as well as an expert team. However, you will have full control over your data and disaster recovery. On the other hand, outsourcing IT disaster recovery eliminates the need for hefty investments and lets you choose a package suitable for your budget. However, you may not have as much control as with in-house disaster recovery. Advantages of Outsourcing Disaster Recovery Your disaster recovery expenses can be significantly reduced by outsourcing disaster recovery. Building an in-house disaster recovery setup requires investing in costly infrastructure, managing it, and hiring expert staff. With outsourced disaster recovery, you only have to pay for the services you use, and it is much more affordable. Meanwhile, the provider takes care of IT infrastructure management and maintenance. Outsourcing disaster recovery enables you to remotely and quickly recover your data in case of a disaster. So you can reduce the downtime of your business to a great extent and restore its full functionality. When you outsource disaster recovery, you don’t have to be concerned about cyber attacks since the provider specializes in dealing with security risks. Besides, they employ advanced technologies to prevent security risks. 
As your business grows, your requirements change as well. And outsourcing disaster recovery allows you to scale up the resources you need as and when required. When you outsource, you can control everything easily from a single screen while the disaster recovery service provider takes care of complex tasks such as performing backups, upgrades, and maintenance. Furthermore, the disaster recovery service provider ensures that their infrastructure is compliant with security regulations, thus making your business compliant as well. Disadvantages of Outsourcing Disaster Recovery
https://medium.com/@techbrace/advantages-and-disadvantages-of-outsourcing-disaster-recovery-68961d138fc5
[]
2021-12-29 09:56:07.173000+00:00
['Outsourcing', 'Disaster Recovery', 'Outsourcing Services', 'Disaster Recovery Service']
Orange Paint
Orange Paint Jack’s keys clanged in his bag as he moseyed his way down the street. There was a clink accompanying each step. He tightened his grip on his backpack straps, looking around every few minutes for signs of other people. The world remained silent. Ahead he could see his destination clearly, the streetlamp illuminating his path. The clinks became more rapid as he turned the corner into the alleyway beside the drug store. He took a deep breath and pulled out his phone to check the time. 1 am, a bit behind schedule but he’d make do. He slowly placed his backpack onto the ground. Jack could hear his keys crashing into the metal cans as he silently cursed himself for not taking more precautions. If he was caught, he was dead, and being noisy was not the way to prevent that from happening. He froze in an attempt to remain silent. The movement stopped and he reached his hands down to unzip his bag. He kept pausing, inch by tedious inch, determined not to draw any onlookers. This had to be quick. The front of the bag flopped open and he began removing each can of spray paint. He’d wrapped each can in a rag to stop them from clanging around in his bag as he walked but it seemed that one had escaped. He found the culprit at the bottom, tangled in his keychain. The time had begun; he stepped back to gauge the space he had scoped out days ago. This had to be perfect. It was Joe’s own fault for not listening to reason. Joe had to know it was him but it couldn’t be obvious. He did not need the cops knocking on his door. His parents were already angry enough about his being fired. He could only imagine how much trouble he’d be in if they discovered the continuation of his vandalization habits. In a way, he felt bad that Joe was going to be out here half of tomorrow trying to wash Jack’s art off the wall, but then he remembered the thirty-minute lecture he endured about coming in late only a few times before Joe basically threw him out of the store. 
With his thoughts of remorse squashed, he began to paint. He stepped up close to the rough bricks and began sketching in his initial lines. He took out more colors and started filling up his canvas. Jack watched a few drips escape down the cracks and smiled to himself. From blue to green to orange, he worked for hours. The chemical smell was strong and made his eyes water but he continued on. He kept pulling out his phone as he watched the clock tick towards 6 am. He had the urge to pick up speed but he knew he had to be meticulous. With one final streak of paint, his masterpiece was complete. He stepped back to admire his work, only stopping when a sudden commotion in the street sent him speeding to gather his things. Joe always did make a fuss about punctuality. He always said that early was on time and on time was late; of course, this would be the day he’d come to work at 5:30. Haphazardly, he stuffed every can, full and empty, into his backpack and decided that he’d best sneak away. Jack poked his head around the corner, looking both ways. Confusion took over as the street was as empty as when he began. He could have sworn he heard a car pulling up and there were definitely voices. Jack stepped out onto the sidewalk and brushed off the oddity as a sign to get back home. That was until he heard something call out. His head whipped around, looking for the source of the noise. He was anxious now; he couldn’t afford to get caught here. He elected to ignore it and started walking in the direction of his house. The farther he went the less he heard the noise. He didn’t get more than half a block before curiosity got the best of him and he was turning around. He could hear it clearly when he stood in front of the garbage cans placed outside the apartment complex that neighboured the drug store. Jack assumed some animal had gotten itself trapped inside and he moved to open the lid. 
As he did so, he leaped back in order to avoid the most likely rabid creature that he imagined jumping out. After a few seconds, he opened his eyes to realize there was no wild raccoon ready to eat his face; in fact, nothing had come out of the trash can at all. Jack leaned forward and peered inside. There was nothing but bulky garbage bags that only could have fit after some forced maneuvering. There were no signs of life. His concern grew as he pondered what could have possibly been the source of the noise. It was almost instinctive when he gripped the handles and pulled the trash can back from the wall it was placed against. What he saw made his heart sink to his feet. Its skin was pink and wrinkled. There were dark patches of blues and greens spotted across its body, or at least what he could see poking out of the ratty t-shirt it had been bundled in. Its eyes were scrunched closed and its tiny mouth was open while it cried out desperately. Little hands were reaching out, touching nothing but air as it looked for help. Jack’s head started to spin as he realized exactly what he had overheard earlier. He quickly reached down to grab the child from where it had been tossed away. He pulled him close to his chest and he felt the little fingers grip onto his jacket. “We have to get you to a hospital, little guy.” He unzipped the top part of his jacket and shoved the baby, who couldn’t have been more than a few days old, inside. Jack ran. He ran towards the main street which would lead him to the hospital a few blocks away. He had never been one to take anything seriously but damn him if this wasn’t a good place to start. He could feel the baby shiver against his chest and its wailing had ceased altogether. He knew the kid had just been put in the alley but who knows what happened before he got there. He saw the lights in the building’s windows before the sirens jolted into his senses bringing him to a halt. 
He was like a deer in headlights as the two officers stepped out of the car and slowly moved towards him. “Please, you have to help him.” Jack was quiet as he pulled the child from inside his jacket. Both men looked shocked, not having expected a baby. “He needs a hospital, I just found him and…” He was hyperventilating, he hadn’t been doing that before. “It’s okay, son. We’ve got him.” The one who had been driving took the baby from his arms while the other gripped his shoulders. “Get in the car, we’ll take him to see a doctor.” Jack’s whole body felt numb. It could have taken them two minutes to get to the hospital or it could have been two hours. Either way, he was shocked when they arrived. The officer holding the baby took off through the ER doors while the other came around the car and helped him out. He was thankful for the arm around his shoulders as he didn’t trust his legs to hold him up. The world went by in flashes, one minute he was being told to sit and the next he was watching his hands shake as both officers were talking to a nurse. They kept looking at him and he couldn’t tell why. He couldn’t even remember why he was there. Jack looked at his hands again and saw the orange paint specks that dotted his fingertips. He should have been more careful, his parents were going to kill him. When the officer walked towards him he knew it couldn’t be good. It was past noon now, he should be at home. He’d felt his phone going off in his pocket so many times but he didn’t have the energy to even pull it out. “Son, where did you find that baby?” The officer’s voice was soft as he put his hand on Jack’s arm. “They put him in the garbage.” He spoke barely above a whisper. “Did you see who?” “No, if I hadn’t been so paranoid packing up, maybe I could have caught them.” “Then it’s probably best you took your time.” He paused and seemed to have to gather his courage before speaking again. 
“That child has been missing for a few days now, he was kidnapped from the hospital. His mother died having him and it seems… well, it seems someone decided to take him amidst all the chaos.” “Is the baby okay?” The question was desperate though based on the look on the officer’s face, he knew the answer. “There was nothing you could have done, he’s with his mother now.” Silence passed between them as there was nothing else to be said. The tears were falling before he could even think about stopping them. He placed his head in his hands and he felt the officer’s grip on his arm tighten as he grieved for a child he didn’t even know. He imagined his small crinkled face and his tiny delicate hands. He would never know his mother was dead and he would never know someone decided to throw him away, but at least for a moment, he had someone to cry for him. The thought remained on his mind even as the officer called after him and he began his walk home.
https://medium.com/@sydneygcoleman/orange-paint-4aad5ca50125
['Sydney Coleman']
2020-12-18 14:38:35.584000+00:00
['Fiction', 'Fiction Friday', 'Fiction Writing', 'Short Story', 'Story']
CyberPower SL700U standby UPS review: Great for keeping your home network going during power outages
CyberPower SL700U standby UPS review: Great for keeping your home network going during power outages The eight-outlet CyberPower SL700U has a problem: it’s neither fish nor fowl. This well-made, relatively compact unit can provide backup power of up to 370 watts/700 volt-amperes (VA), enough to provide standby power for about nine minutes for a modest computer, like an iMac or mid-range Dell Inspiron tower with an external monitor. This UPS, however, offers power output from its battery that only simulates the smooth sine wave produced by alternating-current power that comes from an electric utility. This simulation lurches through steps instead of sliding smoothly between negative and positive voltage, which is a problem for modern computer power supplies. This isn’t an issue with solid-state devices such as Wi-Fi routers or broadband modems—those are the devices best suited to use with it. This review is part of TechHive’s coverage of the best uninterruptible power supplies, where you’ll find reviews of the competition’s offerings, plus a buyer’s guide to the features you should consider when shopping for this type of product. Most modern computers feature active power factor correction (PFC). This lets the hardware’s power supply more efficiently transform incoming AC into the DC power used within a computer. It can also work across a variety of voltages and frequencies supplied by utilities around the world. But when an active-PFC power supply receives a simulated sine wave instead of the pure sine wave produced by more-expensive UPSes, it may produce a high-pitched whine whenever power comes from the backup device’s battery. It’s also possible that the power supply can experience small amounts of damage that will add up over time. 
The CyberPower SL700 is best suited to providing backup power to networking equipment, but you might only be able to fit two oversized power adapters because its outlets are close together. This UPS is also configured as a standby model, in contrast to a more advanced line-interactive UPS. A standby UPS taps its battery whenever voltage sags or surges enough that it needs to cut the devices attached to its battery-backed outlets off from the power mains and feed them electricity directly. A line-interactive model passes all its power through a conditioner, which cleans up small eccentricities and keeps the battery in reserve for more extreme conditions, or a full-on power outage. That preserves the life of the battery with a line-interactive model in areas with routine power sags, surges, outages, or other problems. A line-interactive UPS can also leap into action in about 4 milliseconds (ms) to provide power from its battery, as opposed to a rated 8ms for the SL700U. That difference can sometimes be enough to crash a sensitive computer. With line-interactive UPSes costing only slightly more watt-for-watt than standby models, I recommend computer owners pick that style. This UPS does have a cost advantage for providing battery backup to less-sensitive, but potentially more critical network devices in your home—to wit, a broadband modem, a Wi-Fi router or the access points in a mesh Wi-Fi network, as well as an ethernet switch if you have a profusion of wired devices. CyberPower’s software lets you configure a number of options for its UPS, even if the UPS isn’t permanently connected to your computer. That kind of equipment has a low power draw, allowing a long runtime on an affordable standby UPS like the SL700U. The rated time for this model would allow keeping 100W of devices running for about 20 minutes, and 50W for about an hour. With particularly power-efficient gear, you might get as long as two hours out of the battery. 
If you’re in a location with relatively frequent outages, including ones that last 5 to 20 minutes, you might be able to keep your network going with this UPS, while you use mobile devices and laptops that don’t require their own UPS. (Internet service providers often have generators and other backups that let them continue to provide phone, data, and cable services even while there’s a localized or even regional power outage.) The eight outlets on the SL700U are divided into three that are protected only against surges, as with any standalone surge protector, and five that are connected to the battery. The placement is a little tight: the five outlets have one placed 2.25 inches away from the next, while the other four are spaced just 1.125 inches apart. (The same is true for the surge-protected outlets, with one spaced further than the other two.) It also has two USB charging ports (5V at up to 2.4A), which is a nice extra. CyberPower offers downloadable software for macOS and Windows that lets you configure the UPS, even if it’s not permanently connected to the computer. You can disable all audible alarms (a big bonus for some people), and set the UPS to restart itself when power returns, among many other features. If you wind up using the UPS with a computer that doesn’t rely on active PFC, you can also use the PowerPanel software to schedule shutdowns and startups, or react automatically when the power is out. (I’m happy to report that an updated version of the Mac software released in August 2020 eliminates problems in installation and usage that I experienced when reviewing another CyberPower device earlier in the year.) 
The SL700 features a wiring-fault light—on one short side, next to the USB connector for a computer hook-up—which you should check when first plugging it in. The LED illuminates red if there’s any wiring issue, such as a missing ground, bad ground, or reversed wiring. If you were to buy this unit and see a red light when plugging this UPS in to power, disconnect it and call an electrician immediately. CyberPower includes a useful and extensive manual in the box, as well as full warranty information—unlike some other manufacturers. The company offers three years’ worth of protection against an unexpected failure of the UPS and a perpetual $100,000 worth of insurance against repairs or destruction of attached equipment. Both the UPS and attached-equipment warranties are available only for the original purchaser, who needs to report a failure within 10 days of the incident and provide the original dated purchase receipt. The bottom line: The computer world has largely moved beyond standby UPS models, but they still have a place: the CyberPower SL700 is the right price for network stability and continuity in the home.
https://medium.com/@ashley83522262/cyberpower-sl700u-standby-ups-review-great-for-keeping-your-home-netwo-f3c72ba8e952
[]
2021-01-11 18:42:43.145000+00:00
['Home Tech', 'Chargers', 'Streaming']
Autoencoders and the Denoising Feature: From Theory to Practice
Generalities about Autoencoders If we had to summarize them in one sentence, it would probably sound like: Autoencoders are neural networks trained in an unsupervised way to attempt to copy inputs to outputs. Yes I know, it may seem quite easy and useless. However, we will see that this is neither trivial nor pointless. In fact, Autoencoders are deep models capable of learning dense representations of the input. These representations are called latent representations or codings. An Autoencoder has two distinct components: An encoder: This part of the model takes the input data as a parameter and compresses it. E(x) = c where x is the input data, c the latent representation and E our encoding function. A decoder: This part takes the latent representation as a parameter and tries to reconstruct the original input. D(c) = x’ where x’ is the output of the decoder and D our decoding function. Undercomplete Autoencoders Throughout the training phase, the goal is for our network to be able to learn how to reconstruct our input data. The following figure illustrates this idea by showing the Autoencoder model architecture. However, most of the time, it is not the output of the decoder that interests us but rather the latent space representation. We hope that training the Autoencoder end-to-end will then allow our encoder to find useful features in our data. The decoder is used to train the autoencoder end-to-end, but in practical applications, we often care more about the encoder and the codings. To highlight important properties, one can, for example, constrain the latent space to be smaller than the dimension of the inputs. In this case, our model is an Undercomplete Autoencoder. 
In the majority of cases we work with this type of autoencoder, since one of the main applications of this architecture is dimensionality reduction. The learning process is quite standard: it aims at minimizing a loss function. There are different metrics to quantify this loss, such as the Mean Square Error or the cross-entropy (when the activation function is a sigmoid, for instance). This loss must penalize the reconstruction for being dissimilar from x. Linear Autoencoders & Principal Component Analysis So one of the main applications of Autoencoders is dimensionality reduction, just like a Principal Component Analysis (PCA). In fact, if the decoder is linear and the cost function is the Mean Square Error, an Autoencoder learns to span the same subspace as the PCA. Comparing the two methods or explaining them in detail is outside the scope of this article. However, if you are interested, I recommend reading this blog post (this one is also very interesting) which gives you a first intuition of PCA, as well as these excellent slides which compare the two methods. But if one were to summarize the two methods… PCA: In PCA, we also seek to minimize the gap between input data and its reconstruction by measuring and minimizing the distance between the two (for example the Euclidean distance). N.B: Vectors of the decoding matrix must have unit norm and be orthogonal. This optimization problem may be solved using Singular Value Decomposition. Specifically, the optimal P is given by the eigenvectors of the X covariance matrix corresponding to the largest eigenvalues. Again, the full demonstration of that property is outside the scope of this tutorial. Autoencoders: If we choose to train them with the Mean Square Error, then we aim at minimizing the reconstruction error ‖x − g(f(x))‖². In the case where f and g are linear, the loss function becomes ‖x − W₂W₁x‖², where W₁ and W₂ are the encoder and decoder weight matrices. Both methods have the same objective function, but use two different ways to reach it. 
Indeed, Autoencoders are feedforward neural networks and are therefore trained as such, with, for example, Stochastic Gradient Descent. In other words, the optimal solution of a Linear Autoencoder is the PCA. Now that the presentations are done, let’s look at how to use an autoencoder to do some dimensionality reduction. For the sake of simplicity, we will simply project a 3-dimensional dataset into a 2-dimensional space. The first step to do such a task is to generate a 3D dataset. It is possible to do it simply with the following code. And here is our dataset! Now we have to create a Linear Autoencoder to perform PCA, and this is possible with the following code. And we can also visualize the corresponding graph! Even if the code, as well as the graph, are quite self-explanatory, let’s take a closer look at them. First, we define the encoder, a dense layer of 2 neurons that accepts inputs of dimension 3 (according to our dataset). So here we constrained the latent-space representation to be of dimension 2 (the output of the encoder). Then, we define the decoder, also a dense layer, but of 3 neurons this time, because we want to reconstruct our 3-dimensional input at the output of the decoder. The combination of the two forms the Autoencoder. All that remains is to train it using the data as both inputs and targets. (We want to reconstruct our inputs, remember?) And finally, the Autoencoder finds the best 2D plane to project the data onto while preserving as much variance as possible. Now let’s see the projection of our data! And with the resulting plot… So, the Autoencoder has reduced the dimensionality of our problem in the same way as PCA would have done. Regularized Autoencoders The problem is that if we give our network too much capacity, with many hidden layers, our model will be able to learn the task of copying the input data without extracting any important information. In this case, the Autoencoder would not learn anything useful and would be overfitting. 
This phenomenon occurs with Undercomplete AEs as well as with Overcomplete AEs (when the codings have a higher dimension than the inputs). We want to be able to give our model capacity and not restrict it to small networks just to limit the number of parameters. To do this, we use Regularized Autoencoders, which encourage the model to develop new properties and to generalize better. There are many different types of Regularized AE, but let’s review some interesting cases. Sparse Autoencoders: simply an AE trained with a sparsity penalty added to its original loss function. Sparse AEs are widespread for classification tasks, for instance. This sparsity penalty is simply a regularizer term added to a feedforward network. Denoising Autoencoders: Adding noise (Gaussian, for example) to the inputs forces our model to learn the important features of our data. Right now we are going to dive deeper into the concept of Denoising Autoencoders (DAE)
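The linear-autoencoder-as-PCA experiment described above can also be sketched with plain NumPy, so the mechanics are explicit (this is a minimal illustration under assumed shapes, not the article's original Keras listings, which are not reproduced here): the encoder f and decoder g are single weight matrices trained by gradient descent on the MSE, and the denoising variant only changes the training pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3-D dataset lying near a 2-D plane, as in the article's example.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(200, 3))

# Linear encoder f (3 -> 2) and linear decoder g (2 -> 3).
W_enc = rng.normal(scale=0.1, size=(3, 2))
W_dec = rng.normal(scale=0.1, size=(2, 3))
lr = 0.02

def mse(X, W_enc, W_dec):
    """Mean squared reconstruction error, i.e. ||x - g(f(x))||^2 on average."""
    X_hat = X @ W_enc @ W_dec
    return float(np.mean((X_hat - X) ** 2))

loss_before = mse(X, W_enc, W_dec)
for _ in range(1000):
    C = X @ W_enc                     # codings: the latent representation
    G = 2 * (C @ W_dec - X) / len(X)  # gradient of the loss w.r.t. the reconstruction
    grad_dec = C.T @ G                # backprop through the decoder
    grad_enc = X.T @ (G @ W_dec.T)    # backprop through the encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = mse(X, W_enc, W_dec)
# After training, the codings X @ W_enc span (up to a linear change of
# basis) the same 2-D subspace that PCA would find.

# A denoising variant changes only the training pairs: corrupt the
# inputs with Gaussian noise and keep the clean data as the targets.
X_noisy = X + rng.normal(scale=0.1, size=X.shape)
```

The same loop with `(X_noisy, X)` pairs instead of `(X, X)` gives a (linear) denoising autoencoder: noisy inputs, clean targets.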
https://towardsdatascience.com/autoencoders-and-the-denoising-feature-from-theory-to-practice-db7f7ad8fc78
['Lucas Robinet']
2020-11-26 20:13:09.398000+00:00
['Deep Learning', 'Editors Pick', 'TensorFlow', 'Autoencoder', 'Keras']
Vacuous
https://medium.com/blueinsight/vacuous-5f205392ea5b
['Patrick M. Ohana']
2020-12-23 14:37:09.681000+00:00
['Blank', 'Blue Insights', 'Haiku', 'AI', 'Poetry']
Why We Serve: Anissa Pérez
Why We Serve: Anissa Pérez In this series you’ll hear stories from USDSers and learn why they decided to join, why they stay, and how their work is making an impact for all Americans. U.S. Digital Service Follow Jun 23, 2020 · 4 min read Anissa Pérez (she/ her), Talent Management Manager, USDS @ Departments of Homeland Security and Health and Human Services. Previously Congressional Hispanic Caucus Institute. From Silver Spring, MD What’s your background? I’m a proud Marylander and Montgomery County resident! Prior to joining the Federal Government I managed the national scholarship program at The Congressional Hispanic Caucus Institute (CHCI)- a national non-profit organization in Washington, D.C. whose mission is “developing the next generation of Latino leaders.” My work at CHCI included developing educational publications, recruiting students from across the United States and Puerto Rico, researching and developing policy recommendations, working with Latinx students and parents (one of the most rewarding experiences), and leading the organization’s transition from paper to digital applications. It’s cool to be part of their history! During my time there I also had the privilege to be invited into Supreme Court Justice Sonia Sotomayor’s chambers and I’ve had the honor to meet President Obama, 4 times! Other world celebrities I met through CHCI include the Prince of Spain Felipe IV, First Lady Michelle Obama, Vice President Joe Biden, myriad members of congress, and others. I cannot wait to tell my 1 year old son the stories and show him pictures! How does your work or the work of USDS make an impact? As a Talent Management Manager here at USDS, my job is to make sure the onboarding process for new employees is as smooth and enjoyable as possible. Anyone who has joined the federal government knows how taxing the hiring process can be and my job is to shield folks from experiencing too much of that. 
It is impactful because we seek to empower engineers, designers, product managers, operations folks and others. New USDSers are about to become public servants working on projects that benefit every single person in this country. The onboarding process sets the tone for their USDS experience and we strive to make sure it is the best it can be that way they can successfully hit the ground running once they arrive. In addition to my primary role, I also organize internal events, translate Spanish to English for certain projects, and co-lead the DHS Digital Service inclusion working group. What do you want to do after USDS? How will my USDS experience ever be topped? USDS is not a perfect organization, however it has been transformative in both my professional and personal life. I’ve been pushed to my limits and have grown more in these past few years than I ever anticipated. In my first job out of college I worked directly with students and I miss working more closely with my community. Perhaps my calling will be to return to my roots of working closely with students or the Latinx community. I’m also interested in working in a giant tech company to help bring about much needed diversity and inclusion change. I’d also be interested to continue in public service after USDS, depending on the opportunity. What will you miss most about USDS when you leave? This is a no brainer — I will miss my colleagues most of all when I leave USDS — both USDSers and government partners alike. I have been blessed to have made lifelong friendships and meaningful relationships with colleagues. I mean when else in my lifetime will I have such beautifully unique colleagues like the ones I currently have? If I want to make sourdough bread and get stuck, I can call people who have mastered the art of bread making. If I want to learn how to draw microbiomes, I can call someone who has perfected their craft. If I need tips on climbing the Himalayas, I know exactly who to reach out to. 
If I want to train guide dogs, we got that too. Herbs for my dinner, style advice, joining the Peace Corps, our network is vast and spans all facets of life. I will never forget my USDS family when I move on.
https://medium.com/the-u-s-digital-service/why-we-serve-anissa-p%C3%A9rez-f13cd996d179
['U.S. Digital Service']
2020-06-23 01:21:00.258000+00:00
['Engineering', 'Interview', 'Civictech', 'Design', 'Government']
Why Getting Married No Longer Feels Like an Obligation I Must Fulfill
I love telling the story of my parents’ engagement and wedding. For starters, my dad had to propose — twice. My mom told my dad she didn’t want to get married until she had finished school, but this did little to deter my father. Before he even had the chance to pop the question, they descended into an argument that resulted in a three-month break. Obviously, they reunited, as my dad managed to get the green light the second time around during a commercial break for ALF, no less. Fast forward to August 2, 1991: their wedding day. My mother wore her bridesmaid dress from her sister’s wedding from the summer before. They exchanged $50 gold bands from JCPenney’s in front of a congregation of nine people. My dad barely made it through the vows because he was so choked up; my mom told him to pull it together so that he could exchange his vows intelligibly enough to be deemed legit. Not exactly the stuff marital dreams are made of. And yet, they’re still going strong almost 30 years later. They laugh and crack jokes together, plan and execute successful political campaigns together, and most importantly, remind each other frequently of how much they love each other. The disintegrating institution of marriage notwithstanding, my parents have provided me the best example of what a marriage can be. Maybe that’s why I’m so scared for myself. As I now approach zero velocity in my mid-to-late twenties, the crescendoing knell of wedded bliss is getting harder to drown out. Suddenly, I’m seeing old high school classmates chart the rest of their lives with their dearly beloveds. They all look so airbrushed and happy and quite frankly, it’s a bit unnerving. It would be dishonest to say that this newly arrived marriage season does not fill me with a faint sense of dread. I’m dutifully happy for my former schoolmates, of course; I’m not a total animal. 
But I may very well be a slightly different breed than the lot of them and a breed who is afraid she’s a little too late to this whole love thing. I’ve only had two boyfriends in my whole gosh darn life, so I’m wading in a pretty shallow pool, romantically-speaking. While I fancy myself to be a wizened version of the heartsick teen I once was, I cannot deny the fact that my ineptitude in romantic relationships has caused me pain in my twenties. But the beauty of these intensely intimate relationships is that you gather so much valuable information about who you are and what makes you tick, especially in the partnership department. For the longest time, I only really wanted to date people I could see myself marrying because I thought that marriage was the end goal I was supposed to be aiming for. It only dawned on me recently that this is a pretty restrictive way to look at a romantic partnership. Not to mention, it puts a lot of pressure on a relationship to turn into something marriageable. The ever-present obligation to partake in this societal rite of passage can be a real mood killer. I wonder if this has anything to do with my most recent ex; I’m sure that his relatively jaded outlook on love and marriage has left a bleak imprint on my still impressionable heart. So really, the jury is out on whether my attitude toward marriage is genuinely evolving or if I’m still shaking off the debris of my last relationship. Something tells me it’s the former, though.
https://medium.com/fearless-she-wrote/why-getting-married-no-longer-feels-like-an-obligation-i-must-fulfill-31ec6d50d27a
['Meg Paulsen']
2020-03-21 15:36:37.081000+00:00
['Women', 'Marriage', 'Relationships', 'Feminism', 'Self']
Samsung’s 110-inch MicroLED TV brings The Wall to your living room
Samsung delights in scoring splashy headlines at CES with its mammoth micro-LED displays, with the company springing a humongous 292-inch model of “The Wall” on CES attendees back in January. But while its earlier micro-LED panels arrived in modules that needed to be professionally assembled, its new 110-inch MicroLED TV will come ready to watch, right out of the (giant) box. Slated to ship globally in the first quarter of 2021, the Samsung MicroLED TV is based on micro-LED display technology: self-emitting pixels that offer vivid colors and perfect blacks similar to OLED, because they can be turned on and off individually. Unlike the organic pixels in OLED panels, however, micro-LED panels are not susceptible to burn-in. Samsung has been touting its micro-LED-based “The Wall” displays for a couple of years now, with the company offering sizes from a crazy-big 292-inch panel down to a more reasonable 75 inches. Previous versions of Samsung’s micro-LED displays have been saddled with a couple of key problems. For starters, due to the difficulties inherent in micro-LED manufacturing, the displays usually arrive in separate modules that must be assembled by a professional installer. Second, Samsung’s micro-LED displays are prohibitively expensive (think six figures), which means they’ve been aimed mainly at business and luxury customers. Enter the 110-inch MicroLED, a TV that promises to fix the first problem with Samsung’s micro-LED displays by eliminating the need to assemble multiple panels. Instead, the new TV comes as a complete, prefabricated unit, with Samsung boasting that it has developed a new production process to streamline micro-LED panel manufacturing. 
With this new set, you’ll need only to take it out of the box, plug it in, and turn it on—although, given that we’re talking about a 110-inch TV, removing it from the box could prove to be quite the operation. Whether the MicroLED TV addresses the second problem with Samsung’s micro-LED displays—the exorbitant price tag—remains to be seen: Samsung has yet to reveal pricing. (Honestly, we’re not holding our breath for affordability.) Samsung promises that the MicroLED will deliver “stunning,” “bright,” and “vivid” images, thanks to a new Micro AI Processor. It’s worth noting, however, that this 110-inch TV is only capable of 4K maximum resolution, not 8K like Samsung’s larger “The Wall” displays or its pricier LED-based QLED TVs. The MicroLED will boast a near bezel-less display with a 99.99-percent screen-to-body ratio, Samsung says. In addition to watching one giant image, you’ll also be able to split the display into four 55-inch screens, ideal for NFL Sunday Ticket junkies. Besides the images, Samsung says the TV’s integrated Majestic Sound System with Object Tracking Sound Pro functionality can crank out realistic (if virtualized) 5.1-channel sound without the need for external speakers. All very impressive, but we’ve yet to see (or hear) the 110-inch MicroLED in action, nor do we know how much Samsung plans to charge for its giant new set. Given that Samsung’s 98-inch Q900 QLED TV, an 8K set based on traditional LED technology, goes for a breathtaking $60,000 (and that after a 40-percent discount), we’re steeling ourselves for the MicroLED’s eventual price tag.
https://medium.com/@nicole62698340/samsungs-110-inch-microled-tv-brings-the-wall-to-your-living-room-ef253bda3d43
[]
2020-12-20 17:44:06.751000+00:00
['Streaming', 'Connected Home', 'Consumer Electronics', 'Gear']
A psychological study, “Does Barbie make girls want to be thin” looked at girls 5 to 8 years of…
Is A Barbie Body Possible? A psychological study, “Does Barbie make girls want to be thin,” looked at girls 5 to 8 years of age who were shown images of either Barbie dolls, Emme dolls (U.S. size 16), or no dolls. After the girls saw the images, they completed body image assessments. The study concluded that girls exposed to Barbie dolls reported lower body image and a higher desire for a thinner body frame than the girls in the other exposure groups. The study also found that early exposure to dolls with unrealistically thin body shapes could damage girls’ body image, and increase their risk for an eating disorder or weight cycling. When I worked at Rehabs.com, the team created an infographic called “Is A Barbie Body Possible?” They had a realistic photo rendering done based on a Barbie doll’s measurements. They discovered that it is impossible to have her physical proportions. For example, with Barbie’s waist being smaller than her head, she only has room for half a liver and only a few inches of intestine. The Yale Center for Eating and Weight Disorders calculated a healthy woman’s body and how much it would have to change in order for her to have the measurements of a Barbie doll. They found that “women would have to grow two feet taller, extend their neck length by 3.2 inches, gain 5 inches in chest size, and lose 6 inches in waist circumference.” These are impossible dimensions to achieve, and although media shows that this is a body to emulate, I believe that the ideal female body image is changing. Thanks to movements like #bodypositive, Dove #realbeauty, #nofilter, and #healthyisthenewskinny, we are now seeing bodies of all different sizes in ad campaigns. The emphasis is less about striving for the perfect body and more about loving our body just the way it is. We are also seeing Mattel change as well. 
Three years ago they offered three new body types to Barbie, and one was “curvy.” Let’s hope that she loves her new body, and encourages women young and old to do the same.
https://medium.com/@hallieheegkotrla/a-psychological-study-does-barbie-make-girls-want-to-be-thin-looked-at-girls-5-to-8-years-of-6907b2d82427
['Hallie Heeg-Kotrla']
2019-05-27 22:23:36.992000+00:00
['Life Coaching', 'Self Esteem', 'Body Positive', 'Body Image', 'Eating Disorders']
Interview with Castro Antwi-Danso of Esoko
Castro Antwi-Danso, Esoko “What I’d like to get out of a Network of Data Stewards is to see the best practices from other places that could inform what we do, especially in Africa where data use and management isn’t very enhanced.” In this interview recorded at the Data Stewards Network Camp in Cape Town, South Africa, Castro Antwi-Danso, the director of sales and marketing at Esoko, shares insights on the opportunities, risks, and best practices of stewarding private-sector data in the public interest. Drawing from his experience helping Esoko to leverage its agricultural data to create positive impacts in rural communities across Africa, Castro reflects on the potential for data collaboration to help minimize duplicative data collection, and the important role of data stewards in establishing and maintaining trust with data subjects and users.
https://medium.com/data-stewards-network/interview-with-castro-antwi-danso-of-esoko-d7b57e6828de
['Andrew Young']
2019-01-02 19:07:41.052000+00:00
['Interviews', 'Data Stewards', 'Big Data', 'Data Stewardship', 'Data']
Functional Pipe in Go
Functional Pipe in Go
Practical Function Composition in Go

This article introduces a practical approach for implementing function composition in Go. If you are too impatient to read the whole story, you can directly go to the source code.¹

Introduction

One of the most interesting features of functional programming languages is the ease of composing functions. Function composition in mathematical notation is simply a dot operator ‘⋅’ that takes two functions f(x) and g(x) and produces another function h(x) = (f ⋅ g)(x), defined as f applied to the result of applying g to x, or (f ⋅ g)(x) = f(g(x)).² A pipe ‘|’ operator is also a widely accepted notation in programming which works the same way but usually in the opposite direction, i.e. (f ⋅ g)(x) = (g | f)(x). Function composition is so popular in functional programming that almost all functional languages have defined a specific operator to make composing functions yet easier. For example, Haskell uses the dot . notation.³ Here is an example of function composition in Haskell:

inc :: Integer -> Integer
inc x = x + 1

sqr :: Integer -> Integer
sqr x = x * x

sqrPlusOne :: Integer -> Integer
sqrPlusOne = inc . sqr

And F# uses a composition >> or pipe |> notation:⁴

let inc x = x + 1
let sqr x = x * x
let result = (sqr >> inc) 8 // result = 65
let result = (8 |> sqr) |> inc // result = 65, ()'s are optional

The elegance of composition comes from expressing complicated algorithms by combining much simpler formulas. In functional programming, these formulas are functions, and the result of composition is just another—but more complex—function. Besides the mathematical interpretation, composition and pipe can represent a broader approach. In real-life programming, we typically encounter business problems which consist of several steps, where each step depends on the outcome of the previous step. By their nature, composition and pipe are very beneficial here and can provide a general solution for this kind of problem. 
The goal of this article is to find a way to bring the power of function composition into the Go world using pipes.

Functional Go

Functions are first-class objects in Go, which is crucial for applying functional principles. Unfortunately, Go (as of Go 1.x) lacks a notation for function composition or pipe. However, we can always achieve the same goal by applying functions consecutively:

func sqrPlusOne(x int) int {
	sqr := func(x int) int { return x * x }
	inc := func(x int) int { return x + 1 }
	return inc(sqr(x))
}

But the problem arises with Go functions returning multiple values or, commonly, errors as their last return value. In such cases, we need to handle errors explicitly for each function:

func sqrPlusOne(x int) (int, error) {
	sqr := func(x int) (int, error) {
		if x < 0 {
			return 0, errors.New("x should not be negative")
		}
		return x * x, nil
	}
	inc := func(x int) int { return x + 1 }
	y, err := sqr(x)
	if err != nil {
		return 0, err
	}
	return inc(y), nil
}

This is somewhat straightforward for two functions, but it can quickly become a burden when the number of functions increases.

Solution

Here, we need to find a solution both to compose functions easily and to handle errors transparently. Since the number of function arguments and return values can vary, one can only think of one answer: reflection⁵. Go’s reflection (via the reflect package) is a powerful tool which enables us to investigate functions for their arguments and return values, and to call them dynamically. This is not a tutorial about Go reflection, but here we give you some examples:

sqr := func(x int) (int, error) {
	if x < 0 {
		return 0, errors.New("x should not be negative")
	}
	return x * x, nil
}

t := reflect.TypeOf(sqr) // t.Kind() == reflect.Func
t.NumIn()                // num inputs: 1
t.NumOut()               // num outputs: 2
t.In(0)                  // input types: int
t.Out(0), t.Out(1)       // output types: int, error
v := reflect.ValueOf(sqr)
v.Call(...)
// dynamic dispatch

As Go does not support overloading and/or defining new infix operators, we have to implement the pipe operator as a function. We start by defining a Pipe function and a Pipeline type for its result:

type Pipeline func(...interface{}) error

func Pipe(fs ...interface{}) Pipeline {
	return func(args ...interface{}) error {
		...
	}
}

Pipe uses variadic arguments to take zero or more functions as inputs and produces a Pipeline. A Pipeline instance is just another function that does the actual work. It accepts zero or more inputs and returns an error. The number of its input arguments must match the input arguments of the first function in fs. But its output may or may not match the last function, and that is because Go does not have variadic return values. We call the last function a sink, i.e. its job is to gather the results and clean the pipeline. The sink should not return any values other than an optional error.⁶ With these two definitions, we can rewrite the previous sqrPlusOne example function as:

func sqrPlusOne(x int) (int, error) {
	var result int
	err := Pipe(
		func(x int) (int, error) {
			if x < 0 {
				return 0, errors.New("x should not be negative")
			}
			return x * x, nil
		},
		func(x int) int { return x + 1 },
		func(x int) { result = x }, // the sink
	)(x) // the execution of the pipeline
	if err != nil {
		return 0, err
	}
	return result, nil
}

Here you see how easy it is to compose functions with Pipe. There is no need to handle errors for each function, and the list of composing functions can go on indefinitely. How can we implement the pipeline? We start by processing the input arguments. Go reflection enables us to call functions dynamically, but for that, we need to pass the input arguments as a slice of reflect.Value values. So first, we need to do the conversion:

var inputs []reflect.Value
for _, arg := range args {
	inputs = append(inputs, reflect.ValueOf(arg))
}

Secondly, we have to solve the nested function calls, f(g(…)). 
We can always unwind nested functions using a for-loop, as in the following pseudo-code:

// equivalent to f_n(...f_1(x))
for f in fs {
	x = f(x)
}

The same implementation using reflection would look like:

for _, f := range fs {
	inputs = reflect.ValueOf(f).Call(inputs)
}

But we are missing the errors, which should be checked after each function call and should not be passed to the next function. So we extend the previous code and write:

var errType = reflect.TypeOf((*error)(nil)).Elem()

for _, f := range fs {
	v := reflect.ValueOf(f)
	t := reflect.TypeOf(f)
	outputs := v.Call(inputs)
	inputs = nil // clear inputs
	for i, output := range outputs {
		if t.Out(i).Implements(errType) {
			if !output.IsNil() {
				return output.Interface().(error)
			}
		} else {
			inputs = append(inputs, output)
		}
	}
}

And that is all we require to implement the pipe operator. See how we check return values for errors and neglect the nil errors. A complete implementation of this approach can be found here.

Update: Performance Issues

In order to measure the performance of the previous implementation, we can write a simple benchmark test to compare Pipe against a direct function call for a small example:

var result int // to prevent compiler optimization

func BenchmarkPipe(b *testing.B) {
	sqr := func(x int) int { return x * x }
	inc := func(x int) int { return x + 1 }
	x := rand.Intn(1000)
	b.Run("Direct", func(b *testing.B) {
		for n := 0; n < b.N; n++ {
			result = inc(sqr(x))
		}
	})
	b.Run("Pipe", func(b *testing.B) {
		pipe := Pipe(sqr, inc, func(x int) { result = x })
		for n := 0; n < b.N; n++ {
			pipe(x)
		}
	})
}

The benchmark on my machine⁷ shows that while the direct function call results in 4–5 ns/op, Pipe runs in ~1000 ns/op, which is 200–250× worse. This makes it impractical for repetitive small tasks, as it has a relatively big overhead because of the reflection.

Conclusion

This article presents a rather simple method to implement a pipe operator in Go. It works, but it also eliminates type-safety. 
It casts all arguments into interface{} before the execution of the pipeline, and it uses reflection to apply functions. So, there is no way to check functions and their arguments at compile-time. Although it is practical, in my opinion it is not a Go-ish way to write applications. It would be nicer if Go had a built-in pipe operator and did static type-checking at compile-time.
https://medium.com/swlh/functional-pipe-in-go-1c755467fe14
['Ali Aslrousta']
2019-12-31 19:07:59.836000+00:00
['Practice', 'Functional Programming', 'Programming', 'Golang', 'Pipeline']
Farewell Rebooters
At the end of May 2019, I bade farewell to my colleagues and family at Reboot, a firm where I had spent the last 3+ years working on the most challenging issues in Nigeria and Africa. Just this week, my colleagues hosted a send-off party for me and I had the opportunity for the last time as a team to revisit some of the most heartwarming experiences we shared. In 3 years, they promoted me three times while contributing my quota to building the company’s portfolios in Africa and Nigeria, including Institutional reforms, media development, and open governance. Personally, I spent much of my time leading a DFID project in Northern Nigeria focused on elevating civic tech engagement towards improved delivery of public services. I am transitioning satisfied that the voices of more marginalized people now count in deciding the policies that shape their lives, a grand vision that my colleagues and I share. I have gained so much experience understanding and supporting civil societies, community-based organizations, media, and government bodies to better how they leverage technology in doing the work that they do. Perhaps the most important experience I have gained and the lessons I have learned is the application of Human-Centered Design and Design Research to tackling the most complex challenges our world face today. My colleagues remain the most important people in my life and the best people I have worked with yet. I am grateful for their love, support, and guidance throughout. Purpose, dedication, and collaboration among others have defined my work culture as I strive alongside others for impact. And I remain grateful for all the lessons learned along the way. In a few weeks, a new chapter begins. A very exciting and challenging journey that I am ready for. Grateful to the Almighty creator of the universe for the opportunity.
https://medium.com/@Sir_Ruffy/farewell-rebooters-fc43e3b335dc
['Ahmed Rufai Isah']
2019-06-20 21:38:06.511000+00:00
['Transition', 'Reboot', 'Civictech', 'Nigeria', 'Governance']
H3RO3S Raises $1.65M from Industry Heavyweights
Heroes is proud to announce that it has completed its seed and private sale funding rounds. The funding rounds saw us get strong investment support from heavyweights in the industry. At the end of the raise, the seed and private rounds saw us raise a total of $1.65 million. With this, we have reached another milestone in our journey to providing the blockchain community access to a play-to-earn platform that won’t only provide fun but will also give players a means to earn extra income. We are announcing with a lot of pleasure that our fundraising round drew the attention of important investors, who saw our project as valuable and decided to make significant contributions and assist us in raising $1.65 million in a very short period. Both rounds were sold out, with investors getting in in record time, which stunned even us. Our investors believe in our team, and they have strong convictions that Heroes’ revolutionary play-to-earn gaming platform will be a game-changer in the blockchain gaming space. This is because we provide a unique and innovative solution to a problem that has plagued the gaming industry for a long time, while also pushing the message of socialization and networking amongst our users. Some of the lead investors in these rounds are: Gains Associates, CSP DAO, X21, Lotus Venture Capital, DCI Capital, Maximus Capital, Halvings Capital, Moon Carl, Mandy from Mandy’s ICO Research, Crypto Thugs Capital, Bitcoin Guru from 100x Club, Crypto Nation, Crypto Curry, Crypto Legend, Altcoin Alerts Overview H3RO3S is a revolutionary real-life play-to-earn gaming system that attaches incentives to the different products, levels and talents on the platform and allows the end-users to redeem these incentives by completing tasks for one another. 
The developers behind H3RO3S believe that “everyone deserves the joy of opportunities.” Gaming transcends borders, and because of this, the company believes in taking the standpoint of being a platform where inclusivity is a priority. Their ideology is that everyone, regardless of race, ethnicity, or gender, is welcome to be part of the community, as the integration of people from all walks of life into the community is one of the goals. Users on the platform can earn as much as $1,500 per month simply by participating and growing from the rookie stage, where they earn $3 per task, to the prestige stage, where they get the ability to choose the tasks they would like to be notified of. The team at Heroes would like to express their gratitude to the community and investors for their incredible support in our journey so far. On our part, we will remain a solid and trusted partner and will continue to create strong value for our investors. Again, we are grateful for the support and will always be indebted for the immense trust you have put in us. About H3RO3S: H3RO3S is the world’s first real-life play-2-earn game, with a decentralized, revolutionized marketing system on Binance Smart Chain, aiming to provide incentives for the company’s products and allowing the end-users of the platform to redeem these incentives by completing tasks for one another.
https://medium.com/h3ro3s/h3ro3s-raises-1-65m-from-industry-heavyweights-ef8f26bfae48
[]
2021-09-14 07:54:28.671000+00:00
['Fundraise', 'Investors', 'Gaming', 'Blockchain', 'Playtoearn']
Skydio Announces The Availability of New Enterprise Software For Skydio 2™ and a new Training Program for Certified Skydio Pilots
Demonstrates swift market traction as the company continues accelerated product roadmap and training for enterprise and public sector December 16, 2020 — Redwood City, Calif. — Skydio, the leading US drone company and world leader in autonomous flight, announced the general availability of Skydio Autonomy Enterprise Foundation (AEF), a broad set of advanced AI-pilot assistance capabilities for enterprise and public sector operators. Controlled through the new Skydio Enterprise App, AEF optimizes Skydio 2 for professional use in a wide range of missions, including outdoor/indoor inspections, search and rescue, emergency response, security patrol, and situational awareness. “This is an important milestone for us as we continue to reframe the drone industry through the power of autonomy,” said Skydio CEO, Adam Bry. “Because Skydio drones are software-defined flying computers, we are able to transform the drone’s behavior and add capabilities for different users without having to change the underlying hardware. Skydio 2 has already been adopted by many enterprise users who recognized the value of autonomy. With Autonomy Enterprise Foundation, we are delivering an enhanced user experience and feature set tailored to the needs of enterprise and public sector operators.” Skydio’s AI-based autonomy delivers unmatched ease of use and obstacle avoidance that helps enterprise and public sector operators capture more accurate data in a fraction of the time. As a result, customers can see immediate efficiency and profitability gains over traditional manual drones. AEF delivers a new user experience optimized for enterprise and public sector missions with new AI-pilot assistance capabilities that further increase the benefits of Skydio 2. 
Advanced features in Autonomy Enterprise Foundation include:
- Close Proximity Obstacle Avoidance to fly as close as 16” (roughly 40 cm) to obstacles and easily go through standard-size doors with confidence
- Precision Mode for maximum control over the position of the drone during detailed inspections or in tight spaces
- Vertical View to capture images by orienting the gimbal camera straight up overhead
- Superzoom™ to provide an omnidirectional view of the drone’s surroundings and zoom into points of interest with a powerful digital zoom
- Point-of-Interest Orbit to fly autonomously around a user-specified point on the map
- Track In Place to visually track a subject from a fixed position
- Visual Return-to-Home to use visual wayfinding when flying back to base in GPS-denied environments
- Offline Maps to fly in LTE-denied environments by saving maps in the Skydio Enterprise App
“We love to see the progress that Skydio is making in bringing the power of autonomy to enterprise users. Skydio’s AI-based drones are a game-changer for operators like us. 
Coupled with the new enterprise software capabilities these will allow us to perform an even wider range of missions faster, safer, and with higher quality than with any other tool available today.” — Jarvis Worton, Global Platform Technology Specialist / sUAS SME | Jacobs Engineering Group “Close Proximity Obstacle Avoidance allows us to fly our aircraft into tight, complex environments without giving up the safety that normally comes with flying Skydios. We are excited to use it for search and rescue missions,” said Ryan Gifford, Sacramento Metro Fire Department Captain / UAV Program Manager. Additionally, Skydio announced the upcoming availability of Skydio Academy, a new professional training program designed for enterprise and public sector operators to become certified operators of Skydio’s autonomous drones. As enterprise and public sector organizations of all sizes switch to AI-based autonomous drones, traditional approaches to drone missions become obsolete and inefficient. Skydio Academy helps organizations prepare the next generation of pilots, and ensure they have the expertise to scale their autonomous drone programs efficiently, taking full advantage of the capabilities Skydio has to offer. “I am excited that Skydio Academy will soon be available. Skydio’s training staff responsible for building the Skydio Academy curriculum have decades of instructional experience with drones and have trained thousands of enterprise and public sector pilots with in-person and remote courses. Skydio’s autonomy engine can turn any operator into an expert pilot and our expert trainers can help any size organization or individual pilot unlock the full value of Skydio’s solutions,” said Alden Jones, Skydio Sr. Director of Customer Success. To learn more about the latest capabilities of Skydio Autonomy Enterprise Foundation and training options, join us on December 22nd online for an in-depth overview (Webinar Link). 
Resources
- Announcement blog
- AEF webpage
- Skydio Academy web page
- Skydio 2 Overview
- Celebrating the FAA’s Tactical BVLOS Announcement for Public Safety Agencies
- Breaking Regulatory Barriers for Bridge Inspection: NCDOT and Skydio Secure the First True BVLOS Waiver Under Part 107
- Skydio Introduces The New X2 Family of Drones and Breakthrough Autonomy Software For Situational Awareness and Inspection
- Media Kit
About Skydio Skydio is the leading U.S. drone manufacturer and world leader in autonomous flight. Skydio leverages breakthrough AI to create the world’s most intelligent flying machines for use by consumers, enterprises, and government customers. Founded in 2014, Skydio is made up of leading experts in AI, robotics, cameras, and electric vehicles from top companies, research labs, and universities from around the world. Skydio designs, assembles, and supports its products in the U.S. from its headquarters in Redwood City, CA, to offer the highest standards of supply chain and manufacturing security. Skydio is trusted by leading enterprises across a wide range of industry sectors and is backed by top investors and strategic partners including Andreessen Horowitz, Levitate Capital, Next47, IVP, Playground, and NVIDIA. Media Contact Aircover Communications Morgan Mason [email protected]
https://medium.com/skydio/skydio-announces-the-availability-of-new-enterprise-software-for-skydio-2-and-a-new-training-11bdb668ceb4
[]
2020-12-16 15:38:08.999000+00:00
['Technology', 'Tech', 'Autonomous Vehicles', 'Inspection', 'Drones']
The Venture Debt Investors Journey in Southeast Asia
Looking back at my venture investing career, it’s been 12 years since I hung up my lab coat at A*STAR and Imperial College and switched sides to become a venture capitalist. I started with venture equity, which took me across the Pacific to Silicon Valley and then back to Southeast Asia, where I took on a new challenge as a venture lender. I recall trying to raise venture debt in 2008 for one of my portfolio companies but getting nowhere with the banks. The reason? Lack of financial and operating track record. When I was the Head of EDBI’s US office and sat on the board of our portfolio companies, it was intriguing to learn how pre-profit venture-backed companies strike a judicious balance between their equity and debt capital raises, sometimes concurrently and at times with venture debt alone. Venture debt really started blossoming in Southeast Asia only from 2015. I joined DBS to build a venture lending business that year. I assembled a venture-experienced team that went on to build a well-balanced loan book, but more importantly, my team helped many founders understand the value of venture debt. I then decided to leave corporate banking to start Southeast Asia’s first private venture debt fund with two other co-founders. Genesis Alternative Ventures was born in August 2018, having seen the dramatic rise in demand for venture debt. Founders were beginning to look at debt seriously. A common misconception inaccurately portrays venture debt as convertible debt. Let me try to explain in simple terms what venture debt is, and how it can be a useful financing tool for start-ups. The Genesis of Venture Debt Let’s start with the term venture capital. It’s commonly associated with equity, but in finance 101, capital refers to both equity and debt. Equity is where shares of the company are exchanged for capital. In 2018, Southeast Asia start-ups raised a total of $11 billion in equity, and this amount is expected to grow exponentially over the next few years. 
Early growth start-ups (not Seed, but Series A and beyond) can opt to raise venture debt alongside the equity they raise to augment their cash position. Founders choose to raise venture debt for several reasons: lower equity dilution, and using debt to finance working capital needs (e.g. supply chain, inventory, accounts receivable, etc.). Read More: Launch of Southeast Asia Independent Venture Debt Financing Business So what is venture debt? Venture debt is a form of “risk capital”: primarily debt financing from specialist lenders to pre-profit venture-backed companies with an established business model and clear growth prospects. A venture lender provides the debt financing, which is repaid over a period of 2–4 years and comes with an equity kicker allowing the lender to purchase preferred stock. Who are Venture Lenders? Traditional banks are reluctant to lend to businesses that don’t have a profitability track record or hard assets that can be secured. Start-ups often don’t have these, and this is where venture lenders play a big role. Venture lenders come in two forms: either a bank lender or a venture fund specialising in debt. In the US, Silicon Valley Bank (SVB) is probably the go-to bank lender for venture debt. Genesis Alternative Ventures is established as a specialist venture debt fund and, to be absolutely clear, we only provide debt financing. Genesis collaborates with venture capital funds (VCs) to provide venture debt to their portfolio companies. Why raise Venture Debt? Emerging start-ups often view debt financing as a means to augment equity and a building block towards a balanced capital structure. In the US, where venture debt has been around for more than four decades, start-ups raise venture debt as a form of insurance buffer. Some start-ups could have a slow start to their revenue and require an additional 3–6 months of cash runway to get to their planned targets. 
Venture debt can be a very useful tool for working capital requirements. Financing inventory stock and accounts receivable are excellent use cases of debt leverage. In fact, venture debt in Silicon Valley started life providing much-needed financing for equipment leasing. In 2008, Facebook purportedly raised $100 million from TriplePoint Capital to purchase additional servers, certainly very wise, as raising that amount in equity could have incurred severe dilution. More often than not, start-ups are laser-focused on building products and growth and neglect creating financial discipline within the company. And as one of my portfolio CFOs rightly pointed out, a good venture lender will be able to help whip the financials into shape, and with some debt on the books, the company has the financial rigour that incoming venture investors will appreciate. Which Venture Lender to work with? It really depends. Interview the various venture lenders and have them go through with you what you are looking for and how they can add value to your business. Founders are always going after “smart capital” but would often end up negotiating for bank-loan rates with no strings attached. A good venture lender wants to provide more than just debt. The lender must be able to add value through understanding the business, and that value-add comes in the form of business development, fund raising for subsequent rounds, and helping to refine the way the financial line items work through the financial model. Summary Start-ups operate in a dynamic and uncertain environment where customer loyalty and competition are hard to gauge. Raise an appropriate amount of capital plus some insurance buffer, and raise capital when you don’t need it. Founders often delay the extra fund raise till they are a few months from running out of cash, which puts them in a precarious situation and a financing dilemma. 
In my next article when I find time in-between fund raising, I will go into the details of what a founder should be looking out for when looking for venture debt financing. That’s it for now. Go get those revenues and start fund raising! Reach out to the Genesis Alternative Ventures team if you are keen on venture debt.
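The "lower equity dilution" benefit mentioned above can be illustrated with a toy calculation. All figures here are hypothetical and not from any actual deal, and the small warrant "equity kicker" that usually accompanies venture debt is ignored for simplicity:

```python
# Hypothetical round: a founder who owns 100% pre-round needs $5M.
# Compare raising it all as equity vs. $3M equity + $2M venture debt
# (debt is repaid with interest instead of taking ownership).

def stake_after_equity_round(stake, pre_money, equity_raised):
    """Founder's remaining stake after new shares are issued at pre-money valuation."""
    post_money = pre_money + equity_raised
    return stake * pre_money / post_money

pre_money = 20_000_000.0
all_equity = stake_after_equity_round(1.0, pre_money, 5_000_000.0)
mixed = stake_after_equity_round(1.0, pre_money, 3_000_000.0)  # plus $2M debt

print(f"all-equity raise: founder keeps {all_equity:.1%}")        # 80.0%
print(f"equity + venture debt: founder keeps {mixed:.1%}")        # 87.0%
```

Under these assumed numbers, mixing in venture debt leaves the founder with roughly seven percentage points more ownership for the same cash raised, which is the trade-off founders weigh against the repayment obligation.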
https://medium.com/@genesisalternativeventures/the-venture-debt-investors-journey-in-southeast-asia-e94b7dd742ba
['Genesis Alternative Ventures']
2020-04-23 10:22:32.146000+00:00
['Investors', 'Financing', 'Venture Debt', 'Venture Capital', 'Business']
Feel the Fear and Be Creative Anyway
Five steps to make fear work for you “Everybody experiences fear, but it’s how you respond to the fear that makes the difference.” — Dan Sullivan, The 4 C’s Formula Here are five techniques that can help you stay creative even though you feel uncomfortable. They are all based on feeling your fear, understanding it, and then gradually reframing how you think about it. I’ve tried them in my own anxiety therapy when I suffered from panic attacks and writer’s block, and I use these tools with my coaching clients when they get stuck. Step 1: Know your fear “We have to look at our own inertia, insecurities, self-hate, fear that, in truth, we have nothing valuable to say. It is true that when we begin anything new, resistances fly in our face. Now you have the opportunity to not run or be tossed away, but to look at them black and white on paper and see what their silly voices say.”― Natalie Goldberg, Writing Down the Bones: Freeing the Writer Within Whenever you feel procrastination creeping in while you’re working on something new, don’t turn away. Turn towards it and listen. What are you afraid of? If you don’t give it some room, the fear or block will only get louder. I have a quote above my desk by Buddhist teacher and author Pema Chödrön: “Nothing ever goes away until it has taught us what we need to know.” When my coaching clients are stuck, I ask them to listen and journal about it, as recommended by business coach Gay Hendricks in The Big Leap, or Julia Cameron in The Artist’s Way. Take pen and paper and explore what underlying fears stand between you and your creativity. Finish these sentences with as many reasons as you can think of: I can’t be successful because… I cannot expand to my full potential because… Make this an exhaustive list. How long is it? I get to 15–20 entries myself when I think about my book writing. For example, my fear is that no one wants to read my book — it’s all been said before. 
Here’s what you need to know about your list: These are usually beliefs, not facts. “None of these core negatives need be true. They come to us from our parents, our religion, our culture, and our fearful friends,” says Julia Cameron in The Artist’s Way. Look at your list again. Try to find objective evidence for each one. Are your fears really a fact? Step 2: Work with affirmations Once you have deeply understood your fear, start reframing it. One way of doing that is to rewrite your negative statements into positive affirmations. For example, if you fear that publishing a book will show people that you’re a rubbish writer, your positive affirmation can be: “Publishing a book will show people that I managed to finish a whole book.” Or, “I’m a consistent writer, not a perfect writer.” For over a year, I wrote the affirmation “I, Nicole, am a brilliant and prolific writer.” I wrote that 10 times every morning in my journal. It felt silly at first, but it gave me a sense of hope, optimism, and safety. Positive affirmations are not a magic bullet and it takes time. But it’s a simple, free way of reframing fear. One of my current affirmations is this (since I had a relapse into panic attacks last year): “I accept anxiety as a direct way to listen to my body and inner wisdom.” Panic attacks are not the same as procrastinating on your book. But the principle is the same. If your inner critic shouts at you while you’re writing these affirmations, and you feel embarrassed or silly, then you’re doing it right. Your inner critic wants to keep its job. So you do yours: Keep writing your affirmations. Step 3: Invite your fear Another tool is to invite fear to the table — but not let it take over completely. 
For example, Elizabeth Gilbert recommends in Big Magic to accept that fear and creativity belong to each other: “Your fear will always be triggered by your creativity because creativity asks you to enter into realms of uncertain outcome.” Gilbert makes space for her fear and invites it along on her writing journey. She even wrote a letter to fear: “Dearest Fear: Creativity and I are about to go on a road trip together. I understand you’ll be joining us, because you always do. I acknowledge that you believe you have an important job to do in my life, and that you take your job seriously. Apparently your job is to induce complete panic whenever I’m about to do anything interesting — and, may I say, you are superb at your job. So by all means, keep doing your job, if you feel you must. But I will also be doing my job on this road trip, which is to work hard and stay focused.” — Elizabeth Gilbert, Big Magic Take pen and paper and write a letter to your fear. Adjust the letter to your situation. You might be an artist. You might be an entrepreneur. Create your personal letter to fear, take a deep breath, and let it sink in. Stick the letter above your desk and read it every day. Accepting that fear will accompany you on the ride is the most effective way of letting it go. Step 4: Reframe fear as courage It might help you to tell yourself that fear is a way to develop courage and new capabilities. If we always work within our comfort zone, we’ll never progress. “You may not yet have the capability or confidence to pull it off, so when you go into action anyway and start moving toward the result, that takes courage. It doesn’t mean you feel good about it, but you’re going to persist until the new capability and confidence actually come into play,” explains business coach Dan Sullivan in The 4 C’s Formula. Fear is a sign that you’re learning. You’re courageous because you keep going. 
You can use this idea in your journaling when you feel procrastination and fear creeping in. Write about what your fears are, and write down what you’re learning at the moment, where you’re developing new capabilities. Acknowledge that, if you keep going with your project, you show immense courage. Step 5: Make the mountain smaller Reframing how you think about fear is complex and takes time. It’s a mindset shift. It won’t happen overnight. So, here’s a simple action you can take right now: make the mountain smaller. Break down all your tasks into tiny steps and use micro goals. Micro goals are simple, concrete tasks that can be completed in 30 seconds or less. For example: Go over to my desk and sit down. Open my laptop. Open the file I’m writing on. Scroll down to the right section. Write one word. “It is daunting to think of finding time to write an entire novel, but it is not so daunting to think of finding time to write a paragraph, even a sentence,” says Julia Cameron in The Right to Write. The same goes for new projects in your business. And while you keep working away on small chunks of the mountain, you’ll gradually see that by working with the fear by your side you’re doing a hard thing, as Glennon Doyle says in Untamed. But you're doing it.
https://betterhumans.pub/feel-the-fear-and-be-creative-anyway-e435b1a3c69
['Janz Is Writing']
2021-09-13 08:33:15.771000+00:00
['Procrastination', 'Anxiety', 'Journaling', 'Fear Of Failure', 'Creative Process']
What I learned about the Corona vaccine in an interview with a mRNA researcher
Corona is already a very controversial topic; with the approval of the vaccine, it has become even touchier to talk about. Aside from the ethical and moral questions, there are facts. BioNTech and Pfizer have launched one of the vaccines that has passed clinical trials in several countries. The vaccine is based on mRNA technology. With the approval suddenly came a lot of concerns about this technology, questions like: “Does it now interfere with my DNA, and can I become infertile from a vaccination?” As anybody would, I also asked myself how much truth there is to these rumours; after all, that vaccine will be injected into my body. So I decided to do an interview with an expert from the University of Zurich, Steve Pascolo. And what I can tell you in advance: there is practically nothing to the rumours. Steve Pascolo specialises in molecular biology as a tool for improving health. He has been researching RNA and the technology since 1996. He was kind enough to tell me honestly and openly about his experiences with the technology. As he told me how mRNA works, I came to understand how complicated the technology is. The CDC explains it so that everybody can understand it. Here is a snippet of their explanation: mRNA vaccines have strands of genetic material called mRNA inside a special coating. That coating protects the mRNA from enzymes in the body that would otherwise break it down. It also helps the mRNA enter the dendritic cells and macrophages in the lymph node near the vaccination site. mRNA can most easily be described as instructions for the cell on how to make a piece of the “spike protein” that is unique to SARS-CoV-2. Since only part of the protein is made, it does not do any harm to the person vaccinated but it is antigenic (….) My two main questions were: does it affect the DNA, and could it make me infertile? His answer was a clear “NO”. He explained to me that the mRNA in the vaccine does not affect the DNA. 
With DNA viruses, in contrast, this could be the case. By the way, the rubella, mumps and measles vaccines are a kind of mRNA vaccine: they are attenuated viruses, but they are still alive. They release their RNA into our cells. The cells then produce a protein and trigger an immune response. The synthetic mRNA vaccine triggers the exact same response. He assured me that the mRNA vaccine is very safe. But it is true that every vaccine triggers some kind of immune reaction; the body’s reaction can lead to symptoms like fatigue and fever, and in some rare cases the immune reaction can also lead to problems. That risk exists with every vaccination. He explained to me that there is no reason to believe that the vaccine is problematic during pregnancy. On the contrary, he suspects that the viral infection itself could be a problem during pregnancy. However, pregnant women who want to be vaccinated should wait for the approval of the authorities. When asked if he would vaccinate himself and his family, a quick and direct answer came: “Yes, of course.” He wants his family and friends to be vaccinated with the very safe vaccine. The vaccine contains only 30 micrograms of mRNA, which is a miracle, he explained to me. He is convinced that thanks to the vaccination the pandemic will be over and we can return to a normal life. It is also amazing that he already received mRNA injections 15 years ago. He wanted to try out for himself the mRNA he used for the vaccine he was working on. He did not have any problems with it and did not suffer any sequelae. So he also tells me: “I can say for sure that the technology works and is safe. I mean, I have tried it out myself and I’m still alive.”
https://medium.com/@sara-aurora/what-i-learned-about-the-corona-vaccine-in-an-interview-with-an-mrna-researcher-115ea98e3631
['Sara Aurora']
2021-02-10 09:20:02.595000+00:00
['Interview', 'Expert', 'Coronavirus', 'Vaccine', 'Coronavirus Covid19']
An Alternative Action Plan to Avoid the Circularity Trap?
Author: Hanna Helander
In 2015, the European Commission launched “Closing the loop — An EU action plan for the Circular Economy”. It states that the circular economy will create jobs and “help avoid the irreversible damages caused by using up resources at a rate that exceeds the Earth’s capacity”. Sounds like a concept worthwhile pursuing! But how does this magic pill work, and how can we make sure it does not have side effects? Obviously, the communication from the Commission is a result of political negotiation between stakeholders with a wide range of interests. At the same time, the circular economy has roots in Industrial Ecology concepts about how to design material flows in our industrial societies. By seeing society as a socio-economic metabolism, we conceptually linked environmental pressures to the well-known circular economy principles (e.g. reuse, recycle and reduce) and identified the necessary changes in material flows to reach the objective of decreased environmental pressures. The idea is rather simple. We looked at society and all its activities as if it were an organism using materials and other resources from the surrounding environment; we call these input flows to the society. The society also produces output flows; these consist of waste and emissions. Input and output flows comprise the pressures on the environment. Depending on what kind of flow it is, the environment can deal with it to a certain extent. For instance, CO2 would not be a problem were it not released in such big quantities; nor would nutrients, if they did not accumulate in our seas. The same goes for input flows; the consequences of input flows of wood, sand and water all depend on the magnitude of extraction. Fossil-based materials are of course particularly critical. The bottom line is that we need to decrease input and output flows to stay within the Earth’s capacity and thus enable a continuously prospering human population. 
The main idea of a circular economy is the circulation and maintenance of resources within the society to decrease input and output flows So how can the circular economy help us do something about this? The idea is to circulate materials and resources within societies, and thus replace input flows with secondary materials while at the same time decreasing output flows as they are redirected back into the economy. Or, as some researchers stress, we need to prolong the lifetime of products and materials, increase resource efficiency in all possible ways, and share resources to a higher extent to meet our needs with less resource use. Then we asked how we could assess the circular economy in a way that captures this goal of decreased environmental pressures. We started by investigating what other researchers have suggested for monitoring the circular economy and to what extent their suggestions capture input and output flows. Given that materials can neither disappear nor emerge, we systematically assessed whether the indicators carry information about input and output flows. We concluded that most of them do not, or only to a limited extent. The reasons why, for instance, increased recycling rates or resource efficiency do not necessarily result in decreased net environmental pressures are various. For instance, increased resource efficiency may cause rebound effects resulting in increased production and consumption. An example is from the metal scrap market: if we decrease the metal use in one product, it may result in an increase of this metal in another product due to market dynamics. Likewise, price fluctuations, substitution of materials and other mechanisms can lead to burden shifting. Moreover, recycling most often means downcycling: the materials cannot be used for the same purposes as they are mixed or contaminated with other materials. 
Even if they could, it is important to keep in mind that as long as societal stocks are growing, material recovery will never meet the demand for new materials. This would require a stable-stock economy with 100% recovery and no quality loss, a perpetual motion machine that only serves as a theoretical benchmark. Thus, there is no guarantee that recovered materials replace raw material extraction and in turn decrease the input flows. The conceptual idea of a circular economy is, however, still worth pursuing. Nevertheless, strategies need to be assessed in terms of environmental pressures. Otherwise we risk falling into the circularity trap: the belief that any activity labelled circular economy will help us stay within planetary boundaries and sustain human life on earth. Therefore, our action plan includes a stronger focus on the environmental objectives of the circular economy. Based on these, we can prioritize between strategies and identify effective points of intervention to decrease input and output flows. To assess which strategies comply with this requirement, consumption-based environmental footprints offer a useful tool, for which system boundaries need to be defined with great care. Only in this way can we know if the circular economy actually “help[s] avoid the irreversible damages caused by using up resources at a rate that exceeds the Earth’s capacity”. The next step in our action plan is to identify effective points of intervention for the food sector. So stay tuned!
https://medium.com/@circulusresearch/an-alternative-action-plan-to-avoid-the-circularity-trap-aa97bf392938
['Circular Economy Stories']
2019-07-25 10:00:53.280000+00:00
['Circulareconomy', 'Global Warming', 'Sustainability', 'Resources', 'Climate Change']
Turkey authorizes 18-month extension of Libya troop deployment
Turkey authorizes 18-month extension of Libya troop deployment President Erdogan meeting with Libya’s GNA head Fayez al-Sarraj in Istanbul, Turkey. Turkey’s parliament on Tuesday authorized an 18-month extension of its troop deployment in Libya in support of the Government of National Accord (GNA) in Tripoli. Turkey’s support for the GNA in Tripoli helped stave off an offensive launched by eastern commander Khalifa Haftar in April 2019. The sides struck a ceasefire agreement in October, formally ending the fighting and setting the stage for elections at the end of next year. Turkey’s presence in Libya is linked to its broader interests in the eastern Mediterranean, where it is hunting for natural gas in disputed waters claimed by Cyprus and Greece. Ankara struck an agreement with the GNA leadership in November 2019 that extended Turkey’s maritime claims in the Mediterranean in exchange for military support. Turkey’s parliament authorized the first one-year troop deployment to Libya in January of this year.
https://medium.com/@deserttalks/turkey-authorizes-18-month-extension-of-libya-troop-deployment-1e4b49b9a54b
['Desert Talks']
2020-12-25 08:00:40.173000+00:00
['Military', 'Türkçe', 'Greece', 'Libya', 'Maritime']
I’m 35 and I may suddenly have lost the rest of my life. I’m panicking, just a bit.
It’s been a while since I put a piece of writing in the public domain, but suddenly I have a lot to get off my chest, well, my colon actually. Just three weeks ago life was good. Correction: it was awesome. The newest addition to our family had arrived on Christmas Eve, joining his two sisters aged 5 and 3. A month later we were on a plane home to Sydney, having spent four great years working for Google in California. My beautiful wife had been working at a startup on NASA’s Moffett campus and was worried about finding something equally interesting in Australia, but she managed to land a very similar gig with an innovative logistics start-up in Sydney. We’d come back primarily to be closer to family, but also to pursue a dream of setting up a family farm in partnership with my parents — intended as a great place to bring up our three kids but also as a new sideline income stream. We’d spent every weekend scouring Sydney for areas that met our criteria (good schools, commutable, cost of land etc) and we were settling on Kurrajong in Sydney’s west. I was just getting into a training routine for the City2Surf run, having done the Monterey Bay half marathon a few months prior. I’m 35 years old. On July 19th I went for what I thought would be a routine GP visit. In my mind it was primarily to re-establish a GP relationship in case my kids needed an urgent care visit (the practice is literally around the corner from our place). I’d also noticed a bit of unusual bleeding from, well, my back passage and very recently a change in bowel habit. I wasn’t alarmed by either of these symptoms but my GP was concerned enough to refer me for a colonoscopy. So began the roller coaster. I’ll skip much of the detail but in short the colonoscopy revealed a lesion which they believed to be malignant. This was confirmed by a biopsy three days later.
A CT scan revealed suspicious swelling in the surrounding lymph nodes so I was booked for a PET CT scan — where radioactive sugar is injected prior to a CT scan to highlight cancer in your body. That PET scan changed the game because it not only confirmed cancer in the surrounding lymph nodes but it also found two smaller tumors in my liver, which the original scan had not identified. So, by August 2nd I was confronted with a stage 4 colorectal cancer diagnosis. Now for those (like me) who are not all that familiar with cancer, you never just “have cancer”. Cancer is really the name given to a broad family of related diseases. There is also a fairly well established methodology for measuring the progress of any cancer. Here’s the short version:

Stage 1 — this usually means that a cancer is relatively small and contained within the organ it started in.

Stage 2 — this usually means the cancer has not started to spread into surrounding tissue but the tumor is larger than in stage 1. Sometimes stage 2 means that cancer cells have spread into lymph nodes close to the tumor. This depends on the particular type of cancer.

Stage 3 — usually means the cancer is larger. It may have started to spread into surrounding tissues and there are cancer cells in the lymph nodes in the area.

Stage 4 — means the cancer has spread from where it started to another body organ. This is also called secondary or metastatic cancer.

These days having a stage 1 cancer is generally no big deal.
Stage 4 however is not too good at all. Doctors use ‘survival curves’ — survival statistics for people with your cancer and your stage of progression — to provide some kind of prognosis. In my case, most published survival curves suggest that only 10% of people are still alive 5 years post diagnosis. Now, I’ve since learnt that there are many reasons not to focus too much on these statistics. My prognosis is likely better (none of my doctors will venture a guess) but it is no better than 50/50. And even if I live beyond 5 years, my life expectancy as a survivor of metastatic cancer will almost certainly be much curtailed. Over the next 6 months I’ll be doing radiotherapy and chemotherapy, and at some point I’ll have two surgeries, one to remove a section of my colon and the other to remove two chunks from my liver. This for a guy who’s never been seriously sick in his adult life, and who happens to have a major needle phobia. I know this is so clichéd, but life really can change overnight. Suddenly I can’t be sure I’m going to see my new son’s 5th birthday, or even his 2nd. It’s now highly unlikely I’ll see even my eldest daughter get married. I probably won’t know what careers my children pursue. And as for my own career, it’s come to a screeching halt. I’m also struggling to imagine what it might look like in future if I do manage to survive this, because my outlook on life is already so fundamentally altered I don’t think I can just go back to my old world. Life now is a weird kind of duality. On the one hand I must be optimistic — I must believe I can beat this cancer to get through the next six months. On the other hand I need to be pragmatic and prepared for a scenario where the treatment is unsuccessful and I’m told one day that I have X months to live.
As a husband and dad of three kids under five that scenario is obviously terrifying, but it also has many practical implications that I feel I need to prepare for — financial readiness, a mechanism to ensure my children remember me, legal authority for my wife over our assets, etc. One of the things I’m struggling with most is this concept of legacy. I’m a planner. Before this diagnosis I’d been thinking of my first 35 years — aside from being a ton of fun and travel — as preparation. I felt like I was building a platform (savings, networks, skills, experience) that I could then use in my second act to make a real contribution, to “make my mark”, to build a real legacy for my kids. Perhaps that was a mistake on my part, because I may have no time to do that now. I guess I’m panicking a little. I feel like I have so many messages to deliver to the blissful masses from my now precarious vantage point, from the importance of early precautionary doctor visits to the merits of life insurance. But putting pragmatism aside, there is one thing I’d urge everyone to do: stop just assuming you have a full lifetime to do whatever it is you dream of doing. I know it sounds ridiculously clichéd, and of course you never think it will happen to you, but let me assure you that life really can be taken from you at any time, so live it with that reality in mind. Oh, and please stop complaining about the small stuff!
https://sgriddle.medium.com/im-35-and-i-may-suddenly-have-lost-the-rest-of-my-life-i-m-panicking-just-a-bit-35d6a28dcbc
['Scott Riddle']
2019-02-12 10:20:54.177000+00:00
['Cancer Survival', 'Cancer', 'Living With Purpose', 'Death And Dying', 'Life Lessons']
Why we need SIP for Messaging
TL;DR

Enterprises are struggling to get multiple vendors’ messaging tools (e.g. service desks, chat bots, in-app notifications) to work together, and an equivalent to the SIP standard is needed in the messaging space to foster vendor interoperability.

Background

The SIP REFER specification was released nearly 20 years ago. It allows for many different use cases in the telephony space. Importantly for this article, it allows multiple systems from different software vendors to interoperate when handling a phone call to an enterprise. A typical call flow may look like the following. The customer calls a toll-free number and the call is answered by a traditional Interactive Voice Response (IVR) system. We all know what these feel like: press 1 to talk to sales, 2 to talk to support, etc. Once the customer has fought their way through the organization’s IVR they may end up waiting for a call center agent; at this point the IVR has handed the customer off to a different system. These systems are called Automatic Call Distributors (ACDs) and they queue the caller until an appropriate agent is available. Eventually the customer gets through to an agent and gets help with part of the query, but the customer also needs to talk to a different agent in a different group to finish what they need to do, so this agent forwards the call to a different group. The customer now gets another IVR with different options before finally ending up with a different agent attached to a different ACD. We can certainly debate how good the customer experience is through all this, but the customer was able to stay on one phone call while being passed between multiple IVRs and ACDs, and each of these IVRs and ACDs can be provided by a different vendor. SIP REFER likely powered all these hand-offs and is capable of passing the conversation context around (e.g. who is on the call) via a common session id.
SIP and SIP REFER allow an enterprise to choose best-of-breed solutions and, more likely, to deal with the heterogeneous environments that it faces due to acquisitions, mergers, department budget autonomy, etc.

Messaging

So now let’s contrast this with messaging, and when I say messaging I mean real-time (ish) chat: the kind that we’re all used to with SMS, Apple Business Chat, Facebook Messenger or WhatsApp. More enterprises are turning to this as a means of supporting their customers, and their customers generally prefer this way of interacting as they aren’t stuck waiting on a phone. They can send in a message and maybe get a response right away, or maybe they need to wait for a response, but they can go about their busy daily lives while periodically checking in on the conversation. The technologies handling these messaging conversations are maturing fast, and there are bot providers, service desk providers, and marketing and sales tools providers all making a play to be the system that handles the “call”. A plethora of chat widgets have sprung up on web sites, enterprises are communicating with their customers over WhatsApp and Apple Business Chat, and they’re all having some success. The challenge is that they’re having success in pockets and, even though the technology is new, large enterprises have already reached a point of technology fragmentation. Different departments chose different vendors: one vendor is chosen for the service desk, a different one is chosen for sales, and a chatbot is bought from yet another provider. None of these providers’ solutions interoperate with each other in any kind of standard way. Indeed many vendors are looking at deliberately creating walled gardens with proprietary approaches to integration, leaving enterprises either rolling their own or having to pick single-vendor solutions.

Messaging Payloads

Making matters somewhat harder is the fact that the different messaging front ends that customers interact with (e.g.
WhatsApp, iMessage) all have slightly different APIs. While there is similarity, there’s no interoperability, for example, between Messenger’s APIs and iMessage’s. The payloads are all slightly different. This problem is compounded further by the proprietary chat widgets that are springing up on websites and in mobile applications’ menus. It’s often the case that the APIs behind these widgets aren’t even documented, let alone made available to 3rd parties.

What Is Needed

So what is needed to help enterprises? I have been talking to vendors in this space for many months (as well as some of the channel providers) and I think a few things are clear. This is not a problem to be solved by the channel providers, e.g. Facebook Messenger or Apple Business Chat. While Facebook has done a great job with its Messenger handover protocol, it’s only solving the problem for Messenger and it’s doing so in a very proprietary way. It doesn’t help vendors and enterprises support other channels and it does nothing for the proprietary web widgets. Standardizing API payloads isn’t the immediate challenge. While it would be nice for some HTML-like standard to emerge that all the messaging providers utilized and that was adopted by all the widgets, it’s not the immediate need for interoperability. The parallels with voice and telephony are everywhere, and the community should be looking to both SIP REFER and to codec negotiation as inspiration. Open source or a standard has a role to play in solving this problem. Over the next few posts I will outline more detailed technical proposals. I am eager to hear from members of the community who share the belief that standards and/or open source are required in this space to provide enterprises the flexibility that they need to build a modern messaging solution made up of best-of-breed solutions provided by multiple vendors.
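To make the REFER analogy concrete, here is a minimal sketch of what a vendor-neutral, REFER-style handoff event for messaging might look like. This is purely hypothetical: no such standard exists yet (that is the point of this post), and every field name, identifier and URL below is illustrative only.

```python
import json

def build_handoff(session_id: str, refer_to: str, context: dict) -> str:
    """Serialize a hypothetical 'refer' event: which conversation is being
    handed off, which system should take it over, and the context gathered
    so far -- the messaging analogue of SIP REFER's Refer-To header and
    common session id."""
    return json.dumps({
        "event": "refer",
        "session_id": session_id,  # shared id so every vendor tracks one conversation
        "refer_to": refer_to,      # the next system, like SIP's Refer-To
        "context": context,        # e.g. customer identity, intents captured so far
    })

# A chatbot handing a customer off to a (made-up) human-agent queue:
payload = build_handoff(
    "conv-123",
    "https://acd.example.com/queues/billing",
    {"customer": "anon-42", "intent": "billing-dispute"},
)
print(json.loads(payload)["refer_to"])
```

The key design point, as with SIP, is the common session id: whichever vendor receives the handoff can correlate the conversation without sharing a database with the sender.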
https://medium.com/ibm-watson/why-we-need-sip-for-messaging-87df21995fb
['Rob Yates']
2020-10-12 12:27:28.177000+00:00
['Sip', 'Chatbots', 'Editorial', 'Wa Editorial', 'Watson Assistant']
The One Mistake That Is Holding You Back
The One Mistake That Is Holding You Back

Why do some people achieve their goals while others don’t?

Photo by Nathan Dumlao on Unsplash

Simple Experiment, Strange Results

Jerry Uelsmann was a University of Florida professor who once performed a rather strange experiment with his students. In this experiment, Uelsmann divided his film photography class into two groups: On the left side of the room, he labelled one group the quantity group. They would be graded only on the quantity of work they produced—the greater the number of photos submitted by each student in this group, the higher their grades. On the right side of the studio, the second group of students was labelled the quality group. They would be judged only on the quality of work they produced. Unlike the “quantity” group, to get the highest grade these students were only required to submit one nearly perfect picture. After a few weeks of competition, Uelsmann graded the students in both the quantity and quality groups. The results were astonishing: the best photos were all produced by the quantity group! As Uelsmann looked into the reasons behind these unusual findings, the facts started to emerge. Whilst the quantity group were busy taking photos, learning from their mistakes and improving the quality of their work, the quality group sat around pondering how to create the perfect picture, procrastinated on taking action, and in the end produced mediocre results. The key observation Uelsmann discovered was this:
https://medium.com/illumination-curated/the-one-mistake-that-is-holding-you-back-2a7e316b11e2
['Younes Henni']
2020-10-09 08:55:33.817000+00:00
['Life Lessons', 'Motivation', 'Productivity', 'Personal Development', 'Self Improvement']
What really matters if you started your company early?
Even if we talk and study, day and night, about having a company or a startup, what really matters to me is that I am employing people rather than firing them (at the age of 17) during this pandemic. If you compare me to the big #employers of our country, then you can find me nowhere. Still, I am proud to say that during a time when companies were laying people off, I started something which, in turn, became an employer to tens of people. I am just 17. I don’t know much about how global complexities work or what the perfect revenue-to-expenses ratio should be, as taught in the masters. I just know that I saw something cool on the internet and tried to live a life like that. Then, in turn, I got deeper into it and in 5 years, I built a company worth 15 million rupees. If you ask about responsibilities, I can frankly say that I am still not that perfect executive officer that everyone thinks. I still rely on my core team to learn more about my own business. But, after seeing this tremendous lack of jobs and opportunities for people across the country, I am very clear about what to do next. Within the next 2 years, I aim to grow this company from 15 million to 150 million, so that it can employ not just tens of people but hundreds and thousands of #people.
https://medium.com/udit-akhouri/what-really-matters-if-you-started-your-company-early-12e74412eedb
['Udit Akhouri']
2020-12-08 21:35:06.004000+00:00
['Company Culture', 'Employer', 'Startup', 'Teamwork', 'Startup Lessons']
Web Design Essentials For Disabled Audience
Web Design Essentials For Disabled Audience

Having a flawless business website is one of the crucial things these days in order to survive in the competitive market. The specialists for web design services in Reno focus on the following design essentials in order to build an impeccable website for a disabled audience:

Support Screen Readers

Screen readers are programs that let visually impaired or blind users read and understand content. They are an absolute must for a fully accessible business website. This means that you not only need alt text for every image, but also that every interactive element and section must clearly let the audience know precisely why it is there and what its function is.

Turn Away From Automatic Media

Do you find it irritating when you land on a webpage and media begins to play automatically? Now imagine what that is like for somebody with a disability. It turns out to be a genuine concern, particularly if they were not expecting it, because they can become confused by an unexpected sound. The experts of every reputed web design agency in Reno always avoid doing this. As an alternative, always prompt users before playing any sort of media.

Test Everything

At last, testing everything thoroughly is the sole approach to recognize whether the website is really accessible for every user. For instance:

Load it on a mobile browser

Navigate and interact with it using just the keyboard

Use it with a screen reader

Zoom in and out to see how everything is rendered

Remember, accessibility extends past a website and into other channels such as mobile apps owned by a brand. Make sure you hire professionals of Stack Mode, a leading Web Design Company in Nevada, to ensure you have everything covered appropriately.
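The alt-text requirement for screen readers can even be audited automatically. Here is a minimal sketch using Python’s standard html.parser; the file names in the example are invented for illustration:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that are missing a non-empty alt attribute --
    exactly the images a screen reader cannot describe to the user."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="hero.jpg">')
print(checker.missing)  # -> ['hero.jpg']
```

A check like this catches missing alt text early, but it is no substitute for actually using the site with a screen reader as suggested above.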
https://medium.com/@stackmodemarketinggroup/web-design-essentials-for-disabled-audience-d23544d7a63d
['Stack Mode']
2020-12-09 15:18:10.329000+00:00
['USA', 'Website', 'Web Design', 'Information Technology', 'Web Development']
The Best USB Flash Storage for Creatives
In our daily lives, we work on so many different types of projects, and with all of those projects come multitudes of data that we need to save. Thankfully, the days of floppy disks are long gone and now most external storage is done using USB flash drives. While they can be small, they can also hold a mountain of data, from photos to video and everything in between, so when we set out to create our list of the best USB flash storage for creatives, we wanted to make sure our choices were the best out there. Before we reveal our choices, however, there are some things you should consider before you decide to buy a USB flash storage device. So without further ado, let’s get started!

What to consider before buying a USB flash storage device

We know what you’re thinking: you can go just about anywhere and buy a USB flash storage device these days to save your work. While that’s pretty accurate, when you need to save important files and projects, just any drive won’t do. Here are the things you should consider before purchasing one.

Storage Capacity

When it comes to choosing the right storage capacity for your flash drive, it isn’t the size of the flash drive that is important but what you intend to use it for. Flash drives come in a multitude of sizes, from 1GB to 1TB and beyond. If you’re using a flash drive to save documents and spreadsheets, then a 1GB flash drive may give you enough storage. However, if you’re saving video, photos or illustrations to your flash drive, which can all be very large files, choosing a flash drive of 1TB or more is probably the best option.

Transfer speed

The faster you can get something done, the better, and this is no different when it comes to saving files to your USB flash drive.
The speed at which your files are saved to your flash drive largely depends on which generation of connector your flash drive uses. For example, the standard connector on many USB flash drives is 3.0. This comes with a decent amount of transfer speed at 5Gbps. The next step up from there is 3.1, or 3.1 Gen 2. These are becoming more and more common and can offer transfer speeds up to 10Gbps.

Security

USB drives can sure be convenient, but that convenience can also come with some security risks. Their small size can cause them to get lost easily. They can also be hard to track physically, and many companies have banned the use of flash drives for this reason; they can also be used to transfer malware from one computer to another. While the size of a flash drive isn’t likely to change anytime soon, security measures such as software and hardware encryption can help prevent unauthorized access. Full disk encryption programs offer on-the-fly encryption of removable media to keep it protected. A built-in keypad is sometimes used as an option; in this case, users need to enter a PIN to gain access to the drive. While having data security on your flash drive can increase the cost of the drive, that cost is nothing compared to what it may cost you if the drive fell into the wrong hands.
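To put the transfer-speed numbers above into perspective, here is a rough back-of-the-envelope sketch. The 50% efficiency factor is an assumption for illustration only; real-world throughput varies by drive and is usually well below the interface’s rated maximum.

```python
def transfer_time_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.5) -> float:
    """Estimate how long moving size_gb gigabytes takes over a link rated
    at link_gbps gigabits/sec, assuming only a fraction of the raw rate
    (efficiency) is actually usable."""
    usable_gb_per_sec = link_gbps / 8 * efficiency  # gigabits -> gigabytes, minus overhead
    return size_gb / usable_gb_per_sec

# 50GB of video over USB 3.0 (5Gbps) vs USB 3.1 Gen 2 (10Gbps):
print(transfer_time_seconds(50, 5))   # -> 160.0 seconds
print(transfer_time_seconds(50, 10))  # -> 80.0 seconds
```

Doubling the link speed halves the transfer time, which is why the Gen 2 interface starts to matter once your project files run into the tens of gigabytes.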
The best USB flash storage for Designers & Creatives

The Best USB Flash Storage Compared

Best Value: WD 1TB My Passport Portable External Hard Drive, Black (WDBYVG0010BBK-WESN) | USB 3.0 | Type-A | 1TB — 5TB | Hard Disk | $53.00

Fastest USB: Samsung T5 Portable SSD 1TB (MU-PA1T0B/AM) | USB 3.1 (Gen 2) | Type-A / Type-C | 500GB — 2TB | SSD | $168.94

Smallest USB: SanDisk 16GB Ultra Fit USB 3.1 Flash Drive (SDCZ430–016G-G46) | USB 3.0 | Type-A | 16GB — 256GB | Flash | $6.29

Best Rugged: LaCie Rugged Triple 1TB External Hard Drive Portable HDD | USB 3.1 | Type-A / Type-C | 1TB — 5TB | Hard Disk | $120.60

Best for Apple: SanDisk Ultra Flair 16GB USB 3.0 Flash Drive (SDCZ73–016G-G46) | USB 3.0 | Type-A / Type-C | 16GB — 128GB | Flash | $6.29

Patriot 256GB Supersonic Rage Elite | USB 3.0 / 3.1 | Type-A | 128GB — 1TB | Flash | $44.99

A small flash drive with great performance

Interface: USB 3.0 | Connector: Type-A | Capacities: 128GB — 1TB | Storage Type: Flash

A great little flash drive, the Rage Elite from Patriot comes in a wide range of capacities, which is great for all kinds of data, especially if you’re working with photo or video files, which can be quite large. Losing the cap on top of the connector is no longer a problem as the connector retracts with a simple push. If you suffer from slower internet speeds, it’s likely that access to cloud storage is limited as well, so having lightning-fast transfer speeds can make running large programs such as Adobe Creative Suite or backing up a ton of files a breeze. The Rage Elite’s internal design pushes the flash storage to its limit with speeds of up to 300MB/sec, which outpaces many hard disks. Price: $44.99 on Amazon.
The best choice for Apple laptops

Interface: USB 3.0 | Connector: Type-C | Capacities: 16GB — 128GB | Storage Type: Flash

Did you buy a new MacBook or MacBook Pro recently and had to give up a lot of your old USB storage because the new Macs don’t use Type-A ports? Well, we have some good news: the Type-C ports on the MacBook have a flash drive of their own. With the Ultra USB-C from SanDisk, you can plug in your flash drive without the need for a dongle, dock, or converter. With a capacity range of 16GB to 128GB, the Ultra USB-C boasts impressive read and write speeds of up to 150MB/sec. A Type-C USB flash drive at an affordable price? It sounds like a good idea. Price: $6.29 on Amazon.

The best flash drive for keeping your data safe

Interface: USB 3.0 | Connector: Type-A | Capacities: 4GB — 128GB | Storage Type: Flash

Depending on the type of work you do, your projects and portfolio can quite literally be your livelihood. If that is true, why would you not want a flash drive that protects your data from many different threats? The IronKey D300SM from Kingston is one of the most secure flash drives you can get. The drive comes with a waterproof design, but that’s only the beginning of the security features. The drive is also password protected with 256-bit encryption, which complies with the FIPS standards that many corporate environments demand, and all of the decryption is done on the drive itself.
Speeds vary by capacity, with the largest 128GB model having the fastest write speed at 250MB/sec. Regardless of the transfer speeds, the security offered by the IronKey D300SM makes it worth it every time. Price: $81.75 on Amazon.

The fastest USB storage you can buy

Interface: USB 3.1 (Gen 2) | Connector: Type-C | Capacities: 500GB — 2TB | Storage Type: SSD

While it’s not technically a flash drive, the Samsung Portable T5 is an external solid-state drive and uses the same technology that your computer uses for fast storage. It ends up being faster than any other kind of USB storage and makes the most of the high bandwidth that USB 3.1 provides to deliver transfer speeds of up to 550MB/sec. Because of this high performance, it’s also much more than a standard flash drive. The speed this device provides isn’t meant only for backing up your files; it can be a way to work on large files directly from the T5 itself. So while it’s not a standard flash drive, if your work means big files, you can’t go wrong. Price: $168.94 on Amazon.
The USB hard disk that’s the best value for the money Interface: USB 3.0 | Connector: Type-A | Capacities: 1TB — 5TB | Storage Type: Hard Disk If you do any type of video work, you know how large some of those files can get. While the technology behind hard disk drives may be ancient compared to current USB flash storage technology, you can’t argue with the value that the My Passport offers in terms of the capacity you get for the price. While the larger, 3.5-inch models have the largest capacities, they require an external power supply. The 2.5-inch model is carried easily, powered by USB, and with up to 5TB available, it offers massive storage you can fit in your pocket. With transfer speeds around 100 MB/sec, it will take some time to fill. If you’re looking for a way to do a massive backup of your data over USB rather than the fastest transfers, the My Passport is an inexpensive option.

The tiniest USB flash drive out there Interface: USB 3.0 | Connector: Type-A | Capacities: 16GB — 256GB | Storage Type: Flash USB flash drives can come in all shapes and sizes these days. There are even flash drives shaped like your favorite science fiction characters. While these can be a great novelty item, they aren’t the best option if you’re in the middle of a board meeting. The Ultra Fit USB 3.1 Flash Drive from SanDisk combines great storage capacity with a minimalistic design.
About the size of a thumbnail, the drive all but disappears when it’s plugged in; if it weren’t for the small lip at the end that makes it easy to remove, you might forget it’s there. Just because it’s small in stature doesn’t mean the transfer speeds are lacking anything. With transfer speeds up to 130 MB/sec, it truly is a tiny dynamo.

A great drive if you work in the great outdoors Interface: USB 3.1 | Connector: Type-C | Capacities: 1TB — 5TB | Storage Type: Hard Disk While we have been featuring many different options for storing your data using flash storage, all of the transfer speed in the world won’t help you if you drop your flash drive in a puddle or get caught in the rain. This is why the Rugged USB-C hard drive from LaCie is such a good option. With its orange rubber case, the drive can survive an accidental drop where other drives or other flash devices may not be so lucky. According to the manufacturer, the drive can withstand being run over by a 1-ton car, is water- and dust-resistant, and can survive a drop from three feet. If you do a lot of work in the wild, maybe as a photographer, having a durable drive you can save your work to without worrying about any damage it might take lets you put all of your focus on the work itself.
A solid and affordable flash drive Interface: USB 3.0 | Connector: Type-A | Capacities: 16GB — 256GB | Storage Type: Flash Cheaply priced flash drives are often dismissed as not worth even their modest cost because of the limitations they’re perceived to have. This is not the case with the DataTraveler 100 from Kingston. With its solid build and very affordable price, it can be a great option for saving documents or even as a backup for your work. Read/write speeds of 150 MB/sec and 70 MB/sec easily make this drive worth every penny.
The most affordable flash drive available Interface: USB 3.0 | Connector: Type-A | Capacities: 16GB — 64GB | Storage Type: Flash Offering you plenty of storage, the CZ80 flash drive from SanDisk is one of the best flash drives on our list. Not only is the price so affordable that you could buy them in bulk, but the performance it provides is anything but cheap. With fast transfer speeds, the CZ80 is one speedy little flash drive. The connector slides in and out of the stick when it’s not being used, which protects it and minimizes the risk of it getting damaged. Overall, this USB flash drive outperforms many others on our list.

A great flash drive for students Interface: USB 3.0 | Connector: Type-A | Capacities: 32GB — 256GB | Storage Type: Flash While it may not have the fastest speeds on our list, the Turbo from PNY, one of the biggest names in flash drives, is a great option for students looking for a drive with large capacities at an affordable price. With a capped design to help protect the connector and read/write speeds of 80 MB/sec and 20 MB/sec respectively, it can be a great option for any student in need of a way to save their data.
The best USB flash storage for creatives When you’re in the market for a new USB flash storage device, it can be very easy to pick the first one you see. With our list of the best USB flash storage for creatives, we hope we have presented you with some options to consider the next time you go shopping. Do you have a flash storage option you just have to tell others about? Let us know in the comments below!
https://medium.com/just-creative/the-best-usb-flash-storage-for-creatives-dac43f1cb0ee
['Jacob Cass']
2020-01-28 09:28:25.980000+00:00
['Chargers', 'Tools']
How to Part-of-Speech Tag a String and Filter the Adverbs in Node.JS
Of all the great things you can accomplish through natural language processing (NLP), we will just be focusing on POS tagging to allow filtering out adverbs. Before you get worried about the complexity of this task, let me just say that a nice little shortcut will be involved, one that allows us to drastically reduce the time and difficulty. Let’s take a look.

To begin, we are going to add a reference to our package.json that will install our API client and let us continue.

"dependencies": {
    "cloudmersive-nlp-api-client": "^1.1.2"
}

Next comes our function call for posTaggerTagAdverbs.

var CloudmersiveNlpApiClient = require('cloudmersive-nlp-api-client');
var defaultClient = CloudmersiveNlpApiClient.ApiClient.instance;

// Configure API key authorization: Apikey
var Apikey = defaultClient.authentications['Apikey'];
Apikey.apiKey = 'YOUR API KEY';
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.apiKeyPrefix = 'Token';

var apiInstance = new CloudmersiveNlpApiClient.PosTaggerApi();
var request = new CloudmersiveNlpApiClient.PosRequest(); // PosRequest | Input string

var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
apiInstance.posTaggerTagAdverbs(request, callback);

Let’s test this out. I used this sentence: “I ran quickly and quietly through the dark alleyway.” And here is the result given back by the API:
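Separately from the API result, it helps to have a zero-dependency sanity check: many English adverbs end in “-ly”, so a crude filter can be sketched in a few lines. This is only a naive heuristic I’m assuming for illustration, not the Cloudmersive tagger; a suffix rule will misfire on words like “family” or “fly”, which is exactly why a context-aware POS tagger is worth calling.

```javascript
// Naive adverb filter: drops words ending in "-ly".
// Real POS taggers use sentence context, not suffixes, so treat
// this only as a rough baseline for comparison.
function filterAdverbs(text) {
  return text
    .split(/\s+/)
    .filter(function (word) {
      // Strip trailing punctuation before testing the suffix.
      var bare = word.replace(/[.,!?;]+$/, '').toLowerCase();
      return !(bare.length > 4 && bare.endsWith('ly'));
    })
    .join(' ');
}

console.log(filterAdverbs('I ran quickly and quietly through the dark alleyway.'));
// → "I ran and through the dark alleyway."
```

On this particular test sentence the heuristic happens to agree with a real tagger, but that agreement is a coincidence of the example, not a property of the rule.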
https://cloudmersive.medium.com/how-to-part-of-speech-tag-a-string-and-filter-the-adverbs-in-node-js-261ac1a69a94
[]
2020-04-09 20:17:39.692000+00:00
['Adverbs', 'Nodejs', 'Pos Tagging', 'Part Of Speech', 'NLP']
Achieving Your Goal, Regardless Of What Life Throws At You
Less than adequate sleep, a tiring day, an enticing new Netflix series, one more YouTube video. Attention-demanding social media apps, activities, friends, family and kids. Fantastic reasons to not do the work. Who could deny it? But you must, you must do the work. You will be tired today, and you will be tired a year from now. The incremental amount of work you do. The tiny 30 min chunks of time you set aside every day to write your book, to create your content, to chase after the future you — that will be the defining factor in your success a year from now. You must understand. You need to remind yourself. You have to look in the mirror every single day and say to yourself — Just because I don’t feel like it, doesn’t mean I can’t do it. There are too many people crushed, depressed and utterly disappointed in themselves for not doing the work. Forget the past. Today is a new day, and there is nothing stopping you from starting. Know this. You always have the power to do the work. Always. The world perpetuates the idea that you need to ‘feel like it’ to do the work. You don’t. You could feel like doing anything else. All you have to do in that moment is sit down and do the work. It sounds simple, and it is. Procrastination is ever present and your attention will be forever fought for. It’s a fact of life. Self-control, preemptive measures, focusing strategies and all the rest can greatly support you in your quest to do the work. But in the end, you must rely on yourself to actually do it. It’s a commitment from present you, to future you. If you don’t know where to start, here’s a simple guide. Imagine you’re on a plane, and your destination is future you. The flight is your work time. Keep your phone in airplane mode for the duration of your flight. No social media, no messages, no distractions. Don’t miss the flight. You need to be on board at a certain time, prepared. 
There’s not much around you, you’re not distracted, you sit down with only the essentials to occupy yourself throughout the flight. You’re quiet and courteous. You’re in the zone. You’ve heard these tips before, you know they’re effective, you know they’re nothing new. You know what to do and how to go about it. All you have to do is put them to use. You will always encounter resistance in one form or another. Procrastination, opportunity cost, the fight for your attention and worst of all, fear. Fear of failure, fear of exclusion, fear of missing out. It’s ever present. The amateur believes he must first overcome his fear; then he can do his work. The professional knows that fear can never be overcome. ― Steven Pressfield, The War of Art You’re a professional. Whether you’re aspiring or practicing. Professionals sit down and do the work. There’s nothing stopping you but yourself. The internal struggle exists. It’s a curse and it’s a blessing. The curse: you’re your own obstacle. The blessing: if it’s only you, then you have the power. You can choose. At any time, you can commit to sitting down and just doing. Practice makes perfect. And as a professional, you sure as hell practice. I won’t wish you luck, because you don’t need that — you’ll get there without it. With love, Sah
https://medium.com/the-post-grad-survival-guide/achieving-your-goal-regardless-of-what-life-throws-at-you-52169aa6b9c9
['Sah Kilic']
2019-07-30 09:21:22.439000+00:00
['Self Improvement', 'Personal Development', 'Productivity', 'Life', 'Life Lessons']
A design for decentralized organisations: part 3
In part 2 of this series, I discussed methods to scale communication in decentralized organisations, and recommended the use of a sybil-resistant voting system, prediction markets, and groups smaller than 8 in order to achieve scalable, verifiable finality. Now on their own, these methods would not necessarily function well unless data required for reaching finality was made available transparently. Hence, this part introduces the challenges that transparency presents to scalability, and proposes self-sovereignty as a way to achieve both transparency and scalability. Principle 3: either be transparent… In the absence of provably aligned interests, opacity is the enemy of consensus, and, by extension, is the enemy of trust. After all, if relevant parties cannot obtain information about some scenario, how can they be expected to make good decisions or reach finality? Moreover, if information is required for some decision, yet is not provided, it is easy to mistrust the motives of those you’re expecting the information from. As such, for every decision, it is optimal for all significant information to be provided to all stakeholders, and in a “trustless” manner. …or don’t make something other people’s problem If it is feasible for fewer than everyone to be involved in a decision, then it is usually performant to reduce the number of participants, as sketched above. For example, if some code needs to be written, it is seldom feasible for a large crypto community to collectively decide how to write it; instead they typically aim to employ a skilled coder to do the work. More generally speaking, decisions might be made by a team of experts, or by the leader of some team, and wherever this is workable, the requirement of transparency is radically simplified, because it is no longer necessary to ensure information is accessible to everyone in a transparent manner. This brings down administrative workloads, and helps organisations avoid bureaucratising. 
Now this sort of optimisation can go far deeper and be quite radically effective not only at avoiding performance penalties, but also in actively achieving transparency. “Not making things other people’s problem” is the larger part of how decentralized designs are actually achievable. Take Bitcoin for example. It achieves a 100% transparent transaction history, but in order to do so, it does not require block verifiers to perform complex, bloated auditing of amounts sent, time of transaction, receiver and sender IDs, etc. Instead, it effectively requires them to collectively agree on only the time of each block solution, which is why a blockchain is also known as a timestamp server. Think about how incredibly efficient this is: collectively validating one datum per block provides double-spend protection for every single transaction, which then secures all the other properties listed above. Block verifiers just don’t have to worry about the indeterminate number of other potential checks and balances that could conceivably be required in order for transactions/contracts to be valid. That task is offloaded to each individual peer’s instantiation of the rules of Script. Now that’s a good engineering solution. Decentralization without consensus: self-sovereign action In the context of governance, it’s obvious that similar efficiency techniques are required, since possibly the most common malaise of corporate bureaucracy today is the incremental bloating of “process,” rule after rule, requiring employee after employee, until the cost and complexity of “safe,” “fair,” or “compliant” operations consumes a vast proportion of resources. This is the case, for example, with the Oxford County Council, or, more egregiously, with the Gauteng government — and these are sadly not exceptions, but standout instances of the norm. 
Now while a general strategy like decentralization does not directly afford efficiency-gains for organisations, and, worse still, directly multiplies the intercommunication load (see above), it can nonetheless afford a design pattern that, wherever it is practicable, can radically improve efficiency in the same manner that it does for Bitcoin. Let’s term it self-sovereignty: the decentralization of decisions without requiring consensus between participants. Just as every full node on the Bitcoin network runs Script locally and can thus determine for itself the validity of a transaction independently of other nodes’ judgements, people in a decentralized organisation may permissionlessly market a project, maintain websites, contact potential adopters and partners, create funding proposals, do due diligence on a funding proposal, write code, and so forth. The limit on self-sovereignty is, of course, the extent to which any given state change (e.g. an action, a process…) must inescapably either be subject to consensus or be dependent on other forces outside the self-sovereign actor. Now one self-evident, albeit ideal, corollary of self-sovereignty is that in order for each actor (whether a person or a team tasked with a given output) to be self-sovereign, it must be in a position to do its work without dependency on other actors. Typically, this is a matter of electing team members who can jointly reach the team’s goals self-sufficiently; in other cases it may be a matter of merely reducing the risk of third parties compromising the operation in various ways (e.g. on a p2p network, forming connections with many nodes in case some of them give poor service); otherwise it may be a matter of reducing the scope of the work itself so that it is immune from such potential compromises. 
Either way, unless work is done in a self-sovereign manner, governance problems will arise — and this is what accounts for the majority of the costs and inter-departmental friction of a modern corporation. Corporations typically set up work processes as a series of “deliverables” between several teams or departments. It quickly becomes the norm for the state of a process to be contested, with reports, “deliverables,” or other “output” sent back and forth between departments — often for mere technicalities, or even reasons having little to do with the actual work being handed over (e.g. workload in the downstream team is already too high). One way or another, poorly aligned incentives and insufficient recognition of the importance of self-sovereignty create an organisational hazard. Such is the requirement internal to each team in a decentralized (or centralized) organisation: to be performant, it must preserve its sovereignty. Beyond each team’s sovereignty, at an organisational level a scalable decentralized organisation should limit as far as possible the types of state changes requiring consensus. The way Bitcoin accomplishes this is to make all state changes depend indirectly upon a fundamental type of state change — one that can function as the foundation of the security of all the others — the determination of valid blocks (mining). In the next section, I will explore this in some detail, noting how the scalability of a decentralized organisation depends critically on how few attributes it needs to obtain consensus about, and then devoting the bulk of the remainder of this series to identifying the fundamental attributes of a decentralized organisation, which alone require consensus, and upon which every other process can be built. Continued in part 4.
https://medium.com/flatus-vocis/a-design-for-decentralized-organisations-part-3-7465f7190865
['Arlyn Culwick']
2019-03-05 16:56:20.014000+00:00
['Blockchain', 'Decentralization', 'Organisation', 'Self Sovereignty', 'Governance']
Yelp Reviews Analysis for Bubble Tea Shops
Most of the bubble tea shops are located in the major cities of each state. There are especially massive numbers of bubble tea shops in Los Angeles and San Francisco. — Population If we compare the number of bubble tea shops with each state’s population, we can see a positive correlation between the number of bubble tea shops and the population. Given that bubble tea is so prevalent in East Asian countries, we should also consider the east-Asian population in each state. Using the U.S. Census Bureau’s demography data, I plot the number of bubble tea shops vs. the east-Asian population. As expected, we get a beautiful linear relationship! — Age Group In addition to the population, we can ask whether bubble tea shops are related to a particular age group. Interestingly, the number of bubble tea shops has a positive correlation with the percentage aged 25–44 but seems to be uncorrelated with the aged 14–24 group, which is surprising, since I would naively expect that most of the customers are teenagers and young adults. To summarize this section, I present the correlation matrix between the number of bubble tea shops, the percentage of different age groups, and the population. Modeling We see that several features are seemingly crucial factors in determining the number of bubble tea shops in a city, since these features positively correlate with the number of bubble tea shops. However, I want to remind readers that it’s dangerous to draw conclusions by merely comparing correlations, because correlation does not always imply causality! This is especially true when we use two features that are themselves causally related, i.e., population and east-Asian population. (For more details, see Spurious relationship.) To understand each feature’s importance and predict the number of bubble tea shops, I build a model using support vector regression (SVR) and tune the hyper-parameters with a grid search. The dataset is split into two parts: a training set and a test set. 
I train the model using the training set and use the test set to evaluate the model. The comparison between the number of bubble tea shops in the test dataset and the model’s prediction is shown in the left figure. Overall, it seems that we can make reasonable predictions using the trained model. To gain further insight from the model, I compute the permutation importance of each feature. The permutation importance quantifies how important a feature is by measuring how much the fitting score decreases when that feature is not available. We see that the east-Asian population, the total population of a state, and the percentage of adults aged 25–44 make significant contributions. The model for predicting the number of bubble tea shops based on demography data is a useful tool, especially when we want to know whether the bubble tea market in a place is oversupplied or undersupplied. So, don’t forget to check it before starting your own bubble tea business.
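To make the idea concrete, permutation importance can be sketched without any ML framework: score the model, shuffle one feature column at a time, re-score, and record the average drop. The “model” below is a toy stand-in I’m assuming for illustration (not the article’s trained SVR), with the first feature playing the role of the dominant east-Asian-population signal; in practice one would run a library routine such as scikit-learn’s permutation_importance on the fitted SVR.

```javascript
// R^2 score: 1 means perfect predictions.
function r2Score(yTrue, yPred) {
  var mean = yTrue.reduce(function (a, b) { return a + b; }, 0) / yTrue.length;
  var ssRes = 0, ssTot = 0;
  for (var i = 0; i < yTrue.length; i++) {
    ssRes += Math.pow(yTrue[i] - yPred[i], 2);
    ssTot += Math.pow(yTrue[i] - mean, 2);
  }
  return 1 - ssRes / ssTot;
}

// Permutation importance: average drop in R^2 when column j is shuffled.
function permutationImportance(predict, X, y, nRepeats) {
  var base = r2Score(y, X.map(predict));
  return X[0].map(function (_, j) {
    var drop = 0;
    for (var r = 0; r < nRepeats; r++) {
      var col = X.map(function (row) { return row[j]; });
      // Fisher-Yates shuffle of the column.
      for (var k = col.length - 1; k > 0; k--) {
        var m = Math.floor(Math.random() * (k + 1));
        var t = col[k]; col[k] = col[m]; col[m] = t;
      }
      var Xperm = X.map(function (row, i) {
        var copy = row.slice();
        copy[j] = col[i];
        return copy;
      });
      drop += base - r2Score(y, Xperm.map(predict));
    }
    return drop / nRepeats;
  });
}

// Toy stand-in for the trained model: feature 0 (east-Asian population)
// matters far more than feature 1 (total population).
var predict = function (x) { return 5 * x[0] + 0.5 * x[1]; };
var X = [];
for (var i = 0; i < 200; i++) X.push([Math.random(), Math.random()]);
var y = X.map(predict);

var importances = permutationImportance(predict, X, y, 20);
// Expect importances[0] to dwarf importances[1].
```

Since the toy targets depend on feature 0 with ten times the weight of feature 1, shuffling feature 0 destroys far more of the score, which is exactly the ranking behaviour the article reports for the east-Asian population feature.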
https://towardsdatascience.com/yelp-reviews-analysis-for-bubble-tea-shops-f23094d3d32d
[]
2020-11-14 16:40:05.086000+00:00
['Yelp', 'Naturallanguageprocessing', 'Regression', 'Data Science', 'Bubble Tea']
Automated Lip Reading: Simplified
Take a peek into the world of Automated Lip Reading (ALR) Why Lip Reading? Professional lip reading is not a recent concept. It has actually been around for centuries. Obviously, one of the biggest motivations behind lip reading was to provide people with hearing impairments a way to understand what was being said to them. Nevertheless, with the advancing technologies in the field of Computer Vision and Deep Learning, automated lip reading by machines has become a real possibility now. Notice the growth of this field, shown by the cumulative number of papers on ALR that were published per year. Cumulative number of papers on ALR systems published between 2007 and 2017 [1] Such advancements open up various new avenues of discussion regarding the applications of ALR, the ethics of snooping on private conversations, and, most importantly, the implications for data privacy. However, I am not here to discuss that today. This blog is for the curious few who would like to gain a deeper understanding of how these ALR systems work. Anyone with absolutely no previous experience in Deep Learning can superficially follow the blog. Even a rudimentary understanding of Deep Learning is enough to fully appreciate the details. Is Lip Reading difficult? Just take a look at this video of bad lip reading of a few short clips from The Walking Dead (keep the sound off). Watch it again, with sound, just for fun :P Funny.. right? The dialogues seem to match the video brilliantly, yet clearly something doesn’t feel right. What exactly is wrong? Well, for starters, those are obviously not the actual dialogues. But then why do they seem to fit so perfectly? That is because there exists no direct one-to-one correspondence between lip movements and phonemes (the smallest units of sound in a language). For example, /p/ and /b/ are visually indistinguishable. So, the same lip movements can be the result of a multitude of different sentences. 
But how do the professional lip readers do it then? Professional lip reading is a combination of understanding lip movement, body language, hand movements and context to interpret what the speaker is trying to say. Sounds complicated.. right? Well, let’s see how the machines are doing it… What is the difference between ALR, ASR and AV-ASR? For starters, let’s understand the difference between the 3 seemingly common terms ALR, ASR and AV-ASR. Automated Lip Reading (ALR): trying to understand what is being spoken based solely on the video (visual). Automated Speech Recognition (ASR): trying to understand what is being spoken based solely on the audio. Commonly called speech-to-text systems. Audio Visual-Automated Speech Recognition (AV-ASR): using both audio and visual cues to understand what is being spoken. Alphabet and Digit Recognition Early work in ALR was focused on simple tasks such as alphabet or digit recognition. These datasets contain small clips of various speakers, with various spatial and temporal resolutions, speaking a single alphabet (or phoneme) or digit. These tasks were popular in the early stages of ALR since they allowed researchers to work in a controlled setting and with a constrained vocabulary. Frames from AVLetters, one of the most used alphabet recognition datasets Word and Sentence Recognition While the controlled settings of alphabet and digit recognition are useful for analyzing the effectiveness of algorithms at early design stages, the resulting models do not have the ability to run in the wild. The aim of ALR systems is to understand natural speech, which is mainly structured in terms of sentences. This has made it necessary to acquire databases containing words, phrases and phonetically balanced sentences, and to build models that can work efficiently on them. 
Frames from OuluVS2, a multi-dimensional audio-visual dataset for tasks like ALR, ASR and AV-ASR A Superficial view of the pipeline A typical ALR system consists of three main blocks: lip localization, extraction of visual features, and classification into sequences. The first block, focused on face and lip detection, is essentially a Computer Vision problem. The goal of the second block is to assign feature values (mathematical representations) to the visual information observable at every frame, again a Computer Vision problem. Finally, the classification block aims to map these features into speech units while making sure that the complete decoded message is coherent, which is in the domain of Natural Language Processing (NLP). This final block helps disambiguate between visually similar speech units by using context. Deep Learning based ALR systems There have been significant improvements in the performance of ALR systems in the last few years, all thanks to the increasing involvement of Deep Learning based architectures in the pipeline. The first two blocks, namely lip localization and feature extraction, are done using CNNs. Some other DL based feature extraction architectures also include 3D-CNNs or feed-forward networks. The last block consists of LSTMs to do the final classification, taking all the individual frame outputs into account. Some other DL based sequence classification architectures include Bi-LSTMs, GRUs and LSTMs with attention. An example of a DL based baseline for ALR systems [1] What’s next? In the last few years, a clear technology shift can be seen from traditional architectures to end-to-end DNN architectures, currently dominated by CNN features in combination with LSTMs. However, in most of these models, the output of the system is restricted to a pre-defined number of possible classes (alphabets, words or even sentences), in contrast to continuous lip-reading where the target is natural speech. 
Recent attempts to produce continuous lip-reading systems have focused on elementary language structures such as characters or phonemes. Thus, it is not surprising that the main challenge in ALR currently is to model continuous lip-reading.
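The three-block pipeline described above can be made concrete with a deliberately toy, dependency-free sketch. Every stand-in here is an assumption for illustration only: a real ALR system would replace the fixed crop with a face and lip detector, the mean-intensity feature with CNN embeddings, and the threshold rule with an LSTM over the frame sequence.

```javascript
// Block 1: lip localization. Crop a fixed mouth region of the frame
// (a 2D array of pixel intensities in [0, 1]); real systems detect it.
function localizeLips(frame) {
  var h = frame.length, w = frame[0].length;
  return frame.slice(Math.floor(2 * h / 3)).map(function (row) {
    return row.slice(Math.floor(w / 4), Math.floor(3 * w / 4));
  });
}

// Block 2: visual feature extraction. One scalar (mean intensity) per
// frame stands in for a CNN embedding.
function extractFeatures(lipCrop) {
  var sum = 0, n = 0;
  lipCrop.forEach(function (row) {
    row.forEach(function (p) { sum += p; n += 1; });
  });
  return sum / n;
}

// Block 3: sequence classification. A threshold on how much the mouth
// region changes stands in for an LSTM over the feature sequence.
function classifySequence(features) {
  var spread = Math.max.apply(null, features) - Math.min.apply(null, features);
  return spread > 0.2 ? 'speaking' : 'silent';
}

// The full pipeline: frames in, label out.
function readLips(frames) {
  return classifySequence(frames.map(function (f) {
    return extractFeatures(localizeLips(f));
  }));
}
```

The value of writing it this way is that the three blocks stay independently swappable, which mirrors how the literature upgraded each stage (detectors, CNN features, LSTM classifiers) without redesigning the whole system.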
https://towardsdatascience.com/automated-lip-reading-simplified-c01789469dd8
['Prakhar Ganesh']
2019-06-30 23:50:47.997000+00:00
['Surveys', 'Naturallanguageprocessing', 'Deep Learning', 'Computer Vision', 'Data Science']
Latin American socialists unite with Axis of Resistance against Western imperialism
Ben Norton Aug 17 The leftist governments of Venezuela, Cuba, Nicaragua, and Bolivia have found a key strategic ally in Iran, the heart of the Axis of Resistance. (This article was written for Al Mayadeen English.) Revolutionary socialist movements in Latin America are developing closer relations with anti-imperialist resistance forces in West Asia, building a united front against Western aggression and exploitation. This budding alliance is an extremely important development in the struggle against an authoritarian international political and economic system that is essentially a global dictatorship, ruled by the United States and its junior imperialist partners in the European Union, NATO, apartheid “Israel”, and the Gulf monarchies. As this Washington-led, trans-Atlantic hegemonic order was constructed over the past century, through a long series of wars, military occupations, foreign interventions, coups, regime-change operations, assassinations, and grossly unequal trade arrangements, two regions of the world have been especially targeted: Latin America and the Middle East, or more accurately West Asia. Both regions have plentiful natural resources and are very geostrategically located. Latin America has vast mineral reserves and agricultural products. West Asia has a plurality of the planet’s hydrocarbon reserves, and connects Europe to Asia, sitting right in the middle of what geopolitical analysts have long called the “World Island.” Given their status as principal targets of Western imperialism, it only makes sense for resistance forces in these regions to unite. Attempts at forming such an alliance had been made in the past — revolutionary Palestinian militants trained in Cuba and with Nicaragua’s Sandinistas, for instance, and Muammar al-Qaddafi’s Libya supported leftist Latin American guerrillas — but this collaboration was historically limited in scope. 
That is, until recently. As the United States accelerated its hybrid warfare to try to re-colonize Latin America and West Asia in the 2000s, indigenous anti-imperialist movements in both regions joined forces, forging not only close political ties, but economic relations as well. The leftist governments of Venezuela, Cuba, Nicaragua, and Bolivia have found a key strategic ally in Iran, the heart of the Axis of Resistance.

Revolutionary ALBA member states unite with Iran

The director of the main instrument of Latin American economic integration, the Bolivarian Alliance for the Peoples of Our America, known simply as the ALBA, took a historic trip to Tehran this August to meet with the new Iranian President Ibrahim Raisi. “Iran and the ALBA have a lot in common, and both seek to defend the independence and sovereignty of nations and confront the outrageousness of the United States,” remarked the ALBA’s executive secretary, the Bolivian diplomat Sacha Llorenti. For his part, Raisi kicked off his new administration calling for strengthening relations with Latin America, stressing that it is one of Tehran’s top foreign-policy priorities. “Iran is determined to develop its political and economic relations with the member states of the ALBA-TCP,” Raisi said, highlighting “the shared values and positions of both parties.” “There is no doubt that a greater development of the relations between Iran and Latin American countries can halt the North Americans and other arrogant countries,” Raisi added. Joining Llorenti in Tehran were top officials from Venezuela, Nicaragua, and Bolivia — all member states of the ALBA. In a meeting with Venezuela’s vice president of planning, Ricardo Menéndez, Raisi stated that “Iran and Venezuela alike have common interests and enemies.
We have always shown that with resistance and wisdom, we can thwart the plots of the United States and world imperialism.” Nicaragua’s Foreign Minister Denis Moncada met with Raisi as well, and called for strengthening relations with Tehran. The Iranian president praised the Central American nation’s Sandinista government as a model of resistance against US aggression, and said, “The people of Iran have always wished for success and victory for the revolutionary nation of Nicaragua.” Likewise, in his meeting with Raisi, Bolivian Foreign Minister Rogelio Mayta pledged to work more closely with Iran, stating, “Despite sabotage by the United States, we are determined to increase the level of relations with Tehran in all areas.”

Iran and Venezuela resist illegal US blockades

Iran’s support for revolutionary governments in Latin America goes beyond mere words. While many liberal and center-left political forces in the region have opportunistically turned their back on Venezuela, betraying their neighbor on behalf of Washington, Tehran has shown real, tangible support for Caracas. Both Venezuela and Iran are suffering from illegal US blockades, and these murderous sanctions have led to shortages of food, medicine, and gasoline. (Venezuela has massive oil reserves, but its crude is among the heaviest on the planet, and cannot be used or exported without first being refined, so Caracas needs to import lighter crude or other materials that are blocked by Washington.) To help meet the needs of the Venezuelan people, Iran has repeatedly defied the criminal US blockade and delivered supplies to Caracas, sending huge tankers full of food, medicine, and fuel. In these altruistic acts, Tehran has valiantly risked US military aggression, putting its money where its mouth is to support the revolutionary government and people of Venezuela.
Iran has also opened a supermarket chain in Venezuela, called Megasis, to help support an ally that is heavily reliant on food imports. It is part of a larger strategy to boost bilateral trade and economic cooperation between both nations. The brotherhood between Venezuela and Iran was most poignantly illustrated at the 2013 funeral of President Hugo Chávez, who initiated the Bolivarian Revolution. Iranian President Mahmoud Ahmadinejad was photographed hugging and consoling the Venezuelan Comandante’s crying mother.

Solidarity between Latin America and the West Asian Resistance Axis

Revolutionary Latin American governments have also sought to collaborate more closely with other forces in the West Asian Axis of Resistance. Venezuela, Cuba, Nicaragua, and Bolivia vociferously opposed and condemned the US-led imperialist proxy wars against Libya and Syria, which expressly sought the collapse of those nations’ central governments, and succeeded in the former while failing in the latter. Similarly, these ALBA member states have all shown unflinching solidarity with Palestine. In response to apartheid “Israel’s” 2008–2009 massacre in Gaza, Venezuelan President Chávez officially broke ties with the Zionist regime, denouncing it as a “genocidal state” and “the murderous arm of the US government.” Then in 2010, in a daring challenge to Washington’s declaration that Iran, Iraq, and North Korea constituted a supposed “Axis of Evil,” Comandante Chávez announced an alliance with Syria, which he dubbed the “Axis of the Brave”. The Axis of the Brave was a “strategic alliance” against US imperialism, Chávez explained. “A new world is being built,” and “we seek a strategic relationship with that continent,” the Venezuelan president said, referring to West Asia.
Less than a year after Chávez’s announcement, the United States and its proxies launched a devastating decade-long regime-change war on Syria — one that continues today, with more than one-third of Syria’s sovereign territory, including most of its oil and wheat reserves, under illegal military occupation by the United States in the northeast and NATO member Turkey in the northwest. Chávez’s defense of and alliance with Syria against Western aggression led to the inauguration this March of a monument at the University of Damascus. Nicaraguan President Daniel Ortega, the leader of the revolutionary Sandinista Liberation Front, has likewise steadfastly defended Syria and “condemned all forms of aggression by foreign powers that attack the sovereignty and self-determination of the [Syrian] people, in clear and flagrant violation of international law.” During the 2011 NATO regime-change war that intentionally collapsed the state of Libya and unleashed open-air slave markets, Nicaragua’s Sandinista government staunchly opposed Western imperial aggression. As NATO bombed Libya, the US government refused to give a visa to the North African nation’s United Nations delegate. So in response, Nicaragua’s former foreign minister, Miguel D’Escoto Brockmann, announced he would represent Libya at the UN. (Washington then tried to block D’Escoto’s representation too.) Axis of Resistance forces in Yemen have returned the solidarity. The de facto government in northern Yemen, ruled by the revolutionary Houthi movement, known officially as Ansarallah, has staunchly defended Venezuela against US aggression. In a 2015 interview, a senior Ansarallah member declared, “We support Chávez in Venezuela.” When Washington initiated another coup attempt in Venezuela in February 2019, Ansarallah and leftist parties in Yemen held a protest condemning US interference.
Global vanguard in building a new multipolar world

Latin American socialist governments and the Axis of Resistance in West Asia are the vanguards in the struggle to build a new, truly multipolar world based on national sovereignty and self-determination. Together, they are helping to construct a truly multilateral order that challenges the authoritarian, unilateral, and brutally violent system created and controlled by the United States and its junior partners in imperialism. This was further illustrated in July, when these nations launched an anti-imperialist alliance inside the United Nations, called the Group of Friends in Defense of the UN Charter. Venezuela, Cuba, Nicaragua, and Bolivia were joined by Iran, Syria, and Palestine, as well as the People’s Republic of China, the Russian Federation, Algeria, the DPRK, Cambodia, Laos, Angola, Belarus, Eritrea, Equatorial Guinea, and Saint Vincent and the Grenadines. The economic partnership between member states of the Bolivarian Alliance and Iran likewise serves as a model for South-South integration that not only weakens Western imperial hegemony, but also helps to develop these countries in their mutual interests. The ALBA was itself created to remove the middleman of the United States, so that Latin American nations could trade with each other and strengthen their own domestic economies, cutting out the North American corporations that want them to be dependent on imports. The historic, 25-year, $400 billion agreement Iran signed with China this March was another crucially important step in building alternative economic structures to weaken Washington’s dominance. Similarly, the announcement that Cuba and Iran will work together to manufacture COVID-19 vaccines exemplifies how this South-South partnership can help overcome the global pandemic. If Latin America and West Asia can create a coherent formal alliance with China and Russia, it could pose a serious challenge to the imperialist US-EU-NATO axis.
As the United States accelerates its new cold war on China and Russia, such a coalition will only become more urgent.
https://medium.com/@benjaminnorton/latin-american-socialists-unite-with-axis-of-resistance-against-western-imperialism-c429cbf77be
['Ben Norton']
2021-08-18 21:32:03.071000+00:00
['Middle East', 'Latin America', 'Venezuela', 'Iran', 'Geopolitics']
21 Positive affirmations to repeat every day
Appreciating every day of my life as…
The things I am doing are right and are moving in a good direction. The hard work I do all day alone will help me learn new things about myself. I am appreciating every day of my life as an opportunity to learn and grow. This weekend will be satisfying, with great achievements. You are always strict with yourself, and you do things as perfectly as possible, which is the reason why you are in a better position, surpassing your past. But take a minute to pause and think about the beautiful journey of life you may be bypassing. Do all the things that keep you ahead, but with a little mix of gratitude, enjoyment & relaxation.

This day will train me for the…
I’m visualizing my day as a calm, satisfying day. I will end up getting lots of small achievements. I’m feeling focused and productive. This day started with a positive urge to pass it beautifully. I hope this day will train me for the betterment of my life.

I’ll do things at my pace…
I am grateful for this life. I’m responsible for making it great. Practicing 5-minute relaxation will help me stay focused. Today I may have lots of things to cover, but I’ll do things at my pace. I’m good enough to handle my life.

I’m beginning my day…
I have many strengths, skills, and a powerful urge to serve better. I did good work in the past and have overcome setbacks. These days are tough, and they are the best opportunities to learn, build, and reignite the inner passion. Gathering all my qualities and dreams together, I’m beginning my day.

In making upcoming days…
Today is an exciting new day, a vibrant day like a blank page of a notebook. Keeping positivity in mind and the drive to change my lifestyle, let’s register good things today. In making the upcoming days good, let’s endeavor for happiness.

Every day I’m reaching a step…
I’m attracting good things to my life. I have achieved a lot till now. Every day I’m reaching a step closer to my dreams. What I have envisaged till now will come true. The efforts I put in every day will reap positive results. I’m hoping for a positively wonderful day today.

As I’m growing wise, I’m…
I’m privileged to have a healthy mind and good intentions. I adore these gifts blessed by God. As I’m growing wise, I’m realizing that I should use these gifts to create good, promote good, and appreciate good in life. Gathering all my strengths together, I’ll make this day a remarkable part of 2020.

I know that everything I want…
I’m patient. I’m calm and relaxed because I know that everything I want is making its way to me right now. All I need to do is practice and polish my good habits daily in order to receive what is truly meant for me. Let’s pledge to complete good work today.

The energy I’ll put into…
The mind monkey will always distract you from what is right. Think about the vision that you created to come so far. Remember your expectations from this life. The courage I carry today will help me do the things that are supposed to be done. The energy I’ll put into my tasks will help me move ahead. Vigor & focus are what I need to bring to my work today.

So much can be done over…
Over the next twenty-four hours, I vow to appreciate this day, as it is all I really have, and to use every minute wisely and fully. So much can be done over the next twenty-four hours to advance my life’s agenda and complete my legacy. I will, throughout this day, remember that this day will not come again.

Today is an extraordinary day to…
I have the potential to achieve whatever I want and have manifested. Every passing day I’m learning new things about myself, handling different emotions, and reacting to them. Today is an extraordinary day to relax and calm my mind to strengthen my learnings before using them for my benefit. Relaxation, Restoration & Recovery are the objectives of my day.

I am becoming a selective…
I am clearing my mind of thoughts that are causing resistance to my growth. Defragmenting my brain will eventually immunize my vision against flashy opportunities. I am becoming a selective listener day after day by filtering out the pretentious foresight of people. I have stopped sweating the details and am just moving in the right direction.

I’m continuously fueling myself…
Today is another day in the chain of self-learning. Day after day, I’m becoming resilient and strong. I’m continuously fueling myself with positivity and worthy thoughts. My daily choices are helping me to heal and recover from fear, uncertainty & doubts. The feeling of passion & inspiration is flowing through my body. I am awake & alert.

I’m also giving myself a…
I am wise and serene. I am learning from my mistakes and growing into the person that I always dreamed I could be. Being an altruist, I’m also giving myself priority. My endearing style, combined with a positive attitude, is supporting me to receive an abundance of love and blessings.

I’m striking off the challenges to…
My time is precious, and I am spending every minute on the most valuable activities of the day. I am completely immersed in the ocean of self-advancement and egoism. I am doing things that are advancing me on the path of legendary success. Every passing day, I’m striking off the challenges to uncover my true potential. I respect my time and that of others. I am cutting back on unwanted interactions with others. I am eradicating problems coming my way with silence and a smile.

I am the sole creator of my…
I have a clear vision of what I want out of life. I know that I have what it takes to manifest any life that I want for myself. I am a powerful and determined individual. I am the sole creator of my reality. What I think and feel inside is what I will end up attracting into my life. I center my thoughts and feelings around what I want to create for myself.

I am creating my worth by…
Every day I am following discipline and self-encouraging activities to reach where I want myself to be. I am creating my worth by emitting positive vibes and developing an environment of possibilities and growth. I am happy and confident in my strengths that are helping me to serve the world. My contribution to people is returning to me with magnified blessings. I am devoting this day to prosperity and grandeur.

I am continuing on…
I am transforming every day, inch by inch. My life is no longer the same as before. I am embracing change and becoming proactive, week after week. I am continuing on the path to prosperity and wellness. I am becoming aware of my interests and choices. I no longer stress myself about the things I couldn’t achieve. I know my role in this universe, and I am persistently updating my version to stay resilient and ingenious.

I appreciate the problems…
I am a growth magnet. I attract opportunities wherever I demonstrate my out-of-the-box thinking. I am grateful for the fluctuations that are happening in my life. I am thankful to God for choosing me to flourish in the creative demands of the world. My ideas are congenial and unique. I appreciate the problems that challenge my ideas and cause me to create thought-provoking solutions.

My internal driving force…
I am an agile member of the optimistic community. I spot and close the opportunities at hand. My internal driving force triggers me to perform well in my assigned job. I am the manager of my life and the responsibilities I have. I am productive and aggressive towards my work. I have a clear agenda for this week, and I’m visualizing myself enjoying the weekend with a lot of accomplishment and success.

My hard work and efforts are…
Today is an amazing day for me. I am going into this day with a clear objective and an aggressive attitude. My hard work and efforts are now panning out. I am focused and constantly putting my precious hours into the most important tasks. I am unfolding opportunities because I know that wherever I apply my talent, I will find success. I am ready to turn my efforts into results.

I am living in an age of…
I make mistakes and learn from them. I plan and re-plan things that don’t turn out well. I have a habit of learning from people and giving back to them. My bandwidth for handling rejection and failure is widening at an ever higher rate. I am living in an age of possibilities and miracles. My choices and decisions are helping me dehaze the clouds of uncertainty. I am letting the bright light of fulfillment enter my life. I’m aligning my perspective to the most desirable angle for my life. I have trust in my process, and I’ll soon become the master of it.

I am feeding my mind with…
I am a worthy person to talk to. I know my value and the reason for my existence. I am feeding my mind with the fundamental thoughts of creating one’s relevance to this world. I’m bracing my foundation with knowledge and wisdom. I’m reviving my mind from stinking thoughts of anger and disgust. Sooner or later, I will become a thought leader in my industry.

I am shifting my focus…
I am dreaming of an extraordinary life that is full of joy and happiness, pleasures, and luxury. I am envisaging the magnificence of my life. I am shifting my focus from daily struggles to the radiance of this living. My life is worthy, and I am proud of spending each day making it more valuable. I know the life I want to live is possible. I know the things I can imagine myself capable of doing can be done successfully.

I’m privileged to have good…
I am healthy and mindful. I have a perfect body possessing the attributes of fitness and vigor. I’m privileged to have good food and a healthy lifestyle. I’m thankful to God for blessing me with such a beautiful brain. I am responsible for keeping it alive and fresh. I follow healthy routines to maintain and cherish these treasures. I’m nourishing my body and soul with the right habits.

I take charge of myself and…
I am active and agile. I do not wait for circumstances to control me; instead, I quickly act upon the situation using my experience and wisdom. I am a motivated and self-driven person. I take charge of myself and deliver the best of me. I am confronting challenges and difficult situations bravely. Difficult situations kneel in front of me because of my positivity and courage. I can handle every situation with ease.

I’m excited and curious to…
Every day I am getting to know myself and becoming more aware of my choices and interests. I’m discovering my future self by visualizing a good life ahead. I’m excited and curious to know about my passion. Every day, I am loving myself just a little bit more. I am focusing on stabilizing my thoughts. I am happy with the direction I am going in my life.

I have good intentions for…
Today is a beautiful day. I woke up with energy and excitement. I am planning for a smooth transition from the start to the end of the day. I am going to do things that will elevate my spirituality. I am sensing a feeling of joy and peace from the beginning of the day. I have good intentions for the day and am waiting for good things to embrace me. This amazing day will uplift my mood and spread vibrance around me.

Every day I am working on…
I know where my ultimate happiness resides. I have discovered a path to reach there. I am intensely visualizing myself surrounded by joy and vibrance. I am constantly drifting toward a state of radiance, which I truly deserve. Every day I am working on building that bridge, which will eventually help me transition from current suffering to a profound state of happiness.

I am striking off the things I don’t…
I am clear about what I want in my life. I’m setting clear expectations for myself. I am striking off the things I don’t want in my life. Every morning I am waking with a positive feeling for my goal. I have started preparing for my desires and will soon achieve them. I am not taking any pressure, but taking baby steps to reach where I want myself to be. I know my efforts will not go to waste. I am strict about what I have imagined, and I will soon turn it into reality.

I wish that God will bless…
I am a beautiful person at heart, who always loves to help others. I’m proud of my education and knowledge, which I am utilizing in the welfare of others. I feel satisfied after solving people’s problems. I wish that God will bless me with enough resources and capabilities to support the people in need.

I have been appreciated by…
I am blessed with the power to influence others with my positive attitude and way of delivering information. I have been appreciated by many people for my in-depth knowledge and passionate working style. Day by day, I’m glowing with the magical power of positivity.

I’m bullish on my chosen…
This month will bring a lot more clarity to my vision. I will soar with plan and purpose. I’m flushing negativity out of my mind and fueling it with optimism. I’m bullish on my chosen path and hoping for an auspicious transformation.
https://medium.com/design-stripes/positive-affirmations-to-repeat-everyday-200833fce9d6
['Design Stripes']
2021-04-25 13:28:16.489000+00:00
['Affirmations For Success', 'Positive Affirmations', 'Morning Routines', 'Positive Thoughts', 'Affirmations']
DeFi Investment Report: Feb 2021
Summary: I have invested in $CELO, $FARM, $AAVE, $CRV and $UNI in the ratio 4:4:1:1:1. I only invest in tokens where I’m active on Discord and in governance votes. I selected these five tokens to get a taste of stables, yield farming, lending and exchange protocols. My most common concerns are around a lack of collateralisation and a lack of clarity around governance token income. An updated version of this report is maintained here. Investing Principles: Buy and Hold: Once I buy, I don’t sell. Dollar Cost Averaging: Every quarter, I invest roughly the same USD amount in DeFi. Understand what I own & Own what I understand: I try to get involved in governance proposals on all platforms I own stakes in. I will continue to invest in protocols I understand and believe in, and stop investing in those I don’t. Don’t look at returns: I don’t track prices on a monthly or quarterly basis. I only look at how many units I have invested in each holding. I’m defining a unit as a fixed dollar amount (cost basis). I see this type of investing as all or nothing — either the token will go sky high, or flop to zero. As a disclaimer, this is a summary of my approach at the time of writing. I am not recommending or tailoring this strategy to anyone other than myself and my own learning and investment goals. I see a high probability that I will lose the entire value of these investments. If you are investing in the same or similar DeFi tokens, I suggest that you invest only what you can afford to entirely lose. Consider it the price of learning.
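The unit-based approach above can be sketched in a few lines of code. The token names and the 4:4:1:1:1 ratio come from this report; the dollar value of a unit (`unit_usd`) is deliberately a placeholder, since the report defines a unit only as a fixed cost basis and does not disclose the amount.

```python
# Sketch of the unit-based allocation described above.
# NOTE: unit_usd is an assumed placeholder value, not a figure from the report.

def allocate(units_per_token: dict, unit_usd: float) -> dict:
    """Return the USD cost basis per token for one investment round."""
    return {token: units * unit_usd for token, units in units_per_token.items()}

# 4:4:1:1:1 ratio from the report
portfolio = {"CELO": 4, "FARM": 4, "AAVE": 1, "CRV": 1, "UNI": 1}

basis = allocate(portfolio, unit_usd=100.0)  # assume $100 per unit
total = sum(basis.values())

print(basis["CELO"])  # 400.0
print(total)          # 1100.0
```

Because dollar-cost averaging repeats the same dollar amount each quarter, tracking units rather than prices means the only number that changes between reports is the unit count per holding.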
Initial holdings — ~Jan 2021 My initial investments were made over a period from December 2020 to February 2021, in the following tokens and proportions: 4 units — $CELO (locked and earning ~5.5% interest on WOTrust.US) 4 units — $FARM (staked in the Profit Sharing pool) 1 unit — $AAVE (staked in the safety module to earn ~5% interest) 1 unit — $CRV (locked for 4 years on curve.fi) 1 unit — $UNI If you’re new to these tokens, I recommend reading through this DeFi mini-course, which covers each token in turn. There are hundreds, if not thousands, of DeFi tokens with much lower standards than the five I will review. My report will appear critical, but keep in mind that I own these tokens and their success is in my interest. Also keep in context that I have been getting deep into DeFi for only two months — many people behind DeFi platforms have been working in the space for years. $CELO Platform type: CELO is the governance token of the Celo platform, a proof-of-stake platform that operates independently of Ethereum (mini-course here). Investment rationale: The founders of the platform are successful serial entrepreneurs. The Celo platform has a) a proof-of-stake protocol that allows for very low transaction fees of <$0.01 — tackling a major issue with Ethereum cryptos, and b) a mobile payments platform, ValoraApp.com, with good UX design — unlike many DeFi projects. Latest impressions and concerns: Collateralisation. The Celo platform supports a US dollar stablecoin called the Celo Dollar (cUSD). This stablecoin is backed by reserves that predominantly consist of the CELO governance token. While reserve levels are high at present, in a down market I am concerned that the CELO token will drop in value and Celo stablecoins could become undercollateralised and de-peg. On the plus side, Celo.org is transparent about the level of reserves, and — to date — I find the community responsive to questions.
I also find governance to be transparent (although currently quite centralised among early investors and employees). I would prefer to see an updated approach to collateralisation prior to further investing in the Celo governance token. Locking. I am happy to lock my token to vote and earn 5.5% interest. Celo has separated voting out from custody of the token, so I can assign my votes without giving up custody of my CELO, which is comforting. I will continue to lock my CELO if I buy more. $FARM Platform Type: FARM is the governance token of Harvest.Finance, a yield farming platform where the governance token earns 30% of the profits made by investors (liquidity providers) in the yield farming strategies offered by the platform. (Mini-course here). Investment rationale: FARM had undergone a hack before I invested, and recovered somewhat. I believe systems that survive serious incidents are less likely to be complacent in the future. I was also drawn by the earning power of the FARM token. In contrast to many tokens that have no direct earnings at present (e.g. the Uniswap or Compound governance tokens), FARM earns 30% of all profits generated for investors on the harvest.finance platform. The FARM token has been priced very attractively relative to this stream of earnings, with the token price as low as between one and two times the annualised profit stream accruing to the token. Latest impressions and concerns: Transparency. Engaging on Discord, it is difficult to identify (even pseudonymously) who the real decision makers are. I plan to join some of the coding meetings in order to better understand what is really happening. Financialisation. Harvest recently approved issuance of an iFARM token, which is a staked version of FARM. To me, this is a sign of increasing layering of complexity in the platform, giving it less straightforward and more Ponzi-like characteristics.
I’ll be looking closely at the way the Harvest community handles the implementation of the iFARM token, as well as keeping a close look-out for more careful consideration within governance proposals of keeping the platform economics simple and transparent. $AAVE I’ll now provide an update on three tokens where I only invested one unit this quarter. This lower investment level is because I have done less diligence on these tokens. Expect deeper updates over the next months. Platform Type: AAVE is the governance token of AAVE.com — a lending platform. (Mini-course on DeFi lending here). Investment rationale: I determined that I should invest in at least one DeFi lending platform for learning purposes. I briefly reviewed Aave and Compound and chose Aave based on the (perhaps unfair) assessment that Aave was more decentralised, whereas Compound seemed to have some dominant VC backers. I also felt (probably more fairly) that the Aave governance token has a clearer vision for earning fees, whereas Compound tokens do not currently have any earnings. Latest impressions and concerns: Token Economics. While Aave tokens earn a portion of platform fees, the details are not clear on the Aave.com website. Ultimately, token economics underpin the financial reasons to own a token, so I’ll be keeping a close eye on improvements in how Aave communicates token economics on this front. Still, credit to Aave for having progressed this further than compound.finance. Collateralisation. The Aave platform (like most lending platforms) tends to disproportionately service lending of volatile cryptos (like BTC and ETH) collateralised with stablecoins (DAI, USDC, TUSD). Collateralisation requirements are only a fraction of what would be required to cover a fall in BTC/ETH prices to March 2020 levels. This means platform stability is reliant on highly functioning liquidation mechanisms in the case of a market crash.
I believe this risk is under-appreciated by the platform, although the presence of a safety module is a step in the right direction. (More analysis on collateralisation here). Utility. My understanding is that much borrowing on these platforms is done in order to make leveraged long investments in BTC and ETH. I’ll be keeping an eye out for whether there are less financialized uses of lending platforms like Aave that benefit the general public. Safety Module. I have staked my AAVE in the safety module for about 5% annual interest. In return, my holdings could be slashed by 30% if the platform runs short of liquidity. I think this is a bad deal. I think the real market rate should probably be closer to 15% because I put the chance of needing the safety module (with current collateralisation levels) as quite high. The fact that few Aave holders are staking their Aave also suggests to me that this is an issue. One solution would be to increase the interest payable until staking rises to a predetermined threshold (say 75% of outstanding Aave tokens). $CRV Platform Type: Curve.fi is an exchange platform for stablecoins. I have had limited exposure to this platform and so will reserve further remarks for a later post. Investment rationale: I wanted to invest in at least one exchange for stablecoins. I was further attracted by the long-term staking features built into CRV whereby you are rewarded for holding your tokens for years. I have yet to understand the detail of this approach but — where there is long term thinking — I am enticed! $UNI Platform Type: Uniswap is an exchange platform for ERC-20 tokens like $FARM. The DeFi mini-course covering exchanges is here. Investment rationale: I wanted to invest in at least one DeFi exchange. 
I was attracted by Uniswap’s forward thinking on foreseeing a protocol fee (a 0.05% fee on transactions that can be turned on and directed to the governance token) and also the strength of online documentation and the open source emphasis of the protocol. I have, however, had limited exposure to details of governance. More analysis to come in future posts. Coming Up Next Month: In March I’ll likely do a deeper dive on CRV and UNI. To get on the list for the release of that report, subscribe at Pinotio.com . In the meantime, please do comment below if you identify mistakes/omissions or if you have a question.
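The safety-module argument made under $AAVE above can be framed as a simple expected-value trade-off: the staker earns the yield, but with some probability a shortfall event slashes part of the stake. The ~5% yield and 30% maximum slashing come from the report; the shortfall probability used below is a made-up input for illustration only, not an estimate from the report.

```python
# Rough expected annual return of staking in the Aave safety module.
# yield_rate and slash_fraction are figures quoted in the report above;
# p_slash (annual probability of a shortfall event) is an assumed input.

def expected_return(yield_rate: float, slash_fraction: float, p_slash: float) -> float:
    """Expected one-year return: earn the yield, lose p_slash * slash_fraction."""
    return yield_rate - p_slash * slash_fraction

# At 5% yield and 30% max slashing, staking breaks even only if the
# annual shortfall probability stays below yield / slash = ~16.7%.
breakeven_p = 0.05 / 0.30
print(round(breakeven_p, 3))  # 0.167

# Illustration with an assumed 10% shortfall probability:
print(round(expected_return(0.05, 0.30, 0.10), 3))  # 0.02  (thin margin at 5% yield)
print(round(expected_return(0.15, 0.30, 0.10), 3))  # 0.12  (the ~15% rate the report suggests)
```

This simple model supports the report's intuition: if the chance of needing the safety module is "quite high", a 5% yield leaves little expected compensation for bearing a 30% slashing risk.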
https://medium.com/@pinotio/defi-investment-report-feb-2021-fae7721bc620
[]
2021-02-20 16:28:34.441000+00:00
['Harvest Finance', 'Ethereum Blockchain', 'Aave', 'Defi', 'Cryptocurrency']