Patient Advisory Groups in Diabetes Research: A Caregiver Perspective
As a Knowledge Exchange Coordinator for the Developmental Origins of Chronic Disease in Children (DEVOTION) Network, my work supported stakeholder and patient engagement. One example is the patient advisory group for the Improving Renal Complications in Adolescents with Type 2 Diabetes (iCARE) Cohort study. Put simply, iCARE is an observational clinical study designed to address the high rates of kidney damage in youth living with type 2 diabetes (T2D). Beyond biological risk factors, the study is also exploring the psychological and social risk factors associated with kidney damage in youth with T2D.

What can patient engagement look like in reality? We’ve previously shared two blogs that highlighted patient engagement within the iCARE study’s patient advisory group. Dr. Brandy Wicklow (a co-principal investigator for iCARE) talked about the logistics of starting an advisory group (read it here), while Shayna Quoquat shared her perspective as a youth advisor (found here). In this post, we share the perspective of a caregiver involved in the study: Jackie McKee (co-chair of the iCARE advisory group), who both lives with T2D and cares for her child living with T2D.

Can you tell us about yourself, and how you got involved in health research?

“It all started when my son was diagnosed, two months before his 13th birthday. I’ve been very familiar with type 2 diabetes in my family; my grandmother and mom both had it. I was diagnosed myself 5 years ago. Arlene, the nurse in charge who works in Waywayseecappo First Nation, asked if my son and I would be interested in joining this advisory group. My son couldn’t join in the end, but I did and I’ve been with it ever since.”

How has this role been valuable to you?

“It has given me a forum to say how I feel, my concerns and issues with diabetes. But the biggest thing is that I now feel like I have a voice, one that reaches a larger audience. One of the biggest things when we first started was establishing trust between parents, kids and researchers. Trust has to be established with the healthcare workers in the system. A lot of people have talked about not feeling empathy from their doctor outside of the clinic. I think trust is there now with the group and researchers; it has evolved. When we first started these meetings, it was common feelings that bonded us. There is constant worry as a parent when your child has type 2 [diabetes]. Even though we (the iCARE advisory group) have been in existence for almost 3 years and we’ve only met a handful of times, it feels like we’ve been together for a long time.”

How has the voice of the iCARE advisory group impacted the iCARE study?

Editor’s note: mental health was identified as an important risk factor by the iCARE advisory group, leading the research team to incorporate and measure it within their study.

“Mental health is something that the researchers are now focusing on within their study. To me, mental health support is just as important as the medicine. As a group, we’ve looked at the types of questions they now ask in clinic to find out more about people’s current mental health support, and provided advice around what and how they should be asking patients these questions during their visits at clinic. Now we’re talking about elder support in clinic and ways to increase support for patients. Sometimes it feels like we’re not making an impact. But I watched a clip on the news that featured the iCARE study the other day with Allison and Shayna (note: Dr. Allison Dart is the other co-principal investigator for iCARE and Shayna is one of our youth advisors). Allison had mentioned how they were now incorporating mental health in the study and within clinic. When I heard her say that, I thought “someone’s heard us”. We as a team have contributed to diabetes [research] in that way.”

What challenges have you faced within the iCARE patient advisory group?

“In our group we have two types of people: ones that are more vocal, and ones that are quiet. I’m one of the ones who talks. Some people express themselves differently, like through art. But I think sometimes it’s hard to have discussions because not everyone has been able to share how they’re feeling. The other challenge is having a consistent core group. Once the youth are finished high school they sometimes leave the group because they have changed directions in their lives, either attending school or entering the work force.”

What are the key things you’ve learned?

“The biggest thing I’ve learned is the increased risk kids with type 2 diabetes have of kidney problems by the time they’re in their 30’s. It scared the hell out of me because that’s where I see my son heading. Kidney problems equals dialysis in my opinion. It’s also understanding how big this is, kids getting type 2 diabetes. I do this work because I don’t want another parent to go through this. You can’t imagine this unless you’ve gone through it. It’s not necessarily inevitable that your child will get diabetes. The other thing I learned is about the misconception that type 2 diabetes is 100% preventable. Contrary to popular belief, this is not the case! There are many factors involved, including a hereditary factor. This issue is beyond this group and the iCARE study. So many people and different departments within our healthcare system need to get on board to stop this.”
https://medium.com/knowledgenudge/patient-advisory-groups-in-diabetes-research-a-caregiver-perspective-4fb9e9155480
['Leanne Dunne']
2018-09-19 14:37:46.125000+00:00
['Diabetes', 'Patient Engagement', 'First Nations', 'Health', 'Kidney Disease']
Where Aliens Attacked in the Independence Day Movie
Today, we celebrate Independence Day, when in 1776 the Continental Congress declared the 13 American colonies were no longer subject to the monarch of Britain and were now free, united and independent states. We’re excited to celebrate Independence Day and its significance, but we don’t want the fun to stop there! After all, Independence Day isn’t just a beloved day in American history… it’s also a 1996 blockbuster that grossed over $817 million in ticket sales and eventually led to this fandom Wiki (which also looks like it’s straight out of 1996).

In the film, aliens use Earth satellites to signal to each other, which allows them to coordinate an attack and destroy our planet by (spoiler alert!) blowing a lot of stuff up. There’s much turmoil over the course of the epic 2.5-hour movie. The aliens are vicious, the Statue of Liberty ends up face-down in New York Harbor, and it seems all hope is lost! But then satellite technician David Levinson (Jeff Goldblum) and Captain Steven Hiller (Will Smith) combine forces to upload a virus into the Earth’s satellites to infect the alien mothership and save the day. Would you expect anything less from the shining stars that are Jeff and Will?

David Levinson uploads the virus that will doom the alien invaders // Credit: Giphy

Below, you’ll find Planet imagery showing the cities and landmarks where some of the most notable, explosive scenes in the movie take place. In the extremely unlikely event that Earth satellites are hijacked for alien takeover, we’ll be ready. We’ve got our own brainy, Jeff Goldblum-ish characters on staff, ready and willing to upload a virus should the day come. Happy Independence Day!

The White House and the Capitol Building in Washington D.C. // Imagery and GIF by Leanne Abraham, Planet
The Empire State Building in New York City // Imagery and GIF by Leanne Abraham, Planet
The U.S. Bank Tower in Los Angeles, CA // Imagery and GIF by Leanne Abraham, Planet
And last but not least: Area 51, Nevada. It didn’t get blown up, but the Independence Day film opens with it, and it’s totally iconic. // Imagery and GIF by Leanne Abraham, Planet

P.S. Aliens, if you’re reading this: Please don’t blow up the Earth. Just watch Independence Day. It will be just as satisfying. Probably more so, to be honest. It is a classic.
https://medium.com/planet-stories/where-aliens-attacked-in-the-independence-day-movie-2c47faa5bfee
[]
2019-07-04 14:01:01.036000+00:00
['Satellite Imagery', 'July 4', 'Independence Day', 'Satellite']
Facebook Newsfeed Algorithm: 5 Ways to Recover Organic Reach
Ladies and gentlemen. We come together today to once again mourn the loss of Facebook organic reach, to share the grief all of us marketers feel. And perhaps, in that sharing, we can find the strength to look toward the future with some hope. Yes, organic reach on Facebook is abysmal and getting worse, thanks to the latest announcement from the social network that’s visited by more than a billion users every day. Facebook will show more funny videos and baby pictures posted by family and friends instead of news and other marketing content from brands, businesses, and publishers.

How bad is organic engagement on Facebook? On average, it’s somewhere in the neighborhood of less than 1 percent. Yikes. Every once in a while, one of your posts might still get tons of organic engagement. But it’s fast becoming mission impossible. Facebook: Unhackable.

Facebook’s algorithm is powered by machine learning. While I don’t know the secret formula Facebook uses, we know from a computer-science perspective that machine-learning algorithms learn by testing and figuring out how people react to those tests. Bottom line: if people really love your content and engage with it, then they are more likely to see more of that type of content in the future. The reverse is also true — if you post garbage, and people don’t engage with it, then those people are even less likely to see your stuff in the future. More engagement (i.e., shares, comments, Likes) means more visibility in Facebook’s news feed. Facebook’s algorithm is more likely to give more visibility to posts that resonate well, auditioning them in front of more people. In fact, Facebook Ads, Google AdWords and even organic search work the same way.

So what’s the solution? Your mission, if you choose to accept it, is to mitigate the loss from the latest Facebook newsfeed algorithm. You must raise your organic engagement rates. Let’s meet your new weapons — the five crazy hacks that will help you do what’s said to be impossible: hack the Facebook newsfeed algorithm. Note: Some of these hacks involve spending a little bit of money. Others are totally free. All of them are totally worth your time.

Facebook Newsfeed Hack #1: Preferred Audience Targeting

Listen up: Preferred audience targeting is a brand-new Facebook feature that works just like ad targeting, but for your organic posts. That’s right, this new feature lets you target your organic updates as if they were ads, for free. Facebook lets you target your update so only the people who are most likely to be interested in it will see it. Here’s where the preferred audience targeting option can be found: This feature is so powerful because not everyone who follows your Facebook page is going to care about every single update you publish. If you want to start raising your organic engagement, you need to stop broadcasting to all of your followers and focus on those people who are most likely to engage with specific updates. Think about it. Why do people follow huge companies like IBM or GE? It could be for any number of reasons. Facebook’s preferred audiences feature is pure genius for companies that have a variety of products and divisions, or that operate in multiple countries. You can narrow the targeting based on users’ interests and locations to reach the people you really want without bothering the rest of your followers. This feature also has benefits for smaller companies and publishers. Take me for example.
I post updates on a wide variety of topics, including online advertising, entrepreneurship, social media marketing, SEO, branding, and growth hacking. Preferred audience targeting allows me to decide who sees my posts — or who won’t see them, using audience restrictions: Here’s another example. Let’s say you’re a French clothing retailer with locations in France, Poland, and Germany. You could make it so that only French-speaking millennial females who live near your locations will see your post announcing your latest deals. Remember: not everybody who likes your page is your target market. Plenty of random people will like your page over time, but then never engage with your updates, visit your website, or buy from you. If you can only reach 1 percent of your audience, you should more narrowly target the people who are truly interested in what you have to offer. Giving people what they’re interested in is what great marketing is all about — and, in the process, it will help you raise your Facebook engagement rate significantly.

Facebook Newsfeed Hack #2: The Unicorn Detector Pyramid Scheme

The Unicorn Detector Pyramid Scheme is the process you can use to separate your content unicorns from the donkeys. What is a content unicorn? Well, content becomes a unicorn when it is clearly among the top 1 to 2 percent of all of your content. These are your most rare and beautiful pieces of content that attract the most shares, engagement, and views. A content donkey, on the other hand, doesn’t stand out at all. At most, it’s average. Ninety-eight percent of your content will be donkeys that get average engagement — and again, less than 1 percent is the average organic engagement on Facebook, which is insanely low, right? To raise your organic engagement rates on Facebook, you need to post fewer, but better, updates. You can test out your content organically on Twitter. Here’s how it works. Post lots of stuff on Twitter — somewhere around 20 tweets per day. But imagine that every tweet has been infected with a virus, one that will kill it within 24 hours unless it gets the antidote. The only cure for these infected tweets? They need to get a significant number of retweets, clicks, likes, and replies. Examine your top tweets in Twitter Analytics. Those tweets with the most engagement — your top 5 or 10 percent — have survived! The content that got the most engagement on Twitter is also highly likely to generate similar engagement on Facebook.

Facebook Newsfeed Hack #3: Post Engagement Ads

You can use Facebook’s Post Engagement Ads to give your posts a bit of a push. Yes, that means you’re spending a little money to “earn” some free reach in the news feed. For example, let’s say I posted the above update only on my wall. The engagement is going to be pretty low. Maybe a few hundred people will see it. So what happens if I spend just $20 to promote it? In this case, I paid for more than 4,400 impressions (clicks, follows, likes, etc.), but also got more than 1,000 organic engagements for free as a result. How? Whenever someone shares your promoted post, more people see it organically in their newsfeeds and engage with it.

Facebook Newsfeed Hack #4: Add Engaged Followers

Did you know there’s a way you can selectively invite people who have recently engaged with one of your Facebook posts to like your page? This is a valuable but little-known feature available to some (but not all) pages. You want people who engage with you to become part of your Facebook fan base.
You know these people like you and are more likely to engage with your content because they’ve done so in the past. Here’s how you do it: Click on the names of the people who reacted to your post (liked, loved, etc.). You’ll see three types of buttons (Invite, Liked, Invited). Clicking on the Invite button will send an invitation to people who engaged with one of your Facebook posts to like your business page. Does it work? Yep. Between 15 and 20 percent of the people I invite to like my page do so. Oh, and did I mention it’s totally free? You can read more about the Facebook invite button here. If you want to further increase your Facebook following, you could run a remarketing and list-based Facebook Fan / Page Promotion campaign, but I wouldn’t recommend it. I don’t think it’s a good investment unless you have a ridiculously low number of followers. You’re better off doing nothing. Our goal is to increase engagement rates to increase earned organic engagement. Attracting the wrong types of fans could hurt, rather than help, your engagement rates.

Facebook Newsfeed Hack #5: Use Video Content

The decline of organic reach almost mirrors the rise of video on Facebook. Users watch more than 8 billion videos every day on the social network. And these videos are generating lots of engagement. Just look at this recent research from BuzzSumo, which examined the average total number of shares of Facebook videos: Facebook is doing its best to try to kill YouTube as the top platform for video. If you haven’t yet, now is the time to jump on the bandwagon. Stop sharing vanilla posts that get little to no engagement. Add some video into your marketing mix! That should help improve your organic engagement because engagement begets engagement.

Closing Thoughts on the Facebook Newsfeed Algorithm

Facebook organic reach is pretty terrible. That’s why you should start treating your organic Facebook posts more like a paid channel, where you have to be pickier and optimize to maximize engagement, in the hopes of getting more earned organic engagement. We’ll never get back the Facebook organic reach we’ve lost over the past few years. However, these five hacks will help dramatically increase your organic engagement and mitigate your losses from the latest Facebook news feed change.

Be a Unicorn in a Sea of Donkeys

Get my very best Unicorn marketing & entrepreneurship growth hacks: 2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger.

About the Author

Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram. Originally published on Wordstream.com
https://medium.com/marketing-and-entrepreneurship/facebook-newsfeed-algorithm-5-ways-to-recover-organic-reach-5925adcc009
['Larry Kim']
2019-07-16 10:11:01.091000+00:00
['Entrepreneurship', 'Facebook', 'Social Media', 'Algorithms', 'Marketing']
Demystifying PyTorch: Understanding the interaction between various PyTorch abstractions
PyTorch is one of the most widely used libraries for deep learning, but it is also one of the harder libraries to understand because of the many side effects one object can have on another. For instance, calling the “step” method of an optimizer updates the module object’s parameters. While trying to wrap my head around PyTorch objects and how they interact with each other, I found this Coursera course to be very helpful. It shows how the different abstractions (such as Dataset, DataLoader, Module, Optim, etc.) interact with each other. This post has a similar motivation. I start with an explicit implementation of linear regression using minimal PyTorch abstractions. After that, each iteration introduces a new PyTorch abstraction that simplifies the code and brings much more flexibility overall.

Version 1: Implementing Linear Regression The Hard Way

```python
import numpy as np
import torch

# fix seed
np.random.seed(10)
torch.manual_seed(10)

# Randomly generate 1000 x values and compute Y as -1 + 3 * X. The goal
# is to learn the parameters -1 (bias) and +3 (w1). We fold the bias into X
# to simplify the parameter updates.
samples = 1000
X = torch.hstack([
    # Insert ones to simplify computing the bias
    torch.ones(samples, requires_grad=False).view(-1, 1),
    # Randomly generate X
    torch.randn(samples, requires_grad=False).view(-1, 1),
])  # generates a 2D array of shape (1000, 2)

# Compute Y. The goal is to learn the bias (-1) and the coefficient for X, i.e. 3
Y = -1 * X[:, 0] + 3 * X[:, 1]  # generates a (1000,)-shaped vector

# Helper array: index ids. We shuffle and split this to generate random mini-batches
idx = np.arange(X.size()[0])

# Generate the parameter tensor. Note that requires_grad must be True here
w = torch.randn(X.size()[1], requires_grad=True)

# Prediction function
def forward(w, x):
    return (w * x).sum(axis=1)

# We use mean squared error as the cost function
def mse(y, yhat):
    return torch.mean((y - yhat) ** 2)

# learning rate
lr = 0.01

# run 100 epochs through the data
for epoch in range(100):
    # Randomize the indexes and split them into 100 mini-batches
    np.random.shuffle(idx)
    for batch in np.split(idx, 100):
        curX = X[batch, :]
        curY = Y[batch]
        # compute the predicted values
        yhat = forward(w, curX)
        # compute the cost
        cost = mse(curY, yhat)
        # compute gradients -- this updates the grad attribute of the w tensor
        cost.backward()
        # update the parameters
        w.data = w.data - lr * w.grad.data
        # reset the gradient of w to zero
        w.grad.data.zero_()

# use detach so this read is not tracked by autograd, then print the learned parameters
print(w.detach().numpy())
```

Version 2: Encapsulating Data Management Using Dataset and DataLoader

PyTorch provides the Dataset abstraction to hide how data is managed, which gives a better encapsulation of the data. Further, it provides the concept of a DataLoader to split data into batches. We will use these concepts to hide some of the details of how our data is stored and organized. Below is the second version of the code. Using Dataset and DataLoader makes some of the data-management code from version 1 irrelevant; those pieces are left commented out below so the change is easy to see.

```python
import numpy as np
import torch

# fix seed
np.random.seed(10)
torch.manual_seed(10)

# Randomly generate 1000 x values and compute Y as -1 + 3 * X. The goal
# is to learn the parameters -1 (bias) and +3 (w1). We fold the bias into X
# to simplify the parameter updates.
samples = 1000

class RandomData(torch.utils.data.Dataset):
    def __init__(self, samples):
        self.X = torch.hstack([
            # Insert ones to simplify computing the bias
            torch.ones(samples, requires_grad=False).view(-1, 1),
            # Randomly generate X
            torch.randn(samples, requires_grad=False).view(-1, 1),
        ])
        # Compute Y. The goal is to learn the bias (-1) and the coefficient for X, i.e. 3
        self.Y = -1 * self.X[:, 0] + 3 * self.X[:, 1]

    def __getitem__(self, index):
        return (self.X[index, :], self.Y[index])

    def __len__(self):
        return self.X.size()[0]

data = RandomData(samples)
dataloader = torch.utils.data.DataLoader(dataset=data, batch_size=100, shuffle=True)

# Helper array: no longer needed -- the DataLoader shuffles and batches for us
# idx = np.arange(X.size()[0])

# Generate the parameter tensor (its shape now comes from data.X).
# Note that requires_grad must be True here
w = torch.randn(data.X.size()[1], requires_grad=True)

# Prediction function
def forward(w, x):
    return (w * x).sum(axis=1)

# We use mean squared error as the cost function
def mse(y, yhat):
    return torch.mean((y - yhat) ** 2)

# learning rate
lr = 0.01

# run 100 epochs through the data
for epoch in range(100):
    # NOT RELEVANT anymore -- shuffling and batching are handled by the DataLoader
    # np.random.shuffle(idx)
    # for batch in np.split(idx, 100):
    #     curX = X[batch, :]
    #     curY = Y[batch]
    for (curX, curY) in dataloader:
        # compute the predicted values
        yhat = forward(w, curX)
        # compute the cost
        cost = mse(curY, yhat)
        # compute gradients -- this updates the grad attribute of the w tensor
        cost.backward()
        # update the parameters
        w.data = w.data - lr * w.grad.data
        # reset the gradient of w to zero
        w.grad.data.zero_()

print(w.detach().numpy())
```

Version 3: Encapsulating the Model Using the nn.Module Class

Version 2 encapsulated data management using Dataset and DataLoader. In version 3 we leverage the nn.Module class to encapsulate the model-related pieces. There are two of them: the coefficient parameter (w) and the forward function that computes a prediction for a given data point.

```python
import numpy as np
import torch

# fix seed
np.random.seed(10)
torch.manual_seed(10)

# Randomly generate 1000 x values and compute Y as -1 + 3 * X. The goal
# is to learn the parameters -1 (bias) and +3 (w1). We fold the bias into X
# to simplify the parameter updates.
samples = 1000

class RandomData(torch.utils.data.Dataset):
    def __init__(self, samples):
        self.X = torch.hstack([
            torch.ones(samples, requires_grad=False).view(-1, 1),   # ones for the bias
            torch.randn(samples, requires_grad=False).view(-1, 1),  # random X
        ])
        # Compute Y. The goal is to learn the bias (-1) and the coefficient for X, i.e. 3
        self.Y = -1 * self.X[:, 0] + 3 * self.X[:, 1]

    def __getitem__(self, index):
        return (self.X[index, :], self.Y[index])

    def __len__(self):
        return self.X.shape[0]

data = RandomData(samples)
dataloader = torch.utils.data.DataLoader(dataset=data, batch_size=100, shuffle=True)

class CustomLinearModel(torch.nn.Module):
    def __init__(self, num_parameters):
        super(CustomLinearModel, self).__init__()  # initialize the nn.Module base class
        # Generate the parameter tensor. Note that requires_grad must be True here
        self.w = torch.randn(num_parameters, requires_grad=True)

    def forward(self, x):
        return (self.w * x).sum(axis=1)

model = CustomLinearModel(2)

# learning rate
lr = 0.01

# We use mean squared error as the cost function
def mse(y, yhat):
    return torch.mean((y - yhat) ** 2)

# run 100 epochs through the data
for epoch in range(100):
    # the index shuffling/splitting from version 1 is no longer needed
    for (curX, curY) in dataloader:
        # compute the predicted values
        yhat = model.forward(curX)
        # compute the cost
        cost = mse(curY, yhat)
        # compute gradients -- this updates the grad attribute of the w tensor
        cost.backward()
        # update the parameters
        model.w.data = model.w.data - lr * model.w.grad.data
        # reset the gradient of w to zero
        model.w.grad.data.zero_()

print(model.w.detach().numpy())
```

Version 4: Using an Optimizer

Above, we are manually updating the parameters (w), which limits us to a few simple implementations of gradient descent. There are many other forms of gradient descent, such as momentum, Adam, etc. PyTorch provides these variants as part of the “optim” module. To leverage this module, we also need to make a minor change to our “CustomLinearModel” class: wrap the “w” tensor in “nn.Parameter” (see the constructor below), so the optimizer can discover it via model.parameters().

```python
import numpy as np
import torch

# fix seed
np.random.seed(10)
torch.manual_seed(10)

samples = 1000

class RandomData(torch.utils.data.Dataset):
    def __init__(self, samples):
        self.X = torch.hstack([
            torch.ones(samples, requires_grad=False).view(-1, 1),   # ones for the bias
            torch.randn(samples, requires_grad=False).view(-1, 1),  # random X
        ])
        # Compute Y. The goal is to learn the bias (-1) and the coefficient for X, i.e. 3
        self.Y = -1 * self.X[:, 0] + 3 * self.X[:, 1]

    def __getitem__(self, index):
        return (self.X[index, :], self.Y[index])

    def __len__(self):
        return self.X.shape[0]

data = RandomData(samples)
dataloader = torch.utils.data.DataLoader(dataset=data, batch_size=100, shuffle=True)

class CustomLinearModel(torch.nn.Module):
    def __init__(self, num_parameters):
        super(CustomLinearModel, self).__init__()
        # Wrapping w in nn.Parameter registers it with the module,
        # so the optimizer can find it via model.parameters()
        self.w = torch.nn.Parameter(torch.randn(num_parameters, requires_grad=True))

    def forward(self, x):
        return (self.w * x).sum(axis=1)

model = CustomLinearModel(2)

# The optimizer replaces the manual learning-rate update
# lr = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# We use mean squared error as the cost function
def mse(y, yhat):
    return torch.mean((y - yhat) ** 2)

# run 100 epochs through the data
for epoch in range(100):
    for (curX, curY) in dataloader:
        yhat = model.forward(curX)
        cost = mse(curY, yhat)
        cost.backward()
        # update parameters (replaces: model.w.data = model.w.data - lr * model.w.grad.data)
        optimizer.step()
        # reset gradients (replaces: model.w.grad.data.zero_())
        optimizer.zero_grad()

print(model.w.detach().numpy())
```

What’s Next

There are other abstractions you can use. For instance, PyTorch already implements most of the common loss functions, so we don’t need our hand-written “mse” function above; we can use torch.nn.MSELoss instead. We also don’t need to manage parameters by hand: many common models are already implemented for you, and a custom module class can build upon them. Check out the list of already implemented models over here.
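To tie those last suggestions together, here is a minimal sketch of what a “version 5” might look like, using the built-in nn.Linear and nn.MSELoss in place of the hand-rolled model and cost function. This is my own illustration rather than code from the original post, and it assumes the same RandomData dataset and dataloader defined above:

```python
import torch

# nn.Linear(2, 1, bias=False) owns its weight as an nn.Parameter internally,
# playing the role of the hand-rolled CustomLinearModel above
model = torch.nn.Linear(2, 1, bias=False)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(100):
    for curX, curY in dataloader:  # assumes the DataLoader from the versions above
        yhat = model(curX).squeeze(1)  # (batch, 1) -> (batch,)
        cost = loss_fn(yhat, curY)
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()

# weight has shape (1, 2); it should converge to roughly [[-1, 3]]
print(model.weight.detach().numpy())
```

Note that calling model(curX) instead of model.forward(curX) is the idiomatic choice, since it routes the call through nn.Module’s hooks.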
https://medium.com/@ragrawal-24314/demystifying-pytorch-understanding-interaction-between-various-pytorch-abstractions-60f919d6fc3
['Ritesh Agrawal']
2021-02-01 17:05:27.113000+00:00
['Machine Learning', 'Pytorch']
Are You an Accounting Student? Will Your Future Job Be Replaced?
In the Philippines, only 15.25% of examinees passed the 2021 CPA licensure exam. While you can work some accounting jobs without that license, the job itself is…. in danger??

In 2020, I was in a web conference for an inter-school marketing competition of some sort. The company hosting it was an advertising agency based in the Philippines with clients all over the world. Its founder-CEO told us that they don’t have accountants anymore. Why? They use QuickBooks.

What is QuickBooks? QuickBooks is subscription-based accounting software. It aims to be a powerful (but easy) all-in-one tool for personal use, freelancers, businesses, and accountants. It can track cash flow, income, payments, sales, time, inventory, etc. It can also run reports, maximize tax deductions, send estimates, manage bills and payments, and more. However, QuickBooks is not the only one out there. For doing taxes, TurboTax, TaxSlayer, and FreshBooks can make the process easy for business owners. For payroll, Wagepoint and Gusto are a couple of examples. The main appeal of these programs is their price and ease of use. Businesses can save more by using these programs than by hiring accountants.💰

So, are accountants going away? Generally, a good accountant is still important. These programs are only maximized if you have some accounting knowledge. Which anyone should have, by the way. If that’s the case, here’s what’s alarming. According to a 2013 study by Oxford’s Department of Engineering Sciences, published in 2017, an accountant’s and auditor’s job has a 94% probability of being computerized and replaced, while accounting clerks are at 98%. If you ask a business trying to cut expenses, they will agree. If you ask any accountant, they will probably disagree. And they should. Here’s why, and how to be computer-proof!

If you are a junior accountant or an accounting student right now, you have to realize that most of your daily manual tasks and responsibilities can now be automated. Even some insights and predictions can be made for you. The basics are still necessary, but you have to get paid for your judgment rather than your effort. What do I mean by this? If a business owner tells you that software can do your job at a lower price, should you automatically accept the lower pay? This is where judgment comes in.🧙‍♂️ If you can go beyond what’s already done by software, you can show your specific expertise. You have to be more. You have to be more than the typical accountant. You have to be more than just someone who does the numbers and reports. You have to be a business advisor (and charge more). You have to show that you are an expert who saves them time and money. Someone to help them make the right financial business decisions. Show them what the software interpretations might miss. You can also look at their company’s future direction, growth, and potential future investments. As an accountant, you can also take on an analyst’s role. You can point out risks, trends, and opportunities in the market and complement that with your accounting expertise. Prove yourself with more value by doing what an “all-in-one” software might miss out on. And I’m pretty sure there’s plenty. I hope this helped :)
https://medium.com/@carlumaliv/are-you-an-accounting-student-will-your-future-job-be-replaced-c7fbb14d5652
['Carl Umali']
2021-11-22 09:54:35.723000+00:00
['Accounting', 'Accounting Software', 'Personal Development', 'Self Help', 'Self Improvement']
Be Friendly
Christ for Youth International

“A man that hath friends must shew himself friendly: ...” [Proverbs 18:24 KJV]

What report do your friends and family give about you? Do you come across as cold, rude, grumpy and short-fused, or would you be described as a warm, loving and kindhearted person? If we want to have great and fruitful relationships, we must learn how to live and relate with different people. Being friendly is having a welcoming personality that facilitates neighbourliness. To frame our opening scripture in other words, if we would have friends, then we must be ‘friend-worthy’. We cannot expect to have thriving relationships when we have the wrong attitude. We push people away if we continue to be hostile to those around us. By being pleasant to our friends, we show that we value them and are thankful to have them in our lives. Rude and hostile behaviour devalues relationships and brings about conflicts. Growing in the fruit of the Spirit is the surest way to build lasting relationships that glorify God. We must yield to God’s work in us by practically living out the fruit of the Holy Spirit in all our relationships. It is worth noting that we do not have to show brotherly kindness and hospitality only to people in our social circles; rather, we must acknowledge that all men have been created in God’s image, so we must always treat the people we encounter with love and respect.

Thank You, Jesus, for being the most perfect friend I could ever ask for. As I grow in my fellowship with You, make me a better companion to the people that are in my life.

Further Reading: Hebrews 13:1
Bible In One Year: Deut [7–10]
https://medium.com/christ-for-youth-international/be-friendly-f29786a427d8
['Precious Moment']
2020-12-16 01:44:00.689000+00:00
['Friends', 'Youth', 'Precious Moments', 'Family', 'Christ']
It Changed That Day
CC0 image from Pixabay

It didn’t take much
For the unbendable
To bend and break
Just a dose of hate
And in an instant
Our nation changed
But not only our nation
The world.

Those planes forever
Etched in our mind
Images frozen in time
As we close our eyes
And see innocent lives
Jumping to their deaths
The fury of the fire
Just too much to bear.

Heroes were born that Day
And far too many died
It was a lesson in hate
And love
At the same time
And those images
Forever
Etched in our mind.

First responders
Risking their lives
But not only that
Strangers
Digging under
The rubble
After those towers
Came down
Like a house of
Cards
Only slightly jarred
Thousands of lives
Died under the rubble
The weight of the world
Was on their shoulders.

It was a lesson in hate
As we saw lives destroyed
And a lesson in love
As a nation became united
Once again
Even if only for a time
But everything changed that
Day
The innocent image
Of America destroyed.

In the time it took
For a plane to crash into those
Buildings.

© Michelle R Kidwell
8/01/2018
https://michellereneekidwell-95261.medium.com/it-changed-that-day-7be3d5c030fb
['Michelle Renee Kidwell']
2018-08-08 19:24:38.639000+00:00
['Strength', 'Poetry', 'Hate', 'Love', 'September 11']
These are great!!
Pegasus Design System Pro for Figma Move projects to your users faster with a UI Kit designed for greater consistency and usability that adapts to your…
https://medium.com/@hellojohnnymac/these-are-great-d110d63850d7
['John Macmenamin']
2020-12-04 13:01:26.019000+00:00
['Design System']
How to Build a Crypto-Portfolio For Trading and Investing
Cryptocurrencies are now the hottest asset class in the marketplace, and the recent influx of crypto-based investment funds only further testifies to this truth. With hundreds of new altcoins popping up in the market as a result of billions in investor funding, traders see a new world of opportunity in the digital token boom despite bitcoin’s continuing decline in price. Just the other day, the Silicon Valley-based investment firm Andreessen Horowitz announced a $300 million fund dedicated to investing in blockchain projects as well as the various currencies themselves. Crypto exchange Binance has already made it known that it is planning a $1 billion crypto-fund of its own. As blockchain technology continues to become more popular, the potential profit opportunities for investing in this area are staggering. At the same time, investing in the cryptocurrency world is changing. The last vestiges of the unregulated, “wild-west” atmosphere that existed from the early ICOs up until the boom in 2017 are fading away. Instead, the industry faces a future where increased regulation, growing adoption rates, and mainstream acceptance will cause an irreversible maturation of the industry. What that means for investors is that there has never been a better time to jump into the crypto investing world than now. Here are a few considerations to keep in mind when getting started.

Good and Bad Reasons To Invest

It’s worth mentioning that there are many good reasons to start investing in cryptocurrencies. Hedging your net worth against fiat collapses, supporting the social vision and technology behind a project, or even just pursuing profitable returns are all valid reasons to start investing. However, there are also reasons why you should not get involved. For some, that might be falling victim to the hype that surrounds the market. In other cases, it can come in the form of a fear of missing out, worrying that you’re passing up an opportunity that may never come again. While getting excited about the market’s potential and seizing opportunities aren’t bad things in and of themselves, they need to be undergirded by genuine research and knowledge rather than just emotion.

Understanding the Markets

Before you even consider wallets, exchanges, or any potential software, it’s imperative to have a solid understanding of not just blockchain technology, but how the cryptocurrency markets operate and move. There are significant differences that characterize this specific area that aren’t an issue in more traditional financial markets like stocks, bonds, and derivatives. For one, it’s important to understand the underlying technology and monetary rules behind whichever token you want to invest or trade in. Some currencies, such as bitcoin, are decentralized. Others are more centralized, which is largely due to the varying consensus algorithms in existence (which you can read more about in a previous article we wrote on the subject here). Closely related is the issue of mineability, with some tokens not being mineable at all. Ripple, NXT, Waves, and others are examples of these and have all their tokens controlled by a single company. It’s also essential to understand the business use behind each token you wish to invest in. In some cases, such as bitcoin, the token serves as a store of value and a method of payment between different parties.
Ethereum, on the other hand, is used more as a platform for creating decentralized applications and autonomous smart contracts. These differences mean a lot in understanding the future potential of an asset. From then on, specifics such as how many developers are on a project, how large its community is, average trading volume (liquidity), and market capitalization are all things to consider. When evaluating specific altcoin fundamentals, the process is similar to analyzing an ICO. Look for a transparent technical vision with an active, visible management team. Poor projects are the opposite and tout fuzzy technical promises without expounding on the details.

Know Your Risk Tolerance Level

As is the case with all investments, understanding what level of risk is acceptable to you directly determines which cryptocurrencies you would do best investing in. Of course, digital assets in general have a higher baseline level of risk, partially because the market is still unregulated but also because of other factors (volatility and liquidity risks, etc.). Newer currencies are usually available at quite cheap prices, and while they bring a big potential for returns, they are also more likely to fizzle out. Mainstream currencies are less likely to see rapid changes in their price level and instead move at slower rates (although, as seen in the case of Bitcoin, a mainstream token can still lose most of its value over several months). To compare the cryptocurrency investing world with regular financial markets, mainstream coins such as Bitcoin can be considered the large-cap stocks of the market. Altcoins can be compared to the various small-cap and mid-cap stocks with moderate potential for growth. As for ICOs, the closest equivalent in terms of risk/reward profile would be penny stocks. For more active traders, the cryptocurrency market is still young enough that significant arbitrage opportunities still exist. Unlike traditional markets, the lack of large, institutional traders with advanced computing software means that price discrepancies are still a (relatively) risk-free method of making money. Feel free to read more about the subject here.

The Basics Behind Setting Up Your Account

As for the actual mechanics of trading and setting up your account, there are two things you will need to do: first, set up a digital wallet, and second, pick an exchange. A digital wallet stores all incoming and outgoing information relating to your cryptocurrency holdings. You will need to decide whether to store your wallet online through a website or platform (called a hot wallet), which tends to be simpler to access, or instead use a USB stick or hard drive (a cold wallet). The latter method is more secure from hacks but also harder to use, especially for beginners. If you wish to invest in lesser-known cryptocurrencies, you will also need to make sure that your digital wallet can store those specific tokens, as some online wallets don’t accept all types of altcoins. The next question is which exchange to use. While there are hundreds of exchanges out in the marketplace, some of the more notable ones include bittrex.com, gdax.com, coinbase.com, Binance.com, and others. It’s important to remember that not only do exchanges have different fee structures, but some (such as Bitfinex and Coincheck) have been hacked before.
Always keep your cryptocurrencies stored safely in your wallet, rather than on an exchange, no matter how secure an exchange may market itself to be.

Investing Philosophy

Within the financial world, investors tend to fall within two schools of investing thought: fundamental and technical analysis. A fundamental approach is more focused on the intrinsic aspects of a company or venture, including its balance sheet, market potential, earnings ratio, debt levels, and other aspects, to find whether it’s undervalued or overvalued at the market price. Technical analysis puts all of that to the side and instead focuses on forecasting future price movements by looking at past market behavior, chart patterns, and historical trends. While the precise application of these two competing approaches isn’t the topic of this article, it’s worth mentioning that fundamental analysis tends to suit a long-term approach to investing, while technical analysis is more prevalent among short-term traders, speculators, and the like.

A Hypothetical “Balanced” Portfolio

As for the actual distribution of your funds, much depends on your risk tolerance (mentioned earlier) as well as the amount of time you’re willing to put into investing. For someone who is still unsure of what kind of approach they’re looking for, a balanced, diversified portfolio distributed between a variety of cryptocurrency asset types is recommended. In such a case, 60–70% of one’s portfolio can be invested in mainstream tokens, such as Ethereum, Litecoin, Bitcoin Cash, and others, for stability. An argument can be made not to include Bitcoin in this list since its price has dropped significantly, although others will argue that this low price is a great time to pick up a deal. From there, 10–20% of your portfolio can be distributed to other altcoins, along with 5–10% going to any exciting ICOs in the marketplace. Even if only one or two of your ICOs become successful, the idea is that the returns from their successes will make up for the rest of the projects that fell through, similar to how venture capital funds operate. Since this will be the riskiest part of your portfolio, it’s important not to allocate too much of your funds to this area. (A quick numeric sketch of this split appears at the end of this post.)

Final Thoughts

However you want to distribute your portfolio is up to you, but regardless of your confidence levels, there are some important things to keep in mind. For one, never invest more than you are willing to lose. Secondly, keeping your portfolio diversified will ensure that one mistake or poor decision won’t ruin you. Lastly, never take anything for granted, whether it’s that cryptocurrencies will always be popular or that a token that exists today will continue to exist in the future. With a cautious, albeit well-researched, approach, investing in the cryptocurrency field should be a fruitful opportunity for many.
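As promised above, here is a quick back-of-envelope sketch of the hypothetical balanced portfolio. This is my own illustration, not from the original post: the midpoint of each suggested range and the $10,000 budget are assumptions made purely for the arithmetic.

```python
# Rough allocation math for the hypothetical "balanced" portfolio above.
# The budget and midpoint weights are assumed values for illustration only.
budget = 10_000  # hypothetical total in USD

allocation = {
    "mainstream tokens (ETH, LTC, BCH, ...)": 0.65,  # midpoint of 60-70%
    "other altcoins": 0.15,                          # midpoint of 10-20%
    "ICOs": 0.075,                                   # midpoint of 5-10%
}
# whatever is left over stays uninvested as a cash buffer
allocation["cash buffer"] = 1 - sum(allocation.values())

for bucket, weight in allocation.items():
    print(f"{bucket}: ${budget * weight:,.2f}")
```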
https://medium.com/blocktoken/how-to-build-a-crypto-portfolio-for-trading-and-investing-4a951ef6475a
['Genson C. Glier']
2018-08-14 10:29:02.580000+00:00
['Investing', 'Trading', 'Crypto', 'Kapitalized', 'Bitcoin']
Can we say a Chauffeur-driven luxury car is a good idea for renting?
A foodie always looks for new places and new delicacies. Every place has its own speciality in food, and often that is what attracts people to visit. Cameroon is especially known for its wildlife and beaches, but have you heard about its food? Cameroon has attracted tourists from all over the world, and the reason is obviously nature, so here we want to give you another reason to visit. Especially for the foodies, this blog will definitely make you pack your bags and travel to Cameroon. Explore the beauty of Cameroon and also its delicious food. Here we want to introduce you to some mouth-watering dishes from Cameroon. Also, get to know about the best car rental service in Cameroon that will take you to all the places that are famous for their food. Read till the end and explore the food of Cameroon with us.

Le Biberon, Bamenda

This underrated bistro hiding in plain sight at Foncha Junction deserves to be more popular than it is. Nicely crafted bamboo and wood hide this paradise that serves fish and chips, mushrooms, grilled chicken, and pepper soup. Here you can enjoy some of the best music you may hear anywhere in Cameroon. The most special thing about Le Biberon is that they grow their own food organically. They have set out small tables that look so beautiful, and for people who want to plan a date at a beautiful place, trust us, this is the perfect spot to spend time with your loved ones. If you go to Cameroon, you will need a car to explore, and Cars in Africa is one of the best rental car agencies to rent from. Cars in Africa (CIA) is a one-stop transportation solution with up-to-the-mark service for those who need a car on rent. Customers have the liberty to choose their favourite car, and since the portal is based in Africa, it is easy for customers there to get one.

Etok Koss, Yaounde

This place is on the outskirts of Yaounde and is known as an ideal setting, with cultivated ponds that grow fresh fish. Most importantly, the thatched huts and traditional setting make it the best place to enjoy fresh and delicious food with family and friends on a day out. Get to know the cars you can rent: the company has very sophisticated and comfortable cars such as BMWs, SUVs, Mercedes, Swifts, Sedans, Jeeps, Hyundais, etc. These and many more you can get from Cars in Africa. CIA always keeps offers that benefit its customers, and the company also offers a pick-up and drop-off option. CIA thinks so much about its customers that customers won’t have to refuel the cars when they bring them back.

Bois d’Ebene, Yaounde

This is an extremely beautiful place to go for a whole cultural experience: local dishes in a vibrant atmosphere accompanied by live music. They have grilled fish and shrimp with beef and sauce, with miondo and fried plantains, and whole meals like ndole, eru and gombo. Get the best food with soothing music by the live band. Even though the pandemic has disrupted the rental car business, Cars in Africa is there for you. As mentioned above, you can get your favourite car and enjoy your trip to the fullest. Even in this situation, Cars in Africa is trying hard to provide the best service to its customers. Just book your car at Cars in Africa and experience the best drive you have ever had.
https://medium.com/@carsinafrica0/can-we-say-a-chauffeur-driven-luxury-car-is-a-good-idea-for-renting-f561da089443
[]
2021-12-20 10:14:15.518000+00:00
['Car Rental', 'South Africa']
Limited number of plots still available as Cortez Community Garden enters second year
Abundance in the late summer garden at the Cortez Recreation Center. (March 18, 2019) CORTEZ, CO — Spring approaches and community gardeners in Cortez are ready to begin another season of growing at the Common Ground Cortez Community Garden, located on the southwest corner of the Cortez Recreation Center. The 2018 gardening season was a success, and with the expansion last spring adding 20 new garden plots, the garden was full of vegetables, flowers, and four generations of the Cortez community growing food together. Three garden plots are available to residents of Montezuma County for the 2019 season. They will be assigned on a first-come-first-served basis, so don’t hesitate to sign up if you’re looking for some sun, soil, water, tools, and a community to grow with. The plots might seem quiet for now, but the soil (and gardeners’ anticipation) has been building all winter. Colette Cummings and Read Brugger, two of the group’s gardeners, have started lettuce and onion seedlings, with tomatoes and peppers soon to follow. Brugger and Cummings are part of the group of volunteers that worked with the City of Cortez to establish the expanded garden in 2018. The seedlings will soon be featured in upcoming transplanting workshops at the community garden. Many will be planted in demonstration beds where they will eventually be harvested and donated to the Good Samaritan Center food pantry in Cortez. Last year, over 200 pounds of organically-grown vegetables, picked that same day, made their way to the food pantry. “Our clients look for the community garden’s produce and really like the idea that it is being grown right here in Cortez.” - Kristen Tworek, food pantry director. Garden members have been meeting regularly over the winter, and have plans for 2019 to be even bigger and better than last year. Priorities include adding more soil to the raised beds and creating both a children’s garden and a container garden, as well as more community-wide events, which will include educational gardening workshops throughout the season. The volunteer-run community garden is part of a larger effort in Montezuma and Dolores Counties to address food insecurity in the Four Corners. Access to affordable and healthy food reverses the trend toward diets of additives, fast food, and crops bred for ease of transportation and packaging rather than nutrition. Common Ground’s vision is for a diverse and vibrant network of community gardens that inspires people to be self-reliant and build resilient communities by creating spaces for community members to help each other grow healthy food. Brugger is coordinating the fundraising that will make these ideas a reality. He cites recurring donations from Tri-State Generation and Transmission, First National Bank Cortez, Dolores State Bank and the Mesa Verde Garden Club as “examples of the broad support that community gardening has in our business community.” Slavens True Value has once again supported the project with discounts on building materials, and Sprinkler Pros will assist gardeners in upgrading their water systems. A family plot (144 sq ft) and two individual plots (64 sq ft) are ready and waiting for residents of Montezuma County to sign up and start planting. Plots are available on a first-come-first-served basis. For more information on how to apply for a plot at the Common Ground Cortez Community Garden, or to find out how you can get involved, go to www.commongroundcortez.org or email [email protected].
The members of the community garden welcome visitors to the garden. The gate is always open.
https://medium.com/common-ground-cortez/limited-space-still-available-as-cortez-rec-center-community-garden-enters-second-year-5049b63d0dc9
['Kirbi Vaughn']
2019-03-18 17:14:22.171000+00:00
['Local Food System', 'Food Justice', 'Gardening', 'Health', 'Community Engagement']
I Love External Reads On My Medium Stories
Let me explain. Writing on Medium is a fun, fulfilling journey for me. I get to put my thoughts into words and immortalise them on the internet for people to read. I also earn a few dollars in the process, as has been the case so far, which is cool. A lot of Medium writers prefer internal views and reads to external ones because, with their stories behind the Medium Partner Program paywall, they earn from them. Writing takes a lot of time and energy. A lot of writers on here do it on the side, second to their other income-generating projects, but lots of writers have it as their core source of income, and it is fulfilling if their efforts get rewarded monetarily, as that is one of their aims. Even though I obviously prefer many internal views and reads for this reason, I don’t mind external ones. I could even say I love external views, for several reasons.

Providing answers

If my story earns several external views, it is testament that it probably had some answers that other people were looking for. Those reading it externally didn’t just stumble upon it on their feed but went out actively looking for something like it and came across my story. It leaves a good feeling knowing that what you wrote was needed by someone out there, and that they found it, they read it, and it probably helped them or provided the answers they were seeking.

Relevance

If someone went out to look for a story on their own initiative, it means that the content of the story was relevant to their needs.

Meaningful and impactful

If someone was looking for your story, it means that it might have been meaningful and impactful, or at least I hope that is what readers take from my stories.

Not just about money

It’s not all about earnings for many writers on Medium. If that were so, we would have quit the minute our writing was rewarded with pennies. My most viewed and read stories have attracted more external views and reads than internal ones, meaning I have earned zero or little from my most read stories. When my story receives significantly more external than internal views and it still leaves me happy and contented, it proves to me that it’s not all about money for me. It’s about being read too, and that’s why many of us write in the first place.

New members

If someone likes your stories after reading them externally, they may want to subscribe for a Medium Membership in order to read more of your stories or stories from other writers. I read somewhere on Medium that this could translate to earnings for the Medium writer that helped ‘recruit’ a new member into Medium. The more paying members the merrier, right?

What’s your opinion on external reads on Medium?
https://medium.com/writers-blokke/i-love-external-reads-on-my-medium-stories-8ea9b45cacbb
['Gal Mux']
2020-12-19 10:43:09.202000+00:00
['Writers Life', 'Stats', 'Writers On Writing', 'Views', 'Writers On Medium']
Hands-On Blockchain Smart Contracts
Introduction This article assumes the reader has a basic understanding of Blockchain protocols and platforms like “Hyperledger” and “Ethereum”. The article focuses on understanding Smart Contracts and walks through the basics of writing a simple Smart Contract. We will use a Gift Card example to understand and create a Smart Contract. Think of a Smart Contract as defining a Class in programming languages like Java or Python. When we define a Class it has attributes and functions/methods. When an object is created from that Class it has specific attribute values, which together are called the state of the Object. Functions are used to read or update the state of the Object. Now, let’s apply the same analogy to Smart Contracts and think of them as a Class. When a Smart Contract is deployed to a Blockchain platform, it is as if one unique Object is created as part of that transaction. From that point onward, any read or update to the Smart Contract Object is recorded as a unique transaction on the Blockchain platform. Each Blockchain transaction is identified by a unique cryptographic hash value and is immutable. Simple Gift Card Scenario Consider a fake company “GC Inc” issuing Gift Cards in denominations of $50. When a Gift Card is issued, a Smart Contract is deployed as a Block on a Blockchain platform like “Hyperledger” or “Ethereum”. Every transaction conducted on the Gift Card is recorded as a new Blockchain transaction. Let’s assume a gift card of $50 with the code “ABC-PQR-XYZ” is issued to a customer on 03/31/2020. The customer spends the $50 in the following four transactions: $20 on 04/01/2020, $10 on 04/15/2020, $5 on 04/30/2020, $15 on 05/04/2020. This will create the chain of Blocks (Transactions) shown in the diagram below. Define a Smart Contract The Smart Contract for the Simple Gift Card Scenario will have the following attributes and functions: code: attribute for the unique code associated with the Gift Card. balance: attribute for the balance left on the Gift Card. transactions: attribute for the collection of transactions recorded on the Gift Card. new(balance, code): constructor to create a new object of the Contract. For this example the values will be 50 and “ABC-PQR-XYZ”. redeem(amount, date): function to redeem the Gift Card for the supplied amount and date. This function will throw an error if the balance is less than the redeem amount. balance(): function to return the current balance on the Gift Card. transactions(): function to return all transactions recorded on the Gift Card. Blockchain transactions need a digital identity for “GC Inc” and the “Consumer”. For this example, we will not focus on the digital identity part of the Blockchain framework. Write a Smart Contract using Solidity “Solidity” and “Vyper” are two popular languages for writing Blockchain Smart Contracts.
For this example, let’s use “Solidity”. A contract in Solidity will look something like the one below:

pragma solidity >=0.4.22 <0.7.0;
pragma experimental ABIEncoderV2;

/**
 * Smart Contract for Gift Card issued by GC Inc
 */
contract GCIncGiftCardContract {

    uint256 balance;        // Gift Card balance
    string code;            // Gift Card code
    string[] transactions;  // Transactions recorded on the Gift Card

    /**
     * Constructor to issue a new Gift Card
     */
    constructor(uint256 _balance, string memory _code) public {
        balance = _balance;
        code = _code;
    }

    /**
     * Redeem the Gift Card for the given amount and record the transaction
     */
    function redeem(uint256 amount, string memory description) public {
        assert(balance >= amount); // throw an error if the balance is insufficient
        balance = balance - amount;
        transactions.push(description);
    }

    /**
     * Return the current balance on the Gift Card
     */
    function getBalance() public view returns (uint256) {
        return balance;
    }

    /**
     * Return all transactions recorded on the Gift Card
     */
    function gettransactions() public view returns (string[] memory) {
        return transactions;
    }
}

Deploy/Test the Smart Contract with Remix IDE Remix provides a web-based IDE to build, deploy and test Smart Contracts. Open the URL https://remix.ethereum.org/ in a Chrome or Firefox browser. Step 1: Select the “Solidity” environment on the home page of Remix. Step 2: Copy the above code into a new file “GCIncGiftCardContract.sol”. Step 3: Compile the Contract — “GCIncGiftCardContract.sol”. Step 4: Deploy the Contract — “GCIncGiftCardContract.sol” — with values “_balance=50” and “_code=ABC-PQR-XYZ”. Select default values for the other parameters on this screen. Step 5: Add the four “redeem” transactions to the Contract as listed in the section “Simple Gift Card Scenario”. Step 6: Check the Balance and Transactions. The balance will be 0 and the 4 transactions will be listed. If we try to add any more “redeem” transactions, they should fail, as the balance is now zero. Final Thoughts This is a very simple use case to explain the basic concepts of Blockchain Smart Contracts. This is a rapidly expanding technology with a large community of developers supporting the movement. We all know the virtual currency, Bitcoin, is the most popular use of this technology. We are starting to see other real-life business applications of this technology as it continues to grow. “Hyperledger” and “Ethereum” are the two most popular Blockchain platforms with extensive tooling and industry-wide support. I think this is an exciting time to be part of this technology! Enjoy! Disclaimer: This is a personal blog. The opinions expressed here are my own and not those of my current or any previous employers.
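Remix needs no code beyond the contract itself, but if you would rather script the same deploy-and-test flow, here is a minimal sketch using ethers.js (v5), offered as an illustration rather than part of the original walkthrough. The RPC_URL and PRIVATE_KEY environment variables and the GCIncGiftCardContract.json artifact are placeholder assumptions; they stand in for your own node endpoint, key, and compiler output.

// Illustrative sketch only: deploy and exercise the gift card contract with ethers.js v5.
// RPC_URL, PRIVATE_KEY and the JSON artifact are hypothetical placeholders.
import { ethers } from "ethers";
import { abi, bytecode } from "./GCIncGiftCardContract.json";

async function main() {
  const provider = new ethers.providers.JsonRpcProvider(process.env.RPC_URL);
  const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, provider);

  // Deploy with the scenario's initial state: $50 balance, code "ABC-PQR-XYZ".
  const factory = new ethers.ContractFactory(abi, bytecode, wallet);
  const card = await factory.deploy(50, "ABC-PQR-XYZ");
  await card.deployed();

  // Replay the four redemptions; each one becomes its own immutable transaction.
  const redemptions: Array<[number, string]> = [
    [20, "04/01/2020"], [10, "04/15/2020"], [5, "04/30/2020"], [15, "05/04/2020"],
  ];
  for (const [amount, date] of redemptions) {
    const tx = await card.redeem(amount, date);
    await tx.wait();
  }

  console.log("balance:", (await card.getBalance()).toString()); // "0"
  console.log("transactions:", await card.gettransactions());    // four entries

  // A fifth redemption would revert, since the balance is now zero.
}

main().catch(console.error);

Run this against the same network you used in Remix and you should see the same end state the Remix steps describe: a zero balance and four recorded transactions.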
https://codeburst.io/hands-on-blockchain-smart-contracts-8c40695e89ee
['Rajesh Shah']
2021-02-16 00:25:43.671000+00:00
['Gift Cards', 'Remix', 'Blockchain', 'Smart Contracts', 'Solidity']
Abortion, pro-life or pro-choice?
Abortion is a very touchy subject for many people. There is no single correct answer on whether it should be legal or not around the world, but everyone has their own opinion. I am a strong pro-life supporter. By the end of this, I am going to show you that abortion should be illegal because it is murder, because there are always consequences for your actions, and because, while some might say it should be legal in cases of rape, I can argue that this is not a sufficient reason. An abortion is a medical procedure done by a trained medical professional to end a pregnancy. It uses medicine or surgery to remove the embryo or fetus and placenta from the uterus. Abortion is killing an innocent child that has done absolutely nothing. “…the murder of the defenseless child.” (Catholic Review) The child has done nothing, and everyone deserves an equal chance at life. The child has no say in what is done with its life. No matter what anyone says, it has DNA. “The DNA it has is not only human DNA but the DNA of an individual distinct from that of either parent.” (Catholic Review) The child already has DNA from both parents, and that, therefore, makes it a child. At the point that it has DNA, it is murder of an innocent child. There are always going to be consequences for the mistakes we make. If someone wants to have sex and not be careful about it, they can end up with a child. There are many ways to prevent pregnancies. “Make sure you’re using a condom correctly.” (Healthline) Condoms are just one of many ways to avoid getting pregnant. Other ways are the multiple different kinds of birth control and the Plan B pill you can take the next morning if you forgot protection. There is absolutely no reason for innocent children to have to face the consequences of their parents’ actions. Some people are all for abortion and try to say that women who are raped should not have to deal with that. The child in that case is just as innocent as the mother and does not deserve to die. Only a small percentage of abortions are from rapes. “Just 1% of women obtain an abortion because they became pregnant through rape.” (Dastagir) The fact that only 1% of all abortions are due to rape shows that it is no excuse to keep abortion legal. Most abortions happen because the parents are not ready, but in that case the child can be put up for adoption. Everyone deserves an equal chance at life. The child’s heart beats after 16–25 days and the brain works after 40 days; it takes at least 14 days to know if you’re pregnant, so by the time someone gets to the abortion center the child or fetus already has a heartbeat. My friend was born at 26 weeks and is still living a wonderful life. People can still get abortions at 26 weeks pregnant; that is a child being killed, not a fetus.
https://medium.com/@brownjenna61/abortion-pro-life-or-pro-choice-db4ae1283de5
[]
2020-12-04 04:16:52.446000+00:00
['Abortion Rights', 'Pro Life', 'Abortion']
Home CCTV DVR System
Home CCTV DVR System Published by Handy Work on 24 December 2020 What are the most important tips for CCTV home systems? The system has to rely on a highly efficient monitoring company which can respond quickly to an alarm signal from anywhere. Wireless cameras are also very affordable and give you the ability to get the entire picture from a remote location. You should also choose a CCTV camera that can be hidden in its surroundings and will not attract the attention of intruders. A popular location for hidden CCTV cameras is the DVR system itself, where they are completely unseen as they look just like a computer outlet port rather than a camera. What are the different types of DVR cameras? There are two camera types which you need to be aware of: composite and component. They offer different visual quality, and each has its own advantages and disadvantages. Component cameras capture better image resolution than composite ones, but they also consume more bandwidth in comparison. A better choice for most CCTV setups would be composite cameras, as they offer a standard video resolution that is good enough to give a clear picture of what’s going on in your property. Last but not least, if you want to make an investment that will improve your personal or business security at home or on the job, then there’s no question that installing a CCTV system is one of the best decisions you can make!
https://medium.com/@handywork/home-cctv-dvr-system-fa3071a78637
['Handy Work']
2020-12-27 01:13:01.619000+00:00
['Home', 'Cctv', 'Handyman']
Ellie and the Snow Ball experiment — the book
https://medium.com/for-the-love-of-steam/ellie-and-the-snow-ball-experiment-the-book-ba349cff0618
[]
2020-12-21 08:03:27.441000+00:00
['Kids', 'Books', 'Reading', 'Steam']
Emergent behaviour and dashboarding in Instana
Introduction They say that the game of rugby was invented in 1823 when a young boy, William Webb Ellis, studying at Rugby Public School, “with a fine disregard for the rules of football as played in his time, first took the ball in his arms and ran with it.” (This story is apocryphal and the rules of football were probably not well regarded, in any case. But why let mere facts get in the way of a good story with an excellent lesson?) The game of basketball was certainly invented in 1891 by James Naismith, who wanted to exercise his students with a strenuous indoor game. He adapted a game from his youth called “Duck on a Rock” and the rest, as they say, is history. These are examples of emergent behaviour or emergent game play. Essentially, emergent behaviour in a game is when the game environment is flexible enough to allow players to choose their own directions and challenges beyond those which the designers of the game had in mind. The myriad varieties of card games, all using the same 52 cards, showcase just how flexible that environment is. On the other hand, a checkers board and checkers tokens don’t have the same flexibility, and there are far fewer varieties of checkers. Some types of computer games are explicitly called “sandbox games” where gamers have the flexibility and the tools to find creative ways to fulfill goals — no matter what the original designers may have thought was the best way to do something. Emergent behaviour in operational tools is the equivalent of this sandbox, where sysadmins, operators, DevOps and Site Reliability Engineers can customize their tools to solve the problems that they have in front of them, not merely the problems the tools were originally designed to solve. Emergent behaviour — some clever examples In this article I’ll discuss how IBM Observability by Instana APM encourages emergent behaviour, specifically in the design, development and usage of dashboards. One of my favourite examples of emergent behaviour comes from a feature called Maintenance Mode in IBM Cloud Pak® for Watson AIOps/Event Manager (previously Netcool Operations Insight/NOI). The designed purpose of this feature is to mark specific computers as “in maintenance”. This means that if a problem occurred during this period, then the system would consider it a legitimate, even expected, issue and therefore not escalate the problem for human attention or try to resolve it automatically. Since the computer was in maintenance, any problems which arose in this period were expected. My client, a bank, used this mechanism in an ingenious way. Part of its responsibilities was to track the opening of certain safes. So the bank hooked up the safes to emit a message whenever they were opened, and put the safes in maintenance mode during regular working hours. This meant that the error message “the safe has been opened” would be automatically quashed during working hours and an alarm would be raised out of working hours. Were there other ways to solve this problem? Of course. But these engineers found it easier to repurpose a tool they were familiar with than to develop a new solution using other tools. Instana infrastructure dashboard — You can build a diagram of your datacenter/cloud environment with the skillful use of tags and labels. Another example is that of a public transport company which used IBM monitoring tools, not to track the performance of IT transactions, but rather the speed of escalators in underground stations. 
They wanted to track anomalies in escalator performance to be able to send maintenance workers to fix the problem. My colleague Matthew Duggan has used the topology mapping component of Watson AIOps (originally designed to map the relationships between IT components) to map out London Transport and the relationships between Super Mario characters!
https://medium.com/ibm-cloud/emergent-behaviour-and-dashboarding-in-instana-aff9af9e6cd9
['Robert Barron']
2021-09-09 12:13:27.267000+00:00
['Sre', 'IBM', 'Observability', 'API', 'DevOps']
What exactly is Satta King?
Satta King is primarily famous in India. It is classified as a lottery-based game and falls under “Gambling.” Satta Matka is the actual name of the so-called “Satta” game. Nowadays, it is only referred to as “Satta King” in its abbreviated form. Satta Matka has a history related to the name Matka (POT). Previously, someone would draw a number from a POT, and if the person won, he was referred to as the “Satta King.” People began to know the name Satta King after this game became popular. Satta King Facts You May Not Be Aware Of At the start, choose a number between one and one hundred. These numbers are also divided into the Disawar, Gali, Jodi and Delhi Satta King games. We’ll go over the main game. So, you choose a number between one and one hundred, and then you talk to your bookie and tell them your desired number. The bookie will take note of that number and the amount associated with it, and the process is now complete with your bookie. The next day, the Satta King result will open at a specific time, and if you are lucky enough to have chosen the winning number, you will win 90 times the money you invested with your bookie. So now you understand how the Delhi Satta King game works. And, most importantly, Satta King is all about your luck. While this Online Satta King game is based on luck and the winner receives 90 times the amount, it creates a desire for money in people’s hearts. People who study this game and play it frequently become aware of the changing numbers and thus win the game more easily. People who often play the Delhi Satta King game may also guess the correct number. Satta King offers the fastest game updates because results open at specific times. The Most Accurate Way to Predict a Satta Number There are many websites that claim to teach how to predict or anticipate the Satta King number. Agents can assist you in determining the correct Satta number, and site agents will occasionally provide you with educated guesses. Please continue reading our section below for more information on how it works. Satta King numbers are predicted in a variety of ways. People guess which number will open on which day, and they bet money on that number. Furthermore, numerous Satta King experts with many years of experience make predictions about which number will open in which game on a given day. Finally We Can Say: A few people get the Delhi Satta number by guessing and betting money on it. Some people go to tantric babas to discover the Satta number and place money on the number the babas give; the babas take money from them to tell the number, while the babas themselves are unaware of which number will open in Satta Matka Online today. The babas also narrow the numbers down and leave people speechless. On occasion, however, the number mentioned by the babas does open. Some people keep a Satta King record for a long time and then use it to guess the Satta number.
https://medium.com/@priyankarawats/what-exactly-is-satta-king-9097b2b42a76
['Priyanka Rawat']
2021-12-27 07:22:11.329000+00:00
['Gambling', 'Satta Matka', 'Online Gambling', 'Satta King Disawar', 'Satta King']
What Makes a Perfect Christmas Gift
Everyone loves gifts. Receiving gifts is one of the original Five Love Languages. Giving gifts has to be love language number six. Giving and receiving gifts are a significant part of the Christmas season. Have you searched for the top ten Christmas gifts? The search reveals a few perfect gifts for the Christmas season. The hot items this year include:
https://medium.com/change-your-mind/what-makes-a-perfect-christmas-gift-541d1c8b7658
['Thomas E. Mcdaniels']
2020-12-21 12:21:37.744000+00:00
['Generosity', 'Christmas', 'Family', 'Happiness', 'Gifts']
8 Smart Kitchen Storage Tips for Your New Home
Can’t seem to find your kitchen tools? Always having a hard time in the kitchen? Too much clutter everywhere? If these are your problems, it is time to get inspired and start organizing. The kitchen should be the place where healthy meals are prepared and where the family comes together. To help you organize, here are smart kitchen storage tips for your home: 1. Island turned into Storage Possibility A kitchen island is not only good as a table but can be transformed into an extra helping hand in the kitchen. Customise your island to gain additional storage space for your clutter. https://www.bhg.com/kitchen/storage/organization/kitchen-storage-solutions/?slide=slide_5972909321001#slide_5972909321001 The takeaway from this DIYer’s clever idea is the use of a pegboard mounted to one side of the island for accessible and visible kitchen tools. Good to note: this pegboard storage idea can also be applied to a free wall for hanging your kitchen tools. 2. Custom Built Spaces for Appliances Image source: Better Homes & Garden If you happen to be undergoing a kitchen renovation, it is good to note what appliances you’ll be using in the new kitchen. The storage units for the appliances can be custom built to their sizes so no vertical space is wasted. Also, to avoid going back and forth with plugging in your appliances, a nearby outlet would be a convenient idea; consider ventilation and heat clearance, too, for appliances like a microwave or toaster. 3. Consider your Reach Capabilities No shaming here, but it would be ideal to tailor-make your kitchen storage space according to your reaching abilities, allowing you to reach all your essentials without a breakdown every time something is out of reach. 4. Magnetic Storage Image source: Pinterest / Into DIY Most kitchen tools are made of stainless steel, which can be magnetic. Cleverly allot a space or wall where a magnetic strip can be attached and have your tools hang right there: visible and accessible, as every kitchen should be. However, there are stainless steel kitchen tools that are non-magnetic due to their modified physical structure. No worries; you can use tip number 1 for those. 5. Produce Into Bins Image source: Watchtower Interiors Ditch the countertop produce bowls that hide small veggies and let them rot. Bins separate your produce according to type and enable you to keep track of your supplies. 6. Side Note to Remind You Image source: Better Homes & Garden Be like a storage manager with your own supplies. Have a dedicated note board attached to the door of a cabinet to monitor your supplies. This way you’ll avoid spoilt goods and keep track of what supplies need to be replenished. 7. Fake Drawer Fronts Under Sink turned Storage Image source: Domestically Speaking Fake drawer fronts are good for aesthetics but can also serve as storage for small items. Turn these drawer fronts into tilt-open storage for your sponges and dishwashing supplies. 8. Embrace Modernity Image source: Cattleya Kitchens / Comfort Kitchens — Top 1466 Kitchens are now designed for convenience, and modern-day kitchens are equipped with corner units, larders, drawers within drawers, and more, allowing a kitchen experience without the hassle of hunting for kitchen tools.
https://medium.com/@patriciopantheleon/8-smart-kitchen-storage-tips-for-your-new-home-3feba9879fb9
['Patricio Pantheleon']
2019-04-12 04:50:20.014000+00:00
['Home Decor', 'Kitchen Storage Solutions', 'Kitchen Storage Ideas', 'Kitchen']
Leap Into These Interesting Leap Year Facts
Leap Into These Interesting Leap Year Facts By Adam Barrett “Today is an ephemeral ghost” — Vera Nazarian “Might as well jump” — “Diamond” David Lee Roth If you spend enough time talking to people, eventually, somebody is going to mention how 2020 feels like it’s moving at a snail’s pace. January crawled, folks are predicting March will come in like a lion and leave… also like a lion, but February? Well, this month seems to be zipping along, which is especially strange, considering it’s technically longer than it’s been since 2016. That’s right, 2020 is a leap year — something that only happens once every four (or so) spins around the sun. Since this isn’t an every day (or even every year) occurrence, it’s a great chance to share a few facts about leap year with you! First Things First — What is a Leap Year Anyway? A leap year is an event that occurs on the Gregorian calendar (aka, the calendar on your phone, your computer, and your wall). It involves adding an additional day (known as leap day) to February, bringing it from 28 days to 29. This isn’t just some fun way to spice up the calendar every few years — a leap year actually serves an essential purpose. There are 365 days in the regular Gregorian calendar year, but it takes 365.24 days (365 days, five hours, 48 minutes, and 45 seconds to be precise) for Earth to complete a solar orbit (known as a tropical year). This means the Gregorian calendar year starts about six hours before a tropical year. While this may not seem like a big difference, scope creep is real in all aspects of life — even astronomical ones. Without adding an extra day every four years to make up this difference, the calendar year would begin drifting out of sync with the tropical year. The seasons would shift, April showers would no longer bring May flowers, and — eventually — the Northern Hemisphere would be gearing up for winter in the middle of July. In short… When is a Leap Year Not a Leap Year? Okay, so, every four years, without fail, we have a leap year in the calendar. Easy peasy, right? Wrong! You’re actually thinking of how things worked with the Julian calendar (don’t you hate when that happens?). The Julian calendar, named after Roman general Julius Caesar, introduced leap days every four years to better align the calendar year with the tropical year. But, eventually, this extra day was found to be too much of a course correction. As a result, we moved from the Julian calendar to the Gregorian calendar, introduced by Pope Gregory XIII. The major difference? Instead of a leap day every four years, one occurs every year that is divisible by four, except for those divisible by 100 and not divisible by 400. So, the year 2000? Leap year. The year 1600? Leap year. 1700, 1800, and 1900? Not leap years. Feeling confused? Then you’re going to love this — on occasion, scientists have also been known to add “leap seconds” into the calendar. In fact, since 1972, there have been 27 extra seconds added to further align the calendar year with the tropical year. Luckily, we don’t make the calendars, we just follow them! Leap Year Birthdays About 0.07% of the world’s population — roughly 4.8 million people — celebrate their birthdays on February 29. A few of them you may have even heard of! Ja Rule was born February 29, 1976 (happy 11th birthday, Ja Rule), the same day as famous character actor Dennis Farina (1944), Canadian hockey player Cam Ward (1984), and singer/actress Dinah Shore (1916).
And if you’re a true crime fanatic, it may keep you up at night to know “The Night Stalker” Richard Ramirez (1960) and Aileen Wuornos (1956) were also born on February 29. So, what are the odds of being born on leap day? Honestly, it’s not as much of a long shot as you might think — one in 1461. That said, according to Guinness World Records, only one family has ever had three consecutive generations born on leap day. The line begins with Peter Anthony Keogh, who was born on February 29, 1940. His son, Peter Eric, followed on leap day in 1964. Then, granddaughter Bethany Wealth arrived in 1996. Perhaps even more interestingly, a woman named Karin Henriksen gave birth to children on three consecutive leap days — a daughter in 1960, followed by sons in 1964 and 1968. Coincidences? The most impressive attempts at family planning ever? Who can say? Leap Year Events In addition to births, a few interesting things have happened throughout the (leap) years on February 29. In 1692, the first arrest warrants for the Salem witch trials were issued. In 1940, Hattie McDaniel became the first African American to win an Academy Award for her role in Gone with the Wind. In 1980, 51-year-old hockey legend Gordie Howe scored his 800th career goal as a member of the Hartford Whalers. And in France, La Bougie du Sapeur has been published every leap day since 1980 — making it the world’s least-frequently published newspaper. One event that folks in Greece try to avoid during a leap year is getting married. That’s because they believe tying the knot any time during a leap year — not just on February 29 — is bad luck. The same can be said for Italians, who have an extremely light-hearted expression summing up their leap-year feelings, “Anno bisesto, anno funesto” (leap year, doom year). A Few Final Fast Facts For Leap Year Here are a few more leap year facts to think about. Let’s start with the bad news, if you’re a salaried employee, you’re probably not getting paid for your extra day. But it could be worse, if you were in jail over a leap year, you’d have to serve an additional day behind bars. On the flip side, you’ll have a free night’s stay in your apartment on a leap year — renters aren’t charged for the extra calendar day. And if you find yourself traveling on February 29, we can’t think of a better spot to be than Anthony, Texas — the Leap Capital of the World! Every leap day, this West Texas town hosts an officially sponsored leap year birthday parade and festival. And there you have it, everything you could ever possibly need to know about leap year! Now, if you’re looking for something to do on your extra day (a Saturday, no less), we can help. Get caught up on our other great re:VERB articles or listen to the latest episode of the re:VERB Podcast. They’re sure to have you jumpi— er, leaping for joy! Adam is a Digital Copywriter and Content Strategist with VERB Interactive — a leader in digital marketing, specializing in solutions for the travel and hospitality industry. Find out more at www.verbinteractive.com.
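If you want to check the Gregorian rule for yourself (divisible by four, except for century years not divisible by 400), it fits in a few lines of code. Here is a quick TypeScript sketch; the function name is our own, not an official anything.

// Gregorian leap year rule: divisible by 4, except century years not divisible by 400.
function isLeapYear(year: number): boolean {
  return year % 4 === 0 && (year % 100 !== 0 || year % 400 === 0);
}

// Spot-check the years mentioned above:
for (const y of [1600, 1700, 1800, 1900, 2000, 2020]) {
  console.log(y, isLeapYear(y)); // true, false, false, false, true, true
}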
https://medium.com/re-verb/leap-into-these-interesting-leap-year-facts-97617752f676
['Verb Interactive']
2020-02-28 14:04:54.100000+00:00
['Facts', 'Fun', 'Marketing', 'Interesting', 'Digital Marketing']
PANEL: Data Collaboration in the Age of COVID at My Data Conference 2020
On Thursday, December 10, 2020, The GovLab hosted a panel discussion on “Data Collaboration in the Age of COVID-19” at the MyData Conference 2020. Facilitated by Stefaan Verhulst (Co-Founder and Chief R&D at The GovLab), panelists Kelly Jin (City of New York), Claudia Juench (Cloudera Foundation), Dave Greene (Microsoft), and JoAnn Stonier (Mastercard) discussed ways the COVID-19 response could inform efforts to foster cross-sector data reuse in a responsible, systematic, and sustainable way. Given the MyData Conference’s focus on “strengthening digital human rights while opening new opportunities for business to develop innovative, personal data-based services,” the panelists discussed a range of issues including institutionalizing data stewardship, the need to build a business case for sharing, and the barriers to collaboration. Six key topics included: Lack of Readiness at the Outset of the Crisis: Each of the panelists discussed how unprepared organizations were for the data reuse necessitated by the pandemic. Though some organizations were developing capacity before the crisis, none could have predicted the speed and scale at which they would need to transform. “When COVID happened, every organization transformed overnight into a data organization,” said Mastercard’s JoAnn Stonier. This lack of preparedness points to a need to build data infrastructure and an ecosystem to tackle dynamic threats. Value of Aligning Around a Common Purpose: The panelists also discussed the need to move quickly into action given the size of the crisis. To do this, Microsoft’s David Green talked about the importance of focusing on an actionable problem, a purpose to motivate data collaboration. “Aligning around a purpose,” Green said, “helped us to plan our strategies for [data] extraction and reuse.” Importance of Building Capacity: For Claudia Juech, CEO of the nonprofit Cloudera Foundation, a major issue amid the pandemic was a lack of pre-existing data capacity. “The capacity to manage data was very, very rare in the nonprofit space at the start of the pandemic [and] it has a lot to do with funding,” said Claudia. “Who are the funders out there willing to help organizations set up practices and develop a data culture?” A lack of data skills can prove crippling amid a crisis. Need to Develop a Business Case: These remarks echoed a similar sentiment from JoAnn Stonier. Though data collaboration can produce value for the public, the sustainability of a project often depends on the ability of its sponsors to connect that to some business case. JoAnn noted, “The questions [around reuse] are ‘what’s the business case’ and ‘why prioritize this over everything else?’” Role of Data Stewards: Meanwhile, in her remarks on the struggles facing public-facing institutions in the early days of the crisis, Kelly Jin, Chief Analytics Officer of the City of New York, spoke about the value of data stewards in facilitating data reuse projects. Companies with responsible data leaders empowered to seek out and respond to requests for data reuse tended to be easier to engage, especially when the data steward “understood data, policy, and people.” Need to Center Privacy and Data Responsibility: Finally, all the participants spoke about the importance of respecting the rights of data subjects and ensuring projects were responsible by design. Part of this work, as Kelly noted, involved improving the accessibility of projects so the public can have a say in how data is used. The data ecosystem needs to be built for everyone. Another part, as JoAnn noted, is having principles around data sharing at the outset that respect the need for privacy and security. You can learn more about this panel by visiting the MyData Conference site for context and materials. Further information, including quotes from panelists, can be found in the Twitter thread here.
https://medium.com/data-stewards-network/panel-data-collaboration-in-the-age-of-covid-at-my-data-conference-2020-7278116e13a7
['Andrew J. Zahuranec']
2020-12-18 17:11:31.013000+00:00
['Events', 'Covid 19', 'Data Steward', 'Data Collaboration', 'Data']
My Colorful World In Black & White
My Colorful World In Black & White Florida may well be the most colorful state in the U.S. All year long, we’re immersed in blues, greens, magentas, purples, yellows, reds, and oranges. Each color has a dozen or more shades. Each sunset or sunrise is an artist’s canvas. Each flower is a masterpiece. In winter, when other places are brown, gray, and beige, we’re drenched in dazzling tropical tints. With the first day of winter a week away, I recalled the cold, bland landscape of my childhood home in Virginia and wondered how my current world would look if transported north in December. So, I created that world. This was an interesting experiment that produced some lovely b&w photos. But, when I was done, I realized that nothing can replace the glorious colors of my Florida. What would your world look like in black and white?
https://medium.com/snap-shots/my-colorful-world-in-black-white-10e6dde72df3
[]
2020-12-18 13:58:54.347000+00:00
['Winter', 'Florida', 'Blackandwhitephotography', 'Photography', 'Colors']
These races will shape what the U.S. elections mean for climate progress
These races will shape what the U.S. elections mean for climate progress By Mark Hertsgaard What follows are not candidate endorsements. Rather, this nonpartisan guide aims to inform voters’ choices, help journalists decide what races to follow, and explore what the 2020 elections could portend for climate action in the United States in 2021 and beyond. Will the White House turn green? President Trump has expressed skepticism about the scientific consensus that human activity is driving climate change, and has rolled back federal regulations aimed at mitigating the effects of climate change. Whether the White House changes hands is the most important climate question of the 2020 elections. President Donald Trump rejects climate science, is withdrawing the United States from the Paris Agreement, and has accelerated fossil fuel development. His climate policy seems to be, as he tweeted in January when rejecting a US Army Corps of Engineers proposal to protect New York City from storm surges, “Get your mops and buckets ready.” Joe Biden, who started the 2020 campaign with a climate position so weak that activists gave it an “F,” called Trump a “climate arsonist” during California’s recent wildfires. Biden backs a $2 trillion plan to create millions of jobs while slashing emissions — a Green New Deal in all but name. Equally striking, his running mate, California Senator Kamala Harris, has endorsed phasing out fossil fuel production — a politically explosive scientific imperative. Democratic presidential candidate Joe Biden has proposed major spending to build renewable energy infrastructure, and pledged to put the U.S. on a path to zero carbon emissions by 2050. The race will be decided in a handful of battleground states, five of which already face grave climate dangers: Florida (hurricanes and sea-level rise), North Carolina (ditto), Texas (storms and drought), Michigan (floods), and Arizona (heat waves and drought). Public concern is rising in these states, but will that concern translate into votes? Will Democrats flip the Senate, and by enough to pass a Green New Deal? With Democrats all but certain to maintain their majority in the US House of Representatives, the Senate will determine whether a potential Biden administration can actually deliver climate progress. Democrats need to pick up three seats to flip the Senate if Biden wins, four if he doesn’t. But since aggressive climate policy is shunned by some Democrats, notably Joe Manchin of coal-dependent West Virginia, Democrats probably need to gain five or six Senate seats to pass a Green New Deal. Environmentalists, including the League of Conservation Voters, are targeting six Republicans who polls suggest are vulnerable: Steve Daines of Montana, who denies climate science; Martha McSally of Arizona; Thom Tillis of North Carolina; Susan Collins of Maine; Joni Ernst of Iowa (bankrolled by Charles Koch); and John James of Michigan (also a Koch beneficiary). Republican Senators are even at risk in conservative Kansas and Alaska. In both states, the Democratic candidates are physicians — not a bad credential amid a pandemic — who support climate action. In Kansas, Barbara Bollier faces an incumbent funded by Charles Koch. In Alaska, Al Gross urges a transition away from oil, though his openness to limited drilling in the Arctic National Wildlife Refuge dims his appeal to green groups.
He faces incumbent Republican Dan Sullivan, who has an 8 percent lifetime voting score from the League of Conservation Voters. In the House, environmentalists are working to elect these candidates, in one case over an establishment Democrat: Beth Doglio of Washington state; Georgette Gómez of California; Marie Newman of Illinois; Cameron Webb of Virginia; and Mike Siegel and Wendy Davis of Texas. We rightfully focus on federal climate policy, but climate action must also happen at the state and local level — and there are plenty of races and initiatives to pay attention to. Will local and state races advance climate progress? The climate hawks Under Democratic and Republican leadership alike, Washington has long been a graveyard for strong climate action. But governors can boost or block renewable energy; the Vermont and New Hampshire races are worth watching. Attorneys general can sue fossil fuel companies for lying about climate change; climate hawks are running for the top law enforcement seats in Montana and North Carolina. State legislatures can accelerate or delay climate progress, as the new Democratic majorities in Virginia have shown. Here, races to watch include Pennsylvania, North Carolina, and Colorado. The climate policy makers Perhaps the most powerful, and most overlooked, climate policy makers are public utility commissions. They control whether pipelines and other energy infrastructure gets built; they regulate whether electric utilities expand solar and energy efficiency or stick with the carbon-heavy status quo. Regulatory capture and outright corruption are not uncommon. A prime example is Arizona, where a former two-term commissioner known as the godfather of solar in the state is seeking a comeback. Bill Mundell argues that since Arizona law permits utilities to contribute to commissioners’ electoral campaigns, the companies can buy their own regulators, which may explain why super-sunny Arizona has so little installed solar capacity. In South Dakota, Remi Bald Eagle, a Native American US Army veteran, seeks a seat on the South Dakota Public Utilities Commission, which rules on the Standing Rock oil pipeline. And in what HuffPost called “the most important environmental race in the country,” Democrat Chrysta Castaneda, who favors phasing out oil production, is running for the Texas Railroad Commission, which despite its name decides what oil, gas, and electric companies in America’s leading petro-state can build. Will the influencers usher in a green new era? The uncounted The story that goes largely under-reported in every US election is how few Americans vote. In 2016, some 90 million, roughly four out of every 10 eligible voters, did not cast a ballot. Attorney Nathaniel Stinnett claims that 10 million of these nonvoters nevertheless identify as environmentalists: They support green policies, even donate to activist groups; they just don’t vote. Stinnett’s Environmental Voter Project works to awaken this sleeping giant. The sunrise movement Meanwhile, the young climate activists of the Sunrise Movement are already winning elections with an unabashedly Green New Deal message. More than any other group, Sunrise pushed the Green New Deal into the national political conversation, helping Representative Alexandria Ocasio-Cortez and Senator Ed Markey draft the eponymous congressional resolution. In 2020, Sunrise has helped Green New Deal champions defeat centrists in Democratic primaries, with Markey dealing Representative Joe Kennedy Jr.
the first defeat a Kennedy has ever suffered in a Massachusetts election. But can Sunrise also be successful against Republicans in the general elections this fall? The starpower And an intriguing wild card: celebrity firepower, grassroots activism, and big-bucks marketing have converged behind a campaign to get Latina mothers to vote climate in 2020. Latinos have long been the US demographic most concerned about climate change. Now, Vote Like A Madre aims to get 5 million Latina mothers in Florida, Texas, and Arizona to the polls. Jennifer Lopez, Salma Hayek, and Lin-Manuel Miranda are urging mothers to make a “pinky promise” to vote for their kids’ climate future in November. Turning out even a quarter of those 5 million voters, though no easy task, could swing the results in three states Trump must win to remain president, which brings us back to the first category, “Will the White House Turn Green?” https://stateimpact.npr.org/pennsylvania/2020/09/27/these-races-will-shape-what-the-u-s-elections-mean-for-climate-progress/
https://medium.com/creosote-partners-press-room/these-races-will-shape-what-the-u-s-elections-mean-for-climate-progress-a0d3e53e6d74
['Creosote Partners']
2020-10-13 14:27:44.494000+00:00
['Environmentalist', 'Climate Progress', 'US Elections']
[Watch/Free]**Netherlands vs England 2019 Live Stream : How TO Watch Soccer Free TV Game..
[Watch/Free] Netherlands vs England 2019 Live Stream: How to Watch the Soccer Game Free on TV. Netherlands vs England — Preview, Live Match | 06 Jun 2019. UEFA Nations League match Netherlands vs England (6 Jun 2019): preview and stats followed by live commentary and video highlights. England and Netherlands go head-to-head in the UEFA Nations League semi-finals; the match kicks off at 7:45pm on Thursday 6th June 2019, with TV channel, live stream, kick-off time, team news, predictions, and odds coverage available from BBC Sport and other outlets.
https://medium.com/@mdswhankhanbpcboos/watch-free-netherlands-vs-england-2019-live-stream-how-to-watch-soccer-free-tv-game-a4edc0d10dac
[]
2019-06-06 07:07:36.726000+00:00
['Soccer', 'Hate', 'Live', 'Live Streaming']
Announcing the 2021 Cantrip Theme Build-Off (Earn some Nano!)
If you haven’t used it before, Cantrip is a website building tool for very basic web pages. As Cantrip expands in features, our development has become heavily focused on the experience of our website builder and dashboard. As a result, the themes we offer on our websites are still rather lacking at the moment. We wanted to open up this part of Cantrip to our talented community and, in turn, provide an additional way to earn cryptocurrency! Convert your CSS skills into some precious Nano Now and through the month of January, each theme that is approved into the Cantrip live theme repository will earn its developer 25 Nano. In addition, one developer will be awarded 100 Nano for the best theme submitted during this time. How do I start? To get started, simply ‘git clone’ the themes repository here: This git repository contains the starting HTML and tools to begin. Once you have the files on your machine, follow the README.md instructions to get hacking. Some basic knowledge of NPM, SASS, and JS is required. How do I get paid? When you create your theme you will create a theme.json file. Within this file, be sure to add your own Nano address in the ‘nano_address’ field (view the “rad” theme as an example, or the illustrative sketch below). Payments will be made on these themes as they are approved and entered into the Cantrip application, and the grand prize winner will be awarded at the end of January.
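For illustration only, a minimal theme.json might look something like the snippet below. The post only confirms the ‘nano_address’ field, so treat the other field name here as a guess and use the “rad” theme in the repository as the authoritative reference.

{
  "name": "my-theme",
  "nano_address": "nano_1yourpayoutaddress..."
}

Whatever address you put in ‘nano_address’ is where the 25 Nano reward would be sent once the theme is approved, so double-check it before submitting.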
https://medium.com/cantrip/announcing-the-2021-cantrip-theme-build-off-earn-some-nano-ef2ce3be591a
['Jeffrey Kilroy']
2020-12-27 20:23:03.198000+00:00
['Nanocurrency', 'Bitcoin', 'Nano', 'Frontend', 'Cryptocurrency']
Unique x Satoshi CLUB AMA Session 19 Dec 2020
Preselected questions about the Unique.One project. Gold Rocket | Satoshi Club: Q1 from Telegram user @yellowchamp With your “Artist No-Fee policy” which stated that you don’t charge creators fees. Artists get full price for their work.I’m a little bit confused with this statement, can you explain it more? Why you are not charging creators fees? Do you have any criteria for giving a price with your Artist or it is fixed already? What kind of Artist are you recommending to join your platform? Unique G: Will never DM first: Traditionally, NFT platforms will charge the creator a fee for creating an NFT token. Unique One does not charge this fee. So there is no additional cost other than that for the interaction with the smart contract. The other charge is when the artist sells. On other platforms, they can charge both the buyer and seller 3.5% for example. This means the company collects 7% in total. Here at Unique One, we don’t charge artists for selling. We only charge 1% platform fees for the buyer. And as I explained earlier, this 1% goes back to the token holders. Gold Rocket | Satoshi Club: It’s very smart! Thanks for the answer! Unique G: Will never DM first: As for our artwork policy, we accept all kinds of digital art. The only thing that we do not tolerate is copying and selling other people’s work. This is theft! We have clearly detailed in our terms for such violations. I think people in crypto have to understand that the reason they lose money or do not get full value is that all the value is taken by someone else. Gold Rocket | Satoshi Club: Strongly agree about copyright Unique G: Will never DM first: We want to prove that blockchain is made for disintermediation and not a tool for intermediaries to use to make money. D| Satoshi Club: Do you have a mechanism, how to check that it’s a copy work and not the original one? Gold Rocket | Satoshi Club: However, money is also important. Unique G: Will never DM first: At the moment it is complaints based. We will depend on the community to keep the platform in order. This way we can keep censorship down to a minimum. Other platforms also ban projects doing airdrops for their community. It is extremely dangerous to play God in a decentralized world. This is the problem. The community wants to realize value but when the projects are greedy. The community gets burned. Gold Rocket | Satoshi Club: You noticed it perfectly! Thanks for the answer! Everything is clear to me. The next question? Unique G: Will never DM first: Yes Gold Rocket | Satoshi Club: Q2 from Telegram user @Highpee Unique.One is a permissionless decentralized NFT exchange. Of recent, Defi projects have decided to become open and permissionless in order to gain trust. However, going open and permissionless have its own risk as your codes are open and your project can be easily cloned. Also, some cloned or fork projects have also been found to perform better than the original project. How do you intend to deal with this kind of situation in order to maintain your originality and competitiveness? Are you a clone of a previous project? Not an easy question Unique G: Will never DM first: Sure. For codes, when we refer to open source, it is often the smart contract portion that is open source. We still have to create new code for the website and interface. We studied the rarible code and used a large part of it with the exception of a few parts and made some changes to it. 
There will be a difference between us and Rarible in the sense that all our code will be fully open-sourced. This is the principle we live by and we hide nothing. In terms of competitiveness, there are many improvements we can make and are looking into, such as NFT fractionalization and integration with DeFi. Our policy here is to work with everyone and not do everything ourselves. After all, we are a community project and have no profit agenda. Gold Rocket | Satoshi Club: Btw is your team anonymous or are you public persons? How many people are on the team? D| Satoshi Club: And here also we have to ask our favorite question about the audit. Do you plan some audits? Unique G: Will never DM first: It is worth noting that Unique One, despite being a young low cap project, will be sponsoring the ETH Denver 2021 VR Art Gallery. SuperRare will be displaying their works there. It is a purely digital event that runs 24 hours a day for 7 days and is free for the public to join. It starts on 5th Feb 2021 and we hope everyone can join in. Yes, for the audit, we are in progress and hope to have the audit results published before our main-net launch on 28th December. We chose this date to launch the main-net because it coincides with Comiket in Japan, the major anime and manga event there, which will be done virtually this year. We have very close links with Japan and Japanese projects. D| Satoshi Club: This is great! Unique G: Will never DM first: I really encourage everyone to look at our mining program and liquidity provider programs. We have transaction mining and also volume mining. Basically, we want to be sure not only whales can earn the tokens. In total, 60% of the tokens of the project can be earned. And the community will own this project in its entirety. This is our token distribution: Dev Funds — 20%, DAO Allocation — 20%, Creation Mining — 10%, Transaction Mining — 10%, Volume Mining — 10%, Contest Rewards — 10%, LP Mining — 10%, Bounties — 5%, Airdrop — 5%. Our long-term plan is to create an ecosystem of marketplaces and obviously, the $RARE token holders will have some benefit when these sister projects are launched. We are also keeping an eye on the Polkadot/Kusama ecosystem and are impressed by some developments by Unique Network (no relation). D| Satoshi Club: Thank you! @GoldRocket27 do you have any additional questions? Gold Rocket | Satoshi Club: I wanted to clarify about the team that helps create this project Thanks for this. This is really impressive. Unique G: Will never DM first: If you guys know anyone, even if they are traditional artists, we have community members ready to guide them and onboard them. We believe only a fraction of the possible number of artists is on NFT platforms today. Other platforms will treat you as customers. Here we treat everyone as members of the community, as it should be. We are an anonymous team and choose to be for the reason that we want to move this to DAO governance. Once the DAO governance takes over, then the community will decide what to do when the regulations eventually arrive for DAOs. We call ourselves the founding team and our objective is to pass the governance to the community asap. In terms of expertise, we have veterans with more than 20 years of experience in all the key areas such as marketing, business development, software development, public relations, management, etc. But again, the objective is to pass this on to the community. Gold Rocket | Satoshi Club: great! Can we proceed to the next question?
Unique G: Will never DM first: We understand choosing to be anonymous also means that people will scrutinize us. This is why we did not go public until we have a test net. And this is why we only list when we are sure we can deploy on main-net. We want people to judge us by the product and experience. Come and talk to us. Sure. Gold Rocket | Satoshi Club: Q3 from Telegram user @Pannicota Hi sir Unique.One provides for users an ability to vote on governance decisions because it’s built on DAO structures right? But It seems that many DAO project’s voting systems are being Heavily manipulated by some Large Tokens Holders (Called Whales), I said that because big holders = big powers in voting, and tokens are easy to get through exchanges. And it brings bad DAO Voting behaviors, unfair for many small investors, small holders so how will Unique.One provides the fairness / the trusted, the balanced in the DAO Voting process on Unique.One projects? Unique G: Will never DM first: This is a great question. Yes, absolutely we recognize this as a major problem. We have released our first DAO Governance Proposal for community voting at governance.unique.one. This is so that the project is from day 1 a decision by the community. In this phase, we are using Snapshot which is an off-chain voting mechanism and yes it uses the number of tokens in an account to count as votes. This is why we have a phase 2 governance.. let me explain. D| Satoshi Club: Sure! Unique G: Will never DM first: In phase 2, we will adopt a DAO system but it is important for everyone to understand that the DAO is just a mechanism to vote and ratify a decision. The actual governance structure needs to be built on top of this. We have a proposal for the committee which are separated by functions and also by governance or management. We will announce when we are ready but we expect our proposal to be the first in the world. As for the voting power, we have a solution for this. Being an NFT platform, we can easily deploy a system where there will be requirements to participate in voting for certain matters (this means the issue can be categorized and various types of voting used). So for those who qualify, we can issue the person an NFT. This NFT can be deposited into a contract that produces another ERC20 that we will use for voting. So this means one person one vote instead of one token one vote. I hope that answers your question. Gold Rocket | Satoshi Club: All clear! Thank you! next question? Unique G: Will never DM first: In terms of languages, we have our Chinese site up at cn.unique.one and we should have our Japanese site up soon as well. We will gradually expand to all languages. Sorry I was taking advantage of the delay. Yes, please. D| Satoshi Club: Interesting system… Unique G: Will never DM first: The technology is very flexible but it is only one part of it. At the end of the day, we are humans and we need a human system as well. Gold Rocket | Satoshi Club: You can take your time if you want! we want you to feel comfortable! Unique G: Will never DM first: Hahaha thanks. We want people to come to use the platform and own the platform and I hope many of you here will come join us. Gold Rocket | Satoshi Club: Please share useful links to the project with our community! 
Unique G: Will never DM first: Upcoming Events:

18 Dec to 21 Dec — Governance DAO Proposal Ratification on Snapshot
21 Dec — Unique.one Chinese AMA in Biki WeChat
22 Dec — Biki Exchange and Uniswap Listing + Biki CEO AMA in the Unique.one Telegram + Biki Grid Trading Reward Programme
28 Dec — Comiket Air + Mainnet Launch

Website: unique.one
Chinese website: http://cn.unique.one
Our testnet is live! http://testnet.unique.one (We recommend MetaMask on desktop at the moment, and remember to select the Rinkeby network.)

Governance Voting — Ratification of Governance DAO Proposal: https://governance.unique.one/#/uniqueone.eth/proposal/QmcTuiju5kwREZ6DzM5sBgcbrzDVU6C2txLqZugcsLcgi5

Articles about Unique.one:
Introducing Unique One — https://uniqueone.medium.com/introducing-unique-one-6cfe19450e3c
Testnet Is Live — https://uniqueone.medium.com/unique-one-testnet-is-live-31ac7c567dc3
DAO Governance Proposal — https://uniqueone.medium.com/unique-one-dao-proposal-for-governance-structure-bdd7db0ea6bc
CEX and DEX Listing — https://uniqueone.medium.com/unique-one-rare-listings-cex-and-dex-392dcfa2482d

Gold Rocket | Satoshi Club: Wow! Thank you! Then the next question can be asked!

Unique G: Will never DM first: There are contests as well. The Biki free $RARE sign-up is almost over, I think, but we have a meme contest coming up for all you creative degens. Biki Exchange will have a Grid Trading promo as well, and we will have Uniswap LP rewards and so on. It's going to be exciting!

Gold Rocket | Satoshi Club: Amazing. Q4 from Telegram user @lzamg: Most new crypto projects start the launch of their tokens with a public or private pre-sale and even airdrops. So I'm curious to know: why did you decide not to do any of this and instead put the $RARE tokens directly on Uniswap for people to purchase? Is this a strategy to build trust and security among users?

Unique G: Will never DM first: This is an excellent question. We follow a method called Fair Launch. This means even team members have to buy tokens like everyone else, rather than receiving them in a private sale. The main advantage of this system is that it prevents dumps on the market at listing. We learned a lot from the 2017 ICO craze, where most team members and advisors who obtained their tokens for free would dump them on the market. This means investors lose their money and there is no end to the price drop. Personally, I have invested in a few such projects and it is unpleasant. The other key to this is actually the market cap. We only raise what we need to get the platform developed and off the ground. There are no extras for us. Other projects try to raise as much as possible, and this is greedy in our opinion. Early adopters need to be rewarded, and we have to respect this basic law of crypto.

D| Satoshi Club: Can you tell us how much liquidity you will add to the Uni pool, and at what price people can buy $RARE?

Unique G: Will never DM first: We want to grow from 20 to 200, from 200 to 2,000, and from 2,000 to 20,000, and the only way this is going to happen is if everyone realizes value from the platform. As you can see, there are many dead projects in crypto, and we believe it is simply bad karma: they took advantage of their community, and a project like that will never succeed.

D| Satoshi Club: Yes, this is true.

Unique G: Will never DM first: We are listing on Biki Exchange and Uniswap. We aim for liquidity of at least USD 10k, gradually building up in the first few days. The reason is that the community needs to come in and provide the liquidity.
The method of adding liquidity on Biki Exchange is different and actually very interesting. Grid Trading is an automated trading tool, and if enough users provide liquidity, you have a very active market. There are risks, of course, but we have already allocated tokens to Biki to give as rewards. So we think it is a good place to try to earn some tokens, as you will essentially be getting an airdrop regardless of trading performance. At first, we were only going to list on Uniswap, but then we realized not everyone can use a DEX, and not everyone wants to pay the high fees for a trade. Biki Exchange has been very accommodating to us, to our surprise, and they have 3 million traders and big exposure to China. This is why we created the Chinese website: to cater to this community. It was not in our original plan. Additionally, we will also launch a Balancer pool. Our philosophy is to give the community as many options as possible.

Gold Rocket | Satoshi Club: This is really very tempting.

D| Satoshi Club: Yes, I actually think it's a very wise decision to list on a CEX and a DEX on the same date.

Unique G: Will never DM first: We expect Grid Trading to have a positive impact on the price (not financial advice). The reason is that other traders on the platform will need to buy $RARE in order to join the Grid Trading contest. It also serves as a liquidity lock. We are hoping for good results, but we are in the same position as everyone else when it comes to predictions.

D| Satoshi Club: Thank you for the detailed answers! Ready for the next question?

Unique G: Will never DM first: We are keeping an eye out for developments in the Polkadot ecosystem, especially Polkaswap, which will be a CEX and DEX aggregator. There is no way to get Polkaswap except through SORA and the VAL farming game at sora.farm. That's a tip. But really, we are ready to move there from Uniswap, especially if it has greater liquidity and rewards. Yes, please.

Gold Rocket | Satoshi Club: Q5 from Telegram user @Jonahapagu: Unique.One will hold weekly artist contests providing artists with an opportunity to win $RARE. Can you explain more about this contest? How do artists qualify and take part, and what do they have to do to win? You have reserved 1 million tokens for this; will the contest stop after the 1 million tokens have been exhausted?

Unique G: Will never DM first: I am glad you asked this question. The artist contest will start in early Jan 2021 and the format will be decided by the community. The topics are endless: we can have character design, or landscape, or photo manipulation, or GIFs. We can also make it a theme, like Satoshi or CZ. We can have a collaboration, for example with Satoshi Club, where we have a Serg contest. Maybe whoever makes Serg the most handsome wins. LOL. More importantly, the reason we want to hold this is so that the qualitative aspect of the art is recognized and rewarded. We want new artists to become known, not only the well-established ones. And when they become more famous, we hope this enhances their ability to earn.

Gold Rocket | Satoshi Club: Let's ask Serge.

Unique G: Will never DM first: The token allocation is over 5 years, and we believe we will know whether we succeed or fail over that 5-year period. But there is no need to worry about after 5 years, because the technology will have improved, and as long as the DAO decides, the project can adapt to new conditions and continue to grow.
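Editor's note: the AMA mentions Grid Trading only in passing, so here is a minimal sketch of the general technique under our own illustrative assumptions (prices, step size, and order sizes are invented; this is not Biki's implementation or API). A grid bot places buy orders at fixed intervals below a reference price and sell orders above it, so that quotes stay on the book as the price oscillates.

```python
# Generic grid-trading order layout (illustrative parameters only).

def build_grid(mid_price: float, step: float, levels: int, size: float):
    """Return (side, price, size) orders spaced `step` apart around mid_price."""
    orders = []
    for i in range(1, levels + 1):
        orders.append(("buy",  round(mid_price - i * step, 6), size))
        orders.append(("sell", round(mid_price + i * step, 6), size))
    return orders

for side, price, size in build_grid(mid_price=0.50, step=0.02, levels=3, size=100):
    print(f"{side:<4} {size} RARE @ ${price}")
```

In a running bot, a filled buy is typically re-listed as a sell one step higher (and vice versa), which is why enough grid traders produce the very active, liquid market described above.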
I want to emphasize that the value of digital art is more than many can imagine. We are looking at a market of tens of billions, where existing platforms, especially in photography and stock photos, can charge as much as 40% on sales. If we succeed as a community and platform, there will be tremendous value for everyone.

Unique G: Will never DM first: He will say he is already too handsome.

D| Satoshi Club: HaHaHa.

Gold Rocket | Satoshi Club: I think so too.

D| Satoshi Club: Ok, so who will be picking the winners of the contests? The community?

Unique G: Will never DM first: Yes, the voting will be via Snapshot. A platform is just a tool and the team is just a facilitator; the community owns Unique One and they decide.

D| Satoshi Club: Thanks for the answer! Ready for the last question from part 1?

Unique G: Will never DM first: Yes, please.

D| Satoshi Club: Q6 from Telegram user @Garrinepotter: Farming tokens in liquidity pools is a common and boring business. You are creating an NFT marketplace where art will be presented as a commodity. Will it be possible to farm on your platform using NFTs, for example, for posting quality content? Is it possible to incentivize artists with your tokens? By the way, Satoshi Club, we need more people there @unique_one_rare, let's join.

Unique G: Will never DM first: Great question! The idea of farming NFTs is excellent and it is something we foresee doing. We may also do NFT games, but our idea is that it should be a platform so that others can easily do it. There are some NFT games that are not really open source; most of the code is actually closed. And yes, we are talking to two NFT fractionalization platforms. As you know, the potential for this is endless. It can be used to fractionalize artwork, but it can also be used to fractionalize bonds or insurance or other financial instruments. We are at the very beginning of a large revolution. If you guys have a startup idea, please come and talk to us. We are all about community and collaboration. Thank you so much, guys. We really appreciate all your support.

D| Satoshi Club: Thank you for such cool and informative answers!

Gold Rocket | Satoshi Club: Thanks for your great answers! Are you ready for the storm of questions?

Unique G: Will never DM first: I just want to emphasize that you do not have to be a developer to start a project; please come and talk to us. We are also talking to people who are deep into music NFTs and DeFi. We always believe that relationships come first, and eventually we will be able to work together. Yes. I have seen the craziness… hit me.

Gold Rocket | Satoshi Club: You have to choose the best 10 questions and answer them.

Unique G: Will never DM first: Sure.

D| Satoshi Club: Great! Satoshi Club, if you have some good ideas, go and share them.

Unique G: Will never DM first: Absolutely. Satoshi Club is VIP.

Gold Rocket | Satoshi Club: @Cool_as_Ice open live chat! Thanks for your kind words.

D| Satoshi Club: Let's go!
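Editor's note: to make the fractionalization idea in the final answer concrete, here is a minimal, hypothetical sketch: an NFT is locked in a vault that mints a fixed supply of fungible shares against it, which can then be traded like any token. The class and numbers are ours; the two platforms mentioned are unnamed and their designs may differ.

```python
# Hypothetical NFT fractionalization: lock one NFT, mint fungible shares.
# An editor's sketch of the concept, not any platform's actual contract.

class FractionalVault:
    def __init__(self, nft_id: int, total_shares: int, owner: str):
        self.nft_id = nft_id                 # the locked artwork (or bond, policy, ...)
        self.total_shares = total_shares
        self.shares = {owner: total_shares}  # ERC20-style share balances

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.shares.get(sender, 0) < amount:
            raise ValueError("insufficient shares")
        self.shares[sender] -= amount
        self.shares[recipient] = self.shares.get(recipient, 0) + amount

vault = FractionalVault(nft_id=7, total_shares=1_000, owner="0xArtist")
vault.transfer("0xArtist", "0xCollector", 250)  # sell a 25% stake
print(vault.shares)  # {'0xArtist': 750, '0xCollector': 250}
```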
https://medium.com/realsatoshiclub/unique-x-satoshi-club-ama-session-19-dec-2020-2be20e9483a5
[]
2021-01-06 06:40:36.844000+00:00
['Unique', 'Cryptocurrency News', 'Recaps', 'Ama', 'Bitcoin']
DUs in the EU, Ocean Market updates, and MyData Online 2020 — the latest from Swash
New pages on swashapp.io

If you haven't seen it already, check out the new media page on Swash. This is where you'll find all mentions of Swash across media, external blogs, podcasts, and videos. We've created a new contact page too, so it's even easier to get in touch with the team. On top of this, there have been a few bug fixes, newly-added collection points on Amazon, and a new module for collecting data from Etsy.

Ocean Marketplace

Following the launch of Ocean's V3 Market and the success Swash has received, we have now automated updates to the Swash dataset. These updates will happen weekly from today onwards. To learn more about the marketplace, check out the video below with Jamie Burke from Outlier Ventures and Bruce Pon from Ocean discussing all things datatokens and Initial Data Offerings.

Data Unions in the EU Data Governance Act

Big news for Data Unions! The latest public draft of the EU Data Governance Act outlines European data policy for the next decade. In it, the European Commission mentions Data Unions as "intermediaries between data subjects and potential data users in the economy". The impact of this is extraordinary. As the world's first digital Data Union, it's exciting to see Swash paving the way for new digital ecosystems. Read more about what this means for Data Unions in this article by Streamr's Marlene Ronstedt.

Data is Labor — Why we need Data Unions

Given the news about DUs in the EU, you may want to learn more about how Data Unions work, or why they're important. This article by James Felton Keith for CoinDesk puts it all into context: the historical significance, data as a form of labor, and the potential of DUs in policy, technology, and legislation. For more on this, check out this conversation between JFK, Streamr's Shiv Malik, President of RadicalxChange Matt Prewitt, and Data Dividend Project CEO Enoch Liang.

MyData Online 2020

On 11th December at 18.30–19.30 (UTC), Swash will be at MyData Online 2020. MyData is a conference that works to empower individuals and organisations with their personal data by answering questions around ethical data sharing, mitigating global challenges, and shaping rules for a better internet. Join the discussion by grabbing your tickets here.

1000 $DATA monthly giveaway

Finally, if you're the owner of wallet 0x32A72eB1ecff8AF84D14b56dA6E0AA6878a00d2A, then check your balance — Swash gifted you 1000 $DATA for the most new referrals in October!

What inspired us this week

📺 This talk by MIT's Prof. Sandy Pentland for a deep contextual dive into Data Unions and their potential.
📚 This article by Hannah Fry for The New Yorker on testing in Big Tech.
👂 This refreshing playlist by ICA Daily.
https://medium.com/swashapp/dus-in-the-eu-ocean-market-updates-and-mydata-online-2020-the-latest-from-swash-22252bbc223b
['Chloe Diamond']
2020-11-18 18:22:16.099000+00:00
['Blockchain', 'News', 'Technology', 'Data', 'Startup']
Is Amazon's Logo a Phallic Symbol?
AFTER ALL THE YEARS of getting packages delivered by Amazon, or watching Amazon Prime movies, or going to the Amazon website, I just noticed something. The yellow squiggly thing under the Amazon word is most definitely shaped like an upward-sloping penis! Isn't it? I thought it was just a smile until my boyfriend Bob said this while drinking his morning coffee: "Have you noticed that the Amazon logo looks like a dick?" HMMM. Yeaaaaaaaaa. OMG. I never noticed.
https://medium.com/pickle-fork/prime-penis-video-deb93e4b7cc7
['Michelle Monet']
2019-01-16 20:52:43.009000+00:00
['Design', 'Fun', 'Advertising', 'Humor', 'Comedy']
Curse of Busyness!
A life of leisure was once the aspiration of the upper class. But now, bragging about busyness is how people indicate their status. Busyness is a powerful social signal, though a somewhat counterintuitive one. At the turn of the 20th century, economists predicted that the ultimate symbol of wealth and success would be leisure — showing others that you were so successful that you could abstain from work. Instead, the opposite occurred. It's not free time, but busyness, that gestures to a person's relevance. In 1928, the economist John Maynard Keynes gave a lecture, later published as Economic Possibilities for Our Grandchildren, in which he foresaw that people in the year 2028 would work only 15 hours a week thanks to a productive economy and technological innovation. Well-off people in the western world are nowhere close to working just three hours a day. Just as being leisurely around 1900 was a status claim, being unmanageably busy at the turn of this century was a status claim, based on the fact that the busiest people also tended to be the richest. What can we do to be more efficient rather than busy? There are three elements that create an experience of deceleration: embodied deceleration, technological deceleration, and episodic deceleration. Embodied deceleration is the physical slowing down of your body: walking or riding a bike versus moving your body around in cars, planes, or buses. Technological deceleration is not giving up technology, but feeling like you have a sense of control over it. Episodic deceleration is having fewer episodes of action per day: not feeling like you're running from meeting to meeting at work, and then having to run to pick up your kids and drive them to three different activities at the same time as you're trying to cook dinner. Emphasizing the pleasure and social benefits of deceleration can help shift attention away from busyness. Changes in attitudes lead to changes in behavior. And both attitude and behavioral changes lead to changes in social structures.
https://medium.com/@arusharma/curse-of-busyness-c136ac75fd5d
['Arush Sharma']
2021-06-17 07:46:19.580000+00:00
['Efficiency', 'Productivity']
End of year update — 2020. A year of big challenges and even…
End of year update — 2020

Hey everyone,

Looking back at 2020, it's fair to say that it's a year none of us will be forgetting anytime soon! With so much of the world still experiencing hardship due to Covid, we've been extremely lucky to have sidestepped the virus relatively unscathed. This has been the biggest year we've ever had at Sylo in terms of development, and it feels amazing to know that not even a global pandemic could stand in our way. Our global team has been fantastic at working remotely this year, staying focused while shipping some of the biggest developments we've ever made. We have been doubly fortunate in the tail end of this year that, due to our headquarters' location in Auckland, New Zealand, our team has been able to collaborate and socialise in person. Without further ado, I'd like to reflect on our highlights from 2020.

DEPLOYED BITCOIN INTEGRATION

A feature that had been in high demand for some time: in early March, we officially welcomed Bitcoin to the Sylo Smart Wallet. It was the first big tick of 2020 in terms of making the app more useful for our users, and given the consistent rise of BTC this year, we're only too happy to have been able to provide it nice and early!

RELEASED THE FIRST SYLO EXPERIENCE

One of the [many] special parts of the Sylo Smart Wallet is its ability to, like WeChat, be a home for mini apps, also known as Sylo Experiences. We were pleased to unveil the first such Experience in April with the launch of the Centrality Swag Store. The Swag Store functioned as a unique shopping experience within Sylo, where users could purchase Centrality merchandise with crypto. This was a great learning experience for our development team as well; if you'd like to hear more about it, check out the case study from our Head of Design, James Carolan.

BROUGHT CRYPTO TO THE REAL WORLD

In June, along with our partners Centrapay and Coca-Cola Amatil, we gave users in Australia and New Zealand the ability to make purchases with Bitcoin at vending machines using their Sylo Smart Wallet. A huge and tangible step towards the everyday use of crypto, and we were really excited to see this innovative campaign come to life. If you loved buying a Coke with BTC this year, you're going to love what we have in store for our users in 2021…

LISTED THE SYLO TOKEN ON THREE TOP EXCHANGES

Next up, a series of events we were proud to see come to pass: the listing of the Sylo Token on top global exchanges KuCoin, BitBns, and most recently, Gate.io. We've just confirmed some exciting relationships that will supercharge the token's utility in 2021, which we're looking forward to rolling out. Watch this space.

LAUNCHED A WEB3 BROWSER

You know what they say: all work and no play makes one dull. So it was great to launch the in-app Web3 Browser, not only to give our users access to more places to spend crypto, but to keep them entertained as well. Using the Web3 Browser, Sylo Smart Wallet users can access any Ethereum-based dApp, such as decentralised games, marketplaces, exchanges, or social networks.

ANNOUNCED SYLO SOCIAL DEFI

Next up, in Q4 we announced a set of features coming soon that are sure to delight users around the world: Sylo Social DeFi. With stake and yield mechanisms built around the concept of groups, this development will allow people to engage in earn mechanics around activities they already know and love, such as chatting, hosting VIP content, and knowledge sharing. Read more about Sylo Social DeFi here.
FAT PROTOCOLS & FAT DAPPS

Back in August, we decided to get academic: Sylo Advisor Tyler Ward of BarnBridge and I took a magnifying glass to how best to assess the value of decentralised projects. This is one you might want to sit down for. So if you're curious about how we contemplate the decentralised realm and have a moment to yourself over the holidays, jump in and give it a read. As ever, we're keen to hear what you think.

THE OYA LAUNCH

Following this consistent string of developments, our team was head-down, hustling to the line to bring you the biggest release of our product ever — the Oya launch. A total realignment of our backend architecture centred around the release of user-run Sylo Nodes, and a world first, this release has been one giant leap forward for decentralisation. Since December 4, if you've updated your Sylo Smart Wallet, then congratulations — you're using Oya! If you'd like to know more about Oya, check out our section of Oya-related content here.

DEPLOYED TEZOS INTEGRATION

Last to arrive this year, but certainly not least, we deployed support for Tezos in the Sylo Smart Wallet. Built atop the robust foundation of Oya, our decision to integrate Tezos opens the door to a wealth of new opportunities across DeFi, dApps, fiat, and more, all underpinned by the power of the Tezos blockchain. "Baking", which serves to validate "blocks" in the Tezos blockchain and ensure the stability and security of the network, is already in the pipeline, with additional Tezos-specific functionality to be delivered during early 2021.

Happy holidays, keep safe, and best wishes for the festive season! Until next time…

Dorian Johannink
Co-Founder, Sylo.

We would love to hear from you. Send any queries to us now via DM on Twitter or Telegram. Experience the Sylo Smart Wallet now by downloading it from the Google Play or Apple App stores. For further announcements, follow Sylo on Twitter, Telegram, or visit www.sylo.io
https://medium.com/sylo-io/end-of-year-update-2020-729d780378fb
['Dorian Johannink']
2020-12-23 08:42:59.798000+00:00
['Communication', 'Defi', 'Messaging', 'Dapps', 'Blockchain']
Family Constellations Opens the Flow of Life Like Nothing Else
Photo by PublicDomainPictures on Pixabay

Family constellations helped me heal deep father and mother wounds and find peace with who my parents were, their limitations, strengths, and all. I've found constellations to be enormously helpful to me and others, and have seen wonderful kinds of community forming in constellation circles. And so I took training and, in 2016, became a constellations facilitator myself.

Benefits of family constellations:

- Finding answers to repeating negative patterns
- Experiencing enhanced self-love and acceptance of what is
- Harmonizing and deepening relationships with family, community, and nature
- Connecting with ancestral wisdom to more fully develop our humanness
- Being part of a supportive community in service to healing and growth that is emergent and stable, grounded in loving kindness, dynamic and responsive

I'm proud to be part of a professional constellations community that looks at our biases and is open to other ways of viewing gender, race, class, and other kinds of cultural conditioning. At the end of the article, there are links to more information about the actual mechanics of a family constellation. In this short article, I'll share the basics of what makes family constellations so powerful, using myself and a small fragment of my family story as an example.

Family

I grew up in hetero-normative culture, in a multi-cultural neighborhood in Chicago, and come from a middle-class, immigrant, assimilated Jewish family. My mother encouraged me not to have children. She did it in two ways — one unconscious and the other with conscious intention. As a single mom, she felt cheated because she had to sacrifice her own desires to care for us, pretty much without my father's help. She acted in many ways more like a traditional father than a mother. She valued work and independence over domesticity. She had a job, made sure we had food, clothes, and a place to live, but when it came to hugs and cuddles, they weren't in her vocabulary. My younger brother and I both felt her ambivalence about having had us. I didn't want to give a child what I'd got — or not got. And so not feeling entirely wanted was an unconscious encouragement not to have kids of my own. When I was eleven, awareness of human overpopulation was cresting, and my mother was the one telling me how overpopulated the world was. She intentionally made it clear that I had a choice about whether or not to have children and rewarded me for excelling at school, stressing the importance of my having a career. My brother married. He and his wife had two children. Both their children married and now there are three delightful grandchildren. I, on the other hand, made different choices — devoting my life to healing my inner child, helping others heal and mature, and following a spiritual growth path. Which is more important today? To pass on life, or to attend to becoming emotionally and spiritually mature humans, able to live together on our little planet? In my family, it seems, life chose both, one side for me and one side for my brother. Balance. My dad was brilliant, charismatic, warm, affectionate — and crazy. He would act out at work and lose a good job. He would pretend to commit suicide and scare us all (except my little brother, who was too young to know what was happening). I felt close to my dad in a way I never did with my mom, even though he didn't live with us after I was eight. When I was sixteen, my dad committed suicide for real. I was devastated, and, tragically, not surprised.
That year, my brother turned fourteen and we all understood the pressure our dad had been under — around the time our dad was fourteen, his dad had committed suicide too. My dad had felt responsible for his dad's death. Suicide intensifies the guilt the living feel. My parents were quite in love when they conceived me. Two years later, when my mom was pregnant with my brother, my dad's acting out was already wreaking chaos in our family and straining their marriage. After she asked my dad to leave, it was natural for my mom to take on the traditional father's role. Partly out of necessity, but equally out of capacity, she rose to the challenges of divorcing my father, at a time when divorce was still quite taboo, and creating a stable home for us. My mom's mom, my grandmother Laura, who lived nearby, helped my mom in many ways — babysitting us, making our meals, and giving us guidance. Despite their care, I longed for my dad. His mom, my grandmother Esther, had lived in another city and passed away when I was five. I modeled myself on her in ways that are startling considering how little I knew of her. She was affectionate, caring, and patient with me like my dad. They gave me comfort. After my father's death, I felt like a motherless child. Powerless to raise my dad from the dead, I became chronically depressed, with suicidal thoughts of my own. I was terrified of death and that I might kill myself. There came a moment, a few years after his suicide, where I realized I was now in peril. And I confronted myself: "Are you going toward life? Or are you going to follow your father toward death?" In that moment, I said, "Yes," to life. I determined to try to pull away from death. But it felt like I was pulling away from my father too. My loyalties felt torn between my love for my father and my own life. New questions arose. How could I make peace with my father and the family legacy I'd inherited from him? How could I find the freedom and love to go forward in my life, instead of constantly feeling pulled back by his undertow? Many modalities and practices helped me. What makes family constellations unique, compared to other family systems therapies, is its grounding in indigenous wisdom. Family constellations gave me a basic understanding that is more ancient and more fundamental than Western psychology.

Family constellations

Bert Hellinger (1925–2019) became a psychotherapist and created family constellations after living and working among traditional Zulu people for sixteen years, starting in the early 1950s — even learning their language. The Zulu, as do many traditional peoples, understand the place, or role, of ancestors to be that of sacred intermediaries between living community members and Source. Not only do our ancestors give life to the living, but they also continue to help the living maintain harmony in community and family, even after they are "gone". Living with the Zulu, Hellinger came to understand the importance of embracing one's "place" in the family, in terms of one's relational roles (grandfather, grandmother, father, mother, sister, daughter, and so forth). From the Zulu, and many other life experiences, Hellinger learned about natural, loving patterns in family and community, like deep respect for the flow of life. Hellinger called such patterns "orders of love" because it's the unhindered flow of love that creates healthy order in families.
The “constellation” of family constellations is the arrangement of family members that reveals the state of the family soul — with love’s order distorted and disconnected or healthy and whole. Life flows always in the direction of the next life. Parents create life, mothers give birth to children — not the other way around. If we try to push back against the river of life, to go in the opposite direction, like me as a daughter trying to stay with my father, trouble results, as in my depression and suicidal thoughts. Once I learned to accept my father’s choice and fate as his own, I could love and honor him as my father, without guilt, free to move forward into my own life. Family soul Human families each have what Hellinger called a “family soul”. His use of the word “soul” here can be confusing. Hellinger’s concept of family soul seems to me to suggest a subtle energy field containing information particular to one’s family — but not a sentient soul, like an individual soul or spirit. Hellinger also coined the term “family conscience”, which encompasses the values, cultural imprinting, attitudes, and beliefs held by a particular family soul (also not sentient). As well, a family’s particular history of major traumatic events (e.g. stillbirths, murder, war, religious persecution, or famine) may color and distort the family soul. Hellinger asserts families have more commonalities than differences — and if we focus on the commonalities, much healing can happen. Human relationships of grandparent(s) to children, parent(s) to children, siblings to each other, and so forth, are universal to all of us. The basic nature of family relationships gives us a kind of sixth sense — the potential to tune in to another’s family soul in a constellation setting. Knowing one’s place in one’s family and recognition of each member’s belonging are both fundamental to a healthy family soul. And belonging is another of Hellinger’s orders of love. When our sense of belonging is lost, we feel alienation and there will be some level of distortion in the family soul. For example, in my family, my father didn’t seem to belong. My experience as a kid was that he kept disappearing until the ultimate disappearance. Also, my mother didn’t invite belonging. (What happened to my mother, in her family, is another story.) Naturally, I didn’t feel like I belonged either. Working with the distorted constellation of my family soul led to a healthy image for my family soul. I did constellation work on my nuclear family and both my mom’s and dad’s families to understand and heal the parts in me that were hurt by family soul distortions. My healing came in stages over time, using family constellations and other modalities. Connecting with my ancestors Recently, as a participant in an online family constellation ritual, I asked to connect with my family lineage ancestors. By ask, I mean with the intention to open and learn, I went inside to explore an inner experience of my family line. Touching into father’s father’s father’s line back to the beginning of human life, I sensed a dense atmosphere with wisdom in it. There was also a lingering, if slight, quality of suffering of the Jewish men that seemed to still need healing. Yet the overall feeling was strong. I felt dynamic throbbing energy infusing me from my father’s line and giving me the agency to move into life with purpose. Sensing into mother’s mother’s mother’s line, I felt pure joy, bubbly, complete, and connected in my heart, with all the women who came before me. 
It was quite a miraculous feeling, so pure and unadulterated. Feeling joyfully at one with my mother’s line, I once again invited my father’s line. At first, I sensed maternal and paternal distinctly and then a mysterious melding as the two lineages combined in me. It was like two rivers of energy melting together into a new balance. I am free and able to express life in harmony with my ancestors. Family constellations doesn’t replace individual counseling. They each have their place. One might need one, the other — or both, as I did on my healing journey. With the insights and resources family constellations brings, it can be easier to resolve issues in counseling. Family constellations is like nothing else I’ve found for moving our understanding of family or lineage trauma out of the head and into the heart, and for restoring the flow of love that gives us life. To learn more about the process: Family, Nature, and Systemic Constellation Circles. To read an example of an actual family constellation: Restoring Love’s Flow in a Family Constellation.
https://medium.com/our-blossoming-matters/family-constellations-opens-the-flow-of-life-like-nothing-else-130d44a502e6
['Olivia Fermi']
2020-12-09 21:08:20.126000+00:00
['Personal Growth', 'Family Constellations', 'Flow Of Life', 'Healing', 'Community']
What Happened To The Kingston Detective Who Blamed Police Killings On “Resisting Arrest?”
What Happened To The Kingston Detective Who Blamed Police Killings On "Resisting Arrest?"

Source: Global News Kingston

As every reader of this article knows, last year, Minneapolis Police Officer Derek Chauvin put his knee on George Floyd's neck for 8 minutes and 46 seconds, with the aid of fellow police officers Thomas Lane, J. Alexander Keung and Tou Thao, killing him in cold blood. The event shook the world to its core, setting off protests against police brutality around the globe. Kingston itself joined in, rallying with a vigil on June 2nd in Skeleton Park, organized by the Black Luck Collective, whose Facebook page describes their purpose to be "about inviting and the coming-together of new and seasoned Black Kingstonians to see, to know, that they are not isolated here and to uncover our dependable and visible community." Less of a point of pride for the city was an anti-police brutality protest announced for June 6th, with a horrible mess of a genesis. It was created by inexperienced organizers, neither of whom previously knew about the vigil on the second, with no planned speakers. They also initially seemed to invite the Kingston Police to attend and speak, though they later claimed this was a miscommunication. Attendees of the protest reported that any and all people were allowed to approach the speaking area, including one individual who explicitly declared "All Lives Matter". All in all, it may have been a gesture of good intentions, but unfortunately, it probably did more harm than good. It is also important to note that the Black Luck Collective stated they had no hand in the June 6th protest. As frustrating as it is that an anti-police brutality protest that initially openly welcomed police was held in a city where Black people are stopped nearly 4 times as often as white people, what is even more galling is that one of Kingston's own, Detective Brad Hughes, published a post on his personal Facebook account on June 1st, five days before the botched protest, which openly declared that George Floyd had a hand in his death by resisting arrest. The post, which was removed at the behest of a senior commander four days after posting, created an outrage in the community. Global News, The Whig Standard, the Ottawa Citizen and the Kingstonist all ran pieces on it, a petition was created to fire the officer, and multiple social media screeds were penned against him. I know this because yours truly penned one of those screeds.

Source: @YouCaughtScott on Twitter

That particular tweet was even featured in the Kingstonist article on the matter, of course with the "fucking" taken out. Evidently implying that a post amounting to "resisting arrest is grounds for execution" is more palatable than "fuck". But what did Brad Hughes say, exactly, to warrant this reaction from myself and the Kingston community? Although the post was removed, pictures are still available. I will link the full text in a picture below. Content warning for defending police brutality.
Source: Kingstonist.com

Despite preemptively couching the forthcoming language by declaring that he would be the first to admit when an officer is wrong and should be held accountable, appearing to condemn the actions of his career comrades, he then writes, "The main component which is 100% present in ALL incidents where someone was injured or died while in police custody regardless of what race is involved is that the accused was resisting their arrest." Either Hughes had lived in a subterranean cave for the entirety of his life, or he was being willfully ignorant. Do I need to point to the case of Breonna Taylor, where cops burst into the wrong apartment without identifying themselves and proceeded to shoot and kill her? Should I retell the shooting of Philando Castile, a legal gun owner shot next to his partner and his then 4-year-old daughter at a traffic stop? Perhaps Fred Hampton, who was assassinated by the police? I could be more charitable and assume Hughes was talking about Canada instead. This is a preposterous notion, as the case in question was in the U.S., but let's do this mental exercise for the sake of steelmanning his position. Just for a sampler, in Canada we have the seemingly unending cases of cops shooting people on mental wellness checks. We also have the Saskatoon freezing deaths, the brutal killing of Myles Gray, and of course the RCMP having a 24-point, nearly 4,000-word Wikipedia article just for their controversies. All of this is in conjunction with the fact that police officers use "resisting arrest" against Black suspects far more than white suspects, and the concept itself is used to justify police brutality. I could also make a strong argument that George Floyd was not resisting arrest, according to witnesses at the scene. But saying all of this without further context would mean conceding the point that those who do resist arrest should be expected to die. This is a lie, and it is the conclusion that Brad Hughes wanted readers to accept. Admittedly, it may be hard to tell as of late, but at the time of writing, we do not live in Judge Dredd's universe. Very few crimes carry the death penalty, and those that do are, supposedly, to be decided in court. So implying that any scramble one may put up upon arrest somehow permits the cops to kill them, unless their life is in immediate danger, is authoritarian bullshit. Not that cops won't claim they "feared for their life" every chance they get, even when unwarranted. No, the attitude exemplified in Hughes' post made it clear: any Kingstonians who are accused of "resisting arrest" and are subsequently beaten, injured, or killed are also to blame for whatever happens to them. But now that the issue with the post has been clarified and explained, why bring it up 8 months later? Surely the police have dealt with the matter? After all, Sgt. Steve Koopman announced they were to investigate the incident and discipline accordingly, should the situation call for it. As Global News reported in June of last year, Koopman said that the consequences could range from a written warning, to a financial penalty, to possible termination. In the same article, Hughes himself said that he had a comment prepared, but planned to wait to release it until the investigation concluded.
Well, the reason I bring this up now, 8 months later, is because, as far as I can tell, the investigation is still ongoing. The last public mention I could find of it was in a Queen's Journal report from October of last year, quoting an email that Constable Ashley Guthienz sent them regarding their questions on the matter: "on the organization's end [the investigation] is an internal matter […] after previous media inquiries it was identified that internal matters will not be discussed publicly." I found this curious, as Brad Hughes explicitly said that he had a statement prepared and would post it as soon as the investigation had concluded, a statement which I could not find. So I took it upon myself to message Kingston Police directly, as a representative for The Limestone Dissection. I figured it would be a long shot considering the DIY attitude of myself and this project, and unsurprisingly, they were unwilling to answer my questions. Constable Guthienz told me that they only respond to accredited organizations, and "Unfortunately we will be unable to provide you with any information in regards to this request". Please don't read too much into this, as I myself would find giving details of an investigation to a new org started from the ground up pretty unreasonable as well. That being said, the lack of coverage from accredited media organizations is strange on its face, and police not discussing internal matters is, sadly, par for the course. It is a frustrating feature of modern policing, one that shields the cops from the concerns of the public about those who are supposed to monitor them (monitors who are being paid by the public's tax dollars). But the length of the investigation warrants further inspection, inspection that our local news organizations should undertake since, as previously shown, the public is not able to obtain such information. An internal investigation into a Facebook post taking five months would be laughable in any industry. But a Facebook post that caused this much public outcry, was acknowledged by the poster and his superiors, was taken down due to the public reaction, and suggested that blame for the death of an arrested man was partially his own (when that man was killed by a cop, and the poster himself was a cop), would be a pitiably hilarious farce, were it not the veiled public threat to the citizens of Kingston that it was. That being said, without an update, it's entirely possible that the internal investigation really is taking 8 months and counting. To be clear, I do not want to suggest that Brad Hughes intended this post to be taken as a threat, or that he has committed any act of police brutality, or even remotely plans to. There are no instances of him being involved in such acts, or investigations into planning them, or anything of that matter. But it is not the intent that concerns me so much as the effects. In a similar scenario, were a man to say that there was "violence on many sides" about a rally where neo-nazis murdered a woman, we would rightfully call him out for downplaying the acts of violence, regardless of his intent.* Two separate websites created to ease access to Ontario's sunshine lists (public sector salary disclosures) put Brad Hughes' salary at a whopping $114,300. Well, $114,300.71 to be precise (search for yourself!). The Kingston Police Municipal Board received $43,486,975 last year, 10.8% of the operating budget for 2020.
The board with the next-highest funding? Frontenac County Land Ambulance, at $7,952,623, which is not even one fifth of the funding allocated to the police. The public deserves to know the results of this investigation, and quite frankly, the investigators in question need to stop dragging their feet. If this 8-month-long deep dive into a tone-deaf Facebook post is any indication of the Kingston Police's ability to investigate, I want my money back.
https://medium.com/@youcaughtscott/what-happened-to-the-kingston-detective-who-blamed-police-killings-on-resisting-arrest-cb4e83f28884
['Scott Martin']
2021-01-15 19:17:53.746000+00:00
['Black Lives Matter', 'Kingston', 'Police', 'Resisting Arrest', 'George Floyd']
eToro End-of-2020 Review!
https://medium.com/jcstorytelling/etoro-2020-nov-and-dec-webinar-339183c110ba
['Jc 隨寫隨筆']
2020-12-30 10:14:18.685000+00:00
['Etoro', 'Investment', 'Financial Market News', 'Etoro Webinar']
Last Minute Album Reviews — 2020 Edition
We have finally arrived at one of the most interesting parts of my musical year, in which I revisit some of the albums that I overlooked during the year but that have garnered enough praise from other sources, or remained enigmatic enough, for me to give them a proper shot. Nevertheless, I can walk out of 2020 extremely proud of how many reviews I actually wrote this year. I would estimate it to be twice as many as last year's reviewed albums, so here's that unwarranted moment for me to give myself a pat on the back. Going through this process of revisiting albums over the last couple of years has allowed me to discover some fantastic albums that might otherwise have slipped under my radar for good. In last year's case, these namely included Michael Kiwanuka's KIWANUKA and Dorian Electra's Flamboyant. But fast forward to the closing month of this rather bleak year (in nearly every respect, save musically), and we have seven final pieces that garnered my attention at the very last minute. Unlike previous years, I will be putting all of these miniature album reviews in one article, as opposed to seven separate articles for each day of the week. However, I will still be covering these albums day by day, and will be adding to this one post daily, one album at a time. So without any further ado, let's conclude my reviews of 2020 with these final seven.
https://medium.com/@joeboothby/last-minute-album-reviews-2020-edition-90a757dad530
['Joe Boothby']
2020-12-20 18:04:10.435000+00:00
['Album Review', 'New Music', '2020 Music', 'Music', 'Music Review']
Why I Won’t Be Sharing Client Gifts on LinkedIn
Over recent weeks, I've aired my views about LinkedIn to my network. I've made the point that — while I see great value in LinkedIn as a means of connecting with other professionals and mapping out organizations — I think that in too many other respects it's become a toxic breeding ground for jealousy, humble-bragging, and virtue-signalling. I refer, for the most part, to the LinkedIn feed and not to other aspects of the platform. There is good there. But a lot of bad. So I've come to the conclusion that carefully curating the feed is essential to maintaining equanimity while browsing its contents. (And it must be pointed out: for all the self-aggrandizing promotion, there are worthy and important conversations being held there.) One of the many features of the LinkedIn feed that has been ticking me off recently is the growing trend of people posting a corporate gift hamper along with some cringeworthy ode to their boss and/or employer. In case you don't know what I'm talking about, the usual formulation is a photo of the hamper captioned with an effusive thank-you to one's boss and employer.

Why I Will Not Be Posting Gift Pics

I'm not refraining from posting client gift hampers because I don't get any. Surprisingly, nice clients do buy their freelance writers gift baskets and other nice things around holiday time (I also, when appropriate, try to reciprocate the favor). Instead, I just think that this is another one of the many features of LinkedIn that doesn't make the world a better place, and which I will therefore refuse to participate in so long as it's a viable option for me to do so. For one, I think that gifts should remain nice private gestures between a business and its staff. It's lovely that many organizations are in a position to reward their staff, albeit in this small manner. But, after a brutal year in business for many, surely there are many organizations that can't afford this small luxury, let alone keep their doors open? As with many facets of LinkedIn, the result is thus to simply create jealousy and a dismal contest among companies trying to outdo one another with the best gift hampers. Judging by the gift hamper photos that crop up like mushrooms around this time of year, I can also only assume that many HR managers and departments are creating the expectation that their staff will post the hampers, with some laudatory words about the company, on LinkedIn. If not an ode to one's "great boss", then it's typically an exclamation of "how well taken care of" one feels as an employee. HR departments using this strategy are effectively using their employees as advertising billboards. Employees who play the game of their own volition and post about how wonderful their company is for sending them a gift are only playing into the above. At least in my (highly partisan) view. While it's a nice gesture, sending your employees a gift hamper does not make a company a "great place to work." It's a minor perk. Finally, of course, posts like this fail the "who cares?" test. Those sharing gift hamper photos are contributing absolutely no value to the lives of anybody unfortunate enough to stumble upon them. Or maybe I'm just that rare breed of misanthrope that is insensitive to these things and isn't excited to know that somebody I once met at a conference five years ago received a nice bottle of peanuts and some honey for the holidays from their employer. Enough said. I'm going to enjoy some peanuts and wine that I purchased for myself! I will not be posting the photo on social!
https://medium.com/@danielrosehill/why-i-wont-be-sharing-client-gifts-on-linkedin-400f72499d0d
['Daniel Rosehill']
2020-12-17 17:18:12.719000+00:00
['Bragging', 'LinkedIn', 'Rant', 'Corporate Culture', 'Communication']
A Case for Teaching Focus
How can students harmonize with the technologies surrounding them?

It's very easy to become distracted.

For a handful of months now, I've been engaged with a project that I believe to be fairly unique in its nature: assisting in the teaching of an AP Computer Science high school class — over the internet. Sounds crazy, right? On given days during the week, I video chat with a classroom full of students to help them bolster their programming skills, communicate computing ideas through my lens as a software engineering professional, and establish a rapport with the students. The program through which I was able to conduct this is called TEALS, sponsored by Microsoft as part of its philanthropic endeavors and aimed at spreading quality computer science knowledge nationally. It's been an incredible experience, to say the least. I absolutely love teaching. There is an indescribable thrill and pride in seeing a student break through a mental or physical challenge and emerge with that triumphant look of, "I get it!" Some of my former teachers have been inspirations for me in my continuing work and education. A teacher can occupy a special role in the life of a student, filling in crucial but missing areas of emotional connection, friendship, or guidance — beyond simple intellectual shepherding. For these reasons, I find my connections with students equally special; each mind is so creative and open, hungry for knowledge and the opportunity to explore a complex and frequently amazing world. I strive to be, for my students, who my favorite teachers were to me. It was my truly romantic love of teaching that recently led me to a commensurately disappointing realization when I had the opportunity to visit my students in person. I traveled to their school, and for three days I sat in on my class, and a handful of others, to observe what a modern high school looks like. What I saw swirling around me left me with a nauseating discomfort that came with what I believe to be a profound realization. The school was an environment that was as close to a physical embodiment of "noise" as I have ever experienced. Through what I believe to be no fault of the students' own, they seemed at the mercy of their devices: laptops beeping, phones buzzing, headphones always in. Students' learning seemed to be damaged by addictive technologies, and teachers did not seem to know how to help. I'd like to describe what I observed during my time at the school to illustrate an issue that I believe is unparalleled in its threat to the learning of students — young or old: distraction. As the prescription for this diagnosis, I would also like to offer a solution that I see as crucially unavoidable if we are to put upcoming digital-technology generations on a path towards success in our distracting world: learning to focus.

The Modern Sea of Noise

I have to begin by saying that I was pretty surprised by the amount of distracting background noise that existed in this school. Perhaps making the problem even worse than at some other schools is that the school at which I assist in teaching is a technical magnet school, which means that no matter what the students are learning, it must fit within a broader goal of molding minds into those of technology gurus who can demonstrate proficiency in programming, knowledge of electronic hardware, and skills derived from modern business practices.
It is certainly an admirable goal, but it comes with the problem that every student is then sat in front of a computer screen and expected not to watch YouTube, chat on Discord, or browse the internet, just because a teacher told them not to. Technology is distracting enough for me as an adult. As a software engineer, I have taken explicit steps to set up my work environment in a way that limits the built-in distractions that modern operating systems and applications love to throw our way. From my previous studies of the effects of technology on society, I know that the immediate satisfaction of checking emails or scrolling through social media is incredibly addicting: designed that way on purpose to make inordinate amounts of money from users' attention. I can hardly imagine how a teenage student is expected to automatically know how to interface with a purposely addictive machine and also be present enough to learn content. I observed that the predominant way the technology surrounding students interfered with their minute-to-minute tasks was that it served as a relief valve at the slightest hint of discouragement. Any stress that came from growing tired of reading a historical passage, or from being stumped while altering an algorithm, was mitigated by immediately opening a YouTube video, checking Snapchat, or hopping on Discord. It is a fact that to get work done, we must be diligent and have grit in the face of discouragement — which is not easy by any means. It was clear to me that technology covertly altered the development of the coping mechanisms students needed to push through challenges as they navigated the landscape of learning. Even if students sought to be better at staying continuously engaged, which I am sure many have, this can be very difficult without guidance. Unfortunately, it also seemed that their teachers had either no interest in, or no ideas for, trying to help them resist distractions. Instead, students fell prey to the sway of easy dopamine many times throughout their school day. As a teacher myself, I certainly sympathize with instructors, and understand that these problems are almost impossible to solve through any type of authoritarian means (not that this should even be the goal). If no alternative methods to encourage focus occurred to you as an instructor, the problem would likely seem intractable and endlessly frustrating. There is no way to police every student during every minute of a period, ensuring that they are on task and unplugged from distractions. I believe that what logically follows is that the only solution to such a problem is for students to exercise their own self-control. Thus arises the big question: how can students choose wisely between instant technological gratification and more arduous self-control when faced with frustrating challenges?

Learning to Focus

I firmly believe that focusing is a skill that should be taught. If you think deeply about your ability to concentrate and work hard, you may realize that it is a learned skill that you acquired over time as you slogged through years, or even decades, of frustrating engagement with difficult tasks. When we're not taught how to focus, we often end up learning through trial by fire.
If you had instead been coached in the art of focusing, do you think you would have both saved yourself a lot of frustration and been able to achieve more? My guess is: yes.

About a year ago, I discovered that I was not the only person who thought about focusing as something that should be taught. In a TEDxReno talk, the Australian Hindu monk Dandapani describes his experiences learning to focus. His voice is soothing and calm as he speaks to the audience. As a child, he explains, he was constantly unfocused, and reprimanded as a result. Had he been born only a few decades later, he likely would have been labeled A.D.D. or A.D.H.D. He asks the audience, "How many of you were taught in school how to concentrate?" About two or three hands go up. "How many of you were told in school to concentrate?" — and the audience breaks out in laughter as many more hands are raised in the air.

Dandapani beautifully elucidates what I had held as intuition for years: "…most people can't concentrate today for two reasons: one is, we're never taught how to concentrate, and the second is, we don't practice concentration. So, how can you do something if you're never taught how to do it? And how can you be good at something if you never practice it?" My mind flooded with sensation as this simple, logical explanation made it all click for me. Of course so many of us cannot concentrate. We live in a world of distractions and nobody helps us learn how to avoid them. We spend most waking moments of each day immersed in noisy, distracting environments, and spend little to no time intentionally practicing concentration.

Dandapani also acknowledges the role that technology plays in distracting us. He masterfully explains: "Technology in itself is not a bad thing; it's actually a beautiful thing: as long as we're in charge of it. But, if every time your iPhone beeps or makes a sound, and you turn to it, and you go, 'Yes master… How can I serve you today?', then you live in that world of distraction."

We too often allow our technologies to use us. We think ourselves "users" because we're given usernames and accounts, but in a way, we may gain more insight from thinking of ourselves as "users" akin to drug users. The instant gratification we get from modern technology is more like the instant gratification we get from drugs than we often realize. Our addictions to technology can truly make us slaves to our devices. However, if we learn to control our attention, resisting the urge to give in to the constant beeps and buzzes, we will likely be able to harmonize with our tools in a much more productive way. Our guru of focus teaches us that, "once you know how the mind works, you can control it; and once you control it, you can focus it."

One-Mind Schoolhouses

Modern school systems — especially those with a technical bent, filled with laptops, phones, and other "smart" devices — must adopt practices of teaching focus if they are to enable success in the high-tech world. It's not the students' fault at all that they are so distracted. They've grown up in a world full of addicting digital immediacy, so of course when they're in the classroom, they unconsciously succumb to its enticing stress relief. I could easily imagine my students flourishing in their brilliance if exposed to a curriculum of concentration.
As a passionate educator, I truly become overjoyed when thinking of the prospects of students who are calm, focused, still fun and creative, and able to express their magnificent talents in the classroom. Beyond being able to comfortably divorce from addicting technology, there are many examples of how learning to be mindful and focused can yield other tremendous benefits to students.

The podcast What Were You Thinking produced an episode called "It Isn't Spirituality, It's Neuroscience," in which they explore stories from teachers who teach mindfulness in their high schools and universities. The aim is to understand how best to coach the developing adolescent mind. Dan Siegel from UCLA and the host, Dina Temple-Raston, illustrate that stress hormones in the brain have been shown to slow the building and strengthening of neural connections during learning. If educators can help students reduce stress through mindfulness, it seems that even their brains' growth could be accelerated during learning. Bita Moghaddam, the head of the Department of Behavioral Neuroscience at Oregon Health and Science University, says that her colleagues measured student learning during mindfulness sessions and observed actual, positive physical changes in the brain after meditation. There is so much literature detailing the benefits of practicing mindfulness and learning how to focus that at this point, implementing these regimens in schools seems like a no-brainer.

It's no secret that the education system in the United States ranks far below many others globally. My school is just part of a wide-scale problem. The Pew Research Center cites the Programme for International Student Assessment conducted on OECD countries to show that the United States ranks, out of 71 countries, 24th in science, 38th in mathematics, and 24th in reading. Perhaps other countries have solved the puzzle of harmonizing technology with learning, or perhaps they simply leave technology at the doorstep. But as long as U.S. students are unintentionally distracted in school, those numbers seem to have little to no chance of improving. By refocusing the education system in the United States, there may be a chance to reverse a depressing trend of under-performance from students. Obviously there are other, more systemic issues that must be addressed as well, but teaching students how to focus may help along the way.

The science and motivations seem clear. All that is lacking may be the courage or imagination required to implement systems that could truly allow students to reach their greatest potential. I believe that the creative minds I met could reach incredible heights if only guided toward a path that beautifully melds calm, creativity, and focus. It seems like such a strategy would only yield rich, long-lasting benefits; so, what are we waiting for? Let's take a deep breath, clear our minds, and help bring about the education system that students deserve.
https://ricknabb.medium.com/a-case-for-teaching-focus-94f691e6295d
['Nick Rabb']
2019-04-05 11:45:32.063000+00:00
['Mindfulness', 'Focus', 'Technology', 'Education', 'Neuroscience']
True stories of a Model 3 in the cold.
Battery Performance in Freezing Temperatures

I made a 250 km trip when the temperature was -18C. I started the trip with 92% range and arrived with 20% remaining. Using 72% of the battery works out to about 350 km of range based on the EPA rating, but I only travelled 250 km, which means I got about 70% efficiency. This is with a full car using all the creature comforts: streaming radio, heated seats, charging up our phones & tablets. Considering that I rarely travel distances close to the range of my car, the reduced efficiency hasn't affected me much. When I do need to travel, it typically means a 20-minute Supercharger stop midway to the destination. I always use the navigation system because it does a really good job of predicting the actual range that will be used for the trip.

In -30C, with a wind making it feel like -40C, the car also did well as we were out & about town. One stop was for about an hour. When I opened the app to start warming up the car, I was happy to see no range lost and no cold battery warning. Warming the car back up to 20C did take a couple of percentage points. The next day we made a pit stop for coffee and I set the climate control to stay on. It used about 1% for every 10 minutes.

-30C outside, +22C inside

Charging with a "regular" 110V plug

My biggest concern with buying an EV was charging at home. My garage is detached, not heated (or insulated), and is only wired for 110V. Maintaining a good battery temperature for charging, especially in the bitter cold, takes some power. I wasn't sure if I'd be able to keep the battery warm and charge the car on a 110V plug. Throughout the summer I was seeing a charge rate of about 1.5% per hour. Plugged in, inside my garage, I've still been getting 1.5% added every hour — until this cold snap came by, and it's been closer to 1% added per hour. This may not be the case when parked outside, though. One afternoon when it was below -20C the car wasn't adding range despite being plugged in. Fortunately, things changed as soon as I moved the car into the garage.

Rear Wheel Drive & All Season Tires

I live in the city, and typically the snowplows or the traffic work the roads into pretty decent conditions. The residential streets may stay snow covered for a while, but the main routes are generally pretty good. But not always! When you get a cold snap with snow, even the main roads can become skating rinks.

I'm sure there is a road under here somewhere

But this isn't anything new to me. I've been driving these snowy streets since I got my license. My Model 3 is rear wheel drive, which is a red flag for anyone living in snowy conditions — until you explain that the Model 3 has a very heavy battery pack on the floor that distributes the weight of the car evenly over the axles. I've never owned an all wheel drive vehicle and never bought winter tires, so in my experience, the traction and handling are great compared to my previous FWD vehicles! I do, absolutely, drive with caution around the city, but I can say I've never lost control of the vehicle. I've skidded and my tail end may spin out for a second, but then the traction control kicks in.

Conclusion

Based on my experience of 25+ years of driving in Canadian winters, the Tesla Model 3 has blown me away with its winter performance and I continue to LOVE this car! If you have any questions about the Model 3 in extreme winter conditions, let me know quickly because it's going to warm up next week!
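For readers who want to reproduce the math, here is a minimal sketch of the efficiency calculation described above (JavaScript; the ~499 km EPA rating is my assumption for a Long Range Model 3, not a number from the original post — the post's ~350 km figure implies a slightly lower rated range, the gap being rounding and charge-display nuance):

// Winter efficiency, back-of-the-envelope.
// Assumption: EPA rated range of ~499 km (310 mi) for a Long Range Model 3.
const epaRangeKm = 499;
const startCharge = 0.92;   // 92% at departure
const endCharge = 0.20;     // 20% on arrival
const actualKm = 250;       // distance actually driven

const usedFraction = startCharge - endCharge;  // 0.72 of the pack
const ratedKm = usedFraction * epaRangeKm;     // ~359 km of rated range consumed
const efficiency = actualKm / ratedKm;         // ~0.70

console.log(`Consumed ~${Math.round(ratedKm)} km of rated range over ${actualKm} km`);
console.log(`Winter efficiency: ~${(efficiency * 100).toFixed(0)}%`);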
https://medium.com/@sumiguchi/true-stories-of-a-model-3-in-the-cold-ebe219bf276b
['Jeremy Smith']
2019-02-07 03:50:55.531000+00:00
['Tesla', 'Driving', 'Model 3', 'Winter', 'Snow']
Reaching The Balance of Working Remotely & In The Office
Long before the pandemic hit, the debate over the merits of working remotely versus being in the office was an ongoing saga in our company. With me being from the millennial generation, and my business partner and father being of the boomer generation, we found it difficult to see eye to eye. In many ways, our experiences are so different from each other. I grew up with the computer; as a preteen and teenager, a large portion of my interactions with my friends was online. Compare that to those of the boomer generation, who were already adults when interacting with others on the computer became a regular occurrence.

As an IT organization, we had the capability of working remotely far before most small businesses of our size. At the same time, being in the service industry, many of the roles in our company cannot be performed remotely. This resulted in many (sometimes heated) discussions on the merits of leading a team remotely and what the "right" balance looked like. I was open to more extended periods of working remotely, while my father felt the need to be present in the office every day.

What emerged from our constant back and forth over the years was what we call "Office Day." Once a month, our entire team would be present in the office at the same time. We would have smaller team meetings throughout the day, and we all enjoyed breakfast and lunch together, which the company provided to the team. I would work remotely three weeks out of the month and be in the office for the entire week that coincided with Office Day. We seemed to have reached that "right" balance for about a year, and then COVID-19 entered the scene.

What I have learned from being forced into a longer stretch than normal of remote work, and being robbed of our "Office Days," is just how important they are. While a lot can be accomplished remotely, there is an intangible energy created from being together in person. This energy is vital for team cohesion and happiness. Not only that, but not everyone is "made" for remote work. Some find coming to the office every day energizing, not draining. Every person has a different ideal mix of working in the office and working remotely, depending on their lifestyle, personality, and gifts. And I believe that the more diverse a group of people, the better the ideas that emerge from their collaboration.

What I see coming out of all of this for the future is flexible work environments becoming the norm. How often an employee frequents the office will dictate their setup: those working a certain number of days in the office will have dedicated desks, and others will book shared workspaces. Perhaps more companies will enter into partnerships to share office real estate (given the number of team members that work remotely) and the office will turn into mostly a collaboration space. As we expand upon our technology offerings, we are holding this idea in mind. I see the transformation of how we work as a society coming, and I am looking forward to it — and I believe we will all be better off.
https://medium.com/@jaclynmorse/reaching-the-balance-of-working-remotely-in-the-office-fd4c98511c98
['Jaclyn Morse']
2020-12-10 15:15:45.636000+00:00
['Working Remote Challenges', 'Business', 'Leadership', 'Remote Work']
7. Data Flow between Components in Vue.js
Just like any other application, the component organization goes like a nested tree. For example, a simple website might include a header, a sidebar, and some other components inside a container.

Data Flow between Components

There can be two types of data flow between components:

1. Parent component to child component
2. Child component to parent component

We can send data from the parent component to the child component using props, and we can send data from the child to the parent by emitting an event and listening for it on the other end (the parent component).

Parent Component to Child Component

As I mentioned, we can send data from the parent component to the child component using props. Props are the way components can accept data from the components that include them (parent components).

// hello.vue Component
<template>
  <h1 class="title">Vue.js Component | {{msg}}</h1>
</template>

<script>
export default {
  name: 'HelloComponent',
  props: ['msg'],
  data: () => ({}),
  methods: {}
}
</script>

<style>
.title {
  color: red;
}
</style>

Accept multiple props

You can have multiple props by appending them to the array:

props: ['msg', 'fullName', 'age'],

Set the prop type

You can specify the type of a prop by using an object instead of an array, using the name of the property as the key of each entry and the type as the value:

props: {
  firstName: String,
  lastName: String
},

The valid types you can use are: String, Number, Boolean, Array, Object, Date, Function, and Symbol. When a type mismatches, Vue alerts you (in development mode) with a console warning.

Prop types can be more articulated. You can allow multiple different value types:

props: {
  firstName: [String, Number],
  lastName: String
},

Set a prop to be mandatory

You can require a prop to be mandatory:

props: {
  firstName: {
    type: String,
    required: true
  }
},

Set the default value of a prop

You can specify a default value (a default only makes sense for a prop that is not required, since a required prop must always be passed in):

props: {
  firstName: {
    type: String,
    default: 'Unknown person'
  }
},

For an object prop, Vue requires the default to be returned from a factory function:

props: {
  name: {
    type: Object,
    default: () => ({
      firstName: 'Unknown',
      lastName: ''
    })
  }
},

You can even build a custom validator, which is cool for complex data:

props: {
  name: {
    validator: name => {
      return name === 'TechFerment' // only allow "TechFerment"
    }
  }
},

Passing props to the component

You pass a prop to a component using the syntax <ComponentName color="white" /> if what you pass is a static value. If it's a data property, you use:

<template>
  <ComponentName :color="color" />
</template>

<script>
// ... import component
export default {
  // ... register component
  data: () => ({
    color: 'white'
  }),
  // ...
}
</script>

You can use a ternary operator inside the prop value to check a condition and pass a value that depends on it:

<template>
  <ComponentName :color="color == 'white' ? true : false" />
</template>

<script>
// ... import component
export default {
  // ... register component
  data: () => ({
    color: 'white'
  }),
  // ...
}
</script>

Child Component to Parent Component

We can send data by emitting an event from the child component and listening for it on the other end (the parent component). Here is the syntax the child component uses to emit events:

// Without parameters (data)
this.$emit('eventName')

// With parameters (data)
this.$emit('eventName', param1, param2)

We need to be careful when naming the event, because we will later listen for it using that same name.
In order to listen to this event, we can listen to it just as we listen to a click event in Vue.js. For example:

<myComponent @eventName="doSomething"></myComponent>

Example

// Child Component
<script>
export default {
  name: 'CarChildComponent',
  methods: {
    handleClick() {
      this.$emit('clickedSomething')
    }
  }
}
</script>

The parent can intercept this using the v-on directive when including the component in its template:

<template>
  <div>
    <Car v-on:clickedSomething="handleClickInParent" />
    <!-- or -->
    <Car @clickedSomething="handleClickInParent" />
  </div>
</template>

<script>
export default {
  name: 'App',
  methods: {
    handleClickInParent: function() {
      //...
    }
  }
}
</script>

You can pass parameters, of course:

// Child Component
<script>
export default {
  name: 'ChildComponent',
  methods: {
    handleClick() {
      this.$emit('clickedSomething', param1, param2)
    }
  }
}
</script>

and retrieve them from the parent:

<template>
  <div>
    <Car v-on:clickedSomething="handleClickInParent" />
    <!-- or -->
    <Car @clickedSomething="handleClickInParent" />
  </div>
</template>

<script>
export default {
  name: 'App',
  methods: {
    handleClickInParent(param1, param2) {
      //...
    }
  }
}
</script>

Keep watching and follow us for more tech articles, and reach out to me with any doubts or suggestions — the next blog in the series will be published soon. Thanks, and happy coding!
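To tie both directions together, here is a minimal end-to-end sketch of a parent and child wired up with a prop going down and an event coming back up (my own illustrative example, assuming Vue 2 single-file components; the names ChildCard, title, and cardClicked are hypothetical, not from this article):

// ChildCard.vue — accepts a prop and emits an event back up
<template>
  <div>
    <h2>{{ title }}</h2>
    <button @click="notifyParent">Notify parent</button>
  </div>
</template>

<script>
export default {
  name: 'ChildCard',
  props: {
    title: { type: String, default: 'Untitled' }
  },
  methods: {
    notifyParent() {
      // Send data up: the event name plus a payload.
      this.$emit('cardClicked', this.title)
    }
  }
}
</script>

// App.vue — passes the prop down and listens for the event
<template>
  <ChildCard :title="pageTitle" @cardClicked="onCardClicked" />
</template>

<script>
import ChildCard from './ChildCard.vue'

export default {
  name: 'App',
  components: { ChildCard },
  data: () => ({ pageTitle: 'Data Flow Demo' }),
  methods: {
    onCardClicked(title) {
      // Receives the payload emitted by the child.
      console.log('Child reported a click on:', title)
    }
  }
}
</script>

The nice property of this shape is that the child knows nothing about where its data comes from or what the parent does with the event — which is exactly the one-way data flow Vue encourages.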
https://medium.com/techferment/7-data-flow-between-components-in-vue-js-3e2989aa27cd
['Vrijraj Singh']
2021-07-13 12:00:17.522000+00:00
['Vrijraj', 'Vuejs', 'Vue', 'Vue Component', 'Techferment']
The Great Vaccine Race
Big bets on long shots

Expenditures by the U.S. government are closing in on $10 billion to bring a viable vaccine to market. Some of the candidates who received funds — Johnson & Johnson, Pfizer, Moderna, and AstraZeneca — are industry heavyweights, while one, Novavax, awarded $1.6 billion in grants, has never in its thirty-three-year history brought a vaccine to market. Some level of optimism, as well as reservation, exists with each biotech firm and their respective clinical trials. There is a high degree of coordination between American, European, and Australian trials, many of which are being conducted across borders. While not verified by global health agencies, Russia has registered a vaccine, with President Putin indicating one of his daughters was a trial subject and has been vaccinated. As of this writing, the United Arab Emirates has approved a vaccine from Chinese state-owned manufacturer Sinopharm, to be used on front-line healthcare workers before Phase III is completed. The UAE has 31,000 volunteers participating in the clinical trials, and officials have reported no adverse reactions thus far.

When a vaccine against SARS-CoV-2 is ready for use stateside, some critical questions remain: How many people will volunteer for an injection? Will it be mandated for children prior to attending school, or for travelers boarding a plane? And what are the ramifications, if any, for those choosing not to vaccinate?

A Harris Poll survey of 2,000 U.S. adults, conducted in the summer of 2019 — before the current pandemic — is in no way encouraging. Of those polled, 45 percent noted at least one source that caused doubts about the safety of vaccination. The top three doubt-causing sources were online articles (16 percent), past secrets/wrongdoing by the pharmaceutical industry (16 percent), and information from medical experts (12 percent). The survey also asked Americans to choose a statement that best represented their feelings about vaccine safety and efficacy. While the vast majority (82 percent) chose in favor of vaccines, 8 percent selected responses expressing serious doubt. An additional 9 percent said they were unsure.

More recently (August 7th, 2020), Gallup found one in three Americans would not opt for a coronavirus vaccine — even if the vaccine were both free and FDA approved. Such resistance is not unprecedented. When Gallup in 1954 asked U.S. adults who had heard or read about the then-new polio vaccine, "Would you like to take this new polio vaccine (to keep people from getting polio) yourself?" just 60% said they would, while 31% said they would not. So far, willingness to adopt a new vaccine looks similar today.

It's one thing to answer a poll question, but another to make a real-life, health-balancing decision for yourself and your family. Here in Michigan, our Governor Gretchen Whitmer received a flu vaccine shortly after an address on the importance of such measures. Flu vaccines, though, have a decades-long record of safety. Will President Trump volunteer for a coronavirus vaccine? Will he encourage his extended family and supporters to do the same? That and other questions remain open and unresolved for now. Should they choose to, however, China, and possibly Russia, have one available.
https://medium.com/illumination/the-great-vaccine-race-66c121305376
['Bashar Salame']
2020-09-16 20:19:32.722000+00:00
['Covid 19', 'Vaccines', 'Society', 'Health']
Integration Hubs: The Solution to Hotel Companies on Legacy Technology
How an Integration Hub Can Help Your Hotel

Centralized Database

A centralized database prevents wasting time inputting duplicate information. With all data in one place, all staff and departments have access to the same information, which in turn results in guests being provided with a seamless customer journey and an improved guest experience overall, fewer complaints, and increased revenue. THYNK's Ecosystem Connector seamlessly integrates with the most critical and most commonly used PMS and POS hospitality solutions to centralize data, making it easier to analyze data and make better decisions, such as forecasting budgets and trends.

Streamlined Processes

Sales and operations processes become easier and faster with an integration hub. There is no wasted time for staff, improving productivity and motivation.

Customer-Centric System Framework

An integration hub gives hotels more control over their customers' journeys. They can up-sell, cross-sell, and take advantage of every opportunity to market to the right people at the right time. For instance, when a guest checks in, the receptionist can see that they booked the spa during their last stay, ask if they would like to do so again, and let them know of any spa offers. This customer-centric engagement will increase customer loyalty, lower guest acquisition costs, and achieve a higher guest lifetime value.

Customization

THYNK's Ecosystem Connector allows hotels to create personalized communication, providing guests the personal service they expect. For instance, an email can be triggered on a guest's birthday to send them a celebratory discount. With THYNK, hotels can ensure they are tailoring their services to meet the desired experience of their guests.

Reporting

Centralized data makes reporting quick and easy. Data is available in real time, so you can see what is happening at any moment. Having accurate and easily accessible data helps you make decisions quickly, to keep issues from escalating or to take advantage of trends. THYNK leverages Salesforce to create simple, visual drag-and-drop reports and dashboards for sales, PMS, and POS data.
https://medium.com/@thynk-cloud/integration-hubs-the-solution-to-hotel-companies-on-legacy-technology-a1bd4b6ab157
[]
2021-12-27 08:03:00.329000+00:00
['Hotel', 'Hospitality', 'Integration', 'Travelling', 'Hospitality Industry']
Choosing a Bench Power Supply
Choosing a bench power supply is a vital part of building your test setup. You need a reliable unit that delivers its set output voltage accurately, and it should have at least two output channels — preferably three. A good bench power supply will also have a fast, well-specified response time: the time it takes the output to ramp up and down and settle. This matters because slow or poorly damped settling affects the accuracy of your output voltage.

A bench power supply will provide clean, constant power. This saves your circuit from damage due to sudden fluctuations in the AC line's frequency and voltage: the output voltage and current remain consistent. If your circuit's load changes mid-project, this regulation keeps the change from causing damage. A bench power supply will also show you whether your circuit is consuming the expected amount of current at a given voltage. With the right bench PSU, you can perform more tests with more confidence.

Features of a Bench Power Supply

A bench power supply has several features to help you conduct your tests with more precision. It has constant-current and constant-voltage control options, so you can customize the output to fit your needs; being able to set both the voltage and the current makes it much easier to control and monitor what is delivered to your circuit. It can come with multiple or single outputs, and it has a fan to dissipate excess heat.

A bench power supply should have a current-limiting capability to cap the output current. This is essential because a fault or a change in your circuit can otherwise draw more current than the circuit can safely handle. An adjustable current limit lets you choose the right value for each experiment.

A good bench power supply should also have good load regulation — the unit's ability to hold its output voltage steady as the load current changes. (Its cousin, line regulation, describes stability when the input voltage and frequency fluctuate.) It is also important to check that the bench power supply is protected against overheating and overcurrent; a built-in fan helps keep the unit stable while it is operating. These protections are essential for test purposes, since the supply is a critical component in your setup.

LT3081 Bench Power Supply

The LT3081 bench power supply has a current-limiting feature that keeps it from overheating. This feature helps you determine what current limit is needed to ensure the safety of your circuit. Likewise, an LT3081 bench power supply should have current-limiting capabilities that can be adjusted to suit your needs. The LT3081 has a constant-current feature that enforces a sharp cliff at 3.1A, which is essential for safety purposes. As noted above, load regulation and an adjustable current limit are the features that matter most to test engineers: they keep the output safe for the device under test as conditions change, and they prevent devices from overheating. A high-quality bench power supply should make testing easy.

Types of Bench Power Supplies

Various types of bench power supplies have different power envelopes. A hyperbolic envelope is more stable and provides a smooth transition from one range to another.
The current envelope of a bench power supply should match the device under test. The maximum output power might only be required in certain situations, such as when a device draws peak power at a particular point in its operating range. If the device is sensitive, good load regulation ensures the output stays within spec as that demand changes.

DC Bench Power Supply

A portable DC bench power supply should have a current-limit control. This allows you to adjust the output voltage and current and avoid overheating your equipment. A bench power supply should also have a switch or setting for voltage limiting, set to the voltage your circuit actually needs. Depending on the manufacturer, a programmable unit can be used to automate the test, and the unit's current-limiting function determines how the output voltage behaves once the limit is reached.
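As a concrete illustration of what a load regulation spec means in practice, here is a tiny sketch (JavaScript; the 0.01% rating and 5 V setpoint are hypothetical values, not taken from this article):

// How far can the output drift between no load and full load?
// Assumption: a supply specified at 0.01% load regulation, set to 5 V.
const vSet = 5.0;               // output setpoint, in volts
const loadRegulation = 0.0001;  // 0.01% expressed as a fraction

const deltaV = vSet * loadRegulation;
console.log(`Worst-case shift: ${(deltaV * 1000).toFixed(2)} mV`); // 0.50 mV

In other words, a tightly regulated supply set to 5 V should move by well under a millivolt as the load goes from open circuit to its full rated current.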
https://medium.com/@rebeccaroseuk123/choosing-a-bench-power-supply-8abbee6b776d
[]
2021-12-17 11:40:37.739000+00:00
['Hardware', 'Electronics']
It’s Time to Talk Production Design for Your Zoom Calls
I’ve decided to start a new career as a Production Designer for Zoom calls. Seriously, people, give me a call. I can share tips I’ve learned from over 20 years in low-budget (no-budget) independent filmmaking. On these no-budget films, I directed, produced, and edited — and I also did everything else, including production design. This entailed using my own living space for several dozen different sets and spending an excessive amount of time (and my own money) running to flea markets and thrift shops in search of cheap set pieces and props.

Over the years, I’ve learned to be very creative (i.e., perform miracles) when it comes to transforming bland, small spaces into beautiful (or at least interesting) sets. I’ve turned my kitchen into a cafe and my friend’s basement into a full-scale bar. I’ve turned a living room into a lawyer’s office, and a tiny space next to a furnace into a bedroom. In independent filmmaking, all you need is one good wall to do your magic. The rest of the space is generally filled with your crew and their camera/sound/lighting gear anyway. Trust me, turning a tiny corner of your home into your Zoom office is a piece of cake.

Location, location, location

Despite having two separate office areas in my home, my bedroom has become my designated office and Zoom conferencing area. This is mainly because the actual office space in my home doesn’t have doors. I like to control my Zoom set, and I don’t want other people or my two dogs busting in during the middle of calls, classes, or conferences. My quarantine roomies (my mom and my son) happen to be occupying my actual office areas at the moment. When I moved into my townhouse in December, I was primarily living alone, and I thought having an open loft for my office would be cool. It was cool for about two months, until I started teaching virtual classes and needed quiet, private space — and doors that locked.

Set design

A common mistake in most Zoom calls? Too many people set themselves up in high-traffic areas where other callers are distracted by the messy, busy, open space behind them. Often, I see family members moving in and out of the background, like extras in a low-budget horror film. Sometimes, I want to scream at my fellow Zoomies and warn them that a man is lurking behind them. Do you want me to call 911? It’s too easy to become distracted by other people’s stuff, especially when you are looking at a dozen people and their homes at the same time. That’s a lot of eye-clutter for easily distracted minds. There are simple ways to clear the clutter and give off a more professional — and calm — vibe.

My Zoom space. Photo by Lizzie Finn/Art by Daev Finn

This photo shows my current Zoom setup. I’ve carefully selected the items behind me for their simplicity. Instead of just a bland grey wall, these items add a pop of color and texture without overpowering the frame. It helps to have a brother who is an artist and who created large art pieces I can move around my home and use as colorful backdrops. I’ve wedged myself into a corner, so none of my ‘extras’ can wander in behind my back. In this way, I maintain total control of my set. Now, I know it doesn’t look like much — my little Zoom set. And maybe you’re not that impressed. Well, let me tell you what you do not see in the above photo. Or rather, let me zoom out and show you.

Work in progress. Photo by Lizzie Finn

This photo shows what my bedroom looks like at the moment. I took these two photos two seconds apart.
My bedroom is a disaster zone because I am spending 95% of my time living and working in this one space. And also, for some reason I decided NOW would be a good time to paint this room. After weeks of looking at these four walls, I could no longer stand their bright gold shade and knew I wanted to paint it a soothing grey. The only problem? I’m on my third attempt at getting the right color. The first shade of grey was too light (i.e., almost white). The second shade of grey was too blue. And the third shade of grey? I think it’s just right. I won’t know for sure until I paint more than just the wall of my Zoom set. At the moment, I’m so busy taking classes, teaching classes, writing, developing workshops, and attending virtual seminars and social events, that I only have energy at night to paint one wall at a time. Before this pandemic, I couldn’t work or create if there was chaos or clutter. Even when I worked from home when my kids were toddlers, I still needed a clean and tidy house to get things done. Those days are over. We’re in the apocalypse now — and I can work anywhere, anytime, anyhow, and anywho. In this quarantine, I’ve learned to let go of stringent rules or rituals. Instead of demanding perfection, I’m able to make do with what I have — and make it work. I’ve also learned to make it fun. Pets like to make guest appearances on Zoom. Lock your set down! So, please don’t get your panties in a twist about any of this advice. I know we are all going through tough times and have much bigger concerns at the moment. I don’t want anybody to get stressed out about my Zoom production design guidance. Always, do what works best for you, and take time to practice self-care first. Writing, creating, painting, designing — these are ways I relax, renew, and reinvigorate myself. However, if, for some reason, you want to embrace the chaos and also present a calm and professional image to the world — give me a call. I’m here to meet all your Zoom production design needs.
https://medium.com/narrative/its-time-to-talk-production-design-for-your-zoom-calls-e148154b17ec
['Lizzie Finn']
2020-04-30 18:11:42.603000+00:00
['Remote Working', 'Arts And Culture', 'Zoom', 'Life Lessons', 'Covid 19']
Looking ahead to 2021
2020 in Review “At the end of a year as uniquely awful as this one, there is little I can say that hasn’t been said already. With 2020 shortly about to be consigned to the past, me and the team are looking forward to next year, where we will begin an exciting expansion of the agency. 2021 will see us grow our strategy offering, extend our design team and see the launch of numerous technological products throughout the year. We’ll keep you posted throughout. I would like to extend a massive thank you to both the Conjure team and our brilliant clients. The former worked tirelessly throughout the year to deliver and the latter reacted with pace and understanding as we all adjusted to the new conditions. Finally thanks to all friends of Conjure, stay safe, and try and make the best of the festive season.” Sam Clark, CEO As part of our growth plans for 2021 we are recruiting for a number of new roles across the agency. If any of these roles sound like you, we’d love to hear from you. Technical Director We’re seeking an experienced, motivated and resourceful Technical Director with strong experience of working in a fast-paced design and technology agency environment. The successful candidate will join our Senior Leadership Team alongside our Design Director and Client Services Director, inheriting many of the current duties of the CTO. As well as taking over the technical management of existing projects, you will provide input into new business proposals and tenders from a technical perspective, having had experience of scoping and pricing technically innovative projects with budgets between £250k and £1m+ in the past. Android Developer for Hardware/AOSP/Automotive and Digital Products We’re looking for an ambitious and experienced Android Developer to join our development team. This role will work as part of our talented cross-functional development team and will report into our Technical Director. This is a great role for the right candidate and will particularly suit someone who enjoys the variety of working on multiple projects across different client sectors and disciplines. We are seeing significant growth in the Android Open Source Project (AOSP) field and are working with clients globally to support Android Automotive OS projects both at the framework and application levels. Technical Product Owner We’re also seeking a smart and savvy Technical Product Owner to join our growing team. If you can blend technical expertise with the ability to build strong and sustainable relationships and an understanding of product ownership, we want to hear from you. You’ll work closely with our Technical Director, Product Lead/Business Analyst and client teams to shape the vision, product strategy and specification ahead of development and delivery. Senior React Developer Finally, we’re looking for a skilled React.js Developer to join our engineering and development team. In this role, you will be responsible for developing and implementing user interface components and business logic using React. You will coordinate with the rest of the team working on different layers of the infrastructure. Therefore, a commitment to collaborative problem solving, sophisticated design, and quality product is important. This role will work as part of our talented cross-functional development team and will report into our Technical Director.
https://insight.conjure.co.uk/looking-ahead-to-2021-f3807d2d0672
['Ameera Macarthur']
2020-12-23 16:01:47.960000+00:00
['Review', 'Hiring', '2021 Expansion', '2020']
Our top 5 web3 and NFT predictions for 2022
The end of the year is here, and therefore it’s the perfect moment to look forward to what’s coming in cryptoland! Without further ado, let’s give you our top 5 predictions for web3 and NFT:

1. Traditional companies will try to own web3

While web3 is all about the decentralized internet and “you” owning web3, a lot of traditional companies will move their attention to web3 and how they can own it. The prime example is obviously Facebook, a company that even changed its name to Meta just to pretend it can own the metaverse (or at least to sue every other company trying to use Meta in their name).

2. Brands, brands, brands

Since our target audience is brands, this is definitely a prediction we welcome with open arms. We predict that brands will become further active in the metaverse and that they will realize that NFTs are a perfect means of doing brand marketing. The Say solution answers exactly this need for more utility and the creation of brand loyalty.

3. NFT and the metaverse will become the Instagram of brands

Consumers always look for the next big thing, and just as they moved from Instagram to TikTok, we see further adoption of metaverses and buying of NFTs. Brands will put less effort into Instagram and will be present in these metaverses and NFTs. We believe the world will also be introduced to new, currently unknown brands that simply start as “NFT native.” And just as on Instagram, you can place your bet that the Kardashians will soon have their own personal NFT brand.

4. Owning an NFT will become a means of digital identity

Owners of certain NFTs will come together based on the type of NFTs they own, and will create a group identity based on that. Call it a new form of social network: where on Facebook today you have specialized pages for practically every subject you can imagine, these pages will turn into new metaverses.

5. DeFi becomes more mature but also more regulated

Decentralized identity and on-chain KYC attestation services will play key roles in connecting users’ real identities with DeFi wallet endpoints.
https://medium.com/@solsnakenft/our-top-5-web3-and-nft-predictions-for-2022-1830d293ed0e
[]
2022-01-07 07:30:13.072000+00:00
['Solana Network', 'Nft Marketplace', 'Nft Collectibles', 'Nftart', 'Nft']
There is no Internet Savior: To fix the Internet’s problems, we’ll need to work together
@talks: Solutions to the Social Dilemma — A virtual discussion series How many extremely knowledgeable and relentlessly creative Internet experts does it take to unscrew a lightbulb? If that lightbulb is the Internet’s problems — quite a lot. On December 3rd, we invited three notable figures in the data privacy landscape to join us for a fascinating, even inspiring, discussion about the future of the Internet. Among these were: Enoch Liang of Andrew Yang’s Data Dividend Project (DDP). A California-based lawyer and entrepreneur, Enoch started the Data Dividend Project with Andrew Yang to help consumers collectively exercise their data rights and bargain with tech companies. Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society, Stanford Law School. A recognized expert in information privacy, Dr. King examines the public’s understanding and expectations of online privacy and the policy implications of emerging technologies. Our very own Kevin Nickels, CPO of The @ Company. For Kevin, The @ Company represents the embodiment of a passion that he has been pondering for more than a decade — technology that empowers individuals to control their digital selves. He views this as a vitally important solution needed to address some of society’s most vexing problems. In this hour-long conversation, we touched upon recently passed policy (California Prop 24), data privacy as a collective action problem, and potential technology solutions. We’ve included some highlights from our conversation below. There are no 5 easy steps to protect your online privacy According to Dr. King, articles advertising 5 easy steps to protect your online privacy are missing the point. They perpetuate the false belief that data privacy is a personal issue: If you make a few small tweaks to your lifestyle, then you can reclaim control over your data. Dr. King maintains that this couldn’t be farther from the truth. In our current Internet, individuals can’t control how they interact online. Unless they want to become “electronic hermits,” the average person must “take it or leave it,” having no choice but to relinquish their right to digital privacy. Nickels adds, “Security at the enterprise-level today is an illusion of security.” Behind every data security system, there is a human administrator that holds the keys to the database. Who’s to say that this person might not one day decide to exploit everyone’s data for nefarious purposes? Everyone has to work together “This problem is broader than all of us individually can tackle,” says Dr. King. “We need to think of other solutions that allow us to collectively wage power and gain control over data.” Enoch agrees. In his eyes, the solution to the Social Dilemma is multifaceted, much like the four legs of a chair. These four legs include: 1. Consumer awareness “Step one [to rebuilding the Internet] is realizing that you can’t control something that you don’t know about,” says Dr. King. Many people are unaware of the ways their data is mishandled or at risk. Research shows that most people are discontented when they discover the repercussions of practices like data-driven advertising or social media algorithms. Documentaries like Netflix’s The Social Dilemma are part of the movement to educate consumers about the impacts of technology. (The documentary has since then garnered criticism about its focus on the “prodigal tech bro,” but the point made on how people’s data is being used to manipulate them is well-taken.) 
The Data Dividend Project (DDP), spearheaded by Andrew Yang, also falls within this category. “Its goal,” says Liang, “is to educate consumers as to what’s happening to their data, how much money technology companies are making off of it, what rights they do have. Those individual rights are much more effective if they’re bundled into collective rights and collective action is taken.” 2. Technological solutions: Building a network of trust In an Internet that conditions people to behave in certain ways without them even realizing it, the question Nickels asked himself was, “‘How do we encourage online behavior that engenders a network of trust, where interactions are based on trust?’” The answer that he and fellow co-founders Colin Constable and Barbara Tallent landed on was the @protocol. By encouraging everyone to ask for permission before using someone else’s data, the @protocol makes it possible for people to build trust with others online. (Learn more about how this network of trust is built.) 3. Regulation and policy “Tech gets free reign for 10 or 15 years until some scandals happen and people in Congress start waking up,” says Liang, “but it’s too little, too late. Pandora’s box has already been open for 10 or 15 years.” 4. Enforcement “Law and regulations are no good on the books if there isn’t any enforcement,” says Liang. One way he sees this being upheld is through DDP, which encourages people to lay claim to their data. “If you had a Yahoo account, the settlement there was $125 million plus. If you were a resident of Illinois and had a Facebook account, the settlement there was $650 million,” Liang says. “How many of us actually laid claim to that? The average claims rate for a class actions settlement is somewhere in the range of 3–5%. DDP has been trying to help consumers navigate that claims process and help you realize that there is money out there, and you should be laying claims to it.” The talk gave us a lot to think about, especially regarding the future of the Internet. Sure, the Internet isn’t perfect, but as Internet Optimists we’re daring to believe in a better Internet. If you weren’t able to attend, no worries! Watch the full Zoom session here. — At The @ Company we are technologists, creators, and builders with one thing in common: We love the Internet. You could go so far as to call us Internet optimists. Though we acknowledge that the Internet has deep flaws, we believe that we can extract all its goodness without sacrificing our privacy, time, and control over our digital identities. We’ve committed ourselves to the creation of a more human Internet where privacy is a fundamental right and everyone owns their own data. Let’s say goodbye to the fear and paranoia caused by data breaches and unsolicited online surveillance. With the power of the @protocol, we’re resolving these long-standing issues with a spirit of exploration and fun. Learn more about us here.
https://medium.com/@atsigncompany/there-is-no-internet-savior-to-fix-the-internets-problems-we-ll-need-to-work-together-196c28023ade
['The']
2020-12-24 01:22:05.409000+00:00
['Andrew Yang', 'Internet of Things', 'Data Dividend', 'Technology', 'Data Privacy']
Nurturing High Performance Teams
Take the example of an F1™ pit crew:

Image Credit: Gustavo Campos on Unsplash

To the best of my knowledge, the crew is composed of six key roles:

Wheel Gunners
Tyre Carriers
Stabilisers
Front Wing Men
Front & Rear Jack Men, and
Fire Extinguishers

Each of these individuals is highly trained in their craft, with expertise likely accumulated over years of intense training, field and track operations, assessments, and constant reevaluation. They are confident, ambitious, independent thinkers who thrive on enabling success. They’re all also intimately familiar with the breadth and scope of their comrades’ roles, relying on them to execute efficiently so as not to compromise their own efficiency. As such, they are trusted by their leadership to execute their craft to its fullest potential, working together towards the same target objective: bringing the best of their combined selves to the car when it rolls into the pit. They have strict directives to follow, but they can only follow them if they’re given the trust, autonomy, and resources they need to comply with those directives.
https://medium.com/@integracore/nurturing-high-performance-teams-programmers-f1-pit-crews-rugby-legends-c06e9722fe08
['Ali Saif']
2020-12-25 20:19:54.602000+00:00
['Team Performance', 'Python Programming', 'Management And Leadership', 'Programming', 'Team Productivity']
5 Warning Signs Of Dehydration / Beware Of These Signs Of Dehydration
Water acts as a lubricant that helps the bones move easily, and it also keeps our eyes comfortable. It helps in digestion, helps to detoxify the body, and keeps our skin healthy. When you do not have enough water, your body becomes less active. Therefore, it is very important to keep your body hydrated. To keep your body hydrated, drink plenty of fluids and eat water-rich foods.

You need to keep in mind that under certain conditions, the body loses more water than normal. Excessive sweating, diarrhea, vomiting, diabetes, and frequent urination can all cause abnormal dehydration. This creates an imbalance of electrolytes in the body, which dehydrates it and impairs its proper functioning. Some people do not realize that their body’s water level is less than the prescribed amount. A good understanding of how our body notifies us when we are dehydrated helps us understand when we need to give it more water. This article gives you a brief overview of the top 5 warning signs that your body may be dehydrated.

1) Rapid, difficult, or irregular heartbeat

The first thing you need to know is that a decrease in the amount of water in your body causes an imbalance of minerals (salt/sugar) in the body. This will affect the proper functioning of the body. These symptoms are caused by a decrease in the amount of potassium and sugar in the blood, and can show up as premature contractions of the upper and lower chambers of the heart. Also, you should know that magnesium deficiency caused by excessive sweating is another factor in an abnormal heart rate.

2) Painful joints and muscles

Did you know that 65%–80% of the cartilage in our bones is made up of water? That shows us that when we are dehydrated, the cartilage between the bones deteriorates. When cartilage is weak, the bones in the joints grind against each other and wear out over time. This causes pain and swelling in the joints. In addition, magnesium deficiency due to dehydration can lead to muscle spasms and muscle pain in the legs. Eat more dark leafy greens to avoid these stressful conditions. These foods increase the amount of water in your body as well as your mineral intake.

3) Headache

The human brain, located inside the skull, relies heavily on a special fluid system to prevent it from colliding with the skull. This protective fluid system protects the brain from the damage that can occur when we engage in activities such as walking and running. When this fluid system is depleted, the brain and skull can collide with each other. This can be presented as one of the main causes of headaches. Furthermore, dehydration can reduce the blood supply to the brain. A decreased supply of oxygen and glucose to the brain can lead to severe headaches.

4) Fatigue and lethargy

Decreased fluid intake lowers blood volume and blood pressure. As a result, the heart has to work harder to supply the brain, skin, and muscles with the amount of blood, oxygen, and nutrients they need. This causes drowsiness and fatigue. A recent study found that women who did not drink enough water after exercise scored poorly on a questionnaire used to assess mood. The study also found that dehydrated people are more likely to be tired and confused. The effect was the same for men.

5) Dry skin and lips

First, you need to know that about 30% of our skin is made up of water.
The oil produced by the skin prevents water from evaporating and leaving the body. However, conditions such as frequent bathing, dry wind, and skin infections cause more water to evaporate through the skin. These conditions dry out your skin and make it itchy. Internally, dehydration draws fluid from the skin cells first, in order to keep fluid available for the internal organs of the body. So drink more water to get rid of these conditions; it will help maintain the proper density and thickness of the skin.
https://medium.com/@frenurofficial/5-warning-signs-of-dehydration-beware-of-these-signs-of-dehydration-6524f752f2d2
[]
2020-12-21 17:30:17.536000+00:00
['Water Damage Restoration', 'Healthcare', 'Healthy Lifestyle', 'Dehydration', 'Nutrition']
Reflection: From Sunrise to Sunset: A Day in the Life of a Getty Alumni Museum Professional
Over the past several months, I have been working with a group of museum professionals from across the globe on a truly wonderful program: From Sunrise to Sunset: A Day in the Life of an AAM-Getty International Alumni Museum Professional.

2018 Getty International Participants

Dean Phelus of AAM brought me in to facilitate and coordinate the effort. His charge was simple: to expand participants’ circle of connections across the AAM-Getty International program. This year marks the tenth anniversary of the program, which “is a professional growth opportunity that provides financial support to non-US-based museum professionals to attend the AAM Annual Meeting and MuseumExpo.” I have been working with the group since 2017, when Greg Stevens tapped me to help facilitate onsite in St. Louis.

2019 Getty International Participants

In the past four years, I have gotten to know a tremendous group of museum professionals worldwide. Their love and passion for museums and for museum professionals has inspired me. I am forever grateful to be a part of this wonderful program each year. Since we were unable to meet in person this year, we created a mini-conference via Zoom. Over the course of three days, several dozen international museum professionals met together virtually to discuss their lives and careers in the current era. They organized sessions around five topics: Rituals, Respite, Reflection, Renewal, and Reconnection.

Screenshot of the Reflection session.

Wednesday morning’s session focused on Reflection — things that have been powerful learning lessons personally and professionally. Milena Milosevic Micic of Serbia’s Завичајни музеј Књажевац (Knjaževac Homeland Museum) organized it. Below are thoughts that Germán Paley, an alumnus of the program, shared with those of us who organized the entire program. They are powerful reminders to me of our work and how interconnected we are. I am sharing them here with you, verbatim.

We started the session listening to a song called “Writing a Future History” and I wonder about history and its senses/meanings. In Western Culture, history is related to legacy and makes us look into the past, but is it possible to move forward to rewrite the future?

Germán Paley (photo: Catalina Bartolomé)

Opening to new possibilities and to end with a legacy that perpetuates what we don’t longer want? This allows us to think of the PRESENT, our present as a time for CHANGE, so… what are we capable of transforming, to write new futures? This question leads us to the dimension of UTOPIA, to our DESIRE and if we want to create new realities, we should resort to RADICAL IMAGINATION, we need to be more CREATIVE than ever. However, this pandemic present is full with DYSTOPIA, in a never-ending quarantine and constant lockdowns, what can we do in order to preserve our senses and not to lose meanings? We are facing unprecedent times, full of uncertainty, moving between enthusiasm and exhaustion, and many of us are still wondering if a cultural reset is still possible… Our lives have mutated, so how are we going to adapt, how are we dealing with conflict and trauma? Are we going to keep perpetuating what we used to do or we can move forward and create other possibilities? Is it just a matter of cancellation/postponing or we can rewrite what we do? What new missions/visions are we going to allow to create new ways?
We must redefine our practices through active dialogue, support, social bonds, and true engagement. This existential process (not only individual but also professional) forces us to deal with the VOID: empty temples and body-less museums… but what values are being embodied now and projected into the future? Are we going to resume what we know, or take up the uncertainty to imagine other possibilities? Is it just a matter of museum activation — Keep Museums Active — or can we become museum activists to make them meaningful places for our communities?

I urge you to consider these questions that your international colleagues have posed to us. They are important in Covid times and beyond.
https://medium.com/@lyndhurstgroup/from-sunrise-to-sunset-a-day-in-the-life-of-a-getty-alumni-museum-professional-512e830cdb49
['Bob Beatty']
2020-11-19 17:45:40.764000+00:00
['Museums', 'Internationalmuseumday', 'Museum Studies']
Lessons From an Ass-Kicking: ‘I Blacked Out Trying to Run Across Death Valley’
Lessons From an Ass-Kicking: 'I Blacked Out Trying to Run Across Death Valley'

Ultramarathoner Dean Karnazes shares three things he learned from getting his ass kicked by one of the hottest places on Earth

In our new series, What I Learned from Getting My Ass Kicked, people from all walks of life discuss setbacks that could've finished them, and how they dealt with those defeats. This edition, ultrarunner Dean Karnazes — who has run across the Sahara Desert and completed a marathon to the South Pole — tells us about his first attempt to complete an impossibly grueling 135-mile footrace across Death Valley in the dead of summer, and the three big lessons he learned from passing out halfway through.

Lesson #1: Never Think About Failure (Until You've Failed)

"In the heat of battle, I knew things were bad," Karnazes says. "I'd been running in the Badwater Ultramarathon for more than 18 hours straight and the temperatures had reached 127 degrees. I was extremely dehydrated and my electrolytes were totally out of whack, but I was determined not to stop. Bit by bit, I began losing perception. My field of vision narrowed, and all I remember seeing was the immediate 3 feet in front of me. At that point, I was beyond processing how defeat would feel — I was like a boxer in the ring who's concussed and staggering to remain upright and coherent. Then everything went dark and I passed out.

"It wasn't until afterward that I was able to rationally process the defeat and the implications it had for me. Only then could I analyze the situation. That's how defeat should be: You should be so ensnared in the battle that all you're focusing on is success. If your mind is wandering and thinking about failure, you've already failed."

Lesson #2: You Should Absolutely Hate Yourself for Losing

"People say beating yourself up over a defeat is pointless," says Karnazes. "I say bullshit. You should beat the hell out of yourself for failing! When I failed at the Badwater Ultramarathon, my entire world collapsed. I took it seriously — some might argue too seriously. The whole of my self-worth was pegged to finishing that motherfucker, and instead it crushed me and spat me out on the roadside in a useless heap. Some might have said I was lucky to be alive and would surely never attempt anything that crazy again, but all I wanted was revenge.

"I couldn't think of anything else for a year: Nothing mattered to me except getting to the finish line. I analyzed my failure, dissected specific shortcomings, and delved into the mindset that held me back. I changed my tactics as a result. I also trained obsessively, never compromising. It was an all-or-nothing assault. The next year I got my revenge: It was difficult — sometimes earning victory is a messy business — but I prevailed. Finishing was one of the pinnacle achievements of my career, precisely because of the effort it took to earn it."

Lesson #3: Defeat is the Only Way to Overcome Your Limitations

"People often hear about my successes, but the lessons learned from defeat far outstrip those gleaned from success. In fact, failure is a necessary process for reaching one's full potential. A good ass-kicking is a gift: It sets the bar and resets your attitude. Defeat makes you humble and hungry. Once you know your limitations, you can take measures to surpass them. It does, however, take courage to challenge yourself to surmount the seemingly insurmountable. Some are afraid to push to the breaking point and experience defeat. This is a safe stance, yet a tragic one.
The bold may not live forever, but the timid don’t live at all.”
https://medium.com/mel-magazine/i-blacked-out-trying-to-run-across-death-valley-9820da97989b
[]
2017-02-07 16:40:23.647000+00:00
['Running', 'Body And Mind', 'Inspiration', 'Failure']
Running High Mileage Comfortably
Building your aerobic base is crucial to running your best in distance races. However, building your aerobic base and increasing your mileage can also cause aches and injuries, leading to unfortunate setbacks. Keeping the following things in mind while running high mileage can help the quality of your running.

Nutrition

Nutrition is critical for a runner to stay healthy and to increase overall fitness. Without a sufficient diet, you are very susceptible to injury and a lack of energy. A recommended diet for a runner includes a balanced mix of protein, carbohydrates, and fats to ensure proper energy and performance. Another big key to staying on top of your running is to stay hydrated: hydration allows your body to cool properly and excrete unwanted waste. Vitamins are also crucial, as they help your body prevent injuries and support essential processes. Source: https://pubmed.ncbi.nlm.nih.gov/11417160/ Foods such as avocado, cocoa, mushrooms, and peppers are anti-inflammatory, meaning they are loaded with vitamins that help reduce inflammation, a leading cause of pain.

Cold Showers

Cold showers have anti-inflammatory effects, as they increase levels of the hormone norepinephrine. They have also been shown to help reduce delayed onset muscle soreness (DOMS). All of this contributes to enhanced recovery between runs, which means less pain and more enjoyable running.

Strength & Conditioning

Strengthening your muscles is key to reducing injuries and running efficiently. As a certified USATF running coach in Colorado put it, "It (strength training) prevents injuries by strengthening muscles and connective tissues; it helps you run faster by improving neuromuscular coordination and power, and it improves running economy by encouraging coordination and stride efficiency." Furthermore, a study involving 16 well-trained athletes concluded that those who strength trained for six weeks significantly improved their 5-kilometer time trial, by over 3%. Strength and conditioning improve your flexibility, balance, mobility, and strength. This helps your running economy and promotes stride efficiency, so your muscles expend less energy while running, letting you make better use of your VO2 max.

Sleep

Sufficient sleep is key to helping your body recover from the impact forces of running high mileage. Insufficient sleep on a daily basis raises levels of TNFα, an inflammatory cytokine produced by macrophages that causes swelling, inflammation, and therefore pain. Sleep is also when your body rebuilds and repairs the muscles worked during the day, so if you lack sleep, your body will not be fully restored to function the next day.

Knowing when to back off

Listening to your body is key to running well. Overtraining can lead to injuries and burnout, a real risk when running high mileage. To avoid this, learn to take it easy on days your body needs recovery. Instead of taking a day entirely off, you can cross-train, such as hopping on a bike or going for a swim. Building up to high mileage gradually is crucial to injury-free running. A general rule of thumb is never to increase your weekly mileage by more than 10%.
For example, if you are running 50 miles one week, you should not run any more than 55 miles the next week.
https://medium.com/runners-life/running-high-mileage-comfortably-5863e9b67595
['Dylan Zhao']
2020-07-02 17:04:51.379000+00:00
['Nutrition', 'Health', 'Running', 'Fitness', 'Lifestyle']
To Read Her Like A Book
She wore her heart on the outside like the cover of my favorite book She gave me the key to unlock it Asked that I, be careful. I’ve opened it many times some times, for a simple look. Carelessly I scribbled on the insides recklessly, signed my name in this book. Earmarked her pages highlighted many lines collected her secrets gaining insight as a thief staking my claim like a crook. She asked that I be kind, gentle and patient the final chapters, I skimmed everything she revealed to me tucked away in-between bent pages left in used condition and hidden within. Except this time it was different I missed my forever, this is my regret my should have been. Her secrets on the page were my salvation. A man damned, unprepared unready, but reading her book to find my way back to feeling something to loving again.
https://medium.com/intricate-intimacies/to-read-her-like-a-book-1d144e81620
['Lex Nickels']
2020-03-16 16:38:06.579000+00:00
['Life Lessons', 'Books', 'Love', 'Intimately Intricate', 'Poetry']
Dating Freak-Out: Getting Past Dating Anxiety
It's not uncommon for single people to feel a little dating anxiety. For single parents, this is perhaps even more true. Let's consider just a few of the reasons why you might have dating anxiety as a single parent:

1. You've already had one relationship break up and impact your kids. It's natural that you'd be worried that it might happen again.

2. You've been out of the dating world for a while. You don't know if you're up to speed on what it's like to date today after you've been in a relationship for the last 5, 10, 15 years.

3. You've heard all the horror stories about dating as a single parent: the people who won't date single parents; the ones who date single parents to prey on their kids; the ones who treat their kids differently than your kids… the list goes on. Who wouldn't be afraid with those thoughts?

4. You're not sure you can balance dating with being a parent. And while you know you have to make parenting the priority, the thought of missing out on your soulmate because you can't balance it all scares you.

If you have dating anxiety, don't beat yourself up about it. But don't let it take over your life either. Let's break down the reasons above and talk about why you should let the anxiety go, and how you can put yourself out there with less fear.

Reason #1: You've already ended one relationship that impacted your kids

Whether you were married to your kids' other parent or not, ending the relationship affected your kids. They now have two homes. They're confused about what happened and why it happened, and maybe even wondering if it was their fault. You want your kids to be happy and healthy, and you definitely don't want to see them go through the heartbreak of losing another person they care about if you find someone and then break up. Your feelings are valid, and they show just how much you love your kids. Putting your kids first is important, but you also can't use your kids as an excuse to avoid getting back out there.

What can you do about it? First, you need to remember that you are more than just a parent. You're an adult, a human being with feelings and needs that have nothing to do with being a parent. And you're allowed to find someone with whom to indulge those feelings and needs.

Second, you need to keep your kids-first philosophy, but tweak it a little bit. This is done by taking some simple precautions, such as not introducing the people you date to your kids until you feel the relationship is getting serious.

Finally, take things slow. It's understandable that after dedicating years to a relationship that didn't last, you might feel a little pressure to find a new one. But there are no deadlines for this. There is no point at which you are too old or have been single too long to start dating and find someone who's right for you. So take your time getting to know people, don't rush to move the relationship forward faster than you truly want, and really get to know someone before trying to mingle all the parts of your life together.

Reason #2: You've been out of the dating world for a while

You were in your last relationship for a long time. Whether it was five years, 10 years, or 25 years, dating can change rapidly. It's easy to feel like you're totally out of the loop and have no idea what to do now. Do you offer to split the check? Is online dating the right way to go? Dating as a single parent can often feel a bit like dating when you were a teenager.
Everything is new, and while it's exciting, it's also a little bit scary and uncertain. And when you try looking at dating blogs or books, you feel like you're getting a bunch of different suggestions about what you should do.

What can you do about it? Relax! Yes, it's very possible that dating has changed a little — or a lot — since the last time you were single. You might find that things are very different than you'd expect, and you might even make a few mistakes. But that's to be expected. Just like you might need a little time to get your bearings if you got on a bike again after not having ridden one in years, there's going to be a bit of a learning curve to dating again. So relax and accept it. Know that it will happen, and know that you'll survive it.

Also, consider being honest with the people you date. Let them know it's been a while since you were single and dating, and that you feel a little out of your element. Not only will it make them a bit more understanding of the mistakes you might make, it might even endear you to them: a sign of vulnerability that sparks a connection.

Reason #3: You've heard horror stories about dating as a single parent

Whether it's that no one wants to date a single parent, or stepparents who treat their bio children better than their stepchildren, or the rare individual who dates single parents to prey on their children, there are some terrifying stories about dating as a single parent. As a parent, protecting your children is your top priority. Protecting your own heart is next. So when you hear all those horror stories, it's tempting to throw up your hands and decide you're just going to give up on dating. It just doesn't seem worth it.

What can you do about it? Stop listening to the horror stories. Do they happen? Yes, sometimes they do. Are they likely to happen to you? Probably not. It's possible, but not likely enough to make it worth not dating at all. You should be alert and on your guard, of course. If someone has a lot of red flags, or you get a vibe that doesn't feel right, pull back and maybe stop seeing them entirely. But remember that most people are genuinely good people who are looking for the same thing you are. Even if what they want and what you want don't match well enough for you to keep seeing each other, you're far more likely to meet nice people that you just don't connect with than outright jerks, abusers, or other sick or scary people.

Reason #4: You're not sure you can balance dating with being a parent

Whether you co-parent with your ex or you're parenting completely solo, being a parent takes up a ton of your time. Add to that working to support your kids, taking care of the house, and finding time for yourself, and you might start to wonder exactly where dating fits into the schedule. Balancing dating and being a parent can be a delicate act. You can't neglect the kids, but you can't build a relationship if you don't spend time with the person you're interested in. And as long as you're in "never the twain shall meet" mode, I admit it can be difficult to make time for dating.

What can you do about it? Start by acknowledging that yes, it will be difficult. Some of the best things in life are difficult — that's what makes them worth it. So admit it will be hard and decide if it's worth it to you. Then be honest with the people you date. Let them know you have extremely limited availability, and what your parameters are around dating. How much notice do you need before a date?
How often can you go out? How long can you be out for a date? Are you only able to go out when the kids are with their other parent? Be clear about what you can give so that the person who decides to date you (remember, it is a mutual decision) understands what they're getting into and goes for it anyway.

Finally, don't feel pressured to date more than you feel comfortable with. If someone pressures you for more time than you're willing to give, they're not for you. If friends or family are pressuring you to go out more, ignore them. This is about your comfort level and no one else's. No one else can decide what's best for you.

There's nothing wrong with being single

You shouldn't let dating anxiety stop you from dating. But it's also important to remember that there's nothing wrong with being single and not dating. If you work through your anxiety and realize you don't want to date right now — or realize the anxiety is actually a result of not wanting to date — know that it's okay to skip dating for now. Life is about so much more than dating and finding a relationship. You can focus on your kids, your career, your friends and family, and all the other parts of your life. You can choose not to date for a month, or a year, or any amount of time you want. You can decide not to look for love and simply wait until it comes along, wrestles you to the ground, and demands you go out with it (although that does sound a little violent). Take a dating hiatus. Or date yourself! Dating yourself can be a great way to reconnect with yourself, learn more about who you are and what you want, and prepare for dating others again. And it can be a ton of fun, too!

Just remember that if you let your dating anxiety take over, not only will it stop you from dating in the present, it will make it that much harder to date in the future. So even if you decide against dating right now, it's still a good idea to work through your anxiety around it. Set yourself up to be ready for dating success.
https://medium.com/love-the-single-parent/dating-freak-out-getting-past-dating-anxiety-714da0c3647f
['Wendy Miller']
2020-03-02 16:49:12.521000+00:00
['Dating', 'Relationships', 'Dating Anxiety', 'Self', 'Parenting']
Talk about democracy as much as you like, but there’s none here
Talk about democracy as much as you like, but there's none here

No one on the Jewish political spectrum wants to be portrayed as an "Arab lover". Meretz is no exception. This is outrageous racism, and if you were an Arab you would understand that. Israel needs people who want a country that loves human beings.

Citizen K's blog in Haaretz, Hebrew edition. By Karen Haber

Invariably, in almost every discussion concerning working together with Arab politicians or with Arabs in general, an argument emerges setting out the parameters. It's depressing, but sadly it does describe the Israeli reality: "whatever you do, never allow yourself to be portrayed as an Arab lover." Yes, there's a taboo on comparing our situation to Germany in the 1930s, but I do remember my family talking about the fact that even in the early 1930s there were quite a few "friends" who did not want to be portrayed as "Jew lovers". This is not a comparison to the Holocaust; this is a comparison of racism with racism.

And then there is the corollary: the understanding that nothing can grow out of such basic premises. Any Israeli who cooperates with this terrible paradigm of not being portrayed as an "Arab lover" can hide behind descriptions such as realist, practical or pragmatist, but these are just euphemisms that emphasise the legitimacy of Arab exclusion and Israeli racism. Even the attempt to wrap this argument in patriotism or "Zionism" does not hide the bottom line — because patriotism is love of your country, not hatred for someone who is not exactly like you. Striving for a democratic state is not an anti-Zionist act, but discrimination against Arab citizens is an anti-democratic and a racist act, and if we continue to remain silent, we're all responsible for it.

In the last election, the political mainstream in Israel had the opportunity to differentiate itself from Benjamin Netanyahu's agitation and incitement and form a government with Arab partners. But the mainstream succumbed to the racist Zeitgeist, the one that does not include 20 per cent of the population. The Right does not cooperate with the Arabs, the political Centre does not cooperate with the Arabs, and the Left does not cooperate with the Arabs. This week we were informed that Meretz is not cooperating with the Arabs either. No one cooperates with the Arabs. But no, do not compare us, heaven forbid, to Germany in the 1930s.

My grandfather fled Germany — not because of the war, not because of the concentration camps, and not because of "Kristallnacht". Back in 1935, my grandfather fled because he was a Jew who had sex with a gentile woman, because it was forbidden, and because no one wanted to be portrayed as a "Jew lover". Therefore I compare, I sure do. I compare because, as it turns out, acquiescence leads to this discrimination becoming institutionalised. I compare because the belief that, given enough time, the modern perception that sees every person as equal will win in the end actually makes us come to terms with discrimination. I compare because a fifth of the population is discriminated against every day and we live with it. We live with the fact that a lawsuit gets thrown out under the auspices of the Nation-State Law (clause 7, which states that the development of Jewish settlement is a national goal and the state would work to encourage and promote its creation and establishment) under the racist pretext that Karmiel is "a Jewish city".
Such a scenario, had it happened in another country and discriminated against Jews, would have been immediately, and rightly, deemed antisemitic and would have received a barrage of condemnation. When nationalism takes precedence over every other value, it undermines those values. In the past, there was an attempt to paint Green Line Israel as a democracy. But there is no such thing as democracy for Jews only. It simply does not exist. Fact — civic equality in Israel does not exist, because democracy is not measured solely by the right of citizens to vote. Nor can you gauge it by comparing the country to autocratic states. Democracy is measured by the degree of civic equality. With the Nation-State Law there is no civil equality in Israel, neither legally nor in practice. Talk about democracy as much as you like, but there's none here. If you were an Arab you would understand it, and if you are a Jew who does not yet understand it, one can only hope that you understand it before it is too late. This is a democratic emergency, and it requires brave people who are willing to be portrayed as "Arab lovers" because they desire a country that "loves human beings."
https://medium.com/@thepalestineproject/talk-about-democracy-as-much-as-you-like-but-theres-none-here-bd73b4f389d8
['The Palestine Project']
2020-12-07 09:31:43.800000+00:00
['Democracy', 'Palestine', 'Israel']
Three Ways Global Public Health Can Work with the Private Sector
Global health – the health of the world's population regardless of geography, gender, and other demographic factors – is an issue area traditionally addressed with government and philanthropic dollars. As health experts Ken Kelley and Jenny Yip discussed in our recent webinar, the private sector is increasingly emerging to play a critical role in global health. Here are three ways that the private sector can help advance global health:

By filling significant funding gaps to meet remarkable demand – While few funders are filling the critical financing gap in the research and development of global health solutions, large-scale customers are in dire need of these solutions and are willing to spend billions of dollars to buy better products. The private sector can help supply the much-needed R&D capital and potentially generate positive returns while providing disease solutions that meet market demand.

By targeting diseases that are less well-known and less commercially rewarding – The "Big Three" diseases – AIDS, tuberculosis, and malaria – receive considerable attention and funding, as do diseases that are obvious commercial targets. The private sector can help address orphan, emerging, and other types of diseases that are out of the spotlight.

By creating new solutions that revolutionize traditional medicine – Machine learning and artificial intelligence are disrupting the longstanding methods for the testing and delivery of new medicines and medical technology. The private sector can invest in companies that are creating more effective, scalable, and less expensive health solutions.

To hear more from Ken and Jenny about the intersection of the private sector and global health, click here for the full webinar.
https://medium.com/capshift/three-ways-global-public-health-can-work-with-the-private-sector-4b4181602c37
[]
2019-11-21 16:53:46.133000+00:00
['Impact Investing', 'Disease', 'Investing', 'Global Health']
Augmented Reality Headsets Market Size Worth $78.47 Billion By 2025
The global augmented reality headsets market size is expected to reach USD 78.47 billion by 2025, registering a CAGR of 73.8% from 2019 to 2025, according to a new study conducted by Grand View Research, Inc. The rising trend toward immersive experiences is driving the adoption of AR headsets among enterprises and consumers. Moreover, the convergence of IoT, wearable devices, and augmented reality technology is expected to spur market growth. Furthermore, the growing use of headsets for operating, navigating, and maintaining inventories across industries is driving demand.

The popularity of AR headsets has risen steadily with the introduction of smart glasses and wearable lenses, particularly in the entertainment industry. A steady rise in application areas across industries, such as research and development, surgery, navigation, product advertisement, and promotions, is expected to bode well for market growth. Demand for AR glasses and headsets has also increased with the growing need to detect and view 3D objects in available AR content and apps. For instance, head-mounted headsets or AR glasses can display additional product information and show how a product fits in different environmental conditions.

The rising application of augmented reality headsets in industrial and manufacturing settings to improve maintenance, repair, and operations is expected to drive the growth of the AR headsets market. These headsets enable enterprises to monitor and check machine specifications in real time. For instance, Google LLC collaborated with Indiana Technology and Manufacturing Companies (ITAMCO) to provide an intuitive machine vision experience using Google Glass and MTConnect. The real-time visual experience would help monitor internal machine functioning and calibrations. Furthermore, the application of AR headsets in the assembly and installation phases also points to growth in the market.

The introduction of AR tool kits and AR software development kits further contributes to the development of the market. Augmented reality tool kits have enabled the development of numerous AR apps and have thus increased the number of AR users. The tool kits have witnessed considerable advancement over the years alongside AR-compatible smartphones. For instance, Google's ARCore software development kit works on more than 100 million Android smartphones with advanced capabilities. In February 2018, the extension of ARCore with Google Lens contributed to the development of the market.

The increase in investments and funding raised by emerging AR headset enterprises for the development of AR smart glasses is also expected to drive the market. Despite the hefty prices of head-mounted displays (HMDs) and glasses offered by manufacturers, and the shutdown of certain leading AR headset enterprises such as Daqri, the industry continues to attract significant investment. Large technology giants such as Google LLC, Tencent, and Alibaba are investing extensively in emerging startups. For instance, in April 2019, Magic Leap, Inc., an AR goggles developer, secured USD 280 million from NTT DoCoMo, a Japanese cellphone service giant. Magic Leap had earlier secured funding of more than USD 2.3 billion from Google LLC, Alibaba Group, and J.P. Morgan.
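As a quick sanity check on those headline figures, assuming the CAGR compounds annually over the six years from 2019 to 2025, the implied 2019 base market size is roughly

USD 78.47 billion / (1 + 0.738)^6 ≈ 78.47 / 27.6 ≈ USD 2.8 billion

In other words, the forecast assumes the market grows nearly 28-fold over the period.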
Click the link below: https://www.grandviewresearch.com/industry-analysis/augmented-reality-ar-headsets-market
https://medium.com/@marketnewsreports/augmented-reality-headsets-market-ce4d9caf5f8e
['Gaurav Shah']
2020-02-20 09:48:11.572000+00:00
['Middle East', 'Asia', 'Europe', 'Augmented Reality', 'America']
Meeting the engineers’ expectations as a product manager
One of the teams that product managers work with most closely is the engineering team. This is the team that literally brings ideas to life. Since interaction with the engineers happens on a day-to-day basis, it becomes necessary for product managers to understand what engineers expect from them. The simple reason for this is that you don't want engineers to work in a frustrated manner, because that will surely impact the quality of the deliverables. Without further ado, let us look at some of the things which engineers would like product managers to understand before working on any project.

1. Giving clear and concise instructions

Though this sounds relatively simple, it is one of the things which product managers often tend not to do. What happens is that once the requirements have been gathered, product managers will prepare lengthy documentation that consists of the features and specifications to be built. Now there is nothing wrong with lengthy documentation, because after all, it contains every little detail which will ensure that the set expectation is met. However, if you are assuming that the engineers will go through it in detail, then you are mistaken. You have to understand that engineers are not here to read long documents. They are here to understand what they have to do and then actually code. Therefore, whenever you need the team to build or change a feature(s), give the instructions that the team actually needs for implementation. In other words, don't beat around the bush, please!

2. Take necessary meetings only

If there is one word that may make the engineering team sigh a little, it is the word "meeting". In no way am I saying that engineers don't like meetings. They also understand that meetings allow the entire team to be on the same page. However, it is crucial to understand that engineers are not going to appreciate you as a product manager if you keep pulling them into discussions related to the projects. Think about it. The engineering team cannot give all their time to meetings, because they have to do the coding, and they know that if their tasks are not completed, the project will come to a halt. So as a product manager, one way to ensure that you are only taking the required time out of the engineers' schedule is to first figure out all of the things that need to be discussed. Go ahead and see who would be the best person/people to give those answers. Wherever you need the engineers, schedule a discussion where you are getting as many answers as possible.

3. Providing finalized designs

From my own experience of working with engineers, I can guarantee you that your engineers will love you if you provide them with finalized designs to develop. By finalized I mean that these designs have received the approval of the boss, the client, and any other stakeholder. When you put yourself in the shoes of the engineering team, this makes a lot of sense. Developing fully approved designs means that you are not going to be making nearly 1000 changes. That is just frustrating for anyone! Having said this, it sometimes happens that getting approval for the designs from clients or other stakeholders can take time. At the same time, you also have deadlines to meet. In such cases, do not try to give half-approved designs to the developers right away. First, try to get approval for a set of designs and then give it to the engineers. This way you and your team will be much more productive!
4. Take a stand for them when needed

For a moment, I would like you to think about who is closest to you. If I am not wrong, one of the reasons why someone is close to you is that you know they will stand up for you if such a situation arises. The engineers expect the same from the product manager. Engineers may not say this to product managers, but they would greatly appreciate it if product managers would stand by them in times when, say, there are delays in the deliverables due to genuine reasons. When such support is given to them, it will motivate them to fight through the obstacles and get there.

5. Tell them the truth

They say honesty is always the best policy. No matter how bitter the truth may be, it is better to know it. Engineers, like anyone else, would like to know the truth at all times. As a product manager, you must be honest and tell them upfront about scenarios that can range from further changes to be made in the design to stopping work on a feature because of another requirement. Don't try to keep it from them for as long as you can, because eventually they will come to know of it. Also, one more thing: though the team may not react happily to all truths, they will certainly not despise you, because firstly, you are being honest and not creating stories out of thin air, and secondly, they understand that things like this can happen. It's okay!

Wrap Up

The engineering team is a great team to work with, as they always bring a new perspective to the table and teach you things you may not have known. Having said that, it is essential, as well as dutiful, for product managers to understand the key expectations of the engineers. This could take some time and can be challenging too. However, once you get into the habit of doing this, you will see how your team becomes more enthusiastic while building rocking products for the customers!

To the engineers who have read this, I would love to know whether you agree or disagree with these expectations that you would have for product managers. Also, how do you all find this article overall? Let me know your valuable feedback in the comment section below. (For anyone looking for ways to be a rocking product manager, do check out this blog!)

This post has been published on www.productschool.com communities
https://medium.com/agileinsider/meeting-the-engineers-expectations-as-a-product-manager-8d6b8ace3a92
['Ishita Mehta']
2020-09-02 11:06:58.546000+00:00
['Technology', 'Product', 'Product Management', 'Expectations', 'Product Manager']
Contactless or Cashless Transactions in the World of "New Normal"
2.3 CIBA Authentication Flow — Poll Mode

Now, with a basic understanding of the endpoints, the authentication flow, and the token request modes, let's take poll mode as an example and look more deeply at the important steps.

2.3.1 The Authentication Request

The authentication request is initiated by the client (and that's why it is Client-Initiated Back-channel Authentication). The client needs to inform the authorization server for whom it is requesting authentication, so it is mandatory that the request carries some user identity from the consumption device, for which the client can then request the access and ID tokens. There are three ways for a consumption device to add a user identity: login_hint, login_hint_token, or id_token_hint can be used.

login_hint is a hint to the authorization server about the user for whom the authentication flow is initiated. The value may be an email address, phone number, account number, username, etc., which identifies the end-user to the OP. The value may be collected directly from the user by the client just before the CIBA authentication request, or it can be obtained in other ways. login_hint is simple to understand and implement, though login_hint_token and id_token_hint can provide improved security. login_hint_token is a token with user details, and id_token_hint sends a previously received ID token as a hint. Do not worry if you don't understand these deeply; just consider that these hints provide information about the user to the authorization server.

scope is a required parameter, and "openid" is a mandatory value (e.g. scope=openid%20email%20example-scope).

acr_values is an optional parameter; acr stands for Authentication Context Class Reference. It indicates whether authentication happened successfully against the requested requirements: a value of '0' means the authentication requirements were not met. It is highly recommended that if the request has this parameter, the response has it too.

user_code, too, is an optional parameter: a PIN/password known only to the end-user, which is used to authorize the authentication request. It prevents random clients or users from initiating authentication requests for a legitimate user who, unaware of them, would have to deny the access.

binding_message, an optional parameter, is a human-readable message displayed on both the authentication and consumption devices to interlock them.

requested_expiry, an optional parameter, is a positive integer value that allows the client to request the expires_in value for the auth_req_id.

POST http://localhost:8080/CIBAEndPoint HTTP/1.1
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW

scope=openid%20email%20example-scope&binding_message=W4SCT&login_hint=[email protected]

The CIBA specification recommends signed authentication requests, which can be sent as the request parameter. For simplicity, we will not go through them here; you can check this blog for more details.

2.3.2 Authentication Response

Once the request is successfully validated, the authorization server responds with "200 OK", indicating that the authentication request has been accepted, and sends an authentication response. If validation fails, a response message is sent with an error code and error message; we will see them later in detail.

auth_req_id is a required parameter. It has to be a unique identifier for the authentication request made.

expires_in is also a mandatory parameter. It is a positive long value representing seconds.
It marks the expiry time of the auth_req_id; once the auth_req_id has expired, the server has to respond with an error response.

interval is an optional parameter overall, but it is mandatory for the ping and poll token request modes. It is a positive long value representing seconds, denoting the polling frequency.

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "auth_req_id": "702552be-f0ef-4ca0-9f66-c766e9525e63",
  "interval": 2,
  "expires_in": 3600
}

2.3.3 Communication

Now the OP (OpenID provider, the authorization server) communicates with the end-user. This communication step is out of the scope of the CIBA specification, so we need to choose a suitable option to implement it.

2.3.4 Token Request

For ping mode, the consumption device requests the token after it receives the notification from the authorization server and confirms that it carries the correct auth_req_id. For push mode, the token is delivered once the authorization server receives the user's consent and credentials. For poll mode, the consumption device can start polling for the token once it receives the authentication response with the auth_req_id, and it can keep polling until the auth_req_id expires (polling again after the token has been received results in an error).

grant_type is a mandatory parameter, and the value should be urn:openid:params:grant-type:ciba.

auth_req_id is a mandatory parameter that is used to validate the request.

2.3.5 Token Response

The token response, too, is common to ping and poll modes. A token response usually carries an access token, an ID token, and optionally a refresh token. It also includes the expires_in parameter to mark the expiry time of the tokens provided.
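To make the poll-mode token exchange concrete, here is a minimal sketch of a token request and a successful token response. The token endpoint path and the token values are illustrative placeholders rather than values from a real deployment; the auth_req_id is the one returned in the authentication response above.

POST http://localhost:8080/TokenEndPoint HTTP/1.1
Host: localhost:8080
Content-Type: application/x-www-form-urlencoded
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW

grant_type=urn%3Aopenid%3Aparams%3Agrant-type%3Aciba&auth_req_id=702552be-f0ef-4ca0-9f66-c766e9525e63

HTTP/1.1 200 OK
Content-Type: application/json
Cache-Control: no-store

{
  "access_token": "<ACCESS_TOKEN>",
  "token_type": "Bearer",
  "refresh_token": "<REFRESH_TOKEN>",
  "expires_in": 3600,
  "id_token": "<ID_TOKEN_JWT>"
}

If the end-user has not yet approved the request, a poll-mode token request instead returns an error such as authorization_pending, and the client must wait at least interval seconds before polling again.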
https://medium.com/authenticate/contactless-or-cashless-transactions-in-the-world-of-new-normal-f30e5e3dabbc
['Vivek Vinushanth Christopher']
2020-12-22 13:56:02.479000+00:00
['Ciba', 'Identity Management', 'Computer Science', 'Open Source', 'Open Banking']
Building an E-Commerce application using Java & React
E-commerce applications are the backbone of today's online shopping world. In this post, we will see how to build an e-commerce application easily using Java, JHipster, Spring Boot, and React. Since we will be scaffolding the application, the post will focus on how to build a shopping cart and payment integration rather than how to build a Java web application.

Tools and technology we will use

We will use the below tools and technology to build this application:

JHipster: A rapid application development platform. It can quickly create web applications and microservices with production-grade code. Head over to the installation instructions to set it up. JHipster can scaffold applications with a wide variety of languages, frameworks, and configurations. For this tutorial, we will stick with the following major options; you don't have to install anything for these, as JHipster will manage them for you.

Spring Framework: An application framework in Java that comes with all the bells and whistles required for enterprise-grade Java application development. It comes with Spring Boot, which makes development faster and more convenient. This lets us focus more on our business needs rather than spending time setting up technical integrations.

React: A popular JavaScript UI library that helps build modern scalable front ends. We will be writing React code using TypeScript. We will also be using a few other components like Redux and React Router from the React ecosystem.

Bootstrap: A UI framework for web applications with a variety of themes and customizations.

Gradle: A Java build orchestration tool that provides a highly customizable and easy-to-use domain-specific language (DSL).

Webpack: A front-end build tool for modern web applications.

Adyen Payments Platform: Adyen is one of the leading payment platforms for medium to large scale businesses. It provides a plethora of payment options and provides SDKs for easy integrations. And I also happen to work for Adyen 😄
Docker: A containerization technology, which we will use to quickly run our database. Make sure you have Docker and Docker Compose installed. If you can run a local MySQL setup, you won't need Docker.

Git: A distributed version control system for source code management. Make sure you have Git installed.

Prerequisite

To follow this tutorial effectively, you would need to be familiar with at least the below tools and technology: Java, Spring Framework, React, Redux, and Bootstrap. We have a sample application built to accompany this post. Each section here points to a particular commit in the sample app to help give you a better picture of what is being changed.

Designing the entity model

Since we are going to scaffold our application, it is important to make sure that we have the correct entity model for the e-commerce application. We will use the JHipster Domain Language (JDL) to do this. Below is the JDL model for an e-commerce application:

The User entity is built in with JHipster, and hence we don't have to define it in JDL. However, we can define its relationships. Here is a UML visualization of the same:

Head over to JDL Studio if you want to visualize the model and make any changes. Next, create a new folder and save the above to a file named app.jdl within that folder.

Scaffolding the application

Now that we have our model in place, we can go ahead and scaffold a Java web application using JHipster. First, let's define our application. Add the below snippet to the file (app.jdl) we created earlier (a sketch of such a snippet is shown at the end of this section). We just defined an application named store that uses JSON Web Token (JWT) as the authentication mechanism, MySQL as the production database, Gradle as the build tool, and React as the client-side framework. You can see all the options supported by JHipster here. We also defined that the application uses all the entities we defined, with entities *.

Now, let's invoke JHipster to scaffold the application. Run the below command inside the folder where we created app.jdl:

jhipster import-jdl app.jdl

This will create our application, install all necessary dependencies, and initialize & commit everything to Git. Make sure you have Git installed on your system.

Let's check out the application. Run the below command to run the application in development mode:

./gradlew

After running the application, visit https://localhost:8080 and use the default users mentioned on the home page to log in and explore the application. You can find the commit in the sample application. You can also run the generated unit and integration tests with this command:

./gradlew npm_test test integrationTest

So far, the generated application doesn't have any specific business logic or custom screens. It is just a CRUD application for the model we defined. If you are familiar with the Spring Framework and React, you should be able to navigate the generated source code easily. The Spring/React application created by JHipster is not the focus of this post; for that, I recommend you head over to the documentation provided by JHipster, Spring, and React.
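As promised above, here is a minimal sketch of what the application definition in app.jdl can look like. It is reconstructed only from the options named in this section (an application named store, JWT, MySQL, Gradle, React, entities *), so treat it as an illustration rather than the exact snippet from the sample app:

application {
  config {
    baseName store            // the application is named "store"
    authenticationType jwt    // JSON Web Token authentication
    prodDatabaseType mysql    // MySQL as the production database
    buildTool gradle          // Gradle as the build tool
    clientFramework react     // React as the client-side framework
  }
  entities *                  // use all entities defined in this JDL file
}

Running jhipster import-jdl app.jdl with a block like this, plus the entity model, produces the scaffolded application described above.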
Building a products landing page

Now that our application and all the CRUD APIs are ready, let us build a product landing page that lists all the products offered by the store. We will convert src/main/webapp/app/modules/home/home.tsx to be our product landing page. This involves updating the JSX to show the products list and using the product redux reducer to fetch the data from the product API. Here is the complete diff for home.tsx and here is the entire changelog for this step.

Start the application client-side in dev mode to speed up development. Keep the application running in a terminal using ./gradlew if it is not already running from the previous step. In a new terminal, run npm start; it will start a development server for the client side, which proxies API calls to the backend and opens up a new browser window pointing to https://localhost:9000. At this point, the front end and back end are running in development mode with hot-reload functionality. This means the entire application will automatically reload when we make any changes (the browser will reload as well). For backend changes, the reload will happen when you compile using your IDE or by running ./gradlew compileJava. Update home.tsx according to the changelog and see the changes reflected on the home page.

Building the shopping cart

Now let us build a persistent shopping cart page, where we can list all the items added to the cart by the user. The user can also start checkout from this page. The shopping cart will hold the items added until the payment is complete, even if the user logs out or uses the application on a different machine, as the state is persisted automatically using the generated CRUD API.

For this feature, we also add/update the below on the server side:

- Security configurations to ensure that a user can update only his/her own shopping cart when logged in. Only administrators will be able to see the shopping carts of other users and manage all entities.
- New REST endpoints to add and remove products to and from a shopping cart.
- Service methods.
- Database operations.

These updates are quite straightforward due to the framework provided by JHipster and Spring. On the client side, we will update:

- The shopping-cart reducer to talk to the new endpoints.
- A new route and module to show the shopping cart.

The shopping cart React page uses the below snippet. Note that the listing content is quite similar to the product listing page. Here is the entire changelog for this feature. Make the changes to the application and see them reflected on the shopping cart page.

Please note that I also made some improvements to the fake data generated by JHipster in this commit and improved the product and cart pages in this commit. I also fixed the tests in this commit. Update your application according to these changelogs as well.

Payments integration

Now that our shopping cart is ready, we can integrate the Adyen checkout API to make payments. First, make sure you sign up for an Adyen test account. Follow this guide to get your API keys and merchant account. You will also need to generate an origin key per domain you use to collect payments; in our case, for development use, we need to create an origin key for http://localhost:9000 and http://localhost:8080.

We will use the Adyen Java API library to make API calls, adding the dependency to our Gradle build (a sketch follows below). We also need to exclude the Adyen domain in the content security policy defined in src/main/java/com/adyen/demo/store/config/SecurityConfiguration.java.
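The Gradle dependency just mentioned would look roughly like the snippet below. The artifact coordinates are those of the Adyen Java API library; the version is deliberately left as a placeholder, so substitute the latest release:

dependencies {
    // Adyen Java API library; replace VERSION with the latest release
    implementation "com.adyen:adyen-java-api-library:VERSION"
}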
We will create a new Spring REST controller that will use the Adyen Java library and make payment API calls for us. Here is the src/main/java/com/adyen/demo/store/web/rest/CheckoutResource.java class, and here is a method from this class. The controller ensures that all actions are done against the active shopping cart of the user logged into the session. This ensures that security issues like man-in-the-middle attacks and request spoofing do not happen. When a payment is completed successfully, we close the active shopping cart, ensuring every user has only one active shopping cart at a time.

On the client side, we will create a React page to show the payment options and the payment result status, and a redux reducer to talk to the new API endpoints. We will also download and add the Adyen client-side resources to our index.html file. Here are the important bits of the checkout page, since this is where we handle the Adyen JavaScript integration from within React. Here is the entire changelog for this feature.

Make the changes to the application accordingly and see them reflected on the shopping cart page. Make sure to set the required environment variables first.

Running the app in production

Now that we have made all the required changes, let us compile and run our app in production mode. First, let us run the generated unit and integration tests to ensure we haven't broken anything:

./gradlew npm_test test integrationTest

Now, let's start a MySQL database. Our application uses an in-memory H2 database for development and MySQL for production; this makes development easier. We will be using Docker Compose to run the DB. You can also manually run a MySQL DB if you prefer.

docker-compose -f src/main/docker/mysql.yml up -d

The above command will start up a MySQL DB from the included Docker Compose file. Now, run the below command to run the application in production mode:

./gradlew -Pprod

You can also package the application using the command ./gradlew -Pprod clean bootJar and then run the JAR using java -jar build/libs/*.jar.

Now, visit https://localhost:8080 and use the default users mentioned on the home page to log in and explore the application. You can use the test cards from Adyen to simulate payments.

Conclusion

That's it. We have successfully built an e-commerce application complete with a product checkout and payment flow that can accept multiple forms of payment. I highly recommend that you check out the sample application to get a better context of what has been built. The example application also has a user registration flow that you can check out. I hope this helps anyone trying to build shopping carts and payment flows for their Java e-commerce application.
https://medium.com/adyen/building-an-e-commerce-application-using-java-react-54015b81d6c9
[]
2020-07-08 05:19:20.427000+00:00
['Jhipster', 'React', 'Adyen', 'Spring', 'Java']
The peopling of the Japanese archipelago and the formation of modern Japanese people and culture (including the Ainu people)
The peopling of the Japanese archipelago and the formation of modern Japanese people and culture (including the Ainu people)

This post will include general information about the Jōmon period people and the peopling of the Japanese archipelago. It gives you a good overview of the formation of the Japanese people, including the Ainu. Japanese people formed from various ancient East Asian-related groups. Noteworthy are:

The heterogeneous Jōmon period hunter-gatherers and fishers, who can be traced back to ancient Siberians, ancient East Asian highlanders (Himalayans), ancient Northeast Asians, and Southeast Asians. → Jōmon period people were heterogeneous and diverse; they homogenized only during the late Jōmon period, following a population decline and food shortage, and likely also some bottleneck events. (The modern Ainu are neither a representative of the ancient Jōmon nor their direct descendants. More about that later.)

The Yayoi rice-agriculturalists, from the Korean peninsula and ultimately likely from the Yangtze culture area. They were rather homogeneous and had advanced technology compared to the Jōmon locals. They did not replace the Jōmon but merged with them, together forming the Yayoi culture.

Noteworthy is that there were strong differences between the various Jōmon period populations. The Hokkaido Jōmon especially differed greatly from other Jōmon samples, also having links to Paleolithic Siberians (exemplified by the Yana sample, Upper Paleolithic people of Northern Eurasia who had European-related ancestry and phenotype). Here we see the various migration routes into the Japanese archipelago:

Modern Japanese can be modeled as most similar to Koreans and other East Asians (Han), followed by a Central Asian/Tibetan component, a Southeast Asian component, and a South Asian component. This may be because of the heterogeneous Jōmon period people and their genetic impact on modern Japanese. It is in accordance with other recent studies modeling Japanese as 92% Liao river farmers (a combination of Northern East Asian and Southern East Asian components, commonly found in the Liao river region and the Korean peninsula) and 8% Jōmon period hunter-gatherers (exemplified by the various ancient samples). It should be noted that Jōmon ancestry among Japanese is modeled, so we do not know exactly how much the Jōmon period people contributed to modern Japanese. It is rather hard to distinguish East Asian inter-ethnic genetic components because of the heterogeneity among the early Jōmon. Jōmon-derived ancestry is suggested to range from 8% up to 45%, depending on the test methods. For Liao River farmers and the formation of Koreans see:

The Ainu and the Jōmon are not the same, and the Jōmon of Hokkaido differed from the Jōmon of Honshu or Kyushu. The historical Ainu formed from Hokkaido Jōmon, Okhotsk, and Satsumon tribes, and thus are not a representative of the Jōmon period people, especially not for all of Japan! The Hokkaido Jōmon formed from Upper Paleolithic groups of Hokkaido and Siberia and from groups migrating into Hokkaido from Honshu. See also:

In 2021, it was confirmed that the Hokkaido Jōmon population formed from "Terminal Upper-Paleolithic people" (TUP) indigenous to Hokkaido and Northern Eurasia and from migrants of Jōmon period Honshu. The Ainu themselves formed from these heterogeneous Hokkaido Jōmon and from a more recent Northeast Asian/Okhotsk population.
Also, associating these looks solely with possible Jōmon ancestry (which is itself contradictory, as the Jōmon formed from heterogeneous groups during the Paleolithic and late Neolithic) is misleading. I do not understand why Westerners are so fascinated by this. Northeast Europeans also received high amounts of Northeast Asian geneflow and look slightly different from contemporary Europeans (they even speak Uralic languages). My point is that the Jōmon were not all European-looking. Only the Hokkaido Jōmon had geneflow from Paleolithic Siberians, which brought in this gene allele associated with facial features commonly found in Europeans. There then existed a north-to-south cline from Hokkaido to Honshu.

Jinam et al. found that there is a gene allele associated with facial features commonly found in Europeans among some Hokkaido Jōmon and Ainu. This allele is suggested to have arrived from Paleolithic Siberia with the spread of the microblade culture. A new study in 2021 also found affinity between the Hokkaido Jōmon and the Yana sample (an Ancient North Eurasian sample from Russia). Another previous study found 14% European-related ancestry in Hokkaido Jōmon. A study published in the scientific journal Nature by Jinam et al. 2015, using genome-wide SNP data comparison, found that many Ainu have gene alleles associated with facial features which are commonly found among Europeans but absent from Japanese people and other East Asians; these alleles are not found in all tested Ainu samples, however. These alleles are the reason for their pseudo-Caucasian appearance and likely arrived from Paleolithic Siberia.

It is noteworthy that the Hokkaido Jōmon differed quite strongly from the Honshu Jōmon. Different groups existed in Jōmon period Japan which later homogenized during the late Jōmon period, following a population decline and food shortage. Additionally, the Ainu did not all look European-like/Eurasian; many looked typically Northeast Asian. The Ainu language itself is even suggested to have originated from the Okhotsk component. See also:

Jōmon people mostly looked East Asian; some had the specific allele and looked mixed/Eurasian-like or Native American-like. Thus I want to make people aware that it is more complicated than simply a "Jōmon look," as most Jōmon also looked East Asian. (Figure: forensic reconstructions of Jōmon period samples.) That is, the Jōmon can be somewhat compared to the Native American look; Native Americans also carry both East Asian-related alleles (75%) and European-related alleles (25%). (Figures: a Peruvian Native; a Kennewick Man reconstruction.)

These results suggest a level of inter-regional heterogeneity not expected among Jomon groups. This observation is further substantiated by the studies of Kanzawa-Kiriyama et al. (2013) and Adachi et al. (2013). Kanzawa-Kiriyama et al. (2013) analysed craniometrics and extracted aDNA from museum samples that came from the Sanganji shell mound site in Fukushima Prefecture, dated to the Final Jomon Period. They tested for regional differences and found the Tohoku Jomon (northern Honshu) were more similar to the Hokkaido Jomon than to the geographically adjacent Kanto Jomon (central Honshu). Adachi et al. (2013) described the craniometrics and aDNA sequence from a Jomon individual from Nagano (Yugora cave site) dated to the middle of the initial Jomon Period (7920–7795 cal BP). This individual carried ancestry which is widely distributed among modern East Asians (Nohira et al. 2010; Umetsu et al. 2005) and resembled modern East Asian comparison samples rather than the geographically close Urawa Jomon sample.
In this respect, the biological identity of the Jomon is heterogeneous, and it may be indicative of diverse peoples who possibly belonged to a common culture, known as the Jomon. See: https://www.researchgate.net/publication/281036097_Jomon_Culture_and_the_peopling_of_the_Japanese_archipelago_advancements_in_the_fields_of_morphometrics_and_ancient_DNA

Genetic data of Jōmon period people: It is now established that the Jōmon people initially formed from heterogeneous groups, which merged during the Jōmon period in Japan. There existed a north-to-south cline. The Jōmon are overall close to East Asians, and appear basal to Northeast Asians and Native Americans (per Gakuhari et al. 2020). Gakuhari et al. 2020 successfully analyzed Jōmon samples and modern populations from Eurasia. East Asians and East Asian-related populations form a tight cluster. Interestingly, Oceanians are as distant from East Asians as they are from Europeans. The Andamanese/Onge, however, are shifted towards East Asians, as they received geneflow from East Asian Highlanders and southern East Asians. Andamanese/Onge were found to have between 6% and 45% East Asian-related ancestry, averaging 31% among the analyzed Onge.

Lineages which are suggested to have arrived during the Jōmon period include C1a, D1, C2, F and K. The first haplogroups were F and K, followed by C and D. However, K and F stand at less than 2% today and thus may be an echo of an earlier population, having been replaced early by the later C1a, C2 and D. This corresponds with the findings that another Paleolithic population was replaced by the early Jōmon tribes. That is, the shared haplogroup D can be related to a deep/divergent East Asian lineage, which contributed to the Jōmon period people, to the Andamanese people, and to Ancient Tibetans (Highlanders). The Andamanese formed from this East Asian-related component and from an older Oceanic component shared with Australasians (Papuans). Other common haplogroups among the Jōmon were C1a and C2, traced back to Northeast Asians. D additionally arrived among the Andamanese about 7,000 years ago through a male migration and founder effect. The Great Andamanese tribe, however, preserved the original haplogroups, which are M and S (similar to the ones in Papuans).

A 2019 study found that haplogroup D underwent a rapid expansion shortly before the Yayoi migration and became dominant in several Jōmon tribes. D expanded from Kyushu up north and down south respectively, replacing previous lineages such as K, F and C1a. → D is not directly related to the Ainu people, but became dominant through genetic drift. The actual haplogroup which distinguishes the Ainu from the Japanese is C2. (Figure: the distribution of haplogroup D today; note Japan, Tibet and the Altai.)

Ancient and modern Tibetans share genetic drift with the Jōmon populations (at least partially). Per Yang et al. 2020, Melinda Yang, population geneticist and historian, also concluded that Northeast Asians and Southeast Asians are closely related, genetically speaking, ultimately descending from a basal East Asian source in southern China and Mainland Southeast Asia, which also gave rise to the East Asian Highlanders, the Hoabinhians (at least partially) and the Paleo-Siberians (ancestors of Native Americans, not to be confused with Paleolithic Siberians).
Jōmon haplotype sharing, per Watanabe et al. 2021, shows that the Jōmon were most similar to contemporary East Asians, but the authors note diversity among the Jōmon samples, and that the Hokkaido Jōmon had gene alleles associated with facial features commonly found in Europeans. That is, there was a north-to-south cline. The northern Jōmon in Hokkaido carried alleles associated with facial features commonly found in Europeans (possibly the reason why some Ainu look somewhat European or Central Asian). Additionally, the northern Jōmon shared affinity with the Yana sample of northern Siberia in Russia. (Figure: Jōmon SNP haplotype sharing, Watanabe et al. 2021.) Tujia people and Miao people share notably high amounts of SNP haplotypes. (Figure: Jōmon PCA position.) The Tianyuan sample, dated to more than 45,000 BC, is ancestral to East Asians, including the Jōmon and Southeast Asians, and is positioned as basal.

What languages did the Jōmon speak? A very good question. We know that they had multiple languages. Among these were possibly Ainu, Amuric, Austronesian, Tungusic and Japonic. Chaubey and George van Driem 2020 found evidence that Japonic was already present in southwestern Japan during the Jōmon period, before the Yayoi migration. The Japonic-speaking Jōmon people must have been drawn in to avail themselves of the pickings of Yayoi agricultural yields, and the Yayoi may have managed to accommodate the Jōmon linguistically and in material ways. Similarly, a 2017 study by Yosuke Igarashi had already proposed a homeland for Japonic within Jōmon period Japan. According to him and his results, Japonic was the language of the southern Jōmon period people and later expanded during and after the arrival of the Yayoi agriculturalists with the new merged population.

"Evergreen broadleaf forest culture, associated with the southwestern Jōmon and possibly proto-Japonic languages": It is suggested that the southwestern Jōmon belonged to a wider "evergreen broadleaf forest culture" which stretched from southern Tibet through China to Japan. It is associated with azuki bean cultivation. Later, it seems that new techniques related to azuki bean agriculture and the domestication of the peach spread from southwestern Japan to China. It is still the culture of several Tibeto-Burmese tribes in China and Northeast India. It was characterized by the azuki bean agriculture practiced by the southwestern Jōmon and some southern Chinese groups. (Figure: azuki and peach agriculture during the Jōmon period.)

Conclusion:

Japanese are close to all East Asian and East Asian-related populations worldwide, but especially close to Korean people. Koreans and Japanese form an indistinguishable cluster, followed by Han.

Ainu people have heterogeneous ancestry, mostly Hokkaido Jōmon and Okhotsk, with some Yayoi and Siberian components. Thus the Ainu are shifted towards northeastern Siberians (Itelmens and Chukchi) compared to Japanese people.

Ryukyuans are close to contemporary Japanese, but have relatively more affinity with Han and Southeast Asians compared to contemporary Japanese.

Some Japanese individuals carry the mentioned gene allele associated with facial features commonly found in Europeans, causing a somewhat exotic look. But as I said, such a look is very rare and often hyped by Westerners. An association with Ainu ancestry is also misleading and an erroneous simplification.
The reference population for the Japanese (Yamato) used in Geno 2.0 Next Generation is 89% East Asia, 2% Finland and Northern Siberia, 2% Central Asia, and 7% Southeast Asia, making Japanese approximately 100% East-Eurasian. Genealogical research has indicated extremely similar genetic profiles between these groups, making them nearly indistinguishable from each other and from ancient samples. Japanese people, for example, were found to share high genetic affinity with the ancient (~8,000 BC) "Devils_Gate_N" sample from the Amur region of Northeast Asia. As said in the beginning, an East Asian-like look among Europeans is much more common than a Eurasian or European-like look among Japanese. So, honestly, I do not understand the hype about Japanese people and the many misconceptions regarding the Jōmon period people. I hope this post helps to clarify many questions about the Jōmon period people and the Japanese.
https://medium.com/@takahashi2saito0/the-peopling-of-the-japanese-archipelago-and-the-formation-of-modern-japanese-people-and-culture-d1d561a3e11c
['Saito Takashi']
2021-07-05 07:02:25.214000+00:00
['Jomon', 'Japan', 'Japanese History', 'Ainu', 'Japanese']
But First, Diet Culture: A Brush With Internalised Fatphobia Before I’ve Even Had Coffee
DIET CULTURE

But First, Diet Culture: A Brush With Internalised Fatphobia Before I’ve Even Had Coffee

Please allow me to wake up before I deal with hundreds of years’ worth of social conditioning around women’s bodies

I walk to the counter and the barista is cheerful and friendly. ‘Morning!’ she says, while tidying cups, moving stray milk jugs out of her way, putting something in the till. I return her greeting, ask how she is; she asks me back. ‘Can I get you a hot drink?’ she says. I order a decaf flat white. She asks if she can get me anything else. I take a quick look at the pastries and cakes under the glass, and consider whether I’m hungry. I’m not. I say, ‘No thanks.’

Then she says something that perhaps only a couple of years ago, I would have felt awkward about, but dared not respond to, feeling as I did so remote from my own body, so in need of remaining invisible, and therefore hopefully (but unlikely) free from judgement. She looks at me with astonishment and says, ‘Such amazing willpower there!’

I don’t get the feeling that she is making a comment about my larger body. I am acutely attuned to that particular vibration: there is something in the eye of the person commenting; intention cannot easily be masked. The comment seems more about her than me. But I pick up on that word: willpower. That made-up notion to make us feel that whatever is oppressing us is our own fault; our own inability to harness it in order to overcome any iteration of perceived weakness is a moral failing. Willpower does not take into account systemic prejudice, cultural conditioning, physical affliction. Easier to blame ourselves than to look at the bigger picture. The word is used so extensively as a measure of our value, depending on how much of it we deploy; except, it doesn’t exist.

When she comes back to the counter, I have to ask. ‘Sorry,’ I say, ‘what did you say about willpower?’ I smile widely, so as not to appear confrontational. This is a sensitive subject, and requires an extended smile to show her that I am asking with curiosity, not judgement. ‘Oh!’ She is surprised, but her cheer continues. ‘I said you have amazing willpower — I would just eat everything if I could.’ ‘Oh, it’s not willpower,’ I say, ‘I’m just not hungry.’ Then I pause for a second, and ask myself if I’m going to say this next part — and before I can answer, I am already speaking. ’But you can eat whatever you want, whenever you want to.’ I meet her eye, trying to say this as lightly as I can. She looks away.

A woman suggesting to another woman, out loud, in public, that she can eat whatever she needs or wants, whenever she needs or wants to should not be a revolutionary act, but the air hangs heavy with treason. In general, women do not encourage other women to eat, and especially not in public — except sometimes, when that food is salad or soup or steamed vegetables, and only in small portions. If it is not one of these foods, it is usually suffixed with some conspiratorial mention of ‘treat day’ or ‘cheat day’ or ‘being naughty,’ or comes with a pledge to return to the diet — the one it is assumed that all women are on by default, or at least, should be — the following day.

There are other things at play in this particular situation, of course. I am breaking the perfectly pleasant but necessarily superficial fourth wall between customer and barista. I hope I don’t make her feel uncomfortable by doing this, but perhaps I do, and I have to own that.
Also, I am saying these things to a stranger, and though I believe the notion to be true — that we can and should eat what our body asks for — it is a truth fairly new to me, and it’s the first time I’m trying it with someone I don’t know. I feel like there’s a neon sign over my head, LED arrows pointing down at me. I hear an internal whisper of ‘who do you think you are,’ that — thankfully — now diminishing refrain of my life. My personal journey around food and my body is still in progress; self-consciousness zings through my blood for a moment. I am not yet healed, but I cannot help feeling that part of the healing is to call out the bullshit that I have internalised as shame for my entire life thus far.

She is still busy making the drink, going back and forth between tasks. She returns to the counter, and continues. ‘Oh no, I can’t eat what I want. My problem is I would never stop.’ (I feel that this woman would be judged as straight-sized by anyone. I deliberated over mentioning her body at all, but it feels important in this context because fatphobia is not reserved for those in larger bodies, and it is just as often directed inwards as out.) I say, ‘Oh’ in response, and my heart aches a little for her, but I can relate so closely to what she is saying, and I feel all the more grateful for what I have learned these past couple of years to help steer me away from all the dangerous messaging I have absorbed over a lifetime.

She is telling me that she does not trust her body, just like billions of other people, and especially women and women-identified folx, all over the world. She will believe that her desires, especially around food, are a problem to overcome. She will resist making pleasurable food choices, calling them ‘temptations,’ making the resistance into something religious; making it sinful to want to eat delicious food, as if sinning were real, as if it’s not enough that the rest of humanity is watching her eat, that there has to be some deity keeping count, too. She believes, has been told over and over, that her body is the problem, that it is unruly and wild, and that it will not tell her when to stop, so it is best to just never allow it the food she wants. Her head is there to oversee and counteract the desires of her body. She may believe herself ‘good’ to not give in to her own personal desires, except the word ‘good’ in relation to desire is merely a euphemism for ‘obedient.’

In that moment, I feel grateful that I have been exposed to the books, blogs, fierce women writers, activists, campaigns and podcasts telling me that my body is okay as it is. That I am okay and that I always was. That I was born entirely attuned to the needs of my body, and that I have been conditioned over time to hate it, mourn its imperfections, punish myself for its unwillingness to conform to what is expected of it in this culture, to disassociate from its desires. I do not know how long it will take to entirely undo this damage, but I do know that I am standing on the other side of that joyless bridge now, and I hope that, when she’s able to, this woman will join me.

She glides back over. ‘Here you go, sorry for the wait!’ Her smile has been unbroken this whole time. She pushes the cup forward. The foam-and-coffee feather on top is beautiful. ‘Have a wonderful morning!’ she says, and before I can say ‘You too,’ she is already greeting the next customer.
https://medium.com/modern-letters/but-first-diet-culture-a-brush-with-internalised-fatphobia-before-ive-even-had-my-coffee-bab0067d0213
['Kellee Rich']
2020-09-17 11:20:02.149000+00:00
['Feminism', 'Experience', 'Identity', 'Fatphobia', 'Self']
Have you tried Pylance for VS Code? If not, here’s why you should.
PYTHON DEVELOPMENT | VISUAL STUDIO CODE Have you tried Pylance for VS Code? If not, here’s why you should. The latest language server for Python (from Microsoft) is a massive productivity enhancer. If you work with Python and Visual Studio Code, go ahead and do yourself a favor: download the Pylance extension (preview) and try it out for yourself. What is Pylance? Pylance is an extension for Visual Studio Code. More specifically, Pylance is a Python language server — this means it offers enhancements to IntelliSense, syntax highlighting, package import resolution, and a myriad of other features for an improved development experience in the Python language. You can find the full list of features here. To keep things simple, I’m going to focus on the three most useful features based on my experience with the extension thus far. Automatic imports (a.k.a. less headache) In Python, understanding how to correctly import dependencies from both internal and external modules is a challenge for newbies and professionals alike. The Pylance extension offers a feature which automatically adds imports to the top of your Python files when you reference a dependency in your environment. It will also show a lightbulb icon with suggestions to add or remove imports depending on the scenario. It’s worth mentioning that you need to have the modules you’re referencing installed in your Python environment for this to work.
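To make the auto-import behavior concrete, here is a small, hypothetical illustration of the workflow; the pandas reference is just an example, and (per the note above) it assumes pandas is installed in your environment:

```python
# If you type the DataFrame line below in a fresh file with no imports,
# Pylance flags "pd" as undefined and shows a lightbulb quick-fix;
# accepting the suggestion inserts the import at the top of the file.

import pandas as pd  # added automatically by Pylance's auto-import

# Hypothetical example data; any installed module you reference works
# the same way.
df = pd.DataFrame({"language": ["Python", "R"], "stars": [5, 4]})
print(df.head())
```

The same lightbulb also offers to remove an import once nothing in the file references it anymore, which keeps the import block tidy as the file evolves.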
https://towardsdatascience.com/have-you-tried-pylance-for-vs-code-if-not-heres-why-you-should-a8a7b5adc5f2
['Sj Porter']
2020-10-17 04:21:46.596000+00:00
['Python', 'Development', 'Python Programming', 'Vscode Extension', 'Vscode']
Augmented Reality Starters Guide
After juggling with multiple Augmented Reality SDKs, I’ve decided to write down what I have realized so far. In case you are planning to develop the next AR application, this will benefit you very much in ways you probably haven’t thought of yet.

LICENSING

This is something you would probably worry about last. But hear me out: this should be your first concern. Why? Because even if you are just playing around with some SDK, you do want to get paid at some point for that skill, right? So don’t arbitrarily learn skills that may be too expensive for most clients to afford. Just so you know, more or less all the AR SDKs on the market must be licensed, and the industry-standard ones don’t have any free license for publishing except for ARCore and ARKit. If you look up the best AR SDKs you will find Vuforia on top. And that doesn’t have any free license. It’s a great tool, by the way. And if you are a low-profile company you can get by using the Basic license at $42/month. Even that might be expensive for some small firms who just want to test things out. And if you are developing for a giant company you might want to think twice before you make any commitment, because all the enterprise licenses cost from four-digit to five-digit US dollar amounts. So before you even start developing or proposing a product for a client, take a look at the policies and the price points of the SDKs that you wish to use. Trust me, this will be a headache later on if you don’t plan it at first.

Tools

If you are developing an augmented reality application, it’s very likely that you will use a game engine, quite possibly Unity 3D. Unity does have a free tier, so if you are playing around it’s free to use. But if you are working for a client, they will very much want that Unity logo removed from the initial splash screen, and the only way to do that is to buy a license. So if you are planning to use Unity, be prepared to pay for that as well. Alternatively, you can use native Android or iOS development. But that would be rather expensive in terms of development cost, so it’s better to pay Unity for the license. It is possible to develop AR apps with native development, but it may not be as easy as it is with Unity or some other game engine. So before you quote anyone anything, you should be very careful.

Available SDKs

Now that you know what to worry about the most, let us talk about the available SDKs on the market. We will only talk about the most used ones: ARKit, ARCore, Vuforia, and Wikitude.

Common Augmented Reality Features

Before you choose an SDK, it’s important to know which SDK offers which features, and which are the popular ones on the market.

Image Tracking: Image tracking is the simplest form of augmentation. It tracks a fixed image; with image recognition, it can detect trained images and augment on top of them.

Plane Tracking: This is a slightly more complex feature. It uses real-time calculations to map a flat surface, which can be horizontal or vertical. Almost all the SDKs that support plane tracking support both planes, though some support horizontal tracking only. Once the surface is detected, augmentation is possible on top of it.

Lighting Estimation: This is a very advanced feature. It allows the developer to know how well-lit an area is in real life, so light effects can be applied to the 3D model being augmented, making for a more realistic experience. But this is very limited at the moment.
It doesn’t give you the direction of the light, so you can’t create a real-time shadow. [Please note that this is only true at the moment; in the future this might be different.]

Shared Worlds: This is a concept for when you wish to develop a multi-user AR experience, mostly multiplayer AR games. This is a very new feature, and only a few SDKs offer it. With this, multiple users can see the same world on their different devices.

Face Tracking: This is something most of us are already familiar with. Remember Snapchat? All those filters on camera apps now? Yes, those are done with face tracking.

Body Tracking: This is probably one of the most complicated features. It must be mentioned that Microsoft was the pioneer of body tracking with Kinect. With this feature, it’s possible to track a person’s body movement and place augmentations on top of it. One good use case would be a virtual dressing room, or creating movement animations for game development.

3D Object Tracking: This is another advanced feature that’s not supported by all the SDKs yet. Like image tracking, it can detect real-world 3D objects based on their geometry. The use case for this feature is pretty self-explanatory: you can augment toys or real-life cars to show various features.

Occlusion: By default, all augmentation is done on top of, or in front of, the real world. What if you want real-world objects to occlude (hide) it? This is where this feature comes in.

Decision Matrix

Now that we have a clear idea of what augmented reality SDKs commonly offer, let us see which SDK offers which features; then we can move on to deciding which SDK we really need to use. (Chart: AR SDK feature comparison.)

We can clearly see that, in terms of features, ARKit just dominates every other SDK, and it’s totally free. The only drawback is that it’s iOS-only. So if your application is iOS-only, close your eyes and pick ARKit. If you want to develop an AR app for Android, ARCore should be your primary choice. And the best thing about ARCore is that it supports iOS too, which opens up the scope for cross-platform development. But there’s one feature that ARCore doesn’t yet support, unlike the other two cross-platform SDKs: 3D object tracking. In case you wish to make a cross-platform 3D object tracking app, you have to go with one of the other two SDKs. And, point to be noted, they aren’t free; rather, they are quite expensive. Maybe after a few months ARCore will offer the same features as ARKit, and then these two will become the mainstream SDKs by default.

Summary

If in doubt, start with ARCore, because that’s where everyone will be heading very soon. But if you need to deliver a specific solution as soon as possible, you may have to pick a paid license; unless you are focusing only on iPhone, in which case you can just go with ARKit.
https://medium.com/brainstation23/augmented-reality-starters-guide-d757f77e60f
['Tanim-Ul Haque Khan']
2020-05-29 08:55:56.620000+00:00
['AR', 'Vuforia', 'Arcore', 'Arkit', 'Beginners Guide']
A Complete Guide on Chatbot Development
You might have heard about chatbots, and surely you have interacted with one too. And if you thought like a businessperson, you probably thought they could be of use in your business. Here’s where you are right: most companies tend to choose newer and unique ways of communicating with their audience, and that’s why chatbots are in trend. In short, a chatbot is AI-powered software that can perform several pre-programmed tasks without human intervention. As the name suggests, the most apparent function is to have a dialogue with users. Every bot performs different functions, ranging from personal assistant to restaurant reservation to providing requested data such as flight schedules or weather forecasts.

Why Do You Need to Think About Chatbot Development?

Bots have a lot to offer to every business struggling with effective communication with its audience, notably if you determine their purpose and functionality based on the needs of your business or brand. Here are some highlights that every sector is going to like:

#1. Saves human staff/resources. Bots don’t get tired of answering repetitive questions or inquiries like human staff do. With this, you can save more of your human resources and their effort; instead, they can work on more complex and prominent business areas.

#2. Showcase a better brand image. With chatbots, you can be present for your audience round the clock, and customers can get the desired information anytime they decide to make a purchase, which creates a sense of being heard. Hence, it builds a better image of your brand in the eyes of your prospects.

#3. Cost-effective. You might assume that chatbot development bears higher costs, but you would be mistaken: AI chatbot development costs less than app development. You will save a lot of money and gain a great bunch of benefits too, replacing human effort for the more routine inquiries from prospects.

#4. A great way to fight the competition. As you already know, chatbots will likely be adopted by most businesses in numerous sectors shortly. Thus, being among the first will give you leadership in your industry in building a tech-savvy user interface.

#5. Quicker processes. An AI chatbot isn’t limited by the number of conversations and can respond to multiple prospects simultaneously, and it can read customers effectively in a way humans cannot. It makes the whole process more efficient and faster.

How Do They Function?

Now let’s discuss how exactly chatbots function and what makes smart chatbot development possible. The leading technologies, termed the backbone of AI chatbot development, are machine learning, artificial intelligence, and NLP (natural language processing). The moment any question or query is presented to a bot, complex algorithms process the input to understand the tone and context of what the user is asking, and determine an appropriate answer. Hence, bots have to rely on the algorithms’ ability to handle the complexity of both written and spoken words. You might have noticed that some bots perform this task quite well, making it tough to tell whether you are talking to a human or a machine. Handling human talk or text is one of the biggest challenges, but a bot does quite well if trained properly and deployed in the right role.

Technology Behind Bots

There are some elements that make sure your bot functions well.
An API (application programming interface) makes it possible for your chatbot to mediate between your application and its users via voice or text message. During the chatbot development process, your bot’s intelligence will depend on the way you apply machine learning while deploying the bot in your systems. Moreover, if you want a deep understanding of the whole chatbot development process, it is not enough to know only about the technology. For instance, you should look into the different tools and design elements to have a more comprehensive picture before making a final decision. It is always wise to do proper research and study before deploying anything to your business, as it directly impacts functioning and growth. But before moving forward, let’s take a quick look at the technology generations; a minimal code sketch of the first generation follows at the end of this section.

Rule-based: a user asks a particular question or inputs a query, and the bot replies with one of its pre-set answers.

Under supervised AI: a massive set of labeled data is created to imitate prospects’ future situations and behavioral patterns.

Under adaptive AI: here, the bot learns from unlabeled data, in addition to the probabilities of the two former generations and past inputs.

During initial chatbot development, first-generation bots could perform more straightforward tasks and be built with ordinary programming languages, whereas in the second and third generations, NLP gave birth to the most intelligent and smart bots yet in the marketplace.

Tools for AI Chatbot Development

One prevailing confusion is between chatbot development platforms and publishing platforms. The main difference is that a publishing platform is where users interact with your bot, while an AI chatbot development platform provides the tools that enable you to build and deploy the bot on your systems. You can also partner with a chatbot development company that offers an easy way of developing bots without any coding. Let’s discuss some of the well-known platforms.

#1. Microsoft Azure Bot Service: It provides chatbot development services with an SDK and portal, plus a chatbot connector that enables you to connect with other social media platforms. It also helps you debug your bot, and offers a plethora of sample bots from which you can quickly find the ideal bot for your brand. It works best for developers and gives them the power to create highly interactive chatbots.

#2. QnA Maker: Another offering from Microsoft, quite helpful for businesses that field frequent questions from their target audience about products and services. QnA Maker helps you build a chatbot from FAQ URLs and other structured documents and manuals within a few minutes.

#3. IBM Watson: One of the most preferred platforms for developing AI bots. Its benefit is its capability to serve different spheres and manage complex conversational circumstances with ease. If you are considering chatbot development, you must start by weighing all your requirements and the scenarios that need to be addressed by your chatbot.

Last but not least is “BotPenguin,” which has been making its mark since its inception. It is quite popular among small-scale businesses because of its full-fledged suite of tools at very reasonable pricing. You can have a highly competitive and smart chatbot serving your customers within a few minutes. Moreover, it considers all your needs and industry type and offers you an ideal bot for your marketing strategy.
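To make the rule-based generation concrete, here is a minimal sketch in Python. The rules, replies and function names are purely illustrative assumptions, not tied to any of the platforms above:

```python
# Minimal rule-based chatbot sketch: match keywords in the user's query
# against a table of pre-set answers, exactly as the first generation
# described above works. All rules and replies are hypothetical.

RULES = {
    "price": "Our plans start at $10/month. See the pricing page for details.",
    "hours": "Support is available 24/7, that is what bots are for!",
    "refund": "Refunds are processed within 5 business days.",
}

FALLBACK = "Sorry, I didn't understand. A human agent will follow up."

def reply(query: str) -> str:
    """Return the pre-set answer for the first matching keyword."""
    query = query.lower()
    for keyword, answer in RULES.items():
        if keyword in query:
            return answer
    return FALLBACK

if __name__ == "__main__":
    print(reply("What are your hours?"))  # matches the "hours" rule
    print(reply("Tell me a joke"))        # falls through to FALLBACK
```

Supervised and adaptive bots replace the hand-written RULES table with models trained on labeled and unlabeled conversation data respectively, but the request-to-response loop stays the same.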
Best Practices for Chatbot Development

To grow in the long run, you need to focus on the latest technology and trends, and that’s what you can achieve via AI chatbot development. With a rich set of bots, you can make your business more competitive and robust in the digital marketplace. Many companies ask: what’s the strategy behind successful chatbot development?

#1. Recognize the right use-case scenario. With the improvements in chatbot technology, businesses in different sectors are finding their way into more use cases. Some common use cases are finance bots, virtual assistants, supplementing HR practices, and more. The type of use case will depend solely on you and the business type or area where you are thinking of deploying.

#2. Select the right chatbot development framework. Businesses can develop bots from scratch or use comprehensive frameworks, which are designed to mass-produce chatbots. You can get support from a chatbot development company with its frameworks and specialized offerings.
https://medium.com/@imbotpenguin/a-complete-guide-on-chatbot-development-1242c2e38437
[]
2020-12-19 12:26:35.197000+00:00
['Chatbots For Business', 'Chatbots', 'Bots', 'Chatbot Development', 'Chatbot Design']
The Purpose Of Religion
Since the dawn of the Age of Enlightenment, religion has been seen as a concept of the past, now obsolete. The majority believes that religion is a concept our ancestors devised to answer our questions: questions like who we are, what we are doing on this planet, who created it, why we die, why there is pain and suffering, and so on. And since we now have the tools of science to answer these questions, religion is no longer needed, and because of its negative effects, it should now be dismantled. But it seems like this is only half the story. There is a second objective of religion, a bigger and more significant one, and that is to teach us how to live. It doesn’t matter what is happening in our lives, the amount of pain, the deaths, the hunger, and everything else, if at the end of the day we are at peace with reality. And this is the objective of religion, not science. People on the liberal end of the spectrum hold the view that if you educate a person enough, he or she will become liberated. I respectfully disagree. Science and rationality only help us understand the workings of the universe outside, not the existential questions, at least not yet. So how will we answer those questions? There are two ways. First, develop your own philosophy of life and live by it. Second, believe in some pre-existing philosophy, now encapsulated in the form of religion. In the end, it doesn’t matter. It doesn’t matter whether you believe in a bearded man, a cloth, an animal, a helicopter, or believe in nothing; whether you bow to a stone, a statue, a pyramid, the sun, or your selfie; as long as it enables you to move through your life with peace and stability regardless of what you are going through, it is valid. This is what religion is for. But we also have to be brave enough to check whether it is working or not, because if your religion or beliefs are not working for you, something has to change.
https://medium.com/from-mind-to-cosmos/the-purpose-of-religion-d0fcb3b27cd6
['The Franc Blanc']
2020-12-26 17:36:51.329000+00:00
['Life', 'Religion', 'Thoughts']
The Art of Multitasking
Ah, the “M” word. A topic of much debate in the age of information overload. Merriam-Webster defines multitasking two ways: 1) “the concurrent performance of several jobs by a computer” and 2) “the performance of multiple tasks at a time.” You don’t say… While multitasking isn’t solely a job for computers, the comparison makes a lot of sense. Think of a computer with 10 different tabs open (much like my computer at the moment). Even though all tabs are open, the computer can only focus on one at a time. Neuroscientists explain this is how our brain works when we try to tackle different tasks all at once. And oftentimes, just like our computers, there comes a point where we hit overload and freeze up. This is when our productivity takes a serious hit.

As nice as it would be to focus on one task through its completion, that isn’t realistic. If I focused on my laundry and nothing else, that would eat up my entire day. As a single mother, every moment is precious in more ways than one. Parenthood will force you to multitask for sheer survival. That’s a bit dramatic, but I think every parent can attest to the fact that multitasking is an inevitable part of life. If I don’t pick up the toys on the ground with my monkey toes while feeding my infant, then they are going to stay on the floor and I will most likely step on them later. I am here to tell you that it is possible to focus on tasks and juggle them at the same time. Here are eight ways I avoid incessant task-switching and actually manage to stay on top of life.

1. To-do lists are my best friend

Am I the only one that will literally lie awake trying to remember everything I have to do the following day? My mind will not stop until I get it all out on paper. Not only does it unclutter your brain, but it allows you to reflect on and prioritize obligations.

2. Prioritize and minimize

Lists help me see which tasks need to be completed immediately and which ones can be pushed back if I’m beginning to feel overloaded. Refrain from continuously pushing back the same tasks. If you keep procrastinating on a task, make sure it’s even something worth your time and energy. Don’t be afraid to say no to obligations that aren’t helping you complete your goals.

3. Multitask common tasks

Save the multitasking for everyday activities and try to focus on novel tasks independently. Talking on the phone while grocery shopping may be slightly distracting, but not nearly as much as talking on the phone while trying to put together brand new IKEA furniture. Try listening to new music while cleaning your house; maybe not while working on that new project at work.

4. Organization is key

You may have 301 things on your to-do list, but something about it being typed or written neatly brings ease to the mind. I often rewrite to-do lists once the scratched-out writing and arrows overwhelm the actual words. You don’t have to take it to my OCD level, but keeping your tasks legible and clear can definitely help ease stress. Same goes for your house and office space. Cluttered house, cluttered mind!

5. Celebrate the small victories

If you’re like me, you find deep satisfaction in crossing things off your to-do list. Sometimes I write small tasks like “take out the garbage” or “make the bed” or “brush your teeth” so I start the day off feeling like I’ve accomplished something. I’ve found the hardest part is getting the productivity momentum going, but once the ball is rolling it can be hard to stop.

6. Break up larger tasks into smaller goals

This makes them more feasible.
If I need to complete a project for a client, I separate the task into four categories: research, outline, write, and edit. This allows me to work on several projects at once without eating up time trying to figure out where I left off.

7. Don’t beat yourself up

It’s okay if you didn’t get to something on your list today. The fact of the matter is there is always tomorrow. This isn’t an excuse to procrastinate but a pass to be human. Be forgiving with yourself. Humans were not made to be on the go all the time like society makes us believe. Negative self-talk does nothing but hurt productivity. Be gentle with yourself; you are doing the best you can.

8. Take some time to do absolutely nothing

We do what we can, but don’t forget to relax a little each day too. Take your dog on a walk and leave the phone, eat and actually taste your food, take a shower and focus on nothing but the water. Taking five minutes out of your day to silence the noise will do wonders for your psyche.
https://medium.com/@shelbyleighbuchanan/the-art-of-multitasking-4592bd007a93
[]
2019-01-10 22:48:08.016000+00:00
['Parenthood', 'Psychology', 'Lifehacks', 'Multitasking', 'Productivity']
Conversation
Conversation

“What people are mobilised by is dignity. They want status and a place in life.” Esther Duflo talks with Matthew Taylor about how economists can regain the public’s trust

Matthew Taylor: What was the core purpose for you and Abhijit Banerjee in writing the book?

Esther Duflo: We realised that many of the core debates that people were having in western Europe and in the US were fundamentally around economic issues, or at least issues that had a lot of economics in them, such as immigration, trade, or Brexit for that matter. Yet economists seemed to have no place in these conversations, and we thought we should try and do something about it. Not to give people answers necessarily, but at least to show a different way to reason about these things rather than just going at it with emotions and ideology. We wrote this book in the hope that the conversation could improve.

Taylor: One of the arguments you make in your book is that we have a pretty deep disposition to be biased against people who are not like us. And that just seems to be a human characteristic. On the other hand, it doesn’t take much for us to be snapped out of that; it is who we are, but it isn’t our fate.

Duflo: We are very quick to define the other. There was this experiment where kids were sent to an island and divided into two groups. These developed bonds and when the two groups were put back together they competed like crazy. That’s the bad thing: we’re quick to make friends and to define an enemy. But there are two hopeful messages. One is that it’s completely arbitrary who you decide your friends are and therefore it doesn’t have to be attached to a strong label like race or religion. The second is that when the children were faced with a challenge that required them to work together they overcame this animosity.

Taylor: There’s a lot in your book about wanting to use a sophisticated, progressive understanding of economics to counter polarisation. But you look around the world and you just think, people don’t seem interested in facts. So that is quite a leap of faith.

Duflo: I still hope that it is possible. The current situation is, in large part, the result of a failure of economists and, to some extent, the rest of the social sciences and intellectual class, in trying to communicate with the broader public. Economists are the least trusted experts about their own field of expertise. There was a YouGov poll in the UK that showed 25% of people believe economists when they talk about economics. That’s the lowest possible level of trust save for politicians, who are not the most popular figures in the UK at the moment. We repeated the same survey in the US and we found exactly the same answer. People are not willing to listen to economists but they will listen to facts in other sectors; for example, they trust doctors and nurses. They even trust weather forecasters. It’s not that people have abandoned reason in general. It’s more that economics has gone down an ideological path, taking the assumptions that power our models as assumptions that are true in the world, even when that isn’t the case. And frankly, for many years economists sold people a bill of goods in terms of how simple things were and where their self-interest lay. They said you have to tighten your belt because eventually it will come back to you in some form of trickle-down magic; naturally, at some point people said well, where is it?
Where is the trickle-down? It’s understandable that they have no interest in listening to economists. On most issues, economists and people entirely disagree. Sometimes it’s because facts are missing in a conversation and sometimes it’s because economists have a blind spot. What we’re trying to do in the book is to lay aside all-knowing expertise and say let’s have a conversation, let’s try to see what the facts are and then try to understand why the facts are the way they are.

Taylor: This sounds like an attempt to help save economics from itself. But what struck me reading the book, as a sociologist, was academic polarisation. No psychologist would ever have subscribed to the model of human motivation that economists struggled under for decades and is still the orthodoxy in economics.

Duflo: Some disciplinary strengths and approaches are fine to maintain, but I think it’s important for the disciplines to talk to and learn from one another. I’m a voracious reader, I trained as a social scientist more broadly and as a historian before becoming an economist. All of these disciplines would benefit from doing joint projects as opposed to just reading each other.

Taylor: You’ve been studying human behaviour in fine-grain detail for years now. What has surprised you?

Duflo: One of the core tenets of our work has been that we need to beware of intuition. Economists’ intuitions are usually wrong, as are most people’s. Whenever you have an intuition that something might work or make a difference, or people may behave this way or that way, you have to test it. That’s why we developed the method of doing randomised controlled trials. It gives you a strong test of whatever you want to find out. But you cannot always do them; in the book we rely on other types of evidence as well. One core thing that runs counter to many people’s intuition, both economists and non-economists, is that people are much less sensitive to financial incentives than we think they are. A lot of economists say if there is a better job people will move to take it, or if taxes go up people will stop working because work will not pay as much. You name it, look at any economic policy and usually some economist is asking what the financial incentives are. One thing we discovered in our work in the developing world, but for which you also find empirical evidence in western societies, is that people are much less sensitive to financial incentives than is commonly thought. Often what makes them tick is something different.

Taylor: The way in which we think about the poor is critical. You talk about a Victorian mindset, which is that the poor are squalid and dirty and immoral and their problem is a moral problem, and that is then overlaid with a neoclassical Homo economicus view telling us that everything people do is a reflection of their preferences in a world of rational utility maximisation.

Duflo: The Victorian mindset gets a booster shot in the Reagan/Thatcher era, which was very much fuelled by neoclassical economics in its most bearable version, which again was about incentive. The welfare queen of Reagan is in a sense the neoclassical version of the Victorian dirty poor. And that has stayed with us in a very persistent way, because after all who reformed welfare in the US? It wasn’t Reagan, although he had the rhetoric, it was Clinton. This means that it was under a Democratic administration in the US that we ended welfare as we know it.
Paradoxically, we’ve known in the US for years that people are not that sensitive to financial incentives. In the early 1970s, they ran a series of experiments that gave people a guaranteed income. They had a fixed transfer that was taxed away as they earned more. You would imagine that the poor therefore faced a strong disincentive to work, but it had almost no impact on the labour supply. This is what all the reports at the time concluded. But it’s something that economists have managed to keep to themselves somehow; the dirty little secret of economics. People don’t act like economists think they act and they don’t think like economists think. But to some extent they have drunk some of the magic Kool-Aid; they think that other people are sensitive to incentives.

We did an interesting experiment for the book. We interviewed 10,000 people and we asked 5,000 of them, randomly selected, the following question: if there was a fixed guaranteed universal basic income (UBI) of US$13,000 a year, would you stop working or would you work less? To the other half we asked: if there was a UBI of US$13,000 a year, would others stop working, or would they work less? When you ask people about themselves they say no, of course not, I would continue to work. But when you ask them about other people, they think others will stop working. That has made the politics very complicated. You cannot really trust the economy to go with the flow and adjust itself when there’s a shock. People are not just going to move elsewhere; they need help. We should try to change the image of welfare so it is not seen as something that is there to punish you for being poor but instead is there to thank you for being the victim of the disruption on behalf of the rest of us. The problem is that even though that’s the rational thing to do, the politics of it are hard.

Taylor: Are you completely rejecting that story of conditionality and structure?

Duflo: Let me separate the economics and the politics. I’m not naïve; I understand the need to keep the politics on board. On the pure economics, conditional cash transfer options have been repeated in many countries and they’ve been very successful in increasing children’s education and health. In many cases, people need more than money; they need structured help to achieve their goals. My view on pure UBI in a country like the US, or the UK, or in France, is that it’s probably not what people need or want. What people are mobilised by is dignity. They want status and a place in life. For a lot of people, this sense of self-respect comes from doing a meaningful job.

Taylor: A challenge is that your work requires years and years to be able to test whether things work, but politicians have a very short time frame. Does policy work in the modern world? What do we have to do differently to make policy have a better hit rate?

Duflo: You’re right to point out there’s a crisis of legitimacy of government in general. Again, I would blame economists. You’ve got Milton Friedman saying I’ve never seen a tax cut I don’t like, or government is not the solution, government is the problem. Many economists have absorbed this as a mantra. We need to rebuild the legitimacy of government in the eyes of the public. Our approach of doing experiments takes a long time but the advantage is we’re not starting from scratch. We have a whole pot of experiments that one could draw on to get started.
But there is always a bit of an effort in encouraging innovation and new ways of thinking, and that’s why we set up the Poverty Action Lab as an institution, or J-PAL, which is the name of our network. We realised we can do the research and talk about it until we’re blue in the face, but to get buy-in in policymaking and on the ground takes a lot of actual groundwork. It’s patient work, it’s not going to happen overnight, but I see no reason to be discouraged.

This article first appeared in the RSA Journal Issue 4 2020
https://medium.com/rsa-journal/conversation-1e6a1fb8a227
['The Rsa']
2020-02-27 16:18:00.538000+00:00
['Distrust', 'Economist']
Dashboard Using R Shiny Application
I decided to look at the change in patterns related to terrorism from 1970 to 2017 using the Kaggle dataset. I know the data is sad, but it is interesting to work with.

About the dataset: The Global Terrorism dataset is an open-source database which provides data on terrorist attacks from 1970 to 2017.

Geography: worldwide
Time period: 1970–2017, except 1993
Unit of analysis: attack
Variables: >100 variables on location, tactics, perpetrators, targets, and outcomes
Sources: unclassified media articles

(Note: Please interpret changes over time with caution. Global patterns are driven by diverse trends in particular regions, and data collection is influenced by fluctuations in access to media coverage over both time and place.)

Overview of the R Shiny application: Before we dive into how a dashboard is created, we should understand a little bit about the Shiny application. Shiny is an open-source R package that provides an elegant and powerful web framework for building web applications using R. In the RStudio environment, you create a new file and select the Shiny web application type. To begin, we should install some packages if they are not installed already: shiny, shinydashboard, and leaflet.

Shiny Dashboard: It is really important to know the structure and principles on which the dashboard is built. The user interface (UI) part of the dashboard consists of three parts: header, sidebar, and body.

Header: The header, as the name suggests, provides the title of the dashboard. You use the dashboardHeader function to create it, defining what should be written in the heading of the dashboard.

Sidebar: The sidebar appears on the left side of the dashboard. It is like a menu bar with items for the user to select from. In my dashboard I created two menu items: one is a tab named "Dashboard" and the other is a link to the Kaggle dataset, which opens the source of the data when clicked.

Body: This is the most important part of the dashboard, as it contains all the charts, graphs and maps which you want to visualize. It consists of rows, called fluidRows, which determine where your content appears. Inside a fluidRow you place boxes and specify the options you want to offer in each box, for example with the selectInput function. My dashboard has a total of three rows; the original post shows the code and visuals for row number two. Combining all the rows makes the body for the dashboard, using the dashboardBody function.

Creating server functions: The server function tells the Shiny app how to build the objects. It creates the output(s), containing all the code needed to update the objects in the app. Each output must be created with a render function, which tells Shiny what kind of output is required. Some of the render functions are below:

renderDataTable → DataTable
renderImage → images (saved as a link to a source file)
renderPlot → plots
renderPrint → any printed output
renderTable → data frame, matrix, other table-like structures
renderText → character strings
renderUI → a Shiny tag object or HTML

Once you are done writing the code within the server function, the last step is to run the shinyApp.
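Since the code screenshots from the original post do not reproduce here, the following is a minimal sketch in R of the structure just described: a header, a sidebar with a Dashboard tab and a link to the data source, a body of fluidRows with boxes, and a server using renderPlot. The widget names and the plotted placeholder data are illustrative assumptions, not the author's exact code:

```r
library(shiny)
library(shinydashboard)

# Header: provides the dashboard title
header <- dashboardHeader(title = "Global Terrorism 1970-2017")

# Sidebar: a Dashboard tab plus an external link to the data source
sidebar <- dashboardSidebar(
  sidebarMenu(
    menuItem("Dashboard", tabName = "dashboard", icon = icon("dashboard")),
    menuItem("Source: Kaggle", icon = icon("database"),
             href = "https://www.kaggle.com/START-UMD/gtd")
  )
)

# Body: fluidRows hold boxes; selectInput gives the user choices
body <- dashboardBody(
  tabItems(
    tabItem(tabName = "dashboard",
      fluidRow(
        box(selectInput("region", "Region:",
                        choices = c("Worldwide", "South Asia", "Middle East"))),
        box(plotOutput("attacksPlot"))
      )
    )
  )
)

ui <- dashboardPage(header, sidebar, body)

# Server: each output is built with a render function (here renderPlot)
server <- function(input, output) {
  output$attacksPlot <- renderPlot({
    # Illustrative placeholder data; the real app plots the GTD dataset
    plot(1970:2017, rnorm(48, mean = 100, sd = 20), type = "l",
         main = input$region, xlab = "Year", ylab = "Attacks")
  })
}

# Last step: run the app
shinyApp(ui, server)
```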
(Screenshot: the final dashboard output.) Once this dashboard was created, I deployed it online; you can open it on your computer as well as on your smartphone. You can find the code and the data file for this dashboard on my GitHub. Sources used:
https://medium.com/analytics-vidhya/dashboard-using-r-shiny-application-a2c846b0bc99
['Hassaan Ahmed']
2020-07-10 13:58:28.169000+00:00
['Dashboard', 'Rstudio', 'Shiny', 'Kaggle', 'Data Visualization']
Smart cities are the future of resilient, equitable societies
In Singapore, a robot dog roams the streets making sure people maintain social distancing. In England, a man asks his smartphone for directions to the nearest train station. In Australia, drones scour the countryside counting koala populations in the wake of a bushfire. These are just a few examples of the myriad ways smart cities are designed and implemented to improve our lives and the health of the environment around us. Associate Professor Tan Yigitcanlar from QUT’s School of Built Environment specialises in smart city research. “These visible signposts of smart cities are really just the tip of the iceberg when it comes to what a smart city really means,” Yigitcanlar said. “Beneath the water lies an immense quantity of data, predictive algorithms and powerful analytics that drive our cities to be more than just bricks and mortar.” It’s about prediction and planning While the term “smart cities” may bring to mind futuristic visions of flying cars and fully automated civic services, the reality of a smart city lies in the predictive power and algorithmic planning that allows that automation and artificial intelligence (AI) to happen smoothly. “Digital technology and data are part of the intrinsic fabric of our cities’ urban systems and infrastructure, driving efficiencies and improving quality of life for those who live within the city,” Yigitcanlar said. “We completed a study of 180 Australian local governments — covering two thirds of Australia’s population — and found that 35% of them qualified as smart cities. And they’re rapidly becoming smarter.” By using interlinked systems of data analysis and prediction, smart cities have the potential to create resilient and robust cities in the face of natural disasters and disease pandemics. “Let’s look at Covid-19 as an example: a smart city knows about hospital capacities, healthcare capabilities, population numbers and locations, and trends for infection and disease morbidity and mortality,” Yigitcanlar said. “The city’s systems can understand the limits of healthcare infrastructure, and model what will happen if those limits are surpassed, and can also predict methods to mitigate that collapse. “It can also measure public perception about the disease, leadership response to the pandemic, any public health measures put in place to protect the population.” This wealth of information about our cities and their inhabitants can also predict behaviours and impacts during a severe weather event. If a cyclone is predicted to hit northern Queensland, a smart city will have data on how many houses have cyclone-proof roofs, previous data from similar weather events and the damage caused, information about crisis centres, evacuation plans, tracking of emergency foods and water supplies, and cost analyses of repair and clean-up from damage. “Experts can interpret these predictions and models and decide on the best courses of action to create the best outcomes for a city and its population,” Yigitcanlar said.
https://medium.com/thelabs/smart-cities-are-the-future-of-resilient-equitable-societies-76115df1db46
['The Labs']
2020-11-26 03:00:08.059000+00:00
['Big Data', 'Data Science', 'Cities', 'Urban Planning', 'Artificial Intelligence']
How you should engage your new subscribers
Building your own mailing list is something that all businesses should do. However, your list is only useful to you if you engage with the people on it — and that’s where many businesses fall flat. They understand it’s essential to build a list of interested people, but then proceed to make two critical mistakes. The first is sending only promotional emails; the second is failing to engage with their new subscribers. If you want to engage your new subscribers, you do need to stay in touch! It’s relationship building, not a race! As much as we’d all like instant sales as soon as someone signs up, it isn’t always the case. Some people need a little time to get to know you and your business before they make a decision. Others may have followed you before they’re at the stage where they need your services. Whilst some may simply have signed up to get your latest freebie. The key point to remember here is that it isn’t a race. There is no maximum time they can stay on your list before you decide they’re not going to buy. It’s your job to build a relationship with the people on your list and engage them in that process wherever possible. And when they’re ready, they will buy. Your new subscribers need to know, like and trust you When you automate your lead generation, you can set up a lead nurture sequence to build in some know, like and trust elements. Initially, you’ll follow the lead nurture sequence tips in the next section below. You can then continue the nurturing process by regularly staying in touch with your subscribers. Your nurture sequence follows on from your lead nurture sequence, and it’s for showcasing your best resources, sharing advice and tips with them and keeping them updated on your latest offers etc. This nurture process continues for as long as you like. Some businesses will have a year-long nurture sequence that they just keep topped up with new emails and products etc. The lead nurture sequence The lead nurture sequence is one of the most important email sequences you need to have in place. Typically, it’s 5–7 emails long and sets expectations. You’ll want to use the nurture sequence to show subscribers their next steps and to remind them why it’s a good idea to stay on your list. This is where your know, like and trust elements come in. Use your nurture sequence to help engage your new subscribers. So look to use those emails to: Let them know what to expect, now they’re on your list The first emails you send out need to reassure subscribers that they made the right choice in signing up to your list. If you offered them an opt-in incentive, make sure you send it within the first email. You can then use the next email or two, letting them know how often you will email them, the type of content you will send, and sharing a bit of your business story with them. Show them how to find your best resources Use a couple of your lead nurture sequence emails to point them towards your best resources. You could use one to signpost your best blogs, and another to give them the different options for working with you. Encourage your new subscribers to engage by using a call to action on each email. Sharing tips and advice that will benefit them is a great way to get subscribers engaged. You can then encourage them to engage more by asking them to do something on each email. This could be to follow you on your primary platform, join your group, or simply hit reply and answer a question. These little ‘micro-actions’ help get them used to completing an action when you email them.
Building your own mailing list takes time and effort, but it doesn’t have to be complicated. To set up your sequence, check out How to set up a lead nurture email sequence. You can then use the tips above to ensure you engage with your new subscribers regularly. So over to you! Schedule in a slot in your calendar to work on your lead nurture sequence and get it uploaded into your email marketing platform. If you haven’t decided which platform to use, I’d recommend Mailerlite — and I’ve even created some free training to walk you through it and show you the steps you need to take, to set it up! You can opt into that free training, here.
https://medium.com/@lisa-pierce/how-you-should-engage-your-new-subscribers-533da76fb4d5
['Lisa Pierce']
2020-12-10 15:25:23.112000+00:00
['Engagement', 'Email Marketing', 'Lead Generation', 'Mailing Lists', 'Email Marketing Tips']
Tackling the Small Object Problem in Object Detection
Tackling the Small Object Problem in Object Detection Note: we have also published Tackling the Small Object Problem on our blog. Detecting small objects is one of the most challenging and important problems in computer vision. In this post, we will discuss some of the strategies we have developed at Roboflow by iterating on hundreds of small object detection models. Small objects as seen from above by drone in the public aerial maritime dataset To improve your model’s performance on small objects, we recommend the following techniques: If you prefer video, I have also recorded a discussion of this post Why is the Small Object Problem Hard? The small object problem plagues object detection models worldwide. Not buying it? Check the COCO evaluation results for recent state-of-the-art models YOLOv3, EfficientDet, and YOLOv4: Check out AP_S, AP_M, AP_L for state-of-the-art models. Small objects are hard! (cite) In EfficientDet, for example, AP on small objects is only 12%, compared with an AP of 51% for large objects. That is almost a fivefold difference! So why is detecting small objects so hard? It all comes down to the model. Object detection models form features by aggregating pixels in convolutional layers. Feature aggregation for object detection in PP-YOLO And at the end of the network a prediction is made based on a loss function, which sums up across pixels based on the difference between prediction and ground truth. The loss function in YOLO If the ground truth box is not large, the signal will be small during training. Furthermore, small objects are the most likely to have data labeling errors, where their identification may be omitted. Empirically and theoretically, small objects are hard. Increasing your image capture resolution Resolution, resolution, resolution… it is all about resolution. Very small objects may contain only a few pixels within the bounding box — meaning it is very important to increase the resolution of your images to increase the richness of features that your detector can form from that small box. Therefore, we suggest capturing images at as high a resolution as possible, if possible. Increasing your model’s input resolution Once you have your images at higher resolution, you can scale up your model’s input resolution. Warning: this will result in a large model that takes longer to train, and will be slower to infer when you start deployment. You may have to run experiments to find the right tradeoff of speed with performance. You can easily scale your input resolution in our tutorial on training YOLOv4 by changing the image size in the config file.

[net]
batch=64
subdivisions=36
width={YOUR RESOLUTION WIDTH HERE}
height={YOUR RESOLUTION HEIGHT HERE}
channels=3
momentum=0.949
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue = .1
learning_rate=0.001
burn_in=1000
max_batches=6000
policy=steps
steps=4800.0,5400.0
scales=.1,.1

You can also easily scale your input resolution in our tutorial on how to train YOLOv5 by changing the image size parameter in the training command:

!python train.py --img {YOUR RESOLUTION SIZE HERE} --batch 16 --epochs 10 --data '../data.yaml' --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache

Note: you will only see improved results up to the maximum resolution of your training data. Tiling your images Another great tactic for detecting small objects is to tile your images as a preprocessing step (a minimal sketch of tiling appears at the end of this post).
Tiling effectively zooms your detector in on small objects, but allows you to keep the small input resolution you need in order to be able to run fast inference. Tiling images as a preprocessing step in Roboflow If you use tiling during training, it is important to remember that you will also need to tile your images at inference time. Generating More Data Via Augmentation Data augmentation generates new images from your base dataset. This can be very useful to prevent your model from overfitting to the training set. Some especially useful augmentations for small object detection include random crop, random rotation, and mosaic augmentation. Auto Learning Model Anchors Anchor boxes are prototypical bounding boxes that your model learns to predict in relation to. That said, anchor boxes can be preset and sometimes suboptimal for your training data. It is good to custom tune these to your task at hand. Thankfully, the YOLOv5 model architecture does this for you automatically based on your custom data. All you have to do is kick off training.

Analyzing anchors... anchors/target = 4.66, Best Possible Recall (BPR) = 0.9675. Attempting to generate improved anchors, please wait...
WARNING: Extremely small objects found. 35 of 1664 labels are < 3 pixels in width or height.
Running kmeans for 9 anchors on 1664 points...
thr=0.25: 0.9477 best possible recall, 4.95 anchors past thr
n=9, img_size=416, metric_all=0.317/0.665-mean/best, past_thr=0.465-mean: 18,24, 65,37, 35,68, 46,135, 152,54, 99,109, 66,218, 220,128, 169,228
Evolving anchors with Genetic Algorithm: fitness = 0.6825: 100%|██████████| 1000/1000 [00:00<00:00, 1081.71it/s]
thr=0.25: 0.9627 best possible recall, 5.32 anchors past thr
n=9, img_size=416, metric_all=0.338/0.688-mean/best, past_thr=0.476-mean: 13,20, 41,32, 26,55, 46,72, 122,57, 86,102, 58,152, 161,120, 165,204

Filtering Out Extraneous Classes Class management is an important technique to improve the quality of your dataset. If you have one class that is significantly overlapping with another class, you should filter this class from your dataset. Or perhaps you decide that the small object in your dataset is not worth detecting, so you may want to take it out. You can quickly identify all of these issues with the Advanced Dataset Health Check that is a part of Roboflow Pro. Class omission and class renaming are both possible through Roboflow’s ontology management tools. Conclusion Properly detecting small objects is truly a challenge. In this post, we have discussed a few strategies for improving your small object detector, namely: As always, happy detecting!
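As promised, a minimal sketch of tiling as a preprocessing step (a generic illustration, not Roboflow's implementation; the tile size is arbitrary, and your box labels must be cropped and clipped the same way to stay aligned):

from PIL import Image

def tile_image(path, tile_size=416):
    # Cut one image into fixed-size tiles, edge tiles may be smaller
    img = Image.open(path)
    w, h = img.size
    tiles = []
    for top in range(0, h, tile_size):
        for left in range(0, w, tile_size):
            box = (left, top, min(left + tile_size, w), min(top + tile_size, h))
            tiles.append(img.crop(box))
    return tiles

# Each tile is then fed to the detector at the original (small) input
# resolution; remember to tile the same way at inference time.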
https://towardsdatascience.com/tackling-the-small-object-problem-in-object-detection-6e1c9976ee69
['Jacob Solawetz']
2020-10-12 14:31:18.087000+00:00
['Artificial Intelligence', 'Data Science', 'Object Detection', 'Computer Vision', 'Deep Learning']
Just for Kicks
Just for Kicks Poem Photo by Andriyko Podilnyk on Unsplash I bloomed late and into Frigid winter, petals of Disdain and years of Longing On my lips, I Kiss too much and Don’t reflect the me I found on Lonely moon-washed Walks, and Inside I am still the Shovel and the Sand, inside I am Still unsure of What I mean when I say Love but I am Getting closer every time I fall or at least Each time they Do I Can’t help but hold The fragile parts of People They feel Just like Home, Like sipping gin From glasses made of my Own breaks, and I Have learned that we all Live at least half Shattered, and I have learned to listen Not to speak and when The time comes I cut Through my fingers just For kicks, Bleed the same Nostalgic red Along the newly-unfurled Leaves Bradon Matthews
https://medium.com/scrittura/just-for-kicks-877b44512cc
['Bradon Matthews']
2020-12-10 04:07:02.553000+00:00
['Growing Up', 'Poetry', 'Scrittura', 'Love', 'Nostalgia']
GOOD RIDDANCE, 2020
It’s a running joke now (but not really a joke) that 2020 has been THE ABSOLUTE MOST. I mean, the year began with apocalyptic fires in Australia and is on track to end with another surge of cases in the deadliest pandemic since the 1918 Spanish influenza. Between all this, we faced the death of Kobe Bryant, the weird royal exit of Prince Harry and Meghan Markle, a stock market crash and global recession, violent (and peaceful) protests in response to racial injustice, MURDER HORNETS, the Beirut explosion, MORE wildfires along the West Coast, the deaths of Ruth Bader Ginsburg, Chadwick Boseman and Eddie Van Halen, devastating hurricanes along the Gulf Coast, and the whole Ghislaine Maxwell ordeal. Not to mention we’ve all been fronting our own personal issues throughout this whirlwind of a year. CAN WE ALL CATCH A COLLECTIVE BREAK, PLEASE? Okay, okay. Rant over. Thanks for reading. The real reason we’re writing to you right now is to let you know that while 2020 has beaten us down and worn us out, we have decided to tackle 2021 head-on. We know next year will bring circumstances we can’t control and we will face more hardships and possibly setbacks. But we’re ready to tackle those things, because we’re tired of being pushed around by 2020. We’re making 2021 different. And we’re doing this by focusing on the small things we CAN control every day. We can’t control whether or not the murder hornets come back or if more natural disasters wreak havoc on our planet. But we CAN control what we do when we wake up. We can control how much time we spend doom-scrolling. We can control how often we exercise, what we eat, how much water we drink, how much sleep we get, how we spend our spare time, and who we interact with. And these things — though small and seemingly insignificant among the insanity that has been 2020 — will make all the difference in our ability to handle hardship, overcome setbacks, and continuously move forward with purpose. You can do this, too. If you’d like to take the thought and planning out of just one aspect of all this, we invite you to download our soon-to-launch, completely done-for-you workout and lifestyle programs. Set to launch December 14, these programs encompass much more than just fitness. We lay out lifestyle guidelines that will help you build the habits you need to tackle 2021 — and every other year — head-on. You can read about our programs here, but if you have any questions, simply respond to this email and we’ll get back to you. We hope you’ll join us on our journey in 2021!
https://medium.com/smarter-sweat/good-riddance-2020-fd9aaa5a996
['Smarter Sweat']
2020-12-02 01:02:32.623000+00:00
['New Years Resolutions', 'Lifestyle']
Why Do We Feel Sad During The Cold Season? Winter Blues Is A Serious Mood Disorder
Feeling low? Low on energy? Disinterested in doing things? Does the onset of winter seem to be ‘upsetting’ you? Be assured that you’re not alone. In India, more than 10 million people experience similar symptoms and ailments which are often self-diagnosable as Seasonal Affective Disorder or SAD. It’s not necessarily something to worry about, but if your symptoms crop up around the same time each year, have a real impact on your quality of life, and improve when the season changes, you may have Seasonal Affective Disorder (SAD). Most often the symptoms of SAD are similar to milder depression, hence it’s often called seasonal depression as well. SAD usually recurs around the same time each year, usually during winter or in summer. SAD is distinguished from depression by the remission of symptoms in the spring and summer months (or winter and fall in the case of summer SAD). Common symptoms include: Depressed mood, low self-esteem Loss of interest or pleasure in activities you used to enjoy Appetite and weight changes Feeling angry, irritable, stressed, or anxious Unexplained aches and pains Changes in sleeping pattern Difficulty concentrating Fatigue and lack of energy; reduced sex drive Use of drugs or alcohol for comfort Feelings of sadness, hopelessness, and despair Like depression, the severity of SAD symptoms can vary from person to person, depending on genetic vulnerability and geographic location. While the exact cause of SAD is unknown, most theories suggest that it is due to the reduction of daylight hours in winter. The shorter days and reduced exposure to sunlight in winter affect the body by disrupting its natural circadian rhythms. Also, in winter, your body produces too much melatonin, leaving you feeling drowsy and low on energy. Similarly, reduced sunlight lowers the body’s production of serotonin, a neurotransmitter that helps to regulate mood. A deficit may lead to depression and adversely affect your sleep, appetite, memory, and sexual desire. Risk factors Seasonal affective disorder can affect anyone but is more common in people living in colder regions, with less sunlight in the winter and longer days during the summer. Other risk factors include: Gender. Women are four times more likely to experience symptoms of SAD than men; however, men often experience more severe symptoms. Age. In most cases, young adults or people aged 18 to 30 years are more likely to experience SAD. Your family history. Having relatives who’ve experienced SAD or another type of depression puts you at greater risk. Self-help strategies to combat SAD Get as much natural sunlight as possible. Whenever possible, get outside during daylight hours and expose yourself to the sun. Sunlight, even in the small doses that winter allows, can help boost serotonin levels and improve your mood. Simultaneously increase the amount of natural light in your home and workplace by opening blinds and drapes and sitting near windows. Stick to a schedule. Keeping a regular schedule will expose you to light at consistent and predictable times. Get active and spend more time outdoors. Take a short walk outdoors, have your coffee outside if you can stay warm enough.
Regular physical activity is a powerful way to fight seasonal depression, especially if you’re able to exercise outside in natural sunlight. Regular exercise can boost serotonin, endorphins, and other feel-good brain chemicals. Eat the right diet. Eating small, well-balanced meals throughout the day, with plenty of fresh fruit and vegetables, will help you keep your energy up and minimize mood swings. Practice relaxation techniques. This will help you to manage stress, reduce negative emotions such as anger and fear, and boost feelings of joy and well-being. Do something you enjoy (or used to) every day. While you can’t force yourself to have fun or experience pleasure, you can push yourself to do things, even when you don’t feel like it. You might be surprised at how much better you feel once you’re out and about. Stay connected. Call or connect with your friends virtually or in-person, participate in social activities, even if you don’t feel like it. Being around people will boost your mood. Volunteer your time to help others, meet new people with a common interest by taking online classes, joining different clubs, or enrolling in a special interest group that meets on a regular basis. Try aromatherapy. Good smells and essential oils influence the area of the brain that’s responsible for controlling moods and the body’s internal clock that influences sleep and appetite. Maintain a journal. Writing down your thoughts can have a positive effect on your mood. It can help you get some of your negative feelings out of your system. Plan on writing for about 20 minutes on most days of the week, maybe at night when you can reflect on what happened throughout the day. Seek professional help. The most common treatment for SAD is light therapy, using a lightbox or dawn simulators for 20–60 minutes daily. Other treatments include medication (antidepressants and SSRIs) and psychotherapy with mental health professionals. Depression, whether it’s seasonal or not, isn’t something that you have to deal with on your own. Download the MFine app and talk to a mental health professional online today. Your consultations will be instant, secure and private.
https://medium.com/@health-87073/why-do-we-feel-sad-during-the-cold-season-winter-blues-is-a-serious-mood-disorder-9f258f888655
[]
2020-12-18 04:33:08.727000+00:00
['Sad', 'Mental Health Awareness', 'Winter', 'Mental Health']
Too Small Data — Solving Small Files issue using Spark
I am pretty sure you have come across the small files issue while working with big data frameworks like Spark, Hive, etc. What is a small file? A small file can be defined as a data file which is considerably smaller than the default block size of the underlying file system (e.g. 128MB by default in CDH). Sources of small files? These small files originate from different data sources as well as from data processing jobs. Problems due to small files In addition to creating inefficient storage (particularly in HDFS), small files significantly hurt the compute performance of a job. The reason is that small files result in more disk seeks while running computations through these compute engines. For example, we know that in Spark, the task within an executor reads and processes one partition at a time. Each partition is one block by default. Hence, a single concurrent task can run for every partition in a Spark RDD. This means that if you have a lot of small files, each file is read in a different partition and this will cause a substantial task scheduling overhead compounded by lower throughput per CPU core. Solution The solution to these problems is threefold. First, try to stop the root cause. Second, identify the locations of these small files and how many there are. Finally, compact the small files into larger files equivalent to the block size, or to an efficient partition size for the processing framework. For avoiding small files in the first place, make sure the processes producing the data are well-tuned. In case the process is Spark, make sure you use repartition() or coalesce(), or set the Spark shuffle partitions property effectively, to avoid the small files issue in the first place. If things are already bad, let's try to understand how to identify these small files. We will consider HDFS as the storage framework here. The shell script below can be run, with list privileges on the base path, to identify where the small files are. Now that final_list.csv has the directories at depth level N and the respective number of small files in each, sorted by that number in decreasing order, we can very well run a Spark job to concatenate these small files as an offline process. Below is a snippet of the same. This code snippet, written in Scala, merges the small files into larger files by dynamically calculating the size of the target directory using the bucket_cal function. Hope it helps; do let me know in case of any questions.
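Since the original snippet was shared as an image, here is a minimal Scala sketch of the compaction step, assuming Parquet input, a 128 MB target file size, and placeholder paths (the directory-sizing here plays the role of the bucket_cal function):

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("CompactSmallFiles").getOrCreate()

val inputDir = "/data/events/dt=2020-12-01"  // hypothetical path
val outputDir = inputDir + "_compacted"
val targetFileBytes = 128L * 1024 * 1024     // match the HDFS block size

// Size the target partition count from the directory's total bytes
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val totalBytes = fs.getContentSummary(new Path(inputDir)).getLength
val numFiles = math.max(1, (totalBytes / targetFileBytes).toInt)

// Read the many small files and rewrite them as ~numFiles larger ones
spark.read.parquet(inputDir)
  .repartition(numFiles)
  .write.mode("overwrite").parquet(outputDir)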
https://medium.com/@sauravagarwaldigital/too-small-data-solving-small-files-issue-using-spark-b7ef66827a24
['Saurav Agarwal']
2020-12-25 16:27:33.992000+00:00
['Spark', 'Small Files']
Learning SQL the Hard Way
Joins in SQL Till now, we have learned how we can work with single tables. But in reality, we need to work with multiple tables. So, the next thing we would want to learn is how to do joins. Joins are an integral and essential part of a MySQL database, and understanding them is necessary. The below visual talks about most of the joins that exist in SQL. I usually end up using just the LEFT JOIN and INNER JOIN, so I will start with LEFT JOIN. The LEFT JOIN is used when you want to keep all the records in the left table (A) and merge B on the matching records. The records of A where B is not merged are kept as NULL in the resulting table. The MySQL syntax is:

SELECT A.col1, A.col2, B.col3, B.col4 FROM A LEFT JOIN B ON A.col2=B.col3

Here we select col1 and col2 from table A and col3 and col4 from table B. We also specify which common columns to join on using the ON statement. The INNER JOIN is used when you want to merge A and B and keep only the records common to A and B. Example: To give you a use case, let's go back to our Sakila database. Suppose we wanted to find out how many copies of each movie we have in our inventory. You can get that by using:

SELECT film_id,count(film_id) as num_copies FROM sakila.inventory GROUP BY film_id ORDER BY num_copies DESC;

Does this result look interesting? Not really. IDs don’t make sense to us humans, and if we can get the names of the movies, we would be able to process the information better. So we snoop around and see that the table film has got film_id as well as the title of the film. So we have all the data, but how do we get it in a single view? Joins come to the rescue. We need to add the title to our inventory table information. We can do this using —

SELECT A.*, B.title FROM sakila.inventory A LEFT JOIN sakila.film B ON A.film_id = B.film_id

This will add another column to your inventory table information. As you might notice, some films are in the film table that we don’t have in the inventory. We used a left join since we wanted to keep whatever is in the inventory table and join it with its corresponding counterpart in the film table, and not everything in the film table. So now we have got the title as another field in the data. This is just what we wanted, but we haven’t solved the whole puzzle yet. We want title and num_copies of the title in the inventory. But before we can go any further, we should understand the concept of inner queries first.
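For the impatient, here is one direct way to finish the puzzle using just the LEFT JOIN and GROUP BY we have already seen (the inner-query approach comes next):

SELECT B.title, COUNT(A.film_id) AS num_copies
FROM sakila.inventory A
LEFT JOIN sakila.film B ON A.film_id = B.film_id
GROUP BY B.title
ORDER BY num_copies DESC;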
https://towardsdatascience.com/learning-sql-the-hard-way-4173f11b26f1
['Rahul Agarwal']
2020-09-28 11:21:58.749000+00:00
['Sql', 'Data Science', 'Programming', 'Machine Learning', 'Productivity']
How to install React Native App developed using Expo CLI on my Phone?
For those too lazy to figure it out using the above link, just read through the steps below and then go through the link wherever necessary. It won’t be that tough. Trust me. All you need is (1) a command line and (2) an Android phone. The Expo app is not necessary, but you can see your app deployed in it as well. Steps to follow: 1. Check if your app.json is proper; most likely you won’t have an Android-based entry yet. Just add it, give a proper name to your app, etc. (a minimal sketch of these entries appears at the end of this post). 2. Give your unique icon file, so that your app looks good. If you have created a PWA before, figuring this out will not be that tough; just use that 192-width one. 3. Give a splash file as well, which will be shown when the app is loading. 4. Kick off a build as per the link, either an APK file or your chosen format, e.g. expo build:android -t apk 5. For the build to start, you need to log in to Expo. Create an account in Expo; if you already have one, it will prompt you to sign in, all from the command line. 6. Just wait for the build to get done; you can start sharing the good news 😉 7. Expo will ask for some keystore-related info; trust it to generate the keystore. 8. The build will mostly be done in 15-20 minutes; then go to the URLs given in your command line. You can exit the command line once it says the build will continue in parallel on the server. That’s it. Just see your app live in Expo or download it using the link provided in your Expo account dashboard. (While installing, Android will ask for a review of apps from unknown sources; just use “continue anyway” and skip.)
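A minimal sketch of the app.json entries from steps 1-3 (names, package id, and paths are placeholders):

{
  "expo": {
    "name": "My App",
    "slug": "my-app",
    "icon": "./assets/icon.png",
    "splash": { "image": "./assets/splash.png" },
    "android": { "package": "com.example.myapp" }
  }
}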
https://medium.com/@iamsafnas/how-to-install-react-native-app-developed-using-expo-cli-on-my-phone-cd47fd323636
[]
2020-04-22 20:24:26.395000+00:00
['First Android App', 'First Post', 'Expo Cli', 'React Native', 'Expo']
Happy Anniversary to me
About this time last year I was being wheeled into surgery. A surgery I wish I was brave enough to do years earlier. I had a sleeve gastrectomy. For those of you not sure what that is, it’s the permanent removal of about 75% of my stomach, a highly effective weight loss surgery. My only regret is not doing it decades ago. I’ve struggled with healthy weight management my whole adult life. I love food, love cooking and feeding people. Planning a dinner party, prepping all the food then watching everyone enjoy my food has always filled my heart with joy. And anyone that knows me also knows I love a drink too 😊 None of these things have changed, I still enjoy food — just tiny portions. I still enjoy a drink, just not quite as many as I used to. I was on a slew of medications for reflux, high blood pressure, chronic pain from osteoarthritis in my knees and many more. I was pre-diabetic and insulin resistant. I couldn’t walk the 400m to my bus stop without so much pain I would be holding back tears. My knees used to pop and crack so loudly when moving that people 10 metres away would cringe. Don’t even get me started on going clothes shopping. A year later, I know I could have lost more weight had I not let lockdown boredom (read baking and drinking) get the better of me but I’m still so proud of how far I’ve come and my attitude towards food, and my health in general. I’m enjoying exercise again, can move without so much pain, am off all the meds and don’t break into sweats at the thought of going clothes shopping. For years I battled a voice in my head that told me surgery was a cop out, the easy way, the cheats way to lose excess weight. Boy oh boy was I wrong. I had to learn to listen to my teeny tiny tummy and not overeat, if I did then it would come straight back out without warning. This caused a fair amount of anxiety about eating away from home! I also had to learn that I can’t drink and eat at the same time, after 47 years of having drinks with meals, I still don’t get this right all the time. I was subjected to other people blatantly judging me, purely based on my size. Feeling ‘too much’ is as bad as feeling ‘not enough’. Both are painful, shameful and basically shit. Guess what, I’m ‘just right’ — a bit like Goldilocks finding the right bed 😊 I’m so much more than the number on the scales, or the size on my clothing labels. I’ve always known that, but….. seeing the way people looked at me, hearing people talk about me or having people be outright disgustingly rude to me, it does hurt, it does take its toll. I wanted to be invisible, I didn’t want to be seen, or the centre of attention. Even well meaning unsolicited weight loss advice used to make me feel like shit. I hated having my photo taken, hated catching sight of my reflection. Don’t get me wrong, I didn’t hate what I saw when I looked in the mirror, deliberately looked, like when getting ready to leave the house. It was the accidental glimpses that I wasn’t prepared for that shocked and shamed me. The one thing other people’s attitudes and bias towards me, purely based on my size, have taught me is to be kind and not judge others. From lived experiences, especially painful ones, comes growth, understanding and empathy. Having workmates offer to drive me home well out of their way despite me being only a few bus stops away from work, having my niece or grandson want to help me get up ONE step, feeling puffed walking anywhere, getting over heated just from normal movement.
These things all built up, like a massive weight, adding to my physical weight. I was tired ALL the time, I never felt well rested. I needed to have surgery for my physical health, I was at very high risk for stroke. It finally got to the point I needed to deal with my bias about weight loss surgery and do what was necessary for me to live longer. To spend more time with my amazing husband, doing more overseas adventures (once possible and safe to do so obviously!). Playing with (or chasing) my very cheeky active grandson. Being a willing participant in life. I also needed to work on my attitude towards my body and relationship with food/alcohol. I did the work, and it started about a year prior to surgery. It was at times, very difficult and upsetting to untangle and understand my relationship with food and with myself. It was enlightening and exciting too. I know for sure, had I not done the pre-work, I absolutely would have struggled so much more mentally post-surgery. I feel really well, grateful that I had the option to do surgery, lucky that I didn’t develop diabetes, thankful that my physical and mental health are significantly better. I no longer want to fade into the background, no longer get anxious about being too big to fit, no longer unable to participate in physical activity, no longer dread going clothes shopping. I’m not done yet, and will continue to work on making better health choices, work on making my body stronger and enjoy life to its fullest. 2020 hasn’t been all bad, I’ve gained morning walks with the best man I know, we talk more. I miss my whanau back home something terrible, but at least we’re all safe and relatively healthy. I’m proud of me and can’t wait to see what 2021 holds.
https://medium.com/@mellyfromwelly/happy-anniversary-to-me-c26750a852fd
['Mel Kramer']
2020-12-19 07:21:29.263000+00:00
['Surgery', 'Healthy Lifestyle', 'Weight Loss', 'Bias']
An Introduction to GitHub
GitHub is a code hosting platform for version control and collaboration. It hosts millions of lines of code, and millions of developers use it to collaborate and to report issues with open source software. It lets you and others work together on projects from anywhere. Ah! Version control! Quite a heavy term, yeah? Let me help you to understand it. Version control is basically a tracker for what you have done so far, say, to your code. The code which is stored in GitHub keeps changing as more code is added. Also, many developers can add code in parallel. So a version control system helps in handling this by maintaining a history of what changes have happened. Also, Git provides features like branches and merges, which show the actual tree-like structure. Though anyone can use GitHub to keep track of the changes they are making in their files, most developers use it either to host their code or to collaborate with other developers. Here, you will get to know about some of the key features of GitHub. Some important terms you will learn later in this article are: Repository Commit Branches Master Merge Pull Request Fork GitHub Issues Let’s study these in more detail here: Repository: A directory or storage space where your projects can live. Sometimes GitHub users shorten this to “repo.” It is like a folder for your project. It contains all the files for your project and stores each file’s revision history. You can own repositories individually, or you can share ownership of repositories with other people in an organization. What does that mean? This basically means that a repo can be public or private. You all know what public and private mean, but let us revise it again. A public repo is visible to everyone. A private repository can be viewed and contributed to by the owner only. Also, one can have an unlimited number of repositories. Commit: In normal language, you can say a commit is just like the save button: you save what you have done. It is an individual change to a file (or set of files). Every time you commit, it creates a unique ID (a.k.a. “hash”) that allows you to keep a record of what changes were made, when, and by whom. Commits usually contain a commit message, which is a brief description of what changes were made. Branches: Till now you know that multiple people work on a project at the same time. But you must be wondering why they don’t get in each other’s way. This is where branches come into play. Usually, they “branch off” of the main project with their own versions full of changes they themselves have made. A branch is a parallel version of a repository. It is contained within the repository but does not affect the primary or master branch, allowing you to work freely without disrupting the rest of the branches or master. When you've made the changes you want to make, you can merge your branch back into the master branch to publish your changes. You will learn about merge later. You can use branches to: Develop features Fix bugs Safely experiment with new ideas Master: By default, your repository has one branch named master which is considered to be the definitive branch. When you create a branch off the master branch, you’re making a copy of master as it was at that point in time. We use branches to experiment and make edits before committing them to master. If someone else made changes to the master branch while you were working on your branch, you could pull in those updates. Merge: To take the changes done on one branch to the master, we use merge.
Merging takes the changes from one branch (in the same repository or from a fork) and applies them to another. This often happens as a pull request (which can be thought of as a request to merge). A merge can be done automatically via a pull request if there are no conflicting changes. Fork: A fork is just a copy of the original repo. Just as on your computer you make a copy of something so that the changes you make to that copy will not affect the original file, the concept is the same on GitHub, just with a different name: fork. Forking a repository allows you to freely experiment with changes without affecting the original project. Most commonly, forks are used to either propose changes to someone else’s project or to use someone else’s project as a starting point for your own idea. GitHub Issues: Issues can act as more than just a place to report software bugs. They can be feedback, requested changes, or problems with your code, really anything. You can collect user feedback, report software bugs, and organize tasks you’d like to accomplish with issues in a repository. They’re kind of like email — except they can be shared and discussed with the rest of your team. The dashboard works very similarly to the issues section, but collects issues differently: All issues in repositories you own and collaborate on Issues assigned to you Issues you’ve created In the end, I hope this tutorial will help you get started with GitHub, but do go further and get some real experience working on GitHub. Good luck!
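To make the branch-and-merge flow above concrete, here is a hypothetical command-line session (the repo URL and branch names are placeholders):

git clone https://github.com/you/example-repo.git   # copy the repo to your machine
cd example-repo
git checkout -b fix-bug        # branch off master
# ...edit files...
git add .
git commit -m "Fix the bug"    # commit: save a snapshot with a message
git push origin fix-bug        # publish the branch; open a pull request on GitHub
git checkout master
git merge fix-bug              # or merge via the pull request instead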
https://medium.com/@ghangas-rakhi/an-introduction-to-github-acbc31e3644
['Rakhi Ghangas']
2019-06-26 09:46:30.304000+00:00
['Github', 'Git', 'Github Actions']
My talk about the power of community at ESHIP Summit 2018
I tend to be very critical about my own talks and often find them hard to watch without feeling horribly embarrassed. Yet I’m also trying to share what I’m learning on this community journey. So in that spirit, I wanted to share my talk from the ESHIP Summit 2018 in Kansas City this past July, and I would be grateful for your feedback on how to improve future versions of it.
https://medium.com/together-institute/my-talk-about-the-power-of-community-at-eship-summit-2018-79ce527f98e
['Fabian Pfortmüller']
2020-01-08 14:46:01.133000+00:00
['Network', 'Leadership', 'Community', 'Ecosystem', 'Community Building']
COVID-19: Using AI to Predict Stock Market Movement
Written July 2020 With the spread of COVID-19, global stock markets have declined significantly. U.S. indices including the S&P 500, Dow Jones and NASDAQ have dropped close to 30%, bringing us to values which were previously observed in 2017. GSPC, DJI, IXIC Index Values (Yahoo Finance, 2020/03/31) From prior crashes we know that large drops in the stock market result in excellent investment opportunities. But how do we know the right time to take a shot and buy some stocks 🤔? This past week, I challenged myself to train an AI to predict the S&P 500’s movement based on data from past market crashes. If you are interested in the process/programming behind the AI, I will be breaking it down in the next section. Otherwise, you can skip to the end to see the final result! Programming the AI (Python) GitHub: https://github.com/Vedant-Gupta523/corona-ai The Data Set The data I used consists of the S&P 500 index value for the following crashes: The Wall Street Crash (1929) The 73–74 Market Crash (1973) Black Monday (1987) The Dot Com Bubble (2000) The Financial Crisis (2007) COVID-19 (2020) For each of the above, I used values starting from 100 days before the pre-crash peak up until the index regained its initial value (the current value in the case of COVID-19). Objective: Train a Support Vector Regressor (SVR) to predict the next index value given the 30 prior values. Pre-processing the Data I start by importing all of the libraries I will use. I then create a Pandas DataFrame out of my data set.

# Importing libraries
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
import matplotlib.pyplot as plt

# Get data from csv file
dataset = pd.read_csv("data.csv")

A good machine learning practice is to scale the data we train on. This is because normalization helps weigh all of our features equally. In some cases, it can also help speed up the calculations our model performs! With the data from the DataFrame I create two lists, a scaled data set for training our model and an unscaled data set for graphing/visualization. Additionally, I break the scaled data set into our training data (all crashes excluding COVID-19) and our test data (COVID-19).

# Create scaled/unscaled datasets, divide into train and test data
scaler = MinMaxScaler(feature_range=(0,1))
scaled_dataset = []
unscaled_dataset = []
for crash in list(dataset)[1:]:
    data = dataset.filter([crash])
    scaled_dataset.append(scaler.fit_transform((data.values)))
    unscaled_dataset.append(data.values)
for i in range(len(scaled_dataset)):
    scaled_dataset[i] = np.reshape(list(filter(lambda x: x==x, scaled_dataset[i])), (len(list(filter(lambda x: x==x, scaled_dataset[i]))), 1))
    unscaled_dataset[i] = np.reshape(list(filter(lambda x: x==x, unscaled_dataset[i])), (len(list(filter(lambda x: x==x, unscaled_dataset[i]))), 1))
train_data = scaled_dataset[:-1]
test_data = scaled_dataset[-1]
unscaled_test_data = unscaled_dataset[-1]

Next we separate our independent variable (the prior 30 index values) from our dependent variable (the next day's index value). This is done for both our training set and our test set.
# Number of prior values (can be changed)
batch_size = 30

# Split data
x_train = []
x_test = []
y_train = []
y_unscaled_test = []
y_scaled_test = []
for crash in train_data:
    for i in range(batch_size, len(crash)):
        x_train.append(crash[i-batch_size:i, 0])
        y_train.append(crash[i, 0])
for i in range(batch_size, len(test_data)):
    x_test.append(test_data[i-batch_size:i, 0])
    y_unscaled_test.append(unscaled_test_data[i, 0])
    y_scaled_test.append(test_data[i, 0])

Finally, we train our SVR on the training data and make predictions 😎.

# Fitting SVR to the dataset
regressor = SVR(kernel = "rbf")
regressor.fit(x_train, y_train)

# Making predictions on test data
y_pred = []
for test_case in x_test:
    y_pred.append(regressor.predict([test_case]))
y_pred = scaler.inverse_transform(y_pred)

# Making predictions beyond known data
# (future_prediction_size, the forecast horizon in days, is set elsewhere
# in the full script, e.g. 30 for the one-month prediction shown below)
x_future_test = x_test[-1][1:]
x_future_test = [np.append(x_future_test, y_scaled_test[-1])]
future_preds = []
for i in range(future_prediction_size):
    future_preds.append(regressor.predict([x_future_test[i]]))
    x_future_test.append(np.append(x_future_test[i][1:], future_preds[i]))
future_preds = scaler.inverse_transform(future_preds)

Bonus: Graph the results

# Graphing predictions
plt.title("COVID-19 Crash Analysis")
plt.xlabel("Days from Crash")
plt.ylabel("S&P 500")
plt.plot([x for x in range(-99, len(y_pred) - 99)], y_pred, color = "orange")
plt.plot([x for x in range(-99, len(y_pred) - 99)], y_unscaled_test, linewidth=1)
plt.plot([x for x in range(len(y_pred) - 99, len(y_pred) - 99 + future_prediction_size)], future_preds, color = "red")
plt.show()

The Results Blue — Actual values, Orange — Predicted values based on prior 30 actual values, Red — Predicted values based on prior 30 predicted values After running the code, the AI outputs the above graph with a one month prediction. It suggests that the S&P 500 index will rise for the next two weeks and then decline the following two weeks. Disclaimer: I am not suggesting that these predictions will be accurate or even close. Making predictions with AI has both pros and cons: Pros: AI is excellent at finding mathematical correlations in previous events and applying them to predict a new situation. Cons: Stock prices are affected by countless factors which are next to impossible to predict with AI. The further into the future we try to predict, the more inaccurate our predictions become. This is because predictions that are farther out start to be based on previously predicted values. Conclusion Despite staying at home, this past week has been an amazing learning experience. I look forward to taking up more programming projects while I have all this extra time on my hands 🙌. If you have any questions or you just want to say hello, feel free to email me at [email protected] 😃.
https://vedantgupta523.medium.com/predicting-the-coronavirus-market-crash-using-ai-7b98aed917fb
['Vedant Gupta']
2020-12-02 00:49:43.836000+00:00
['Artificial Intelligence', 'Support Vector Regression', 'Coronavirus', 'Stock Market', 'Predictive Analytics']
Of Wounds
Of Wounds Anna Ismagilova: Shutterstock I cannot tell which of the wounds I acquired hurts more. I gather all of them in a large wicker basket and sort them out every summer morning when fields are filled with lavender and roses. During autumn nights, while I listen to the wind unbraiding the old oak trees, I re-live each of them. I see how the Lie walks hand in hand with the Betrayal, and how the Betrayal indulges herself in the sweetest of wine. Oh, that irresistible taste of black grapes that melts in her mouth. It almost makes her attractive. The Envy wears red lipstick and high heels. She dances naked on a wooden table. At every turn she spreads poisonous confetti in the air, and she lowers her eyes. I try to decipher the meaning of her gestures. I cannot. The Greed, with her childbearing hips, indulges herself with poor souls who live at the margins of the city. The children are hungry, and the mother is long exhausted. The beds are cold, and the moonlight enters the rooms through broken windows. I feel the pulse in my temples. Exhausted I go over the meaning of love and sacrifice. I try to restore them in their right place. Love and sacrifice are the consummation of all acts that lead to the warm meal that one hands to an old man who dwells in the streets during cold winters. They are the sum of all unknowns. They are the fingers that draw the light of stars in the darkest of the skies.
https://medium.com/literary-impulse/of-wounds-4d03aded257e
['Gabriela Marie Milton']
2020-12-04 14:16:48.825000+00:00
['Sad', 'Prose On Medium', 'Poetic Prose', 'Literary Impulse', 'Poetry']
Introducing BarnBridge
DeFi Primer: Risk Ramps and TradFi Bridges The yield products in decentralized markets that deliver higher APY than yield products in traditional markets are currently crypto-backed loans. Instead of selling crypto for fiat, borrowers are staking digital assets and receiving digital assets in return. While these loans have mostly been short-term loans to traders, the system has proven to be efficient & ripe for expansion. These efficiencies will inevitably attract higher-value, longer-duration loans to decentralized ledgers. The efficiencies referenced are enabled by smart contracts’ ability to hold digital collateral until both sides of the transaction fulfill their obligations algorithmically. The reduction of custody, settlement, and escrow — labor-intensive, costly actions within the legacy system — to algorithmic actions is reducing the rent charged by the labor to perform these actions. These efficiencies, coupled with the perception of higher risk, are why the yields are higher on decentralized systems. As risk in DeFi converges on risk levels perceived in TradFi, by the nature of the loans moving from crypto-backed loans to traders to collateralized mortgage loans to homeowners, for instance, the efficiency of smart contracts will continue to offer higher yield on decentralized systems than traditional centralized systems. What’s more, the efficiency of smart contracts and DAO technologies allows for far more complex derivative instruments to be built & provides a level of transparency and security unfathomable to current financial networks. All of these efficiencies currently stem from and are built on crypto-backed loans. As previously discussed, these efficiencies should extrapolate to mortgage debt and corporate debt moving to decentralized platforms on a longer timeline. This should also encourage more complex derivatives based on debt and yield to move to decentralized platforms. We will be able to structure far more complex derivatives and track them with far greater efficiency and transparency than possible before the innovations of blockchain, cryptocurrency, smart contracts, and decentralized autonomous organization technology were realized. $244 trillion in debt and yield-based derivatives will continue to move to more efficient technologies over time. The migration of yield and yield-based derivatives from less efficient centralized financial systems to more efficient decentralized financial systems will be one of the largest movements of wealth in human history. BarnBridge exists to help facilitate this transition and make the decentralized financial system much more efficient, risk-flexible, and attractive to a wider range of participants. There is a massive market for people wanting to get into crypto who (1) don’t want to bite off the entire risk curve of owning, lending, or receiving an entire digital asset & (2) will never take the time to use decentralized autonomous organizations (DAOs) to create a smart contract which algorithmically scripts both sides of the loan or agreement. Over 99.9% of global debt is still structured via traditional markets and is starving for yield. Conversely, more advanced financial companies have different risk tolerances. This allows for different structures at each point of the yield curve, with the riskiest (likely hedge funds) wanting to put the least money down with the highest return for their bet/hedge.
By contrast, more conservative investors are often willing to give up a large portion of upside opportunity in order to access safer instruments. “Riskless” products, as TradFi describes them, are not currently offered in the decentralized financial ecosystem. The opportunity to structure these types of instruments will allow more risk-averse investors in the traditional markets to move into the decentralized markets. In the shorter-term phase (DeFi) & medium-term phase (Proof of Stake), risk ramps will continue to create markets and industries for traditional investment firms who want to “get off zero” or “get above 1%.” As this happens, more and more types of loans will move to decentralized ledgers. In the long run, and partially through this process, lenders and borrowers will understand why decentralized and trustless intermediaries are superior to, and less costly than, the current 3rd-party intermediaries. As this happens, larger portions of the $244 trillion in global debt will move to the chain, creating the opportunity for more yield, more risk ramps, and higher CD-like (collateralized debt) products for fiat and crypto depositors of the new-age commercial banks & financial markets.
https://medium.com/barnbridge/introducing-barnbridge-3f0015fef3bb
['Tyler Scott Ward']
2021-04-19 18:49:21.852000+00:00
['Derivatives', 'Ethereum', 'Cryptocurrency', 'Yield Farming', 'Defi']
Messaging Service with Twilio!
“The Twilio Customer Engagement Platform can be used to build practically any digital experience, using capabilities like SMS, WhatsApp, Voice, Video, email, and even IoT, across the customer journey.” That being said, let’s create a messaging service using Twilio. You have to create an account in Twilio, and it is going to be on the trial version. → Take a phone number using the trial version that can be used for messaging. Trial Version Number! → We are going to be using the MessagingResponse service from Twilio. In this article, we are going to use our mobile to send a text to that phone number that we got from Twilio. In return, MessagingResponse will return something back as a response. Makes sense why it’s called MessagingResponse, right? → Now, as a response, we are going to use Beautiful Soup to extract information from IMDB based on Marvel movie names! The Code: SMS Bot! → The endpoint of the app is “/sms” and as you can see from line 15, Twilio will receive a “movie name” from the user, which is you. → To extract the information from IMDB, we need the unique ID number for each movie, which we set in a dictionary so that we can find it easily using the movie name. → Then, we used the requests and BeautifulSoup libraries to get the page response and page content. Using JSON, the page content can be turned into JSON format, which makes it easier to find specific information such as movie names and descriptions. → Lines 27 & 28 were used to create a response object using the MessagingResponse() class from Twilio. → Line 34: Put the return information in the message object to return it as a string in line 35. The code is pretty simple in this example. We have used Flask and Twilio to build an SMS chatbot! Well, let’s fire it up! python bot.py! The above picture says the server is running on http://127.0.0.1:5000/, which is a local server using port 5000. How will Twilio communicate with the local server? The answer is: Ngrok Ngrok is a cross-platform application that exposes local server ports to the Internet. Public URLs for building webhook integrations. Now, since the local server is using port 5000, we have to expose it to the internet. How? ngrok http 5000 ngrok! The above command will expose http://localhost:5000 to the internet using a temporary URL. Copy that URL and save it to the webhook section (Messaging) in Twilio for that specific number! ngrok URL setup as webhook! Now, it’s time to test the bot to see if it replies back properly. That being said, if we send “Endgame” as a text message to the Twilio number, it should return the full movie name along with the movie description. Right?
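The original code was shared as a screenshot, so here is a minimal sketch of the bot it describes (the movie-ID dictionary and IMDb's JSON-LD page structure are assumptions and may change over time):

import json
import requests
from bs4 import BeautifulSoup
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

# Unique IMDb IDs keyed by movie name (illustrative entries)
MOVIE_IDS = {"endgame": "tt4154796", "infinity war": "tt4154756"}

@app.route("/sms", methods=["POST"])
def sms_reply():
    movie = request.form.get("Body", "").strip().lower()  # text sent by the user
    resp = MessagingResponse()
    imdb_id = MOVIE_IDS.get(movie)
    if imdb_id is None:
        resp.message("Sorry, I don't know that movie.")
        return str(resp)
    page = requests.get("https://www.imdb.com/title/" + imdb_id + "/",
                        headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(page.content, "html.parser")
    # IMDb embeds structured data as JSON-LD in a script tag
    data = json.loads(soup.find("script", type="application/ld+json").string)
    resp.message(data["name"] + ": " + data["description"])
    return str(resp)

if __name__ == "__main__":
    app.run(port=5000)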
https://medium.com/@mahedihasanjisan/messaging-service-with-twilio-6fbf4807ca2
['Mahedi Hasan Jisan']
2021-10-19 01:14:48.020000+00:00
['Bots', 'Twilio', 'Beautifulsoup', 'Flask', 'Messaging']
@ngrx/data Will Change Your Life
@ngrx/data Will Change Your Life Remove Redux boilerplate and simplify your codebase Photo by Ben White on Unsplash I’ve been using some form of the Redux pattern for a couple of years now, and with some small exceptions, I’m largely doing the same kinds of things each time: creating entities, reading them, updating them, deleting them. CRUD applications still form the bulk of the work that I do as a front-end developer. A little voice in the back of my mind was telling me that I should have found a way to reduce the boilerplate I keep writing, but a louder voice in the front of my mind was relishing the chance to tweak my approach with each successive implementation. I’ll be the first to admit that some of NgRx felt like overkill — I rarely use things like createSelector, and didn’t really use side effects much either. Most of the time, I felt like I was doing it “wrong,” but it was right enough for me. Still, I kind of hoped that next time around I might have that lightbulb moment that would lead me to do it “right.” Then I attended a workshop at AngularConnect 2019 run by John Papa and Ward Bell, where they blew my tiny mind right out of my skull with @ngrx/data. This is the library I’ve been waiting for, but to make sure we’re all on the same page, let’s talk about how I’d do this stuff with just @ngrx/store. Disclaimer: I wrote the below snippets from memory without testing them; they are missing a few bits and I’d be very surprised if they actually work. They are, however, mostly right and indicative of the kind of work I’m trying to demonstrate. Actions and Reducer for a simple customer store The above snippets are missing some obvious pieces: they aren’t wired up to AppModule and there are no effects for persisting data. But you can see that for a relatively simple example, there is a total of 62 lines of code. That’s not terrible, and it’s a small price to pay to adopt a pattern that scales incredibly well as your application grows. But you will have to write the same 62 lines of code every time you want to implement another entity-type state (e.g., products, orders, etc.). On top of that, the effects to fetch that data from an API are another big time sink, requiring you to consider things like signaling when the data is loading (commonly, you’ll be loading via an API where latency might be an issue). Perhaps more importantly, this is all code you have to write tests for, since it’s code you wrote and you control. I tend to find that Redux makes writing tests easier, but there’s still a lot of effort involved if you’re going to be doing the same thing for every entity type you store. This is where @ngrx/data comes in. The authors have identified commonalities between these kinds of data and written a library to handle them. All you need to do is provide a config. Here’s an example: This is already down to 22 lines of code. And get this: that includes default behavior that persists data via a RESTful API using effects. It assumes you use proper HTTP verbs to interact with your API. For instance, if you have the following endpoints, the DefaultDataService will persist data via REST with no configuration: /api/customer (POST), /api/customer/:id (GET, PUT, or DELETE), and /api/customers (GET). If you don’t want any effects at all, you can easily configure your store not to use any. If you would rather persist data in another way, you can do so by writing your own data service.
For example, I wanted to persist entity data to local storage for a phone app I was building, so I wrote this service: LocalStorageDataService example Feel free to reuse it in your own work — it is incomplete but demonstrates how to wire up something other than the default behavior.
https://medium.com/better-programming/ngrx-data-will-change-your-life-8528736e4fc0
['Jamie Morris']
2019-12-01 23:05:43.464000+00:00
['Ngrx', 'Angular', 'JavaScript', 'Programming', 'Redux']
Shady deals, authenticity, and why midsize matters — creator Tiffany walks us through
@tiffany_schutte is on a body positivity mission, combating societal pressures to conform to a certain mould. This is the first in our series that gives insight into the lives of creators, in which they share the challenges that they have faced, and what they have learned along the way. How long have you been creating fashion content for? Since around 2018. I was working at a shop and buying a ton of their clothes. I started to randomly share the outfits I was wearing to work and people loved it. I’m from Columbus, Indiana, and no one there was really doing that kind of thing. At first I wasn’t really getting paid for anything. I’m not sure people realise that you don’t just start making all this money when you first start out! I would buy all these clothes to the point I was almost losing money. Then I realised how much I was bringing in for the companies whose clothes I was posting. So I started getting gifted collaborations, which then turned into paid ones. How have you developed over the years? My stuff wasn’t really curated at the start. I didn’t really know what photo presets were or how to use them. It’s good to have some variation — I don’t want to have makeup photos back to back. I like to play around, but at the moment I’m into neutral and warm tones. People seem to really like it. I try to do a mix of what I like and what other people like. How do you balance doing what you want and creating content that your audience likes or expects? At first it was hard. Even sometimes now I find myself struggling to make sure I’m not doing what everyone else wants me to do, or what I see others doing. My husband helps me a lot with support. I like to do polls and allow some input when it comes to makeup looks, but I always try to make sure the outfits I choose are all me. I try to create looks that aim to inspire people, such as playing with colour. I know that some styles are just not for me and that I have to stick to my guns. What do you define as midsize fashion and what led you to find this niche? Here in the US, they consider 16 and up plus-size, even though it is the average size. For me midsize is (US dress size) eight to 16. When I started, I noticed a lot of people I followed were super tiny. And there’s nothing wrong with that. But then my body was changing and I didn’t really see that many midsize girls represented. I’d see a lot of plus-size accounts, but I was just somewhere in the middle at a 10 or 12. So then I thought, why don’t I just be that representation? Through this I met some really great girls that are in the same kind of niche. It is great because even though we live far away we can message if we’re struggling with something like a bad experience with a brand. So what kind of bad experiences have you had? I once approached a brand and said hey, you don’t have any midsize representation. I thought it was odd since they carried those sizes. They went from like double zero to 26, but only showed petite women. They said that I wasn’t the body type they were looking for. There are still some shady brands out there. Lately I’ve seen so many girls just getting screwed over. It’s kind of scary. They aren’t being paid, or are just taken advantage of. A friend recently had an issue with a big clothing brand. They stole her video and posted it on their page. The company re-edited it and everything and never gave her credit or paid her, and it ended up being their highest-rated video. If you could give any advice to yourself when you were just starting out, what would it be?
Honestly, just stay true to yourself. Don’t let negative comments or the naysayers get to you. When it comes to partnerships, think about what you can provide for them; come from a place of service. How do you deal with haters? I don’t really get a lot anymore, it was more when I had just started. They are just trolls so I don’t let it get to me. I just remind myself that they don’t know me, they’re not my best friend or family, and they’re not paying my bills, so their opinion doesn’t matter. So how do you separate real life from social media? Honestly, at first it felt like social media consumed my life. My husband helped me so much. Like every day we were shooting. Over the last year we’ve started to do the content in batches instead, which I think really helps. You don’t always have to shoot and post at the same moment or day. Every now and then I do stories, but I try not to share too much of my life. I don’t share any personal drama and try to keep most of my life pretty private. I put my phone down at a certain time every night, usually before dinner. Did you notice a sudden growth spurt? Do you have any tips? For the most part I would say it was pretty slow and gradual. My first big tip is to be true to yourself. My second is to not necessarily take every deal that comes your way. I’ve seen so many friends fall for this. It’s a huge mistake, because you will start getting deals early on, for random skinny teas or underwear on a monthly subscription. But a lot of the time they don’t pay. If you can buy your own underwear then I would not recommend it! You’re not going to make a lot of money at first, but you will keep your audience’s trust. I don’t want to post random things just for a bit of money. I’ve seen a lot of people mess up and then their engagement starts to drop. It’s hard to win your audience back if you lose it right away. How do you keep organised? I’m working two jobs, blogging and finishing a bachelor’s degree in business. It’s really tricky. I always keep a planner, which I think is huge. I write in dates and stick to a plan on what to post or if a sponsor needs something up on a certain day. I check it daily just to make sure I’m not missing anything. My husband is a big help and sometimes even checks my notifications! Did working with Matchmade help things go a bit smoother? I was happy to come across it. I always share fun games that I play on my phone on my stories, and that is what I got matched up with. We got the ball rolling straight away. I think it was the same day I got matched with a brand; I was asked how much I wanted for doing some stories, I gave them a number, and it was confirmed. Communication-wise it has been really great, everyone’s so responsive and really nice. Did you use any tools? I love how you can see in real time how your post is performing. Most places don’t offer insights like this, so that was really useful. The platform was also really handy when it came to planning the post itself. How has the pandemic affected your work? Luckily, it hasn’t really affected me content-wise. Some brands have seen budgets cut, though. For example, right now it’s the holidays, so it’s the busiest month, and I’ve had way more sponsorships than I’ve had for the rest of the year combined. Everyone says oh, you know, you can triple your rates, it’s the holidays. But to me we’re still in the middle of a pandemic and it’s not so ethical. My family has small businesses and I know how hard they have been impacted. For now I aim to continue chugging along and growing.
If we can get through this pandemic, we can get through anything!
https://medium.com/@matchmadetv/shady-deals-authenticity-and-why-midsize-matters-creator-tiffany-walks-us-through-b8bcac1d8dfd
[]
2020-12-16 16:55:11.790000+00:00
['Creators', 'Body Positivity', 'Influencer Marketing', 'Creator Marketing', 'Influencer Marketing Tips']
Oheka Castle Wedding
New York City as a destination! Proud and Jon may live in New York, but their time is mostly spent in Thailand running their multiple businesses. And with a father who worked for the government in Thailand, this wedding was especially important, as the delegates and family members in attendance required security, privacy, and 5-star service. Proud and Jon needed a wedding planner with experience in high-profile events, expertise in NYC destination weddings, and the ability to plan around the time difference between NYC and Thailand. LLG Events became the perfect fit, and for a year and a half we were their liaison to the New York wedding industry, Oheka Castle, and their selected vendors, as well as their trusted resource when it came to pricing and vendor services. Navigating the NYC wedding market can be challenging, especially when you do not live in NYC full time. You do not have time to meet with the vendors on your own, you do not know who is a qualified vendor, and you do not know if they are charging you fair market value. Your most trusted resource is your wedding planner, the person who can properly vet your vendor team for you as well as ensure you are getting the quality and level of service you deserve. The first vendor referral we sent Proud and Jon was Tantawan Bloom — an incredible NYC wedding designer who specializes in romantic, lush, dreamy wedding receptions. We knew that they would be the perfect design partner and the team to create glamorous yet understated events. Once we had the design team set, our next referral was for entertainment. Starlight Orchestra was the perfect choice, knowing that this was a party and dancing crowd! As for the photo and video team, we unfortunately cannot take credit for that referral; Greg Finck has an impeccable reputation, and we were surprised he was even available for our clients’ wedding date! On event day, Proud and Jon had the right team of professionals, who arrived early for setup, ready and prepared. Congratulations to Proud and Jon, we wish you a lifetime of health and happiness! We Could Not Have Produced This Event Without The Help Of Our Amazing Vendors: Venue: Oheka Castle Photographer: Greg Finck Event Planner: LLG Events Design: East Made Co. Floral Design: Tantawan Bloom Wedding & Events Hair and Makeup: Makky P Bridal Attire: Monique Lhuillier Groom’s Attire: Vvon Sugunnasil
https://medium.com/@llgevents/oheka-castle-wedding-437edf0fbcce
['Llg Events']
2021-04-09 15:22:18.163000+00:00
['Wedding Vendors', 'NYC', 'Wedding Design', 'Wedding', 'Oheka Castle']
How Many LinkedIn Recommendations Should I Have?
A few weeks ago, I hosted a personal branding workshop for some executives, and I was asked a few questions about LinkedIn recommendations: Do LinkedIn recommendations even matter? How many LinkedIn recommendations do I need? Can you have too many LinkedIn recommendations? In short, yes, LinkedIn recommendations can make a positive difference to your online reputation and have a positive impact on your prospects as a job candidate. However, the effectiveness and weight those recommendations carry depend on the quality of those recommendations and who writes them. Before we go any further, let’s differentiate between LinkedIn recommendations and LinkedIn endorsements. While LinkedIn recommendations can be useful to have on your profile, LinkedIn endorsements — at least in their current form at the time of writing this post in late 2017 — carry little to no weight. Endorsements can be given by any of your connections, including people who have never worked with you. That’s the problem. And experienced recruiters and hiring managers know this. Those I’ve spoken to pay absolutely no attention to endorsements for this reason. That’s why endorsements are pretty much meaningless. In 2016, LinkedIn tried to address this flaw by differentiating between endorsements made by 1st-degree connections vs. other contacts. This makes little material difference because you might have a 1st-degree connection endorse you even if you’ve never worked with them. Until LinkedIn completely limits the option to endorse to only those contacts you’ve directly worked with (not sure how this would be done), endorsements won’t carry much weight. For now, I wouldn’t even bother with endorsements. Personally, I don’t bother giving them, I don’t ask for them, and I don’t even have them visible on my own LinkedIn profile. You can opt out of endorsements if you wish by following these instructions. On the other hand, LinkedIn recommendations are those that you either have to specifically request, or specifically accept from someone. They are about as close as you can get to a “reference,” and although it’s not the same as calling up a former manager and asking for his or her opinions, the LinkedIn recommendation can provide a reasonable proxy for someone’s performance. Here’s my guidance on how many LinkedIn recommendations you should have, and who should write them for you. Ideal: at least 1 for each role from your direct manager Your direct manager is likely the person in your organization who has the greatest visibility and awareness of your contributions and accomplishments in your role. Hiring managers and recruiters know this. Therefore, the best-in-class approach would be to have at least one recommendation from each of the managers you’ve had for each of your roles. If you have had multiple roles/titles within the same organization, try to get a recommendation from each manager you have had for those individual roles. If you have had multiple managers in the same role, getting one from the manager who knows your work the best can suffice. Sometimes I get asked whether it carries more weight to get a recommendation from someone higher up in the organization, for example, your manager’s manager. Not necessarily. While that can look impressive if that person knows your work well, the principle is to find someone who can speak very precisely about the exact contributions you have made to the organization. If you feel like someone higher up is also well placed to do this, there’s nothing wrong with that.
But avoid simply trying to get someone with a fancy title to write your recommendation if it comes at the expense of the recommendation’s quality and precision. Next best: at least 1 recommendation for each role from a close team member If you are not able to get the recommendation from your direct manager for whatever reason, the next best option would be to get a recommendation for each role from a team member who has worked very closely with you. This could be: A lateral cross-functional team member A project manager or lead Anyone with day-to-day experience with how you approach your work and results Sometimes I get asked whether a direct report can write a recommendation for you. This can actually be useful if you are trying to highlight your people-management or leadership skills. So there’s nothing wrong with this. However, they generally don’t carry quite as much weight as a recommendation from a supervisor who has overseen your work. I’ve spoken with some hiring managers and recruiters who sometimes question the precision and validity of recommendations written by direct reports, given the power dynamic that exists. You could imagine that writing a recommendation for your manager comes with some pressure, and you may not feel at full liberty to say what you really think ;) However, I think this issue is a subtle one, and if you have a direct report who has been really happy with your managerial style, go ahead and get a recommendation from them in case you ever want to highlight this skill area on your profile. At the very least: 1 recommendation from a team member per organization If you are not able to secure a recommendation from someone for each of your individual roles, try to at least get a single recommendation to cover each of the organizations you’ve worked for. For example, if you’ve had three roles within one organization, and you’re NOT able to get a recommendation to cover each of those three roles, try to get a single one to cover your time at that company or organization, ideally one that speaks to your contributions during your most recent, most senior role. Can I have too many LinkedIn recommendations? Yes, having too many LinkedIn recommendations could dilute the power of each one. Quality is more important than quantity. Part of this has to do with the nature of how these LinkedIn recommendations are shown on your profile. Depending on the device you’re on, sometimes only the most recent ones are immediately visible anyway. And I don’t know about you, but I don’t always click-to-reveal extended lists. Think about your own Google search behaviour. How often do you go to page 2 of your search results? The other issue has to do with the nature of information overload. Whenever you overwhelm someone with too much information, they often get lost in the details and don’t walk away with clear takeaways. One final issue is more of a subconscious one. Sometimes, if I see too many recommendations, I start to wonder if that person has really solicited recommendations from people who matter. After all, you typically only have one supervisor per role. So if I see 10 recommendations under one role, it can sometimes make me wonder if this person is just loading up their profile with recommendations from anyone, even individuals who aren’t really familiar with that person’s work.
My recommended approach With all this in mind, I generally recommend that my clients and audiences follow this approach: Immediately upon finishing a specific role, ask for recommendations via LinkedIn from your manager and/or any close team members who can comment positively about your concrete contributions and accomplishments to the organization. Be selective about which recommendations are visible on your profile. Ensure that your managers’ recommendations are visible on your profile. Include any high-quality recommendations from team members that speak to skills and accomplishments that reinforce those specific elements of your personal brand you feel are relevant to your target audience. If you’re having a hard time deciding which recommendations to feature, remember these 3 “S”s when it comes to creating a strong personal brand using your recommendations as a whole: Be specific: highlight precise skills. Be selective: showcase key skills. Be suitable: communicate relevant skills. Ask for those LinkedIn recommendations If you haven’t gotten around to asking for recommendations from your former managers and colleagues, don’t procrastinate. Ask now. Every day that goes by is another day their memories of your contributions may start to decay and negatively affect their ability to write you an effective recommendation. When used strategically and selectively, LinkedIn recommendations can help boost your credibility with the hiring manager and signal that you have had strong relationships throughout your professional career. So make sure you invest this time to reinforce your professional reputation. Ask for recommendations. And don’t forget to return the favor if you have an opportunity! Watch my video on LinkedIn Recommendations to learn more Over to you What approach has worked well for you when asking for and displaying LinkedIn recommendations? Leave a comment below with your thoughts! Did you enjoy this article? Get my future articles on career change and personal branding sent straight to your inbox by joining my free career community. Also, to hear directly from people who have successfully reinvented their careers, tune into my Career Relaunch podcast for even more inspiration.
https://medium.com/your-brand/linkedin-recommendations-needed-b832510636a6
['Joseph Liu']
2018-01-17 15:06:15.868000+00:00
['LinkedIn', 'Careers', 'Career Advice', 'Career Change', 'Personal Branding']
Top 6 Best MP3 Merger Online Tools in 2020 [100% Free]
Are you looking for a quick way to combine several MP3 files online? Then you’ve come to the right place. In this article, we take you through the 6 best online platforms that enable you to merge MP3 files for free. If you need more control over your MP3 files, try MiniTool MovieMaker. 1. Audio Joiner Audio Joiner is the best online MP3 merger tool, offered by the 123Apps platform. With a simple interface, this tool enables you to merge as many MP3 files as you want, and it supports more than 300 audio file formats. In addition, you can trim audio to cut unwanted parts, apply the crossfade effect to make the transitions between two audio clips smoother, and even select the output format for the final audio file. 2. Filesmerge Filesmerge is another lightweight online MP3 merger, which is capable of merging multiple MP3 files into one. It enables you to add MP3 files by simply dragging and dropping them onto the upload area, or by entering specific URLs. There is also an option to select the output quality, encoder, sampling rate, and number of channels. However, the site doesn’t allow you to upload MP3 files larger than 50MB, and it only supports the MP3 file format. Also read: 3 Free Sites to Download Uncopyrighted Music for Your Videos 3. Clideo Audio Joiner Clideo is another popular free online MP3 merger that supports adding local, online, and cloud MP3 files for combining. You’re allowed to add multiple MP3 tracks, change their order, and add a crossfade effect. In addition, this platform is compatible with nearly all popular audio formats like MP3, WMA, OGG, WAV, and others. That is to say, once the process is complete, you can download the merged audio file in the same or a selected format. 4. Aconvert AConvert is widely used to convert all kinds of document, eBook, image, audio, video, and archive files online. Few people know it can also be used to merge MP3 files online. MP3 files from your system, a URL, Google Drive, or Dropbox can be easily merged through this tool. The site supports an array of audio formats like MP3, WAV, OGG, AAC, WMA, AU, FLAC, MKA, M4A, and many more, and the maximum supported file size is 200MB. However, it doesn’t allow you to rearrange your MP3 files after you upload them. 5. Bear File Converter Similar to AConvert, although Bear File Converter is named a file converter, it also supports merging MP3 files online. You can add MP3 files with a size of up to 50MB to the program for merging online. You can choose the audio quality, audio encoder, audio sampling rate, as well as audio channels depending on your requirements. Unfortunately, this free online MP3 merger only supports MP3 as the input and output format. 6. MP3Cutter The last online MP3 merger we’d like to introduce is MP3Cutter. This platform is designed to help users cut all types of audio files. Meanwhile, it is capable of merging audio files in MP3, WAV, AAC, FLAC, OGG, WMA, M4A, and many other formats. Once you’ve uploaded all the MP3 files you’d like to merge, you can simply drag them to rearrange them, select the output format, apply the crossfade effect to achieve seamless merging, as well as rename the new audio file. Bottom Line The 6 most popular online MP3 merger tools are listed above. These tools are easy to use and available for free. However, some of them have certain restrictions on file size and the number of files that can be added.
If you have any better online MP3 mergers to recommend, please share them in the comments section below.
https://medium.com/@doreen-wylin/top-6-best-mp3-merger-online-tools-in-2020-100-free-fe40ba4b8010
['Yalin Wang']
2020-12-16 02:03:01.412000+00:00
['Merger', 'MP3', 'Online', 'Tools']
How to fix Cumulative Layout Shift?
But if you want to be 100% sure no layout shift is happening, you should use font-display in conjunction with <link rel=preload>. Like I have used here: Include width and height attributes on image and video elements Specifying height and width on images used to be standard practice, but with the popularity of responsive web design, the habit got lost. This is how it should be: <img src="datadab-home-page.jpg" width="1440" height="810" alt="DataDab Home Hero Image"> Modern web browsers now set the default aspect ratio of images based on an image's width and height attributes, so it's good practice to set them to prevent layout shifts. img { aspect-ratio: attr(width) / attr(height); } This derives the aspect ratio from the dimension attributes before the image loads, providing that information at the very beginning of the layout calculation. As soon as an image is declared to be a certain width, the aspect ratio is used to calculate its height. When dealing with responsive images, srcset lets you define the images the browser can choose between and what size each image is. To be able to set the img width and height attributes, each image in the set should use the same aspect ratio. <img width="1000" height="1000" src="dd-about-1000.jpg" srcset="dd-about-1000.jpg 1000w, dd-about-2000.jpg 2000w, dd-about-3000.jpg 3000w" alt="dd-about"/> Setting dimensions on ads and embeds Ads are among the biggest contributors to layout shifts. Publishers and ad networks often support dynamic ad sizes, which improve efficiency and hence revenue owing to higher click rates and more advertisements competing in the auction. Sadly, ads pushing the visible content down the page can lead to a suboptimal user experience. You can try these steps to reduce the chance of a high CLS because of ads: Reserve space for the ad slot statically; that is, style the element before the ad tag library loads. When putting ads in the content flow, make sure the slot size is allocated to avoid shifts. Ads loaded off-screen will not trigger layout shifts. Take extra care when putting non-sticky ads near the top of the viewport. If no ad is returned when the ad slot is visible, show a placeholder rather than collapsing the allocated space. Eliminate layout shifts by reserving the largest possible size for the ad slot, and use historical data to choose the most likely size. In the case of iframes and embeds, note the dimensions and style a placeholder for the embed accordingly. Using media queries, you may need to account for subtle differences in ad/placeholder sizes between various form factors. Statically style the slot DOM elements passed into your tag library with the same sizes, so the library doesn't cause layout changes while loading. Otherwise, the library might adjust the size of the slot element after page layout. Improve HTTP response times and element synchronization A slow server HTTP response may also cause issues with the content layout. When you are using a CDN, loading the intended elements takes quite a few milliseconds, which causes the content to jump. You then need to either reserve space in the DOM or synchronize the load with other elements. Properly handle dynamically injected content Never inject content above existing content, except in response to user interaction; this way, any layout shifts that occur are expected. You've probably noticed layout shifts caused by a UI element that pops up at the top or bottom of the page while a site is loading. Like an ad, this also happens with banners and forms that change much of the page's layout. When you choose to display these kinds of UI affordances, reserve enough space in the viewport for them in advance. Try using a placeholder or skeleton UI so that once the content loads, it does not cause the page content to move around unexpectedly.
https://medium.com/swlh/how-to-fix-cumulative-layout-shift-1b48a138faa9
['Amit Ashwini']
2020-08-13 11:51:32.244000+00:00
['Marketing', 'Marketing Strategies', 'Web Performance', 'Web Development']
The Freedom Dance
Free of abuse, free of sexual inequality and free of my old self. When I say freedom dance, I mean that burst of absolute joy when you feel that sudden, overwhelming sense of being free, whether it’s a dance in your head or you really are grooving out in the middle of the street (this is highly encouraged). It feels like you suddenly know all the answers to the world (which if you do, please share) and the anger and pain you have felt so intensely from being the victim of sexism is for a second replaced with this new energy. It feels so good because you’ve had to work so hard for this dance. My first freedom dance happened on the way to school one day. My mum used to drop me off on her way to work and our commute together became our favourite thing to do, along with the usually harsh critique we’d give of whichever wedding dress was in the window of the bridal store we drove past that morning. But one morning we had to detour to my ex-boyfriend’s house. Armed with a guitar he’d given me (I have absolutely no talent or interest in playing an instrument) and a plastic bag of the last few clothes he had left at my house, we pulled up on his driveway to give me my final closure. *Disclaimer* this unfortunately didn’t happen just yet. I got out of the car and suddenly felt intensely strong, intensely angry and extremely ready for a fucking break from him. As I threw his belongings at his front door, breaking the guitar as I did so, I felt like the strongest 17-year-old in the whole world. I wasn’t, since there are so many girls with harder struggles, but this one gesture meant the world to me. The dance happened as I got back in the car with my mum beaming at me, and the song ‘Happy’ came on the radio. The past year of emotional and sexual abuse I had suffered from him was forgotten about for a second, and I revelled in how freeing it felt just to throw his things back at him. Well, his door. I felt like I could start to free myself from him. This was the freedom dance. Obviously, the journey of recovering from abuse, sexism and pain is a lot longer, harder and more intense than one gesture, but these small actions feel like a push as you work yourself through it all. I’m not a doctor, therapist or anyone of stature within a medically qualified field to be an expert in recovery and abuse, but I’ve had my own experiences that have taught me so much, so I can say that you should give yourself your own freedom dances. Whatever it is that makes you feel so imprisoned, help yourself by searching for and trying ways that will make you feel free. My second freedom dance was an actual dance. Now, as controversial as it may be, when I was 19 I became a stripper. During the month I spent deliberating whether I could actually do a job like this, there was a lot of debating in my head. I had always been the girl who barked back at any unwanted male attention, most likely in the street, so how on earth had I come to asking the club “when can I start?”. When I eventually did, I realised how being seen as this naked goddess, albeit by a drunken man, was so freeing. This job put me in control of a male for once. Side note: there is a much bigger conversation to be had about sex work, so bear with me, I’ll get there, but for now as you’re reading this please leave aside all judgement.
But the way I felt suddenly having all this control, in a safe and controlled environment, showed me that I didn’t have to pretend I was in control of my relationships with men, but that I could actually learn how to be. You don’t necessarily need to strip to gain back your power, but it gave me back my freedom. After 2 years of suffering from anxiety caused by the emotional abuse, I suddenly had not a care in the world. I suddenly walked down the street by myself without bursting into tears. I also suddenly had money. So when I walked through town the day after revealing myself to men I didn’t know the night before, I held my head so high. I realised that if I could strip off and give random men lap dances, then I really could do anything. So I wasn’t going to cry as I walked down the street anymore. As I did my Saturday shop without any fear, I danced freely in my head. The lap dances were my freedom dances, showing me that I can be in control of myself. They pushed me to be confident, embrace who I was and what I looked like, and accept who I wasn’t going to be anymore. So please, HAVE YOUR FREEDOM DANCES! Whatever they are, wherever they are and for whatever reason that has you suffering from sexual inequality, find a way to make yourself feel free from sexism. Let me know what they are and tell me your own experiences, and if you would like me to share them then even better!
https://medium.com/@chloecreatedthis/the-freedom-dance-4a6415a0ca5f
['Sex', 'The Society']
2020-03-11 10:41:53.405000+00:00
['Abuse', 'Sexism', 'Sex', 'Sex Work', 'Sexual Harassment']
Distributed ledger
Distributed ledger is one of the four core technologies of blockchain. If cryptography is the cornerstone of the blockchain, then the distributed ledger is its skeleton. Simply put, a distributed ledger is a data storage technology: a decentralized, distributed database. In the early days of the traditional centralized internet, our information was stored in one large database, and the information was relatively concentrated. Once that centralized database had a problem, it would go down and become unusable, with serious consequences. Later, the traditional centralized internet also recognized these data risks, so operators spread the data across multiple databases and stored it together. That way, even if one of them has a problem, the other databases can continue working in its place to ensure the normal operation of the entire network. This kind of decentralized storage technology is distributed storage, and the distributed ledger used by the blockchain builds on it in a more special way. A distributed ledger is also a distributed database; the difference between it and the distributed databases used on the traditional centralized internet is that the blockchain is decentralized, while the traditional internet is centralized. In the centralized model, the data is stored in one large database, which the centralized giant maintains itself, and users do not have permission to access it directly. If users want to view historical data, they have to send a request to the central server. Once these centralized giants want to do something with your data, there is nothing you can do. A distributed ledger, by contrast, is a decentralized database in which individual databases connect to form one large distributed database. Every database has the same permissions: all of them can view and store the data. It is as if everyone keeps a copy of the same ledger. Every transaction is recorded by all of them, and after a while everyone gathers to check the ledgers against each other. Once anyone tampers with the historical record, it is immediately discovered by everyone. And if you want to participate, you can become one of the nodes with the permission of the blockchain network. The role of distributed ledgers in the blockchain is not only to provide multiple backups of data, effectively preventing data loss, but also to give the blockchain its decentralized character, preventing data from being concentrated in the hands of giants who could misuse it. In general, the distributed ledger is like the soul of the blockchain. In today's era of big data, centralized giants misuse data everywhere. As a user, one can only hope that the blockchain will be able to overcome obstacles like a warrior and put an end to the current data chaos.
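To make the tamper-evidence idea concrete, here is a minimal, hypothetical Python sketch (not from the original post) of a hash-linked ledger: every node keeps a full copy, and any copy can be verified independently, so altering a historical record is immediately detectable.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(index, data, prev_hash):
    # Hash the block together with the previous block's hash, so changing
    # any historical record invalidates every hash that comes after it.
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, data):
    prev = ledger[-1]["hash"] if ledger else GENESIS
    ledger.append({"index": len(ledger), "data": data, "prev": prev,
                   "hash": block_hash(len(ledger), data, prev)})

def verify(ledger):
    # Any node holding a copy can re-run this check on its own.
    for i, block in enumerate(ledger):
        prev = ledger[i - 1]["hash"] if i else GENESIS
        if block["prev"] != prev or block["hash"] != block_hash(i, block["data"], prev):
            return False
    return True

ledger = []
append(ledger, "Alice pays Bob 5")
append(ledger, "Bob pays Carol 2")
print(verify(ledger))                      # True
ledger[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(verify(ledger))                      # False: every copy detects the edit
```

Real blockchains add consensus and networking on top of this, but the core tamper-evidence property comes from exactly this kind of hash chaining.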
https://medium.com/@aitdblockchian/distributed-ledger-b84d0320a4cc
[]
2020-12-09 01:43:06.549000+00:00
['Airdrop', 'Blockchain', 'Blockchain Technology', 'Blockchain Development', 'Distributed Ledgers']
What is mobile marketing?
About mobile marketing Mobile marketing is a multi-channel digital marketing strategy aimed at reaching a target audience on their tablets, smartphones, and other mobile devices via SMS, MMS, email, websites, social media, and apps. In recent times, customers have begun to shift their attention to mobile. Because of this, marketers are doing the same in order to create true multi-channel engagement. As technology becomes more fragmented, so does marketing. And in order to earn and sustain the attention of potential buyers, content must be strategic and highly personalized. When it comes to mobile marketing, this means keeping devices in mind and utilizing SMS/MMS marketing and mobile apps. Mobile marketing is an important piece of the puzzle when it comes to building out any short- or long-term marketing plan. From email to pay-per-click, search engine optimization (SEO), content marketing, and social media marketing, there is a mobile marketing channel to reach every part of your audience where they are most comfortable. For mobile marketing to be effective, you need to curate the cohesive experience that customers expect, and that can be a real challenge as you work to acquire, engage, and retain customers across a variety of platforms. Mobile marketing can do wonders for driving brand value and demand for your products or services by leveraging mobile devices to connect with more customers in real time at any point in the customer lifecycle. Mobile is also growing steadily. According to eMarketer, mobile versus desktop usage stats in the US in 2018 show that mobile-only users will grow to 55.7 million (nearly 19%) by 2022, and Adweek estimates that 79% of smartphone users have their phones on or near them all but two hours a day. Today, there are more mobile devices in the world (8.7 billion) than people (7.1 billion), due in large part to our voracious appetite for new technology. UN data analysts have found that in the US, 71.5% of citizens over the age of 13 have a smartphone, and 66.5% do worldwide. To tap into the increasing power of mobile marketing, you must focus on creating the seamless experience your customers expect.
https://medium.com/@riarodriquesdm/what-is-mobile-marketing-b9868f2fd9e9
['Ria Rodriques']
2020-12-26 07:55:32.093000+00:00
['Mobile Marketing', 'Digital Transformation', 'Mobile Marketing Tips', 'Digital Marketing', 'Digital Marketing Agency']
The Clairvoyant
1860. Tuscany. Inside a ramshackle carpenter’s cottage, Assunta Orsini, 17, was confined to her bed. She wore a small, laced black cloth on her head. Thin and ashen, she looked years younger than her age and yet seemed drained of life. When the door opened on this day, Assunta sat upright in her bed, as if she had known someone was coming to see her. A young married couple entered. The wife, Signora Cinelli, began to introduce herself, but then Assunta spoke. “Come, Serafina Cinelli,” said Assunta. Signora Cinelli, dumbstruck by the teenager, a complete stranger, uttering her name, took her husband by the arm. The door closed behind them. “Get down on your knees!” shouted Assunta. The Cinellis, frightened by the change in the young woman’s voice, did as she commanded. The name Assunta Orsini had been spreading throughout Tuscany for the last year as rapidly as the growing unrest and uprisings plaguing the disjointed Italian territories. People claimed Assunta was a clairvoyant with powers that went far beyond predicting the future. On this occasion, Assunta determined the Cinellis needed an exorcism as quickly as possible. Typically, a Catholic priest would present himself to lead the sacramental rite. But there would be no priest today. The exorcism had already begun. And Assunta was in charge. In this region and era, women were assigned the status of a minor for their whole lives, raised to invisibly perform household duties. Assunta had defied all expectations of what a young woman could be capable of, in the process drawing hordes of followers seeking help — and enemies set on destroying her. Prophetess. Witch. Oracle. Devil. Those were some names people called Assunta Orsini. Rumors indicated she could tell your future, reveal your sins, contact the dead, and channel the voice of God himself. Assunta — named for the Assumption of the Virgin Mary, her ascendance into heaven — had no formal education. An Italian thinker of the seventeenth century summed up the general attitude on women and education with the expression, “son capaci, ma non devono” — they are able, but they should not. Assunta could not read, but she would brush her fingers along the pages of a purportedly “magic” book, a copy of Thomas à Kempis’s The Imitation of Christ. A monk had given the devotional text to Assunta while her sister was ill at the hospital. “Before you open it,” the monk instructed her, “make the sign of the cross, and say to yourself: My Jesus, talk to me.” Assunta would have this book handy, for example, when visited by a sailor’s wife from Viareggio whose husband had departed on a long voyage. She implored Assunta to tell her anything. When would he return home? “My Jesus,” Assunta would say, “speak to me,” as she scanned the page, not to see the words in print, but rather for some sublime and unseen truth. “Your husband is well,” she said to the wife. It must have been a relief. But then: “He died in God’s grace. His death was very good. He drowned. Before dying, he suffered so much, and prayed so much, that God forgave his sins and he is in Heaven now.” It was later verified that this fisherman had died at sea. Assunta could be full of kindness, but could also be blunt — even harsh — when sharing what she believed God was revealing to her. A woman surrendering her baby to a foundling home. Equally brusque was Assunta’s revelation to a young mother who had given up her baby.
When a woman gave birth out of wedlock, there were few options other than the orphanage or gettatelli, literally meaning the place for thrown-away children. Her act of desperation had left her heartbroken. Assunta, she hoped, could foresee her reunion with her son. When will we see each other again? “He died,” replied Assunta flatly. The woman immediately fled to continue her search for the child only to discover that Assunta’s prophecy had come to fruition. No one was more shocked than Assunta’s own family. Her stepmother, Carolina, was a severe woman who tended to punish her harshly, often humiliating and ridiculing her in public. Assunta’s nonna, on the other hand, was seemingly unrattled by the commotion around her granddaughter. The young woman’s father, Luigi, was caught unprepared for what many considered miraculous. He did not try to stop her, though she was barely recognizable to her own family at times. When Assunta spoke about everyday matters, her language was described as “crude dialect, full of errors and vulgar idiocies.” But when delivering prophecies, she spoke in “a clear, correct, dignified, and serious” manner. Witnesses described it as “another person speaking through her mouth.” In her trances she reportedly could converse in French and Latin. She spouted catechismal philosophies and sustained theological disputes with concepts beyond her knowledge. After these bouts she would confess she had not understood the meaning of the words that she had spoken. Assunta seemed to obey “a mysterious, occult force that communicated words and ideas to her in an extraordinary and phenomenal way,” wrote one chronicler. Over time, she relied less and less on the devotional book, using other objects including rags, paper, stones, rocks and wood to channel secrets. As she grew more powerful, she would not need any totems. In the eyes of society, the young woman, small and plain but considered attractive enough, should have been dreaming of marriage. But when 17-year-old Arturo Cempini showed up eager to see Assunta, he was no suitor. Arturo, the son of a government auditor, had other priorities; namely, the battles erupting over attempts to align the territories of the region into a unified Italy. Arturo wanted to peek into his future as a soldier, which could pave the way for other successes. Would his glory come to pass? Assunta lowered her eyes, refocusing herself as if she were recalling a forgotten memory. Then she glared in Arturo’s direction. She described in vivid, repulsive detail a demon surrounded by 40 snakes. She believed these were Arturo’s constant, invisible companions that slithered around his body, waiting eagerly to drag him to Hell. “You will not go to war.” She told him his death was imminent. The news devastated the once cheerful boy and his family. His family assured him he was healthy and well, but he could not shake the girl’s prophecy. Within a few months, he started to cough, and then to “spill blood” from his lungs uncontrollably. Would her son heal? his mother asked Assunta. “He will heal perfectly,” Assunta said, seeming to reverse her initial prophecy. But often her declarations were framed in metaphorical or spiritual terms. Arturo gradually became sicker until he died only a few months later in September 1860. “God wanted him for Himself,” Assunta explained to his weeping mother. Assunta’s knowledge was awe-inducing to those who reported being on the receiving end of it. It was also intimidating.
Her harsh stepmother began to give her a much wider berth. Once people heard about the young woman’s abilities, they came from all over Tuscany to see her. Even travelers from abroad, such as the English painter and Florentine literary figure Seymour Kirkup, heard about Assunta and went to test her prophetic abilities in person. In a letter to a friend in England, Kirkup insisted “the facts themselves are very positive” on Assunta, noting that some priests “treated her as a saint.” He judged the case “authentic.” When Father Gentile Pucci, a priest based in the Quarto area, came across Assunta, he was intrigued. Pucci tended to be disheveled and a bit disorganized. He did not project a particularly intellectual or scholarly air, but he was open-minded and enthusiastic. When the two met, Assunta claimed to be able to reveal the state of Father Pucci’s late mother. The dead woman’s words — pleas for her son to pray for her salvation — came directly from the teenager’s lips so uncannily that they drove the priest to tears. From this point on, Pucci was a frequent companion and even confessor to Assunta, listening to her secrets and absolving her of her sins. Many from the church, however, had very different reactions, ranging from skeptical to furious. Belief in a lay person’s spiritual powers weakened public reliance on the Church, already being challenged by the political turmoil of the day. Accusations and explanations flew as clergy heard about Assunta. Maybe she was possessed by demons. Maybe she was insane. Maybe she was just a teenager who craved attention — and money. One theory posited that she was controlled by someone else through hypnotism, which would explain her ability to speak in languages she did not know. The Convent of the Encounter (Convento dell’Incontro) in Florence, from which Assunta’s activities were monitored. Still, to challenge her directly was a daunting prospect. Whenever someone tried to trick her with false stories or fake misfortunes to test her, Assunta ignored them or refused to answer their questions. Others tried to profit off of her apparent gifts, in the process testing her moral compass. Assunta’s own brother tried to solicit the winning lottery numbers from her. The Papal State sanctioned a public lottery and drew winning numbers daily. Assunta refused the request, knowing that to do so would be stealing from the church and God Himself. No one had been more skeptical of Assunta than Brother Orazio, a local Franciscan monk. But then Assunta revealed the details of a private conversation Orazio had the day prior, when a man speaking to the monk blasphemed the pope. When Assunta repeated the harsh, blasphemous words back to Orazio — words she had no way of knowing — Orazio became an instant believer. Father Pucci, her biggest advocate among the clergy, gave her a certificate to continue her visits with those who sought her visions, contingent on her not discussing matters or prophecies related to the government and current political upheavals. But as Assunta’s popularity soared, the local monastery’s head Franciscan, Father Guardiano, instructed the monks not to confer with the girl. The Florentine government banned all visitors to the Orsini household. The attempted suppression only made the public all the more determined to reach Assunta. When the Cinellis, the desperate married couple, came to Assunta’s modest house for help, they obeyed her command to kneel before her and begin the process of an exorcism.
Assunta sprang to the floor and reached for an olive branch that had been blessed by a priest. She dunked the branch in a dish of holy water and sprinkled the couple, flicking it so it fell like raindrops against their skin. Signor Cinelli recoiled at the touch of the holy water while his wife tried to restrain him. This visit was a last ditch effort to save the man’s life. An inexplicable darkness had overtaken him after he witnessed the early deaths of his two young sons. According to the bereft family, it was something beyond grief that had overtaken him — it was evil. Doctors had been confused and then terrified. Attendants at Saint Lucia Hospital would later testify to a priest that Signor Cinelli exhibited inhuman strength when they had tried to sedate him. Each day, more than the last, he was “showing himself to be a madman [who] tried to grab, bite, tear, anyone who approached him,” as a priest recorded. Clergy at the couple’s local parish had also examined Signor Cinelli. Witnesses recorded signs associated with demonic possession: an aversion to holy objects and a supernatural ability to distinguish between plain water and holy water, the latter inducing a violent reaction upon contact. Parish priests swore that the man was able to utter words in Latin, a language beyond his simple dialect — in a voice not his own, not unlike Assunta herself — a possible sign of speaking in tongues. When he went into his “frenzies,” six grown men were unable to restrain him. Now the humble teenager faced him alone. It was a standoff, not with the grieving man, but with the malicious force she believed was possessing him. “Damned spirit,” shouted Assunta, in a voice later described by those who heard it as supernatural. “I command you this in the name of God, leave this body, that is not yours: it is Jesus, Mary, and the Guardian Angel’s.” Assunta had been on the other side of an exorcism before. Priests had insisted on trying to exorcise Assunta but each time gave up, finding no evidence of demons. This was her first attempt at conducting one herself. If she did possess powers from God, as so many believed, maybe she could use it to expel demonic control from this poor man. She continued: “Leave this soul, that is not yours! Leave, and return to the place of desperation, to which God has condemned you.” The man contorted, recoiling like a serpent to a mongoose at the touch of the blessed water and the petite young woman wielding it. “No, I don’t want to leave,” came the reply, “I don’t want to leave… I want to be here, in spite of you… greedy, so greedy… no, you don’t have to have this soul, that is mine!” Deep into her battle with whatever power seemed to control Signor Cinelli, Assunta recited the Liturgy of the Mass. She lit an altar of candles in front of the images of the cross, the Blessed Virgin Mary and the saints that covered nearly every inch of her walls. She used these blessed relics to hold the holy water she threw at the stricken man. The man cowered and blenched, writhing around on the floor. His threats grew more violent. “Ugly little monkey, be careful, if you don’t stop, I am getting on top of you,” he growled at Assunta. Assunta continued her blessings, though this large man could easily overtake her — or at any moment the authorities might burst into the house. A young woman was expected to bow down to the power of men, to shrink into obedience. 
But she braved it all, crying out: “You will not be able… I am not afraid… God, Mary, the Guardian Angel defend me.” The man’s body contorted and convulsed violently. Assunta’s chants grew louder. The man’s head would later be described as becoming gruesomely deformed, his throat swollen and chest bulging forward like a frog’s. For those well-versed in demonology, these were potential clues to the demon’s identity. Baal, according to legend, had been a deity before falling from grace and taking his place as a king of demons, holding command over 66 legions — with as many as 6,000 demonic soldiers in each. The hoarse-voiced royal demon was said to have a variety of appearances, with the sixteenth century demonology text Pseudomonarchia Daemonum reporting that King Baal could appear with the head of a toad, cat, and human simultaneously. Signor Cinelli wheezed furiously, stretching his gaping mouth wider and wider. At the sight of this, Assunta pounced on top of him, forcing the contents of a glass of holy water into the gaping mouth before forcing it closed with her hands. She then stepped back. His body reacted with ruinous shock, wiggling “as if it were a ball of yarn.” The volume of his throat increased while his eyes opened wide. “It seemed,” one chronicle recorded, “that from one moment to another he should burst, as if he should succumb to an unknown, oppressive weight.” The man gave a horrendous bellow before passing out, seeming to be dead or near-death. When he awoke, he seemed reborn. Assunta gave him a holy medal to wear around his neck and instructed him to never take it off. She warned him to confess all of his sins to God, recite prayers daily, sprinkle holy water around his room and drink sips of it from time to time. She opened the door to her chamber. “Evil demon!” Assunta called out. “So you left that soul that you wanted? So I was able to cast you out?” Archbishop of Florence, Giovacchino Limberti, circa 1857. The family would credit her for years with saving them, but Assunta’s own life still hung in the balance, now more than ever. As Signor Cinelli recovered and his wife marveled, Assunta reported hearing the lurking demon, possibly King Baal himself, continue to address her. “Ugly little monkey, now you are happy, that you have made me lose that soul? I will never, ever leave you.” Assunta’s exorcism crossed lines into dangerous territory that provoked enemies from all sides. She broke restrictions imposed by both the government and the church on her household. The Catholic Church, already under threat from the newly unifying Italy, now had an upstart, unsanctioned layperson — a teenage girl, no less — performing one of the most sacred and secretive rites out of a shack. After only a year, Assunta had already been credited by the public with hundreds of miracoli, an alarming incursion into the church’s dominion. Father Pucci’s well-meaning certificate of permission to Assunta could only give her cover for so long. Hostility turned into an outright campaign against Assunta and her family. Government operatives turned the tiny Orsini home inside out looking for any illegal funds they might have collected without taxation or state approval. As the patriarch of the family, Luigi Orsini faced the brunt of the harassment. At one point, the family found Luigi dazed after being roughed up by the gendarmi, the military police tasked with squashing internal enemies throughout the kingdoms. “It’s my own fault,” Assunta lamented. 
The weight of the world seemed to fall on her shoulders. Meanwhile, Assunta's devoted followers lined up outside her door — sometimes 25 to 30 carriages waited outside at a time. In addition to Catholics, Jews and Protestants also came to visit the young woman, sparking further controversy among church observers. The onslaught of visitors demanded that she "perform" even at times when she warned that she did not see anything. She would sometimes throw out guesses to appease the crowds. At those times, some were quick to turn on her as a false prophet. The routine gradually debilitated Assunta.

Priests from surrounding towns and villages forbade their penitents from consulting Assunta. They chastised those who did. One priest said of his peers: "they sought out jealousies, lies, persecutions and harassments against her." When priests visited her to try to discredit her, Assunta greeted them respectfully. But when put to the test, she never held back from revealing their personal sins, embarrassing them and accumulating enemies in the process.

The ecclesiastical curia, a powerful council of clergy, summoned her. The Archbishop of Florence, Giovacchino Limberti, personally interrogated Assunta behind closed doors. Limberti, 39, had an intense stare and a stern presence. He was an intellectual armed with a knife-sharp memory. But the ambitious Limberti was considered a disappointment by many colleagues and rivals, too indecisive for tumultuous times. He disliked trouble and scandal, and rather than solving problems he had a reputation for sweeping them under the rug.

Assunta did not waver in defending herself when questioned. Then came a commanding voice from Assunta — the kind of voice her believers had heard many times that seemed to come from the spirit world. "Believe in this simple, crude girl," the voice coming from the young woman urged the curia. "Believe her. It's not her that speaks: it is I who reveal myself through her mouth. Be certain that I have given her so much light and strength… I have chosen her as my wife since the first Communion. No one will go against the efficacy of my word, until I can convert souls, not even the ecclesiastical authority. Not anyone will impede the virtue of my word."

With jaws dropping at Assunta's declaration, the questioning came to a halt. The voice's allusion to a spiritual marriage "since the first Communion" suggested Assunta was empowered directly by Jesus. For her disciples, this would have confirmed Assunta's powers and status.

Statue depicting Baal (also called Molech) as displayed at the Roman Colosseum.

Father Pucci continued his dedication to Assunta and his firm belief in her. As her de facto confessor, he had become the only person to know her secrets, a risky spot to be in. Assunta told Pucci that she had urgent visions to communicate to Pope Pius IX about the Catholic Church's precarious position in the increasingly secularist, unifying Italy. She dictated the letter in Latin — which, by all accounts, she should not have been able to do. She warned the pope that he and the Church were under imminent threat. The revolutionaries would drive him from Rome for a time, but he would eventually return to where "God had placed him." She entrusted the missive to Pucci. Though his actions could have put him at further risk, Pucci submitted the letter to the proper channels to reach the Vatican. Assunta was sure the Holy Father would respond. But her letter went unanswered. This time her predictive powers seemed to fail, and at the worst possible moment.
Father Pucci, keeping a low profile, gathered intelligence directly from the church that could help him protect Assunta. Whispers had spread that the church was contemplating forcing Assunta from her home and confining her to a convent for the rest of her life. A panicked Pucci reached Assunta and warned her that she was in grave danger. The priest instructed her to immediately burn the document he had given her that granted limited permission to share her prophecies. Assunta understood. The priest fled from his now-ostracized parishioner.

In the last days of November 1860, the gendarmi arrested Luigi Orsini under suspicion of fraud. Father Pucci was also charged as "an aid to this transgression." The walls were closing in. After eight days, Luigi was released. He went to his daughter, asking for what may have been her most important prediction yet. "People talk about us everywhere and are against us," Luigi lamented. "Tell me, when will these persecutions end?" "Daddy, be patient," said Assunta. "Jesus tells me that by the middle of the month everything will be over."

For weeks, the courts summoned Assunta's acquaintances to testify about her conduct. All of the boys from her father's carpentry workshop had been called to appear. Questioned by such an august panel of investigators, some of the witnesses could be intimidated into implying wrongdoing. It seemed only a matter of time before authorities came for Assunta herself. When that time arrived, a messenger of the court found her alone in a church, practicing her daily devotions. There was much for her to pray about. She called upon the spiritual forces she channeled every day in her visions. If help were coming from above, there was no time to lose. The messenger interrupted, delivering a court summons ordering Assunta to appear immediately before the Tribunale di prima Istanza, the preliminary judicial body for civil cases in Florence.

Darkness loomed in the sky on the morning of her appointment. A downpour began. Assunta journeyed out into the storm. Together with her elderly nonna — the only one of her family willing to brave the crowds and the courts after months of torment — she walked the two-hour journey through the hills on the ancient stone path and the flooded dirt roads. Soaking wet and hungry, Assunta arrived full of fear. She entered the courtroom where Auditor Girolamo Gori, the investigating judge, presided over the proceedings. Gori was both a judge of her guilt and the chief inquisitor who questioned witnesses. The questions, based on the evidence collected by the gendarmi, came rapid-fire, with insinuations of fraud, deception, and charlatanry. Did she take money from visitors without paying taxes? Did she knowingly deceive people, and if not, how did she come to know the secrets of past, present, and future?

Oddly, the authorities were armed with extensive inside knowledge about her and the Orsini family. Someone from within their close circle clearly had leaked to authorities. Obvious suspects included Carolina, Assunta's stepmother, whose control over Assunta had been undermined by the young woman's activities and her celebrity. Then there was Assunta's brother, for whom she had refused to exploit her gifts to manipulate the lottery. In addition to answering the questions, Assunta presented the certificate signed by Father Pucci. Wise beyond her years, she had disregarded the priest's advice to destroy the document, and in doing so had saved the priest's neck.
Because the certificate specified that she should only discuss religious matters and not government issues, she successfully proved that Pucci had not defied government orders. After three hours of inquisition, Gori would be tasked with deciding whether to acquit Assunta or commit her for a full criminal trial before the state. For the moment, Assunta was free.

Exhausted, she stumbled out into the flooded, mud-caked streets. Someone had spread word of her visit to the court, leading to an angry mob waiting outside to greet her. That same unknown adversary "had persuaded the mob of urchins and idlers to stalk her," wrote one priest. The crowd of angry young men verbally attacked her "bestially" in the street, circling around her with revolting, obscene chants. Not since Assunta's exorcism of Signor Cinelli had she heard such curses cast against her. The Baal-like demon that reportedly had mocked her that day had promised it would not go quietly: I will never, ever leave you. Today, the young woman's strength faded, and those who believed in the intervention of a sinister demon such as Baal could see his dark handiwork.

"Fool! Lush! Crazy! Sorceress! Possessed!" Members of the crowd spit out epithets toward Assunta before they began with their more vulgar counterparts. Assunta clutched her nonna and tried to navigate past the men, but the crowd followed closely, tormenting them at every step. They brutally grabbed the women with outstretched hands while Assunta struggled to break free. The women barely escaped but the damage was already done. "The fact is that the innocent girl was so shaken with fear," came one later report, "that her circulation stopped… little by little she got sick."

An acute fever took hold of Assunta and she was confined to her bed, falling in and out of consciousness. It was natural to expect Father Gentile Pucci, her greatest ally in the Church, to be at her bedside. But he was nowhere to be found, as he was possibly in danger himself if the authorities or angry mob got ahold of him. For now, though, the main concern was Assunta, not just her physical health but also her spiritual well-being. A priest administered her last Holy Communion, the beginning of the Last Rites in the Catholic Church. Assunta tossed and turned, still haunted by the words of the unruly mobs, but at the moment of receiving the Eucharist, those who watched by her bedside said they saw her revive for a moment. "Oh! what light! What splendor! What beautiful things I saw!" Assunta declared. "Open… open… now yes… now yes! Tonight I heard a voice that told me: Tomorrow they will believe you dead — but then you will revive yourself."

Assunta did indeed fall unresponsive, then wake again, over and over. On December 17, 1860, after three days in and out of consciousness, Assunta Orsini died at the age of 17. Her prediction of her troubles ending in the middle of that month had come to pass — but in the most devastating way possible for her father. He had been told a miracle had unfolded in his humble home, and now his daughter was gone and his family irretrievably broken. For three days, the Orsini family allowed crowds to visit the body of the "blessed virgin," only increasing her influence in death. No sooner had she died than people competed for souvenirs from her corpse and modest possessions, now considered relics of a clairvoyant and miracle worker.
People cut strands of hair from her corpse, and handed them out to her devoted followers to "preserve it as a precious memory." No witnesses at the improvised shrine reported seeing Father Pucci, her spiritual confidant. As it turned out, he was not being held against his will. Pucci had reason to stay away because he had been the chief witness against Assunta. He had secretly shared with the government details of her activities, visitors, and possessions. Pucci — not Assunta's embittered stepmother or brother — had been the source of all the officials' inside information. Pucci, in over his head, likely had bowed to the mounting threats against him, rather than setting out with the intention to betray Assunta, who for her part had gone out of her way to clear Pucci's name in court. But by trying to save himself, he ironically ensured that his life remained in danger from Assunta's fanatical followers for the foreseeable future. To them, Pucci had committed treason against a saint.

It took more than a year for the verdict from those hearings that had facilitated Assunta's death to surface publicly:

Hearing the Report of Auditore Girolamo Gori, Investigating Judge Verdict: It is expected that the trial is full of proof, that the young girl of 17 years, Assunta Orsini, is believed, and deemed superiorly inspired to answer about the things related to the souls of the living, and of the deceased, and full this proof because, and she herself in her constituents asserts it, and supports it, and all the Witnesses heard in the trial trust in her as those, who had responses from her on the state in particular of the salvation or not of the souls of their dead…. There is no place to proceed further, neither against Assunta Orsini nor against Father Gentile Pucci for the charges that were brought against them above. Thus decreed the fifteen December One thousand eight hundred and sixty.

The verdict dismissed all charges and even tacitly supported her ability to pursue her prophetic work. That verdict was issued two days before Assunta Orsini died — though neither she nor her family had ever known. As for accusations that Assunta or her family had monetarily profited from her powers, the only evidence that could be produced was two small pieces of wax gifted by one of her visitors. The finding undercut theories that Assunta had been part of some scam for profit; nor could skeptics convincingly point to coincidence or trickery to explain even a fraction of the predictions and private knowledge attributed to Assunta.

One of Father Pucci's fellow priests, Father Giovanni Pierini, was initially dismissive when asked to investigate Assunta's story. Pierini, 30, who was also a journalist, interviewed approximately 100 witnesses who claimed Assunta had performed miracles using her clairvoyance. Pierini was only able to disprove a handful of accounts, and to his surprise many inquiries seemed to produce iron-clad evidence of Assunta's powers. He now believed there was more to this young woman, an "angel of goodness," than he had first thought, and her enemies had evidently thought so, too. Pierini reasoned that Assunta had been a vessel for someone else, for good, evil, or some other supernatural authority that remained to be seen. To Pierini, the mob outside the court had been deployed as "scapegoats" by even more sinister forces that needed Assunta's powers stopped.
Suspects lurked everywhere with motives to turn the mob loose: the archbishop with a shaky reputation who tended to cover up disturbances; the once-adoring Father Pucci who may have felt the need to get rid of Assunta to conceal his treachery from her; a follower dissatisfied with Assunta's prophecies or her ability to reveal private sins. For the believers, of course, a possible culprit was the demonic Baal — with legions of devils at his command — that had attached itself to her seeking to stop her powers. The British expat Seymour Kirkup, who'd met Assunta and knew people involved with the case, noted cryptically, "They killed the poor girl among them."

Pierini compiled crucial information related to Assunta's story. Truly*Adventurous acquired one of the last remaining copies of Pierini's documentation known to be extant, and this may well be the only narrative of Assunta's case in more than a century and a half. Assunta Orsini was never canonized as a saint, but Pope Pius IX did in fact receive her letter and read her prophecies with great interest. Her premonitions would later come to pass: Pope Pius IX was driven from Rome by the unifying armies while the future of the church hung in the balance, but he ultimately returned, as she had foretold, to where "God had placed him" in the new Italy. These prophecies were included in a well-known 1864 Catholic collection of major prophecies regarding the "upheavals of the kingdoms of the world." Though her name would fade into obscurity, buried by the powerful men who closed ranks against her, a bit of her voice lives on thanks to her prophecies.

Author's Note: Special thanks to Julie M. Allen for help with translations from Italian.

Gabriella Gage is a nonfiction writer and journalist from Somerville, Massachusetts. Her work has appeared in The Boston Globe and The Globe and Mail, and she is a former James Merrill House Fellow. For all rights inquiries email here.
https://medium.com/truly-adventurous/the-clairvoyant-f90615cb4252
['Gabriella Gage']
2021-07-29 19:32:59.232000+00:00
['True Story', 'Longform', 'Mystery Thriller', 'History', 'Mystery']
Use Min-Width, Not Max-Width, in Your CSS
Use Min-Width, Not Max-Width, in Your CSS

Your websites should be mobile-first, not mobile-last

I am unhealthily addicted to inspecting the websites I visit. Maybe it's a sign that I've been a developer for too long, but I really do enjoy dissecting the websites I stumble upon as I surf the internet. Sometimes I'm stuck dismembering a website for hours, ultimately forgetting why or how I even arrived at the website in the first place.

You learn a lot when you dissect other people's websites. I've seen clever JavaScript optimizations and mind-blowingly genius CSS hacks. But you also see a lot of terrible code. I would like to talk about one common design flaw that I've seen far too frequently. Max-width.

Let's say you want your images to have a width of 100% on mobile devices and 50% on desktop. You could accomplish this by typing this CSS:

An example of desktop-first CSS. The default width of images is set to 50%, and is then increased to 100% for devices with a small screen width. The image is made by the author. (Both snippets are reconstructed in code at the end of this piece.)

You could also enter this CSS and achieve the same result:

An example of mobile-first CSS. The default width of images is set to 100%, and is then decreased to 50% for devices with a large screen width. The image is made by the author.

In the first example, we set the width of images to 50% by default but increase the width to 100% for devices with small screen widths. Meanwhile, in the second example, we default to a width of 100% for images and then reduce the width to 50% for devices with large screen widths. These two pieces of code will produce the same result. As such, you might feel inclined to believe that choosing between them is arbitrary. Unfortunately, that is not the case.

The website will take longer to load for whichever device has to load more CSS rules. In the first example, mobile devices have to load two rules, while desktop devices need to read only one. In the latter, the roles are switched. The first example is desktop-first, while the latter is mobile-first.
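Since the two code samples were published as images, the CSS itself is missing from the text above. Here is a minimal reconstruction based on the captions, assuming the rules target img elements and assuming an arbitrary 600px breakpoint; neither detail is stated in the article.

/* Example 1, desktop-first: 50% by default, increased to 100% on small screens. */
img {
  width: 50%;
}
@media (max-width: 600px) {
  img {
    width: 100%;
  }
}

/* Example 2, mobile-first: 100% by default, decreased to 50% on large screens. */
img {
  width: 100%;
}
@media (min-width: 600px) {
  img {
    width: 50%;
  }
}

Both versions render identically at any given viewport width; the difference is only which class of device has to match and apply the extra media query, which is the article's point above.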
https://medium.com/swlh/use-min-width-not-max-width-in-your-css-e6898fcf6a78
['Jacob Bergdahl']
2020-11-17 10:33:46.084000+00:00
['CSS', 'Technology', 'Development', 'Programming']
Why Outsourcing to a Nearshore Software Development Company is the Right Choice?
Benefits of Software Nearshoring

Nearshoring means that your outsourced development team is based in a nearby country. Such a close location opens up a lot of opportunities that you can use to get the most out of your cooperation.

Ideal balance between price and quality

The most common reason for companies to resort to outsourcing is the lower rates that outsourcing vendors offer. Local software developers in the US or Western Europe usually charge a lot for an hour of work, while teams from popular nearshoring destinations like Eastern Europe, Latin America, or Asia can offer rates that are half as much. At the same time, lower prices don't mean worse quality. The software development market is highly competitive in nearshoring countries, so the developers are bound to provide a high quality of service; otherwise, they will go out of business pretty soon. Lower rates are possible because of the lower average wage in those countries.

Proximity

In contrast with offshoring, nearshore vendors are a few hours away from your location. For example, you need only two hours to reach Ukraine, one of the most popular nearshoring destinations, if you are located in a European country. That is pretty convenient if you want to visit your outsourced team, discuss your partnership, or go through negotiations. A few hours' flight and no jet lag — and you can meet your partners in person.

Convenient time zones

Again, due to proximity, you won't have a big time difference with your partner. Hence, the overlap in working hours is significant, which allows you to establish seamless communication without any delays.

Strong infrastructure

Nearshoring companies exist to provide software development services, so their infrastructure and processes are geared towards this type of business. Thus, you won't need to invest in equipment or additional premises, spend time establishing processes, etc. The vendor takes care of the overhead, the developers' employment, and taxes, while you just pay for the services they provide.

Nearshoring vs Offshoring

Nearshoring and offshoring are often compared, as both of these outsourcing options can be suitable for businesses. Which company should you choose: one that is close to your own country, or one overseas? Here is a list of the main pros and cons of both models, so you can evaluate them and choose the best one for you.

Pros and cons of nearshoring:
- territorial proximity;
- cultural similarities between you and your partner;
- similar business ethics;
- possible communication difficulties due to remote work.

Pros and cons of offshoring:
- low costs;
- big talent pool;
- time zone differences;
- significant cultural differences;
- security issues.

👍How to Pick a Nearshore Software Development Company

Finding the right partner may take some time and effort. But eventually, it will pay off, as a reliable partner will push your company further and help you streamline processes, enter new markets, or increase the capacity of your business. So, we prepared a list of steps you need to take to find your ideal partner.

📌Determine your needs

The nearshoring market is full of companies, so we suggest choosing the criteria for your partner from the start. Think about the main specifics of your development and your expectations:
- Why do you need to develop software?
- Which platforms should it work on (desktop, mobile)?
- How big should your partner company be?
- What budget do you have?

These questions will help you narrow down the list of possible vendors.
🔎Research

Take enough time to research possible partners. Thanks to the Internet, you can learn a lot about a company even without interacting with its employees. Check their websites and look through their services, portfolio, awards, certifications, and testimonials. We also suggest checking tech directories like Clutch or Goodfirms to read through the testimonials their clients left there. Usually, such platforms interview clients themselves or at least verify the reviews, so the reviews are likely to be legit. You may also check social media, job portals, or tech blogs to gauge the company's brand. For example, if there is a lot of negative feedback from previous employees, it might mean that the company has low employee retention and a poor work culture. For you, that may translate into frequent developer replacement, delays, and low productivity.

📞Communicate

The negotiation stage is also a good marker of your compatibility. While the vendor studies your request, makes estimations, and prepares the offer, you can observe how they build processes and communicate. You can see whether they are quick to answer, whether they research the topic thoroughly or stay superficial, and whether they are ready to go the extra mile to help you reach your goal. If they show themselves to be attentive, pleasant, and hard-working here, they will likely be the same way during actual development.

In Conclusion

Nearshoring combines all the benefits of outsourcing with proximity to you. Hence, it's perfect for those who are looking for an alternative to in-house development or onshoring but want to build close cooperation with a software development partner. If your company is situated in Europe, we suggest checking Eastern European nearshoring countries like Ukraine, Poland, Bulgaria, etc. If you are in the US, Latin America has a lot of reliable software vendors. Take your time, do your own research, and find the one partner you are looking for!
https://medium.com/@gbksoft/why-outsourcing-to-a-nearshore-software-development-company-is-the-right-choice-fd2b97951e19
[]
2021-11-17 11:41:57.316000+00:00
['Gbksoft', 'Nearshore Development', 'Blog', 'Outsourcing Services', 'Outsourcing']
The Journey Ahead. Episode 4: The Journey Ahead
“Good gosh damn gents, hearing you talk about poppycock dust gave me shivers from tube to top hat,” said the mosquito. The air above her head flickered in midnight blues and charcoal grays. Several massive orange puffs behind her cast dark shadows over the table top. The two worms and fat fly she addressed were silent, surprised at her sudden entrance. Her companions, the grizzled earwigs and their subservient silverfish, stood waiting. Inside the house, scavengers who came upon each other without predatory intent could expect inquisitive and jovial interaction. Flies, crude and blunt as they are, didn’t do so well with the jovial part. Donny would normally start with an insult, then move into a rant based on the incoming species’ stereotypes. This time however, he was silent, brow visibly furled due to cognitive overload. How could he get rid of the newcomers and control his hostage situation? One of the hostages broke the silence. Norm spoke, “Uh, you’ve had poppycock dust before?” “You’re diggity darn right I have, and I’m willing to purchase some off you this very moment.” As she finished her sentence, the mosquito shot a sly wink in Norm’s direction. Luckily, he caught it. “But, I’ll only take it if it’s genuine. Don’t you slip me dust from a honeycrisp or granny apple. I won’t take any of that second rate stuff.” “It’s real alright. You can come see for yourself, we were just about to go get the seeds,” Norm cautiously replied. “Well alright then, me and my boys here will join you on your trek. Let’s giddy on up.” Donny’s brow furled to right angles. He started, “Wait just a minute now plasma-sucker.” The mosquito didn’t flinch at the insult. Among the group, adrenaline was rising and the air was as heavy as the sweat dripping down Donny’s antenna. Using all of his brain power to craft his next sentence, he continued, “I’m sorry to inform you that these two worms and myself have an agreement to tend to. We won’t be going nowhere for the time being.” Turning to face the fly, the top-hatted mosquito responded, “Oh, pardon me. I got all worked up over the thought of poppycock dust and I forgot my manners.” At this comment, she audibly clicked two segments in her lower abdomen. Moving behind Donny, the earwigs stealthily unhooked the silverfish, though keeping them put. She continued, “I didn’t invite the fly on our trek. Will you join us?” “Will I join you?” Donny’s voice cracked. “I’m not going anywhere. And… and neither are the worms. As I said, we were in the middle of something, an agreement of sorts.” Donny was shaking with adrenaline and anxiety. The mosquito paused, then calmly replied, “Now I’m real sorry about this friend, but it seems to me that these worms don’t want much to do with your agreement,” she said and looked to Norm and Norman. “Nope!” spouted Norman. “And we can’t just pass up an opportunity for poppycock dust. It sounds like you’ll be staying, and we’ll be taking the worms and going. Don’t make a fuss now, shit-eater.” She turned to leave, motioning that the worms come with. Donny felt the insult, registered his loss of control, and lunged to attack. At the very moment of lift off, two lashes swung around his thorax and abdomen. The lashes quickly tightened, rendering his wings motionless and causing his paralyzed body to nose-dive from the air. He hit face first, feeling something in his proboscis crack as orange spatter spewed onto the table top.
https://medium.com/norm-norm/episode-4-d1825de52871
['Shaun Lind']
2016-08-26 03:35:06.857000+00:00
['Short Fiction', 'Fictional Series', 'Short Story', 'Fiction', 'Graphic Novels']
Building forecasting model using Vector Autoregression (VAR)
Building forecasting model using Vector Autoregression (VAR)

In this blog post, I describe how to build a forecasting model using Vector Autoregression (VAR). Vector autoregression (VAR) is a statistical model used to capture the relationship between multiple quantities as they change over time. VAR is a type of stochastic process model. VAR models generalize the single-variable (univariate) autoregressive model by allowing for multivariate time series. VAR models are often used in economics and the natural sciences. (from Wikipedia)

Problem Statement

For this demonstration, I have taken the Kaggle problem of Rossmann store sales (https://www.kaggle.com/c/rossmann-store-sales/overview). Rossmann operates over 3,000 drug stores in 7 European countries. Currently, Rossmann store managers are tasked with predicting their daily sales for up to six weeks in advance. Store sales are influenced by many factors, including promotions, competition, school and state holidays, seasonality, and locality. With thousands of individual managers predicting sales based on their unique circumstances, the accuracy of results can be quite varied.

Since this is an initial effort, we have scoped the forecasting to stores 1, 3, 8, 9, 13, 25, 29, 31 and 46. Here, I will perform a time-series analysis for the stores mentioned above and build models for stores 1 and 46, exploring the model-building approach with a separate VAR model for each individual store.

The procedure to build a VAR model involves the following steps:

1. Analyze the time series characteristics
2. Test for causation amongst the time series
3. Test for stationarity
4. Transform the series to make it stationary, if needed
5. Find the optimal order (p)
6. Prepare training and test datasets
7. Train the model
8. Roll back the transformations, if any
9. Evaluate the model using the test set
10. Forecast into the future

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Import Statsmodels
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller
from statsmodels.tools.eval_measures import rmse, aic
import seaborn as sns
import statsmodels.api as sm

1. Import the datasets

types = {'CompetitionOpenSinceYear': np.dtype(int),
         'CompetitionOpenSinceMonth': np.dtype(int),
         'StateHoliday': np.dtype(str),
         'Promo2SinceWeek': np.dtype(int),
         'SchoolHoliday': np.dtype(float),
         'PromoInterval': np.dtype(str)}
train = pd.read_csv("dataset/train.csv", parse_dates=[2], dtype=types)
store = pd.read_csv("dataset/store.csv")

# see head values for train
train.head()

# see head values for store
store.head(5)
2. Performing time series analysis

# merge data for analysis
train_store = pd.merge(train, store, on='Store')
# get rid of NaN values if a store is 'closed'
train_store['Open'] = train_store['Open'].apply(lambda x: 0 if np.isnan(x) else x)

# a better way to get more details: a timeline view of the medians for each day of the week
day = train_store[(train_store['Open'] != 0)]               # looking at open stores
sales_day = day.groupby('DayOfWeek')['Sales'].median()      # grouping by day of week for sales
cust_day = day.groupby('DayOfWeek')['Customers'].median()   # grouping by day of week for customers

fig, (axis1) = plt.subplots(1, 1, sharex=True, figsize=(10, 5))
# plot median sales
ax1 = sales_day.plot(legend=True, ax=axis1, marker='o', title="Fig: Median sales of each day of the week")
ax1.set_xticks(sales_day.index)
tmp = ax1.set_xticklabels(sales_day.index.tolist(), rotation=90)
# overlay customer data
cust_day.plot(legend=True, ax=axis1, marker='x', secondary_y=True)

Conclusion:
- We have high sales and high numbers of customers at the beginning and end of the week.
- Sales are lowest on Saturday.
- Most customers come in on Sunday.

# for more insights, let's split Year-Month-Date into three different columns
def date_change(data):
    data['Month'] = data['Date'].apply(lambda x: int(str(x)[5:7]))
    data['Year'] = data['Date'].apply(lambda x: int(str(x)[:4]))
    data['MonthYear'] = data['Date'].apply(lambda x: (str(x)[:7]))
    data['date_int'] = data['Date'].apply(lambda x: (str(x)[8:10]))
    return data

train_store = date_change(train_store)

import calendar

# select all stores that were open
subs = train_store[train_store['Open'] != 0]
# group by Year and Month
selected_sales = subs.groupby(['Year', 'Month'])['Sales'].mean()
selected_cust = subs.groupby(['Year', 'Month'])['Customers'].mean()
# plot
fig, (axis1) = plt.subplots(1, 1, figsize=(10, 7))
selected_sales.unstack().T.plot(ax=axis1)
tmp = axis1.set_title("Fig: Yearly Sales")
tmp = axis1.set_ylabel("Sales")
tmp = axis1.set_xticks(range(0, 13))
tmp = axis1.set_xticklabels(calendar.month_abbr)

Conclusion:
- Sales clearly increase toward the end of the year.
- All three years of data follow quite similar trends.

# median sales
median_sales = train_store.groupby('MonthYear')['Sales'].median()
pct_median_change = train_store.groupby('MonthYear')['Sales'].median().pct_change()
# median customers
median_cust = train_store.groupby('MonthYear')['Customers'].median()
pct_median_custchange = train_store.groupby('MonthYear')['Customers'].median().pct_change()

fig, (axis1, axis2) = plt.subplots(2, 1, sharex=True, figsize=(15, 15))
# plot median sales
ax1 = median_sales.plot(legend=True, ax=axis1, marker='o', title="Fig: Monthly changes in Sales/Customers")
ax1.set_xticks(range(len(median_sales)))
ax1.set_xticklabels(median_sales.index.tolist(), rotation=90)
# plot percent change
ax2 = pct_median_change.plot(legend=True, ax=axis2, marker='o', rot=90, title="Fig: Percent Change")
# overlay customer data
median_cust.plot(legend=True, ax=axis1, marker='x', secondary_y=True)
pct_median_custchange.plot(legend=True, ax=axis2, marker='x', rot=90, secondary_y=True)

Conclusion:
- Sales and customers are directly proportional.
- Sales drop in January.
- December brings in most of the sales.

Plotting sales at store level:
def plot_sales_store(store_id):
    # timeline view of the medians for each day of the week, for a single store
    store_df = train_store[(train_store['Open'] != 0) & (train_store['Store'] == store_id)]
    sales_day = store_df.groupby('DayOfWeek')['Sales'].median()      # sales by day of week
    cust_day = store_df.groupby('DayOfWeek')['Customers'].median()   # customers by day of week
    fig, (axis1) = plt.subplots(1, 1, sharex=True, figsize=(10, 5))
    # plot median sales
    ax1 = sales_day.plot(legend=True, ax=axis1, marker='o',
                         title="Fig. Median sales of each day of the week for store: " + str(store_id))
    ax1.set_xticks(sales_day.index)
    tmp = ax1.set_xticklabels(sales_day.index.tolist(), rotation=90)
    # overlay customer data
    cust_day.plot(legend=True, ax=axis1, marker='x', secondary_y=True)

stores_to_plot = [1, 3, 8, 9, 13, 25, 29, 31, 46]
for i in stores_to_plot:
    plot_sales_store(i)

Conclusion: The sales distribution clearly differs from store to store, so we will need to create a different model for each store.

3. Preprocessing the data sets

# remove NaNs
train_store.fillna(0, inplace=True)
# take the store as open if no value is given
train_store.loc[train_store.Open.isnull(), 'Open'] = 1

# label encode some features
mappings = {'0': 0, 'a': 1, 'b': 2, 'c': 3, 'd': 4}
train_store.StoreType.replace(mappings, inplace=True)
train_store.Assortment.replace(mappings, inplace=True)
train_store.StateHoliday.replace(mappings, inplace=True)

# calculate competition open time in months
train_store['CompetitionOpen'] = 12 * (train_store.Year - train_store.CompetitionOpenSinceYear) + \
    (train_store.Month - train_store.CompetitionOpenSinceMonth)
# promo open time in months
# (Series.dt.weekofyear is deprecated, per the FutureWarning in the original run;
# isocalendar().week is the replacement pandas suggests)
train_store['PromoOpen'] = 12 * (train_store.Year - train_store.Promo2SinceYear) + \
    (train_store.Date.dt.isocalendar().week - train_store.Promo2SinceWeek) / 4.0
train_store['PromoOpen'] = train_store.PromoOpen.apply(lambda x: x if x > 0 else 0)
train_store.loc[train_store.Promo2SinceYear == 0, 'PromoOpen'] = 0

Keeping only the store info for the stores we will be building models for:

# filtering the required data (equivalent to the original chain of '|' conditions)
train_store = train_store[train_store.Store.isin([1, 3, 8, 9, 13, 25, 29, 31, 46])]
4. Outlier treatment

# plotting outliers for the stores we are going to model (stores 1, 3, 8, 9, 13, 25, 29, 31 and 46)
def plot_sales(store_id):
    fig = plt.subplots(figsize=(12, 2))
    ax = sns.boxplot(x=train_store[train_store.Store == store_id]['Sales'], whis=1.5)

for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    plot_sales(val)

def plot_customers(store_id):
    fig = plt.subplots(figsize=(12, 2))
    ax = sns.boxplot(x=train_store[train_store.Store == store_id]['Customers'], whis=1.5)

for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    plot_customers(val)

# capping customer data at the 1st/99th percentiles
def cap_99_percentils_customers(store_id):
    percentiles = train_store[train_store.Store == store_id]['Customers'].quantile([0.01, 0.99]).values
    train_store.Customers = train_store[['Customers', 'Store']].apply(
        lambda x: x.Customers if x.Customers >= percentiles[0] and x.Store == store_id
        else percentiles[0] if x.Store == store_id else x.Customers, axis=1)
    train_store.Customers = train_store[['Customers', 'Store']].apply(
        lambda x: x.Customers if x.Customers <= percentiles[1] and x.Store == store_id
        else percentiles[1] if x.Store == store_id else x.Customers, axis=1)

# capping sales data at the 1st/99th percentiles
def cap_99_percentils_sales(store_id):
    percentiles = train_store[train_store.Store == store_id]['Sales'].quantile([0.01, 0.99]).values
    train_store.Sales = train_store[['Sales', 'Store']].apply(
        lambda x: x.Sales if x.Sales >= percentiles[0] and x.Store == store_id
        else percentiles[0] if x.Store == store_id else x.Sales, axis=1)
    train_store.Sales = train_store[['Sales', 'Store']].apply(
        lambda x: x.Sales if x.Sales <= percentiles[1] and x.Store == store_id
        else percentiles[1] if x.Store == store_id else x.Sales, axis=1)

for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    cap_99_percentils_sales(val)       # capping values for sales
for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    cap_99_percentils_customers(val)   # capping values for customers
    # (the original listed the function name here without calling it, a no-op;
    # the loop above applies it as intended)

# plotting sales outliers after capping
for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    plot_sales(val)
# plotting customer outliers after capping
for val in [1, 3, 8, 9, 13, 25, 29, 31, 46]:
    plot_customers(val)

Conclusion: After treating the outliers, our data set is ready for further analysis and model building.

5. Model Building

5.1.0 Building the model for store 1

# filtering out the data set for store 1, with Date as the index column
# (.copy() avoids the SettingWithCopyWarning raised in the original run)
train_store_1 = train_store[train_store.Store == 1].copy()
train_store_1.sort_values(by='Date', ascending=True, inplace=True)
train_store_1 = train_store_1.set_index('Date')
train_store_1.head()

# checking the shape of the data to see if any data is missing
train_store_1.shape
(942, 23)

Conclusion: There are 942 days between 2013-01-01 and 2015-07-31, so none of the data is missing.

5.1.1 Testing Causation using Granger's Causality Test

The basis behind Vector AutoRegression is that each of the time series in the system influences the others. That is, you can predict a series using past values of itself along with the other series in the system.
Using Granger's Causality Test, it's possible to test this relationship before even building the model. So what does Granger's causality really test? It tests the null hypothesis that the coefficients of past values in the regression equation are zero; in simpler terms, that the past values of one time series (X) do not cause the other series (Y). So, if the p-value obtained from the test is less than the significance level of 0.05, you can safely reject the null hypothesis. The code below implements the Granger's Causality test for all possible combinations of the time series in a given data frame and stores the p-values of each combination in the output matrix.

train_store_1.drop(['Store'], axis=1, inplace=True)       # we have already filtered for store 1
train_store_1.drop(['StoreType'], axis=1, inplace=True)   # constant for a particular store
train_store_1.drop(['Assortment'], axis=1, inplace=True)  # constant for a particular store
# these don't vary with respect to time, so they could instead be treated as exogenous variables in VARMAX
train_store_1.drop(['CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear',
                    'MonthYear', 'Promo2', 'Promo2SinceYear', 'PromoInterval', 'PromoOpen',
                    'Promo2SinceWeek'], axis=1, inplace=True)

train_store_1.columns
Index(['DayOfWeek', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday',
       'SchoolHoliday', 'Month', 'Year', 'date_int', 'CompetitionOpen'],
      dtype='object')

from statsmodels.tsa.stattools import grangercausalitytests

maxlag = 12
test = 'ssr_chi2test'

def grangers_causation_matrix(data, variables, test='ssr_chi2test', verbose=False):
    """Check Granger causality of all possible combinations of the time series.

    The rows are the response variables, the columns are the predictors, and the
    values in the table are the p-values. A p-value below the significance level
    (0.05) implies that the null hypothesis (the coefficients of the corresponding
    past values are zero, i.e. X does not cause Y) can be rejected.

    data      : pandas dataframe containing the time series variables
    variables : list containing the names of the time series variables
    """
    df = pd.DataFrame(np.zeros((len(variables), len(variables))), columns=variables, index=variables)
    for c in df.columns:
        for r in df.index:
            test_result = grangercausalitytests(data[[r, c]], maxlag=maxlag, verbose=False)
            p_values = [round(test_result[i+1][0][test][1], 4) for i in range(maxlag)]
            if verbose:
                print(f'Y = {r}, X = {c}, P Values = {p_values}')
            min_p_value = np.min(p_values)
            df.loc[r, c] = min_p_value
    df.columns = [var + '_x' for var in variables]
    df.index = [var + '_y' for var in variables]
    return df

grangers_causation_matrix(train_store_1, variables=train_store_1.columns)

# dropping the columns whose p-values are greater than 0.05
train_store_1.drop(['Month', 'Year', 'date_int', 'CompetitionOpen'], axis=1, inplace=True)
train_store_1.head()

5.1.2 Cointegration Test

The cointegration test helps to establish the presence of a statistically significant connection between two or more time series. But what does cointegration mean? To understand that, you first need to know what the 'order of integration' (d) is: it is simply the number of differences required to make a non-stationary time series stationary. Now, when you have two or more time series, and there exists a linear combination of them that has an order of integration (d) less than that of the individual series, then the collection of series is said to be cointegrated.
When two or more time series are cointegrated, it means they have a long-run, statistically significant relationship. This is the basic premise on which Vector Autoregression (VAR) models are based, so it's fairly common to run the cointegration test before starting to build VAR models. Alright, so how do we do this test? Soren Johansen, in his 1991 paper, devised a procedure to implement it.

from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_test(df, alpha=0.05):
    """Perform Johansen's Cointegration Test and report the summary"""
    out = coint_johansen(df, -1, 5)
    d = {'0.90': 0, '0.95': 1, '0.99': 2}
    traces = out.lr1
    cvts = out.cvt[:, d[str(1-alpha)]]
    def adjust(val, length=6):
        return str(val).ljust(length)
    # summary
    print('Name   ::  Test Stat > C(95%)    =>   Signif  \n', '--'*20)
    for col, trace, cvt in zip(df.columns, traces, cvts):
        print(adjust(col), ':: ', adjust(round(trace, 2), 9), ">", adjust(cvt, 8), ' =>  ', trace > cvt)

cointegration_test(train_store_1)

Name          ::  Test Stat > C(95%)    =>   Signif
----------------------------------------
DayOfWeek     ::  1450.79   > 111.7797  =>   True
Sales         ::  640.86    > 83.9383   =>   True
Customers     ::  397.66    > 60.0627   =>   True
Open          ::  170.33    > 40.1749   =>   True
Promo         ::  83.99     > 24.2761   =>   True
StateHoliday  ::  40.15     > 12.3212   =>   True
SchoolHoliday ::  0.05      > 4.1296    =>   False

5.1.3 Split the Series into Training and Testing Data and Standardize Customers and Sales

Standardizing the customer and sales data:

from sklearn import preprocessing

# get the column names first
names = ['Sales', 'Customers']
# create the scaler object
scaler = preprocessing.StandardScaler()
# fit the data on the scaler object
scaled_df = scaler.fit_transform(train_store_1)
# NOTE: the next line rebuilds the frame from the *original* data, so the scaled array
# above is effectively discarded and the values stay on their raw scale. To actually
# standardize, one would assign the transformed columns back, e.g.
# train_store_1[names] = scaler.fit_transform(train_store_1[names]).
scaled_df = pd.DataFrame(train_store_1, columns=names)
train_store_1.Sales = scaled_df.Sales
train_store_1.Customers = scaled_df.Customers
train_store_1.head()

Finding the date at which to split the data into train and test:

from datetime import datetime, timedelta

start_index = train_store_1.index.min()
end_index = train_store_1.index.max()
print(start_index)  # start index
print(end_index)    # end index
# the date from which we forecast; we hold out 42 days for forecasting
print(train_store_1.index.max() - timedelta(days=42))

2013-01-01 00:00:00
2015-07-31 00:00:00
2015-06-19 00:00:00

Splitting the data into train and test sets:

df_train, df_test = train_store_1.loc['2013-01-01':'2015-06-19'], train_store_1.loc['2015-06-20':]
# check size
print(df_train.shape)
print(df_test.shape)

(900, 7)
(42, 7)

5.1.4 Check for Stationarity and Make the Time Series Stationary

Since the VAR model requires the time series you want to forecast to be stationary, it is customary to check all the time series in the system for stationarity. Just to refresh: a stationary time series is one whose characteristics, like mean and variance, do not change over time. Let's use the ADF test:

def adfuller_test(series, signif=0.05, name='', verbose=False):
    """Perform the ADF test for stationarity of the given series and print a report"""
    r = adfuller(series, autolag='AIC')
    output = {'test_statistic': round(r[0], 4), 'pvalue': round(r[1], 4),
              'n_lags': round(r[2], 4), 'n_obs': r[3]}
    p_value = output['pvalue']
    def adjust(val, length=6):
        return str(val).ljust(length)
    # print summary
    print(f'    Augmented Dickey-Fuller Test on "{name}"', "\n   ", '-'*47)
    print(f' Null Hypothesis: Data has unit root. Non-Stationary.')
    print(f' Significance Level    = {signif}')
    print(f' Test Statistic        = {output["test_statistic"]}')
    print(f' No. Lags Chosen       = {output["n_lags"]}')
    for key, val in r[4].items():
        print(f' Critical value {adjust(key)} = {round(val, 3)}')
    if p_value <= signif:
        print(f" => P-Value = {p_value}. Rejecting Null Hypothesis.")
        print(f" => Series is Stationary.")
    else:
        print(f" => P-Value = {p_value}. Weak evidence to reject the Null Hypothesis.")
        print(f" => Series is Non-Stationary.")

# ADF test on each column
for name, column in df_train.iteritems():
    adfuller_test(column, name=column.name)
    print('\n')
Augmented Dickey-Fuller Test on "DayOfWeek"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -287422830162108.06
No. Lags Chosen       = 7
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.568
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Sales"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -4.2531
No. Lags Chosen       = 21
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.569
=> P-Value = 0.0005. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Customers"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -3.6017
No. Lags Chosen       = 21
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.569
=> P-Value = 0.0057. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Open"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -5.7249
No. Lags Chosen       = 21
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Promo"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -8.7921
No. Lags Chosen       = 21
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "StateHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -14.1637
No. Lags Chosen       = 3
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.568
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "SchoolHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level    = 0.05
Test Statistic        = -5.4211
No. Lags Chosen       = 12
Critical value 1%     = -3.438
Critical value 5%     = -2.865
Critical value 10%    = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Conclusion: All of the columns used are stationary, so we can proceed with these data sets.
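All seven series pass the ADF test here, so step 4 of the procedure (transforming the series) is skipped. For completeness, a minimal sketch of what that step would look like if some series had failed the test: first-difference before fitting, then roll the forecasts back to the original scale. The names df_train and df_forecast follow the surrounding code (df_forecast is the frame of forecasts produced in the evaluation section below), and the inversion shown assumes exactly one round of differencing.

# difference once and drop the NaN row created at the start
df_differenced = df_train.diff().dropna()
# (re-run adfuller_test on df_differenced; difference a second time if any series is still non-stationary)

def invert_first_difference(df_train, df_forecast):
    # roll back a single first-difference:
    # last observed level + cumulative sum of the forecasted changes
    df_fc = df_forecast.copy()
    for col in df_train.columns:
        df_fc[col] = df_train[col].iloc[-1] + df_fc[col].cumsum()
    return df_fc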
5.1.5 How to Select the Order (P) of the VAR model

To select the right order of the VAR model, we iteratively fit increasing orders and pick the order that gives the model with the least AIC. Though the usual practice is to look at the AIC, you can also check the other best-fit comparison estimates: BIC, FPE, and HQIC.

model = VAR(df_train)
for i in [1, 2, 3, 4, 5, 6, 7, 8, 9]:
    result = model.fit(i)
    print('Lag Order =', i)
    print('AIC : ', result.aic)
    print('BIC : ', result.bic)
    print('FPE : ', result.fpe)
    print('HQIC: ', result.hqic, '\n')

Lag Order = 1
AIC :  6.478281342619523
BIC :  6.777360152330769
FPE :  650.8535343606709
HQIC:  6.592537697350863

Lag Order = 2
AIC :  5.837179212729006
BIC :  6.398446314256546
FPE :  342.81843602637866
HQIC:  6.051610171083002

Lag Order = 3
AIC :  5.023721840952075
BIC :  5.8476400157091035
FPE :  151.98635009225518
HQIC:  5.338514926446016

Lag Order = 4
AIC :  4.0603131745217445
BIC :  5.147346549336377
FPE :  58.00165239125097
HQIC:  4.475656469676899

Lag Order = 5
AIC :  1.885875402073503
BIC :  3.2364894545933502
FPE :  6.59412695453832
HQIC:  2.4019575506910904

Lag Order = 6
AIC :  -49.26685529477507
BIC :  -47.65219373065065
FPE :  4.0170056049621183e-22
HQIC:  -48.64984508532618

Lag Order = 7
AIC :  -53.584712126042945
BIC :  -51.705534854702776
FPE :  5.355643143425765e-24
HQIC:  -52.86658408252669

Lag Order = 8
AIC :  -46.87757071756594
BIC :  -44.73340817619887
FPE :  4.3838920141864546e-21
HQIC:  -46.058134498567696

Lag Order = 9
AIC :  -48.67645510602825
BIC :  -46.266836359106904
FPE :  7.258344596034241e-22
HQIC:  -47.75551979963143

We can also use the model's select_order method to find the optimal values:

x = model.select_order(maxlags=50)
x.summary()

Conclusion: Running the above summary report, we find the best metrics at lag = 17, so we build the model using lag = 17.

model_fitted = model.fit(17)
model_fitted.summary()

Summary of Regression Results
==================================
Model:   VAR
Method:  OLS
Date:    Mon, 12, Apr, 2021
Time:    10:47:22
--------------------------------------------------------------------
No. of Equations:  7.00000     BIC:            -51.5468
Nobs:              883.000     HQIC:           -54.3573
Log likelihood:    16836.4     FPE:            4.38993e-25
AIC:              -56.0972     Det(Omega_mle): 1.79919e-25
--------------------------------------------------------------------
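A quick residual diagnostic that the walkthrough does not include is checking for leftover serial correlation: if the residuals are still autocorrelated, some pattern in the series remains unexplained by the model. A minimal sketch using the Durbin-Watson statistic from statsmodels, where model_fitted and df_train are the objects from the code above; values close to 2 indicate no significant residual autocorrelation.

from statsmodels.stats.stattools import durbin_watson

out = durbin_watson(model_fitted.resid)  # one statistic per equation in the VAR system
for col, val in zip(df_train.columns, out):
    print(col, ':', round(val, 2))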
# get the lag order
lag_order = model_fitted.k_ar
print(lag_order)

# input data for forecasting: the last lag_order rows of the training data
forecast_input = df_train.values[-lag_order:]
forecast_input

17
array([[3.000e+00, 5.809e+03, 6.160e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [4.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 1.000e+00, 1.000e+00, 0.000e+00],
       [5.000e+00, 5.384e+03, 6.090e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [6.000e+00, 4.183e+03, 4.600e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [7.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [1.000e+00, 4.071e+03, 5.020e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [2.000e+00, 4.102e+03, 4.850e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [3.000e+00, 3.591e+03, 4.530e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [4.000e+00, 3.627e+03, 4.420e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [5.000e+00, 3.695e+03, 4.220e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [6.000e+00, 4.256e+03, 5.020e+02, 1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [7.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [1.000e+00, 5.518e+03, 5.860e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [2.000e+00, 4.852e+03, 5.030e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [3.000e+00, 4.000e+03, 4.760e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [4.000e+00, 4.645e+03, 4.980e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [5.000e+00, 4.202e+03, 4.870e+02, 1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00]])

5.1.6 Model evaluation

Forecasting with the fitted model:

# forecast the next 42 days
fc = model_fitted.forecast(y=forecast_input, steps=42)
df_forecast = pd.DataFrame(fc, index=train_store_1.index[-42:], columns=train_store_1.columns)
df_forecast

fig, axes = plt.subplots(nrows=int(len(train_store_1.columns)/2), ncols=2, dpi=150, figsize=(10, 10))
for i, (col, ax) in enumerate(zip(train_store_1.columns, axes.flatten())):
    df_forecast[col].plot(legend=True, ax=ax).autoscale(axis='x', tight=True)
    df_test[col][-42:].plot(legend=True, ax=ax)
    ax.set_title(col + ": Forecast vs Actuals")
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.spines["top"].set_alpha(0)
    ax.tick_params(labelsize=6)
plt.tight_layout()

Conclusion:
- We are able to forecast sales quite accurately.
- The number of customers coming to the shop is also forecast accurately.

Let's see the other metrics for our predictions:

from statsmodels.tsa.stattools import acf

def forecast_accuracy(forecast, actual):
    mape = np.mean(np.abs(forecast - actual)/np.abs(actual))  # MAPE
    me = np.mean(forecast - actual)                           # ME
    mae = np.mean(np.abs(forecast - actual))                  # MAE
    mpe = np.mean((forecast - actual)/actual)                 # MPE
    rmse = np.mean((forecast - actual)**2)**.5                # RMSE
    corr = np.corrcoef(forecast, actual)[0, 1]                # correlation
    mins = np.amin(np.hstack([forecast[:, None], actual[:, None]]), axis=1)
    maxs = np.amax(np.hstack([forecast[:, None], actual[:, None]]), axis=1)
    minmax = 1 - np.mean(mins/maxs)                           # minmax
    return {'mape': mape, 'me': me, 'mae': mae, 'mpe': mpe,
            'rmse': rmse, 'corr': corr, 'minmax': minmax}

print('Forecast Accuracy of: Sales')
def adjust(val, length=6):
    return str(val).ljust(length)
accuracy_prod = forecast_accuracy(df_forecast['Sales'].values, df_test.Sales)
for k, v in accuracy_prod.items():
    print(adjust(k), ': ', round(v, 4))

Forecast Accuracy of: Sales
mape   :  inf
me     :  -21.6444
mae    :  396.8509
mpe    :  nan
rmse   :  504.838
corr   :  0.9541
minmax :  inf
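Note that mape, mpe, and minmax come out as inf or nan. This is expected with this data: the 42-day test window includes days when the store was closed and actual sales are zero, so the percentage-based metrics divide by zero. A minimal sketch, reusing df_test and df_forecast from above, that masks out the zero-actual days before computing MAPE:

actual = df_test['Sales'].values
predicted = df_forecast['Sales'].values
mask = actual != 0  # keep only open days, where actual sales are non-zero
mape_open = np.mean(np.abs(predicted[mask] - actual[mask]) / np.abs(actual[mask]))
print('MAPE on open days:', round(mape_open, 4))

An alternative along the same lines is to restrict the whole evaluation to rows where Open == 1, since sales on closed days are zero by definition and hardly need forecasting.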
5.2.0 Building the model for store 46

# filtering out the data set for store 46, with Date as the index column
# (.copy() avoids the SettingWithCopyWarning raised in the original run)
train_store_46 = train_store[train_store.Store == 46].copy()
train_store_46.sort_values(by='Date', ascending=True, inplace=True)
train_store_46 = train_store_46.set_index('Date')
train_store_46.head()

train_store_46.shape
(758, 23)

5.2.1 Handling missing values

It seems that we have missing values for store 46, so let's perform imputation.

train_store_46['Sales'].plot(figsize=(12, 4))
plt.legend(loc='best')
plt.title('Sales')
plt.show(block=False)

Clearly, from the graph we can see that we have 6 months of missing data from 2014-07-01 to 2014-12-31, and that the index itself is missing for that period. So we first create the missing index with null values and then impute the values.

# generating the missing index
start = pd.Timestamp('2014-07-01 00:00:00')  # start date
end = pd.Timestamp('2014-12-31 00:00:00')    # end date
null_df = pd.DataFrame({'DayOfWeek': np.nan, 'Sales': np.nan, 'Customers': np.nan, 'Open': np.nan,
                        'Promo': np.nan, 'StateHoliday': np.nan, 'SchoolHoliday': np.nan},
                       index=pd.date_range(start, end, freq='D'))
train_store_46 = pd.concat([train_store_46.loc[:'2014-07-01'], null_df, train_store_46.loc['2015-01-01':]])

# checking how the missing index was added
train_store_46['Sales'].plot(figsize=(12, 4))
plt.legend(loc='best')
plt.title('Sales')
plt.show(block=False)

Imputing the values, starting with Sales:

train_store_46.Sales.fillna(train_store_46.Sales.mean(), inplace=True)

# checking how the missing values were imputed
train_store_46['Sales'].plot(figsize=(12, 4))
plt.legend(loc='best')
plt.title('Sales')
plt.show(block=False)

# imputing the remaining columns
# (the original snippet referenced train_store_13 here, apparently copied over from
# another store's section; corrected to train_store_46)
train_store_46.Customers.fillna(train_store_46.Customers.mean(), inplace=True)             # customers
train_store_46.Open.fillna(train_store_46.Open.mode()[0], inplace=True)                    # open
train_store_46.Promo.fillna(train_store_46.Promo.mode()[0], inplace=True)                  # promo
train_store_46.StateHoliday.fillna(train_store_46.StateHoliday.mode()[0], inplace=True)    # state holiday
train_store_46.SchoolHoliday.fillna(train_store_46.SchoolHoliday.mode()[0], inplace=True)  # school holiday
train_store_46.DayOfWeek.fillna(train_store_46.DayOfWeek.mode()[0], inplace=True)          # day of week
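One caveat with the mean fill above: a constant value across a six-month gap shows up as the flat segment visible in the plot and erases any trend or seasonality for that period. A possible alternative (an assumption on my part, not part of the original approach) is time-based interpolation applied in place of the mean fill, which at least connects the levels on either side of the gap:

# instead of filling with the column mean, interpolate across the gap using the DatetimeIndex
train_store_46['Sales'] = train_store_46['Sales'].interpolate(method='time')
train_store_46['Customers'] = train_store_46['Customers'].interpolate(method='time')

Either way, roughly six months of synthetic values in a series of this length is substantial, so the store 46 results should be read with that in mind.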
The code below implements Granger's Causality test for all possible combinations of the time series in a given data frame and stores the p-values of each combination in an output matrix.

train_store_46.drop(['Store'], axis=1, inplace=True)       # We have already filtered for store 46
train_store_46.drop(['StoreType'], axis=1, inplace=True)   # Constant for a particular store
train_store_46.drop(['Assortment'], axis=1, inplace=True)  # Constant for a particular store

# These columns don't vary with respect to time, so they could instead be treated as exogenous variables in a VARMAX model
train_store_46.drop(['CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'MonthYear',
                     'Promo2', 'Promo2SinceYear', 'PromoInterval', 'PromoOpen', 'Promo2SinceWeek'],
                    axis=1, inplace=True)

from statsmodels.tsa.stattools import grangercausalitytests

maxlag = 12
test = 'ssr_chi2test'

# Remove NaNs
train_store_46.fillna(0, inplace=True)

def grangers_causation_matrix(data, variables, test='ssr_chi2test', verbose=False):
    """Check Granger causality of all possible combinations of the time series.
    The rows are the response variables (Y) and the columns are the predictors (X).
    The values in the table are the p-values; p-values below the significance level
    (0.05) imply that the null hypothesis -- that the coefficients of the
    corresponding past values are zero, i.e. X does not cause Y -- can be rejected.

    data      : pandas dataframe containing the time series variables
    variables : list containing the names of the time series variables
    """
    df = pd.DataFrame(np.zeros((len(variables), len(variables))), columns=variables, index=variables)
    for c in df.columns:
        for r in df.index:
            test_result = grangercausalitytests(data[[r, c]], maxlag=maxlag, verbose=False)
            p_values = [round(test_result[i+1][0][test][1], 4) for i in range(maxlag)]
            if verbose:
                print(f'Y = {r}, X = {c}, P Values = {p_values}')
            min_p_value = np.min(p_values)
            df.loc[r, c] = min_p_value
    df.columns = [var + '_x' for var in variables]
    df.index = [var + '_y' for var in variables]
    return df

grangers_causation_matrix(train_store_46, variables=train_store_46.columns)

# The p-values for these columns are greater than 0.05, so we drop them
train_store_46.drop(['Month', 'Year', 'date_int', 'CompetitionOpen'], axis=1, inplace=True)

# Checking the p-values again after dropping the columns
grangers_causation_matrix(train_store_46, variables=train_store_46.columns)

5.2.3 Cointegration Test

The cointegration test helps establish whether there is a statistically significant connection between two or more time series.

But what does cointegration mean? To understand it, you first need to know the 'order of integration' (d), which is simply the number of differences required to make a non-stationary time series stationary. Now, when you have two or more time series, and there exists a linear combination of them that has an order of integration less than that of the individual series, the collection of series is said to be cointegrated.

When two or more time series are cointegrated, they have a long-run, statistically significant relationship. This is the basic premise on which Vector Autoregression (VAR) models are based. So, it's fairly common to run the cointegration test before starting to build a VAR model.
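To make the order of integration concrete, here is a small self-contained sketch with simulated data (not the store data): a random walk is non-stationary, but its first difference is stationary, so it is integrated of order d = 1.

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=1000))   # I(1): cumulative sum of white noise

print(adfuller(random_walk)[1])            # large p-value: non-stationary
print(adfuller(np.diff(random_walk))[1])   # tiny p-value: first difference is stationary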
Alright, so how do we do this test? Søren Johansen, in his 1991 paper, devised a procedure to implement the cointegration test.

from statsmodels.tsa.vector_ar.vecm import coint_johansen

def cointegration_test(df, alpha=0.05):
    """Perform Johansen's cointegration test and report a summary"""
    out = coint_johansen(df, -1, 5)
    d = {'0.90': 0, '0.95': 1, '0.99': 2}
    traces = out.lr1
    cvts = out.cvt[:, d[str(1-alpha)]]
    def adjust(val, length=6):
        return str(val).ljust(length)
    # Summary
    print('Name   ::  Test Stat > C(95%)    =>   Signif')
    print('--'*20)
    for col, trace, cvt in zip(df.columns, traces, cvts):
        print(adjust(col), ':: ', adjust(round(trace, 2), 9), '>', adjust(cvt, 8), ' =>  ', trace > cvt)

cointegration_test(train_store_46)

Name   ::  Test Stat > C(95%)    =>   Signif
----------------------------------------
DayOfWeek     ::  1333.84  >  111.7797  =>  True
Sales         ::  621.47   >  83.9383   =>  True
Customers     ::  362.73   >  60.0627   =>  True
Open          ::  132.4    >  40.1749   =>  True
Promo         ::  54.5     >  24.2761   =>  True
StateHoliday  ::  3.89     >  12.3212   =>  False
SchoolHoliday ::  0.11     >  4.1296    =>  False

5.2.4 Split the Series into Training and Testing Data and Standardize Customers and Sales

Standardizing the Customers and Sales data:

from sklearn import preprocessing

# Columns to standardize
names = ['Sales', 'Customers']

# Create the scaler object, fit it on those columns and replace them with their standardized versions
scaler = preprocessing.StandardScaler()
scaled_df = pd.DataFrame(scaler.fit_transform(train_store_46[names]), columns=names, index=train_store_46.index)

train_store_46.Sales = scaled_df.Sales
train_store_46.Customers = scaled_df.Customers
train_store_46.head()

(Strictly speaking, fitting the scaler on the full series before the train/test split leaks information from the hold-out window; fitting on the training span only would be cleaner.)

Finding the date from which to split the data into train and test sets:

from datetime import datetime, timedelta

start_index = train_store_46.index.min()
end_index = train_store_46.index.max()
print(start_index)  # start of the series
print(end_index)    # end of the series
print(train_store_46.index.max() - timedelta(days=42))  # we hold out the last 42 days for forecasting

2013-01-01 00:00:00
2015-07-31 00:00:00
2015-06-19 00:00:00

Splitting the data into train and test sets:

df_train, df_test = train_store_46.loc['2013-01-01':'2015-06-19'], train_store_46.loc['2015-06-20':]

# Check the sizes
print(df_train.shape)
print(df_test.shape)

(900, 7)
(42, 7)

5.2.5 Check for Stationarity and Make the Time Series Stationary

Since the VAR model requires the time series you want to forecast to be stationary, it is customary to check all the time series in the system for stationarity. Just to refresh: a stationary time series is one whose characteristics, like mean and variance, do not change over time. Let's use the ADF test for this.
from statsmodels.tsa.stattools import adfuller

def adfuller_test(series, signif=0.05, name='', verbose=False):
    """Perform the ADF test for stationarity of the given series and print a report"""
    r = adfuller(series, autolag='AIC')
    output = {'test_statistic': round(r[0], 4), 'pvalue': round(r[1], 4), 'n_lags': round(r[2], 4), 'n_obs': r[3]}
    p_value = output['pvalue']
    def adjust(val, length=6):
        return str(val).ljust(length)
    # Print the summary
    print(f'Augmented Dickey-Fuller Test on "{name}"')
    print('-'*47)
    print(f'Null Hypothesis: Data has unit root. Non-Stationary.')
    print(f'Significance Level = {signif}')
    print(f'Test Statistic     = {output["test_statistic"]}')
    print(f'No. Lags Chosen    = {output["n_lags"]}')
    for key, val in r[4].items():
        print(f'Critical value {adjust(key)} = {round(val, 3)}')
    if p_value <= signif:
        print(f' => P-Value = {p_value}. Rejecting Null Hypothesis.')
        print(f' => Series is Stationary.')
    else:
        print(f' => P-Value = {p_value}. Weak evidence to reject the Null Hypothesis.')
        print(f' => Series is Non-Stationary.')

# ADF test on each column
for name, column in df_train.items():
    adfuller_test(column, name=column.name)
    print(' ')

Augmented Dickey-Fuller Test on "DayOfWeek"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -1.7257
No. Lags Chosen    = 16
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.4179. Weak evidence to reject the Null Hypothesis.
=> Series is Non-Stationary.

Augmented Dickey-Fuller Test on "Sales"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -5.8957
No. Lags Chosen    = 20
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Customers"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -1.5412
No. Lags Chosen    = 20
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.513. Weak evidence to reject the Null Hypothesis.
=> Series is Non-Stationary.

Augmented Dickey-Fuller Test on "Open"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -1.5718
No. Lags Chosen    = 20
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.4977. Weak evidence to reject the Null Hypothesis.
=> Series is Non-Stationary.

Augmented Dickey-Fuller Test on "Promo"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -3.0526
No. Lags Chosen    = 21
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0303. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "StateHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -13.7885
No. Lags Chosen    = 3
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.568
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "SchoolHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -7.2939
No. Lags Chosen    = 12
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Conclusion

From the above results we can see that the following columns are not stationary:

DayOfWeek
Customers
Open

Let's work on making these series stationary.
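Before differencing, it can be handy to automate the question of how many differences each series needs. The helper below is a hypothetical sketch of that idea (the function name n_diffs_needed and the cap of two differences are my own choices, not from the original notebook):

from statsmodels.tsa.stattools import adfuller

def n_diffs_needed(series, signif=0.05, max_d=2):
    """Return the smallest number of differences that makes the series pass the ADF test."""
    s = series.dropna()
    for d in range(max_d + 1):
        if adfuller(s, autolag='AIC')[1] <= signif:
            return d
        s = s.diff().dropna()   # take one more difference and re-test
    return None                 # still non-stationary after max_d differences

# Usage, assuming df_train from above:
# for col in df_train.columns:
#     print(col, '->', n_diffs_needed(df_train[col]))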
# 1st difference
df_differenced = df_train.diff().dropna()

# ADF test on each column of the 1st-differenced dataframe
for name, column in df_differenced.items():
    adfuller_test(column, name=column.name)
    print(' ')

Augmented Dickey-Fuller Test on "DayOfWeek"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -8.0782
No. Lags Chosen    = 15
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Sales"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -14.7638
No. Lags Chosen    = 19
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Customers"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -9.6245
No. Lags Chosen    = 19
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Open"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -8.9815
No. Lags Chosen    = 19
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "Promo"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -10.3626
No. Lags Chosen    = 21
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "StateHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -14.8724
No. Lags Chosen    = 11
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Augmented Dickey-Fuller Test on "SchoolHoliday"
-----------------------------------------------
Null Hypothesis: Data has unit root. Non-Stationary.
Significance Level = 0.05
Test Statistic     = -10.3797
No. Lags Chosen    = 17
Critical value 1%  = -3.438
Critical value 5%  = -2.865
Critical value 10% = -2.569
=> P-Value = 0.0. Rejecting Null Hypothesis.
=> Series is Stationary.

Conclusion

We can see that after first-order differencing all the series are stationary, so we can proceed with these data sets.

5.2.6 How to Select the Order (P) of the VAR Model

To select the right order of the VAR model, we iteratively fit increasing orders and pick the order that gives the model with the lowest AIC, as in the loop below. Though the usual practice is to look at the AIC, you can also check the other goodness-of-fit estimates: BIC, FPE, and HQIC.
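The loop below prints the criteria lag by lag; if a side-by-side comparison is preferred, a small helper like this hypothetical sketch can tabulate them instead (compare_orders is my own name, not from the original notebook):

import pandas as pd
from statsmodels.tsa.api import VAR

def compare_orders(data, max_p=9):
    """Fit VAR(p) for p = 1..max_p and tabulate the information criteria."""
    model = VAR(data)
    rows = []
    for p in range(1, max_p + 1):
        res = model.fit(p)
        rows.append({'lag': p, 'AIC': res.aic, 'BIC': res.bic, 'FPE': res.fpe, 'HQIC': res.hqic})
    return pd.DataFrame(rows).set_index('lag')

# compare_orders(df_differenced)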
model = VAR(df_differenced)
for i in [1, 2, 3, 4, 5, 6, 7, 8, 9]:
    result = model.fit(i)
    print('Lag Order =', i)
    print('AIC : ', result.aic)
    print('BIC : ', result.bic)
    print('FPE : ', result.fpe)
    print('HQIC: ', result.hqic, '\n')

Lag Order = 1
AIC :  9.109819662826853
BIC :  9.409162116974874
FPE :  9043.693676615094
HQIC:  9.22418284061565

Lag Order = 2
AIC :  8.088501294529548
BIC :  8.65026368640934
FPE :  3256.874006536901
HQIC:  8.303132943729961

Lag Order = 3
AIC :  7.598803323087045
BIC :  8.423449331567111
FPE :  1995.9440548408095
HQIC:  7.9138913401013005

Lag Order = 4
AIC :  6.531762270987129
BIC :  7.619756924405895
FPE :  686.7162145511694
HQIC:  6.947495112929074

Lag Order = 5
AIC :  4.475519649993162
BIC :  5.827329331585696
FPE :  87.8670316116561
HQIC:  4.992086336973632

Lag Order = 6
AIC :  2.504938582041416
BIC :  4.121031035393965
FPE :  12.249196366893853
HQIC:  3.122528699465392

Lag Order = 7
AIC :  1.8197855375197989
BIC :  3.700629872052314
FPE :  6.175618235608594
HQIC:  2.538589238395451

Lag Order = 8
AIC :  1.7206067032204424
BIC :  3.8666733996972695
FPE :  5.594765938672929
HQIC:  2.5408147104801135

Lag Order = 9
AIC :  1.6317344199792654
BIC :  4.043495336048177
FPE :  5.121642213442159
HQIC:  2.5535380288124863

We can also use the model's select_order method to search over lags and summarize the fit criteria:

x = model.select_order(maxlags=50)
x.summary()

Conclusion

At lag 13 we get the best result, so let's build the model with lag 13.

model_fitted = model.fit(13)
model_fitted.summary()

Summary of Regression Results
==================================
Model:    VAR
Method:   OLS
Date:     Mon, 12, Apr, 2021
Time:     10:53:07
--------------------------------------------------------------------
No. of Equations:  7.00000    BIC:             4.63725
Nobs:              886.000    HQIC:            2.48807
Log likelihood:   -8669.23    FPE:             3.20022
AIC:               1.15796    Det(Omega_mle):  1.60267
--------------------------------------------------------------------

# Get the lag order
lag_order = model_fitted.k_ar
print(lag_order)

# Input data for forecasting
forecast_input = df_differenced.values[-lag_order:]
forecast_input

13

array([[ 1.000e+00, -4.256e+03, -6.020e+02, -1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [-6.000e+00,  4.388e+03,  5.680e+02,  1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00, -4.770e+02, -4.100e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00,  2.770e+02, -1.000e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00,  3.000e+00,  2.400e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00,  1.040e+02,  2.600e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00,  5.380e+02, -7.000e+00,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00, -4.833e+03, -5.600e+02, -1.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [-6.000e+00,  8.239e+03,  8.280e+02,  1.000e+00, 1.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00, -1.587e+03, -1.450e+02,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00, -2.990e+02, -1.000e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00, -5.660e+02, -8.000e+00,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00],
       [ 1.000e+00,  6.860e+02,  4.500e+01,  0.000e+00, 0.000e+00, 0.000e+00, 0.000e+00]])

5.2.7 Model evaluation

Forecasting with the results obtained:

# Forecast
fc = model_fitted.forecast(y=forecast_input, steps=42)
df_forecast = pd.DataFrame(fc, index=train_store_46.index[-42:], columns=train_store_46.columns)
df_forecast
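These forecasts are on the differenced scale, so before comparing them with the actuals we have to undo the differencing by cumulative summation: since each difference is x_t - x_{t-1}, adding the running sum of differences to the last observed level recovers the levels. A quick sanity check of that identity on a toy series (a sketch, independent of the store data):

import pandas as pd

s = pd.Series([10.0, 12.0, 9.0, 15.0, 14.0])
recovered = s.iloc[0] + s.diff().dropna().cumsum()

# Each value after the first equals the first value plus the running sum of differences
print(recovered.tolist())   # [12.0, 9.0, 15.0, 14.0]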
# Invert the transformation to get the real forecast
def invert_transformation(df_train, df_forecast, second_diff=False):
    """Revert the differencing to bring the forecast back to the original scale."""
    df_fc = df_forecast.copy()
    columns = df_train.columns
    for col in columns:
        # Roll back 2nd difference
        if second_diff:
            df_fc[str(col)] = (df_train[col].iloc[-1] - df_train[col].iloc[-2]) + df_fc[str(col)].cumsum()
        # Roll back 1st difference
        df_fc[str(col)] = df_train[col].iloc[-1] + df_fc[str(col)].cumsum()
    return df_fc

df_results = invert_transformation(df_train, df_forecast, second_diff=False)

# Plotting to see the results obtained
fig, axes = plt.subplots(nrows=int(len(train_store_46.columns)/2), ncols=2, dpi=150, figsize=(10,10))
for i, (col, ax) in enumerate(zip(train_store_46.columns, axes.flatten())):
    df_results[col].plot(legend=True, ax=ax).autoscale(axis='x', tight=True)
    df_test[col][-42:].plot(legend=True, ax=ax)
    ax.set_title(col + ": Forecast vs Actuals")
    ax.xaxis.set_ticks_position('none')
    ax.yaxis.set_ticks_position('none')
    ax.spines["top"].set_alpha(0)
    ax.tick_params(labelsize=6)
plt.tight_layout();

Conclusion

From the plots we can see that sales are forecast quite accurately, and that the number of customers visiting the shop is forecast well too. Let's look at other metrics for our predictions.
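The write-up stops before printing the numeric metrics for store 46, but the forecast_accuracy and adjust helpers defined for store 1 can be reused directly on the inverted forecasts; a sketch, assuming those functions are still in scope (no numbers shown, since they depend on the actual run):

print('Forecast Accuracy of: Sales')
accuracy_store_46 = forecast_accuracy(df_results['Sales'].values, df_test.Sales.values)
for k, v in accuracy_store_46.items():
    print(adjust(k), ': ', round(v, 4))

As with store 1, the closed-day zeros in the actuals will make the percentage-based metrics infinite, so the masked MAPE sketched earlier is the safer percentage measure here.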
https://medium.com/@manish-poddar/building-forecasting-model-using-vector-autoregression-var-6f9998ecfded
['Manish Poddar']
2021-04-25 07:59:48.217000+00:00
['Forecasting', 'Vector Autoregression']
Stylish Spiral Design Dozen Roses “Royal Twist”
Stylish Spiral Design Dozen Roses “Royal Twist”

This unique and luxurious spiral design arrangement consists of one dozen roses stepping up and twisting into a spiral, filled in with seasonal flowers and greenery. A sure eye-catcher and showstopper work of art! Sending one dozen roses is romantic and traditional, but if you want to stand out from the rest and be that tad bit different, we recommend selecting The Royal Twist. Featured are our hot pink blush roses, but the arrangement can be customized in any colour, for example red, white, yellow, orange, or light pink. This design is exclusive to Neydin.com and can be gifted for many occasions including, but not limited to, birthdays, Valentine’s Day, Mother’s Day, and anniversaries. This is a perfect choice for many reasons: it is different, and it doesn’t require any care. The recipient can just place it on their favourite table or mantelpiece and enjoy it. It is a 360-degree design, so it can be placed anywhere.

Rose Delivery Melbourne

All roses are preconditioned and wired to maintain longevity and shape. Freshness is guaranteed, as the roses are designed into wet floral foam, which provides a water supply to each bloom. All the recipient needs to do is water it every second day to keep the foam wet. It comes in a classy, reusable ceramic vase (subject to availability, the vase style may differ slightly).
https://medium.com/@neydin/stylish-spiral-design-dozen-roses-royal-twist-3b9c952fe2c7
[]
2021-01-21 23:05:57.918000+00:00
['Roses', 'Melbourne', 'Flower Delivery', 'Florist', 'Flowers']