title | text | url | authors | timestamp | tags
---|---|---|---|---|---
How I Grew an Instagram Account From 4000 Followers to 190k in a Year
|
I started @theminimalistwardrobe in 2017 for two reasons. I wanted to create an audience for a business I was planning, while at the same time wanting to learn the game of Instagram.
At the time I had an existing business with its own Instagram account, but I was too afraid to try things out. I was stuck in my safe routine. What would my customers think if I suddenly posted 8 posts in one day? Would they be annoyed? Is it weird if I post something other than my products?
These were the insecurities I had, and a fresh account with no responsibilities was the perfect solution to test everything.
The new business I was planning to launch was a clothing brand, with high-quality essentials and minimal branding. After some tinkering with names on Instagram, I settled on ‘The Minimalist Wardrobe’. That wasn’t meant to be the name of the clothing brand, I simply wanted to create a like-minded audience, so I didn’t have to launch to crickets.
My first post. I sourced photos from Instagram and stock photo sites.
I posted my first post on February 19, 2017. It was a low-resolution photo of a clothes rack with some shirts and a few pairs of shoes underneath. There was no real strategy here. I just enjoyed the freedom of posting whatever and analyzing the results.
Little did I know what it would lead up to.
From 0 to 4000
The first followers are always the hardest to get, everyone knows that. I got my first few followers by posting a few posts and engaging with some similar accounts.
That’s a method that still works, but it’s not scalable. Engaging with other accounts is time-consuming, and even if you’d automate it, Instagram is cracking down hard on all software that is against their terms of service.
I grew the account to a little over 4000 followers in 8 months. Nothing to write home about, but during this time, I didn’t really use any strategy. I just learned a little from every post I posted and leaned into what worked.
I didn’t make any groundbreaking discoveries but learned how to use hashtags, what kind of photos and captions my audience seemed to like, and the best times to post. I started scheduling posts with Later so that I could create a bunch beforehand, and not be on my phone the whole day.
The followers came from my engagement, and from the posts that reached new people through hashtags.
After 8 months I just stopped posting. I had scrapped the clothing brand idea a long time ago, as soon as I realized how much work it would require. I also happened to find some brands that had executed my idea better than I ever could (a shoutout to Asket, from where I still buy my clothes).
As for the learning part, well, I felt like I had learned some useful things, and honestly just lacked the motivation to continue playing around with a useless account.
I logged off the account for half a year.
When it Finally Clicked For Me
I can’t remember why I logged back into the account after 6 months. Maybe I had a boring day. In any case, that was one of the most significant days for The Minimalist Wardrobe, because that was the day I understood that I was on to something.
To be a little more specific, I understood it the next morning. I had published a post in the evening and woke up to over 300 likes. The caption said “Long time no see! Did ya miss us?”
Long time no see! Did ya miss us?
Now, 300 likes with 4000 followers is nothing to brag about, but it was still enough for me to understand that there was an actual audience that really enjoyed what I was posting.
I realized that the account was promoting something that people wanted. Beautiful photos of clothes racks and basic garments painted a picture of simplifying your wardrobe. I had somewhat unintentionally conveyed my own philosophy for clothing.
This is the moment I decided to apply a real strategy to grow the account, and treat it as its own project. Now things got interesting.
Sliding Into DMs All Day Long
The first thing I started doing was contacting accounts of the same size (or smaller), asking them to do a shoutout exchange with The Minimalist Wardrobe. They’d simply post about me on their feed, and I’d post about them.
I spent hours and hours finding suitable accounts to cross-promote with, and I must’ve sent over a hundred DMs — daily — to people. I didn’t mind if the accounts were smaller. Anything over 1000 was worth it for me, as posting was easy, and my audience seemed to enjoy the posts.
This is how I usually reached out to people.
Once I grew, I could get bigger accounts on board, which is why the growth was exponential. I had also perfected my strategy by only contacting accounts with good engagement, and instructing them on how to promote The Minimalist Wardrobe when agreeing on the shoutouts. A clear call-to-action to follow made a huge difference.
From Shoutouts to Deeper Collaborations
Sending DMs for hours every day wasn’t sustainable, but the results were undeniable. I needed a better solution. Essentially I wanted collaborations that would give me constant exposure, but only needed to be set up once.
I decided to build a simple website and set up a blog. Then I reached out to sustainable and slow fashion bloggers and asked if they’d like to write for my new blog.
I’ve always believed in fair relationships, not just because I’ll sleep better, but because at some point the one who’s getting the worse end of the deal will call it quits — it’s just a matter of time. Fortunately, The Minimalist Wardrobe’s following was somewhere around 15k at this time, so it was a great opportunity for the bloggers to get in front of a new audience and gain new followers too.
Every time someone wrote a post for my blog, we’d both promote it on Instagram. That way both reached a new audience. Eventually, I had over 20 guest bloggers, with a new blog post 5 days a week — each of them promoted by the blogger.
The big 100. Still with the old logo — I now have a new one by my favorite designer, Hannah.
The account kept growing fast, by over 2000 daily followers at best. 30k, 40k, and 50k were just simple milestones which I celebrated with a smile and started counting when the next one would come. I hit 100k in late November, just 6 months after taking this seriously.
My Experience With the Infamous Follow/Unfollow
The account’s growth kept accelerating, and I didn’t stop exploring different ways to grow. I decided to try the most despised way of growing an Instagram account: Follow/Unfollow.
For those of you unfamiliar with it, the idea is to follow accounts so that they get a notification, and a percentage of them follow you back. Then at some point, you unfollow them.
I did it for a while but stopped doing it for a couple reasons. Firstly, I hated the idea of it the whole time I was doing it. It was a cheap tactic, and honestly, I didn’t need it. My curiosity simply won and I couldn’t help myself. Secondly, it wasn’t sustainable either. I was back to tapping for hours on my phone.
Truth be told though, it did work. My growth rate increased. It’s hard to say how much this influenced it, but it definitely helped. (Un)fortunately, Instagram has cracked down on action limits recently, so this shouldn’t be as viable anymore.
The Real Reason For the Growth
The collaborations with bloggers were great, as were the earlier shoutout exchanges. I got a boost from following a lot of people. My analytical approach to using hashtags and putting effort into each caption paid off — many posts reached thousands of new people, turning a good amount of them into new followers.
All these strategies accelerated the growth of the account, but the real reason why so many wanted to follow The Minimalist Wardrobe was simple: The core idea was something that people were interested in. I was posting content that people wanted to see.
None of these growth hacks would’ve worked if the foundation of the account hadn’t been golden. Now, I got lucky by being into something hundreds of thousands of people are also into and happening to create an Instagram account for it. I probably got lucky with the timing too.
Nevertheless, the core idea of the account is the key to exceptional growth. How you execute it is almost as important. Growth hacks lag far behind.
Instagram suggesting The Minimalist Wardrobe for new followers of The Minimalists.
When you truly have an account people want to follow, Instagram will help you out too. They’ll suggest you to new followers whenever someone follows an account that’s related to yours, and your posts will often be featured on the explore page.
Can This Be Recreated?
Is it still possible to grow any account to almost 200k followers in a year? Sure it is. There’s nothing that’s stopping you. The growth strategies I wrote about here aren’t difficult to copy. If you have the drive to hustle, you can do exactly what I did.
The challenge is coming up with — or stumbling upon, as I did — an interesting idea for your account. That’s really the message I’m trying to send here. I even wrote an article about how to get your Instagram foundation right.
It’s too common to see people apply perfectly good growth strategies to their accounts and not see any growth.
The Minimalist Wardrobe isn’t growing as fast as it used to anymore, and that’s fine. It grew into something so big so fast that I wanted to take a step back and turn it into something helpful, not just inspirational. I took my foot off the pedal for a while and am investing in the core idea, which I think will pay off in the future.
“Build it and they will come” is bad advice. You need marketing to grow — at least initially, before word of mouth kicks in. The thing is, the methods to accelerate growth aren’t rocket science. What I did wasn’t particularly sophisticated, and the results were tremendous.
If you put most of your effort into creating a valuable product — which in this case was the Instagram account — you’re setting yourself apart from the masses.
Way too many businesses have great marketing with a mediocre product. Don’t make that mistake.
|
https://medium.com/swlh/how-i-grew-an-instagram-account-from-4000-followers-to-190k-in-a-year-543d238341ad
|
['Sebastian Juhola']
|
2019-10-03 07:01:02.371000+00:00
|
['Instagram', 'Growth', 'Social Media', 'Marketing', 'Growth Hacking']
|
What’s new in Java 15
|
JDK 15 is the open-source reference implementation of version 15 of the Java SE Platform, as specified by JSR 390 in the Java Community Process.
JDK 15 reached General Availability on 15 September 2020. Production-ready binaries under the GPL are available from Oracle; binaries from other vendors will follow shortly.
The features and schedule of this release were proposed and tracked via the JEP Process, as amended by the JEP 2.0 proposal. The release was produced using the JDK Release Process (JEP 3).
Removals
Solaris and SPARC Ports
The source code and build support for the Solaris/SPARC, Solaris/x64, and Linux/SPARC ports were removed. These ports were deprecated for removal in JDK 14 with the express intent to remove them in a future release.
Nashorn Javascript Engine
The Nashorn JavaScript script engine and APIs, and the jjs tool were removed. The engine, the APIs, and the tool were deprecated for removal in Java 11 with the express intent to remove them in a future release.
Deprecations
RMI Activation
The RMI Activation mechanism is deprecated. RMI Activation is an obsolete part of RMI that has been optional since Java 8. No other part of RMI will be deprecated.
What's new
Helpful NullPointerExceptions
The usability of NullPointerException has been improved: messages generated by the JVM now describe precisely which variable was null.
Before Java 15
In Java 15
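The original before/after screenshots aren’t reproduced here, but a minimal sketch (with a hypothetical NpeDemo class) shows the difference in the message the JVM prints:
public class NpeDemo {
    static String city;   // never assigned, so it stays null
    public static void main(String[] args) {
        // Before Java 15 this line produced a bare "java.lang.NullPointerException"
        // with nothing but the stack trace to go on.
        // In Java 15 the JVM explains the problem, roughly:
        //   java.lang.NullPointerException: Cannot invoke "String.length()" because "NpeDemo.city" is null
        System.out.println(city.length());
    }
}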
Text Blocks
A text block is a multi-line string literal that avoids the need for most escape sequences, automatically formats the string in a predictable way, and gives the developer control over the format when desired.
Before Java 15
In Java 15
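A minimal sketch of the difference, assuming a small HTML snippet embedded in code:
public class TextBlockDemo {
    public static void main(String[] args) {
        // Before Java 15: escapes and concatenation everywhere
        String oldHtml = "<html>\n" +
                         "    <body>\n" +
                         "        <p>Hello, world</p>\n" +
                         "    </body>\n" +
                         "</html>\n";

        // In Java 15: a text block keeps the layout with no \n or + noise
        String newHtml = """
                <html>
                    <body>
                        <p>Hello, world</p>
                    </body>
                </html>
                """;

        System.out.println(newHtml);   // prints the HTML laid out exactly as written
    }
}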
JVM Improvements
ZGC Garbage Collector
The Z Garbage Collector (ZGC) is a scalable low latency garbage collector. ZGC performs all expensive work concurrently, without stopping the execution of application threads for more than 10ms, which makes it suitable for applications that require low latency and/or use a very large heap (multi-terabytes).
At a glance, ZGC is:
Concurrent
Region-based
Compacting
NUMA-aware
Using colored pointers
Using load barriers
At its core, ZGC is a concurrent garbage collector, meaning all heavy lifting work is done while Java threads continue to execute. This greatly limits the impact garbage collection will have on your application’s response time.
Beginning with Java 15, ZGC is a production-ready GC.
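Because ZGC is no longer experimental in Java 15, enabling it is a single flag; a typical launch might look like the following (the class name and heap size are placeholders):
java -XX:+UseZGC -Xmx16g com.example.MyApp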
Hidden Classes
Hidden classes are classes that cannot be used directly by the bytecode of other classes. They are intended for use by frameworks that generate classes at run time and use them indirectly, via reflection. A hidden class may be defined as a member of an access control nest and may be unloaded independently of other classes.
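A rough sketch of the API: in a real framework the class bytes would come from a bytecode generator such as ASM, but to keep the example self-contained we simply reuse the bytes of an already-compiled class.
import java.lang.invoke.MethodHandles;

public class HiddenDemo {
    public static void main(String[] args) throws Exception {
        // Read the class-file bytes of this class from the classpath.
        byte[] classBytes = HiddenDemo.class
                .getResourceAsStream("HiddenDemo.class")
                .readAllBytes();

        // Define a hidden class from those bytes; it is not discoverable by name.
        Class<?> hidden = MethodHandles.lookup()
                .defineHiddenClass(classBytes, true)
                .lookupClass();

        System.out.println(hidden.getName());   // something like HiddenDemo/0x0000000800b8e440
        // Class.forName(hidden.getName()) would fail: hidden classes cannot be looked up by name.
    }
}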
EdDSA
EdDSA is a modern elliptic curve signature scheme that has several advantages over the existing signature schemes in the JDK.
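A small sketch of signing and verifying with the new "Ed25519" algorithm name through the standard JCA API:
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EdDsaDemo {
    public static void main(String[] args) throws Exception {
        KeyPair keyPair = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        byte[] message = "hello".getBytes();

        Signature signer = Signature.getInstance("Ed25519");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);
        System.out.println("valid: " + verifier.verify(signature));   // valid: true
    }
}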
DatagramSocket API
The underlying implementations of the java.net.DatagramSocket and java.net.MulticastSocket APIs have been replaced with simpler and more modern implementations that are easier to maintain and debug. The new implementations are also easy to adapt to work with virtual threads, currently being explored in Project Loom.
Preview Features
A preview feature is a new feature whose design, specification, and implementation are complete, but which is not permanent, which means that the feature may exist in a different form or not at all in future JDK releases.
Pattern Matching
Pattern matching involves testing whether an object has a particular structure, then extracting data from that object if there’s a match. You can already do this with Java; however, pattern matching introduces new language enhancements that enable you to conditionally extract data from objects with code that’s more concise and robust.
More specifically, JDK 15 extends the instanceof operator: you can specify a binding variable; if the result of the instanceof operator is true, then the object being tested is assigned to the binding variable.
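A minimal sketch (pattern matching for instanceof is a preview feature in JDK 15, so it must be compiled and run with --enable-preview):
public class InstanceofDemo {
    public static void main(String[] args) {
        Object obj = "a string";

        // Before: test, then cast explicitly
        if (obj instanceof String) {
            String s = (String) obj;
            System.out.println(s.length());
        }

        // With pattern matching: the binding variable 's' is declared in the test itself
        if (obj instanceof String s) {
            System.out.println(s.length());
        }
    }
}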
Records
Introduced as a preview feature in Java SE 14, record classes help to model plain data aggregates with less ceremony than normal classes. Java SE 15 extends the preview feature with additional capabilities such as local record classes.
A record class declares a sequence of fields, and then the appropriate accessors, constructors, equals, hashCode, and toString methods are created automatically. The fields are final because the class is intended to serve as a simple "data carrier", which means that records are immutable.
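A minimal sketch with a hypothetical Point record (records are a preview feature in JDK 15, so --enable-preview is required):
public class RecordDemo {
    record Point(int x, int y) { }   // declares the components; everything else is generated

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x());   // 3, generated accessor
        System.out.println(p);       // Point[x=3, y=4], generated toString
        // p.x = 5;                  // would not compile: record fields are final

        // JDK 15 also allows local records, declared right inside a method:
        record MinMax(int min, int max) { }
        System.out.println(new MinMax(1, 10));
    }
}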
Sealed Classes
Sealed classes and interfaces restrict which other classes or interfaces may extend or implement them.
One of the primary purposes of inheritance is code reuse: When you want to create a new class and there is already a class that includes some of the code that you want, you can derive your new class from the existing class. In doing this, you can reuse the fields and methods of the existing class without having to write (and debug) them yourself.
Before Sealed Classes
With Sealed Classes
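The original screenshots aren’t reproduced here; a rough sketch with hypothetical Shape and Vehicle hierarchies (sealed types are a preview feature in JDK 15, so --enable-preview is required) illustrates the idea:
// Before sealed classes: any class anywhere on the classpath may extend Shape.
abstract class Shape { }
class Circle extends Shape { }
class Square extends Shape { }
// ...and nothing prevents an unexpected 'class Blob extends Shape { }'.

// With sealed classes: the permitted subtypes are listed explicitly.
sealed interface Vehicle permits Car, Truck { }
final class Car implements Vehicle { }
final class Truck implements Vehicle { }
// 'class Bike implements Vehicle { }' no longer compiles, because Bike is not in the permits list.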
Conclusion
In this article, we have looked at what was added and removed in Java 15, and how the changes delivered with Java 15 can improve your existing projects.
|
https://medium.com/javarevisited/whats-new-in-java-15-70335926cc42
|
['Dmytro Timchenko']
|
2020-11-21 06:19:25.188000+00:00
|
['Technology', 'Software Engineering', 'Programming', 'Software Development', 'Java']
|
Cognitive Overload and Unexpected Copy Writing Insights from Psychology and UX
|
A recent project reminded me of what a fine line there is between expressing one’s creativity and how it can potentially lead to immense frustration on the user side. No judgment here, just an observation that might humor you or hopefully bring some insights.
The sentiment came up during a homepage re-design [for a SaaS company]. Our client planned on the typical hero banner, mission statement, and product introduction modules leading to subsequent product pages: the classic sales funnel.
What seemed like a pretty straightforward project [from a UX point of view] ended up being this hard-fought battle over words. We advised on strategy and reviewed the Figma boards every once in a while after the client’s MarComm team updated the copy in each iteration. I noticed that I had this intense reaction to the writing every time I reviewed it, even though technically and contextually there was nothing wrong with it.
I tried to take a step back and ask myself why all these emotions came up. Was it just me, or were there some [higher] mechanisms at play?
Psychology might have an answer:
We all know that actively paying attention is very hard, literally. It turns out that stress is actually the gateway to becoming mindful. Our brain mostly operates in the ‘Default Mode Network.’ We are scanning information throughout the day, so we are not really reading word-by-word until something sticks out. This allows the brain to conserve energy, we are being pulled in a million directions every day, and mindfully interacting with content is a substantial commitment in energy. We have this built-in mechanism that makes it literally hard to [mindfully] interact with new stimuli, almost as a test to see if we really want to do this. Stanford neuroscientist Andrew Huberman Ph.D. found in his research that to get into a focused mind state it literally needs a neurotransmitter, epinephrine (adrenaline), that resides in the stress center of your brain to get you going. That’s why you feel stressed and agitated when you have to crack that difficult problem. The brain wants to know why it should engage.
So what can we learn from that when we write content?
Simplicity: The reason I literally experienced stress [when reading that copy] was due to the fact that each content module was written in such a pompous style that it forced my attention for literally every headline and phrase. “What does it mean? What does it say?” Once I tackled a micro problem it took me away from understanding the content as a whole and I had to switch gears, again. A user should be able to quickly scan each section to see if something sparks any interest. Given my level of frustration, I could only assume that most users would bail at this point. Simplicity goes a long way. Do not overcomplicate your copy.
Consistency: If a user reads one of your headlines or paragraphs, others should have the same construction, tone and feel. Again, it allows for faster scanning and requires less energy to process. Avoid disruptive patterns. UX designers always think in terms of patterns: when we recognize an interaction that appears over and over, we create a component. It creates familiarity and alleviates stress.
Follow Industry Standards: Your target audience might be somebody who is researching your offering and literally has been to a dozen of your competitors’ websites that same day. This can be a very stressful task and carry a lot of responsibility. There might be much at stake for them and the company they represent. Selecting a SaaS offering like yours might be a decision that has ramifications for their company for years to come. So make it easy to extract the key points. This is by no means a call to conformity and sameness, but there is a place to be unique and there is a place to follow proven patterns. I am speaking from experience: researching products has been and is part of my job, comparing different vendors and their offerings. I have a clear matrix for this; I am looking for certain data points that I need to get out of a product offering so I can compare features and functionalities with other vendors side-by-side, and I literally copy them into a Smartsheet and present my findings to my clients. If a website does not provide this information in a quick and easy way, I will move on to the next competitor.
Look at the Bigger Picture: In the heat of the design process, each square pixel is often seen as “real estate,” space to be taken advantage of by the parties involved, the last bridge between a user converting or not. Multiple iterations often lead to overly inflated content, as if the future of the company depends on this corner of the page. UX people worry more about how a certain section fits into the whole ecosystem of the website and abides by the rules established to make it a cohesive customer experience. It would be beneficial to take this holistic approach. Taking a step back and having a closer look at what role a specific section has in the bigger picture of a website is key. Not every summary needs to be the hero. Conversion happens after you have provided enough information to let the user find your value. And the higher the sticker price, the lower the chances that conversion will happen in one interaction alone. So relax.
I keep using this analogy: You are invited to a cocktail party, and somebody approaches you and starts a conversation. You have never met that person before, and he/she starts to tell you their whole life story, their strengths, and the clear reasons why he/she is the best [future] life partner. You would be overwhelmed. You would probably run away. The same goes for your content.
Feeding information is a dance, give the user small bite-sized info snippets first, and then, if they show interest, provide more.
What do you think? What has your experience been? Leave your comments below.
Takeaways
Simplicity: There is no need to make copy overly complicated; say it in simple words and allow for easy scanning.
Consistency: Use the same construction, tone and feel.
Follow Industry Standards: Hit the industry-specific points to allow users to compare you easily with competitors.
Look at the Bigger Picture: Keep the bigger picture in mind and stay aware of what purpose certain content serves.
Markus Hagen is the Director of User Experience at verso.
|
https://medium.com/ooh-verso/cognitive-overload-and-unexpected-copy-writing-insights-from-psychology-and-ux-a21e20a27169
|
['Markus Hagen']
|
2020-12-11 18:30:44.951000+00:00
|
['User Experience', 'Branding', 'Content Strategy', 'Cognitive Overload', 'Brand Strategy']
|
Training BERT at a University
|
Training BERT at a University
Machine Learning Models are Enormous.
If you’re reading this post, you’ve probably heard about the remarkable performance of new machine learning models like BERT, GPT-2/3, and other deep learning models for language, image, audio, and video data.
Image by Author
You may ask: Why do these semi-magical machine learning models perform so well? The short answer is that these models are enormously complex and are trained on an enormous amount of data. In fact, Lambda Labs recently estimated that it would require $4.6 million to train the GPT-3 on a single GPU — if such a thing were possible.
Instead, platforms like PyTorch and Tensorflow are able to train these enormous models because they distribute the workload over hundreds (or thousands) of GPUs at the same time. Unfortunately, these platforms require that each individual GPU system be identical (i.e., they have the same memory capacity and compute performance).
Unfortunately, most organizations not named Google or Microsoft do not have a thousand identical GPUs. Instead, small and medium organizations have a piecemeal approach to purchasing computer systems resulting in a heterogeneous infrastructure, which cannot be easily adapted to compute large models. Under these circumstances training even moderately-sized models could take weeks or even months to complete.
If not addressed, universities and other small organizations risk losing relevance in the race to develop newer and better machine learning models.
To help remedy this situation we recently released a software package called HetSeq, which is adapted from the popular PyTorch package and provides the capability to train large neural network models on heterogeneous infrastructure.
Experiments, details of which can be found in an article (available on arXiv) published at the 2021 AAAI/IAAI Conference, show that base-BERT can be trained in about a day over 8 different GPU systems, most of which we had to “borrow” from idle labs across Notre Dame.
Before we introduce HetSeq, we first need a little background:
Typical training of a neural network
Training on single GPU
This code shows the training step of a basic supervised learning model in a neural network framework. Given some architecture, this task optimizes the model parameters via SGD on the loss function between predicted instances and ground truth.
The actual training process is made of four individual steps: (1) data loading, (2) the forward pass, (3) the backward process, (4) update.
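The original code isn’t reproduced here; the following is a minimal PyTorch-style sketch of such a training step, where the tiny linear model and random tensors are placeholders rather than the actual BERT setup:
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Hypothetical stand-ins for the real model and dataset.
model = nn.Linear(10, 2).to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
data_loader = DataLoader(data, batch_size=32)

for inputs, targets in data_loader:                      # (1) data loading, one batch at a time
    inputs, targets = inputs.to(device), targets.to(device)
    outputs = model(inputs)                              # (2) forward pass: predictions...
    loss = loss_fn(outputs, targets)                     #     ...and loss vs. ground truth
    optimizer.zero_grad()
    loss.backward()                                      # (3) backward pass: compute gradients
    optimizer.step()                                     # (4) update the parameters via SGD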
(1) Data loading
On a single GPU the first thing that happens is that the existing model parameters (which are initially random) and the data are transferred to the GPU. Typically, the dataset includes a large number of training instances which will not all fit on a single GPU. In this common case we split the dataset into multiple batches and load them one at a time.
Forward pass in single GPU (Image by Author)
(2) The Forward Pass
The next step is to compute the loss function. To do this, the data batch is passed over the model (hence “forward pass”) and compared against the ground truth training labels. In the block, the forward pass has two steps: generate predicted label (output) and measure the difference (loss) between output and target.
(3) The Backward Pass
The loss computed in the previous step determines how much to change the model parameters; this is called the gradient, which is applied backward over the neural network architecture (hence backward pass or backpropagation).
Update parameters in a single GPU (Image by Author)
(4) Update
Remember that the goal of this whole process is to optimize the model parameters so that when the data is passed forward over them, then they will minimize the loss. So it is important that the model parameters are updated according to the values of the gradient.
A brief note on training steps
Altogether, one iteration of data loading, a forward pass of a single data batch, followed by a backward pass, and then the parameter update is called one step. Once all the data batches in the entire dataset are processed, we say that one epoch has been completed. Finally, it has been shown that the learning rate needs to change as the number of epochs increases.
What if we have multiple GPUs?
Since data batches are independent from one another, it is rather straightforward to parallelize this process by sending different data batches to different GPUs. Then, if we can somehow combine the computed loss and synchronize the updated model parameters, then we can make training much faster.
Distributed Data Parallel class
This isn’t a new idea. In PyTorch, we use the torch.nn.parallel.DistributedDataParallel (DDP) module instead of the torch.nn.Module module for the model. Each GPU is an individual process, and the communication between GPUs occurs with standard IPC.
But that’s not the end of it. The four steps need some tweaking:
(1) Data loading with DDP
With DDP we split each data batch over many different GPUs — as many as we have available. In this case it is critical that each GPU has the same model parameters.
This is the core idea of distributed data parallel (DDP): each GPU has identical model parameters yet process different data batches simultaneously.
Forward pass in multiple GPUs (Image by Author)
(2) Forward pass with DDP
Once different data batches are loaded onto different GPUs, the next step is to perform a forward pass and compute the loss functions. Unlike in the single-GPU case, we now need the total loss over all data batches, which is the sum of the losses across all the GPUs, because our goal is to compute the average loss for the backward step. It is therefore important to also output the number of instances (# of ins.), so we sum up the losses as well as the number of instances.
(3) Backward pass with DDP
We use the average loss to obtain the gradients of the model parameters via the backward pass. Before this begins, the average loss needs to be communicated to the different GPUs so the model parameters can stay synchronized. Once the replicas on the different GPUs have computed their gradients, gradient synchronization is executed so that they all hold the same gradients.
Gradient synchronization (Image by Author)
(4) Update with DDP
Once the gradients are synchronized, we can update model parameters in parallel using the individual optimizers on each GPU.
The next training step can usually begin immediately. However, because nearly all of the parameters are floating-point values and small numerical errors can accumulate differently on different GPUs, especially when many training steps are performed, we occasionally synchronize parameters at the beginning or end of a step.
These changes are reflected in the following pseudocode. Notably, the training function takes a device id (i.e., the GPU id), the model has to perform parameter synchronization before each forward pass, the loss has to be averaged before the backward pass, and finally the gradients need to be averaged before the model parameters are updated.
Training on multiple GPUs
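The original pseudocode isn’t reproduced here; a rough PyTorch sketch of the same idea (one process per GPU, with placeholder model and data rather than HetSeq’s actual code) might look like this:
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler

def train(gpu_id, rank, world_size):
    # One process per GPU; assumes MASTER_ADDR and MASTER_PORT are set in the environment,
    # e.g. when this function is launched once per GPU via torch.multiprocessing.spawn.
    dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(gpu_id)

    # Hypothetical stand-ins for the real model and dataset.
    model = nn.Linear(10, 2).cuda(gpu_id)
    ddp_model = DDP(model, device_ids=[gpu_id])          # keeps parameters in sync across GPUs
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    data = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
    sampler = DistributedSampler(data, num_replicas=world_size, rank=rank)   # split the batches
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    for inputs, targets in loader:
        inputs, targets = inputs.cuda(gpu_id), targets.cuda(gpu_id)
        loss = loss_fn(ddp_model(inputs), targets)       # forward pass on this GPU's batch
        optimizer.zero_grad()
        loss.backward()                                  # DDP averages gradients across GPUs here
        optimizer.step()                                 # every replica applies the same update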
Scaling Up — Multiple Nodes with Multiple GPUs
So far, we’ve talked about how to utilize multiple GPUs on the single node. That’s great, but it’ll only get us so far. If we want to really scale up, we need to distribute the workload across multiple nodes, each having multiple GPUs.
Fortunately, the same mechanism used to address GPUs on a single node can be extended to multiple nodes. You can simply set the node index, i.e., the rank parameter in the init_process_group function, globally so that each GPU has a unique ID across all nodes.
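For instance, with two nodes of four GPUs each (all numbers and the address below are hypothetical), the global rank can be derived from the node index and the local GPU index before calling init_process_group:
import torch.distributed as dist

gpus_per_node = 4
node_rank = 1          # index of this node (0 on the first node, 1 on the second, ...)
local_gpu = 2          # index of the GPU inside this node
global_rank = node_rank * gpus_per_node + local_gpu   # unique id across all nodes: here, 6

dist.init_process_group(
    backend='nccl',
    init_method='tcp://10.0.0.1:11111',   # hypothetical address of the chosen "master" node
    world_size=2 * gpus_per_node,
    rank=global_rank,
)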
Communication — Here’s where things get tricky
Intra-node VS inter-node (Image by Author)
When you have multiple nodes with multiple GPUs, communication needs to occur between GPUs on the same node and across different nodes in order to share calculated gradients and parameter updates during the training procedure.
Of course, inter-node communication is much slower than intra-node communication. And sharing gradients and parameter updates becomes a complete mess when the nodes and GPUs are not identical, as is the case when you don’t have a billion dollars to spend on a data center with custom compute hardware.
When your parents make you share your toys
At most university compute centers, various research labs share their computing resources. There are different models for how this is done, but usually the IT administrators take a fair amount of control over the systems and prevent the scientists from installing (or upgrading or downgrading) needed software.
This means that, if a large model needs to be trained, then some poor graduate students need to make the training algorithm fit the infrastructure.
And this is difficult for a few reasons:
Some toys have complicated instructions. The Distributed Data Parallel (DDP) package is a pain to understand and work with. This is especially true for most machine learning researchers who are not well versed in the particulars of distributed computing. In addition to the basic setup of DDP, a smooth training run over different GPU architectures across many nodes requires careful data splitting and arduous communication between GPUs and nodes.
Some toys are better to play with than others. In a heterogeneous system, some GPUs are faster than others and some have more memory than others. This means that some GPUs get more data to crunch than others, which is fine, but it also means that the gradient averages and parameter updates need to be carefully weighted.
Our parents won’t let us play with some toys. Most existing distributed GPU training platforms require extra packages like Docker, OpenMPI, etc. Unfortunately, most competent cluster administrators won’t allow users to have the administrative privileges that are needed to set up each node to fit the model.
Some toys don’t work well with others. Deep learning packages like BERT and GPT-2/3 developed by large companies generally have specific formats for the model design, with several logic layers, making them difficult to use and adapt to a custom application.
Because of these problems we created a general system that wraps up all the complicated parts of DDP, data splitting, compatibility, and customizability and deployed this system at Notre Dame.
We call this system HetSeq. It was adapted from the popular PyTorch package and provides the capability to train large neural network models on heterogeneous infrastructure. It can be set up easily over a shared file system without extra packages or administrative privileges.
Here’s how to train BERT with HetSeq.
BERT at a University with HetSeq
Let’s get started with Anaconda. We’ll create a virtual environment and install Python.
Then we’ll install packages and HetSeq bindings: we’ll download HetSeq from GitHub, install packages in requirements.txt, as well as HetSeq and bindings from setup.py.
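The exact commands were shown as embedded gists in the original post; a rough sketch of the usual sequence looks like the following, where the Python version is a guess and the repository URL is the one given at the end of this article:
conda create --name hetseq python=3.7
conda activate hetseq

git clone https://github.com/yifding/hetseq
cd hetseq
pip install -r requirements.txt
pip install --editable .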
The last step before we train is to download the BERT data files including training corpus, model configuration, and BPE dictionary from this link. Download DATA.zip, unzip it and place it into the preprocessing/ directory.
Train BERT with HetSeq
The cool thing about HetSeq is that it abstracts away all the details of distributed processing. So the training code for 100 GPUs is almost the same as for 1 GPU! Let’s try it out!
In this case let’s assume we have two compute nodes.
On node 1:
On node 2:
The two blocks of code are run on two different nodes. The TCP/IP address needs to be set to the IP address of one of the nodes. Once these get started up, you’ll be able to watch the code run across 8 GPUs on 2 different nodes!
🤗🤗🤗
So how well does it work?
We have done some experiments (see https://arxiv.org/pdf/2009.14783 for details) over various homogeneous (hom) and heterogeneous (het) settings.
In total we were able to commandeer 32 GPUs across 8 heterogeneous nodes to reduce the training time for the BERT language model from seven days to about one day. 🤩🤩🤩
Under the hood of HetSeq
HetSeq package layout (Image by Author)
The HetSeq package contains three major modules, shown on the left of the figure above: train.py, task.py, and controller.py. Together they coordinate the main components shown on the right. The train.py module initializes the distributed system and its various components.
The task.py module defines the model, dataset, data loader, and optimizer functions; it also executes the forward pass and backpropagation functions. The controller.py module acts as the main training controller. It executes the actual model, optimizer, and learning rate scheduler; loads and saves the checkpoint; communicates the loss; and updates the parameters.
But I don’t want to train BERT!
No problem. You can extend HetSeq to any other model, but you need to define a new Task with the corresponding Model, Dataset, Optimizer, and Learning Rate Scheduler. An MNIST example is given with all the extended classes. Pre-defined optimizers, learning rate schedulers, datasets, and models can be reused in other applications. Check out our documentation for more details!
Conclusion
In this post, we introduced the background and the actual steps to train BERT from scratch at a university using our released package, HetSeq. It allows us to set up training easily on heterogeneous systems with multiple nodes and multiple GPUs.
Would you like to learn more?
For more information, please check out HetSeq package (https://github.com/yifding/hetseq) and documentation (hetseq.readthedocs.io). Cheers!
[1] Yifan Ding, Nicholas Botzer and Tim Weninger. HetSeq: Distributed GPU Training on Heterogeneous Infrastructure. Proc. of Association for the Advancement of Artificial Intelligence (AAAI) Innovative Application of Artificial Intelligence, 2021.
|
https://towardsdatascience.com/training-bert-at-a-university-eedcf940c754
|
['Yifan Ding']
|
2020-12-13 19:11:15.882000+00:00
|
['Distributed Systems', 'Distributed Training', 'Deep Learning', 'Bert', 'Open Source']
|
#ThankyouUber for a hell of a ride.
|
#ThankyouUber for a hell of a ride.
Night at the Uber Greenlight Hub in Sydney Australia
In many ways, more than I can count, I’ve stumbled into something special at Uber. It feels too good to be true, especially reflecting now and knowing the hundreds of people I’ve interviewed whose dream it is to be a part of this company.
I don’t think I’ve paused for long enough to appreciate that fact. Now as my time at Uber comes to a close, hopefully, I can make up for it a bit here. It feels fitting to reflect on change within a company defined by change and to dwell a little longer on the countless stories I’ve accumulated and lifelong friends I’ve made. I’m thankful for this company’s effect on my journey and I hope those reading can appreciate the uniqueness of their own situation.
For a little context…
Before I joined Uber I could only call myself a “fraud designer”. I never intended a career in design, and I’ve never taken a design course. I’ve only read a design book or two to digest some of the lingo, and prior to Uber my only legitimate experience in design was hacking my way through product design at a small 10-person startup. At times, I still refer to myself that way to denote that design is often labeled something different from how I define it.
But when I joined Uber as a designer, I saw what a unique opportunity it was, and it completely re-drew the path I saw in front of me. I was immediately surrounded by an incredible amount of skillful thought. From former IDEO and Frog designers to Salesforce and Apple designers, PMs, and engineers, plus roles I had never heard of, all working in so many formats and processes, it was paralyzing. Honestly, I was terrified, but I tried my best to sponge up as much as possible.
I think anyone who joins these big brand companies at any level has a healthy amount of imposter syndrome, an understandable fear of the unknown. Even if their products are universally approachable, working at them is pulling back a curtain, removing the veil on a company that has become so distinct it is now a verb in society: Google, Uber, AirBnB, Facebook. Before you enter you can only guess: “What would that be like?” As time goes on, these unknowns become checkboxes. These checkboxes get checked, and where anxiety once filled every aspect of your day, you can now walk confidently in hallways where you once were nervous to roam freely.
In 3.5 years I went from a designer working on internal tools used by a few hundred colleagues around the world, to defining roadmaps for major company initiatives, to spinning up new products and features, and working with some of the most talented folks in the world, to mentoring designers from all corners of the company as I continued to sponge skills and learn as much as I could (mostly and more often from more junior designers).
These achievements are in no way my own, but a mixture of luck, collaboration, and hustle. In that collaboration came the most valuable thing that Uber gave me these past 3.5 years: a series of memories and moments that I can still bring up to those who were present and receive a unanimous smile, hug, or belly laugh.
A few of my favorite…
My second week at Uber: falling asleep in the dark, warm usability lab in Beijing, China. Thank you Jeremy, and Eric for “getting a nUber to China”.
Arjun and Jeremy walking through the Forbidden City, Beijing China
I wish I could convey what it’s like to watch Arjun dance with a breakdancing crew in an Indonesian night club in Jakarta.
Playing cricket with a group of boys in the hills around Jaipur, India.
A group of boys watches as we take turns subbing into their cricket match.
Celebrating Yizzy’s birthday in Mexico City.
Mexico City with Yizzy and Jake
Thinking every time my badge stopped working in the elevators that I was fired without my knowledge.
But most of all working with some of the brightest folks I may ever work with, through tears, frustrations, anger, and confusions.
Drawing on the office Windows in Mexico City
So it goes… #GoodbyeUber
A few years ago another hashtag was trending and the company I’ve called home for 3.5 years now was setting out on a journey of change. I hadn’t even completed a year yet and maybe I was in the eye of the storm, but in those moments when others saw a tornado of bad news and drama surrounding the company things began to quiet for me. I began to feel comfortable. I had developed a partnership and a playbook that was working and delivering results.
I feel somewhat defined by those moments, carrying a metaphorical badge of honor to have lived through that change. Perhaps it is fitting (or maybe even ironic) that change is also now the constant pushing me away from the company I admire; a company I will still watch eagerly to see a bright future fulfilled, fueled by some amazingly talented people.
After working on Internal Tools, Rider, Eats, Bikes & Scooters, Customer Obsession (support), Safety, Voice Assistant, Uber.com, Uber Pool, and Uber 4 Business, it is time to take the next leap, to jump into the hacky, jack-of-all-trades entrepreneurial deep end from which I was lucky enough to emerge once before.
Thank you for the chance. Thank you for the constant change. Thank you for the challenge, and thank you for the ride of a lifetime. 5 stars. Maybe we will get matched again on another trip down the road.
#ThankYouUber
|
https://medium.com/@titobgoldstein/thankyouuber-for-a-hell-of-a-ride-467daff7ba01
|
['Tito Goldstein']
|
2020-02-11 00:00:50.381000+00:00
|
['Product Design', 'Quitting A Job', 'Memories', 'Uber', 'Designer']
|
Angular: Dynamic Themes Without a Library
|
The concept of theming has been around as long as I can remember. Giving users the ability to choose the look and feel of your product has incredible value — it creates a more localized experience and reduces developer maintenance time.
How can we create something like this in our Angular apps?
Why Sass Alone Won’t Work
Although Sass variables may work to create a preset themed experience, the biggest drawback is that it can’t be manipulated by JavaScript. We need JavaScript to dynamically change the value of our variable!
Why Material Alone Won’t Work
Ever since Angular Material was released, developers have been flocking to this library to utilize its reusable components (not to mention its built-in accessibility).
Material comes with built-in theming, but this may not work for two reasons:
1. By default, Material comes with its own color palette that is optimized for accessibility. If you want to generate more colors, you’ll need to pass them into the mat-palette mixin or create a new theme file using 3rd-party tooling. This creates an external dependency and restricts the ability to switch themes without touching code.
2. Although it is a great option, not everyone wants to use Material! Many developers do not wish to import an entire library to utilize components and opt to create their own.
The solution? Sass + CSS Variables!
If you have never used native CSS Custom Properties (I call them variables), there is a great article (here) to help you get started. The reason this approach works is because CSS variables can be manipulated by JavaScript! With this combination, you can use a form to pass CSS variables to a Sass map, which can be used throughout the app.
Let’s See It!
This implementation:
Does not use any external libraries
Allows multiple components to dynamically change styles through a form
Saves the form as an object that can be stored in a database or local store
Has the capability to load an external object as a preloaded or preset style
Link to Demo: https://native-theming-form-medium.stackblitz.io/
Link to Stackblitz: https://stackblitz.com/edit/native-theming-form-medium
The Magic
The core principle behind this method is combining Sass maps and CSS Variables.
In our theme.scss file, the default values are set and passed into a Sass map
theme.scss
// default colors
.theme-wrapper {
--cardColor: #CCC;
--cardBackground: #FFF;
--buttonColor: #FFF;
--buttonBackground: #FFF;
--navColor: #FFF;
--navBackground: #FFF;
--footerColor: #FFF;
--footerBackground: #FFF;
--footerAlignment: left;
}

// pass variables into a sass map
$variables: (
--cardColor: var(--cardColor),
--cardBackground: var(--cardBackground),
--buttonColor: var(--buttonColor),
--buttonBackground: var(--buttonBackground),
--navColor: var(--navColor),
--navBackground: var(--navBackground),
--footerColor: var(--footerColor),
--footerBackground: var(--footerBackground),
--footerAlignment: var(--footerAlignment)
);
A function is created to return the native css variable from the global sass map
function.scss
@function var($variable) {
@return map-get($variables, $variable);
}
The components can now read these two files to host a dynamic variable that changes upon form resubmit
card.component.scss
@import '../../theme';
@import '../../functions';

.card {
background-color: var(--cardBackground);
color: var(--cardColor);
}
The card’s background color is now #FFFFFF and text color is #CCCCCC
But how do we change the values?
Through the theme-picker component!
In our theme-picker.component.html file, we are using template forms with ngModel to create an object with a unique key (style) and value (input). The object then gets passed to the TypeScript file which dynamically overwrites the variable.
theme-picker.component.ts
// searching the entire page for css variables
private themeWrapper = document.querySelector('body');

onSubmit(form) {
  this.globalOverride(form.value);
}

globalOverride(stylesheet) {
  if (stylesheet.globalNavColor) {
    this.themeWrapper.style.setProperty('--navColor', stylesheet.globalNavColor);
  }
  ...
  if (stylesheet.globalButtonColor) {
    this.themeWrapper.style.setProperty('--buttonColor', stylesheet.globalButtonColor);
  }
}
The globalOverride function checks to see if a value exists for that specific variable, then replaces each CSS Variable with the new inputted one.
Voilà!
This code can be better optimized to scale (using preset style objects, saving/publishing styles on submit), so feel free to play around with it!
UPDATE (3/26/19)
Removing Angular from the title since this can be implemented in React or vanilla JavaScript as well!
|
https://medium.com/atom-platform/angular-6-dynamic-themes-without-a-library-c21dfb2cb580
|
['Angela Damaso']
|
2020-03-30 16:41:18.625000+00:00
|
['JavaScript', 'Angular', 'Theming', 'Front End Development', 'React']
|
All about premature birth
|
Nobody, ever, plans a premature birth. Parents are often caught by utter astonishment when they’re informed that their little one might arrive sooner than expected.
According to WHO, ‘Preterm babies are those that are born alive before 37 weeks of pregnancy are completed.’ Preterm babies do not entirely complete their course of fetal development. These tiny ones sure love to surprise their mum and dad.
This can occur due to a range of conditions, such as infections, multiple pregnancies, diabetes, high blood pressure, poor maternal health, etc. We cannot point to a single reason, in an attempt to counter it. As you would imagine, it can be a very complicated situation to prevent.
Given the relatively underdeveloped state of preemies, they might keep their parents on their toes with regular doctor’s appointments and seeking certain specific treatments, as and when required.
Should this scare expecting parents and those already having premature babies? Not at all. There are several ways to combat this issue and to have healthy, active babies even if they were born ‘premature’. Thus, what has to be done is to provide absolute and intense care to the baby when and if this situation arises. But before discussing any of that, let’s first look into some stats.
Every year about 15 million babies are born before the completion of those 37 weeks. This just means, slightly more than 1 in every 10 babies are preemies or premature babies. WHO suggests that cost-effective, timely interventions have successfully saved and restored the health of many premature babies worldwide. In India, stats reveal that this is a common occurrence. This should help parents dealing with this situation. They should know that they aren’t alone in this, and there are several success stories out there.
Recently, I got the opportunity to speak to one such strong mother who raised her preemie just fine. She revealed that her experience at the time of pregnancy wasn’t positively memorable. She didn’t appreciate the way she was treated by her doctor, nor were the facilities in the hospital adequate. When she was informed about an urgent c-section, she was baffled but went ahead with it, without any second opinion. This happens when parents start panicking and stress gets the best of them. The mother suggested that a second opinion from a trustworthy practitioner is very important if time permits. But her real struggle began when she brought the baby home, from the neo-natal care, which was after a month. She had to visit the pediatrician every week. She was bombarded with excessive pieces of advice from every person she ever met and had to be extremely careful with what she eventually employed. It was with love and support that was received from both sides of the family and intense parental and professional care, that the baby could recover and grow normally. Now he’s an active little boy!
This got me thinking: there is a lot of emotional conflict that parents, and especially mothers, have to go through when a situation like this arises. I could feel a strong sense of resentment in her tone as she spoke about the health care apparatus that should’ve ideally been her ultimate support system.
This is of course not to say that it’s the fault of the health care professionals every time. As I’ve discussed, there are several reasons behind a preterm birth, and it is not a unidimensional issue. However, if this story resonates with your situation today, then you should know there’s light at the end of the tunnel. With proper care, love, professional help, and thorough research, these little ones can be restored to good health and can lead very active lives.
The key here is to, firstly, seek a second opinion from a trustworthy healthcare professional, and secondly, after birth, provide the appropriate attention and intense care, that is needed.
A good pediatrician can help you through this journey in ways unimaginable. Pediatricians deal with such cases on a daily basis; however, your baby shouldn’t be just another ‘number’ for them. So finding the right one who cares can be, at times, challenging. But don’t lose hope, there are many, many good doctors out there who treat their patients just right! Locating one such pediatrician will require a bit of research on your part, or your close friends and family could help you with this.
Stress is normal, and it shouldn’t get in the way of you taking the right decision for your little miracle. As days pass, you’ll get more used to the situation and will learn relevant facts and will then be able to make the correct choices. This is, again, a process. After all, babies are little, emotional beings. Preterm babies or preemies might require a little extra physical and emotional support.
On this journey, every day, there is something to learn. This is true, for both, parents with premature babies, and those with normal term babies. So, every day you’ll get to watch your baby grow and learn new things about her. Soon enough, with the coordinated help of the pediatrician, this situation will become your very own success story.
Thus, don’t let worry and stress get the best of you! Keep pressing forward, and keep seeking the necessary help, as required. The baby you’re blessed with, needs, and deserves every bit of your protection and care. You got this!
Nobody, ever, plans a premature birth. Parents are often caught by utter astonishment when they’re informed that their little one might arrive sooner than expected.
According to WHO, ‘Preterm babies are those that are born alive before 37 weeks of pregnancy are completed.’ It goes without saying that preterm babies do not entirely complete their course of fetal development. These tiny ones sure love to surprise their mum and dad.
This can occur due to a range of conditions, such as infections, multiple pregnancies, diabetes, high blood pressure, poor maternal health etc. Clearly, we cannot point to a single reason, in an attempt to counter it. As you would imagine, it can be a very complicated situation to prevent.
Given the relatively underdeveloped state of preemies, they might keep their parents on their toes with regular doctor’s appointments and seeking certain specific treatments, as and when required.
Should this scare expecting parents and those already having premature babies? Absolutely not. There are several ways to combat this issue and to have healthy, active babies even if they were born ‘premature’. Thus, what has to be done is to provide absolute and intense care to the baby when and if this situation arises. But before discussing any of that, let’s first look into some stats.
Every year about 15 million babies are born before the completion of those 37 weeks. This just means, slightly more than 1 in every 10 babies are preemies or premature babies. WHO suggests that cost-effective, timely interventions have successfully saved and restored the health of many premature babies worldwide. In India, stats reveal that this is a common occurrence. This should definitely help parents dealing with this situation. They should know that they aren’t alone in this, and there are several success stories out there.
Recently, I got the opportunity to speak to one such strong mother who raised her preemie just fine. She revealed that her experience at the time of pregnancy wasn’t positively memorable. She didn’t appreciate the way she was treated by her doctor, nor were the facilities in the hospital adequate. When she was informed about an urgent c-section, she was baffled but went ahead with it, without any second opinion. This happens when parents start panicking and stress gets the best of them. The mother suggested that a second opinion from a trustworthy practitioner is very important if time permits. But her real struggle began when she brought the baby home, from the neo-natal care, which was after a month. She had to visit the pediatrician every week. She was bombarded with excessive pieces of advice from every person she ever met and had to be extremely careful with what she eventually employed. It was with love and support that was received from both sides of the family and intense parental and professional care, that the baby could recover and grow normally. Now he’s an active little boy!
This got me thinking, there is a lot of emotional conflicts that parents and especially mothers have to go through when a situation like this arises. I could feel a strong sense of resentment in her tone as she spoke about the health care apparatus, that should’ve ideally been her ultimate support system.
This is of course not to say that it’s the fault of the health care professionals everytime. As I’ve discussed, there are several reasons behind a preterm birth, and it is not a unidimensional issue. However, if this story resonates with your situation today, then you should know there’s light at the end of the tunnel. With proper care, love, professional help, and thorough research, these little ones can be restored to good health and can lead very active lives.
The key is, firstly, to seek a second opinion from a trustworthy health care professional and, secondly, to give the baby the appropriate attention and intensive care after birth.
A good pediatrician can help you through this journey in ways you can’t imagine. Pediatricians deal with such cases daily, but your baby shouldn’t be just another ‘number’ to them. Finding one who really cares can be challenging at times, but don’t lose hope: there are many, many good doctors out there who treat their patients just right! Locating such a pediatrician will require a bit of research on your part, or your close friends and family could help you with it.
Stress is absolutely normal, but it shouldn’t get in the way of making the right decisions for your little miracle. As the days pass, you’ll get more used to the situation, learn the relevant facts, and be able to make the right choices. This, again, is a process. After all, babies are little, emotional beings, and preterm babies may require a little extra physical and emotional support.
On this journey, there is something to learn every day. That’s true for parents of premature babies and of full-term babies alike. Every day you’ll get to watch your baby grow and learn new things about her. Soon enough, with the coordinated help of your pediatrician, this situation will become your very own success story.
So don’t let worry and stress get the best of you! Keep pressing forward, and keep seeking the help you need. The baby you’re blessed with needs, and deserves, every bit of your protection and care. You’ve got this!
|
https://medium.com/@2monkeysandme/all-about-premature-birth-b88fe119a681
|
[]
|
2019-08-22 06:36:02.023000+00:00
|
['Birth', 'Pregnancy', 'Baby', 'Children', 'Kids']
|
Is a ‘Made in America’ watch really what consumers want?
|
Is a ‘Made in America’ watch really what consumers want?
If globalism killed the chance of a “made in America” watch, is that a bad thing?
Photo by LumenSoft Technologies on Unsplash
The Christmas season is enjoyable for many reasons, and the opportunity to feed my wristwatch-collecting addiction is just one of them. With this opportunity comes the chance to share the gift of something I love with friends and family, each watch purchased with the intent of matching the style and personality of its receiver. However, what watch do you select for someone who is a vocal “Made in America” acolyte?
As someone who considers themselves a watch connoisseur — the answer is you don’t.
With that said, there isn’t much of an argument anyway, since the watch industry was essentially outsourced to China and other industrialized parts of Asia during the “quartz crisis” (as the watch world calls the period when cheaper, yet more accurate, quartz-movement watches caused the financial extinction of many mechanical watch brands that couldn’t compete), which by 1980 had all but forced watch manufacturers eastward in order to stay alive in a rapidly changing market. Timex Group USA, Inc., a brand with a long and cherished history as the quintessential all-American watch company, has been manufacturing and assembling its watches in China since the 1970s. Invicta, the affordable favorite of many casual wristwatch wearers, prides itself on a Swiss heritage going back all the way to 1837, but today a rather large number of its current watches don’t officially meet the criteria for being a “Swiss made” watch. In fact, many “Swiss” watches don’t meet the “Swiss made” criteria, which remain the gold standard in the watch world, even among casual watch consumers.
With this knowledge in hand, it would appear that the pursuit of the “Made in America” watch has already hit a dead end. However, this is where the journey takes a rather strange turn, especially if you’ve already started searching for “American watches” on Google. In the past decade, many American-based microbrands have stepped up to the challenge. This is a good thing, right?
No. In fact, dig a little deeper and you’ll see that even those American-originated watches still don’t pass muster as “Made in America.” According to Zach Kazan at the popular online watch website wornandwound.com, “China is also a go-to resource for smaller enthusiast-focused brands in the United States, who have employed Chinese manufacturing and supply over the last several years, with great success.”
Detroit-based Shinola (more commonly known for its bikes than its watches) was one of the biggest offenders in attempting to ride the wave of enthusiasm over products made in the United States. In 2016, GQ’s Jake Woolf reported that the Federal Trade Commission (FTC) didn’t take kindly to Shinola claiming that its new line of watches was “Built in Detroit.” According to Woolf, “The FTC said that Shinola did not meet the standard to add this mark to its products, citing the fact that Shinola’s watches aren’t manufactured in Detroit. They are assembled here in the states…but from parts made overseas…” Shinola then had to spin the gimmick in order to save some face, and now states that its watches are “Built in Detroit of Swiss imported parts,” which, as Woolf accurately pointed out, “doesn’t roll off the tongue quite nicely.”
While Shinola watches might be assembled in Detroit, there isn’t anything remotely American about the watches themselves, except perhaps the fact that they were put together by American hands, or that the dials now read “Shinola Detroit.” That’s a bit like starting a Chinese fortune cookie brand, having every part of the cookie-making process done overseas, and then selling them as “American Fortune Cookies” because at least the owner is American. That counts, right?
One watch did recently meet the “Made in America” challenge: the Timex American Documents watch, which was made with the express purpose of meeting the FTC’s “Made in America” standard. The challenge was met, and Timex can be proud of that. But for more worldly watch consumers, what are you really getting when Timex, or any other company, tries to sell you a “Made in America” watch? Not much beyond the story behind the product.
The materials used to make Timex’s American Documents watch are the same as, or at least very similar to, the materials used in the cheaper Timex watches you’d find on the shelf of any clothing store. The big difference is that the cheaper Timex watches most consumers are likely to buy were made with outsourced parts by outsourced labor. The story behind the American Documents watch is cool, and by all the reviews I’ve seen, it’s a quality watch, but is it worth the much larger price tag? As for Shinola, maybe it would have been treated better by watch snobs and consumers alike had it been upfront about how its watches were made instead of resorting to a sneaky advertising plot. Shinola did create new jobs for Detroit, and that is certainly something to be proud of.
Whether it’s the world of cars, televisions, or brooms for that matter, American consumers have to ask what they are really getting when they go searching for “Made in America” products: whether it’s a label that truly matters, or one that is even achievable in today’s globalized market, where, at the end of the day, consumers will still make the ultimate decision.
|
https://medium.com/richochet-and-win/is-a-made-in-america-watch-really-what-consumers-want-2b02b3403b80
|
['Remso W. Martinez']
|
2019-12-10 18:54:10.917000+00:00
|
['Trade', 'Industry', 'Watches', 'Fashion', 'Economics']
|
The best video games of 2020: CNET’s favorite titles of the year
|
As far as entertainment goes, 2020 took a lot from us. It took our movies, it took live music and it took audiences out of sports. But gaming is one area that’s been thriving in the coronavirus era. Not only have the big releases of the year launched as intended, people around the globe now have more time to play games than ever.
Coming into the year, there were three especially hyped games: Final Fantasy 7 Remake, Last of Us Part 2 and Cyberpunk 2077. The first two were acclaimed for living up to the hype — both receiving perfect scores from GameSpot, our sister site — while the third is getting love for its giant open world and hate for dodgy performance, especially on the PlayStation 4 and Xbox One.
And speaking of the PS4 and Xbox One — platforms that entered the year as “current-gen” are now “last-gen,” as 2020 brought the PlayStation 5 and Xbox Series X|S platforms. So, as you can tell, it’s safe to say the past year has been a huge one for gaming. Here, then, are our favorite games.
Fall Guys
Mark Serrels, Editorial Director
Sure, Fall Guys was a bit of a flash in the pan. Sure, no one really talks about it or plays it anymore. But for a period of around two months, I played nothing except Fall Guys. And it was awesome.
Most online games, especially shooters like Counter-Strike or Valorant, take pride in balance. In removing random elements, creating an environment where the best players win through skill or ingenuity. Fall Guys did the opposite. Fall Guys was successful because of an absolute commitment to chaos.
It’s an insane battle royale game that takes its cues from shows like Ninja Warrior and Takeshi’s Castle. I described Fall Guys as a “brightly lit hellscape of late-stage capitalism in full bloom”: a video game where your potential successes and failures are almost solely dependent upon factors completely outside your control. There is no empathy in Fall Guys, no safety net. You will lose, and it will be completely unfair.
But for reasons I still haven’t quite figured out, that didn’t stop me coming back. Over and over and over again. Fall Guys, I love you and your bullshit.
Ghost of Tsushima
Dan Ackerman, Senior Managing Editor
It’s tough to pick a best game of 2020 when so little of what happened in 2020 passes for normalcy. We’ve suffered through forced indoor isolation, a tumultuous political season, massive social unrest, disinformation campaigns and various other worst-timeline calamities. What’s more, we got two brand-new next-gen game consoles that, at least at launch, did little to actually move the needle on innovative gaming.
On a purely technical level, the game this year that was simply the best-made, most meticulously crafted and most clearly optimized to be a fun, engaging experience is Ghost of Tsushima. Everything about it feels incredibly finely tuned. I dismissed it at first as yet another combat-heavy skirmish game, which is not usually my kind of thing, but its mix of storytelling, voice acting, visual design and historical background is just a master class in game design.
Having played the next-gen version of Assassin’s Creed: Valhalla, I found that Ghost does just about everything better and looks better on a PS4 than most PS5 games do. Watch Dogs: Legion is not as finely tuned, but has a great semi-realistic London and cargo drones you can ride. Still, Ghost is probably the most pure fun I’ve had playing a game this year.
Bonus: Actual best game of the year is Gloomhaven: Jaws of the Lion, a fantastic tabletop strategy dungeon crawl that feels like a great PC RPG in paper-and-plastic form.
Hades
Andrew Gebhart, Senior Associate Editor
Hades is a special, unique title in a year filled with sequels, retreads and remakes. It explores new ways to tell a story using the medium, and does so with great character work, fun gameplay and thrilling music. It’s my game of the year, and it’s not particularly close.
I’ve struggled to get into other games in the roguelike genre, one defined by starting from the beginning each time you die. In Hades, you do get gradually more powerful from run to run as opposed to starting completely from scratch (which puts it in the roguelite subgenre). Early on, it’s easy to bank enough permanent resources to unlock a new weapon or a powerful stat upgrade, so staying motivated to keep trying was easier for me.
More than that, every time you die, you return to your home at the base of hell (your character is trying to escape) and you get to develop relationships with a roster of characters reimagined from classic Greek mythology. Imagine losing to the first boss, then returning to the base and chatting with her about it, or finally beating her and getting to gloat your next time through while she sits at the bar and tries to ignore you.
As you pile up more runs, you find out more about each of these surprisingly deep characters, and dialogue is impressively never repeated. The game is stunningly responsive to how you’re progressing, and your cohorts gradually open up to you and try to help you out if you’re attentive to them.
The gameplay also starts getting much more manageable as you increase your character’s stats and learn what powers work well together. The challenge is tough but fair, and it’s quite rewarding to finally beat a tough boss that’s taken you multiple tries.
Whether you like the genre or not, I strongly recommend giving Hades a try. The action won me over, but the way it weaves narrative and character arcs into your continued struggles is brilliant and what elevates Hades above the field for me in 2020.
The Last of Us Part 2
Oscar Gonzalez, Staff Reporter
As an elderly gamer with four decades of playing video games under my belt, I’m rarely surprised. There are the exceptions that come once a generation that completely rock me from the get-go and crush any preconceived notion I had. The Last of Us Part 2 did exactly that.
Naturally, I don’t want to delve into the spoilers of the storyline, but I will say that the game was the kind of rollercoaster ride of emotions you get from movies that are two hours long rather than a game that’s 10 times longer. It was apparent how Naughty Dog wanted to deliver a series of gut punches that would put Mike Tyson to shame, and I was on the receiving end of each. The development team also made sure the game was fun to play, whether it’s delivering stealthy and brutal takedowns or cowering in fear of the Clickers patrolling an area.
The wave of rage that came with the release of The Last of Us Part 2 is not lost on me. It’s the kind of never-ending outrage still burning today, months after the release. While I won’t see eye to eye with those who insist the game is subpar, I do understand how a game like The Last of Us had such an impact on those who played it.
Are you really excited to play Super Mario 64 again? (Image: Nintendo)
Super Mario 3D All-Stars
Jackson Ryan, Science Editor
Just kidding.
Spelunky 2
Morgan Little, Director of Social Media
No, it’s not as revelatory as Spelunky HD, nor have I gotten any better at the series. But in a year filled with slow, cautious progress punctuated by swift failure and a regression back to where it all began, what’s more thematically appropriate than Spelunky? You go into each level knowing what to do, what the traps and dangers will be (or you at least quickly learn them for the next go-around).
What’s the worst that could happen? Oh, you were bopped into the spike block you were too distracted to notice, and who knows when you’ll get a jetpack and 20 bombs again? You could have been more cautious. You could have considered your options, but your progress inspired bravado instead of level-headed consideration and now you’re back to square one.
That trajectory is at the core of Spelunky 2 and at the core of the 2020 experience. Here’s to defeating both at some point and never having to go back.
Animal Crossing: New Horizons
Daniel Van Boom, News Editor
I’m ashamed and disgusted that my colleagues haven’t showered more love on Animal Crossing: New Horizons.
As the world shut down in March, who was there for you? Tom Nook. When shelter-in-place orders were extended to the end of April, who kept you company? Timmy and Tommy. When lockdown became the new normal in May, what kept you calm? That’s right, watering the flowers in your tropical paradise.
Games like The Last of Us 2, Final Fantasy 7 Remake and Cyberpunk 2077 may have been the big titles of 2020 back in January. But when everything changed in March, it was Animal Crossing that became an unlikely sensation. It was the right game at the right time — or at least, the time it was most needed.
That’s made evident by its success. With over 26 million units sold, it’s a bigger moneymaker for Nintendo than Breath of the Wild, Smash Bros. and Super Mario Odyssey. The Last of Us 2 was fantastic, Final Fantasy 7 lived up to its namesake and Cyberpunk is indeed massive. But 2020 was Animal Crossing’s year.
Final Fantasy 7 Remake
Sean Keane, Staff Reporter
It came out. The fact that I played Final Fantasy 7 Remake in 2020 feels like a miracle — nearly 23 years after the glorious original hit PlayStation and 15 years after a lovely PS3 tech demo got fans clamoring for a remake, I booted it up on my PS4.
And it was incredible — Cloud and company are more engaging than ever, Square Enix used its experience with battle systems to make combat utterly exhilarating, and exploring Midgar was a joy. It’s one of the few virtual worlds I’ve wanted to see every inch of, as rearranged versions of composer Nobuo Uematsu’s original soundtrack sucked me in completely.
Also, the ending takes the kind of big, unexpected narrative swing that made me think “Did I like that?” After some reflection, I concluded that I loved it — any ending that makes me re-evaluate an experience is a winner for me.
Roll on, Part 2.
Madden NFL 20
Scott Stein, Editor at Large
I know what you’re going to say. “Madden NFL 20 didn’t even come out this year.” Shhh. Shhhhhhhh.
Want some games I loved? OK, Animal Crossing: New Horizons. Half-Life: Alyx. The weird Mario Kart Live: Home Circuit. Plenty of VR games, like The Room VR. I’ve hopped in and out of a lot of them.
But when the world shut down and I stopped playing Animal Crossing, I turned to Madden. I picked up my pathetic New York Jets, started diving into Franchise Mode, and went forward in time. A year, five years, ten years. I’m in 2034 now.
I’ve seen players rise and fall. I’ve seen championship runs aplenty. I’ve come to love players who don’t even exist, and felt saddened by their inevitable trades or releases. By the way, I’m not a good Madden player. I use coach suggestions. I use it as an anaesthetic, a meditative routine, a way of polishing my knowledge of an arcane American sport I only came to love because my dad made me go to the games as a kid.
What did I do every evening for months on end? Madden 20. What did I sink hundreds of hours into? Madden 20. And when Madden 21 came out and disappointed me with its less-than-stellar updates and, recently, its underwhelming next-gen features, what did I go back to again? Madden 20.
Where will I be tomorrow, probably, around midnight or so, much like every other night spent at home during the endless journey into isolation? Madden 20.
Here’s to 2021.
|
https://medium.com/@pipra-6543/the-best-video-games-of-2020-cnets-favorite-titles-of-the-year-dffad83774b9
|
[]
|
2020-12-17 20:50:28.095000+00:00
|
['Game Design', 'Game Development', 'Videogames', 'Gaming', 'Cnet']
|
Token Health Starts at the Community Level
|
Token Health Starts at the Community Level
Projects like Ternio and BelaCam are suffering from insufficient community support.
Photo by Clay Banks on Unsplash
I’m interested in, and hold, several tokens issued by various projects. Some of these tokens are doing well, while others are doing fairly poorly in terms of purchasing power. Many have argued that a lack of exchange listings is the problem, and admittedly, being listed on a quality exchange is important for tokens. But there is something far more important than exchange listings, and that’s “use.”
Assets have value for many different reasons. If an asset is a currency, a lot of its value often is derived from its use as such. The more goods and services that can be exchanged for a given asset, the closer it is to being a true currency, and the healthier that currency becomes. There are two main tokens that I have in mind, concerning this topic: one is TERN and the other is BELA. These projects are unrelated, but they share similar problems that can be solved in similar ways, which is why I am mentioning both of them in the same article.
TERN
TERN is the cryptoasset that serves as the intermediary for converting crypto purchasing power into USD for Ternio’s BlockCard. The project is an endeavor to bridge the gap between crypto and government-issued fiat. While it might be nice to see a world where anyone can use crypto directly, it will be a long time before we can stop relying on USD transactions. The BlockCard exchanges a given cryptoasset for TERN. That TERN is then converted into USD purchasing power at the moment of the transaction, allowing BlockCard holders to spend TERN as if it were USD.
The problem that Ternio is having right now is that the price of TERN is quite weak, and there isn’t much support. While I have issues with speculators who grab a token in the hopes that it’ll “moon,” it’s important for tokens like TERN to at least maintain their purchasing power, or perhaps even increase slightly as the community grows. As mentioned, part of the problem is the lack of exchange listings. There’s very little exchange volume right now. But even if the token were listed on a ton of exchanges, why would there be high demand for the token? That’s where the community comes in, as I’ll explain in a minute.
If you’re not a Ternio card holder, you can use my referral link to get $10 when you deposit $100 or more. I’ll also receive $10, which would be a great tip! Aside from what I address below, getting more users interested in a project is another powerful way to support that project.
BELA
I became interested in BELA a while back. It’s the base token for BelaCam, a crypto image sharing platform, where people can tip in BELA. It’s not a bad idea. However, BELA has suffered from even greater price drops than TERN, and there’s no sign of a bottom. Many holders are losing hope, and it’s rather sad to see. Just like with Ternio, a lot of the users are complaining that there aren’t enough exchange listings. But they too are missing the bigger issue.
As with Ternio, a great way to support a project is to bring more people on board. Here’s my BelaCam referral link. If you’re interested in art, check out the project.
Community Use
Cryptoassets usually lack underlying value of their own and derive most of their value from their use as a currency. A cryptoasset that is used as a currency must therefore be used extensively if it is to maintain its value. The more people use the currency, the healthier that currency becomes. What that means is that members of the community who want to see projects such as Ternio and BelaCam thrive need to be willing to accept and use their respective tokens.
Obviously, there are considerations to be made when it comes to using cryptoassets to pay for goods and services, but those same considerations need to be made when dealing with crypto at all. However, once those considerations are made, the most important consideration is whether or not to accept the token in return for one’s work.
I’m confident that people involved in these communities come from many walks of life, and many have a lot of skills to offer. Freelancers who support these projects should be willing to accept payment in the token they support, in return for services rendered. Writing up a contract for some programming work? Request that payment be made in TERN. Doing some artwork for someone? Request that payment be made in BELA. The more people use these tokens, the more demand there will be for them, and thus the healthier their purchasing power will be. It’s that simple.
Project Integration
The greatest support would be for robust project integration. If these communities start their own projects that utilize their respective tokens, that will go a long way to promoting use. The Ternio community is pretty large. It probably has some skilled developers. Some of them could work to create an e-commerce platform that natively supports TERN. Similarly, the BelaCam community could work towards creating a print on demand service where users can pay in BELA.
Admittedly I would love to work on such projects, but I’m already overwhelmed. Still, if people contact me with significant interest, I’d be willing to do what I can. Moreover, I think if there’s enough interest from the community, and enough support from it, we could probably get some kind of support from the core projects themselves. Either way, side projects that utilize their respective tokens will go a long way to improving them. Moreover, as demand increases, larger exchanges may become more interested in the tokens, and it might become easier for the projects to list their tokens on better exchanges, for lower fees.
Originally posted on the BCU Times blog on publish0x
Further Reading
TRX Support for Ternio BlockCard? Maybe!
One of the things I’ve been pushing for is TRX support for Ternio. I don’t know if it’ll ever happen, but it would be a huge benefit to projects like mine which are on the Tron blockchain.
Project Outpost: A Cooperative Item Exchange
Consistent with my suggestion about user projects, one of the most ambitious projects I’ve been working on is Outpost, which is a trading cooperative that’s sort of a cross between a consignment shop, pawn shop, and credit union.
|
https://medium.com/b-c-times/token-health-starts-at-the-community-level-11865fc8fbd1
|
['Daniel Goldman']
|
2020-02-23 16:52:11.668000+00:00
|
['Ternio', 'Crypto', 'Finance', 'Community', 'Belacam']
|
Americans Making a Difference
|
Americans Making a Difference
By: Courtney Fatigato, Humanitarian Affairs Intern, U.S. Mission to the United Nations in Geneva.
Americans are the world’s most generous contributors to humanitarian assistance — not just with dollars, but with human capital. Thousands of Americans work across the globe in international humanitarian organizations, responding to crises, ensuring access to life-saving necessities, and protecting the most vulnerable populations.
Americans working within the United Nations ranks bring unique expertise, insight, and commitment. The Department of State’s Bureau of Population, Refugees, and Migration (PRM) funds Junior Professional Officer (JPO) positions for qualified and passionate young Americans to work with our humanitarian partners the UN Refugee Agency (UNHCR) and the International Organization for Migration (IOM). These positions provide young Americans an opportunity to expand their experience in refugee protection, refugee status determination, resettlement, and other positions.
Shahrzad Tadjbakhsh is a former JPO who now ranks among senior leadership at UNHCR as the Deputy Director of the Division of International Protection.
The hands-on nature of her work in a small regional office during her first JPO posting in Southeast Asia gave Shahrzad the opportunity to work directly on projects that would quite literally change lives. From implementing a family reunification program for refugees who fled Vietnam during the Vietnam War to influencing policy at the highest levels of the organization, Shahrzad’s career has allowed her to address the needs of some of the world’s most vulnerable people.
As far as advice for prospective JPOs, Shahrzad encourages applicants to “speak forthright about your passion” during the interview process. A JPO position is not only a step to further your career, it is an opportunity to positively impact the lives of people in need and the policies of UNHCR, so passion is a prerequisite.
The JPO program provides a vital entry path for Americans into UNHCR, which has an extremely competitive hiring environment. It provides a positive avenue for meeting the joint needs of the United States government, which wants to see talented Americans fill positions in critical international organizations; the United Nations, which will benefit from the talent America has to offer; and young Americans, who are eager for the opportunity to pursue their passions and positively impact the lives of people in need.
Preferred candidates have backgrounds in forced migration, human rights, humanitarian assistance, immigration law, or other relevant fields, preferably hold a graduate or law degree, and preferably speak a second UN language. Familiarity with the UN system is also helpful for candidates to the JPO program. However, qualifications vary depending on the assignment, and if you are passionate about UNHCR’s humanitarian work and would like to assist in the protection and care of refugees, internally displaced persons, or stateless persons, this opportunity may be perfect for you.
You may be the next Junior Professional Officer: ten positions are open now, and the application can be found here.
|
https://medium.com/statedept/americans-making-a-difference-2e048e61ba55
|
['U.S. Department Of State']
|
2019-04-01 18:33:07.299000+00:00
|
['Humanitarian', 'Refugees', 'United Nations']
|
Chainfund and Wanchain Announce Strategic Partnership
|
According to this agreement, Chainfund will work with Wanchain as its exclusive public blockchain technology partner. During the project term, Wanchain will provide Chainfund with blockchain advisory services where necessary, introducing the project to leading blockchain consulting partners that can assist its development. Furthermore, Chainfund will have access to Wanchain’s investor network during the project term, which promises huge opportunities for Chainfund’s vision of global expansion. Chainfund’s work in developing accessible digital asset funds fits well within Wanchain’s vision of rebuilding finance.
Wanchain and Chainfund at Zurich roadshow.
About Wanchain
Wanchain is a blockchain ecosystem that enables the exchange of digital assets between blockchains with privacy protection and cross-chain smart contracts. The Wanchain infrastructure simplifies the creation of distributed financial applications for individuals and organizations to access financial services such as loans, asset exchange, multi-asset ICOs, and other asset management capabilities. Wanchain 2.0 enables the cross-chain functionalities with Ethereum.
Wanchain is the only distributed financial infrastructure powered by private, cross-blockchain applications. Headquartered in Beijing, China, with offices in Austin and London, Wanchain unites East and West as a digital-currency-agnostic platform for global financial services. Registered in Singapore as a non-profit organization, the Wanchain Foundation was initially funded by Wanglu Tech, a for-profit enterprise whose investors include River Capital, Fenbushi Capital, and others. To learn more, visit Wanchain.org.
After graduating from EPFL (École polytechnique fédérale de Lausanne), one of the most prestigious schools in Europe, Thanh Do (on the left) worked as a full stack software engineer at Swissquote, a Swiss banking group specializing in online financial and trading services, where he developed and maintained the Info Platform on both the back end (Java, J2EE, Spring, Stripes) and the front end (HTML5, CSS, JS). The Info Platform had to be efficient, as it needs to handle the information of millions of financial products for hundreds of thousands of users in real time.
About Chainfund
Chainfund was founded in October 2017 by Thanh Do and Yann Quelenn with initial funds of more than USD 1 million from more than 100 investors. Chainfund’s vision is to disrupt the asset management market with blockchain technology and to connect traditional investors with modern crypto investment.
It was at Swissquote that Thanh Do met Yann Quelenn and laid the foundation for Chainfund.
Yann Quelenn has worked in several institutions, including Swissquote Bank in Switzerland, where he was a market strategist. He was also an FX trader at Banque Privée Edmond de Rothschild and a portfolio manager at Polaris Investment in Luxembourg. He has been featured in many tier 1 media outlets, such as The Guardian, Bloomberg, Reuters, and the Financial Times, and has spoken at several events.
Exciting news: our private sale for the MNODE Fund starts NOW! Find out here
Read more on our seamless platform which currently includes Dynamic, Extreme and ICO portfolios.
Register to receive exclusive reports and experience 1-Click investment. Investment starts at US$1,000.
|
https://medium.com/chainfundch/https-medium-com-chainfundch-chainfund-and-wanchain-announce-strategic-partnership-3f110cd98c9c
|
[]
|
2018-08-13 05:54:12.347000+00:00
|
['Blockchain', 'Chainfund', 'Wanchain', 'Partnership', 'Cryptocurrency']
|
A “Krispr” Approach to Kubernetes Infrastructure
|
Introduction
Airbnb is built on a service-oriented architecture (SOA). In our production infrastructure, we run hundreds of services that do everything from calculating pricing to returning search results to sending messages to users. To unify and scale our infrastructure, we use Kubernetes, an open source container orchestration engine, to define and manage our workloads. We currently run hundreds of workloads in Kubernetes across tens of clusters and thousands of nodes. In this post, we will talk about how we use Krispr to inject infrastructure components into pods. The name “Krispr” is a play on words with two different references: 1) the CRISPR gene editing technique used to mutate the genomes of live organisms, and 2) the crisper drawer in a refrigerator that is used to keep vegetables fresh. One of the goals of Krispr is to keep our infrastructure components up to date and fresh.
Airbnb and Kubernetes
Airbnb has put significant effort toward simplifying the process of building and running services on Kubernetes. A major contributor to this simplification was the development of kube-gen, an in-house tool built to allow engineers to keep the configuration of various environments — such as production, staging, and canary — in sync. Kube-gen also provides a simplified interface for service owners. Instead of exposing all the bells and whistles of Kubernetes, we provide standard defaults, opinionated configurations, and validation. Kube-gen is effectively a compiler that runs in the pre-build phase of every service. It takes an internally defined format as input and outputs Kubernetes manifests. Like a compiler, the kube-gen binary is explicitly versioned. In order to get new features and settings, services are required to upgrade their version of kube-gen.
Shared Infrastructure Components
More and more features were added to kube-gen as it grew in complexity. One feature, called “components”, allows infrastructure engineers to create shared infrastructure components that can be injected into a service’s definition. This is a very powerful concept as it allows core infrastructure concerns like logging, metrics, and service discovery to run in separate sidecars and evolve independently from each other.
Given that kube-gen binaries are explicitly versioned, the rollout of new shared components was dependent on kube-gen version upgrades. So if the service discovery component was changed, it would not get picked up by a service until the service owner had upgraded their service to the newest version of kube-gen.
At its core, this model put product engineers, rather than infrastructure engineers, in the driver’s seat when it came to rolling out shared infrastructure components. This had detrimental downstream effects. One of the most problematic of these was that it was difficult to know when a shared component would be fully rolled out.
With hundreds of services owned by many different teams, this became a logistical challenge. Each time a shared infrastructure component needed to be updated, we had to corral all service owners to upgrade and deploy their service with the newest kube-gen version. Our infrastructure components ended up with significant version fragmentation, which increased complexity and costs of maintenance.
Among other disadvantages were that infrastructure engineers lacked the ability to target specific services or environments when rolling out changes, and product engineers lacked the necessary context to monitor rollouts adequately when upgrading kube-gen versions. All in all, no one was completely happy with the current state of things.
Potential Solution
As we searched for ways to address these issues, we came across Kubernetes’ mutating admission controller webhook. In short, an admission controller webhook is an HTTP callback that intercepts API calls and can modify objects before they are stored in the Kubernetes API. We realized that we could use a mutating admission controller to inject and/or modify pods as they are created in the cluster. We could leverage such a controller to inject components like service discovery. When the service discovery team wants to release a new version of their component, they need only update their webhook and all new pods will start picking up their changes.
We already had experience running a validating admission controller to enforce security policies in our clusters, but we had some reservations with mutating webhooks. Our biggest concern was that every webhook that we added would be part of the critical path to creating a new pod, meaning that we would be introducing new potential points of failure. Though many infrastructure teams wanted a new solution, they were not thrilled at the idea of maintaining and being on-call for these webhooks.
Mutators
To leverage mutating webhooks without creating maintenance overhead, we decided to separate the concern of “what” was being changed about the pod specification from “how” that change happens. We came up with a new approach that uses what we’ve dubbed a “mutator” to define “what” to change. A mutator is a pure function that accepts a Kubernetes manifest byte stream as input and returns a Kubernetes manifest byte stream as output.
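To make the contract concrete, here is a minimal sketch of what such an interface could look like in Go. The names below (Mutator, MutatorFunc) are illustrative rather than our production API; the only property the sketch relies on is that a mutator maps a manifest byte stream to a manifest byte stream.

```go
package mutator

// Mutator captures the contract described above: a pure function that
// takes a Kubernetes manifest as a byte stream and returns a (possibly
// modified) manifest byte stream. The interface name is illustrative.
type Mutator interface {
	Mutate(manifest []byte) ([]byte, error)
}

// MutatorFunc lets a plain function satisfy Mutator, mirroring the
// http.HandlerFunc pattern in the Go standard library.
type MutatorFunc func(manifest []byte) ([]byte, error)

// Mutate calls the wrapped function.
func (f MutatorFunc) Mutate(manifest []byte) ([]byte, error) {
	return f(manifest)
}
```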
As we looked at more and more examples, we realized that nearly all shared components were doing the same thing: injecting either an init container or a sidecar into a pod. To make it easier for other infrastructure developers to build mutators, we built a higher level “container mutator”. The container mutator requires just a single configuration file, which defines the container you want to inject into pods.
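As a rough illustration, building on the hypothetical Mutator interface above, the container mutator’s configuration and injection logic might look something like the following. The field names and the assumption that the manifest is a JSON-encoded corev1.Pod are simplifications for the sketch, not our actual configuration schema.

```go
package mutator

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
)

// ContainerMutatorConfig is a hypothetical schema for the single
// configuration file described above: it names the container that
// should be injected into every pod.
type ContainerMutatorConfig struct {
	Name  string `json:"name"`
	Image string `json:"image"`
	// Init selects injection as an init container instead of a sidecar.
	Init bool `json:"init"`
}

// NewContainerMutator returns a Mutator that injects the configured
// container into a pod manifest, assumed here to be a JSON-encoded
// corev1.Pod.
func NewContainerMutator(cfg ContainerMutatorConfig) Mutator {
	return MutatorFunc(func(manifest []byte) ([]byte, error) {
		var pod corev1.Pod
		if err := json.Unmarshal(manifest, &pod); err != nil {
			return nil, err
		}

		c := corev1.Container{Name: cfg.Name, Image: cfg.Image}
		if cfg.Init {
			pod.Spec.InitContainers = append(pod.Spec.InitContainers, c)
		} else {
			pod.Spec.Containers = append(pod.Spec.Containers, c)
		}

		return json.Marshal(&pod)
	})
}
```

In practice the configuration lives in a file that is discovered at runtime, but the shape of the operation is the same: deserialize, append a container, serialize.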
Instead of writing a function that knows how to manipulate a Kubernetes manifest byte stream, infrastructure engineers now need only provide a Docker image and a configuration file. We’ve named this framework “Krispr”.
Krispr
At its core, Krispr is a command line tool that is responsible for finding all the mutators that need to be applied and applying those mutators, one at a time, to a Kubernetes manifest byte stream. That also makes Krispr itself a mutator, since the set of mutators applied by Krispr fully defines what the output of a resulting pod will be.
We’ve built a mutating admission controller that passes all pods through to Krispr. Krispr knows how to find all the mutator configuration files and applies those changes to the pods. The mutating admission controller then takes the final pod definition and computes a JSON patch, which is used by the AdmissionReview API to translate the original incoming pod definition to the final one.
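Conceptually, the heart of such a tool is just a fold over the configured mutators; the admission controller then diffs the original and final manifests into the JSON patch it returns. A simplified sketch of the chaining step, again using the illustrative Mutator interface:

```go
package mutator

// Chain composes mutators by feeding the output of one into the next.
// Because the composition is itself a function from manifest bytes to
// manifest bytes, the chain satisfies Mutator too, which is why a tool
// like Krispr can be treated as just another mutator.
func Chain(mutators ...Mutator) Mutator {
	return MutatorFunc(func(manifest []byte) ([]byte, error) {
		out := manifest
		for _, m := range mutators {
			var err error
			if out, err = m.Mutate(out); err != nil {
				return nil, err
			}
		}
		// In the admission controller, the caller would diff the original
		// manifest against out (for example, with a JSON patch library)
		// and return that patch in the AdmissionReview response.
		return out, nil
	})
}
```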
Since Krispr provides an abstraction layer on how mutators are run, we can run these mutators in other contexts besides an admission controller. In fact, we also run Krispr at build time, right after kube-gen generates the initial set of Kubernetes manifests. This provides us with two very useful properties. First, it allows us to relax the runtime requirements of the mutating admission controller. If it times out, or is temporarily down, we can still admit pods into the cluster, knowing that we have run Krispr and all of its mutators at least once at build time. This is huge from a reliability and operational perspective since we can now tolerate temporary downtime in our admission controller. Second, it lets us see errors and problems in Krispr much earlier. If we detect a bug in Krispr that causes build failures, we can roll back those changes before Krispr rolls out onto the admission controllers.
What happens if there is a bad rollout of an infrastructure component now? Previously, a service owner could abort and roll back the deploy in order to undo the infrastructure component change. Now, however, that is not the case: the rollback pods will get the new, bad infrastructure components injected. We’ve addressed this problem in Krispr by implementing a mutation pause period. If we detect that a pod has been mutated within the past two weeks, we will not re-mutate it. This allows service owners to deterministically roll back to a build from within the last two weeks.
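One simple way to implement such a pause, sketched here under the assumption that the mutator chain stamps pods with a last-mutated timestamp annotation (the annotation key below is invented for illustration), is to skip any pod whose stamp is newer than two weeks:

```go
package mutator

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// lastMutatedAnnotation is a hypothetical annotation written by the
// mutator chain recording when a pod was last mutated.
const lastMutatedAnnotation = "example.com/last-mutated-at"

// mutationPause is the window during which an already-mutated pod is
// left untouched, keeping rollbacks within that window deterministic.
const mutationPause = 14 * 24 * time.Hour

// shouldMutate reports whether the pod should be passed through the
// mutator chain again, based on the timestamp in its annotations.
func shouldMutate(pod *corev1.Pod, now time.Time) bool {
	raw, ok := pod.Annotations[lastMutatedAnnotation]
	if !ok {
		return true // never mutated: mutate and stamp it
	}
	last, err := time.Parse(time.RFC3339, raw)
	if err != nil {
		return true // unreadable stamp: mutate and rewrite it
	}
	return now.Sub(last) > mutationPause
}
```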
Conclusion
Previously, the rollout of shared infrastructure components was tightly coupled with the rollout of new versions of kube-gen. This gave service owners full control of when to do upgrades, at the cost of slowing down infrastructure changes. Furthermore, the resulting fragmentation and complexity made our systems less stable and reliable. We introduced the concept of mutators to make it easier for other infrastructure developers to build and to roll out new infrastructure components. We built Krispr to aggregate and to run mutators both as a pre-build step and in the mutating admission controller to ensure infrastructure components are always kept up to date, while keeping these mutators out of the critical path of creating new pods. Finally, we added a two-week mutation pause period to allow service owners to deterministically roll back builds up to two weeks old, while giving infrastructure developers an upper bound on how long it will take their components to roll out. We feel that this approach strikes the right balance between infrastructure stability and development velocity.
Krispr is the work of many different collaborators, and it would not have been possible without the contributions and support of Laurent Charignon, Bruce Sherrod, Evan Sheng, Nick Adams, Jian Cheung, Joseph Kim, Rong Hu, Chen Luo, Brian Wolfe, Rushy Panchal, Changgeng Li, Hua Zheng, Juwan Yoo, Daniel Evans, Stephen Chan, Ramya Krishnan, Johannes Ziemeke, Jason Jian, Liuyang Li, and Sunil Shah.
|
https://medium.com/airbnb-engineering/a-krispr-approach-to-kubernetes-infrastructure-a0741cff4e0c
|
['Daniel Low']
|
2020-11-19 21:05:05.247000+00:00
|
['Kubernetes', 'Admission Controller', 'Microservices', 'Infrastructure', 'K8s']
|
How to lose weight without Dieting
|
It can be difficult to stick to a conventional diet and exercise program. However, several proven tips can help you eat fewer calories easily.
These are effective ways to lose weight and prevent weight gain in the future, without dieting or exercise, and all of them are based on science.
● Chew well and slow down for weight loss
Chewing your food carefully makes you eat more slowly, which is associated with reduced food intake, greater fullness, and smaller portions. To get into the habit of eating more slowly, it can be helpful to count the number of times you chew each bite.
Eating slowly can help you feel fuller with fewer calories. It’s an easy way to lose weight and avoid weight gain.
● Use smaller plates for unhealthy foods
The typical dinner plate is bigger today than it was a few decades ago. This trend may contribute to weight gain: using a smaller plate can help you eat less by making portions appear bigger, while a larger plate can make a serving appear smaller, prompting you to add more food. You can use this to your advantage by serving healthy foods on bigger plates and less healthy foods on smaller ones. Smaller plates can trick your brain into thinking you are eating more than you actually are.
Hence, it makes sense to consume unhealthy foods on smaller plates, which causes you to eat less.
● Eat lots of protein
Protein has powerful effects on appetite. It can increase feelings of fullness, reduce hunger, and help you eat fewer calories. This may be because protein affects numerous hormones that play a role in appetite and fullness, including ghrelin. For breakfast, you may want to consider switching to a meal that is high in protein, such as eggs.
Some examples of high protein foods include chicken breast, fish, Greek yogurt, lentils, quinoa, and almonds. Adding protein to the diet has been associated with weight reduction, even without exercise or deliberate calorie control.
● Keep unhealthy foods out of sight
Keeping unhealthy foods where you can see them can increase hunger and cravings, causing you to eat more. A recent study found that when high-calorie foods are kept visible in the home, residents are likely to weigh more than people who keep only a bowl of fruit in sight. Store unhealthy foods out of sight, such as in cabinets or cupboards, so they are less likely to catch your attention when you are hungry.
On the other hand, keep healthy foods visible on your shelves and place them front and center in your refrigerator. If you leave unhealthy items on the countertop, you’re more inclined to have an impromptu snack, so it’s best to keep healthy foods, like fruits and vegetables, in sight instead.
Check out our fat Burner and reduce your body fat: Fat Burner
● Eat foods rich in fiber
Eating foods high in fiber can increase satiety, helping you feel fuller for longer. Studies also indicate that one type of fiber, viscous fiber, is particularly useful for weight loss because it increases satiety and reduces food intake. Viscous fiber forms a gel when it comes into contact with water; this gel slows stomach emptying and increases the time it takes to absorb nutrients.
Read more on Weight loss without Dieting
If you are interested in travel and photography content, be sure to check out: www.creativepic.in
|
https://medium.com/@viralkida02/how-to-lose-weight-without-dieting-83a671a71cf2
|
[]
|
2021-12-30 06:06:56.592000+00:00
|
['Diet', 'Weight Loss', 'Weightloss Recipe', 'Weightloss Foods', 'Weight Loss Tips']
|
It is not possible to reform the police.
|
Dear Movement —
It is not possible to reform the police.
We are in a moment of crisis and met with the opportunity for transformation. The brutal police murders of George Floyd, Breonna Taylor, Tony McDade, and so many other Black lives have sparked an uprising against racist police violence. At the heart of these uprisings have been calls to arrest, prosecute, defund, and abolish the police. As organizers and organizations with large platforms, and bases of individuals and their families locally and nationally, we have a responsibility to put forth and support the most transformative solutions we can imagine.
It is with this conviction that we raise our concerns with the protocols proposed by Campaign Zero in “8 Can’t Wait”, a set of eight protocols recommended to reduce police violence by 72%. As members of a growing movement that aims to radically rethink the very idea of policing, we believe it is critical for our movement to engage in comradely debates on the best ways to do this. It should not be considered radical or out of reach to have a world where we can live freely without being killed by the state. This historic moment is calling on us to end police violence, not simply reduce it by a percentage as “8 Can’t Wait” has suggested.
When we imagine what safety means, police and prisons are not in our vision. We think of our families, our neighborhoods, schools, housing, food, and the breath of fresh air that comes with having control over our lives and having our needs met. Under our current failing system, public safety is equated with police and prisons, despite the centuries of racist violence and trauma that these institutions have brought into our lives and sustained. While these issues have primarily impacted Black, Afrolatinx, non-Black Latinx, Indigenous, Arab, Muslim and South Asian communities, and poor communities, the issues of policing and prisons impact us all because it is a sustaining force of racial capitalism.
Organizers around the U.S. are demanding that their local city administrations defund and disband the police. We support that demand and see it as a step towards the abolition of police. We stand against narrowly focused reformist demands that seek to use this moment for solutions that only get us a part of the way towards this goal. Demands and recommendations such as better police training to de-escalate and avoid shooting, better reporting post-incident, and stronger use of force policies have not proven to eliminate police killings or hold police accountable for their violence after the fact.
As the “8 Can’t Wait” campaign graphics show, in New York, Los Angeles, Chicago, and Minneapolis, for example, four or more of the eight recommended protocols are already in place, yet these cities continue to report high rates of racist and violent police misconduct. After Mike Brown’s murder in Ferguson, Missouri in 2014 many of us demanded body cameras and were supportive of that response to police violence. We quickly saw that body cameras did not make a difference and could not make a difference if the broader narrative around public safety was still supportive of policing.
As we enter this new phase of the Black Lives Matter Movement, we need demands that advance our struggle and move us closer to abolition, not ones that can be used by the state to give cover to the violence inherent in the system of policing. Even as we watch uprisings across the country, police continue to kill people. We do not want the police to “exhaust all other means before shooting” because those means are also violent and deadly. We want the police disarmed, as many other countries have already done. We cannot in good conscience endorse a platform that would allow 28% of police violence to continue.
Transforming and, ultimately, abolishing policing today means we need to go beyond the policies that are already in place and failing. Transformative means we need to activate a radical imagination for a world beyond what we’ve seen in the past and continue to see today. A world where police end the war on Black people. A world where communities are in control of our lives, our health, our needs, and our futures.
Police violence is a tangible result of the insidious nature of racial capitalism. Law enforcement is used to control communities of color and poor people in order to maintain the status quo and protect the interests of the wealthy elites and those in power. Any demand made against policing in the U.S., and around the globe, must shift power out of the hands of the wealthy and the muscle of law enforcement — that includes federal, state, and local law enforcement, ICE, and private police — and into the hands of the historically oppressed. The issue is systemic and we will continue to push a systemic solution. Abolishing the police is our goal. Defunding, disarming, and disbanding are the means. No police violence is acceptable, and we stand firmly committed to defunding and abolishing the police.
Signed,
Organizations
The Action Center on Race and the Economy (ACRE)
UBUNTU Research and Evaluation
Gangland Political Party (GPP)
Alliance for Educational Justice
Showing Up for Racial Justice (SURJ)
Freedom, Inc.
Equity and Transformation (EAT)
Black Lives of Unitarian Universalism
Black Leaders Organizing for Communities (BLOC)
Black Visions Collective
Black Lives Matter Louisville
Southerners On New Ground
MPower Change
BYP100
Black Lives Matter Inland Empire
The BlackOUT Collective
Resource Generation
Revolve Impact
Liberation House
Essie Justice Group
Frontline Wellness Network
JusticeLA
La Defensa
Partners for Dignity & Rights
Flint Rising
Poder In Action
Black LGBTQ+ Migrant Project (BLMP)
ZEAL
Public Health Justice Collective
Resist. Reimagine. Rebuild. (R3 Coalition)
Fair World Project
Fund for Democratic Communities
TESA Collective
Law for Black Lives
Dignity & Power NOW
MPD150
Individuals
Maurice BP-Weeks
Alyxandra Goodwin
Tracey Corder
Akua G.
Brianna Gibson
Shawn Sebastian
Anshantia Oso
Tara Raghuveer
Ben Ishibashi
Dmitri Holtzman
Maurice Mitchell
Cherrell Brown
Divya Sundaram
Phillip Agnew
Montague Simmons
Dr. Monique Liston
Shavonda Sisson
Jonathan Stith
Andrea Ritchie
Erin Heaney
Jasson Perez
Richard Wallace
James Hayes
Leslie Mac
Christina Novaton
Daniel Aldana Cohen
Angela Lang
Keisha Robinson
Rick Banks
Renata Pumarol
Cazembe Jackson
Kandace Montgomery
Charlene A. Carruthers
Sister Alison McCrary, SFCC, Esq.
Kelcey Duggan
Saqib Bhatti
Lauren Mateo
Leigh Friedman
Chanelle Helm
Chris Love
Philip McHarris
Mary Hooks
Nafisah Ula
Lena K. Gardner
Lauren Jacobs
Katrina L. Rogers
Denzel Caldwell
Kifah Shah
Paige Ingram
Lau Barrios
Lydia Pelot-Hobbs
Yana Ludwig
Tyger Caygill-Walsh
Della Duncan
Sarah Treuhaft
Ameca Reali
Broderick Dunlap
Chinyere Tutashinda
Yahya Alazrak
Jonathan Lykes
Liz Sutton
Gina Clayton-Johnson
Ivette Alé
Eunisses Hernandez
Kelly Baker
Peter Sabonis
Nayyirah Shariff
Christine Mitchell
Viridiana Hernandez
Rachel Berkowitz
Helen Forsythe
Mike de la Rocha
Joshua Sankara
Amber Akemi Piatt
Jessica Quiason
Jae Hyun Shim
Amy Tran
Amy Livingston
Anna Canning
Arianna Nason
Marnie Thompson
Kyra Brown
Peter VanKoughnett
Luke Amphlett
Ricardo Levins Morales
Lex Steppling
Tony Williams
Molly Glasgow
Addison Turner
Reema Ahmad
Mariana M.
Kim Flores
Erika Thi Patterson
8 Can’t Wait Critique and Abolition Study Resources
1. 8 to Abolition — http://8toabolition.com
2. 8 can’t wait is based on faulty data science — https://medium.com/@8cantwait.faulty/8cantwait-is-based-on-faulty-data-science-a4e0b85fae40
3. Critical Resistance — https://twitter.com/c_resistance/status/1268712313634209794?s=21
4. Jenny J Lee (crediting other abolitionists) — https://www.instagram.com/p/CA_-_vgDukS/?igshid=1a1t1gssmta9m
5. Petition to recall 8 Can’t Wait: https://campaigns.organizefor.org/petitions/recall-8cantwait
6. Dignity and Power Now Statement — https://twitter.com/powerdignity/status/1268735286646726656?s=21
7. Youth Justice Coalition (in LA) — https://twitter.com/youthjusticela/status/1269082828358074368?s=21
8. LA CAN Network (statement and sign on letter) — https://cangress.org
9. Movement 4 Black Lives — https://medium.com/@m4blcomms/defunding-police-what-it-takes-to-end-police-violence-bb164a70e89b
10. BYP100 Demands
|
https://medium.com/breaking-down-the-system/it-is-not-possible-to-reform-the-police-72a5ccb9cb56
|
['Acre', 'Action Center On Race', 'The Economy']
|
2020-06-09 22:07:11.855000+00:00
|
['Police', 'Racial Justice', 'BlackLivesMatter', 'Police Brutality', 'Capitalism']
|
How to Download Music from YouTube to Phone and Computer
|
As the most popular online video service provider today, YouTube provides video uploading, playback, viewing, commenting, sharing, subscription, and other services to millions of users worldwide every day. Its website content is all-encompassing, such as music videos, news videos, game videos, movies, funny videos, review videos, tutorial videos, Vlogs, beauty fashion videos, language learning videos and so on. It must be said that the YouTube platform brings a lot of fun and convenience to people’s lives.
However, due to copyright issues, people can’t download any video directly from YouTube. This is very inconvenient for those who want to download music from YouTube to their computers or mobile phones.
Want to give up because of this? Please don’t. Read this article first, and you will find that you can use a YouTube to MP3 converter to extract music from YouTube videos. These tools are easy to use. Next, I’ll show you how to use different tools to download YouTube music to your iPhone, Android, and computer.
Part 1. How to Download Music from YouTube to Computer
To download YouTube music to your computer, you first need a YouTube video downloader with YouTube to MP3 capabilities. Here we recommend 4K Video Downloader, which is very powerful: it not only supports downloading videos from YouTube, Facebook, Twitter, Instagram, and more, but also allows users to convert videos to MP3, MP4, MOV, MKV, AVI, and other formats. More importantly, it lets users download videos with the original audio and subtitles, which means we can download music from YouTube with the quality guaranteed. Next, let’s take a look at how it works:
1. Download and Install 4K Video Downloader
The free video downloader can be downloaded from its official website and is fully compatible with Windows and Mac operating systems. Download and install 4K Video Downloader on your computer, then run it. When a pop-up window appears, tap Free Trial or close it.
2. Copy and Paste a YouTube Music Video URL
Here, you will see the main interface with many features, such as downloading, converting, editing, and more. You may be tempted to use this program, but before that, please open the YouTube website, then copy one or more links to the music videos you want to download, and click the “+ Paste URL” button. Now, wait for the YouTube music downloader to parse these links.
3. Select Quality and Download Music from YouTube
After the parsing is successful, a pop-up box will appear where you can choose to download only the video, download the original audio, or convert to MP3. Given our topic today, please choose to convert YouTube to MP3, but note that this feature is only available to members. To use it, you first need to upgrade to the Pro version to unlock the limitation. After that, click “Download” to start the process. Once completed, you can listen to your favorite music directly on your computer.
Part 2. How to Download Music from YouTube to iPhone
If you’d like to download YouTube music to iPhone or iPad, you can use Online Video Converter. Just as its name suggests, this is a professional video converter that can be used to convert videos into various formats, which include MP3, AAC, M4A, WMA, FLV, MPG, etc. It works with a lot of sites, such as YouTube, Vimeo, Tumblr, LiveLeak, MySpace, etc. Next, I will show you how to download music from YouTube via this free online YouTube to MP3 converter:
Open YouTube and copy the link to the music video you want to download. Visit onlinevideoconverter.com and paste the URL in the box. Click the Filter button to select the output audio/video format. At this point, you should choose MP3 and click on More Settings to select the audio quality.
Then tap the Start button, and the program will begin to prepare the conversion. Once it has completed, you can scan the QR code below to download the YouTube music directly to your iPhone!
Part 3. How to Download Music from YouTube to Android
As one of the best YouTube video download apps for Android, VidMate can not only help you download videos from YouTube to Android, but also download music. Any music files from YouTube, Dailymotion, SoundCloud, Vine, Vimeo, etc. can be downloaded for free via VidMate. You can choose MP3, M4A, WEBM, and other formats according to your preferences. Next, let’s have a look at its specific steps:
Download and install the VidMate app on your Android phone. Open the app and find the music video you’d like to download. Play the video and tap the Download icon. Select the music quality and start downloading the music from YouTube to Android. Once the download has completed, your favorite music will be saved directly to your phone.
The Bottom Line
The above are three different ways to download music from YouTube. After reading this article, you will find that downloading YouTube music is simple as long as you find a great YouTube to MP3 converter. If you find a better YouTube music downloader, please leave a comment below; if you like this article or feel it helps you, please share it with your family and friends.
|
https://medium.com/free-video-downloader/how-to-download-music-from-youtube-to-phone-and-computer-b0006cd083d0
|
['Merry Kitty']
|
2019-08-23 01:40:25.199000+00:00
|
['Songs', 'Music', 'Download', 'Audio', 'YouTube']
|
Does Math Have the Power To Heal?
|
Does Math Have the Power To Heal?
Photo credit: iStock
By Michael Allwright
I never imagined that math could be such a powerful force for healing childhood wounds. Especially given the scars from mathematics I’ve carried since middle school.
SCARS OF CHILDHOOD
When I was in 7th grade I hated math … or I should say, I hated my math teacher. He was the high school football coach and he “motivated” his students off the field just as he did on the field: intimidation, yelling, and public humiliation. I was the focus of his wrathful attention on more than one occasion because I couldn’t understand what he was teaching.
His tactics of humiliation and public shaming had the opposite effect of his intention, if in fact his intention was to impart an understanding of mathematics.
Math became a trigger for me as a child and young adult. I found myself often demotivated, frustrated, or checked out in math classes. I had the emotional scars of being “not good at math,” and it showed up from balancing my checkbook to learning calculus. Fractions haunted me when building our remote cabin in McCarthy, Alaska.
It took me quite some time to regain my confidence in my math abilities. I had to learn that my experience wasn’t really about me, but how the subject was presented and how I was (or in this case wasn’t) motivated to learn.
Okay, Allwright, sorry about your mean middle school math teacher, but are you going somewhere with this trip down memory lane?
|
https://medium.com/a-parent-is-born/does-math-have-the-power-to-heal-5555f11f825f
|
['Agents Of Change']
|
2020-12-16 12:31:24.294000+00:00
|
['Math', 'Life Lessons', 'Parenting', 'Emotions', 'Mathematics']
|
Full Moon: Magical Influence.
|
Manifest Whatever Your Heart Desires on a Full Moon.
The full moon amps all manifesting powers.
Pardon me and my English as I continue trying to perfect my writing skills.
If you’ve been following me for a while or took part in one of my moon cycle challenges, you know there is great power in harnessing the energy and rhythm of the moon. Whether you want to improve your well-being, your career, or your relationships, the eight phases of the moon are an insightful and clear guide to manifestation. What’s more, when you align with these phases, you can work with natural forces to clarify and continually refine what you want to call into your life until it comes to pass.
The Full Moon marks the completion of the waxing cycle and the growth of our intention. The Full Moon’s energy is at its peak and is extremely powerful. We can use this energy to recognize what is no longer serving our intentions. It isn’t unusual to feel overly emotional and scattered at the peak of this phase.
The full moon’s energy also amplifies our feelings. When the moon is full, it can bring all of your emotions to the surface. When it’s a new moon, it’s an ideal time to be quieter and more reflective. So don’t blame the moon for all of your feelings; thank it for bringing everything up to the surface.
The full moon, and the week leading up to the full moon, can carry some wild energy with it. Nonetheless, it’s a magical opportunity to set the intentions you are holding in your heart and take them to a new vibration so that they can finally manifest. It’s a wonderful chance to activate the Law of Attraction.
A full moon will intensify the energy of our intentions, which allows us to cast the most powerful spells for you. Spells for the full moon
#tooblessed #tooblessedtobestressed #traditionalwitchcraft #travel #Travelgram #traveltuesday #tree_magic #trending #trip #truehappiness #truelove #trulyblessed #tshirtlover #tuesdaymotivation #tweegram #twinflame #twitter #unconditional #unforgettable #ungrateful #upperbody #upperbodyworkout #vegan #veganbodybuilding #vhope #vibes #video #vintagelover #viral #vitamind #vmysticmessenger #voodoo #voodoo13 #voodoobar #voodooblue #voodoochild
|
https://medium.com/divine-revelation/full-moon-magical-influence-bf9c06604382
|
['Shiloh Cyrus']
|
2020-12-24 19:06:21.896000+00:00
|
['Manifestation', 'Moon', 'Magic', 'Spells', 'God']
|
wealth and success = luck
|
I’m full of examples today. Take the ‘catch me outside’ meme. Dr. Phil has had a show for a long time, and he had many guests on before her, but for some reason hers was the one to go viral. I would consider something like that luck.
My brother runs a software and tech business. I remember him telling me about a couple of ‘lucky’ clients they got. They were aiming for someone else (I can’t remember if they landed them), but shortly after, he was randomly introduced to the head of the police force (he had some special title).
Who ended up becoming a client.
Is this luck?
Well, that isn’t very easy to delve into, but there was an element of luck. If he had gone home straight after the event, or gone to the toilet at the wrong time, that interaction would never have happened. There were hundreds of things that could have ‘stopped’ it.
BUT… he would never have gotten the client if his product was still in the beginner phase (if he hadn’t spent a considerable amount of time on it). If it wasn’t at one of the best industry standards, the officer would have gone to someone else.
My point with this is you get lucky along the way. Maybe a more prominent YouTuber shouts you out… Perhaps a publisher runs with your story on Medium. Perhaps you say something meme-worthy in an interview and go viral.
Unless you have a friend or know someone in the business, it’s more than likely your story got published, or you got a shout-out, based on luck (but even that doesn’t hold up to scrutiny, because the YouTuber or Medium editor has to see value in your post, and value comes from hard work).
The answer becomes more complicated than you think
I wish I could give you a definitive answer, that there’s a lot of luck in success, or that there’s little to none. I just can’t. It seems to me it depends on each interaction and each specific case.
The deeper I go, the more confused I get.
Take being an athlete, for example.
Being a professional footballer (a real footballer) is hard. You have so much competition, and individual coaches are looking for unique things. To make it pro, there is a fair amount of luck involved.
The right coach has to watch you play at the right time. The right coach has to see your highlight video when they’re looking for a forward or a winger; if you send it in when they already have a fantastic attack, it’s going to be impossible to get in.
So not only do you have to suit a team, but you have to make trials, etc. My point is there is a large amount of luck involved. Counter to that, there is a load of backbreaking work. #onthegrind and all those tags. Notice me, somebody. 😆😆😆
How much luck is involved when you’re playing two games a week and getting in front of every pair of eyes that will watch, and maybe sign you? Is that luck? Playing for a high-level team also helps.
There’s also the aspect of training. If you’re unfit, then it will be impossible to get a contract. Fitness is not luck; it’s work.
As we go deeper, it gets more and more confusing.
What can we take away?
Yes, there is luck. Some are luckier than others. There is also a tremendous amount of work involved in getting that luck.
I love a quote which just popped into my mind. I think I’m the one who made it, but you never know nowadays.
The amount of work you do relates to the amount of luck you have.
The more you work, the less luck you need. The less you work, the more luck you need.
When it comes to success
I think there’s a balance (insert Thanos meme here). I don’t think you get luck without work, and vice versa. When it comes to being a success, you don’t get there without both. I consider it to be like yin and yang: complete opposites, but they come together to make something indestructible.
When it comes to Wealth
When it comes to wealth, I believe it’s more hard work than luck. Of course, there are cases where that’s not true. I think the lottery is a valid example, where winning is completely based on luck or ‘destiny’ if you believe in that sort of stuff (which I kinda do, but that’s another post).
Even then, most lottery winners file for bankruptcy within three years. For creating wealth, I believe work is a lot more important than luck. When we start talking about that, you need to include working smarter, which is a whole post on its own.
This post is long enough. I’ll leave you with my last bit of findings.
People tend to need a surge of luck and then let their talent speak for itself.
That young athlete gets a chance to play as a sub in a competitive game and scores the winner. That editor decides to give the writer a chance and posts their work; the work speaks for itself and everybody loves it. The IT company gets an influencer to shout them out, and their services and products get a lot of orders.
It’s a little different from what I usually write
Peace
|
https://medium.com/@1rishpher0/wealth-and-success-luck-4103dd85d939
|
[]
|
2020-12-25 12:03:22.281000+00:00
|
['Luck', 'Wealth', 'Wealth Creation', 'Success', 'Successful']
|
Connect Your React Application to a Rails API Using Active Storage (part 1)
|
Several weeks ago I set out to learn all about web sockets. If you’re thinking that something must have derailed me, because this blog isn’t about web sockets, you’d be correct. I wanted to show off my app’s users on the frontend by giving them each a little avatar. It seemed easy enough to do — I had heard about many people using this thing called Active Storage with Rails to connect image files to a model — so I thought, ‘Hey! I’ll just get this little bit working real quick before I move on to web sockets!’ After falling down a huge Internet rabbit hole of documentation, blogs, Stack Overflow, and many more random sites, I’ve finally found a working solution. So now I present it to you, my friends, hoping that you can avoid the pain I went through.
Setting up the Backend
My Rails backend will only be used as an API to send and receive information from my database to and from my frontend. For this project, I’m going to use PostgreSQL for the database. From your terminal enter
rails new avatar_app --database=postgresql --api
*Note — you must have PostgreSQL installed on your computer and running for this command to work.
Now let’s get Active Storage all setup.
rails active_storage:install
This command creates a migration file for you that will populate your database with two new tables — active_storage_blobs & active_storage_attachments. According to Rails documentation —
“A blob is a record that contains the metadata about a file and a key for where that file resides on the service.”
The active_storage_attachments table is a join table that connects the blob with the model it belongs to (technically, it’s a polymorphic table, but I prefer to think of it as a simple join table).
Before we run these migrations, let’s add a user model to our application. This app is going to be super simple — users will be able to create an account, upload an avatar, and show off their avatar — that’s it!
rails g resource User
This command does a few things, including giving us a migration file to create a users table — so let’s fill it out.
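(The original post shows this migration as an image. A minimal version might look like the sketch below; the name and password columns are my assumption about what this super simple user needs, and the version tag in brackets will match whatever your rails new generated.)
class CreateUsers < ActiveRecord::Migration[6.0]
  def change
    create_table :users do |t|
      t.string :name
      t.string :password # plain text only because authentication is intentionally fake here
      t.timestamps
    end
  end
end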
Notice that even though each user is going to have an avatar, we didn’t include that as a column on our users table. Instead, on our user model, we’re going to indicate that each user will have an avatar attached to it (in the form of a file) like so —
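(The model code is also shown as an image in the original post; the key piece is Active Storage’s has_one_attached macro on the User model:)
# app/models/user.rb
class User < ApplicationRecord
  has_one_attached :avatar
end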
Before we forget, let’s run those migrations —
rails db:create
rails db:migrate
Because we’ll only be running our Rails API from a local server, we don’t need to do any additional configuration here. If however, you want to set up active storage to keep your files on a remote server — such as Amazon S3, Microsoft Azure, or Google Cloud — there are special instructions for how to do that in the Rails documentation.
Before we move on to uploading an image file from the frontend, I think it’s important to show you how to attach a file to a model from your backend. Let’s move into the seeds.rb file, to create a new user.
Now, if I want to attach an image file to Lucy from my backend, that image file must exist somewhere within my backend directory. It makes most sense to me to place a folder, called avatars, inside of my public folder. Then I’ll drop my image files in there.
*If you’re looking for free avatar icons to play around with, here’s a great site (mine came from this site and specifically this author).
Awesome! Now let’s attach one of those images to our user named Lucy.
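(The seed code appears as an image in the post. A sketch of it, assuming an image named lucy.png was dropped into public/avatars and that users have the name and password columns from earlier:)
# db/seeds.rb
lucy = User.create(name: "Lucy", password: "password123")
lucy.avatar.attach(
  io: File.open(Rails.root.join("public", "avatars", "lucy.png")),
  filename: "lucy.png",
  content_type: "image/png"
)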
Because we’ve already declared in our User model that each user has_one_attached :avatar, we can now use Active Storage’s attach method to attach the file we want. Notice that the path given to File.open() matches the location of my file in my directory — if you choose to place your image in a different location within your directory, your path will need to reflect that.
rails db:seed
If we’ve written our seed file correctly, we should see our user, our active_storage_blob, and our active_storage_attachment in our database.
Here’s our user, Lucy!
Here’s our blob!
And here’s our attachment!
Cool, cool, and cool.
Now let’s setup the wiring that will allow our React frontend (which we haven’t created yet) to communicate with our API. First, let’s make a controller for our User model.
rails g controller Users
Now let’s fill out the controller actions we’ll need for our application.
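(The controller is shown as an image in the post; per the description that follows, only index is filled in for now:)
# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def index
    render json: User.all
  end

  def create
  end

  def show
  end

  def update
  end
end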
I’ve filled out the index action to render all users, but I’m going to leave our create, show and update actions blank for now. We need to do just a couple more things to let our frontend and backend communicate smoothly. First, uncomment the ‘rack-cors’ gem in your Gemfile.
Then, inside of config >> initializers >> cors.rb, uncomment the following code, and change line 10 to read like so…
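(For reference, the uncommented block in cors.rb looks roughly like this; “line 10” is the origins line. I’m assuming a permissive "*" origin here for local development, but you could list your React dev server’s address instead:)
# config/initializers/cors.rb
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins "*" # line 10 of the generated file; restrict this in production

    resource "*",
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end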
Then re-run
bundle install
to install that ‘rack-cors’ gem.
Finally — let’s move on to the fun part — the frontend!
Setting up the Frontend
To create a React app from scratch, enter the command
npx create-react-app avatar_app_frontend
from terminal. Then cd into that directory and open up your code. In order to use Rails’ Active Storage with React (a JavaScript framework), we need to install it through our node package manager (npm) —
npm install activestorage
My app will only allow a user to do one of two things: first, if they already have an account, they can ‘login’ (with fake authentication) and see their avatar on their profile page, else, they can create an account, upload an avatar (image file), and then see their avatar on their profile page. Reply to this post if you’d like to offer a ton of moolah to support this startup.
Because I would like to create some routes in my application, I’m also going to install react-router-dom.
npm install react-router-dom
Here is a quick demonstration of how my app is set up, including routes to the various pages that, as of yet, don’t show much. I’m going to save us a bit of time by not going into the details of how to build this out in this post.
Now we’ll go ahead and build those pages out, beginning with a login form.
Getting Images from the Backend
Here’s a login form all built out — almost. If you’re familiar at all with React controlled forms, this should look pretty familiar to you…
We have a form with two input fields, one for the user to enter their name, and the other to enter their password. Both inputs are using the onChange event listener, and both listeners call on the handleOnChange() function. This function is changing local state as the user types in those input fields. Finally, when the user hits submit, we call on the handleSubmit() function.
Let’s think for a moment about what this function needs to do. In order to login a user, it needs to communicate with our backend and find the user with the (kinda fake) credentials they’ve typed in, then return that user to us. When we get that user, let’s set them as the CurrentUser (in state) for our application. We also want to grab that user’s avatar from the backend, so let’s set that piece of state as well. Because different pieces of our application need that information, let’s set state for those things in our App Component.
Okay, now going back to our handleSubmit() function in our Login Component, let’s build out the fetch call that will communicate with our Rails API. The first thing we need to figure out is what url to make the fetch call to. If we enter the rails routes command in our terminal, we see that we can’t make a GET request to our users controller’s show action without knowing a user’s id, and we actually need to give our application some information (the user’s credentials).
This can mean only one thing… we need to make a custom route!
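(The route itself is shown as an image in the post. Since the login form has to send credentials in the request body, one reasonable guess at what it contains is a POST route pointed at the show action; the /login path here is my assumption:)
# config/routes.rb
Rails.application.routes.draw do
  resources :users
  post "/login", to: "users#show"
end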
Cool. Now we have a route to fetch to. Place a byebug in your users controller’s show action, and finish building out your fetch call like so
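(A sketch of that handleSubmit() fetch, assuming the /login route above and that the form’s local state holds name and password:)
// src/components/Login.js (inside the Login class component)
handleSubmit = (event) => {
  event.preventDefault();

  fetch("http://localhost:3000/login", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json",
    },
    body: JSON.stringify({
      name: this.state.name,
      password: this.state.password,
    }),
  })
    .then((resp) => resp.json())
    .then((data) => console.log(data)); // we'll swap this log out shortly
};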
If you go to your frontend and try to log in our user, Lucy, you should be paused in the byebug from your users controller. Let’s check what params are at this point…
Great! Using those params we can find our user.
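(With the byebug removed, the show action might look like this; the fake credential check mirrors the post’s “This user is not authenticated” message:)
# app/controllers/users_controller.rb
def show
  user = User.find_by(name: params[:name])

  if user && user.password == params[:password]
    render json: user
  else
    render json: { message: "This user is not authenticated" }
  end
end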
Now if we log Lucy in with the correct credentials, we see her user object logged into the console, and if we try to log her in with incorrect credentials (i.e. a bogus password) we get the ‘This user is not authenticated’ message. Perfect! We know that we can use this information to set state for our currentUser, but how can we get her avatar?
Well, conveniently enough, Rails provides us a url that we can link to do just that — rails_blob_path. This is how we can use it to pass Lucy’s avatar (or a link to her avatar image) to the frontend…
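(A sketch of the updated show action; rails_blob_path with only_path: true returns a relative URL, which is why the frontend later prepends http://localhost:3000. The exact response shape is my assumption:)
# app/controllers/users_controller.rb
def show
  user = User.find_by(name: params[:name])

  if user && user.password == params[:password]
    render json: {
      user: user,
      avatar: rails_blob_path(user.avatar, only_path: true)
    }
  else
    render json: { message: "This user is not authenticated" }
  end
end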
Now on our frontend, we see not only our user object logged into the console, but also her avatar url.
Awesome. Let’s use this to set state in our app.
In our App Component, we have to make a few changes — first, we need to write a function, setCurrentUser(), that will take in the result from our fetch call (data), and set state for our CurrentUser and CurrentAvatar. Then, we need to pass that function down as props to our Login Component. In order for the Login Component to receive props, we need to change how we render that component.
render={() => <Login setCurrentUser={this.setCurrentUser} />}
Then, in our Login Component, rather than simply logging our fetched data to the console, we’ll call on our function (passed down as props).
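(A sketch of those two pieces, using the CurrentUser/CurrentAvatar naming from above; the response keys match the controller sketch earlier:)
// src/App.js (inside the App class component)
setCurrentUser = (data) => {
  this.setState({
    CurrentUser: data.user,
    CurrentAvatar: data.avatar,
  });
};

// src/components/Login.js: the last line of handleSubmit becomes
// .then((data) => this.props.setCurrentUser(data));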
Now our app has the information it needs to render Lucy’s avatar on her profile page. Let’s build that out!
First, on our App Component, let’s import the withRouter function from react-router, and then add some conditional logic on our Profile Component. If this.state.CurrentUser exists (meaning we have someone logged in), then we’ll show the profile page and pass down our current user’s information as props, else, we’ll re-render our Login Component. Also, notice that I’m passing through Route props with my /login route as routerProps. I’ll use this after my user logs in to redirect them to their profile page.
Now, let’s build out our Profile Component to actually show our user’s information…
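(A sketch of a bare-bones Profile component; the user and avatar prop names are my assumption about what App passes down:)
// src/components/Profile.js
import React from "react";

const Profile = ({ user, avatar }) => (
  <div>
    <h1>Welcome, {user.name}!</h1>
    <img src={`http://localhost:3000${avatar}`} alt={`${user.name}'s avatar`} />
  </div>
);

export default Profile;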
Notice that, in order to display our user’s avatar, we have to provide the full path (prepending the actual domain address of our server, which for us is http://localhost:3000/). Now, when we log Lucy in, we go directly to her profile page, and … we see her avatar!
Awesome! Now we have the ability to grab our users’ avatar images from the backend — but that’s only half of what we’d like to accomplish. In part 2 of this post, I’ll show you how to allow a user to upload an image from their own computer, and store it on the backend. Stay tuned!
You can find part 2 of this post here.
|
https://medium.com/@jennyjean8675309/connect-your-react-application-to-a-rails-api-using-active-storage-part-1-e59dcacc481b
|
['Jennifer Ingram']
|
2019-09-09 21:09:00.736000+00:00
|
['Postgresql', 'Avatar', 'React', 'Active Storage', 'Rails']
|
This Sexual Assault Awareness Month, Remember Incarcerated Survivors
|
By Cynthia Totten, Deputy Executive Director at Just Detention International
In 2019, Devon, a gay man serving time at a prison in Nevada, was raped by two other prisoners whom he considered friends. When he told other inmates, they accused him of lying and warned that if he told staff what happened, he risked being hurt or killed. Still, Devon mustered the courage to come forward two months later. But when he reported the rape to facility officials, they responded indifferently. During the investigation, Devon was interrogated as if he had done something wrong. Ultimately, his report was deemed “unsubstantiated,” because he had been drinking with the rapists in his cell.
Devon’s experience has clear parallels with the mistreatment rape survivors in the community typically endure. But he noted one key difference between his story and those of survivors on the outside. In a letter, he wrote:
“In the age of #Metoo and Times Up, sexual assault is being taken more seriously. Men, women, and children are all stepping up and reporting those who sexually assaulted them. They have a voice and it matters so much. But as for inmates…our voice is not taken seriously. Sexual assault is not part of our punishment…but after what happened to me, I don’t know if I believe that anymore.”
Everyone has the right to be safe from sexual abuse — including people who are locked up. And yet the truth is that U.S. corrections facilities are plagued by this violence. Every year, a staggering 200,000 adults and children are sexually abused behind bars. The majority of survivors are abused not once, but again and again. About half of all assaults are committed by staff — the very people whose job it is to keep incarcerated people safe.
LGBT people are among those most vulnerable to abuse behind bars. The Bureau of Justice Statistics (BJS) found that in a 12-month period, roughly one in eight LGBT prisoners were preyed upon by another person in custody. These appallingly high rates of sexual abuse are a direct result of the rampant sexism and homophobia behind bars, particularly among staff. Many officials simply look the other way when an LGBT prisoner is assaulted, either because they assume what happened was consensual or they blame LGBT people for their own victimization.
Prisoners who are survivors of prior sexual abuse and those who have mental illness or disabilities are also at a disproportionately high risk of sexual abuse in detention. And sexual abuse doesn’t just happen in prisons and jails. It happens in juvenile detention centers that have a supposed focus on rehabilitation and putting youth on the right course; in community corrections facilities where people who have served their time are preparing to return home; and in police station lockups after an arrest.
Incarcerated survivors need and deserve help — and yet they are among the most isolated people in society. They are often locked up far away from their communities; they do not have access to the journalists or social media platforms — like Twitter, Instagram, and Facebook — that have helped make sexual assault and #MeToo a national topic of conversation. As a result, their voices have been almost entirely absent.
Perhaps it is the fact that prisoner rape survivors are removed from the rest of society that makes it so easy for us to ignore their plight. But it’s not merely that we are indifferent to the suffering of people behind bars. Many Americans also find humor in it. Indeed, the “don’t drop the soap” joke is so ubiquitous that it’s become cliché.
So when Harvey Weinstein was convicted of rape in February, it was no surprise to see the usual flurry of flippant remarks on Twitter. Such comments often appear when a high-profile defendant is convicted, but it’s especially ironic here given that Weinstein’s conviction is being heralded as a critical victory for the #MeToo movement.
At the same time, it’s important to note that a growing number of people are pushing back against prisoner rape jokes. Indeed, I have been heartened to see a number of people in my Twitter feed, including prominent feminists and journalists, decry the crude remarks about what Weinstein “deserves” in prison. Prisoner rape is still a popular punchline, but it feels like, finally, the culture is starting to shift.
So what can you do to support incarcerated sexual assault survivors?
Lift up the voices of incarcerated and formerly incarcerated sexual assault survivors this Sexual Assault Awareness Month, and encourage others to do so, including by sharing testimonies from Just Detention International’s website.
Help dismantle rape culture by remembering incarcerated sexual assault survivors in ongoing efforts — and public discourse — aimed at addressing sexual violence.
If you work in a rape crisis center, make sure that your program offers services to incarcerated sexual assault survivors. More than ever before, victim advocacy programs around the country are working with prisons and jails to offer crisis support to survivors behind bars.
Many people believe sexual abuse in detention is inevitable. On the contrary, when corrections leaders embrace safe practices and hold perpetrators accountable, this violence can be stopped. When we make light of rape in prison, we are giving a green light to those who commit these horrific abuses — and, all too often, get away with it scot-free. And we simply cannot tolerate that any longer.
Cynthia Totten is a Deputy Executive Director at Just Detention International (JDI), a health and human rights organization that works to end sexual abuse in all forms of detention. Founded in 1980, JDI is the only organization in the U.S. — and the world — dedicated exclusively to ending sexual abuse behind bars. Cynthia leads JDI’s national training and technical assistance program, supporting the work of state and tribal sexual assault coalitions, victim advocates, corrections officials, and funding administrators to ensure that incarcerated survivors have access to crisis services. A lawyer with two decades of experience in social justice and human rights, Cynthia also contributes to JDI’s federal policy and international advocacy efforts.
|
https://medium.com/sexual-assault-awareness-month-2020/this-sexual-assault-awareness-month-remember-incarcerated-survivors-5d33d5768ffe
|
['National Sexual Violence Resource Center']
|
2020-04-14 14:49:55.371000+00:00
|
['Prison Industrial Complex', 'Sexual Abuse', 'Saam', 'Sexual Assault', 'Prison Reform']
|
System Design Basics: Client-Server Architecture.
|
Client-Server architecture
Client/server architecture is an important concept for system design newbies. You can think of it as the foundation of how the modern internet works. Nowadays, digital devices like computers, laptops, and mobile phones are everywhere, and client-server architecture is the basis of how these machines talk to one another.
What do client and server mean in the client-server model? For simplicity, we can say that a client is a machine that requests some data or service, and a server is a machine that returns that data or service. Servers listen for incoming network requests. So, the client-server model is a design in which clients request data or services from servers, and servers provide data or services to clients. According to Wikipedia,
Client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.
In this architecture, all requests and services are delivered over a network. It is considered a type of distributed computing system, as the components complete their jobs independently.
Figure: Client-Server model (Image By Author)
Now let’s check how this architecture works with an example: what happens when we type www.medium.com into our browser and press Enter? The client device, our own laptop or computer, does not really know what the medium.com URL means. The browser needs to communicate with the server machine where medium.com is hosted.
The users who type a specific URL in this system are clients. The server provides information through the internet. So, these computers can be in different parts of the world.
A specific set of rules is needed for two systems to interact. The most popular are HTTP and HTTPS.
The client computer requests the required information from the server. That information can be in any structured data format; the most widely used formats are XML and JSON. Servers and clients mostly exchange requests and responses using these formats.
Now, we are going a bit into detail into what happens after the user types a URL in a browser. How does the browser know where the server is located? The browser performs a DNS query.
DNS query:
DNS means Domain Name System. It translates the human-readable name of a domain into a machine-readable IP address. Say, for example, we use medium.com as the website name. Machines don’t understand this name; computers understand IP addresses, through which they find each other. An IP address looks something like 192.0.10.101. Just imagine how many websites we use every day. If we had to remember these kinds of IP addresses instead of names like Facebook, Google, or Medium, using the internet would be practically impossible.
Figure: Example DNS query by browser (Image By Author)
To put it simply, the client browser asks the DNS what the server’s IP address is. And DNS provides it to the browser.
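You can watch this resolution step yourself. For example, Python’s standard library exposes the same lookup a browser performs; the address you get back will vary:
import socket

# Ask DNS for the IP address behind the human-readable name.
ip_address = socket.gethostbyname("medium.com")
print(ip_address)  # the exact address varies and can change over time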
IP Address: We may say an IP address is a unique identifier of a digital device on a network. Machines reachable over the public internet have a public IP address, and data from the client reaches the server by using that address. We can think of it as the machine’s mailing address.
Now, after the DNS query, the client browser gets the IP address of the server. And the client sends requests to the server IP. The server sends the response to that request to the client.
The server listens for requests on specific ports. Every machine with an IP address has 65,536 (2^16) ports that it can listen on. So communication between machines needs not just an IP address but also the port we want to communicate on. As clients, we have to specify a port number alongside the IP address to reach a server.
The IP address is like the mailbox of an apartment complex, and the ports are like the individual apartment numbers. If a client uses the HTTP protocol to communicate with a server, the default port number is 80. For HTTPS, it is 443.
Now, how do two machine servers and clients communicate?
Network Protocol:
For humans, if one person speaks English and another speaks only Spanish, it will be hard for them to understand each other. It is the same for computers: if they don’t share a common set of rules, they cannot communicate.
A network protocol is a set of rules that both the server and the client agree to follow while communicating. Servers and clients interact with one another using these agreed-upon rules, which are called protocols.
Network protocols consist of various kinds of messages that are sent and received by server and client machines over the network, the internet. The structure, format, and order of those messages make up a network protocol.
There are a lot of protocols in computer science. We need to know about only the popular network protocols. The first one we are going to discuss is IP, which stands for Internet Protocol.
IP Protocol:
The Internet Protocol (IP) is a set of rules for addressing packets of data so that they can travel across networks and land at the correct destination machine. Data traversing through the Internet is divided into smaller units, called packets. This IP information attached to each packet helps network routers send packets to the right place.
The current internet system operates following the internet protocol. A server and client communicate and transfer data between each other using a form of packet system called an IP packet. This IP packet is how machines transfer data from one to another.
It consists of two parts: the IP header and the data. The header contains information about the packet, including the source IP address (where the data is sent from) and the destination IP address (where it is going). So both the source and the destination machines are recorded in each IP packet. This is the basic idea of how data is transferred through the internet.
Figure: IP packet transfer (Image By Author)
The total size of the packet is also stored in the header, along with the IP version. IPv4 and IPv6 are the two well-known IP versions, and the packet format can change depending on the version. Because these are well-known rules, all computers know how to interpret them. The rest of the packet is data. An IP packet is not that big: it is at most 2^16 bytes (65,536 bytes), which is about 0.065 megabytes. So we can see it is pretty small.
We often transfer much larger amounts of data, so one IP packet cannot contain everything we want to send. IP packets also don’t carry an order by which we can reassemble the data chunks, and some packets might get lost on the network. In that case, the original data may not be recoverable.
TCP:
TCP stands for Transmission Control Protocol. It is built on top of the Internet Protocol and maintains an ordered way to send IP packets. TCP transmission is more reliable because it gives us a way to find out if any packet was lost along the way.
If some packets are corrupted or lost while traveling through the network, they can be resent. This allows us to send large amounts of data to other machines.
An IP packet has a data portion, and at the start of that portion sits a TCP header. This header contains information about the TCP segment, such as the order of the packets.
Photo by Sincerely Media on Unsplash
The first communication over a TCP connection is a handshake between the server and the client. A handshake is an interaction between two machines: the source machine sends packets to the destination machine saying, “I want to connect with you,” and the other computer responds, “OK, we can connect and communicate.”
The source machine then sends packets saying, “We are connected now.” At that point we have an open TCP connection, and both computers can send data to each other.
A connection also has a time limit: if a machine does not send data within that period, the connection times out.
To end the TCP connection, either machine can send a special message saying, “I am ending the connection.” TCP is like a wrapper around IP, and it is more powerful than IP.
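As a rough illustration of what sits underneath, here is a minimal TCP client in Python. The handshake described above happens inside the connect step, and after that TCP just moves bytes; any request-response structure has to be typed out by hand:
import socket

# Open a TCP connection; the handshake happens inside create_connection().
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # TCP only carries bytes, so we have to spell out an HTTP request ourselves.
    conn.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    first_chunk = conn.recv(4096)  # the beginning of the server's reply
    print(first_chunk.decode(errors="replace")[:200])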
But TCP lacks an easy-to-use framework that lets software developers structure client-server communication at the application level.
HTTP:
HTTP stands for Hypertext Transfer Protocol and is built on top of TCP. It provides a higher-level abstraction above TCP and IP. Its main characteristic is the request-response sequence: one machine sends a request, and the other machine returns a response. Most modern systems rely on HTTP for communication.
We don’t need to know about TCP and IP packets while using HTTP; it takes care of these complicated low-level details for developers. GET, POST, PUT, and DELETE are some basic HTTP methods. HTTP gives us the opportunity to build business logic on top of plain data transfer.
So, typing medium.com and pressing Enter sends a request for Medium’s home page, and the Medium server sends back the HTML of that page. The client browser then takes the HTML and renders the page on the client device.
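The same exchange through an HTTP library shows the request-response abstraction this section describes: we name a method, a path, and some headers, and never touch packets or sockets directly. A small Python sketch with the standard library:
import http.client

# One request-response cycle: ask for the home page, read back the reply.
conn = http.client.HTTPSConnection("medium.com", 443, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client"})
response = conn.getresponse()

print(response.status, response.reason)  # e.g. 200 OK, or a redirect status
html = response.read()  # the HTML the browser would render
conn.close()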
Conclusion:
In client-server architecture, all services and requests are delivered over the network. The system components, servers and clients, perform their tasks independently of each other. It is a simple but fundamental concept: client-server architecture is how computers communicate with one another. HTTP is the most common protocol for server-client communication, and its main characteristic is the request-response sequence. The client makes a request, and the server provides the response. As a system designer, you have to decide which requests need to be made and when, and what responses the server needs to provide.
|
https://towardsdatascience.com/system-design-basics-getting-started-with-the-client-server-architecture-b02f9c9daae8
|
['Ashis Chakraborty']
|
2021-01-11 18:04:47.254000+00:00
|
['System Design Interview', 'Software Architecture', 'Technology', 'Software Engineering', 'Programming']
|
7 Actionable Steps to Feel in Control of Your Freelance Business
|
While the world may feel full of uncertainty, there are still things within your control.
Namely, what you do with your freelance writing business right now.
So many people are at home out of work and unsure of their next steps.
If you’re one of the lucky freelance writers who was at this before the COVID-19 pandemic hit, now is the time to buckle down and make the most of this ever-changing economy.
Here are seven actionable steps to take the lead in your business instead of letting fear take the reins:
1. Create Useful Content
More than 80 million people are consuming content right now. Folks are on their devices more now than ever. Whether you have a blog, email newsletter, social media channels, etc. it’s the time to get cracking on your content calendar and create, create, create.
Now is the time to be pushing out content on valuable platforms.
This means wherever your ideal client is, get posting on that channel to get in front of their eyes.
What kind of content?
Anything that’s inspirational, educational, and generally helpful to your audience.
Bottom line: Be helpful to your audience — whoever that may be. People are gobbling up content right now. Make it valuable.
2. Take a Course
With your newfound time, schedule time each day or week to work on some new training. Marketing skills in particular are helpful, especially if you need to learn how to strengthen your offerings.
Look for free or low-cost trainings on:
Udemy
Teachable
HubSpot Academy
Or, dig out that training you paid for but never completed. Wherever you need to update your skills, take the time now to do it while you have spare moments. SEO, Google Analytics, Facebook ads, design skills, and others can be quite helpful to add to your toolbox.
Bottom line: Spend this time wisely. Learning new skills now allows you to charge more later once you’ve mastered a new skill.
3. Develop Smaller Offerings
While freelancers are still definitely slinging huge packages, they probably aren’t selling like hotcakes. That’s not to say you can’t sell big retainers or writing packages right now, but to do so in certain markets may come across as insensitive.
Think about how you can break down what you do into smaller, bite-sized offerings. If you normally blog twice a week for clients, could you perhaps offer editorial calendar management + 1 blog per week?
Keep in mind companies are slashing marketing budgets everywhere. When you do hop on discovery calls, listen to what your potential client is saying. If things are tight, it’s a no-brainer to send over a smaller package.
Bottom line: Offer a few selections, but be very thoughtful in your presentation. It matters right now.
4. Audit Your Expenses
This can be a rough exercise, but it’s a must. Look at your business expenses to see where you’re leaking money. Now is the time to audit what you’re spending your business funds on.
If you’re in a monthly membership group but haven’t used the offerings there in a few months, cancel the membership. Look at what’s not serving you and see how you can make some reductions.
A few essential business expenses I would NOT recommend cutting?
Internet speed
Bookkeeping software
Email service providers
Lead management systems
Virtual assistants or other essential team members
This economy is going to come back around. It will be rough at first, but what goes down must come back up. Keep the essentials that help your business run, the ones you couldn’t do without. You don’t want to cut those essentials now and fall behind once the economy is stable again.
What do I recommend cutting?
Non-essential memberships
Book subscriptions (your library is free!)
Extra tech support you don’t use
Bottom line: Cut where it makes sense to cut back.
5. Create New Business
Now is not the time to let off the gas of your marketing efforts. If anything, go full speed ahead. If any of your clients have budgets that are drying up, you’re going to feel it in the next few months.
Where I’d start to drum up new business:
Warm leads
Job boards
Ask for referrals
Now is the time to get creative with your offerings, pairing any new skills you’ve learned with your packages. You can do trial runs and smaller projects to get your foot in the door of a company you want to work with so you keep money coming in.
Bottom line: Get creative. Offer one-on-one consultations, upsell current clients, and keep marketing your business. Don’t slow down. Just be respectful of how you market.
6. Apply for Financial Relief
Right now, small business owners everywhere are in weird spots. We’re all trying to navigate this COVID-19 issue that’s impacted the world. If you need financial help because your business is shaky, you can get it.
A few places to apply for financial relief:
Freelancers Union — They have a specific fund to support freelancers impacted by COVID-19. You can donate to the cause as well.
File for unemployment — Thanks to the recent passage of the CARES Act, independent contractors can get some assistance, too.
Access a grant — There’s a COVID-19 economic injury disaster loan application you can apply for through the U.S. Small Business Administration. Check out eligibility here.
Bottom line: Don’t be too proud. If you need the money, you should apply for it.
7. Work on Those Back Burner Projects
Last but not least, we freelancers are guilty of putting our clients’ needs first. It’s how you keep getting repeat work, right? Whatever you’ve been putting off — building your email list, creating a Facebook group, writing an eBook — now is the time to get your butt in gear.
Put the time in now. You won’t regret it. Even if whatever it is doesn’t pan out the way you hope, at least you’ll have the experience and lessons of what to do differently next time.
Bottom line: You’ve got the time if you find the time. Right now, you’re quarantined. Make the time for your side projects and passion projects. Life goes too fast to not make it happen.
Your Freelance Business Matters Now and in the Future
Even if everything feels super unpredictable right now, that’s no reason to throw in the towel.
You’ve worked way too hard to let that happen.
This is a rough patch — for everyone — so know you’re not alone. The above action steps are just a few ways to take control of your freelance writing business. You also can control your mindset and action steps every single day.
Do take care of your freelance business right now. It needs love and attention every day. It matters now and in the future — to you and your clients. Don’t let the negativity and uncertainty throw you off course.
Freelancing gives you the freedom to tune that out and go your own way. Just keep going. You’ve got this.
What action steps are you taking to secure your freelance business income? Share your creativity in the comments below!
|
https://medium.com/@serainepage/7-actionable-steps-to-feel-in-control-of-your-freelance-business-fd53fd92730
|
['Seraine P.']
|
2020-04-15 17:20:28.229000+00:00
|
['Freelance Writer', 'Freelance Writing', 'Freelancing', 'Writing']
|
Logistic Regression: Bottoms-up Approach.
|
Logistic Regression: Bottoms-up Approach.
(Feature engineering ideology — a bonus)
Quick Question?
Can we use RMSE or MSE as an evaluation metric for classification? If yes, when can we use it? If not, why can’t we use it?
I will disclose the answer in the end.
What is in it for you?
This article will walk you through the nuances of logistic regression and make you familiar with the feature engineering ideology. The first thing I want to clarify is that logistic regression uses regression at its core but is a classification algorithm.
My confession:
This is my second favorite algorithm after decision trees, purely for the simplicity and robustness it offers. Despite the ton of classification algorithms out there, this age-old algorithm has survived the test of time, and its seamless integration with neural networks is what makes it very special.
Let’s talk data
To meet the objectives I have picked a dataset that will allow me to discuss feature engineering as well. It is a password strength classification dataset. It has 3 classes: 0 - weak, 1 - medium, and 2 - strong. It has only two columns: the password and its strength class. The dataset has around 700K records, so be assured it is not a toy dataset.
Disclaimer!
This article expects a few probability concepts from your end. Please refresh probability (5 minutes) and odds (8 minutes). Read these excellent articles from BetterExplained: An Intuitive Guide To Exponential Functions & e and Demystifying the Natural Logarithm (ln). Then, review this brief summary of exponential functions and logarithms.
OMG! I don’t have the right data.
You can follow the code here.
Pitfall 1: Passwords in this dataset can themselves contain commas (,), and you are reading a CSV file, which is a comma-separated values file. The parser might read more commas than expected in a row and raise an exception. You can sidestep this by setting error_bad_lines = False.
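A minimal sketch of that workaround (the file name here is just a placeholder, not the dataset’s actual path):

```python
import pandas as pd

# Passwords can themselves contain commas, which breaks naive CSV parsing.
# Skipping malformed rows keeps the read from raising an exception.
# (In newer pandas versions the equivalent option is on_bad_lines="skip".)
data = pd.read_csv("passwords.csv", error_bad_lines=False)
print(data.shape)
print(data.head())
```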
Distribution of passwords based on their strength
The data we have is a list of passwords, which pandas identifies as the ‘object’ dtype. But in order to work with an ML model, we need numerical features. Let’s generate new features out of the text we have. This is where feature engineering saves us. Feature engineering often works on intuition; there is no single formulated right way of doing it. In industry, people usually rely on domain expertise to identify new features. For this data, you are the domain expert: you know what the key elements of an ideal password are, and we will start from there.
Created features:
No of characters in a password
No of numerals in a password
No of alphabets in a password
No of vowels in a password
No of consonants in a password (more of these means less meaning the password has)
New features. We will have to see if they actually contribute something.
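Here is a rough sketch of how such features could be derived with pandas. The column name "password" and the helper logic are my assumptions about the dataset, not the exact code from the notebook:

```python
import pandas as pd

VOWELS = set("aeiouAEIOU")

def engineer_features(df: pd.DataFrame) -> pd.DataFrame:
    # Assumes the raw passwords live in a column called "password".
    pw = df["password"].astype(str)
    df = df.copy()
    df["char_count"] = pw.str.len()
    df["numerics"] = pw.apply(lambda p: sum(c.isdigit() for c in p))
    df["alpha"] = pw.apply(lambda p: sum(c.isalpha() for c in p))
    df["vowels"] = pw.apply(lambda p: sum(c in VOWELS for c in p))
    df["consonants"] = df["alpha"] - df["vowels"]
    return df

# data = engineer_features(data)
```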
Logistic Function
So we want to return a value between 0 and 1 to make sure we are actually representing a probability. To do this we will make use of the logistic function. The logistic function mathematically looks like this:
Let’s take a look at the plot
You can see why this is a great function for a probability measure. The y-value represents the probability and only ranges between 0 and 1. Also, for an x value of zero, you get a .5 probability and as you get more positive x values you get a higher probability and more negative x values a lower probability.
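As a quick sketch, the logistic (sigmoid) function and the behaviour described above look like this:

```python
import numpy as np

def sigmoid(x):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0))    # 0.5
print(sigmoid(4))    # ~0.98, large positive x gives a probability near 1
print(sigmoid(-4))   # ~0.02, large negative x gives a probability near 0
```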
Make use of our data
Okay — so this is nice, but how the heck do we use it? Well, we know we have five attributes — char_count, numerics, alpha, vowels, consonants — that we need to somehow use in our logistic function. One pretty obvious thing we could do is:
Where CC is our value for char_count, N is our value for numerics, A is our value for alpha, V is our value for vowels, CO is our value for consonants. For those of you familiar with Linear Regression this looks very familiar. Basically, we are assuming that x is a linear combination of our data plus an intercept. For example, say we have a password with a char_count of 8, numerics of 4, alpha of 4, vowels of 1 and consonants of 3. Some oracle tells us that 𝛽0=1, 𝛽1=2, 𝛽2=4, 𝛽3=6, 𝛽4=8 and 𝛽5=10. This would imply:
x= 1 + (2*8) + (4*4) + (6*4) + (8*1) + (10*3) = 95
Plugging this into our logistic function gives:
So we would give essentially a 100% probability that a password with those features is medium (I took the first row of the data).
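Reusing the sigmoid sketch from above, the oracle’s 𝛽 values reproduce the worked example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

betas = np.array([1, 2, 4, 6, 8, 10])    # beta_0 .. beta_5 from the oracle
features = np.array([1, 8, 4, 4, 1, 3])  # 1 for the intercept, then CC, N, A, V, CO

x = betas @ features
print(x)           # 95
print(sigmoid(x))  # ~1.0, i.e. essentially a 100% probability
```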
Learning
Okay — makes sense. But who is this oracle giving us our 𝛽 values? Good question! This is where the learning in machine learning comes in :). We will learn about our 𝛽 values.
Step 1 — Define your cost function
For the sake of simplicity, we will remove one class and make it a binary classification problem. Later in the article, we will see how to do multi-class classification as well. For now, I’m mixing medium and weak as one class. If you have been around machine learning, you have probably heard the phrase “cost function” thrown around. Before we get to that, though, let’s do some thinking. We are trying to choose 𝛽 values to maximize the probability of correctly classifying our passwords. That is just the definition of our problem. Let’s say someone did give us some 𝛽 values; how would we determine if they were good values or not? We saw above how to get the probability for one example. Now imagine we did this for all our password observations — all 700k. We would now have 700k probability scores. What we would hope is that for the strong passwords, the probability values are close to 1 and for the weak passwords the probability is close to 0.
But we don’t care about getting the correct probability for just one observation, we want to correctly classify all our observations. If we assume our data are independent and identically distributed, we can just take the product of all our individually calculated probabilities and that is the value we want to maximize. So in math, If we define the logistic function and x as:
This can be simplified to:
The ∏ symbol means to take the product over the observations in that class. Here we are making use of the fact that our data is labeled, so this is called supervised learning. Also, you will notice that for weak observations we are taking 1 minus the logistic function. That is because we are trying to find a value to maximize, and since weak observations should have a probability close to zero, 1 minus the probability should be close to 1. So we now have a value we are trying to maximize. Typically people switch this to minimization by making it negative:
Note: minimizing the negative is the same as maximizing the positive. The above formula would be called our cost function.
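A minimal sketch of that cost function in code, using the same sigmoid as before (X is assumed to be a matrix whose first column is all ones for the intercept, and y holds the 0/1 labels):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cost(betas, X, y, eps=1e-12):
    """Negative log-likelihood of the labels under the logistic model.

    X is an (n, 6) matrix whose first column is all ones (the intercept),
    y is an array of 0/1 labels. eps guards against log(0).
    """
    h = sigmoid(X @ betas)
    return -np.sum(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))
```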
Step 2 — Gradients
So now we have a value to minimize, but how do we actually find the 𝛽 values that minimize our cost function? Do we just try a bunch? That doesn’t seem like a good idea…
This is where convex optimization comes into play. We know that the logistic cost function is convex — just trust me on this. And since it is convex, it has a single global minimum which we can converge to using gradient descent.
Here is an image of a convex function:
source: utexas.edu
Now you can imagine, that this curve is our cost function defined above and that if we just pick a point on the curve, and then follow it down to the minimum we would eventually reach the minimum, which is our goal. Here is an animation of that. That is the idea behind gradient descent.
So the way we follow the curve is by calculating the gradients or the first derivatives of the cost function with respect to each 𝛽. So let's do some math. First realize that we can also define the cost function as:
This is because when we take the log our product becomes a sum. See log rules. And if we define 𝑦𝑖 to be 1 when the observation is strong and 0 when weak, then we only do h(x) for strong and 1 — h(x) for weak. So let's take the derivative of this new version of our cost function with respect to 𝛽0. Remember that our 𝛽0 is in our x value. So remember that the derivative of log(x) is 1/𝑥, so we get (for each observation):
And using the quotient rule we see that the derivative of h(x) is:
And the derivative of x with respect to 𝛽0 is just 1. Putting it all together we get:
Simplify to:
Bring in the negative and sum and we get the partial derivative with respect to 𝛽0 to be:
Now the other partial derivatives are easy. The only change is now the derivative for 𝑥𝑖 is no longer 1. For 𝛽1 it is CC𝑖 and for 𝛽2 it is Ni. So the partial derivative for 𝛽1 is:
For 𝛽2:
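Collecting all the partial derivatives into one vector, the gradient can be computed in a single line. This is just a sketch consistent with the formulas above, not the exact implementation from the notebook:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gradients(betas, X, y):
    """Partial derivatives of the cost with respect to each beta.

    For beta_j this is the sum over observations of (h(x_i) - y_i) * x_ij,
    where x_i0 = 1 handles the intercept term.
    """
    h = sigmoid(X @ betas)
    return X.T @ (h - y)
```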
Step 3 — Gradient Descent
So now that we have our gradients, we can use the gradient descent algorithm to find the values for our 𝛽s that minimize our cost function. The gradient descent algorithm is very simple:
Initially guess any values for your 𝛽 values
Repeat until convergence:
𝛽𝑖=𝛽𝑖−(𝛼∗ gradient with respect to 𝛽𝑖) for 𝑖=0,1,2,3,4,5 in our case
Here 𝛼 is our learning rate. Basically, how large a step to take on our cost curve. What we are doing is taking our current 𝛽 value and then subtracting some fraction of the gradient. We subtract because the gradient points in the direction of greatest increase, and we want the direction of greatest decrease. In other words, we pick a random point on our cost curve, check which direction we need to go to get closer to the minimum by using the negative of the gradient, and then update our 𝛽 values to move closer to the minimum. “Repeat until convergence” means keep updating our 𝛽 values until our cost value converges — or stops decreasing — meaning we have reached the minimum. Also, it is important to update all the 𝛽 values at the same time, meaning that you use the same previous 𝛽 values to update all the next 𝛽 values.
Gradient Descent Tricks
Normalize variables:
This means for each variable subtract the mean and divide by standard deviation.
Learning rate: If not converging, the learning rate needs to be smaller — but will take longer to converge. Good values to try …, .001, .003, .01, .03, .1, .3, 1, 3, …
Declare convergence if the cost decreases by less than 10^−3 (this is just a decent suggestion)
Plot convergence as a check
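Putting the algorithm and the tricks together, a from-scratch loop might look like the sketch below. It assumes the cost and gradients helpers sketched earlier, and the normalization, learning rate, and 10^-3 convergence check follow the suggestions above:

```python
import numpy as np

def gradient_descent(X, y, alpha=0.1, tol=1e-3, max_iter=10_000):
    """Learn the betas by repeatedly stepping against the gradient."""
    # Normalize every variable except the intercept column.
    X = X.astype(float).copy()
    X[:, 1:] = (X[:, 1:] - X[:, 1:].mean(axis=0)) / X[:, 1:].std(axis=0)

    betas = np.zeros(X.shape[1])            # initial guess
    history = [cost(betas, X, y)]
    for _ in range(max_iter):
        # Update all betas at once; in practice you may want to scale the
        # gradient by 1/n to keep the step size stable on large datasets.
        betas = betas - alpha * gradients(betas, X, y)
        history.append(cost(betas, X, y))
        if abs(history[-2] - history[-1]) < tol:   # declare convergence
            break
    return betas, history                   # plot history as a convergence check
```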
Follow the code for the implementation. After building the model from scratch we got an accuracy of 76%; using the sklearn package we got an accuracy of 99%, which is pretty good.
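For comparison, the scikit-learn version is only a few lines. This is a sketch assuming X and y are the engineered feature matrix and labels; the exact split and settings used in the notebook may differ:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```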
Advanced Optimization
So gradient descent is one way to learn our 𝛽 values, but there are some other ways too. Basically these are more advanced algorithms that I won’t explain, but that can be easily run in Python once you have defined your cost function and your gradients. These algorithms are:
BFGS
L-BFGS: Like BFGS but uses limited memory
Conjugate Gradient
I will leave these here for you to explore. Not many people use these advanced optimization techniques; most practitioners stick with gradient descent and its variants, which do a pretty decent job.
Evaluating Classification
Now that we have spent some time understanding the math behind logistic regression, let’s look more at how to evaluate classification problems. We can now fit a logistic regression model. Logistic regression can also leverage the regularization techniques we used in the linear regression post. They are used in exactly the same way — we penalize large coefficients by adding an L1 or L2 norm to our cost function.
Note we won’t be demonstrating CV for hyper-parameter optimization here as we did with linear regression, but that is still applicable. In fact, almost all of the techniques we discussed in the previous post can also be used here. We will just be discussing new topics. We already know about the train and test data from the previous post. Let’s see how the model does on test data.
Accuracy
One of the easiest metrics to understand when it comes to classification is accuracy: what fraction did you correctly predict.
Excellent!
One of the downfalls of accuracy is that it is a pretty terrible metric when you have imbalanced classes. Let’s look at our class balance:
Class Imbalance
So about 88% of our data are of the negative class — so if we just predicted everything to be negative, our accuracy would be about 88%! Not bad. Imagine other data where the negative class is only represented 1% of the time — you would get 99% accuracy by just predicting everything to be positive. That accuracy might feel good, but it is pretty worthless if that is your model.
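As a tiny illustration of the point, you can check the majority-class baseline directly (reusing y_test from the earlier sketch):

```python
import numpy as np

# If ~88% of labels are the negative class, always predicting "negative"
# already scores ~88% accuracy, without learning anything at all.
majority_baseline = max(np.mean(y_test == 0), np.mean(y_test == 1))
print(f"majority-class accuracy: {majority_baseline:.2%}")
```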
Precision, Recall, and F1
To deal with some of these issues, often you will see precision, recall, and F1 used to evaluate classification tasks.
Precision is the number of true positives divided by all of the positive predictions (true positives plus false positives): Basically, how precise are our predictions when we predict something to be positive.
Precision
Recall is the number of true positives divided by all of the actual positives (true positives plus false negatives): Basically, what fraction of all the positives do we actually predict.
Recall
These two metrics trade-off between each other. For the same model, you can increase precision at the expense of recall and vice versa. Often, you have to determine which is more important to the problem at hand. Do we want our predictions to be precise or have a high recall?
What are true/false positives/negatives? Let’s take a look at a confusion matrix.
Confusion matrix
Here is our confusion matrix for our test predictions. The rows are the truth. So, row 0 are actual 0 labels and row 1 are actual 1 labels. The columns are the predictions. Thus, cell 0,0 counts the number of the 0 class that we got right. This is called true negatives (assuming we consider the label 0 the negative label). And cell 1,1 counts the number of the 1 class that we got right — true positives. Cell 0,1 are our false positives and cell 1,0 false negatives.
As you can probably tell, confusion matrices are extremely useful tools for error analysis. In this example, we have pretty symmetric errors — four missed within both classes. This is not always the case, and often there are more than two classes, so understanding which classes get confused for others can be very useful in making the model better.
Sometimes you just want a balance between precision and recall. If that is your goal, F1 is a very common metric. It is the harmonic mean of precision and recall:
Harmonic mean of precision and recall
The harmonic mean is used because it tends strongly towards the smallest element being averaged. If either precision or recall is 0 then your F1 is zero. The best F1 score is one.
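All of these quantities are one call away in scikit-learn. A sketch, reusing the clf, X_test, and y_test from the earlier snippet:

```python
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

y_pred = clf.predict(X_test)

print(confusion_matrix(y_test, y_pred))   # rows = truth, columns = predictions
print(precision_score(y_test, y_pred))    # TP / (TP + FP)
print(recall_score(y_test, y_pred))       # TP / (TP + FN)
print(f1_score(y_test, y_pred))           # harmonic mean of precision and recall
```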
Precision / Recall Curves
With all of the previous metrics, we have been using binary predictions. But really logistic regression returns probabilities, which are extremely useful.
Now instead of a single 1/0 number for each prediction, we have two numbers: the probability of being class 0 and the probability of being class 1. These two numbers must sum to one so 1-probability of 1 = probability of 0. A useful thing to do is to make sure these probabilities actually seem to correlate with the classes:
What this plot is showing us is that for our training data that indeed our positive class tends to have a high probability of being positive and our negative class has a high probability of being negative. This is basically what we designed our algorithm to do, so this is somewhat to be expected. It would also be good to check that this is true for validation sets.
Sklearn’s predict function just takes the class with the highest probability as the predicted class, which isn’t a bad method. One might want, though, a very precise model for positive predictions. One way we could do this is to only call something positive if the probability is over 90%. To understand the trade-offs between precision and recall with different cut-off values, we can plot them:
Nice! We can clearly see the trade-offs between P and R as we adjust our thresholds. And in fact, we can find the cut-off that maximizes F1 (NOTE: we are using testing data here, which would not be good in practice. We would want to use a validation set to choose a cut-off value)
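A sketch of how the probabilities and the cut-off search might look, again reusing clf from above (and, as noted, in practice you would pick the cut-off on a validation set):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Column 1 of predict_proba is the probability of the positive class.
probs = clf.predict_proba(X_test)[:, 1]

precision, recall, thresholds = precision_recall_curve(y_test, probs)

# Find the cut-off that maximizes F1 (the last P/R point has no threshold).
f1 = 2 * precision * recall / (precision + recall + 1e-12)
best = np.argmax(f1[:-1])
print("best threshold:", thresholds[best], "F1:", f1[best])
```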
ROC Curve
ROC curves are another popular method for evaluating classification tasks. It is a plot with the recall value (or true positive rate) on the y-axis and the false positive rate (false positives divided by all the actual negatives) on the x-axis. Thus, it shows how your model trades off between these two values. The closer your value is to the top-left corner of the graph, the better. Let’s take a look for our test data:
ROC curve USP: because it is built from rates (TPR and FPR) rather than raw counts, it remains informative even with highly imbalanced data
Clearly, our model has done very well. The AUC (area under the curve) score you see in the bottom right is the area under the ROC curve, a value of 1 being the best one can do. We got to one. This curve can also be used to pick a threshold value if you know the trade-off you want to make between TPR and FPR.
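And the ROC curve values and AUC score, as a sketch:

```python
from sklearn.metrics import roc_curve, roc_auc_score

# probs comes from the previous snippet (positive-class probabilities).
fpr, tpr, roc_thresholds = roc_curve(y_test, probs)
print("AUC:", roc_auc_score(y_test, probs))

# Plotting fpr on the x-axis against tpr on the y-axis gives the ROC curve;
# the closer the curve hugs the top-left corner, the better.
```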
Multi-class
Logistic regression can also handle more than 2 classes. There are two ways we can do this:
One-vs-rest method: With this method, we train one classifier per class, treating that class’s observations as positive and all other classes as negative.
Cross-entropy loss: We can also change our loss function to incorporate multiple classes. The most common way to do this is by using the cross-entropy loss:
Cross entropy loss
This loss takes the negative average of the predicted probabilities for the correct class. Thus, optimizing it attempts to get the predicted probabilities for the correct class as high as possible.
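As a sketch of the loss itself, assuming one-hot true labels and a matrix of predicted class probabilities (not the exact code from the notebook):

```python
import numpy as np

def cross_entropy(y_true_one_hot, y_pred_probs, eps=1e-12):
    """Negative average log-probability assigned to the correct class."""
    correct_class_probs = np.sum(y_true_one_hot * y_pred_probs, axis=1)
    return -np.mean(np.log(correct_class_probs + eps))

# Example: 3 classes, 2 observations.
y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6]])
print(cross_entropy(y_true, y_pred))   # lower is better
```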
Interpreting logistic regression
Interpreting the coefficients of logistic regression is a little trickier than that of linear regression. Let’s take a look at our coefficients:
Our most positive coefficient is on char_counts, with a value of roughly 5. Logistic regression coefficients are in log-odds units, so to interpret one we exponentiate it, which here gives an odds ratio of about 147.5. That is, for every 1 unit increase in char_counts, the odds of the password being a strong one are multiplied by roughly 147.5. For more details on this, check out this post.
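In code, converting coefficients to odds ratios is a single exponentiation. A sketch, assuming clf is the fitted model and the feature names are the ones engineered earlier:

```python
import numpy as np
import pandas as pd

features = ["char_count", "numerics", "alpha", "vowels", "consonants"]
odds_ratios = pd.Series(np.exp(clf.coef_[0]), index=features)
print(odds_ratios.sort_values(ascending=False))
```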
Answer to the question
I would like to show it using an example. Assume a 2 class classification problem and 6 instances.
Assume, True probabilities = [1, 0, 0, 0, 0, 0]
Case 1: Predicted probabilities = [0.2, 0.16, 0.16, 0.16, 0.16, 0.16]
Case 2: Predicted probabilities = [0.4, 0.5, 0.1, 0, 0, 0]
The MSE in the Case 1 and Case 2 is 0.128 and 0.1033 respectively.
Although Case 1 makes fewer misclassifications if 0.5 is taken as the threshold, its loss is higher than the loss in Case 2. Why is this? MSE measures the average squared deviation of the predicted probabilities from the ground truth; it says nothing directly about whether the predicted class is right, and its application to classification is essentially limited to binary or probabilistic outputs. It can still be used in specific instances: RMSE is one way to measure the performance of a classifier, and error rate (the number of misclassifications) is another. If our goal is a classifier with a low error rate, RMSE is an inappropriate target, and vice versa. This explains why we generally don’t use MSE/RMSE as an evaluation metric for classification problems: other metrics provide more flexibility and a clearer picture of classifier performance than MSE does.
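A quick numpy check of the numbers in this example:

```python
import numpy as np

y_true = np.array([1, 0, 0, 0, 0, 0])
case_1 = np.array([0.2, 0.16, 0.16, 0.16, 0.16, 0.16])
case_2 = np.array([0.4, 0.5, 0.1, 0.0, 0.0, 0.0])

mse_1 = np.mean((y_true - case_1) ** 2)   # ~0.128
mse_2 = np.mean((y_true - case_2) ** 2)   # ~0.1033

# With a 0.5 threshold Case 1 makes fewer misclassifications (1 vs 2),
# yet its MSE is higher, which is the point of the example.
print(mse_1, mse_2)
```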
Code: Github
My article about linear regression
If you need any help you can get in touch: LinkedIn
References:
Hands-on machine learning by Geron Aurelien
|
https://towardsdatascience.com/logistic-regression-bottoms-up-approach-feature-engineering-ideology-a-bonus-81807fa881be
|
['Hemanth Devarapati']
|
2019-11-28 17:09:41.270000+00:00
|
['Logistic Regression', 'Machine Learning', 'Metrics', 'Data Science', 'Gradient Descent']
|
Where Are Your Tax Dollars Going?
|
New Jersey residents pay some of the highest state taxes in the country. Morristown Minute recently published an article about Morristown, NJ’s biggest problems, where we identified areas in which Morristown struggles significantly.
The results of our investigation showed a need for further state funding in the areas of criminal justice reform, affordable housing, and mental health care. But how much of our tax dollars are going to the places we need them?
What have NJ taxes been up to in the past couple of years?
New Jersey taxes have risen across the board in all but one area. Income tax revenue went up nearly 7% from 2019 to 2020, sales tax revenue is up over 4%, and total tax revenue increased 2.9%.
The only decrease in tax revenue for the state was in corporation tax, which dropped 13.3% from 2019–2020.
Close to 43% of New Jersey’s 2020 budget was funded by income tax paid by New Jersey residents.
Data from NJ FY2020 Budget
Where is our money going?
Significant portions of NJ taxes go to our state government, for things like corrections, public safety, the treasury department, environmental protection, agriculture, transportation, and more.
More money has been flowing into the departments of correction, law and public safety, treasury, and state services.
Meanwhile, less of your tax dollars are going towards environmental protection, agriculture, and transportation. As mentioned above, our investigation of NJs biggest issues showed a lack of adequate transportation and infrastructure. However, less of our tax dollars are going towards remedying these issues.
Schools, of which New Jersey has the best in the nation, saw a 3% increase in tax revenue from 2019 to 2020, with aid to schools rising by just under three percent.
Payment to teachers, which includes pension and annuity funds, post-retirement medical insurance, and social security, increased 3.2%.
Taxes to public higher education, listed separately from school and teacher aid, increased by 1.2% to help students with debt and financial aid. More money is going to students, while less of that money is going to our public universities and colleges.
Breaking it down:
About 38 cents of each of your tax dollars go to education in the state of New Jersey.
15 cents of every dollar go to Human Services.
12 cents of every dollar go to state benefits, utilities, and property costs.
7 cents of every dollar go to the department of treasury.
Less than 1 cent of every dollar goes to agriculture, banking and insurance, environmental protection, labor, and military and veteran’s affairs.
What has Morristown’s Tax Rate been up to?
Data from NJ FY2020 Budget
The above graph represents Morristown’s tax rates over the last 24 years, from year 1 (1997) to year 24 (2020).
Our town’s tax rate has been relatively steady. Residents are paying a lower rate today than we were 24 years ago in 1997.
On average (from 1997–2020), Morristown residents pay a tax rate of 2.165%.
In 2020 we paid taxes at a rate of 2.118%, a slight decrease from our 24-year average. Our tax rate has been lower than the 24-year average since 2016, indicating a decreasing trend.
Where do we need our tax dollars to go?
As mentioned above, our investigation of Morristown’s key issues revealed our town needs greater financial support in the areas of environmental protection and infrastructure, criminal justice reform, affordable housing, and military and veteran’s affairs and mental health services including addiction services.
However, tax revenue for nearly all of these services decreased from 2019–2020. Environmental protection funds decreased more than 10%, transportation funds decreased over 55%, and corrections saw a modest increase of 0.4%.
Where do you want to see your money being used? What issues are most important to you? Let us know in the comments below.
|
https://medium.com/@MorristownMinute/where-are-your-tax-dollars-going-3910037b56c2
|
['Morristown Minute']
|
2021-12-21 16:24:04.363000+00:00
|
['New Jersey', 'Morristown', 'Property', 'Taxes', 'Income Tax']
|
If you don’t like the rules, create your own
|
My studies were largely guided by the mindset that graphic design and illustration are something extremely serious, something that you should not take lightly.
Many teachers had the attitude that graphic design can only be done the “right” way by the seasoned, high-status white male professionals who wear only black and edgy eyeglasses. They created only in black and saw other colours as something that’s meant for children. They were the ones who would set the rules for everybody else and you were supposed to follow them.
Design or illustration were definitely not done right by young women. They especially needed the guidance of old, serious men to get it right.
Olle Eksell is also famous for his bird drawings. So quirky!
Another mantra I kept hearing during my studies was that your work shouldn’t feel easy. It could always be more perfect, more refined, more this, more that. If something came easily to you, you were probably cheating in one way or another.
Instead, creating design should be extremely painful. You should stay up late and feel stressed, suffer for your work and maybe even lose your mind in the process. That’s when you’re doing it right and your work can be approved by other Real Designers who suffer like you.
|
https://medium.com/@leena-kisonen/if-you-dont-like-the-rules-create-your-own-2e5e46fd1158
|
['Leena Kisonen']
|
2020-12-17 08:00:03.906000+00:00
|
['Finding Yourself', 'Leena Kisonen', 'Finding Your Voice', 'Olle Eksell', 'Illustration']
|
User Research Method Cards: Available for Download
|
User research helps to better understand end users’ needs and expectations, as well as typical working processes and routines. Now, an easy to use card deck with the 14 most commonly practiced user research methods at SAP is available for download. This blog explains the typical product development phases at SAP and the structure of the method cards.
The user research method cards deck covers the following methods:
360° Analysis
A/B Testing
Card Sorting
Cognitive Walkthrough
Fish Bowl
Focus Groups
Heuristic Evaluation
Interviews / Field Research
Shadowing
Survey & Questionnaire
Tree Test
Usability Benchmarking
Usability Testing
Use Case Validation
Click here to download the card deck from the respective article in the SAP Fiori Design Guidelines.
When to use which method in the product development lifecycle
Design-led development (DLD) is SAP’s process to make sure that product requirements are derived from user research, turned into product design according to guidelines and best practices, and are eventually properly implemented. DLD consists of three phases: Discover, Design, and Deliver. Ideally, user research is practiced in all three of the phases. The graphic below gives an overview of the 14 user research methods in this card deck and their typical placement within DLD.
During the Discover phase, user research focuses on learning about stakeholder and end user requirements, such as their responsibilities, tasks and activities, typical use cases, workflows, and the use of other software and artifacts needed to perform their jobs.
Once a sufficient understanding of their needs has been achieved, you move on to the Design phase. Here, different methods are available for the validation of low and high-fidelity prototypes. Hence, throughout the design phase, the research focus shifts to getting end user feedback about design ideas and artifacts. User research is frequently confused with usability testing, but this is just one of the methods. In the Deliver phase, usability benchmarking and surveys help to evaluate the product.
Structure of the method cards
Each user research method has its strengths, weaknesses, and goals while sharing similarities with others: Most of them can be conducted physically and virtually. Also, the vast majority of methods are easy to implement — even for beginners. To help you best select the right method for your needs, each method card’s front side provides a short info about its requirements.
Below, you can see an example of the front side of the “Interviews” method card. The text next to the clock informs you about the time needed and, if applicable, about its steps (i.e. preparation, analysis, synthesis). By the people symbol, you can read how many participants, experts, instructors, and other roles are required for this method. Next to the cube, you can find information about required material, resources, and preparation. Some methods only need pen and paper, whereas others are more complex.
The section “Why and What” on the backside of each method card provides details about the goal and the outcome of the respective method. By the checklist symbol, easy-to-follow steps show how to conduct the method. At the bottom of the method card, you can find helpful tips and tricks.
Would you like to know more about user research? Check out the openSAP course Basics of Design Research.
|
https://medium.com/sap-design/user-research-method-cards-available-for-download-41949eccfefc
|
['Saskia Guckenburg']
|
2021-03-03 12:44:22.079000+00:00
|
['Sap', 'Usability', 'UX', 'Tools', 'User Research']
|
Tissue Diagnostics Market Worth $6.6 Billion By 2027
|
The global tissue diagnostics market size is anticipated to reach USD 6.6 billion by 2027, growing at a CAGR of 5.9% during the forecast period, according to a new report by Grand View Research, Inc. Accelerating demand for automated tissue diagnostic systems due to the lack of skilled pathologists has driven the market. The advent of advanced imaging techniques, such as autofluorescence, that minimize the need for invasive diagnostics further supplement market growth.
The digitalization of tissue diagnostic techniques has resulted in the improvement of workflows and better patient care. Ongoing strategic models taken-up by key companies to enhance the efficiency of diagnostics also drives the market. For instance, in December 2019, Philips collaborated with Paige to provide clinical-grade artificial intelligence-based solutions to pathology laboratories. This enhanced the speed and accuracy of cancer diagnostics in laboratories.
Tissue diagnostic techniques, such as immunohistochemistry (IHC) or in situ hybridization, are used in companion diagnostics (CDx) to determine the quantity of a target analyte present in the sample. Introduction of CDx tests, such as the launch of the VENTANA HER2 Dual ISH CDx
Click the link below:
https://www.grandviewresearch.com/industry-analysis/tissue-diagnostics-market
Further key findings from the report suggest:
|
https://medium.com/@marketnewsreports/tissue-diagnostics-market-22e0fe872ef3
|
['Gaurav Shah']
|
2020-12-21 07:58:07.232000+00:00
|
['Workflow', 'Japan', 'Breast Cancer', 'Canada', 'Germany']
|
DAYS LATER
|
DAYS LATER
It was like every other day; I woke up not incredibly excited about anything, ran through my chores for the day, and started coding. The advantage that comes with staying alone is I can choose to have lazy days whenever I feel like.
I took a break from studying and decided to come online in the evening. I saw a status update from him, and we all know that’s not possible unless the other person saves your contact (which I wasn’t expecting him to, because the first conversation ended on a cliffhanger). I had to reply because the status was a cry for help again (he cries a lot, my big baby), and then he called my line when he saw my message, apologizing because he had promised to call back when the first conversation ended (he’s very forgetful).
I was amused at the way he spelt my name (no one has ever gotten it right unless I help them out), no one has also ever spelt my name that way and it was hilarious. When I told him what the name meant (My name Owaibuekpo means money does the talking, money speaks or Owo ni koko), he reacted like every other person did, he screamed like an excited fan who just met the celebrity they stan (I dare you to deny this mate).
Due to his exposure to Western Nigerian (Yoruba) culture, he calls me Owo ni koko (a nickname no one has ever given me before) and it makes me happy.
PS: I want to name this THE COCONUT HEAD SERIES but he’s fighting against it passionately.
|
https://medium.com/@buekpo/days-later-26b3e254338c
|
[]
|
2020-12-10 11:10:51.018000+00:00
|
['Friends', 'Friendship']
|
‘Road bowling’ connects Irish emigrants to their roots
|
By John Fox
Photos by Gretchen Ertl
The small iron spheres look identical to my untrained eye. But 56-year-old Florie O’Mahony rolls them around in his calloused hands like a seasoned baseball pitcher, testing each for grip and feel. He selects his weapon of choice and walks five or so yards up the road past the chalk mark that reads “START.” Florie’s 21-year-old son, Sean, is posted 50 yards down the road, carefully surveying the asphalt’s camber and pitch. “Keep to the inside!” he shouts to his dad, marking his target with a pile of wet leaves.
Florie takes a few deep breaths and breaks into a trot. When he reaches the starting line, he takes flight, snapping his arm in an underhand windmill movement and firing the ball down the road with ferocious speed. Spectators scatter as it rockets past them for 100-plus yards before being swallowed up by the surrounding brush. “Good bowl!” says one. “Not much dirt on it.”
Sean OʼMahony shows off his road bowling form.
This is road bowling, a little-known sport played mostly in two far-flung parts of Ireland — County Cork in the south and County Armagh in Northern Ireland (where it’s referred to as “bullets”). Played in one-on-one competition, the object of road bowling is simple and comparable to golf: to get a 28-ounce solid steel ball — called a “bowl” (pronounced like “foul”) — down a paved road to a predetermined finish line in as few shots as possible. Bowlers alternate shots, and wherever a bowl stops, or goes “dead,” is marked as the starting line for the next shot.
The otherwise obscure sport enjoys a fanatic following in its Irish heartland, where it’s not uncommon to encounter country lanes closed to traffic and packed with rowdy spectators. An official association governs the sport and organizes regular competitions for men, women, and youth of all ages. Unofficially, it attracts a lively gambling trade, with spectators betting on individual bowlers like prize cocks. Thanks to the Irish diaspora, road bowling has found its way over the years to several parts of the United Kingdom and to scattered parts of the U.S., especially West Virginia, New York, and Boston.
Like other games — from Gaelic football to bocce to mah jongg — road bowling serves as a kind of cultural glue, connecting far-flung emigrants and their children to their roots. Sports, it turns out, are particularly good at doing this. They’re structured around rules that are grounded in deep-seated cultural values like fair play and mastery. They have a language and terminology of their own that instantly separates insiders from outsiders. Like with any good ritual, when people play sports, they throw their whole selves into it: mind, body, and spirit. And the more obscure and unpopular the sport, the better it is at reinforcing tribal identity.
Chinese-manufactured steel balls designed for fence posts are the exact size and weight of league-sanctioned Irish bowls.
I meet Con O’Callaghan, the ringleader of the Boston road bowling chapter, in the parking lot of a state park just south of the city. It’s a hot June day and families are pouring out of minivans for a day of swimming, bike rides, and picnics. Con — a ruddy-faced 60-year-old sheet worker — pulls up in a pickup truck filled with construction debris and tells me to follow him in my car.
After trailing Con for several twisting miles deeper and deeper into the forest, we finally arrive at his bowling Brigadoon: a quiet mile-long loop of country road, free of traffic except for the occasional bewildered cyclist. “Hidden away!” he declares with the thick sing-song brogue he’s held onto after 30 years in the U.S. Several more trucks and work vans line the road, most of them emblazoned with logos of painters, electricians, and other contractors.
Con, like most of the other “lads” in his club, came to the U.S. from a small farming village in County Cork where road bowling has been the local pastime as long as anyone can remember. “It would be like baseball for you,” he explains. “Our parents bowled and our grandparents before them. We just grew up with it.”
Now they’re passing the baton to their American-born children and a surprising number are taking it up. A few years ago, Florie’s son, Sean, was the first Irish-American player ever to win in the All-Ireland championships. Like other children of immigrants, he was naturally drawn to the sports of his peers — softball, in his case. But he grew up spending Sundays bowling with his dad and the sport got under his skin. (It didn’t hurt that road bowling proves to be phenomenal training for fast-pitch softball.)
Most historic references to road bowling are from legal tracts limiting or prohibiting its play. Road bowlers have had the law at their heels since the sport’s inception.
Road bowling’s roots in the Emerald Isle run deep, stretching back to at least the early 1700s. Much to the dismay of some staunch Gaelic nationalists, however, historian Fintan Lane believes it originated in the north of England and Scotland, where accounts trace it to as early as the 15th century. Scottish textile workers brought it to Ireland’s north, according to Lane, and it spread to other parts of the island from there.
Then it kept heading west. Centuries before Con and his countrymen re-introduced their childhood game to America’s shores, road bowling enjoyed a brief burst of popularity in the early colonies, played by revolutionary soldiers at Valley Forge and on the cow paths of colonial Boston.
Most historic references to the sport are from legal tracts limiting or prohibiting its play. Road bowlers have had the law at their heels since the sport’s inception. In the early 19th century, as English and Scottish roads got busier with industrialization, town after town declared the sport a nuisance and shut it down. It didn’t help that bowling was played mostly by working-class laborers. Authorities viewed most working-class sports — including football — as unruly gatherings that threatened property and public order, especially when they involved gambling and drinking (which they often did).
America was no more welcoming. Boston issued a bylaw as early as 1723 prohibiting bowling on town roads. Maryland issued bans in the early 1800s and other states followed suit from there. By the late 19th century, bowling had been purged from most towns and roads, surviving only in the back roads of the Irish hinterlands.
If there’s a modern embodiment of road bowling’s outlaw past, it might be Lyndon Kiely. The 40-year-old excavator would blend in nicely on the set of “Game of Thrones,” with his barrel chest, jet-black beard, and exotic tattoos. The one-time rugby player, who “was practically born with a bowl in hand,” is vying today for a coveted spot at the U.S. playoffs in upstate New York. The top bowlers from Boston go up against players from New York and West Virginia (who, Con says, “bowl with a bottle of moonshine in the other hand”). The winners — whose ranks will include Sean O’Mahony — will head over to the All-Ireland competition to represent the U.S.
At the moment, Lyndon finds himself halfway down the road and in a dead heat with the elder Florie, a five-time All-Ireland champion who — when he’s not hurling iron — “does a bit of construction.” It seems to be a classic battle of brains versus brawn, though I don’t dare suggest it within earshot of either. Lyndon is on a tight curve with limited choices. He can’t loft his shot over the curve due to trees overhead, but it’s hard to see how he can find an inside track that won’t leave him dead in the rough. Darren, his chain-smoking “road shower,” an adviser similar to a golf caddy, approaches to consult. He carries a piece of sheet rock which he uses like chalk to mark the road. “Straight out now, no messin’,” he encourages Lyndon as he walks down the road, spreads his legs, and forms a human target. “This bowl has to fuckin’ fly!”
Fly it does — deep into the woods. A big fellow, brandishing a golf club with a magnet fashioned on the end, heads into a muddy ditch to lead the search party. Players hate to lose bowls, as they’re not easy to replace. After years purchasing official bowls from Ireland, American road-bowlers came upon a more affordable source: China. Turns out Chinese-manufactured steel balls designed for fence posts are the exact size and weight of league-sanctioned bowls.
After 10 minutes scouring the underbrush, the search is called off. Lyndon has to forfeit a shot, which puts him behind by one. Florie is in a strong position now, with the finish line in sight. His road-shower son marks a track with leaves — which they call the “sop” (a Gaelic word for a small pile of straw). “Watch my van, will you?” yells a spectator with a laugh, pointing way down the road to his parked van.
A family of hikers appears down the road. “Open up!” shouts a spotter, stationed there to prevent bodily harm to unsuspecting bystanders. Play is paused and the crowd parts to let the bewildered family through.
Florie squats like a pro-circuit golfer, eyes every angle, and switches sides of the road. Then winds up and nails his leafy target dead on to win the score.
“Split the sop!” calls out Sean.
“Ah, he’s a fox, all right,” says Con with a wink.
The men shake hands and we all stroll back to our cars replaying, and prosecuting, the final shots of the score. It’s mid-afternoon, but there’s still talk of a quick pub stop.
Over pints of Guinness, in a pub surrounded by Red Sox and Patriots banners, I ask Con why he and the rest of the fellows give up a hard-earned day off to play a game in the woods that no one’s ever heard of. He takes a sip and thinks.
“It’s a nice walk, I suppose,” Con says. He takes another sip. “And the camaraderie. In the end, you get beaten or you win. You go for a few pints. And what happens on the road stays on the road.”
|
https://medium.com/experience-magazine/road-bowling-connects-irish-emigrants-to-their-roots-23f1bf036478
|
['Experience Magazine']
|
2019-08-29 18:52:05.594000+00:00
|
['Sports', 'Ireland', 'Bowling', 'Immigration']
|
13 best single player card games
|
In endeavoring to catalog single player card games, one first finds that the word Solitaire must be included in every entry. For the word itself defines the content, meaning ‘a game for one player’. Alternate definitions describe a single inset gem. Think a dagger with a lone ruby in the pommel, which would be called ‘a ruby solitaire’.
Something to wager with once the shoes are gone, eh?
Let’s find our own diamonds.
1. The Idiot
Hopefully named after the Dostoevsky novel and not the intelligence quotient of its players, this Swedish game is simple yet devilishly difficult. It starts with 4 piles: one card is dealt to each pile and laid face up. If more than one visible card shares a suit, the lowest is removed, and so on, until only 1 visible card remains of each suit.
Four new cards are laid on top of the existing ones, rinse and repeat until there’s none left. Once a pile is exhausted, the top card from another pile can be moved to the empty pile. The overall objective is to finish with only the four Aces left, one at the bottom of each pile.
2. Solitaire
The king of hermetic card games, Solitaire is a staple of office workers and computer idlers worldwide. Patience games, while typically played alone, can involve two or more players. In case you’re not familiar with this hour-whittling, dangerously addictive game, it involves manipulating and sorting cards. The most common variant involves dealing shuffled cards in a prescribed arrangement, while the player seeks to re-order the deck by means of shifting cards by suit and rank. If you haven’t yet, do.
3. Canfield — solitaire
Canfield to our American counterparts, Demon to our European contingent, is a solitaire variant reputedly named after the casino in which it was invented. Canfield players take misery in their stride, given the extremely low percentage chance of victory in a given game.
To play, thirteen cards are dealt to form the reserve; only the top card can be played. A card is then placed on the first of four foundations to the right of the reserve, and cards of that same rank must also start the other three foundations. The player wins when all cards are placed on the foundations, statistically very unlikely, which Mr. Canfield knew deviously well.
4. Chain solitaire
Another patience game involving careful planning, with a higher likelihood of finishing. The objective is to make chains with cards while obeying the rules of regular solitaire; the cards need be in ascending or descending order, alternating between red and black cards.
5. March Same Rank
Similar to the above, with a slight variation: players seek to discard all cards by matching cards of the same rank, regardless of suit or colour.
6. Napoleon at St. Helena
If ever a man knew about whiling away the lonely hours, it was Napoleon. When not spurning the advances of his wife, setting the foundations for modern Europe, or drastically changing the meritocracy of his enormous armies, he played cards between rebellious power seizures.
Two full decks are required, shuffled together to start. The object of the game is putting Aces at the foundations as soon as they become movable, then discovering ways to build up all eight foundations from Ace through King, while moving a single card at a time. Variations include Lucas, Maria, Limited, Streets, Indian, Rank and File, Forty Thieves and, a personal favourite, Roosevelt at San Juan.
7. Devil’s Grip
While researching these articles, I found that the games were recommended for players 8 and upward. I don’t know about anyone reading, or whether we have among us genetically modified children with enormo brains visible atop their head in glass belljars, with nodes and whirring electricity boxes writhing like a Gorgon’s hair, but at age 8, my only skill was eating paste and pressing against windows so I resembled a pigboy. Fair play to anyone nailing these endless variants before double figures. You’re probably my boss now.
The object of the game is to place the entire deck into piles on the grid; jacks on top, queens in the middle, kings on the bottom. I’m not sure whether it’s a statement on the monarchy and the risen workman, or whether it just falls that way for entertainment purposes. I’m choosing to believe the latter, extolling my own virtues as a revolutionary while I do.
8. Klondike
Klondike is the standard North American and Canadian solitaire variant, so much that the word ‘Solitaire’ is substituted entirely for ‘Klondike’.
During the Gold Rush of the late 19th century, before poker ascended to become part of the cultural zeitgeist (leaving military encampments when conflicts ended, travelling back with those who’d learned it, who in turn showed their friends, and so on all the way to GGPoker), a mass exodus of fortune-seekers left the safety of the east for the Western and Northern Frontiers, spreading solo games like wildfire.
Lonely prospectors whiled away the long evenings with games of patience and chance. Klondike experienced unprecedented growth when shimmering ingots and untapped veins were found plentiful beneath the firmament.
All that glistens.
9. Spider Solitaire
Spider Solitaire is the standard boredom killer every PC comes equipped with. Whether it’s a long bus journey, sitting in for a package or brain games to make your thinking muscles bulge, Spider Solitaire makes fine company. Similar to other patience games on this list, the objective is to build cards of descending suit sequence from King to Ace. Once you’ve nailed one, it’s automatically jettisoned out the airlock to one of the 8 foundations. The game is won when all cards are played, with 8 separate King to Ace rows.
10. Beleaguered Castle
Rotten animal corpses are flung across the walls using catapults. Wells are poisoned. Rivers choke with bodies. Arrows darken the sky. Inside, the terrified denizens, frozen by the hiss of arrows tearing the air, which to their frightened ears sound like soaring serpents.
At least, they’re the images conjured in the mind upon hearing the name. In action it’s less complicated. There’s more maneuvering of rows and careful cunning, less sallying forth and breaching boundaries. A row of aces, first removed from the deck, are aligned vertically, forming the foundations of each row. Eight rows of six cards are placed either side, like the wings of a plane. If you’re playing with physical cards, it should resemble tiered seating leading to a central walkway once set.
The game is won once all the dealt cards are built onto the foundations, rising to tear the clouds like Babel. There are several variants, each with a more fantastic name. Take Castle of Indolence, for example, which wouldn’t sound out of place played in the longhall where Beowulf slumbered awaiting Grendel’s arrival, or as a game advertised on the swinging sign outside the Prancing Pony.
Other Medievally-titled iterations include Citadel, Streets and Alleys, Selective Castle, Siegecraft and Stronghold.
11. Seahaven towers
Here’s one available in physical and video game format. Seahaven Towers, although sounding like a Westerosi town name, involves cards built down in suit, with kings or sequences beginning with kings exclusively filling empty tableau spots.
12/13. Pyramid/Pile of 28
No, it’s not a Sherlock Holmes title. Pile of 28 is dynamic and simple to set up, with the final goal being that all cards are removed from the pyramid and paired to a combined value of 13 points.
In Conclusion
The list goes on forever. Every casino has its own twists. Every windblown shack that ever hosted a game wrote their own rules. Try the ones we’ve listed. If you don’t enjoy them, choose from a million others, some slightly altered, others unrecognizable from the initial iteration.
Whichever you prefer, we hope you’ve learned something. Next time you’re on a train, idly shuffling cards and staring out the window, wondering which of the fifty new single player card games might serve for such a trip, spare a thought for GGPoker.
|
https://medium.com/@GGPoker/13-best-single-player-card-games-f43031d70828
|
[]
|
2019-01-09 11:27:13.750000+00:00
|
['Poker', 'Solitaire', 'Cards', 'Single Payer', 'Card Game']
|
Door Window and Partition Dubai
|
Aluminum is the preferred material for door and window frames due to its inherent structural and aesthetic properties. Doors and windows made of aluminum frames with glass glazing may look the same from a distance, but take a closer look and the difference between a quality product and one fabricated by a local workshop becomes apparent. If you are investing in doors and windows, it pays to buy only quality products. Finding the right Aluminium and Glass Works in Dubai company for your home renovation is a big deal. Zealcon is a name that you can trust.
Seamless integration of hardware with aluminum frame
The looks as well as performance of aluminum windows are dependent on and influenced by the way hardware is integrated into the frame. If the window has hinges then the way hinges are designed and fitted to the frame has a bearing on ease of opening and closing as well as reducing gaps and thereby preventing energy losses. Quality manufacturers design and manufacture their own hardware to go along with windows and doors.
Precision fabrication
Most general fabricators do not work to tight tolerances while fabricating frames from aluminum profiles. There can be gaps between joins and this can look unseemly and if the frame sections are not well aligned, closing and opening them becomes a chore. Quality manufacturers also include thermal breaks and a foam core that provide additional insulation.
Finish
Quality manufacturers offer aluminum in a variety of finishes such as natural anodized finish in various shades, metallic colours, permanent colour fast powder coating and woodgrain foil overlay for a natural look. You can go a step further and choose aluminum frames that have one colour finish for interiors and another on the exterior to match building facades.
Single or double or triple glazing?
When double glazed windows offer so many advantages, there is little point in saving some money by choosing single glazing. Double or even triple glazing is better from both an energy conservation and an acoustic insulation perspective. Some advanced manufacturers offer gas-filled and totally sealed double glazing. You also have the choice of blinds integrated inside the glazing, which makes for a neater appearance and ease of use. When one talks about glazing, it must be kept in mind that glass varies widely. It is recommended to look for windows with low-E internal glass and possibly toughened glass so that, in the event the glass breaks, the floor is not littered with shards. You can also select glass that is coated to reflect heat and thus reduce energy consumption. We are a glass company in the UAE with many years of experience in providing the best services to clients.
How secure are the aluminum windows?
Quality manufacturers offer windows that conform to British Standard PAS 24:2012. The material, design and manufacture of these windows make it difficult for would-be burglars to force them. You should look for a multipoint steel locking system and internal glazing for better security.
It pays to invest in world-recognized brands offering quality aluminum windows. They look perfect from the inside and the outside, they are easy to operate, and you will enjoy their use for years with minimum maintenance. A quality aluminum window may cost more initially, but it also adds to the value of your house. From glass shower cabins, doors and windows to UPVC windows, folding doors, kitchen cabinets, Milgard windows and more, we have a great variety to offer. Check out our official website to explore more!
|
https://medium.com/@zealconuae1/door-window-and-partition-dubai-97037d7de81d
|
['Zealcon Luxury Glass Rooms Dubai']
|
2020-12-21 10:14:13.182000+00:00
|
['Design', 'Dubai', 'Uae', 'Life', 'Construction']
|
It’s Our Fault Christmas Is Unaffordable Again
|
A poem about how we are to blame for yet another money-tight Christmas.
“Marley’s Ghost” — original illustration from A Christmas Carol.
’Twas the night before Christmas, when all through the rental
not a bill was paid early, not even dental.
Although the stock market was booming, so how could that be?
I guess I spent too much time watching TV.
What I should have been doing is making speculative investments
in businesses with specifically low-grade assessments.
For that is the labour that our society needs,
the kind of innovation that only Capitalism breeds.
Who cares about the teachers, artists or physicians?
We need more ways to price gouge patients with pre-existing conditions!
We live in a meritocracy, where only the most talented prevail —
a warehouse worker is replaceable, so they deserve to fail.
Sure, increasing corporate profits and worker productivity are related,
but that doesn’t mean the workers should be the ones compensated.
Wages have been stagnant since the 70’s for a very good reason,
and it doesn’t matter if the affordability crisis intensifies with every season.
At the end of the day, our job creators deserve a break
for all the opportunities they provide and the risks that they take.
So stop the complaining and just celebrate your Christmas in debt,
you haven’t even been replaced by AI… well, yet.
|
https://medium.com/@conleywrites/its-our-fault-christmas-is-unaffordable-again-25680dac41b8
|
[]
|
2020-12-08 05:45:05.780000+00:00
|
['Socialism', 'Inequality', 'Christmas', 'Poetry', 'Capitalism']
|
Automating the hunt for illegal dumpsites in Turkey with satellite imagery
|
Automating the hunt for illegal dumpsites in Turkey with satellite imagery
For many years now, my experience as a GIS and remote sensing specialist has been marked by projects in the humanitarian sector: the UN, World Bank, Red Cross, WRI, Translators without Borders; you name it, I probably did some work for them. Back in 2018, when I worked for Urban Resilience Platform — a French startup dedicated to developing software for solid waste management in humanitarian crises — we were given a very simple but hard-to-accomplish task: help UNDP and the Turkish government identify every illegal dumpsite in Turkey.
But why, though?
Turkey’s unique geographic position, with a 911 km border with Syria, and its standing as a land migration route to Europe have resulted in the country receiving a large influx of Syrian refugees. Since the beginning of the Syrian crisis in 2016, millions have fled across the border to the Southeastern Anatolia region in Turkey. In July 2018, the number of registered Syrians in Southeastern Anatolia was over 3.5 million.
Migration flows into the provinces with the highest numbers of Syrians in Turkey by 2016. The provinces in the southeast are the ones with the most refugee camps, like Sanliurfa, Gaziantep and Osmaniye.
This has led to rapid change in the demographic structure in several Turkish border cities. In Kilis, for example, the number of Syrian refugees is greater than that of the local population.
As populations move, so does the waste they generate. With the doubling of domestic solid waste quantities, Southeastern Anatolia's provinces began to face ever-increasing issues with inadequate disposal and insufficient capacity to collect and treat all the waste being generated by their new populations. That led to an imbalance in the solid waste value chain, resulting in the emergence of many new illegal dumpsites to receive the waste that could not be processed by the municipalities.
An illegal dumpsite, mostly used for domestic waste, on the outskirts of Gaziantep.
UNDP, with its clear mandate to support municipal waste management, responded with the Effective Urban Waste Management for Host Communities Programme, which included key infrastructure investments, like waste transfer stations and recycling facilities.
But the diversion of waste away from uncontrolled dumpsites is only the first step in the upgrading of the whole waste collection system. UNDP also wanted to ensure those dumpsites were on course for rehabilitation. Hence the need for a methodology to identify dumpsites.
How do you process imagery for a whole country?
Since we didn’t have the means to travel the country in the hopes of bumping into well hidden dumpsites, we resorted to the use of (drum rolls…) satellite imagery!
Folks in the remote sensing field were working with machine learning techniques long before it became the hot stuff it is now. The main difference is that they’ve been doing it in a nearly analog way, with a lot of button clicking involved. The process usually looks something like this:
Decide which satellite imagery provider you'll use, usually with the help of a system like Maxar's Discover;
Acquire each image you'll be needing, by downloading one by one from the server or buying it (which usually takes two or three e-mail exchanges);
Train desktop software like ERDAS Imagine, eCognition, and ArcMap to recognize whatever land feature you're interested in.
This is what Maxar’s Discover looks like. It’s good for picking one or two images, but not great if you need to assess an entire country.
Well, it's all fun and games until you have to download and analyse images for something as large as a country.
That would require you to download a lot of images, build a mosaic with them, set the right parameters and count on your machine to be able to process it in a reasonable timeframe.
Thankfully we can now run all of these processes with fairly simple scripts, and better yet, on the cloud. That’s what Google provides with its free-to-use Earth Engine tool.
GEE user interface, where documentation, code, console and map are all in the same page.
The right spectral bands for the right objects
If you’ve ever worked with image classification before, you know that one should always have a good understanding of the composition, form and texture of whatever you’re looking at. The components of waste, however, have different densities, overall sizes, chemical compositions and, therefore, spectral signatures. Simply put, our targets reflect different colors (or light wavelengths) at different intensities, making it difficult to detect dumpsites as one single kind of object.
Construction waste dumpsite photographed during a field visit to Osmaniye.
So we decided to make use of an Assisted Classification Model. This — very classic — model consists of “teaching” our code what dumpsites look like with training samples in a collection of satellite imagery, and telling it to search for more. That means in order to get started, we must first decide which image collection we'll be using. Since we don't have any budget for imagery acquisition, we'll use the best free imagery we can get: Copernicus Programme's Sentinel-2, which provides for a decent spatial resolution (20m pixel, 10m if you use pan sharpening) and an even better spectral resolution (13 bands ranging from 443 to 2190 nm).
In order to do that with GEE, we declare a variable that we'll call SENTINEL and then use the ee.ImageCollection method, stating that we're looking for the COPERNICUS/S2 collection. We can then filter this collection to find images that can actually be of use, by setting thresholds on the percentage of cloud cover (<10%) and the date of acquisition (Jan-2018 to Aug-2018). We also use GEE's Assets tab to upload a zipped shapefile containing our area of interest, which we called southeasternturkey, and then apply it as a bounds filter.
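To make that filtering step concrete, here is a minimal sketch using the Earth Engine Python API (the Code Editor's JavaScript version uses the same method names). The asset path is a placeholder for wherever you upload your own area of interest, and the composite step is just one common way to prepare imagery before drawing training samples.

```python
# Minimal sketch of the image collection set-up described above,
# using the Earth Engine Python API. The asset path is a placeholder.
import ee

ee.Initialize()

# Area of interest, uploaded beforehand as a table asset (zipped shapefile).
aoi = ee.FeatureCollection('users/your_username/southeasternturkey')

# Sentinel-2 collection filtered by acquisition date, cloud cover and bounds.
sentinel = (
    ee.ImageCollection('COPERNICUS/S2')
    .filterDate('2018-01-01', '2018-08-31')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
    .filterBounds(aoi.geometry())
)

# A median composite clipped to the area of interest is a common starting
# point for drawing training samples and running a supervised classifier.
composite = sentinel.median().clip(aoi.geometry())

print('Images available:', sentinel.size().getInfo())
```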
|
https://towardsdatascience.com/automating-the-hunt-for-illegal-dumpsites-in-turkey-with-satellite-imagery-55288a687add
|
['Guilherme M. Iablonovski']
|
2020-08-13 01:42:31.359000+00:00
|
['Google Earth Engine', 'Humanitarian', 'Data Science', 'Geospatial', 'Remote Sensing']
|
Healthcare workers, all workers, deserve a major pay increase
|
You save workers by demanding they be fairly compensated for their labor, not by trying to spin the narrative that other people making a living wage is what’s hurting them. That’s doing just the opposite of helping them. It’s actively working to wipe out support for the only leverage they have, access to competitive pay outside of their current jobs. Healthcare workers were being shafted long before Taco Bell workers started being paid fairly.
CNAs (some of the most underpaid workers in the healthcare system) and other healthcare workers are out there taking care of sick people, pandemic or not. It’s not a hard narrative to grasp or relay that they should be making more money. Not just enough money to pay the bills and put food on the table but enough money that they can actually imagine being able to save for their future, enough to buy the things or experiences they deserve the ability to acquire as reward for their labor and for their dedication to a career that requires them to go above and beyond every single day to care for the sick and vulnerable. They deserve a wage worthy of motivation to continue this work, day after day.
Workers haven’t had any real bargaining power for decades unless they happen to be union workers, and even those workers’ ability to organize and bargain for fair compensation has come under attack in a major way [especially so] in the last decade, with union-busting laws like ‘right to work’ in states like Indiana, where lawmakers have been eager to push the legislation through.
This pandemic has created a shortage in a couple of ways: workers getting sick, and the government trying to curb the spread. That has led to labor shortages at places like Taco Bell (pictured above) and elsewhere. These employers have all had to do something they never thought they’d do: raise wages in order to fill out their staff. The Facebook post pictured above highlights an important issue: the current competition that the healthcare and other industries are seeing as they [continue to] pay their employees poverty wages, even when those employees have access to a better wage outside of their respective industries.
Taco Bell employees (and those of many other service industry employers) making a living wage is super important in the fight for a fair wage in the healthcare industry and beyond.
Healthcare and other employers will have to start considering competitive pay as their employees are lured away by higher paying jobs, or they will continue to struggle to fill their labor needs. Couple that with public pressure and support for better wages, and their choices are limited.
This needs to be hammered over and over into the minds of the general public in order to grow favorable public sentiment, the belief that healthcare workers deserve higher pay, and the support that will follow. That’s not going to be a difficult narrative for the public to get behind.
It’s important to note that you do not want this to turn into an “us against them” campaign, as in “healthcare workers deserve the pay they’re fighting for but ‘burger flippers’ do not”. The main reason for this, other than the fact that everyone deserves a living wage, is that if public sentiment shifts towards that narrative, then the enemy to defeat becomes the Taco Bell employees (etc.) who are earning a better wage that “they do not deserve”. This then bolsters those employers’ own quest to claw back employees’ pay gains, allowing them to play on the image of having public support to do it. If that happens, healthcare workers, all workers, will lose the only real leverage they have in this fight for fair pay.
If workers don’t have any better paying job opportunities they can leave their current roles for, or if those better paying jobs no longer exist or are no longer available, then even when [or more likely if] they see some gain in pay through a successful and publicly backed demand for a living wage, they will not be getting the above-living-wage rate they deserve. They’d be lucky to end up with a wage that covers the bare minimum at that point, i.e., the Taco Bell wage range of around $14 per hour.
So yes, there is a real need to highlight why healthcare workers should be paid more, but we also need to bring positive awareness to the fact that other industries have raised their standards, and that the healthcare industry should start compensating its employees for the level of work that is expected of them. We must do this without taking focus off of the real enemy: the employers, their corporations and the shareholders that allow worker exploitation.
The theft of one’s labor is the enemy, not your neighbor who has secured a better wage. Workers need to build solidarity with one another, especially in the service industry, which honestly covers most of the work available in this country.
Profits are higher than ever. This is a direct result of increased worker productivity and the fact that those workers haven’t been given a share of the profit resulting from that rise in productivity. It has instead been funneled into higher executive pay and larger payouts to shareholders, all while the cost of goods and living has continually and dramatically increased.
Corporate tax rates have also plummeted at the same time, leaving fewer resources for strapped American workers struggling to make ends meet and contributing to an overall decline in our country’s greatness and its ability to do great things. Our roads are falling apart; our infrastructure overall is crumbling, so old and outdated it’s embarrassing. We are one of the most overworked and unhappy nations in the developed world. Our health outlooks and our actual life expectancy are in decline. We are taxed heavily and unequally without proper representation. All of the real money is flowing to these employers who don’t want to pay American workers a proper wage and also don’t want to pay a dime in tax on any of it to support the infrastructure that moves their employees, their goods and their customers around and helps protect the overall health of this nation.
The fact of the matter is, almost no one is being compensated fairly for their labor in this country. People fought and died here for worker’s rights and proper compensation which paved the way for the reality of and the drive and motivation to achieve the American Dream.
That Dream was the promise that if you worked hard, you could provide for yourself and your family, own a car and a home, afford vacations and experiences, save for your future, and see your children live lives richer, happier and better off than their parents. It was the promise that this was possible on a one-person income, leaving the opportunity, if desired, for someone in a two-person household to spend their labor on things that would enrich the household in non-monetary ways.
The American Dream was the promise that your friends and family and neighbors would have a good life and not continually struggle to make ends meet. It was the promise that you could go to your job and share your labor but still be able to come home with enough time and energy to be of service to and within your community. The American Dream was never about the belief in rugged individualism that has managed to pit worker against worker. That propagandized theory is the very recipe that has led us to the place we now find ourselves in: the land of opportunity for only some. The upward class mobility and increased productivity driven by this Dream worked for America for decades, until the ’80s rolled around and every bit of what was fought so hard for started being systemically dismantled, one change after another, until the American Dream was effectively snuffed out, leaving the promise held by generations of Americans before us as little more than a pipe dream.
That promise was wrested away from us through the age-old strategy of allowing ourselves to be divided and conquered. If we ever hope to change anything, we need to find a way to unite, even if only in the realm of working together to advance common goals.
This is the biggest opportunity for the advancement of fair compensation many of us have ever seen. We need to look for, recognize and actively squash attempts by these robber barons (and, unfortunately, by the people who seem so determined to help them advance their goals), or this chance for something better, for ourselves and for those who come after us, will be gone.
|
https://medium.com/@gregoryharper/healthcare-workers-deserve-a-major-pay-increase-97338d0a8a33
|
['Gregory Harper']
|
2021-08-27 10:02:50.601000+00:00
|
['Workers Rights', 'American Dream', 'Wages', 'Labor', 'Living Wage']
|
The United States is Unnecessarily Culturally and Institutionally Vulnerable to Infectious Diseases
|
The United States is Unnecessarily Culturally and Institutionally Vulnerable to Infectious Diseases
In light of the COVID-19 outbreak we should take the time to reflect on these weaknesses and what we can do to rectify them.
The GW Hospital, the only hospital that I happened to have photos of on my laptop.
Over the past week, a growing number of cases of COVID-19 contracted from unknown sources have been reported. While we had more time to prepare than China, South Korea, Italy, or Japan, America’s respite is over; our chances of managing to track and contain the outbreak are increasingly slim. Unreassuringly, the US government’s preparations have been fraught with reports of faulty tests, inadequate testing criteria, and insufficient training and protective measures for the HHS workers who risked exposure. In addition to the ineptitude and denial that characterize the initial response, other institutional and cultural factors make the US unnecessarily vulnerable to the spread of the novel coronavirus (and disease in general). Although it is too late to fundamentally alter either the institutions or the culture in which these vulnerabilities are entrenched, a proper examination of them now can reveal stopgap measures and help us prepare for the next health crisis.
America’s work culture has, for decades, promoted the spread of disease. Employees aren’t given the leeway to be unhealthy; 45% of Americans have no paid sick leave.[i] Compounding this, between 60 and 78% of Americans live paycheck to paycheck and, according to the AARP, 53% of American households have no designated emergency savings.[ii] Americans will work sick because they don’t feel like they can afford not to do so. These problems compound when income is taken into account. Only 33% of the bottom quarter of earners have paid sick leave.[iii] Customer service jobs, which have the potential to spread diseases to a larger and wider swath of people than other jobs, lack paid sick time 58% of the time. Workers in these types of jobs, which don’t pay as well as professional jobs, are also less likely to have emergency savings. With COVID-19, the financial stakes are even higher than with a common cold or influenza. Even if the vulnerable worker doesn’t end up contracting the illness, the prospect of a two-week quarantine, for many people, spells financial disaster. 20% of Americans are unable to cover even a week’s worth of household expenses in the case of an emergency.
The cost of lost income isn’t the only factor motivating Americans to work sick. Employees attempting to call in sick are often asked if they’re sure that they’re unable to work or told to come in anyway. 34% of Americans state that they have gone to work sick due to pressure from their employer.[iv] This happens across all industries including in ones in which it is strictly illegal. I know multiple food service workers whose managers have forced them to work in contravention of OSHA standards. Many businesses lack the excess staffing capacity to deal with an unexpected illness in the workforce. Supervisors and managers often employ guilt to convince the sick worker to come in and stress the additional burden that the sick worker’s absence will place on the remaining staff. Even when possible, reduction of capacity, for example, a restaurant closing a section, is often viewed more negatively by the management than having a potentially ill employee work. Commercial America has a demonstrated tendency to put profit in front of public health. Again, it is the service industry, where a sick employee poses the greatest transmission risk, which suffers the most acutely from this issue. Professionals can often complete some work from home or make up missed work by working longer hours after they recover; service businesses cannot simply make up for the missed business at a later date.
With all of these factors taken into account, it’s no wonder that 90% of Americans admit to working sick.[v] Another driver of this trend is doctor’s note policies. While they appear to make sense from the employer’s perspective they fail in their basic premise. Instead of only sick employees staying home and healthy ones being unable to abuse false claims of illness, they drive employees to work sick. A sick employee may view the hassle of a clinic or urgent care visit to be just as draining as just going to work. To make matters worse, requiring employees to procure sick notes forces financial considerations into the equation yet again. Not only is a day of income lost but barring adequate insurance or employers willing to cover the cost, sick note policies force unhealthy workers to pay not to work. Even in Canada, where the cost of sick notes is cheaper if not free, eight out of 10 workers reported that they would go to work sick if employers require doctors’ notes for minor illness.[vi] Not only are doctor’s note policies ineffective from both the perspective of the employee and the employer (who loses productivity when disease ravages the office), but they fail from a public health perspective as well. These policies cause excessive traffic to hospitals and urgent care facilities resulting in the expenditure of resources on illnesses that may not need attention. Unnecessarily increasing the traffic to these facilities also risks putting immunocompromised patients in contact with contagious people (and exposing the sick employees to other diseases).
The culture of working sick isn’t the only way in which Americans are culturally susceptible to the spread of disease. High medical costs, the same ones that contribute to the failure of doctor’s note policies, make people hesitant to visit a doctor when they could just wait a few days and see if their condition improves. Under the right circumstances, this approach would be fine. Combined with America’s culture of working sick, however, it means slower recoveries and an increased chance that infectious diseases will be spread. Workers have no way to know if they are contagious or not if they are unable to access medical resources. In the context of the coronavirus, healthcare costs can, and likely will, hamper efforts to stymie the outbreak. Contact tracing and proper quarantine can only be undertaken when the case enters the healthcare system and is detected. When it comes to COVID-19, the fears of medical bills are compounded by the lack of available testing. While coronavirus testing is reportedly free, if a hospital does not have access to the tests to confirm the diagnosis, then it will have to use other methods to try to rule it out. Additionally, CT scans are currently more effective at diagnosing the novel coronavirus than lab testing.[vii] Unlike the testing, these scans are not, by any stretch of the imagination, going to be free. Already, the story of a Florida man who had to pay $1,400 out of pocket as a result of his concern that he may have been infected has made the rounds on the internet. That man (who, it should be noted, had insurance) made the right choice, but after hearing the financial cost of his decision, others may be less likely to.
The bad news for Americans doesn’t stop there, however: we don’t even wash our hands properly. One USDA study of consumers found that 97% of participants failed to correctly wash their hands, usually doing so for far too short a time for it to be effective.[viii] Relying on that statistic alone, of course, presumes that a handwashing attempt is even made at the proper time. Other studies have found that as many as one-third of Americans don’t bother to wash their hands after using the bathroom.[ix] The CDC recommends both hand washing and not touching one’s face as the most important preventative measures to protect against the coronavirus; given the state of American handwashing, we had better hope that we’re better at not touching our faces.
So far, I’ve listed a lot of problems but not yet touched on any solutions. As always, identifying the problem is far easier than implementing a solution, and, as with every other issue, the solutions are debatable. One clear possibility, though, is paid sick leave. Keeping sick employees at home is not only good for the employee but also for the business. In addition to being less productive and a risk to other employees, sick employees are more likely to be injured in the workplace. This paid sick leave shouldn’t require a doctor’s note but rather function on trust. Yes, a system like this has the potential for abuse, but a fair number of days should just be considered part of the compensation package. This isn’t just good for the employees but for the employer as well: access to paid sick leave decreases the probability of job separation by 25%.[x] Employees feel more valued and respected when they are trusted to make judgment calls, especially about their own health; let them decide whether or not they need to see a medical professional or just rest up.
Of course, this structural change will be wholly ineffective if the culture around working while sick doesn’t change as well. Admittedly, it may be difficult for most businesses to strike the proper staffing balances that allow for unexpected absences but do not result in overstaffing that negatively impacts the workers’ abilities to get hours. The simplest solution that doesn’t require compromising public health for service quality is capacity cutting. Granted, this is less than ideal from a financial standpoint and seems daunting at high volume businesses. The solution to this is that we normalize communicating issues, such as low staffing, to the customers. Everyone already knows that businesses are hiding harsh realities behind a polished veneer of projection. It’s about time that businesses drop some of the façade and instead communicate to establish accurate expectations in the customers’ minds. Are things going to be running slightly slower? Will there be fewer seats available? Just tell the customer. Either of those options should be viewed as far more acceptable than risking public health just to maintain the semblance of normal operations.
These policy changes need to come from the top down; otherwise, managers will still feel the pressure to insist that sick workers place themselves and others at risk. Compassionate managers may be able to make a small difference, but large scale changes cannot and will not happen without upper echelon support. It is impossible to simultaneously claim to trust on-site managers to make the right calls in these situations while exerting pressure from above which incentivizes making risky decisions.
One of the most effective domestic responses that I have seen so far is the decision by the state of New York to direct insurance companies to, “waive cost-sharing associated with testing for novel coronavirus including emergency room, urgent care, and office visits.”[xi] This is a bold step that will make testing more accessible for many. Unfortunately, steps like this taken during emergencies are limited in their efficiency compared to having a healthcare system where low or no cost accessibility is a permanent feature. The effectiveness of this measure will depend upon the quality of the messaging campaign notifying New Yorkers of these changes. There are also holes in this effort. The waived costs do not apply to New Yorkers whose plans are not regulated under the Employee Retirement Income Security Act of 1974 (ERISA).[xii] Additionally, the directive also does not help the uninsured access care.
The US response to the novel coronavirus is also being handicapped by exceptionally low trust in the government response. Mixed messages, concerns about the competency of the leadership, and early mistakes have plagued the response so far. News stories and official releases urging concerned citizens to not hoard facial masks have been met with downright cynicism, and supply shortages are imminent. This is not just a symptom of a lack of trust in the Trump administration but a result of years of waning trust in government and each other.[xiii] Rebuilding trust in the government, an impossible task to complete in the short term, would drastically improve the effectiveness of the messaging campaigns necessary to help the public understand and mitigate the threat of a disease like the novel coronavirus.
For now, we must play the hand that we have been dealt. As the situation worsens we see panic shopping for everything from N-95 masks (which have essentially no benefits when used by the untrained), to rubbing alcohol, to toilet paper. When this all ends, I suspect, much like Dr. Fauci, that we will look back “and say, boy, that was bad.”[xiv] This won’t have been the first bad outbreak in America’s history, nor will it be the last. It’s never too early to start preparing for the next one, and while we do so, I strongly suggest that we take the necessary actions to alleviate our unnecessary cultural and structural susceptibilities to the spread of disease.
[i] Lela Moore, “‘I Never Take a Sick Day’: Americans Talk About Reporting to Work When Ill,” New York Times, January 15, 2019, https://www.nytimes.com/2019/01/15/reader-center/sick-day-employment-policy-united-states-em.html.
[ii] Emmie Martin, “The government shutdown spotlights a bigger issue: 78% of US workers live paycheck to paycheck,” CNBC, January 9, 2019, https://www.cnbc.com/2019/01/09/shutdown-highlights-that-4-in-5-us-workers-live-paycheck-to-paycheck.html; “Unlocking the Potential of Emergency Savings Accounts” AARP Public Policy Institute, October 2019, https://www.aarp.org/content/dam/aarp/ppi/2019/10/unlocking-potential-emergency-savings-accounts.doi.10.26419ppi.00084.001.pdf
[iii] Heather Hill, “Paid Sick Leave and Job Stability,” Work and Occupations 40, no. 2, (November 2012) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3825168.
[iv] Abigail Hess, “A sobering stat during coronavirus fears — 90% of employees admit they have gone to work when sick,” CNBC, November 3, 2019, (title and story updated February 28, 2020), https://www.cnbc.com/2019/11/03/90percent-of-employees-say-they-come-to-work-sickheres-how-to-fix-that.html.
[v] Ibid.
[vi] Megan Collie, “Asking an employee to get a sick note is a ‘public health risk,’ experts say,” Global News, October 6, 2019, https://globalnews.ca/news/5985336/employers-sick-notes.
[vii] Radiological Society of North America “CT provides best diagnosis for COVID-19,” ScienceDaily, February 26, 2020, https://www.sciencedaily.com/releases/2020/02/200226151951.htm.
[viii] “Food Safety Consumer Research Project: Meal Preparation Experiment Related to Thermometer Use,” Food Safety and Inspection Service, May, 2018, https://www.fsis.usda.gov/wps/wcm/connect/1fe5960e-c1d5-4bea-bccc-20b07fbfde50/Observational-Study-Addendum.pdf?MOD=AJPERES
[ix] Katie Zezima, “For many, Washroom Seems to Be Just a Name,” New York Times, September 13, 2010, https://www.nytimes.com/2010/09/14/us/14hands.html.
[x] Heather Hill, “Paid Sick Leave and Job Stability,” Work and Occupations 40, no. 2, (November 2012) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3825168.
[xi] “Governor Cuomo Announces New Directive Requiring New York Insurers to Waive Cost-Sharing for Coronavirus Testing,” Governor.ny.gov, March 2, 2020, https://www.governor.ny.gov/news/governor-cuomo-announces-new-directive-requiring-new-york-insurers-waive-cost-sharing.
[xii] Ibid.
[xiii] “Public Trust in Government: 1958–2019,” Pew Research Center, April 11, 2019, https://www.people-press.org/2019/04/11/public-trust-in-government-1958-2019.
[xiv] Sarah Owermohle, “‘You don’t want to go to war with a president,’” Politico, March 3, 2020, https://www.politico.com/news/2020/03/03/anthony-fauci-trump-coronavirus-crisis-118961.
|
https://medium.com/age-of-awareness/the-united-states-is-unnecessarily-culturally-and-institutionally-vulnerable-to-infectious-diseases-bee8ae6ff43b
|
['Keegan Mullen']
|
2020-03-06 20:04:10.710000+00:00
|
['Covid 19', 'Health', 'Healthcare', 'Government', 'Coronavirus']
|
3 Reasons Why You Should Do Shorter Workouts
|
3 Reasons Why You Should Do Shorter Workouts
You get more bang for your buck
Photo by Karsten Winegeart on Unsplash
How long should your session last? There is much debate about how shorter workouts can affect the progress you make.
The fitness industry may encourage you to spend hours in the gym. But is this ideal for your health? Do you have to exercise 24/7 to achieve your goals? The concept of “training harder than last time” may suggest that longer sessions are better.
Yet it depends on what you do during your workout. To this day, I see people at my gym spending three hours there. But is it three hours of exercise? Nope. The more time you give yourself, the more you may procrastinate. So training for longer durations doesn’t guarantee you train at a high intensity.
Still, exercising is only one part of your life. Thirty to sixty minutes is more than enough time to work your body and force it to improve. Since I started lifting weights, I have found this to be the sweet spot: long enough to make progress and short enough to be sustainable.
Here are three reasons why you should do shorter workouts.
|
https://medium.com/in-fitness-and-in-health/3-reasons-why-you-should-do-shorter-workouts-e32d890c6db7
|
["Daniele D'Alessio"]
|
2020-12-05 15:39:51.781000+00:00
|
['Self Improvement', 'Workout', 'Fitness', 'Health', 'Exercise']
|
Get ready for your CPQ Specialist Cert
|
The Salesforce CPQ Specialist credential is designed for individuals who have experience implementing Salesforce CPQ. This credential is a great way to demonstrate skills and knowledge in designing, building, and implementing quoting flows with Salesforce CPQ.
Hello Trailblazer,
Congrats! If you’re reading this, the CPQ Specialist certification is in your sights. If you are feeling nervous or anxious, I totally get you. After getting 6 previous Salesforce certifications (Admin, Advanced Admin, App Builder, Einstein Analytics and Discovery, Nonprofit Cloud, and Sales Cloud), I consider this the most challenging one I have ever faced.
Although there are great resources to study, I highly recommend creating a developer org and getting hands-on experience by building different products and pricing scenarios. Create products and bundles based on things within your reach (fast food combinations, tiered pricing on streaming services, etc.).
Primary resources:
CPQ Specialist Exam Guide
It sounds like this goes without saying, but we tend to focus only on the exam outline section. Read the whole guide and learn the audience description for this exam in depth.
|
https://medium.com/@grodriguezroca/pump-up-for-your-cpq-specialist-cert-47e30aa49932
|
['Gustavo Rodriguez Roca']
|
2021-06-23 15:44:52.025000+00:00
|
['Cpq', 'Salesforce', 'Cpq Certification']
|
Self-Driving… Chemistry?
|
“Iterations” (Seema Gaur, Emil “GAN” Barbuta)
“Materials science” doesn’t have quite the pulse-raising associations as, say, “aerospace” or “robotics”. This is a shame! Stone tools — with wooden shafts — were key to the elevation of an ape with rather underdeveloped teeth and claws to the top of the food chain. Metallurgy gave us copper, bronze and iron — and corresponding waves of conquest and settlement (for at least 10,000 years, there has been strong selection pressure in favour of societies that let at least a few nerds tinker with ore and fire).
An extra pinch of carbon and a stubborn commitment to process improvement gave us high-quality steels to spark the industrial revolution. The semi-conductors inside the processor at the heart of the laptop on which I’m writing demonstrate our ability to manipulate matter at the nanometer scale… and yet, such achievements pale beside the sophistication of the composite materials in a shrimp’s claw (withstanding huge forces without the need of metals), or the efficiency (95%!) of the electron-harvesting elegance concealed within the green loveliness of chlorophyll. Life excels at creating macro-scale objects with nano-scale structure — we don’t.
Human mastery of matter at a certain scale is obvious — you can see our cities from orbit and we have changed even the composition of our atmosphere. However, our ability to manipulate matter at the tiniest scales remains rather limited. As we shall see, changing what is possible in this area may well become one of AI’s greatest contributions.
During the 20th century, physicists and chemists used the new understanding of the atom first to explore the structure of important molecules, and then, slowly, to learn to assemble them from simpler ingredients. Building larger molecules from simpler ones — synthesis — transformed our lives. Plastics are a great example — you have probably got some polymers in your clothes, but also insulating the wiring in your house, preventing food spoilage in your fridge and weatherproofing your exterior (even your lovely wooden furniture often relies on glues based on plastics). Chemical synthesis has been used to build more complicated molecules too; just look in your medicine cupboard — Aspirin, for example, or Salbutamol, the active ingredient in Ventolin inhalers — great medications that are now incredibly cheap to make.
It is here that we start to find some important limitations: we can synthesize simple molecules like these, but for more complicated ones — insulin, antibiotics — we have to ‘hire’ some assistants. Insulin, for example, is brewed in bioreactors using genetically modified yeast or bacteria: because we can’t figure out how to make it, we’ve copied and pasted the DNA sequence for the human version into tiny creatures who then churn it out in industrial volumes (a sort of high-end brewing process, not unlike beer-making). Although these medications have been an enormous boon, their discovery and manufacture also highlight our weaknesses:
New antibiotics must be found — they are not designed. For example, a key step in the mass-production of penicillin was the discovery of an unusually easy-to-grow mould on a cantaloupe in Peoria, Illinois. As much as we admire the diligent global search which found that cantaloupe, as a reliable, repeatable process for creating new medications, perhaps we could improve on it.
As obliging as our tiny bioreactor-dwelling workers are, their virtuosity in crafting large molecules is limited to what can be described by a DNA sequence (which itself must be short enough for us to synthesize successfully). This rules out almost the entire periodic table.
Even limiting ourselves to “things that yeast can make”, we still need to devise a DNA sequence or set of sequences that will yield the desired result. DNA is ‘expressed’ or ‘executed’ by being turned into proteins which need to fold themselves into the final, functional shape. Getting the sequence just right for a reliably folding result is not a simple problem.
We have discovered laws that describe the behaviour of matter at the tiniest scales with extraordinary precision — Quantum Mechanics — and so in principle, we should be able to design and build new structures at the tiniest scale at will. However, answering even a comparatively basic question about a new substance (“What is its boiling point?”) remains extremely difficult. Instead of being designed from scratch, new materials are often found by a brute-force search. Beyond finding the right material, actually making it in industrial quantities is an enormous challenge: new materials take between 5 and 15 years to commercialize. Even at Intel Corporation, with 50 years’ experience, where we do make devices that are exquisitely nano-structured, it takes an enormous building with a 10 to 11 digit price tag to get the job done.
One structure that illustrates both the difficulty and the potential is the carbon nanotube, a sheet of carbon atoms linked in a hexagonal pattern and rolled into a tube with walls just 1 atom thick. Tubes of this form have many interesting thermal and electrical properties — but their mechanical strength shows their potential: a cable of carbon nanotubes with a cross-sectional area of 1mm2 could hold a weight of 6 tonnes (picture a rather coarse thread suspending two pickup trucks!), a tensile strength more than 300 times stronger than high-carbon steel. Why don’t we see this wonder material everywhere? Two reasons:
Cost
Since carbon is the 4th-most abundant element in the universe, we might naively expect carbon-based materials to be cheaper than steel (which is mostly iron, an element only about ¼ as abundant as carbon). The price of carbon nanotubes has fallen from $1600 per gram in 2000 (about 40 times dearer than gold) to $1 per gram in 2018 (about 1/40th the price of gold), a marvellous improvement, but that still leaves us 3 orders of magnitude short of matching the price of steel (about $1 per kilogram). “Cheaper than gold” isn’t a very exacting cost bar!
Quality
In 2007, the longest-ever tubes were about 1.8 centimetres in length. By 2013, the longest tubes were still just 50 centimetres long.
The enormous fall in price is due to the heroic efforts of experimenters to improve the synthesis of carbon nanotubes. So why can’t we ‘simply’ apply quantum mechanics to the problem of producing this material and skip all those tedious experiments? Shouldn’t those laws give us the recipe we need?
Unfortunately, simulations using these laws are exceptionally computationally demanding (at least on classical computers). Happily, physicists have found an elegant dodge: one way to think of machine learning models is as a function approximator — a way of looking at a complex piece of maths and coming up with a cheaper alternative that gets us within a whisker (“within epsilon”) of the right answer. Most of us think of machine learning as requiring a lot of computational power: physicists think of it as an incredible bargain offer that lets them simulate chemical reactions accurately for the low, low price of 1% of the cost of a “full fat” quantum simulation. We don’t have to know the exact parameters of the supply-demand curve for quantum simulations to forecast a correspondingly dramatic increase in the use of these simulations, as the cheaper (but still sufficiently accurate) machine-learning-based simulations become available. Already, simulations of this type have allowed scientists working at Canada’s National Research Center to predict the properties of carbon-based structures at an unprecedented scale, a key step to broadening the industrial use of carbon nanotubes and related structures.
Cheaper simulation is important as an enabling step, for understanding the behaviour of large quantum dynamical systems at the heart of material science, pharmaceuticals, computing and energy production. In addition, it also aids in designing or discovering useful molecules and, crucially, developing the synthesis methods needed to create them. At the time of writing, for example, designing new therapeutic molecules — new medicines — often involves working with rather hefty molecules containing thousands of atoms: amino acids and proteins are the language of molecular biology, hundreds of times larger and more complex than friendly little fellows like alcohol or caffeine.
One approach to discovering new drugs is to create a vast molecular library, then use robots to screen these molecules for therapeutic potential. Think of a drug target — a ‘receptor site’ (perhaps serving as a gatekeeper on the surface of a cell) as being an elaborate lock, and the molecular library as a vast collection of keys — jars of old relics from yard sales, master keys already known to open many locks, and new creations fabricated in the mere hope that their creator might one day find a corresponding lock (yes, chemists create variation after variation on interesting compounds, hoping one might turn out to be useful). The screening process is one of trying key after key in that tantalising lock, looking for something that turns smoothly and discarding those that don’t fit or jam — or open too many locks we would rather not touch. Imagine trying to create any other new product in the same way! Experts do try to hand-place atoms to craft new medicines, but, the very existence of robotic drug screening processes should warn the reader that this is a difficult endeavour.
If only we could build a suitably accurate (embodying the laws of quantum mechanics) simulation, capable of running at a reasonable cost, it would be possible to work in a different way. Instead of specifying a solution in the form of a molecule, a researcher would define desirable properties (perhaps the drug needs to bind to one particular ‘receptor’ site on a cell’s exterior but not to a great many similar receptors and to be susceptible to eventual breakdown into non-toxic metabolites by the liver’s enzymes) and allow a form of machine learning called reinforcement learning to construct a custom molecule. This form of machine learning is used to train policies to take a series of actions in a certain environment (usually simulated to reduce costs and development time), so as to make progress towards a specific goal. The learning works because the policy is adjusted by an optimization algorithm that attempts to maximize a ‘reward’. For example, changes to a molecule that increase toxicity would result in negative rewards, while changes that enhance specificity relative to the target receptor would attract positive rewards. Reinforcement learning has produced policies (models) that defeat all opponents at games like Go and chess and has also had success with more frivolous tasks like using robotic hands to manipulate cubes, or training self-driving cars.
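To make the reward-shaping idea a little more concrete, here is a deliberately simplified sketch: the “molecules” are just strings of fragment labels, the property predictors are invented stand-ins, and a greedy hill-climbing loop stands in for a trained reinforcement-learning policy. None of this reflects real chemistry; it only shows how a composite reward can steer a search toward candidates that score well on one property while avoiding penalties on another.

```python
# Toy illustration of reward shaping for molecule search. Everything here is
# a made-up stand-in: fragments, property predictors and the search loop.
import random

FRAGMENTS = ["A", "B", "C", "D"]

def predicted_binding(molecule):
    # Invented surrogate model: pretend fragment "A" improves binding.
    return molecule.count("A") / max(len(molecule), 1)

def predicted_toxicity(molecule):
    # Invented surrogate model: pretend fragment "D" is toxic.
    return molecule.count("D") / max(len(molecule), 1)

def reward(molecule):
    # Composite reward: bonus for binding, heavier penalty for toxicity.
    return predicted_binding(molecule) - 2.0 * predicted_toxicity(molecule)

def mutate(molecule):
    # One "action": swap a random position for a random fragment.
    i = random.randrange(len(molecule))
    return molecule[:i] + random.choice(FRAGMENTS) + molecule[i + 1:]

random.seed(0)
best = "BCDB"
for _ in range(200):
    candidate = mutate(best)
    # Greedy acceptance stands in for policy optimization against the reward.
    if reward(candidate) > reward(best):
        best = candidate

print(best, round(reward(best), 2))
```

A real system would replace the surrogate scoring functions with trained property predictors or physics-based simulations, and the greedy loop with a learned policy, but the shape of the feedback signal is the same.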
Unfortunately, finding the right molecular structure is only the first step: we then need to design a process to create that molecule, and this is synthesis. We’ve considered a kind of self-driving chemistry for molecular design, but we could also benefit hugely from automating the design of synthesis, and figuring out the steps required to produce a particular molecule is a complex business which has so far proved resistant to automation. How important is the ability to design a synthesis process? A key component of the ‘green revolution’ that allowed agriculture to scale up in order to feed modern populations is the use of artificial fertilizers. Among these, ammonia is especially vital — plants need it as a source of nitrogen. This might seem strange — our atmosphere is mostly nitrogen, so shouldn’t they just pull it out of the air? Unfortunately, the form of nitrogen we breathe (N2) is extremely unreactive, and plants have so far not evolved the necessary chemistry. Such is the difficulty that the humans who figured out a workable series of steps — the Haber-Bosch process — won a Nobel prize. The process is, however, extremely energy-intensive, accounting for more than 1% of global energy consumption, and contributing about half of the nitrogen in the tissues of the typical human (so great has been its contribution to agriculture).
That the synthesis of even a very small and simple molecule like ammonia can be extremely difficult (plants still can’t do it, and even human chemical geniuses need to use very high temperatures and pressures) should convince you of the potential benefits of automating the synthesis design, and the impact of the Haber-Bosch process should make clear the vast social value of such innovation. The example of the humble soil bacteria — casually fixing nitrogen at ambient temperatures and pressures so very different to the high-temperature high-pressure industrial process — shows us how much better we could be.
To connect the potential of machine learning to one of the great problems of our day — feeding an increasing global demand for energy while also reducing net CO2 emissions — better ways of generating, storing and transmitting energy hinge on materials science. In other words, better and cheaper composites for wind turbines, more efficient and cheaper solar panels, improved battery chemistries and high-temperature superconductors. Accelerating the discovery and production of new materials can have an enormously positive impact on how our societies develop and grow.
Returning to the topic of pharmaceuticals, it should now be clear that machine learning offers us a path to creating advanced drug design and testing in a datacenter. Given that the cost of bringing a new drug to market has reached $2.5 billion and that the steady 20th-century trend of increasing life expectancies seems to be running out of steam (even in the face of unprecedented spending), the need to replace processes that hinge on luck and brute-force search should be obvious.
In summary “self-driving chemistry” may allow machine learning to design new industrial and therapeutic molecules to order, and to transform the manufacturing processes which create them: it offers our most obvious path towards understanding the nanoscale world, and producing nanostructured materials in quantities and at prices that can really change our everyday lives. Since the advent of even one new material — copper, bronze, iron, steel — can remake our society, the value of a process that makes routine the design and production of new materials can hardly be overstated.
Follow me for more articles on new applications of Machine Learning and Deep Learning. To learn more about the application of machine learning to materials science, a good place to start is the Matter Lab (University of Toronto) or this video of leading researcher Alan Aspuru-Guzik. Many thanks to Isaac Tamblyn, another leading researcher in this field, for the conversations which inspired this article.
|
https://medium.com/swlh/self-driving-chemistry-c173bf311e31
|
['Edward Dixon']
|
2020-06-06 21:20:36.188000+00:00
|
['Deep Learning', 'Materials Science', 'Machine Learning', 'Reinforcement Learning', 'Chemistry']
|
Your Coding Skill Is Not Enough
|
You need some other skills too
Photo by Arian Darvishi on Unsplash
I did it. I have learned enough about programming and technology to be considered an expert in some circles. Awesome! But as I look out from what I thought was “the” summit, I see other peaks around me that are just as important to my career. Let’s explore some of the other topics we developers should also work on.
Programming skills are not enough on their own.
Soft Skills
It is well known that “soft skills” are important for an employee. Soft skills are defined as people skills, social skills, communication skills, etc.
Human Resources has been trained to look for these skills in potential candidates. As a result, most software development courses also include training in communications.
So this is an “easy” thing to see and train ourselves for.
Prove your skills
Almost all of the projects I’ve taken on have been “internal” proprietary projects for someone else. I don’t own the copyright on the code for those projects. That means I can’t ethically or legally use that code. Assuming I still have access to any such code.
I also can’t use code snippets from those projects to demonstrate what I know. The snippets are a bit of grey area when it comes to copyright, so I try not to touch that code for any reason. That keeps my conscience clear.
Copyright is a deep topic. There is fair use that might apply to code snippets. And the idea is fair game to be talked about — just not the code that implements the idea. The laws are likely different in your area, so we’ll just assume hands off to keep things simple for our discussion here…
That leaves me with little public material that can provide any evidence of my skills or expertise.
If you look around at the developers in your circle of contacts, this may sound familiar. I feel it is pretty common with most developers.
The problem then is how to demonstrate your capabilities to people outside your immediate circles? What evidence do you have to back up your claims?
This can be solved by writing about development. (I’m feeling a little meta mentioning that in this article.) Writing only does so much though. Providing code in some way is a great additional resource. Writing articles about the code you wrote is a great way to bring attention to that code and get feedback to help you grow as a developer AND a writer.
If you have a full time role and a young family who demands your time, that leaves precious little opportunity to tackle coding projects on the side. But code still needs to be written and posted somewhere. So a balance is needed.
My recommendation here is to create a pet project and use GitHub to store your repository. Treat this project as “production” code. Do everything that a production project would do — branching, pull requests, unit testing, documentation, etc. Dedicate at least a couple hours a month to work on that project. Adjust that time to fit your own needs of course. The goal here is to regularly contribute changes to the project and have it improve over time. If a project dries up or there just isn’t anything more to contribute to it, start another one. Pick something different that showcases a different aspect of your skills/experience.
Alternatively if you don’t want to create your own project you can contribute to an existing Open Source project. Your GitHub profile will show your contributions there as well as to your own projects. Or better yet, contribute to Open Source AND your own projects if time permits.
The idea is to show consistent contributions over time. Because the project(s) is/are public, your contributions can be reviewed by others when needed. Anyone investigating YOU will see you are an active developer and have code samples to checkout.
Do not use your research projects for this effort. A research project often has quick and dirty code in it — just enough to allow us to get to the thing we were researching. If you post these as public projects, make sure they are clearly marked or separated in some way so they are not confused with the “production” projects. It would be awful to have research code inadvertently give the impression of a sloppy coder. I suggest using private repositories for your research projects, or placing them on a different server or account (perhaps BitBucket for example).
To help with my research efforts, I place ALL my projects into a “swamp” directory. Where needed, I use a private repository on BitBucket for these research projects (or GitHub now that they offer private repos without charge). Then when a project matures enough or can otherwise be considered production ready, I “raise it from the swamp” into my main projects directory. Occasionally I have to “drain the swamp” and clear out older projects that are dead or no longer needed. This is me having fun with my own process though, you may choose to be more formal if you’d like.
Domain Expertise
A business application is the codification of the business rules involved. Once an application is completed and moves into the maintenance phase, the logic, decisions, and expertise represented by that application are an implementation of how your company applies the “domain knowledge” involved.
For example, consider the ordering process for an e-commerce system. We often describe this as 1) place order, 2) collect payment, 3) ship product. Anyone who has dealt with Magento or other e-commerce platforms will know that is a rather simplistic “order flow”.
Placing an order can be different for your organization and may involve seeing if inventory is available or if more has to be acquired. Or maybe the warehouse local to the customer does not have stock, but some remote warehouse does and a transfer order is needed.
Collecting the payment can have its own workflow. What happens if the payment is tagged as fraudulent? Or the payment is just rejected for non-sufficient funds? What if a customer is paying on a terms account? What if they want to pay at the door? What happens to the ordering process in each case?
The information that goes into HOW to place and complete an order is domain expertise.
If you built an application from the ground up, you are now the “domain expert” for all things related to that application. If you contributed to an application over time, then you have some domain knowledge relevant to the application. Either way, that domain knowledge or expertise gives you some credibility. More so if the application is well received and important to the business.
Ideally, you would have had some domain expertise in the project BEFORE you began coding it. Maybe you have developed a few e-commerce applications and that is why you are chosen to build this next one. Or maybe you have a previous career as a warehouse worker, so you understand the logistics needed for this new application.
It is too easy for developers to become domain experts in development only.
To truly excel at your career (in my opinion), you might want to pick a particular topic, such as e-commerce, logistics, or business management, and then focus your programming career on projects related to that topic. This will give you a huge boost to your credibility when discussing roles that need that expertise. Without that focus, the business may feel you are a capable coder but missing that extra thing that makes you ideal for the role.
Once you are working on a project you WILL gain domain expertise over time. At least for the way this organization does it. Look around and you will find people that are considered too important to fire or let go because they are the only ones who know how it all works. Or perhaps that person is just aware of the decisions that went into the projects that are being maintained now.
I’ve worked for a number of companies, and freelanced for quite some time. In all cases, that domain expertise, or lack of it, is what sets me apart. Being able to start with a core understanding of the tasks at hand before applying any coding considerations is what makes you uniquely qualified, compared to those who only have the coding expertise.
Understanding Business Needs
As developers we need to understand what it is we are building. We call these requirements. Given a good set of requirements we can go build a system to meet those requirements.
What is missing is WHY these are requirements. What are the business decisions that lead to the requirement? What are the business managers really asking for? Being aware of these considerations improves the chances the project is successful.
In my experience, an easy way to highlight this is to ask the question “what VALUE does this feature/requirement provide?” Understanding a business value is not always a matter of logic and so may not be immediately clear to a developer.
For instance, we might be asked to build a login system. A developer may think the value being provided is that only authorized people can execute the tasks — typical security concerns. A business person may see that a little differently — they know security is important but are focusing on the “integrity of the data” as a value. We developers see “data integrity” as something different — usually related to well defined databases, foreign keys, indexes, triggers, etc. But to the business person, they are referring more to the amount of confidence in the data, the auditing/logging capabilities of the system, etc.
Understanding what the business is REALLY trying to accomplish in the big picture is just as important as understanding the immediate requirement. Understanding the actual business need may define a new feature request as inappropriate to the intent of the application. A developer who can create code that caters to the immediate need AND the larger goals is a valued member of the team.
Wrapping Up
It is sometimes important for developers to know how to use linked lists to create a binary tree, how to efficiently structure your data, or when to apply more hardware versus revising code to deal with a performance problem. But none of that matters when it comes to domain expertise, proving your capabilities, or understanding what the business is trying to do. Not unless you are building programming tools for other developers, of course — in which case, programming expertise IS the domain knowledge, and defines the values.
Adding those areas to your repertoire will give your career more focus. Without them, you will move from project to project, starting over on domain knowledge each time.
I, for one, have spent most of my career becoming really good at writing code, or making the computer do my tasks for me. My domain expertise is programming and technology. This has had a definite impact on my career as I built applications to solve the business needs of each new project. The projects sometimes feel like I’m starting over as a junior and have to work my way up again.
Hopefully this article helps newer developers avoid some of the pitfalls I’ve encountered, or provides food for thought if you have already established your programming career.
Thanks for visiting!
|
https://medium.com/swlh/your-coding-skill-is-not-enough-36d2f756fa96
|
['Shawn Grover']
|
2020-10-05 16:46:44.116000+00:00
|
['Careers', 'Startup', 'Programming']
|
A history of Supply Side
|
Photo by Marc Schäfer on Unsplash
Before 1980, Democratic and Republican Presidents had essentially all worked under the same underlying assumptions about the economy. All of them believed government investment was an antidote to recession, an economic worldview popularized by John Maynard Keynes and widely credited with helping the US get out of the Great Depression. Both Democratic and Republican Presidents believed in massive government jobs programs and welfare spending. Eisenhower did this through the Interstate Highway System and by having the Republicans essentially adopt support for FDR’s various programs. Kennedy and Johnson pursued the same policy through the New Frontier and Great Society programs they championed. Richard Nixon proposed a universal health care system and universal basic income in addition to strengthening existing welfare programs.
With the election of Ronald Reagan in 1980, American economic policy was changed forever. A new economic philosophy known as supply side economics became a major part of the Republican agenda. Supply side economics aimed to reduce taxes on income, capital gains and corporations. The theory behind doing so was a belief that the freed up money would then be invested in the economy through expansion of business. These cuts in revenue to the federal government would then be used as the reason to reduce spending on government programs, a strategy known as “starving the beast”. Although intellectually inconsistent, some supply side ideologues also believe that the economic growth caused by lowering taxes actually results in more taxable income for the government. In addition to tax and spending changes, Reagan also adopted a stance of deregulation under the belief that regulations were slowing economic growth.
Before Reagan, these ideas were met with skepticism even from within his own party. Richard Nixon had famously declared “We are all Keynesians now”. President Gerald Ford had strongly criticized much of Reagan’s agenda in 1976, and future President George H.W. Bush referred to the ideas presented by Reagan as “voodoo economics”.
By the time Reagan left office, the economy was booming. The US was averaging more than 3 percent growth yearly, and unemployment had decreased, as had rates of poverty and inflation. 21 million new jobs were created during Reagan’s time in office, the second largest peacetime expansion of the US economy ever.
However, America began to face different economic challenges. Under 8 years of Reagan, the national debt tripled. The US went from being the world’s largest creditor country to the most indebted country on Earth. Wages for middle class and poor workers were stagnant. Most of the biggest economic gains benefited the wealthiest. A series of deregulatory legislation passed in the early 80s had led to the Savings and Loan Crisis, the biggest failure of financial institutions since the Great Depression.
Throughout this time, Democrats had been taking political note, and put together a political group called the Democratic Leadership Council. The DLC was created in 1985 to pave the way for Democrats to retake the White House, and two of its biggest backers were Bill Clinton and Al Gore. The DLC was an attempt to moderate the Democratic Party, particularly on economic issues.
When Bill Clinton was President, he arguably pursued the same agenda as Ronald Reagan. Clinton muscled through the biggest cuts to welfare by any sitting President in 1996 and promoted the elimination of as many tariffs as possible. Clinton also entered the United States into the North American Free Trade Agreement and the World Trade Organization, two trade agreements championed by supply-siders, and he catered to the business community through legislation such as the Digital Millennium Copyright Act.
Clinton and the Republican Congress then passed a bill so dangerous it essentially caused the Great Recession of 2008 single-handedly. The bill, known as the Gramm-Leach-Bliley Act of 1999, deregulated financial institutions, repealing safeguards put into law after the Great Depression. The most significant rollback was of Glass-Steagall, a 1933 law that forbade commercial and investment banks from using the same pool of capital for investments. When Bush came to office, similar politics played out. Bush cut regulations, making it much easier for banks to take out risky loans, and placed voices sympathetic to Wall Street at the heads of major governmental regulatory bodies like the FDIC and SEC.
Decades of deregulation and tax cuts led to a major problem in 2008, when financial markets collapsed, and the US credit rating was downgraded in 2011 for the first time ever, due to massive debt.
In response to the crisis, Obama adopted an approach that attempted to reconcile the supply siders and Keynesians. Obama adopted the mostly Republican idea of tax cuts and investment, but also maintained a public works program in the same Stimulus bill. The Stimulus was composed of about $300 billion in tax cuts and $500 billion in new spending. Later in his term, Obama would pass an additional $400 billion in other tax cuts. Obama also echoed the supply side line by making 95 percent of the Bush tax cuts permanent. Obama would show other supply side tendencies in pushing for the Trans Pacific Partnership trade deal, and cutting the yearly deficit by about 60 percent. Obama tried to balance out his more free market ideas with the politically Keynesian ideas of new regulation and the passage of the Affordable Care Act.
The new regulations put in place by Obama were in a bill known as Dodd-Frank. Dodd-Frank was the most sweeping regulatory bill since 1938, and was seen as an attempt to put in place safeguards against another Great Recession. It established oversight of banks that pose systemic risk to the broader economy, closer monitoring of the derivatives market (financial instruments that were a major component of the 2008 crash), tighter regulation of credit ratings agencies (the agencies had colluded with banking institutions before 2008), and the creation of the Consumer Financial Protection Bureau (a body designed to identify financial fraud directed at consumers). Later added to the bill was the Volcker Rule, essentially a watered-down version of the aforementioned Glass-Steagall Act.
With Trump in office, many of these safeguards that prevent a market crash have been slowly whittled away. The Consumer Financial Protection Bureau is now run by an appointee who doesn’t want it to exist. Democrats joined Republicans in rolling back bank “stress tests” that show whether or not a bank poses systemic economic risk. Derivatives are no longer as widely regulated as intended under the bill, and Republicans in the House have passed a full-scale repeal. As a result, we’re already seeing investment banks as highly leveraged as they were in the 1920s and late 2000s.
When the economy is doing well, deregulation and general supply side economics usually win all the political battles, and in this respect 2018 was no different. The sad reality is that after years of deregulation, the US almost always sees a massive economic crash. Recessions themselves happen generally every ten years. With these facts in mind, it’s likely we’ll face a recession before the end of the next Presidential term, whether under Trump or a Democratic successor.
|
https://zacharytoillion.medium.com/a-history-of-supply-side-ae07abb93c82
|
['Zach Toillion']
|
2019-05-07 17:36:00.815000+00:00
|
['Economics', 'Republican Party', 'Trickle Down Economics', 'Politics', 'Supply Side Economics']
|
Jayson Waller of POWERHOME SOLAR: 5 Things I Wish Someone Told Me Before Became A Founder and CEO
|
Thank you so much for joining us! Can you tell us a story about what brought you to this specific career path?
After spending 10 years in the home security industry, I felt like the growth potential for me was just not there anymore. I saw Vivint’s Todd Pedersen branch into solar, and his success made me believe that solar was the future. I have always told myself that if I see an opportunity, I’m going 100 percent all in on it. I don’t want to be the guy who sees an opportunity and never ends up doing anything about it and then regrets it. When that horse comes along, you better be ready.
Can you tell us a story about the hard times that you faced when you first started your journey?
At the end of my first year with POWERHOME SOLAR, the company was losing money and I had to consider shutting the doors. Cash flow was a real problem because we weren’t getting paid until after projects were installed, and that was tough. I felt like the shot clock was running out and I just needed more time. I had to talk my wife into listing our lake house for sale because we had built some equity and we could go and buy a smaller house with cash and not have any bills. We were at that point. I’ve never been defeated before. I have, but I’ve never quit. What do I do? I was struggling with it. I was tearing up. My wife said, “You need to pray about it, figure it out. I trust you, I have faith, you always come out on top, you always find a way.” With that I made the decision to go all in. It was a huge risk, especially when my life savings were gone from my two prior businesses, but I love betting on myself.
Where did you get the drive to continue even though things were so hard?
It’s amazing what you can accomplish when you have no alternatives. I cleaned house of some company leaders that were not buying in and I started running the sales department and training reps myself. I sold deals, did all I could to turn things around and got our team passionate about the direction we were headed. It started working, and the business started to grow. Through selling my house and getting some cash flow relief from vendors that started paying us a portion of deals upfront because of our performance, things started turning for the better. My first paycheck came 18 months after starting the business. My rule in every company I’ve ever built, ran, been a part of is you pay your staff first, you pay your vendors second, you pay yourself last. That made it easier to keep going, but it was still far from easy. A lot of entrepreneurs want to pay themselves first, and that’s a mistake. You have to be humble.
So, how are things going today? How did grit and resilience lead to your eventual success?
We’ve built a business at POWERHOME SOLAR that has made the Inc. 500 list of fastest-growing private companies three times in the last four years, and we’ve virtually doubled in size each of the last two years. As our numbers get bigger, doubling annually is harder to do, but we’re super excited about our growth trajectory and the prospect of one day having a life-changing event for our employees through an IPO. They supercharged the growth in this company, and they need to be rewarded for it.
Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?
Can you believe that I didn’t know what EBITDA was until a couple of years ago? In building a team around me to grow the business, I found those people that helped me learn about the importance of EBITDA. You can never be scared of adding talent to your team and finding people who can fill critical roles. I’ve come to realize that I can’t be Superman. You need a team of Avengers to really do everything well so a company like ours can prosper.
What do you think makes your company stand out? Can you share a story?
Our company culture stands out. When COVID-19 reached pandemic status, we needed to decide whether to remain open amid the uncertainty. The four members of our executive team and I believed it was best to move forward, so we temporarily removed ourselves from payroll to give employees the opportunity to continue earning paychecks, but also gave concerned workers the chance to take leave without losing their jobs until conditions improved. What was amazing and got us emotional was that dozens of POWERHOME SOLAR staffers and other company leaders offered to give up a portion of their pay to ensure that cash flow would not be an issue for the business. While the company encountered soft sales over those first 30 days, we had no idea that seven straight months of sales records were ahead. We found homeowners with strong appetites for taking control of their energy futures. That’s how we’ve gone from 750 employees to more than 1,600 during a pandemic. It’s about this team and the culture we’ve built.
Which tips would you recommend to your colleagues in your industry to help them to thrive and not “burn out”?
Take time out for you. Whether that’s spending time with family or friends or getting to the gym, finding that balance is important. I found it helps me to have a routine. I help get my kids to school in the morning, and that’s my time with them. After work, I carve out time for their activities. It’s something that I work on every day.
None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful towards who helped get you to where you are? Can you share a story?
For me that person is my wife Liz. She’s the one who held everything steady in raising our family and allowed me to pursue opening my first home security company on nights and weekends when I was still working for a cell service provider as a business account manager. There are still times where she gives me the stink eye for investing too much time in the business, but I always try to make time for us, whether that’s in having game night with friends or watching TV together in bed before going to sleep. I appreciate her letting me play VR poker after that though!
How have you used your success to bring goodness to the world?
I believe a CEO should be very involved in the community, and not only be involved, but be passionate about it. If you’re going to employ people in a local area and you’re going to serve customers in that area, you need to serve the community and be a voice. You’re the leader of your organization and you need to bring that out to the people in the locations you’re at. You want to make a difference not just for your business, but for the community. I think we’ve done a great job with that at POWERHOME SOLAR. Whether it be supporting “Military Makeover” by giving deserving veterans a free solar energy system, donating thousands to the GivePower Foundation in support of providing clean water to people around the world, or in outreach efforts like Gobble Gobble Give or Toys for Tots, we want to make a difference in everything we do.
What are your “5 things I wish someone told me before I started leading my company” and why. Please share a story or example for each.
1. Don’t be intimidated by people more educated than you. In starting my first business, I was 25 years old and I had to learn how to lead people who were 10 or more years older than me and went to college. That was intimidating at first. But you learn that it is you that they are seeking answers from, and you need to be that person for them. You have to learn what it takes to handle that role.
2. Don’t be afraid to let go. When you build something from the ground up and know it better than anyone else, it’s hard to see inefficiency happen and not want to step in and fix it right away. But I learned it’s far more important to give people that opportunity and let them either sink or swim. Letting go is so hard, but it is truly what I needed to learn to take our business to the next level.
3. Don’t ever talk yourself out of saying, “Why not us?” We have a goal at POWERHOME SOLAR, which is to be the biggest and best solar company in America. Some might say, “How can that happen when the guy leading the company was a high-school dropout who also had his first child in his teens?” Thing is, there’s never a cap on what you can accomplish. But you have to put in the work. You have to take the stairs. There is no elevator to success. You have to grind; you have to want it more than the person next to you. Look at where we’re at now — tracking for $350 million in revenue this year. I’m just as hungry to keep climbing as I was when I first started, and our team is too.
4. You can’t always be the good guy in business. There’s always going to be a need to keep people in line in your business, and sometimes that means having frank discussions with people about performance. I never wanted to be that guy, but it’s important to have a voice like that who can hold people accountable when it’s needed.
5. Hire and build with smarter people in different subjects needed to grow. One of those was adding a CFO who was able to take our financials to the next level and keep a tighter rein on expenses and operations. We now have an accounting team of 7.5 people that is executing at an extremely high level, and they’ve done great work through better automated processes and tools. Also, even in building my True Underdog podcast, we’ve hired a team that can take my burgeoning show to another level.
You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger.
The great thing about POWERHOME SOLAR is that our motto is Building A Movement, or BAM for short. We do that one solar panel, one customer and one employee at a time. We are building a movement of clean, green energy, and that not only helps customers potentially save money but also does a great thing for the planet. It’s easy for our employees to understand and get behind our mission because of the amount of good we can do for customers. I am very blessed to have a chance to lead this company.
How can our readers follow you on social media?
https://www.facebook.com/JaysonWallerBAM/
https://www.instagram.com/JaysonWallerBAM/
https://www.youtube.com/channel/UCwTHT2Q-omcIjTOwOa2imTQ
https://twitter.com/JaysonWaller
|
https://medium.com/authority-magazine/jayson-waller-of-powerhome-solar-5-things-i-wish-someone-told-me-before-became-a-founder-and-ceo-7eee15ea1b44
|
['Candice Georgiadis']
|
2020-12-03 19:06:08.345000+00:00
|
['Business']
|
What Are the Strategies Startups Should Follow to Succeed in the Present Taxi Business
|
You now own an on-demand taxi company, and if it is in the starting stage, now is the time to infuse some strategies into the business. The right strategies depend on the type of business and the business environment; things that worked for a specific business some years back may not be relevant now. So here I have handpicked some strategies for the app-based taxi business, keeping in mind the present trend and the future growth of the app-based taxi business.
Branding is the key
Once you have created a brand and people recognise your service by that brand, everything will fall into place. You can create a brand by providing a niche and unique service. Start with the target audience; once they are on board, move on to others. The name of the taxi firm and the design of the app play a big role in branding.
I believe you are using an Uber clone script for the app solution. If you are new, I would recommend using an Uber clone script that is similar to the Uber app. With a white label Uber script, you can easily customize the name and UI/UX of the solution, so you can establish a brand through trial and error. But be wary of customizing the app too frequently.
Use social media to the fullest
Social media is the best tool to establish a brand among millennials. Leave no stone unturned in social media marketing, since there is no extra cost involved in it. All the social media platforms are not the same, and each requires a different type of content and marketing. For example, marketing on Instagram should centre more on pictures than anything else. You can also provide loyalty points to riders who update their ride status on social media.
Don’t hurry and start small
You cannot become Uber or Ola overnight. Take the first step, and don’t take a further step until you get a good hold on that one. You should follow this at every step. Experimenting and taking risks is mandatory to make a mark in business, and that is easier to do while your business is small. If you expand your taxi business indiscriminately, you will lose grip on the business, and losing grip means losing everything. So narrow your focus and put your entire effort into it.
All services under one roof
You can extend the taxi business to provide dedicated services like elderly care, kids’ taxi, taxi for the physically challenged, and many more along with the regular service. You can allow the riders to select the type of service in the app. This encourages customers of all ages to prefer your taxi service. Since you provide multiple services under one roof, the service should be seamless, with multiple payment integrations, easy ride booking and scheduling, etc.
Up to date technology
Companies that invest a part of their revenue in development will sustain themselves in the long run. The taxi booking business is already loaded with technology like taxi dispatch systems and taxi app solutions. Going further, you have to invest in new technology for the tracking system, payment system, dispatch system, and many more. This helps to reinvent your business.
Think from the perspective of a customer
If you really want to improve the service, think from the place of a customer and have the psyche of a customer. Find the places where customers tend to book cabs and the places where they hesitate to take cabs. What will they be thinking in the middle of a traffic-filled street? Will they use public transport or take a taxi?
Associate with popular private spots like restaurants, hotels, malls, theatres, and other recreational spots. Place QR codes in these places so that riders can easily scan it and book a cab. This kind of work can strike a connection between you and the riders.
Conclusion
The strategies I have mentioned above are foolproof and can easily be followed. You don’t have to be a business pro to do all this. All the above points make your business stable by helping you have a good hold on the operations.
Earlier I told you about the white label app solution. There are many such taxi app solution providers globally, and only a few of them can deliver a top-notch taxi app that is similar to those of giant taxi firms like Uber and Ola. Do associate with such solution providers.
While doing customization in the taxi app, inform them about the various strategies I have mentioned. This enables them to be more specific during taxi app development.
|
https://medium.com/hackernoon/what-are-the-strategies-startups-should-follow-to-succeed-in-the-present-taxi-business-a092a0dc3d63
|
['Karthik Shanmugam']
|
2019-07-08 14:21:01.184000+00:00
|
['Uber', 'Uber Clone Script', 'Uber App', 'White Label Uber Script', 'Uber Clone']
|
Animated bar race ranking in R
|
Instructions with code for how to create a bar race ranking GIF animation in R.
GitHub repository is available here.
Step 1. Packages
The packages below need to be installed prior to running the later code:
library(ggplot2)
library(gganimate)
library(tweenr)
library(magick)
library(gifski)
library(dplyr)
library(magrittr)
library(countrycode)
library(grid)
Our ggplot will include flags positioned on the vertical axis. The package enabling this is called “ggflags”; however, it is not available on CRAN, so it needs to be installed using devtools, as shown in the code below. For the installation to be successful, it is necessary to install devtools first, which in turn requires Rtools to be present in the operating system.
devtools::install_github("ellisp/ggflags")
library(ggflags)
Step 2. Data
The data set represents total exports from Scotland to other countries for different years. Although the Excel spreadsheet below includes multiple columns, only the columns “Year”, “Country”, “Total” and “Rank” matter for this exercise. The data is available here.
In the data-manipulating code below, accomplished are three things:
The top 25 countries for each year are kept; other entries are discarded. The variable ‘Rank’ is multiplied by 120 to scale the bars. A new column representing ISO country codes is added.
# DATA
COUNTRIES <- read.csv("COUNTRIES.csv")
D <- COUNTRIES %>%
  group_by(Year) %>%
  top_n(n = 25, wt = Total) %>%   # keep the 25 countries with the largest totals within each year
  mutate(Rank = Rank * 120) %>%   # scale ranks to space the bars apart
  ungroup()
D <- D[with(D, order(Year, -Total)), ]   # order by year, then by descending total
D$Code <- tolower(countrycode(D$Country, "country.name", "iso2c"))   # lowercase ISO2 codes for ggflags
As a result of the data manipulation, the new table takes the following form:
Step 3. Plot
Next comes the creation of the ggplot to be animated. Please refer to inline comments to view explanations for each line of code.
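The original post embeds this code as a gist that is not reproduced here. Below is a minimal sketch (not the author’s original code) of what such a plot could look like, assuming the data frame D from Step 2 with the columns Year, Country, Total, Rank and Code; the bar width and the text/flag offsets are rough guesses chosen to fit the Rank * 120 spacing.
# PLOT (sketch)
p <- ggplot(D, aes(x = Rank, group = Country)) +
  geom_tile(aes(y = Total / 2, height = Total, width = 100), fill = "steelblue") +   # bars drawn as tiles so their height can animate
  geom_text(aes(y = 0, label = Country), hjust = 1.1, size = 3) +                    # country names just inside the axis
  geom_flag(aes(y = -60, country = Code), size = 6) +                                # flags from ggflags, lowercase ISO2 codes
  coord_flip(clip = "off") +        # horizontal bars, allow drawing outside the panel
  scale_x_reverse() +               # rank 1 at the top
  theme_minimal() +
  theme(axis.text.y = element_blank(), legend.position = "none") +
  labs(title = "Scottish exports: {closest_state}", x = NULL, y = "Total exports") +
  transition_states(Year, transition_length = 4, state_length = 1) +   # animate between years
  ease_aes("cubic-in-out")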
Step 4. Animation
The plot above has been assigned to the variable “p”. This is then fed to the function animate() from the package “gganimate”. It also requires that the user specifies the frames per second, the duration of the animation, and the size in pixels.
# ANIMATION
animate(p, fps = 30, duration = 30, width = 800, height = 600)
This will then render the animation.
Voila!
GitHub repository is available here.
|
https://medium.com/data-detective/animated-bar-race-ranking-in-r-e63440f149da
|
['Kamil Szymkowski']
|
2020-12-17 10:46:15.887000+00:00
|
['GIF', 'Data Visualization', 'Ggplot2', 'Gganimate', 'R']
|
Focal Stellia review: The most exquisite headphone we’ve reviewed to date
|
French high-end brand Focal is relatively new to the headphone game—its first model was introduced in 2012—but the company has more than 40 years of experience designing audiophile speakers and drivers, so it’s no surprise that Focal headphones are among the most highly regarded in the world. TechHive has reviewed two—the Clear and the Elegia—deeming both to be exceptional, though not inexpensive by any means.
The latest addition to Focal’s stable of headphones is the Stellia. Given what I’ve heard from the company’s headphones in the past, I was eager to give this one a long listen—and I was not disappointed!
This review is part of TechHive’s coverage of the best headphones, where you’ll find reviews of competing products, plus a buyer’s guide to the features you should consider when shopping.
Features
The Focal Stellia is a circumaural (around-the-ear) closed-back design that connects to the source device with actual wires—no Bluetooth here! In fact, the Stellia comes with two hefty cables: a 3-meter (10-foot) cable with an XLR connector intended for home use with an amplifier such as the Focal Arche (review forthcoming) and a 1.2-meter (4-foot) cable with a 3.5mm connector for mobile devices. Also included is a 1/4-inch adaptor that screws onto the 3.5mm plug, which is more secure than a snap-on design. At first, I thought the XLR cable facilitated a balanced connection, but it doesn’t; the connectors for each earpiece have only two conductors (signal and ground), so the connection is not balanced.
Focal The headband and earpiece yoke mechanisms are derived from Focal’s Utopia headphone, and they contribute to the Stellia’s high comfort factor.
The headband and earpiece yoke mechanisms are derived from the company’s Utopia headphone, a flagship, open-back model introduced in 2016. The consistent curve between these elements is said to offer superior comfort regardless of the shape and size of the listener’s head. In addition, the earpads consist of high-resilience memory foam that, along with the headband, is covered with full-grain leather.
Of course, the earcups are designed for more than comfort; they provide excellent isolation from ambient sound and an acoustic environment specifically designed for the full-range drivers. In fact, the earcups include two vents: one to control the balance between the bass and midrange and the other to evacuate the rear wave from the center of the motor to avoid compression and extend the low-frequency response—in other words, this is a bass-reflex headphone! At the other end of the sonic spectrum, EVA (ethylene-vinyl acetate) foam behind the drivers absorbs excess high-frequency energy. Finally, acoustic diffusers break up any standing waves and make the earcups more rigid and inert.
Focal The earcup and driver are carefully designed to deliver glorious sound quality.
The full-range drivers themselves are equally well-designed. The 40mm diaphragm is a pure-beryllium dome with very low mass, high rigidity, and high damping. The cross section of the diaphragm resembles the letter “M,” which is said to offer even higher rigidity for more precision and less distortion. Also, the Focal-exclusive frameless pure-copper voice coil reduces the mass even further. The result is a frequency response from 5Hz to 40kHz (±3 dB) with a THD of 0.1% @ 1 kHz/100 dB SPL and a sensitivity of 106 dB SPL/1 mW @ 1 kHz. The impedance is 35 ohms, making the Stellia relatively easy to drive.
I don’t normally comment on a product’s packaging, but I will in this case—it’s as high-end as the Stellia itself. A large, sturdy box includes the headphone in its rigid, woven-cover case along with separate case for the cables. Even the documentation, sparse as it is, comes in its own leather folder.
Vendor-provided art. The Stellia’s packaging is quite impressive.
Performance
At first, I couldn’t find any indication of which sides of the Stellia are left and right, but I finally found small labels on the bottom of each earcup near the cable connector. The cables for each side are marked as well.
The oversized earpads easily fit over my relatively large ears, and they are supremely comfortable. They don’t clamp down on the head, yet they provide excellent isolation from ambient noise. I could easily spend hours wearing the Stellia—and I did! As usual, I listened to tracks from Tidal’s Master high-res audio library playing from my iPhone XS through an iFi hip-dac, reviewed here.
Focal The Stellia does not clamp onto the head, offering the ultimate in comfort.
One of my favorite recording artists these days is young Jacob Collier, who is rightly hailed as a Millennial Mozart. His genre isn’t classical music, but rather an eclectic mix of jazz, rock, and pop that goes far beyond all three. For this review, I listened to his version of Lionel Richie’s “All Night Long” from the album Djesse Vol. 1. The clarity was amazing, with exceptional delineation of instruments and vocals, allowing me to hear well into the mix even as it remained entirely cohesive. The bass was deep but well balanced with the rest of the sonic spectrum, and the vocals were beautifully rendered.
Next up was some modern country from Rascal Flatts—specifically, “Feel It In The Morning” from How They Remember You. The mix is rich and dense, yet I could distinguish each element perfectly clearly. Everything was superbly balanced, and the bass was deep and clean. Also, the soundstage was wide with excellent imaging of each instrument and voice.
Focal The leather that wraps the earpads and headband is no less luxurious than that used in fine gloves.
As I was looking around the Tidal Master Library, I happened to listen to “Hallelujah!” from Creeper’s new album Sex, Death, and the Infinite Void. It’s a 45-second intro to the album with recorded rainfall and thunder with some low synth sounds and spoken words, all of which sounded wonderful; the rainfall and thunder were especially effective, encompassing a wide soundstage. When the first song started, I discovered it was quite thrashy and punk-like, so I quickly stopped it—not my cup o’ tea!
Another new discovery was “Running Away” by Omar Rodriguez-López from The Clouds Hill Tapes, Pt. III. This track opens with vocal chords and adds super-low synth bass, lead female vocal, piano, and other synth sounds. I was especially impressed with the synth bass, which was super-clean and perfectly balanced with the rest of the mix, and the piano was beautifully rendered.
Turning to some jazz, I listened to “How High The Moon” as sung by Patti Austin with Afro Blue and the Count Basie Orchestra on Ella 100: Live at the Apollo. Patti was definitely channeling Ella Fitzgerald, and the Stellia highlighted her vocal skills perfectly. Again, the clarity and presence of the entire sound was startling, and a wide soundstage spread out the band and background vocals in front of me. I would bet the sound I heard was better than actually being there.
Focal The stiff, molded case cradles and protects the Stellia and its cable.
One of my favorite artists from the 1980s is guitarist Pat Metheny, and one of my favorite Metheny albums is First Circle, which was remastered in 2020. I listened to “Forward March,” a short intro to the album that starts with a deep bass drum and a terrible, out-of-tune marching band. It almost hurts to listen to this cacophony, but it’s hilarious at the same time, and on the Stellia, I could hear each cracked note and inaccurate entrance all too clearly. After a couple of minutes, the music transitions to “Yolanda, You Learn,” a hard-driving jazz/rock track with synth guitar, drums, bass, and some vocals, all of which were super-clear, distinct, and well-balanced.
For a taste of classical, I found a beautiful version of the second movement from Dvorak’s Symphony No. 9, “New World,” arranged by Tamas Batiashvili to feature Lisa Batiashvili (relation unknown) on solo violin with Rundfunk-Sinfonieorchester Berlin under the direction of Nikoloz Rachveli on the album City Lights. On the Stellia, the violin sounded beautifully natural with gorgeous high harmonics, and the orchestra was exceptionally rendered with perfect balance between the rich basses and the rest of the sections, which I could discern perfectly.
I also wanted to try something from Tidal’s 360 Reality Audio library, which offers headphone-based immersive audio using Sony’s technology of the same name. I reviewed the technology here. For this review, I listened to “Tambo in 7/4” by Airto Moreira, remastered from his 2011 album Fingers. The Stellia rendered an excellent immersive soundstage with instruments all around and appearing to be outside the confines of the headphone, though there wasn’t much overhead.
Focal Focal’s Stellia headphone boasts frequency response from 5Hz to 40kHz (±3 dB) with a THD of 0.1% @ 1 kHz/100 dB SPL and a sensitivity of 106 dB SPL/1 mW @ 1 kHz.
Bottom line
The Focal Stellia is the finest headphone I’ve ever heard, with exceptional clarity, superb soundstage and imaging, and an exceedingly well-balanced presentation of deep bass, clean mids, and sparkling highs. Even better, it’s super-comfortable to wear for extended periods, which I happily did.
Of course, that level of performance comes at a high cost—nearly $3,000 in this case. But that price tag is entirely justified; not only is the performance second to none, Focal used the highest quality materials in the construction and even in the packaging. The company touts it as a headphone you can take with you when you’re out and about, but at that price, I wouldn’t risk it.
Other than the high price tag, the only con I see is the lack of a balanced connection. I don’t consider this to be a big drawback, but some headphone aficionados might, especially for that kind of money. Theoretically, a balanced connection can increase an amplifier’s voltage-swing range while reducing THD (total harmonic distortion) and crosstalk, supposedly resulting in more refined detail and tighter low bass among other benefits. But you need a balanced amplifier and cable as well as the appropriate internal wiring within the headphone. Perhaps most importantly, any improvement to the Stellia’s sound quality would be marginal at best; I can’t imagine it sounding noticeably better than it already does.
If you’re thinking about investing in the Stellia and a good headphone amp, Focal is offering a package deal with its Arche DAC/amp, which sells for $2,490 by itself. But you can buy them together on Amazon for a total of $4,000, which saves you $1,480 off the combined individual prices. I’ll be reviewing the Arche in the next couple of months, so stay tuned for that. Incidentally, if you already own a Focal Stellia, Utopia, or Clear headphone, Focal will send you a $1,000 voucher toward the purchase of the Arche. You’ll find the details here.
Meanwhile, I’ll be listening to the Focal Stellia as much as I can, basking in its magnificence.
|
https://medium.com/@Michael68790029/focal-stellia-review-the-most-exquisite-headphone-weve-reviewed-to-date-c6f744cf13ef
|
[]
|
2020-08-25 12:02:54.153000+00:00
|
['Chromecast', 'Entertainment', 'Services', 'Streaming']
|
GridDB on ARM, Time Series Database for your Raspberry Pi! | GridDB: Open Source Time Series Database for IoT
|
In this blog post, we’ll demonstrate how to build GridDB for 64-bit ARM and run it on a Raspberry Pi 3 or 4 running Ubuntu Server 18.04. In many use cases, it makes sense to store time series data in a NoSQL database at the edge of an IoT solution. Low-cost, low-power ARM devices are an excellent method of achieving this.
First, why Ubuntu Server 18.04? Well, Ubuntu Server 18.04 is the only Linux distribution for the Pi 3 or Pi 4 that is 64-bit and has GCC 4.8 packages, two things GridDB needs. It can be downloaded here and written to an SD card with the Raspberry Pi Imager as a custom image.
Build the Server on Raspberry Pi
Next, with Ubuntu 18.04 installed and running on your Pi, we can install the build dependencies:
$ apt-get -y install gcc-4.8 g++-4.8 build-essential git tcl tk ant libz-dev autoconf automake
There are fixes for ARM that are not in the main GridDB git repository but are included in the griddbnet/griddb_arm fork.
Now, we can build the server:
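# Clone the ARM fork first (repository URL assumed from the fork name mentioned above)
$ git clone https://github.com/griddbnet/griddb_arm.git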
$ cd griddb_arm
$ ./bootstrap.sh
$ CC=gcc-4.8 CXX=g++-4.8 ./configure
$ make
Start GridDB, Build and Run the Java Sample
With the build complete, we simply follow the README to start GridDB and run the Java sample.
Configure the server:
$ export GS_HOME=$PWD
$ export GS_LOG=$PWD/log
$ export PATH=${PATH}:$GS_HOME/bin
Start the server:
$ bin/gs_passwd admin -p admin
$ sed -i -e s/\"\"/\"defaultCluster\"/ conf/gs_cluster.json
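# The commands above only set the admin password and cluster name; the node still
# needs to be started and joined to the cluster. A sketch assuming the standard
# GridDB CE operating tools (check the repository README for the exact flags):
$ export no_proxy=127.0.0.1
$ bin/gs_startnode -u admin/admin -w
$ bin/gs_joincluster -c defaultCluster -u admin/admin -w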
Build and run the sample program:
$ export CLASSPATH=${CLASSPATH}:$GS_HOME/bin/gridstore.jar
$ mkdir gsSample
$ cp $GS_HOME/docs/sample/program/Sample1.java gsSample/.
$ javac gsSample/Sample1.java
$ java gsSample/Sample1 239.0.0.1 31999 defaultCluster admin admin
--> Person: name=name02 status=false count=2 lob=[65, 66, 67, 68, 69, 70, 71, 72, 73, 74]
Build the C Client
Like the server, the C client requires a few small fixes to run on ARM and we build and install it using the instructions in the README:
$ git clone https://github.com/griddbnet/c_client_arm.git
$ cd c_client_arm/client/c
$ ./bootstrap.sh
$ ./configure
$ make
$ sudo make install
With that complete, you have GridDB running on your ARM-based Raspberry Pi 3 or 4. It’s also easy to build Python, NodeJS or other GridDB clients following the instructions to further enhance your experience.
|
https://medium.com/griddb/griddb-on-arm-time-series-database-for-your-raspberry-pi-64b22aa5a59c
|
['Israel Imru']
|
2020-08-12 23:49:40.775000+00:00
|
['Time Series Database', 'Internet of Things', 'Griddb']
|
Something Like Human
|
(photo credit: Carmen B. Pingree Autism Center of Learning)
cw/tw: suicide, substance abuse, hypersexuality
If a psychiatrist doesn’t diagnose someone with a mental “disorder/illness/disease,” does that mean that the person doesn’t have said “disorder”? And what happens when a diagnosis is inaccessible to the people who need it?
Unknowingly, those questions have hung in the air for me across my entire life. And they’re questions I’ve spent most of this year trying to answer.
While it is cliché to say – even more so due to the awareness of said cliché – I was definitely an outlier growing up. I was very quiet, I wasn’t very social, I was highly sensitive, I got mad pretty quickly, I cried a lot (aka I cried more than boys (I’m agender, btw; they/them are my pronouns) were expected to, which is not at all), I was always one of the smartest students in my classes, and prior to fifth grade none of that mattered much. At home, being highly sensitive was an issue every now and then, but that was it. My classmates and friends let me be – with the exception of a rare few who were my archenemies – and school was fine.
And then my family and I moved, I became the new kid again, and a massive shift in my understanding of who I was occurred. All the things that made me different from people, especially in comparison to my older brother, were weaknesses that everyone seemed superhumanly attuned to relentlessly attack.
In my previous experiences as The New Kid™️, classmates (who later became friends) comforted me and acted as guides while I got acclimated. On my first day as The New Kid™️ in my new fifth grade homeroom class, a classmate (who later became my introduction to abusive platonic relationships) asked me, “Are you a boy or a girl?” as I was crying into my arms on my desk. And so began eight years of bullying because I was me, and being me was always a social sin.
Everyone around me seemed capable of doing what normal pre-teen and teenage human beings were supposed to be doing – talking a lot, making friends, cutting ass on each other, having crushes, wanting to date, etc. – which at first made me question if I was human, and then wonder if I was defective. I also started to wonder if I was actually adopted because I started to feel like I couldn’t possibly be related by blood to the people I was supposed to call family.
By the time I graduated high school, I burned myself out trying (and failing) to be anyone but me in hopes that the bullying would subside or, at least, hurt less while trying not to drown in existential dread. Neither of those things happened.
Then I was suddenly granted freedom from the closed loop that was my social, school, and home life. And like Mike Shinoda says in Linkin Park’s song “Part of Me,” “Freedom can be frightening if you’ve never felt it.”
I was able to go out and make friends and be someone close to who I actually was around people in places so far removed from my life before then.
It felt good . . . liberating even . . . but only for a moment.
It didn’t take long for the realization to set in that I had to really figure out who I was, and why I was so noticeably different from everyone around me. Soon enough I found myself amidst a self-destructive, eleven-year-long journey of trying to find anything – alcohol, drugs, sex – to help me perform being human the way I was expected to, and quell all the anger and self-hatred I carried around from not being able to be human the way I was expected to. And it all culminated in a failed suicide attempt that cost me meaningful friendships.
After that, I almost made peace with the idea that I would likely never really find what I was looking for, and that it was pointless to keep obsessing over finding an answer in hopes that I could finally make sense of why it had to be me who people decided to bully relentlessly.
Almost.
But then a former roommate of mine brought it to my attention that I may be autistic.
And things started to click.
I was excited by the idea that I finally found the answer I nearly killed myself trying to find, and skeptical of it as well because of how simple it was. So I spent a few months reading everything I could about autism from psychiatrists, research papers, and fellow autistic people, as well as watching a ton of YouTube videos about autism made by autistic people. And I felt confident enough to publicly self-diagnose as autistic. I even took multiple different online autism diagnostic tests just to make sure, and all answers pointed to the fact that I am autistic. A minimally verbal autistic person at that.
Throughout that process and up until recently, I spent a lot of time going over different moments in my life that, in hindsight, screamed “I’m autistic!” Sometimes I’d be filled with joy and feel a sense of euphoria because I finally understood myself like I had always hoped, and it allowed me to continue healing so I could become the person I need to be for myself. Other times, it brought sadness because I had been denied so much time to learn to be myself.
Then I began the journey for a professional diagnosis to have it on record, since self-diagnosis is, unfortunately, widely frowned upon and misunderstood to be “illness anxiety disorder” (even though self-diagnosing is not inherently more harmful or inaccurate than professional/medical diagnosing anyway), which leads me back to the two questions I asked at the beginning of this piece:
If a psychiatrist doesn’t diagnose someone with a mental “disorder/illness/disease,” does that mean that the person doesn’t have said “disorder”? And what happens when a diagnosis is inaccessible to the people who need it most?
Before I answer that, I’ll address the quotation marks around disorder, illness, and disease:
‘How has the DSM evolved to become seen as the “authoritative medical guide to all of mental suffering”?
The credibility of psychiatry is tied to its nosology. What developed over time is the number of diagnoses, and, more importantly, the method by which diagnostic categories are established.
You’re a practicing psychotherapist. Can you define “mental illness”?
No. Nobody can. The DSM lists “disorders.”
How are disorders different from diseases or illnesses?
The difference between disease and disorder is an attempt on the part of psychiatry to evade the problem they’re presented with. Disease is a kind of suffering that’s caused by a bio-chemical pathology. Something that can be discovered and targeted with magic bullets. But in many cases our suffering can’t be diagnosed that way. Psychiatry was in a crisis in the 1970s over questions like “what is a mental illness?” and “what mental illnesses exist?” One of the first things they did was try to finesse the problem that no mental illness met that definition of a disease. They had yet to identify what the pathogen was, what the disease process consisted of, and how to cure it. So they created a category called “disorder.” It’s a rhetorical device. It’s saying “it’s sort of like a disease,” but not calling it a disease because all the other doctors will jump down their throats asking, “where’s your blood test?” The reason there haven’t been any sensible findings tying genetics or any kind of molecular biology to DSM categories is not only that our instruments are crude, but also that the DSM categories aren’t real. It’s like using a map of the moon to find your way around Russia.’
This isn’t to say that mental health conditions aren’t real – that would be the anti-psychiatry argument – it’s more that psychiatry is flawed in very obvious ways and is in desperate need of an overhaul, which is where critical psychiatry comes in. Because of racism, white supremacy, anti-Blackness, misogyny, homophobia, transphobia, and every other system of oppression, getting a diagnosis – let alone the correct one – is not as easy as some may think, especially because of how systems of oppression affect how psychiatrists and psychologists interact with marginalized communities (which also highlights the biggest issue in our current push for everyone to get professional help in regards to mental health). Not to mention that the research used to inform how we understand mental health conditions suffers greatly from a lack of diversity.
So, what happens when a mental health condition shows up in a Black patient, and most professionals’ only understanding of it is based on how it presents in white boys and white men who come from Western, educated, industrialized, rich, democratic backgrounds? If they’re even properly informed about it to begin with (since most people, psychiatrists and psychologists included, regard autism as something to be feared and dreaded, and treat autistic people as if they’re only something like human . . . if they even consider the humanity of autistic people at all).
And what happens when psychiatry and psychology don’t take into consideration how prevalent PTSD and c-PTSD are in marginalized communities because their only understanding of those conditions is based on how they present in white people who come from Western, educated, industrialized, rich, democratic backgrounds?
In my experience, psychiatrists and psychologists opt to dismiss anything I say because after two minutes of conversation they decided I wasn’t autistic since I was “too self-aware,” “too smart,” “too functional,” and I made eye contact. I’ve also been just as quickly diagnosed as possibly having borderline personality disorder, and told that it’s just “severe depression and trauma” that’s the issue, not autism (as if autistic people can’t also be “severely depressed and traumatized”). And I’m sent back to square one in my attempt to get help that seems almost mythical at this point.
So what is a severely depressed and traumatized queer agender autistic Black person supposed to do about getting the right diagnoses from an almost completely homogeneous field of professionals that hardly even have experience considering autism outside of its limited white scope? Do I keep hopping from professional to professional in hopes that one of them will be well-informed about autism in the Black community, queer and trans-affirming, willing to respect me enough to take what I say seriously, and be covered by my health insurance or offer free autism diagnostic tests, knowing the chances of that are already incredibly slim? And if this is my only course of action, it’s both an indictment of psychiatry and a reason why we need to adopt a radical and critical approach to psychiatry.
The inaccessibility of many mental health resources due to ableism, racism, misogyny, white supremacy, etc. should be a bigger concern now that we’re normalizing talking about mental health issues and telling people to get professional help. It creates a problem where so many people have gone through, and are currently going through, life without access to services that could help them lead healthier and more fulfilling lives, especially given the stigma attached to every mental health condition; I would argue that labeling these conditions as disorders or illnesses or sicknesses makes people less likely to seek professional help, since no one wants to be associated with anything that people perceive to mean “crazy or sick in the head.” And even if people do seek professional help, the difference between getting the right diagnosis and proper treatment and getting misdiagnosed and being subject to neglect, abuse, trauma, and other forms of harm (ex: the harm caused by psychiatric medication, and sexual abuse in psych wards) is basically which psychiatrist you end up with.
It isn't talked about enough (due to ableism in leftist/radical/revolutionary spaces), but we can't ever hope to build an equitable, inclusive, and just world if we don't also demand radical change in psychology and psychiatry. I understand there's a general hesitancy around critiquing psychiatry and psychology for fear of entertaining or engaging in pseudo-science, anti-psychiatry, and pseudo-intellectual thought, a hesitancy that exists alongside the glorification of psychiatrists and psychologists rooted in the idea that their education and training make them inherently more knowledgeable about the human mind than the woefully self-unaware masses. But we do ourselves a disservice if we never engage with psychology and psychiatry beyond accepting what psychiatrists and psychologists say at face value. It's imperative that we become as engaged with psychology and psychiatry as we say we need to be about politics.
|
https://medium.com/@keixcruick90/something-like-human-10cf5a5bbd3
|
['Keillan Cruickshank']
|
2020-12-22 18:23:19.488000+00:00
|
['Politics', 'Psychiatry', 'Mental Health Awareness', 'Racial Justice', 'Autism Spectrum Disorder']
|
Money Laundering: Independent review in context of Pakistan
|
United Nations Office on Drugs and Crime
The current era of globalization has made the global economy, the global village, more interactive, interconnected, unified and organized. This has been a means of development, but the same development that has opened the floodgates of ease and prosperity for ordinary people has also opened them for criminals.
Less developed economies and especially developing and informal economies like Pakistan are more at risk of falling into the clutches of money launderers because developed economies have strong financial controls and effective laws to monitor the activities that pose these risks.
Weak financial systems and frail controls make developing countries attractive targets for this crime. Interpol defines money laundering as any effort to conceal illicit earnings so that the income appears legitimate.
Money laundering is a heinous crime that is deliberate and well-executed. It is not a crime against any one individual but against the nation, the economy and the rule of law, and it worsens the social, political and economic condition of the country.
Analysis:
Money is made from heinous activities like terrorism, the illegal sale of arms, illicit trade, speculation, prostitution, embezzlement and so on, and it is then transferred to safe havens.
To understand money-laundering, it is divided into three stages.
Placement
This stage can be considered the initial stage. Here, the illicit money is separated from its source and divided into several parts, and these parts are deposited separately in different banks and financial institutions at different times. Investments are made in different places. In simple words, the money is spread everywhere. Because such a large amount of money is distributed in small pieces, officials are less likely to notice it.
Layering
As the name of this stage suggests, money is transferred from one account to another and from one bank to another. In addition, the money is invested in various financial instruments and then sold, and the proceeds are diverted to more complex destinations, including digital banks and cryptocurrencies. The world's biggest money launderers throw so much money into a financial market that they can move the market at will and profit from it.
Integration
At this stage, money launderers reintroduce the funds into the economy. After the layering phase, the money is converted into land, commodities or expensive assets. Once the money is integrated, it is very difficult, and often impossible, to trace. Therefore, it is very important to track the money down before it reaches this stage and to decide on the punishment for it.
The phenomenon of money laundering, among other economic and financial crimes, has had considerable success in infiltrating the economic and political structures of Pakistan, resulting in economic regression and political instability. Although Pakistan has responded, and continues to respond, to the menace of money laundering through legislative measures at the national level, money launderers have exploited the tax regulatory environment, vulnerable financial systems and the persistent civil and political unrest in Pakistan.
To speak critically of Pakistan: the tactlessly ignorant bureaucracy, the crooked governments, the judiciary and weak institutions, and the very powerful military that receives its stake have all washed their hands in this flowing Ganges.
Pakistan's weak establishment and incompetent authorities made money out of a mountain of corruption, which was exposed in 2016 by the data leak known as the 'Panama Papers'. These odious crimes have given rise to terrorism, bigotry, prostitution, unemployment and hunger in Pakistan and have severely eroded Pakistani social values.
A common myth associated with the concept of money laundering is that black money and money laundering are two names for the same thing, but in reality they are two different crimes. Consider an interview aired on the BBC in which a former Pakistani finance minister, who proudly calls himself a UK-qualified chartered accountant, leaned on his tax returns to cover up his crime while knowing the amounts involved. Black money can be earned through legitimate means, but when it is hidden from the tax authority so that it does not have to be taxed, it becomes black money. Money from corruption and laundering is largely diverted from the national economy to the developed world. Then the same amount, or part of it, is returned to the home country as a loan, to one's own advantage and to the detriment of the nation.
|
https://medium.com/@ammar-islam253/money-laundering-independent-review-in-context-of-pakistan-9099c3c9069a
|
['Ammar Ul Islam']
|
2020-12-05 21:58:25.533000+00:00
|
['Pakistan', 'Fatf', 'Developing World', 'Money Laundering', 'Social Change']
|
Digital Analytics Scholarship @ CXL Institute — 5th Week Review
|
I have always thought of Google Sheets and Excel as tools for data processing, but I never thought that someday I would need to use them for digital marketing purposes. Now, thanks to Fred Pike and CXL Institute, I know that a marketer should also know how to use them like a pro. I am going to tell you in advance that I've learned a lot in this course.
5- Excel and Sheets for marketers by Fred Pike
This is a combination of Excel and Google Sheets courses for marketers. It is actually awesome to learn how other marketers use these two great products in their daily lives and I’m excited to start it.
Fred started by using filter and sort in both tools. As Fred described it, it is a little bit more difficult to do these jobs in google sheets. But in the end, both tools are able to do them.
I would rather use Google Sheets over Excel because I think Google Sheets is more up to date and suits my needs better. But the choice is yours.
The next part was focused on the sum functions and by functions, I mean SUM, SUMIF, and SUMIFS. They are so easy to use and so capable of doing the work that you want to be done.
Fred talked about the absolute and relative referencing which was very well timed and became relevant during the lesson.
The great thing about the sum functions is that they are identical between excel and google sheets. You simply learn how to use them and after that, both tools are yours to use.
Guess what was the next lesson about? It was about count functions and as you might know, they are identical between excel and google sheets. Pretty much like sum functions, we have COUNT, COUNTA, COUNTIF, and COUNTIFS for different conditions.
And after that, Fred went to his beloved Excel to work on simple styles for table headers. Then he made a table out of the data in the Excel file. To tell you the truth, I didn't know the difference between tables and raw data. I always thought that all data in Excel was automatically formatted as a table!
Then we dived into pivot tables both in excel and google sheets. Fred constantly talks about the weaknesses of google sheets which I don’t like to hear about. But he likes to talk about it.
I tried to create a pivot table that was introduced in the lesson in google sheets. It was damn easy and I did almost nothing to create it. The idea about rows and columns was very easy and I populated the cells with the data that I wanted.
But after that, I tried to do the same job in excel on my mac and I failed to do so. I don’t know whether it was just bad luck or Excel had a worse user interface. Fred showed in the video that you can use “analyze data” to expand or contract the rows based on your will. He also used some table variations that he evaluated as great, but I didn’t like them that much.
Believe me or not, Fred talked about calculated fields in google sheets and excel in the next lesson. The whole idea was to make fields with accurate data to use them in the pivot table.
Luckily, this time both excel and google sheet had the feature to add calculated fields and I think google’s approach was much cleaner to this specific topic.
Data by itself doesn't convey much meaning, so we have to segment it to take a more specific look at it. Excel lets us do this in its pivot tables, and as it seems, Google Sheets can't do that for now.
Microsoft excel has three segmentation tools for pivot tables. One of them is filters that let you have a drop-down and segment data based on whatever you want. After that, we have a great thing in excel called Slicer. By using a slicer, you can have a simple visualized representation of your filters or controls. It is very much like a dashboard tool that is right in front of your eyes and you can mess with it visually.
Finally, we have a timeline feature in the segmentation tools which only accepts time as input. By using a timeline, you can limit your data by a specific time. This too is like the things I’ve seen in the google data studio and reminds me of the features that are available in that google tool.
Oh, I didn’t know that we have slicers in google sheets as well. They work much better than their equivalents in excel. You can use them along with filters to segment your data.
And you know what? You can save your changes in the saved view to call it later. This feature reminded me of the saved reports in google analytics. They work in quite the same way.
So finally Fred got the chance to brag about his beloved excel. He said that pivot tables in excel have some features that google sheets currently does not. You can make named tables in excel and when you update your columns, the data in your pivot table responds to those changes.
There are also pivot charts in excel and google sheets doesn’t have it. And oh, I almost forgot to say that you can assign a specific tab in excel to filters that you apply to your pivot tables.
Do you have a dataset that you don't know is clean or not? Then you would need to use the deduplicate feature of Excel and Sheets. It is rather easy to use and can remove duplicate entries.
And after that, you can use the text-to-columns feature. Using it, you can split long strings of text into smaller pieces. I was very excited once I learned what it can do for me.
|
https://medium.com/@mamysamy/digital-analytics-scholarship-cxl-institute-5th-week-review-ecd87595126b
|
['Mohammad Sammak']
|
2020-12-21 06:30:20.728000+00:00
|
['Microsoft', 'Google', 'Excel', 'Google Sheets']
|
Numpy Cheat Sheet
|
This is a long note, make yourself a cup of tea, and let’s get started!
As always, we need to import NumPy library:
import numpy as np
1. N-Dimensional Array (Ndarray)
What are Arrays?
Arrays are a data structure for storing elements of the same type. Each item stored in an array is called an element. Each location of an element in an array has a numerical index, which is used to identify the element.
1D vs 2D Array
1D array (i.e., single dimensional array) stores a list of variables of the same data type. It is possible to access each variable using the index.
1D array
2D array (i.e, multi-dimensional array) stores data in a format consisting of rows and columns.
2D array
Arrays vs Lists
Arrays use less memory than lists
Arrays have significantly more functionality
Arrays require data to be homogeneous; lists do not
Arithmetic on arrays operates elementwise on the data, rather than concatenating or repeating it as it does with lists (see the quick sketch below)
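To make that last point concrete, here is a minimal sketch (the variable names are just for illustration, and np is assumed to be imported as above):
py_list = [1, 2, 3]
np_arr = np.array([1, 2, 3])
py_list + py_list # Lists concatenate
>>> [1, 2, 3, 1, 2, 3]
np_arr + np_arr # Arrays add elementwise
>>> array([2, 4, 6])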
NumPy is used to work with arrays. The array object in NumPy is called ndarray .
Create a Vector
To create a vector, we simply create a one-dimensional array. Just like vectors, these arrays can be represented horizontally (i.e., rows) or vertically (i.e., columns).
# Create 1 dimensional array (vector)
vector_row = np.array([1,2,3]) # Create vector as a row
>>> array([1, 2, 3])
vector_column = np.array([[1],[2],[3]]) # Create vector as a column
>>> array([[1],
[2],
[3]])
Create a Matrix
To create a matrix, we can use a NumPy two-dimensional array. In our example, the matrix contains two rows and three columns.
matrix = np.array([(1,2,3),(4,5,6)]) # Two dimensional array
>>> array([[1, 2, 3],
[4, 5, 6]])
Creating a Sparse Matrix
A sparse matrix is a matrix in which most of the elements are zero. Sparse matrices only store nonzero elements and assume all other values will be zero, leading to significant computational savings.
Imagine a matrix where the columns are every article on Medium, the rows are every Medium reader, and the values are how long (minutes) a person has read that particular article. This matrix would have tens of thousands of columns and millions of rows! However, since most readers do not read all articles, the vast majority of elements would be zero.
Let's say we want to create a NumPy array with two nonzero values and then convert it into a sparse matrix. If we view the sparse matrix, we can see that only the nonzero values are stored:
from scipy import sparse
matrix_large = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[3, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
# Create compressed sparse row (CSR) matrix
matrix_large_sparse = sparse.csr_matrix(matrix_large)
print(matrix_large_sparse)
>>> (1, 1) 1
(2, 0) 3
In the example above, (1, 1) and (2, 0) represent the indices of the non-zero values 1 and 3 , respectively. For example, the element 1 is in the second row and second column.
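If we later need the dense representation back, for example to pass the data to a function that doesn't accept sparse input, SciPy's toarray() method rebuilds the ordinary ndarray (at the cost of the memory savings):
matrix_large_sparse.toarray() # Convert the sparse matrix back to a dense ndarray
>>> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[3, 0, 0, 0, 0, 0, 0, 0, 0, 0]])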
Create Special Ndarray
np.zeros() function returns a new array of given shape and type, filled with zero.
# Create 1d array of zeros, length 3
np.zeros(3)
>>> array([0., 0., 0.])
# 2x3 array of zeros
np.zeros((2,3))
>>>array([[0., 0., 0.],
[0., 0., 0.]])
np.ones() function returns a new array of given shape and type, filled with one.
# Create 3x4 array of ones
np.ones((3,4))
>>> array([[1., 1., 1., 1.],
[1., 1., 1., 1.],
[1., 1., 1., 1.]])
np.eye() function returns a matrix having 1’s on the diagonal and 0’s elsewhere.
# Create 5x5 array of 0 with 1 on diagonal (Identity matrix)
np.eye(5)
>>> array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 0., 0., 1.]])
np.linspace() function returns an evenly spaced sequence in a specified interval.
# Create an array of 6 evenly divided values from 0 to 100
np.linspace(0,100,6)
>>> array([ 0., 20., 40., 60., 80., 100.])
np.arange(start, stop, step) function returns the ndarray object containing evenly spaced values within the given range.
The parameters determine the range of values:
start defines the first value in the array. stop defines the end of the array and isn’t included in the array. step is the number that defines the spacing (difference) between every two consecutive values in the array and defaults to 1 .
Note: step can’t be zero. Otherwise, we will get a ZeroDivisionError . We can’t move away anywhere from start if the increment or decrement is 0 .
# Array of values from 0 to less than 10 with step 3
np.arange(0,10,3)
>>> array([0, 3, 6, 9])
np.full(shape, fill_value) function returns a new array of a specified shape, fills with fill_value .
# 2x3 array with all values 5
np.full((2,3),5)
>>> array([[5, 5, 5],
[5, 5, 5]])
np.random.rand() function returns an array of specified shape and fills it with random values.
# 2x3 array of random floats between 0–1
np.random.rand(2,3)
>>> array([[0.37174775, 0.59954596, 0.50488967],
[0.22703386, 0.59914441, 0.68547572]])
# 2x3 array of random floats between 0–100
np.random.rand(2,3)*100
>>> array([[23.17345972, 98.62743214, 21.40831291],
[87.08603104, 84.23376262, 63.90231179]])
# 2x3 array with random ints between 0–4
np.random.randint(5,size=(2,3))
>>> array([[2, 3, 4],
[4, 4, 0]])
2. Array shape manipulations
Shape
It will be valuable to check the shape and size of an array both for further calculations and simply as a gut check after some operation.
NumPy arrays have an attribute called shape that returns a tuple with each index having the number of corresponding elements.
arr = np.array([(1,2,3),(4,5,6)])
arr.shape # Returns dimensions of arr (rows,columns)
>>> (2, 3)
In the example above, (2, 3) means that the array has two dimensions: 2 rows and 3 columns.
Reshape
It is important to know how to reshape NumPy arrays so that our data meets the expectations of specific Python libraries. For example, Scikit-learn requires a one-dimensional array of output variables y to be shaped like a two-dimensional array with one column and one outcome per row.
Some algorithms, like the Long Short-Term Memory recurrent neural network in Keras, require input to be specified as a three-dimensional array comprised of samples, timesteps, and features.
reshape() allows us to restructure an array so that we maintain the same data but it is organized as a different number of rows and columns.
Note: The original and the reshaped matrix must contain the same number of elements (i.e., the same size)
arr1 = np.arange(1, 11) # numbers 1 to 10
>>> array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(arr1.shape) # Prints a tuple for the one dimension.
>>> (10,)
We can use reshape() method to reshape our array to a 2 by 5 dimensional array.
arr1_2d = arr1.reshape(2, 5)
print(arr1_2d)
>>> array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10]])
If we want NumPy to automatically determine what size/length a particular dimension should be, specify the dimension as -1 which effectively means “as many as needed.” For example, reshape(2, -1) means two rows and as many columns as needed.
arr1.reshape(2, 5)
arr1.reshape(-1, 5) # same as above: arr1.reshape(2, 5)
arr1.reshape(2, -1) # same as above: arr1.reshape(2, 5)
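Tying this back to the library requirements mentioned at the start of this section, here is a small sketch with made-up shapes: a 1D output vector reshaped into the single-column 2D form some scikit-learn estimators expect, and a 2D array reshaped into the (samples, timesteps, features) layout a Keras LSTM expects:
y = np.array([1, 0, 1, 1]) # 1D outputs, shape (4,)
y.reshape(-1, 1) # One column, one row per outcome
>>> array([[1],
[0],
[1],
[1]])
X = np.arange(12).reshape(4, 3) # 4 samples with 3 features each
X.reshape(4, 3, 1).shape # (samples, timesteps, features)
>>> (4, 3, 1)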
Transpose
Transposing is a common operation in linear algebra where the column and row indices of each element are swapped.
From the last example, arr1_2d is a 2 by 5 dimensional array, we want to switch its rows with its columns.
arr1_2d.T
>>> array([[ 1, 6],
[ 2, 7],
[ 3, 8],
[ 4, 9],
[ 5, 10]])
Flatten a Matrix
flatten() is a simple method to transform a matrix into a one-dimensional array.
arr1_2d.flatten()
>>> array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
Resize a Matrix
resize(arr, new_shape) function returns a new array with the specified shape.
If the new array is larger than the original array, then the new array is filled with repeated copies of arr .
# Resize arr1_2d to 3 rows, 4 columns
resize_arr = np.resize(arr1_2d, (3,4))
resize_arr
>>> array([[ 1, 2, 3, 4],
[ 5, 6, 7, 8],
[ 9, 10, 1, 2]])
Inverting a Matrix
The inverse of a matrix A is a matrix that, when multiplied by A results in the identity. A good example is in finding the vector of coefficient values in linear regression.
Use NumPy’s linear algebra inv method:
matrix = np.array([[1, 2],
[3, 4]])
# Calculate inverse of matrix
np.linalg.inv(matrix)
>>> array([[-2. , 1. ],
[ 1.5, -0.5]])
Convert Array to List and vice versa
When I was first learning Python, one of the errors that I ran into quite often — and sometimes still run into now — came from mixing up lists and NumPy arrays.
Lists are part of Python's syntax and need no declaration, whereas arrays have to be created explicitly (for example with NumPy). This is one reason lists are used more often than arrays. But when we need to perform arithmetic on our data, we should go with arrays instead.
In case we want to store a large amount of data, we should consider arrays because they can store data very compactly and efficiently.
arr = np.array([(1,2,3),(4,5,6)])
>>> array([[1, 2, 3],
[4, 5, 6]])
arr_to_list = arr.tolist() # Convert arr to a Python list
>>> [[1, 2, 3], [4, 5, 6]]
np.array(arr_to_list) # Convert list to array
>>> array([[1, 2, 3],
[4, 5, 6]])
Other useful functions to describe the array:
arr.size # Return number of elements in arr
len(arr) # Length of arr
arr.ndim # Number of array dimensions
arr.dtype # Return type of elements in arr
arr.dtype.name # Name of data type
arr.astype(int) # Convert an array to a different type
arr.astype(dtype) # Convert arr elements to type dtype
np.info(np.eye) # View documentation for np.eye
3. Numerical Operations on Array
Trace (linear algebra)
The trace is the sum of all the diagonal elements of a square matrix.
arr = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 2]])
np.trace(arr)
>>> 6
Determinant
The determinant of a matrix is a special number that can be calculated from a square matrix. It can sometimes be useful to compute it, and NumPy makes this easy with det() .
matrix = np.array([[1, 2, 3],
[2, 4, 6],
[3, 8, 9]])
# Return determinant of matrix
np.linalg.det(matrix)
>>> 0.0
Find the Rank of a Matrix
The rank of a matrix is the estimate of the number of linearly independent rows or columns in a matrix. Knowing the rank of a matrix is important. While solving systems of linear equations, the rank can tell us whether Ax = 0 has a single solution or multiple solutions.
matrix = np.array([[1, 1, 3],
[1, 2, 4],
[1, 3, 0]])
# Return matrix rank
np.linalg.matrix_rank(matrix)
>>> 3
Find Eigenvalues and Eigenvectors
Many machine learning problems can be modeled with linear algebra with solutions derived from eigenvalues and eigenvectors.
eigenvalues and eigenvectors
In NumPy’s linear algebra toolset, eig lets us calculate the eigenvalues, and eigenvectors of any square matrix.
matrix = np.array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(matrix)
eigenvalues
>>> array([ 1.33484692e+01, -1.34846923e+00, -2.48477279e-16])
eigenvectors
>>> array([[ 0.16476382, 0.79969966, 0.40824829],
[ 0.50577448, 0.10420579, -0.81649658],
[ 0.84678513, -0.59128809, 0.40824829]])
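As a quick sanity check on what these numbers mean (a sketch using the matrix above), each eigenvector column v and its eigenvalue λ should satisfy A·v = λ·v, which we can confirm numerically:
np.allclose(matrix @ eigenvectors[:, 0], eigenvalues[0] * eigenvectors[:, 0]) # A·v equals λ·v for the first pair
>>> True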
Scalar Operations
When we add, subtract, multiply or divide a matrix by a number, this is called the scalar operation. During scalar operations, the scalar value is applied to each element in the array, therefore, the function returns a new matrix with the same number of rows and columns.
new_arr = np.arange(1,10)
>>> array([1, 2, 3, 4, 5, 6, 7, 8, 9])
# Add 1 to each array element
np.add(new_arr,1)
>>> array([ 2, 3, 4, 5, 6, 7, 8, 9, 10])
Similarly, we can subtract, multiply, or divide a matrix by a number using functions below:
np.subtract(arr,2) # Subtract 2 from each array element
np.multiply(arr,3) # Multiply each array element by 3
np.divide(arr,4) # Divide each array element by 4 (dividing by zero yields inf or nan with a runtime warning)
np.power(arr,5) # Raise each array element to the 5th power
Matrix Operations
A matrix can only be added to (or subtracted from) another matrix if the two matrices have the same dimensions, that is, they must have the same number of rows and columns.
When multiplying matrices, we take rows of the first matrix and multiply them by the corresponding columns of the second matrix.
Note: Remember “rows first, columns second.”
multiply matrices
It is important to know the shapes of the matrices; then these operations are simple using the NumPy library. Note that the functions below operate elementwise (see the sketch after the list for the true matrix product).
np.add(arr1,arr2) # Elementwise add arr2 to arr1
np.subtract(arr1,arr2) # Elementwise subtract arr2 from arr1
np.multiply(arr1,arr2) # Elementwise multiply arr1 by arr2
np.divide(arr1,arr2) # Elementwise divide arr1 by arr2
np.power(arr1,arr2) # Elementwise raise arr1 raised to the power of arr2
np.array_equal(arr1,arr2) # Returns True if the arrays have the same elements and shape
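One clarification worth making here: np.multiply() in the list above works elementwise, which is different from the row-by-column matrix product described a few paragraphs up. For the true matrix product, NumPy provides np.dot() and the @ operator. A small sketch with made-up 2x2 matrices:
a = np.array([[1, 2],
[3, 4]])
b = np.array([[5, 6],
[7, 8]])
np.multiply(a, b) # Elementwise: each position multiplied separately
>>> array([[ 5, 12],
[21, 32]])
a @ b # Matrix product: rows of a times columns of b (same as np.dot(a, b))
>>> array([[19, 22],
[43, 50]])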
Other math operations:
np.sqrt(arr) # Square root of each element in the array
np.sin(arr) # Sine of each element in the array
np.log(arr) # Natural log of each element in the array
np.abs(arr) # Absolute value of each element in the array
np.ceil(arr) # Rounds up to the nearest int
np.floor(arr) # Rounds down to the nearest int
np.round(arr) # Rounds to the nearest int
4. Array Manipulation Routines
Adding/removing Elements
append() function is used to append values to the end of a given array.
np.append ([0, 1, 2], [[3, 4, 5], [6, 7, 8]])
>>> array([0, 1, 2, 3, 4, 5, 6, 7, 8])
np.append([[0, 1, 2], [3, 4, 5]], [[6, 7, 8]], axis=0)
>>> array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
The axis along which values are appended. If the axis is not given, both array and values are flattened before use.
insert() : is used to insert the element before the given index of the array.
arr = np.arange(1,6)
np.insert(arr,2,10) # Inserts 10 into arr before index 2
>>>array([ 1, 2, 10, 3, 4, 5])
delete() we can delete any row and column from the ndarray
arr = np.arange(12).reshape(3, 4)
>>> [[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
np.delete(arr,2,axis=0) # Deletes row on index 2 of arr
>>> array([[0, 1, 2, 3],
[4, 5, 6, 7]])
np.delete(arr,3,axis=1) # Deletes column on index 3 of arr
>>> array([[ 0, 1, 2],
[ 4, 5, 6],
[ 8, 9, 10]])
sort() function can be used to sort the list in both ascending and descending order.
oned_arr = np.array([3,8,5,1])
np.sort(oned_arr)
>>> array([1, 3, 5, 8])
arr = np.array([[5, 4, 6, 8],
[1, 2, 4, 8],
[1, 5, 2, 4]])
# sort each column of arr
np.sort(arr, axis=0)
>>> array([[1, 2, 2, 4],
[1, 4, 4, 8],
[5, 5, 6, 8]])
# sort each row of arr
np.sort(arr, axis=1)
>>> array([[4, 5, 6, 8],
[1, 2, 4, 8],
[1, 2, 4, 5]])
Join NumPy Arrays
Joining means putting contents of two or more arrays in a single array. In NumPy, we join arrays by axes. We pass a sequence of arrays that we want to join to the concatenate() function, along with the axis. If the axis is not explicitly passed, it is taken as 0.
# Adds arr2 as rows to the end of arr1
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])
arr = np.concatenate((arr1, arr2), axis=0)
>>> array([1, 2, 3, 4, 5, 6])
# Adds arr2 as columns to the end of arr1
arr1 = np.array([[1, 2, 3],[4, 5, 6]])
arr2 = np.array([[7, 8, 9],[10, 11, 12]])
arr = np.concatenate((arr1,arr2),axis=1)
>>> array([[ 1, 2, 3, 7, 8, 9],
[ 4, 5, 6, 10, 11, 12]])
Split NumPy Arrays
Cool, now we know how to merge multiple arrays into one. But how do we break one array into multiple arrays? We use array_split() for splitting arrays; we pass it the array we want to split and the number of splits.
Note: If the array has fewer elements than required, it will adjust from the end accordingly.
# Splits arr into 4 sub-arrays
arr = np.array([1, 2, 3, 4, 5, 6])
new_arr = np.array_split(arr, 4)
>>> [array([1, 2]), array([3, 4]), array([5]), array([6])]
# Splits arr horizontally on the 2nd index
arr = np.array([1, 2, 3, 4, 5, 6])
new_arr = np.hsplit(arr, 2)
>>> [array([1, 2, 3]), array([4, 5, 6])]
Select element(s)
NumPy offers a wide variety of methods for indexing and slicing elements or groups of elements in arrays.
Note: NumPy arrays are zero-indexed, meaning that the index of the first element is 0, not 1.
Suppose we have two arrays, one contains user_name, and the other stores the number of articles that the person has read.
user_name = np.array(['Katie','Bob','Scott','Liz','Sam'])
articles = np.array([100, 38, 91, 7, 25])
user_name[4] # Return the element at index 4
>>> 'Sam'
articles[3] = 17 # Assign the element at index 3 the value 17
>>> array([100, 38, 91, 17, 25])
user_name[0:3] # Return the elements at indices 0,1,2
>>> array(['Katie', 'Bob', 'Scott'], dtype='<U5')
user_name[:2] # Return the elements at indices 0,1
>>> array(['Katie', 'Bob'], dtype='<U5')
articles<50 # Return an array with boolean values
>>> array([False, True, False, True, True])
articles[articles < 50] # Return the element values below 50
>>> array([38, 17, 25])
# Return the user_name that read more than 50 articles but less than 100 articles
user_name[(articles < 100 ) & (articles >50)]
>>> array(['Scott'], dtype='<U5')
We use similar methods for selecting elements in multi-dimensional arrays:
arr[2,5] # Returns the 2D array element on index [2][5]
arr[1,3]=10 # Assigns array element on index [1][3] the value 10
arr[0:3] # Returns rows 0,1,2
arr[0:3,4] # Returns the elements on rows 0,1,2 at column 4
arr[:2] # Returns rows 0,1
arr[:,1] # Returns the elements at index 1 on all rows
5. Statistical Operations
Find the Maximum and Minimum Values
Often we want to know the maximum and minimum value in an array or subset of an array. This can be accomplished with the max and min methods. Using the axis parameter we can also apply the operation along a certain axis:
Suppose we store the number of articles a person reads each month in an array.
articles = np.array([[10, 23, 17],
[41, 54, 65],
[71, 18, 89]])
# Return maximum element
np.max(articles)
>>> 89
np.max(articles, axis=0) # Find maximum element in each column
>>> array([71, 54, 89])
np.max(articles, axis=1) # Find maximum element in each row
>>> array([23, 65, 89])
We can use similar methods to find the minimum elements:
np.min(arr) # Return minimum element
np.min(arr,axis=0) # Find minimum element in each column
np.min(arr,axis=1) # Find minimum element in each row
Calculate the Average, Variance, and Standard Deviation
Just like with max() and min() , we can easily get descriptive statistics about the whole matrix or do calculations along a single axis.
np.mean(arr,axis=0) # Return mean along specific axis
arr.sum() # Return sum of arr
arr.cumsum(axis=1) # Cumulative sum of the elements along each row
np.var(arr) # Return the variance of array
np.std(arr,axis=1) # Return the standard deviation of specific axis
np.corrcoef(arr) # Return the correlation coefficient matrix of arr
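To make these concrete, here is a short worked example reusing the articles matrix from the max/min section (the outputs below were computed for that 3x3 array):
articles.sum() # Sum of all nine values
>>> 388
np.mean(articles, axis=0) # Mean of each column
>>> array([40.66666667, 31.66666667, 57. ])
articles.cumsum(axis=1) # Running total across each row
>>> array([[ 10, 33, 50],
[ 41, 95, 160],
[ 71, 89, 178]])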
The code in this note is available on Github.
That’s it!
I consider this note as the basics of NumPy. You probably come across these functions repeatedly when reading existing code at work or doing tutorials online. I will try to continuously update this as I find more useful Numpy functions.
All learning activities are undertaken throughout time and experience. It is impossible to learn Python in a couple of hours. Remember that the hardest part of any endeavor is the beginning, and you have passed that, keep on, keeping on!!!
Resources
Numpy is a very important library on which almost every data science or machine learning Python packages such as SciPy, Matplotlib, Scikit-learn depends on to a reasonable extent. It is important to have a strong understanding of the fundamentals. Conveniently, there are some great resources to help with this task. I have listed some of my favorites below, some of which get deeper into aspects of linear algebra; check them out if you are eager to learn more!
|
https://towardsdatascience.com/numpy-cheat-sheet-4e3858d0ff0e
|
['Xuankhanh Nguyen']
|
2020-07-23 20:42:42.778000+00:00
|
['Machine Learning', 'Deep Learning', 'Programming', 'Data Science', 'Python']
|
A journey towards Trunk-Based Development
|
It's not surprising, then, that the team I'm working with had defaulted to using this approach at first: new features are developed in their own individual branches, which are submitted as pull requests and merged to the main, stable branch after having been approved by someone else via an asynchronous code review. It intends, after all, to bring seemingly good qualities to the development process such as isolation of unfinished work, peer-reviewing quality control and knowledge sharing through PR feedback.
I couldn't help but notice, however, some downsides to this approach in practice, such as long lead times¹, a heavy toll on the team's cognitive capacity due to frequent context switching (since there's usually more than one task in progress and under review at any moment), fear of deploying to production and error-prone conflict resolution during merges. (*)
I knew from having worked before on two non-trivial projects using Trunk-Based Development (TBD), and backed up by references such as DORA's State of DevOps report, that we could do better than that given the team's conditions whilst keeping the upsides of a PR-based workflow — and by "better" I mean be able to quickly develop new features, adapt to new scenarios and keep the quality of our deliverables: in short, a high-performance team according to the key metrics from DORA's empirical findings. Since our team was comprised of four developers building microservices in a tech stack that we were already quite familiar with (Java, Quarkus, PostgreSQL, AWS, Gitlab…), we agreed that there was enough room to experiment with the adoption of TBD for new features and check how it'd fare and what adaptations we'd need to make.
*Nonetheless, these are all commonly known consequences of adopting PRs, and there are plenty of suggestions in the literature about how to mitigate them — my point is that, by the time you've created the conditions to apply them, you might not need to be using PRs anymore. So let's dive into how it went:
What's this Trunk-Based Development thing?
TBD is a high-maturity development workflow in which new commits are frequently integrated into a main branch that must always be ready for deployment to production. Let's dive a bit deeper into each of these highlighted aspects:
High-maturity : just like with microservices, adopting trunk-based development is not a one-size-fits-all approach for every context and team. It demands the presence of highly mature development practices such as continuous integration, build pipelines, easy deployments and error recovery, trustable quality control, etc — in short, TBD is far from being an excuse to get sloppy;
Frequent integrations : code changes are frequently submitted and made available to the rest of the team — they don't need to go through manual approval processes nor be asynchronously reviewed beforehand. Please notice that this doesn't mean a team member may not work temporarily in a separate branch for a short period of time in some cases, as long as the modifications submitted to that branch are frequently synchronised with the rest of the team's work (at least once a day, for example);
Main branch : the team should ideally work on a single branch that always represents the current state of the implementation. In general², there shouldn't be a need for the creation of feature branches, hotfix branches, etc;
Always ready for deployment to production: commits submitted to the main branch must always be in a "buildable" state, approved by automatic test suites and quality gates, and ready to be put in production. It's not required that every commit must be deployed to production, but it should be possible to do so if needed³.
Keeping up with the PRs
New methodologies, new tools (Source: https://www.flickr.com/photos/bre/552152780)
Each aspect described above helps address the shortcomings associated with the usage of PRs: lead times tend to decrease since there's no asynchronous code review bottleneck anymore; the team's cognitive load is reduced given that team members can now work on a single task until its completion; merge conflicts become less frequent and simpler due to the frequent integration of smaller units of change; and finally, there's usually less fear and anxiety around deployments to production as a side effect of cultivating a codebase that's continuously validated and ready to be deployed (or even more frequently deployed to production).
The challenges arise, however, around finding ways to preserve the benefits associated with PRs:
Isolation of unfinished work
We had to reeducate ourselves more intentionally around decomposing new behaviour into smaller, self-contained chunks that interfere as little as possible with existing code while they're developed — the fact that the team had already been developing the habit of using TDD was quite helpful here.
In order to be able to handle complex scenarios where it can be difficult to keep isolation between different work streams, we've brought awareness to the use of techniques such as Branch by Abstraction and Expand-Contract/Parallel Change, and a broader use of Feature Flags (a minimal sketch of the latter follows below).
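To make the Feature Flags idea a bit more tangible, here's a deliberately minimal sketch. It's written in Python purely for brevity (our services are actually Java/Quarkus), and the flag name, environment variable and functions are invented for illustration: the point is that both code paths live on the trunk, and the unfinished one stays dark until the flag is switched on.
import os

def new_checkout_enabled() -> bool:
    # Hypothetical flag read from an environment variable; a real setup
    # might use a config service or a dedicated feature-flag library.
    return os.getenv("FEATURE_NEW_CHECKOUT", "false").lower() == "true"

def legacy_checkout(order_id: int) -> str:
    return f"legacy checkout for order {order_id}"

def new_checkout(order_id: int) -> str:
    # Unfinished work: integrated into the trunk, but dark by default.
    return f"new checkout for order {order_id}"

def checkout(order_id: int) -> str:
    # The trunk always contains both paths; the flag decides which one runs.
    return new_checkout(order_id) if new_checkout_enabled() else legacy_checkout(order_id)

print(checkout(42)) # Prints the legacy path unless FEATURE_NEW_CHECKOUT=true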
Quality control
There was a clear need after some time to adopt formal, Gherkin-like acceptance criteria to describe the team's tasks and, when possible, to automate their verification via acceptance tests using libraries and frameworks like Cucumber and RestEasy. For cases where automation was not feasible, we've been requiring an additional part in the task's Definition of Done consisting of explicitly showing the rest of the team how the acceptance criteria are being met through other means.
Unit and integration tests (the way these terms are described here) are adopted as part of the development business-as-usual workflow, having their coverage measured as part of the project's build pipeline. Code style patterns, code smells and code design constraints are automatically checked as soon as new code is committed (these checks can also be run locally by the developers) using tools like Checkstyle, SonarQube and ArchUnit. Failures in these code quality checks were not used as gates for the build pipeline at first, but they're enforced now as we've come to tune these checks to an agreed-upon set of criteria among the team. Cultivating a fast, reliable test suite and having an initial set of automated code quality validations allow us to practice continuous refactoring in order to keep the code complexity under control (a habit that's still evolving within the team, we must admit).
An example of automated code design constraint using ArchUnit (source in the image's link)
Last, but not least, the development/staging environment is always up-to-date with the main branch (each new commit to it triggers a new deployment to that environment, as well as the application of any pending database migrations), which gives us more confidence in any validations we perform against that environment before deploying to production.
Knowledge sharing
We've established that pairings would happen more frequently, even if just in the form of checkpoints throughout a task. Please note that this by no means implies pairing 8h/day, nor does it mean doing it in an undisciplined fashion — for more details on how to apply proper pairing techniques, I strongly recommend this text.
In order to give room for even more knowledge sharing, the team has been considering holding regular brown bag sessions so that we can share recommendations on good coding practices, draw attention to interesting code design choices in the project, etc.
In conclusion…
Photo by Mark König on Unsplash
We don't have any formal metrics yet to state that we're unarguably better off than what we'd be compared to the previous PR-based workflow (we do observe a tendency towards shorter lead times, though). To be honest, even if we did, that probably wouldn't prove that adopting TBD was the superior choice for our context since it'd be hardly possible to isolate all the involved factors in this change — software engineering is not a science in the strict Popperian sense, and we'd probably still be programming in Assembly if we had always insisted on requiring that it'd be so.
In a qualitative sense, however, it's quite perceivable and frequently reported by the team how more satisfied they became with the new, leaner development workflow. The strong investment in automating quality controls as well as a more disciplined approach around having formal acceptance criteria gave a lot more autonomy to the team's developers while keeping the tech and product leaderships assured enough that their requirements would be met.
I must stress, finally, that having enough control over our own build pipelines and processes was of paramount importance to make this transition feasible, as well as being the only team that held ownership of this codebase — other teams are still required to submit PRs for code review when they need to change anything in the codebases we're responsible for since they may not be familiar with our team's practices.
Notes
"Lead time" is the time it takes from the moment someone starts working on a task until it gets put into production (there are diverging definitions for this term in the literature, but that's the definitions we're adopting here); It's usually acceptable to create short-lived branches in more complex teams and projects that are looking towards the benefits of adopting TBD; That's, in fact, the usual precise definition of Continuous Delivery (CD).
Take a look at the vacancies available in iFood and learn more about the selection process.
|
https://medium.com/ifood-engineering/a-journey-towards-trunk-based-development-f79ddd9123c8
|
['Felipe Martins']
|
2021-08-30 16:43:31.324000+00:00
|
['Team Dynamics', 'Backend', 'Continuous Delivery', 'Trunk Based Development']
|
Is Dancing A Sport?
|
There are many different forms of dancing. There is hip hop, ballet, tap, jazz, contemporary, etc. Dancing allows you to express yourself and it is a type of art. Despite being an art, it is also physical. A few students from Eastside High School were questioned on their opinions.
An Instagram poll showed that 82% of people believed that dancing should be considered a sport, while 18% of the voters said that dancing wasn't a sport. A quick Google search stated that a sport is "an athletic activity requiring skill or physical prowess". Dancing, done properly, definitely takes skill, so it should qualify as a sport.
A junior here at Eastside High School, Hasani Tucker, stated that she believes that dancing isn't a sport, "Cause there is no main objective at the end. In all sports you're trying to win or accomplish something." That isn't true, though; there are plenty of dance competitions. One of the many reasons why dancing is a sport is that it's very competitive.
Not only is dancing a sport but it also makes people feel good about themselves. Freshman, Jasmine Quiros, is not only on the soccer team but is on Eastside’s dance team. She stated that, “ Dancing is something that I can do to let go and forget about everything. Feeling the music and moving around is fun and enjoyable to me.”
Dance might actually be a "superior" sport. While wrestling requires muscle and flexibility, and soccer requires speed, stamina and muscle, dance requires stamina, speed, flexibility and muscle all at once.
|
https://medium.com/@117282/is-dancing-a-sport-b18e351fc825
|
['Jahtya Tucker']
|
2019-04-11 16:14:24.385000+00:00
|
['Dance']
|
2020: Time for Space.
|
2017: The car accident that brought me here.
Three years since that Sunday when I was kneeling on the floor of my apartment, mindlessly cracking hazelnuts only a few hours after I had brutally smashed my car on the highway. I still remember every moment, but most clearly I see the silver light. It's because of that light that I am writing this review every year.
You might find familiar faces or places among the words below (and in the link at the bottom). At the very least, you know Antoncho. :-) But I’d be happier if what you find is your own self somewhere between the lines. If you’ve had any of these thoughts (hopefully without the traumatic experience of a car crash or the downsides of 2020), then we’re moving in the right direction. Collectively. :-)
—
For me this year was a year of building my way to courage — of making peace with my past and redefining my present. And being successful at it. Courage is so hard to build when your heart’s been hurt and is covered with fear. Nature is the strongest remedy and the best coach in how to change that.
I have been hiking so much that I reached the point when I felt so truly myself again that I wanted every other woman to feel this way too, through feeling herself in nature. Starting something on your own is just as courageous as hiking. It's moving in the right direction. In slower steps, hard at times, but greatly rewarding.
2020 was a year of solitude for many — yes, distance, but also introversion and silence. For me that was a blessing. I couldn’t go to my vipassana course in Nepal because of the virus, but I still did more meditation sessions this year than I’ve ever done. I am known to be a happy shiny person who smiles often and talks a lot even though my ‘R’ sounds like a ‘G’. But I think this period of pleasant introversion changed that a bit. I am softer, more mellow and yet, still full of joy. And I am loving it.
My knowledge of myself has expanded tremendously. Partially thanks to my long-distance friends Oprah and Deepak Chopra, and Eckhart Tolle — who is one of those friends who points out how big our collective ego is and how we’re all part of it. I’m not an egocentric person, quite the opposite, but A New Earth inspired digging into that and re-building the relationship with my ego. Feels like the best one to work on.
This is the first time that I am consciously and continuously realizing how adaptive I am. How major external circumstances have little effect on my big little inner joys. No event did seriously trigger me, or crash me.
The only thing that brought me down this year was my pain — pain I have stored from the past. Surprisingly, I've had people tell me this year that I seem to be doing much better than them in dealing with the past. I've had people tell me I am the most joyful person they have ever met. I bet. But you know — you can't appreciate the beauty of life if you haven't seen a lot of its pain, too.
And pain is something that heals slowly, with patience and practice. I give thanks to my practice with YWA, Santosha and OneCommune. And to Veni (Sutra) for her healing voice, her hatha sequences and the custom heart practice she designed for me once. I thank Anita for fine-tuning my soul and my gut with her rituals, and my newly-found book club for reminding my brain to read for fun.
I thank my breath. Literally, it’s the most valuable thing I have had this year (and always, I guess). I thank my body, which has never loved me more than it does now. And I thank my home, for providing the cozy inspiration to keep these practices.
All of these helped me realize that the pain is not really a part of me. And that some of that pain was not even mine in the first place. But I kept it, so that others be free of it. See, I am one of those people who connect with others’ personalities quickly, but in addition to that I connect to their vibrations and their energetic bodies. I am a natural healer. So I guess my inborn intention is to heal the world from pain? (deep shit, but no kidding)
I’m a giver so I forgive others very quickly. My problem is that I struggle forgiving myself. I always thought that when someone treats you badly it’s because you did something wrong. You deserved it. Apparently, that’s not the case.
My biggest regret for this year is that I spent a lot of time wondering what I did to deserve someone’s offensive words and disrespectful actions. So many times, I went back to painful events to try to find the reasons. I punished myself for acting the way I did, for not doing things differently, not being good enough. And then it hit me. And then I heard it again, and again somewhere else. And it changed my whole mindset: when someone treats you badly, it’s because of them, not you.
A new friend I made this year once mentioned something that resonated with me. “There are some people,” she said, “who are generally good people. But internally they are such a mess, that they create chaos around them and in others.” I knew what she meant.
If you are such a someone, please look into your soul and heal your pain. No human needs more harm done onto them, including you.
I have always sent positive thoughts to these people because I felt they were in pain themselves. But today I try to send more compassion my own way (thank you, Ira). I’ve always been good to others. I just try to be kinder to myself now as well. Being friends with me has become my favorite hobby.
This year taught me to distance myself from impure behavior. From people with pretty faces but empty souls, from people with hidden agendas, from people who use me to satisfy their needs, to gain and not return. These exact people would be shocked when you gave them some of their own behavior. If you spot such in your life, try this as a challenge. See how long they last.
I kept hearing that emotions should stay out of the business world. Cute opinion. But here’s a real-world fact — I have not met a single client or partner so far who has not been emotional in our business meetings at least a few times. Not a single one. And that’s what keeps us human. So please stop telling me business people should ignore their feelings and emotions. Instead, it seems like you should be asking me to teach you some.
“People will underestimate you.” I heard this several times in 2020. “Because you are young and you are a woman”. Truth is people don’t underestimate me. They listen to what I say, see what I do and evaluate based on that. And the perceived value is high. It’s only the person who says such words that believes them. But it makes me wonder why does someone need to say this to anyone? Instead of showing support, they talk to put them down. Why make them believe they are the weaker one. It’s sad someone wants to believe this. Because I don’t.
I am sincerely grateful for the strong male relationships in my life this year. Especially that guy I went on 5 dates with, only to see he wanted to have me near to guide him in building his business. I know I’m a great consultant, but sweetheart, you will have to be paying much more than you are now.
Besides that, however, I am grateful for remaining friends with a past lover and for being friends with someone who, for a change, doesn’t want to be my lover.
Another relationship I have improved, is the one I have with Instagram. I stopped watching who viewed my stories. Best. Improvement. Ever. (well, best right after the ones I did at home).
The fact that someone doesn’t watch your stories
doesn’t really mean they don’t watch your stories.
The fact that someone watches your stories
doesn’t mean they care about your life stories.
So why bother at all?
The time and energy one wastes in those scrolls are the most sinfully wasted resources ever. Direct interaction is all that matters — online and in person.
I use Instagram for my own inspiration and for supporting others. Now more than ever. I follow people that inspire me and businesses that have a purpose greater than the profit. I follow makers, artists and creators. And I interact with them. I talk, I comment and I ask. I engage because I care. AND I remove every stalker (not follower) who will gladly watch your every single move but will never say hi. Please. How does anybody have time for that?!
I have come to realize that there are many people that want your energy. But only when it’s the good energy. And Instagram is the easiest way to get that without any responsibility of a relationship of any kind — without being friends or good acquaintances. It takes much more maturity and emotional intelligence to be connected with someone in both their high- and low-energy moments. Reread that paragraph, please. It matters.
Freedom is fear of commitment when freedom lacks responsibility. As Matthew McConaughey says “Freedom is a concept of the ego. What the heart wants is responsibility.” This has been a huge topic for me in 2020, personally and professionally, and I couldn’t agree more. The first type of freedom can carry a feeling of anxiety, chaos or even arrogance and superficiality, and it is very dissatisfying. While the second is the freedom that comes with making the choice of the heart, the choice of love. Bam. Greenlight.
You can like whatever you want and not take responsibility for that on Instagram. Easy. Faces and legs are always pretty on Instagram. Even more easily. But handsome is as handsome does. So do wisely.
And the most handsome person in my life remains my father. Who is almost 70 now. Who has raised the bar so high in the past year that I am super curious to see the guy that takes up the challenge to be just as handsome through his actions.
My dad is almost 70 now. With chronic bronchitis, fluctuating blood pressure and joint pain. And even though he has hiked a dozen hills with me, the context of the year has certainly made me think about the inevitable death. (Meditation doesn’t stop such thoughts, no.) People would say “noo, don’t think that”. But if you were me, you would, too. I have dealt with death closely when I was 10, when I was 11 and painfully at 12 again. I’m kinda used to it now and maybe that’s why I’ve never been afraid of it. And after all I saw the silver light at 26.
I have faith. And I know my dad does too.
But I am worried that many don’t.
So I am on a mission to change that. After all, my name is Dobrinova (Good & New). So I believe that the good prevails and there comes a new beginning for all of us.
I am writing my new beginning now.
I have lived the story that others have told about me for quite a long time. The story including the offensive words, the labels of insecurity, unworthiness or excessive sensitivity. Truth be told, every single person is insecure in one way or another. Especially the ones that throw that label at others. But we are all worthy. And I know it’s exactly sensitivity that makes me the most joyful person, the most caring healer, and one of the best brand strategists.
My intention for my next year (dec 18th, 2020 to dec 17th 2021) is to update the story I am telling myself in a way that has much more of the “good” and the “new” in it.
I build stories for others for work, but in my personal life I would let others craft my story? Yeah, no.
It’s a long process, I know. I have almost 30 years of history to rewrite. But hey, I am a strategist — I’m always in it for the long haul.
And this sounds like the most important project of my life.
Thank you, 2020 for helping me start writing.
I will continue writing the right story,
I will use my voice to share and inspire,
I will keep my heart open for all that comes next.
***
And because spiritual wisdom is great, but we all need some hard core facts, here’s a list of all key moments that made 2020 a great third year of my new life.
|
https://medium.com/@ddobrinova/2020-time-for-space-abdc67b76071
|
['Dilyana Dobrinova']
|
2020-12-17 13:33:36.698000+00:00
|
['Spiritual Growth', 'Soul', 'Rebirth', 'New Beginnings', 'Personal Growth']
|
Ten Years after Kicking Off Her Solo Career, Little Boots Is Handing Fans a Long-Awaited Vinyl Rerelease
|
Ten Years after Kicking Off Her Solo Career, Little Boots Is Handing Fans a Long-Awaited Vinyl Rerelease
She’s just launched a campaign to reissue ‘Hands,’ her 2009 debut. Here, the self-made pop icon reflects on the DIY spirit that’s motivated her career and her bond with her fans.
With a decade’s worth of pop hits and her synth-nerd sensibilities, Victoria Hesketh, aka Little Boots, has carved out her very own niche in music.
She’s come a long way from writing her first songs in her mother’s garage in Blackpool, England. Following a stint with wave-pop trio Dead Disco, she broke out as a solo artist by uploading self-edited videos to YouTube at a time when that was barely a thing. In 2009, the buzz around her debut album culminated in her winning the BBC’s Sound of 2009 critics’ award. Ten years, three albums, and hundreds of performances in every corner of the world later, she’s learned a thing or two about self-made pop stardom. She writes and performs all her own songs, owns the masters, runs her own label, and is generally very much in charge of every aspect of her career.
That tireless DIY ethos motivated her to launch a Kickstarter campaign for a vinyl reissue of her debut album, Hands, which fans have been clamoring for for years. And she’s taking a rare moment to reflect on what she’s achieved so far.
We recently caught up with Little Boots in London to chat about her anniversary project, the wide-eyed naivete that defined her early career, and the merits of being stubborn when you’re doing things on your own terms.
— Julian Brimmers
Kickstarter: Revisiting the past is always a delicate thing to do. Now that you’re getting ready to reissue your debut album, Hands, what’s your impression of 2009-era Little Boots?
Victoria Hesketh (Little Boots): It’s funny. I see a real sense of naivete — in a nice way. I look back at old pictures and old songs and there’s a sense of wide-eyed possibility. Every day that it carried on happening felt like, “Whoa, this is still going on, cool!” I miss the innocence of that time. I didn’t really understand the meaning of a lot of things that were happening until later. But that was cool because you didn’t overthink things.
And now?
Now I overthink everything! Back then, I was just this nerdy girl. I recently read one of the first interviews I did with Pitchfork. It was me going on about Miley Cyrus and synths and other really dweeby stuff. I wasn’t cool at all, and then, for a hot minute, I became cool and I didn’t know what to do with it.
At that point you’d already had a career in various music groups. You said being a pop star was your ultimate goal, so going solo must have taken some confidence.
I don’t think I was that confident. I’m just DIY, and I always have been. If I want things to happen, I’ve got to make it happen with what I have around me. There are all those videos that I uploaded myself simply because I was bored of it not happening for me. I was bored of waiting for a record label to do something about my songs.
Even now, no one else is gonna do it but me. That’s just who I am as a person. I like to roll my sleeves up and get my hands dirty. Nothing at the time was thought out or cynical, or even planned. That’s when things work — when they’re really genuine and natural. When they’re not contrived. People can see when things are contrived straight away.
I was bored of waiting for a record label to do something about my songs.
What was your career like before ‘Hands’ came out? What were you waiting for or hoping to have happen?
[I was] in a million bands and projects as a teenager. Then I was in an indie band that was semi-successful called Dead Disco. We were into New York indie and new wave. At some point it didn’t work out and got kinda messy. We had done some stuff with [producer] Greg Kurstin. Greg told me, “You should keep doing this. You’re really good at writing pop songs.” That’s where a lot of the early songs, like “New in Town,” came from.
Somewhere in this messy process I had every A&R from every label hitting me up. But I was still contracted in this weird deal. I remember there was this long period, six to eight months, where I just didn’t know what was going on. I was half living in my mum’s house in Blackpool, writing songs in the garage in the freezing cold. And the other half [of the time] I spent in London, learning to DJ at The Old Blue Last, the Shoreditch pub owned by Vice, not making any money and not having any idea if I was ever getting a chance to be a pop singer.
It comes down to this: Half of it is talent, but half of it is getting through those periods of complete unknown and keeping faith. It was a strange time, but I guess the pressure didn’t really start until the BBC Sound of 2009 thing. That’s when it got crazy and I started to feel it.
Half of it is talent, but half of it is getting through those periods of complete unknown and keeping faith.
That must have felt surreal. Although I would assume it was also a huge sign of validation.
But I didn’t know what it was! Someone told me, “I think you’re gonna win this BBC thing,” and I was like, “What’s this BBC thing?”
Some of the biggest shocks came from getting thrown into photo shoots, award ceremonies, TV performances. You don’t get any lessons on how to do any of this, and I had to figure out quickly how to seem at least minorly convincingly glamorous or cool or whatever. It’s such a blur, but I’m very grateful for it.
I understand why there are so [many] mental health problems in the music industry and in general these days. Anxieties are off the charts. And when you’re doing things in the public eye, it’s very hard to stay grounded. Especially when you don’t stop every now and then to take it all in.
Being on top of everything you do seems to be the most reasonable way to stay sane in the music business — but then you’re also using yourself as a resource pretty much all the time.
Completely. It can be frustrating, but it’s more and more common for [musicians] to be involved, to own everything and keep creative control. But with that comes pressure, responsibilities, admin, and other stuff you don’t know how to do. You basically have to run a business on your own. It’s an overwhelming, huge process — I mean, it’s taken me years to get to grips with it.
How does this experience translate to revisiting your first album?
Even now, for the Hands reissue, there were points where I was completely overwhelmed. How are we gonna navigate the legal licenses to buy certain songs back from Warner? Things like that. It’s a minefield, but it’s an empowering learning curve. The satisfaction and the ownership you get from it is really rewarding. That’s why Kickstarter makes so much sense for someone like me. If Kickstarter had been on my radar a decade ago, there’s a good chance I would have chosen that route over doing the record deal. It completely makes sense for DIY artists like me, who like to have control and like to have a very transparent process.
However, I’m still grateful to have had the opportunity to go through the major system and have some amazing big opportunities I wouldn’t have had independently. It’s not like I’m going, “They’re all big bad labels.” It just wasn’t a sustainable model for me as an artist.
If Kickstarter had been on my radar a decade ago, there’s a good chance I would have chosen that route over doing the record deal. It completely makes sense for DIY artists like me, who like to have control and like to have a very transparent process.
Sure, it’s too easy to say there’s an inherent evil to the major-label system.
I mean, there might be a little bit of inherent evil. Just a drop. I certainly met a few people who are absolute cartoon villains in the music industry.
So how did you manage to create a sustainable path for yourself? It seems like much of it has to do with maintaining a direct connection with your fans.
Of all the people who signed deals around the same time, maybe 0.001 percent went on to be like Florence [Welch, of Florence and the Machine]. Everybody else was like, “This roller coaster is too much,” and they checked out. There have been many days when I’ve thought about that, too. But I can’t really do anything else but write pop songs. Well, I can, but I have to make music and I love to make pop songs.
The fans who’ve stayed with me all the way, they’re so much more invested in me. It feels like a real two-way relationship, which is why I’m really excited about doing Kickstarter. I feel like now my fan base completely understands me and wants to support where I go creatively. Doing it this way, I’m really excited to see if it takes even one more middleman out. We’re now this creative machine together.
And I guess social media can also be a really helpful way to understand your fans better and have a better relationship with them.
Yes, and to be honest, that’s one of the reasons why I’m doing this project. I floated the idea on social media and got such an overwhelming response — thousands of comments saying, “Please do this, we need these songs, we need CDs, we need vinyl and [to] hear these songs we’ve been asking about for years.” I went through all these comments and saw there’s a real demand for me to do this. This really pushed me to move forward with this project. If I hadn’t had such a direct response, I might have never finished this anniversary project.
Are you a nostalgic person? You don’t seem to be.
Hmm. Maybe not. However, working on this is making me nostalgic — just being made to look at all these things from this period. It kinda makes me sad, like, “Oh my god, look at this crazy shit we did!” It makes me feel emotional. I always try to move forward and not look back too much. But I’m sure when I play all of these songs live, I’m gonna be crying. If I remember the words.
It’s nice that this project is forcing me to be nostalgic and see all the positives. I get a lot of messages from fans that this [or] that song meant so much to them. Like it’s their coming-of-age record, or how they moved out of town, or it helped them come out. It’s really emotional when I hear people’s stories about the tracks.
What advice would you give 2009 Little Boots?
When the times were really, really crazy, I wasn’t as present as I could’ve been. I wish I had stopped now and again and taken a deep breath, been a bit more grateful. But that’s easy to say with hindsight. There are a few things I wish I’d done differently. In certain fields, I wish I hadn’t been so stubborn.
The stubbornness seems to have paid off in the long run, don’t you think?
I don’t know — ask me in another 10 years!
|
https://medium.com/kickstarter/ten-years-after-kicking-off-her-solo-career-little-boots-is-handing-fans-a-long-awaited-vinyl-bf9064e605f4
|
[]
|
2019-09-12 13:15:23.183000+00:00
|
['Vinyl', 'Kickstarter', 'Music', 'DIY', 'Projects We Love']
|
The Difference between Left and Right
|
In times of increasing political division between the Right and the Left, I want to spend a couple of minutes leaving all the tension and controversy aside, approaching the whole problem of Left and Right from a somewhat different angle.
In answering this question, I will touch upon phone calls with aliens, an angry exchange of letters between the 17th century’s greatest thinkers, and of course particle physics.
An Unusual Phone Call
Imagine you answer your phone one day and an alien is on the other side of the line. It has learned to speak English very well, and, once you overcome your initial shock, you start to speak with it quite casually. After a bit of polite small talk, you realize the alien wants something from you, and soon enough, it reveals that it mainly called because a question has been nagging it for quite some time. It has learned to speak English by listening to lots and lots of signals of humans talking with each other that it intercepted in the wide expanses of space, but it can’t really seem to understand a very important distinction: where left is and where right is.
Photo by Pablo García Saldaña on Unsplash
You immediately tell the alien that it is stupid, because the answer must of course be obvious. But the more you think about it, the more you realize you have a problem. You start by giving the alien simple instructions on how to construct a three-dimensional cube that, quite naturally, has a left side and a right side. But once the cube is nearly finished, the alien asks you again how to assign the labels left and right. You start pointing at your left hand or the left side of the room, but the alien can’t see you, so it doesn’t understand what you mean. You have an idea: you point to the Milky Way (and say that it rotates in a certain direction), but unfortunately, the alien tells you it is floating somewhere in empty space, without seeing any of the things you are seeing, and without having any access to the objects you have around you. The alien asks who is stupid now… and you hang up the phone in frustration.
While the odds are small that you will really be faced with this situation, it serves as an introduction to a more general problem with some serious philosophical implications.
The Ozma Problem
The Ozma Problem was introduced by Martin Gardner in his book The Ambidextrous Universe in 1964. It is the problem of communicating the difference between left and right via a serial communication channel.
If we assume that we don’t have shared access to any chiral objects like hands or spiral galaxies, the problem becomes quite tricky to resolve.
Children learn the distinction by pointing to things like their hands. I remember my class in elementary school singing a song (that has a nicer ring to it in German) that went something like “whoever can’t distinguish between left (upon which we would shake our left hand) and right (upon which we would shake our right hand) is a poor man.” I only later realized it’s a bit insensitive, considering there are many people who struggle with left-right orientation, e.g. as part of Gerstmann syndrome. But it’s such an ordinary skill in daily life that it is easy to overlook the deeper significance behind the distinction.
Hands are chiral objects connected to each other through a mirroring operation which is formally called a parity transformation. When you see your left hand in a mirror, it looks like a right hand. Accordingly, Cartesian coordinate systems that are chiral counterparts of each other are called left-handed and right-handed systems, as the orientation of the axes neatly maps onto our fingers.
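To put the mirroring into symbols, a parity transformation simply flips the sign of every spatial coordinate (written here in LaTeX as a small illustration):
P:\; (x, y, z) \longmapsto (-x, -y, -z), \qquad P = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \qquad \det P = -1
Because the determinant is −1 rather than +1, no ordinary rotation can undo the flip, which is exactly why turning your left hand around in space never makes it into a right hand.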
We can instruct the alien to build chiral objects, but as long as there is no palpable difference between the two mirror versions of the object, we can’t teach it to assign the labels left and right.
Leibniz, Newton and the Reality of Space
Photo by NASA on Unsplash
The Ozma problem brings us to the debate about the reality of physical space. In our thought experiment, the alien is floating in empty space, so it cannot see any specific configurations of matter that have distinguishable left and right sides.
Note that the notion of space can be expanded to a more general concept: what is the structure of the vacuum state of the universe, as we would call it in quantum field theory? Is nothingness, i.e. complete emptiness of matter, really nothing, or is there some structure to nothing?
The alien is therefore constrained to solving the Ozma problem by finding a difference between left and right in the fundamental makeup of the universe. As we assume the laws of physics to be homogeneous in the universe, it could probe them everywhere in the universe, independent of its location.
We can rephrase the Ozma problem and ask whether the laws of nature are invariant under parity transformations. Do we see a difference between physical processes that are mirror images of each other? Are there physical processes that would work one way, but not as their mirror image/chiral counterpart? If this were indeed possible, we could instruct the alien to build both “versions” of the physical process and see which one worked differently. This would then allow us to clearly distinguish between left and right.
Do mirror images of natural phenomena lead to different physics? Photo by Javier Graterol on Unsplash
As I mentioned, Leibniz and Newton already argued about this in a heated exchange of letters. They never met in person and harbored a personal dislike for each other (in part owing to their battle over who deserved credit for calculus, which is admittedly something one wants to be credited for), but their differences extended to the professional level.
When it came to the reality of space, Leibniz took the position of a relationalist. He claimed that space does not in itself exist; only the things in space exist. Leibniz’s argument is the following: if one were to carry out a symmetry transformation (e.g. a rotation or a parity transformation) on all contents of spacetime, this would not lead to any observable difference. If we assume the identity of things that have no observable difference, the reality of space itself is an unnecessary assumption within the theory, as it does not lead to any observable differences.
Newton, on the other hand, was a substantivalist, positing a reality of space that goes beyond merely its contents and has ontological status of its own. The notions of absolute space and time were introduced by Newton in his Philosophiæ Naturalis Principia Mathematica, and they form an important foundation of Newtonian mechanics. They are necessary for a coherent definition of inertial systems, which in turn are the foundation for the definition of forces.
So who is right? Or do we even know today who was right? Answering the question brings us to modern particle physics.
Parity Violation
As it turns out, there is indeed a solution to the Ozma problem (within our universe): we can instruct the alien to rebuild the most upbeat experiment in the history of physics: the Wu experiment.
The basic premise behind the experiment is to build two physical processes which are mirror images of each other, and test if there is a difference.
The Wu experiment. Credit to nagualdesign [CC BY-SA 3.0]
The beta emissions observed in the Wu experiment are mediated by the weak interaction, which is one of the four fundamental forces of nature.
In 1956, Madame Wu built two mirror image versions of the experiment (the left side and the right side of the figure), as you can see in the orientation of the coils. The beta decays in the mirror version (the right one) were pointing in the wrong direction, showing that parity symmetry in the weak interaction was indeed not conserved.
This, and many other subsequent experiments, have led to the conclusion that the weak interaction is what physicists call maximally parity-violating, and that the laws of physics really do make a difference between left and right (in technical terms, only left-handed fermions and right-handed antifermions interact via the weak force).
One can see this as an indicator that relationalist accounts of the world are incorrect, and that there is something substantial to space. Nevertheless, it is not as clear-cut whether this pertains to the reality of physical space. Parity symmetry is intrinsically related to time reversal symmetry and charge conjugation through the CPT theorem, which states that every physical theory that is Lorentz invariant has to be invariant under the simultaneous application of time reversal, charge conjugation and parity transformations. Since the combined CPT symmetry must be conserved, a violation of the combined CP symmetry implies a violation of time reversal symmetry, and such violations have indeed been found as well (more on this in another article).
As is always the case with modern physics, the implications are hard to make sense of. I’m leaning towards Newton’s substantivalist notion in thinking that the laws of physics really have their own structure and reality, along the lines of structural realist interpretations of the world. These make the claim that the structures that are the contents of our physical theories do exist in a meaningful way.
Problems of interpretation aside, at least we can now walk through our daily lives a little more confident, knowing that in case an alien floating in space calls us out of the blue to ask about the difference between left and right, we have a good answer for it.
|
https://medium.com/swlh/the-difference-between-left-and-right-8c4d8dddc74c
|
['Manuel Brenner']
|
2019-07-06 12:28:09.097000+00:00
|
['Parity', 'Physics', 'Aliens', 'Philosophy', 'Science']
|
One Class Learning in Manufacturing: Autoencoder and Golden Units Baselining
|
Recently I’ve been working with manufacturing customers (both OEM and CM) who want to jump on the bandwagon of machine learning. One common use case is to better detect products (or Devices Under Test/DUTs) that are defective in their production line. In machine learning terminology, this falls under the problem of binary classification, as a DUT can only pass or fail.
However, training a binary classifier, which requires samples from both the pass and the fail case, turns out to be impractical. There are two reasons why.
1. Imbalanced data. Manufacturing processes are optimized to produce devices with high yield, i.e. as few defects as possible. A typical scenario when historical production data is collected is this: a huge amount of good/pass data is obtained, yet bad/fail data is hardly present.
2. Underrepresented defective samples. There are many different types of defects that could result in failure. Enumerating each one of them is already difficult, let alone collecting enough samples of all of them. On the other hand, good data are typically alike and easier to define.
A slight philosophical discussion (feel free to share your opinion, I could be wrong about this): borrowing an idea from optimization theory, if we define goodness as a global optimum that we are striving for and defects as sub-optimal samples, then there should be only one definition of goodness and infinitely many kinds of defectiveness (assuming a continuous space). Of course, in practice, sub-optimal samples that are tolerably close to the global optimum are accepted as good (due to imperfect processes in the world). But those that are extremely far away will be deemed defective.
Our depiction of good and bad samples in manufacturing. Good samples are dense, defective samples are sparse.
Therefore, we need to find another training paradigm that is more suited for this scenario. Ideally, the new model must be able to learn only from good samples and reject defective ones that look immensely different from them. One Class Learning is a good candidate.
“Wait that sounds familiar… Isn’t it just Baselining with Golden Units?”
said a six sigma expert during one fine afternoon in a meeting room. Yes indeed! It turns out that One Class Learning is not a foreign concept in manufacturing. It’s just that the terminology differs. In quality control, engineers will typically collect Golden Units, or known good components and perform some measurements on them (a.k.a. feature engineering). Next, tolerance limits will be calculated based on the distribution of measurements. One statistical method that can be used to calculate such limits is called Part Average Testing (published by the Automotive Electronics Council). During production, devices whose measurement values are within the limits will be passed while those outside are failed.
Part Average Testing Limit Calculation
Part Average Limits applied to distribution of measurement values
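As a rough sketch of how such limits can be computed (the robust mean and standard deviation estimators and the ±6-sigma band below simply mirror the ones used later in this article; the function and variable names are illustrative, not an official PAT implementation):
import numpy as np

def part_average_limits(golden_measurements, k=6.0):
    # robust location and spread estimated from golden-unit measurements
    x = np.asarray(golden_measurements, dtype=float)
    robust_mean = np.median(x)
    # IQR / 1.35 approximates the standard deviation for roughly normal data
    robust_std = (np.quantile(x, 0.75) - np.quantile(x, 0.25)) / 1.35
    return robust_mean - k * robust_std, robust_mean + k * robust_std

def screen_dut(measurement, lower, upper):
    # pass a device whose measurement falls inside the limits, fail otherwise
    return lower <= measurement <= upper

# toy usage: golden units cluster around 1.0, an obvious outlier gets rejected
golden = np.random.normal(loc=1.0, scale=0.05, size=500)
lower, upper = part_average_limits(golden)
print(screen_dut(1.02, lower, upper))  # True  -> pass
print(screen_dut(2.50, lower, upper))  # False -> fail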
The whole baselining process can be visualized in the following diagram.
Golden Unit Baselining
Feature Engineering
The tricky part is deciding what features to extract. Traditionally, this requires deep domain expertise. As a result, it is common to see hundreds of algorithms deployed in a production line, each carefully designed to identify the presence of certain defects. The idea of machine learning is to remove this manual intervention altogether and let the model discover commonalities and differences from the humongous amount of data it has been provided with.
Nonetheless, don’t get me wrong, such “traditional” methods are actually quite accurate. I’ve worked with domain experts who could visually identify defects in certain devices in less than a second (I’m still amazed to this day) — they have written algorithms that are very good at identifying those flaws. In my view, machine learning could act as a complement to existing algorithms.
For the rest of the article, I’d like to give a brief overview of how one class learning is done. I will go through a particular algorithmic implementation of one class learning called autoencoder neural network. However, I will not dive deep into the theories of autoencoder. I’ll leave it to another post.
We shall also assume that the collected measurements are in the form of images (e.g. a visual inspection system). A dummy dataset from MNIST handwritten digits will be used.
Calculating Part Average Limits based on Autoencoder Reconstruction Error from Golden Units
Our experiment setup goes this way: Assume that images from good devices look like the digit 1. Therefore, during training we will only input images of digit 1s. To test the efficacy of this model, we will test it against an unseen set of digit 1s (good samples) and digit 2s (bad samples). The hypothesis is that reconstruction error for digit 1s will be low and digit 2s high. If this is true, we can then use reconstruction error as a feature. The whole pipeline is drawn in the following figure (notice the similarity with Golden Unit Baselining).
One Class Learning
Most of the code in this tutorial comes from Keras’ basic tutorial on autoencoder.
First, let’s limit the amount of GPU resource that tensorflow-keras will consume. By default it will attempt to reserve all available memory. In this case, we’ll only take 40% of the available memory.
import tensorflow as tf
import tensorflow.keras.backend as K

"""
Limit GPU memory consumption
"""
tfgraph = tf.get_default_graph()
with tfgraph.as_default():
    config = tf.ConfigProto(
        intra_op_parallelism_threads=12
    )
    # 0.4 means only 40% of GPU memory will be used
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    tfSess = tf.Session(config=config)
    K.set_session(tfSess)
After that we can load the MNIST handwritten digit datasets from Keras. Subsequently, we normalize the image intensity values from 0–255 to 0–1. This allows us to use a sigmoid activation function at the last layer of the autoencoder (since the output domain will be between 0 and 1 too). If we stick to 0–255 at the input, we will have to scale the autoencoder’s output accordingly as by default there’s no activation function that has an output range of 0–255.
Tips: Make sure that your neural network’s input and output has the same domain. Otherwise your training loss will not converge properly, no matter how many epochs are used.
# import library dependencies
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist

# import mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# normalization
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))
We then split the data into good samples (images of 1s) and bad samples (images of 2s).
# get trainData of digit 1 only
trainX, trainY = zip(*filter(lambda x: x[1] == 1, zip(x_train, y_train)))
trainX = np.array(trainX)

# get testData of digit 1 only
testGoodX, testGoodY = zip(*filter(lambda x: x[1] == 1, zip(x_test, y_test)))
testGoodX = np.array(testGoodX)
Good Samples i.e. images of 1s
# get testData of digit 2 only
testBadX, testBadY = zip(*filter(lambda x: x[1] == 2, zip(x_test, y_test)))
testBadX = np.array(testBadX)
Bad Samples i.e. images of 2s
To keep things simple, we’ll use the default definition of autoencoder from Keras. The only additional thing that we did here is to flatten the encoded layer into a 2-neuron bottleneck layer in the middle. Note that this choice is arbitrary. I didn’t perform any hyperparameter search in this experiment as I intended this to merely be a simple introduction. As for the choice of 2-neuron, this will be useful later on when I try to visualize the encoded embeddings in a 2D space.
# https://blog.keras.io/building-autoencoders-in-keras.html
from keras.layers import Input, Flatten, Reshape, Dense, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras import backend as K

input_img = Input(shape=(28, 28, 1))  # adapt this if using `channels_first` image data format

# encode
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
# at this point the representation is (4, 4, 8) i.e. 128-dimensional
encoded = MaxPooling2D((2, 2), padding='same')(x)

# flatten and compress into a 2-neuron bottleneck
shape = K.int_shape(encoded)
bn = Flatten()(encoded)
bn = Dense(2)(bn)
bnRec = Dense(shape[1] * shape[2] * shape[3], activation='relu')(bn)
encoded = Reshape((shape[1], shape[2], shape[3]))(bnRec)

# decode
x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation='relu')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

# create encoder
encoder = Model(input_img, bn)

# create autoencoder
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='mse')
autoencoder.summary()
Autoencoder Summary
The next step is to train the autoencoder model. We call the .fit() function, and supply it with images of 1s. As for validation, we put images of 2s. Note that validation dataset will not be taken into account during backpropagation. I put it there so that we can see how the reconstruction errors between images of 1s and 2s diverge as training goes on.
history = autoencoder.fit(trainX, trainX,
epochs=30,
batch_size=128,
shuffle=True,
validation_data=(testBadX, testBadX))
We can then plot the reconstruction error over epochs for both images of 1s (good) and 2s (bad). It can be observed that error for 1s decreases over time while 2s stays high. This means that the autoencoder has learned to reconstruct good images that it has been trained with (1s), but not the bad ones (2s).
# summarize history for loss
plt.figure(figsize=(10, 3))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['good', 'bad'], loc='upper left')
plt.show()
Reconstruction Error for Good and Bad Samples over Time
The errors above are only point estimates. We need to see the whole distribution of reconstruction error in order to calculate the tolerance limits (e.g. Part Average Testing Limits). From the chart below, we can see that indeed bad images have higher reconstruction error than good images.
# predict
predGood = autoencoder.predict(testGoodX)
predBad = autoencoder.predict(testBadX)

# reconstruction error
rec1 = np.sum((predGood - testGoodX)**2, axis=(1, 2, 3))
rec2 = np.sum((predBad - testBadX)**2, axis=(1, 2, 3))

# histogram
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.hist(rec1, bins=30, range=(0, 300), color='g')
plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False)
plt.ylabel("count of good images")

plt.subplot(2, 1, 2)
plt.hist(rec2, bins=30, range=(0, 300), color='r')
plt.xlabel("reconstruction error")
plt.ylabel("count of bad images")
plt.show()
Reconstruction Error for good (Top) and bad (Bottom) images
We can now calculate the Part Average Limit. Following the golden units’ baselining experiment, we will only use errors from good samples and calculate the limits accordingly. The resulting threshold is visualized below.
# Part Average Limit
robustMean = np.median(rec1)
robustStd = (np.quantile(rec1, 0.75) - np.quantile(rec1, 0.25)) / 1.35
PAUL = robustMean + 6 * robustStd
PALL = robustMean - 6 * robustStd

# histogram
plt.figure(figsize=(10, 4))
plt.hist(rec1, bins=30, range=(0, 300), color='g', alpha=0.5, label='good samples')
plt.hist(rec2, bins=30, range=(0, 300), color='r', alpha=0.5, label='bad samples')
plt.xlabel("reconstruction error")
plt.ylabel("count of images")
plt.axvline(x=PAUL, c='b')
plt.axvline(x=PALL, c='b')
plt.legend()
plt.show()
Part Average Limit on Reconstruction Error
The following are some samples of the autoencoder’s outputs for both good and bad images. We could visually see that images of 1s can be well reconstructed, while 2s are not. Interestingly, the autoencoder’s outputs on images of 2s look oddly similar to 1s. I don’t have a theoretical guarantee on this, but empirically it seems that the decoder part of the model has only learned to construct 1s with certain degrees of variety (e.g. angle and strokes), but not other digits.
Autoencoder’s reconstruction of good images
Autoencoder’s reconstruction of bad images
To further visualize how the decoder constructs images of 1s with different strokes and angles, we could look at the encoding produced in the bottleneck layer of the autoencoder model. We would like to see if there are any structures there. Could there be clusters of images with certain strokes / angles?
Since we have compressed the information into just a 2-neuron layer, we can easily represent each image as a 2-dimension vector and visualize all of them in a 2D plot.
testGoodEncodedX = encoder.predict(testGoodX)
np.shape(testGoodEncodedX)

plt.figure(figsize=(8, 8))
plt.scatter(testGoodEncodedX[:, 0], testGoodEncodedX[:, 1])
plt.show()
2D Scatter Plot of Encoded Images
Looking at the distribution, it seems that our initial suspicion that clusters of images with certain strokes/angles exist is incorrect. Perhaps the pattern looks more like a spectrum instead of clusters? To visualize this, we can replace every dot in the plot above with its original image.
from matplotlib.offsetbox import AnnotationBbox, OffsetImage

fig, ax = plt.subplots(figsize=(20, 20))
for x, xEnc in zip(testGoodX, testGoodEncodedX):
    imagebox = OffsetImage(x[:, :, 0], cmap=plt.cm.gray)
    ab = AnnotationBbox(imagebox, [xEnc[0], xEnc[1]], frameon=False)
    ax.add_artist(ab)
plt.xlim((testGoodEncodedX[:, 0].min(), testGoodEncodedX[:, 0].max()))
plt.ylim((testGoodEncodedX[:, 1].min(), testGoodEncodedX[:, 1].max()))
plt.show()
Indeed, we find that there are varying styles of 1s across the embedding space. Going diagonally from the upper left to the lower right quadrant, we see a spectrum of angles. Meanwhile, in the off-diagonal direction, we see thicker strokes as we get closer to the lower left quadrant.
Conclusion
I must first apologize that I couldn’t use a dataset from a real production line. Nevertheless, I hope that this simple tutorial gives you a better insight into how one class learning and golden unit baselining with Part Average limits are quite similar in practice. I have provided a simple Python code walkthrough in order to show the aspects of one class learning that we usually analyze in practice.
Disclaimer: Having worked with data from real factory floors, I can testify that reality is much messier than this simplistic example of images of 1s and 2s. There is a lot of preprocessing that needs to be done. Moreover, not all of the data labels can be trusted — as these are annotated by humans, there could be errors due to inconsistency. Nonetheless, the fundamental concept stays the same.
|
https://towardsdatascience.com/one-class-learning-in-manufacturing-autoencoder-and-golden-units-baselining-4c910038a4b3
|
['Edward Elson Kosasih']
|
2019-12-11 17:23:16.645000+00:00
|
['Machine Learning', 'Keras', 'Manufacturing Analytics', 'Data Science', 'Industry 4 0']
|
An Introduction to the Buddhist Argument for Political and Economic Liberty
|
I’ve been a practicing Buddhist for three years and a libertarian for twenty. I came to the latter through long conversations in college with my friend Trevor Burrus, who is now my Cato Institute colleague and my co-host on our Free Thoughts podcast.
I came to Buddhism by way of an interest in ancient philosophy. I’d long been a fan of the Greeks, and still am very much influenced by Aristotle. A few years back, I decided it’d be worth my time to branch out a bit, however, and I began with Buddhism. This meant that most of my early reading wasn’t contemporary Buddhist writers, but instead the Pali Canon and philosophical commentaries. I approached Buddhism the way I did the Greeks, reading primary sources, as well as academic works on those sources. A real delight in these early days was how much Buddhist dialogs read like their Greek contemporaries. (I’ve long thought the turn away from the dialog format in Western philosophy was a mistake, not just because dialogs are fun to read, but also because they are a better way to communicate and clarify complex philosophical ideas.)
Reading sutta anthologies, then commentary by Thanissaro Bhikkhu, then books like Buddhism as Philosophy to get a bigger picture, and then moving on to Mahayana texts, convinced me not only that Buddhism is interesting, but that at a basic level, it’s probably true. Buddhism successfully identifies the causes of suffering and offers the most plausible way to deal with them I’ve found. Eventually I became convinced enough to begin calling myself a Buddhist. That’s, of course, a very broad description masking a ton of variety. So if compelled to affix a label to my Buddhism, I’d say I’m a non-denominational, secular Buddhist, with my primary influence coming from the Pali Canon, but with a splash of Nagarjuna and Mahayana because of my interest in the development of Buddhist philosophy that occurred in that tradition. My personal meditation practice is largely vipassanā, and my family belongs to a Tibetan temple in the Kagyu lineage.
All of this research largely avoided political matters initially, because the foundational ideas of Buddhism aren’t much concerned with government and economics. Thus my emerging Buddhism remained mostly orthogonal to my day job of advocating for political and economic liberty. But then I came across Matthew J. Moore’s book, Buddhism and Political Theory, and was struck by, on the one hand, how compatible Buddhist ethics are with libertarianism, but also by how that was the opposite direction most politically engaged Buddhists went. This led to a lot more research and, eventually, to the idea for the book.
The core idea is that, counter to much of what politically engaged Buddhists argue, free markets and radical political liberty aren’t incompatible with Buddhist values. In fact, Buddhist ethics are quite supportive of both. The book, which I’m in the middle of drafting, is my personal statement of that. It’s me explaining why I don’t see a tension between my Buddhism and libertarianism and, in fact, why I think part of my attraction to Buddhism comes from the Buddha’s commitment to non-harm and non-violence, which points towards a political system that respects the dignity of individuals and pushes back on the idea that we should use state violence to control their lives.
Which brings me to…
An Overview of the Argument
The book is a work in progress and the argument is still developing. But it’s probably worth giving an overview to provide context to future issues of this newsletter. There are basically two prongs. The first is to argue that much, if not all, of what states necessarily do violates Right Action, specifically the first two precepts. The very nature of the state is to use violence against people. Max Weber’s canonical definition of the state is the “human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory.” If Buddhism prohibits killing and the state is the entity that claims a right to kill us (i.e., apply maximal physical force) if we disobey, then there’s an obvious incompatibility. Similarly, Buddhism tells us we must not take what is not freely given. But government depends for its very livelihood on taking resources, in the form of taxation, from people who wouldn’t otherwise give them, and does this by threatening violence if they refuse. Thus a full understanding of what the state is shows it as, at the very least, problematic from a Buddhist standpoint.
But my case is broader than that. I’ve argued elsewhere that claims that we have an obligation to obey and support the state ultimately fail, but quite a lot of people, Buddhists included, disagree with me. So I also want to show that free markets aren’t antithetical to Buddhism but, instead, the best way we have to realize Buddhist values, to approach each other with compassion, and to enable people to lead lives where they have the luxury of Buddhist practice. The Buddha was remarkably friendly to commerce, seeing no problem with his lay followers acting as merchants, so long as they avoided harmful trades. (Incidentally, some of those trades are the very ones governments are most active in.) He also didn’t see the inequality that can result from market success as necessarily a problem, and told wealthy merchants they could use a significant portion of their earnings to make themselves and their families comfortable — provided they set aside a good chunk of it for helping those less fortunate. If we care about the global poor, we should want to see more nations embrace free markets, not fewer. We should recognize that there has never been a more powerful tool for ending poverty than free trade coupled with limited government involvement in the economy.
A lot of Buddhists, including the Dalai Lama, aren’t on board with that, of course. The Dalai Lama has said he’s a socialist because capitalism is based on greed and socialism on compassion. But the Dalai Lama isn’t an economist and, from looking at what he’s had to say on the matter, it’s clear that his understanding of economics isn’t deep. (Which is fine, as his area of expertise is elsewhere.) His views on capitalism vs. socialism fit with what I’ll call the “folk economics” common to a lot of Buddhist activists. I’ll argue that this understanding has things backwards, and that markets are much more driven by compassion, and a desire to meet the needs of others, than socialist policies, which have a track record of harming those they claim to help. I also try to show that Buddhist concerns about markets promoting consumerism are largely misplaced and, to the extent they have teeth, they actually counsel more against expanding the sphere of politics. (Look for my article on political consumerism coming soon. I’ll include a link in a future newsletter when it’s published.)
Furthermore, when we look to what the Buddha actually said about politics, we find that his expectations of the state were quite limited. Comparing his ideal monarch to modern governments today is rather striking. They vastly exceed the scope set out in, for instance, the Ten Duties of the King.
In the end, though, my goal with The Free Market Buddhist is modest. I’m not going to convince every Buddhist that the only reasonable path in politics is libertarianism. But I do want to rehabilitate libertarianism from a Buddhist perspective. I want to make it an acceptable way to pursue Buddhist values in the political sphere. As a Buddhist, you aren’t required to reject free markets. You aren’t required to see them as, at best, a necessary evil. Buddhist libertarianism has strong roots in Buddhist philosophy, and it ought to be taken more seriously by practitioners hoping to move the world in a more enlightened direction. There are a lot of pieces to the project, and I hope you’ll join me as I explore them.
Some Preliminary Resources
While I’m still getting this Free Market Buddhist project off the ground, here’s some of my work elsewhere that might be of interest. The episode of my Free Thoughts podcast with Matthew J. Moore is a good overview of Buddhist political theory.
For a background on Buddhism and my relationship to it, listen to this episode of my personal podcast with Jason Kuznicki.
And if you want to look at an early and rough version of my argument for Buddhist libertarianism based on the first two precepts, take a look at this article on Libertarianism.org.
Until Next Time
My plan is for this newsletter to be a running record of my thoughts on Buddhist libertarianism, as well as links to the resources I’m finding helpful or interesting as I research Buddhist politics while writing the book, and keep at it after it’s published. I hope you’ll find the discussions rewarding. And I also hope you’ll let me know if you disagree with anything, or would like me to discuss topics. I want this to be a conversation.
Thank you for being a part of it.
|
https://medium.com/the-free-market-buddhist/an-introduction-to-the-buddhist-argument-for-political-and-economic-liberty-505538e35a54
|
['Aaron Ross Powell']
|
2020-09-03 11:58:11.725000+00:00
|
['Capitalism', 'Libertarianism', 'Economics', 'Buddhism', 'Philosophy']
|
Anatomy of Docker
|
How does Docker work?
Now that we have learned about containerization and how containers work, it’s time to face the ultimate truth: Docker is nothing but containerization software, and the Docker engine is nothing but a container engine.
The Docker engine consists of the Docker daemon and other utilities to create, destroy and manage containers. The Docker daemon is a process running in the background that receives commands from a local or remote Docker client (CLI) over an HTTP REST API in order to manage containers. Hence Docker is said to follow a client-server architecture, where the server is the Docker daemon.
When you install Docker on your system, you get the Docker engine, the Docker command-line interface (the Docker client) and other GUI utilities. When you start Docker, it starts the Docker daemon.
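To make that client-server split concrete, here is a small sketch using the official Python SDK (docker-py); this assumes the SDK is installed and a Docker daemon is running, and it is not part of the original walkthrough. Any client that can speak the daemon’s REST API can manage containers this way:
# pip install docker  (the official Python SDK, which talks to the Docker daemon's REST API)
import docker

# connect to the local daemon using the environment's settings (e.g. the Unix socket)
client = docker.from_env()

# ask the daemon to pull the image if needed, create a container, run a command and return its output
output = client.containers.run("alpine", "echo hello from a container")
print(output.decode())

# list containers and images known to the daemon
print(client.containers.list(all=True))
print(client.images.list())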
☛ What is a Docker container?
The container we discussed so far is a general interpretation of what container is and how it works. Docker container is far more sophisticated than that.
A Docker container contains application code and other dependencies. These other dependencies are what make a container a “container”: the necessary (application-specific) libraries, binaries and other resources that are needed for our application to function.
An example of a container would be a Node.js server. Our application code would consist of server.js and the node_modules library. But to run it, we need Node installed in the container, hence we need the node binary. Node.js might depend on other binaries and libraries, so we need those too. Finally, Node.js needs an OS environment to run in, for example CentOS, so we also need that distribution’s files, which the Docker engine uses together with the host kernel.
☛ What is a Docker image?
The node server example we just talked about contains many pieces that need to be present in the container so that our application could work. A Docker image is a zipped box that contains all these pieces.
We instruct the Docker client to create a container from this image. The Docker client instructs the Docker daemon to unpack the image, read its contents and launch the container with server.js executing as a process. Depending on other instructions in the image, the Docker daemon might expose some ports from the container which we can listen to, and/or mount volumes and do other things.
To create a Docker image, we need a Dockerfile. A Dockerfile is a configuration file with instructions that tell the Docker engine how to build an image. These instructions can specify what the base image is, what the working directory inside the container’s OS is, what application-specific files need to be copied from the system, what ports need to be exposed in the container, and a zillion other things.
A base image is an official image provided by Docker in which we will add our application-specific code and instructions. A base image can contain the CentOS operating system installed with the Apache server.
A Docker image uses a modified union file system such as AuFS. Each instruction in the Dockerfile creates a read-only AuFS layer. These layers are stacked on top of each other in the order given in the Dockerfile. Each layer is only a set of differences from the layer before it.
When we create a container from this image, we copy all these read-only layers and add a new read-write layer on top of it. The read-only layers are called image layers while the thin read-write layer in the container is called a container layer.
A typical Dockerfile would look like below (follow this link for other details of Dockerfile but below is a sample example).
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py
In the above Dockerfile, we are creating our image from the base Ubuntu image of version 15.04 (provided by Docker Hub), which creates the first layer. Then we copy everything from the current directory to the /app location in the Ubuntu filesystem, which creates a new layer stacked on the previous one. Then we build the application using the make command, which writes its output to a new layer stacked on top of the previous one. Finally, the CMD instruction runs the Python program using the python command. This last instruction does not take up any space, because CMD only records metadata (the command to run when a container starts) and does not change the filesystem.
As you can see from the above image, when we run a container, it creates a read-write layer on the top of the image layers. All the changes made to the running container, such as writing new files, modifying existing files and deleting files, are written to this thin writable container layer.
When a container is running, the container layer needs to communicate with layers below it to merge the differences in each layer and generate an actual file system. This is done using storage drivers provided by the Docker engine.
When a layer (including container layer) needs to read a file in the below layer, it reads the file from that layer directly. While building an image, when a layer needs to write a file from the layer below it, that file is copied to the current layer and changes are made there (diff is saved in the layer).
In the container, when the container layer wants to write to a file from a layer below it, that file is copied up to the container layer and the changes are made to that copy. This strategy of copying the file only when we want to write to (modify) it is called a copy-on-write (CoW) strategy.
This makes the writable layer lightweight, hence we call it the thin layer. All modifications made to the image layers live in the writable container layer. When the container is destroyed, the container layer is destroyed too, but the image layers are preserved as they are. We can still save the writable layer of a container if we want to, which gives us a persistent Docker container.
Multiple containers can share some or all file system layers from one or many images. Since each layer is identified by a checksum of its content, layers are very re-usable. If two containers are made from the same image, they share 100% of the image layers and each has its own unique writable layer (as seen in the image below).
Having a layered filesystem with a copy-on-write (CoW) strategy, along with layer re-usability, is what makes Docker containers so blazing fast to create. It is also why containers are lightweight and have a small size on disk (the size of the writable layer only).
|
https://medium.com/sysf/getting-started-with-docker-1-b4dc83e64389
|
['Uday Hiwarale']
|
2020-09-01 06:57:05.920000+00:00
|
['Hypervisors', 'Docker', 'Containerization', 'Virtualization', 'Containers']
|
What I Wish People Knew About Reporting Suicidal Friends on Facebook
|
What I Wish People Knew About Reporting Suicidal Friends on Facebook
With no one to turn to, I turned to Facebook — and ended up with a cop on my doorstep
Photo: Jack Halford/EyeEm/Getty Images
In the winter of 2013, I found myself spending a month on a leaky air mattress. I was staying at the home of my ex-fiancé’s Facebook friend, in Iowa. She’d generously welcomed me after my ex kicked me out of our shared Tennessee apartment.
I was three months pregnant and battling suicidal ideation every day. When my fiancé told me to go back to Minnesota and began spending all of his time trolling online for dates, my prenatal depression kicked into high gear. I was pregnant, recently dumped, filled with guilt, and terrified of being a bad mother. I was afraid my depression would prevent me from bonding with my child, and I was in desperate need of help. No matter how much people told me to move on, I couldn’t understand how to actually do it.
In those days, I still had a Facebook account, which constantly reminded me of the breakup. Everything online did, but Facebook was particularly good at it. Plus my Facebook posts were pretty damn depressing. I like to think I was careful about what I posted. I knew I shouldn’t tell people how much I wanted to die. I knew I shouldn’t share how often I went for walks in the middle of the night with a knife in my pocket.
One day I posted a status I don’t remember posting: “Today I’m thinking a lot about taking a walk and disappearing for good.”
I was alone when there was a loud knock at the door. Startled, I opened the door to see a police officer.
“Are you Shannon Ashley?” he asked.
“Yes,” I answered, the blood draining from my face. I didn’t understand what he wanted.
“One of your friends was concerned about some things you posted on Facebook,” he said. “Can I come in and talk?”
The officer sat down and asked me some questions about what was going on and how I was feeling. As I realized what was happening, I felt my face burn. Someone had reported my post to Facebook, which advised them to contact my local authorities.
I knew if I answered too honestly, I would have to go to the hospital. For a lot of folks battling suicidal ideation, going to the hospital is an unknown that seems even scarier than our darkest thoughts. We will do everything we can to avoid it.
So I was careful to tell the police officer just enough to get him to leave me alone. I’m not sure why we don’t talk more about this flaw in the system: So many of our protocols surrounding depression or suicide checks assume the person who needs help will tell the truth.
But a lot of us won’t.
The officer didn’t stay long, and my main concern was making sure he left before anyone else returned home. It was bad enough to feel so miserable; the last thing I wanted was to explain myself to somebody else.
I still don’t know who reported my post and called the police. I do know Facebook didn’t deem the post “against community guidelines” because it’s still visible six years later on my now-unused account.
“If someone you know is in danger, please contact local emergency services for help immediately,” Facebook advises on its help page. “After you’ve called emergency services, connect with your friend or call someone who can. Showing that you care matters. Make sure they know that you’re there for them, and that they aren’t alone.”
I’m glad to see Facebook recommends that the concerned user reach out to their friend, but I have mixed feelings about the entire policy. In my case, the person who reported my post and called the cops never revealed themselves or reached out to offer personal support.
People don’t know how to react to a pregnant woman who isn’t glowing with joy or delightfully sharing photos of baby showers and nursery makeovers. And they definitely don’t know how to deal with one in the deep throes of prenatal depression. Some friends did send baby gifts, but they were hard to look at — more reminders of what I didn’t have, and the massive responsibility that was about to come screaming into my unprepared arms.
|
https://humanparts.medium.com/i-made-a-facebook-post-that-had-the-police-knocking-on-my-door-b5e3d11baf20
|
['Shannon Ashley']
|
2020-01-15 17:30:33.282000+00:00
|
['Life', 'Facebook', 'Depression', 'Social Media', 'Mental Health']
|
Undip Conducts Research on Sustainable Aquaculture
|
The issue of sustainable aquaculture is increasingly being discussed around the world in line with the development and efforts to achieve the 17 goals of the Sustainable Development Goals, especially the 14th goal, Life Below Water.
As a research university with a Coastal Region Eco-development Principal Scientific Pattern, Diponegoro University (Undip) has also moved to research this topic. Over the last few years, research on sustainable aquaculture has been carried out by Nuning Vita Hidayati (NVH), a student of the Aquatic Resources Management Doctoral Program at the Faculty of Fisheries and Marine Sciences (FPIK) Undip, who is supported by the Government of Indonesia through the LPDP scholarship, the Indonesia-Overseas Lecturer Excellence Scholarship (BUDI-LN) and the Undip-AMU collaboration program.
This research was conducted on the north coast of Central Java, in collaboration between Undip, Jenderal Soedirman University, Raja Ali Haji Maritime University, and Aix-Marseille Université (AMU) of France.
This research raises the topic of emerging contaminants in relation to sustainable aquaculture. The concept of sustainable aquaculture itself refers to the principle of sustainable development as it has been adopted in various sectors, whether based on natural resources or on industry (manufacturing).
In the perspective of aquaculture, the principle of sustainability is interpreted as an effort to manage aquaculture resources responsibly while still ensuring environmental quality and efforts to conserve natural resources. In this context, pollutants are one of the key indicators.
The research thoroughly examines the various types of emerging contaminants that are a focus of attention in aquaculture activities: heavy metals, Persistent Organic Pollutants (organochlorine pesticides, OCPs, and polychlorinated biphenyls, PCBs), and pharmaceutical compounds (antibiotics). The topic was chosen as an expression of Undip's commitment to supporting the Sustainable Development Goals (SDGs).
|
https://medium.com/@devitahs/undip-conducts-research-on-sustainable-aquaculture-766fa1dbd357
|
['Devita Salsabila']
|
2020-12-24 12:29:56.892000+00:00
|
['Sdgs', 'Indonesia', 'Undip', 'Universitas Diponegoro', 'Aquaculture']
|
A Case Study in Truthful Messaging
|
1. Background
Sivan Innovation is a medical technology company that develops early detection algorithms for chronic diseases. Their first product, Moovcare, alerts oncologists when a recovering lung cancer patient is about to relapse (i.e. that the cancer has returned).
Lung cancer patients in remission use Moovcare to report symptoms via a weekly questionnaire
Moovcare showed huge promise from the start. Clinical trials proved that patients who used Moovcare survived up to 7.6 months longer than average — and at a fraction of the cost and none of the discomfort of drug-based treatments.
But doctors weren’t interested. Sivan’s pitch to oncologists and senior hospital officials fell on indifferent ears.
2. The Brief
Reposition Sivan and its Moovcare device in a way that would interest and engage doctors and senior hospital administrators around the world. And do so while preserving the clinical integrity of the product.
3. The Problem
A quick review of Sivan’s marketing explained the doctors’ lack of interest. Doctors are clinical practitioners interested primarily in patient outcomes, yet the company’s literature focused heavily on advanced technology.
No overworked oncologist with 1,500 patients wants to read a 35-page marketing brochure
4. Research
After familiarizing ourselves with the company, the technology, the product and stacks of clinical reports, we reached out to oncologists who would one day use the Moovcare platform.
Photo by National Cancer Institute on Unsplash
We conducted in-depth phone interviews with three top oncologists in the US and France, and visited another three at the hospitals they worked at in Israel. We walked the wards with them. Observed their interactions with nurses and staff. Asked them about their job, their hopes, their frustrations. We witnessed first hand what they dealt with every day as front-line clinicians.
5. Key Insight
Over the course of many hours of conversation, we garnered a ton of valuable information, but the key insight revealed itself when we asked the doctors what their greatest fear was:
Having to tell a patient ‘If only I had known sooner’
Discovering a relapse too late to treat effectively was doctors’ №1 pain. Despite their many years of training, experience, and know-how, they felt they came up short for patients when medical complications were brought to their attention too late; when the patient’s health had deteriorated to the point that the optimum treatment was no longer an option. This was the source of their greatest professional and personal frustration, and we needed to address it head-on.
This led us to a simple and universally true insight:
The earlier you can detect a problem, the better you can treat it.
6. Truth-Based Strategy
Armed with this insight, we needed to understand what it was about Moovcare that was different and better versus the standard lung-cancer follow-up.
When we got back to the studio, we sketched it out.
Standard follow-up: bi-annual CT scan where anomalies can go undetected for months
The standard follow-up sees the patient return to the hospital on average once every six months for a CT scan. Hopefully the scan shows no relapse and the patient gets the green light. The problem occurs when the cancer returns days or weeks after the scan, and is not picked up until the next scheduled scan many months later.
Moovcare enhanced follow-up: weekly patient reports alert doctors to anomalies immediately
Moovcare solves this clinical blind spot by enabling patients to report their symptoms on a weekly basis through a simple-to-use web-based questionnaire. Sivan’s algorithm analyzes the answers, and if it detects an anomaly, alerts the doctor immediately.
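To make the workflow concrete, here is a minimal sketch, in Python, of how a weekly-report alert rule of this kind might look. It is purely illustrative: the symptom names, weights, and threshold are assumptions made for explanation, not Sivan’s actual algorithm.

```python
# Illustrative sketch only: the symptoms, weights, and threshold below are
# invented for explanation and do not reflect Moovcare's proprietary algorithm.

WEIGHTS = {"cough": 1, "fatigue": 1, "weight_loss": 2, "breathlessness": 3}
ALERT_THRESHOLD = 4  # assumed cut-off for notifying the oncologist


def score_report(report: dict) -> int:
    """Sum the weighted severities (0-3) the patient reported this week."""
    return sum(WEIGHTS[s] * severity for s, severity in report.items() if s in WEIGHTS)


def check_weekly_report(report: dict, notify) -> bool:
    """Alert the doctor immediately if this week's score crosses the threshold."""
    score = score_report(report)
    if score >= ALERT_THRESHOLD:
        notify(f"Possible relapse signal: weekly symptom score {score}")
        return True
    return False


if __name__ == "__main__":
    # Example: a patient reports mild cough (1) and moderate breathlessness (2).
    check_weekly_report({"cough": 1, "breathlessness": 2}, notify=print)
```

The point of the sketch is simply that a lightweight weekly signal, checked against a threshold, can surface an anomaly months before the next scheduled scan would.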
In short, Moovcare allows doctors to detect the early telltale signs of a lung cancer relapse and intervene quickly with the best treatment — and outcome — for the patient.
We distilled this idea into seven words — our strategy statement:
It’s never too early to detect relapse
The real world test of an insight, as Ariel says, is to preface it with the words ‘Isn’t it true that…’ and see if it holds water. In this case it did: there’s not an oncologist on earth who would disagree with the premise of early detection.
We knew we were onto the big idea. Now we had to bring it to life.
7. Strategy Driven Creative
Every good brand identity is informed by a great brand strategy.
The creative concept for Moovcare was already contained within the strategy:
Celebrate ‘early’
Though many other healthtech companies were talking about early detection, no-one had embraced the universal notion of ‘early’ and made it the focal point of their brand communications. It was a differentiating strategy with exciting creative possibilities.
The first thing we did was further distill our succinct one-liner into a pithy tagline:
Early is on time
Does it pass the truth test? ‘Isn’t it true that early is on time?’ is a statement that 100% of our target audience, oncologists, would agree with, so yes.
Secondly, from a marketing perspective, positioning a company whose core business is developing disease-specific detection algorithms as the ‘early company’ future-proofs its messaging for years to come. Whatever the company’s product lineup looks like 2, 5 or 10 years from now, the evergreen promise of early detection will always hold true.
If you’re going to stand for something, stand for something big — bigger than your company and your product — that will remain interesting and relevant to your audience years into the future.
|
https://medium.com/@guygordon/a-case-study-in-truthful-messaging-d4dd7e3e14b9
|
['Guy Gordon']
|
2020-12-21 14:30:40.762000+00:00
|
['Medical Devices', 'Research', 'Insights', 'Messaging', 'Brand Strategy']
|
Studying about digoxin
|
Photo from Chemistry World
Digoxin, also known as Lanoxin, is the most commonly prescribed medicine in a class called cardiac glycosides. Cardiac glycosides are medications traditionally extracted from foxglove plants such as Digitalis purpurea and Digitalis lanata. Digoxin is used to control arrhythmias, atrial fibrillation, congestive heart failure and other heart problems, and it is usually taken along with other medicines. It comes as tablets, as a liquid, and by injection.
How does digoxin work?
Digoxin slows the heart rate and makes it easier for the heart to pump blood throughout the body. It works by affecting the sodium and potassium inside heart cells. Cardiac glycosides like digoxin act on the sodium-potassium pump (the Na+/K+-ATPase enzyme): they cause sodium to build up inside the heart cells, which decreases the ability of the Na+/Ca2+ exchanger to push Ca2+ out of the cells, so calcium accumulates in the sarcoplasmic reticulum. As a result, cardiac glycosides increase the force of the heart’s contraction.
Digoxin lessens strain on the heart and helps it maintain a normal, steady, strong heartbeat. It binds reversibly to a receptor site on the sodium-potassium pump, where it inhibits the exchange of Na+ and K+ across the cell membrane. Digoxin also lessens symptoms during rapid ventricular contractions by slowing electrical conduction between the atria and the ventricles, which can slow the contraction of the ventricles.
Indications
Congestive heart failure
Ventricular rate control in atrial fibrillation and atrial flutter
Reentry SVTs
Arrhythmias
Atrial tachycardia
Contraindications
Ventricular fibrillation
Ventricular tachycardia
Constrictive pericarditis
Digitalis toxicity
Hypersensitivity
Atrioventricular block
What are the possible side effects of digoxin?
Nausea
Vomiting
Diarrhea
Headache and dizziness
Skin rashes
Tiredness/ fatigue
Anxiety and hallucinations
Electrolyte imbalances
Increased blood level
Arrhythmias
Doses for digoxin
For tablets:
0.0625 mg, 0.125 mg, 0.1875 mg, and 0.25 mg
For liquid:
0.05 mg/mL
By injection:
0.1–0.25 mg/mL
Possible Risks and Hazards
1. Acute digoxin intoxication
It is more likely to occur in younger patients following an acute overdose, and is most likely to cause gastrointestinal symptoms.
Symptoms include anorexia, vomiting, nausea, visual changes (alteration in color vision), hyperkalemia, cardiac arrhythmia, asystole, ventricular tachycardia, second or third-degree AV block and dysrhythmias.
Treatment includes digoxin immune Fab (antibody) fragments to reverse the effects of digoxin, lidocaine for ventricular tachyarrhythmias, atropine or a pacemaker for bradyarrhythmia, and calcium gluconate, +/- sodium bicarbonate, +/- sodium polystyrene sulfonate for hyperkalemia.
2. Chronic digoxin intoxication
This frequently occurs in elderly patients as a result of decreased clearance of digoxin, commonly due to drug interactions or declining renal function.
Symptoms include nausea, malaise, electrolyte abnormalities, fatigue, lethargy, confusion, and hyperkalemia/ hypokalemia.
Treatments include digoxin immune Fab fragments.
|
https://medium.com/@bjxde/studying-about-digoxin-3831a49d596b
|
[]
|
2021-07-15 03:09:33.518000+00:00
|
['Drugs', 'Medicine', 'Digoxin', 'Medication', 'Pharmaceutical']
|
Steering Clear of Catastrophe
|
At the Centre for the Study of Existential Risk in Cambridge, top researchers study the global threats that could wipe out civilisation. On the occasion of his new book, we interviewed the centre’s co-founder Martin Rees about existential risks and the long-term future of humanity.
A pandemic super-virus, a global climate collapse, or an unrestrained and malignant AI that makes humanity superfluous. All more or less likely disaster scenarios that pose a significant risk for the well-being of mankind in the long run.
In fact, one way of understanding the modern world is seeing it as controlled by risks, with all our decisions about the future made in an attempt to control and minimise the risk of something going wrong. The famous sociologist Anthony Giddens dates the breakthrough of risk thinking back to the 16th and 17th centuries when Spanish and Portuguese explorers used the term in relation to sailing into the unknown. At the time, the word ‘risk’ only contained a spatial dimension, but with the advent of capitalism, risk thinking became directed towards the future as it was used to assess possible returns or losses on investments. Since then, risk thinking has found its way into every corner of society.
We make risk assessments all the time when we think about the future, as individuals (should I invest in crypto-currencies?) as well as at the societal level (how should economic policies reflect the risk of a new recession within the next five years?). And then there are the global existential risks that threaten everything and everyone; these are poorly understood, as we have only a few historical references to draw on when we attempt to assess their probability and take adequate precautions.
Luckily, there is a research centre in Cambridge with the purpose of surveying these kinds of risks and preparing the world for them. The Centre for the Study of Existential Risk (CSER) brings together specialists, technologists, and scientists from a broad range of disciplines, and the centre’s external advisors include, among others, the super-entrepreneur Elon Musk, the futurist Nick Bostrom, the geneticist George Church, and (until recently) the late astrophysicist Stephen Hawking.
The co-founder of CSER, Martin John Rees, is a professor of cosmology and astrophysics, a member of the British House of Lords, Baron of Ludlow, and a former president of the Royal Society, the world’s oldest national scientific institution. He founded CSER in 2012, together with Huw Price (professor of Philosophy at Cambridge), and Jaan Tallinn (co-founder of Skype). Rees has a new book out, On The Future: Prospects for Humanity — a brief, but dense treatment of the great challenges and possibilities that will shape our future, according to the British professor. Within its mere 272 pages, the book covers vast themes such as population growth, space travel of the future, bio and cyber technology, robots and artificial intelligence. Also, a good part of the book is dedicated to the concerns of professor Rees and the other CSER researchers — the existential risks that threaten the future of humanity and civilisation. The researchers at CSER have divided this category of risks into four areas: Extreme technological risks, global catastrophic biological risks, extreme risks and the global environment, and risks from artificial intelligence.
We interviewed Rees about the global risk landscape, and where he believes us to be heading.
Why did you decide to start CSER together with Huw Price and Jaan Tallinn?
“I think all of us felt that, whereas there is a huge amount of study on more conventional risks — carcinogenic food, low radiation dosage, plane crashes and so on — there is not enough study of the newly emergent risks which are of low probability but of extreme consequence. CSER is based in Cambridge, which is probably the number one scientific university in Europe. So, we feel we have an obligation to create more awareness of the extreme risks, with the aim of trying to distinguish between those that are science fiction and those that are realistically emergent, and to try to minimize the very serious ones.”
So, the goal of CSER is to influence and guide public understanding of which risks are real and important and which are irrational?
“Yes, I would say that is our aim. We are a small group, and there are only half a dozen groups in the world like ours that focus on extreme risks. We do it because these kinds of threats are rather under-studied. Being embedded in a major university, we can draw on expertise from different fields and use our combined knowledge to try and discriminate between threats that should be worried about and which are important, and those which aren’t as important. Of course, experts can be wrong, but they are far more likely to offer sensible guidance than the people who rarely think about these things.”
What sparked your interest in existential risks and how to prevent them?
“I’ve always been politically engaged. I was campaigning during the Cuban missile crisis. In the 1980’s, I attended conferences where I had the privilege of meeting senior people like Joseph Rotblat and Hans Bethe who had been involved in making the atomic bomb and who both felt a special obligation to do what they could to harness the powers that they had helped unleash. They weren’t very successful in doing so, but they felt an obligation, nonetheless. I came to feel a similar obligation, both as President of the Royal Society and a member of the House of Lords, that we need to consider the social ramifications of the new technologies in development today.”
And how about your training as a cosmologist and astrophysicist? Has that shaped the way you think about our future and the risks facing us?
“Not particularly. Maybe in the sense that cosmologists and astrophysicists are perhaps more aware than the average person about the far future, because we work with enormous time-scales. As I say in my book, most educated people are aware that we are the outcome of four billion years of evolution. I suppose we have an extra perspective in that we realise that this century is very special when compared to previous ones and that, if we do things very badly, we can foreclose future potentialities.”
Extreme risks relating to the global environment is one of CSER’s main areas of research. Photo: Carmen Marchena Alonso.
We know plenty of recent examples of things ending badly: devastating pandemics like the Spanish Flu, or the several ‘close calls’ when the world stood on the brink of nuclear war. And then there is climate change, which brings its own set of threats. These kinds of risks are well documented and easily understood. Other risks that you study at CSER seem more uncertain and speculative. Why, for instance, do you believe that developments in AI pose a potentially existential threat to us?
“One point I make is that we can’t predict more than 20 years ahead when it comes to technology. Some projections we can make with relative certainty — things like population increases and global warming. But when it comes to technology, we can’t predict with confidence that far ahead. The smart phone would have seemed like magic 20 years ago and no one would have predicted how fast it would spread or the impact it would have.
With that said, some scientists fear that computers may develop minds of their own and pursue their own goals that may be contrary to human wishes, or that may even treat humans as encumbrances. Some, for instance Stuart Russell at Berkeley, and Demis Hassabis of DeepMind, think that AI already needs guidelines for ‘responsible innovation’. Others, like roboticist Rodney Brooks, who created the Baxter robot and the Roomba vacuum cleaner, think these concerns are too far from realisation to be worth worrying about — they remain less anxious about AI than about real stupidity. What’s happened at our centre is that we now have a separate group called the Centre for Future Intelligence. This group tackles general issues arising from the social impact of AI.
My personal belief is that in the long run, AI does pose a risk, but in the short run I worry more about bio and cyber risks.”
How would you characterise the threat from cyber?
“Cyber is an example of how few people can cause major damage. In my book I quote a report from the US defence department corroborated recently by General David Petraeus. The report describes how a cyber-attack at the state level could shut down the electricity grid in a large part of the United States — and that this would merit a nuclear response. So, cyber threats are indeed very serious, especially when taken in connection with other threats such as nuclear.”
What about the risks relating to biotechnology?
“Misuse of biotech is another big risk because it’s very hard to enforce any regulation. We can do our best, but we can’t expect to be effective at it. Today, a single person or a small group of people can cause an effect that can cascade widely — and they don’t need a huge research facility to do so. In 2011, two research groups, one in Holland and another in Wisconsin, showed that it was surprisingly easy to make the H5N1 influenza virus both more virulent and more transmissible.
My worst nightmare would be an unbalanced ‘loner’, with biotech expertise, who believed, for instance, that there were too many humans on the planet and didn’t care who, or how many, were infected.”
How do we mitigate these kinds of risks — the ones that stem from human error, malice or poor judgement?
“As I say in my book, the global village will have its village idiots and they’ll have global range. For this reason, I believe the balance between freedom, privacy and security is going to have to shift a bit because the consequences of what can be done by error or design using these technologies are far greater today.”
the global village will have its village idiots and they’ll have global range. For this reason, I believe the balance between freedom, privacy and security is going to have to shift
How should we strike that balance? Are you in favour of regulating the use of certain technologies or limiting their spread?
“I don’t have a solution. Obviously, we must do everything possible to avoid something happening by mistake, but we can’t guard ourselves against the intentional misuse of these technologies. I think this is a serious challenge.
I also worry about the fragility of society right now. There have been catastrophes in history which, if something similar happened today, would have far greater consequences because of the level of interconnection today. In the 14th century, the Black Death killed half the inhabitants in some European towns. The surviving part of the population carried on. But if something similar happened today, once the hospitals reached their capacity, it is likely that the social fabric would break down because we are so dependent on our systems functioning.”
I think the fragility you describe is what motivates people who want to take survival after a potential societal collapse into their own hands. Recently there has been a lot of reporting on Silicon Valley hedge fund billionaires and technologists buying up real estate in New Zealand and building bunkers there to prepare for plan B in case things turn sour. What do you make of this, as someone who studies existential risks?
“What you are mentioning are extremists, often extreme libertarians. I don’t think they are the mainstream. In any case that sort of preparation is not as widespread as the development of fall-out shelters in the US in the 1950’s and 60’s when there was a very real threat of nuclear war.”
In Denmark many fallout shelters have been refurbished into rehearsing studios for musicians. Would this worry you?
“Not really, no. The reason fallout shelters are irrelevant today is that what is more likely than a nuclear war is a breakdown in society. Imagine what would happen in a major western city that had no electricity. Within a few days our cities would be uninhabitable and anarchic.”
The number of active nuclear weapons was at its highest in 1986, reaching 70,300. By 2018 the number had been reduced to approximately 3,750 active warheads.
Isn’t the nuclear threat still very real?
“The risk of a nuclear bomb going off in the Middle East, India or Pakistan is probably higher than ever. And, of course, a third world war involving nuclear weapons would be over in a few days. But the risk of thousands of nuclear bombs going off is not as high as it was during the Cold War because there has been a scaling down of the number of weapons. When you realise that there have been situations such as the Cuban Missile Crisis where there was a 1 in 3 chance of a catastrophe that would destroy the fabric of European civilisation, as Robert McNamara later estimated, it’s clear that a Soviet takeover would have been preferable.
Of course, there is still the risk that the next nuclear standoff will not be handled as well as the ones during the Cold War were. So, the nuclear threat hasn’t gone away. To this should be added the 21st century technologies.”
Over the last decades there has been a shift in the public perception of the future, from a hopeful place to a threatening one. The future is no longer as bright as it was in the 1950s and 60s, when we were promised flying cars and bases on the moon. You see the shift most clearly in popular culture and science fiction, which is almost always very bleak. Why do you think this is?
“I think it reflects the realisation that the stakes are higher than they have ever been. The worst that could happen now is global. There is a book by Jared Diamond, Collapse, in which he goes through examples of how and why societies have collapsed in the past. The difference is that those were all localised events. Today, it is impossible to imagine a local civilizational collapse. Such an event would almost certainly be global in scale.
I also think it has something to do with how hard it is to make realistic predictions based on the technology we have today. Manned spaceflight, people walking on Mars and supersonic airliners are all things that, 40 years ago, we thought that we would have today. And those things were possible had the investments been maintained, in the Apollo programme for instance. But it takes economic and political pressure. On the other hand, the development of the internet and smart phones have impacted the world far more and much faster than we could have imagined. And I would argue those technologies have had an overall benign effect. The same can be said of technologies that have to do with improving health.
So, there is a distinction between what can be done technologically and what actually happens. We don’t know which technologies will become widely adopted. This is true of AI and human enhancement as well.”
Your book is not a techno-utopian one. And it’s fair to say the work you do at CSER deals more with the negative aspects of technological growth than with the positives. Would you say you are optimistic that technology will solve more problems than it causes in the long run? Will it be a net gain or a net loss in terms of the global risk level?
“Up until now technology has done more good than harm. It’s clear the lives we live today are better than the previous generations — and that is largely thanks to technological developments. But I think one must keep in mind that while the new technologies have benefits, they also create a new class of risks. And to ensure that the balance remains positive as it has until now will be a big challenge.”
Your book deals with the more speculative and cosmic far future scenarios as well. Does humanity’s future lie off the Earth, on other planets?
“If I was an American, I would not support spending any money on NASA’s manned space programme. I wouldn’t support manned space programmes either if I was a European. The benefits of sending people into space have all disappeared with the advent of robotics. Robots can do what people do much more easily.
Nonetheless there are these private companies who are doing manned spaceflight. I think they are very good news. They will be able to send people who want to take risks into space just as an adventure. A 2 percent risk of failure is too great for NASA but not necessarily for a private space flight company sending out extreme sports people.”
Extreme sports?
“Yes, manned spaceflight should be for people prepared to accept high risks rather than for tourists. Looking further into the future, I think these people will be the first to redesign themselves into what will essentially become a new species. They will use all the resources of genetic technology to modify themselves and spearhead any expansion of intelligence from the Earth into wider space. They might become electronic immortal entities made for long-distance space travels.
I think we should encourage these rich people to spend money on it.”
You dedicate a few pages to the possibility (or risk) of finding alien life. What are your thoughts on this?
“If we do detect evidence of something artificial or technological in space, I think it’s far more likely it will come from something electronic rather than something made from flesh and blood.
In the cosmic scale, if there is life out there already, what happens to us humans is insignificant. But if life is unique to the Earth, then of course what happens here on Earth is of cosmic consequence. If we screw things up this century it forecloses the possibility of life spreading beyond Earth.”
It depends on what the village idiots will do?
“Yes, as I say in my book, there could be a ‘bottleneck’ at our own evolutionary state — the stage we’re at during this century, where we develop powerful technology. The long-term prognosis for ‘Earth-sourced’ life depends on whether humans survive this phase — despite vulnerability to the kinds of hazards we are currently facing.”
|
https://medium.com/copenhagen-institute-for-futures-studies/steering-clear-of-catastrophe-7c1fe6c2cd64
|
['Casper Skovgaard Petersen']
|
2019-11-18 13:54:15.810000+00:00
|
['Existential Risk', 'Artificial Intelligence', 'Cyberterrorism', 'Marvin Rees', 'Bioterrorism']
|
Mixed feelings about Italy’s new law on delivery riders
|
After years of debate and declared intentions, since last November 2nd Italy has had a regulation that deals specifically with the work of delivery riders. Is it a good or a bad regulation? Hard to say. The premises were very negative. Unfortunately, the voices of a very few people, most of whom had worked as riders for no more than a few hours, had generated a strong negative prejudice against the sector, preventing politicians from getting to know it and understanding its enormous opportunities, and not only its risks.
The two governments that have followed one another over the last year thought that making riders’ work more rigid would make them happier. They were quite shocked when, this summer, hundreds of riders across the country finally expressed their opinion, strongly opposed to measures such as the prohibition of piecework or mandatory public insurance. And indeed, even if very little of this has emerged in the public debate, all the riders collaborating with the major platforms already work legally, have insurance and earn well, and all this with a freedom unthinkable in almost any other job.
The new Italian law recognizes that riders are self-employed. It is this formal characteristic that gives them the freedom to choose whether, when and for how long to work, and fortunately the legislator has taken this into account. It is an important recognition, which gives a solid foundation to what almost all platforms have been claiming for some time: the problem with regulating the gig economy is not guaranteeing safeguards that everyone is happy to provide, but making rigid a type of work that is appreciated by all parties precisely because it is not rigid.
That said, the new law requires the platforms to reach a collective agreement within twelve months, on which to base the working conditions of the riders. In an ideal world, this is a desirable solution. In reality, however, there are two interlinked problems. First: working as a rider is a temporary solution in the vast majority of cases, which means that almost no riders are registered with trade unions. Why, then, should the platforms sign an agreement with unions that do not represent the actual interests of the workers the agreement concerns? It would certainly have been preferable to clarify, in the law, that agreements could also be made with representatives of the riders themselves, regardless of whether they belong to a trade union.
Furthermore, the law provides for mandatory public insurance against accidents at work. As noted above, the couriers collaborating with the main platforms are already fully insured. Given the peculiarities of this work, public insurance, which is normally intended for employees, cannot maintain adequate levels of flexibility. For example, the new law requires insurance to be guaranteed for each day of work, regardless of the number of hours worked. But why should a platform pay the same amount for a rider who works one hour as for one who works six or eight hours in a day? And why should every platform pay to cover the same person if they work as a courier for different platforms during the same day?
Finally, the new law provides that the rules for employed work should apply to all collaborations in which the performance is mainly personal and continuous. One might wonder what “mainly personal and continuous” means. The answer is easy: no one knows. There is no legal definition of the criteria for a “mainly personal and continuous” collaboration. This certainly does not favor the certainty of rules that any company needs in order to operate sustainably and to invest in the long term.
In conclusion, the Italian law has positive and negative aspects, largely due to a lack of understanding of the on-demand economy and the benefits it brings to all its stakeholders. That said, the recognition of couriers as self-employed workers, upon which any social right or protection is then built, certainly goes in the right direction for a sustainable and future-oriented regulation.
|
https://medium.com/hola-glovo/mixed-feelings-about-italys-new-law-on-delivery-riders-8def1a6ebf41
|
['Giacomo Lev Mannheimer']
|
2019-11-14 08:56:44.870000+00:00
|
['Italy', 'Regulation', 'Gig Economy', 'Riders', 'Policy']
|
“write Juan.”
|
A lot of phrases ring in my head during my daily stuff (I can’t really call something as chaotic as my life a routine right now), but the one I’ll start with is one I learned at my job straight out of college,
“Bottom line up front.”
I’m here to start writing a blog, I have no experience doing so, and I haven’t quite properly defined what I’m looking to get out of it beyond these three goals:
1. Improving my communication skills.
2. Giving my thoughts a place to exist where I can process them and where those I am close to can find them if they would like.
3. Attempting to reframe my reflections to include more positive experiences; sharing them will help (and any readers might get something fun or warm out of it).
That being said, I am kind of a talkative person so 1a. might be not boring the reader with excessive detail. I’ll probably look into techniques other writers use to avoid doing that. But while I work on that, please bear with me.
These are mostly selfish reasons, but this is on an online platform. So, if you happen to read this, who am I? What am I doing?
Well, at the time of writing it’s 1:36am and I’m in an “interesting” hotel in Dongguan with not much of a desire to sleep before work tomorrow, it seems.
My name is Juan, and deciding to write is a big reflective step that I’m taking…
My intention for the rest of this post is to provide an overview of everything I want to write about; in doing so, it will also serve as an introduction. So here goes.
|
https://medium.com/@jrlw318/write-juan-1573339dde87
|
['Juan A. R.']
|
2018-12-30 07:41:41.069000+00:00
|
['Esports', 'Engineering', 'Personal', 'Work', 'Hobby']
|
How To Be Vulnerable
|
How To Be Vulnerable
It’s Time to Open Up
Photo by Genessa Panainte on Unsplash
I cry a lot.
Nobody knows. Because I don’t want anyone to know. I don’t want anyone to comfort me. I don’t want anyone to console me. I don’t want anyone to touch me when I am sad. I like to feel sad. It reminds me that I am still here. But it also f*cking hurts.
I bury myself in work. I hibernate in an Internet cocoon. I lay comfortably behind a screen. I unravel. I stop myself from crying. But it all builds up. And one day the dam will break. I will like it when it does.
I lose myself in fatherhood. I lose myself in loss. I lose myself in ego-spirals. I lose myself because of a broken world. Sometimes I want to lose myself. Sometimes it’s an excuse to hide. It is an excuse. But I don’t know what I’m scared of.
Maybe I’m scared of dying. Maybe I’m scared of losing someone I can’t live without. Maybe I belong alone. Maybe I am unlovable. Maybe I use maybe as yet another excuse not to say yes to everything.
I love myself more than I hate myself. I have no reason to hate myself, but still, some days I walk by the mirror and call myself a name. I don’t know why. I am objectively a good person, even to myself. I don’t know if I want to be more or less. Or to just disappear.
I think about how I was holding my mother’s hand when she took her last breath. I was 20. I think about how I was holding my father’s hand when he took his last breath. I was 33. I wonder what I’d be like if they were still here. I wonder what my kids would be like with them in their lives.
It’s easier for me to open up online. My friends don’t read this. I wouldn’t care if they did. They probably know all this. My oldest friends know me. They know me when I am pretending I am ok. I am not ok. I am rarely ok. But I am always stable.
Being vulnerable feels good. But the Internet has made vulnerability a commodity. There are people on this site who use it as a crutch. As a content churn. That is really f*cking sad. And messed up. But it’s real.
Sometimes I feel like I deserve bad karma. There is literally no reason for this. I’ve lived a fairly pristine existence, but for a few minor anomalies. It’s almost as if I want to be bad, but I can’t be. Because I know that inside, I am good. And I would be uncomfortable being bad.
I hold doors. I say thank you to everyone. I pay attention to people. I haven’t left my house much since March. Like, less than you. I can’t tell if I hate it or love it. Or if my existence right now is so static, I am in a simulation of myself. I am static.
It’s hard to be vulnerable. People judge your vulnerability. God forbid you admit that you are not ok. Diagnosis. Medication. Hospitalization. For some people, all of those are relevant and important and life-saving. But for some people, it’s ok to just feel like hurting yourself. Even if you never plan on doing it. Admitting it isn’t always a sign of more.
It’s ok if you want someone to break up with you because you don’t think you deserve them. It’s probably not healthy, but it’s ok. A lot is ok. It’s ok to feel like the world has failed you. What’s not ok is to act like an entitled prick all the time. Yeah, there are too many of those people.
Do you ever wonder what the point of someone who plagiarizes is? Is it that they are so desperate for attention that they will do something that is so easy to figure out? Do you ever wonder if Internet sites truly care about ethics? Or is literally everything in our lives fully commercialized?
Some days I want to punch a wall. I don’t even know why. I am completely calm all the time. I think my insides hold some mild rage. Maybe because of all I’ve lost. I miss them. I miss being a son. I miss my mommy. I miss my daddy.
I cry a lot. Like, more than you would expect. I cry watching The Good Doctor. I cry watching A Million Little Things. I cry listening to Shallow. I’m kind of an emotional basket case. But no one knows. I am not embarrassed in the slightest bit. I like crying. Just not around anyone else.
I want to crack myself open and lay threadbare on a spool for the world to see.
I want to lay in a forest for eight hours. On the ground. I want to stare at the sky and the trees. I don’t care if a bird sh*ts on me. I want to breathe in the crispest air in the world. I want that air to cleanse me and make me feel better. Part of me doesn't want to feel better.
People used to say I moped around a lot. I did. I was lonely. I think I wanted to be held more than I was. I still mope around, but it’s different. Now I just see time slipping through my fingers while I stand in one place. My feet are cemented. I’m bored.
I’ve never been addicted to anything. I can stop doing anything. I don’t feel like I need anything in my life. Besides my kids. And that’s a problem. Because I probably need love. Like a lot of love. Better love. A different kind of love. From someone who actually gets me.
I think I deserve that, but in the same thought, I like to tell myself I am fine being alone. That solitude suits me. And in a lot of ways it does. But it’s because I am numb. The pins and needles of my life don’t allow me to feel enough. I’m scared to feel something. I’m scared to lose something.
I think about dying a lot. I worry that my kids will be lost without me and the next second I worry that they will be fine without me. And what was I here for anyway? Some days I wonder if I am really good at anything. Even the stuff I know I am good at.
I think someone will read this and suggest I get on some medication. I think that sounds stupid. Because the act of spraying vulnerable thoughts on a wall is not a defect. It’s just an emotional brainstorm. It feels good even when it feels bad.
I used to be rigid. Then I realized I had an appetizer portion of Asperger’s. A lot of people don’t believe that. They think it’s an excuse I use to make the fact that I don’t want to emote or connect or go to parties more palatable. I don’t give a f*ck what they think anymore. I fall where I fall. Sometimes I want to fall down and then stay there.
I’ve never heard voices. I wish I did. And that they were my mom or dad. Sometimes I see birds near my garage and I think they are them. I don’t care how that sounds. I miss them. I don’t think I’ve recovered from their deaths. It doesn’t matter how old I am. I can’t recover.
If I hadn’t found writing again a few years ago, I know I would have a lot more anger inside of me that would be constantly festering. If I hadn’t found meditation a few years ago, I may have had a heart attack. Or panic attacks. Or an overload of stress. Mindfulness and awareness are what makes me breathe. And get up in the morning. And my kids. Even when they don’t want to talk to me. I still love them more than I could possibly ever love anything.
I wonder if other people get tired of reading the same story from the same writers, in a different form, every other day. The same narrative. The same. The same. The same. The same. Different headline. Different paragraph structure. Same. Sh*t.
I like writing whatever the f*ck I want to. Including the * that I put in all the curse words in here except for prick. Because prick looks better and is contextually accurate without an *. I may be the only writer who doesn’t care if you read me. Or follow me. Or follow me just because you want something. I am oblivious. You can stop. I don’t notice. I don’t care.
I wonder what it would feel like to love someone like I love my kids. That would be nice. But I wonder if it wouldn’t be as nice as I think. Because I think I love my kids too much sometimes. And it causes pressure for them. And then I feel bad about myself, as if I can’t do anything right. Even when it comes to unconditional love.
Do you ever just want to spill your darkest secrets onto this screen? I don’t have many, but this is what it is to me. To be vulnerable. This is how to be vulnerable. This is how to stop caring about what everyone thinks. Or what comes next. Or what this will mean. Or anything.
These are just words. They come from the inside of my heart. But they are still just words. But when the words you write can touch someone else, even for a second, you have achieved something. It’s why I write poetry. Well, I have to write poetry.
It spills from my arm and eyes and ears and my mouth. It has to come out. But what I’ve come to find is that the less I care about who reads it, the more meaningful it is when someone does. And when it hits them straight in the chest. The biggest compliment I can get is a tear. Not a word. Emotion from my words. That is pure.
I cry a lot.
It feels good. Maybe one day I will be bold enough to cry with you. But until then, my words are my tears.
|
https://medium.com/assemblage/how-to-be-vulnerable-473977fa525f
|
['Jonathan Greene']
|
2020-12-19 04:36:02.362000+00:00
|
['Personal Growth', 'Self', 'Mental Health', 'Vulnerability', 'Writing']
|
Why Nobody Can Be Fascist Nowadays
|
As a politics addict, I am interested not only in my country’s domestic politics but also in international politics. Each country differs in terms of culture, society, history and institutions. However, there is one factor that makes all democracies similar: the existence of two or more parties that are ideologically positioned on a continuum running from Left to Right. In England there are Labour and the Conservatives; in the US, Democrats and Republicans.
If we focus on each country’s political language rather than its party system, then we’ll find out that there is also another feature shared by nearly every western democracy. Each country has its own political language, which is clearly shaped by its historical roots. For instance, an Italian politician is much more likely to talk about the Catholic Church than a British one. However, as I’ve already said, there is also a strong similarity. When these countries experience times of social and political tension, we are likely to hear lots of undue references to Fascism.
What is Fascism? Have you ever thought about that? This word is used quite frequently in modern Politics, but does it really make sense? Politicians make use of several distortion tactics in their speeches. One of them is called “Name-Calling”, by which a politician attacks her/his rival by linking her/him to a negative symbol. Here, “Name-Calling” is represented by the word “fascist”, whereas the negative symbol is clearly Fascism. Still, does it really make sense to use this word and this symbol? The answer, to which I’m going to bring evidence, is no.
The problem is that, all around the world, several politicians usually just say something like, “He is a fascist,” and then drop the mic, as if nothing more needs to be said. But there is something more that has to be said. If a person attacks someone else by labelling her/him as a fascist, then s/he should be able to explain what being a fascist means. Otherwise, the attack will be no more than mere speculation.
In 1946, George Orwell published an interesting essay called “Politics and the English Language”. In it, Orwell criticized several inaccuracies in the written English of his time, and he dedicated some lines to the word “Fascism” too. According to him, ”the word Fascism has now no meaning except in so far as it signifies ‘something not desirable’”. Hence, Orwell considered that the word “Fascism” had been stripped of its real meaning. He went on to say:
The words democracy, socialism, freedom, patriotic, realistic, justice have each of them several different meanings which cannot be reconciled with one another. In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.
Mussolini and Fascist “Blackshirts” in Naples before the March on Rome (1922)
Historically Speaking
A few days ago, Grant Piper published a remarkable article for History of Yesterday called “A Definitive Guide to Historical Fascism”. I strongly suggest you read it if you’re interested in the historical roots of Fascism.
Piper tries to find the ideological roots of Fascism too. He mentions an essay called “The Doctrine of Fascism” written by Mussolini in 1932 as a sort of fascist manifesto. It would be wrong to compare this essay to Marx’s and Engels’ “The Communist Manifesto”. They are different in terms of lexicon, style and purpose. Instead, it is much more reasonable to compare “The Doctrine of Fascism” to the “Mein Kampf”. Both are essays and not manifestoes. However, there are also a few differences. I will analyze just a couple of them.
First of all, “Mein Kampf” was published in 1925 (eight years before Hitler seized power in Germany), while “The Doctrine of Fascism” was published in 1932 (ten years after Mussolini came to power). The former was aimed at describing what Hitler would do once in power; the latter was aimed at providing an ideological rationale for what Mussolini had already done in office. To make a simplistic comparison: in Germany, the book was published before the movie; in Italy, the book was published after the movie.
When Hitler’s dictatorship started, “Mein Kampf” became a sort of sacred text. In schools, it was part of the curriculum. Whenever a couple got married, the bride and groom were given a copy by the State, and each soldier was given one as well. Conversely, the distribution of “The Doctrine of Fascism” was nearly nonexistent, as it was just an entry in the “Treccani”, an Italian-language encyclopedia. Mussolini did not think that reading the essay should be compulsory.
Mussolini’s desk at the headquarters of “Il Popolo d’Italia”, the newspaper he edited
Fascism as an Ideology?
By this point, it is reasonable to wonder whether Fascism was backed by an ideology or not. Merriam-Webster gives three definitions of “ideology”; the most useful for the purpose of this article is the second one: “[Ideology is] the integrated assertions, theories and aims that constitute a sociopolitical program”. This definition implies the fact that this body of theories and beliefs is immutable; or, at least, that the key principles of the ideology do not change over time. For instance, if you are Catholic, you will always believe in the existence of God. Perhaps you will change your opinions towards same-sex marriage, as the world we live in has finally developed; but you will never deny the existence of the Trinity.
What are the key principles of Fascism, then? We have already seen the fact that Italian Fascism didn’t have a manifesto, nor a book. Where can we find these principles? The answer is that we have to derive them from Mussolini’s life. This is because Fascism — and this is the main thesis of this article — was just Mussolini’s Fascism. Hitler’s was Nazism; Franco’s was Francoism. It follows that, if you attack a politician by saying s/he is a fascist, then you are saying that s/he is an Italian man born in 1883 who ruled over Italy from 1922 to 1943. It sounds quite weird, doesn’t it?
It might be argued that Italian Fascism was just a sort of paradigm. But still, what is this paradigm like? Can we confidently say what Mussolini’s thoughts on race were, or what his ideas on the nation were? These are the two themes I am going to analyze, as they are the themes people typically have in mind when they use the word “Fascism” nowadays. However, the same analysis holds for any other theme.
Mussolini in 1925
Mussolini’s Ideas on Race
Fascism and racism are usually deemed bound to each other. According to many people, if someone is racist, then s/he is also fascist. However, this is clearly wrong. There are thousands of events in history in which someone was discriminated against, killed or deported just because of her/his race. The majority of these events took place before 1919, the year in which Fascism was created by Mussolini. Obviously, racism had spread around the world long before Mussolini created the Fasci Italiani di Combattimento.
Now, let’s reverse the reasoning. That is: if someone is fascist, then s/he is also racist. From a logical perspective, this is much more plausible. However, I’m going to show you that this is wrong as well. Remember that I’ll stick to Italian Fascism. To me, Italian Fascism and Fascism are the same thing, as I’ve already said, but I’m going to repeat it throughout the article just to be as clear as possible.
The aim of this article is not to prove that all beliefs about Fascism are wrong, the aim is to bring evidence to the argument according to which we cannot confidently say what Fascism is. There were multiple Fascisms. However, all of them were embodied by Mussolini and, to a larger extent, by the Italians who lived during Mussolini’s dictatorship.
What about Mussolini’s thoughts on race, then? It might be uncomfortable, but in his first years as President of the Council of Ministers, Mussolini was not racist at all. A fascist’s racism is usually assumed to be directed at Jews. However, in 1920 Mussolini published an article in “Il Popolo d’Italia”, the newspaper he edited, in which he wrote that:
In Italy, there is absolutely no difference between Jews and non-Jews; in all fields, from religion, to politics, to weaponry, to economics… the Italian Jews have the new Zion here, in our adorable land.
It might be argued that this was Mussolini before he took power, and that everything changed afterwards. Yet this is not entirely correct. In 1929, the Kingdom of Italy and the Holy See reached an agreement to settle the “Roman Question”, and Mussolini declared Catholicism to be the official religion in Italy. When he delivered a speech to the Chamber of Deputies to explain the content of the “Lateran Pacts”, he said that:
It is ridiculous to think, as somebody said, that the synagogues should be closed. Jews have been in Rome since the time of the kings. […] They were 50,000 in the days of Augustus and they asked to cry on Julius Caesar’s body. They will remain undisturbed.
Then, from 1938 to 1943, Mussolini promulgated the “Racial Laws” to enforce racial discrimination in Italy. They were directed mainly against the Italian Jews. Historians commonly acknowledge that the laws were due to Hitler’s influence on the Italian dictator, but this is not that important for the purpose of this article. What is important here is the fact that Mussolini started by barring Jews from the public administration and eventually sent them to concentration camps.
What is the right view of a fascist towards race, then? How can we confidently say that the “true” fascist discriminates against ethnic minorities? At the beginning of his dictatorship, Mussolini didn’t do that at all. If you want to read more about this topic, take a look at “Hitler and Mussolini, a Love-Hate Relationship”.
The signatories of the “Lateran Pacts”
Mussolini’s Ideas on Nation
From 1922 to 1943, Mussolini’s ideas on the Italian nation were pretty much the ones we would expect. Throughout this period, Mussolini pursued the myth of the Roman Empire; he believed that Italy should have an empire. This is the reason why the Second Italo-Ethiopian War was launched in 1935. This war of aggression resulted in a massive political success, as Fascism reached its peak of consensus, but it was also a huge economic failure, as Ethiopia was far from being a rich country. When the League of Nations imposed economic sanctions on Italy because of the conquest of Ethiopia, Mussolini delivered a speech to the Chamber of Deputies in which he talked about Italy as the victim of international conspiracies.
In 1943, after his fall, Mussolini was rescued by German paratroopers from his imprisonment in the Gran Sasso massif. Brought to Munich, Mussolini met Hitler. Even though Mussolini considered his political experience over, the Führer forced him to go back to Italy to take control of the northern part of the peninsula. However, Mussolini was not really ruling anymore; the Italian Social Republic was no more than a German puppet state, and the Italian dictator was well aware of it.
Hence, what are a fascist’s ideas on the nation? Are they similar to Mussolini’s original ones? If so, can we confidently say that, when Mussolini agreed to head a German puppet state, he wasn’t a fascist anymore?
Mussolini, Hitler and King Vittorio Emanuele III in Rome (1938)
Conclusion
What I have been trying to explain is that Fascism was just the story of a man and the people who decided to follow him. It is impossible to say with certainty what the ideas of a fascist are. I have analyzed race and nation, but the same twists can be found in Mussolini’s attitudes towards political rivals, urbanization, and so on. It might sound weird, it might be uncomfortable, but from 1922 to 1943 the majority of Italians simply decided to follow the moods of a single person. When you say that someone is fascist, then, what do you mean?
|
https://historyofyesterday.com/why-nobody-can-be-fascist-nowadays-53a6d751d607
|
['Michele Caimmi']
|
2020-07-20 06:46:01.183000+00:00
|
['History', 'Italy', 'Politics', 'Fascism', 'Mussolini']
|
Innocent until proven guilty: An argument for the unconstitutionality of the U.S. Bail System
|
Innocent until proven guilty: An argument for the unconstitutionality of the U.S. Bail System
Abstract
The current money bail system contributes to the increasing proportion of jail inmates who have not been convicted of any crime: 50% of jail inmates in 2000 and 62% in 2013, according to the American Bar Association. Traditionally, the system was developed to ensure that the defendant appears in court and to prevent new crimes. In practice, however, the system preys on the poor, who lack the funds to bond out of jail. Poor and impoverished communities are disproportionately affected by the current U.S. bail system and consequently imprisoned for extended periods of time without being convicted of a crime, thus infringing on the constitutional presumption of innocence affirmed by the Supreme Court in the 1895 case Coffin v. United States.
A key part of my research is delving into the Comprehensive Crime Control Act of 1984. It contains some of the most significant changes in the federal criminal justice system ever enacted at one time. The act overhauled the bail provisions, the sentencing system, the insanity defense, and the forfeiture laws, and created a number of new substantive offenses. The questions I plan to answer in this project are: why do we have a money bail system if we’re presumed innocent until proven guilty, how can the money bail system be amended so that it does not disproportionately affect poor people, and who is the average American affected by the money bail system and what current systems are in place to help them? I used resources and scholarly essays from Memphis Law School to gather research and information about the subject. I also conducted interviews with the communities most affected, locally and nationally. Lastly, I have drafted a legislative proposal amending the current system to directly address the issue. Keywords: Money Bail, Bail Reform, Innocent until proven guilty, US Bail System
Introduction
According to 2016 data from the U.S. Department of Justice, almost two-thirds of the 730,000 people incarcerated in U.S. jails on any given day have not been convicted of a crime (Zeng, 2018). This is because most simply can’t afford to pay their money bail, even when it is set at relatively modest amounts. Others cannot afford it because the bail amount is set intentionally beyond reach so that they remain in jail awaiting trial (Schnacke, 2017). When I read these statistics and the circumstances surrounding them, only one thing comes to mind: “What happened to innocent until proven guilty?” This is why reform of the U.S. bail system is needed.
I plan to prove that the current practice we have as a country is unconstitutional and that reform needs to be implemented. I am a graduate student at Hodges University and a native of Memphis, Tennessee. Growing up in Memphis has made me passionate about bail reform, because over 70% of Shelby County’s current jail population consists of people who have not been convicted of a crime; they are simply poor and cannot afford the bail amount. The main questions I plan to address are: is the current money bail system in the country unconstitutional, and what reforms can be put in place to address the issue?
Factual Background
In 2016, California Supreme Court Chief Justice Tani Cantil-Sakauye proclaimed “We must not penalize the poor for being poor” in reference to reforming the US bail system (Bureau of Justice Statistics (BJS), 2014). In 2013, 62 percent of county jail inmates were non-convicted individuals awaiting trial, the majority of whom were there because they could not pay their bail. Black and Latino people make up 50 percent of jail populations but only 29 percent of the U.S. population (Prison Policy Initiative, 2014). The bail system used in the United States actually originated from 1,000-year-old English roots. The Anglo-Saxon legal process was created to provide an alternative to blood feuds to avenge wrongs, which often led to wars.
As Anglo-Saxon law developed, wrongs once settled by feuds or by outlawry were settled through a system of “bots,” or payments designed to compensate grievances (Jackson, 2018). Essentially, crimes were considered private affairs, unlike our current system of prosecuting in the name of the state. Suits brought by persons against other persons typically sought remuneration as the criminal penalty. In a relatively small number of cases, persons who were considered to be a danger to society, persons caught in the act of a crime, or persons in the process of escaping were either mutilated or executed. All others were presumably considered to be “safe,” so the issue of a defendant’s potential danger to the community if released was not a primary concern. The Anglo-Saxons felt prisons were costly and troublesome, but they were concerned that the accused might flee to avoid paying the bot, or penalty, to the injured persons (Jackson, 2018).
Therefore, a system was created in which the defendant was required to find a surety, or person who would vouch for them, who would provide a pledge to guarantee both the appearance of the accused in court and payment of the bot upon conviction. The amount of that pledge was named “bail,” the ancestor of the modern money bail bond, and it was identical to the amount or substantive worth of the penalty. With this system in place, if an accused were to flee, the responsible surety would pay the entire amount to the private accuser, and the matter was done. The Anglo-Saxon bail process is referred to as “the last entirely rational application of bail” because the amount of the pledge was identical to the amount of the fine, making the process equitable. Upon conviction, the system accounted for the seriousness of the crime and fulfilled the debt owed if the accused did not appear for trial. All prisoners facing penalties payable by fine were bondable, and the bail bond was perfectly linked to the outcome of the trial, basically money for money (Jackson, 2018).
In the period following the Norman invasion, criminal justice gradually became an affair handled by the state. The criminal process could be initiated by the suspicions of a jury or the sworn statements of the aggrieved, similar to the US system used currently. Capital and other forms of corporal punishment replaced money fines for all but the least serious offenses, and the delays between accusation and trial became lengthier as traveling royal justices held court in each town (Schnacke,2010). Over time, public mutilations and executions were gradually phased out, while the overall use of corporal punishment began to increase, giving many offenders a greater incentive to flee. System delays also caused many persons to suffer in jails, and the un-checked discretion given to judges and magistrates to release defendants led to instances of corruption and abuse (Jackson, 2018).
Moreover, as the penalties changed, ideas about which persons should be bondable also shifted. The first to lose any right to bail whatsoever were persons accused of homicide. Due to the exposure of widespread abuse in the bail bond-setting process, Parliament passed the first Statute of Westminster, which assembled and codified 51 existing laws, many of which originated from the Magna Carta and covered bail. The Statute replaced the traditional Anglo-Saxon customs by establishing three criteria to govern one’s bail options. First, the nature of the offense: the laws categorized offenses that were and were not bailable. Second, the probability of conviction: the sheriff was required to examine all of the evidence and measure the level of suspicion against the accused. Lastly, the criminal history of the accused, which was often referred to as the bad character of the accused (Prison Policy Initiative, 2014).
The English protection against pretrial detention over time evolved and began to comprise three separate but essential elements. The first was the determination of whether a given defendant had the right to release on bail, answered by the Petition of Right, by a long line of statutes which spelled out which cases must and must not be bailed by justices of the peace or what we now call sheriffs, and by the discretionary power of the judges of the king’s bench to bail any case not bailable by the lower judiciary. The second was the habeas corpus procedure which was developed to convert into reality rights derived from legislation that could otherwise be thwarted. Third was the protection against judicial abuse provided by the excessive bail clause of the Bill of Rights of 1689 (Schnacke,2010).
Even before some of England’s later reforms, in 1641 Massachusetts passed its Body of Liberties, creating an unequivocal right to bail for non-capital cases and re-writing the list of capital cases. In 1682, Pennsylvania adopted an even more liberal provision in its new constitution, providing that all prisoners shall be bailable by sufficient sureties, unless for capital offenses, where proof is evident or the presumption great (Schnacke, 2010). The Pennsylvania law was quickly copied, and as the country grew the Pennsylvania provision became the model for almost every state constitution adopted after 1776. This is important considering that the United States Constitution itself only explicitly covers the right of habeas corpus in Article 1, Section 9. The Constitution also states the prohibition against “excessive bail” in the Eighth Amendment, which has been traced back to the 1776 Virginia Declaration of Rights. But there is no explicit right to bail in the U.S. Constitution, and the Constitution does not define which crimes are bailable, nor which defendants can be detained (Schnacke, 2010).
As American law governing release on bail bonds was being established, cultural differences between the colonies and England also led to changes in the administration of bail. For less serious crimes, the Anglo-Saxon system provided for pretrial release. Under the Anglo-Saxon system of pretrial release, the sheriffs relied on a surety, or some third-party custodian who was usually a friend, neighbor, or family member, to agree to stand in for the accused if they fled. Over time, arbitrary money bail bond amounts, coupled with a growing number of defendants who were unable to pay them either by themselves or with the help of friends or relatives, combined to create a profession unique to the field of American criminal justice.
Legal Background
The commercial money bail bond industry was born out of this dilemma. Taylor v. Taintor, an 1872 U.S. Supreme Court case, is commonly cited as the authority for bail bondsmen to act as bounty hunters. In 1866, sureties made an $8,000 cash bond for Edward McGuire in Connecticut after he was charged with grand larceny. While awaiting trial in Connecticut, McGuire returned to his home in New York. Unknown to the bondsmen in Connecticut, McGuire was wanted in Maine for another felony. Upon request from the Governor of Maine later in 1866, the Governor of New York extradited McGuire to Maine, where he was convicted of burglary in 1867 and imprisoned for fifteen years. When McGuire failed to appear for trial in Connecticut in October 1866, the cash bond was forfeited. The Connecticut bondsmen sought relief from the forfeiture on grounds that they were not at fault in failing to secure McGuire’s appearance, but rather that his nonappearance was the result of his extradition to Maine, an intervening “act of law” under the Extradition Clause of the U.S. Constitution. The Supreme Court held, by a vote of 4 to 3, that the sureties were at fault and were not protected by the Extradition Clause. In short, the sureties’ neglect in failing to keep up with McGuire and to inform the New York authorities of the pending Connecticut case caused McGuire’s nonappearance. This case is commonly referred to as the landmark case that confirmed bail bondsmen as bounty hunters (Lund, 2000).
Following this, the bail system evolved over time. In the 1960s there was a call for bail reform, and in 1966 the Federal Bail Reform Act was written and passed. This act was the first major reform of the federal bail system since the Judiciary Act of 1789, and it contained numerous provisions (Jackson, 2018). First, a presumption in favor of releasing non-capital defendants on their own recognizance. Second, conditional pretrial release, with conditions imposed to reduce the risk of failure to appear. Also, restrictions on money bail bonds, which the court could impose only if non-financial release options were not enough to assure a defendant’s appearance. Moreover, a deposit money bail bond option allowed defendants to post a 10% deposit of the money bail bond amount with the court in lieu of the full monetary amount of a surety bond. Lastly, a review of bail bonds for defendants detained for 24 hours or more. The act stated that non-capital defendants were to be released pending trial on their personal recognizance or on “personal bonds” unless the judicial officer determined that these incentives would not adequately assure their appearance at trial.
Following this reform, the bail system was once again reformed in the 1970s and 1980s. Throughout this historical timeline of how the bail bond system was established, it’s quite easy to see how the bail system began leaning towards corruption and penalizing the impoverished. The amount of bail for the modern-day bail system depends on the severity of the crime but is also at the judge’s discretion. Some jurisdictions have bail schedules that recommend a standard bail amount. For example, in Los Angeles, the bail schedule recommends $25,000 for perjury or sexual battery, $100,000 for voluntary manslaughter, and $1,000,000 for kidnapping with intent to rape (Silverman, 2007). In determining bail, a judge may take into account this amount but will also consider the defendant’s criminal record, his or her history of showing up for past court appearances, ties to the community, whether the suspect is a danger to others and any other concerns that may be raised by the defendant’s attorney. In some cases, bail may be waived altogether.
The current money bail system has no relation to innocence. Over 70 percent of people currently in jail have not been convicted of a crime. Bail reinforces the racial disparities of the American criminal justice system. African-American and Hispanic people are more likely to be arrested, issued bail, and less likely to be able to afford it. Constitutionally speaking this entire practice is illegal. A bedrock principle of the American criminal justice system is that a defendant accused of a crime is presumed innocent until proven guilty beyond a reasonable doubt. This protection comes from the due process guarantees in the Fifth and Fourteenth Amendments of the U.S. Constitution. It exists to guard against convictions based on factual error. Citizens are considered innocent until proven guilty, and to take away their liberties and incarcerate them until they pay bail is criminal. Over 30 years ago, the Supreme Court explained in United States v. Salerno, “In our society, liberty is the norm, and detention prior to or without trial is the carefully limited exception.” This is not being practiced in our criminal justice system today (May, 2018).
Constitutional Interpretation
There are several constitutional rights that can be used to prove citizens’ civil liberties are being violated by the current bail system. First, the presumption of innocence is the legal principle that one is considered innocent until proven guilty. In many countries, the presumption of innocence is a legal right of the accused in a criminal trial, and it is an international human right under Article 11 of the UN’s Universal Declaration of Human Rights. The Fifth Amendment includes a due process clause stating that no person shall “be deprived of life, liberty, or property, without due process of law.” The Fifth Amendment’s due process clause applies to the federal government, while the Fourteenth Amendment’s due process clause applies to state governments. The Supreme Court has interpreted the Fifth Amendment’s Due Process Clause as providing two main protections: procedural due process, which requires government officials to follow fair procedures before depriving a person of life, liberty, or property, and substantive due process, which protects certain fundamental rights from government interference (Harper, 2007).
Procedural due process is a legal doctrine in the United States that requires government officials to follow fair procedures before depriving a person of life, liberty, or property. When the government seeks to deprive a person of one of those interests, procedural due process requires, at a minimum, that the government afford the person notice, an opportunity to be heard, and a decision made by a neutral decision maker. This doctrine applies to the pretrial detention process when bail is being set. If judges were applying it correctly, bail amounts wouldn’t be so egregious (Strauss, 2013).
The Supreme Court has held that the Excessive Fines Clause prohibits fines that are “so grossly excessive as to amount to a deprivation of property without due process of law” (Strauss, 2013). The Court struck down a fine as excessive for the first time in United States v. Bajakajian in 1998. Under the Excessive Bail Clause of the Eighth Amendment, the Supreme Court has held that the federal government cannot set bail at “a figure higher than is reasonably calculated” to ensure the defendant’s appearance at trial (Strauss, 2013). The Supreme Court has also ruled that the Cruel and Unusual Punishment Clause applies to the states as well as to the federal government, but the Excessive Bail Clause had not been applied to the states. On February 20, 2019, the Supreme Court ruled unanimously in Timbs v. Indiana that the Excessive Fines Clause also applies to the states. The pre-trial processes put into place are also aimed at preventing this, yet the goal of the Eighth Amendment has continually been ignored and improperly applied. Due to the continued neglect and misapplication of the intent of the Excessive Fines Clause and the Fifth Amendment’s Due Process Clause, specific legislation is needed.
Comprehensive Crime Control Act
The Comprehensive Crime Control Act is recognized as one of the largest and most significant reforms of the U.S. criminal justice system. The act contains 23 chapters, but it is the first 12 chapters that are most important. The most notable provisions of the Comprehensive Crime Control Act include sections on bail conditions, the insanity defense, sentencing, victims of crime, justice assistance, and drugs and narcotics. The act authorized courts to consider dangerousness when setting bail conditions, and also allowed courts to establish pretrial detention if necessary. If there is reason to believe a person will not return to court as required or that releasing someone on bail would put an individual or community at risk, certain bail conditions and pretrial detention are considered acceptable (Panter, 1985).
The Comprehensive Crime Control Act set the precedent for the gross negligence and abuse of the U.S. bail system. The act reinforced bias and, on occasion, racist or discriminatory treatment of poor Black and brown people. Being jailed pretrial often leads to people losing their jobs, being unable to care for their children, and an overall disruption of their lives. Being held pretrial has been found to make both conviction and incarceration more likely. People held in pretrial detention are more likely to plead guilty simply to put an end to their case, in the hope of returning home, and judges are statistically more likely to sentence someone to jail once they have been held in jail pretrial (Bailey, 2018). This means that people held on money bail are more likely to be convicted and sentenced because they are poor, even though poverty itself is not illegal. Statistics show that 68% of pretrial detainees have been charged only with drug, property, or public order crimes, none of which are dangerous (Bailey, 2018).
The United States spends about $38 million a day to detain people pretrial, nearly $14 billion a year. This cost is paid by taxpayers, and it could be redirected into education, housing, and economic development. Locally, the practice of using bail to determine pretrial release continues to drive the growth in local jail populations throughout Tennessee. The cost to local taxpayers to house pretrial detainees is approximately $35 per day, and that does not factor in the additional cost that local taxpayers absorb to build more jails. In 2017, the average number of pretrial detainees housed in local jails totaled 14,664, representing a total cost of nearly $60 million a year. Currently, over half of Tennessee’s total jail population is composed of pretrial detainees, up from 32 percent in 1997 (Warren, 2018). The U.S. bail system has become a mechanism by which a person of means can purchase their freedom. A wealthy person charged with a violent crime can be released because they can afford bail, while a poor person will oftentimes remain in jail awaiting the disposition of a misdemeanor possession charge.
This practice is a violation of citizens’ right to be presumed innocent until proven guilty, as recognized in the landmark case Coffin v. United States. The Court held in that case that the presumption of innocence is a self-evident rule of the Anglo-American criminal justice system. The Court went on to hold that judges in all jurisdictions of the United States are required to give juries an instruction on the presumption of innocence when requested, and sometimes even when not requested. While the presumption of innocence is not expressly written into the Constitution, it is expressed through the Fifth Amendment. If citizens are detained for months or years, as they often are in US jails, while awaiting trial for a crime they have not been convicted of because they cannot pay bail, they have been deprived of life and liberty without due process of law. Due process of law requires government officials to follow fair procedures before depriving a person of life, liberty, or property. It is clear from the above statistics that unconstitutional practices are occurring (Strauss, 2013).
Counter Arguments
The pushback against bail reform only brings up dated fears and bigotry against the poor. Some of the backlash may stem from a perception that pre-trial detention is part of the punishment for crime, so defendants who are released are seen as getting away with something. Punishment before trial is fundamentally at odds with a legal system in which defendants are presumed innocent until proven guilty. There is no existing data to prove that pre-trial detention aids public safety or deters crime. In fact, numerous research studies say quite the opposite. The Bronx Defenders, through its affiliate the Bronx Freedom Fund, bailed out hundreds of people between 2007 and 2009. According to Robin Steinberg, executive director of The Bronx Defenders, “During the 18 months that the Bronx Freedom Fund operated, 150 clients were bailed out and the return rate of our clients was an impressive 95 percent. Additionally, not one client who received Freedom Fund assistance, and was bailed out, was sentenced to jail on their case and almost 50 percent of the cases were dismissed entirely” (Pringle, 2012).
The high return rate fundamentally undermines the supposed purpose of bail, which is to incentivize defendants to show up to their court dates because they have a financial stake in their appearance. Furthermore, the fact that half the charges were dropped indicates that prosecutors were overcharging the defendants and hoping their inability to post bail would induce guilty pleas. Numerous studies conducted in other cities have addressed the poverty aspect of bail; each pilot program had similar results, with over 90 percent of the people bailed out of jail returning for their court date. Therefore, the public safety argument is null and void (Pringle, 2012).
Solutions
Bail reforms throughout the country have been successful. New Jersey was one of the first states to acknowledge that the use of money bail discriminated against the poor and then took action to overhaul its money bail system. A study released in 2013 by the Drug Policy Alliance revealed New Jersey’s cash bail system was clearly discriminating against the poor. “It was all based upon their economic circumstances,” said Judge Glenn A. Grant, Acting Administrative Director of New Jersey’s courts. “And that’s not what this country stands for.” These alarming statistics were the driving force for New Jersey to launch Criminal Justice Reform. In January 2017, the state essentially eliminated cash bail despite concerns from segments of the community who feared a public safety crisis. Nearly three years later, the first studies on the reform are emerging, and all signs are pointing toward success (Dabruzzo, 2019).
California completely ended money bail as of 2018. Governor Jerry Brown signed legislation in August of 2018 that abolishes cash bail in the state. The legislation eliminates bail schedules, which tie bail amounts to the charged offense and, as currently utilized in California, were likely unconstitutional. The legislation also gets rid of the bail bonds industry, which is known for its corrupt and predatory practices, and replaces a money-based pretrial release system with a risk-based system. The impact of this high rate of pretrial detention is huge throughout the country, not just in California. Not only are defendants deprived of their liberty and subject to the pains of confinement in often dangerous jails before being convicted of any offense, but they are more likely to plead guilty, more likely to be convicted, and more likely to be sentenced to jail or prison. All of these consequences fall more heavily on the poor and on racial and ethnic minorities.
New Jersey has also taken the plunge and removed money bail from their criminal justice system. Judges, prosecutors, public defenders, private counsel, and court administrators sat with representatives from the Legislature and governor’s office to find a solution. Both sides of the aisle and both sides of the courtroom came together focused on fixing what was broken with the state’s criminal justice system. Joseph E. Krakora, a New Jersey Public Defender stated, “There really has never been any logic to a system that determines your pretrial status based on your access to resources, your wealth” (Dabruzzo, 2019).
In 2014, the previously mentioned committee made a unanimous recommendation to completely overhaul the system, shifting it from one that relied heavily on monetary bail to one that takes money out of the equation and is “risk-based”, where defendants are held or released based on their risk of failing to show up for court or incurring new criminal charges. The courts’ default, the committee suggested, should be to keep people out of jail. The committee also recommended the use of a risk-assessment tool to help make risk determinations, a pretrial services program to monitor those who are released, speedy-trial laws to ensure that criminal cases are brought to trial more promptly, and a constitutional amendment that would allow “preventive detention”. Preventative detention would give judges the ability to detain a person without the option of bail if there’s a risk to public safety (Dabruzzo, 2019).
A study on bail reform in New York found that the state’s lower reliance on bail hasn’t led to defendants failing to show up in court. The city’s return-to-court rate is 86 percent, compared to about 75 percent nationally. Unlike other states that passed legislative reforms, New York’s results are the product of a culture change among judges and other decision-makers, not any change in statutes or court rules. New York judges are now more aware of their full toolbox of alternatives to bail, including a “supervised release” program championed by Mayor Bill de Blasio that lets them release defendants under monitoring (Branigin, 2019). “We have also begun to understand how the repeat offenders in criminal court often have mental-health and housing issues that are only exacerbated by setting bail and sending them to jail,” said George A. Grasso, supervising judge of the Bronx Criminal Court and of all arraignments in New York City. New York City prosecutors, in most parts of the city at least, have somewhat reduced how much bail they ask for in low-level cases. And a strong cohort of public defenders in the Bronx and Brooklyn have fought excessive pretrial detention, while making sure to bring community voices into the debate (Branigin, 2019). The bail reform methods in California, New Jersey, and New York have proven not only successful but adaptable to other states.
Legislative proposal
Our current U.S. bail system punishes those experiencing poverty and needs to be reformed. Multiple studies have shown that bail is not the determining factor of whether someone will return for a court date. The high return rate of those released with no bail amount fundamentally undermines the supposed purpose of bail, which is to incentivize defendants to show up to their court dates because they have a financial stake in their appearance. There is also no existing data to prove that pre-trial detention aids public safety or deters crime. In fact, numerous research studies say quite the opposite. In New York, for example, the Bronx Defenders, through its affiliate the Bronx Freedom Fund, bailed out hundreds of people between 2007 and 2009, and the return rate of the clients was 95 percent.
SECTION 1. It is the intent of the Legislature by enacting this measure to permit preventive detention of pretrial defendants only in a manner that is consistent with the United States Constitution, as interpreted by the United States Supreme Court, and only to the extent permitted by the Tennessee Constitution as interpreted by the Tennessee courts of review.
TCA code 40–11–118, “Execution and deposit; Bail set no higher than necessary; Factors considered; Bonds and sureties,” addresses these matters. Existing Tennessee law provides for the procedure of approving and accepting bail, and for issuing an order for the appearance and release of an arrested person.
As currently written, in determining the amount of bail necessary to reasonably assure the appearance of the defendant while at the same time protecting the safety of the public, the magistrate shall consider the following:
(1) The defendant’s length of residence in the community;
(2) The defendant’s employment status and history and financial condition;
(3) The defendant’s family ties and relationships;
(4) The defendant’s reputation, character and mental condition;
(5) The defendant’s prior criminal record, record of appearance at court proceedings, record of flight to avoid prosecution or failure to appear at court proceedings;
(6) The nature of the offense and the apparent probability of conviction and the likely sentence;
(7) The defendant’s prior criminal record and the likelihood that because of that record the defendant will pose a risk of danger to the community;
(8) The identity of responsible members of the community who will vouch for the defendant’s reliability; however, no member of the community may vouch for more than two (2) defendants at any time while charges are still pending or a forfeiture is outstanding; and
(9) Any other factors indicating the defendant’s ties to the community or bearing on the risk of the defendant’s willful failure to appear.
It is clear from the research presented that these factors are not all being considered when setting bail. Therefore, I move to amend the current bail regulations to include the following:
a. Persons arrested and detained are required to be subject to a pretrial risk assessment conducted by Pretrial Assessment Services, which is defined as an entity, division, or program that is assigned the responsibility to assess the risk level of persons charged with the commission of a crime, report the results of the risk determination to the court, and make recommendations for conditions of release of individuals pending adjudication of their criminal case. This bill would require the courts to establish pretrial assessment services, and would authorize the services to be performed by court employees or through a contract with a local public agency, as specified. The bill would also require, if no local agency will agree to perform the pretrial assessments, and if the court elects not to perform the assessments, that the court may contract with a new local pretrial assessment services agency established specifically to perform the role.
b. All persons arrested or detained for a misdemeanor, except as specified, are required to be booked and released without being required to submit to a risk assessment by Pretrial Assessment Services. Pretrial Assessment Services is authorized to release a person assessed as being a low risk on his or her own recognizance, as specified. Additionally, a superior court is required to adopt a rule authorizing Pretrial Assessment Services to release persons assessed as being a medium risk, as defined, on their own recognizance. If a person is not released, the court is authorized to conduct a pre-arraignment review and release the person. The court is allowed to detain a person pending arraignment if there is a substantial likelihood that no condition or combination of conditions of pretrial supervision will reasonably assure public safety or the appearance of the person in court.
c. In cases in which the defendant is detained in custody, the bill would require a preventive detention hearing to be held no later than 3 court days after the motion for preventive detention is filed. The bill would grant the defendant the right to be represented by counsel at the preventive detention hearing and would require the court to appoint counsel if the defendant is financially unable to obtain representation. By imposing additional duties on county public defenders, this bill would impose a state-mandated local program. The bill would require the prosecutor to give the victim notice of the preventive detention hearing. By imposing new duties on local prosecutors, this bill would impose a state-mandated local program.
d. The bill would create a rebuttable presumption that no condition of pretrial supervision will reasonably assure public safety if, among other things, the crime was a violent felony or the defendant was convicted of a violent felony within the past 5 years. The bill would allow the court to order preventive detention of the defendant pending trial if the court determines by clear and convincing evidence that no condition or combination of conditions of pretrial supervision will reasonably assure public safety or the appearance of the defendant in court. If the court determines there is not a sufficient basis for detaining the defendant, the bill would require the court to release the defendant on his or her own recognizance or supervised own recognizance and impose the least restrictive non-monetary conditions of pretrial release to reasonably assure public safety and the appearance of the defendant.
The bill would require the Tennessee Supreme Court to adopt Rules of Court and forms to implement these provisions as specified, and to identify specified data to be reported by each court. The bill would require the Tennessee Supreme Court, on or before January 1, 2021, and every other year thereafter, to submit a report to the Governor and the Legislature. The bill would provide that, upon appropriation by the Legislature, the Tennessee Supreme Court would allocate funds to local courts for pretrial assessment services and the Department of Finance would allocate funds to local probation departments for pretrial supervision services, as specified.
Conclusion
In the words of Angela Davis, “Prisons do not disappear social problems, they disappear human beings. Homelessness, unemployment, drug addiction, mental illness, and illiteracy are only a few of the problems that disappear from public view when the human beings contending with them are relegated to cages.” As a society, we must take responsibility for the attack on impoverished communities, and the first step is acknowledging there is a problem.
Our current United States bail system is flawed; somehow, pre-trial detention has become poverty detention. The system was developed to ensure that the defendant appears in court and to prevent new crimes. But in turn, the system is used to prey on the poor who don’t have funds to bond out of jail. My research has not only shown that this system is unconstitutional, but I have also provided solutions to the problem. Research has proven that states can end money bail and public safety will not be affected. States such as California, New York, and New Jersey have passed legislation and conducted internal training to end cash bail and bring the focus back to the original intent of bail.
My research has shown that the current practice we have as a country is unconstitutional and that reform needs to be implemented. Now it is time we cease infringing upon individuals’ constitutional rights and end money bail.
References
Administrator, & Csdesignworks. (2019). Pretrial release without money: New York City, 1987–2018. Retrieved June 27, 2020, from https://issuu.com/csdesignworks/docs/cja_rwm_final/2
Branigin. (2019, March 11). Prosecutors around the Country Petition Governor Andrew Cuomo, state lawmakers to end Cash bail in New York. Retrieved June 27, 2020, from https://www.theroot.com/prosecutors-around-the-country-petition-governor-andrew-1833158650
Carbone, supra note 1, at 531 (quoting 5 American Charters 3061, F. Thorpe ed. 1909) (footnotes omitted).
The comprehensive crime control act — criminal justice — iresearchnet. (2015, April 20). Retrieved June 27, 2020, from http://criminal-justice.iresearchnet.com/crime/school-violence/the-comprehensive-crime-control-act/
Dabruzzo, D. (2019, November 14). New Jersey set out to reform its cash bail system. Now, the results are in. Retrieved June 27, 2020, from http://www.arnoldventures.org/stories/new-jersey-set-out-to-reform-its-cash-bail-system-now-the-results-are-in/
Dai. (2020, January 27). Arguments against New York’s Cash Bail reforms are dated. Retrieved June 27, 2020, from https://nyunews.com/opinion/2020/01/27/nyc-bail-reforms-progress/
Harper, T. (2007). Fifth Amendment. In The complete idiot’s guide to the U.S. Constitution (pp. 109–120). New York: Alpha.
J. (2018). Money over safety in Tennessee. Retrieved June 27, 2020, from http://www.beacontn.org/money-over-safety-in-tennessee/
Jackson, A. (2018, April 04). “Bail System History.” Retrieved June 27, 2020, from http://www.burnsinstitute.org/blog/the-evolution-of-money-bail-throughout-history/.
Jail inmates at midyear, 2014. (2014). Retrieved June 27, 2020, from http://www.bjs.gov/index.cfm?ty=pbdetail&iid=5299.
Lund v. Seneca County Sheriff’s Department, 230 F.3d 196, 198 (6th Cir. 2000)
May, K. (2018, August 31). How the bail system in the US became such a mess — and how it can be fixed. Retrieved June 27, 2020, from https://ideas.ted.com/how-the-bail-system-in-the-us-became-such-a-mess-and-how-it-can-be-fixed/
Panter, D. (1985). The changes accomplished by the labor racketeering amendments of the Comprehensive Crime Control Act of 1984. Labor Law Journal, 36(10), 744–761.
Pringle, J. (2012, August 13). City limits: Bail fund aims to free poor defendants. Retrieved June 27, 2020, from https://www.bronxdefenders.org/bxd-in-the-news-bail-fund-aims-to-free-poor-defendants-city-limits-the-brooklyn-bureau/
Raifeartaigh, U. (1997). Reconciling Bail Law with the Presumption of Innocence. Oxford Journal of Legal Studies,17(1), 1–21. Retrieved June 23, 2020, from www.jstor.org/stable/764681
Roos, D. (2018, February 15). Cash bail punishes poor, but what’s the alternative? Retrieved June 27, 2020, from https://money.howstuffworks.com/cash-bail-punishes-poor-but-whats-alternative.htm
Schnacke, Timothy, et al. “The History of Bail and Pre-Trial Release.” Pretrial Justice Institute, 2010, www.ncsc.org/__data/assets/pdf_file/0017/1628/schnacke-et-al-2010-history-of-bail-release.ashx.pdf.
Stanford Law Review. (2019, April 12). Punishing poverty. Retrieved June 27, 2020, from https://www.stanfordlawreview.org/online/punishing-poverty/
Strauss, P. Due process. (2013). Retrieved June 27, 2020, from https://www.law.cornell.edu/wex/due_process
Thurmond. (1984, September 25). S.1762–98th Congress (1983–1984): Comprehensive Crime Control Act of 1984. Retrieved June 27, 2020, from http://www.congress.gov/bill/98th-congress/senate-bill/1762
What is the presumption of innocence? (2018, April 20). Retrieved June 27, 2020, from http://www.bradbaileylaw.com/legal-blog/2018/april/what-is-the-presumption-of-innocence-/
|
https://medium.com/@thelawaccordingtoamber/innocent-until-proven-guilty-an-argument-for-the-unconstitutionality-of-the-u-s-bail-system-2a0c38966b15
|
['Amber Sherman']
|
2021-09-13 23:54:00.384000+00:00
|
['Criminal Justice', 'Thesis Writing', 'End Money Bail', 'Legal', 'Legislation']
|
The Ancient Practice of Beekeeping, and its Relevance for the Future
|
“The future is not in one beekeeper with 60,000 hives, but rather 60,000 people with one hive”
Sun creatures. Messengers of love. People get bewitched by beekeeping, they report. Beekeepers are chosen by bees, one explains.
Honeybees are at least 50 million years old. The relationship between Homo sapiens and Apis mellifera began around 10,000 years ago, when humans were honey hunters. There are images in Paleolithic rock paintings of tribespeople climbing tall trees for the golden liquid in the beehives above. In Tanzania, honey hunters even work with honeyguide birds to score caches of hidden honey in the desert. Over time, the relationship evolved from hunting honey to working with the bees themselves — ‘keeping bees’.
A 3,000-year-old jar of honey was found in King Tut’s tomb, and it was still edible! Honey was considered so sacred, a panacea for health problems, that it wasn’t even sold. Honey was a gift from bees, a substance that came 50 million years from the past. It was food and medicine, but it was also, in a very real way, a gift from the Gods, a gift from the mystery centers.
The honeybee was considered a sacred animal. The sacredness came out of the knowledge that the bee is one of the greater nurturers of life and fertility. Bees and honey are present in a remarkable number of creation myths around the world, in the cosmologies and sacred places of many diverse ancient cultures.
It’s not hard to see why.
Honeybees have an amazing array of capabilities. A beehive as an institution is a matriarchal society, led by one queen bee who can lay up to 1,500 eggs a day. She is aided by a robust team of nurses, workers, foragers, and honeycomb builders. The hive works together for the good of the whole.
The essential and invisible service that bees provide as pollinators of plants has largely been taken for granted. This is a departure from the past, when bees were revered. If we didn’t have bees, we’d mostly have to eat grains and a few nuts. This is because bees pollinate almost 80% of flowering plants and a third of the food we eat. Four out of every 10 bites of food come from the pollination services of a honey bee! We rely on them: the USDA estimates that honeybees do $11-$15 billion of work in pollination for American farmers each year.
Honeybees are pollinators; messengers of love. Plants are stationary sun creatures, so they recruit creatures with wings, like honeybees, to assist them in mating.
Beauty and seduction are the flowers’ tools for reproduction and survival. Flowers developed enticing nectars, scents, and colors to entice pollinators to them. When a male flower loves a female flower, he makes a little pollen, which the winged bee is recruited to deliver to the female flower. The female produces the seed, which becomes a fruit.
Their ménage à trois is the love story that feeds us all.
Bees forage for the pollen and nectar, pollinating along the way, and then return to their hive. They use the nectar to create honey and beeswax, while pollen provides healthy fats and proteins for the brood, rounding out their otherwise carbohydrate based diet.
Once back at the hive, the nectar is passed from bee to bee. An enzyme in the bee’s stomach turns the sugar into a diluted kind of honey. That’s stored in comb cells where worker bees fan it with their wings to evaporate excess water. The honey is then ready to be stored for food during the winter when flowers are not in bloom.
Beeswax is also made, by specialized bees, from honey produced from the collected flower nectar. The bees use the forces of the sun to turn materialized light into wax; we can then use that wax to make candles in the darkness of winter!
From sunlight to honey, bees are transformative little beings.
They also have incredible natural defenses that have kept them healthy and thriving for over 50 million years. So when honeybee colonies were reported to be dying in industrialized countries in droves, a phenomenon deemed Colony Collapse Disorder, it was clear something was very wrong.
For all bees, foraging on flowers is a tough life. They must leave the hive, travel large distances to collect pollen and nectar from flowers, and return to the hive. It is cognitively and energetically demanding, and in order to accomplish their feat they use their finely tuned spatial memory and senses. Anything that damages those abilities for learning and memory can make it very difficult to find food and safely return to the hive. This means that bee colonies are sensitive and vulnerable to “sublethal” stressors in their environment: factors that don’t outright kill them, but hamper and weaken their abilities.
Bees are dying from multiple and interacting causes, but scientists believe it boils down to the interconnected factors of monoculture farming and pesticide use. Experts have concluded that it’s the combination of pesticide exposure, lack of food, decreased nutrition and vitality, and weakened immune systems that are causing Colony Collapse Disorder.
Ultimately, bees dying reflect a flowerless landscape and dysfunctional food system.
Bees have been in decline in the industrialized world since World War II. The United States has half the number of managed hives compared to 1945, down to 2 million. After WWII the country changed farming practices drastically, and the chemicals from the war were made into the agricultural chemicals farmers now rely on. Clover and alfalfa, which are natural fertilizers, were no longer planted as cover crops as a rule of thumb, and instead we started using synthetic fertilizers and herbicides derived from those wartime chemicals. It seemed like a good idea (higher crop yields mean more people get fed), but the problem is that clover and alfalfa are highly nutritious for bees, and many of the weeds that the herbicide chemicals target are flowering plants that bees require for their survival.
Another massive change was the introduction of monocultures, which became the industry norm. We started sowing larger and larger single crops, with acres upon acres dominated by one crop like soy or corn, and the very farms that used to sustain bees became agricultural food deserts, providing no food for honeybees.
As leading Vermont apiarist Dennis vanEngelsdorp explains, it used to be a continuous paradise for honeybees in this country, from Pennsylvania through the West, but after farmers started planting monocultures, they had to start using pesticides. This became necessary because of the pests brought by the monocultures themselves.
“A monoculture from nature’s point of view is insane”, says journalist Michael Pollan, in the documentary Queen of the Sun.
Monocultures are an absolute distortion of the way a healthy ecosystem works. Biodiversity builds ecosystem health, so when there’s no natural barriers or plant diversity for acres and acres, as in a monoculture, it creates a feast for pests, which then creates a huge need for pesticides. Farmers use systemic pesticides to treat this problem, and they’re used heavily, so Americans have been consuming a cocktail of chemicals in our food for years.
Like a system of governance with no checks and balances, monocultures wipe out the strengthening aspects of diversity. We can try to add inorganic inputs like fertilizers and pesticides, but eventually the land gets too damaged, too distorted, diminishing Nature’s very capacity for regeneration.
Imagine a weakened bee, looking for food. The food she eats contains a neurotoxin that confuses her to the point that she can’t find her way home. This is what experts suspect is happening. Researchers from Penn State University have started looking at pesticide residue in loads of pollen, and they’ve found that every batch of pollen a honeybee collects has at least six detectable pesticides in it: insecticides, fungicides, herbicides.
The Environmental Protection Agency, the arm of our government responsible for protecting the environment, operates on a basis of risk assessment; they manage environmental risks to humans. In response to honeybee colony collapse, the EPA conducted a scientific study to determine the safety of spraying systemic pesticides by testing for lethal doses of pesticides in adult honeybees.
You can see the study here: toxicologists measure single-dose exposure of a test pesticide followed by an observation period of 48 hours. Data obtained from the single-dose study is used in estimating risk to individual larval bees based on a single exposure event.
The problem is, you don’t start seeing the effects of pesticides until several generations later. The observations of multi-generational beekeepers testify that pesticides affect the bees far beyond a 48-hour window. Not to mention, when it comes to designing a scientific experiment, sublethal stressors are extremely difficult to capture in a data set of such interrelating factors. Since we do not have the current science to measure the interconnected phenomenon, and since science is the last word driving EPA policy, harmful pesticides can stay legal for longer than is truly healthy for the environment.
With the lack of up-to-date science informing policy, beekeepers are reporting from the ground and passionately speaking out. David Hackenberg is the poster child for Colony Collapse Disorder. In 2016, Hackenberg sued the Environmental Protection Agency for “inadequate regulation of the neonicotinoid insecticide seed coatings used on dozens of crops.” Hackenberg was quoted as saying, “As a beekeeper for over 50 years, I have lost more colonies of honey bees in the last 10 years from the after-effects of neonic seed coatings than all others causes over the first 40 plus years of my beekeeping operation.”
As the old timers watch the bees disappear, they bid somber farewells not only to their livelihood; they also share their pain at watching a familiar standby of nature disappear. Honeybee vitality, or lack thereof, is a metaphor for the health of the whole planet. “This is bigger than us”, one beekeeper laments, tears in his eyes.
Honeybees are indicators of the health of an ecosystem. Taking care of bees is like taking care of ourselves; the sentiment is repeated everywhere. This is because what affects them will ultimately affect us. If pesticides accumulate inside honeybees in low, sublethal doses, they will inevitably come to do the same in other creatures.
Could this small bee be holding up a large mirror to humans?
The honey bees could be telling us something. But the answers to this conundrum might not be found within the same mindset, the same paradigm.
“We’ve become so used to using nature for our own means”, Gunther Hauk, lecturer and founder of Spikenard Farm Honeybee Sanctuary explains, “The honey bees are saying, if you continue on this path, we're withdrawing.”
The internal contradictions within the industrial agriculture system garner quick profits now, but could ultimately cause its own demise. It’s a bit of a Faustian deal with nature. The retreating bees are like the canary in the coal mine, a signal that they (and therefore we) are in distress. As Bill Maher once quipped, the honeybee’s disappearance is like Nature’s way of saying, “Can you hear me now?”
It’s enough to watch a disoriented bee on a flower fumble around before falling off to feel the question rising up from deep within: “Why are we doing this?”
The pesticides and insecticides are the same chemicals as the original chemical agents of war, meant to kill. They’re poisonous. And at some point, you have to ask if dominating Nature in such a way might ultimately prove, in the long run, to be unwise. If, instead, there is a way of creating fertility without diminishing our world.
The treatment of the queen bee could be a metaphor for the way mother earth is treated; the feminine aspect herself.
Bees are a matriarchal society, and as such they are seen as the feminine aspect of the divine. The honeybee has long ties with the divine feminine, from the priestesshoods of ancient Greece to practices of Celtic origin. The bee was revered as a symbol of generative power.
As Marguerite Rigoglioso, author and Ph.D, explains in the documentary Vanishing of the Bees,
“The bee clearly represents the female aspect of the divine, in the fact that the queen is the centerpiece of the bee culture. In antiquity, the queen and the bees were associated with the sacred feminine.”
This could be why so many women are stepping up into beekeeping roles as a compassionate response to their disappearance. It’s almost as if we know, deep in our collective consciousness, that something is wrong. “The bees are saying we need to open our consciousness again to the idea that there is both masculine aspect of deity and feminine aspect of deity, and the two have to come into balance again, honoring the body of the earth as mother” says Rigoglioso.
“We’re so used to using nature, but the answers are to inner problems,” explains Gunther Hauk, expressing a similar concept. “We won’t solve the bees’ disappearance by killing a virus or a mite or a fungus, because the problems are internal, requiring a mindset shift. It’s an inner problem, solved by an inner transformation in how we see nature.”
So what can we do to transform how we see nature and fix a dysfunctional food system? The great thing is, it’s an easy and rather poetic answer:
Plant flowers!
Flower power is perhaps indeed more powerful than just an expression — bees love flowers! On a global level, to preserve bees we have to improve the environments in which they forage for food. Flowering plants and pollinators have been in a coevolutionary dance for millions of years, and we can participate in this dance by creating habitats that nurture their symbiotic relationship. We can reintroduce space for the sacred. We can venerate their role, and be open to the honeybees’ sweet messages.
Avoiding a beepocalypse might require citizen vigilance. We must look out for the health of our air, soil, and water, and defend it as a patriotic duty. Big change can come from small acts. We can plant gardens with bee-friendly flowers, we can build apiaries and bee sanctuaries, and we can avoid pesticides. Every tiny action we take can make a difference.
In her passionate and informative TED talk Marla Spivak concludes: “Every one of us needs to behave a little bit more like a bee society. Each of our individual actions can contribute to a grand solution, an emergent property, as our acts become much greater than the mere sum of our individual actions.”
In the small act of planting flowers and keeping bees could be sown the seeds of large scale change. Plus, beekeeping is a dynamic, historic and exciting community to be a part of! As beekeeper Simon Buxton said, “The future of beekeeping is not in one beekeeper with 60,000 hives, but rather 60,000 people with one hive.”
There is much of value in life that eludes the calculus of money. The value of robust colonies of honeybees buzzing around the world, pollinating flowering plants, and building up the ecosystem, is priceless.
Want to Help Protect the Honeybee?
If you’re interested in learning about beekeeping hands-on, check out the many WWOOF farms in the US that offer educational opportunities by visiting WWOOFUSA.org
If you’re interested in getting involved politically, check out Bee City USA, an initiative of the Xerces Society, whose aim is “Making the World Safer for Pollinators, One City at a Time.”
|
https://medium.com/swlh/the-ancient-practice-of-beekeeping-and-its-relevance-for-the-future-d49432b119a3
|
['Jennifer Tarnacki']
|
2019-12-31 10:27:55.178000+00:00
|
['Ecology', 'Bees', 'Environmental Issues', 'Beekeeping', 'Honeybees']
|
Becoming a Digital Wealth Leader
|
Executive Summary
Now is the time for digital action — Although financial institutions have been talking about digitizing wealth management for several years, this has now become a strategic priority if they want to be able to show net revenue growth going forward. While the requirements of clients are relatively straightforward, digital solutions will change the way wealth managers meet these needs. This makes it critical for industry players to be able to prove their value-add via a level of automation they haven’t been able to achieve before.
Respond to evolving client needs — Given that the expectations of the next generation of clients are driven by their experiences with online retail offerings, wealth management providers need to deliver far greater levels of digital proficiency. Availability anytime, anywhere, as well as speed, ease of use, transparency, seamless omni-channel services, and fair pricing are key success factors in today’s landscape.
Augment the role of client advisors — Client advisors will continue to play an important role in managing the overall client relationship in this new world of wealth management, combining the human touch with technology as an enabler to offer more customized and relevant solutions directly to individual clients. The digitization journey therefore needs to start by optimizing the processes of the most effective and productive client advisors — with the dual goal of improving time-to-market and customer experience. Wealth management firms also need to reduce the administrative workload of advisors by giving clients more control and functionality to be self-directed.
Turn higher value-add into more revenue — Once wealth management firms have implemented both internal and external efficiency measures, client advisors will be able to focus more time on maintaining close client contact. In turn, this should lead to more frequent discussions about other potential solutions that create greater revenue streams.
Get prepared to go digital — The success of digital transformation in wealth management is only possible if firms are ‘ready’ for it. Being prepared starts internally with a top-down process that crafts the ‘digital vision’. From here, suitable business and operational models are defined and formulated into the digital strategy.
|
https://medium.com/additiv/becoming-a-digital-wealth-leader-ecc6bf36a571
|
['Thomas Schornstein']
|
2020-02-28 15:06:00.622000+00:00
|
['Hybrid Wealth', 'Digital Transformation', 'Wealth Management', 'Platform', 'Fintech']
|
useTime() React Hook
|
useTime() React Hook
Countdowns in React are complicated. Let’s use hooks to create a useTime hook.
Time keeps on ticking… Photo by Djim Loic on Unsplash
The Hook (Code First!)
Copy-paste this gist (https://gist.github.com/jamesfulford/7f3311bd918982e68d911a9c70b27415) and give it a try in your codebase. Don’t worry, this article will still be here when you get back.
useTime, getTime, and sample usage in a Countdown component.
This gist currently uses Luxon for handling dates, but the comments show how to convert it to use Moment, or native JS Dates. An epoch integer is also a wise move if timezones are complicated.
PSA: If you’re working with timezones, use a library. Else, this will happen to you and all your friends: (I warned you)
The Problem
Using the current time when rendering a React component can be tricky. The present literature (Medium articles, assorted blogs, and StackOverflow answers) often has 3 problems:
Implementation is Wrong
Working with the current time in React is hard. The common solution of setInterval callbacks incrementing/decrementing state will fall out of sync quickly because of a misunderstanding of setInterval in Javascript, and because of a misunderstanding of state transitions in React.
Directly accessing the current time in the render method will not trigger re-renders when needed, and passing it through a prop/context only defers the problem to a different component. Therefore, a solution using state is the right approach.
Some implementations available on blogs, Medium articles, and Stack Overflow answers update state incorrectly. Some countdown implementations will store the seconds remaining, then decrement that value every 1000ms using setInterval. This will fall out of sync with actual time very quickly, depending on:
how long the tab is visible (browsers deprioritize setInterval calls for non-focused pages for performance reasons)
how complicated the rest of your app is (the thread may not get to the setInterval callback quickly enough if there is a lot of work to do)
the performance specs of the viewing device (mobile devices will execute JS slower, for example)
If you want to learn more, research the “Javascript Event Loop” — several good interview questions revolve around this concept. This is not specific to React.
Getting React-specific, some implementations do not consider the fact that this.setState will not immediately update state. This means the first state update may not have been executed before the second state update was queued. This is a problem if the implementation relies on the previous state to decide the next state (i.e. time is incremented or decremented), because time will go out of sync when two duplicate updates are queued.
The solution is either to pass a function to this.setState as shown in the React docs, or (the better solution) to not rely on the previous state at all. Inside the setInterval callback, update state with the most recent time.
No Clean-Up is Done
In a class component, a componentWillUnmount method can clean up the interval id using something like clearInterval(this.state.intervalId). However, some implementations neglect this important detail, potentially causing memory leaks in the code of those who copy the implementation.
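Putting those two fixes together, here is a minimal class-component sketch (the component and field names are illustrative, not taken from the gist): the setInterval callback stores the most recent time directly in state instead of decrementing a counter, and componentWillUnmount clears the interval.
import React from "react";
class Clock extends React.Component {
  state = { now: new Date() };
  componentDidMount() {
    // Read the clock each tick; never derive the next value from the previous state,
    // so throttled or coalesced intervals cannot push the display out of sync.
    this.intervalId = setInterval(() => {
      this.setState({ now: new Date() });
    }, 1000);
  }
  componentWillUnmount() {
    // Clean up so the interval stops firing after the component unmounts.
    clearInterval(this.intervalId);
  }
  render() {
    return <span>{this.state.now.toLocaleTimeString()}</span>;
  }
}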
My Lifecycle Methods Are Huge (“It’s not a hook”)!
If a React component gets large (maybe you inherited bad code), it can take quite a lot of scrolling to go from the setInterval call to the clearInterval call and to the place where the current time is read from this.state. This violates one of my personal rules of thumb:
Things that are related should be close to each other.
Incidentally, this is the motivation for switching to hooks in React.
I have yet to find a hook that does this for consuming and updating the current time. So please, enjoy my implementation.
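For reference, here is a rough sketch of what such a hook can look like, using native JS Dates rather than Luxon; the hook name matches the article, but the gist’s actual implementation may differ in its details.
import { useEffect, useState } from "react";
// Returns the current time and re-renders the consuming component
// roughly every `refreshCycle` milliseconds.
export function useTime(refreshCycle = 1000) {
  const [now, setNow] = useState(() => new Date());
  useEffect(() => {
    // Always read the clock; never compute the next value from the previous one.
    const intervalId = setInterval(() => setNow(new Date()), refreshCycle);
    // Clean up the interval when the component unmounts or the cycle changes.
    return () => clearInterval(intervalId);
  }, [refreshCycle]);
  return now;
}
// Example usage in a countdown component, where `target` is a Date:
// const now = useTime(1000);
// const secondsLeft = Math.max(0, Math.round((target - now) / 1000));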
By the way, this “Things that are related should be close to each other” rule of thumb is a special case of one of my personal favorite rules (which has life/moral implications if fully adopted):
Make it easy to do the right thing; make it hard to do the wrong thing.
To learn more about this philosophy, check out one of my earlier articles (hey, you read this far!):
|
https://medium.com/javascript-in-plain-english/usetime-react-hook-f57979338de
|
['James Fulford']
|
2019-10-17 08:27:13.883000+00:00
|
['Programming', 'Web Development', 'React Native', 'React', 'JavaScript']
|
A Grammar Lesson
|
People generally acquire their first language through exposure to the language over a long period of time. They hear the language spoken by the members in their environment and the media; they don’t spend much time studying the rules of the language.
If and when people learn a second language, though, they study the grammar of that language to make sense of its rules.
I hear and read people whose first language is English use the word “amount” with countable nouns, like people and books, and all I can think is they have probably heard a large number of people misuse it and they haven’t learned its correct usage.
Every English-speaking person, particularly writers and public speakers, should know this grammar rule, so I’ll make this as concise as possible, hoping everyone finds the time to read it.
The word “amount” is used as a quantifier (a word which indicates how much) with uncountable nouns.
the amount of coffee
a large amount of information
a small amount of olive oil
a considerable amount of bread
half the amount of sugar
The nouns coffee, information, olive oil, bread, and sugar are all uncountable, so only quantifiers for nouns that can’t be counted (much, little, a little, amount of) should be used with them.
&
The word “number” is used as a quantifier (a word which indicates how many) with countable nouns.
the number of cups of coffee
a large number of books
twice the number of oranges
a small number of people
a huge number of small pies
The nouns cups, books, oranges, people, and pies are all countable, so only quantifiers for nouns that can be counted (many, few, a few, a number of, several) should be used with them.
Thanks for reading.
|
https://medium.com/@authornikaparadis/a-grammar-lesson-63dcb43ce81f
|
['Nika Paradis']
|
2020-12-19 14:31:57.549000+00:00
|
['Language', 'Speaking', 'Grammar', 'Writing', 'English']
|
❤️ Logo inspiration, week #14
|
❤️ Logo inspiration, week #14
One logo a day to inspire all the logo lovers
Hello beautiful reader, this is Drahomír, the logo designer. Welcome to the curated logo design inspiration site (I am not the author of the featured logos). One logo every day in 2020. I hope you will like it.
Polarbröd logo design
Stu Toy Inventions logo design
Rhino logo design
Monkey reading logo design
UK Parliament logo design
Old Norse logo design
Rainforest Alliance logo design
Svan logo design
Bent Play logo design
|
https://medium.com/sketch-app-sources/%EF%B8%8F-logo-inspiration-week-14-100c3c79df13
|
['Drahomír Posteby-Mach']
|
2020-04-05 18:14:24.139000+00:00
|
['Inspiration', 'Design Thinking', 'Design', 'Graphic Design', 'Branding']
|
Questions of Ragnarök
|
Questions of Ragnarök
Thomas Hobbes argued for “the mutual relationship between protection and obedience.” He was an Englishman who had lived through the civil war, a conflict which traumatized a decade as ideologues and opportunists gambled for power between the Monarchy and Parliament. Hobbes’ experience made him realize the dangers of an unyielding desire for liberty, and his magnum opus, the Leviathan, argued against the current of revolutionary Europe. He believed that obedience to a government, even a tyrannical one, was necessary to preserve peace and order and to prevent the loss of life. His work may seem ridiculous to some, even dangerous perhaps; however, are global events not suggestive of his thesis? The French Revolution resulted in what? A short-lived republic which was nothing more than a façade for murderous tyranny, and a belligerent empire which left a continent bleeding at the hands of a despotic hegemon. What of the dictatorships of Africa, Asia, Europe, and the Americas which arose in the vacuum left by previous tyrants? What of the Arab Spring? What was the result of the desperate desire for the decimation of tyrants and for the flames of liberty to burn bright? A fire that engulfed and burned down a house of cards. Everyday average people, human beings who just want to spend time with their families, make some money, and have a satisfying life: did they ask for revolution, and did they truly comprehend the price?
“Give me Liberty or give me Death,” Patrick Henry’s famous words. But what of the majority of people who wish to live, to not die, to not see their loved ones perish? What of the average human?
It is easy to be idealistic. However, we must ask these questions during this current period, which has seen the rise of authoritarianism. Nations such as China are proving and expanding the philosophy of obedience to a supreme totalitarian state. Leaders all over the world, and across the political spectrum, mask their craving for infinite hereditary power with democracy. A devious media has abandoned the principles of the fourth estate, reduced to the pet guard dogs of ideologues. An advanced, connected population is herded by a few barking dogs. This is a time period when the Anti-Fascists are the best examples of Fascism. Authoritarianism, totalitarianism, and legalism will dominate this coming decade and maybe even beyond. The high interconnectedness of the globe means that revolutions will no longer be contained to a spot on the map. The Arab Spring spread through a little online blue bird across two continents. The bells of liberty will soon start ringing, and as the clock of the reaper strikes midnight we will have to choose either to fill his bowl of blood or to take the whippings of peace.
Thomas Hobbes’ work should now weigh on the mind of the modern human, for we now carry the capacity of Ragnarök. Is the average person willing to pay the price of Liberty? This question must be answered in the coming years, for once Ragnarök starts there is no Valhalla left to run back to.
|
https://medium.com/@rish-choudhari/questions-of-ragnar%C3%B6k-ebe03116a73e
|
['Rishabh Choudhari']
|
2020-10-16 14:21:43.411000+00:00
|
['Liberty', 'Social Contract', 'Authoritarianism', 'Government', 'Revolution']
|
Please Don’t Wake Me Up
|
Hi there, my name is Angel
and I am only two days old.
Sorry about the way I am sleeping
but my mom purposely made me sleep that way.
She didn't want people to know I was a girl,
which I really don't understand why.
I just heard her talking about ‘female infanticide’
and that she would never let that happen to me.
I don’t even know what that means,
but I am sure it's something really bad.
So, if you find me super cute,
then promise not to wake me up.
Because I heard they woke my neighbor’s baby girl,
only to put her to sleep forever.
And as you know I am on a secret mission,
I need to live to free the world from this evil forever.
|
https://medium.com/flicker-and-flight/please-dont-wake-me-up-8b1606ff4165
|
['Bhavna Narula']
|
2020-11-02 20:36:45.758000+00:00
|
['Female Infanticide', 'Baby', 'Poetry', 'Short Read', 'Feminism']
|
Affiliate Marketing Case Study: Quora Ads Campaign
|
1. Write an article in which you promote multiple affiliate products (in this case study I chose 2 products) and publish it on your blog.
2. Drive traffic to the article by running paid Ads on Quora.
3. Sit back and earn passive income!
The Article Used
I have shown you in my previous article (another affiliate marketing case study) how to write an article promoting multiple affiliate products and how to publish the article. I have also shown you how to drive traffic to the article for free and get 2 conversions in 48 hours.
In this case study, I am going to use the same article that I’ve already used previously, which is a comparison between two email marketing services (SendinBlue and GetResponse), but this time I am going to drive traffic by running paid Ads on Quora.
SendinBlue VS GetResponse:
This article is a comparison between these two email marketing services. It highlights the benefits of each by organizing all the details in a table and you can see that the article is full of affiliate links to those products. What’s interesting about this is the psychology behind the whole thing. If you were able to target the right people who are searching for the best email marketing service, you will get a high conversion rate regardless of which one they are going to choose.
The Outcome of this Campaign:
To show you the campaign, I went to Quora and opened my Quora Ads Manager. On this page, there’s a dashboard and when you click on the traffic blog, you can see the campaign.
It shows that I spent $14.22 on this campaign and, as a result, I got 198 clicks and 64 click-through conversions. So, I got approximately 64 views on my blog. Previously it was around 30 views; now it’s around 100 views, which means that I got 66 new views from Quora.
To see the results of this campaign, I first checked SendinBlue. I skipped the first 5.00 € since they are the revenue of the first case study. In total, I made 25 € which is about $30.
Although I didn’t get any new conversions in GetResponse, I got a lot of free accounts, which means that anytime one of those free accounts gets a new subscription, I will receive a commission. This is because GetResponse has a recurring commission program.
So, my final results were 25 € from SendinBlue and 15 new accounts on GetResponse. This means that I got a 100% ROI (Return On Investment).
Creating the Campaign:
You can simply create a campaign on Quora by clicking the blue link labeled “New Campaign” and then you will have to provide the following details:
1. Choose a name for your campaign. In my case, it’s “Traffic Blog”. You also have to indicate the objective of this campaign and in my case, it’s “Traffic” since I want to drive traffic to my blog.
2. Set the budget. For this campaign, I chose to make it $10, which is a good amount of money to start with.
3. Choose a name for your ad set. The name can be anything. I went with “Adset 1”.
4. Pick a location. I went for “India” since I would get a lower CPC (Cost Per Click). You can add more countries to get more reach, but don’t go with countries with high CPC to keep the cost low. The point is to test the strategy safely before you scale up with bigger campaigns.
5. Choose primary targeting. This is how you will target a specific group of people for your campaign. My choice was “Contextual Targeting” and I filled in the targeted topics with “Email marketing”. In addition to that, I added “Email Service Providers”, “Email Deliverability”, etc. to get more reach.
In this way, I am targeting people who are asking and answering questions about email marketing and email marketing services on Quora. As you can see, the target audience is very precise, increasing the value and effectiveness of my ad.
6. Set the device and browser: I set it to “All desktop browsers” since I want all desktop users to see my ad.
7. Set the bid: I kept it as suggested, which is $0.74.
Creating the Ad:
1. Choose a name for your Ad (example: Ad1) and then click on “Image Ad”.
2. Fill in with the business name (H-educate).
3. Set the headline and add an image. My headline is “SendinBlue vs GetResponse” and the body text is “Decide which email marketing service is better for your business.”
It’s a simple ad with a “learn more” button that leads people to the article on my blog.
This is exactly how I ran a paid campaign on Quora and got a 200% ROI. Your next step from here is to scale up and rise to the challenge. You can run this campaign for one month and study the results.
End of the Affiliate Marketing Case Study
Now that I suggested this, you might be asking yourself why I chose to stop this campaign ahead of time.
Well, that’s simply because I ran a native ad campaign experimenting with the same article. The purpose of that was to compare the strategies and find out which campaign works better. If you’re interested in seeing the results of the native ads campaign, check out the video below.
I hope this affiliate marketing case study clearly showed the steps to making the most out of affiliate marketing by running a paid ads campaign. If you test out this strategy, please report back in the comments below; I would love to hear from you.
Originally published at https://blog.h-educate.com
|
https://medium.com/@heducate/affiliate-marketing-case-study-quora-ads-campaign-ee4cbd9620de
|
['Hasan Aboulhasan']
|
2021-01-20 08:18:59.357000+00:00
|
['Quora', 'Affiliate Marketing', 'Affiliate Marketing Tips', 'Make Money Online Fast', 'Affiliate Marketing Guide']
|
Be Your Own Physical Therapist and Defeat Runner’s Knee
|
What Is Runner’s Knee?
The complicated thing about Runner’s Knee is that it has many names. It was formerly known as Chondromalacia Patellae, and it’s now called Patellofemoral Pain Syndrome or simply ‘anterior knee pain’. It is an overuse injury caused by inflammation of structures (bursa, synovial membrane) around the knee cap. The pain presents itself at the articulation between the back of the patella (knee cap) and the patellar surface (the groove at the bottom of the thigh bone). There is often popping and grinding present, and occasionally minor swelling will arise.
“Runner’s Knee is an overuse injury caused by excessive & repetitive strain. Or simply put, doing too much, too soon, too frequently.” — Alinda Kennedy, B.Physio
This can occur based on overtraining, and it’s typically spurred on by muscle imbalances. The knee is a high-pressure hinge joint that can often take a beating. If there are deficits within the abductors, hamstrings, or lateral rotator muscles, we can pay the price over time due to the repetitive nature of running. Furthermore, if we feel tight or ‘muscle-bound’ within the quads, or notice that the ligaments around the knee are loose, this condition can also creep up.
For most individuals, this pain will gradually increase during a run, especially one of a longer distance. For severe cases, it may be bothersome throughout other aspects of the day like walking up stairs or down a hill.
Image from Comprehensive Orthopaedics
Think of Runner’s Knee as a warning sign that your training status and physical health need adjusting. When you experience this discomfort, your body is throwing you a white flag, stating that its tissues aren’t yet adapted to meet the needs of your current level of activity.
This is where knee-specific strengthening exercises come in.
Before we talk about therapeutic options, I want to be clear. As already mentioned above, some cases of Runner’s Knee are worse than others. Please use your discretion when applying any new exercises, and check with your health provider if you have any concerns. Some may need more invasive care, but we can always do our best to start with physical therapy. For the majority of us, completing these exercises will go a long way to improving running outcomes and overall strength/function within the knee joint.
|
https://medium.com/runners-life/be-your-own-physical-therapist-and-defeat-runners-knee-432cbf317964
|
['David Liira']
|
2020-12-16 20:30:48.548000+00:00
|
['Health', 'Inspiration', 'Fitness', 'Lifestyle', 'Running']
|
Mock Network Requests in UITests
|
Networking is a constant issue iOS developers have to face. There is hardly an app which doesn't require it. In tests we need reproducible and stable results, and the network doesn't guarantee this. Ideally, we want to throw our Mac into a dark cave 100m below the surface and run our automated tests. Having no network connection shouldn't be the reason for failing tests. Since apps depend on getting data from backends, this needs some work on our side.
In Stub network calls in Unit-Tests we made sure to control our unit-tests. As a nice side effect it also offered the dark cave option. So there is nothing more to do. To recap, OHHTTPStubs was used to return data on NSURLRequest level. Sadly this trick doesn't work for any UITest framework which doesn't share the process with the app (EarlGrey could work ;)).
Continue using OHHTTPStubs for UI-Tests
Your first thought might be: "Why not use OHHTTPStubs within the app?" This is possible but there are a few reasons I dislike it:
It is hard to control what needs to be returned.
It changes a lot of logic within the app.
It adds (testing) code, which doesn't belong in the production code.
Control the backend
When writing UI-tests you need to consider what you are testing. Do you want to test solely your app, or do you also want to test your system? The companies I've worked for had a tendency towards testing the entire product. This was just a crutch as they didn't have the confidence in their backends working correctly.
Testing your system sounds nice, but an error doesn't really tell you what is broken. You'll have to look everywhere. So I advise you to use your UI-tests for your app. To achieve this, you have to create your own backend for the tests. It has to run on the same machine, so no network dependencies exist. Setting up your entire backend on one machine sounds like overkill, so let's skip this. Instead, I suggest creating a mock backend server on which you can control the responses you need for your test.
Basic structure of your Mock Backend
What are our requirements for our test-backend?
We can control what it returns
It runs outside our app
Minimal changes to our production code
These can only be fulfilled by writing our own backend. The most basic idea would be to have a program with a dictionary for every request type. This dictionary maps paths to responses. So whenever we call 'GET /hello' it will return "200 OK - world".
In case you prefer to write this backend as a stand-alone app, you can do it. Maybe have some kind of network API like:
"/getRoute?path=hello&result=world"
This would be quite easy to implement. Just have a dictionary [String:String] per request type and map the path towards the response.
I see a few issues with such a stand-alone backend:
You need to make sure it starts correctly before the UI-tests run
Changing paths require a network client
In my opinion unnecessary complexity
It's probably in a language not all iOS developer can use
But sometimes these issues are less of a problem or even an advantage. Imagine not being able to use XCUITest and instead having to use some kind of external tool. You will have to create and control it from outside of the testing code.
Thanks to a lot of diligent developers we can use Swift not only for apps, but also on the backend. So why not use it and make the iOS developers write their backend in the same language as the app? The advantages are:
No new language for developers
All app developers can work on it if necessary
Started from within XCUITest you can configure it within the code
Setting up the Backend
To implement our backend we will use Swifter. It is a tiny HTTP engine with which you can quickly create a fully functional webserver. Interestingly, it behaves quite similarly to the internal structure described above.
To configure responses you simply set them internally via:
server.GET[path] = HttpResponse.ok(response)
Within a UI-test we can start the server, configure it and only afterwards start the app. Of course it offers the possibility to dynamically switch our responses in the middle of the test. Let's look at some pseudo code:
func test1() {
mockServer.configure()
mockServer.launch()
app.launch()
executeTest()
}
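Fleshing that pseudo code out a little, here is a minimal sketch of what the mock server wrapper and a test could look like with Swifter. The MockServer type, the stubGET helper, the stubbed path, and the test class name are illustrative, and the exact HttpResponseBody case may differ between Swifter versions.
import XCTest
import Swifter
final class MockServer {
    private let server = HttpServer()
    // Map a GET path to a canned response body.
    func stubGET(_ path: String, returning body: String) {
        server.GET[path] = { _ in .ok(.text(body)) }
    }
    func start(port: in_port_t = 8080) throws {
        try server.start(port)
    }
    func stop() {
        server.stop()
    }
}
final class MockedNetworkUITests: XCTestCase {
    func testShowsGreeting() throws {
        let mockServer = MockServer()
        mockServer.stubGET("/hello", returning: "world")
        try mockServer.start()
        let app = XCUIApplication()
        app.launchArguments.append("USE_MOCK_SERVER")
        app.launch()
        // ...drive the UI and assert against the stubbed data...
        mockServer.stop()
    }
}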
One nice advantage is that the server only exists while you are testing. In case macOS requests you to allow network connections (the usual firewall screen), you can either create a rule and permanently allow connections, deactivate the firewall (I wouldn't advise it), or just ignore the message. Interestingly, while the dialog is open and you haven't declined it yet, the connections are still being established.
App
The only open question left is the app. As the server is running locally you will need to add a new environment to your app, which redirects all network requests to the server. Yuri Chukhlib has written a great post about handling app environments. So go there before continuing, in case you don't know how to handle these.
To use our mock server we have to add another environment. This has a base URL of "127.0.0.1", as it should connect to our locally running server. Depending on your release process, you might not want to recompile your app simply for testing purposes. So you somehow need to change the environment the app is working on. Some apps I've worked on have a secret gesture to get into a debug menu, within which you were able to change the environment. Most often developers shy away from this option, as they are afraid of users finding them (it's just a matter of finding a good place to hide... but still). So instead of doing this, we can pass launch arguments to the app.
This is one more thing you can do within the testing code:
app.launchArguments.append("USE_MOCK_SERVER")
Within the app we can check for this launch argument:
if ProcessInfo.processInfo.arguments.contains("USE_MOCK_SERVER") {
    // Point the app's networking layer at the locally running mock server,
    // e.g. switch the base URL to http://127.0.0.1:8080 (port is illustrative).
}
This is the only change we have to do within our production code. You can use this technique for other necessary changes within the app, but don't overuse it. I don't advise you to add different states within the app (such as "let's use this parameter to make the app think it's logged in"). Instead, let the tests run a little bit longer and have less code in production. The more code there is, the more complicated it gets. So keep it as simple as possible.
Conclusion
With this mock server introduced we can run our tests in the dark cave. Interestingly enough, it helped my team understand our network requests better. Suddenly we had to know which calls were made at which time. It also sped up our testing time by quite a margin. All in all, I can only advise you to stub your entire network. It improves your testing stability and speed.
Next: Screen Objects
Previous: Automated UI-testing for iOS Apps
|
https://medium.com/mobile-quality/mock-network-requests-in-uitests-58c96967e928
|
['Jan Olbrich']
|
2018-04-01 22:31:25.339000+00:00
|
['iOS', 'Software Development', 'Testing', 'Swift', 'Software Engineering']
|
Why Social Platforms May Become the Next Killer App for Blockchain
|
At a high level, crypto trading only requires an exchange hosting digital wallets for 2 or more crypto assets that can be traded at their current market value. These are the components needed to execute a trade — but what happens before you make a trade? How do you perform research, discover the right asset, and track your performance?
Crypto enthusiasts around the world face these same problems. First, there is no easy way to learn the fundamentals of the cryptocurrencies and second, it’s difficult to find a community of people to bounce ideas off of. It’s also hard to discern if the advice you get is credible enough to risk your hard-earned money or coins on.
While the core components — an exchange and a digital wallet — aren’t going anywhere, we believe a platform designed to be social from the ground up will dramatically change the typical crypto trader’s research process.
Let’s explore how social platforms will disrupt the crypto trading model.
What is a social platform?
A social platform is one that’s designed from the ground up to foster social interactions. Those interactions could be comments, messages, pictures, videos, or more. We can attribute part of the success of Slack, WhatsApp, Telegram, and other chat apps to the communities that grew on their platform.
A free way to chat is nice, but the ability to use a Slack channel to collaborate with minimal friction or join a Telegram channel specifically dedicated to the hottest topic in crypto brings much more value to users.
Furthermore, the range of social products we use is broad. We can think of YouTube, a video-sharing platform, as a social network. Yes, people come for a specific video, but there’s also the ability to subscribe to channels, comment directly on videos, share those videos across social media and email, and engage with others in the comment section.
Today, most people don’t think of combining “social” with “finance” but we believe that is changing. There are a few key reasons why a social platform has additional benefits when compared to traditional exchanges as we know them. We’ll dive into that next.
Learning
Humans are inherently social creatures. By leveraging the knowledge of its participants, a social platform makes it easier to onboard and teach new members. It’s one thing to read a concept on a frequently asked question (FAQ) section of a website, and another thing entirely to experience events (like a hard fork, or a main-net going live) in real-time while chatting with other crypto enthusiasts who are sharing their personal thoughts as the event unfolds.
The ability to ask a specific question or get clarity around a breaking event provides a memorable experience, something that’s hard to recreate without the authentic participation of engaged crowds.
We can look at Twitch as an example: The top live streamers on the platform garner hundreds of millions of views on their videos. Put another way, more people watch live streams of video gamers than any professional sports broadcasts from the MLB, NBA, or NFL.
Humans instinctively want to observe other humans engaging in activities that they can personally relate to. In addition, participating as observers is a great way to increase one’s own skill level.
Accountability & Visibility
Social platforms also provide accountability. We have all witnessed this anecdotally: Once you say something publicly, you are much more likely to follow through. But accountability also gets fostered in other ways.
Social interactions motivate people to be good actors. People who are misleading or come off as spammy get removed from the group by the community via reporting systems. There are also opportunities for the community to recognize and reward influencers. It’s only a matter of time before entrepreneurial types within the crypto community use social platforms to post helpful content and seek rewards for their contribution.
A Social Future
Social trading platforms provide an opportunity to introduce cryptocurrencies to hundreds of millions across the world. Today, there are approximately 34 million wallets on the Ethereum network alone, a fraction of the total addressable market for crypto.
That’s why we’re excited to introduce Hilo, our new social platform to help crypto traders and enthusiasts alike. There was no simple way for the Hilo founders to qualify research on new coins in this rapidly changing market, so we set out to build a platform that allows users to track influencers, learn about their portfolios, discover new coins, and catch up on all things crypto. Hilo is about building community-driven discussion and empowering users to earn tokens for sharing their expertise in the form of producing video content and disclosing the percentage holdings of their portfolios.
Be on the lookout for the launch of Hilo and how their HILO token can incentivize users to explore, learn, and chat about what’s happening in the crypto economy.
Sign up for the Whitelist & Beta: https://www.hilo.io
Join our discussion on Telegram: https://t.me/HiloCrypto
|
https://medium.com/hilocrypto/why-social-platforms-may-become-the-next-killer-app-for-blockchain-483193b4b4fb
|
['Nicholas Donahue']
|
2018-05-29 23:07:07.049000+00:00
|
['Social Media', 'Blockchain', 'Bitcoin', 'Cryptocurrency', 'Blogs']
|
Broken-hearted cities - Why driving is the new smoking
|
The harmfulness of driving
Heavy vehicles harm in sometimes acute, graphic, and sudden ways, but also frequently in obscure, insidious ways.
The less obvious ways that driving harms are more clearly similar to how smoking harms.
Consumer harm
Driving kills. Since the year 2000, more Americans have died in traffic crashes than were lost during both World Wars.
Occupants of cars are exposed to higher levels of air pollution than people outside, as fumes and harmful particulates become concentrated in enclosed spaces.
Mortality from the worst non-communicable diseases would be significantly reduced if people who drive would cycle or walk for more of their journeys. For instance: commuting by bike can reduce the risk of heart disease by 46% and lower the risk of developing cancer by 45%!
Addiction
Driving is addictive. Once you own a car, driving tends to become difficult to give up.
Private motorised vehicles are as seemingly pervasive as cigarettes once were. Users can become agitated if they might have to walk further to their cars and are comforted by being able to keep them close by. Present bias also plays into how habitual driving becomes.
Driving is an expensive habit. Smoking a tank of petrol or diesel every other week can quickly put a big dent in back pockets.
There is also an enormous public bill for this addiction. Building and maintaining infrastructure, servicing the cost of ownership, and cushioning the health and safety effects is expensive, and it contributes to intergenerational debt.
If someone really wants to quit driving there are often significant barriers that require will, creativity, and perseverance to overcome. Intervention helps.
Projected harm
Driving harms others. A significant aspect of the demise of smoking was the second-hand smoke issue, or ‘passive smoking’.
Inconsiderate jerk! © Andy Singer
I think of passive smoking as a form of ‘projected harm’ — where harm is done to others in an arbitrary, oblique way.
It is (now) unacceptable that smoking — an activity someone chooses to engage in — should expose others to serious health risks.
Are cars really so different? They are literally creating a passive smoke injustice in a majority of cities.
While electric cars reduce tailpipe emissions, they still generate lung-damaging particulate emissions from tyre, brake, and road wear.
Children are unwittingly exposed to air pollution when they are transported by car. Only the source and type of smoke from this passive smoking is different.
You might be thinking:
Does having a less recreational purpose for a product make any harm they inflict more acceptable? I’m not sure it does. Especially if that harm is to others.
Driving of private motor vehicles also projects harm onto others in ways other than air pollution.
Excessive car traffic in cities and residential areas creates a significant barrier for many to lead active lifestyles. In this way, driving is projecting harm through health risks due to inactivity.
This problem is also particularly unjust to children.
Driving also harms others through road violence. According to the World Health Organisation, 26% of deaths from road violence around the globe are people who were walking or cycling. Figuring out who does the killing is an interesting topic.
(Spoiler: It’s people driving heavy motorised vehicles)
Some other projected harms from driving:
noise pollution
light pollution
near misses and traumatic experiences
visual obstruction
constricted space for people
heightened caregiver anxiety and effort
reduced green space
suburban sprawl
social disconnection, mental health impacts
wildlife loss of habitat
dead animals
climate breakdown
Total annual deaths — comparing smoking and driving and divided by user harm and projected harm. The absolute total number of deaths from secondhand smoking is less than from projected harm from driving. Deaths related to climate change attributable to driving are rather small at this time, but this could become a large contributor if unmitigated.
Annual global deaths relative percentages — comparing proportions of causes of deaths between smoking and driving. The projected harm from driving is over half the total. Secondhand smoking was a significant motivator in choosing to reduce smoking rates — especially in public. What amount of projected harm from driving is needed until we feel the same urgency?
The health of cities
Crippling congestion of too many cars is to a city what a heart attack is to a person.
Dividing cities into sections. © Andy Singer
As a city’s transport corridors are filled with larger vehicles in less free flowing streams of traffic, the risk of clogged arteries grows.
On-road car storage is to city streets as plaque is to arteries. It builds up and slows the flow. If untreated, road plaque increases the risk of traffic clotting and stopping the healthy circulation of people.
Cars create more distance than they overcome. As our addiction to cars grows, our cities expand, pushing people further from each other and exacerbating mobility challenges.
The analogy between city congestion and human biology has been employed before to illustrate this problem, but we need to shift our perspective of how to address it.
Cities stand to contribute a great deal in mitigating climate breakdown by transforming the mix of transport.
|
https://medium.com/@alex-m-dyer/broken-hearted-cities-why-driving-is-the-new-smoking-aa0cd7b6ae2e
|
['Alex Dyer']
|
2020-12-06 21:11:19.848000+00:00
|
['Mobility', 'Public Health', 'Urbanism', 'Smoking', 'Car Culture']
|
Future of digital marketing
|
If you are looking to learn about the future of digital marketing, then you are in the right place. You will find some important information here.
As everything is getting digitized these days, businesses need to think about offering their products or services online. Whatever you need, you can get it with just one click, because everything is going digital.
Suppose a person wants to expand his business; at some point he needs to show his products or services online, because a large number of people are active online.
Revenues in digital advertising could double by 2022. Therefore, to put themselves in the driving seat, Indian businesses must be well-versed in digital marketing to represent the country in the worldwide marketplace. Not only companies but also candidates looking for a career opportunity in this field can find lakhs of jobs in nearly all cities.
The scope of digital advertising and marketing is wide. Digital marketing will remain the most effective form of marketing in the future. But because the dynamics of digital advertising change every day, a digital marketer must be agile, alert, and clever, and adapt to current changes. Not only that, today’s digital marketer will anticipate changes and act on them well before the change actually occurs, in order to take advantage of it.
To meet people’s demands, technology keeps growing and improving to make life easier.
That’s one of the biggest reasons why Digital Marketing is Boosting.
So in case you are wondering whether or not to, it’s certainly the right time for you to go digital!
|
https://medium.com/@digiprisma07/future-of-digital-marketing-f431f407156a
|
[]
|
2021-12-25 05:48:18.356000+00:00
|
['Digiprisma', 'Marketing', 'Digital Marketing', 'Digital Marekting Service', 'Digital Marketing Agency']
|
How to recognize your open source project contributors and grow your community
|
There’s a truism — if a community is not growing, it is slowly dying. How is your open source community doing? Is your contributor base stagnant, shrinking or growing? Are you like many open source community leaders with little idea of how to encourage new participation?
There are many opinions out there about growing the activity around an open source project. Successfully building an open source community-driven project is more than just throwing your code on Github and doing development in the open.
Folks must know the project exists, that you’re open to contributions, what the contribution process is, coding practices in the project, and so on.
One very visible tactic is to establish what some call “social proof”. That is, some kind of visual indicator that the project is currently receiving contributions.
What does the word “community” mean in this context?
A “community” is a group of people coming together for a shared purpose or shared goal. The traditional meaning is the folks living in a town, and their shared goal is living peacefully together in that city.
But communities can form for other purposes. For example, a Facebook group about electric motorcycles will host discussions of electric motorcycle brands, where to ride, how to maintain or customize the bikes, and so on. As the members get to know one another through discussing electric motorcycles, they form a community.
Likewise, the folks maintaining an open source software project also form a community whose goal is improving that software. This article is focused on one aspect of growing community participation in an open source project — acknowledging those who contribute to the project.
Many project websites have “widgets” showing data like build status, whether the tests are passing, and so forth. What if another widget showed an indicator of contributors to the project? Namely:
A list of folks making code contributions — demonstrating to the public that this project has contributors
Giving kudos to contributors, so they can have bragging rights, and to feel appreciated
Demonstrate there is communal ownership of the project
Demonstrate who has how much of a stake in the project
Tell the public this project is not the hare-brained idea of one guy/gal who’s coding to suit their whims
The existence of build status widgets and the like demonstrates a place for automatically-updated widgets giving data about open source projects. These widgets are geared for the public, and the purpose is reassuring potential users or contributors the project has an automated build and test system, and whether the current status is green.
But that’s not the only kind of status system a project team may use. For team management purposes, a team might use a private dashboard giving the status of various aspects of their project. Commercial software projects regularly do this. Dashboards are maintained by the product manager to measure progress towards goals. This post is not talking about that kind of status system, but instead, one that is shown to the public.
Isn’t it reassuring to know an open source project is team driven? That there is more than one set of eyeballs looking for bugs? That the direction is not the mad ravings of one person but driven by a collaborative process? If you’re looking to integrate an open source tool into the software driving your business, don’t you need to know the tool has a stable future?
Let’s think first about a status widget that does some of the above. Then look at what some prominent open source projects are doing along these lines. Finally look for any existing tool of this nature.
Brainstorming
Generally speaking, we’re talking about a “status widget” to install on project pages, like the source code repository. The widget must present some data about the contributors to the open source project, and implement as many of the ideas above as possible. Some possible attributes to show are listed below; a small sketch of how such a widget could pull contributor data follows the list:
Easily installed — insert an HTML widget into websites
Automatically retrieve data from Github/Gitlab/etc commits
Identify the type, size, etc, of code changes in commits
Present contributor data in several forms (customizability)
Present useful information about each contributor
Present useful information about total contributions
Be utterly objective about listing contributors
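As a rough illustration of the first two attributes, here is a minimal sketch that pulls the contributor list from the public GitHub REST API (GET /repos/{owner}/{repo}/contributors) and renders avatars into a host element; the function name and the element id are hypothetical, not part of any existing widget.
// Minimal sketch: fetch contributor data from GitHub and render a simple widget.
async function renderContributorsWidget(owner, repo, elementId = "contributors-widget") {
  const url = `https://api.github.com/repos/${owner}/${repo}/contributors?per_page=10`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`GitHub API error: ${response.status}`);
  const contributors = await response.json();
  // Each entry carries a login, an avatar URL, and the number of commits.
  document.getElementById(elementId).innerHTML = contributors
    .map(c => `<a href="https://github.com/${c.login}" title="${c.contributions} commits">` +
              `<img src="${c.avatar_url}" width="32" height="32" alt="${c.login}"></a>`)
    .join("");
}
// Usage: renderContributorsWidget("expressjs", "express");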
Actions by some Open Source projects to recognize contributors
Since it’s useful to take a look around and see what others are doing, let’s look at certain high profile open source projects. What are they are doing in terms of recognizing contributors?
Vue.js — This leading UI framework for modern web applications has a “Contributors” widget that links over to a Github page which displays Vue.js code contribution data. The contributors widget is somehow derived from an OpenCollective widget showing “backers” of the Vue.js project. This shows monetary contributors. The avatars do not necessarily correspond to code contributors on the project.
ReactJS — This leading UI framework for modern web applications has a well-developed Contributors area. But no listing or recognition of contributors was found anywhere.
Bootstrap — This leading responsive UI framework has a well developed Contributors area. On the main page of the repository are mentioned Mark Otto and Jacob Thornton as the Creators. Under “Copyright” it mentions ownership is split between Twitter and “The Bootstrap Authors”. The latter linking to the Github-generated list of contributors.
Webpack — The project homepage shows several lists of monetary contributors. Each generated by OpenCollective. On the Webpack project repository, it’s clear there is a well-developed Contributors area. It also includes a link to a Medium publication. Here they publish information about how to contribute to the Webpack project. The only folks mentioned here are the Webpack Core Team. Again the lists of monetary contributors generated by OpenCollective.
jQuery — This extremely popular library for DOM manipulation in web browsers has a very well developed contributors guideline. Nothing could be found listing the contributors.
ExpressJS — This is a popular framework for developing web applications with Node.js. It shows TJ Holowaychuk as the original author and Douglas Wilson as the current project maintainer. It then links to the contributor list generated by Github. It’s clear from that list that those two made the overwhelming majority of code contributions to the project.
|
https://medium.com/free-code-camp/how-to-recognize-your-open-source-project-contributors-and-grow-your-community-3eaa472344ab
|
['David Herron']
|
2018-12-03 18:01:44.327000+00:00
|
['Community', 'Open Source', 'Software Development', 'Github', 'Tech']
|
Newborns: Master the Chaos
|
When my son Kiaan was able to sit upright and support himself (6 months old) I felt the BIGGEST RELIEF. I thought to myself, “uhh HUGE WIN because I can finally get back to toning my arms.” Moms, I know I’m not the only one who feels like my biceps got Arnold Schwarzenegger jacked after carrying a baby around all day. Rest assured, it gets better after they start crawling. More importantly, I was excited for Kiaan to be more alert and interactive.
0–6 months was one hell of a roller-coaster. The highs felt incredibly comforting, like a cup of hot chocolate on a snowy cold day. But the lows really made me want to pull out my hair, curl up in bed and hide from it all. The lows are what made me realize how valuable it is to know that every mom goes through this. And how encouraging it is to hear stories that made me feel like everything is going to be all right.
The first few months seem like a blur, they flew by in diaper changing, nursing, learning all the different things about breast milk, decoding cries, sterilizing bottles, and all the things in between. Not only was I getting in sync with my baby to understand his needs, but I was also figuring out and making sense of the changes I was going through as a new mom. In those first 3 months, I don’t even think my son needed entertainment beyond feeding and pooping every 2 hours. But once things settled…THEN WHAT?
At some point you will master the chaos, you’ll start to realize there is time in the day to actually hang out with your baby. I felt like there wasn’t enough out there to tell you what to do with young babies! Here are some fun and interactive things I did:
Read books:
Try some touch and feel books, pop-up books, and pick up books your baby knows the song too. Here are some of Kiaan’s favorites:
Little Blue Truck by Alice Schertle
Llama Llama Hoppity-hop by Anna Dewdney
Alpha block by Christopher Franceschelli
Five Little Monkeys Jumping on the bed by Eileen Christelow
Around the farm by Eric Carle
Brown bear brown bear what do you see by Eric Carle
Follow me 123 by Igloo books
Follow me ABC by Igloo books
The itsy bitsy spider by igloo books
Everything is Mama by Jimmy Fallon
Your baby’s first words will be Dada by Jimmy Fallon
Fuzzy Yellow Ducklings by Matthew Van Fleet
The adventures of little Nutbrown hare by Sam McBratney
Talk!!!
I found myself talking out loud all day, often to the point where I thought I was losing all my shits. I swear, there were days where I was at the door like a lost puppy waiting for my husband to come home so I could finally speak to another adult! BUT, know your baby is listening to you and picking up on your words and sounds. Your voice is one your baby recognizes and it really does soothe and calm him. Pro Tip: try speaking to them in different languages — It keeps things a little more interesting when you’re talking to yourself all day.
Music/sing to them:
We used Spotify playlists — overall, singing became a really fun activity we did together.
Instrumental nursery music (early on, calmed Kiaan and helped put him to sleep)
Nursery rhymes (made him smile and clap)
Bollywood music (got Kiaan to kick and dance)
Prayers (soothing)
Dance together:
Sometimes you just need to boogie and bring beats and fun moves for lots of giggles and claps. This must be where Kiaan’s love for “Bongo Butts” began. Dancing and jumping around with him always brought positive energy and excitement that continuously lifts our moods.
Tummy time:
Tummy time works muscles in the neck and shoulder to make them stronger and prepare your baby to sit up, crawl, stand, and walk.
We used the boppy pillow for Kiaan, but to change things up, tummy time water mats can be a really fun alternative. They’re visually enticing with lots of color and can boost strengthening the core. Also, the floating water animals are fun for the little ones to track as the animals move around with you. DM or comment if anyone has tried the water mats — I would love to know your thoughts!
Play peek-a-boo:
You will see gradually they start to pick up on what this game is all about and their reaction is so cute!
Bubble time:
Early on, Kiaan tried eating, poking, popping and holding bubbles and as he grew a little older he tried chasing and catching bubbles by crawling and standing.
Play dates:
NO, I don’t only mean for your baby! Set plays dates for yourself WITHOUT your baby too. You need YOU time!
You may think what’s the point of all these play dates if my baby isn’t engaging or interacting with others? BUT, play dates are important for 3 main reasons: babies learn how to share their mom (which they aren’t used to), learn how to mix with others, and since all babies are different and advance in different areas — this gives your baby motivation/incentive.
Toys:
I had lots of lists of toys from various people but none that would clearly just tell me which ones to use for my really young infant. I found flash cards (foam ones), teething toys (Zoli chubby gummy teethers and silicone elephant teethers), and rattles, mobiles over the crib, and activity gyms to be the most useful for that 0–6 month period.
There are so many different things you can do with your infant early on. I touched on a few of my favorites, but for those of you looking for more, DM me and I would love to share! Also, comment below if you have other fun and interactive ideas! I would love to hear and learn from your own personal experiences. Remember, all these moments are truly amazing with your little ones, but the day they sit up and wobble around take that as a sign that life gets easier!
|
https://medium.com/@pooja_93210/newborns-master-the-chaos-96826ef1be88
|
['Pooja Mehta']
|
2020-06-14 16:51:57.951000+00:00
|
['Baby Activities', 'Baby', 'Motherhood', 'Newborns', 'Mom']
|
Oklahoma’s Burgeoning UAV Ecosystem
|
For too long I have flown over the middle of the United States ignoring the creativity of my fellow citizens below. Covid-19 changed everything, grounding planes, and forcing me on a cross-country adventure. To my surprise, the I-35 corridor that stretches from Dallas, Texas through Oklahoma City is one of the busiest centers for entrepreneurship in aviation, drones, and unmanned systems.
Last month I met with Dr. Jamey Jacob of Oklahoma State University’s Unmanned Systems Research Institute as part of my participation in AUVSI’s (virtual) Xponential conference. As the chair of OSU’s Aerospace Department, Professor Jacob literally sits in the heartland of America’s innovation ecosystem. Unlike the densely populated coasts, the Sooner State offers roboticists wide-open areas to safely launch and test their inventions. Some of the leading organizations and companies embracing these resources are Skydweller Aero, NASA, Boeing, Baker Hughes, and Kratos Defense & Security Solutions. According to Dr. Jacob, OSU’s leadership in mechatronics was an outgrowth of “the explosion of drone technology in the early two thousands” by the military and three-letter government agencies. Oklahoma also has a long history in the aviation industry, as Interstate 35 (I-35) is home to Boeing in Wichita, KS; Lockheed Martin in Dallas, TX; American Airlines’ prime Maintenance Repair and Overhaul (MRO) facility in Tulsa, OK; and the Department of Defense’s MRO base in Oklahoma City, OK. “You already had a large manned aircraft infrastructure supported in the State,” says Dr. Jacob. The scientist further explained that today’s requirements go well beyond quadcopters and UAVs to manufacturing advanced avionics for companies like Uber and Bell for urban air transport.
The vastness of the State’s geography combined with its superior engineering talent is attracting venture capital dollars from around the globe. Israeli-based entrepreneur Jonathan ‘Yoni’ Frenkel is working with Atento Capital of Tulsa to lure Israeli founders to open their American offices in the Southwestern city. Frenkel describes his personal journey to Atento, “COVID challenged me to consider the trade-off of what’s important, not just regarding my professional work, but also health, relationships, and the impact I am making.” He continued, “Once I visited Tulsa, I understood the range of activities; of particular interest was how local Native American tribes like the Osage Nation have used their land to create drone testing corridors. What really impressed me however was the authentic desire of the Tulsa community to help startups succeed… maybe I am a jaded New Yorker but the sincerity impacted me in a positive way.” Lifelong Oklahoman Nathaniel Harding of Cortado Ventures concurs, “It’s no surprise that our state is seeing innovation happen across drones, aviation and robotics. Our unique edge is developing technologies that solve real challenges being faced by industry. We’re used to building things, and our historic strength in avionics and aerospace gives Oklahoma an edge in shaping the future in these sectors.” Harding further stated that the new normal of remote work is accelerating interest by entrepreneurs in moving to the center of the country, “The secret is out that you can build a company for 1/5th the cost in the mid-continent, which is better for both investors and entrepreneurs. Now that we realize what can be done with a distributed workforce, it will be very hard to go back to the way it was.”
The entrepreneurial spirit is not limited to startups or the cities; it extends across the entire State. Oklahoma is home to 39 Native American tribes, accounting for almost half of its physical area. Earlier this week, I spoke with James Grimsley, Executive Director of the Choctaw Nation’s Advanced Technology Initiatives. The Choctaw is one of the largest indigenous nations in the United States, with a reservation encompassing 11,000 square miles. Its natural resources are a boon for unmanned systems. Last year, the Federal Aviation Administration (FAA) selected the Choctaw Nation as one of ten national test sites for the Unmanned Aircraft System (UAS) Integration Pilot Program (IPP). For decades, the Choctaw has been successfully operating casinos and other commercial initiatives to drive revenues for quality of life improvements on the reservation. Grimsley believes that its participation in the UAS IPP will be “a catalyst for the high tech industry,” both inside the tribal nation and outside its borders. As a self-contained entity with its own infrastructure, retail network, and governance, the Choctaw offers innovators a living laboratory to conduct autonomous drone deliveries. For example, last month Bell Aircraft received Beyond Visual Line Of Sight (BVLOS) approval for its Autonomous Pod Transport (APT) delivery missions to typically inaccessible areas of the Choctaw Nation. As the APT is a Vertical Take-Off and Landing (VTOL) craft, it is also the basis for future urban transportation alternatives (such as flying cars and air-taxis). These types of initiatives are not limited to delivery and urban air transport tests, as leading Fortune 500 companies and federal agencies are now flocking to the Choctaw Nation to conduct trials of unmanned vehicles for power-line inspections, emergency management, agriculture, livestock monitoring, and weather analytics. The impact of the UAS IPP is felt beyond the business sector, as the Choctaw has implemented a robust educational STEM curriculum and job training network for its people. In contrast to the current political climate, Grimsley states that the Choctaw’s leadership operates with “less partisanship and more unity in the interest of the tribe.” He further elaborated that decisions are made with the lens of a 100-year outlook versus the next election cycle. The Choctaw today has spread its wings globally with investments in public and private entities that are already paying dividends for the next generation.
The Choctaw story is not unique; other tribes across the State are following suit in joining the larger entrepreneurial ecosystem. Back in New York, my inbox is inundated with inspirational messages about ways to bolster diversity in the greater venture community. Quite possibly, Oklahoma provides a novel paradigm for broadening opportunities for minorities across the country. The promise of flying cars, unmanned systems, and robots is a catalyst not only for changing technology, but society. Just like the wide-open skies of the prairie, everyone has the potential to benefit from these innovations.
Returning to Stillwater, Oklahoma, I nudged Professor Jacob on his view of the State’s progress: “Our bread and butter is doing the next great thing in aerospace, really focus on the autonomous capabilities.” He then humbly shared his personal journey, “When I originally started working on unmanned aircraft as an undergraduate student I was using drones for tornado chasing. Then as a faculty member I started to develop autonomous flying aircraft for Mars… that was back in 1999. Now there is an aircraft going to Mars,” exclaims Dr. Jacob. While it has taken over twenty years to achieve interplanetary UAVs, he is optimistic that Oklahoma is soaring on the right trajectory. He pointed to his latest joint project with NASA JPL, solar balloons to explore Venus, and illustrated how these celestial endeavors are tackling climate change on Earth. The scientist wrapped up our conversation by declaring, “I am most excited about UAVs for weather” that provide “micro forecasts” for manned aircraft, unmanned systems, and everyday people.
|
https://medium.datadriveninvestor.com/oklahomas-burgeoning-uav-ecosystem-1842ccac86b9
|
['Oliver Mitchell']
|
2020-11-08 15:36:59.591000+00:00
|
['Startup', 'Drones', 'Native Americans', 'Diversity In Tech', 'Robotics']
|
Enter the Zionverse — NFTs + Metaverse Crypto Gaming Platform
|
Zionverse is a decentralized metaverse gaming ecosystem based on Indian culture. It is a platform that integrates decentralized finance (DeFi), playable NFTs, Play-to-Earn mechanics, and Planetary DAOs.
Zionverse is focused on three main aspects: Playable NFTs, Composable Gaming Platform, and Indian Culture.
(1) Playable NFTs
Zionverse has two main NFTs: Lakshmi and Trimurti.
The first launch of Lakshmi NFTs contains gamified fixed deposits of USDC, where users get 3D models of the Goddess Lakshmi, spatial audio music, and 108% APY in USDC for one year (paid out monthly). Zionverse generates these returns through yield farming on stablecoins. Their website reports that the 5,555 NFTs sold out in 19 days.
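As a quick illustration of what a 108% APY paid out monthly could mean in practice (a hypothetical calculation assuming simple, non-compounding monthly distributions; the litepaper’s exact payout mechanics may differ):

```python
# Hypothetical illustration of a 108% APY on a USDC deposit, paid out monthly.
# Assumes simple (non-compounding) distribution: 108% / 12 = 9% per month.
def monthly_payout(deposit_usdc: float, apy: float = 1.08, months: int = 12) -> float:
    """Return the size of each monthly payout for a given deposit."""
    return deposit_usdc * apy / months

deposit = 1_000  # hypothetical deposit amount in USDC
print(f"Monthly payout: {monthly_payout(deposit):.2f} USDC")            # 90.00
print(f"Total over one year: {monthly_payout(deposit) * 12:.2f} USDC")  # 1080.00
```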
The Trimurti NFTs provide intrinsic value through DAO tokens, revenue sharing, and their status as collectible items. They also provide utility by giving players access to the Zion Gaming platform, and they can be converted into 3D playable characters in the metaverse game. Each character is unique with different rarity types.
Players can also get rewarded for purchasing NFTs and staking. The earlier you purchase an NFT, the higher the rewards you receive. To learn more about purchasing an NFT, check out the Lakshmi Zionverse Litepaper here.
(2) Composable Gaming Platform
Zionverse has developed a user-generated gaming platform on Unity. It’s a 3D voxel-based engine inspired by Minecraft. Users can use their NFTs in the game and are also able to import existing Minecraft maps, skins, and NPCs into the Zionverse.
Users can show off their skills in Vijayi Dash. In this game, players are in an obstacle course where they must run, jump, duck, and leap through dozens of challenges. Points are awarded based on how long you are able to stay on the obstacle course. Once ten games are played, your total score is calculated for the tournament. Users who make it onto the leaderboard also get rewarded. The higher rank you achieve, the higher your reward.
(3) Indian Culture
Lastly, the Zionverse is based on Indian culture, history, and mythology. The metaverse is inspired by these notions and is open to everyone. The project has plans to open the universe to Norse and Greek mythology as well.
Roadmap & Team
The Zionverse team recently revealed their roadmap, which is available on their website. In Q4 2021, they launched their website and released their Trimurti NFT. They are planning to release their DeFi layer in Q1 2022, and launch their planet DAO and tokens in Q2 2022. Information about the team is also listed on their website.
Join the Zionverse Community:
Twitter: https://twitter.com/thezionverse
Telegram: https://telegram.me/zionverse
Discord: https://discord.com/invite/8G8P33nSGB
Instagram: https://www.instagram.com/zionmetaverse/
Thank you to Zionverse for sponsoring this article. Check out their website for more information on their project. This is not financial advice. Please do your own research before investing in any platform. Make sure to subscribe to the VoskCoin YouTube Channel to stay up to date on the latest cryptocurrency news and reviews.
|
https://medium.com/voskcoin/enter-the-zionverse-nfts-metaverse-crypto-gaming-platform-17d6d45d80f8
|
['Miss Vosk']
|
2021-12-31 21:07:29.662000+00:00
|
['Play To Earn', 'Metaverse', 'Zionverse', 'Lakshmi', 'Nfts']
|
Streaming!! Villanova vs Texas Live Free 2020 Basketball
|
The Big East/Big 12 Challenge gets underway in Austin on Sunday when Villanova (3–1) visits the Texas Longhorns (4–0). The Longhorns are looking for their fifth consecutive win to start the season after winning the Camping World Maui Invitational held in North Carolina.
Watch Now: https://dailyaccess.org/basketball/
Texas defeated the Tar Heels 69–67 when Matt Coleman III hit the game-winning jumper with 0.1 seconds left. In that game, Texas led by as many as 16 points in the first half, but North Carolina chipped away at the lead until they grabbed a 65–63 lead with 2:53 left in the game. The Longhorns rallied with a 6–2 run over the final minutes of the game to seal the victory. Coleman had 22 points in the win, hitting three three-pointers along the way.
The Longhorns held their final two opponents in the Maui Invitational to 33.9% shooting and won two of three games by one possession.
Villanova has had a tumultuous beginning to their season with COVID cancellations and changes. The Wildcats were a quick replacement in Bubbleville when Temple had to quarantine. They ended up losing the replacement game against Virginia Tech in overtime 81–73. After rebounding against Hartford with a convincing 87–53 win, Villanova’s next two games were postponed due to COVID. Now, the Wildcats get a Texas team that has started the season strongly.
|
https://medium.com/@edwardrichard1984/the-big-east-big-12-challenge-gets-underway-in-austin-on-sunday-when-villanova-3-1-visits-the-812b90d8266f
|
[]
|
2020-12-06 15:08:42.991000+00:00
|
['Vs', 'Live', 'Free', 'Texas', 'Villanova']
|
The Ultimate Security Camera Installation and Purchasing Guide 2021 — Houston Security Solutions
|
Pros of Security Cameras
The most important benefit of security cameras is that they deter crime. Whether the cameras are installed in your home or business, their visible presence is typically enough to scare off anyone with criminal intentions, since they will realize their unlawful activities will be recorded on video. Security camera installation in Houston, TX is an excellent choice for areas with high crime rates. It will help keep your business or house from becoming a target.
Observing settings and actions — Security cameras may be installed virtually anywhere. Power over Ethernet (PoE), a newer technology, allows power and video to be carried to a camera over a single cable. Depending on your needs and requirements, you may install visible or concealed (covert) cameras to monitor the actions of visitors to your home or business. This is a fantastic method for monitoring and tracking suspicious visitors.
Collect and compile evidence — Security cameras placed by a competent security camera installation firm are ideal for monitoring sounds and activities. Furthermore, as technology advances, cameras are increasingly equipped with high-quality audio and video capabilities for recording and documenting occurrences.
Improve public safety — Surveillance cameras, which are commonly utilized in public locations such as crosswalks, malls, and parking lots, provide great surveillance options for preventing and deterring crimes in public.
Reduce crime in public places — It is improbable that a person will commit a crime if they are aware that a surveillance camera will capture them in the act. Furthermore, if there is any suspicion of a crime occurring in a certain location, the area might be vacated as a safety measure.
Convenient monitoring from anyplace — Surveillance cameras are highly efficient since the camera feed can be accessed via the internet or even your smartphone.
You may use the camera system to keep an eye on your children as well as your pets. Pets are an important part of many people’s lives, and leaving them at home alone may be distressing as well as costly. You may check in on your dogs from work with a professionally fitted security camera system.
Cons of Security Cameras
Costs — When compared to fake cameras, genuine security cameras are clearly more expensive to install, depending on the features, number of cameras, and monitoring systems.
Vulnerability — Advances in technology have enabled thieves and other intruders to detect genuine or dummy cameras and develop ways to deactivate or disconnect the power supply of cameras that have not been professionally installed.
Privacy infringement — Security cameras have sparked debate across the board, particularly in the professional sectors. Employees may perceive security cameras as an invasion of privacy or interpret their existence as an indication that their boss does not trust them.
Installation costs a lot of money — This is a significant disadvantage of using security cameras. Professional systems are often acquired on an a la carte basis, which means they do not come in pre-configured generic bundles. Professional systems are often assembled by the installation firm to precisely match the customer’s application and demands.
Complex to use — If you are unfamiliar with technology, you may find it difficult to operate some of the highest-quality cameras on the market. This is becoming less of an issue as time goes on. Surveillance camera makers are figuring out how to include high-tech capabilities into surveillance component software in a way that non-tech users can locate and utilize.
Surveillance systems are easily abused — Hackers and vandals may attack surveillance cameras installed in public areas.
What is a Surveillance Camera?
Security cameras and surveillance cameras are essentially the same thing, and cameras are one of the most common and well-known technologies used to observe us as we go about our everyday lives. Local governments and companies deploy surveillance camera networks, and with the advent of real-time crime centers that access both public and private video cameras, the distinction is becoming increasingly blurred. Surveillance cameras are commonly employed in public locations to monitor public behavior.
Difference between a Security and Surveillance Cameras
Surveillance cameras and security cameras are words that are sometimes used interchangeably. Both safeguard your house and let you examine video of incidents such as attempted break-ins. In practice, the distinction usually comes down to whether a system is being professionally monitored. Security cameras, for example, are cameras that are actively monitored in the case of a break-in, a fire, or an accident. Surveillance cameras are cameras that monitor your house and can only be accessed on a smartphone, tablet, or computer.
CCTV and Surveillance Camera Fundamentals
A CCTV camera is part of a self-contained surveillance system that records or stores video. An analog CCTV camera transfers its footage, either digitally or over coaxial cable, to a recorder known as a DVR. Surveillance cameras, on the other hand, are network cameras that broadcast video and audio data to a network video recorder (NVR), where the footage may be watched and recorded. Surveillance cameras are used to secure your assets as part of a broader security system.
Involvement of Technology
An older CCTV system collects the video feeds from all linked cameras and sends them to a receiving device, such as a DVR. In an analog system, this connection is usually made via coaxial cable. In a more up-to-date camera setup, Ethernet cables are used to link an IP camera to a network video recorder (NVR) or to a network switch.
The closed-circuit system used to monitor and govern a specific property is made up of a whole network of surveillance cameras. IP (internet protocol) networks are commonly used to connect security (surveillance) cameras from remote locations to a central location.
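To make the IP side of this concrete, here is a minimal sketch of how software on the network (an NVR, a video management system, or a simple script) might pull a live feed from an IP camera over RTSP. The address, credentials, and stream path below are hypothetical placeholders; a real camera’s RTSP URL is listed in its documentation.

```python
# Minimal sketch: reading frames from an IP camera's RTSP stream with OpenCV.
# The URL, username, and password are placeholders, not a real camera.
import cv2

RTSP_URL = "rtsp://user:password@192.168.1.50:554/stream1"  # hypothetical address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream; check the URL and network.")

while True:
    ok, frame = cap.read()              # pull the next decoded frame off the network
    if not ok:
        break                           # stream dropped or camera went offline
    cv2.imshow("IP camera feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break                           # press 'q' to stop viewing

cap.release()
cv2.destroyAllWindows()
```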
Features
To deliver video feeds to a restricted number of displays, CCTV cameras require cabling. Furthermore, the cameras must be carefully located in a single spot. Surveillance or security cameras, on the other hand, send recorded footage as digital signals to an NVR (Network Video Recorder) through a single PoE connection, eliminating the need for power cords.
Applications
CCTV cameras are used to manage the security of both public and private buildings. These systems can be used in conjunction with intrusion detection sensors to provide enhanced security. The surveillance camera, on the other hand, is ideal for monitoring a specific region and therefore controlling any undesired occurrences.
Different type of Security Cameras
Let’s say you’re thinking of getting a security camera for your house or office. In such a scenario, you’ll have to choose between wired and wireless options. There’s a lot of misunderstanding about these two kinds of cameras.
Wireless Cameras
A WiFi camera, often known as a wireless security camera, broadcasts video over WiFi and is powered by either AC or battery power. This necessitates the use of a power cord to connect it to an outlet for AC power. It’s important to note that a wireless camera isn’t necessarily wire-free; instead, it’s termed a wireless camera because it transmits data via a wireless network (WiFi). When a wireless camera is powered by a battery, it becomes really wire-free.
The footage from wireless security cameras is often stored on a cloud server, allowing you to access it from anywhere. Some cameras can also save video on local media, such as a micro SD card. Wireless cameras are popular because they are simple to set up and view from a smartphone or computer.
Wireless security cameras usually start recording when motion or sound is detected. However, if plugged into power, some can be programmed to record 24 hours a day, seven days a week. They record high-resolution video and, if equipped with night vision, can record at night. Some consumer brands also feature two-way audio capabilities, allowing you to converse with the person who is visible to the camera. Finally, some models include machine learning, a technique that enables cameras to perform useful functions such as alerting you when a person or item is detected.
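For readers curious what “start recording when motion is detected” looks like under the hood, the sketch below shows the basic frame-differencing idea. It is an illustration only, not any vendor’s actual firmware, and the threshold value is an arbitrary assumption you would tune for your scene.

```python
# Illustrative frame-differencing motion detector (not a vendor implementation).
import cv2

cap = cv2.VideoCapture(0)                # a local webcam stands in for the camera
ok, frame = cap.read()
if not ok:
    raise RuntimeError("Could not read from the camera.")
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

MOTION_THRESHOLD = 500_000               # arbitrary tuning value for illustration

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, previous)               # pixel-wise change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if int(mask.sum()) > MOTION_THRESHOLD:           # enough pixels changed -> "motion"
        print("Motion detected: a real camera would start saving a clip here.")
    previous = gray

cap.release()
```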
Wired Cameras
A wired security camera system combines cameras and a recording device. The number of cameras typically begins at four and may go to 256. They can record 24 hours a day, can be viewed remotely via the internet, and are hard-wired to the internet and power. Traditional DVR systems, which utilize a coaxial cable and a separate power connection to link the cameras and record the footage, and newer NVR (networked video recorder) systems, which use Ethernet cables to both power and record video, are the two types of wired home security camera systems.
An Ethernet cable may link both DVRs and NVRs to the internet. NVRs are more sophisticated than DVRs and can record higher-quality video. NVR systems may also include several of the capabilities that wireless cameras offer, such as two-way audio and person recognition. The IP cameras that come with a wired home security system get their power from the NVR or a Power over Ethernet switch and don’t require a plug. Most wired systems feature a smartphone app for watching footage. You may still watch the recordings and real-time feeds by connecting a computer display to the recording device.
Whenever possible, we recommend that you utilize wired security cameras instead of wireless security cameras.
In comparison to wireless security cameras, professionally installed hard-wired cameras are always a superior choice because:
The biggest disappointment you could have if you pick a wireless camera is the monthly costs. The majority of wireless cameras rely on cloud storage, which comes with a monthly cost. As a result, if you install a wireless camera at your home or business, you will be charged a monthly cost. It’s also possible that you’ll have to pay extra for smart features like person and vehicle recognition. A wired camera, on the other hand, does not usually require a monthly subscription.
Another disappointment is that wireless cameras only function with WiFi, implying that they are only as good as your home WiFi network. You may have problems if your WiFi is too sluggish or your camera is too far away from your router. At times the video lags, freezes, or fails to deliver a live view at all. Wired cameras, on the other hand, are directly connected to the network through a network connection and operate without any technical difficulties or glitches 24 hours a day, seven days a week.
As your internet speed fluctuates, so will the quality of your video stream from wireless cameras. Even if you have a 1 Gbps internet connection, the quality of your WiFi will fluctuate depending on a variety of variables, including how many other people in your area are online at the same time and radio interference from other wireless devices in your home.
As a result, because there isn’t enough bandwidth to offer higher quality footage, your 4K cameras may occasionally communicate in 720p (not even full high definition). That is why, especially in areas of Houston, TX where crime is a concern and continual monitoring is required, it is preferable to utilize a professionally installed connected security camera.
Wireless cameras offer a lot of flexibility in terms of location and setup. Even so, you’ll need to connect your cameras to a solar panel or remember to keep their batteries charged. They have battery problems since wireless cameras can’t record continuously without fast depleting their batteries. Instead, they record in brief bursts (10 seconds to five minutes, depending on the brand and amount of activity), so you could miss important moments. Wired cameras, on the other hand, are permanently attached to a power source and may record continuously.
Wireless cameras are susceptible to cyberattacks since they link to the internet and allow remote access. They have the potential to be hacked, jeopardizing your privacy and security.
As a result, we always recommend professional wired cameras installed by a security camera installation firm over wireless cameras for your home or business protection.
How Important are Security Cameras
As previously said, it has become important to install security cameras at your home or office in order to ensure your safety. Installing a security camera at your house or business as a security or preventative measure was once seen as a dramatic, costly, and unneeded undertaking. However, given technology’s accessibility and cost, failing to install some sort of security camera now appears to be a reckless and strange move. Technological advancements have made major improvements possible.
Surveillance cameras enable owners to see their house or place of business at any time and from nearly anywhere. Installing security cameras in your home is a wise choice for a variety of reasons. Here are some of the most compelling reasons to do so in Houston, TX.
Deterring Crimes and Criminals
Criminals are deterred by the sheer appearance of an outside camera. Even so, relying on fake cameras is risky since seasoned burglars can usually identify them from a mile away. The majority of the time, criminals will inspect a residence before robbing it. They will most likely abandon the burglary attempt if they see cameras placed by a competent installation company. Assume you’ve been a victim of a break-in. In that event, the cameras will catch the occurrence and aid in the arrest of the offender, as well as the recovery of your stolen property.
HPD said there had been 199 homicides within the city limits through Wednesday, June 10, 2021. Through the same time period in 2020, there had been 148 homicides. That’s a 35% increase.
Aiding The Police
Installing security cameras in your house may enable you to assist the authorities in the case of a break-in. The event will have been caught in high-quality HD footage by cameras professionally placed by a Houston camera installation business. These recordings and photos can help police apprehend the perpetrator, prevent future crimes, and recover your belongings.
Keeping an Eye On The Family
Surveillance cameras aren’t only for home security; they can also be used to keep an eye on your children while you’re at work. When a child gets out of school in the middle of the afternoon, many families with two working parents find themselves in a bind. A parent may always check in on their children from work utilizing a video security system’s remote monitoring option.
Don’t Forget About Your Pets
You may use the camera system to keep an eye on your kids, and you can even keep an eye on your pets. Pets are an important part of many people’s lives, and leaving them at home alone may be distressing, as well as expensive. You can check on your dogs from work with a professionally setup home security camera system.
Insurance Benefits
Following a burglary, you must file an insurance claim for vandalism or theft. This is when your high-definition surveillance camera comes in handy. You may use the footage to chronicle the occurrence and back up your insurance claim. In addition, a security system may generally result in up to a 20% discount on homeowner’s insurance.
Different Brands of Security Camera System
There are a multitude of security camera brands to select from. In today’s world, security is crucial. People want to feel safe in their own homes and businesses. Installing security cameras is one of the greatest methods to receive such protection. You should conduct research before purchasing any large-ticket products, such as a security camera, on which you will rely.
With the current pandemic, security cameras have become more popular as company owners seek to secure their assets amid nationwide shutdowns. There are a variety of CCTV camera brands available on the market. Professional security cameras and consumer security camera brands are the two most common types.
We’ve put up a list of some of the most well-known security camera brands for you to consider.
Professional Security Camera Brands
Axis Communications Offers The Best Security Systems For Business
Houston Security Solutions is a certified Axis security camera installer that sells security cameras, video management software, and integration software.
Houston Security Solutions’ Axis security cameras are based on an innovative, open technology platform and offer the security market’s most comprehensive variety of professional-quality video surveillance camera solutions. With the launch of the world’s first network camera in 1996, Axis revolutionized security and has remained the worldwide market’s number one option for network video solutions ever since.
If you’re not in the security sector, you’ve probably never heard of Axis Communications. However, this is due to their concentration on security camera systems. They’ve been making high-end analog and digital security systems for years. They collaborate with partners all around the world and specialize in networked solutions.
Bosch Security Camera
Bosch Security Systems is a massive security company that provides all types of security systems, from security cameras to alarm systems. They have everything to offer, from small home systems to multi-building business systems. The company is well-known among businesses and is utilized around the globe.
Avigilon Security Camera
Avigilon Certified Partner Offers The Best In Class Video Surveillance.
Houston Security Solutions is a full-service Avigilon partner that sells security cameras, video management software, and integration software from other Avigilon partners.
Avigilon’s sophisticated security system may aid in the reduction of theft, the prevention of violence, and the tracking of questionable persons. When combined with AI, Avigilon transforms into a security powerhouse that few other systems can match.
The security professionals at Houston Security Solutions have over a decade of expertise providing, installing, and integrating Avigilon security systems. We will personally guarantee that your Avigilon security is perfectly customized to your business size, and we will maintain it throughout its lifespan.
Avigilon manufactures a wide range of security camera solutions for use with CCTV cameras. They develop software, hardware, and analytics as a whole. A full-service solution ensures that you receive all you need from your goods and more. Avigilon products are used by industries and consumers all around the world.
Hanwha Wisenet Security
Hanwha Security Offers The Best In Class Video Surveillance.
Hanwha, a leading security company, offers video surveillance solutions such as IP cameras, storage devices, and management software that are based on world-class optical design, manufacturing, and image processing technology. They provide end-to-end security solutions and have seen global success in a variety of industries.
The security professionals at Houston Security Solutions have over a decade of expertise providing, installing, and integrating Hanwha security systems. We will personally guarantee that your Hanwha security is perfectly customized to your business size, and we will maintain it throughout its lifespan.
FLIR
FLIR is the world’s top infrared camera manufacturer, producing everything from surveillance cameras to systems meant to be mounted under helicopters. They also own Lorex, and thanks to that acquisition they can now supply every degree of camera requirement, from the most basic security systems to the most advanced.
Hikvision
Hikvision is a major manufacturer of security camera systems. They have a large number of employees who work hard to produce cutting-edge security technology. Products from Hikvision offer top-of-the-line lenses with NVR and HD features. With offices around the world, Hikvision’s sales of security cameras have done quite well.
Due to their dependability, Hikvision products can be found in homes and businesses around the world. They sell complete systems, including everything from the cameras themselves to video intercom systems and software to record, manage, and access footage. Some of their top solutions include applications in the retail industry, healthcare services, and smart buildings (including smart school systems). Such innovative products have led them to become one of the most well-recognized names in the industry. All the brands are available with Houston security camera installation.
Dahua
Dahua is a surveillance technology company that specializes in security systems. They provide a diverse selection of high-quality security solutions, and a five-year warranty is also included.
Monitor and record your properties with LTS security cameras.
With constant surveillance and video recordings, LTS security systems keep businesses safe. Site Admins and security personnel may use LTS cameras to remotely monitor buildings, rooms, and outdoor locations at all times, without needing to be there. Employee regulation is aided by surveillance systems, which allow administrators and workers to notice and respond to problems. They help decrease liability by preserving incident evidence.
We install LTS’s highest-level video surveillance and video equipment. Our expert technicians install LTS security cameras, ensuring proper setup and positioning for the greatest possible coverage.
Uniview
Uniview, commonly known as Unv, is a well-known Chinese surveillance technology business. Uniview’s financials indicate that it is a strong firm that will continue to lead China’s quality camera sector. Their cameras are mostly IP camera systems that can be utilized by both companies and households.
Consumer Security Camera Brands
Swann Security Camera
Swann is an Australian firm that has grown from its humble beginnings in a basement to selling goods all over the world. They provide a wide range of security devices to assist in the protection of homes and businesses. Swann offers anything from simple systems with only two cameras and limited functionality to complex systems with NVR and a big number of cameras. There are even cordless security camera solutions available.
Lorex Security Camera
Lorex offers a wide range of devices that aim to provide both security and value to households. They provide a wide range of options, from plug-and-play to more sophisticated camera systems. They provide a number of wireless security camera systems with data transmission distances of up to 500 feet. In addition, 4K camera choices are available.
Logitech Security Camera
Logitech is another company that manufactures a wide range of goods. Almost certainly, you’ve heard of them before. The company has been in the webcam and home camera market for some time. They now offer a wide range of security camera systems to choose from.
Netgear Arlo Security Camera
Netgear started off as a networking company, so it’s not surprising that they’d branch out into security cameras. Arlo is their camera line, which includes both wired and wireless cameras that save data in the cloud and can be accessed from any phone, tablet, or computer. Audio recording is also available on certain of their gadgets.
Nest Security Camera
Nest is well known for its internet-connected thermostats. These are some of the items they produce. Alarm systems and security cameras are among the many security devices available.
D-Link Security Camera
D-Link is another networking brand that you may be familiar with. Their network routers have made them famous. They also provide a range of Wi-Fi, HD cameras with a number of functions that may assist safeguard any property.
The difference between the Consumer and Professional Security camera brands.
Dahua, Hikvision, and Uniview are professional security brands, whereas Nest, Arlo, and Ring are consumer brands. This easy-to-follow guide will provide you with useful information to help you make an informed decision about your new video surveillance system in Houston.
A professional video surveillance camera may feature a varifocal lens or, more often now, an autofocus lens that allows the user to optically zoom in on a specific target or zoom out for a wider view (through a web browser interface). This will save you time in the long run because you won’t have to climb a ladder to adjust the focal length. These cameras are designed for a wide range of purposes, including forensic detail and situational awareness.
Depending on the company demands and requirements, professional security camera installers can utilize a 360-degree fisheye camera with a multi-sensor. Fisheye cameras include a fisheye lens that allows for 180-degree surveillance while maintaining HD video quality. A single fisheye security camera may cover up to 4,000 square feet and can be used to replace many conventional cameras without sacrificing coverage. Fisheye cameras have just one wire, but conventional cameras require many cords.
Video from consumer camera systems may be stored in the cloud. In principle, this is a wonderful capability, but the cost increases as the resolution is increased. Yes, just because it’s an HD camera doesn’t imply you can keep HD recordings in the cloud at a reasonable price. Overall, the better the resolution and the more cameras you have, the more money you’ll spend on cloud storage. Furthermore, most out-of-the-box features limit recordings to 10-second snippets, which are insufficient for commercial purposes. In the professional security camera sector, this is a hot topic.
Commercial cloud video surveillance, often known as VSaaS (Video Surveillance as a Service), may have some promise in the near future. Still, there are too many restrictions in terms of picture quality, intelligent video, and overall evidence management for us to suggest these platforms to our business clients that require mission-critical surveillance systems.
Another issue with these consumer systems masquerading as professional video surveillance kits is that they are frequently shipped with a low-cost, low-quality Linux operating system that is unlikely to function with video surveillance hard drives. Finally, embedded DVR systems generally have a fixed amount of storage; if you add cameras later, these systems may not enable you to simply add another hard disk drive and instead force you to buy a new system.
How Much Does It Cost To Install A Security Camera?
The typical cost of installing video security cameras in Houston is $400 to $5,000. Without installation, the national average for a system with four or more cameras, a recording system, Smart features, and Cloud possibilities is $600. A single-unit doorbell camera may be purchased and installed for approximately $175. For $2,500, you may have a wired system of 12 or more high-tech cameras installed, with monitoring.
We have categorized the Houston Security Camera installation cost into various categories.
Security Camera Cost By Camera Type
Surveillance cameras exist in a variety of shapes and sizes, allowing them to be used for a variety of purposes and scenarios. The numerous versions on the market are differentiated by factors such as video quality, internet capability, and configuration flexibility. Understanding the many model kinds can help you pick the best camera for your needs, as costs and features vary greatly.
Dummy Security Camera Cost
Dummy cameras usually cost between $10 and $15. They’re phony cameras that don’t record video but provide the impression of a working surveillance system. While these cameras have the apparent disadvantage of providing no genuine monitoring capability, they are extremely inexpensive and need almost no setup. Many come with genuine flashing lights to provide the illusion of a working system, making fake cameras a viable option for homeowners seeking a low-cost deterrent.
Bullet Security Camera Cost
Bullet cameras range in price from $30 to $500 apiece and may be either low-cost security cameras or high-resolution beasts that can see the tiniest particle of salt on a white floor. A bullet camera resembles a box camera in appearance. Its lens, like that of a dome camera, is permanently mounted within a glass casing. These cameras are more discrete and are available in both indoor and outdoor models. Despite this, due to the permanent nature of the housing, repositioning and performing maintenance might be challenging. Bullet cameras work with both CCTV and IP systems.
PoE Security Camera Cost
PoE cameras range in price from $50 to $500. PoE cameras, or power-over-Ethernet cameras, get their power from an Ethernet connection rather than a coaxial cable, another cable type, or batteries. If you already have Ethernet wires in your house, these cameras may help you save money on installation. PoE cameras work with both CCTV and IP systems. PoE is used in 90% of professional security cameras.
Box Security Camera Cost
Each box camera costs between $100 and $750. Cameras having a box-like body that is attached to a separate lens are known as box cameras. These cameras are often larger, more costly, and less appealing than other varieties. However, they generally come with better performance and product life, as well as the ability to change lenses. They might be a good alternative for businesses that want a higher level of security. Box cameras work with both CCTV and IP systems.
Hidden Security Camera Cost
Each hidden camera costs between $50 and $250. They are distinguished from other camera kinds by the fact that they do not generally resemble cameras. To prevent discovery, hidden cameras are sometimes disguised as other items such as smoke alarms or clocks, or are extremely tiny. These characteristics make them ideal for covert observation. Despite this, their small sizes and odd forms can occasionally result in video quality and memory space restrictions. CCTV and IP systems are both compatible with hidden cameras.
Doorbell Security Camera Cost
Doorbell cameras are becoming more popular and cost $75 to $250. Doorbell cameras combine camera technology with traditional doorbell features to allow you to survey the area in front of your home’s door. Doorbell cameras typically offer Smart features like smartphone alerts when someone rings the doorbell or movement is detected. However, they rely on WiFi signals like other wireless camera types. Doorbell cameras can be compatible with CCTV or IP systems but are mainly used with IP systems.
Dome Security Camera Cost
The cost of a dome camera ranges from $80 to $300. The clear, dome-shaped glass covering that covers the lens gives dome cameras their name. Dome cameras have the advantages of being unobtrusive in appearance, resistant to damage owing to the protective glass, and, when tinted, making it difficult to determine which way the lens is pointed. However, because of the glass housing, accessing the lens to adjust it or perform maintenance might be difficult. Dome cameras work with both CCTV and IP systems.
Outdoor Security Camera Cost
The price of outdoor security cameras ranges from $50 to $600. Outside security cameras are security cameras with extra characteristics aimed toward outdoor use, such as waterproof enclosures and low-light capabilities. Outdoor security cameras may be more expensive than other models as a result of these increased capabilities. CCTV and IP systems are both compatible with outdoor security cameras.
License Plate Recognition Security Camera Cost
Cameras capable of collecting high enough pictures to view and read license plate numbers are known as license plate capture cameras. These cameras cost between $300 and $1000. It’s worth noting that cameras labeled with this phrase may or may not have software that can automatically interpret numeric data. Many cameras labeled as license plate capture cameras are merely those with image quality good enough to discern numbers while reviewing film. CCTV and IP systems are both compatible with license plate capture cameras.
PTZ Security Camera Cost
PTZ cameras range in price from $250 to $1500. PTZ cameras, also known as pan-tilt-zoom cameras, are remote-controlled cameras that can move, swivel, and zoom the lens. These cameras have the huge advantage of being able to instantly alter the camera angle without having to remount the camera. Some even have software that automatically adjusts the camera to movement. PTZ cameras work with both CCTV and IP systems.
Professional Grade Security Camera Cost By Brand
The cost of the professional brands we offer as a security partner, such as Axis, Bosch, Avigilon, Wisenet, Hikvision, and LTS, varies. Depending on features and specifications, prices range from $200 to $3,000. These security cameras offer cutting-edge technology, including thermal sensors, people counting, motion detection, smart tracking, and microphone and two-way audio options. They also come in different shapes and models, and they are available as both analog and network IP security cameras.
Hikvision Security Camera Cost
Hikvision cameras have a wide range of prices, ranging from $125 to $475 per camera. Another Chinese security manufacturer, Hikvision, offers a wide selection of camera solutions, including dome, bullet, PTZ, and license plate recognition cameras. Hikvision has a number of product lines that are customized to various security needs. They have cameras with deep-learning algorithms, cameras that catch color in low light, and even cameras that can withstand explosions, for example. DVRs, NVRs, and cabling are also available from Hikvision.
Dahua Security Camera Cost
Dahua cameras come in three different price ranges, including the Lite, Pro, and Ultra Series. Their cameras range in price from $75 to $350 on average. Dahua is a Chinese security company that manufactures dome, bullet, PTZ, and license plate recognition cameras. They also have a diverse product range of DVRs, NVRs, cabling, and smart home integration capabilities. Dahua also has a customer service line.
Consumer Grade Security Camera Cost by Brand
The brand of a security camera may tell you a lot about its quality, longevity, and efficiency. Based on the product and the warranty, installation, and other services they provide, various brands may be more or less attractive depending on your specific needs. These are all important factors to consider when selecting a camera brand.
Swann Camera Price Cost
Swann cameras are priced between $70 and $200. Swann is an Australian security company that offers a wide variety of camera types, including dome, bullet, and floodlight. Swann cameras are available in both wired and wireless versions, with some models including built-in lighting and Google Assistant/Amazon Alexa compatibility. Swann cameras are an excellent middle-of-the-road choice, offering high quality and functionality for the money.
Night Owl Security Camera Cost
The cost of a Night Owl camera ranges from $100 to $150. Bullet and dome security cameras are available from Night Owl. Many Night Owl security cameras have improved night vision capabilities, and some even have heat-detection capabilities. These are the most well-known characteristics of Night Owl cameras. They’re particularly well-suited to long-range, outdoor, or extremely low-light settings.
Lorex Camera Cost
The cost of Lorex cameras ranges from $100 to $175 apiece. Lorex is a Chinese security company that manufactures a high-quality line of wired and wireless cameras for usage in the home, business, and commercial sectors. Lorex provides bundle-style solutions that include numerous cameras and NVRs with excellent resolution for a low price, as well as doorbell and wire-free cameras.
Nest Camera Cost
Nest cameras range in price from $150 to $300. Nest cameras are part of Google’s Home suite of devices. They come in four different camera types: normal, smart, indoor, and outdoor cameras. For homes interested in smart AI features like Google Assistant integration, microphone communication capabilities, and automated Smart notifications when sounds and movements are detected, these cameras stand out.
Cost of a Security Camera Systems Based on Storage Capacity
When installing a security camera system, it’s important to think about how the footage from the cameras will be stored. Physical copies, such as SD cards and DVRs, are supplemented by cloud-based storage and hybrid approaches, such as network video recorders (NVRs). Consider how accessible the video will be through the internet and mobile devices, as well as whether you’ll be paying a one-time price, as with memory cards, or a recurring monthly subscription, as with Cloud-based services, when choosing a storage solution.
SD Card CCTV Camera Cost
SD cards range in price from $10 to $50 on average. SD cards are a physical storage method that saves footage on a card inside the camera. SD cards are less expensive than other storage options, don’t require an internet connection, and can be viewed on any PC or smartphone with the necessary software. However, compared to other techniques, SD cards have limited storage space, do not automatically upload footage to the internet for remote viewing, and can be lost if the camera is stolen.
Security Camera Systems with DVR Cost
The cost of a DVR ranges from $200 to $2500. DVRs, often known as digital video recorders, are hard drives for analog surveillance systems. The DVR receives the analog signal from the cameras and transforms it to digital footage before saving it. DVRs have higher storage capacity than other types of storage, such as SD cards. However, their capabilities are usually restricted to wired cameras.
Security Camera Systems with NVR Cost
The price of an NVR ranges from $250 to $3000. NVRs, also known as network video recorders, are hard drives that store video footage, similar to DVRs. NVRs have the advantage of being able to function with both wired and wireless IP cameras. This may be a huge benefit for homeowners who wish to install a wireless system. When utilized as part of a wireless system, though, recording reliability depends on the strength of the WiFi connection.
Security Camera Systems with Cloud Storage
The term “cloud storage” refers to the storing of video on distant servers. The cost of cloud storage ranges from $15 to $50 per month. This method of storing footage has a number of benefits and drawbacks. You can view your footage from nearly anywhere thanks to cloud storage, and it saves you the trouble of manually archiving recordings. Most providers, on the other hand, charge a monthly subscription for using Cloud storage on their servers, and you will not have a physical backup of your footage by default.
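To estimate how much DVR, NVR, or cloud capacity a system actually needs, a simple bitrate calculation goes a long way. The sketch below uses illustrative per-camera bitrates that are assumptions, not vendor specifications; real numbers depend on codec, frame rate, and scene activity, so check the camera’s datasheet.

```python
# Back-of-the-envelope storage sizing for continuous recording.
# The bitrates below are ballpark assumptions, not vendor figures.

TYPICAL_BITRATE_MBPS = {
    "2MP (1080p)": 4.0,
    "4MP": 6.0,
    "8MP (4K)": 12.0,
}

def storage_needed_tb(resolution: str, cameras: int, days: int,
                      hours_per_day: float = 24.0) -> float:
    """Estimate terabytes required to retain continuous footage."""
    mbps = TYPICAL_BITRATE_MBPS[resolution]
    seconds = days * hours_per_day * 3600
    total_megabits = mbps * seconds * cameras
    return total_megabits / 8 / 1_000_000   # megabits -> megabytes -> terabytes

# Example: four 4K cameras recording continuously for 30 days.
print(f"{storage_needed_tb('8MP (4K)', cameras=4, days=30):.1f} TB")  # ~15.6 TB
```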
Cost of a Security Camera based on Field of View
Field of view, along with resolution, is one of the most critical elements in deciding whether the image your camera generates meets your distance and detail requirements. Lens millimeters or angle degrees are used to measure field of vision. Larger lenses often have a narrower field of vision but provide greater information over a longer viewing distance. Smaller lenses, sometimes known as wide-angle lenses, provide a broader field of vision but can only be used at close distances. When picking a field of view, think about whether you want to capture a larger perspective or a more precise, specific region.
6 mm Security Camera Price
6 mm cameras range in price from $100 to $250. Security cameras with 6 mm, or 50-degree, lenses have somewhat smaller fields of vision than cameras with wider fields of view. These cameras have a resolution of at least 2 megapixels. These cameras can handle somewhat longer distances without losing too much information in the local environment, making them an excellent choice for confined settings up to 16 yards away from the camera.
3.6 mm Security Camera
Security cameras with 3.6 mm, or 69-degree, lenses have fields of vision that are generally balanced in terms of width and distance. For 3.6 mm cameras, expect to pay between $50 and $400. These cameras have a resolution of at least 2 MP. These cameras are ideal for producing images with a nice blend of detail and short to mid-distances, thus they’re good for places up to 9 yards away from the camera.
2.8 mm Security Camera
Wide-field-of-view security cameras with 2.8 mm, or 90-degree, lenses are available. Cameras with a focal length of 2.8 mm cost between $50 and $500. These cameras have a resolution of at least 2 MP. These cameras capture a broad field of vision but aren’t ideal for long distances, thus they’re best for tiny places up to 5.5 yards away from the camera.
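The lens angles quoted above translate into real-world coverage through simple trigonometry: the horizontal width a camera sees at a given distance is 2 × distance × tan(angle / 2). The sketch below is a rough illustration of that relationship, using the approximate angles from this guide rather than any manufacturer’s specification.

```python
# Approximate horizontal coverage width for a lens angle at a given distance.
import math

def coverage_width_ft(fov_degrees: float, distance_ft: float) -> float:
    """width = 2 * distance * tan(fov / 2)"""
    return 2 * distance_ft * math.tan(math.radians(fov_degrees) / 2)

# Approximate angles used in this guide: 6 mm ~ 50 deg, 3.6 mm ~ 69 deg, 2.8 mm ~ 90 deg.
for label, angle in [("6 mm (~50 deg)", 50), ("3.6 mm (~69 deg)", 69), ("2.8 mm (~90 deg)", 90)]:
    print(f"{label}: about {coverage_width_ft(angle, 30):.0f} ft wide at 30 ft")
```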
Motorized Varifocal Security Cameras
Motorized varifocal security cameras are a good solution when you need to monitor over a long range of distances. Their lens parameters typically span a 2 mm to 12 mm range, which gives you plenty of room to adjust the view. Motorized security cameras are available in 2MP, 4MP, 6MP, and 8MP 4K resolutions, in both analog and IP network security camera types. Motorized security cameras are priced at $300 to $1,200 in the security market as of 2021.
Security Camera Cost by Resolution
The size or detail of the image produced by a camera is referred to as resolution. While resolution is not the only element that affects image quality, it is crucial for security cameras since the more detail your camera catches, the more you can see in your film. The greater the area you wish to scan in your house, the higher the resolution you should choose for better detail over longer distances. However, as resolution improves, so does the price and the amount of memory space needed.
2MP Security Camera Cost
CCTV cameras with a resolution of 2MP or 1080p cost between $40 and $100 apiece. The typical starting point for HD-quality security cameras is 2MP cameras, often known as 2-megapixel or 1080p CCTV cameras. These cameras offer an 80-degree viewing angle and enough detail for facial recognition at distances of up to 30 feet. They do not, however, provide the clarity that higher resolutions offer.
4MP Security Camera Cost
4MP cameras, commonly known as 4-megapixel security cameras, offer about 30% more horizontal resolution than 2MP cameras (and nearly twice the total pixel count). The cost of a 4MP camera ranges from $80 to $200. They have an 84-degree viewing angle and image quality high enough to catch facial characteristics from up to 50 feet away. One disadvantage is that as the resolution of the footage rises, so does the price and the amount of memory necessary to store it.
8MP Security Camera Cost
8MP cameras, commonly known as 4K security cameras, range in price from $150 to $400 per camera. These cameras have an extremely high resolution, capable of generating 4K (or 8.3 million pixel) footage. These cameras are ideal for people who want to examine bigger areas from afar without losing information, such as outdoor spaces. 8MP cameras, on the other hand, need more bandwidth and storage space. These cameras, which have a resolution of 8 megapixels or greater, are typically used in bigger industrial, corporate, and commercial areas.
12MP Security Camera Cost
12MP cameras, often known as 12-megapixel cameras, have some of the best picture resolution available in security cameras today. 12MP cameras with a 2.8 mm lens are available with 1080p and 4K screen resolutions. Each 12MP camera costs between $800 and $1,000. These cameras are capable of capturing a lot of visual detail. Large stadiums, airports, and military facilities frequently employ them. They, like other high-resolution cameras, need a lot of storage space and are among the most expensive types available.
4K Security Cameras
4K security cameras use an 8MP sensor and offer the best video resolution available for security camera systems. This type of security camera is available as PTZ, dome, bullet, and other camera types. 4K security cameras are also available with motorized varifocal lenses, as both analog and IP network security cameras. If you are thinking long term about your security, we recommend working with professional licensed security camera installers and purchasing the best quality of security camera available on the market. 4K security cameras are priced at $300 to $3,000 across professional-grade and consumer-grade models.
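A handy way to sanity-check the distance claims in this section is pixels-per-foot (PPF): the camera’s horizontal pixel count divided by the width of the scene at the target distance. The sketch below combines that with the field-of-view formula shown earlier; the figures, and the commonly cited ~40 PPF rule of thumb for identifying faces, are illustrative assumptions rather than a formal standard.

```python
# Rough pixels-per-foot (PPF) estimate at a given distance.
import math

def pixels_per_foot(h_pixels: int, fov_degrees: float, distance_ft: float) -> float:
    """Horizontal pixels divided by the scene width at that distance."""
    scene_width_ft = 2 * distance_ft * math.tan(math.radians(fov_degrees) / 2)
    return h_pixels / scene_width_ft

# Illustrative comparison at 30 ft with a ~90-degree wide-angle lens.
for label, h_pixels in [("2MP (1920 px)", 1920), ("4MP (2560 px)", 2560), ("8MP/4K (3840 px)", 3840)]:
    print(f"{label}: {pixels_per_foot(h_pixels, 90, 30):.0f} PPF at 30 ft")
```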
Many factors determine the labor cost of security camera installation in Houston, TX. One is whether you are installing a wired or wireless system.
Establishing a wired system is more expensive than installing a wireless security system because wired security systems require additional cables, drilling, and installation processes. If you already have Ethernet lines in your house, however, the overall cost of a wired system drops considerably because much of the installation expense is removed. A wired surveillance camera system costs $300 to $2,500 to install, bringing the total cost of supplies and installation to $500 to $3,000. In Houston, TX, CCTV systems are generally installed by a licensed security camera installation company, which may also provide the security cameras and equipment.
Wireless security camera installations, depending on the demands, are often significantly less expensive, costing approximately $50 — $100 per security camera. Depending on the arrangement, the total cost of supplies and installation for a wireless system ranges from $350 to $700. Professional installation may be a fantastic choice for getting the maximum performance out of your system.
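As a rough illustration of how these figures combine, the sketch below adds per-camera equipment cost, per-camera labor, and an optional recorder. The example numbers simply reuse ranges quoted elsewhere in this guide and are assumptions for illustration, not a quote from any installer.

```python
# Rough total-cost estimator for a small camera system (illustrative only).
def estimate_total(cameras: int, camera_cost: float, labor_per_camera: float,
                   recorder_cost: float = 0.0) -> float:
    """Total = per-camera equipment + per-camera labor + optional DVR/NVR."""
    return cameras * (camera_cost + labor_per_camera) + recorder_cost

# Example: four mid-range wired 4MP cameras (~$140 each), ~$150 labor per camera,
# plus a ~$600 NVR. All figures are assumptions drawn from the ranges above.
print(f"${estimate_total(4, 140, 150, recorder_cost=600):,.0f}")  # $1,760
```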
To keep your security system running well, you’ll need to do routine maintenance on your security cameras. Hardware maintenance is one of the most important aspects, including keeping lenses clean, ensuring outside equipment and wires are secured, ensuring cameras are oriented in the proper direction, and safeguarding power and WiFi connections.
Regular software upgrades are necessary for optimal performance, as well as to avoid hacking and other security concerns. Many cameras have automatic software update choices, but it’s a good idea to see whether manual upgrades are required. Consider upgrading your camera every 1–2 years to maintain your gear up-to-date and enjoy enhanced security capabilities, as cameras constantly come out with higher resolutions and Smart features at more competitive rates.
If you have your camera system professionally installed and pay for a remote monitoring service, periodic professional maintenance will almost certainly be included in the price. If you don’t want to pay for a monitoring service, you can do the upkeep yourself. Cleaning supplies like microfiber cloths and compressed air, plus checking your system for broken connections and poorly aimed cameras, generally cost $50 or less each year.
Many homeowners install security cameras in areas where there are a significant number of attempted crimes. Security cameras are commonly installed at the front entrance, first-floor windows, the rear door, and above garages. There are basic best practices for camera placement to guarantee the best security coverage outside of these specific locations. To gain a larger view of a room or region, position cameras in the corners. Another idea is to put cameras where they will be camouflaged (hidden), such as behind something or against a similar-colored surface.
However, depending on your security needs, you may want to position your camera(s) in a prominent, clearly visible area to provide the appearance of security and dissuade thieves. Finally, when installing exterior cameras, consider putting them where they will be protected from the elements and vandalism, such as high up or in a covered place. In any case, it’s usually a good idea to talk to the installation firm about the final placement of the cameras.
More Security Camera Features
When picking a security camera, keep in mind that it may offer a variety of special functions. Depending on your circumstances, certain features may give you additional options for properly monitoring your environment. Some camera functions come pre-installed, while others can be added afterwards. A competent Houston security camera installation firm can walk you through all of the available features.
Videocheck Security Camera
Video check capability is generally only offered as part of remote video monitoring service packages. These services typically cost around $100 per month per camera. Many cameras do come with a built-in video monitoring capability that does not require a subscription, but in that case the homeowner must actively check their camera feeds at the time of the occurrence.
Surveillance Camera Floodlight
Floodlight cameras pair large lights, either built into the camera or mounted next to it, with the camera itself. By casting strong light on a specific area, these lights help the camera capture the best possible footage. They differ from ordinary floodlights because they integrate with the CCTV system feed: many switch on automatically when motion is detected and brighten the area, producing more usable footage in low-light situations. Cameras with built-in floodlights cost $140 to $280.
Surveillance Cameras with Motion Detector
A surveillance system’s functionality is not limited to continuous monitoring. You can also set up cameras that only turn on when they detect movement, which lowers the camera’s running expenses, such as energy or battery power. When motion-sensing cameras are activated, they can send an alert to your security provider via a smartphone app, letting you get a feed just when you need it rather than all the time. Costs range from $60 to $300.
Outdoor Security Cameras with Siren
Outdoor security cameras with sirens, either built-in or added afterwards, may be a useful deterrent for both notifying the public and discouraging criminals. These siren-equipped security cameras may be configured to automatically turn on whenever motion is detected or manually switched on and off by the homeowner via smartphone, depending on the camera, system, and user-determined specific settings. Many are also outfitted with red and blue lights similar to those used by cops to create the sense of enhanced protection. The cost of security cameras with built-in sirens ranges from $175 to $250.
Night Vision Camera Price
The price of a night vision camera can range from $50 to $500. The majority of cameras have night vision built in. It may, however, be feasible to purchase night vision lenses as an add-on for current cameras. The term “night vision” refers to cameras of varying resolutions that provide clearer images during the nighttime hours, when the majority of crimes occur.
Night vision cameras provide crisp pictures in low light in one of two ways: active or passive. Active night vision cameras combine infrared light, which is invisible to the human eye, with a camera lens that can pick up infrared light and produce a clear image. Passive night vision systems use regular lenses, but image-intensifying technology amplifies the existing light in the scene to create a bright image.
Security Camera with Mic
More cameras now include built-in microphones for communicating with pets, welcome visitors, or even undesirable attackers. Cameras with built-in mics range in price from $100 to $250. These cameras generally function in conjunction with an app or a cloud-based system to provide one-way or two-way communication with the person being recorded. Aside from cameras with built-in microphones, there are various independent microphones that may be added to operate with an existing camera. These stand-alone mics range in price from $20 to $35, not including installation.
Surveillance Camera with Facial Recognition
Facial recognition, a form of artificial intelligence, is a function built right into certain modern cameras. Facial recognition cameras include software that automatically scans footage for faces, and in some cases for specific individual faces. When used in conjunction with smart systems, this technology is extremely helpful.
When your face shows up in the footage, for example, certain smart cameras with facial recognition send smartphone notifications. Furthermore, cameras equipped with advanced or even basic facial recognition aid in the identification of suspects during a break-in, potentially speeding up the judicial process. The cost of a facial recognition camera ranges from $150 to $250.
IR (Infrared) CCTV Camera Price
Infrared CCTV cameras range in price from $150 to $250. IR cameras employ infrared technology to capture images in low-light conditions. Night vision technology is comparable to infrared technology. Many IR cameras are referred to as night vision cameras, and vice versa.
There are, nevertheless, significant differences. Some night vision cameras rely on ambient light and simply amplify it in the footage to brighten the image, while infrared cameras work differently: they illuminate their subject with infrared light. Human eyes cannot see infrared light, but the camera can, even in low-light settings, which gives a considerably sharper image. Infrared cameras commonly ship with pre-installed IR lenses.
Motion-Activated Smoke Detector Security Camera
When smoke is detected in the home, motion-activated smoke detector cameras detect it, record footage, and can even warn you through a mobile app. This Smart feature normally requires software that comes pre-installed and is not available as an add-on. Homeowners may want to consider these for instant notice in the event of a fire, allowing for a faster emergency response. Motion-activated smoke detector cameras typically cost between $200 and $300.
Heat Sensor Security Camera
Heat sensor security cameras detect intruders by detecting heat rather than light, which allows them to detect attackers wearing dark clothing that other cameras may miss. The majority of cameras lack this function. Heat sensor security cameras, which typically cost $300 to $500, are required for homeowners that desire heat sensor security.
Additional Costs and Considerations for Installing Security Cameras in Houston
A security camera installation in Houston might rescue you from a variety of disasters. The possibilities of difficulties with camera wiring, power access, recording device access, and film access are greatly reduced with expert CCTV installation. Besides, most experts have insurance to cover any potential problems. Professional installation should come with a guarantee if your security camera system is a large, wired system.
While having a surveillance system installed may qualify you for a discount on your homeowner’s insurance, most insurance policies only cover expert monitoring and installation. Compare the monthly expenses, peace of mind, and insurance savings to see if this is the best option for you.
Although some wireless cameras come with a built-in battery, they are not designed for extended usage. Having a power supply nearby is helpful unless you’re using a motion-activated camera.
Most jurisdictions allow you to install a hidden camera with audio on your property. Audio, on the other hand, may be considered wiretapping in some circumstances and may be prohibited in some places. Check the regulations in your state before installing a camera with built-in audio.
The stream from most cameras with in-home monitoring may be viewed for free on your tablet, phone, or computer. Some professional monitoring companies, on the other hand, may charge an additional $10 per month to view the same stream. To learn more, contact your provider.
The cost of installing four wireless cameras typically ranges from $350 to $700 for the whole system. However, the best way to ensure correct installation is to engage an experienced expert who will make sure the cameras are properly mounted and that all of the necessary connections and encryption security are working.
Many homes use cameras to deter burglars and catch intruders in the case of a break-in. Cameras with motion detection, night vision, and facial recognition help produce footage that captures clothing, facial details, and other pieces of information to help catch criminals and recover losses.
Consider how many rooms or outside sides of your property you want to cover, as well as the varied views of your house, when choosing how many cameras you need. Angles have a big influence on the field of view. Furthermore, depending on the size of the room, more than one camera may be required to survey the whole space.
Security camera concealment is an important approach for maximizing the effectiveness of your security system. Some homeowners opt to conceal cameras in corners, which reduces their visibility while providing a better, wider view in the footage.
Cameras can also be hidden behind items or in regions that are similar in color to the camera. Some homeowners, on the other hand, opt to place their cameras in a prominent spot to signal to would-be burglars or trespassers that the area is under observation. Camouflaging your camera with standard concealing methods such as hidden camera photo frames, hidden camera electrical outlets, and even ordinary houseplants costs on average $30 to $90.
|
https://medium.com/@rahibismi/the-ultimate-security-camera-installation-and-purchasing-guide-2021-houston-security-solutions-a7b32926a160
|
['Robbie Handy']
|
2021-08-09 06:55:40.931000+00:00
|
['Cctv Installation', 'Security Camera Install', 'Cctv Surveillance System', 'Security Camera', 'Security Camera System']
|
How to Make Extra Money: You have to be willing.
|
Making extra money really isn’t some coded mystery that only the smartest among us can decrypt.
I’ve been making extra money for almost my entire life, going back to preadolescence, and it certainly has nothing to do with mental wizardry.
It all comes down to grit.
Let me tell you a quick story about WHY so many people struggle to put extra cash in the kitty, and then offer you quick ways to make extra cash.
I once had a fruitless conversation with a colleague in which she asked me to help her find a quick source of dough.
So I gave her two solid ideas — She scoffed at both.
In fact (I kid you not), she even went so far as to respond to the second suggestion with ‘blah, blah, blah’.
She did, however, continue to gripe about how crappy her entire life situation is.
And there you have it — this is exactly WHY so many struggle to make extra money. They have a negative mental state.
Bitching and moaning will always be easier than doing.
OK — so now that you’ve decided which position you’re going to take, let’s look at how to make extra money:
1) Property maintenance — this applies to both lawn care and cleaning homes.
These are “Ugh jobs” no one wants to do unless they’re retired and need to fill time.
Simple flyering or posting ads in your local Facebook marketplace will get you your first gigs.
If you do a great job of things, I guarantee your side hustle will start to grow on autopilot by way of referrals.
2) Car Detailing — Start this the same way you would the property maintenance gig.
The toolkit for the job will set you back about $30, tops.
If you’re over six feet tall, this gig can get very irritating, very fast, so do take that into consideration.
3) Leverage that flyering — Everyone needs to advertise and all those promo packs loaded with coupons and introductory offers all started the same way…
…with some dude figuring that he should be monetizing the time he spent flyering.
Find a few businesses that want in on your efforts.
4) Craigslist Arbitrage — this one takes patience and solid negotiation skills.
Here’s how it works: you hang out in the “Barter” section of the site and look for what people want to trade for.
Here’s an example — I once found a guy looking to trade a two-wheel hitch trailer he owned (value of $2K) for a Street Fighter II vintage arcade game.
I found said game in 15 minutes. $300. Made the trade for the hitch trailer, which I sold for $1400.
5) Sell on Amazon — you don’t need money to get started…just hawking what’s around the house is enough to get you off to the races.
I’d strongly recommend you start by selling as much stuff as you can obtain at no cost until you get the hang of it.
These five ways of making extra money won’t make you a fortune (unless you get real slick and start scaling up), but they will make a noticeable difference.
Be sure to check out this excellent primer on developing the “Matrix vision” that’ll allow you to see money-making opportunities everywhere around you.
|
https://medium.com/@dopaminetriggers/how-to-make-extra-money-you-have-to-be-willing-9d44b2707935
|
[]
|
2020-12-24 13:25:34.262000+00:00
|
['How To Make Money', 'Money Mindset', 'Side Hustle', 'Money Management', 'Making Money Online']
|
The Power of Positive Habits: 8 Ways To Boost Productivity
|
We all have goals. We all have ambitions. We all have things that we’d like to have happen in our lives. The good news is that all of those goals, ambitions and aspirations are possible. All that’s needed to achieve them is to start doing the work that needs to be done in order to make them real.
You’ve probably heard the phrase “carpe diem”. That’s Latin for “seize the day”. If you want to move your life in a different direction, one that produces positive changes, then make carpe diem your motto. You need to begin seizing the day and using your time productively so that you can begin building the future you want.
“You’ll never change your life until you change something that you do daily. The secret of your success is found in your daily routine.” — John Maxwell
This article is all about helping you to establish “power habits” in your daily routine that will allow you to crank your productivity to eleven. Adopting some or all of these habits and incorporating them into your day-to-day schedule will boost your overall productivity significantly. You’ll not only begin reaching your goals, you’ll also begin reaching them sooner than you ever thought possible.
1. Rise and Shine
When you get up earlier than normal, two things happen. First, you have more time in your day to get more things done. Productivity is all about accomplishing tasks. The more hours you have, the more likely it becomes that you’re going to initiate and complete projects.
Second, let’s talk about energy. Now, maybe you already are a morning person. In which case, you already know what I’m talking about. However, if you’re not naturally a morning person, it may be time to think about becoming one. Study after study has shown that the most personally productive hours in the day occur before noon. It all has to do with natural body rhythms and cycles. So, when you get up earlier you not only have more time to get stuff done, you also have more energy. It’s a productivity win-win.
2. Be Punctual
You should have a schedule of what you want to accomplish on any given day. (If you don’t, then start using one.) In order to make this schedule as effective as possible, you need to be places, take phone calls and generally do things on time, as you scheduled them. When you’re not on time, you begin to fall behind on your schedule. That means you have to start rushing to catch up, and rushing usually means that you aren’t doing your best work. Make an effort to be punctual. Get to appointments, meetings and phone calls on time. The more punctual you are, the more you’ll get done.
3. Sleep and Move
Besides food and water, your body only has two other absolute needs — sleep and exercise. When you don’t get enough rest, you cannot maintain the mental energy levels that are required for you to be optimally productive. Likewise, when you don’t get enough exercise you don’t have the stamina it takes to remain productive over the course of a long day. Make sure that you put enough time aside for adequate amounts of both sleep and exercise. If you do, you’ll find that you have more energy and more energy equals greater productivity.
4. Develop Keystone Habits
All positive behavior is nothing more than habit. When you perform a positive action over and over, it becomes habitual. You continue making the positive action without even thinking about it. In addition, one positive habit will lead to other positive behaviors which, over time, will also become habitual. That’s why developing keystone habits is so important to increased productivity.
Keystone habits are nothing more than simple acts that you routinely perform throughout your day. For example, if you make your bed every morning after you get up, no matter what, you establish a tone of positive production that will stay with you as you go about your business. This effect can be enhanced by layering one keystone habit on top of another. So, you not only make your bed every day, you also make it a habit to rinse and stack your breakfast dishes in the sink or put them in the dishwasher. The point is that when you routinely do what needs to be done, you start to develop a habit of treating everything that you do in a similar way. The end result is that your productivity soars.
5. Have a Plan
One of the major drains on productivity is simply not knowing where to direct your attention. Let’s face it, each day we are faced with information overload. We get phone calls, e-mails, text messages and more. We use apps that are supposedly designed to make our lives easier, but instead start to compete for our attention by adding their reminders to the mix of information that is already bombarding us. What can happen is that we spend our time and energy dealing with situations that do not promote our goals and best interests.
One of the best ways to combat this problem is with a daily plan. When it comes to any task, ask yourself “What is this?”, “Why am I doing it?” and “What do I want to get out of it?” Simply posing these questions to yourself prior to doing anything will allow you to prioritize what truly needs to be done and eliminate what truly is a time waster. Again, the end result is greatly increased productivity.
6. Make Room for Down Time
There is a tendency to want to remain plugged in and on top of all communications simply because we can. However, to do so is a major mistake. One of the key ways to remain optimally productive is to know when to take it easy and allow your mind to rest.
Think of your productivity like a well. You lower down a bucket and pull up a drink of cold, clear water. However, if you lower down that bucket too many times in a row, you’re bound to come up empty because you’ve drained all the water.
You need to give yourself enough time to recharge and rejuvenate. You cannot be fully productive when your batteries are drained and you have nothing left to give. Remember to take a break on a regular basis. This means no checking your phone for messages, no answering e-mails, and no quick phone calls. Your time away is sacred. It is key to you being truly effective at what you do. Treat it as such.
7. Eliminate Distractions
In order to be fully productive, you have to be focused on the task at hand. However, in today’s always connected, modern world maintaining focus is increasingly difficult. Studies have shown that, on average, we are only able to concentrate on a given task for three to five minutes before being distracted by social media, e-mails or other things that interfere with concentration. Obviously, you are not going to be very productive and task oriented when you are only able to focus in three to five minute intervals.
The secret to staying focused on what you’re doing is to remove the sources of those distractions. Because social media is one of the biggest culprits, it’s important to install safeguards that allow you to resist the allure of checking for updates on social media sites. There are now apps that will completely block your ability to access certain sites for specific periods of time. The less distracted you are, the more you can maintain your focus and the more productive you become.
8. Make Your Workspace Inviting
Each of us spends a great deal of time in the space where we work. Now, that might be a physical office at a remote location or it might be a room or corner in your residence that has been reserved for that purpose. No matter where your “office” is located, it needs to be inviting, comfortable and welcoming. It has to reflect your personality and your style.
There’s absolutely no reason for your workspace to be spartan, cold or off-putting. You are not an anchorite and your office is not your cell. You are not literally or figuratively chained to your desk. Work should be rewarding, not a punishment and your office should reflect that fact.
Make sure that the furnishings are comfortable. A desk may be a necessity, but it doesn’t have to be uncomfortable. The same thing goes for a chair. Use a chair that makes you glad you sat down in it. Lighting, art, music and color all have an appropriate place in your office. Imagine how your productivity will skyrocket when you actually enjoy being in your workspace.
Start incorporating these, and other, positive habits into your daily routine and you’ll be amazed at what you can achieve.
|
https://medium.com/@betterselfdaily/the-power-of-positive-habits-8-ways-to-boost-productivity-f7311db66a97
|
[]
|
2020-01-24 11:32:41.506000+00:00
|
['Health', 'Positivity', 'Habit Building', 'Habits']
|
Start experimenting with our new API: Business Account Payment
|
Start experimenting with our new API: Business Account Payment
Announcement
Today, we launch a new API: the Business Account Payment API. This API can be used to initiate payments straight from a financial application, debiting an ABN AMRO business account. This way, manual actions to transfer payment information between applications are no longer necessary.
The API is currently in early access (closed beta), meaning that a select group of clients has access to the API in the production environment. However, the sandbox environment is open for all! So, don’t hesitate, start experimenting with the API and find out what use case is suitable for your organization.
The Business Account Payment API
Initiating payments without a banking tool? Why not! Data integrity combined with efficiency is amazing. So, why would you want to go -all the way- to your internet banking page to type in your payments? This information is already available in your financial application, so why not execute it directly? Manual actions to get this information into another application are error-prone and inefficient. Today, we launch the closed beta for the Business Account Payment API. Use this API to initiate payments straight from your financial application, debiting your own ABN AMRO business account.
Why use this API?
With the Business Account Payment API, you are able to initiate payments directly from your financial application without the need to down- and upload the files in a bank application. This improves security and makes processes leaner, saving you valuable time. The API supports SEPA Credit Transfers, SEPA Direct Debits, and Cross Border payments, so you don’t have to worry about multiple connections for your payments.
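To give a feel for what initiating a payment “straight from your financial application” can look like, here is a minimal sketch of a call against the sandbox. The endpoint path, JSON field names, and token handling below are assumptions made purely for illustration; the authoritative request format and authentication flow are documented on the ABN AMRO Developer Portal.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative sketch only: the URL, JSON fields, and header usage below are
// assumptions, not the documented contract of the Business Account Payment API.
// Consult the ABN AMRO Developer Portal for the real request format.
public class SandboxPaymentSketch {
    public static void main(String[] args) throws Exception {
        String accessToken = "SANDBOX_ACCESS_TOKEN"; // obtained via the portal's OAuth flow

        String body = """
            {
              "counterpartyAccountNumber": "NL00BANK0123456789",
              "counterpartyName": "Example Supplier BV",
              "amount": 125.00,
              "currency": "EUR",
              "remittanceInfo": "Invoice 2020-0042"
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api-sandbox.abnamro.com/v1/payments")) // hypothetical path
                .header("Authorization", "Bearer " + accessToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // In the sandbox this only creates a test transaction and returns its status.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The point of the sketch is the shape of the integration, not the exact payload: your financial application builds the payment from data it already holds and posts it directly, with no manual download and upload of files in between.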
Who should use the API?
All companies that value efficiency. Treasurers at larger companies can embed the functionality of this API into their core activities, creating new and better user experiences and fuelling leaner internal processes.
But you do not need to be a big company to profit from the additional value that this API can bring. The Business Account Payment API is well suited for small and medium enterprises. Let’s be honest, you want to focus on growth and automate banking as much as possible! Let’s do that together.
Early access (closed beta)
The early access (closed beta) started today, which means that a select group of clients has access to the API in the production environment. Our aim is to open this API to all ABN AMRO business clients as soon as possible. However, the sandbox environment is open to everyone, so you can already start experimenting with the API!
Want to join the closed beta? Enrollment is open for a limited number of clients. Please send us a message via the contact form with your information and why your company would like to participate in the closed beta. We will let you know if a spot is still open. There are two prerequisites: you must be a business client of ABN AMRO, and you must have development capabilities.
What to expect from the Sandbox
For now, start experimenting in the sandbox! Experimenting in a Sandbox environment has multiple advantages. You can try out the API’s functionalities, which makes it easier to design your use-case and discover how the API fits in your business case. For instance, familiarize yourself with the possible conditions to receive notifications by creating subscriptions (specific condition sets that are stored at ABN AMRO). Next to that, you can experiment with the relationship between the subscriptions and transactions on your ABN AMRO business account.
The Sandbox environment has functionality that allows the initiation of payments. This functionality makes it possible to see what statuses your transactions would trigger. Within the Sandbox, it is only possible to initiate test transactions because the environment is not connected to an ABN AMRO payment engine.
Using the Sandbox to experiment allows you to become more familiar with the API’s functionalities and possibilities. Our expectations for this API are high, so start experimenting and tell us what you think!
For more information, click here. Do you want to be informed when the API goes live? Leave us a message via the contact form on the ABN AMRO Developer Portal.
|
https://medium.com/abn-amro-developer/start-experimenting-with-our-new-api-business-account-payments-879fa817e7a
|
['Abn Amro']
|
2020-11-13 07:57:01.499000+00:00
|
['Announcements', 'Future Of Banking', 'Abn Amro', 'API', 'Open Banking']
|
Refueling Resiliency Reserves
|
2020 made us all dig deep into our ability to be resilient, and tapped a lot of us out. In looking back at lessons learned, among the most important for me has been a consistent practice of self-care that refuels my resiliency.
Go back and take care of yourself. Your body needs you, your feelings need you, your perceptions need you. — Thich Nhat Hanh
vedaprajvalan.photos
Throughout the day I pause to balance and raise my energy by whatever method seems most suitable at the time. Walking in nature, stretching and dancing are among my favorites, as well as music, mantras, bells and breathwork. Aromatherapy is always in the mix. It’s fun to discover what brings peace and joy and do more of it. :) And with added resiliency, it’s easier to transition through adversity and change.
Years ago I would knuckle down pushing through breaks and lunch, and maybe even dinner, working away. When I got home exhausted, I’d have a couple of drinks to wind down. I would then generally not sleep well, wake up tired and stressed, and do it all again. I pushed my body until I’d put on a lot of weight and had hypertension. Mentally I was in a depressive state with frequent downward spirals. The quality of my relationships reflected the level of care I was demonstrating towards myself — not good. It was a long road, one day at a time, practicing healthier habits and releasing beliefs and behaviors that no longer served me. Sometimes I made great strides and lots of times just baby steps. But it got much easier with practice to maintain a positive focus and pick up speed moving in the direction of health and joy.
Reach out if you’d like support in your journey to increase self-care and balance in the year ahead or to share your favorite resiliency boosters.
Be well!
|
https://medium.com/@vedaprajvalan/refueling-resiliency-reserves-317c8d6d683f
|
['Veda Prajvalan']
|
2020-12-18 06:49:26.503000+00:00
|
['Priorities', 'Self Care', 'Resilience', 'Resiliency', 'Balanced Life']
|
Exploratory testing is a new way of thinking: Automation has its limits.
|
Though the current trend in software testing is to push for automation, and many of us see automation as the best form of testing, a closer look at exploratory testing suggests you should give it a go if you have not already.
Exploratory testing is a type of testing where test cases are not created in advance, but testers check the system on the fly. They may note down ideas about what to test before test execution. The focus of exploratory testing is more on testing as a “thinking” activity.
It focuses on the discovery and relies on the guidance of the individual tester to uncover defects that are not easily covered in the scope of other tests.
Wikipedia defines exploratory testing as an approach to software testing that is concisely described as simultaneous learning, test design, and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
Exploratory testing can be applied at any stage in the development process. As a tester, you should be creative and able to invent test cases and find defects. The tester is allowed to think outside the box having in mind all about the product and what it entails.
Exploratory testing is carried out without a test plan. As opposed to scenario testing, it requires the tester to simultaneously manage several testing activities: discovering the product, understanding how it works, and setting up a testing strategy. Testers combine all these activities to carry out relevant test cases that have not been previously considered. It is more of a mindset than a methodology.
EXPLORE — in exploratory testing
The EX stands for Experience: exploratory testing is an investigation process that helps find more bugs than normal testing by drawing on bug-hunting experience.
The P stands for Preparation: even though it is called freestyle testing, careful planning is needed to avoid testing without a goal in mind.
The L stands for Learning: design, execution, and learning all happen simultaneously.
The O stands for Outside the box: once you have the details of the product, you are free to create test cases without a script, hunt bugs, and build your own test plan.
The R stands for Re-evaluating: keep re-evaluating your test design, with all the freedom you can think of, until you achieve your aim.
The E stands for Execute: execute your test cases and log your bug report.
Best Exploratory Practices
1. Prepare the Test.
Doing exploratory testing does not mean testing without control or randomly. It is a structured approach that requires thorough preparation.
· Classify the bugs.
· Present the products and the requirements.
· Set a time for the test.
· Define the testing scope.
2. Do not try to test everything:
Exhaustive testing is not the aim of exploratory testing. It is good practice not to try to test everything but to test according to the campaign’s requirements.
3. Create a complete bug report:
A bug report explains all the incidents encountered during testing. In the case of exploratory testing, creating one can be challenging. Testers must pay special attention to communication so they can explain, in a complete and detailed manner, the idea behind the tests done, the steps taken, the conditions in which the tests were carried out, the covered areas, the results, and the evaluation of defects. (A minimal sketch of one way to capture this appears after this list of practices.)
4. Debrief:
Drawing from the bug report and from the testers’ individual feedback, debriefing will help decide if additional testing is required.
5. Be Ready and Resourceful:
This approach stresses the tester’s individual freedom, engagement, and responsibility, so the tester needs solid bug-hunting experience, a keen sense of observation, and the ability to create test cases from that experience.
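As a concrete illustration of practices 1 and 3, here is a minimal sketch of how a session charter and its findings could be captured in a simple structure, so the debrief has something consistent to work from. The field names are just one possible layout, not a prescribed format.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// One possible (not prescribed) way to record an exploratory session so the
// resulting bug report stays complete: charter, scope, time box, and findings.
class ExploratorySession {
    String charter;            // what we set out to explore and why
    String scope;              // areas of the product covered
    int timeBoxMinutes;        // agreed duration of the session
    List<String> stepsTaken = new ArrayList<>();
    List<String> bugsFound = new ArrayList<>();
    LocalDateTime startedAt = LocalDateTime.now();

    String toDebriefNote() {
        return "Charter: " + charter + "\nScope: " + scope
                + "\nTime box: " + timeBoxMinutes + " min"
                + "\nSteps: " + String.join("; ", stepsTaken)
                + "\nBugs: " + String.join("; ", bugsFound);
    }
}

class SessionExample {
    public static void main(String[] args) {
        ExploratorySession session = new ExploratorySession();
        session.charter = "Explore the checkout flow with unusual quantities";
        session.scope = "Cart, payment, confirmation email";
        session.timeBoxMinutes = 60;
        session.stepsTaken.add("Added 9999 items to the cart");
        session.bugsFound.add("Total price overflows the layout on mobile");
        System.out.println(session.toDebriefNote());
    }
}
```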
Benefits and Challenges of Exploratory Testing
The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at execution time, the approach tends to be more intellectually stimulating than the execution of scripted tests. This testing is useful when required documents are not available or partially available.
Another major benefit is that testers can use deductive reasoning based on the results of previous tests to guide their future testing on the fly. They do not have to complete a current series of scripted tests before focusing on or moving on to exploring a more target-rich environment. This also accelerates bug detection when used intelligently.
It uncovers bugs that are normally missed by other testing techniques, and it helps expand the imagination of testers by having them execute more and more test cases, which ultimately improves productivity as well.
Challenges of Exploratory Testing
Tests invented and performed on the fly cannot be reviewed in advance (and so cannot prevent errors in code and test cases in advance), and it can be difficult to show exactly which tests have been run.
Learning to use the application or software system is a challenge and replication of failure is difficult.
Reporting of the test results is a challenge as the report does not have planned scripts or cases to compare with the actual result or outcome.
Documentation of all events during execution is difficult to record.
Conclusion:
In software engineering, exploratory testing is performed to overcome the limitations of scripted testing. It helps improve the test case suite and it emphasizes learning and adaptability.
References: Wikipedia and guru99.
|
https://medium.com/@qatechtalks/exploratory-testing-is-a-new-way-of-thinking-automation-has-its-limits-efb08cc035d0
|
['Qa Techtalks Community']
|
2021-07-15 11:13:07.783000+00:00
|
['QA', 'Software Testing', 'Exploratory Testing', 'Test Automation', 'Testing']
|
Hot Topics for the Music Business in 2018; What’s the Value of Bitcoin, Blockchain to the Music Industry?
|
Bitcoin, Blockchain Technology Info: What’s Its Value to Music Industry? — www.billboard.com
Suddenly, a business that has spent the last decade making it more convenient to pay for its products is experimenting with digital cryptocurrencies that are technologically innovative, mathematically secure, and actually fairly inconvenient to use. What is to be done?
2018’s hot topics for the music business — medium.com
Smart speakers are going to change the game for music. Without a visual interface, how are you going to get your music to people? How are you going to stay top of mind? It’s like a new age of radio, but this time it’s personalised.
Why Can’t I Stream Aaliyah? Late R&B Singer’s Discography Still Locked Down by Her Uncle — www.newsweek.com
The beloved R&B singer would have been 39 this year.
Sweden Makes Music: How Diversity, Education and Tech Propel Swedish Artists Onto the World Stage — www.billboard.com
Sweden may be a small country — roughly the same size as California, with just one-quarter of that state’s population — but its artistic community has been a dominating force on the Billboard charts for decades.
As Voice Continues Its Rise, Marketers Are Turning to Sonic Branding — www.adweek.com
Why Visa, California Closets and more are rethinking audio.
How Voice And Audio Have Become The New ‘Touch’ — www.geomarketing.com
Prior to a panel discussion at CES, Pandora’s Susan Panico discussed building compelling, contextualized experiences for a voice-first world.
Nipsey Hussle Breaks Down His Streaming Revenue, Endorses Tidal — www.hotnewhiphop.com
In the streaming service debate, Nipsey Hussle endorses Tidal.
Watch out Spotify, Amazon Prime Music is hot on your tail for popularity — routenote.com
Who is the big dog in music streaming right now? You’d all say Spotify right? Well look out because Amazon Prime Music is coming up on them quickly.
How the Internet Has Changed the Game for DIY Artists — blog.symphonicdistribution.com
If video killed the radio star, then streaming services continued to beat the corpse.
Qobuz’s 24-bit Music Streaming Service Will Arrive Stateside In 2018 — www.ubergizmo.com
There are some who are particular about audio quality when it comes to listening to music, which is why many audiophiles tend to snub their…
Spotify tells label story with Warp Records ‘history lesson’ — musically.com
To the list of ‘surprising things we’ve seen on Facebook in 2018’ we can now add ‘Nightmares on Wax talking about the early days of Warp Records’.
|
https://medium.com/platform-stream/hot-topics-for-the-music-business-in-2018-whats-the-value-of-bitcoin-blockchain-to-the-music-8db41713a33e
|
['Platform']
|
2018-01-18 13:59:48.126000+00:00
|
['Blockchain', 'Music', 'Streaming Music', 'Bitcoin']
|
Why Shopify Hires For Potential Not Talent And How You Can Too
|
Why Shopify Hires For Potential Not Talent And How You Can Too
Potential can beat talent.
Photo by Tim Marshall on Unsplash
While watching a podcast on Youtube last week, I had one of those aha moments.
The podcast was one of those random suggestions on Youtube and featured Tobi Lutke (Founder of Shopify). He was talking about how to build a team without access to a ‘primary’ talent market.
Silicon Valley is a well known primary talent market. The Bay area offers a high concentration of well qualified and talented people with specific skill sets. Drawn by the success of other startups, many people move there in the hopes of tasting their version of startup success.
Whereas Ottawa, where Tobi founded Shopify, is closer to a political hub for Canada. Ottawa is a center for the arts and cultural institutions, national museums, etc. Most of the “talent” not interested in Arts or Politics moved out of the area.
Tobi mentioned that many best selling business books are written about building unicorn companies in primary talent markets. He thought many of the ‘best practices’ for building a team encouraged in business books are not relevant to the average business.
One of the common maxims you’ll come across is ‘hire people who are better than you at what you don’t like to do.’ When all the books, articles, blog posts, podcasts, etc., that we consume tells us this, it’s easy to believe that this is the only way to build a company.
The problem is these people are often too expensive or not available in a given talent-pool.
Most of us don’t have the luxury of hiring from a ready-made work-force. We have to hire from talent pools with weaker skill sets but, importantly, find people with the same amount of potential.
Shopify realized this difference very early in their journey. Instead of focusing on hiring the best talent available, they built their business around hiring for potential and then developing that potential.
Fixed vs. Growth Mindsets
In secondary talent pools, Tobi explains, we need to create learners’ organizations. As much as a company aims to produce a product or service that people want and need, it also needs to build a culture that encourages learning and development.
Shopify has created a hiring process that focuses on people’s potential rather than skill.
They look for people that will far exceed the role they are currently hiring them for; they are looking for tomorrow’s company leaders.
Because their focus is hiring based on potential, they need to hire people who have the capacity and the desire to reach their potential.
Shopify differentiates between two types of people when it comes to potential.
1. People with a fixed mindset
2. People with a growth mindset
People with a fixed mindset “believe their qualities are fixed traits and, therefore, cannot change. These people document their intelligence and talents rather than working to develop and improve them. They also believe that talent alone leads to success, and effort is not required.” — Unknown.
While people with a growth mindset “have an underlying belief that their learning and intelligence can grow with time and experience. When people believe they can become smarter, they realize that their effort has an effect on their success, so they put in extra time, leading to higher achievement.” — Unknown.
We can see this in many areas of life.
My housemate is a personal trainer, and time and again he faces this difference in mindset. Some of his clients believe they are the way they are and can’t be helped. Others look forward to improving and watching their growth.
I’ve noticed people with a fixed mindset are on defense. “I can’t,” “I don’t have time,” “I’m not XYZ.” There’s always a reason not to.
In contrast, people with a growth mindset are open to developing themselves and exploring new opportunities.
Fixed Mindset people make excuses and pass responsibility. Growth mindset people find a way and take responsibility.
Shopify internalised this distinction between people’s mindsets and used it to create ‘The Shopify Way.’ A system they have developed to hire for potential and develop that potential into world-class talent.
THE SHOPIFY WAY
This is how it works.
1. They hire for potential. They look for in others what others don’t see in themselves.
2. They help people develop a growth mindset.
3. They give them a Shopify education: company history, previous mistakes, employees who previously held a fixed mindset, and the reasons for doing what the company does. Not just saying this is the way it is and you need to accept it. Instead, they provide context and explanations so people can find the reason by themselves.
4. They develop their skill sets. Only then do they focus on building a person’s necessary skills. “Shopify aims to help people fulfill their potential 10–20 years earlier than they otherwise would have.”
5. They support them with mentors. One skilled person is paired with five unskilled workers. Mentorship is essential in helping inexperienced employees navigate the nuances of personal growth. We don’t all develop the same way and come unstuck at different points. We need people to support us through these moments.
6. They give them challenges designed to push their staff past what they thought was previously possible. “Hey, we have this problem, and it’s vital for our company’s continued success, and we think you’re the right person for the job.”
7. They remove self-imposed boundaries that people put on themselves.
Hire for potential, unlock a growth mindset, and support the journey. It works for Shopify and could work for you.
|
https://medium.com/the-innovation/why-shopify-hires-for-potential-not-talent-and-how-you-can-too-231f2fab2f37
|
['Rhys Jeffery']
|
2020-12-17 15:03:37.899000+00:00
|
['Hiring', 'Mindset', 'Talent', 'Human Resources', 'Entrepreneurship']
|
How to heal your relationship with money?
|
Credit: David Cerny
How to heal your relationship with money?
Talking about money can be impossibly hard. Our life experiences with money, and with the lack of it, tend to be so extreme at opposite ends of the spectrum that there is always an unwitting emotional charge attached to thinking or speaking about the subject. What we believe about money is also ever so often influenced by what our parents and early caregivers believed about money: the mindset and conditioning that was bequeathed to us when we were not watching. It is a tricky thing to unravel and unpack, because we never really manage to think about it independently given the constant reinforcement and reiteration of those regurgitated ideas.
Speaking for myself, I have chronically misunderstood the role of money in life — either whimsically understating its importance or wildly overstating its significance. As a teenager, I was a money snob, thinking I could do without it altogether. For a brief period in my early twenties thereafter, I was infatuated with everything that money can do. I am still learning to treat it for what it is — an instrument that gives me the freedom to do the things I love doing most with the time left on this planet.
I have also been afflicted with a belief that is central to the human condition and is the single biggest obstacle when it comes to money: the scarcity mindset. This is fundamentally a way of interpreting and understanding the world that is independent of the reality of your present circumstances — whether scarce or not. Even when the reality changes, the impulse to think and act from this inaccurate mindset is all-pervasive.
The thing is that while scarcity is a reality for some of us, it is also true, that for many others, a scarcity mentality is a stress-based negative feedback loop, fuelled by old beliefs and past experiences rather than our current reality. This wouldn’t be as harmful if not for the consequence that by focusing merely on our survival, this mindset keeps us from thriving and manifesting our dreams.
A mindset of scarcity impairs our judgement at multiple levels and primes us for a series of poor decisions.
Scarcity is not just a physical constraint. It is also a mindset. When scarcity captures our attention, it changes how we think. By staying top of mind, it affects what we notice, how we weigh our choices, how we deliberate, and ultimately what we decide and how we behave. When we function under scarcity, we represent, manage, and deal with problems differently. Essentially, we activate our lizard brain.
In this piece, I am going to attempt to lay out what a scarcity mindset looks and feels like, how it sabotages our attempt to a life that we aspire to lead and what one can do to confront and overcome it.
Scarcity is an irrational fear that rests deep inside all of us, whispering to our subconscious mind that we simply don’t have enough. More importantly, the fear that there isn’t enough for everyone out there. That life at every level, is a zero sum game.
I would wager that for upwards of 95% of people in India and other developing countries, a scarcity mindset is the default starting point for any conversation or even thinking about money. It is both subliminal and subconscious, and unless one is terribly mindful, a state that is not easy to excavate and surface. When we start to feel like there isn’t enough of something, a combination of anxiety and an unexplained sinking feeling takes over, complete with the pounding of the heart and the tensing of the shoulders. It is an uncanny but entirely familiar combination of dread, adrenaline rush and hopelessness.
Scarcity makes us hold ourselves back from pursuing opportunities of progress and prosperity. We start believing there isn’t enough money to support our particular opportunity, or we see other people in a similar niche and we think that there aren’t enough customers for us to be there, too. We get a degree in a field where jobs are plentiful, even if they don’t pay well, or we launch a product that is cheaper because we are afraid no one will buy the one we really want to sell.
We react in predictable ways when we think there isn’t enough of something. It doesn’t matter what the thing in question is — you can plug and play time, attention, bandwidth, money, energy, appreciation into this equation and the result is likely to be the same.
A scarcity mentality manifests in our thinking, behavior and action in different ways:
a) A common consequence for instance is the formation of a consistent “us versus them” mentality. You see someone with a lot of money, and before you realize your ‘pain body’ is activated: envy, jealousy and enemy images abound because not only do we not have that money, we also think that less of it is available overall.
Most of us spend an entire lifetime operating from this ‘zero sum game’ paradigm — the mental model that assumes that for someone to gain something, someone else somewhere has to lose an equal stake. It is flawed and fragile at once, giving us an incorrect and incomplete understanding of how things play out in the real world. This is not to say that inequality is not the biggest menace of our times, but that money in general and as a principle is not a zero sum game.
b) A scarcity mindset is intricately woven into being in survival mode. Many of us live in a state of hyper-arousal, where the slightest stressor gets interpreted by our nervous systems as a threat to our survival.
The first muscle twitch that most of us get when we think of money is the fear that we don’t have enough of it, or that what we have will run out soon. A doomsday scenario of the near or far future is a visualisation that is common among people who earn five figures just as much as among those with telephone-number salaries.
c) When we think something is scarce or in short supply, that very idea of scarcity compromises the way we think about the thing in question. Our amygdala gets hijacked, impairing both our IQ and EQ in the short term. Poor people, for instance, make poor decisions primarily because they are poor. When all bandwidth is occupied with figuring out where the next meal is going to come from, or with earning enough to keep the kid in school, it is hard to have a long-term orientation.
d) Scarcity also manifests in how we view ourselves and tend to think about our own lives. I used to be the kind of person who, when things were going well, waited for the other shoe to drop. It seemed like the good couldn’t go on forever, or that what was being earned in one aspect of life would have to be compensated for elsewhere.
e) A scarcity mindset can also mean that we have difficulty in receiving. Compliments, gifts, and someone treating us well often end up with us downplaying our achievements, or with the feeling that we have to even the score by paying the compliment back immediately. We erroneously believe that when someone gives something to us, they must have “lost” something in return. This is simply not true. I am still learning to resist the urge to deflect kind words. Another indicator of scarcity thinking is keeping an internal scoreboard that tracks who did what. While fairness and reciprocity are both important values for many of us, tit-for-tat thinking can prevent us from accessing a space of open generosity and trust.
This scarcity — whether real or perceived — hampers our thinking in ways we cannot even begin to imagine.
A scarcity mindset in general can deplete our confidence, compromising our capacity to trust significantly. The state of constant arousal and hyper vigilance also places undue and extreme pressure on our nervous system and cognitive capacity leading to emotional exhaustion.
A scarcity mindset is easy to diagnose.
Do you:
i) Find it difficult to be generous or wholeheartedly celebrate other people’s success or find yourself wishing that others should not succeed
ii) Find yourself to be consumed by competition and comparison.
iii) Feel worse about yourself after an interaction with someone in person or on social media.
iv) Feel constantly anxious but do not know why.
v) Find yourself clinging to one idea of perfection as the ideal way of being/doing something.
vi) Find that there is a strict black and white lens with which you view the world. That you constantly are trying to prove to yourself that you are right and consider differing view points as a threat to your survival.
At the root of scarcity mindset is fear. Anxiety, hyper-vigilance, reactivity and defensiveness are often signs of a scarcity mentality. When it comes to money, most of us — across cultures and continents — seem to be cut from the same cloth.
How to confront and overcome it?
a) The opposite of a scarcity is not an abundance mindset, which is for all practical purposes, the other side of the same coin. The opposite of scarcity is knowing that there is enough. Far more importantly, it is knowing that you are enough.
When we believe that we are enough as we are, when we internalise this truth about ourselves over and over again, we begin to live and embody it. You will start to believe there’s enough money for your start-up and for those of others, that there is enough love for you to find it and for others, that there is enough time to work a good job and pursue a passion in the evenings.
When we believe there is enough, we will act as if there is enough, and then we will see that there is, actually, enough.
When we don’t believe we are enough — that we are doing enough, or that there is enough opportunity in the world for us — then a scarcity mindset is always knocking on the door. What we have is an explosive cocktail of shame, a disengagement from taking risks which may result in failure, and a ridiculous fear of being misunderstood or being seen as flawed.
b) Another way to confront the scarcity narrative is to actively seek different stories about money. Most folks, everywhere, are told the same stories about money.
When we don’t believe that there’s enough money, success, goodness, customer support, or whatever, we tend to get stingy. We don’t want to invest ourselves and our money in something, because it might not work out. We don’t want to try for something better, because we wonder what bad thing might happen to us in another part of our lives if that thing did work out.
c) Another antidote to scarcity is safety. When we feel safe, like the world isn’t out to get us and our place in it is secure, we are often more willing to risk. I think the same thing can be said for our financial lives, too.
When we know that we have enough money to cover most contingencies, when we feel like there is enough good in the world to cover both them and us, we will be more likely to take the sorts of risks that lead to wealth.
d) Eventually, the biggest obstacle to creating wealth is your own contempt towards money and those who have made it.
This is again a condition that goes undiagnosed for a lifetime, all too often.
Our thinking around the subject is mired entirely in greed and fear. Even when you seem to sort out your relationship with money, the world never lets you be.
Finally, the one medium- and long-term strategy is to be a wise consumer of information in our culture of ‘never enough.’ Relentless messages about whether you are enough or whether you are doing enough take a toll on the brain and the body.
Marketers, advertisers and others who want to get you to buy, vote, share, or believe are attuned to the psychology of human behavior. They know that a scarcity mindset is a powerful force of influence, one that allows fear and shame to become the leading emotions driving your decision-making process, and they confuse what you value by using the fear of disconnection and rejection as your guide on how to think and act. What matters, then, is who we choose to listen to, the thoughts we allow to take root, and the philosophies that influence us.
It is also important to watch who and what are you allowing to influence your thinking. Of all the cliches in the world, the truest is that idea that we are eventually the average of the five people that we spend our most time with
P.S.: This piece was written over a month and involved over 30 hours of research, writing, editing and integration. If you found it useful, please consider ‘clapping’. You can clap up to 50 times. The more you clap, the easier it becomes for other readers to discover this piece.
|
https://medium.com/@parmaryogesh/how-to-heal-your-relationship-with-money-dd33caa966eb
|
['Yogesh Parmar']
|
2019-02-26 17:56:52.806000+00:00
|
['Personal Finance', 'Scarcity', 'Money Mindset', 'Money Management', 'Money']
|
Spiritual Consciousness — An Explanation
|
Spiritual consciousness is to explore the knowledge of the spirit and its origin or reservoir through intuitive realization. The proper course of attaining higher levels of spiritual consciousness for a sincere lover of the Supreme Being, according to Huzur Maharaj — the second Master of the Radhasoami Faith — is “to acquire knowledge of the secrets and the order of creation and the means of traversing the distance between his dwelling in the body (the pupil of the eye [Third Eye Center accessed during meditation]) and the Abode of the Supreme Being, the prime source of everything; and to start on his journey with fervor and perseverance with the avowed object of reaching the presence of the Most High and Beloved Supreme Father.”
Love and devotion are the basic driving forces to attain success in life, whether temporal or spiritual. Just as a scientist, unless driven by a passionate zeal to find the truth, finds it difficult to surmount the hurdles in his path, a spiritual seeker devoid of love and devotion fails to attain any tangible results in his spiritual pursuit. Love and devotion are therefore the cardinal values in any endeavor to succeed. The medieval Bhakti traditions ushered in by saints laid great emphasis on love and devotion for the master and the Supreme Being. The Radhasoami Faith, which is the culmination of the saint traditions, lays great emphasis on love and devotion for the master and the Supreme Being. Hazur Maharaj says that “a heart devoid of love or affection is as hard as stone and does not form a suitable receptacle for the Light of Heavenly Grace and Compassion… The Supreme Being loves and takes special care of those who love Him with all their heart and soul and gradually draws them towards Himself — the Center of Pure Light and Attraction.”
— Agam Prasad Mathur, at the beginning of the chapter, Spiritual Consciousness — An Explanation, in the book, Spiritual Consciousness
|
https://medium.com/sant-mat-meditation-and-spirituality/spiritual-consciousness-an-explanation-91050c27a11e
|
[]
|
2020-02-21 15:47:46.975000+00:00
|
['Religion', 'Consciousness', 'Radhasoami', 'Spirituality', 'Meditation']
|
What are Methods in Java
|
In this tutorial, we will study what methods are in Java, the types of methods in Java, and the components of a method, listed below and illustrated in the sketch that follows:
1. Access Specifier
2. Return Data Type
3. Name of the Method
4. Arguments
5. Body of the Method
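To make these five parts concrete, here is a minimal sketch; it is not taken from the original tutorial, and the class name Calculator and the method add are invented purely for illustration:

// Minimal illustrative sketch (not from the original tutorial).
// Each labelled part corresponds to one item in the list above.
public class Calculator {

    // 1. Access specifier:   public
    // 2. Return data type:   int
    // 3. Name of the method: add
    // 4. Arguments:          int a, int b
    public int add(int a, int b) {
        // 5. Body of the method: the statements between the braces
        return a + b;
    }

    public static void main(String[] args) {
        Calculator calc = new Calculator();
        System.out.println(calc.add(2, 3)); // prints 5
    }
}

Compiling and running this class prints 5, because the body of add simply returns the sum of its two int arguments.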
|
https://medium.com/@usemynotes/what-are-methods-in-java-fcf0311862e0
|
['Use My Notes']
|
2020-12-16 16:20:23.138000+00:00
|
['Programming', 'Javascript Tips', 'Java']
|
Behind the Scenes for FashionOne
|
Equal parts magic and tragic, musings from someone who still feels like they’re 12. But definitely are not.
|
https://medium.com/mischke-business/behind-the-scenes-for-fashionone-8465e0ac5add
|
['Anna Mischke']
|
2018-09-10 18:49:05.567000+00:00
|
['Fashion', 'Music', 'Cambodia', 'Phnom Penh', 'Personal']
|
Selling bread was not a crime, or at least that's what I believed
|
Journalist, editor, critic. Co-founder of Matavilela. «I lie to myself all the time. But I never believe me» S. E. Hinton
|
https://medium.com/@danielucasec/vender-pan-no-era-un-delito-al-menos-eso-cre%C3%ADa-yo-e07f3f472821
|
[]
|
2020-11-05 18:10:24.598000+00:00
|
['Personal', 'Español', 'Inspiration', 'Historias', 'Relatos']
|
Everyone has an experience that is worth sharing
|
Everyone has an experience that is worth sharing — and so I share mine with the hope that it benefits someone out there.
I was in a relationship with a guy I intended to marry. He was loving, kind, and very practicing; I mean, he quoted verses and hadiths to me regularly, and that’s what I loved most about him. I was just waiting for his family to approach mine, and in the meantime, we were getting to know each other.
In the process of getting to know each other, he treated me well. He would send me gifts, ask after me every day, sometimes more than once, and we spoke over the phone regularly. Of course he would make me laugh, and it felt like it was just perfect. I could see myself and him together, I could see our future, and I could see us bringing up beautiful children.
However, after a little while he became quite possessive. We spoke very often, and he would get annoyed if I didn’t get back to his missed calls or messages ‘on time’. He made it very clear that he watched me on WhatsApp to see if I came online, and if I did not respond to his messages he would get angry. At the beginning, I found it quite cute; I thought that he just wanted us to be together — well, that’s how he made it sound. In reality, I didn’t realise that he was becoming increasingly controlling and obsessive.
If I ever said that I did not want to speak to him, he would threaten me. He would also change his number regularly, so that if I blocked him, he would still be able to get in contact with me. I eventually had to change my number, but he still did not leave me alone and would email me from ‘anonymous’ addresses. He kept trying to get a grip on me, to get me back in his life somehow, to the extent that he approached another brother I was looking into — and emailed him information about me, most probably false.
I was beginning to feel trapped and almost hopeless. My anxiety levels rose and my panic attacks began. I felt controlled from every angle in my life. I tried to break all ties with him, but he still wouldn’t leave me alone. I felt like I couldn’t breathe and my heart felt so heavy — I didn’t think I could get away from him.
After counselling and breaking all ties with him, I feel like the person I was before I met him. If it wasn’t for my family, who have been so supportive, I wouldn’t be where I am today. I can’t express how important my family’s support was, and I can’t begin to tell you how important the counselling sessions were. Although the counselling and family support were vital for my wellbeing, I feel that the most important factor was ensuring that I had no trace of him in my life, which was the biggest step towards my happiness. Thank God I am happy now, but it amazed me how much of a difference one person can make to your mental state; I never imagined I would ever experience intense anxiety and suicidal thoughts because of one person.
I won’t lie and say that I don’t think of him sometimes, because I do — but I was told not to weigh myself down with him in my thoughts, so instead I remember to thank God for giving me the strength to remove him from my life. I hope that anyone who may be in a similar situation reads my story and understands how important it is to learn to walk away. Grab all the help you can and need — and do not push your family away, they are the people you will need the most and will help you get through it all. If God has blessed you with a family, it is for a reason, and for that I am always going to be grateful.
|
https://medium.com/inspirited-minds/everyone-has-an-experience-that-is-worth-sharing-1435d965d8d5
|
['Inspirited Minds']
|
2015-12-06 21:41:30.024000+00:00
|
['Story', 'Depression', 'Mental Health']
|