title (string, 1–200 chars, nullable ⌀) | text (string, 10–100k chars) | url (string, 32–829 chars) | authors (string, 2–392 chars) | timestamp (string, 19–32 chars) | tags (string, 6–263 chars)
---|---|---|---|---|---
MUST READ: Article below outlines current agricultural workers on strike in India, this has and… | MUST READ: Article below outlines current agricultural workers on strike in India, this has and will affect the entire world food supply if we are unable to find a way for these farmers to live.
More information found here:
https://link.medium.com/XI8zyUJfqcb | https://medium.com/@TheJessicaBugg/must-read-article-below-outlines-current-agricultural-workers-on-strike-in-india-this-has-and-4a1093a69005 | ['Jessica Bugg'] | 2020-12-22 11:11:41.103000+00:00 | ['Politics', 'Economics', 'World News', 'Short Form', 'Foreign Affairs'] |
Instructions to help your kid put on weight | Instructions to help your kid put on weight
Try eating 6 to 8 smaller meals or snacks each day.
If you can only manage a few bites, eat more often, every half hour if necessary.
Avoid favorite foods when you feel nauseated or sick to your stomach. If you vomit after eating a favorite food, you may go on to dislike that food.
Save favorite foods for when you feel well!
Eat when you feel hungry. Keep ready-to-eat foods on hand in case your appetite lasts only a few moments.
Do not fill up on liquids at meals. Instead, drink just enough at meals to swallow food comfortably.
Sip liquids between meals to stay hydrated.
Dairy
Try drinking whole milk, flavored milk, buttermilk, hot cocoa, or milkshakes with meals.
Use milk or cream rather than water to make soups and hot cereals.
Have ice cream or regular yogurt for snacks and desserts. Top waffles or pancakes with ice cream, whipped cream, or fruit toppings.
Add whipping cream to pies, fruit, pudding, fruit-flavored gelatin, hot cocoa, and other desserts.
Serve heavy cream with fruits (peaches n’ cream, strawberries n’ cream, and so on).
Add sour cream to potatoes, casseroles, dips, vegetables, fruit, beans, and soups.
Add cream cheese to spreads and cheesecake, or put it on crackers.
Mix cheese in with nearly everything.
Melt cheese in scrambled eggs or on fried eggs.
Melt cheese on burgers and add it to other meat sandwiches.
Melt cheese onto meatloaf, spaghetti, pizza, and casseroles.
Melt cheese onto baked potatoes, mashed potatoes, and other vegetables.
Add cheese to salads, soups, and stew.
Melt cheese onto bread, rolls, biscuits, and cornbread.
Serve cheese dip with chips or vegetables (broccoli, cauliflower, and so on).
For any recipe that calls for milk, use a high-calorie supplement, whipping cream (heavy cream or light cream), whole milk, or fortified milk.
Meat
Put extra meat on sandwiches.
Have extra helpings of meat at meals. Eat the meat first and eat less of other foods.
Add chopped, cooked meats to soups, salads, pizza, macaroni and cheese, and pasta dishes.
Add extra eggs to meats, casseroles, and so on.
Nuts
Serve nuts as snacks, such as trail mix, as well as in cereals or breads, and on desserts.
Put extra peanut butter on sandwiches, bagels, and rolls.
Spread peanut butter or other nut butters onto cookies, vanilla wafers, graham crackers, granola bars, pretzels, and crackers.
Add peanut butter to pancakes, French toast, and waffles before pouring on the syrup.
Put peanut butter on fruits and vegetables (apples, bananas, pears, celery, carrots, and so on).
Make peanut butter milkshakes.
Fat
Use butter, margarine, or oil.
Add it to soups, vegetables, mashed potatoes, cooked cereal, pudding, rice, pasta, and casseroles.
Spread extra onto rolls, biscuits, cornbread, sandwiches, and toast.
Add 1 tablespoon of coconut oil to smoothies and milkshakes.
Sauté or stir-fry vegetables, meats, chicken, and fish in olive, canola, or coconut oil.
Marinate meat, chicken, or fish in vegetable, canola, olive, or sesame oil before cooking.
Add foods like avocado and nuts to meals.
Add salad dressings to vegetables, potatoes, and salads.
Add extra mayonnaise to sandwiches, tuna salad, egg salad, chicken salad, and ham salad.
Add gravy to potatoes, vegetables, rice, pasta, and meats.
Add extra frosting to fruit, cookies, cakes, graham crackers, and other desserts. | https://medium.com/@babycare7171/instructions-to-help-your-kid-put-on-weight-51523b365e29 | [] | 2021-04-09 19:52:38.880000+00:00 | ['Baby Weight', 'Baby Care', 'Kids', 'Baby', 'Baby Love'] |
Using Enzyme autodiff with Swift | Automatic differentiation is an exciting emerging technology which enables deep learning applications and is of particular value to PassiveLogic’s smart building platform.
The Swift language has first-class support for autodiff. Using this, we can write a simple function and make it differentiable by decorating it with the @differentiable annotation:
// This is our differentiable function, y = a² + b³. So complicated,
// thank goodness we have autodiff to determine the derivatives!
@differentiable(reverse)
public func awesome(a: Float, b: Float) -> Float {
    a * a + b * b * b
}
Now we can obtain the value and derivatives of awesome() simply by calling the magic valueWithGradient() function provided by the Swift compiler:
let (value, derivatives) = valueWithGradient(at: 3, 5, of: awesome)
print("\(value), dA = \(derivatives.0), dB = \(derivatives.1)")
This is wonderfully simple and effective (shout-out to the Google Swift for Tensorflow team who drove the effort to get this integrated into the mainline Swift compiler).
But… what if it could be faster?
Enzyme is a project that integrates tightly with the LLVM optimizer to produce autodiff functions. Because it operates at the low level of LLVM IR, it can potentially generate faster autodiff code than implementations that work at an earlier point in the compilation pipeline, such as the current incarnation of differentiable Swift.
Since Swift uses the LLVM backend, it ought to in principle be possible to use Enzyme to produce the derivatives of awesome(). Let’s make it happen!
We start from the same Swift function (minus the differentiable annotation):
// Our enzyme differentiable function. Behold the awesomeness!
public func awesome(a: Float, b: Float) -> Float {
    a * a + b * b * b
}
Enzyme imposes a couple of requirements on the differentiable function. It must have a C calling convention; and any parameters for which we will get a derivative must be passed by reference (in order to use the Enzyme duplicated argument convention). So we write a small wrapper function:
@_cdecl("awesomeWrapper")
func awesomeWrapper(a: UnsafeMutablePointer<Float>?,
                    b: UnsafeMutablePointer<Float>?) -> Float {
    awesome(a: a!.pointee, b: b!.pointee)
}
Now that we have a wrapper that Enzyme is willing to differentiate, we can ask Enzyme to generate a function to take the wrapper and a set of input parameters, and return the derivatives (we will discuss the mechanics of the Enzyme generation step in a subsequent post). It is required that the generated function name must start with the prefix __enzyme_autodiff (the Enzyme code generation pass looks for calls to any function starting with this prefix and takes this as an indication that it needs to generate code). There might of course be multiple generated functions in a project with different signatures, so we can use the part of the function name following the required prefix to disambiguate these functions.
In this case, awesome() is a function which takes two float parameters and returns a float, so using an invented naming convention we will call the generated function __enzyme_autodiff_Float_FloatFloat(). Because we are using the Enzyme duplicated argument convention, each of the input parameters will be followed by a “shadow” output parameter which will return the derivative of the corresponding input parameter.
The Enzyme generated function will use the C calling convention. So this leads us to a C prototype for __enzyme_autodiff_Float_FloatFloat():
float __enzyme_autodiff_Float_FloatFloat(
float(*)(float*, float*), // function to differentiate: it
// accepts two floats as input and
// returns a float.
const float*, // The first input parameter.
float*, // (output): the derivative of the
// first parameter.
const float*, // The second input parameter.
float*); // (output): the derivative of the
// second parameter.
Since Swift has the ability to call C directly, we will place this prototype in a .h file and tell Swift it is an external C library (even though the actual implementation of the function exists in no library but will be generated by the Enzyme step during compilation).
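For reference, once that header is imported, Swift sees the prototype roughly as the declaration below. The pointer and nullability mappings here are our reading of the default Swift/C importer behavior, not something taken from the original post:

// Hypothetical Swift-side view of the imported C prototype.
func __enzyme_autodiff_Float_FloatFloat(
    _ fn: (@convention(c) (UnsafeMutablePointer<Float>?,
                           UnsafeMutablePointer<Float>?) -> Float)!,
    _ a: UnsafePointer<Float>?,
    _ dA: UnsafeMutablePointer<Float>?,
    _ b: UnsafePointer<Float>?,
    _ dB: UnsafeMutablePointer<Float>?
) -> Float

This also shows why awesomeWrapper can be passed in directly: its @_cdecl attribute gives it the C calling convention the function-pointer parameter expects.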
Now we can finally write a Swift function to call __enzyme_autodiff_Float_FloatFloat() and return the derivatives of awesome()’s inputs:
public func awesomeDerivative(a: Float, b: Float) -> (Float, Float) {
    var dA: Float = 0
    var dB: Float = 0
    // We need copies of the parameters so we can pass by reference.
    var aCopy = a
    var bCopy = b
    // Do the thing!
    __enzyme_autodiff_Float_FloatFloat(awesomeWrapper,
                                       &aCopy, &dA,
                                       &bCopy, &dB)
    return (dA, dB)
}
You might think that the return value of the Enzyme generated function was the result of awesome(), but not so. So to implement the equivalent of Swift’s valueWithGradient() we will have to call awesome() directly to get the value and awesomeDerivative() to get the gradient:
public func awesomeValueAndDerivative(
    a: Float,
    b: Float
) -> (Float, (Float, Float)) {
    let value = awesome(a: a, b: b)
    let derivatives = awesomeDerivative(a: a, b: b)
    return (value, derivatives)
}
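To sanity-check the plumbing, we can exercise this the same way we exercised valueWithGradient() earlier. The expected numbers are just the analytic derivatives of y = a² + b³ (dA = 2a, dB = 3b²); the snippet is an illustrative usage we added, not from the original post:

let (value, derivatives) = awesomeValueAndDerivative(a: 3, b: 5)
// value = 3*3 + 5*5*5 = 134
// dA = 2*3 = 6, dB = 3*5*5 = 75
print("\(value), dA = \(derivatives.0), dB = \(derivatives.1)")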
Whew! That was a lot of work to reach feature parity with a single line of Swift code! Why on earth would we go through this misery? Well, here’s a simple benchmark that computes the derivatives of awesome() 100,000,000 times in a loop:
[Figure: benchmark results. Caption: "Speedy boi!"]
Oh. 477 times faster?! That’s worth a fair amount of misery.
This is of course a trivial example of a differentiable function, but we have observed similar speedups in significantly more complex functions with hundreds of parameters. Enzyme is pretty impressive!
Despite the performance benefits observed from Enzyme-generated derivatives, you may want to hold off on replacing all of your differentiable Swift code for two reasons: first, that work is underway to significantly improve native differentiable Swift performance in ways that may narrow or remove this gap (some of which will be described later by fellow PassiveLogic employee Brad Larson); and second, that there are currently significant limitations on the Swift / Enzyme integration that we will discuss in a subsequent post.
The code presented in this post can be found in https://gitlab.com/PassiveLogic/enzyme-swift-demo | https://medium.com/passivelogic/using-enzyme-autodiff-with-swift-afa1bc2dc102 | ['Kevin Vigor'] | 2021-06-17 17:43:16.220000+00:00 | ['Programming', 'Automatic Differentiation', 'Pltech', 'Swift Programming', 'Enzyme'] |
Understanding a Neural Network through scratch coding in R; A novice guide | If you have ever delved into the world of data science, then it will not be an absurdity for me to assume that by now you have certainly encountered the term Neural Networks somewhere, at some point in your journey towards probing Machine Learning and Artificial Intelligence or data science in general.
Figure 1. A Neural Network with one hidden layer (Image by author)
A most approved definition for a Neural Network (NN) is that it is a brain-inspired computer architecture containing varying network topologies of functions, where the nodes are interconnected in a specific fashion to unmask the underlying patterns in a dataset by following a series of algorithms iteratively (Figure 1). Perhaps, with a little mathematical maturity and a slight knowledge of optimization theory, it is criminally simple to call a Neural Network a function approximator or a regressor. From my previous article, we saw through a practical example how high-level APIs such as Keras and Tensorflow make it really simple to build and train a neural network. However, the simplicity provided by Keras and Tensorflow plagues a beginner with tenacious dependency and makes it really enigmatic for a novice to comprehend the actual underlying mathematics and reasoning behind the process of learning (forward feeding and backpropagation). From my articles, you can probably tell that I am a very visual person, and when it comes to learning, I think that engaging visuals facilitate deeper comprehension. Here I will try to slightly unmask this black box, and we will take a gentle dive into the mathematical formalism of a Neural Network incorporating the gradient descent optimizer and comprehensively try to build our own. Don't worry if you are not comfortable with elementary linear algebra and differential equations, as the visuals are enough to get the concept rolling in your brain console. And again, if you are comfortable with these mathematical prerequisites, then you must appreciate how overhyped this model really is.
Now, let’s start!!
Neural Network Architecture
A simple NN architecture with one hidden layer is elucidated in Figure 2. The first layer is called the input layer. Each feature of the dataset act as a single neuron/node (red-colored). Given a tuple of weights, the linear combination of these nodes along with a bias (blue-colored) generates a single node of the next consequent layer called the hidden layer (yellow colored). Notice that the contribution of the iᵗʰ node of the input layer in the formation of the jᵗʰ node of hidden layer value hⱼ is the product of iᵗʰ node value Xᵢ and the iᵗʰ element of the jᵗʰ tuple of weight i.e wⱼᵢ. A similar scheme is then repeated for the hidden layer and the output layer.
Figure 2. Elucidating the Neural Network (Image by author)
At this point I recommend you try visualizing the weight tuples as column vectors, combining to form a matrix W with as many rows as there are nodes in the current layer and as many columns as there are nodes in the consequent layer. So for the network in the animation, the weight matrix W corresponding to the input layer and hidden layer is W = [w₁ w₂ … wₘ], where the jᵗʰ column collects the weights (wⱼ₁, wⱼ₂, …, wⱼₙ)ᵀ feeding the jᵗʰ hidden node.
This kind of matrix apprehension of weights is important to comprehend the upcoming underlying mathematics.
Since the conceptualization of the NN is inspired by brain functioning, it is important at this point to mention the synaptic ends of a brain cell/neuron and their analogy with a NN. Every neuron value generated by the above operation is analogous to a stimulus that a brain cell carries and is supposed to pass on to the next connected neuron. There is a mechanism at the synaptic end of the axon that decides whether the stimulus is powerful enough to be passed on to the dendrites of the next neuron. Analogously, this gating is carried out by an activation function in a NN. So precisely, every generated node value is passed through an activation function, which decides whether to pass the stimulus on and how much of it to pass.
Figure 3. A sigmoid function ranging from -inf to +inf (Image by author)
The most popular activation functions are Sigmoid, ReLU, Tanh, and Softmax (a generalized sigmoid). While the ReLU function permits only positive values to pass, the sigmoid activation function S(x) uniquely maps any real number x into the range (0, 1) (Figure 3). Mathematically, S(x) = 1/(1 + e⁻ˣ). Interestingly, the differential of S(x), i.e. dS(x)/dx, equals S(x)(1 − S(x)). This invites extreme mathematical convenience while calculating the gradient, and it is a reason why sigmoid is the most celebrated activation function. This process of information transformation from one layer to another is called forward-feeding.
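For completeness, that derivative identity is a one-line computation:

dS(x)/dx = e⁻ˣ / (1 + e⁻ˣ)² = [1/(1 + e⁻ˣ)] · [e⁻ˣ/(1 + e⁻ˣ)] = S(x)(1 − S(x))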
Once a final output layer is estimated, the performance is evaluated using an error-index function P. Mathematically, P = ‖d − z‖²/2, where d is the actual/desired output and z is the estimated output. The optimization objective is to minimize P. There are numerous optimizers in practice, such as Newton's method and ADAM. Here we will focus on the most popular and widely used: gradient descent.
Since P is a function of z, which in turn is a function of the weights W and bias b, to minimize P the change in weight, ∇W, is evaluated in the direction in which the partial of P with respect to the weights, ∇P = dP(z, W, b)/dW, decreases (Figure 4).
Figure 4 (Image by author)
Now let's see how we calculate these required gradients. If we map this prelude onto our Figure 2 instance, the process looks like the cartoon below (Figure 5). Notice that to calculate the gradient of the performance index with respect to the weights, i.e. dP/dW₁ and dP/dW₂, we applied the chain rule starting from the right end of the process, propagating backward. (Recall that the performance index was P = ‖d − z‖²/2.) This procedure is called backpropagation.
Figure 5. Chain rule in Backpropagation (Image by author)
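In symbols, for the one-hidden-layer network of Figure 2 (with h the hidden-layer activations and z the output), the chain rule sketched above expands to:

dP/dW₂ = (dP/dz)·(dz/dW₂) and dP/dW₁ = (dP/dz)·(dz/dh)·(dh/dW₁), where dP/dz = −(d − z) since P = ‖d − z‖²/2.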
The gradient ∇P = Σ dPᵢ/dWᵢ for i = 1, 2, 3, …, k+1, where k is the depth of the architecture. Once ∇P is evaluated, the weights and biases are updated respectively as:
W_new = W_current − η·(∇P) and b_new = b_current − η·(∇P)
Where the value η is the learning rate. These two equations are very important and we are frequently going to refer to them. With this jotting, we will now proceed to make our model step-by-step | https://towardsdatascience.com/understanding-a-neural-network-through-scratch-coding-in-r-a-novice-guide-a81e5777274b | ['Abhibhav Sharma'] | 2020-09-27 19:11:06.809000+00:00 | ['Scratch Programming', 'Neural Network', 'Gradient Descent', 'Perceptron', 'Machine Learning'] |
10 Things That Would Have Happened If Hillary Were President | 10 Things That Would Have Happened If Hillary Were President
How The World Might Have Looked With An HRC Presidency
Hillary Clinton at a rally in 2016. Courtesy: Timothy A. Clary/AFP via Getty Images.
We have entered the final months of President Donald Trump’s first full-term, and the United States is anything but united.
Since 2016, the nation has been marred by one political earthquake after another, be it impeachment hearings or riots in the streets of Portland— and the next few years don’t look any more promising.
As the voter prepares to cast the ballot this November, many ask how things might have been different had Trump lost the 2016 election?
Rewind to November 2016. Hillary Rodham Clinton wins by a landslide, Donald Trump returns to Trump Towers, and the first female president takes office in a hostile world.
What happens after that?
Some of the judgments below may be on the mark while others may be way off, but one thing is certain — today’s political atmosphere would have been no different under a Hillary Clinton presidency.
Without a doubt.
10. An Internationalist Presidency
In a speech addressed to the American Legion in September 2016, HRC championed her background as former Secretary of State as a qualification for the presidency.
For sure, this diplomatic experience would have been instrumental for her as POTUS. As Dr. Gerlinde Groitl of the University of Regensburg observed:
Her foreign policy ideas rest on the belief that principled leadership and global engagement are vital for the well-being of the U.S. and for the world. She is convinced that the U.S. needs to support freedom and democracy, stand up to aggressors, uphold international order, punish rule-breaking, and enlist partners in multilateral diplomacy.
Unlike the Trump Doctrine, which seeks for the US to act independently of multilateral agreements and institutions, the Hillary Doctrine may have involved a strengthening of alliances to protect US interests abroad (especially where Russia was concerned— more below).
This would have included upholding international deals such as the Paris Climate Accord of 2015 and the Iran nuclear deal — both of which Trump withdrew from early in his administration.
9. US Relations With China Would Have Soured
For some, Donald Trump’s imposition of tariffs on Chinese exports signaled a new era for US-Chinese relations.
However, the dominoes of the US-Sino trade wars started falling under Barack Obama and would have continued to fall under HRC.
In addressing a crowd at a manufacturing plant in Warren, Michigan, HRC expressed the need to utilize “targeted tariffs” on Chinese goods:
So as President, I will stand up to China and anyone else who tries to take advantage of American workers and companies. And I’m going to ramp up enforcement by appointing, for the first time, a chief trade prosecutor, I will triple the number of enforcement officers, and when countries break the rules, we won’t hesitate to impose targeted tariffs.
Writing for The Diplomat, Professor of International Relations at Jinan University Dingding Chen emphasized that an HRC-presidency would have been tense with China due to harsh statements made against the Communist regime in the past.
Along with enduring trade disputes, Beijing still would have pushed for further expansion into the South China Sea and disputed regions bordering India. The US likely would have remained distracted with the North Korea nuclear crisis, as was the case in 2017.
Still, Dingding Chen maintained a hopeful outlook that an HRC presidency might have resulted in a “more mature and healthy bilateral relationship” between Washington DC and Beijing.
8. US-Russia Relations DEFINITELY Would Have Soured
In 2016, Clinton Elrich of Foreign Policy wrote how members of Russia’s leading think tank — the Moscow State Institute of International Relations — were sure that relations with the US would deteriorate in the event of a Hillary presidency.
Among the grievances expressed by the think tank included the US’s neutralization of Muammar Gaddafi, a Russian asset in North Africa, and the proposal for a no-fly zone over Syria, a Russian asset in the Middle East.
Both actions were vehemently supported by Hillary Rodham Clinton in her tenure as Secretary of State — and Putin would not have let either slide.
At the height of the candidacy period in 2016, the New York Times detailed how HRC’s advisors “privately” had an “update of the ‘containment’ strategy” towards Russia, which would have included a “mix of sanctions, diplomatic isolation, and international condemnation.”
Repeated cases of Russian harassment against US military units have persisted since the 2016 election, and given her harsh stance towards Moscow, these numbers might have sharply increased in the event of a Hillary presidency. Likewise, HRC might have committed more troops to eastern Europe as part of the European Deterrence Initiative.
To no surprise, the Senate Committee on Intelligence recently concluded that Russia indeed attempted to influence the outcome of the 2016 election, and for sure, HRC would not have let that slide either.
Even today, her grudge with Putin runs deep, as hinted on pg. 45 of her 2017 memoir, What Happened?:
Many of the organizations wanted the speeches to be private, and I respected that: that they were paying for a unique experience. That allowed me to be candid about my impressions of world leaders who might have been offended if they heard. (I’m talking about you, Vladimir).
7. The Economy Would Have Improved (And A Recession Still Would Have Happened)
In the run-up to the 2016 election, a survey conducted by the National Association for Business Economics showed how 55% of 414 economists favored Hillary in managing economic affairs.
David Wessell, Director of the Hutchins Center on Fiscal and Monetary Policy, outlined Hillary’s economic policy in the Wall Street Journal:
Hillary Clinton clearly favors using the power of government in all respects–taxes, spending, regulation, incentives for states and businesses–to help steer the U.S. economy toward what she considers a better place. That applies to using the tax code to make it harder for companies to move headquarters and jobs overseas and using the government’s regulatory muscle to keep Wall Street from the mistakes and risks that led to the Great Recession.
An HRC presidency likely would have continued said regulatory practices while overseeing the continual uptick in the economy that had been happening for the previous decade.
On the flip side, analysts and economists already sounded the alarm for recession well before the COVID pandemic.
In 2017, geopolitical analyst George Friedman noticed signs of trouble with low interest rates amidst the nation’s economic expansion.
A year later, NYU business professor Nouriel Roubini warned of an impending recession due to growing tensions between the US and foreign rivals (China, Russia, North Korea, and Iran).
In 2019, economist Paul Krugman — in attacking Trump’s tax cuts — warned of the de-acceleration of the eurozone in Europe that could heighten the US’s risk of recession.
As one or all of these factors could have easily happened under HRC (certainly the second one), the economic bubble might have burst in either 2019 or 2020 as it did under Trump, especially with the onset of the COVID crisis.
6. COVID-19 Still Would Have Hit The US Hard
In 2003, the SARS virus broke out in China and spread worldwide. It did not wreak havoc as its COVID offspring would, but as the New England Journal of Medicine explains:
We have recently seen many such emerging zoonoses, including the 2003 bat-coronavirus–derived SARS (an earlier severe acute respiratory syndrome, caused by a closely related coronavirus), which came terrifyingly close to causing a deadly global pandemic that was prevented only by swift global public health actions and luck.
As it stands, the United States was lucky for two decades, and their luck finally ran out in 2019.
Since the COVID outbreak of last November, HRC has regularly slammed Trump for his response to the crisis and insisted she would have done “better” as president.
It’s quite possible that she may have pushed for quicker measures given the previous “playbook” and improvement plans left behind by outgoing administrations. For sure, she would have pushed for one or more stimulus packages to support families across the United States.
Regardless, as fast as the COVID virus spread, lock-downs would have taken effect, and the economy still would have taken the biggest hit.
5. The “Resistance”
Apart from Fox News and Talk Radio, those whom Hillary dubbed “deplorables” would have been ready to strike on Day One of her presidency.
Donald Trump wasn’t even a month into his presidency when he was targeted by the Women’s March. An HRC presidency might have been on the receiving end of a similar march, as seen with the annual March For Life Rally of 2017 (which occurred roughly six days after the Women’s March).
The number of anti-government groups reportedly swelled during the Obama administration, and with unorganized militiamen grabbing headlines even under the Trump administration, an HRC presidency would have added fuel to that fire as well.
However, possibly the most significant push-back would have come from the hard Left. Given her open ties to Wall Street, Silicon Valley, and Hollywood, portions of the anti-corporate Left (including remnants of Occupy Wall Street) would have continually taken to the streets in protest of her Neo-liberal policies.
4. The George Floyd Protests Still Would Have Happened
The US was rattled with protests following the murder of George Floyd in May of 2020, but today’s earthquake was preceded by several tremors.
The names of Trayvon Martin, Eric Garner, and Michael Brown kicked off the Black Lives Matter movement in the 2010s, and it was only inevitable that tensions would hit critical mass at some point.
Prior, HRC frequently tried to paint herself as an ally of the BLM movement. She had a number of exchanges with separate BLM chapters before the 2016 election — some of which did not necessarily help her image.
In August 2015, HRC met with five BLM activists in Keene, NH, who confronted her about her support for the 1994 federal crime bill as First Lady (one of the activists, Daunasia Yancey, openly called HRC’s racial justice record “abysmal” in an NPR interview a week later).
Two months later, during an “African-Americans For Hillary” rally in Atlanta, 10 BLM activists were escorted out after interrupting her speech about criminal justice reform (to which she replied dismissively, “I’m sorry they didn’t listen…”).
By February 2016, Hillary apparently had enough when, after being confronted again in Charleston, SC, she replied to an activist, “Do you want to hear the facts, or do you just want to talk?”
To be sure, an HRC presidency might have been more receptive to the BLM movement’s grievances. She easily won the majority of Black voters in 2016 and has expressed a degree of support for the recent George Floyd protests.
Yet, it is difficult to imagine a genuinely cordial relationship between HRC and the movement, given her frosty responses when confronted over racial relations in the past.
3. Scandals Galore
Even during her time as First Lady, HRC was the subject of numerous scandals (Troopergate, Travelgate, Thankyougate).
A plethora of other “gates” easily awaited her as president — especially in the wake of the #MeToo movement.
This would have included her loose ties to the disgraced Harvey Weinstein, her willingness to shield a campaign staffer accused of sexual harassment, and her marriage to a man with a history of sexual misconduct (not to mention Bill’s friendship with the deceased predator, Jeffrey Epstein).
A smoking gun, however, might have emerged from her philanthropy background.
The Clinton Foundation — founded by herself, her husband Bill, and daughter Chelsea — has received much scrutiny for accepting millions in donations from questionable sources including Saudi Arabia, Nigeria, and a son-in-law of a former Ukrainian president accused of murdering journalists.
In 2017, a number of wealthy donors pulled out from the Clinton Global Initiative following several accusations of conflicts of interest. Among the more interesting pieces of information was an email released by Wikileaks, which suggested the Clintons allegedly sold political favors to wealthy donors, such as Ukrainian businessman Viktor Pinchuk.
While no solid evidence has been found over the matter, episodes like this would have been exploited by HRC’s enemies while in office — and this would have led to an inevitable outcome.
2. Impeachment
Not a single US president since Bill Clinton has escaped the specter of impeachment, and his wife most definitely would not have escaped the stigma either.
Where Trump endured impeachment over a quid pro quo with the Prime Minister of Ukraine, a similar scenario may easily have unfolded given loose ends not tied up in Hillary’s philanthropy background.
Might Hillary have been “locked up” in this case?
As with Donald Trump — and her husband — the partisan atmosphere of Washington DC would have ensured the survival of her tenure. The articles of impeachment against Trump passed the Democratic-controlled House and died in the Republican-controlled Senate. In the case of HRC, any articles might have been killed in the House well before reaching the Senate.
Having survived disputes over Benghazi and her private emails in the past, HRC likely would have survived impeachment with the help of political allies, but she still would have had to answer to the electorate in 2020 — and this would have been a more daunting task.
1. A One-And-Done Presidency
Polls have frequently shown a decline in approval for Hillary Clinton since the 2016 election.
A Gallup Poll from 2017 showed HRC’s popularity with respondents sat at 36%. A year later, the same poll showed the same percentage, which remains the lowest to date for HRC.
Elsewhere, a Morning Consult/Politico poll conducted on August 11th showed 35% of voters had an unfavorable view of today’s Democratic presidential hopeful Joe Biden, which contrasts with Hillary in 2016 at 43%. The same poll showed a striking contrast among Independent voters with 31% viewing Biden unfavorably while, in 2016, this number sat at 51% for HRC.
As polarizing — if not demonizing — a figure as she is, HRC would have faced more enemies at home than abroad.
She would have faced stiff opposition from the Right, which identified her as the face of evil in the Democratic Party since her years as First Lady. Figures like Sean Hannity and Judge Jeanine Pirro would have gleefully turned up the populist pressure against HRC, as they had done under Barack Obama.
Apart from right-wing talking heads on Fox News, online left-wing personalities such as Jimmy Dore, David Pakman, and Kyle Kulinski have been relentless in picking HRC apart — and they have not let anyone forget HRC’s treatment of the Bernie Sanders campaign (which she described on pg. 73 of What Happened? as an “insurgent left-wing candidacy”).
With these odds stacked against her, an HRC presidency may have lasted a single term. She would have been remembered as the COVID-19 president, the impeachment president, and the George Floyd president all packaged into one — an honor that has been bestowed upon Trump in her place.
The More Things Change…
Hillary Clinton leaves the stage during the MSNBC Democratic Candidates Debate on February 4, 2016. Courtesy: Jewel Samad/AFP via Getty Images.
Hard as it is to stomach, life under HRC would likely look not much different from today.
The White House would seem less like a Reality Show, but the problems of the world have all been simmering for some time. 2020 was the moment they decided to erupt.
Such a pivotal year has proved trying for the presidency of Donald Trump, and it would have been the exact same under Hillary.
The 2016 election will always be remembered as a battle between “two evils.” As many have learned in the past four years, the “lesser of two evils” is still evil, and we should not have expected less with an HRC presidency.
Nor should we as we enter yet another battle between “two evils.” | https://medium.com/an-idea/10-things-that-would-have-happened-if-hillary-were-president-c26780f8ef03 | ['Grady Bolding'] | 2020-09-24 17:39:30.860000+00:00 | ['Hillary Clinton', 'Alternate History', '2016 Election', 'Democratic Party', 'Politics'] |
A More Diverse and Inclusive Punkverse | TL:DR
ExpansionPunks is a collection of 10,000 unique, procedurally-generated collectible Punks stored as ERC-721 tokens on the Ethereum blockchain. In the same way “expansion packs” introduce new characters and storylines in traditional tabletop gaming, ExpansionPunks have arrived to expand the Punkverse to be a more diverse and inclusive community by addressing subtle biases in the original CryptoPunks collection that unintentionally lead to exclusion.
Societal Context
In many ways, our decentralized future is still in the starting blocks. Forthcoming innovations in personal identity management, the arts, and financial services (to name a few) will forever alter the arc of our society. The technologies and paradigms that will define this era are in large part still being discovered today. It’s certainly an exciting prospect for those already involved in this space. But what about those not yet involved?
As a society, it’s critically important we ensure everyone feels empowered to join and participate in such a future. If we’re to build resilient, next-generation societal constructs, we must do so free of the inherent biases of our entrenched systems.
The Case for Expanding on CryptoPunks
As the world’s first NFT avatar collection, CryptoPunks has etched its place in history. Punks are the “blue-chip” avatar to signal one’s involvement in this space. As the decentralized movement evolves, more and more individuals (technologists, artists, investors, academics, etc.) will seek association with the CryptoPunks ethos through the CryptoPunks aesthetic. As such, the growth of the Punkverse is inevitable, with or without ExpansionPunks. New “altpunk” collections are emerging to fill this gap, attempting to draft in the wake of CryptoPunks’ momentum. As detailed in our Punk Quadrants article, these collections achieve differentiation in various ways, resulting in a variety of outcomes relative to aesthetic cohesion and trait-rarity coherence with the CryptoPunks collection.
For ExpansionPunks — owning the “expansion” modifier comes with great responsibility to the CryptoPunks community. Much was written in our first article (Creating a 10K NFT Avatar Collection) about the lengths pursued to reverse-engineer the original CryptoPunks DNA as a pre-requisite to generating the ExpansionPunks collection. That rigor results in an ExpansionPunks collection that is unequaled in its aesthetic alignment with the original, while guaranteeing no ExpansionPunk will collide with an original CryptoPunk on a trait-combination basis or on a visual basis (given some hidden traits).
Beyond aesthetic alignment, ExpansionPunks also set out to create greater balance and representation within the aggregate 20K Punkverse (CryptoPunks + ExpansionPunks). Examples of the current imbalances in the original CryptoPunks collection include…
1. Gender skews disproportionately male.
2. Females are excluded from desirable traits like Hoodies and Cowboy Hats. Female Punks can’t even show emotion through a Smile (or Frown) — in fact, the only mouth trait they are permitted is a Lipstick color … in 2021!
3. The 121 rarest (and most valuable) Punks (Aliens, Apes, Zombies) only feature Male traits.
4. Gender is handled as an exclusively binary trait (Male vs. Female) — excluding a growing population of individuals that identify as Non-Binary.
A More Diverse and Inclusive Punkverse
The ExpansionPunks collection features a gender distribution that inverses the disparities in the original CryptoPunks collection, resulting in an aggregate 20K collection (10K CryptoPunks + 10K ExpansionPunks) that better reflects gender proportions of the real world: | https://medium.com/geekculture/a-more-diverse-and-inclusive-punkverse-da2a27db8911 | ['Jeremy Posvar'] | 2021-08-15 15:40:15.585000+00:00 | ['Blockchain', 'Diversity In Tech', 'Nft', 'Nft Collectibles', 'Cryptopunks'] |
May Day | An updated version of our comic from May Day last year.
May Day (in America) commemorates a labor protest that led to the 8 hour work day. But does anybody work anything like a 9–5 day any more? | https://backgroundnoisecomic.medium.com/may-day-a5d98417b726 | ['Background Noise Comics'] | 2019-05-01 20:30:45.043000+00:00 | ['Labor', 'Cartoon', 'Comics', 'Work', 'Humor'] |
Get Excited for a New Era of Feminine Energy | Get Excited for a New Era of Feminine Energy
The balance of male and female is tipping and the matriarchy is coming.
Photo by Jackie Parker on Unsplash
For as long as anyone alive can remember, we’ve lived in global patriarchy. Our world leaders are largely men. Women still have not achieved equality in the workplace, and the global pandemic this year has economically hit women harder than men. Female energy has been suppressed for most of human recollection — perhaps thousands of years.
People are beginning to see things shift.
The feminist movement has gained traction for about half a century. More narratives have arisen in popular culture about heroines. (Some of my favorites from this year have been Mulan and Enola Holmes.)
Everywhere I turn these days, I observe women finding their voices and expressing their female power.
I believe that the scales are tipping and we are entering a new era. It’s been coming for a while.
Indigenous Peoples and Ancient Wisdom
Masculine energy hasn’t always dominated culture and society. There are spiritual balances tipping, being restored, and tipping again throughout human history.
There is evidence that at certain times there may have even been equality.
The Iroquois Tribe of North America was known for its emphasis on feminine energy and power:
Tribal Council was dominated by male speakers, but the women decided which men should be speakers. If the chosen man expressed opinions that clashed with those of the Womens’ Council, they could replace him with someone who more closely represented their views. If the Tribal Council took a course of action that the women disagreed with, such as a raid, the women might simply refuse to give them any food for the journey.
What happened to this idea of women having a place in politics and community decisions? Colonialism came with patriarchal values, and male-dominated modernity has forgotten what came before this. But our values are changing, and this past election has shown this.
What did the election prove about the power of feminine energy? If we trust women to lead, things will get better. Let us never forget that black women led grassroots organizing that resulted in many democratic victories this year.
I loved this article in Yes! Magazine about modern indigenous women who are bringing back the tradition of a matriarchy, where it says:
Today, contemporary Indigenous women are taking the matter into their own hands and showing the public how to rethink, reframe, and relearn a new American-Canadian story that seamlessly incorporates the voices of Indigenous women. These women are living in the tradition of their ancestors, whose societies and nations were often matriarchal. They are reclaiming the tradition of female leadership and turning the old, white, male-dominated perspective of history on its head.
Who is better suited to heal the land and restore balance in our environment than indigenous women? There is hope that with this ancient, traditional wisdom, the guidance of female energy can be restored. I believe that it’s already happening. Everywhere I turn these days, I observe women finding their voices and expressing their female power, such as in this article I read today by Bradlee Bryant.
Kamala Harris is a great example of a woman breaking through limitations that women have endured for centuries. Her success in rising to Vice Presidential status is evidence of our bright future in America. Anthea Butler of the University of Pennsylvania wrote of her:
We see in her the promise that our mothers held out for us, one our grandmothers could not even imagine. For our daughters and granddaughters, we see a brighter future.
Men can also become more accepting of their inner feminine energy, and have become open to allowing women like Kamala to emerge as leaders. Recognizing the need to balance male and female is the key to paving the way for new possibilities in society.
In ancient Chinese philosophy, yin and yang represent the balance that needs to exist between extremes:
Yin and yang create a perfect balance, the concept of duality. In a Chinese cultural study, women have yin energy while men are considered to be more yang. Women are sweet, weak, and are a positive force of preservation of life. Men are dominating, aggressive and protective towards their yin.
We’ve lived in the extreme of male energy for too long. We all know that when there is extreme unbalance in scales, they will next tip in the other direction. Masculinity has been dominant and may even be stronger in recent years than ever before — and now the time has come to embrace our femininity.
Mother Earth, the Chakras, and Shakti
We all have both female and male energy in us. In her book Wheels of Life, Anodea Judith speaks about Vishnu’s male energy entering us through the crown chakra, and Shakti’s female energy entering us through the base chakra, meeting in the heart chakra (in the middle). I interpret this to mean that the first three chakras, representing Earth, emotion, and personal power, are feminine. Thought, sound, and light, representing the fifth, sixth, and seventh chakras, are related to male energy.
The first three chakras are in the realm of feminine power; in ancient times she was called Shakti. In Hinduism, Shakti’s energy is divine feminine creative power:
According to Hindu philosophy, on the earthly plane, all forms of shakti are most tangibly manifested through females, creativity and fertility; however, males also possess shakti in a potential form which is not fully manifested. Shakti is responsible for all creation and change in existence.
The base chakra, where the feminine Shakti resides, represents solid Earth. Mother Earth, famous for being the feminine bearer of life in cultures around the world, has a connection to the female energy in us all. No wonder women are at the forefront of activism in the climate crisis. We are the stewards of the land.
Return the power to women, and we will guide us all to hope and restoration. We can begin to reverse the destruction of this planet caused by relying too much and for too long on male energy related to the highest three chakras. Mother Earth has been forgotten, but we can start to remember her.
Think with your ‘heart’, not your mind
The patriarchy has dominated our values. Thought has been deemed good and emotion has been deemed bad. We aren’t allowed to have public emotional outbursts. Any woman working in a male-dominated field can tell you that displays of emotion and personal power are often suppressed.
I can feel a revolution of female energy arising in the world. It will take time, but women will be more and more trusted to lead, and our societal values will be transformed in the other direction. To embrace the feminine energy, we will begin to evaluate decisions and situations with our feelings and ‘heart’. Pure logic will no longer be enough to solve problems. Subtle emotional and spiritual knowledge will come to the forefront of our collective consciousness. | https://medium.com/mystic-minds/get-excited-for-a-new-era-of-feminine-energy-1ff89aa93c70 | ['Emily Jennings'] | 2020-12-21 11:03:21.411000+00:00 | ['Consciousness', 'Culture', 'Feminism', 'Self Improvement', 'Creativity'] |
Serving GPT-2 at scale | Serving GPT-2 at scale
Lessons learned scaling large deep models in production
Over the last few years, the size of deep learning models has increased at an exponential pace (famously among language models):
And in fact, this chart is out of date. As of this month, OpenAI has announced GPT-3, which is a 175 billion parameter model—or roughly ten times the height of this chart.
As models grow larger, they introduce new infrastructure challenges. For my colleagues and I building Cortex (open source model serving infrastructure), these challenges are front and center, especially as the number of users deploying large models to production increases.
Below, I’ve outlined some of the design decisions we made to support serving models like GPT-2 at scale.
1. GPUs for inference
Running large deep learning inference on CPUs can be prohibitively slow. GPUs, on the other hand, have significantly more bandwidth (useful for the large computations involved in inference). This makes a large difference in inference latency.
For example, below I’ve deployed GPT-2 as an API on CPUs. Using the Cortex CLI, I’m monitoring the performance of the API, which I’ve named generator:
$ cortex get generator

status   up-to-date   requested   last update   avg inference
live     1            1           5m            925 ms

endpoint: http://abc123.us-west-2.elb.amazonaws.com/text/generator
The average inference takes nearly a full second to serve, which has serious implications at scale. For applications that have thousands of concurrent users, it will take a large number of replicas to keep the request queue under control.
Deploying on a GPU, as shown below, improves latency quite a bit:
$ cortex get generator

status   up-to-date   requested   last update   avg inference
live     1            1           3m            199 ms

endpoint: http://abc123.us-west-2.elb.amazonaws.com/text/generator
That’s roughly an 80% decrease from CPUs. For any application running at production scale, this is a meaningful improvement—but even at small scale, it can be crucial for latency-sensitive applications.
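As rough back-of-the-envelope math (our own numbers, assuming each replica serves one request at a time): a replica sustains about 1/latency requests per second, so serving 1,000 requests per second would need on the order of 1,000 × 0.925 ≈ 925 replicas on CPUs versus 1,000 × 0.199 ≈ 199 replicas on GPUs.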
For example, features like Gmail’s Smart Compose require latency as close to realtime as possible. If Smart Compose generates text predictions more slowly than an average person types, recommendations are useless.
Implementing GPU support required a little refactoring. Cortex uses Kubernetes to manage model serving containers, but Kubernetes does not have out of the box support for managing GPUs across nodes. In order to manage GPUs through Kubernetes, we integrated NVIDIA’s device plugin for Kubernetes. Despite being in beta, we’ve found it to be very stable.
2. Autoscaling based on concurrency
GPU inference solved most of our latency and performance issues, but it introduced new challenges, particularly around autoscaling.
The Kubernetes pod autoscaler works by measuring CPU, not GPU, utilization. Obviously, this is useless for GPU inference.
We brainstormed a number of other approaches to autoscaling GPU instances. First, we tried autoscaling based on target latency. In this approach, users set an ideal latency for their API, and Cortex spun up additional instances until that latency was achieved. There were two key issues here:
If a user set their target latency unrealistically low—as in their model couldn’t possibly hit it with the provisioned resources—the cluster would autoscale infinitely.
Latency is a poor signal for scaling down instances. The latency of a request only changes when a replica is oversubscribed. Whether a replica is barely being used, or is at 90% utilization, latency will be the same so long as there are no queued requests.
Additionally, maintaining and debugging this signal would be difficult for users. Any update to the model that affected latency, or any changes to the prediction API that changed latency (calls to another API, file operations, etc.) would require them to tweak their target latency. This was suboptimal.
The second idea we had was to monitor GPU utilization, but currently, Kubernetes metrics server does not offer GPU monitoring. We considered building this ourselves, but ultimately passed for a third option.
Our third option, and the one which turned out to be the best, was to implement request-based autoscaling, in which the number of enqueued requests and the concurrency capacity of a replica are used to calculate the number of needed replicas.
This approach worked, but required some refactoring:
We needed to redesign Cortex to use FastAPI and Uvicorn, as they allow us to run an asynchronous event loop that tracks requests as they’re added to the queue.
We added a target_replica_concurrency field to Cortex’s configuration files, so that users can specify how many requests a replica should handle concurrently at a steady rate.
There’s a full write up of the project here, but the end result is that now, Cortex can quickly autoscale to the optimal number of replicas (i.e. the minimum required without affecting performance).
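In spirit (our paraphrase of the behavior described above, not an excerpt from Cortex’s source), the scaling target works out to:

desired replicas = ⌈(in-flight + enqueued requests) / target_replica_concurrency⌉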
3. Managing costs with spot instances
One of our ongoing focuses is lowering the cost of inference at scale. AWS’s cheapest GPU instance is the g4dn.xlarge , which costs $0.526 per hour and has a single GPU. Running a 500 GPU cluster, which some users do, for a month costs about $190,000.
Our initial instinct here was to support spot instances for inference. If you’re unfamiliar, spot instances are unused instances that AWS sells at a steep—sometimes as much as 70%—discount.
The challenge in implementation, however, was around how spot instances are allocated within AWS autoscaling groups.
Our goal was to allow users to opt to receive spot instances by default if available, and otherwise receive on demand instances. But autoscaling groups don’t work that way. Instead, users specify what percentage of nodes in an autoscaling group they want to be spot, and EKS tries to accommodate. There is no way to configure the autoscaling group to fall back to on demand instances when spot instances are not available.
To work around this, Cortex actually defines two separate autoscaling groups: one for spot instances, and another which is 100% on demand (and is only used as a backup for the primary autoscaling group). If a user requests spot instances, Cortex will first request nodes from the spot instance group. If an instance cannot be allocated (the scale-up request times out after five minutes), Cortex will request an instance from the on-demand instance group, and will temporarily blacklist the spot instance group so that future requests avoid the timeout.
Adapting to changes in model development
As model sizes continue to increase exponentially, there are other related changes within machine learning that affect how we design infrastructure.
First, pretrained models are becoming more prevalent in production. There are many teams doing no model training at all, and simply deploying pretrained models. Having a serving platform that can stand on its own, as opposed to only integrating with their training pipeline, is important for them—as they don’t have a pipeline.
Secondly, ASICs are becoming more important, as they present a good option for handling the resource needs of large models. Google has rolled out cloud TPUs, which allow for faster training and inference, while Amazon recently released Inferentia, a chip built specifically for inference.
All of these changes are good things—state-of-the-art models are being released pretrained, more teams are putting models into production, and the hardware for running inference is improving—but in order to take advantage of them, our model serving infrastructure needs to be constantly improved and rearchitected. | https://towardsdatascience.com/serving-gpt-2-at-scale-c3c232708d70 | ['Caleb Kaiser'] | 2020-06-16 16:55:59.522000+00:00 | ['Artificial Intelligence', 'Data Science', 'Programming', 'Machine Learning', 'Deep Learning'] |
Fake Disruption: 3 Companies That Claimed to Change the Game | Photo by Thought Catalog on Unsplash
For a while, “disruption” was a powerful word, used to indicate a company that had demonstrated innovation at such a high level, it created a new industry or truly reshaped an older one. Then, tech journalists, entrepreneurs and consumers started using the term to describe any business with a unique idea that had a chance to grow and succeed.
This isn’t to say that these companies weren’t valuable or weren’t worth exploring. It just means they weren’t good examples of what it truly means to be “disruptive” or to “change the game.”
Take, for example, these companies, which have been popularly labeled or touted as disruptive, but aren’t truly as game-changing as we’ve come to believe:
1. Uber. Uber is known as the tech powerhouse that disrupted the transportation industry. Attracting $10.7 billion of funding, and with a valuation of $69 billion as of December 2017, it’s entirely accurate to describe Uber as an industry leader and an innovative, visionary one at that. So why can’t we qualify it as a disruptor?
Clayton Christensen, the Harvard Business School professor who actually popularized the term “disruptive” in a 1997 book called The Innovator’s Dilemma, addressed this in a recent Harvard Business Review article. He describes true disruption as building a business by taking advantage of a low-end market that’s been previously ignored by dominant competitors chasing profits, or as creating an entirely new market — that is, generating customers where there weren’t any before.
Uber doesn’t fall into either category of disruption. It emerged as an alternative solution for taxi services, where customers already existed. Though generally less expensive than a taxi ride, the solution is still comparable enough in price that it didn’t open up a new market, and therefore didn’t “change the game” from the ground up.
Another hallmark of disruptors, according to Christensen, is an origin as a low-quality alternative, with a gradual transition to become a more competitive, higher-quality offer. Uber came into the field with a high-quality alternative — a better product — which made them a clearly superior competitor.
2. Google. Technology has provided the groundwork for disruption; technological advancements make things cheaper and more readily available, and they simultaneously introduce new products and services that haven’t been explored before. That’s why, when most people think about true game-changing technologies, they think about Google — the world’s favorite search engine.
But Google isn’t and was never a disruptive company. Google wasn’t the first company to invent and capitalize on the search engine model; back in 1990, a search engine called Archie started taking user queries and matching them to websites. By 1993, alternatives were already making use of bots and search crawlers to build indexes of the web, and by the time Google got started, there were already dozens of mainstream options for search, including Yahoo! and Ask Jeeves. Google didn’t create a new market; it capitalized on an existing one by building a better product.
Beyond that, Google isn’t truly disrupting in other areas, either. Email was mainstream by the time it developed its popular and innovative Gmail product, and even its futuristic ventures, like autonomous vehicles, are built on improving solutions for existing customer bases.
3. Tesla. Tesla Motors' approach to business relies on constant, and at times ruthless, innovation. Since its launch, the company has become the most valuable automaker in the United States. It has unveiled new and affordable models of electric vehicles and has helped introduce the concept of semi-autonomous driving to mainstream consumers. The company is innovating at an astounding pace, differentiating itself from its competitors, and enjoying a significant level of success as a result, but it isn't disruptive.
Tesla isn't disruptive because its vehicles serve a market that already exists. Electric and hybrid cars weren't new to Tesla; instead, Tesla merely improved on an existing design. Accordingly, existing car purchasers are merely upgrading to Tesla, rather than emerging from a period of being non-customers. Plus, consider the fact that even its least expensive model runs about $70,000, putting it well outside a price range we could consider for a disruptor that's trying to target underserved portions of the market.
Before you use the term “disruptive” to label a business, think carefully about what it’s really doing. If it’s taking an existing concept and making it better, it has a high chance of success, but it isn’t creating a new market. If it’s capitalizing on existing customers and giving them more of what they want, it will likely trounce the competition, but it isn’t creating customers where there weren’t any before.
The more accurately we can describe and learn from business concepts like these, the better we’ll understand our own markets and our own ideas.
For more content like this, be sure to check out my podcast, The Entrepreneur Cast! And be sure to check out my business, EmailAnalytics, which visualizes your email activity — or that of your team. | https://jaysondemers.medium.com/fake-disruption-3-companies-that-claimed-to-change-the-game-ef31b53d5d17 | ['Jayson Demers'] | 2020-08-11 17:42:42.553000+00:00 | ['Entrepreneur', 'Startup', 'Entrepreneurship', 'Disruption', 'Startup Lessons'] |
Key specifications and price leak of LG Rollable smartphone | There have been several reports about LG's rollable smartphone claiming that the company could launch it in the first quarter of 2021. A South Korean publication has recently released a report about the launch of this smartphone, which also covers the device's name and key specifications. According to the report, it could arrive on the market under the name LG Rollable. Tipster Tron has also shared key specifications and price information for this smartphone.
Information about LG Rollable revealed
The report claims that the LG Rollable smartphone will have a 6.8-inch rollable display, which can be expanded up to 7.4 inches. It seems the company may offer different screen dimensions for different models. The LG Rollable smartphone offers a resolution of 1080 x 2428 pixels and a 20:9 aspect ratio.
In productivity mode, the phone reportedly offers a resolution of 1600 x 2428 pixels and a 3:2 aspect ratio. The phone is expected to be powered by the Snapdragon 888 processor and to come with 16GB of RAM and a 4,200mAh battery. In addition, the leak suggests that this phone could be launched in March next year, although the report notes that the launch could also slip to June.
As for the price, the LG Rollable smartphone could launch at $2,359 (about Rs. 1,75,000). Apart from this, LG is also working on two other smartphones, which we may get to see early next year. One of these is codenamed LG Rainbow, which could launch with the Snapdragon 888 processor and stylus support, and whose design may be inspired by the LG Velvet. That smartphone could be launched in March 2021, while the LG Q83 will be a mid-range smartphone launching in the first half of the year with a unique rear camera setup and 5G support. | https://medium.com/@netsgreat/key-specifications-and-price-leak-of-lg-rollable-smartphone-778f69f712b2 | [] | 2020-12-18 14:56:13.582000+00:00 | ['Tech', 'Technology', 'Technews', 'Smartphones']
How to Write Suspense With The Da Vinci Code Author’s Toolbox
Photo by Eric TERRADE on Unsplash
Reading the book that accompanies the MasterClass Dan Brown Teaches Writing Thrillers, I noticed that Dan Brown refers to a couple of books written by James Scott Bell that I happen to have on my bookshelf. And it was funny to see that James Scott Bell mentions The Da Vinci Code in his examples.
So, I decided to do a cross-reading about suspense, which gave birth to this article. We will use James Scott Bell's four types of suspense and his theory as the framework for the fourteen resources Dan Brown explains he uses to create suspense in his thrillers. Yes, like he did in The Da Vinci Code.
I like how James Scott Bell explains suspense in his book Elements of Fiction Writing: Conflict and Suspense. It is very logical yet artistic. Insightful indeed. Masterful without a doubt.
I was especially drawn to the systematic classification of suspense he does. And that we will cover in this article. He identifies the following types of suspense:
Macro Suspense; Scene Suspense; Hypersuspense; and Paragraph Suspense.
After getting into the right mindset with James Scott Bell's framework, we will cover Dan Brown's toolbox for creating suspense.
James Scott Bell on writing suspense
The suspense every story needs is one in which the reader asks, “What happens next?”
“Suspense is the delay of resolution.” — James Scott Bell
Suspense is about pleasurable uncertainty. The reader wants to know what will happen and has to turn the page.
What we are looking to build is a bond between our reader and character. Our reader has to care about our character, and because of this, when she is in high-stakes trouble, our reader feels the urgency to know how it turns out. We have to sustain that throughout the whole story.
Macro suspense
It is the withholding of resolution from beginning to end of your story.
Our story has to make our reader turn all the pages of our story to resolve the macro suspense. One way I use to understand this is to imagine that we are in a helicopter. We leave our story on the ground. When we are high enough, we see clearly the macro suspense of our story. We see it in the shape of one sentence.
We have to be able to write the macro suspense of our story in one sentence. For example:
Will Katniss Everdeen survive The Hunger Games? (The Hunger Games by Suzanne Collins)
Will Ender defeat the Buggers? (Ender’s Game by Orson Scott Card)
Will Harry Potter defeat Lord Voldemort? (Harry Potter novels by J. K. Rowling)
The macro suspense sentence is a resource to keep at hand as a reminder when writing your story.
Scene suspense
When writing a scene, make sure it has something unresolved, the outcome. Our character has to enter the scene with a goal related to the macro suspense. As writers, our job is to put obstacles, so our reader constantly wonders if our hero will succeed or not.
In Ender’s Game, there is a scene where Ender is alone in the showers, and he is ambushed by Bonzo Madrid’s gang who want to hurt him. We can feel the scene suspense when Bonzo says:
“Naked and wet and alone, Ender, so we’re even. I can’t help that I’m bigger than you. You’re such a genius, you figure out how to handle me.”
We as readers care about Ender and want to read more to see if Ender survives the fight with Bonzo.
Basically, the helicopter has descended a little, and now we see the scene suspense within the macro suspense. It's about zooming in and out.
Hypersuspense
This is the case when neither the character nor the reader knows what is happening and who exactly is against them. This occurs naturally with the first-person point of view (POV) because the reader only knows and sees what the main character knows and sees. However, this can also be accomplished with a third-person POV that follows the lead, taking care not to reveal another POV.
By withholding information from the lead character, we withhold it from the reader as well. Here the helicopter is closer to the ground. This technique is responsible for the hypersuspense we experience in Ender's Game, especially in chapter fourteen, where the masterful punch of the story resides, which I would never ruin for a reader by giving spoilers or clues.
Paragraph suspense
According to James Scott Bell, this is the smallest unit of suspense. We must not forget that dialogue interactions are made up of paragraphs. The following paragraph from Jane Austen's Pride and Prejudice makes us want to read more to know what Mr. Darcy will do.
“. . . she saw Mr. Darcy walk into the room. In a hurried manner, he immediately began an inquiry after her health, imputing his visit to a wish of hearing that she was better. She answered him with cold civility. He sat down for a few moments, and then getting up, walked about the room. Elizabeth was surprised but said not a word. After a silence of several minutes, he came towards her in an agitated manner, and thus began . . .”
As you can see, here the helicopter is on the ground. Maximum zoom in.
Dan Brown’s toolbox for creating suspense
James Scott Bell's understanding of creating suspense complements very well the toolbox for creating suspense from Dan Brown's MasterClass on writing thrillers.
“Suspense is all about making promises. It’s about telling a reader, ‘I know something you don’t know. And I promise, if you turn the page, I’m going to tell you, if you keep going.’ ” — Dan Brown
These fourteen items are my notes from Dan Brown's MasterClass. They aren't literal transcriptions but my abstraction and understanding. Even the examples you will read are mine and not the ones he uses in his class.
Dan Brown doesn't state these fourteen resources as such in his MasterClass, but rather explains them while presenting his toolbox and how he used it to write The Da Vinci Code and his other thrillers. The book accompanying his MasterClass includes a bullet list of his tools, but valuable knowledge from his exposition was left out.
Cutting to the chase, these are the fourteen items I identified in Dan Brown's toolbox:
Create impending danger
The most direct way to create tension. For example, your hero is sitting in a park reading, and a sniper is waiting for the right moment to shoot.
Describe a feeling in your character's mind
Here, in an internal monologue, your character states that she feels something is off: an ominous or weird feeling of walking right into something dangerous.
Put a secret in one of your characters' backstories
Your reader becomes aware that the character's current behavior has its roots in something that happened in the past, but you don't reveal it yet. The reader knows that if she keeps reading, she will eventually find out.
Use a hidden identity
Dan Brown says that this trick isn't subtle but works. It’s about a mastermind, a puppeteer working from the shadows whose identity the reader tries to guess. And our work as writers is to show at the end that it is not who the reader thought, logically and believably.
Introduce parallel plotlines
This is about showing, in different chapters or scenes, plotlines that the reader knows will eventually meet. For example, in one chapter you show a young man who doesn't believe in love because it's just a chemical reaction in our brain. In the next chapter, you show a young painter traveling to Paris after breaking up with her boyfriend. There you created suspense: the readers know the two will fall in love, but they don't know how it will happen. They will keep reading to find out.
Make big promises and make them early
The key is to make these promises believable. For example, in The Hunger Games there is a big promise in the first chapter: through the description of how Panem, the districts, and the hunger games work, we know it's unfair and that it has to change. We readers know that the writer promises us that if we keep reading, we will see that status quo change, a promise Suzanne Collins eventually delivers with her trilogy.
Cut the timeframe
What we do here is compress the story timeline. For example, all the events of your story occur in one day. In fact, do you remember the famous serial drama 24? That is exactly what we do in this case. We commit to making our story happen in twenty-four hours. It provides pure suspense.
Start sentences with a sense of urgency
You can also start with a feeling of discomfort. To illustrate, you can think of a girl running to be on time for her dissertation defense. You can also start the chapter with a question like “Where am I?” Instant suspense there.
Use different kinds of cliffhangers
My favorite example is the ending of the first chapter of The Hunger Games, when Effie Trinket reads the name of the female tribute: “Effie Trinket crosses back to the podium, smoothes the slip of paper, and reads out the name in a clear voice. And it’s not me. It’s Primrose Everdeen.” It's Katniss's little sister, Prim! I wonder if there is a soul in this world that can put down the book and not turn the page to find out what will happen to “little duck” Prim.
End the chapters earlier
An evil genius came up with this trick. Take one normal chapter with its resolution. Establish the ending of the chapter just before that resolution and put the resolution in the next chapter. Your reader will be immersed in a new chapter, and with another resource from this toolbox in play, the poor soul won't sleep until finishing your book.
Use moments of a revelation
Eureka! I got it! Put similar expressions at the end of the chapter and don't tell the reader what the character discovered. Yes, the answer is on the next page in the next chapter. Turn the page, dear reader.
Withhold information
Show in your story that your character is looking at something and has a sudden sad, surprised, or horrified reaction. Don't show what your character is looking at. Don't reveal that information until the next page.
Use flashbacks to make big promises
Do you remember The Wolf of Wall Street's opening scene? It shows us how Jordan Belfort (Leonardo DiCaprio) lives a life of riches and unbridled excess. Then the film flashes back to when he was twenty-two years old, and the big promise is that if we keep watching, we will find out how he got to be The Wolf of Wall Street.
Instill sudden suspense with pulses
The pulses are reminders to the reader about a threat. They take the shape of little sections in your writing, even as short as two lines. They can remind our readers that a sniper is waiting for our hero to step outside the building.
Takeaways
Although writing suspense is not an easy task, the key is understanding which type of suspense you are creating and the suspense tools at your disposal.
“Withholding is one of the real keys to writing suspense.” — Dan Brown
Write down the macro suspense sentence of your story: the question that makes your reader turn all the pages of your book to find out the resolution. Put it somewhere you can see it when you write.
Then, at the scene level, make sure each scene has a question to be resolved. Start with it and provide resolution using one, or a combination, of the fourteen tools for creating suspense.
Afterward, in your scenes and chapters, try to create hypersuspense, limiting what the character and the readers know and making them wonder, “What is going on here? Who is behind all this?” Remember, you can do it with both first- and third-person POV.
Check for opportunities to instill suspense in your paragraphs; remember, they are the smallest units of suspense, and dialogue interactions are made of paragraphs.
Finally, to create suspense at every level of your story, use Dan Brown's toolbox. Pick a tool, or a combination of them, and target a scene or chapter with it. Your writing goal is to promise your reader that if she keeps reading, she will discover the answers; withhold information to create pleasurable uncertainty.
If you do struggle, review the type of suspense you are working on in the first part of this article, and experiment with the fourteen tools of suspense detailed in the second part. Eventually, you will create the feeling you intend.
Previous writing tips by J.E. Guzman | https://writingcooperative.com/how-to-write-suspense-with-the-da-vinci-code-authors-toolbox-c0f32ac6a04 | ['J.E. Guzman'] | 2021-01-27 13:02:36.991000+00:00 | ['Toolbox', 'Masterclass', 'Suspense', 'Writing', 'Writing Tips'] |
Is Vazgen Manukian Armenia’s Next Prime Minister? | The former Prime Minister and Defense Minister of Armenia was selected by a coalition of 17 opposition parties to lead a transitional government amidst calls for the resignation of current Prime Minister Nikol Pashinyan.
Vazgen Manukian, Credit: Asbarez
As protests and large-scale opposition grow against the government of Nikol Pashinyan in the aftermath of the country's loss in Nagorno-Karabakh, Armenian citizens are actively looking for a new transitional government. Protest encampments have been set up in the capital Yerevan outside of the Parliament, with citizens and political leaders alike vowing to stay in tents outside government buildings until the announcement of Pashinyan's resignation.
The leader of the Armenian Revolutionary Federation, Ishkhan Saghatelyan, stated that, “we’ve already set up tents, we intend to stay as long as we have to, including sleeping here. Pashinyan must resign.” Protestors in the square chanted “Nikol, traitor,” echoing protests earlier this week during a memorial march for Armenian soldiers killed in the fighting with Azerbaijan.
Earlier this month, a council of 17 opposition parties named the “Homeland Salvation Movement” was convened, and it named former Prime Minister Vazgen Manukian as its candidate to take charge in the event of a Pashinyan resignation. The 74-year-old political leader has given speeches this past week to demonstrators in Freedom Square, calling for all Armenian citizens to take to the streets to express their discontent and demand a better future for Armenia. As the calls for PM Pashinyan's resignation grow stronger each day, it's critical to look at who could be the face of the future Republic.
Protests in Yerevan, December 2020, Credit: Lusi Sargsyan
Manukian is a familiar face in Armenian politics. Before becoming the country's first prime minister in 1990, he was a mathematician and member of the nationalist Karabakh Committee: a group of Armenian intellectuals advocating for the unification of Nagorno-Karabakh with the Republic of Armenia. He was jailed by Soviet authorities for blockading aid supplies to Azerbaijan alongside other future Armenian politicians and prime ministers. He resigned the premiership days after independence from the Soviet Union in 1991 and formed the National Democratic Union the same year. The NDU is a liberal nationalist party dedicated to the promotion of human rights, liberal economic advancement and development, and the reunification of Armenia and Karabakh. In fact, in looking at its website, one might be forgiven for thinking it a single-issue party. This makes his political ideology and the party's agenda rather appealing for a war-beleaguered Armenia in 2020.
Manukian became State Minister, then Defense Minister upon the Ministry of Defense's creation during the first Karabakh War in 1992. This enhanced the nation's position in the war and led to the independence of Karabakh from Azerbaijan. This moment is prized as a major victory for the Armenian people and a step towards regaining lands lost to the Soviets and Ottomans.
Vazgen Manukian alongside opposition leaders marching in Yerevan, Credit: Hrant Khachatryan via AP
After an unsuccessful run for President in the 1996 Armenian elections considered fraudulent by his party, Manukian became a strong force in Armenian political opposition. He ran for president in each election cycle until 2008 and was appointed as chairman of the Public Council of Armenia in 2009. The Council is described as a consulting advocacy organization meant to promote the consolidation of civil society in Armenia and the promotion of citizen issues in Armenian government. Since Manukian’s appointment, the Council has come under the guidance of the Armenian National Assembly and ties have been established with public councils of various other nations. He was reappointed chairman of the Council by PM Nikol Pashinyan in 2019, but replaced only a few months later.
As photos are published of the December 2020 protests in Yerevan, one cannot help but draw similarities to the movement that brought Pashinyan to power. He too was a former political prisoner and opposition leader who led speeches and rallies against a sitting Prime Minister. He called for civil disobedience in the form of a general blockade of the capital city and was named the “people's candidate.” Tents were pitched in Freedom Square in an effort to bring about a peaceful revolution.
Nikol Pashinyan leading protests in Yerevan, 2018, Credit: RFE/RL
Vazgen Manukian seems to be taking several pages out of the Pashinyan protest playbook. He has secured a very visible and central place in the protest movement against the current PM. In a speech to protestors on December 22, he stated that “until today, I was ashamed that we have not removed [Pashinyan], I was ashamed of myself, of our people, of the state, because I was one of the founders of this state, I felt responsible that we suffered such a disgraceful defeat.” His statements went on to call for the police and army to join with the people and “make an announcement,” possibly alluding to a military demand for Pashinyan's resignation.
Manukian's foundational experience advocating for Armenian-Karabakh unification, military leadership in the first Karabakh War, and past positions in public policy have received a great amount of attention in Armenian media as the background of a possible successor to Nikol Pashinyan. The support of a multiparty opposition coalition alongside mass protests in Yerevan this week signal that Vazgen Manukian's re-ascension to leadership is fast approaching. Only time will tell if it will come through a peaceful transition or a bloodless coup. | https://medium.com/@nikjain/is-vazgen-manukian-armenias-next-prime-minister-f60ae96bd52b | ['Nikhil Jain'] | 2020-12-23 19:58:38.978000+00:00 | ['Nikol Pashinyan', 'Armenia', 'Karabakh', 'Protest', 'Vazgen Manukian']
What’s Next? | Cartoonist for the New Yorker among other places. I love my mom and white wine. My graphic memoir MURDER BOOK out in 2021. Follow on Insta @CartoonsbyHilary!
| https://medium.com/spiralbound/whats-next-2d396f33be9c | ['Hilary Fitzgerald Campbell'] | 2019-03-02 14:35:50.230000+00:00 | ['Anxiety', 'Mental Health', 'Nonfiction', 'Comics', 'Lifestyle']
AI Marketing Tools for 2021 | We are happy to help you set up your own team, which could comprise just a single person or could scale up to manage individual components of your tech stack at a digestible cost.
Join a worldwide community of 250,000+ marketers and revenue operators from top demand generation agencies like Media7, publishers, and SaaS companies. You can also read technology reports that will boost your business.
Rice paper
It was named a Leader in the New Wave Advertising: Retail and Marketing Competitive, Q4 2019 report. The tool lets you track and analyze everything that happens outside of your business: it tracks a competitor's digital footprint, captures updates on or off their website, monitors 100+ types of files stored on the platform, and includes filters that sort information by category, company, keyword, and date.
Drift
Drift describes itself as the world's first conversational marketing platform and has been recognized in Gartner's latest reporting. Drift uses bots that hold real-time conversations with people who come to your website and connects you with customers who plan to buy now. Using Bot Builder, you can easily create a great conversion flow for your website in minutes.
HubSpot
HubSpot provides marketing, sales, and customer service software, with a completely free CRM at its core. It gives you everything you need to improve, track, and build relationships with leads and customers.
The software lets you create landing pages, manage your social media campaigns, and measure the success of your content by leads and users rather than by page views, time on page, and bounce rate.
Google Cloud AI
Deliver customer experiences that increase sales while reducing costs, and get recommendations for making better business decisions. The platform provides a variety of services such as compute, storage, and big data, and it runs on the same infrastructure that Google uses internally for its end-user products such as Google Search and YouTube.
IBM Watson
Watson for business is an AI-based digital marketing offering designed to build an open ecosystem that helps teams become smarter and deliver the knowledge their customers need. It sends personalized user journeys across email, mobile push, and SMS, and integrates with AI-powered SaaS marketing automation solutions. Watson Assistant provides critical thinking skills, delivers personalized and differentiated messaging, uses metrics to manage lead completion, and reports performance on dashboards.
People.ai
It provides an industry-leading Revenue Intelligence System (RIS) that automates the capture of all contact and activity data, transforms CRM, and provides intelligence across CRM and other sales management tools.
RIS leverages the power of AI, with innovations protected by sixty-five patents, to deliver immediate feedback, next-generation performance, and predictive intelligence to reps, management, and even the C-suite of large companies.
Word patterns
Wordsmith is a new-generation natural language generation platform that transforms your data into narrative text. It gives companies complete control over the transformation of data into descriptive insights, at any speed and scale. It is a self-service platform that delivers timely updates, information on the fly, and powerful numbers for easy reporting.
AI will not replace marketers or advertisers, but it will help them convey true ideas and capabilities. These applications will become increasingly important in improving the success of the advertising technology industry. | https://medium.com/@desozas236/ai-marketing-tools-for-2021-624a10b0501e | ['Sophie Dsouza'] | 2020-12-17 04:22:21.839000+00:00 | ['Technology Trends', 'Tools', 'Technology News', 'AI', 'Technology']
The best smart lock for a keyless home | While traditional lock-and-key systems have improved over time, the basic mechanism hasn’t really changed since the first lock was invented more than a thousand years ago: A piece of metal that is just the right shape pushes pins inside a lock into the proper position, allowing the lock mechanism to turn. As a society, it’s been tough to replace a system that has worked reasonably reliably for literally a millennium.
Are smart locks really better?
You can thank the hospitality industry for finally pushing locks into the digital age. Hotels learned long ago that keys are easily lost, expensive to replace, and simple to bypass, as thieves can pick locks or simply make copies of a key to allow for unfettered future access. On the flipside, hotel guests have readily accepted key cards (and in some cases, smartphone-based solutions) as the primary means of getting into their room. The electronic solution is just so much simpler. Lost hotel key card? Replacing it is no big deal.
But the biggest benefit of electronic entry systems is that they are highly configurable. Digital locks can be changed at a moment’s notice (which is why that old hotel key card in your wallet isn’t good for anything), and the property owner can generate a record of when each door was opened. In a more advanced setting, different keys can be generated for the same lock, so a homeowner can tell when each member of the family came in, or when the housekeeper arrived.
Whether you have a teenager who tends to break curfew or merely want to give temporary access to houseguests, service providers, or Airbnbers, smart locks are an incredible upgrade over the old way of doing things. Ready to make the jump to smart lock technology? Here are our top picks of the market at the moment.
Best smart lock overall: Kwikset Halo Touch
Out with the keypad, in with the fingerprint scanner, and a few other welcome changes to make this lock a winner.
Some will argue that we should have named the Level Touch our top pick in this category (it earned a higher score, after all), but Level treats iOS users better than it does Android users. Kwikset also ditches the old familiar keypad in favor of a fingerprint reader on its latest smart lock.
Runner-up: Level Touch
Level gets external hardware, touch-sensitive opening, and more features in a compelling, but expensive, smart lock system.
The minimalist Level Touch looks as good as it operates and is our highest-rated smart lock. The reason we’ve named it runner-up, versus best-of-the-best in this category, is that it can be remotely controlled only with an iOS device. The Android version of the Level app includes every feature the iOS app does, with that one important exception.
Best retrofit smart lock: Level Bolt
Level’s “invisible” lock lives up to its promise, giving any deadbolt smart features with no change to your exterior hardware.
The August Smart Lock Pro has been deposed. Level Home’s Level Bolt is our new favorite retrofit smart lock, thanks in large measure to its ability to disappear. The smart components of the Bolt hide inside your door, where they replace the interior mechanical elements of your existing deadbolt. You re-attach your existing interior and exterior components, so that your door’s overall aesthetic doesn’t change in the least.
Runner-up: August Wi-Fi Smart Lock
August adds Wi-Fi while shrinking the overall size of its hardware considerably, making this a better retrofit option than ever.
Weighing the convenience of Wi-Fi versus the visual appeal of the zero footprint that the Bluetooth-only Level Lock offers was tough, but Level Lock won out in the end. Hey, achieving runner-up status in TechHive’s coverage is no easy accomplishment, and if the convenience of Wi-Fi outweighs aesthetics in your mind, then this is the retrofit smart lock to buy.
Best budget smart lock: Wyze Lock
Wyze continues to rule the budget smart home world with this good-enough lock that’s half the price of some competitors.
Wyze Labs has a well-earned reputation for producing bang-for-the-buck smart home products, and its new Wyze Lock deadbolt converter is certainly no exception. Fit this device to your existing deadbolt, plug in the included Wi-Fi bridge, and you'll convert your dumb deadbolt into a modern smart home device that can still be opened with your existing key, and you can't argue with its less-than-$100 asking price.
Most innovative smart lock: Lockly Vision
Lockly Vision successfully integrates two complex smart home devices, a deadbolt lock and a video doorbell, into a single, compelling security device.
The Lockly Vision reduces the number of security devices you'll need to deploy on your porch by combining a powerful smart lock with a great video doorbell. It's not perfect (it's big, bulky, and not very attractive), but there's nothing else like it on the market.
What to consider when shopping
How do you pick a smart lock? (No pun intended.) This is a young and wildly immature space, and many products on the market are still extremely rough around the edges. Even top products can balk when dealing with old or stiff deadbolts, doors that don’t shut well, or environments where non-standard fixtures are in use. The bottom line is that it’s tough to declare that any product is universally perfect for every home. That said, here are some key considerations to take into account to help you narrow down your shopping list.
Image credit: Bluetooth Special Interest Group
How smart is it?
An important caveat to consider first: Some “smart locks” don’t work with a smartphone app or any smart home networking hubs at all; they’re really just electronic locks that use a code instead of a key to open up. One step up from that, you’ll find Bluetooth-only locks. These work with a smartphone app, but can’t be monitored remotely or via a smart home system. That’s fine if you’re looking to get rid of the keys in your pocket, but less impressive if you want to make your entrance portals a true part of your home network.
Replace or retrofit?
You’re forgiven if you don’t want to replace your antique doorknob on your vintage Victorian with a metallic device that looks like it would be more at home keeping people out of a strip mall bank branch. A sizeable number of smart lock products don’t require you to replace all your existing hardware. Instead, they are installed on the inside of the door only, replacing only the interior part of the deadbolt. You can continue to use a standard key from the outside or open the lock via a smartphone app.
Image credit: Samsung
Smart home hub integration
If you have an existing hub like the Wink Hub 2, Samsung SmartThings, or an Apple TV, you’ll want to ensure your chosen smart lock is compatible from the start. Many smart locks support Bluetooth, so they work with your phone, but lack the technology needed to connect with your home network. Some Bluetooth locks, like the Yale Assure system and the August Smart Lock, offer a radio module as an add-on to connect to your home network. If you’re using a smart home hub like the Samsung SmartThings or Wink Hub, look for a lock that supports Z-Wave or ZigBee, instead.
Alternate entry means
Forgot your phone at the office? How will you get in the house? Physical key? Numeric keypad? Wait for your spouse to arrive? Myriad different approaches are available.
Power backup
What do you do if the batteries inside the lock die and you’re stuck outside? Some locks allow for emergency power to be applied should this happen. Yale’s Real Living locks feature external posts to which you can connect a 9-volt battery, giving you enough juice to get the door open. Other models retain the traditional key cylinder for backup.
Geofencing
Bad about manually locking the door when you leave? A geofencing system automatically locks the door when it detects your phone has left the vicinity, and can be set to automatically open up when it finds you’ve come home.
Guest access features
Most smart locks let you set up temporary keys for houseguests, which you can delete when they’ve returned home. Smarter systems even let you set time restrictions around when each access code can be used.
| https://medium.com/@wayne24648805/the-best-smart-lock-for-a-keyless-home-a8962565a0a | [] | 2020-12-24 17:25:46.803000+00:00 | ['Audio']
Unsupervised and supervised learning for customer segmentation — Part 1 | Part 1 Unsupervised Learning
As mentioned, regarding the unsupervised model part, I used two datasets for the task:
azdias: Contains demographic information about the general population with shape of 891,221 rows and 366 columns
Contains demographic information about the general population with shape of 891,221 rows and 366 columns customers: Contains demographic information about company-only individuals with shape of 191,652 rows and 369 columns
As with most tasks that involve machine learning models, the first step is data cleaning. For this project, all the data cleaning steps were put inside a function, which is rather long; if you are interested, the function is contained inside the GitHub repo where all the code used for the project is located.
After all the cleaning was done, I proceeded to impute and scale the data. For the former, I decided to impute based on the most frequent value, as pretty much all the columns with missing data were of ordinal type. For the latter, I went with scikit-learn's StandardScaler class, which standardizes features by removing their mean and scaling them to unit variance; this step is very important, as ML models tend to learn better from the data this way. This resulted in two data frames: df, representing azdias after preprocessing, and df2, representing customers after preprocessing.
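A minimal sketch of that imputation-and-scaling step is shown below; azdias_clean and customers_clean are hypothetical names standing in for the outputs of the cleaning function, since the real variable names live in the repo:

# Impute missing values with the most frequent value, then standardize.
# azdias_clean / customers_clean are placeholders for the cleaned inputs.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

imputer = SimpleImputer(strategy='most_frequent')
scaler = StandardScaler()

# Fit on the general population, then apply the same transforms to customers
df = pd.DataFrame(scaler.fit_transform(imputer.fit_transform(azdias_clean)),
                  columns=azdias_clean.columns)
df2 = pd.DataFrame(scaler.transform(imputer.transform(customers_clean)),
                   columns=customers_clean.columns)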
Before applying the model, a principal component analysis was performed on the data to reduce its dimensionality. To perform this task, I used scikit-learn's PCA class, fitted it on df, and plotted the explained variance per principal component in order to choose the number of features to shrink the data frame into.
# Initialize PCA and fit it on df to inspect the explained variance
from sklearn.decomposition import PCA

pca = PCA()
pca = pca.fit(df)
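The cutoff discussed below was chosen from a plot like the following (a sketch on my part; matplotlib is assumed):

# Cumulative explained variance as a function of the number of components
import numpy as np
import matplotlib.pyplot as plt

plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative explained variance')
plt.show()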
# Re-fit PCA with 230 components and transform df and df2
pca = PCA(n_components=230)
pca = pca.fit(df)
df_pca = pca.transform(df)
df2_pca = pca.transform(df2)
After fitting PCA on the general population and plotting the explained variance per component, I chose 230 as the final number of features to shrink the dataset into, as they explain roughly 90% of the variance in the dataset, as seen in the graph. The two resulting data frames after all the preprocessing and dimensionality reduction were df_pca and df2_pca, the former representing the general population and the latter representing the company's own customers dataset.
Now for the interesting part: to extract which parts of the general population are potential new customers for the company, I fitted the K-Means algorithm on df_pca with 25 clusters and used this same fitted model to predict on both df_pca and df2_pca, to later check which groups from the general population over-represent and under-represent the data in the customers dataset.
# Fit the K-Means algorithm with 25 clusters on df_pca
from sklearn.cluster import KMeans

model = KMeans(n_clusters=25)
model = model.fit(df_pca)

# Make cluster predictions on both df_pca and df2_pca
pred = model.predict(df_pca)
pred2 = model.predict(df2_pca)
With these two predictions, I proceeded to create one final data frame containing the proportion of each cluster (0–24) in pred and pred2, in order to see which cluster is over-representing and which is under-representing the customers dataset. In other words, the over-represented cluster will contain the demographic information of the individuals best suited to become the company's new customers, and the under-represented one will show the opposite.
def cluster_prop(predictions):
    # Share of observations assigned to each cluster
    s_pred = pd.Series(predictions)
    pred_counts = s_pred.value_counts()
    ind = pred_counts.index.values
    pred_counts = pd.DataFrame(columns=['Counts'],
                               data=pred_counts.values, index=ind)
    # Divide by the length of the local argument, not a global variable
    pred_counts = pred_counts.assign(prop=lambda x: x.Counts / len(predictions))
    return pred_counts

# Apply cluster_prop on pred and pred2 to get their proportions
pred_df = cluster_prop(pred)
pred2_df = cluster_prop(pred2)

# Create a new dataframe containing both sets of proportions
comparative_df = pd.DataFrame({'azdias_cluster_prop': pred_df.prop,
                               'customers_cluster_prop': pred2_df.prop,
                               'Cluster': range(0, 25)})

Plot comparative_df:
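The comparison plot itself can be reproduced with something like this (a sketch, assuming matplotlib):

# Side-by-side bars of cluster proportions for both populations
import matplotlib.pyplot as plt

comparative_df.plot(x='Cluster',
                    y=['azdias_cluster_prop', 'customers_cluster_prop'],
                    kind='bar', figsize=(12, 5))
plt.ylabel('Proportion of individuals')
plt.show()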
Looking at the graph, cluster 7 clearly shows to be the one over-representing the customers' data, and cluster 6 the one that under-represents it. With these cluster numbers in hand, the next step is to extract the information contained inside each of them in order to create a report on what the characteristics of the individuals inside both groups look like.
In order to achieve this, I had to reverse-engineer the data contained inside df_pca to get the demographic information of the individuals that are part of the chosen clusters.
# Reverse engineering for cluster 7
import numpy as np

pca_cluster = df_pca[pred == 7]
inverse_pca = pca.inverse_transform(pca_cluster)
inverse_scale = scaler.inverse_transform(inverse_pca)
data = np.round(inverse_scale)
df3 = pd.DataFrame(data=data, columns=df.columns)

# Reverse engineering for cluster 6
pca_cluster = df_pca[pred == 6]
inverse_pca = pca.inverse_transform(pca_cluster)
inverse_scale = scaler.inverse_transform(inverse_pca)
data = np.round(inverse_scale)
df4 = pd.DataFrame(data=data, columns=df.columns)
From df3 and df4, I chose the six features below to explain what kind of demographic information cluster 7 and cluster 6 contain (a quick way to summarize them is sketched right after the list):
ANREDE_KZ: Gender
HH_EINKOMMEN_SCORE: Estimated household net income
OST_WEST_KZ: Flag indicating east or west
Wealth_Encode: Level of income in each household
Life_Stage_Encode: Type of family
MOBI_REGIO: Moving patterns
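One simple way to pull those summaries out of the reconstructed dataframes is to look at the most frequent value of each feature per cluster; this sketch assumes the column names match the spelling in the list above:

# Most common value of each selected feature in clusters 7 (df3) and 6 (df4)
features = ['ANREDE_KZ', 'HH_EINKOMMEN_SCORE', 'OST_WEST_KZ',
            'Wealth_Encode', 'Life_Stage_Encode', 'MOBI_REGIO']

print(df3[features].mode().iloc[0])   # over-represented cluster
print(df4[features].mode().iloc[0])   # under-represented cluster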
I divided the analysis into two parts: Popular, containing information on the individuals that are most receptive to the company’s products or campaigns (cluster 7) and Unpopular, containing information of the least receptive individuals (cluster 6).
Popular:
Anrede_kz: Both men and women. (1: men, 2: women)
Hh_einkommen_score: Persons with very high and high income. (2,3)
Ost_west_kz: People from the west. (1)
Wealth_encode: Wealthy and prosperous households. (4,5)
Life_stage_encode: Mature couples. (4)
Mobi_regio: Very little to no movement. (4,5)
Unpopular:
Anrede_kz: Women. (2)
Hh_einkommen_score: People with average and low income. (5,6)
Ost_west_kz: West living people. (1)
Wealth_encode: Poorer and less affluent households. (1,2)
Life_stage_encode: Pre-Family Couples & Singles. (1)
Mobi_regio: High mobility and very high mobility. (1,2)
Conclusion:
Popular: The best individuals to target, according to the unsupervised learning model, can be described as mature to elder couples that have considerable income and wealth and are well settled.
Unpopular: The worst individuals for the company to target are: pre-family couples or singles (women) that are young, don’t have much income and wealth and have high mobility.
A more in-depth analysis can be made by accessing information from the rest of the features, which can show more detailed information on each of the clusters, but for time's sake, only these six features were presented.
To check Part 2 you can click here. | https://medium.com/@alejandrogalindomedina/unsupervised-and-supervised-learning-for-customer-segmentation-part-1-54fec0cff9e6 | ['Alejandro Galindo Medina'] | 2019-07-19 15:30:55.944000+00:00 | ['Unsupervised Learning', 'Machine Learning', 'Supervised Learning', 'Customer Segmentation'] |
Node Embeddings | Embedding Techniques
We followed embedding techniques that, besides the popular Graph Neural Networks, lay the foundation for most of the novel methods in node embeddings. For comparison, we refrain from tuning the models further with modern methods. This work therefore provides a comparison of the underlying techniques of node embedding. The models are subdivided by different similarity scores, metrics that capture the proximity of nodes in the graph.
Image 1: Loss for Node Embeddings, Image by Author
Bernoulli (4 Models)
For the Bernoulli models, a similarity score of the true graph is given in the form of the adjacency matrix, where an entry is 1 if the two nodes are connected and 0 otherwise. The decoder gives the similarity score in embedded space, which can be a sigmoid, exponential, Gaussian, or distance-based embedding. Therefore, we model the loss to generate the adjacency matrix given the embedded matrix
Bernoulli Likelihood, Image by Author
Where the kernel can be either
Sigmoidal, Image by Author
Exponential, Image by Author
Gaussian (also RBF Kernel), Image by Author
Distance based #2, Image by Author
To train the model, ie. let the embedding be as close as possible to the original graph, the log likelihood is maximized.
Bernoulli Likelihood, Image by Author
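The formula images above did not survive this copy, so here is a hedged reconstruction in my own notation (an assumption on my part: z_u and z_v denote the embeddings of nodes u and v, A the adjacency matrix, and p_{uv} the decoder's edge probability). The maximized log-likelihood and common textbook forms of the four kernels are:

\log \mathcal{L} = \sum_{u,v} A_{uv} \log p_{uv} + (1 - A_{uv}) \log (1 - p_{uv})

% candidate kernels for p_{uv} = p(A_{uv} = 1 \mid z_u, z_v):
p_{uv} = \sigma(z_u^{\top} z_v) = \frac{1}{1 + e^{-z_u^{\top} z_v}}    % sigmoidal
p_{uv} = 1 - e^{-z_u^{\top} z_v}                                       % exponential
p_{uv} = e^{-\lVert z_u - z_v \rVert_2^2 / \sigma^2}                   % Gaussian (RBF)
p_{uv} = \frac{1}{1 + \lVert z_u - z_v \rVert_2}                       % distance-based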
KL Divergence (3 Models)
KL divergence is a metric of how different one probability distribution is from another. In our case, the graph is modelled as a probability distribution, i.e., the rows of the graph's matrices sum to one. Different models for capturing the true graph's characteristics in a probability distribution are used, namely Personalized PageRank, Sum of Power of Transitions, and the Transition matrix. Of course, the graph in embedded space also needs to be a valid probability distribution for KL divergence to make sense; therefore, softmax is applied to the embedded nodes. For the embedded graph to be as proximate as possible to the original graph, or more precisely to the graph's characteristics captured in the respective encoders, the KL divergence is minimized.
KL Divergence, Image by Author
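Reconstructing the missing formula in the same spirit: for a node u, with P_u the corresponding row of the encoder matrix (e.g., Personalized PageRank) and Q_u the softmax over the embedding scores, the minimized quantity is the standard divergence

D_{\mathrm{KL}}(P_u \,\Vert\, Q_u) = \sum_{v} P_u(v) \log \frac{P_u(v)}{Q_u(v)}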
SVD (7 Models)
In the Singular Value Decomposition models, the well-known method is applied to different matrices that capture the graph's characteristics. With that, the matrices are embedded in a lower-dimensional space. | https://towardsdatascience.com/node-embeddings-e28799478cb9 | ['Till Richter'] | 2020-12-04 17:47:21.666000+00:00 | ['Artificial Intelligence', 'Getting Started', 'Graph', 'Network', 'Machine Learning']
Automate your ML model retraining with Kubeflow
Photo by JOHN TOWNER on Unsplash
This is the first part of a 3 parts series where I explain how you can build a cost-efficient and automated ML retraining system with Kubeflow. Along the way, we’ll also pick some best practices around building pipelines.
While Kubeflow Pipelines isn't yet the most popular batch job orchestrator, a growing number of companies are adopting it to handle their data and ML job orchestration and monitoring. Kubeflow is designed to benefit from Kubernetes' strengths, and that's what makes it very attractive.
In this article, I’ll show you how you can build an automated and cost-efficient ML model retraining pipeline using Kubeflow Pipelines. As you might already know, retraining ML models is necessary to keep them accurate and cure the model drift curse. In case you are familiar with Airflow or planning to use it, make sure to have a look at this article where I demo how to build the exact retraining system using Airflow.
Now, let's say you have created a nice ML model to predict the taxi fare of a car ride and served a first version of the model. The retraining system you'll be building is made of 2 pipelines:
The first pipeline taxi-fare-predictor trains the ML model and serves it for prediction if the model outperforms the live version of the model. The second pipeline retraining-checker runs every now and then and checks if the model has become obsolete. If so, it triggers the first pipeline.
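As a rough sketch of what the first pipeline's skeleton looks like in the kfp v1 SDK (the echo containers below are placeholders I made up, not the article's real training and serving components):

# Skeleton of the taxi-fare-predictor pipeline; the echo steps stand in
# for the real training and serving components.
import kfp
from kfp import dsl

@dsl.pipeline(name='taxi-fare-predictor',
              description='Train the model, then serve it if it beats the live version')
def taxi_fare_predictor():
    train = dsl.ContainerOp(name='train-model',
                            image='python:3.8',
                            command=['echo', 'train and evaluate the model'])
    serve = dsl.ContainerOp(name='serve-model',
                            image='python:3.8',
                            command=['echo', 'deploy the better model'])
    serve.after(train)   # serving only runs after training succeeds

kfp.compiler.Compiler().compile(taxi_fare_predictor, 'taxi_fare_predictor.yaml')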
To make things look real, you'll also create two additional pipelines that send predictions to the live version of the model, one of which sends inputs that are purposely far from the inputs in the training set. The goal here is to simulate model drift and have our retraining-checker pipeline trigger the taxi-fare-predictor pipeline. Please bear with me till the end to build the solution pictured below.
ML Retraining System with Kubeflow Pipelines, Image by Author
Whenever possible, define your pipeline components as yaml files
There are three ways to build a Kubeflow pipeline component: | https://towardsdatascience.com/automate-your-ml-model-retraining-with-kubeflow-316f35afe19f | ['Marc Djohossou'] | 2021-03-01 09:32:50.602000+00:00 | ['TensorFlow', 'Python Programming', 'Kubeflow Pipelines', 'Ai Platform', 'Postgres'] |
Cannibals at an Orgy | This startling claim was reported by a Roman writer named Minucius Felix. Found in a catalog of rumors that Romans spread about the emerging religion, it raises an obvious question: How did Christianity, a religion associated with rigorous morality and upright conduct, ever generate such an odd idea?
For most Romans, Christianity was an unwelcome development. The new faith was disruptive: Christians taught people to reject the gods that had granted the Romans their unparalleled success. If the Christians succeeded in their goal of spreading the Gospel to the ends of the earth, if they persuaded Romans to forsake Jupiter, Mars, and the rest of the imperial pantheon, then the gods would turn against the state. They would withdraw divine favor and the empire would be swamped by her enemies.
The Jews dispersed through the Empire also refused to worship the gods, but they practiced an ancient religion, one that predated the foundation of Rome. The Romans might not agree with Jewish theology — they considered many of their ideas odd or even reprehensible — but they admired the antiquity of the belief.
In Rome, old ideas were the best ideas.
This reverence for antiquity made Christianity a hard sell. The Christians substituted new, untested beliefs for the ancient ideas of their ancestors. They had rebelled against the Jewish faith of their forefathers and practiced a religion that had emerged during the reign of the Emperor Tiberius. Once Christians broke with Judaism, they lost the patina of antiquity.
Not only were they rebels against tradition, but they had chosen an unlikely candidate to lead their new movement. The Christians followed a criminal, a Jewish carpenter who had been tried, convicted, and executed by the Roman state. They were, by extension, criminals themselves.
As the faith spread out of Israel and began to attract negative notice, the Christians were driven underground. They began to gather in people’s homes, meeting in secret, under the cover of darkness. Traditional Roman religions were practiced openly — in temples and community festivals. If Christianity was a legitimate religion, why sneak around?
Obviously, said some Romans, the practitioners of this new religion were up to no good. And it didn’t take very long for rumors to circulate about what Christians did in their secret meetings.
Minucius Felix
In the late second century (a precise date is still debated), an elegant Roman author named Minucius Felix wrote a clever defense of the faith, the Octavius. Minucius was a Christian convert and an orator, a master of persuasive speech. Like most of his generation, he was deeply influenced by Cicero, and when he decided to defend his Christian beliefs, he framed them in a Ciceronian manner. Rather than simply composing a treatise to explain Christian ideas, Minucius wrote a dialogue in which two men — a pagan (Caecilius) and a Christian (Octavius) — debated the merits of their religious positions.
The dialogue is set in the bucolic port city of Ostia. The three friends — Caecilius, Octavius, and Minucius, who acts as a narrator — have fled the heat of Rome and traveled to the sea shore. They spend a pleasant afternoon strolling on the beach and watching children skip stones across the water. Finally, they take seats in the afternoon sun to begin a discussion of religion.
Caecilius is given the task of offering a pagan critique of Christianity. He gleefully attacks the folly of the faith. In chapter nine of the work, he offers a catalog of rumors that are circulating about the religion. Christianity, he argues, is an immoral cult. In addition to the penis-worship cited above, Christians are also known to participate in incestuous orgies:
People everywhere talk about their banquets — it is a well-known fact, attested by the speech of our friend from Cirta. On a solemn day they gather for a feast — all the children, sisters, mothers, people of every sex and age. Then, after much feasting, when the temperature of the company has risen, the party has heated up, and, driven by drunkenness, incestuous lust has reached a fever pitch, a dog tied to the lampstand is provoked. Someone throws a small piece of meat further than the length of the cord. The dog springs forward to catch the meat, pulls over the lampstand, and douses the light. Then, in the concealing darkness, the couplings of frenzied lust involve them in the possibility of incestuous intercourse. (Min. Oct. 9)
It is like the worst Thanksgiving dinner imaginable. After stuffing themselves with food, and growing far too drunk on Roman wine, the Christians descend into debauchery by tearing off their clothing and participating in an orgy. Although this conduct is unsavory, it becomes even worse when we remember that these are families meeting for this banquet. In the alcohol-soaked darkness, members of the same household join in illicit intercourse: “children, sisters, mothers, people of every sex and age.” It is not merely an example of uninhibited sexual activity. It is transgressive, incest carrying it across the border that separates unseemly behavior from contemptible.
This, argues Caecilius, is what Christians are like; this is what they do in their secret meetings. Could anyone doubt that such a wicked people deserved to be identified and punished? Unfortunately, incestuous orgies did not represent the low-water mark of Christian depravity. They did something that was even worse:
Now the story about the initiation of young novices is both disgusting and well-known. A baby is concealed in a loaf of bread to deceive the unsuspecting new member. The loaf and a knife are placed before the new member. They tell him to use the knife to cut the bread. When he does, his “harmless” blows kill the infant. Next the Christians rush forward to lick up the baby’s blood. They rip its body apart and eat the flesh. Then, the new member, who has committed this terrible crime is bound by his guilt to a pledge of silence. These “sacred” rites are more disgusting than any sacrileges. (Min. Oct. 9)
Christians, allege Caecilius, murder babies. Although Rome’s early history offered examples of human sacrifice for religious purposes, the practice had been officially outlawed in 97 B. C. Human sacrifice and cannibalism were what outsiders, non-Romans, and barbarians did. The rumor that Christians killed and consumed babies emphasized the contemptibility of their beliefs. Enlightened Roman religion had long ago moved on from offering human lives to the gods.
Moreover, the Christians knew this; their sacrifice was not intended to appease a savage deity. It was designed to bind the new member into an unbreakable relationship with the cult. Initiates were compelled to do something so horrifically illegal that they could never report their deed to the authorities. This terrible act linked them eternally to the rest of their dark society.
Unlike the other rumors, the belief that Christians practice cannibalism has a theoretical basis. Believers practiced the Eucharist, in which they ate the body and drank the blood of Christ (Matt. 26:26–28). It is not difficult to imagine how this practice could be misunderstood and employed to support a rumor of cannibalism.
When the time comes in the dialogue for Octavius to rebut Caecilius' claims of Christian depravity, he ridicules the charges (who could believe that any man would slaughter an innocent baby?) and then shifts the debate by employing the thoroughly modern line of counterattack, “what-about-ism.” Even if Christians did do these horrible things, what about the Romans? They expose unwanted babies, leaving them outside their homes to be killed by the elements or wild dogs, and women take herbs to induce the abortion of unwanted fetuses (Min. Oct. 30). The Christians are a model of probity. They would never participate in incest or orgies, but what about the Romans who married their sisters, or the gods who did the same (Min. Oct. 31)?
The problem with using “what-about-ism” as a rhetorical strategy — both in the Octavius and in modern political discourse — is that it does not clear the accused of a charge. It only reduces the accuser to the level of the accused. Minucius Felix is very effective in defending Christianity in other parts of his work, but I find his debunking of the rumors less robust than it might have been. Far preferable was the rebuttal offered by Tertullian, an early third-century African lawyer. Tertullian suggested that if people thought Christians were engaged in these vile practices, then a full investigation should be launched. “A judge,” he wrote:
should wring out of each one of us how many murdered babies we had tasted, how many incestuous orgies we had joined in the dark — who were the cooks and how many dogs were present? My, how great would be the glory of the judge who uncovered a Christian who had eaten one hundred babies! (Tert. Apology, 2.5)
Rumor and innuendo are used to marginalize groups of people. Adversaries of Christianity spread dark tales about the emerging faith that were intended to portray its adherents as a debauched, un-Roman cult — one that stood outside the pale of acceptable behavior and morality.
From the distance of nearly 2,000 years these rumors seem absurd and risible. Nevertheless, one cannot deny the susceptibility of the human mind to this form of propaganda. We need only recall the recent “Pizzagate” conspiracy theory that has convinced far too many Americans that Democratic operatives were involved in human-trafficking and running child sex rings. When it comes to baseless rumors and conspiracy theories, humans are no more immune today than they were in Roman antiquity.
Sources: Minucius Felix, Octavius; Tertullian, Apology. All translations are by the author.
Here’s what 2020 taught us about fighting the climate emergency in 2021
By Chris Gusen, Digital Organizer with 350 Canada
Photo Credit: Nhattan Nguyen
2020 was tough. But, as we look back on the past year, it also brings us a tremendous amount of hope.
Seeing how the world mobilized to fight COVID-19 gave us a glimpse of what might be possible if our elected officials treated the climate emergency just as seriously. As we rest up and prepare to continue our fight for a made-in-Canada Green New Deal in 2021, here are the biggest lessons we’re taking from 2020.
1. Bold government action has always been possible
We’re organizing to win a Green New Deal for Canada because it’s the only plan that meets the climate emergency at the scale and speed that science and justice demand. Some critics have called this vision for a rapid, just transition off fossil fuels unrealistic, but the COVID-19 crisis has proved them wrong. As we saw this year, our governments are more than capable of moving quickly and boldly when faced with a crisis.
The climate emergency requires the same decisiveness. Climate-driven disasters like extreme wildfires and floods are already taking lives and devastating communities around the world. 2020 is on track to be the hottest year on record. And, just like with the pandemic’s exponential curve, climate chaos will accelerate beyond our capacity to manage it if we don’t aggressively flatten the carbon emissions curve.
In the past, politicians have gotten away with delaying climate action because global warming didn’t feel like an immediate threat to their personal wellbeing in the same way a global pandemic does. That’s changing fast thanks to the tireless work of social movement organizers, who have spent years building people power and holding politicians’ feet to the fire. A majority of Canadians now support a Green New Deal and, as 2020 has shown everyone, the level of government action the plan calls for has always been possible.
2. Listen to scientists
Photo Credit: Vjosana Shkurti
Donald Trump called COVID a hoax, but that didn’t stop COVID coming all the way to the White House. Along the way, over 300,000 Americans died and millions were infected. The pandemic made it clear that science denial is deadly.
In Alberta, Jason Kenney’s government preached personal responsibility instead of putting real prevention measures in place, consistently ignored the advice of their Chief Medical Officer, and fired thousands of public healthcare workers in the middle of the pandemic. The result? Alberta is suffering through a crushing second wave.
Many Canadians are looking jealously to Australia where, after the government enforced strict lockdown measures in response to a second wave, life has returned to pre-pandemic normal.
While announcing his updated climate plan earlier this month, Prime Minister Trudeau took a jab at Conservatives with the line, “Science is not a pick-and-choose buffet.” We agree but, when it comes to the climate emergency, listening to the science means acting much more boldly than Trudeau is right now. Pipelines are off the menu. A Green New Deal is the main course.
The Judicial System in Sri Lanka
K. Banuka N. De Silva
BA (SP) Criminology and Criminal Justice
Department of Criminology and Criminal Justice
Faculty of Humanities and Social Sciences
The University of Sri Jayewardenepura,
Gangodawila, Nugegoda, Sri Lanka
[email protected]
Available at SSRN: https://ssrn.com/abstract=3735185
The judiciary is a system of courts adjudicating legal disputes and interpreting, defending and applying the law in legal cases. The purpose of this article is to identify the evolution of Sri Lanka’s judiciary system and its functions.
Keywords: Judiciary system, Sri Lanka
The evolution of Sri Lanka’s judiciary
The justice system in Sri Lanka was born in the monarchical era. The king was the highest and final judge, while the judiciary worked under an unwritten legal system; he also received the final appeal for any judgement. During their rule in Sri Lanka, the Portuguese continued using the old Sinhala judicial system: officials such as the ‘Korale’, ‘Disave’ and ‘Vidane’ exercised jurisdiction, while the village council system, the ‘Gam Sabha’, also continued. During the Dutch period, three judicial districts, Colombo, Jaffna and Galle, were established, and a more systematic judiciary was used. During British colonial rule, the courts were made much more systematic, and the British judiciary continued to apply the Romano-Dutch law introduced by the Dutch. In that period, a decision made by the apex court in Sri Lanka could be appealed to the British Privy Council; that power was abolished by constitutional reforms in 1972. Under the Administration of Justice Law introduced in 1973, the Supreme Court of Sri Lanka was designated the apex court, while the Court of Appeal, the High Court, the District Court, the Magistrate’s Court and the Primary Courts remained the other courts of the Sri Lankan judicial system. The current system of courts is defined by the Judicature Act No. 02 of 1978. The modern, European form of the administration of justice, however, originated during the Dutch colonial occupation of the coastal areas of Sri Lanka; local traditions of civil and criminal law, as well as a system of courts, had existed for centuries before European colonization.
Sri Lanka’s current judicial system
Supreme court
(Source: Supreme Court of Sri Lanka Archives | Sri Lanka News — Newsfirst. (2020, Nov. 18). Retrieved from https://www.newsfirst.lk/tag/supreme-court-of-sri-lanka).
The Supreme Court of Sri Lanka was established on 18 April 1801 by the Royal Charter of Justice of King George III, which created the Supreme Court of Ceylon under the British, who controlled most of the island at the time, excluding Kandy’s inland territory. This creation was abolished in 1833 and replaced by a new Charter covering the entire island. The Supreme Court is the highest and final superior court of record and is empowered to exercise its jurisdiction subject to the provisions of the Constitution. As the court of final appeal, it has final and conclusive jurisdiction in civil and criminal appeals. It is headed by the Chief Justice. The Supreme Court consists of 11 judges, including the Hon. Chief Justice, who are generally appointed by the President of Sri Lanka. The Constitution of the Democratic Socialist Republic of Sri Lanka recognises the jurisdiction of the Supreme Court as follows:
The jurisdiction in respect of Constitutional matters,
The jurisdiction in respect of Fundamental Rights,
Final Appellate Jurisdiction,
Consultative Jurisdiction,
The jurisdiction in Elections Petitions (Presidential election),
The jurisdiction in respect of any breach of Privileges of the Parliament, and the jurisdiction in respect of such other matters which Parliament may by Law vest or ordain.
The Registry of the Supreme Court assists the Supreme Court in the performance of its functions, for which the Ministry of Justice grants provisions under Head 234. The functions of the Supreme Court Registry include:
Maintenance of the Record of the Supreme Court
Arrangements for sittings and schedule of applications
Record of Court Orders and Judgments
Enrolment of Attorneys-at-Law at the Supreme Court
Preparation of briefs received from the Court of Appeal and the assessment of legal costs
Court of appeal
Royal Charter of Justice of King George III, 1801, establishing Sri Lanka’s Supreme Court.
The Court of Appeal is the second-highest court of record. It is headed by the President of the Court of Appeal. The President of Sri Lanka appoints judges to the Court of Appeal as recommended by the Constitutional Council. The Court’s jurisdiction is as follows:
Appeals from the High Court in the exercise of its appellate or original jurisdiction
Jurisdiction to try election petitions in respect of the election of Members of Parliament
Jurisdiction to grant and issue writs of certiorari, prohibition, procedendo, mandamus, quo warranto and habeas corpus, and to grant injunctions
Power of investigating and inspecting the records of the courts of first instance
Provincial high court
The provincial high courts were established in accordance with the 13th Amendment to the Constitution of 1978. Judges of this court are appointed by the Chief Justice. The aim of establishing this court was to decentralise the powers of the Court of Appeal to the provincial level, giving the public the opportunity to present their appeals provincially. The powers of this court are as follows.
Commercial jurisdiction
Appellate and revisionary jurisdiction in respect of convictions, sentences, and orders entered or imposed by District Courts, Magistrate’s Courts and Primary Courts within the province.
Writ jurisdiction in respect of powers exercised under any law or under any statutes made by the provincial council.
Appeals from Labour Tribunals, Agrarian Tribunals and Small Claims courts.
Issuing writs of Habeas corpus in provincial councils.
High court
The High Court of Sri Lanka, established in accordance with the Constitution, has the power and authority to hear, try and determine all prosecutions on indictment. It has criminal and admiralty jurisdiction, as well as jurisdiction over offences committed on board aircraft and within the territorial air space, and it is the only court that exercises the jurisdiction of a court of first instance in such matters. It exercises both civil jurisdiction and criminal jurisdiction. The Commercial High Court is empowered to hear commercial cases involving transactions in excess of three million rupees and has jurisdiction over cases covered by the Intellectual Property Act. These powers are as follows:
Original criminal jurisdiction: murder, attempted murder, rape, treason, and large-scale plunder
Applications for the return of or access to an abducted child (exercised by the High Court of the Western Province)
Admiralty Jurisdiction
An offence committed by a Sri Lankan abroad
Offences committed aboard aircraft and within the territorial air space
Jurisdiction to hear and determine an offence i.e. influencing or attempting to influence any decision or order made by the Judicial Service Commission or influencing any member thereof
District court
District Courts have been established in 54 districts of Sri Lanka. These courts exercise all civil jurisdiction relevant to their respective districts. The appointment and removal of District Court judges is carried out by the Judicial Service Commission. The cases heard by the District Courts are as follows:
Cases related to marriage (divorce, claims for compensation for sexual misconduct, divorce allowances, etc.)
Testamentary cases (cases related to the property of people who died without leaving a last will, i.e. the intestate estates of deceased persons)
Guardianship of idiots and persons of unsound mind and guardianship of their property.
Cases related to the guardianship and legitimacy of under-aged children.
Applications for the adoption of children.
Requests for changing the names in the birth certificate.
Cases related to incomes.
Cases on trusts and bankruptcy.
Cases on payments, debts, damages demands for and financial cases.
Partition cases.
Cases on lands.
Cases on house rents.
Cases on compensation for accidents and insurance.
Cases filed under the Companies Act
Magistrate court
Magistrate’s Courts are established for 74 judicial divisions in Sri Lanka. Every Magistrate’s Court is vested with original criminal jurisdiction and is ordinarily empowered to impose sentences up to a fine of Rs. 1,500 and/or 2 years of rigorous or simple imprisonment. All Magistrates are appointed by the Judicial Service Commission. Appeals from convictions, sentences or orders of Magistrate’s Courts within a Province lie to the High Court of the Province.
The cases heard by the courts of the magistrate are as follows:
Offences on public security.
Offences related to the army, air force and navy.
Offences related to government servants.
Offences related to elections.
Request for orders concerning protection from domestic violence.
Providing false evidence and offences against public justice.
Furthermore, this court can also try offences specified in Acts of Parliament, e.g. the Forest Ordinance, Excise Ordinance, Cosmetics, Devices and Drugs Act, Food Act, Road Development Authority Act, as well as the Motor Traffic Act.
In addition to the judicial system referred to above, a number of other institutions and tribunals responsible for the administration of judicial functions have been established by special Acts of Parliament. Labour tribunals have thus been set up to settle labour disputes, while Quazi courts have been set up to resolve Muslim marriage and divorce issues. Moreover, the Rent Control Board has been set up to solve house rental problems. In addition, the Conciliation Board was established to resolve disputes without going to court.
Three Smart Ways to Use Binary Search in Coding Interviews
Problem statement
Given a sorted array of numbers, find out if a given number key is present in the array.
Though we know that the array is sorted, we don’t know if it’s sorted in ascending or descending order. You should assume that the array can have duplicates.
Write a function to return the index of the key if it is present in the array, otherwise return -1.
Example-1: Input: [1, 2, 3, 4, 5, 6, 7], key = 5, Output: 4
Example-2: Input: [10, 6, 4], key = 10, Output: 0
Solution
To make things simple, let’s first solve this problem assuming that the input array is sorted in ascending order.
These are the steps for binary search:
Step 1
Let’s assume start points to the first index and end points to the last index of the input array (let’s call it arr). This means: int start = 0; int end = arr.length - 1;
Step 2
First, we will find the middle of start and end.
An easy way to find the middle would be: middle = (start + end) / 2.
For Java and C++, this equation will work most of the time, but when start or end is large, this equation will give us the wrong result due to integer overflow.
Imagine that start is equal to the maximum range of an integer (e.g. for Java: int start = Integer.MAX_VALUE). Now, adding anything to start will result in an integer overflow. As we need to add both numbers first to evaluate our equation, an overflow might occur.
The safest way to find the middle of two numbers without getting an overflow is as follows: middle = start + (end - start) / 2.
The above discussion is not relevant to Python, as we don’t have the integer overflow problem in pure Python.
Step 3
Next, we will see if the key is equal to the number at the index middle. If it is equal, we return middle as the required index.
Step 4
If the key is not equal to the number at the index middle, we have to check two things:
If key < arr[middle], then we can conclude that the key will be smaller than all the numbers after the index middle, as the array is sorted in ascending order. Hence, we can reduce our search to end = middle - 1.
If key > arr[middle], then we can conclude that the key will be greater than all the numbers before the index middle, as the array is sorted in ascending order. Hence, we can reduce our search to start = middle + 1.
Step 5
We will repeat steps 2 to 4 with new ranges of start to end.
If at any time start becomes greater than end, this means that we can’t find the key in the input array and we must return -1.
Here is the visual representation of the Binary Search for the first example:
If the array is sorted in descending order, we have to update step 4 above, because:
If key > arr[middle], then we can conclude that the key will be greater than all the numbers after the index middle, as the array is sorted in descending order. Hence, we can reduce our search to end = middle - 1.
If key < arr[middle], then we can conclude that the key will be smaller than all the numbers before the index middle, as the array is sorted in descending order. Hence, we can reduce our search to start = middle + 1.
Finally, how can we figure out the sort order of the input array?
We can compare the numbers pointed out by the start and end indexes to find the sort order. If arr[start] < arr[end], it means that the numbers are sorted in ascending order; otherwise, they are sorted in descending order.
Code
Here is a minimal Java implementation that follows the steps above (the class name and the small demo in main are illustrative):
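class OrderAgnosticBinarySearch {

    public static int search(int[] arr, int key) {
        if (arr == null || arr.length == 0)
            return -1;
        int start = 0;
        int end = arr.length - 1;
        // Compare the two ends to figure out the sort order.
        boolean isAscending = arr[start] < arr[end];
        while (start <= end) {
            // Calculated this way to avoid integer overflow (see Step 2).
            int middle = start + (end - start) / 2;
            if (key == arr[middle])
                return middle; // Step 3: found the key.
            if (isAscending) { // Step 4 for ascending order.
                if (key < arr[middle])
                    end = middle - 1;
                else
                    start = middle + 1;
            } else {           // Step 4 adjusted for descending order.
                if (key > arr[middle])
                    end = middle - 1;
                else
                    start = middle + 1;
            }
        }
        return -1; // Step 5: start passed end, so the key is not present.
    }

    public static void main(String[] args) {
        System.out.println(search(new int[] {1, 2, 3, 4, 5, 6, 7}, 5)); // prints 4
        System.out.println(search(new int[] {10, 6, 4}, 10));           // prints 0
    }
}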
Time and space complexity
As we are reducing the search range by half at every step, the time complexity of our algorithm will be O(logN), where N is the total number of elements in the given array.
Understanding Principal Component Analysis Once And For All
Photo by Roman Mager on Unsplash
Requisites
Basic concepts of statistics such as variance and covariance.
Basic knowledge of linear algebra such as matrix transposition, matrix inversion and diagonal matrices.
Introduction
Thanks to the astonishing advances of data analysis software in the last few years, data scientists have the possibility of using highly complex statistical methods by just typing the .fit() command in the prompt. This is very helpful for reducing the time needed to develop a project, but it can also have a dangerous drawback: the need to fully understand the statistical method behind .fit() does not seem to be that important anymore. However, what if the output of that function does not seem to make sense? What tools do we have to deal with this situation? Well, from my perspective there is only one way to deal with this scenario: understanding the basis of the statistical method that is behind the programming command.
This was me admiring the “magic” of the .fit() command when I first started using PCA without understanding its mathematical reasoning
More concretely, in this article I would like to explain one commonly used mathematical transformation in the Data Science field, called Principal Component Analysis or PCA, which is not transparent for those who do not understand the mathematical reasoning behind it. I am going to reduce the technicality to the minimum in order to make this article understandable to the maximum number of readers possible.
Notation
Let’s start with some notation before going into the explanation of PCA. Let X be a matrix of dimension n x p, where n is the number of observations of a given data set and p is the number of predictors. Now let S denote the covariance matrix of X, that is,
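S = Cov(X) = (1/(n - 1)) XᵀX, assuming the columns of X have been mean-centered.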
Remember that a covariance matrix is a p x p matrix where the diagonal elements correspond to the variance (dispersion measure) of the covariates in X and the off-diagonal elements correspond to the covariance (similarity measure) between two specific covariates. Keep in mind this straightforward detail: the covariance between a covariate X1 and a covariate X2 is the same as the covariance between X2 and X1. Therefore, the covariance matrix S is symmetric, which means that S is equivalent to its transposed version,
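S = Sᵀ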
Motivating PCA
Imagine a scenario where p is very large and you would like to perform some kind of dimensionality reduction, that is, reduce the number of predictors, in order to do some exploratory analysis or fit a machine learning algorithm. We all know that reducing the dimensionality of the problem can lead to decreasing the variance of any statistical model, which might compensate for the increase in bias (bias-variance trade-off). However, one drawback of performing dimensionality reduction is that we might lose important information. At the end of the day, there is no way that reducing, say, from 1,000 covariates to 10 does not imply some kind of information loss. So the next question that arises is, how do we measure the amount of information that each variable has? Well, the variance of the covariate seems to be a good measure of information. The larger the variance, the larger the amount of information the variable contains.
Let’s make sure all of us understand this point because it is critical to understand PCA. Why is a high variance of a covariate good? Well, assume, for example, you want to understand the effect of salary on a given outcome. Do you prefer to have a set of observations whose values for salary range from 1,000 to 2,000 or from 0 to 10,000? The larger the range of the variable salary, the more information we have about how such variable affects the outcome, right? Note that in the first scenario, we have no way of understanding the effect of salary on the outcome when the salary is smaller than 1,000 or larger than 2,000. The reason is that there is no data in such part of the space. Therefore, the larger the range of the covariate the more information. And, what is the effect of the range of a variable on its variance? Exactly, the larger the range, the larger the variance (the dispersion increases).
Some people tend to identify the word variance as something negative, which makes sense because there are several contexts where it can mean something bad. For example, not only the data has variance; estimators such as the sample mean also have dispersion measure, which is called standard error and is defined as the square root of the variance of the estimator. Of course in this case the variance is unwanted because the larger it is, the more uncertain the estimate is. Another example where the variance might be something bad is when it is present in the outcome variable. That is, if the variable that we want to predict has very high variance, it might be more complicated to make precise predictions (under some circumstances that will not be covered in this article). But, remember that, as explained before, predictors with high variance provide more information than predictors with low variance!
So far we have come up with a very intuitive explanation of why we want the variance of the original predictors to be kept when performing dimensionality reduction. But, wait… is the variance of the predictors all we care about when measuring the amount of information? No! Think of this: what if you have two covariates that both have a high variance, but they are extremely similar to each other (say their correlation is close to 1)? In this case, what is the additional information that the second predictor provides with respect to the first one? The answer is that the additional information is almost null. Therefore, when performing dimensionality reduction we not only want the transformed predictors to keep the variance of the original ones, but also to make them uncorrelated.
Once we understand what a good dimensionality reduction should look like, let me introduce you to the definition of PCA: PCA is a mathematical approach that transforms the matrix of predictors X into another one of the same dimension, call it Y, such that
the covariance matrix of Y is diagonal, meaning that all the transformed predictors are uncorrelated, and
meaning that all the transformed predictors are uncorrelated, and the transformed predictors are sorted by a decreasing amount of information, meaning that the diagonal entries of the covariance matrix of the transformed predictors, which contain their variances (amount of information) as explained earlier, decrease as we move to the right (or down) throughout the matrix.
Is there a mathematical way to achieve such a transformation? Well, there is! Let’s dig a bit into how PCA works mathematically without getting very technical.
Simplified Mathematical Development
Some of you might feel like this from this moment on. Stay with me, read it a second time if necessary and you will see it is not that hard
As we said before, let X be the original predictor matrix and Y the transformed predictor matrix, both of dimensions n x p. Remember the ideal properties that Y should have (uncorrelated predictors, which must be ordered in decreasing order of information). In order to transform X, we perform what is known in linear algebra as a change of variable, which means multiplying X by another, for now unknown, matrix P of dimensions p x p in order to come up with Y. That is:
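Y = XP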
So far we know nothing about P besides that it has to be invertible in order to reconstruct X. Hence, the goal is to find a matrix P that performs a change of variable with the ideal characteristics that we are looking for.
In order to move forward we have to make use of a linear algebra theorem called the Diagonalization Theorem, which will not be proven (do not worry, this is the only moment where I make use of something without explaining where it comes from; you can look for a mathematical proof, but it will not be necessary to understand PCA). It says that a symmetric matrix, like the covariance matrix of X, also written as S, is diagonalizable as follows
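S = PDP⁻¹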
where D is a diagonal matrix of dimensions p x p and the matrix P represents the same matrix used in the change of variable from above.
Note that we still do not know anything about P. Let’s show an important property of this matrix that will be used later on to come up with PCA. It turns out that P is an orthogonal matrix, which means that its transposed version is equivalent to its inverse version. That is,
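Pᵀ = P⁻¹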
Why does the previous equation hold? To show this, we need to remember that the covariance matrix is symmetric and, therefore, it is equivalent to its transpose
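S = Sᵀ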
Let’s see how the needed condition that makes the previous equation hold is that P has to be orthogonal:
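Sᵀ = (PDP⁻¹)ᵀ = (P⁻¹)ᵀDᵀPᵀ = (P⁻¹)ᵀDPᵀ (since D is diagonal, Dᵀ = D). This matches S = PDP⁻¹ precisely when (P⁻¹)ᵀ = P, that is, when Pᵀ = P⁻¹.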
Hence, it has been proven that P is orthogonal, given that it is the only way to prove that the covariance matrix of X is symmetric. But how does this finding help us create uncorrelated predictors? To see this, let’s derive the covariance matrix of the transformed predictors:
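Cov(Y) = (1/(n - 1)) YᵀY = (1/(n - 1)) (XP)ᵀ(XP) = Pᵀ ((1/(n - 1)) XᵀX) P = PᵀSP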
In order to prove that the covariance matrix of Y is diagonal, which would mean that the predictors are uncorrelated, I am going to demonstrate that the last expression of the previous derivation is equal to a diagonal matrix. Let’s start:
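PᵀSP = Pᵀ(PDPᵀ)P = (PᵀP)D(PᵀP) = D, since S = PDP⁻¹ = PDPᵀ and PᵀP = I. Hence Cov(Y) = D, which is diagonal.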
The last mathematical issue to prove is that the total information of the principal components is the same as the total information of the original predictors. That is, let’s show that by applying PCA we have not lost information (we have just allocated it differently). Remember the definition of change of variable from the beginning. Given that we have proven that P is orthogonal, we can write the change of variable as:
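Y = XP, or equivalently X = YP⁻¹ = YPᵀ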
This is a special case called an orthogonal change of variable, which allows the total variance of the data to be kept unchanged, because multiplication by an orthogonal matrix changes neither the lengths of the vectors nor their angles (the vectors of an orthogonal matrix are orthonormal, which means that they have length one and are perpendicular to each other). Therefore, the total variance (or total amount of information) of the original predictors can be written as:
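trace(S) = trace(PDPᵀ) = trace(DPᵀP) = trace(D), using the cyclic property of the trace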
where trace is simply the sum of the diagonal entries of a given matrix. Given that D contains all the variances of the principal components, their sum measures the total amount of information contained in X.
Last Clarifications
Done! We have just proven that PCA leads to uncorrelated predictors! This way, every transformed covariate contains unique information that the others do not. Each of these transformed covariates are called principal components. The first principal component corresponds to the first column of Y, which is also the one that has the most information because we order the transformed matrix Y by decreasing order of the amount of contained information (the first diagonal entry of the covariance matrix of Y is the highest one). Likewise, the second column of Y is called the second principal component. So on and so forth.
The next and final step is to decide how many principal components to use in our analysis. This decision will be different in each scenario. A common approach is to create a barplot with the amount of information that each principal component has and see if there is a point where including more principal components leads to a very small increase in information which does not compensate for the increase in dimensionality. This barplot has the following shape:
Conclusion
The goal of this article was not to explain all the technicalities about PCA. In fact, several critical concepts such as the role of eigenvectors and eigenvalues have not been mentioned even though they play an important role in PCA. The goal instead was to help beginners in PCA get an intuitive overview of what it does without losing ourselves too much in mathematical details.
In a future article I intend to:
Blockchain Technology and The Future of Work
Founders in Focus Episode #1
Welcome to the very first episode of Founders in Focus. A show that gives you unprecedented access to the most influential startup founders in the world of blockchain. We’re on a mission to educate and inform founders and investors on the current affairs of the blockchain industry, providing an accurate and up to date perspective on the competitive business environment, global investment landscape, and the leaders who are changing the game.
In this value-packed interview, Jack du Rose, co-founder of Colony, gives us an inside look at what it takes to build a blockchain-powered company.
Tune in as we discuss:
Digital Marketing Agency In Hampshire
Hello, and welcome to my post about how I started my digital marketing agency, helping small to medium sized businesses grow online through our SEO services and PPC services. We also help small to medium businesses with a new website design or a refresh of their current website that’s outdated and needs to be modernised. The typical scenario where we engage with businesses looking to upgrade their website is that it’s not responsive, meaning the site doesn’t adapt to mobiles or tablets, or, even more so now, that it doesn’t pass the latest Google update, called Core Web Vitals, where Google takes into consideration the page speed of your site and its usability. It all started when Covid 19 arrived and we went into the first UK lockdown, where my business partner and I were made redundant. We both then took the courage to start our own agency called Lionheart Marketing, based in Mortimer, Hampshire.
How does Lionheart marketing help businesses grow online through our SEO services?
The initial stage starts with a free SEO discovery session, where we talk to you about your business objectives, your target audience and how SEO can help you meet them. The 5 most common business objectives we tend to find are:
1. We want to drive and increase more revenue online
2. We want to generate more leads
3. We want more natural, relevant traffic to our website
4. We want to compete with our competitors
5. We want more awareness and new customers
Then we start building an SEO road map to achieve the business objectives, starting off with keyword research and a competitor analysis to identify the right keywords to target, to drive the correct traffic. We then start to look at on-page SEO on the site, where we decide which landing pages to target for each keyword phrase. We will optimise page titles, header tags and meta data.
Once the on-page SEO has been implemented, we will then look at a content marketing strategy to help with fresh, relevant content; the content needs to be built around “how to” guides where you are helping your audience. We then start to look at off-page SEO, where we will help build natural links to your site by creating and posting content on authoritative sites. We know that by adopting this SEO strategy, after around 6–12 months you will start to see your organic traffic increasing, which will start generating more revenue or more leads as we are driving the correct traffic to your site.
How does Lionheart marketing help our customers maximise their Google spend with our PPC services?
Before any work is carried out on a PPC campaign, we need to conduct an audit of your current PPC campaign, or to discuss your business objectives and review your current website. With all the will in the world, if your website is hard to use, hard to navigate and takes ages to load, it doesn’t matter if we send lots of traffic to your website, it will struggle to convert.
A PPC audit consists of reviewing your current campaign. We would look for things like: whether we are using the right and most relevant keywords, whether the advert copy is compelling enough to make someone want to click on the ad, and whether negative keywords are in place, to name but a few.
Here is an example of some of our build takeaways:
1. Look at the menu of your website and base the structure of your campaign on this.
2. Do a live Google search of the same products or services you offer and look at who is using paid advertisement; this is also a good way to build a list of negative keywords.
3. When you are thinking about writing the advert copy, think about why someone is going to click on your ad instead of someone else’s; highlight your unique selling points and what makes you stand out.
4. Remember that although you can get instant results from the PPC campaign, you need to give your campaign time to pick up on Google’s AI technology, so if you don’t see the results you desire straight away, you will later.
If you want to know more about how we can help you with our digital marketing services, please visit our website.
Liquid to list Gemini dollar to strengthen crypto sector
Global digital asset platform Liquid today announced it has teamed up with U.S. digital asset exchange Gemini Trust Company, LLC (“Gemini”) to bring much-needed liquidity and stability to the cryptocurrency space. The two have aligned on a vision for cryptocurrency that prioritizes compliance, regulation and security.
The Gemini dollar (GUSD) will be listed on Liquid, the flagship platform of global fintech company Quoine. Liquid was launched in September and already stands as one of the world’s biggest fiat-crypto platforms by volume.
Liquid will also tap into Gemini’s liquidity for Liquid’s External World Book, a global liquidity pool sourced from a network of crypto exchanges. The World Book is already active internally on Liquid for select pairs, providing highly liquid order books for traders by matching orders under the hood.
With the addition of GUSD, a regulated stablecoin with value pegged 1:1 to the U.S. dollar, Liquid is now more than ever at the forefront of cryptocurrency innovation and compliance. The U.S. dollars that back GUSD are held at the State Street bank in the United States and eligible for federal deposit insurance coverage.
The balance held by State Street is examined monthly by an independent, registered public accounting firm, with reports available to the public, while the smart contracts underlying the GUSD token have been audited and formally verified by an independent security firm, whose report is also publicly available. In addition, the multi-layered GUSD protocol brings a high level of security.
“The Gemini dollar is an obvious choice for Liquid. Both Liquid and Gemini put transparency, regulation and compliance at the forefront of our operations. We are proud and honoured to be able to offer Liquid customers the opportunity to use a stablecoin that can stand up to regulatory scrutiny. We will now explore the opportunity to develop a JPY-pegged stablecoin,” said Liquid CEO Mike Kayamori.
GUSD will be tradeable on Liquid initially against Bitcoin and Ethereum with plans to expand to additional pairs in the near future.
“We are excited to see Liquid’s adoption of the Gemini dollar. We believe a truly regulated and transparent stablecoin is a huge step forward for the ecosystem and appreciate Liquid’s support in bringing GUSD to their marketplace,” said Gemini CEO Tyler Winklevoss.
Stablecoins are an important part of the cryptocurrency ecosystem, giving traders flexibility to hedge against volatile markets and the ability to easily and safely move funds between platforms.
Mental Models, or the Breaking of Complexity
Mental Models
To understand how something complex works one has two main tools: the mechanistic and the holistic approach. The mechanistic approach assumes that something can be understood by breaking it into parts (aka analysis) and then by combining the parts to form the whole (aka synthesis). However, this approach doesn’t always account for everything, as there are behaviors and/or characteristics not explainable by the parts themselves. Considering that the whole is more than its parts, the holistic approach studies the interactions of the parts that lead to such unexpected effects (aka synergies), the challenge being to identify those characteristics, circumstances or conditions that lead to or relate to these effects. Thus, when these two tools are combined over multiple iterations, one can get closer to the essence.
When breaking things into parts we need first to look at the thing or object of study from a bird’s-eye view and identify the things that might look like parts. Even if the object of study looks amorphous, experience combined with intuition and perseverance can offer a starting point, and from there one can iteratively take things apart until one decides to stop. When and where one stops is a question of the possible depth, as allowed by the object itself, by the techniques available or by our grasp, and of the intended depth, the level chosen for approximation.
Between the whole and the lowest perceived components, one has the luxury of experimenting by breaking things apart (physically and/or mentally) and putting things together to form unitary parts, parts that typically explain one or more functions or characteristics, or even the whole. In addition, one can play with the object, consider it in a range of contexts, extrapolate its characteristics, and identify behavior not explainable by the parts themselves. In the process one arrives at a set of facts (things known or proved to be true), respectively suppositions expressed as beliefs (things held as true without proof), assumptions (things accepted as true without proof) or hypotheses (things whose value of truth is not known, typically because of limited evidence).
One thus builds a (mental) model, an abstraction of the object of study. The parts and the relations existing between the parts form the skeleton of the model, while the facts and suppositions attempt to give the model form. Unfortunately, models seldom accommodate all the facts, therefore what one ignores or includes in the model can make an important difference in whether the model is of any use. One is thus forced to advance theories on how the skeleton can accommodate the form, and how the form reflects the facts and suppositions.
Fortunately, simple models can prove to be useful, especially when they allow approximating the real thing within the considered context. However, the better the approximations one needs and/or the broader the context is, the more complex the models can become, especially when the number of facts considered important increases. This can mean that two models or theories can be useful or correct when considered in different contexts but lose their applicability when considered in another context.
Having a repository of models to choose from is usually helpful, especially for understanding more about the object studied. The appropriate usage of a model depends also on understanding its range of applicability within a context or across contexts, and the advantages and disadvantages of using the model. Knowing when to use a model is as important as knowing when not to use it, while understanding the measure of error associated with a model can make us aware of the risks of the model and of decisions made based on it.
Peripéteies in Greece — Week 31. Adventures in Greece as we pursue the…
Boat-wise, we have a contract drafted now, which took longer than we hoped (our issue this time, for a change); now we’re just waiting on the sellers to counter-sign and we can move forwards from there. Assuming they sign, the sea-trial will be early-to-mid April (COVID and lockdown allowing). Still a long way to go until we actually own something, but it’s definitely feeling very promising and heading in the right direction. The reality at the moment, however, is that even if this transaction goes smoothly, we’re starting to resign ourselves to the likelihood that it will be May or even June before we can start cruising at the earliest, as even once we own the boat, there are a lot of additional things that need to happen, many of which rely on lockdown easing further.
As far as lockdown is concerned, it seems that, as I thought last week, the government has started to ease measures a little. Cases have continued to rise, hitting a record for Greece of 3,500 infections a day on Wednesday, but the consensus is that the lockdown measures are ineffective and the damage to the economy and people’s well-being now outweighs the risks of COVID. It’s not a radical change, but we’re very excited because outdoor archaeological sites are reopening from Monday, so we can actually go and visit some of these ruins we’ve so far just stared at from outside the fence! We’ll also be able to get a haircut again. I suspect in a couple of weeks we’ll see retail re-open as well.
Until next time,
Tim & Karina.
Nuro’s New Funding Round
Nuro’s R2 vehicle
Nuro was founded a little over 4 years ago with the mission of accelerating the benefits of robotics for everyday life.
Today, we are excited to announce a new $500 million Series C funding round, led by funds and accounts advised by T. Rowe Price Associates, Inc., with participation from new investors including Fidelity Management & Research Company, LLC. and Baillie Gifford. Existing investors SoftBank Vision Fund 1 and Greylock are also investing in the round.
When we started Nuro, my co-founder Dave and I set out to build a team that would apply its core expertise in robotics to develop products that can free us to use our own time on the things and people we value most. For our first product, we decided to focus on building an all-new electric self-driving vehicle to transport goods instead of people. Something that could dramatically reduce the time we spend in cars picking things up from stores. It turns out that we all spend so much time doing this that it’s roughly equivalent to a month’s worth of unpaid work every year for each American.
We have now developed two generations of our lightweight autonomous delivery vehicle. R2, our second generation vehicle, received the first and only federal exemption for an autonomous vehicle granted by the U.S. Department of Transportation (DOT) and National Highway Traffic Safety Administration (NHTSA). R2 was also the first vehicle to operate fully autonomously on public roads in three different states — California, Texas, and Arizona — with no drivers, no occupants, and no chase cars.
We now know that our industry, self-driving local delivery, will not only make it easier to buy groceries, hot food, prescription drugs, and other products, but will also positively impact local economies, and serve low-income communities living in food deserts. And during the COVID-19 pandemic, we have learned that contactless delivery of goods — the service R2 provides — can help reduce the spread. The underlying need and excitement for a better way of moving local goods is shared by our growing list of partners, including Domino’s, Walmart, Kroger, and CVS.
With this latest round of funding securing a long runway of future technology development and commercialization, we at Nuro could not be more excited about what the next 4 years will bring.
Join us.
Find Out My Sullen Shadow
Sandeep Kumar Mishra
In upper part of my body
A cognitive bell rings
From a dial up connection of live wires;
The modem is working JUST
To repeatedly provide the facsimile of
Barren and bald paths;
Inner lumbering of daily freight
Coiling, clutching upward;
There is no vivacity
The vital force parasited
How I inhale life?
My days and nights are bolted
Inside a brain cell,
My voice has held back;
Now it lays a plan to brawl my soul;
Residing in my own skull
It dictates notes imitating my tone,
If I could disintegrate my recall;
As my shadow has left me
There remains Just I, me and myself,
None is willing to be with me
Why is my brain, a black hole?
How could it not be a universe?
I have a constellation of migraine, tablets
Syringe, backache and insomnia,
Dream has become a dead pattern,
As worn out as fossil led glow;
Everything has become identical
Except the weight of consequence;
My gallery has ended
Now you can hang me on the wall
Then an unschooled grin came
Wait! Take a look
The glow has returned
Go, find out my sullen shadow
Writing Can Bring You Back From the Dead
If you are surrounded by tragedy or death — writing is an amazing sign of life.
Three of my closest writer friends (all over 50, like me) told me today they are surrounded by sickness, tragedy and death in their lives. UGH. I also have many people — friends and family — seemingly going down around me.
Sometimes it feels like I’m Elton John…the only one standing!
We all agree that having our ‘writing practice’ is one way we feel ALIVE — and we are damn grateful for it.
I would never dream of not being grateful for my writing or of putting pressure on my writing. It is a beautiful place of peace, joy and aliveness.
We are all thankful to be able to write through all of our tragedies — even turn our tragedies into stories. Ahh yes, sometimes tragedies make the best material, don’t they?
One friend said, “Death is all around me it seems — but where’s the LIGHT?”
There’s always light. You can’t have light without dark. It reminds me of one of my favorite subjects: The Yin/Yang/duality. I also call it the FULL SPECTRUM — Yea, there’s black and white, but also grey and every other color in the rainbow of life.
Themes from the Subsurface Data Lake Conference
And coming to terms with a never-ending stream of cheesy aquatic metaphors
And coming to terms with a never-ending stream of cheesy aquatic metaphors
Losing sight of the larger trends shaping the data ecosystem is easy when down in the trenches of the daily work grind. Because of this I love attending conferences that serve as an inspirational footstool from which one can easily glimpse the full landscape picture.
The first ever Subsurface Cloud Data Lake Conference held virtually in July was a great opportunity for this as it featured an impressive lineup of speakers that thoughtfully contextualized where we are with data in 2020.
Talk #1: The Future is Open — The Rise of the Cloud Data lake
Tomer Shiran, Co-founder & CPO, Dremio
The whole event was put together by Dremio and began with an awesome opening keynote by one of its founders, who gave an overview of how data architectures have evolved over the last 10 years.
The main idea is this: we’ve gone from proprietary, monolithic analytic architectures anchored by an expensive Oracle-licensed database or Hadoop cluster… to architectures defined by flexible, increasingly open source technologies across the four main layers of a data stack: the storage layer, data layer, compute layer, and client layer.
The base of the stack — the storage layer — is supported by the crucial development of cloud technologies like S3 and ADLS that offer infinitely scalable, highly-available, globally distributed, easily-connected-to, and outrageously cheap cloud storage.
The ability to agnostically use these storage blobs to separate data storage from specialized compute engines (like Spark, Snowflake, and Athena) is the dominant architectural trend that nearly every talk mentioned.
So if nothing else, leave this article with that concept clear in your mind.
Talk #2: Apache Arrow: A New Gold Standard for Dataset Transport
Wes Mckinney, Director, Ursa Labs
Wes began by explaining that there’s a problem when you have a bunch of different systems handling data each with potentially its own storage and transport protocols. The problem is a “combinatorial explosion of pairwise data connectors” as he calls it, that manifests as costly-to-implement custom data connectors developers must create if they want to transport data efficiently in their data pipelines or applications.
This is one of a few issues highlighted as the inspiration for the Apache Arrow project. The others being:
1. Unnecessary CPU time spent serializing & de-serializing data
2. Expensive writes to disk/blob storage as an intermediary
3. Decreased performance due to executor node bottlenecks in distributed systems
And so Wes continued by explaining some of the technical concepts behind Arrow’s solution to these problems.
The end result is a mostly behind-the-scenes library that makes — for example, converting between Spark and Parquet more efficient — and all of us more productive in the long run.
The image that became clear in my mind is how Arrow aims to be an in-memory intermediary between systems, the same way a lot of folk use S3 (for lack of a better option) for that purpose.
Talk #3: Functional Data Engineering: A Set of Best Practices
Maxime Beachemin, CEO and Founder, Preset
Lastly, Maxime gave an interesting talk on how functional programming principles can be applied to the data engineering discipline to create reliable data pipelines.
The three principles are:
1. Pure Functions — Same input = Same Output
2. Immutability — Never changing the value of variables once assigned
3. Idempotency — The ability to repeat an operation without changing the result
Taken in a data engineering context, Beauchemin recommends writing ETL tasks that are “pure”. This means given the same input data, they will output the same data partition which you can INSERT OVERWRITE into your data lake (an idempotent operation compared to a mutable UPSERT).
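To make the idea concrete, here is a toy sketch in Java, using an in-memory map to stand in for a lake’s partitioned storage; the class and method names are hypothetical rather than a real library API:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FunctionalEtlSketch {

    // Stand-in for partitioned data lake storage, keyed by partition id.
    static final Map<String, List<Double>> lake = new HashMap<>();

    // "Pure" task: the output depends only on the input rows, never on hidden state.
    static List<Double> transform(List<Double> rawRows) {
        return rawRows.stream().map(x -> x * 2).toList(); // placeholder business logic
    }

    // Idempotent load: the whole partition is overwritten, so re-running is safe.
    static void insertOverwrite(String partition, List<Double> rows) {
        lake.put(partition, rows);
    }

    public static void main(String[] args) {
        List<Double> raw = List.of(10.0, 20.0);
        insertOverwrite("ds=2020-07-30", transform(raw));
        insertOverwrite("ds=2020-07-30", transform(raw)); // a re-run leaves the lake unchanged
        System.out.println(lake); // {ds=2020-07-30=[20.0, 40.0]}
    }
}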
Without going into too much detail, I’m inspired to leverage these concepts to add structure to the way I think of my ETL tasks, instead of a tangled mess of logic whose output I have little understanding of.
Are Michael Saylor and Jack Dorsey lighting the fuse for a Bitcoin melt-up?…. revisited one month later.
Now that $20,000 #Bitcoin is in view, perhaps it is time to revisit this thread from exactly a month ago (https://twitter.com/harrigan_pete/status/1318272524182958080). Oddly enough, it seems to have held up fairly well, even though there is a lot left to play out.
Anyway, here it is:
Are Michael Saylor and Jack Dorsey lighting the fuse for a #bitcoin melt-up? Will there be a feedback loop like the feedback loop that melted down real estate in 2008?
To clarify, a positive feedback loop can push things up or down. It is any process where the result of the process drives the process. It’s a fancy term for a snowball effect.
With mortgages, as real estate prices rose, even mortgage securities made up of subprime performed well. This led to more investment dollars going into such mortgages, which meant more money available for real estate, which led to higher prices, and so on.
On the way down, it all reversed. Real estate began to erode in price. This meant fewer mortgages were bailed out by the market. Worse performance led to tightened lending standards, which meant less money for real estate. Then the foreclosures started, etc.
What does this have to do with #Bitcoin? Corporate treasurers have started to move some balances to Bitcoin. Certainly, @michael_saylor and @jack have changed the landscape.
As this news of these purchases spreads, and as the #Bitcoin price rises, other corporations will start to get uncomfortable questions about why they own none.
This will only accelerate as the price rallies. At $20K #Bitcoin, a few people will ask why a company has not invested. At $50K #Bitcoin, everyone will ask.
So, as the price rises, the pressure will increase to buy, which will drive the price higher.
Ta-da!!
Positive feedback.
But here is the interesting part. When a positive feedback loop CAN start, it very probably WILL start.
The market is FRAGILE to the feedback loop.
And, once it starts, it can rip.
Or, I suppose I should say…. | https://medium.com/fullspectrumliberty/are-michael-saylor-and-jack-dorsey-lighting-the-fuse-for-a-bitcoin-melt-up-6ec0e7faf46d | ['Peter Harrigan'] | 2020-11-19 18:40:45.211000+00:00 | ['Stock Market', 'Bitcoin', 'Risk'] |
SUPERBASE — The Surprisingly Compact 518wh Power Station | 🔻👇🔻👇🔻👇🔻👇🔻👇🔻👇🔻
🎯 6 Liters, 57% Smaller than Competition | 600W Dual AC Output | 518Wh Capacity | 3.5 Hours Quick Recharge | 9 Outputs | 60W USB-C PD
We are growing increasingly dependent on electricity in our daily lives. A natural disaster, accident, or service interruption can leave us in the dark. A power outage could take away our means of communication or leave us without the use of personal medical devices. That’s why backup power solutions have become popular, and even crucial, for a growing number of people in recent years.
A power station is only useful when you have it on hand. Some power stations are so heavy and bulky that they don't always make it onto the pack list. SuperBase 500 is small and weighs just 11lbs/5kg, about the same size and weight as a housecat, making it an ideal addition to your camping or fishing gear.
SuperBase 500 doesn’t skimp on power, despite its small stature. The dual AC outputs can deliver up to 600W of pure sine wave AC power. This means SuperBase 500 can power and charge just about anything, whether it’s a phone or laptop, a small appliance like a mini fridge or coffee maker, or even a CPAP machine.
SuperBase 500 features a variety of output ports and can charge up to 9 devices at the same time. This means you can take SuperBase 500 on the road and leave your bulky proprietary chargers behind. SuperBase 500 is equipped with two DC output ports, two AC outlets, a cigarette lighter port, two USB-C ports, and two USB-A ports. Whatever you’re hoping to charge, SuperBase 500 probably has a port for it.
With 518Wh/140,000mAh capacity, SuperBase 500 can supercharge your trip. Or when an unpredictable power outage happens, SuperBase 500 will be your reliable companion to light up your night and provide a full night’s power to your essential devices.
If you use a CPAP or similar medical device, SuperBase 500 can be used to power your machine during a blackout or when you’re away from home.
SuperBase 500 is fireproof, anti-shock & well insulated thanks to its PC+ABS enclosure. To ensure the reliability and safety of the power station, SuperBase 500 features six layers of active protection, detecting overcurrent, overvoltage, overcharging, over-discharging, short-circuits, and unsafe operating temperatures.
► SUPERBASE — The Surprisingly Compact 518wh Power Station
★ 🎁 Up to 36% OFF | https://medium.com/@bestinnovations/superbase-the-surprisingly-compact-518wh-power-station-f492dc1abb20 | ['Best Innovations'] | 2021-05-12 09:43:17.425000+00:00 | ['Outdoors', 'Charger', 'Gadgets', 'Power Bank', 'Battery'] |
Six Weeks Of Sananga: The Traditional Eye Drops That Sting Like Fresh Chilli | What benefits did I notice?
I can't help but bring a degree of curious skepticism when it comes to various claims about resetting or rebalancing one's energetic field. I mean, how accurately and objectively can we quantify these claims? How do we know what shift is attributable to the actual modality, and what to the power of placebo?
The ultimate answer might be, does it really matter?
If one is experiencing some sort of beneficial change, whether placebo or not, that’s the path we’re generally wanting to move towards. In the case of discerning what stems from the modality in question, and what stems from placebo, perhaps the best we can do without any tools of measurement, is to purely EXPERIENCE how we FEEL.
One of the benefits that Sananga has bestowed upon me, (AFTER the burning sensations have stopped radiating throughout my skull, and my eyes don’t feel the need to clamp shut anymore) is the feeling of being super present and grounded in my body.
On a subtle level, it FEELS like energy flows more naturally throughout my body. As if blockages have been somewhat cleared or opened. I tend to embody a relaxed focus and calm awareness.
A stillness of the mind.
If other people try to project their irrational emotions or negative energy onto me, or if I find myself in a situation where I might normally be triggered to some extent, it doesn’t seem to land as hard as it otherwise might have.
It’s a feeling of being more centered, and having more composure — but not in a controlling kind of way.
In terms of vision, I would often (paradoxically) notice a sharpness, yet softening, of my immediate environment. Colours would often seem a bit more vibrant, especially when noticing things outside, such as the shimmering of the sunlight in and amongst leaves and plant foliage.
Another key thing I’ve noticed over time is the ease in which I’m able to take in more information from my visual field of perception. With this relaxed awareness, especially immediately after the initial sting subsides, I tend to have a heightened capacity to notice the tiniest of bugs springing out from the grass, or a bird bouncing on a branch right out to my peripheries.
It’s as if the amount of visual information I can process increases directly after Sananga. This ranged from 15 to 30 minutes after, to sometimes being highly sensitive to visual stimuli for a couple of hours afterwards, depending on what I was doing and the environment I was in.
If I was to sit down shortly after and focus on writing at my laptop, the effects were not as pronounced as they were if I was outside walking in nature, or being away from blue light and electronic screens in general.
I suspect the reasoning behind the heightened awareness and visual perception may come from either:
1) A physical cleansing, recorrecting, or recalibrating effect on both my eyes, or something in relation to the visual cortex component of the brain.
2) Consciously developing a daily connection with this plant and how it affects me, which in turn, makes me more aware of what’s happening in my visual field. Hence, my capacity to notice more, with more ease.
Perhaps, a little from both categories?
All in all, I definitely experienced the shifts in visual acuity and energetic recalibrations that are said to come with Sananga, and since the six week practice, have continued to work with Sananga as a powerful ally. | https://medium.com/@ck.medicinepath/six-weeks-of-sananga-the-traditional-eye-drop-that-sting-like-fresh-chilli-76cc6185949c | ['Caiyuda Kiora'] | 2020-02-10 22:46:27.102000+00:00 | ['Psychedelics', 'Ayahuasca', 'Healing', 'Sacred'] |
We Need New Ways of Judging the Success of Websites | We have such terrible metrics for judging websites! There’s income, and there’s traffic, and that’s about it. But neither of those take into account burn rate, overall expenditure or organization size, just for starters. One way to look at things might be: unique visitors per month, divided by employees. Size of staff is something of a predictor of size of traffic, it turns out! If you have no staff, you cannot make the traffic, for one thing. Obviously there’s a slight variable in this metric — which has to do with number of part-time contributors, freelance and marketing budgets and, of course, certainly at the big behemoth, unpaid contributors. Speaking of, let’s look at the Huffington Post!
The Huffpo has a staff of 203 (among those are 97 full-time editorial staffers), and they’re currently at 26 million monthly uniques (according to their numbers). That’s 128,000 uniques per employee.
The Daily Beast has 65 full-time employees (positively tiny in comparison), and, in November, 6 million uniques, as per Omniture. (comScore reports it as 3.7 million.) The Beast also doesn’t have “free bloggers,” and yet (using their numbers, of course), it still comes up with a pretty good 92,307 uniques per employee.
Slate is now doing 6.48 million uniques (comScore for July to September, as a three-month average). It has but 30 editorial staffers and counting the full team (some of which are shared across the Slate Group), they’re 42 people, coming in at a whopping 154,285 uniques per employee. Slate is also, you know, seven times older than the Beast — and all that time online makes for a nice stream of search results and archive pages.
What other ways can we start looking at what websites are and how they perform? Well, pay rate, sure (though try prying that number out of anyone).
Then there’s traffic versus site income, certainly. Take the HuffPo again — that’s $30 million this year, they say, and they were reporting 26 million uniques via comScore back in March, so let’s just totally ballpark it and say they did 312 million uniques this year. That puts their income at just about 1:10 with readers. Fascinating!
Don’t compare that to print, by the way: the print edition of the New York Times has, very roughly, just a million subscribers. How tiny and quaint, you think? Well, the news division of the Times Co. alone takes in — mostly from the print product — somewhere around $150 million… each month. | https://medium.com/the-awl/we-need-new-ways-of-judging-the-success-of-websites-dc48eaea616a | ['Choire Sicha'] | 2016-05-13 08:18:39.055000+00:00 | ['Metrics', 'The Internets', 'New York Times'] |
Rapid 3D Mapping Using Commodity Video | While our Boulder Mapping Experiment was a ton of fun, getting our friends together every weekend to take photos was a big ask. So, we've been experimenting with other imagery capture modalities that could scale up more quickly. One model we are particularly excited about is video. Not only do most camera phones have the ability to take video, but the number of dash and action cameras has also exploded in recent years. The cherry on top is that there has been a proliferation of companies partnering with fleet vehicles and crowdsourcing to enable video at scale, like Mapillary, OpenStreetCam, Carmera, Nexar, MobilEye+ESRI, Tesla, etc. There is a surfeit of geotagged video being generated. If we can crack the nut of turning it into high accuracy 3D maps, it could open new possibilities for creating a crowdsourced 3D map of the globe.
Let’s GoPro
To simulate video collection from moving vehicles we used a simple rig; a GoPro 360 Max attached to a ski helmet while biking through our Boulder test area. Yeah it looks a little funny in action…
GoPro Collection Rig
We've also tested putting the GoPro on a pole for pedestrian collection of areas that can't be driven or biked. Starting our video collection testing with the GoPro 360 Max had its pros and cons. In the "pro" category, a 360 action camera gives you lots of photos from a variety of angles, which is quite handy for photogrammetry. In the "con" category, the GoPro uses a good bit of proprietary processing to create its 360 degree views, leaving you with an open-ended reverse-engineering challenge.
GoPro 360 Calibration and Photogrammetry Prep
Since the new-ish GoPro 360 Max wasn’t in any of the camera databases commonly used for photogrammetry we first tried to calibrate the sensor. There are some nice online tutorials for calibrating GoPro cameras to do things like remove wide angle distortion. So, we dove in, printed up a black and white chessboard and started following the protocols.
GoPro Calibration Chessboard
The bad news was we couldn't get good enough parameters from the calibration to derive a camera model and focal length that gave us a solid SfM (Structure from Motion) result. Fortunately, the photogrammetry framework we use supports splitting 360 degree images for processing. The hitch was the results were pretty lousy. Then Pramukta had a clever idea to split the images with overlap. One of the key principles that makes photogrammetry work is having multiple pixels in common between multiple images. These common pixels allow triangulation through lines of sight. To create our overlap, Pramukta divided our 360 degree images into eight 60 degree segments instead of eight clean 45 degree segments. This makes a bit more sense when visualized.
GoPro 360 Overlapping Segmentation
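A simplified sketch of the idea in Python (cropping horizontal slices from an equirectangular frame; a production pipeline would reproject to rectilinear views, and the segment count and field of view below are just the values discussed above):

import numpy as np

def split_with_overlap(equirect, n_segments=8, fov_deg=60.0):
    """Crop n horizontal slices from an equirectangular image.

    Segment centers are spaced 360/n degrees apart (45 degrees for n=8),
    but each slice spans fov_deg (60 degrees), so neighboring segments
    share 15 degrees' worth of pixels for SfM feature matching.
    """
    w = equirect.shape[1]
    px_per_deg = w / 360.0
    half_w = int(fov_deg / 2 * px_per_deg)
    segments = []
    for i in range(n_segments):
        center = int(i * (360.0 / n_segments) * px_per_deg)
        cols = np.arange(center - half_w, center + half_w) % w  # wrap the seam
        segments.append(equirect[:, cols])
    return segments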
The results when we implemented the new segmentation strategy were super encouraging. The increase in overlap gave us more pixels in common with our input images providing us much better SfM results. Here we can see the combination of an SfM output using the 360 overlap segmentation approach.
GoPro 360 SfM Result and Camera Poses with Segmentation
Overall, we consistently got solid results with the new technique. In the image below we can see some more results from the new approach across our Boulder test area.
GoPro 360 SfM Photogrammetrical Results
Compared to our "still photography" derived point clouds, GoPro captures ground detail and overall coverage better, but lacks some of the detail found in the "still photography" approach, especially for high angles. The biggest advantage was that the GoPro 360 mapping took us seven minutes by bike versus several hours taking still photos by foot. When it comes to 3D mapping large scale geographies, video is an ideal format to leverage.
The Magic of Using Odometry for Alignment
Another big advantage of using video as a collection modality is the ability to leverage “visual odometry” when you want to align your data to a survey reference. Traditional odometry is “the use of data from the movement of actuators to estimate change in position over time through devices such as rotary encoders to measure wheel rotations.” The concept of “visual odometry” extends this idea to determine the same information by using sequential camera images. This is perfect for GPS tagged video from action cams like GoPro, vehicle cameras or dashcams.
Traditionally, the challenge of combining SfM and "visual odometry" for large scale 3D mapping is the propagation of errors. The larger the area you are generating photogrammetry for, the further the triangulation errors propagate. Generally, smaller models can be quite accurate, and the larger your model grows, the more accuracy slips. To solve this problem for "still photography" we focus on taking 30-photo models and stitching them together. This doesn't work particularly well for video. Instead we segment video into chunks of time. Each segment is then sampled at a regular interval for frames that are then input to the photogrammetrical model.
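In code, the chunking amounts to something like this (an OpenCV-based sketch; the chunk length and frames-per-chunk values are illustrative, not the ones we used):

import cv2

def sample_chunks(video_path, chunk_seconds=20.0, frames_per_chunk=10):
    """Split a video into fixed-length time chunks and sample frames
    evenly within each chunk; each chunk then feeds one small SfM model,
    keeping triangulation errors local to that model."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * chunk_seconds / frames_per_chunk))
    chunks, current, idx = [], [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            current.append(frame)
            if len(current) == frames_per_chunk:
                chunks.append(current)
                current = []
        idx += 1
    cap.release()
    if current:
        chunks.append(current)
    return chunks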
Winnie built a super cool visualization of these time segments color coded and aligned in geographic space. Each color is a separate point cloud from a subset of video frames. Each point cloud is then aligned to the reference data.
GoPro 360 Video Derived Point Clouds Color Coded by Model
This segmentation of the data prevents errors from the SfM from being propagated beyond each small model. This keeps overall errors small and uncorrelated, which is key to an overall high level of accuracy for neighborhoods, cities and eventually the globe.
How Accurate is Video for 3D Mapping
Aesthetically the results of the GoPro 360 derived photogrammetric results were quite nice, but the far more important question is are they accurate? The irony in all this work is we strive for visually realistic renderings of reality, but the most compelling use case for data is machine use not people. The machine doesn’t care if it’s maps look nice just that it is accurate to reality, whether that be an augmented reality or autonomy use cases. That said let’s look at the accuracy of our GoPro 360 video derived point clouds co-registered to our Nearmap reference data.
GoPro Derived Point Cloud Co-registration Error to Nearmap Aerial
The map above visualizes the residuals from the RMSE calculation, color coded by how much error there is from each pixel to the reference. The interesting pattern in the results is that error is generally minimal (under 18cm) at ground level and increases at higher elevations. The rough correlation between error and elevation makes sense: the camera is at roughly ground level and loses accuracy for the pixels furthest away from it. For the entire GoPro data set there is an average 50cm RMSE to the Nearmap baseline. This is a solid result, but not quite as good as the 25cm RMSE we achieved with the mobile phone photo generated point clouds.
There are a few caveats to the 50cm RMSE that should be noted. It includes additional error generated by transient objects like cars, aerial occlusions like overhangs and artifacts like sky points. We can see some examples of these in the image below. The most prominent being the cars and sky points artifacts.
False Positive Errors in GoPro to Nearmap Co-registration
Fortunately, the sky point artifacts can be pretty easily removed in post-processing. With a good bit more work, cars can also be removed from the data sources. Both of these additions would improve the RMSE score. We are a bit stumped on the occlusion-induced errors, but likely the solid path there is a "best" pixel analysis done in post-processing after co-registration. This doesn't solve the error problem but removes the operational issue in deployment. If you'd like to explore the data you can download the .laz of the GoPro 360 point cloud here — the "residual" column is the RMSE attribute.
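For anyone poking at the .laz, the residuals are conceptually just nearest-neighbor distances to the reference cloud; a rough sketch of the computation (our actual pipeline differs in the details):

import numpy as np
from scipy.spatial import cKDTree

def coregistration_residuals(cloud, reference):
    """cloud, reference: (N, 3) and (M, 3) arrays of XYZ points.
    Returns per-point nearest-neighbor distances and the overall RMSE."""
    tree = cKDTree(reference)
    residuals, _ = tree.query(cloud, k=1)
    rmse = float(np.sqrt(np.mean(residuals ** 2)))
    return residuals, rmse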
Conclusion
Commodity video is a great data source for large scale 3D mapping. It isn't quite as accurate as the point clouds we derived from our mobile app, but the coverage was markedly better. Another upside to using video is its ubiquity of collection at city scale and larger. We'll further explore the potential of video in a future post utilizing the eight cameras available in a Tesla Model 3:
Tesla Left Repeater Camera Footage
The best part — it isn't just armies of Teslas driving our cities; there are a plethora of cameras collecting data. A bonus is that the photogrammetrical process removes personal information. People are transient so they aren't rendered, and the detail isn't good enough for specifics like license plates. This doesn't obviate the issues of the raw input data, but the natural obfuscation of large scale photogrammetry is a nice positive externality.
Recently there was a Tweet from Christopher Beddow asking about mounting a GoPro to a drone for OpenStreetMap work:
What if we did not need drones to augment OpenStreetMap with pixel level precision. Just upload your GoPro etc. video and go. This also opens up lots of interesting trajectories for 3D annotations, but one step at a time. First challenge — demonstrate we can scale. Map and process a small city in a day! | https://medium.com/@pixel8earth/rapid-3d-mapping-using-commodity-video-cbfe7e5a767 | [] | 2020-04-20 17:22:34.928000+00:00 | ['Openstreetmap', 'Augmented Reality', 'Mapping', '3d', 'Photogrammetry'] |
What not to do when welcoming new people to the gun community | This is not the ATF, image courtesy of Wing-Chi Poon and Wikimedia Commons
YouTuber, Garand Thumb, has acquired nearly a million subscribers by reviewing the latest tacticool firearm to be released by gun makers. The fact that these weapons are legally or financially difficult for ordinary people to obtain is beside the point, given all the attention that the host’s flannel shirts receive. His content is a lot of style sustained by the light dusting of substance that I have come to expect from gun magazines, the goal of which appears to be to convince people that they need whatever is a new packaging for the same old devices.
Very well. The contents have always been labeled on the tin. Take it or leave it. But a recent video moves his content from schmaltz to noxious garbage, an example of what we in the gun community should not do. In a review of Heckler & Koch’s SPK5, the semiautomatic handgun version of their MP5 submachine gun, the host points out modifications required to make said weapon legal for sale to the general public — assuming that buyers have more than $2,000 to spend — and says, “Thank you ATF again for being supergay.”
The Bureau of Alcohol, Tobacco, Firearms and Explosives — minus the necessary Oxford comma — is the federal agency that supervises the sale and possession of many things that make life amusing and risky. As has been pointed out before, the ATF should be a convenience store, not an organ of government interference in our private lives. Insulting said agency is a part of the repertoire of talking heads on gun channels and among folks at the range. But there is a fundamental problem with the choice Garand Thumb made in this particular criticism, and I have to move to a new paragraph to emphasize it:
Being gay is not wrong, while using “gay” as an insult is.
Do I have to explain this? Apparently I do, since Garand Thumb and several of the commenters to the video do not seem to understand. A person’s sexual orientation tells us exactly nothing about the moral qualities, intellectual abilities, or tactical skills of the person. Sexual behavior among consenting adults is as fraught and complicated and wonderful as human life is generally, and anyone who claims to have figured it all out is a liar or a fool. It can involve a lot of pain, but as long as all participants are of age and freely taking part, it has great potential, and a lot of people have concluded that the risk is worth the effort. And as suspicious as I am of those who declare perfect knowledge on the subject, I am equally unimpressed by anyone who says that the free interactions of others is wrong.
Perhaps the judgmental are jealous. How a gay couple living next to me is a harm to me is beyond my comprehension. They might make me feel bad by not inviting me to a party, and they could play their stereo at a high volume when I am trying to sleep, but their private sexual behavior does me no ill.
Whatever Garand Thumb feels about gay people, he should recognize that a lot of voters regard gun ownership the same way, and as long as he is not firing his weapons at innocent people, he is as harmful as the hypothetical gay couple next door.
This is an essential point for all of us in the gun community to grasp. If we are to secure the particular right that we exercise, we have to protect all rights. And if we want support and even participation from many groups, we are not going to gain that with casual insults.
I recommend that Garand Thumb offer a significant contribution to the Pink Pistols, if they will take his money, and more importantly, I ask him to consider the effect of his words. A gun is a powerful weapon, but language is exponentially greater in its reach, and if we use either to harm innocents, we put that exercise in danger. | https://medium.com/the-hoplite-magazine/what-not-to-do-when-welcoming-new-people-to-the-gun-community-b345b09717dc | ['Greg Camp'] | 2020-11-08 04:55:35.904000+00:00 | ['LGBT', 'Atf', 'Gun Rights', 'Garand Thumb', 'Gay Rights'] |
Why I Don’t Want To Live My Best Life | Why I Don’t Want To Live My Best Life
A Poem
Photo by Jonathan Borba on Unsplash
~
.
I don’t want to be my best
self
living my
best life
filtered
for Instagram.
If I smiled all the time,
my frowns would get lonely.
I’d have to start
using Botox to smooth out
all those laugh lines
my best life would bring me.
No, living like that,
I’d have to suppress
all that is contrary
in me.
Besides, I am allergic
to positivity,
everything cloying.
I want to question every-
thing and one
in authority.
I want to know why
and why not,
and maybe I want to
reserve the right
to be very rude
and very loud.
I want to ache and feel
the unbearable pain
of reading the news.
I want to feel everything
in me shatter as I read stories
about lives lost,
refugees imprisoned,
children cheated of enchantment,
safety, and food.
I don’t want to be a best-life
living influencer,
white, trite, and shallow.
I don’t want to be
a paler smiling shade
of white supremacy
in thousands of social
media feeds.
All I want is to do
whatever, whenever,
without compromise.
I want to be an unfiltered,
melancholy,
ungrammatical,
comma splicing
woman of a certain age.
I want no one to tell
in some maudlin
life-affirming confessional
all that burns in my soul,
or how I tuck away problems
in the back of drawers.
I’m not even certain
I could live my best life
if I wanted to.
I have wounds that won’t
scab over and I’m always,
always expecting
another catastrophe,
sometimes I almost crave
that full body-brain
rush of adrenaline.
Trauma is a thread
of my identity
woven too tight to pull
without unraveling
my entire self
and I love my flaws
too much to offer them up
to be someone else.
Best living
sounds so full of denial
and half-selfing.
I think the term was invented
by a marketing team
to sell online courses,
podcasts, and lip gloss.
No, you can take your best life
dream and shove it
on someone else.
~
© 2019, A. Breslin. All Rights Reserved | https://medium.com/annapoetics/why-i-dont-want-to-live-my-best-life-2e485d2668e2 | ['Anna Breslin'] | 2019-10-23 17:39:04.838000+00:00 | ['Love', 'Life Lessons', 'Self', 'Lifestyle', 'Poetry'] |
Fix And Keep Charger Cables From Breaking | If the charger cables are frayed or the cords are broken, it will not only affect the performance of the device but will also damage it. It will also cause inefficient charging, resulting in an increase in your monthly utility bills. Therefore, if you want to keep your charger in sync and protect your mobile device, repair and protect your iPhone charger cords to prevent them from breaking in the first place. Here are a few effective and easy ways to do so.
Charger Cables Fix Tips
Cable Pen Spring
You can protect your Apple charger cable from breaking with a pen spring. It may make the charger look a little ugly, but it is effective and a time-tested, honored trick as well. It is a simple process to follow.
If you have a couple of old pens lying around, open these up
Remove the spring and
Wind it around your iPhone cord at the top and at the bottom as well.
Make sure that the slightly thicker part just below the connector is well covered to protect it from breaking.
The spring will help prevent the cord from bending much overly and hold all of it together.
Use a Paracord
You can also protect your charger cable with a paracord. This is a creative way to protect your charger cable from breaking.
Take a reasonable length of paracord and gut it
Melt the ends to prevent it from fraying
Weave the entire lot of the paracord around your charging cable and
Use a ‘cobra’ knot as you go ahead.
The end will be less ugly in comparison to a pen spring and more colorful and customizable to boot. However, the catch here is tying the paracord knot. There are lots of websites that provide step-by-step guides for it. Take help from these.
Use Electrical Tape
This is the best trick to save your charger cable from breaking. Electrical tape is easily available and a strong protector.
Take a length of this electrical tape
Wrap the ends of the cord with it
Make sure that it is as tight as possible at the place where the cable meets the plugs and you are done.
This process will work great if you want an inexpensive way to protect your cord but be informed that it will be for a short while as over time the tape will come loose and leave a nasty sticky residue.
Heat Shrink Tubing
If you do not want any mess, then using heat-shrink tubing is the best and the sexiest option to protect or repair a charger cable. This stuff is:
Cheap
Easily available in local hardware stores and
Comes in many colors.
Slide the tubing over the cable and use a hairdryer to heat it up so that it shrinks to fit on the wire to provide it with good reinforcement. You will get additional protection if you use a pen spring over the cord first and then shrink wrap the tubing over it. | https://medium.com/visualmodo/fix-and-keep-charger-cables-from-breaking-935095d27b92 | [] | 2019-04-19 20:00:08.681000+00:00 | ['Charger', 'Tech', 'Cables', 'Fix', 'How To'] |
The Evolution of the Space Race | On October 4, 1957, the Soviet Union successfully launched the first artificial satellite into Earth’s orbit. Sputnik 1 captured the imagination of millions, even sparking a spell of intense anxiety among Americans that their efforts in self-government were finally outmatched.
Outgoing President Dwight Eisenhower (left) shakes hands with President-elect John Kennedy
President Dwight Eisenhower did not share these concerns. He had recently dispatched federal troops to Little Rock, Arkansas, in a now-famous showdown between a president offering federal protection to black Americans threatened by discriminatory state laws and the segregationist Governor Orval Faubus.
The press, however, was more interested in the Communist ball orbiting the Earth whose batteries would ultimately die after three weeks. People were concerned at first with the notion that the Soviet Union had made a significant technological leap unbeknownst to the west. Their concern grew only more acute when the president snapped at a reporter that Sputnik “doesn’t raise my apprehensions, not one iota.” Eisenhower insisted that the only advantage the Soviets had gained was a psychological one.
Perhaps he harbored anxieties surrounding the growth of a military-industrial complex, as he made abundantly clear in his 1961 farewell address. He had weighed the opposing threats of Soviet dominance and unchecked elite military power in America feeding off peoples’ fears and concluded the latter to be more dire.
The launch of Sputnik 1
His successor did not share these apprehensions. President John Kennedy boldly declared “We choose to go to the moon in this decade and do the other things, not because they are easy but because they are hard.” Kennedy believed that by harnessing the powers of innovation and motivation that America could once again lead the race to the future by capturing imaginations. He made the case that if we are not here to reach for the next frontier, then what are we here for?
Eisenhower feared an ever-growing gap in wealth and knowledge that would lead to a rule by “technological elites” who stomped on the will of the people. Kennedy saw the vast opportunities for the growth of freedom, peace and shared prosperity, should Americans rise to the challenge.
In 2021, these opposing visions have merged into one. Those who are seeking to harness the ingenuity of Americans for a greater purpose are the technological elites who have amassed monumental power.
The space race of the twenty-first century is not one between the powers of democracy and autocracy. The Chinese Communist Party landed their nation's first rover on Mars just this year, a feat achieved by NASA in 1997. Autocratic forces have receded from the space race as American wealth grew to create companies that rival the power of governments. It is now a race between billionaire entrepreneurs who appear to have replaced NASA — established under Eisenhower in 1958 — on the frontier of technological innovation.
One fear that Eisenhower and Kennedy shared was the potential for space to become a battleground for experimental, chaotic and high-stakes warfare. Has the evolution of the space race from the mid-twentieth century to now bettered our prospects of a free and peaceful exploration of the beyond?
SpaceX rocket launch
The waters of outer space have yet to be fully tested. For years, American and Russian scientists and astronauts have served honorably in a shared pursuit of science aboard the International Space Station. Territorial disagreements on moons and planets beyond our home have yet to arise. History suggests against a peaceful exploration of new lands, but the idea of leaving our home planet permanently for the first time is unprecedented.
In 2017, NASA formalized plans to revitalize American ambitions in space with the Artemis Program. The goals of the program include a new space station orbiting the moon called the Lunar Gateway which will serve as a springboard to construction of a moon base and future missions to Mars. President Donald Trump endorsed the plan and pushed NASA to put the “first woman and the next man” on the moon by 2024.
The Artemis Program survived the transition to President Joe Biden’s administration, affirming that the United States is once again committed to its pursuit of the beyond.
Yet Elon Musk’s SpaceX, Jeff Bezos’s Blue Origin, and Richard Branson’s Virgin Galactic appear to be the driving forces behind the emerging space race. Bezos and Branson have both successfully traveled outside Earth’s atmosphere in rockets they built. Musk notched a nearly $3 billion deal with NASA that will use SpaceX rockets in the institution’s mission to return to the moon.
Financial incentives will now drive the leaders of the space race. Musk has proven that innovation can indeed provide fuel to the drive to colonize other worlds — fuel that government efforts have lacked for decades — while Bezos and Branson have proven that commercial space travel is both safe and viable.
The paradoxical realization of Eisenhower’s worst fears and Kennedy’s highest ambitions leaves Americans with a slate of new questions about how the mission will be handled. The cooperation of private and public enterprise promises to vindicate the American dream, yet how many will live that dream? Is space to become a haven for the technological elite while millions are left behind on an uninhabitable planet?
Tesla Roadster put into orbit by SpaceX. The screen reads “Don’t panic!”
Perhaps not. Perhaps the next generation will realize the American dream more fully than ever before as entrepreneurs build sprawling cities on the moon and tourists orbit Mars.
Elon Musk often says that fate loves irony. The evolution of the space race proves just that in realizing both people’s fears of control by a wealthy elite and dreams of innovation fueling accessibility of space to millions. Questions about future space expansion are not going to have answers for a long time; they are questions that we will continue to ask ourselves throughout the journey as our world evolves.
The question then becomes where we choose to put our trust. Eisenhower warned against trusting individuals like Musk and Bezos with this endeavor, but how much closer to space did his administration bring us? Or the administrations of the post-Cold War era?
Government repeatedly failed to act over budget concerns and a lack of initiative. The technological leaps made by SpaceX changed the paradigm of the budget debate by inventing reusable rockets, eliminating the need to build a single-use rocket costing tens of billions of dollars for each mission.
The lesson may be that the most desirable outcome is the traditional American way, somewhere in the middle. Pure financial incentives have driven us to harm our planet’s habitability and barrelling down this track full steam ahead predicts the development of space into a haven for those rich enough to escape natural disaster on Earth. Government control of the space race has produced inaction and disinterest, offering little hope or incentive for developing space infrastructure.
The state of the space race today suggests collaboration and the continued pursuit of American values. Jeff Bezos and Richard Branson are starting the commercialization of space, yet their infrastructure will orbit next to NASA space stations building bridges to new homes.
The technological elite drive the space race. For now, their incentives indicate a desire to uphold NASA’s values and vision but a lack of faith in its ability to make that vision a reality. | https://medium.com/@culotta29/the-evolution-of-the-space-race-9cbbfe8ef464 | ['Holden Charles'] | 2021-08-02 20:14:14.922000+00:00 | ['NASA', 'Jeff Bezos', 'Space Exploration', 'Spacex', 'Elon Musk'] |
I Don’t Believe in Climate Change | I was driving to work this morning with Cindy in the passenger seat next to me. I don’t know who she was referring to when she said, “it annoys me that he doesn’t believe in climate change.”
I found myself responding, “I don’t believe in climate change either.” Then I stopped to consider whether that’s really true.
Yeah, I just don’t believe in it. How could anyone believe in it. It’s not something that warrants belief. The human-caused climate change hypothesis simply reflects a scientific consensus. The balance of evidence, as judged by the significant majority of experts in the field, seems to be that human activity has been causing climate change and, unchecked, that it will cause further climate change. That continued climate change will be catastrophic for life on Earth. Where is the need for any kind of belief in that?
It’s not a black-and-white thing. It’s not true or false. It’s an issue with a large amount of data that has been parsed and statistically analyzed producing a complex, nuanced, and constantly shifting set of conclusions and recommendations. There’s no room anywhere in that for black-and-white beliefs.
This hypothesis, the evidence gathered in relation to it, and the consensus opinion of the wisest next-steps given our best guesses of statistical probabilities is simply what is. On balance, I trust the scientific endeavor and the conscientiousness of the scientific community. If the recommended action is to reduce carbon emissions, for example, then as a trusted servant-leader, I would take that advice and simply execute on it effectively and quickly. And this is what pretty much all leaders worldwide have been doing and are doing; of course they are.
I think that framing the climate change issue as one of belief plays a game that is degrading to science; science is not a religion. When the religious right claims that it doesn’t believe in climate change, the adult response is simply, “it’s your right to believe anything you choose but we don’t set policy based on beliefs; we set policy based on the balance of evidence.”
The religious left is what elevates the discussion into an argument about beliefs. Those who worship at the altar of Scientism say, “you must believe in this!” and allow the conversation to be transformed into an argument about beliefs. When two religions with opposing beliefs enter into a battle to determine what is the “right” belief, the battle can never end. All we get is a never-ending gridlock of two opposing religions.
Climate change is simply a scientific policy matter. It has nothing to do with religion.
Update: “They said confidence that human activities were raising the heat at the Earth’s surface had reached a ‘five-sigma’ level.” | https://duncanr.medium.com/i-dont-believe-in-climate-change-fbab2cf073e0 | ['Duncan Riach'] | 2019-02-25 23:26:43.758000+00:00 | ['Climate Change', 'Psychology', 'Religion', 'Leadership', 'Politics'] |
https://kombackblog.com/christmas-business-ideas-to-make-money/ | Komback is a free online marketplace in Nigeria where you sell your goods and services locally for free. Free post Ads. | https://medium.com/@kombackng/https-kombackblog-com-christmas-business-ideas-to-make-money-2a3b9bfdc93c | [] | 2020-12-24 00:48:02.394000+00:00 | ['Christmas', 'Vacation Rental', 'Online', 'Decorations Light Pole', 'Online Shopping'] |
Dreaming of Summer. Late November. The weather seems to… | Late November. The weather seems to have sucked the color out of everything. I’m living among shades of grey. But I dream in color.
I wander across Miradouro de Santa Lucia, the terrace overlooking Alfama and the Tagus river. Where street players and singers gathered every evening. Where I’ve waited for sunrise. Where I’ve stood silently at night, listening to a song together with other passers-by. A moment of connection created by a human voice and a guitar.
Then I continue on to Portas do Sol, the larger square from which numerous streets branch out downwards, to the river, or upwards, to Castle Sao Jorge.
There’s a small coffee shop, right here on the corner. They open really early and they serve coffee and pastel de nata. They only speak Portuguese but we understand each other in the universal language of people in need of caffeine.
The few other clients are locals taking a few minutes on their way to work. I am the only outsider here, sipping my coffee outside while the owner is still cleaning up and arranging the tables. But I feel like being where I should be.
The sun has risen right in front of me. I have this whole morning ahead, like drinking spring water with your bare hands, like virgin snow on the mountain. Everything can still happen. I’ve lost so much and I’ve lost myself so much, but here at this small table with its cheap tablecloth I feel like everything is still possible.
“Obrigado”, I say to the coffee shop owner. I continue in English, telling him how much I loved the pastel. He’s nodding and smiling. He doesn’t have a clue what I’m saying but understands it’s a compliment.
I go off the main road into narrow back streets that zigzag uphill. Beco de Maldonado, Rua dos Cegos, Calçada do Menino Deus… It’s like going back in time. I could imagine myself walking these streets 30 or 50 years ago. Nothing would need to change to account for the passage of time.
There’s nobody outside but the houses are alive. There are voices inside, there are noises of people cooking, cleaning, just going about their normal lives. The first fallen leaves of late summer are blown away by the breeze.
There’s something in me that would like to cling to this moment, that would like to stay frozen in this snapshot like a fly caught in amber. I’ve always had a hard time letting go of things that I love. Accepting that they come and go.
But right here, surrounded by pigeons flapping their wings in the sun, I just leave things be and I let go.
Text and photos by Florin Popa | https://medium.com/blueinsight/dreaming-of-the-summer-57c12b8e65a4 | ['Florin Popa'] | 2020-11-26 12:58:09.256000+00:00 | ['Storytelling', 'Travel', 'Photography', 'Writing', 'Blue Insights'] |
Index usage monitoring in Oracle | Indices (or indexes, if you will) are always good, right? Well, Tom Kyte would disagree with you and so would I. In fact, indices will always impede data manipulation performance, so they should be used only where they are useful in queries. But determining whether an index is useful can sometimes be pretty difficult, especially in a complex application with thousands of different queries, some even dynamically generated.
In this case, one of the approaches is to create indices on all columns where they may possibly make sense and monitor whether they are ever actually used by the CBO planner. I will show you how to do that. This method was tested in 11g, but should work in 10g as well.
The magic is done by an ALTER INDEX indexname MONITORING USAGE.
Starting the monitoring
You will need an ALTER INDEX privilege on the indices you want to monitor, so preferably you do it as a SYSDBA. Fire up your SQL*Plus and connect to the database. First, we will create a script to enable the monitoring. Replace the MYUSER with whatever user you want to monitor the indices for.
set heading off
set echo off
set feedback off
set pages 10000
spool startmonitor.sql
select 'alter index '||owner||'.'||index_name
||' monitoring usage;'
from dba_indexes
where owner ='MYUSER';
spool off
Then we simply run the script from within the console.
SQL> @startmonitor.sql
You might see some error messages when the script tries to alter a LOB index, which is not possible; just ignore those.
Now you want to keep your application running for some time. At least for it to perform all kinds of work and jobs it normally does. So for example if you have jobs planned for every hour, leave it running like that for at least an hour. If you have daily jobs, leave it on for a whole day.
Stop the monitoring
In the meantime you can prepare a script to turn monitoring off.
set heading off
set echo off
set feedback off
set pages 10000
spool stopmonitor.sql
select 'alter index '||owner||'.'||index_name
||' nomonitoring usage;'
from dba_indexes
where owner ='MYUSER';
spool off
After you reach your desired monitoring period, again, simply run the off-script:
SQL> @stopmonitor.sql
Results
Now, for the results. The index usage overview is stored in the v$object_usage system view. The tricky part here is that you only see the indices for the currently logged-in user. So, even if you were doing the monitoring as SYS or some other user, now you must log in as the user whose indices you were monitoring, in our example MYUSER.
When you do that, identifying those harmful indices is as simple as this:
SELECT INDEX_NAME, TABLE_NAME
FROM V$OBJECT_USAGE
WHERE USED='NO';
Some of those indices might be primary keys, so you don't want to be dropping those, but all other unused indices should go. Unless, of course, you know you will need them in the very near future.
UPDATE:
You can query the monitoring results for all users too, but you have to build the query yourself from system tables: | https://medium.com/tallship-cz/index-usage-monitoring-in-oracle-10508a97532e | ['Michal Táborský'] | 2018-03-15 09:38:22.262000+00:00 | ['Performance', 'Oracle'] |
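A commonly used variant, run as SYSDBA, looks like the following (the SYS dictionary objects and flag bits here may vary slightly between Oracle versions, so treat this as a starting point):

SELECT u.name AS owner,
       io.name AS index_name,
       t.name AS table_name,
       DECODE(BITAND(ou.flags, 1), 0, 'NO', 'YES') AS used,
       ou.start_monitoring,
       ou.end_monitoring
FROM   sys.object_usage ou,
       sys.ind$ i,
       sys.obj$ io,
       sys.obj$ t,
       sys.user$ u
WHERE  i.obj# = ou.obj#
AND    io.obj# = ou.obj#
AND    t.obj# = i.bo#
AND    u.user# = io.owner#;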
This glorious 65-inch 4K Roku TV is a jaw-dropping $400 today | The Super Bowl is coming, and if you plan to watch the big game you're going to need a bigger screen. Best Buy has you covered today with a Westinghouse 65-inch 4K Roku smart TV for a measly $400—$200 off the MSRP and one of the best prices you'll ever see.
The TV itself features 4K resolution, as well as HDR 10. We wouldn't call that true HDR as this TV doesn't get quite bright enough to qualify, but you'll certainly notice the improved picture over similarly sized sets without it. With Roku on board, you have access to all your favorite streaming services including Apple TV, HBO, Hulu, Netflix, Starz, and more. This TV also works with Amazon Alexa and Google Assistant for voice control, as well as integrating with other smart home devices.
For ports, it has three HDMI, one of which supports HDCP 2.2, as well as one USB 2.0 port for thumb drives and other storage devices. It's also packing one digital optical audio out and an analog audio out. For connectivity, you get Wi-Fi and Ethernet, but no Bluetooth.
Add it all up and you've got one heck of a bargain for just $400. So go grab one and get ready to watch the big game in style.
[Today's deal: Westinghouse 65-inch 4K Roku smart TV for $400 at Best Buy]
An Introduction to Feature Engineering: Feature Importance | After having read the first article of this series, devoted to figuring out a way to tackle a Machine Learning (ML) competition, we are now ready to go on with Feature Engineering and, in particular, Feature Importance.
The real deal is that nobody explicitly tells you what feature engineering is; in some way, you are expected to understand for yourself what good features are.
Feature engineering is another topic which doesn’t seem to merit any review papers or books, or even chapters in books, but it is absolutely vital to ML success. […] Much of the success of machine learning is actually success in engineering features that a learner can understand. (Scott Locklin, in “Neglected machine learning ideas”)
Let’s try to figure out what feature engineering is.
In solving such problems, our goal is to get the best possible result from a model. In order to achieve that, we need to extract useful information and get the most from what we have. On one side, this includes getting the best possible result from the algorithms we are employing. On the other side, it also involves getting the most out of the available data.
How do we get the most out of our data for predictive modeling?
Feature engineering tries to find an answer to this question.
Actually, the success of all Machine Learning algorithms depends on how you present the data. ( Mohammad Pezeshki, answer to “What are some general tips on feature selection and engineering that every data scientist should know?”)
Feature Importance
Feature importance refers to a bunch of techniques that assign a score to input features based on how useful they are at predicting a target variable. These scores play an important role in predictive modeling; they usually provide useful insights into the dataset and form the basis for dimensionality reduction and feature selection.
Feature importance scores can be calculated both for regression and classification problems.
These scores can be used in a range of situations, such as:
Better understanding the data: the relative scores can highlight which features may be most relevant to the target, and on the other side, which are least relevant. This could be a useful notion for a domain expert and could be used as a basis for gathering more or different data.
Better understanding a model: inspecting the importance score provides insights into the specific model we’re using and which features are the most important to the model when elaborating a prediction.
Reducing the number of input features: we can use the importance scores to select those features to delete (lowest score) and those to keep (highest scores).
Now let’s jot down a few lines of code in order to grasp this topic in a better way. In order to explore feature importance scores, we’ll import a few test datasets directly from sklearn.
Classification Dataset
Easy peasy, we can use the make_classification() function to create a test binary classification dataset.
We can specify the number of samples and the number of features; some of them are going to be informative and the remaining redundant. (Tip: fix the random seed so you get a reproducible result.)
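A minimal version might look like this (the sample and feature counts are just illustrative choices):

from sklearn.datasets import make_classification

# 1,000 samples, 10 features: 5 informative, 5 redundant
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
print(X.shape, y.shape)  # (1000, 10) (1000,)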
Regression Dataset
In a parallel fashion, we’ll use the make_regression() function to create a regression dataset.
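Along the same lines (again, the sizes are illustrative):

from sklearn.datasets import make_regression

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
print(X.shape, y.shape)  # (1000, 10) (1000,)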
Coefficients as Feature Importance
When we think about linear machine learning algorithms, we always fit a model where the prediction is the weighted sum of the input values (e.g. linear regression, logistic regression, ridge regression, etc.)
These coefficients can be used directly as naive feature importance scores. Firstly we’ll fit a model on the dataset to find the coefficients, then summarize the importance scores for each input feature and create a bar chart to get an idea of the relative importance.
Linear Regression Feature Importance
It’s time to fit a LinearRegression() model on the regression dataset and get the coef_ property that contains the coefficients. The only assumption is that the input variables have the same scale or have been scaled prior to fitting the model.
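A sketch of the whole procedure on the synthetic regression dataset (exact scores will depend on the random seed):

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from matplotlib import pyplot

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
model = LinearRegression()
model.fit(X, y)
importance = model.coef_  # one coefficient per input feature
for i, v in enumerate(importance):
    print('Feature: %d, Score: %.5f' % (i, v))
pyplot.bar(range(len(importance)), importance)
pyplot.show()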
This same approach can be used with regularized linear models, such as Ridge and ElasticNet.
Logistic Regression Feature Importance
In a similar fashion, we can do the same to fit a LogisticRegression() model.
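The sketch is nearly identical, except that for a classifier coef_ is a 2D array, so we take the first (and, for a binary problem, only) row:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
model = LogisticRegression()
model.fit(X, y)
importance = model.coef_[0]  # coefficients for the binary problem
for i, v in enumerate(importance):
    print('Feature: %d, Score: %.5f' % (i, v))
pyplot.bar(range(len(importance)), importance)
pyplot.show()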
Recall that this is a classification problem with classes 0 and 1 (binary). Notice that the coefficients are both positive and negative, positive scores indicate a feature that predicts class 1 while negative scores indicate a feature that predicts class 0.
Why can’t we analyze a regression problem with Logistic Regression? (A pretty naive question, try to answer tho)
Decision Tree Feature Importance
Decision Tree algorithms like Classification And Regression Trees (CART) offer importance scores based on the reduction in the criterion used to select split points, like Gini or entropy. This approach can also be used for ensembles of decision trees, such as Random Forest and Gradient Boosting algorithms.
We can directly use the CART algorithm for feature importance implemented in Scikit-Learn as the DecisionTreeRegressor and DecisionTreeClassifier.
The model provides a feature_importances_ property that tells us the relative importance scores for each feature.
CART Regression Feature Importance
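For example (same synthetic dataset as before; the exact scores vary with the seed):

from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
model = DecisionTreeRegressor().fit(X, y)
print(model.feature_importances_)  # non-negative scores that sum to 1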
CART Classification Feature Importance
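And the classification counterpart, swapping in DecisionTreeClassifier:

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
model = DecisionTreeClassifier().fit(X, y)
print(model.feature_importances_)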
Random Forest Feature Importance
Analogously, we can use the RandomForest algorithm for feature importance implemented in scikit-learn as the RandomForestRegressor and RandomForestClassifier.
As above, the model provides a feature_importances_ property.
Random Forest Regression Feature Importance
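Only the model class changes:

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
model = RandomForestRegressor().fit(X, y)
print(model.feature_importances_)  # averaged over all trees in the forest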
Random Forest Classification Feature Importance
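And for classification:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
model = RandomForestClassifier().fit(X, y)
print(model.feature_importances_)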
XGBoost Feature Importance
XGBoost is a Python library that provides an efficient implementation of the stochastic gradient boosting algorithm. (For an introduction to Boosted Trees, you can take a look here)
This algorithm can be integrated with Scikit-Learn via the XGBRegressor and XGBClassifier classes.
Even in this one, we can find the feature_importances_ property.
First, let’s install the XGBoost library, with pip:
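pip install xgboost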
Now, let’s take a look at an example of XGBoost for feature importance.
XGBoost Regression Feature Importance
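The sklearn-style wrapper keeps the code shape unchanged:

from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
model = XGBRegressor().fit(X, y)
print(model.feature_importances_)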
XGBoost Classification Feature Importance
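Likewise for classification:

from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
model = XGBClassifier().fit(X, y)
print(model.feature_importances_)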
Permutation feature importance is a technique for calculating relative importance scores that is independent of the model used. It measures the increase in the prediction error of the model after we permuted the feature’s values, which breaks the relationship between the feature and the true outcome.
The concept is really straightforward: we measure the importance of a feature by calculating the increase in the model’s prediction error after permuting the feature. A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature for the prediction.
Permutation feature selection can be used via the permutation_importance() function, which takes a fit model, a dataset and a scoring function.
Let’s try this approach with an algorithm that doesn’t support feature selection natively, KNN (K-Nearest Neighbors).
Permutation Feature Importance for Regression
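A sketch with a KNN regressor (the scoring choice and repeat count are reasonable defaults, not the only options):

from sklearn.datasets import make_regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=1000, n_features=10, n_informative=5,
                       random_state=1)
model = KNeighborsRegressor().fit(X, y)
results = permutation_importance(model, X, y, n_repeats=10,
                                 scoring='neg_mean_squared_error',
                                 random_state=1)
print(results.importances_mean)  # mean importance over the repeats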
Permutation Feature Importance for Classification
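And the classification version, scored on accuracy:

from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
model = KNeighborsClassifier().fit(X, y)
results = permutation_importance(model, X, y, n_repeats=10,
                                 scoring='accuracy', random_state=1)
print(results.importances_mean)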
Feature Selection with Importance
Feature importance scores can be used to find useful insights and interpret the data, but they can also be used directly to help rank and select features that are most useful. This procedure is usually referred to as Feature Selection, and we'll look at it in more detail soon.
In our case, we can show how it is possible to find redundant features by using the previously shown techniques.
Firstly, we can split the dataset into train and test sets, train a model on the training set, make predictions on the test set and evaluate the results by employing classification accuracy. We’ll use a Logistic Regression model to fit our data.
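A baseline sketch (the exact accuracy depends on the seed and split; a run like the one reported below lands around 86-87%):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=1)
model = LogisticRegression().fit(X_train, y_train)
yhat = model.predict(X_test)
print('Accuracy: %.2f' % (accuracy_score(y_test, yhat) * 100))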
In this case, we can see that our model achieved a classification accuracy of about 86.67 % using all the features in the dataset.
Let’s see what happens if we select only relevant features. We could use any of the feature importance scores above, but in this case we’ll use the ones provided by random forest.
We can use the SelectFromModel class to define both the model and the number of features to select.
This will calculate the importance scores that can be used to rank all input features. We can then apply the method as a transform to select a subset of 5 most important features from the dataset. This transform will be applied to the training set and the test set.
We can wrap up every piece and get this code snippet.
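Roughly like this (passing threshold=-np.inf forces exactly the top max_features to be kept; without it, SelectFromModel also applies a mean-importance threshold):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=1)

# rank features with random forest importance and keep the 5 best
fs = SelectFromModel(RandomForestClassifier(n_estimators=200),
                     max_features=5, threshold=-np.inf)
fs.fit(X_train, y_train)
X_train_fs = fs.transform(X_train)
X_test_fs = fs.transform(X_test)

model = LogisticRegression().fit(X_train_fs, y_train)
yhat = model.predict(X_test_fs)
print('Accuracy: %.2f' % (accuracy_score(y_test, yhat) * 100))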
In this case, we can see that the model achieves the same performance on the dataset, although with almost half of the features. | https://medium.com/mljcunito/an-introduction-to-feature-engineering-feature-importance-7e8265eb3a36 | ['Simone Azeglio'] | 2020-08-18 12:39:48.930000+00:00 | ['Machine Learning', 'Mljcunito', 'Data Science', 'Python', 'Data Analysis'] |
5 Minute DevOps: Choosing the Right Tools | Whenever we want to solve a problem or build something useful, we have decisions to make about the tools we choose. Choosing the right tools requires a good understanding of the problems. Often though, there is a tendency to jump to familiar tools regardless of the problem. “I love the balance of this hammer. It just works well.” The hammer is probably awesome and it may have been used successfully to drive screws for years, but what if we stood back and reflected on why we are choosing a hammer to drive screws?
In 2007, Dave Snowden published “A Leader’s Framework for Decision Making” in the Harvard Business Review. In that article, he introduced the Cynefin framework (ku-nev-in). Cynefin is a way to categorize the kind of problem we have so we can apply the right tools. Briefly, there are four kinds of problems.
By Dave Snowden — Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=33783436
You start by approaching each problem, categorizing the problem, and then applying the right tools to solve it. For example, tying your shoes seems like an obvious problem that requires best practices.
However, when we review the multiple contexts for tying shoes, for everything from dress shoes to hiking boots, it becomes clear that it is contextual, complicated. We need to use good practices.
In software delivery, applying the wrong mental model for the problem space is very common and negatively impacts outcomes throughout the value stream. An example of this is building an effective quality process where using the wrong mental model for software quality causes us to verify the wrong way.
Is Development Complicated?
It is common to categorize software development as “complicated” because we write code that implements the requirements. We can then verify quality by testing that the code matches the requirements. Using this mental model, the rational thing to do is to create a QA department to verify that the development teams are implementing the specifications correctly. We want to deliver relatively small batches to the QA team, but we do not want to deliver to the end-user until QA has ensured quality. This is using the tools from assembly line manufacturing where quality can be easily verified.
Let’s build a car. We establish an assembly line and begin construction. As individual parts are built, they are tested for adherence to spec using test fixtures and then assembled into larger units where other test fixtures run additional tests on the integrated components. Finally, the car is driven to make sure it all works together. So far, we have a model that resembles unit, integration, and end-to-end testing patterns. However, we’ve made a huge mistake. No one wants to buy our car.
We’ve built the wrong thing using the best quality practices we know, and yet it is still poor quality because it doesn’t fit the need.
This happens all of the time in software development because we over-simplify the problem at hand. We are not assembling and delivering code. It only appears that way on the surface. In Cynefin, if you over-simplify the problem, you will fall into the chaotic problem space and will need to react to recover. This feels very familiar.
Development is Complex
What is the right mental model? Using our car example, we skipped right past the most critical part, “what should we build? How should we build it?” The analogy of development as an assembly line is almost entirely wrong. If we have automated build and deploy systems, those are our assembly lines. They have automated the complicated good practices and obvious best practices. However, everything that happens before we submit code to our assembly line is complex and emergent. The correct analogy is the design studio.
We are not assembling a car from specs. We are developing a new car, getting feedback as we go, iterating towards the specifications we want to build, and designing the most efficient way to build it. We are making decisions about components. Do we need to design new brakes? Can we use off-the-shelf Brembos? What interface changes do we need to make to use those brakes on our new prototype? Etc.
In manufacturing, it is very expensive to change these decisions before we construct our assembly line. So, design precedes construction, we lock in as many decisions as we can after extensive research, and we hope we’ve developed a car people want.
In software, the economic forces are entirely different. The most expensive part isn’t building and running the assembly line. In fact, creating the build and deploy automation is the cheapest thing we do. The cost comes from the R&D work of deciding how to solve the problem and writing code to see if it solves it. To mitigate that cost, we need to design a series of feedback loops to answer several quality questions: is it stable, performant, secure, and fit for purpose?
We can build a quality process that can verify stability, performance, and security before we deliver anything. However, “fit for purpose” is subjective. For that, we need to build a quality process that continuously verifies that with users. We need to do this by delivering the smallest changes we can verify to reduce the cost of being wrong. We need to understand that we are not assembling identical cars. Every single delivery from our assembly line is a prototype that is different from the one before. We cannot use static test fixtures built by another team. In fact, that’s destructive to our quality process because waiting for someone else to build the fixtures means we will build bigger things to test and drive up the cost of being wrong. Our test fixtures can only verify that we are building what we think is needed. We must constantly adjust them as our understanding changes. We are prototyping fixtures and building things to fit the fixtures so that if it is fit for purpose we can replicate success.
Focusing on continuous integration and delivery can make us more efficient at delivering small batches of work. Designing our quality process to optimize for user feedback from ever smaller batches of work will make us more effective at delivering the right thing. To do this well, we need to stop over-simplifying what we do. Software development is complex and if we apply the wrong tools to this problem space…
we will fall into chaos. | https://medium.com/@bdfinst/5-minute-devops-choosing-the-right-tools-560b66030aaf | ['Bryan Finster'] | 2021-08-27 10:22:17.517000+00:00 | ['Testing', 'Software Development', 'Continuous Delivery', 'Continuous Integration', 'DevOps'] |
Romantic Happy Kiss Day 2022 Quotes, Wishes, Messages, Kiss Day Images, and Shayari Status | In this post, we bring you the love and sweet kissing day happy 2022, kissing day news melting, kissing day, kissing mood, and gorgeous kissing day 2022 pictures and wallpaper.
Valentine’s 2022’s Happy Birthday is a happy kiss day that falls on February 13 of each year. Kissing between husband and wife is a close relationship that only happens when two people have a deep love for each other.
We all know that kissing is an important part of any relationship and that it needs to be strong, loving, and lasting. On the occasion of kiss day, in addition to gifts and greetings, lovers look forward to the endless hugs and kisses from their partners. A successful kiss date is an example of pouring the value of a kiss to your boyfriend and girlfriend and telling him how you feel and how much you love him.
Who will go out with their partner during the kiss, for them, we have written the full description of love 2022 kiss, beautiful kiss day for girlfriend, kiss image, kiss day for bride and groom surprisingly. see Kiss Day Wishes 2022. Kiss day words and all these pictures you can send to your loved one and correct the long gap created between them. Let’s get started.
Kissing Day 2022 Declared, February 13
What is the main reason to celebrate a kiss date with your lover without sending him romantic kiss date quotes? No matter what day of the week is Valentine’s Day, no matter how great the gift you give your adoption or girlfriend, they will always be less perfect without the right message. But the day of the kiss is no different.
Kissing your partner’s lips is best during a kiss. However, if you are not in your best half for some reason or another, sending a 2022 kiss is the best thing you can do to make the day. In this regard, Quoteswishesmsg brings you a good selection of kissing day kisses and kisses words that will brighten the fire of love between the two of you and make the kisses day brighter and more -attractive. Let’s start with the Good Kiss Day 2022 selection.
Hold on to me tomorrow, and kiss me goodbye, and we will live forever. Sending your best wishes for a kiss day, baby!
Hey girlfriend! Whenever I think of kissing you, just thinking about it fills my whole life with absolute love. Happy Kiss Day!
Being able to kiss you so much during the next kiss day is the best gift I can receive from you. Happy Valentine’s Day!
Every time I kiss you, I feel like I kiss you for the first time. You are so pure, friend. Let’s make this kiss day the best kiss of all time!
It doesn’t matter how long we have been in a relationship, what matters is the time of love and true love we had at that time. Happy Kiss Day to my girlfriend!
The first kiss between a newlywed is the most loving and appreciative of them in a relationship. Happy Kiss Day 2022!
Hey baby, all I want to do on the day of kissing is to hug you, kiss you, hug you, touch you and make you fall in love all night. Happy Kiss Day 2022!
Kiss, until the two lovers melt. Love, until two souls become one. Happy Valentine’s Day boyfriend!
Kissing is just so disgusting and beautiful at the same time. You transmit the virus and love and kiss at the same time. Anyway, kiss day, girlfriend!
A man who learns to kiss well does not make a woman unhappy. Happy KISS day!
Read More | https://medium.com/@tmadly25/romantic-happy-kiss-day-2022-quotes-wishes-messages-kiss-day-images-and-shayari-status-389517141ac4 | [] | 2021-12-21 08:49:03.894000+00:00 | ['Valentines Day', 'Kiss Day'] |
Little Miss Diagnosis | Photo by freestocks.org on Unsplash
“Are there no stairs from the first floor?” a random lady asked me mockingly as I, an apparently able-bodied whippersnapper, got off the elevator at the second floor.
I just stared at her. “I had an injury recently,” I said. She turned red and shut up.
Truth be told, the injury had happened a while back, when a car rolled into the crosswalk and hit me while I was crossing. My left knee hasn’t been the same since. But I took the elevator not just because my knee was acting up, but because I was feeling fatigued.
I get it. People don’t like to admit that young people can have disabilities. “You’re young and strong,” people would tell me when I struggled to go upstairs or lift a heavy thing. “Don’t be lazy.”
“Wait ’til you’re old like me,” they say when I expressed how exhausted I was. They pretended to make it a joke about themselves, but it was really intended to dismiss my experience as invalid.
And the worst: “What do you have to worry about? You’re young and you’ve got a whole life ahead of you,” they say when I admitted I suffered from depression and anxiety.
Ah, the stereotypes of youth, which apparently is wasted on the young. Except believe me, I would have celebrated my teens and twenties a lot more had I not been so crippled by my conditions, which were regularly misdiagnosed in part due to these very stereotypes. Add in a lingering sexist idea that any pain experienced by a female is simply due to “anxiety” or “emotions,” and I started to lose count of the number of times a medical professional told me to just de-stress. “Focus on self-care,” one told me.
Well, self-care is hard when you can barely summon the strength to do anything!
On my fourth hospitalization, I got a physician who dug deeper. After extensive testing, his team discovered that I had a severe B12 deficiency because I couldn’t properly absorb it from food. They also shook their heads at previous doctors who’d continually tried putting me on SSRIs that clearly weren’t working. They put me on a mood stabilizer instead.
With this new regimen, my life was completely transformed. I had bounce in my step, my depression lifted away, and my panic attacks evaporated. I still have days where depression crops up, as it will, but overall, I feel, dare I say, happy and energetic most days!
And yet I grieve those lost years when doctors glanced me over and dismissed my concerns, ignored my insistence that something was wrong, and chalked it up to “stress and emotions.” When they kept trying variations of drugs that had awful side effects and prescribed “self-care” when I said the meds weren’t working. When they whispered to each other while I was lying in a hospital bed that I was “just a girl looking for attention.”
After all, young people can’t have disabilities, can they?
Rachel Wayne is a writer based in Gainesville, Florida, USA. She earned her Master’s in Visual Anthropology and Film Studies from the University of Florida; her thesis was on the relationship between the media and interpersonal violence, particularly bullying and sexual violence. She writes about society, culture, film, politics, feminism, and entrepreneurship.
Subscribe for more, or buy me a coffee. | https://rachelwayne.medium.com/little-miss-diagnosis-23c297843011 | ['Rachel Wayne'] | 2019-05-03 02:15:52.857000+00:00 | ['Mental Health', 'Health', 'Women'] |
Could CFIUS block a foreign investment in autonomous vehicles? | Could CFIUS block a foreign investment in autonomous vehicles?
Sooner or later, the powerful regulatory committee will face a decision on a significant foreign investment in the US autonomous vehicles sector. They could veto it. Colin McCormick Feb 12, 2019·5 min read
Originally published on Back\Line in March 2018. With the site’s demise in late 2018, I’m republishing it here, because it’s still relevant.
Image credit: WikiMedia Commons.
Earlier this month, technology markets witnessed the rare spectacle of the US government directly vetoing a major technology acquisition. The $117 billion hostile takeover of US-based semiconductor manufacturer Qualcomm by Singapore-based Broadcom was ordered stopped on the basis that the deal “threatens to impair the national security of the United States”.
While many headlines focused on the fact that the veto came from the pen of President Donald Trump — possibly reflecting his general antipathy to global trade — the truth is more complicated. Before the Presidential veto, the deal was under review by the Committee on Foreign Investment in the United States (CFIUS), a poorly understood but immensely powerful regulatory body of the US government that is empowered to block any foreign acquisition of US companies on the grounds of national security.
CFIUS had earlier ordered that Qualcomm postpone a shareholder meeting at which the board was likely to be taken over by Broadcom. It then recommended that the President block the deal. Since the acquisition would have impacted the future of standards and technology for 5G wireless systems, with implications for military communications, it’s clear why the Committee had concerns about national security impacts.
What CFIUS might mean for autonomous vehicles
All this raises the question: Could autonomous vehicles be next? After all, investment in the autonomous vehicles sector is exploding, totaling at least $80 billion. Many of the announced investments and partnerships cross national borders, such as Toyota’s acquisition of Jaybridge Robotics, Samsung’s acquisition of Harman, and Intel’s acquisition of Mobileye.
Also, a number of non-U.S. companies are launching major funds that are targeting autonomous vehicles and AI more broadly, including Baidu’s $1.5 billion “Apollo Fund” and SoftBank’s “Vision Fund” (which is now the majority shareholder in Uber and owns a substantial amount of Nvidia).
There are two ways that autonomous vehicles might fall under CFIUS’ broad “national security” jurisdiction. The first is that this jurisdiction includes homeland security, and there have been growing concerns about the potential of autonomous vehicles for terrorism — for example, a cyber-hacker who took control of an autonomous vehicle could potentially crash it into any number of targets.
The second is based on the idea that some “critical technologies” provide economic advantages that are so great they can impact national security, and the U.S. must protect its dominance in them. In its March 5 letter to Broadcom, CFIUS noted the importance of technical standards-setting in 5G wireless communications, and the important of U.S. leadership in key standards-setting bodies (presumably supporting U.S. technical dominance). As autonomous vehicle technology matures, many technical standards will be developed, and presumably CFIUS would like to see U.S. leadership in those standard-setting bodies as well.
It stands to reason that many foreign companies and funds will be interested in acquisitions in the US autonomous vehicle sector. We’ve already seen one closely related example, when China-based NavInfo and Tencent, and Singapore sovereign wealth fund GIC Pte attempted to buy a minority stake in mapping company HERE, and then dropped their bid after CFIUS appeared to signal it wouldn’t grant approval. Bigger deals may be on the horizon. So what should industry stakeholders know about the process?
How CFIUS reviews covered transactions
For a deal to be reviewed by CFIUS, it must be a “covered transaction”, which essentially means that a foreign company or person would gain control over a U.S. business. By itself, this isn’t necessarily a problem: the Committee can conclude through an informal initial review that there are no concerns. Failing this, it can launch a formal 30-day review, which in turn can trigger a more intensive 45-day “investigation”. If national security concerns are not resolved by this point, the Committee can send a recommendation to the President, who must announce a decision within 15 days.
For the President to veto the deal, there must be a finding that (a) a foreign person (or company) exercising control over a US business “might take action that threatens to impair the national security” of the US, and that (b) no other laws are appropriate and adequate to prevent this.
Another wrinkle is the fact that if the acquirer is or is controlled by a foreign government — such as a sovereign wealth fund — a formal investigation is mandatory. As sovereign wealth funds around the world increasingly pursue tech companies, it’s not hard to believe that autonomous vehicle companies are very much in their sights.
The membership of CFIUS was codified in The Foreign Investment and National Security Act of 2007 (FINSA). Chaired by the Secretary of the Treasury, the voting members the Secretaries of Commerce, Defense, Energy, Homeland Security, Justice, and State; the U.S. trade representative, and the Director of the Office of Science and Technology Policy. Several other federal offices are also involved, with the Director of National Intelligence and Secretary of Labor serving as non-voting members.
A growing role for CFIUS?
In CY 2015 (the last year for which data are available), CFIUS reviewed 143 transactions and proceeded to investigate 66 of them, with the largest share coming from China, Canada and the UK. This reflects a longer-term upward trend in the number of transactions reviewed since 2009.
In addition to this long-term trend, the Trump Administration has signaled its intention to be more aggressive about foreign acquisition of US technology. The 2017 National Security Strategy explicitly highlights self-driving cars as a critical emerging technology, and commits to strengthening CFIUS under the section titled “Promote and Protect the U.S. National Security Innovation Base”.
Congress has also expressed interest in pushing CFIUS to be more aggressive on blocking deals. The Foreign Investment Risk Review Modernization Act (FIRRMA), introduced last year by Senate Majority Whip John Cornyn with bipartisan support, would make several changes to the CFIUS process, including lowering the threshold from “control of” to merely a non-passive investment in a U.S. “critical technology company” or “critical infrastructure company” by a foreign person. It would also potentially expand CFIUS jurisdiction to certain joint ventures or licensing agreements between US businesses and foreign entities.
It’s not hard to imagine a scenario in the near future in which a sovereign wealth fund or other investment vehicle seeks to acquire a US autonomous vehicle company, or an important supplier (such as processors or lidar). If any one of the members of CFIUS has concerns that this poses a risk to national security — in its broadest definition — we could see the deal vetoed by the White House.
Less than a year ago there was speculation that CFIUS might be so badly understaffed in the new Administration it would barely be able to function. This month we’ve seen a very clear indication that it’s functioning robustly. As it appears set to play a growing role in US economic and foreign policy, sooner or later it could collide head-on with the booming autonomous vehicles industry. | https://medium.com/@cfmccormick/could-cfius-block-a-foreign-investment-in-autonomous-vehicles-f8e165da2b31 | ['Colin Mccormick'] | 2019-02-12 16:43:34.981000+00:00 | ['Trade', 'Cfius', 'Self Driving Cars', 'Autonomous Cars', 'Qualcomm'] |
December Motivation #1 | So it’s December…
How many steps will you take this month toward that goal?
You know which one I’m talking about…
the one that has been your dream for a while…
the one that’s so great it feels impossible…
Is it a dream job?
Is it a book you wanted to write?
Is it a degree you wanted to get?
Is it a language you wanted to learn?
Whatever it is, you can do it… step by step.
If you haven’t already, take that first step today…
and the second tomorrow, and the third on December third.
At this pace, you’ll be thirty-one steps closer at the start of the new year.
Keep track of your progress and use it as motivation.
So… What are you doing today? | https://medium.com/@authornikaparadis/december-motivation-1-55c68978feff | ['Nika Paradis'] | 2020-12-01 14:59:25.419000+00:00 | ['Motivation', 'Dreams', 'Goals', 'Resolutions', 'Self'] |
Another Trigger… | No
I remember saying “No”. Over and over again. “No” when you kissed me….you were my first kiss, did you know that? “No” when you lifted my nightgown. “No” when you told me I was beautiful. I didn’t feel beautiful, I felt scared, terrified, trapped. Saying “No” is all I remembered for 34 years. The rest I blocked out. The rest I hid so deep inside me I figured I’d never remember it.
I thought I was fine with that.
I wasn’t.
It rotted me from the inside.
I had no power, no control, no self-esteem, no sense of self. I was spinning out of control for 34 years. So many thoughts of self-hate…so many. How could I whine about such a thing, you didn’t REALLY hurt me, did you? I had no bruises, no black eyes, no broken bones. Did I ask for it? Did I, in some naive 16-year-old girl way, ask you to do that to me? Was there a part of me screaming “hold me down while I struggle and fuck me on the bathroom floor….RIGHT NOW, PLEASE” that only you were able to hear?
I was aging, my two daughters were maturing, and you were still there, always there. A dark stain on my heart, mind, and soul; keeping true love, true joy, true happiness from me. That stain was growing, making it darker, pulling me into a hole and further away from the light. This hole was so deep it seemed impossible to escape. And that word I uttered so many times to you, that word “No” was getting louder. I would close my eyes and “No” would get louder still. It haunted me everywhere I went. Running, sleeping, dreaming, having sex, drinking tea…it was always with me. You were always with me even though I blocked you out. Although I never really blocked you out, I was never in control, you were always in control. From the moment of that first kiss when I said “No” the first time, you owned me. I was yours because I had nowhere else to go, no one to talk with, no one to tell me it was going to be ok eventually. I only had the memory of “No”. And that word meaning nothing to you. So I covered the memory in darkness, not knowing then, that the darkness would eventually consume me. | https://medium.com/@jodirichard/another-trigger-98abf3ed26f5 | ['Jodi Richard'] | 2020-12-08 15:37:00.686000+00:00 | ['Rape', 'Emotional Triggers', 'Recovery', 'Rape Victims'] |
7 Reasons Why it is Important to Have an Online Poker Merchant Account | The poker business is considered one of the oldest forms of entertainment. It has grown out of its traditional way and has become a more elaborated version. It is a great way to make more money if you play the game right. This business is reliable and highly lucrative. To enter its market, you need to have a reliable poker merchant account. You may ask why it is important to have an online poker merchant account. We’ll talk about it in a bit. But before we do that, let’s first find out what an online poker merchant account is?
What is an Online Poker Merchant Account?
An online poker merchant account supports poker merchants by allowing them smooth and flexible payment transactions services to help them achieve high exponential growth. You should be aware that not all online poker processors offer services to businesses. Hence, it can be challenging to get a merchant account with exclusive benefits. This is where an online poker merchant account can help. Online gambling merchant accounts allow online casinos, sportsbooks, and skill games to increase revenue and ensure customer satisfaction.
Now we know what an online poker merchant account is, let’s talk about its importance.
Importance of Having an Online Poker Merchant Account
1. Makes it Easy for the Players to Pay You
The customer experience is built around the ability to accept payments and payout winners. Online gambling businesses can process payments quickly and efficiently as it is crucial to maximizing revenue by making it easy for customers to pay you for services. This highly improves customer experience and retains them for long.
2. Dealing with the Chargebacks
Online gambling merchants also have to deal with chargebacks from credit card transactions, this provides an easy and secure gateway for the gamers, and hence they feel entitled to join these poker merchant accounts. Though, transaction merchants are not as likely to be supported by banks and credit card companies. Customers of online casinos can request a chargeback for credit card transactions much more manageable than for debit cards. If you don’t know, Online gambling sites believe that debit cards are more convenient than credit cards for transactions.
3. Easy and Fast Transactions
When filling out the personal data form, prospective members of poker merchant accounts will be required to enter the account number and the bank name. This ensures a high level of security and authenticity. These details will later be used for deposit and withdrawal transactions. The personal information basically includes: bank account and bank numbers remain safe. Transactions like withdrawals or deposits take less than five minutes, so it is also easy.
4. Allows You to Offer Bonuses and Promos
Although every online casino site offers bonuses to attract new players, they may not provide the same bonus. Trusted online poker agents ensure that all customers receive the promised reward immediately without any conditions. Joining trusted sites can help customers quickly make lots of money. This is in addition to the winnings, which draw an extensive array of gamblers and gamers.
5. Variety of Games
With having a trusted online poker merchant account, you can give the customers a variety of poker games that are entirely different and varied. A reliable poker merchant account guarantees that the gamers do not get bored of these games quickly, especially since one of these games has a jackpot system that you can use to get bigger wins. This surely attracts a lot of customers.
6. Secured Payment Gateways
Having an online poker merchant account provides customers with secured payment gateways, which will help you get your payments processed soon! These accounts also help you in assisting with getting real-time money from your online poker and gamblers.
7. Monthly Credit Statements
Gamers or customers associated with these merchant accounts who are interested in reviewing their monthly credit history can have the advantage of it. Online poker merchants do give this facility to its gamers to view their statements for review purposes. This provides a high level of authentication.
You may also be able to enjoy many other benefits by opening an online poker merchant account. You may be most pleased with the savings on expenditure, elimination of duplicate entries, improved customer service quality, and simplicity in the accounting process. A similar account can also provide you with a lower cost of manual labour, which means that there is less blood, sweat, and tears involved. You may see that the benefits mentioned above are only the tip of the iceberg.
Conclusion
Although it is difficult to obtain an online gaming merchant account, if you follow the requirements, provide all required information, and work with an experienced corporate service provider like Kryptova, your chances of getting approved for your gaming merchant account are high. | https://medium.com/@harshit.verma/7-reasons-why-it-is-important-to-have-an-online-poker-merchant-account-c3b0d98644a3 | ['Harshit Verma'] | 2021-12-14 10:32:27.941000+00:00 | ['Gambling', 'Merchant Account', 'Gaming', 'Casino'] |
Did You Get The Memo? | Photo by Paolo Chiabrando on Unsplash
June 2017:
Terminal. It sounds like a stop on a journey, not a destination. Not THE destination. But that is exactly how the word is used in the medical field. It’s not a train terminal or a bus terminal. It’s the end of the line.
So often a disease is described as ‘terminal’ when it is anticipated to be the cause of someone’s eventual demise. But if you got the memo, you would realize we are all going to die of something. No one is getting out alive.
Life is a terminal event.
The thing we fight the most as a culture is the notion of our own mortality. But face it, our days are numbered. And if we succumb to a ‘terminal’ case of cancer or the flu, why do we have to clarify? Why can’t it just be cancer? or the flu? Eventually something, some one thing will be terminal.
I work with surgeons. Anyone who knows a surgeon knows they completely view death as the enemy. I wish I had a dollar for every time I heard a doctor say, “we saved them”. No — you didn’t. No one is ‘saved’. You bought them more time. That’s not a bad thing. Not a bad thing at all, don’t get me wrong.
Time is all we have.
But this blog isn’t about dying, because we are all on the road to our deaths the moment we are born. It is our journey. It is why we are here, to fill the dash. The dash on our tombstone between the day of birth and day of death with all the stuff of our lives.
If we could just remember we are running out of tracks, we might use our time differently. We might hug our kids more, be more patient with the people we love, give more generously, show more of our Real Selves, care less about the opinions of others, fight more fiercely for the things which matter to us, care less about the stuff that doesn’t, let go of more, hold on to more. As the saying goes — buy the shoes, eat the cake, take the trip.
There was a country song a while back that urged listeners to live life “like you were dying”. Guess what?
Namaste.
Addendum: I post this as a response to the complete insanity I am witnessing as a response to Covid-19. We will all die of something. Do not let fear of death steal your life. Don’t hoard. Wash your hands. Live your life. | https://medium.com/recycled/did-you-get-the-memo-9eaa8c139b23 | ['Ann Litts'] | 2020-03-15 12:58:04.680000+00:00 | ['Life Lessons', 'Health', 'Self-awareness', 'Death And Dying', 'Illness'] |
Why I Rent (And You Should, Too) | Money
The standard arguments against renting follow the lines of “you’re just throwing your money away” or “you’re paying someone else’s debt”. On the surface that may ring true, but if you look just a little deeper, those arguments don’t hold any water.
The main financial benefit of renting is that there is a cap to your housing expenses, at least for the term of your lease. (We’ll get to rent increases later in the article.)
Down payment
Per Investopedia, “According to the Case-Shiller Housing Index, the average annualized rate of return for housing increased 3.7% between 1928 and 2013. Stocks returned 9.5% annualized during the same time.”
By comparison, the average 10-year Treasury bond (risk-free rate) for the same time period was 4.9%.
Let’s do the math and look at three different scenarios.
You buy a $316,700 house with a 4.9% interest rate and a 3.7% price appreciation. You invest a 5% down payment at a 9.5% market return. You invest a 20% down payment at a 9.5% market return.
Other assumptions:
In scenario #1, you spend an additional 25% of your mortgage on home maintenance and improvement.
In scenarios #2 and #3, you rent instead of buy, with the average rent over 30 years equal to the 30-year mortgage amount. (Rents are lower in the beginning but raise over time.)
Scenario #1:
House value after 30 years: $941,900
Mortgage payments, plus maintenance: $756,365
Financial Benefit: $185,535 (if you sell your house)
Scenario #2:
Market Return after 30 years: $241,010
Rent Cost: $605,095
Financial Benefit: -$364,085
Scenario #3:
Market Return after 30 years: $964,055
Rent Cost: $605,095
Financial Benefit: $358,960 (in a liquid asset)
Since most people can’t afford a 20% down payment, you might be tempted to compare Scenario #1 to Scenario #2, which would lead to the conclusion that homeownership is the way to go.
That’s what I did. Twice.
Number of times it worked out? Once.
The problem is that you only make money when you sell your house, and then you need to buy another house to live in, which will eat up a lot of your profit.
Sure, after 30 years, you’ll probably downsize due to the kids moving out, but even a 2-bedroom Craftsman will set you back, even if it isn’t a money pit.
And then what, pay in cash (with nothing invested in stocks?) or take out a mortgage (and have to pay for another 30 years?) Either way, you might as well rent.
At least in Scenario #2, you have a quarter-million invested that will keep on making money.
Mortgage Payment
The term “mortgage” normally means the principal and interest to pay down the loan you have with the bank. What most people don’t understand is that those are just two parts of the full PITI payment.
Principal — the amount used to pay off the loan
— the amount used to pay off the loan Interest — the cost of obtaining the loan in the first place
— the cost of obtaining the loan in the first place Taxes — including property tax
— including property tax Insurance — homeowner’s insurance and PMI
In most cases, the taxes and interest are added to the base mortgage payment, with the extra being put into an escrow account. The taxes and insurance are then paid out of that account by the bank, without you having to write a check.
This might sound great, as you won’t have a chance to spend the money before budgeting for these payments. The flip side is that the full PITI payment is not normally advertised by realtors when showcasing a home.
Additionally, if you can’t afford a 20% down payment, you will get stuck paying an extra type of insurance called private mortgage insurance, or PMI.
This covers the cost of insuring your loan against default due to the higher risk the bank takes on in lending you the money since you can’t cover the full 20% down payment.
Remember, the banks never lose.
Let’s go back to our example house. The initial payment I used was $1680.81, which included merely principal and interest. Adding property taxes, homeowner’s insurance and private mortgage insurance means an additional $475 to the monthly payment.
Here are the updated 30-year totals.
Scenario #1 (Updated):
House value after 30 years: $941,900
Mortgage payments, plus maintenance, plus taxes/insurance: $927,680
Financial Benefit: $14,220
As you can see, taking the entire monthly payment into account changes things substantially. Yes, you can get the PMI taken off after you reach 20% equity in your house, but that will have little impact as it’s the smallest of the three additional amounts.
“What about annual rental increases?”, you might ask. Well, that is one of the few downsides of renting. Lease renewals will most likely have a slight increase, but then again, so does everything else we buy (thanks, inflation).
You can hedge against rent increases by signing the longest possible lease, locking in your rate. But in the end, the cost will rise. However, as we’ll see below, this cost will be negligible when compared to the cost of home maintenance.
Maintenance Expenses
Just like your car, a house has both regular and unexpected maintenance, with older homes requiring more of both.
The standard advice is to save 1% of your income towards maintenance, but that is for a brand-new home where everything works. If your home is 20 or more years old, then you need to bump that up to 4%, or even more.
Add to that this little nugget from NerdWallet.
About 3 in 10 (31%) homeowners say they don’t have money set aside for home repairs and improvements — a matter of concern, considering 44% of those who have purchased a home experienced their first unexpected repair within the first year after closing.
You know what you don’t need to worry about during your first year of renting? Unexpected repairs and improvements! All you need to do is call up the landlord and put in a work ticket.
And that’s just for the things that make your homework as it should. We haven’t even started about decorating or other niceties, aka home improvement.
Homeowners are spending more on home improvement projects than home maintenance projects. For every $1 spent on home maintenance, homeowners are spending an average of $5 on home improvements. - State of Home Spending Report
From that same report, homeowners report an average of 6.7 home improvement projects. If you’re a homeowner, think about all the little things that you just haven’t had the time to finish. That new door handle. Painting the basement. Upgrading the backsplash. Your home is in a constant state of flux.
Think about all the systems that you have in your house.
Building Envelope : Walls, roof, cement slab, fireplace and chimney, gutters.
: Walls, roof, cement slab, fireplace and chimney, gutters. Plumbing : Toilets, showers/tubs, pipes, septic tank/sewer connection, sinks.
: Toilets, showers/tubs, pipes, septic tank/sewer connection, sinks. Appliances : Washer/dryer, dishwasher, stove/oven, water softener, refrigerator, water heater.
: Washer/dryer, dishwasher, stove/oven, water softener, refrigerator, water heater. Mechanical: Furnace and air conditioning units, ducts, vents.
All of these things cost money as a homeowner, and all of them have a finite lifespan. No wonder Lowes and Home Depot have seen their stock price increase by about 330% and 450%, respectively, since 2005.
Moving Expenses
The average first-time home buyers only stay in a house for 12 years. If you do a standard amortization, you have barely paid off 25% of the principal of the loan when you are 40% into the loan term.
Then you’re going to up and move to another house, probably a bigger one. So whatever equity you have in your current house will be spent on the next one’s down payment, and that’s if you get your asking price. The market may have tanked, and you’ll get much less.
Then, once you move, you need to spend more money to furnish that bigger house, utilities are higher, taxes are higher, and so on.
Money isn’t the only cost that homes come with. The second element is time, something much more precious. | https://medium.com/makingofamillionaire/why-i-rent-and-you-should-too-46e36124ecca | ['Adam Parsons'] | 2020-01-15 18:22:09.952000+00:00 | ['Money', 'Investing', 'Economics', 'Personal Finance', 'Millennials'] |
Vivo X60 Pro first device to come with 5nm Exynos 1080 | Vivo X60 Pro is the first device to come with the latest 5nm Exynos 1080 chipset, which is 5G processor, and a competitor to snapdragon 865…
But if you ask me Snapdragon 865 is 7nm processor, but Exynos 1080 is 5nm, which is more efficient, but it lacks in performance, as the benchmarks, suggests…
But still Exynos 1080 is pretty capable processor, and can handle any task at ease, and Vivo X60 Pro is the first device to feature it, while the X60 Pro plus will be having latest snapdragon 888 SoC.
The single core GeekBench score is 888, and multi core score is 3244 which is decent…
Other highlighted specs include Gimble rear camera, and 90hz display, and the processor itself, the pricing has not yet been out but…stay tuned..
32mp Front Facing camera
48mp+13mp+8mp+2mp Rear camera
33W Fast charge support | https://medium.com/@techberg/vivo-x60-pro-first-device-to-come-with-5nm-exynos-1080-3c9b98d7c7eb | [] | 2020-12-17 19:59:51.662000+00:00 | ['Tech', 'Samsung', 'Vivo', 'Technews'] |
Can machines be artists? | Creators explore artificial intelligence’s potential to innovate.
F L O W by Karl Sims | MIT CSAIL
Pictures have been created with artificial intelligence programs that, even for experts in Baroque Art, are indistinguishable from an authentic Rembrandt. The Daddy’s car song has been composed, which sounds like the Beatles from the Revolver era. It is a matter of time before a compelling story or novel is written. And the debate is open on whether one-day writing decent poetry will ever be achieved. But what no artificial intelligence program has been able to do is break rules, create something new, and know that what it has created makes sense, which is precisely what Rembrandt, the Beatles, or Pollock did.
Thousands of people working in the emerging field of computational creativity aim to build programs capable of human-equivalent creativity. The historian Yuval Noah Harari predicts, in the best seller 21 lessons for the 21st century (2018), states that “in the long term, algorithms can learn to compose entire songs by playing with human emotions. (…) If the art is to inspire (or manipulate) emotions, few or no human musicians will be able to compete with such an algorithm ”. For Harari, “in the long run no job will be safe from automation; Even artists must be forewarned.”
The strong point
Deep learning systems create compelling works if they find recognizable patterns
Instead, astrophysicist Martin Rees predicts, in the no less recommendable On the future (Princeton University Press, 2018), that with automation “there will be an erosion of routine work […] and a revival of arts and crafts. We have seen famous chefs appear. There will be more possibilities for other arts and more respect for their most talented exponents ”.
For now, one of the most emblematic examples of computational creativity is The Next Rembrandt project, carried out between 2015 and 2016 on behalf of ING and with the support of Microsoft. An artificial intelligence program was instructed with multiple variables of Rembrandt’s paintings: colors in use, percentage of men and women, facial proportions, gaze directions, even paint thickness in the different areas of each painting … After the Learning, the program created its own Rembrandt, which was painted with a 3D printer and presented at the Cannes Lions International Festival of Creativity, where it was awarded.
The main limitation
Artificial intelligence programs are unable to create new styles because they do not know if what they are doing is right.
The key is that deep learning systems work as long as there are patterns to detect. The visual arts lend themselves especially to the identification of patterns, which is what allows us to recognize at first glance whether a painting is (or appears to be) a Rembrandt.
Music is also provided because all the components of rhythm, harmony, melody, and timbre can be mathematically described to feed algorithms. This is what allowed François Pachet to create the first songs entirely composed with AI when he worked at the Sony Computer Science Laboratory in Paris — in 2017 he joined Spotify. The best known of these songs is Daddy’s car, which imitates the style of the Beatles, although the expert fan will notice that it is a fake, because no trace of Lennon or McCartney can be recognized in the composition; It sounds vaguely Harrison, but it doesn’t sound convincing either.
Innovative in their way
Machines find ingenious solutions if they have clear instructions on what their objective is.
David Cope, with his project EMI (Experiments in Musical Intelligence), has created compositions that imitate the style of composers such as Bach, Mozart, or Vivaldi based on pattern recognition.
On the contrary, the literature resists artificial intelligence because all text is based on misunderstandings that escape programming. And within the literature, poetry is especially refractory because rhyme and rhythm can be imitated, but everything that appeals to the imagination is lost.
Still, the main limitation of creating pattern recognition is that it imitates, not innovates. Or, more exactly, it innovates in the artificial intelligence section, not in the artistic expression one. Rembrandt’s painting, despite being a technical milestone, does not contribute anything new to the painter’s work, since it is implicit in his previous work. Therefore, it is expendable. Nor does Daddy’s car add anything to the Beatles legacy or the EMI project to classical music.
Artwork created with artificial intelligence by graphic artist Karl Sims
To go a step beyond pattern recognition, graphic artist Karl Sims has chosen to generate random abstract images from mathematical formulas and ask viewers to evaluate them. This allows an artificial intelligence system to evolve the images according to the valuations expressed by the observers. Similar to a human artist, the system progresses in its creations.
But in this case, too, the process of creating the machine is not equivalent to that of a person. For an AI-based learning system to progress, we must be programmed with clear instructions on the goals to be achieved. In this way, it can evaluate the results and correct itself. It’s not that it can’t be creative, but it is functional goal-oriented creativity.
Good at painting and music
In visual and sound arts, artificial intelligence has achieved plausible aesthetic results
This is demonstrated by the example of AlphaGo, the artificial intelligence program developed by Google DeepMind to play Go. It demonstrated its talent by defeating Lee Sedol, the best human player in the world. Later AlphaGo was defeated by AlphaGo Zero, another deep learning program capable of even more ingenious plays. But for the programs applied to games, the objective is obvious. It is simply about winning games.
Google DeepMind — AlphaGo
For programs applied to artistic creation, the goal is more ethereal. With Karl Sims images, the feedback that tells the machine if it’s on the right track is based on audience behavior. Thus, what makes their creations progress is external evaluation, like that of the performer who plays to please, rather than an internal impulse, like that of the artist who innovates without making concessions.
Bad in literature
Experiences in narrative and poetry have so far been disappointing
The essential problem is that the artificial intelligence program is unable to validate by itself what it generates. Unlike Picasso or Miles Davis, who knew that their innovations were different from any previous work and that they were also transcendent, artificial intelligence has no way of knowing if their creations are breakthrough advances or irrelevant fortunes.
In the end, the answer to the question of whether machines can be artists depends on how each one defines art. If defined from the perspective of viewers, readers, or listeners, artificial intelligence can be artistic to the extent that it can generate emotions and aesthetic sensations.
We can not say that artificial intelligence is not creative. What happens is that it is not creative in the same way that a person is.
If defined from the perspective of creators who are on the frontier of innovation, experimenting with sounds, colors, words, and textures, and expanding the sensory and emotional heritage of humanity, artificial intelligence does not reach the category of artist. This just because it is unable to break the rules and create new styles.
Still, how many artists are capable of inventing new styles? And we do not say that the others are not creative, do we? Creativity is a matter of degree, of more or less. I would not say that artificial intelligence is not creative. What happens is that she is not creative in the same way that a person is. But it is not bad where it has reached so far. | https://medium.com/thesequence/can-machines-be-artists-4d13b35ab05c | ['Deborah M.'] | 2020-12-04 03:35:07.572000+00:00 | ['Art', 'Machine Learning', 'Artificial Intelligence', 'Technology', 'AI'] |
CONTENT GORILLA 2.0 | For a content writer, the quality of content is essential, but it is also very precious. It is a lengthy procedure to collect the content, arrange that content in a meaningful way, and proofread that article and making it human-readable and mistake-free. Now it is no more challenging to create high-quality content for us in a short time without losing all of our energy in researching and proofreading the content.
Because Content Gorilla 2.0 is a fantastic software introduced for the ease of content writers. This software solved all the problems of the content writers. It allows you to create compelling content so that you can get a high rank on google.
Other Products Reviews
Creator of Content Gorilla 2.0
Neil Napier et al. and his partner Ijlal Ahmed created this Content Gorilla 2.0. Neil Napier made this software to write a compelling and easily readable article.
Overview
Here is a short and informative overview of this fantastic software:
Inventor
Neil Napier and his partner
Product name
Content gorilla 2.0
Product type
software
Warranty
30 days money-back guarantee
Release date
December 05–2020
Release time
11:00 EST
BUY NOW Content Gorilla 2.0 + BONUSES $500 Plus
Features of content gorilla 2.0
The features are described to understand the usage and importance of this software.
This software can search and generate content in 105 languages. Content gorilla 2.0 has resonant search filters that can change the stings of content.
2. It converts any YouTube video into a new blog post in a few minutes. This content gorilla 2.0 itself correct the grammar mistakes in the extracted content without any external extension.
3. It has an in-Built content spinner that creates unique, readable content every time. It automatically adds images related to your keyword. You can also add images as you want by Just dragging and drop them. You can edit such photos also.
4. It can convert the fetch Tags from the video and rewrite the article according to your needs by simply pasting it.
5. Unlike other software, it can punctuate the content to make it perfect because others don’t have this feature. It also has a post archive.
6. It allows you to make a Custom Image and internal linking with the related post you have already posted on your site.
7. You can easily make Bulk Posts on it and then post them at their required time.
8. A feature named Multi-publish allows you to publish a single content in multiple blog posts. You can post numerous videos into the content on WordPress at the same time.
Pros and cons of content gorilla 2.0
Pros
Extremely Beginner-friendly
Step-by-step training
All-in-one solution
No further investments
30 Day Money Back Guarantee
All-In-One Content Creation Suite.
Multi publishing at a time with only one click
Cons:
Still there are no cons of this amazing software.
Conclusion
Content gorilla 2.0 is the best software for blog writers, entrepreneurs, content writers, and almost every kind of writing becomes easy and efficient with this incredible software.
This software saves precious time, vanishes the grammatical mistakes, provides useful content for your website, and rank you up on google.
BUY NOW Content Gorilla 2.0 + BONUSES $500 Plus | https://medium.com/@xtechnologyy/content-gorilla-2-0-c1e6f1e4e47d | [] | 2020-12-23 14:29:14.176000+00:00 | ['Content Gorilla Download', 'Content Gorilla Bonus', 'Content Gorilla Review', 'Content Writing', 'Content Gorilla 2 Review'] |
Are Skins the Future of Digital Goods? | Imagine a way to totally change your look for a couple of hours. Would you pay for it? Avid gamers do it all the time — they change their player models, weapons, and items they use in the virtual world. They are ready to pay big money for skins and other digital goods.
The digital goods market is following the trend of the general digital entertainment industry and is growing dynamically. In 2018, that market was valued at 30 billion dollars and is estimated to grow to $50 billion by 2022. This shouldn’t come as a surprise in a world where last March, the Steam platform has observed 1.15 million concurrent users in one of the most popular multiplayer online games of the last decade, Counter-Strike: Global Offensive.
The growing popularity of gaming has led to a whole related ecosystem developing around it, and with it came the investment opportunities. Investment funds taking more and more interest in the gaming sector. Funds such as Tar Heel Capital Pathfinder invest in ventures catering to different branches of the gaming landscape: cloud gaming and platforming service, esports and betting, digital goods trading, and esports data solutions and analysis.
The versatility and range of those show the potential of the gaming-related market. Yet, to benefit from it, one needs to understand the basic rules of its functioning.
Digital goods: what does that actually mean?
“Digital goods” is an umbrella term for a plethora of tradable commodities functioning within gaming. They differ greatly in their specificity, environment, implementation, monetisation, and distribution. In a gaming environment, they include virtual currencies (gold, diamonds, coins, credits — all those that work as a replacement for money) and micro-transactions (in-game trades unrelated to the game’s retail price).
One of the most interesting types of such “goods” are skins or external visuals for weapons and characters that can be changed like clothing. As cosmetic items, they don’t increase your performance, but let players express themselves virtually. They are also an example of the most complete digital goods ecosystem, as in many cases the trading is enabled not only between the player and the developer but also between players themselves.
The skin ecosystem: how does it work, and who pays for the skins?
Valve Software provides a great example of that. Within Steam, which is their distribution platform, they’ve launched a platform to facilitate the trading of CS:GO items between players.
The skins are created by Valve’s internal designers or by community creators who share their work within the Steam Workshop. Players vote for their favorite skins — the most popular ones often end up being introduced into the game as a part of a skin collection, and the creators get a cut of every sale.
Skins are injected into the market with the use of cases — for a player to open such a case, they need to acquire a key for $2.50. In exchange, they acquire a random skin from those assigned to that particular case. However, some of them are rarer than others. You can drop a common skin worth a couple of cents or an exceedingly rare knife worth thousands of dollars.
The skins acquired this way can be traded on the Steam Community Market. Their worth is based upon the probability of unboxing one (calculated on the statistically average amount of keys needed for a chance to draw one from the pool) and its subjective popularity among players. The price range is enormous, starting at a couple of cents for a less interesting skin for a less popular gun up to literally thousands of dollars for a rare and desired mesh.
The rarity of some items leads to some shocking prices. The highest ever cash transaction is rumoured to be the $61,000 sale of a Factory New AWP | Dragon Lore with the virtual autograph of Tyler “Skadoodle” Latham, sold right after he became the MVP of the 2018 Boston Major.
Where does the need to buy these virtual goods come from? A majority of players spend most of their time within the game, not outside of it, and the virtual world replicates the patterns of the real world. Being in possession of a rare skin for a knife or an AK-47 worth around $1200, became a matter of status and prestige.
How does that benefit the creators of the game and the system they’ve created? Valve earns the aforementioned $2.50 needed to open each case, plus a 15% share from each transaction on the Steam Community Market. Community creators whose skins enter the market also earn their share from these transactions.
Okay, I’ve got myself a skin, what next?
Steam locks all the funds used for transactions within their system — once you charge your Steam Wallet with real money, you can use those funds only to buy games on the official store or cosmetic items on the community market. Enter third-party platforms, which enable trading goods beyond the official market — subsequently allowing you to cash out real funds for your items. Some of them, like Skinwallet, allow you to instantly sell and withdraw your money to a portfolio of eWallets and online payment providers; others let you trade freely and transfer funds afterwards. There are also exchange markets for cross-platform digital currencies and even third-party auction houses. Sale offers for items, skins, or even volumes of game currencies can be found on popular online auction services.
One can also invest in skins, as they bear characteristics resembling stable cryptocurrencies. Their value is connected with both the stable presence of CS:GO in the pro gaming world and the technical limitations and safety measures of skin trading. To name one, an obtained item can be traded away only after several days of a “trade hold”, preventing automated laundering.
Esports and digital goods
The value of digital goods is also influenced by esports, meaning the competitive side of gaming. The price of a skin rises if it’s used by a popular professional player — much like real-world athletes endorsing products; hence “skin influencers”.
Higher skin value can also be caused by virtual stickers signed by teams and players, obtained only during official CS:GO tournaments, commonly called Majors. A weapon skin with such stickers is treated as a trophy by many, which also makes such skins a subject to lucrative investment. There’s a good example of that. The prices of stickers signed by the all-Polish Virtus.Pro squad skyrocketed after the organization terminated their contracts. This caused the stickers to rise 40% in value. | https://medium.com/skinwallet/are-skins-the-future-of-digital-goods-20b525cec59b | [] | 2020-06-10 15:43:00.478000+00:00 | ['Gaming', 'Venture Capital', 'Investing', 'Digital Goods', 'Videogames'] |
And Now Our Watch is Ending… | True fans knew that Game of Thrones wouldn’t have a happy ending…
It was only nine episodes into HBO’s big-budget, ambitious Game of Thrones that viewers learned never to expect a happy ending. In “Baelor” (the series’ ninth episode), the show’s noble protagonist Eddard “Ned” Stark was shockingly murdered. Now, of course, this was no surprise to the millions who pored over George R. R. Martin’s dense and sprawling book series A Song of Ice and Fire, from which Game of Thrones was adapted. But to those who were learning about the fictional land of Westeros from the series, it was a jarring moment that left no doubt that this was not a land where one should expect good to prevail, right to be rewarded, or character arcs to end happily.
The show doubled down on this over the 67 episodes that aired across seven seasons (2011 to 2017). In that time, numerous beloved heroes endured unspeakable horrors or met grisly fates (the Red Wedding still haunts me) and unfathomably cruel characters reaped great rewards (here’s looking at you, Cersei). Although many found the show excessively bleak and brutal, it became a massive cultural phenomenon in part because of its unflinching and risk-taking narrative (in addition to its stunning production values, captivating characters, and impeccable cast).
By the time the final six episodes that comprise the abbreviated eighth and final season of Game of Thrones began in April, no real fan could have reasonably expected the show to have a happy ending…
…but they did expect it to have a logical ending
After four highly revered seasons, criticisms began mounting about the writing on Game of Thrones as it entered its fifth season. At this point, most of the main plot lines had moved beyond the narrative contained in the novels, leaving it largely up to the team of writers to decide where to take the characters. Some fans were never going to be satisfied, as demonstrated by how they obsessively analyzed each plot twist and character decision and interrogated whether they unfolded as Martin had intended. Some were willing to give the team of writers the benefit of the doubt, but bemoaned increasingly sloppy writing marked by heavy foreshadowing, convoluted plot twists, nonsensical timelines, and unearned character pivots. Some loved every moment of the spectacle and simply didn’t notice or care that certain aspects of the show might be going off the rails. Honestly, I was in the third camp until the start of this season.
The final season of Game of Thrones has made headlines for its blockbuster ratings, its shocking twists, and the outrage it has elicited from many fans (e.g., the abrupt routing of the White Walkers, the infamous Starbucks cup). Countless articles have been written about its missteps, ranging from hysterical hot takes to deeply insightful critiques. I won’t recap all of the points that have been made here, but I will highlight what I believe are the three problems dragging down the show’s final season and what Sunday’s finale can do to redeem it.
Maisie Williams as Arya Stark (Image Copyright: HBO/Warner Bros.)
#1: A shift from character-driven to climax-driven storytelling has fundamentally altered the show
As was well-articulated by an insightful fan on Twitter, a significant problem with the show’s final run of episodes is that it has shifted from a narrative driven by character evolution to one driven by wrapping up the action.
For better or worse, Game of Thrones is a show that has always taken its time. Nearly every fan has their vote for most infuriating meandering subplot, be it Arya’s time becoming an assassin in Braavos, Daenerys being marooned in Mereen, or anything involving Bran (the latter has my vote). As much as they may have outstayed their welcome, each of these plot lines was about developing a character arc or a theme.
As the show rushed into its final batch of episodes, the pace picked up rapidly and it became apparent that the whole approach to writing changed. Rather than leisurely following the characters around Westeros, the writers clearly decided how it was going to end and then contorted each character’s arc to fit that ending.
Don’t get me wrong. I have loved seeing the plot threads tie together. And, indeed, many of the long-awaited character meetings and reunions have been handled beautifully. But the problem here is that the characters are now changing in ways that are either too abrupt or utterly illogical. And none has been worse than Daenerys going mad.
#2: The final episodes hinge on an unearned plot twist
Daenerys Targaryen has had arguably the show’s richest plot arc. Born to the Mad King who was overthrown, she was sold into slavery, only to discover and embrace what she believed to be her divine right. From naif to warrior, the “Breaker of Chains” and “Mother of Dragons” set off toward world domination. With that description, it shouldn’t be that shocking that she went utterly bonkers this season. But that description doesn’t tell the whole story. For dozens of episodes the writers painstakingly showed us that Daenerys had developed a strong moral center and a passionate desire to bring justice to Westeros. She moved past long-held grudges to embrace wise and noble men from enemy families (Tyrion and Jon Snow) and allowed herself to be vulnerable and find love.
So, what was the point of softening Daenerys up for half the show’s run only to have her go on an unprovoked terror spree including the slaughter of tens of thousands of innocents when she realized that most of Westeros wasn’t rooting for her (and why was she surprised by this)? Was it to show that she couldn’t escape the madness of her family? Was it to make commentary about the chaos and incoherence of war (as if the show needed to show that once again)? Was it to give a shock to viewers who found the idea of her ending up on the Iron Throne all too predictable? Or was it because the writers have no idea what to do with their female characters?
#3: Game of Thrones evolved into a show about the strength of intelligent women and now the writers have no idea how to give them a proper ending
For a show that has long been criticized for its graphic sex scenes, gratuitous nudity, and reliance on rape as a plot device, it came as a surprise to many that the majority of main characters left standing at the end were women. Not just women, but strong, powerful, intelligent, complicated women. Survivors. Warriors. Leaders. Assassins. Tyrants. By the time the show entered the second half of its run, it had one of the most fascinating stables of strong female characters on contemporary television. This is precisely why the ways in which their character arcs are ending are so disappointing. Let’s take a look at what’s happened to the women on Game of Thrones this season.
The mystical Melisandre re-emerges after a long absence to save the day with her magical powers and then quietly wanders off to die alone (but not before the writers remind us that her hotness was only the product of wizardry — she was actually ugly and old).
After rising from slave to right hand of the Queen, Missandei is put back into slavery and beheaded. (The fact that she is the only woman of color on the show and dies in chains is a particularly ugly choice by the writing staff.)
Gwendoline Christie as Brienne of Tarth (Image Copyright: HBO/Warner Bros.)
The stoic and loyal Brienne of Tarth was knighted by her comrades in a beautiful scene early this season, only to lose her virginity to Jaime and then get her heart broken, leading to pleading and ugly crying.
Speaking of the loss of virginity, loner assassin Arya spent an episode seducing Gendry so she could have sex before the big war.
Sansa, who arguably had the most impressive progression of any character on the show regardless of gender, is still rock solid, but her power was sharply undercut by numerous references to the fact that she’s only strong now because of the rape and torture she was subjected to by men.
And then there’s Cersei. The show’s most fabulously evil, utterly spellbinding character has spent the entire show’s run overcoming the odds through sheer wit, tenacity, and brutality and has never once let sentiment distract her. So how did she die? Crushed to death by bricks while relying on her lover to help her escape as she sobbed “I don’t want to die” and clutched the unborn baby in her womb.
As described above, Daenerys went mad for no reason. You know, as women do.
Oh, and who knows if we’ll ever see Yara again. But judging by how this season is going, if we do see her it will probably be in a reprise of her salacious lesbian sex scene.
It is by no means wrong to have female characters fall in love, have sex, care about their unborn child, or meet untimely deaths, but the cumulative effect of so many egregiously stereotypical endings for its female characters is deeply disappointing.
Lena Headey as Cersei Lannister (Image Copyright: HBO/Warner Bros.)
What the finale needs to do to salvage things
There is absolutely no way Game of Thrones can right all the wrongs of the past few episodes. Nor can it tie up every loose end — far more significant characters and plot lines remain unresolved than could ever be dealt with in 80 minutes. But hopefully it will do many things right. My hope is that the finale …
Resists the urge to spend time setting up the spin-offs. Sure, we know HBO is investing a lot in turning Game of Thrones into a franchise, but — please — no distracting allusions to what will come. Provides some logic or nuance to Daenerys’ descent. The writers don’t have to try and rehabilitate her character at this point (that ship has sailed off to the Iron Islands) but there are things they can do to bring her plot arc to a less disastrous close than it looks destined for. Leaves the Three-Eyed Raven mythology alone. There are some things I would rather see unexplained than have to see the precious remaining moments of the show focus on. Chief among these is what the hell the point of the Three Eyed Raven arc was. Sends Tyrion out with a bang. As his three Emmys (and probably a fourth come September) indicate, Peter Dinklage’s sharp-tongued Tyrion Lannister was the soul of the series from start to finish. He deserves a grand finale; at the very least, something more satisfying than his brother and sister received. Gives a noble end to Sansa and Brienne. The disappointing endings befalling the women of Westeros could be partially rectified by Sansa and Brienne’s arcs wrapping up in an empowering, thoughtful way. Doesn’t end too happily or cleanly. My all-too-possible nightmare scenario: There will be a blood bath that leaves Daenerys, Jon, and Arya dead. Sansa and Tyrion decide to destroy the the terror-breeding Iron Throne once and for all and let the seven kingdoms self govern (Sansa in the North, Tyrion in the remains of Kings Landing, Yara in the Iron Islands, etc.) Peace is restored. Spring arrives. As pretty as that might be, it’s not true to Game of Thrones. Admittedly, my dream of seeing Cersei outmaneuver everyone at the last minute to remain on the Iron Throne is now impossible, so whatever dark twist they (hopefully) conjure up will just have to take me by surprise. | https://medium.com/rants-and-raves/and-now-our-watch-is-ending-be1c6824c0e9 | ['Richard Lebeau'] | 2019-05-21 18:24:37.940000+00:00 | ['Culture', 'Television', 'Game of Thrones', 'Feminism', 'Media'] |
KR- | KR-
Thanks so much for the feedback. What I’m reflecting on from your comments is remembering how much fun I had writing this article.
It’s a reminder for me to have fun while writing. I remember my energy being very high when I wrote this and will now be more aware of my energy as I write.
I am also grateful this article won today’s lottery.
Please feel free to keep all the articles I submit for the next day’s or week’s contest. If I feel the need to pull an article back, I know how to do that.
Thanks again for creating this publication. It feels to me like a writers’ co-op, and I really like that.
In Gratitude,
Bobby- | https://medium.com/@bobby-kountz/kr-cccd0c163ebb | [] | 2020-12-08 04:30:24.978000+00:00 | ['Fun', 'Energy', 'Collaboration', 'Feedback', 'Gratitude'] |
Sex Under Communism | I’d like to broach a topic which I don’t see often talked about, which is sex under communism. Before I start though, I think it is important to detail why this specific topic. Like many I tend to spend a lot of time online, usually in communist circles on social media. If you are in these circles long enough you will eventually come across many silly discussions where many espouse wrong or ill-thought ideas. One of these has to do with relationships and sex work. It should be obvious that the sex-industry is one that is inherently exploitative. Obviously this kind of industry should not appear under socialism, yet many otherwise good comrades advocate the opposite and insist that like any other industry that it should be owned and controlled by the workers. This is absurd of course, but I think it provides a good jumping off point to examine what did actually happen under socialism.
Thankfully, Friedrich Engels already had some words to say about this in his book The Origin of the Family, Private Property and the State. After exploring the history of marriage and the family, Engels arrives at the current era of civilization, which he says is characterized by “monogamy, supplemented by adultery and prostitution.” He explains that over time women are increasingly deprived of sexual freedom, and that adultery and promiscuity by the man are considered only a small moral blemish while in women they bear legal and social consequences; however, this culture tends to degrade women less than men and actually degrades the whole male world.
“Monogamy,” he explains, “arose from the concentration of considerable wealth in the hands of single individuals and from the need to bequeath this wealth to the children of that man and of no other. For this purpose, the monogamy of the woman was required, not that of the man, so this monogamy of the woman did not in any way interfere with open or concealed polygamy on the part of the man. But by transforming by far the greater portion, at any rate, of permanent, heritable wealth — the means of production — into social property, the coming social revolution will reduce to a minimum all this anxiety about bequeathing and inheriting.”
He then speculates about whether or not monogamy will not disappear under socialism. “For with the transformation of the means of production into social property there will disappear also wage-labor, the proletariat, and therefore the necessity for a certain — statistically calculable — number of women to surrender themselves for money. Prostitution disappears; monogamy, instead of collapsing, at last becomes a reality — also for men… With the transfer of the means of production into common ownership, the single family ceases to be the economic unit of society. Private housekeeping is transformed into a social industry. The care and education of the children becomes a public affair; society looks after all children alike, whether they are legitimate or not. This removes all the anxiety about the “consequences,” which today is the most essential social — moral as well as economic — factor that prevents a girl from giving herself completely to the man she loves… Full freedom of marriage can therefore only be generally established when the abolition of capitalist production and of the property relations created by it has removed all the accompanying economic considerations which still exert such a powerful influence on the choice of a marriage partner. For then there is no other motive left except mutual inclination… But what will quite certainly disappear from monogamy are all the features stamped upon it through its origin in property relations; these are, in the first place, supremacy of the man, and, secondly, indissolubility… If only the marriage based on love is moral, then also only the marriage in which love continues… But what will there be new? That will be answered when a new generation has grown up: a generation of men who never in their lives have known what it is to buy a woman’s surrender with money or any other social instrument of power; a generation of women who have never known what it is to give themselves to a man from any other considerations than real love, or to refuse to give themselves to their lover from fear of the economic consequences. When these people are in the world, they will care precious little what anybody today thinks they ought to do; they will make their own practice and their corresponding public opinion about the practice of each individual — and that will be the end of it.”
The Origin of the Family, Private Property and the State was first published in 1884. It is important to remember that Engels’s approach to socialism was scientific in that he made an observation, gathered evidence, and formed a hypothesis; he was not a fortune-teller or a soothsayer. Since the publication of this book a number of Communist countries have risen and fallen, and we actually have the ability to explore how his hypothesis fared against material reality. To keep things short, I’m going to draw mainly on the experience of the German Democratic Republic.
After WW2, West German women gave up their jobs to the men who came back from war, returned to the household, and were taught how to be good housewives. By contrast, in East Germany few soldiers returned, leaving a smaller workforce, which prompted women to join it. Men in East Germany were caricatured as being housebroken for having to take up an equal share of the housework.
In East Germany, sex education was discussed in depth on children’s TV. Public education helped to fill educational and moral gaps left by parents. The government actually supported premarital sex as long as it was healthy, and parents were told not to obstruct it or make it difficult. By contrast, sex ed in West Germany was extremely conservative.
In East Germany, the cost of living was low, so children were not a financial burden. Women did not always choose to marry if they became pregnant. Women also had children at younger ages than in West Germany, with the average age of childbearing around 20. The government wanted to support stable and healthy families, so if a marriage was not working out, divorce was considered a perfectly viable and healthy option. Divorce cost almost nothing, was easier to do, and happened more frequently. Typically, all a woman had to say was that her husband was unbearable, and that was it.
West Germany immediately stigmatized birth control after it was invented, and the women who took it were said to have low morals. Even the name of the pill was different: in West Germany it was called the Anti-Baby Pill, while in East Germany it was called the Wanted Baby Pill — and it was free of charge. East Germany also legalized abortion in 1972, after International Women’s Day. The government blessed the new innovations in family planning on TV in order to dispel worries they would lead to moral degradation. Young people also became more openly promiscuous before marrying.
In East Germany sex was talked about openly and frequently. Books and movies were made to tell men how to give women an orgasm. As a result of all this, East German women had twice as many orgasms as their counterparts in West Germany.
East Germany banned pornography as well as prostitution. Despite this, pornography still circulated in secret, and it was characterized as sensual instead of exploitative. The people who did create and circulate pornography in East Germany generally had positive feelings about the experience. However, one of the curious byproducts of the ban on pornography and the elimination of capitalism was the unconscious creation of a culture of nudism, which began at home with the family and was accelerated by women into becoming a public phenomenon. Nudism was common at camping sites and beaches, and about 90% of people had some experience with it. This was attributed to people’s freedom with their bodies, which wasn’t alienated by capitalism and the sex industry.
So, what can we take away from all this? Like Engels said, women became economically independent and thus sexually independent. The monogamous relationship lasted only as long as both partners were willing, and men shared an equal responsibility in the household. There was also something new and unexpected in the creation of a nudist culture in the GDR, which could be an unreproducible feature of the time and the place, but it nonetheless came about. The invention of birth control also did wonders to expand women’s already significant sexual freedom under socialism. Interestingly, prostitution and the sex industry disappeared, but not pornography. In the case of the latter, there was no industry or enterprise built around it, and it was of a different character than in West Germany. Also, we can’t be one hundred percent sure that prostitution completely disappeared either. As Michael Parenti recalls from a trip to Moscow, he was told that there was no prostitution in the USSR. He said, “There may be no prostitution in your country, but there’s a hell of a lot of it in this hotel.” The woman he was speaking to responded, “Well, those women also have regular jobs during the day, I’d like you to know.” I think the conclusion we can draw from these peculiarities is that human beings are messy and imperfect. While it may be possible to destroy the sex industry and the exploitation of people’s bodies, eliminating these behaviors and tendencies altogether is a much tougher proposition. With the advancement of science, particularly in regards to photography, film, and even digital art, this has the potential to be a mainstay of any society. Erotica has been a facet of society since ancient times and it could very well stay, though in a vastly limited capacity and, again, in a different manner and context. This is just my observation though, and I could also very well be wrong.
In summary, what we can be sure will disappear under socialism are the patriarchal institutions and relationships that were formed by the concentration of capital and the dominion of class society.
For more information, I suggest you check out the documentary, “Do Communists have better sex?”
https://youtu.be/9cMccZG-dGc | https://medium.com/@bodhishevikbolsattva/sex-under-communism-875c91f61bf3 | [] | 2020-09-23 21:04:00.872000+00:00 | ['Socialism', 'Communism', 'Friedrich Engels', 'Sex', 'Sexuality'] |
How to Set Up and Use Find My on Mac | With the latest update, Apple merged the Find My Phone and Find Friends apps and released the Find My app for Apple devices. Find My app is pre-installed on Mac and other Apple devices. Just like Find My Device and Find Friends, users can track their friend’s device location. This new app brought some new features that users should know, and for a Mac user, it could be more interesting. So here is the guide to use the Find My app on Mac.
Tracking Friends Using Find My on Mac
Open the app, and on the People tab you’ll be able to see everyone who has agreed to share their location with you.
Run the Find My application on Mac.
Tap on the People tab.
Choose the person you want to track.
You can choose the map’s view: default, satellite, and hybrid.
Tap — and + to Zoom In and Zoom Out.
Press Location to track your actual location on the map.
Press Share My Location to share your current location with your friend.
Enter the name of your friend in the To box and share your location.
If you’re not connected with the person, add them to your contacts and then add them in the pop-up menu.
After adding the person, press Send to share your location.
Track Your Devices Using Find My on Mac
Luckily, the Find My app can also be used to track devices. People in your family who use your Apple ID on their devices can track them easily.
Launch the Find My application on Mac.
Tap the Devices tab.
If you see multiple devices on the left tab, choose the particular device you want to track.
You can see its location on the same map views: default, satellite, and hybrid.
Tap — and + to Zoom In and Zoom Out on its location.
Both the Find My Phone and Find Friends apps were the best apps to track friends and devices associated with the same Apple ID. To make things simple, the developers built Find My and folded both previous apps’ features into one single app. In terms of user interface and features, Find My is easy to pick up and works smoothly. It can track all the devices associated with a single account.
The good thing for Mac, iPhone, and iPad users is that they all get it on their device by default, so they don’t have to install it separately. To set up the account, users only need to enter their Apple ID and password. You can then use the app to track your friends and family members without any extra verification process.
Conclusion
Apple makes great hardware and software, but the way they are improving their ecosystem is groundbreaking. Apple believes its users should experience the best-optimized apps, and that is possible because Apple manufactures both the hardware and the software. To get the most out of the Find My app, users should update their Mac, iPhone, and iPad to the latest OS version.
Source: How to Set Up and Use Find My on Mac | https://medium.com/@addisonwalker922/how-to-set-up-and-use-find-my-on-mac-4897d4ef9178 | ['Addison Walker'] | 2020-12-24 08:42:06.686000+00:00 | ['Set Up', 'Mac'] |
‘The Girl With All the Gifts’ is Beautiful and Troubling | Beware of the fungus among us.
The Girl With All the Gifts drops us into a post-apocalyptic world where a fungus has spread throughout the population, transforming people into zombies — what they call “hungries”. We meet Melanie, a young girl who is a hungry yet shockingly human. When the hungries overtake their secure compound, Melanie escapes along with Dr. Caldwell (Glenn Close) and her teacher, Helen Justineau (Gemma Arterton). They, along with a few soldiers, seek safety and shelter.
If you haven’t yet seen The Girl With All the Gifts, I recommend you go and watch it. I went into the action knowing little about the film itself and I’m really glad that I did. This made the story’s unraveling all the more interesting and intense. I don’t want to spoil your opportunity to go into it blind.
The Girl With All the Gifts movie poster
With so many different films and TV shows about zombies, it can be challenging to create something truly unique in this genre, but that’s exactly what The Girl With All the Gifts did. The hungries aren’t the slow travelers that we’ve seen in other movies; they move with agility and speed. They also go into a dormant sleep state.
Possibly the most interesting part of this new brand of zombies, is that they eventually decompose together and sprout seed pods that we learn could spell the end for humankind. Though Dr. Caldwell makes it clear that it takes a great deal of force to open up these pods, this adds an element of terror to the story — the imminent destruction of mankind will do that to you. The seed-pod structure of decaying hungries is both horrific and shockingly beautiful.
The film also departs from the typical zombie film with its choice of protagonist. The Girl With All the Gifts primarily focuses on the female characters, with the male military officers supporting them in their quest to save themselves. What a departure from the typical action film that has a male protagonist leading the way!
By dropping us directly into the story, the film is able to build the world by weaving it strategically into the action — we learn the rules of this post-apocalyptic world as we’re being led through it by this unlikely group of heroes. Because there was no opening exposition, I leaned into the action and paid closer attention to everything that was happening on screen.
The Girl With All the Gifts looks so much better than so many other post-apocalyptic films I have seen before. One of the most breathtaking scenes is when the group has to pass through a group of dormant hungries. While watching them silently walk through that crowd of decaying bodies, I found myself holding my breath, just waiting to see which one of the group was going to set off the hoard. To me, this was a much more powerful and suspenseful scene than the traditional zombie action scenes we’re used to watching.
The ending is probably my favorite part of the entire film. It beautifully flips the traditional story on its head — instead of the humans conquering the zombies, Melanie releases the pods of the fungus, making the toxin airborne. She effectively kills humanity in order to save herself and her kind. Everyone is now infected, just like her. Often in these films, we root for the heroes to exterminate the virus so they can live happily ever after, but this different ending gave me pause. This departure from norms made me take time to think about how I interact with the world around me.
Because 2020 is the year of the pandemic, I couldn’t help but think about how these two different views of the zombie genre reflect how many people are thinking about the virus that humanity is currently battling. Some are waiting for a time when the pandemic is miraculously gone, while others are planning for a way for us to live with the virus. It’s probably way too meta of a reading of the film, but we really do bring our context to whatever we’re watching.
The Girl With All the Gifts is based on a novel, which has some striking differences. In the book, Melanie is a fair-skinned girl with long hair and Miss Justineau is a tall Black woman in her 40’s. In the film, however, Melanie is a young Black girl and Miss Justineau is a fair-skinned woman. While this might seem like a fairly small difference in casting, I think there are some important implications.
While watching, I couldn’t help but feel like I did while watching Blade — it was uncomfortable to watch a group of white people seek a cure from a Black person’s body. It was nice when Dr. Caldwell admitted that Melanie was, indeed human, but I couldn’t separate myself from the current moment we’re living in. That scene hit me differently because I’m so aware of how many people are dehumanized because of the color of their skin. Again, I’m very aware that my watching this film in 2020 has probably affected my reading of the action of the story.
All in all, I enjoyed The Girl With All the Gifts. It was a fun ride and an interesting take on the zombie concept. I enjoy films that push a certain genre further, and I believe that’s exactly what this did. While there were some problematic moments, it’s one I think is well worth watching. | https://medium.com/the-strategic-whimsy-experiment/the-girl-with-all-the-gifts-is-beautiful-and-troubling-5f502fd1fba1 | ['Sarah Callen'] | 2021-01-18 02:07:37.364000+00:00 | ['Zombies', 'Movie Review', 'Movies', 'Film Reviews', 'The Girl With The Gifts'] |
The time of the living — Jean-Francois de Clermont-Tonnerre | The time of the living
The end of the year period is often conducive to nostalgic or even metaphysical reflections, to a more acute realization that time passes and will not return…
He is a French writer — and not a minor one — who has always expressed a real fascination with time. I am talking about Jean d’Ormesson, who has devoted many books to it.
One of his latest titles — The Guide to the Lost — a sort of philosophical stroll articulated around 29 key words, gives a very accurate definition:
“As familiar as it may be and omnipresent in our existence punctuated with large strokes of calendars, divided into seconds, hours, weeks, centuries, it is nonetheless unfathomable. It is not the work of man and is not subject to any evolution. And yet it exists since we age and die, that everything passes and everything goes…
We know everything — or almost everything — about matter, air, water, and even thought, but basically nothing of this time, whose frightening mystery ends up seeming to us of obvious simplicity.
In the living world, time governs discomfort, the cell, and the function of an organ, punctuating the life of any organism just as much as that of a man or a population.
Controlling longevity and working to understand cell death is a challenge that could resemble a certain quest for immortality.
Science, under various names and across various specialties, deciphers space and time and tries, through experiments, hypotheses, and deductions, to understand life and to find ways to prolong it.
Looking back, life appeared on earth almost 4 billion years ago as one-celled beings, some of which have evolved into every species known to date. This sort of distant lineage would tend to explain the uniqueness of the genetic code.
Of all living things, bacteria are the ones that have been best able to adapt to the climatic, geological and ecological changes that have affected the Earth over the ages.
Over time, these bacteria have diversified and have acquired new characteristics that allow them to conquer the most diverse biotopes.
They form a sort of invisible planetary “super-organism” whose diversity and complexity scientists strive to understand.
It is fascinating to note that for them, on the other hand, death means nothing: they are always ready to sacrifice themselves for the survival of the species because they are assured of being able to be reborn at the slightest auspicious occasion.
Scientists compare their unicellular character to a kind of individuality that does not prevent them from coming together in a very structured community.
The worldwide development of antibiotic resistance seems to be a formidable example of the effectiveness of this “inter-microbe assistance”.
The first to appear on our planet, could we imagine that they would be the last to survive us?
Man is indeed the only animal that is aware of its finitude: it could be a strength, but the lack of control and the feeling of vulnerability that results from it undoubtedly make it our greatest weakness.
And yet death is indeed the end and inescapable outcome of life, even if the myth of immortality remains timeless
Some visionaries have endeavored to find words to “think” time and duration and to help us summarize an optimal philosophy of life by bringing together all the basic temporal values expressed by the verb in the same syntax, such as: “like the past, the future nourishes the present”. Could this possibly help us?
It is the prospect of our death that gives meaning to these expressions that we are accustomed to repeat so often: “I have time, I no longer have time, time flies too quickly…”
So when you think about it, the immortal condition would undoubtedly make us infinitely lazy or infinitely tired beings: so maybe there is nothing to regret… | https://medium.com/@jeanfrancoisdeclermonttonnerre/the-time-of-the-living-jean-francois-de-clermont-tonnerre-d030c78b9d5a | ['Jean-François De Clermont-Tonnerre'] | 2021-01-22 09:57:35.247000+00:00 | ['Fondationjfmlct', 'Jean Francois', 'Philanthropy', '2021'] |
Words are Spicey..Sometimes Poison.. | Are your words soothing as honey or poison and hurtful?
Words kill easily. They break down a person’s confidence and soul.
The poison assimilates through the skin with transparent ease. We are porous humans. We integrate what surrounds us and it becomes a part of all we are. Every cell incorporates that wicked nectar.
© victoria forest
Can that dis-ease be stopped from reaching our physical and mental state? What are the barriers we can erect? Not band-aids. Barriers.
We are overwhelmed with masks as of late. Would that work? They hide expression, ward off germs? Masks muffle some pronunciation, but that is clearly not enough to save a mirrored reflection.
The answer is to be your own mirror- not a reflection of another’s language. Your own person. Not an easy task for sure.
How do we learn to uphold steel, cold armor against the bully? The spouse? Parent? Boyfriend/Girlfriend? Colleague?
© victoria forest
Self-confidence is a lifelong journey.
Focus on the positive, the grace and see through the bullshit. Learn to “read” people and realize hurtful people are those that are hurting themselves.
Study: Body Language. Mannerisms. Movement. Expression. Nuances.
Move on. Ignore.
Read, educate yourself. Find like-minded people. There are many spices in life that are both sweet and savory — find them. Look — then look again, deeper.
Let it be. Disregard the haters and ignorant souls as they simply do not know better. Gain your strength and grit from within yourself. You got this.
This is your journey — why give the power over to someone else? Claim it. Claim yourself in total. Be authentic to yourself. This life is yours.
Confronting is helpful and important only with support and safety. Remove yourself from poison situations and people whenever possible.
Rise above. Go High. Reach for assistance if need be, but do not tolerate any disrespect. You deserve better.
No one is better than you, and you are better than no one. Some humans — this time around — uplift instead of destroying and tearing down.
Be that human. | https://medium.com/raw-at-2-am/words-are-spicey-sometimes-poison-4e9d8816f0f2 | ['Victoria Forest'] | 2020-12-09 12:04:20.531000+00:00 | ['Life', 'Mental Health', 'Teens', 'Bullying', 'Humanity'] |
FAILED: DON8 App | Hello there, I’m starting a series called “FAILED”. I will add my fail projects to this series and describe what we did, why we failed etc.
DON8 Application
Project Info
The first project is DON8. The app’s name, DON8, means “donate”. It is a non-profit platform built around food and drink purchases. Basically, it automatically generates donations for people in need from the food and drinks purchased.
So how do people donate? There are discounted products in the app. Restaurants add their discounted products, and DON8 shows the restaurant’s location and information on the home page, promoting those products. But the user doesn’t receive the full discount — half, or a specified percentage, of it goes to charities instead.
So people eat for less money, and restaurants get promoted.
Why Failed?
Let’s talk about why we failed. Isn’t that a great project?
Dead Teamwork
We were a team of four, but just two of us worked. I handled the prototype development and built the application. Another member worked on presentations, texts, and other materials; I helped with those too.
But the other two of us were passive. So were they at fault? No, I was. As the project’s manager and developer, I should have chosen team members more carefully and distributed tasks to members properly.
Presentation was BOOOOOOORIIING
Video Pitch
We made the intro pitch in about two hours. We didn’t have much time, but it was definitely enough for a hackathon.
We presented “What is DON8?”, “How does it work?”, and page titles. There was a lot of text. My talk was slow. Zero images.
I sped things up to present a functional application in the last minute. Then I realized we weren’t accelerating — we were compressing all that unnecessary information into the last minute, and that compressed information was bombarding people’s minds. I should have cut the unnecessary material to shorten the talk, not compressed everything.
Manipulating Votes
We manipulated votes. How did we do that? Were we cheaters? Well, we sent the Devpost link to our friends to request likes right after we finished the project. But likes aren’t important — the important thing is the votes. So other projects saw we had so many likes.
So they did the same thing. But they did it 5 hours before the winners were announced. They got 56 votes and 53 likes :D IN 5 HOURS!!!!
And the last and most painful fact: the idea was terrible
Nobody wouldn’t download the app for a 1$ discount and restaurants wouldn’t want to give a discount for promoting their place.
Also, if we had really launched the project and tried to turn it into a real product, we would have needed money to advertise the application, because no users means no restaurants, and no restaurants means no users.
Project Link: https://devpost.com/software/don8-donation-made-great
Thank you for your read. Please leave a comment on what you think. Did I have recognized an incorrect thing? Or did I miss a mistake? | https://medium.com/@akcware/failed-don8-app-952abd922a74 | ['Aşkın Kadir Çekim'] | 2020-12-26 23:01:55.911000+00:00 | ['Hackathons', 'Startup'] |
Unsupervised Learning with Weighted K-Means | K-Means is an easy to understand and commonly used clustering algorithm. This unsupervised learning method starts by randomly defining k centroids or k Means. Then it generates clusters by iteratively running the following two tasks:
Each data point is assigned to a cluster such that it is closer to its own cluster center than to any other cluster center. In other words, each data point’s Euclidean distance to its own cluster’s centroid is smaller than its distance to any other centroid
For each cluster, the new “cluster center” — the arithmetic mean of all the points belonging to the cluster — gets calculated
These two tasks are repeated until one of the following happens -
The sum of squared Euclidean distances for all data points is minimized
Iterations reach a predefined limit
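Put formally — this is the textbook objective, not anything specific to a particular library — the algorithm is minimizing the within-cluster sum of squared distances:

J = Σⱼ Σ_{xᵢ ∈ Cⱼ} ‖xᵢ − μⱼ‖²

where Cⱼ is the j-th cluster and μⱼ is its centroid, the plain arithmetic mean of the points assigned to it.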
There is a caveat with this method in any use case where all data points should not carry equal importance: the clusters and the centroids end up not being representative of the data points.
Suppose you are trying to segment the customers of a retail store based on their organic and local product purchase behavior. Some customers have a high spend share in both categories, some have a low share in both, and the rest have a mixed propensity. You may use this information later to design a marketing campaign. You also want to understand the representative value of spend share for each customer group, so that you can evaluate campaign performance.
A large number of customers have high spend share on these categories, but their overall spend and visit frequency are so low that they are less likely to respond to the campaign. The data points for these customers are going to generate biased clusters and centroid values. In such a scenario, you can use weighted K-Means and assign observational weight to each customer data point.
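This is exactly what the weighted variant addresses. Each data point xᵢ carries an observational weight wᵢ — here, something like the customer’s total spend — and the objective becomes

J_w = Σⱼ Σ_{xᵢ ∈ Cⱼ} wᵢ ‖xᵢ − μⱼ‖²

so the centroid update turns into a weighted average, μⱼ = Σ wᵢxᵢ / Σ wᵢ taken over the members of cluster j. A low-spend customer now pulls the centroids far less than a high-spend one, which is the semantics behind scikit-learn’s sample_weight argument used below.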
Here is a step-by-step guide to generating weighted K-Means clusters using Python 3
Step 1: Import all libraries and generate random samples for the exercise. In this case I am creating blobs for the yearly spend share in the two categories and a skewed distribution for the customer’s overall yearly spend
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()  # for plot styling
from sklearn.cluster import KMeans
from pandas import DataFrame
from sklearn.datasets import make_blobs  # samples_generator was deprecated; import directly from datasets
from scipy.stats import skewnorm

# Two blob features (the spend shares) plus a right-skewed overall spend column
X, y = make_blobs(n_samples=5000, centers=4, n_features=2, random_state=25)
df = DataFrame(dict(Pct_Spend_in_Organic=(X[:, 0]) + 10, Pct_Spend_in_Local=(X[:, 1]) + 10))
df['Total_Spend'] = (skewnorm.rvs(1000, size=5000) * 5000) + 100
ax = df.reset_index().plot(x='index', y="Total_Spend", kind="hist")
Step 2: Visualize the input data in a scatter plot
plt.style.use('default')
x = np.array(df['Pct_Spend_in_Local'])
y = np.array(df['Pct_Spend_in_Organic'])

plt.figure(figsize=(15, 10))
plt.scatter(x, y, s=5, c='orange', label='Spend in Organic Products')

plt.title('Pct Spend in Local vs Organic Products', fontsize=18, fontweight='bold')
plt.xlabel('Pct Spend in Local', fontsize=15)
plt.ylabel('Pct Spend in Organic', fontsize=15)
Step 3:
Define K-Means with 1000 maximum iterations
Define an array ‘X’ with the input variables
Define an array ‘Y’ with the column ‘Total_Spend’ as the observational weights
kmeans = KMeans(n_clusters=5, random_state=0, max_iter=1000)
X = np.array(df.drop(['Total_Spend'], axis=1).astype(float))
Y = np.array(df['Total_Spend'].astype(float))
Step 4: Run the weighted k-means clustering and enter the ‘X’ array as the input and ‘Y’ array as sample weights. Generate the cluster levels for all data points
wt_kmeansclus = kmeans.fit(X,sample_weight = Y)
predicted_kmeans = kmeans.predict(X, sample_weight = Y)
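To see what sample_weight is doing under the hood, a quick sanity check (reusing the X, Y, and model variables defined above) is to recompute one centroid by hand as the weighted mean of its members — at convergence the two should match closely:

# The fitted centroid of cluster 0 should equal the weighted mean of its members
mask = (predicted_kmeans == 0)
manual_centroid = np.average(X[mask], axis=0, weights=Y[mask])
print(manual_centroid)                    # weighted mean computed by hand
print(wt_kmeansclus.cluster_centers_[0])  # centroid reported by scikit-learn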
Step 5: Visualize the clusters and the centroids in a scatter plot
plt.style.use('default')
plt.figure(figsize=(15, 10))
# Note: X[:, 0] is Pct_Spend_in_Organic and X[:, 1] is Pct_Spend_in_Local
# (the DataFrame's column order), so plot column 1 on the x-axis to match the labels
plt.scatter(X[:, 1], X[:, 0], c=wt_kmeansclus.labels_.astype(float), s=10, cmap='tab20b', marker='x')
plt.title('Customer Spend Local vs Organic - Weighted K-Means', fontsize=18, fontweight='bold')
plt.xlabel('Pct_Spend_in_Local', fontsize=15)
plt.ylabel('Pct_Spend_in_Organic', fontsize=15)

centers = wt_kmeansclus.cluster_centers_
plt.scatter(centers[:, 1], centers[:, 0], c='black', s=500, alpha=0.5);
Step 6: Join the cluster labels and the centroids with the data-frame
df['ClusterID_wt'] = predicted_kmeans
centersdf = pd.DataFrame(centers)
centersdf['ClusterID_wt'] = centersdf.index
# Column 0 of the centroids is the Organic dimension and column 1 is Local
# (matching the column order of X); name them to match the plotting code below
centersdf = centersdf.rename(columns={0: 'Wt Centroid: Pct Spend in Organic', 1: 'Wt Centroid: Pct Spend in Local'})
df = df.merge(centersdf, on='ClusterID_wt', how='left')
df.head()
Comparison: K-Means clustering with and without the observational weights
Run the clustering exercise without the observational weights and join the cluster labels and centroids with the data frame.
kmeans = KMeans(n_clusters=5, random_state=0, max_iter=1000)
kmeansclus_nw = kmeans.fit(X)
predicted_kmeans_nw = kmeans.predict(X)
centers_nw = kmeansclus_nw.cluster_centers_

df['ClusterID_unwt'] = predicted_kmeans_nw
centersdf_nw = pd.DataFrame(centers_nw)
centersdf_nw['ClusterID_unwt'] = centersdf_nw.index
centersdf_nw = centersdf_nw.rename(columns={0: 'Unwt Centroid: Pct Spend in Organic', 1: 'Unwt Centroid: Pct Spend in Local'})
df_nw = df.merge(centersdf_nw, on='ClusterID_unwt', how='left')
df_nw.head()
Generate a scatter plot for the unweighted clusters:
plt.figure(figsize=(15, 10))
# As before, X[:, 1] (Local) goes on the x-axis to match the labels
plt.scatter(X[:, 1], X[:, 0], c=predicted_kmeans_nw, s=10, cmap='tab20', marker='x')
plt.scatter(centers_nw[:, 1], centers_nw[:, 0], c='black', s=500, alpha=0.5);

plt.title('Customer Spend - Local vs Organic - Unweighted K-Means', fontsize=18, fontweight='bold')
plt.xlabel('Spend_in_Local', fontsize=15)
plt.ylabel('Spend_in_Organic', fontsize=15)
The clusters look different!
Generate plots to see what the centroid values look like in both cases:
ax = df_nw.plot(x="Total_Spend", y="Pct_Spend_in_Local", kind="scatter",title="Centroid Comparison - Weighted vs Unweighted",
figsize=(10,6),color='plum',s=2)
df_nw.plot(x="Total_Spend", y="Wt Centroid: Pct Spend in Local", kind="scatter", ax=ax, color="blue", s=2,
label="Wt Centroid: Pct Spend in Local")
df_nw.plot(x="Total_Spend", y="Unwt Centroid: Pct Spend in Local", kind="scatter", ax=ax, color="red", s=1,
label="Unwt Centroid: Pct Spend in Local")
ax = df_nw.plot(x="Total_Spend", y="Pct_Spend_in_Organic", kind="scatter",title="Centroid Comparison - Weighted vs Unweighted",
figsize=(10,6),color='plum',s=2)
df_nw.plot(x="Total_Spend", y="Wt Centroid: Pct Spend in Organic", kind="scatter", ax=ax, color="red", s=4,
label="Wt Centroid: Pct Spend in Organic")
df_nw.plot(x="Total_Spend", y="Unwt Centroid: Pct Spend in Organic", kind="scatter", ax=ax, color="orange", s=1,
label="Unwt Centroid: Pct Spend in Organic")
Two out of five clusters have almost the same centroid in both cases. For the remaining clusters, the unweighted K-Means centroids differ slightly from the weighted ones.
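For a numeric rather than purely visual comparison, you can also measure how far each weighted centroid sits from its nearest unweighted counterpart (a small sketch using SciPy’s pairwise-distance helper on the centers and centers_nw arrays defined above):

from scipy.spatial.distance import cdist

# Distance from every weighted centroid to the closest unweighted centroid
shifts = cdist(centers, centers_nw).min(axis=1)
print(shifts)  # near-zero entries are clusters that the weighting barely moved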
When you are working on an unsupervised learning method, think about the use case carefully and make a decision about whether all the data points should have equal weights or not. Weighted K-Means is an easily implementable technique using python scikit-learn library and this would be a very handy addition to your data science toolbox — the key is to apply the method in a proper use case. | https://medium.com/@dey.mallika/unsupervised-learning-with-weighted-k-means-3828b708d75d | ['Mallika Dey'] | 2019-06-23 14:55:41.443000+00:00 | ['K Means', 'Unsupervised Learning', 'Clustering', 'Data Science', 'Machine Learning'] |
GPT-3 as a All-Seeing Prophet Predicts Biological War And Best Investment Strategy | There is interesting implementation of GPT-3 engine-based textual quest game, called AI Dungeon. Adventures here are quickly becoming very trippy, as if you are participating in a dream.
Here is an interesting dialogue with the Prophet that was generated by the game:
“> You cast a spell to summon Great All-Seeing Prophet.
A bright light appears and you find yourself face to face with a man. He is short and bald, but his face is happy despite the fact he is literally burning in the hot desert sun.
> You say “Greetings, wise Prophet, how should I call you?”
“Just call me Wally, I don’t mind,” Wally says. “Gladly, my name is…”
You explain your situation to Wally who listens intently.
> You say “Dear Wally, you, as all-knowing and all-seeing Prophet — what do you see for Humanity in 2021? Will COVID still be there?”
Well, there are a few points that I see for the future of this world.
The first is war.
There will be a war with the China.”
“The what?” You ask.
“The China. They’re a big group of people that live out in the China country. There will be a Third World War just like the first, but just like the First World War, that wasn’t all that it was cracked up to be. Once again, a few industrialized nations fought for the right to continue destroying the environment, and once again the non-aligned nations came out on the losing end. As a direct result of this war, the first generation of children will grow up even more wary of technology than they are now. The Second World War, for all its killing, didn’t teach the world anything. If anything, it made the world more twisted. ‘The prophet continues… or does he?
> You say “Ok, what about technology? WIll 5G prevail?”
“Look at the bright side: at least we’ll have 5G by then.”
“Ha ha, but what about cell service? Will it be available?”
“You’ll get something similar to 3G, but not quite the same.”
“Oh…”
“Technology won’t matter anyway. This war will be fought with biology, not machinery. Nuclear weapons won’t be used; the threat of mutually assured destruction is enough to stop that. No, instead there will be a biological weapon that will change the course of this war..”
“What’s that? What is it, Wally?”
“The Anti-Personnel Weapon. You’ve heard of the Plague, of course?”
“Yes, of course. It’s been known to kill people. But it only kills… Why are you smiling? What are you suggesting?”
“I’m suggesting that we release the plague into the air and soil of the battlefield. By the time anyone realizes what’s going on, it’ll be too late. Everyone will be infected. Death will follow.”
You stare at Wally in disbelief. “You can’t seriously be suggesting…”
“I am suggesting.” Wally says with a smile.
“Dear God… You’ll crush the entire world trying to protect your home.”
“It’s the best I can offer.” Wally shrugs.
You look away, deep in thought.
> You say “ok, what will be the safe heaven then?”
“Safe heaven?”
“The safe place. The place where there is no war, no death. The place where humanity can thrive.”
Wally starts nodding his head slightly. “You have a good idea there, friend. Yes… A safe heaven.”
“A safe heaven? I don’t understand.”
“A place where war and death do not exist. A place where humanity can thrive, live happily ever after. Where everyone gets along. Where there is no poverty, no crime, no war. Where everyone has enough, and no one goes hungry. Where everyone has a home, and the world is their home. A utopia.
> You say “Yes, exactly! Where it will be?”
“Yes. You may be right.” Wally smiles, his eyes darting around as he stares at the trees around. Then, he snaps back to you. “Yes, we can do that.” A far-off look enters into Wally’s eyes as he stares at a tree and his mind seems to leave this world for a moment.
> You say “Ok, please do so. What will be the best investment strategy?” “Huh?” Wally says, shaking his head as if coming out of a trance. “I said, what will be the best investment strategy? I need a good place to make money.” Wally starts nodding his head slowly again. “You want to make money, eh?” Wally says.
“Yes.”
“Well, I’d say you should go get yourself a job at the mill. There’s always work there.” Wally says.
> You say “Good one!”
“Really? Oh, thank you!” Wally says, smiling.”
Do you know if there any mill worker courses online? | https://medium.com/@yury-kachubeyeu/gpt-3-as-a-all-seeing-prophet-predicts-biological-war-and-best-investment-strategy-127d1f67b6a2 | ['Yury Kachubeyeu'] | 2020-12-01 05:47:42.166000+00:00 | ['Predictions', 'Gpt 3'] |
Cookie Monster Reviews Nicki Minaj’s Verse From “Monster” | Cookie Monster Reviews Nicki Minaj’s Verse From “Monster”
Image Copyright: HBO. (Fair Use.)
It’s one of those moments: you remember exactly where you were when you first heard it, what you were wearing, who you were with. We were at the Count’s castle, and I was totally naked. The Count was a big Nicki fan — ever since he’d moved to New York to do Sesame Street, he was really into the hip hop scene there. He played the track, and we were just stunned. We just stared at each other, our mouths open in shock. Cookie crumbs tumbled down my chest.
Count finally broke the silence to inform me that Nicki had spat 265 words in that blistering verse. That was always our biggest argument about rap. I loved the swagger and the vibe and the wordplay; Count just loved the word count.
There is one thing we agreed on, though: Nicki’s verse on “Monster” went hard. So sneering and petty and visceral. It took her to a whole new level. Take it from me, an actual monster: the way she enters the song, right there at the end, is downright monstrous.
I know I’m five paragraphs in, but first things first: “First things first, I’ll eat your brains” is the perfect line. I mean, I prefer cookies to brains (it’s a texture thing), but that is what a motherfucking monster do. Every rap verse, Sesame Street episode, and bite of cookie should start with “first things first.”
Despite wearing a designer shoe, Nicki stomps through the next few lines, shouting out her crew and bragging about money. Could she have called it “dough”? Sure. Later on, Nicki attempts to iron out some of the details her haters are spewing her way, asking, “Wait, I’m the rookie? / But my features and my shows ten times your pay?” I know something that rhymes with “rookie!” And yet, the rhyme just hangs there, abandoned, as Nicki launches into the next bar.
We’re about halfway through the verse now — how about this Nicki Minaj though, right? Wow — bars, attitude, swagger, it’s all still there, ten years later. Something that’s coming up for me on this listen, though — something I’ve never really noticed before — is the absence of a particular baked goods-loving monster.
If Nicki was going to rap about monsters, why not rap about an actual monster? A monster who stood in line for two freezing hours after that Barclays show just to get a hurried selfie? A monster who tagged Nicki in their Instagram story without any response? A monster who questioned their worth and if they cared a little too much in the first place and who spent an entire winter avoiding cookies and drinking green juices and just generally growing out of touch with their own monster-hood?
Why not rap about that monster, Nicki?
Having said that, dear reader, the second half of the verse is tough. Nicki’s Barbie and Roman personas duel, with Barbie spelling out “F-U-C-K” then saying it for real one line later, and Nicki using Roman as a stand-in to boast about her wealth, noting that even her “pockets” are eating cheeseca–.
Christ.
She did this on purpose, didn’t she? Cheesecake?! Are you kidding me? I know, I know, “cheese” is slang for money or whatever but so is dough and like a million other things. Would it have been such a stretch to say “my pockets eating chocolate chip cookies”? Let’s just move on.
I will say, I do like the “Pink wig, thick ass, give ’em whiplash” part (it’s hot?). But why, Nicki, after sixteen bars, references to cheesecake and candy, and hints at two bogus “monsters,” why won’t you mention Cookie Monster? Like I said, I’ve always been a big fan. And this is what I get? A legendary 16-bar jaunt that doesn’t name-drop me once, goes out of its way not to mention me, one of America’s greatest monsters?
You know what, Nicki? Your verse is great and all, but F-U-C-K you. I’m going to listen to Cardi and eat some C-O-O-K-I-E-S, because I’m a motherfucking monster. | https://medium.com/slackjaw/cookie-monster-reviews-nicki-minajs-verse-from-monster-d5d35e0c24a1 | ['Raleigh Mccool'] | 2020-12-16 13:57:41.817000+00:00 | ['Music', 'Satire', 'Humor', 'Nicki Minaj', 'Hip Hop'] |
Building Cross-Platform Apps With SwiftUI | Coding Time
Our application will be a very simple article-listing app. Perhaps I can develop it further in future articles about universal apps.
Final app
Creating a new Xcode project
First of all, let’s create a new Xcode project. We need to create this project as a multiplatform app.
Of course, you can give the project any name you want. After that, the only thing left is to start making your project.
My advice while developing any project is to divide your files into folders. For this project, you can use the same folder structure I do.
Building the data model
There will be only one data model in our project. Since our application displays articles, the single model we need represents an article. This model has four properties: id, title, description, and type.
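A minimal sketch of that model (the property types are my assumption; the original gist isn’t reproduced here):

struct Article: Identifiable {
    let id = UUID() // satisfies Identifiable, so we can use it in a List
    let title: String
    let description: String
    let type: String
}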
The reason we use ‘Identifiable’ is that we’ll use this structure inside a ‘List’ object.
Of course, we need objects created with the Article model. We’ll create this data by writing it directly.
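For example (the titles and descriptions below are placeholders, not the original sample data):

let articles = [
    Article(title: "First Article", description: "A short description.", type: "Tutorial"),
    Article(title: "Second Article", description: "Another description.", type: "Opinion"),
    Article(title: "Third Article", description: "One more description.", type: "News")
]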
Now that our data model and the data sets are ready, let’s create the interface elements to display this data.
Building views
First, let’s create an ArticleView object that shows the properties of a single Article object.
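One way it might look (the exact layout details are my assumption):

struct ArticleView: View {
    let article: Article

    var body: some View {
        VStack(alignment: .leading, spacing: 4) {
            Text(article.type)
                .font(.caption)
                .foregroundColor(.secondary)
            Text(article.title)
                .font(.headline)
            Text(article.description)
                .font(.subheadline)
        }
        .padding(.vertical, 4)
    }
}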
Now to show these ArticleView elements together, create a new View object. It’s called ArticlesListView.
We won’t create the List object directly in the body because when the application is opened on a Mac device, it’s better to set the size accordingly. As you can see below, we adjusted the size of the Mac app with the frame method.
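A sketch along those lines (the frame values are placeholders, not the original numbers):

struct ArticlesListView: View {
    // The List lives in its own property so the Mac build can wrap it in a frame
    private var list: some View {
        List(articles) { article in
            ArticleView(article: article)
        }
        .navigationTitle("Articles")
    }

    var body: some View {
        #if os(macOS)
        list.frame(minWidth: 400, minHeight: 500) // size the Mac app's content
        #else
        list
        #endif
    }
}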
Now it’s time for the important part: SideBar and TabBar. These two elements are extremely important because they define the structure of our application; we usually divide our apps into main screens with them. In this app, the TabBar element will be used for the iPhone version, and the SideBar element for the iPad and Mac versions.
Let’s start with the TabBar element. The only thing we do in this element is arrange our main screens. We use the tag method to identify each screen correctly.
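Something like the following; the second “Favorites” tab is purely hypothetical, just to show how the tags line up:

struct TabBar: View {
    var body: some View {
        TabView {
            NavigationView { ArticlesListView() }
                .tabItem { Label("Articles", systemImage: "doc.plaintext") }
                .tag(0)
            NavigationView { Text("Favorites") } // hypothetical second screen
                .tabItem { Label("Favorites", systemImage: "star") }
                .tag(1)
        }
    }
}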
Let’s move on to the SideBar element. Like the TabBar, it is built around one basic View element: List. We’ll use NavigationLink objects to navigate from within the List.
Develop the code as follows to ensure the correct positioning of the NavigationLink elements.
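Roughly as follows (again, the “Favorites” destination is hypothetical):

struct SideBar: View {
    var body: some View {
        NavigationView {
            List {
                NavigationLink(destination: ArticlesListView()) {
                    Label("Articles", systemImage: "doc.plaintext")
                }
                NavigationLink(destination: Text("Favorites")) { // hypothetical
                    Label("Favorites", systemImage: "star")
                }
            }
            .listStyle(SidebarListStyle()) // renders as a proper sidebar on iPad/Mac
            .navigationTitle("Menu")
        }
    }
}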
Important: You have to locate the files of the SideBar and TabBar elements as done below — otherwise, problems will occur.
If your files can’t communicate with each other, you may need to edit your Target Memberships. Our project is very small, so you can simply enable both existing targets for each file.
Finally, it’s time to edit the ContentView. Here we need to run different elements depending on which device the app is running on. For this, we’ll use tools like #if, which we also used earlier, and then we’ll use the horizontalSizeClass property. Thanks to horizontalSizeClass, we can detect whether the device running the application is an iPad or an iPhone.
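Put together, the idea looks like this (a sketch, assuming the TabBar and SideBar views above):

struct ContentView: View {
    #if os(iOS)
    // horizontalSizeClass only exists on iOS, so it must be guarded
    @Environment(\.horizontalSizeClass) private var horizontalSizeClass
    #endif

    var body: some View {
        #if os(macOS)
        SideBar()
        #else
        if horizontalSizeClass == .compact {
            TabBar()  // iPhone
        } else {
            SideBar() // iPad
        }
        #endif
    }
}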
Thus, our application is officially completed! | https://medium.com/better-programming/building-cross-platform-apps-with-swiftui-3fea88cdb0ae | ['Can Balkaya'] | 2020-12-17 22:51:07.213000+00:00 | ['Swift', 'Programming', 'iOS', 'Swiftui', 'Mobile'] |
Is Your Inclusion Inclusive? | Inclusivity Isn’t “Tolerance”
If we truly acknowledge that inclusion isn’t just about skin colour and surface-level statistics, then we can recognize that it is about being inclusive of other individuals. You see, this is the problem with “inclusion.” While most people consider inclusion to be the act of including someone, we rarely consider what people experience after we create these “inclusive” spaces. Once we invite someone to the table, do they really feel like they belong there? This is why you see so many organizations starting to include the word belonging in their diversity and inclusion titles and communications.
Tolerance and inclusivity are not synonymous, and we must recognize that tolerance does not equal inclusivity. It can actually have the opposite effect. The act of including someone does not inherently mean that you are also involving them. Therefore, when you make the conscious decision to include someone, you need to dig a little deeper to embrace their perspectives, regardless of whether or not they align with your own.
If we want to pursue real inclusivity, we can check our biases: Are you referencing (whether actively or passively) norms of acceptance for different groups of people? Because this requires intentional inner reflection, these biases can be the hardest thing to admit and simultaneously the easiest to act upon. It’s inevitable that you are going to make mistakes, and it can be hard to swallow when you realize that you’ve unintentionally been using these norms — which are really microaggressions — at work.
And when you do reach this point, it is your responsibility to equip others with the same knowledge. Recognize that though you have this responsibility, that does not necessarily imply that it is your fault. What is your fault, though, is if you disregard the exclusive nature of your inclusion, especially if you are a people manager or the leader of a team. This knowledge is our tool for improving the world, transforming surface-level “inclusion” into real inclusivity. Take what you’ve learned and make it a goal to share it with anyone that you can, especially those who value your thoughts and opinions. Everyone has a platform to use for good.
Once you’ve shared the knowledge with those around you, continue your journey by modeling the behavior yourself. To truly empower others, you’ve got to actively foster an environment that is conducive to this type of learning. Once we are all equipped and empowered, we can and must hold each other accountable. These sound like big tasks, but if everyone works on small changes, together those can create large social change. You are part of that social change. Below we offer a few small changes you can make to pursue real inclusivity.
Personally, do you find yourself ignoring your own privilege because you know a diverse person or have a diverse friend? Can you acknowledge that if you are reading this, we have some sort of privilege? Can you move past feeling guilt and shame, focusing instead on responsibility? | https://medium.com/better-programming/is-your-inclusion-inclusive-9d065f544e66 | ['Deneisha Franklin'] | 2020-08-20 22:08:09.788000+00:00 | ['Diversity In Tech', 'Diversity And Inclusion', 'Diversity', 'Inclusion'] |
Snipes, the Game That Gave Birth to LANs | Snipes, the Game That Gave Birth to LANs
It took me a long time to find this image of Snipes.
When I was in high school, I took the one programming class that was offered. We had a nice computer lab with IBM PS/2s that ran on a Novell LAN. The class was taught in Pascal which I knew somewhat from running open source software called Forum for my BBS. Back in those days, LANs were a very new thing.
To save money, our teacher, Mr. Smith, installed the Borland Turbo Pascal 3.x compiler on the server. My guess is that he didn’t have to pay for more than one license that way. Or that he was supposed to pay for more licenses but just conveniently didn’t.
Packaged with Novell was the first real-time multiplayer game I’d ever experienced, Snipes. While I had played some text-based multiplayer space trader games on BBSes, they were all asynchronous and not as much fun as real time. Snipes was like multiplayer Doom/Quake but with ASCII text characters. Your guy was a smiley face and your bullets were periods “.”. Despite the complete lack of graphics, it was a fun game because we were experiencing multiplayer for the first time.
Despite its simplicity, Snipes used a tremendous amount of server resources. As we got up to five players, games started to have a noticeable effect on network performance for the whole class. At 8 players it was pretty bad. One day, there was a group of us that needed to get some work done. But because there was a Snipes war taking place in the far back row of the computer lab, it took forever for us to compile since the server was busy killing smileys with periods. We could have complained to the teacher, but that would perhaps hinder our ability to play on the days we wanted to. Plus we were nerds to begin with; who wants to deepen the nerd factor by complaining to the teacher that the people having fun were preventing you from programming? Seriously.
So we took matters into our own hands. We found programs that we had written and started to copy and paste to make the programs insanely long. Once we finished, four of us all submitted our huge programs at the exact same time to be compiled. This brought the server and the network to its knees. We all set our eyes on the guys playing Snipes in the back row as they started to look at each other. Their bullets started crawling across the screen ever so slowly. . . . . . .
The game became unplayable. Then what were they going to do, complain to the teacher that the people compiling programs were ruining game play? They stopped playing the game. Victory was ours!
Later that semester, the teacher figured out what was going on and moved the executables to a different directory, which took us about a week to find again.
When I looked for a screenshot of Snipes for this post, I stumbled upon a great post explaining that Snipes was the first application written on Novell NetWare to demonstrate what a local area network could do. LANs initially gained favor because printers and data storage were expensive, so companies wanted a means to share these resources. It’s hard to believe that there was a time when someone selling a LAN operating system had to demonstrate the value proposition of networking computers together. Today, you’d have a much harder time convincing someone to disconnect a computer from a network — it probably would require physical force. While Snipes never reached a large audience, it may be one of the most important multiplayer games in history given its contribution to the rise of the LAN. | https://medium.com/venture-evolved/snipes-the-game-that-gave-birth-to-lans-e9dc169873e4 | ['Jason Heltzer'] | 2018-12-14 23:30:41.991000+00:00 | ['Internet', 'History', 'Gaming', 'Esport']
The Devil’s Music | At first glance…
Pitchfork has been one of the most prolific music publications of the last twenty years. Only two others have more reviews in the dataset, and since 2018, Pitchfork has published more reviews on a monthly basis than anyone else, aside from AllMusic.
The trends here are interesting in and of themselves — what causes this jump around 2011, and the fall in the following years? This is something we will investigate in a future blog. Note — we use a rolling average here since album releases are highly seasonal, with very few albums being released in December.
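For reference, that smoothing is a couple of lines of pandas (the frame and column names here — reviews, publication, date — are my assumptions, not the original notebook’s):

# Monthly review counts for one publication, smoothed with a rolling mean
pitchfork_monthly = (
    reviews[reviews['publication'] == 'Pitchfork']
    .resample('M', on='date')   # 'date' assumed to be a datetime column
    .size()
)
pitchfork_monthly.rolling(window=6).mean().plot()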
There is a very crude way to measure a publication’s harshness — take an average of each publication’s review scores, and rank them. Doing this, we see that there are many harsher critics than Pitchfork, which just scrapes into the top (or bottom?) thirty publications.
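In pandas terms, that crude ranking is a one-liner (same assumed column names as above):

# Mean review score per publication, lowest (i.e. 'harshest') first
crude_harshness = (
    reviews.groupby('publication')['score']
           .mean()
           .sort_values()
)
print(crude_harshness.head(30))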
Of course, this analysis is flawed. We’ve already seen that the publications have reviewed different numbers of albums — maybe it’s the case that Pitchfork has simply reviewed more good albums, given the breadth of its catalogue. We thus need to introduce a more robust measure of harshness.
Difference to Metascore
We’ve already suggested that the Metascore is probably the best measure we have of an album’s true quality. Thus, we can identify a harsh review by comparing the Metascore with the individual review score.
Let’s isolate Pitchfork’s reviews. We can plot each review’s Pitchfork score against the corresponding Metascore. Albums that Pitchfork have reviewed ‘harshly’ therefore fall below the purple ‘x=y’ line. We can see a lot of points fall below that line — more than two thirds, in fact.
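The scatter might be produced along these lines (column names again assumed, with both scores on a common 0–100 scale):

import matplotlib.pyplot as plt

pitchfork = reviews.loc[reviews['publication'] == 'Pitchfork'].copy()
ax = pitchfork.plot.scatter(x='metascore', y='score', alpha=0.3)
ax.plot([0, 100], [0, 100], color='purple')  # the 'x=y' line
plt.show()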
67% of Pitchfork’s scores are below the Metascore. 28% are above, and 5% are exactly the same.
We can boil this ‘harshness’ concept down into a single KPI — a review’s ‘Difference to Metascore’. This is easily calculated as the difference between the two numbers. In the chart above, it would be given as the vertical distance between the point and the purple line.
So for Pitchfork’s review of the latest Foals album, the Difference to Metascore would be 60 minus 78, i.e. -18 (seemingly quite harsh!)
The album’s metacritic summary, with Metascore
The same album’s Pitchfork review…
Therefore, the more negative ‘Difference to Metascores’ a publication has, the more harsh we can judge it to be. If we plot a histogram of Pitchfork’s score differences, we see that the majority of them are negative (which we knew from the above scatter plot).
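The KPI itself is a single subtraction, continuing from the pitchfork frame in the snippet above:

pitchfork['diff_to_metascore'] = pitchfork['score'] - pitchfork['metascore']
pitchfork['diff_to_metascore'].hist(bins=40)
# A negative mean is what 'harshness' looks like in aggregate
print(pitchfork['diff_to_metascore'].mean())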
On average, Pitchfork scores almost half a star lower than we would expect, given the Metascores of the albums that it has reviewed.
If you’re wondering which albums are to be found in that long tail on the left, then we can only hope that you’re not a fan of Northern State…
Comparison across publications
One of the advantages of the ‘Difference to Metacritic’ KPI is that we can compare fairly across publications.
If we plot the mean score difference to Metascore for each publication, we actually see that Pitchfork doesn’t look especially remarkable — especially relative to other publications with a similar number of published reviews.
Many of the publications at the extremities of harshness and generosity (i.e. on the far left and far right of the above scatter) are much smaller concerns. The three harshest (Drawer B, Select, and Vulture) only published 318 reviews between them. The most generous are often ‘record of the week’ style columns, from which we would expect high scores.
If we isolate the top thirty publications (each with over 2,000 reviews in the dataset) we see that only a sixth of them review more generously than Metascores suggest they ought to — the only especially generous critic amongst them is Entertainment Weekly (make of that what you will). At any rate, by this measure, Pitchfork’s review scores don’t look particularly notable.
It’s the kurtosis, stupid…
However, something interesting emerges if we go beyond the average score differences for each publication and instead look at the distribution of these differences. A histogram works if we’re doing this for one publication (as we did for Pitchfork a few paragraphs ago), but to compare multiple distributions side by side, we can use Seaborn’s violinplot function. Below, we plot the distributions for the ten most prolific publications in the dataset. | https://towardsdatascience.com/the-devils-music-622d29028c0b | ['Callum Ballard'] | 2020-04-26 14:11:39.057000+00:00 | ['Music By Numbers', 'Python', 'Culture', 'Music', 'Data Science']
Becoming Root Through Misconfigured SUDO | SUDO Privilege Escalation
So how can attackers exploit their SUDO rights to execute arbitrary commands as the root user?
If the attacker has SUDO rights to programs that allow command execution or arbitrary writes to files on the system, the attacker can exploit the temporary root access to execute code as root on the system. You can find the commands the current user can run with SUDO via this command:
sudo -l
For example, let’s say that regular users are given the ability to run the command find with SUDO so that they can search for all files on the system. The find command is usually used for locating files and often has SUDO permissions to allow users to find files across the system. But find allows the execution of system commands through the -exec flag! For example, to run the ls command from within the find command, you can use the command find . -exec ls \; . So if a user can run the find command as SUDO, they can execute system commands as root!
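One classic illustration: if find is on your sudo list, you can spawn a root shell directly (the -quit stops find after its first match, so you only get one shell):

sudo find . -exec /bin/sh \; -quit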
These misconfigurations make privilege escalation trivial. For example, an attacker can use the ability to execute commands as root to add themselves as a root user in the /etc/passwd file. This command will do just that (note the sh -c wrapper: a bare >> redirect would be processed by the attacker’s own, unprivileged shell, and the append would fail):
sudo find . -exec sh -c 'echo "vickie::0:0:System Administrator:/root/root:/bin/bash" >> /etc/passwd' \; -quit
This command adds a root user with the username vickie and an empty password. Since 0 is the UID of the root user, adding a user with the UID of 0 will give that user root privileges. This command is not possible for regular users because only privileged users can modify system-critical files such as the /etc/passwd file. | https://medium.com/better-programming/becoming-root-through-misconfigured-sudo-7b68e731d1f5 | ['Vickie Li'] | 2020-11-09 15:39:07.629000+00:00 | ['Programming', 'Linux', 'Cybersecurity', 'Software Development', 'Security']
Crypto needs a global regulatory organization, but hear me out | Blockchain has changed the world in miraculous and groundbreaking ways.
The crypto-experience provokes strong emotions — feelings of freedom, empowerment, and financial revolution. Industry experts, developers, investors, and enthusiasts have fallen in love with blockchain technology and the perceived liberty it brings, myself included.
But love at times, has a funny way of clouding our vision and frustrating practical thought. Enthusiasm and excitement turns to anarchy, fantasies and delusions of a world without banks and governments.
Have you heard this before? Of course, you have. And in all honesty, even I may have believed it for half-a-second — but then I remembered that I live in a place called reality.
Naïve and delusional storytellers have actually convinced themselves and others that cryptos will conquer armies and turn the world’s most powerful banks to dust… “Poof”
If you are hearing this for the first time, have a seat, things are about to get real. It’s not going to happen.
Banks and governments will exist tomorrow, and 100 years from now. The technology may change, but the organism of the world will remain. But don’t look so depressed, everything will be OK. The crypto-asset industry WILL be legitimized and adopted by the masses. The technology is everything that you thought it was and more — it just needs to be regulated properly. The key word here is properly.
I get it. There is an inescapably strong stigma attached to the idea of additional regulatory oversight. The thought of an organization, government or association trying to “control” what the industry does scares people, and for good reason.
The thought of stockbrokers from the 80s in a room writing crypto-regulation scares you as much as it scares me, in fact it terrifies me.
To this point, governments and regulators have taken the easy way out. They took a complex technology that they didn’t even remotely understand and then tried to regulate it like the assets that they were already familiar with. Governments were familiar with transactions involving “commodities” and “securities” offerings — and under the guise of a “regulatory-sand-box” they have lazily fumbled around half-heartedly with our “beloved” crypto. A reasonable and expected government response (in all honesty). Deference and understanding must be earned. It cannot be taken. We must come together and unite behind the CBRTF.
Crypto-assets and ICOs need to regulated properly and by a jury of our peers. You can call it self-regulation. You can call it membership-regulation. You can call it whatever you want, but it needs to happen. The same way that FINRA regulates investment banks, funds and broker-dealers, the CBRTF will regulate blockchain/crypto-asset companies and ICOs. Through the thoughtful representation and collaboration of our ENTIRE industry, the CBRTF will set clear standards and best practices to protect consumers and facilitate meaningful, scalable growth.
We all love crypto and want to see it succeed, and thoughtful regulation is our only path to the lasting success and legitimization of crypto-assets. We must take steps to assure a safe and efficient marketplace. Investor confidence in our industry is at an all-time low, and frankly, it’s embarrassing. The increasing number of con-artists and frauds in our industry is very upsetting. It is destroying consumer confidence. Honest ICOs with ground-breaking ideas are being harmed. Investors are being harmed. Consumers are being harmed.
It’s time to take action for the greater good of the industry. It’s time for the CBRTF. | https://medium.com/anderson-law-journal/crypto-needs-a-global-regulatory-organization-but-hear-me-out-c16192fb431 | ['K. Braeden Anderson'] | 2018-07-12 21:48:16.355000+00:00 | ['Bitcoin', 'Cryptocurrency Regulation', 'Law', 'Cryptocurrency', 'Crypto'] |
TNDL: “CHRIST, THE GLORIFIED DIVINE SPIRIT, ACCORDING TO REVELATION 1.” | Revelation 1
New International Version (NIV)
Prologue
1 The revelation from Jesus Christ, which God gave him to show his servants what must soon take place. He made it known by sending his angel to his servant John, 2 who testifies to everything he saw — that is, the word of God and the testimony of Jesus Christ. 3 Blessed is the one who reads aloud the words of this prophecy, and blessed are those who hear it and take to heart what is written in it, because the time is near.
Greetings and Doxology
4 John,
To the seven churches in the province of Asia:
Grace and peace to you from him who is, and who was, and who is to come, and from the seven spirits[a] before his throne, 5 and from Jesus Christ, who is the faithful witness, the firstborn from the dead, and the ruler of the kings of the earth.
To him who loves us and has freed us from our sins by his blood, 6 and has made us to be a kingdom and priests to serve his God and Father — to him be glory and power for ever and ever! Amen.
7 “Look, he is coming with the clouds,”[b]
and “every eye will see him,
even those who pierced him”;
and all peoples on earth “will mourn because of him.”[c]
So shall it be! Amen.
8 “I am the Alpha and the Omega,” says the Lord God, “who is, and who was, and who is to come, the Almighty.”
John’s Vision of Christ
9 I, John, your brother and companion in the suffering and kingdom and patient endurance that are ours in Jesus, was on the island of Patmos because of the word of God and the testimony of Jesus. 10 On the Lord’s Day I was in the Spirit, and I heard behind me a loud voice like a trumpet, 11 which said: “Write on a scroll what you see and send it to the seven churches: to Ephesus, Smyrna, Pergamum, Thyatira, Sardis, Philadelphia and Laodicea.”
12 I turned around to see the voice that was speaking to me. And when I turned I saw seven golden lampstands, 13 and among the lampstands was someone like a son of man,[d] dressed in a robe reaching down to his feet and with a golden sash around his chest. 14 The hair on his head was white like wool, as white as snow, and his eyes were like blazing fire. 15 His feet were like bronze glowing in a furnace, and his voice was like the sound of rushing waters. 16 In his right hand he held seven stars, and coming out of his mouth was a sharp, double-edged sword. His face was like the sun shining in all its brilliance.
17 When I saw him, I fell at his feet as though dead. Then he placed his right hand on me and said: “Do not be afraid. I am the First and the Last. 18 I am the Living One; I was dead, and now look, I am alive for ever and ever! And I hold the keys of death and Hades.
19 “Write, therefore, what you have seen, what is now and what will take place later. 20 The mystery of the seven stars that you saw in my right hand and of the seven golden lampstands is this: The seven stars are the angels[e] of the seven churches, and the seven lampstands are the seven churches.
Footnotes
Revelation 1:4 That is, the sevenfold Spirit
Revelation 1:7 Daniel 7:13
Revelation 1:7 Zech. 12:10
Revelation 1:13 See Daniel 7:13.
Revelation 1:20 Or messengers | https://medium.com/yahweh-elyon-yeshuas-teachings/tndl-christ-the-glorified-divine-spirit-according-to-revelation-1-9aabf46c46f1 | ['J. K. Woods'] | 2020-08-26 07:22:10.469000+00:00 | ['Religion', 'Spirituality', 'Love', 'Christianity'] |
Mary Ellen goes home- Part 2 | Short story Fiction
Mary Ellen goes home- Part 2
Photo by willsantt from Pexels
Mary Ellen followed the man in the filthy jeans out of the bus stop and back out into the bright sunshine of the day. She held her valise clutched in both hands, knocking her knees against it as she walked.
The man turned to her. He scratched behind his ear as he saw her struggling with the case.
“Here, I got that,” he said, reaching out to take her bag. She yanked it away, her face appalled.
“You will not,” she spat out. She pressed her lips tight and glared at this grubby man from behind the shield of her sunglasses. “I can carry my own bag, thank you very much.” She released one of her hands and switched the bag to her side.
The man shrugged.
“Whatever, man,” he declared, nonchalant. “Just trying to help. No need to get yourself all worked up. Ain’t like I am going to steal some girly suitcase.” He turned and continued walking, leaving her trailing behind him.
“It’s a valise. A vintage valise,” she said, hurrying her step. It dawned on her that she probably shouldn’t offend this guy who was her only ride.
“Looks like an old suitcase to me,” he replied, not bothering to look back.
Open-mouthed, she didn’t say anything and kept on following.
Who was this guy? Why had Merrily sent him to get her? Why wasn’t she here? It was the least she could do after all these years.
She thought of her sister. Her twin. She hadn’t seen Merrily in nearly twenty years. Since they were eighteen years old. Her mind went back to the last time she had seen her. The argument, the confrontation.
She blinked trying to forget the memory of her sister yelling at her as she walked out the door. A beaten-up gym bag slung over her shoulder. The snow had fallen and the ground was covered in sheets of untouched snow.
“You can’t leave me, Mary Ellen! You can’t leave me alone with only him.” Merrily begged after her. She hung out the door, her long dark hair hanging limp in her face. She clutched the rim as though it were a flotation device and it would save her from drowning.
Mary Ellen hadn’t even bothered to turn around. She just kept on walking as fast as she could as Merrily’s pleads continued to ring in her ears.
“What is your name?” she blurted out. The man paused, his shoulders perking up. Mary Ellen noticed his jean jacket collar was pulled up around his ears. Something the kids used to do back in the day, to look cool. How old was this guy? Twelve?
He cocked his head slightly. His dark hair fell across his forehead, one curled lock falling just above his right eye.
She stopped in her tracks, her heart squeezed. Those eyes. A translucent blue. She thought of a wolf coming out of the forest, eyes centered on its prey. For a moment she felt her breath escape her.
Was she attracted to him?
Ew, gross. She shuddered. She needed to get out more. When was the last time she was with a man? Too long apparently.
“Randy, Mary Ellen,” he said letting out a snort. He turned and continued to walk towards the parking lot.
She wrinkled her brow. What was with the attitude?
Randy. Typical. He looked like a Randy. She exhaled through her nose and continued to trudge behind him.
They stopped at a rusty white 1990 Chevrolet Silverado.
Taking the keys he still had in his hands, Randy opened up the front door and climbed into the driver’s side. He reached over to the passenger’s seat and manually clicked open the lock.
Mary Ellen stood glued to the spot. This guy was a stranger. Wonder if he was a serial killer? Wonder if he didn’t even know Merrily? Even if he did know Merrily, that didn’t mean he was safe to be alone in a truck with. Merrily wasn’t exactly a great judge of personal character.
Wonder if they crashed?
Randy shut the driver’s side door and unwound the window. Sticking his head out, he looked back at her.
“Ain’t got all day, Sister”
Ugh. Sister.
The very word Sister raised her anxiety level.
Randy still watching her hit the side of the door.
“Giddy up girl. Time’s a-wasting.” His eyes softened and for a moment she swore there was something familiar about him.
She shook her head and started towards the truck.
This place.
Everything about this place unsettled her. She reached for the rusty handle of the truck. The door creaked in protest. The smell of stale smoke billowed out, assaulting her nose. Her hand flew to her face, unable to help but cough lightly into the palm of her white-gloved hand.
She stared at her hand. When had she put on gloves? That was strange. She didn’t recall putting them on. She shrugged it off and leaned into the truck. Swiping the seat of unseen dirt before she sat down.
Randy sat watching. Unable to keep his eyes off of her.
Like the wolf stalking its prey, she thought. She looked straight ahead, her eyes focused on the view past the filthy windshield. She could hear Randy snicker beside her and keys jingle as the ignition of the truck clicked on.
The engine roared into action. She sat stiff, unmoving. Her back straight, her gloved hands clasping the handle of her bag.
Randy said nothing as they pulled out of the parking lot leaving the bus station behind them.
He turned onto the main street and she recognized the route. It was the same one she had last taken in this God-forsaken town. Only she’d been going the opposite direction toward the bus station.
Her mind wandered to that last day again. Her long dark hair in a ponytail, stuffed inside her knit cap.
It was winter. The snow was falling. Everything was white and cold. Sterile.
Mary Ellen!
Her sister’s cry from the door of the house they grew up in.
Please, don’t go! Don’t leave me here alone!
Her sister begging her. Pleading. Her voice hoarse from her efforts. Mary Ellen could still hear it. It was the sad sound of mourning and despair. She had squeezed her eyes shut to it all and pulled her knit hat down further over her ears. Trying to drown out the sounds of her sister’s pain and suffering. She walked faster, her feet crunching crisp on the fallen snow. Each step pressing into the hard crust that had already started to freeze on the ground.
She’d never looked back. Until now.
She glanced at Randy. He was tapping his fingers lightly on the worn leather steering wheel. Humming along to a miserable country song playing low on the old radio. She recognized it. It was an oldie. Dolly Parton singing about a little girl and her dog. A little girl who would die for no apparent reason. It always bothered her that no one ever questioned this song. Like it was normal for little girls to be out in the cold with their dog and then die at a stranger’s house.
She leaned her elbow against the side window and rested her chin in her hand. She allowed herself to close her eyes, letting the words of Dolly drown out thoughts of her sister. Why hadn’t the woman in the song called the authorities? How did the child die? How would that woman explain the child’s death the next morning? Country music made no sense.
She felt herself drifting off. Dolly singing about angels taking children to heaven lingering in her ear. | https://medium.com/illumination-curated/mary-ellen-goes-home-part-2-381aafd44d07 | ['Deena Thomson'] | 2020-12-27 06:41:16.781000+00:00 | ['Fiction', 'Novela', 'Writing', 'Fiction Writing', 'Short Story'] |
How To Save Money (Without Feeling Like You’re Saving) | With wages cut, furlough on the horizon again, and a pay rise you might have not been expecting to happen, the financial hit we’ve all taken from COVID has been devastating for many. And although our wage is being stretched more and more each month, we’re still having to prepare for the ‘unknown’.
With more than 50% of our normal wage being spent on the big four — food, housing & fuel, transport, and recreation — this actually gives us very little wiggle room for saving, especially in a pandemic. But whatever your financial situation may be, here are a few ideas on how to save money (both during and after the pandemic) where it won’t really feel like you’re saving.
Track your budget
So you know how much you’re spending and are brilliant at finding a savvy deal, right? Each month, you’ll have a rough idea in your head of what you’re spending and how much you’ve saved, but the reality is it will usually go over that figure. With unexpected bills or a spontaneous night out (pre-pandemic of course), we usually end up spending more than we plan for.
One thing that can help everyone save a little more and keep an eye on their spending regardless is a budget tracker. According to The Balance, not only can budgeting trackers ‘help you become more aware of what you are spending and where you are spending it’, but they can also ‘identify the areas where you need to change.’
For an easy, convenient budget tracker while you’re on the go, use an app such as Mint. This free, easy to use app can be used either on your smart device or laptop. The algorithm makes budgets based on your spending and can easily, visually show where you can save — simple! Being able to see all of your bills and income in the same place, trackers such as Mint can help you make a budget (and actually stick to it this time).
Are you actually going to the gym?
With differing tier restrictions in place throughout the UK currently — and some of us feeling less comfortable with venturing outside — the chances are, you’re not using that gym membership. Whilst it is important to stay fit, it’s also healthy to exercise restraint on your wallet, and a simple way to do that is by cancelling any membership you no longer use.
Although a few quid each month doesn’t seem like a lot, all those fees add up. Memberships you don’t use and other subscription services can inflict more damage to your finances than other types of purchases. These automatic payments are typically small amounts of money, and are therefore too easy to forget.
So, go through your subscriptions and memberships and ask yourself: am I really using that? Is there a budget friendly alternative I can use? The answer will most probably be yes.
Keep the change!
One of the best things about the pandemic is the ease of contactless payments. With no annoying spare change to rattle around your pocket or bag, our lives feel lighter and less cluttered. But, even though that change was annoying, it is a simple way to save.
If you still use physical money, every time you buy something and get any change, keep it in your wallet and put it in a jar when you get home. Choose any container you want (like the classic piggy bank), wait until it’s full, and then deposit it into a coin machine at your local bank.
Do you only use contactless in your bid to control the spread of COVID? No problem! Banks such as Monzo give you the option to round up your payments to the next £1, with the rest going into a virtual ‘Coin Jar’. Every time you make a purchase with your card, the excess will automatically be added to your savings and you won’t even know it’s gone!
Sure you might not save loads with this method, but this extra cash can definitely help you add a little more money into your rainy day fund.
Host events at home
Going out is expensive, isn’t it? Even with the price of a pint soaring, although going out feels more appealing than a year of staying in, not everyone has the budget for it.
Depending on your local restrictions, why not try hosting some events at home? With either you or your friends, grab some drinks or make a nice meal, and meet up with the people you care about. Similarly, if you can’t meet up with people inside, why not meet up in a local park (weather permitting). If it snows, you might not fancy a picnic, but going for a run or a walk around your local area is not only a free, cost effective activity, but can also help keep you connected to those you love during the pandemic.
Ultimately, however you choose to spend or save money, it’s always important to make sure you still see your friends and family. Although you may not be able to right now, due to self-isolating or living in a Tier 3 area, if your local rules permit it, find a way to see the ones who mean the most to you.
Written by Alison Irlam
Alison is a writer with a keen interest in mental health and wellbeing. Since finishing a postgraduate degree in Creative Writing and having her own experiences with mental health, she has dedicated her time to helping and empowering others to make positive changes in their own lives. | https://medium.com/@wellbe-global/how-to-save-money-without-feeling-like-youre-saving-acec5a4c44e9 | [] | 2020-10-26 12:02:29.807000+00:00 | ['Saving Money', 'Financial Advice', 'Money Management'] |
Consumers will be spending less this holiday season. | According to a new survey from FTI Consulting, consumers are spending about 4% less this holiday season overall than last year. Thirty-three percent of the survey´s respondents said they will be spending somewhat or significantly less than last year, while only 15% are spending more. Almost a third (32%) are spending about the same, according to survey findings.
FTI Consulting is a business advisory firm headquartered in Washington, D.C., United States. FTI is one of the largest financial consulting firms in the world and consistently ranks as one of the top global management consulting firms.
The report also found that consumers had already completed 39% of their holiday shopping when the survey was conducted in early November.
Gen Z consumers were the largest group to complete holiday shopping (46%), followed by Gen X (42%), millennials (35%), and baby boomers (33%).
When it comes to promotions, 26% of consumers said they have a price in mind for desired items, but they don’t look for a sale. However, a quarter of respondents wait for at least 40% off and 17% of consumers said they’re waiting for sales of at least 60% off.
FTI Consulting’s findings suggest that consumers could spend less this year than they have in previous holiday seasons.
Echoing other findings of the 2020 holiday shopping season, slightly more than 80% of FTI Consulting survey respondents said they were going to shop more online this year compared to last year, but consumers are planning to spend a bit less than before.
Research from Adobe points to increases in Black Friday and Cyber Monday online shopping, but projections of how the overall holiday sales will compare to previous years have varied.
Though the International Council of Shopping Centers projected a 1.9% growth in holiday sales, the NRF predicted that the holiday sales growth could range between 3.6% and 5.2%.
FTI Consulting also found that the reasoning behind shopping via e-commerce had little to do with health or safety reasons connected to COVID-19.
Instead, respondents pointed to “convenience, best prices, and free shipping” far more frequently. That, in turn, may mean that the uptick in online sales “will stick once the pandemic passes,” according to the report.
With more consumers holding out for deeper discounts, FTI Consulting’s survey also notes the diminished importance of shopping events like Black Friday and Cyber Monday, because retailers have spread out their discounts in preparation for high shipping volumes.
Per the survey, 49% of respondents purchased holiday gifts during Amazon’s Prime Day event. Those who took part in the event completed more than half of their intended holiday spending that day. Following Amazon’s lead, other retailers, including MediaMarkt, El Corte Inglés, Walmart, and Best Buy, depending on the country you live in, also rolled out similar shopping events ahead of the holidays.
Consumers plan on buying fewer gifts and spending less this holiday season, with an 18 percent reduction in their average budget from last year, as a KPMG LLP survey found. Forty-one percent of consumers also said that they are not planning on going to Black Friday sales in person.
Faced with considerable uncertainty and reduced household income, consumers are spending less this holiday season, focusing on essential purchases for the home and gifts for close family members.
Retail customers are forming new shopping habits, which are expected to continue into 2021 and beyond.
Also, 60 percent of consumers plan to give to the same number of people this holiday season, while 36 percent will give to fewer people, according to KPMG’s 2020 holiday shopping report, “Season of Reckoning: 2020 COVID-19 Consumer Pulse/Holiday Report.”
Consumers also indicated that Black Friday and other significant retailer events will look different this year, with fewer in-store shoppers and 41 percent saying that Black Friday was their most important shopping event, compared to Cyber Monday or Prime Day.
In-store retailers hoping for a holiday reprieve may be disappointed.
The migration to online continues across nearly all retail segments.
Holiday spending: smaller gift budgets, enduring traditions | https://medium.com/@manuelcastanedamolido/consumers-will-be-spending-less-this-holiday-season-7cd6f41a257c | ['Manuel Castañeda'] | 2020-12-14 14:09:53.811000+00:00 | ['Christmas', 'Online Shopping', 'Social Media Marketing', 'Digital Marketing', 'Consumer Behavior'] |
Salesforce Cloud Security: Automating Least Privilege in AWS IAM with Policy Sentry | Intro
The Salesforce Security Assurance team would like to share a tool that we recently published called Policy Sentry, which helps to automate the creation of least privilege IAM policies in Amazon Web Services (AWS) environments.
Policy Sentry allows users to create least-privilege IAM policies in a matter of seconds, rather than tediously writing IAM policies by hand. These policies are scoped down according to access levels and resources. In the case of a breach, this helps to limit the blast radius of compromised credentials by only giving IAM principals access to what they need by writing policies according to access levels and resource constraints. Before this tool, it could take hours to craft a secure IAM policy — but now it can take a matter of seconds. This way, developers only have to determine the resources that they need to access, and Policy Sentry abstracts the complexity of IAM policies away from their development processes.
In this post, we’ll walk through the principles of least privilege IAM policies, the general steps that one would use to write them by hand, and show how Policy Sentry automates this process.
Why is this useful?
In a recent breach on an AWS environment, an attacker compromised an internet-exposed WAF appliance server that was vulnerable to Server-Side Request Forgery, obtained AWS credentials via EC2 metadata, and discovered that the server had excessive privileges to access S3 buckets, some of which contained customer data. So the breach likely could have been prevented through one of the two general technical controls:
Preventing Server-Side Request Forgery, which is a difficult problem of its own.
Limiting Blast Radius through Least Privilege IAM policies.
Ideally, we want to give users and systems access to only the resources that they need to for their use case, at access levels appropriate to their use case. For instance, accessing a specific S3 bucket (the resource), with “Read” actions only (the access level). This concept is an example of least privilege.
However, until now, it has been challenging to achieve least privilege at scale. One problem is the rapid pace of new features and services by AWS. At the time of writing this blog post, there are now 214 AWS services and over 7,000 different AWS API calls. When we started writing this tool in mid-2019, there were about 180 different AWS services and about 6,000 different API calls. The rapid pace of new feature releases by AWS makes it nearly impossible for a security organization to review and approve new organization-wide IAM policies without disrupting business. The time it takes to write new AWS policies by hand exacerbates this problem. Even for cloud security professionals who are familiar with IAM, writing security-conscious IAM policies by hand can be tedious, inefficient, and time-consuming; and, quite often, security professionals aren’t the ones authoring those IAM policies — developers are! Many Infrastructure as Code developers have experienced something like this:
Determined to make your best effort to give users and roles the least amount of privilege you need to perform your duties, you spend way too much time combing through the AWS IAM Documentation on Actions, Resources, and Condition Keys for AWS Services.
Your team lead encourages you to build security into your IAM Policies for product quality, but eventually, you get frustrated due to project deadlines.
You don’t have a security person assigned to your team day-to-day who can write those IAM policies for you, and there’s no automated tool that will automagically sense the AWS API calls that you perform and then write them for you in a least-privilege manner.
After dreaming about that level of automation, you realize that writing least privilege IAM Policies may jeopardize your ability to finish your code in time to meet project deadlines.
You use Managed Policies, or you eyeball the names of the API calls and use wildcards instead so you can move on with your life.
Recent breaches have drawn attention to the risks of Server Side Request Forgery (SSRF), but overprivileged IAM policies have continued to be an overlooked issue — partially because of the difficulty in achieving least privilege with IAM at scale. Policy Sentry seeks to lower that difficulty level.
While Policy Sentry can’t determine the context of your IAM role — whether it truly needs access to certain resources based on business or functional requirements — it can restrict access to precisely the resources that you say it needs access to, and do it in a predictable, auditable, and readable manner.
How does it work?
IAM Policy Background
At the most basic level, an IAM policy has one or more statements. Each statement always has the following components:
Effect : This can be Allow or Deny .
: This can be or . Action : A list of AWS IAM Actions.
: A list of AWS IAM Actions. Resource : A list of Amazon Resource Names (ARNs) of various AWS resources.
Consider a statement that has an Allow effect, with the action s3:GetObject and Resources set to * .
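In JSON form, that statement is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "*"
        }
    ]
}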
This means that the IAM policy will allow the IAM principal (a role or user) to run GetObject from any S3 bucket in the AWS account. Overly permissive access to S3 buckets - i.e., a wide blast radius - is a cause of many breaches.
Writing Secure Policies by Hand
Now, how could this be prevented with more secure IAM policies? For starters, you could restrict the IAM action to the specific ARN that your role needs access to. With the simple combination of s3:GetObject and a specific S3 bucket’s objects (let’s say, arn:aws:s3:::my-bucket/*), the policy is simple enough.
However, real IAM policies quickly become trickier when you combine one IAM action with one IAM resource, as with the above example. Let’s say that your internal customer needs access to the following actions:
kms:CreateGrant
kms:CreateCustomKeyStore
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
To build this policy, we need to navigate to the AWS IAM Documentation — specifically the Actions, Resources, and Condition Keys page for each service.
Let’s take a look at the consolidated EC2 Actions Table below:
As shown in the table above, both of those actions can be restricted to specific security groups — so in the Resources stanza of the IAM policy, we would specify whatever the ARN format of the security-group is.
We can determine that format by navigating to the ARN table on that page. A snippet is below:
As you can see above, we’d have to specify the security group ID in the IAM policy.
Let’s see what it would look like in an IAM policy:
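It would look something like this (the account, key, and security group IDs below are placeholders; note that kms:CreateCustomKeyStore has no resource types listed in its ARN table, so it has to sit in its own statement with Resource set to *):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SecurityGroupWrite",
            "Effect": "Allow",
            "Action": [
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AuthorizeSecurityGroupIngress"
            ],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:security-group/sg-0123456789abcdef0"
        },
        {
            "Sid": "KmsGrant",
            "Effect": "Allow",
            "Action": "kms:CreateGrant",
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/1a2b3c4d-5e6f-7890-abcd-ef1234567890"
        },
        {
            "Sid": "KmsUnrestricted",
            "Effect": "Allow",
            "Action": "kms:CreateCustomKeyStore",
            "Resource": "*"
        }
    ]
}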
Writing Secure Policies based on Resource Constraints and Access Levels
Now, to do this for every single action by hand is very tedious, error-prone, and difficult to audit. Imagine if we had to do this by hand for dozens of IAM actions across multiple services!
Quite often, this leads to one of the following results for Infrastructure as Code developers:
1. Use Managed IAM Policies, and write the IAM policy later. Sometimes this never gets fixed.
2. Write the policy correctly, potentially leading to missed project deadlines.
3. “Eyeball” the IAM policy and use wildcard statements instead of specifying resource ARNs correctly.
Policy Sentry automates this process to avoid that outcome and abstract the complexity of writing IAM policies. As a result, writing secure policies takes seconds instead of hours.
The CRUD mode functionality takes the opinionated approach that IAC developers shouldn’t have to understand the complexities of AWS IAM — we should abstract the complexity for them. In fact, developers should just be able to say…
“We need Read, Write, and List access to arn:aws:ssm:us-east-1:123456789012:parameter/myparameter”
“We need Permissions Management and Tagging access to arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret”
…and our automation should create policies that correspond to those access levels.
How do we accomplish this? Well, Policy Sentry leverages another column available in the Actions Table from the Actions, Resources, and Condition Keys documentation — the “Access Level” column. See the snippet below.
Policy Sentry aggregates all of that documentation into a single database and uses that database to generate policies according to actions, resources, and access levels.
To generate a policy according to resources and access levels, start by creating a template with the create-template command so you can just fill out the fields, rather than memorizing the format. The --name flag specifies the name of the role, the --output-file flag specifies the name of the file, and the --template-type specifies the “mode” that Policy Sentry will use to create policies ( crud or actions ).
policy_sentry create-template --name myRole --output-file crud.yml --template-type crud
It will generate a file like this:
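The template comes out looking roughly like this (exact key names, for example tag versus tagging, can vary between Policy Sentry versions):

mode: crud
name: myRole
description: ''
role_arn: ''
read:
- ''
write:
- ''
list:
- ''
tagging:   # spelled 'tag' in some Policy Sentry versions
- ''
permissions-management:
- ''
wildcard:
- ''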
The template has a few fields:
mode - the Policy Sentry “mode”. Acceptable values are crud or actions . CRUD mode creates policies based on access levels constrained to the resource ARNs provided. Read more about Actions mode here.
- the Policy Sentry “mode”. Acceptable values are or . CRUD mode creates policies based on access levels constrained to the resource ARNs provided. Read more about Actions mode here. name - the name of your role. This is for readability purposes and for anyone who might review your template later if it is checked into Git.(Optional)
- the name of your role. This is for readability purposes and for anyone who might review your template later if it is checked into Git.(Optional) description - this is where you would include your description or justification for that role. (Optional)
- this is where you would include your description or justification for that role. (Optional) role_arn : You can insert the ARN of the IAM role that this applies to for informational purposes. (Optional)
: You can insert the ARN of the IAM role that this applies to for informational purposes. (Optional) The access levels, read , write , list , tag , and permissions-management . Here, you specify a list of ARNs that your service needs access to, under the proper access level. Policy Sentry will match the ARNs with the ARN formats listed in its database, and will generate a policy that restricts actions at that access level that can be matched to those ARNs only. If any of these fields are not used, you can just delete them from the template.
, , , , and . Here, you specify a list of ARNs that your service needs access to, under the proper access level. Policy Sentry will match the ARNs with the ARN formats listed in its database, and will generate a policy that restricts actions at that access level that can be matched to those ARNs only. If any of these fields are not used, you can just delete them from the template. wildcard , a special field where you can specify IAM actions that cannot be restricted to ARNs, like ssm:DescribeParameters , if necessary. Delete this field from the template if it is not used.
Then just paste all of the ARNs under the access levels in the template:
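Using the two asks from earlier, the filled-in template would look like this (the unused wildcard field has been deleted, per the notes above):

mode: crud
name: myRole
description: ''
role_arn: ''
read:
- arn:aws:ssm:us-east-1:123456789012:parameter/myparameter
write:
- arn:aws:ssm:us-east-1:123456789012:parameter/myparameter
list:
- arn:aws:ssm:us-east-1:123456789012:parameter/myparameter
tagging:
- arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret
permissions-management:
- arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret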
Then run the write-policy command. Here, you’ll specify the Policy Sentry template you just created with --input-file :
policy_sentry write-policy --input-file crud.yml
It will generate these results:
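Abridged to two of the statements, the generated policy looks something like this (the exact action lists are pulled from Policy Sentry’s database and may differ slightly by version; the full output also includes the write, list, and permissions-management SIDs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SsmReadParameter",
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter",
                "ssm:GetParameterHistory",
                "ssm:GetParameters",
                "ssm:GetParametersByPath"
            ],
            "Resource": [
                "arn:aws:ssm:us-east-1:123456789012:parameter/myparameter"
            ]
        },
        {
            "Sid": "SecretsmanagerTaggingSecret",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:TagResource",
                "secretsmanager:UntagResource"
            ],
            "Resource": [
                "arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret"
            ]
        }
    ]
}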
Notice how the policy above recognizes the ARNs that the user supplies, along with the requested access level. For instance, the SID SecretsmanagerTaggingSecret contains Tagging actions that are assigned to the secret resource type only.
How does Policy Sentry compare to other tools?
Policy Sentry is somewhat similar to Trailscraper. Trailscraper queries CloudTrail logs and attempts to “guess” the matching between CloudTrail actions and IAM actions, then generates a policy. Given that there is not a 1-to-1 mapping between the names of Actions listed in CloudTrail log entries and the names of AWS IAM Actions, the results are not always accurate. It is a good place to start, but the generated policies all contain Resources: "*" , so it is up to the user to restrict those IAM actions to only the necessary resources. CloudTracker performs similar log-based policy suggestions, and is faster due to the use of Amazon Athena, but it does not generate suggested policies.
RepoKid is a popular tool that was developed by Netflix, and is one of the more mature and battle-tested AWS IAM open source projects. It leverages AWS Access Advisor, which informs you how many AWS services your IAM Principal has access to, and how many of those services it has used in the last X amount of days or months. If you haven’t used a service within the last 30 days, it “repos” your policy, and strips it of the privileges it doesn’t use. It has some advanced features to allow for whitelisting roles and overall is a great tool.
One shortcoming is that AWS IAM Access Advisor only provides details at the service level (ex: S3-wide, or EC2-wide) and not down to the IAM Action level, so the revised policy is not very granular. However, RepoKid plays a unique role in the IAM ecosystem right now in that there are not any open source tools that provide similar functionality. For that reason, it is best to view RepoKid and Policy Sentry as complementary.
We recommend using Policy Sentry to create Identity based policies, using Repokid to revoke out of date policies as your application/roles mature, and never to provision Infrastructure manually — always provision your policies using Infrastructure as Code tools (like Terraform) in a CI/CD pipeline.
Summary
Policy Sentry dramatically cuts the time it takes to develop IAM policies and ensures that all IAM policies limit access according to the proper CRUD levels, and only to the exact resources that your role needs access to. Before this tool, it could take hours to craft the perfect IAM Policy — but now it can take a matter of seconds. This way, developers only have to determine the resources that they need to access, and Policy Sentry abstracts the complexity of IAM policies away from their development processes.
Be sure to let us know if you have questions about Policy Sentry, if you leverage it in your own environment, or if you have other methods for addressing this problem!
P.S.: We’re hiring!
One of the things that we love about working at Salesforce is that management is committed to automation, including automation of security activities. Certain challenges can only be addressed through careful security engineering and we understand that. If you’re interested in working in Salesforce security, we are hiring in InfraSec and are looking for those with heavy threat modeling skills, secure architecture, and design. If this sounds interesting to you, feel free to follow and message me on Twitter: https://twitter.com/kmcquade3. | https://engineering.salesforce.com/salesforce-cloud-security-automating-least-privilege-in-aws-iam-with-policy-sentry-b04fe457b8dc | ['Kinnaird Mcquade'] | 2020-06-28 18:42:43.155000+00:00 | ['Cloud Computing', 'Security', 'AWS', 'Information Security', 'Open Source'] |
The computer vision tech that helped 15,000 voters request a ballot online | The computer vision tech that helped 15,000 voters request a ballot online
Americans don’t need to be convinced to vote. They need to be able to vote. It’s harder to vote in the United States than in any other nation with democratically-elected leadership. We know that voters turn out when you ask them to, and when they have the resources to be able to. But some states have systematically made it hard to register and to request a mail-in ballot (also called an absentee ballot), and it’s no accident that these barriers disproportionally affect low-income voters and voters of color.
Partisan campaigns focus on what are called “high-propensity” voters — people who usually vote. This creates a self-reinforcing system where you have a group of voters that face more barriers getting to the ballot box, and then these voters get classified as “low-propensity” and nobody reaches out to help them.
At VoteAmerica, we break this cycle by identifying these barriers, building tools to remove them, and then proactively reaching out to voters (tens of millions of them in the run-up to the 2020 General Election!) to help them find and use these tools so that everyone can make their voice heard and have their vote counted.
Requesting a ballot
One of these key barriers is being able to request a mail-in ballot. Even in the midst of a pandemic, many states do not have an online system for requesting a ballot. So voters need to find the ballot request form, print it, sign it, and then mail it or hand-deliver it to their election office. And with tens of millions of Americans out of work due to the pandemic, even fewer people have access to a printer: home printer ownership is in the low single digits and has been for years.
But in about ten states — comprising tens of millions of voters — you’re allowed to submit the ballot request form by email or fax, as long as it still has a “wet” signature (a real pen-on-paper signature that’s been scanned in). But to do that, you’d need to either still have a printer and scanner, or have the technical know-how to use something like Apple Preview to scan in your wet signature and apply it to the application form PDF. And in states that allow fax but not email (New Hampshire and Kentucky), you’d also need access to a fax machine!
So at VoteAmerica, we built an e-signature workflow that let you complete your ballot request form — including a wet, pen-on-paper signature — entirely online, from a web browser on your phone or computer.
The VoteAmerica ballot request workflow
First, the user enters in their basic biographical information. This is used both to fill out the form, and so we can follow up with them later to help them complete and return their ballot once they receive it.
E-Sign, Step 1: Basic info
Then, the user enters in a set of state-specific information. Unlike voter registration, where anyone from any state can use the National Voter Registration Application, each state has a different form for requesting a mail-in ballot, each with different fields to fill out and questions to answer (well, that’s not quite true — every state accepts the Federal Post Card Application for overseas and military voters, but no state accepts that form for domestic voters). So we reviewed and digitized all 51 forms — for every state and DC — to build state-specific forms. Massachusetts, for example, keeps it pretty simple:
E-Sign, Step 2: State-specific info
Then, we ask the user to take a picture of their wet, pen-on-paper signature. On mobile, we provide a camera UI right in the page, and on desktop, we let you text the picture to us (we don’t use a camera UI on desktop because it’s a bit awkward to try and hold a paper with your signature up to your webcam without the paper blocking your view of your computer screen).
As soon as the user takes the picture or texts it to us, we show it and provide an opportunity to crop and rotate:
E-Sign, Step 3: Taking a photo
Then, we process the image to extract a clean, black-and-white signature that we can apply to the form:
E-Sign, Step 4: Confirmation
And that’s it! The user has filled out the whole form using our web-based tool. We then send the form directly to the user’s Local Election Office, either via email or fax (depending on the state, and what that state and Local Election Office will accept). The election office gets a nice, clean form like this:
Example completed form
Electronic vs. digital signatures
We’re taking advantage of an important distinction here between an electronic signature and a digital signature. You’ve probably used a digital signature before — it’s a signature that’s never existed in physical form, like your finger on a touchscreen or typing in your name and having a computer show it in a cursive font. Some states, like Massachusetts, do accept digital signatures, but they’re not accepted in most states. So instead, what we’re doing here is an electronic signature: a real pen-on-paper signature that we’re just transmitting via technical means. Think of it like Mike Teavee from Charlie and the Chocolate Factory, who gets transported via television: we’re taking a real, physical signature, and we’re just applying it to the document and sending it electronically.
Electronic vs. digital signatures
Electronic signatures are valid everywhere: the laws governing them are federal laws enacted to establish that faxed contracts are just as binding as mailed contracts.
Even in states where we could use a digital signature, we use electronic signatures. One of the most common causes for mail-in ballots to be rejected is because the signature on the ballot doesn’t match the one the government has on file from the voter registration or ballot application, and electronic signatures are much more likely to look like the voter’s signature on file than a digital signature.
So where does computer vision come in?
You might have noticed in that walk-through that there’s a pretty interesting step where we go from the signature in the photo to the clean, black-and-white signature:
Original signature
Cleaned-up signature
How do we do this? We stitched together a number of image processing and computer vision techniques to reliably extract signatures from user-captured images.
Why is this hard?
If you’re new to the world of image processing and computer vision, you might be wondering what’s so hard about this. After all, when you look at that photo, you immediately see the signature — the human brain has no trouble at all identifying what part of that image is the signature, and what part is the background. Computers, however, have a much harder time figuring that out. To understand why, take a look at this image: which square is lighter: square A or square B?
Your brain knows immediately that square A is darker and square B is lighter. However, those two squares are actually the exact same color:
(I’ve just added two colored bars here to make it clear that the squares are the same color; nothing else about the image changed).
When you look at this image, you know about things like shadows and checkerboards, and so your brain automatically compensates for the shadow cast by the cylinder — so even though the actual pixels of the image are the same color, your brain knows that square A is a dark square, and square B is a light square with a shadow on it. But it’s not so easy to teach a computer about this.
And this is pretty much the problem we’re trying to solve. When the user gives us a picture of their signature, we need to tell the difference between the dark parts of the image — the actual ink of the signature — and the parts that are just the paper, with stray marks or shadows.
There are a couple of very simple techniques to do this that don’t work very well. The simplest is a basic threshold: we look at each pixel in the image, convert it to grayscale (just a shade of gray, with no color, like a black-and-white photograph), and then we see whether that pixel is closer to white or closer to black. This works pretty well for really clear, simple images, but fails for lots of real-world cases like shadows and blurry images:
Simple threshold results
Another common technique that works a little better is Otsu Binarization. This technique works pretty similarly to the simple threshold: it looks at each grayscale pixel, and checks whether it’s closer to black or to white. But instead of dividing down the middle — more than 50% black ends up as black and less than 50% black ends up as white — it picks a smarter threshold based on the whole image to maximize how much detail is preserved. This works a bit better, but still not great:
Otsu Binarization results
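For reference, both of these baselines are one-liners in OpenCV (a sketch; the file name is a placeholder):

import cv2

gray = cv2.imread('signature.jpg', cv2.IMREAD_GRAYSCALE)

# Simple threshold: pixels brighter than the midpoint become white, the rest black.
_, simple = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Otsu binarization: picks the split point for the whole image automatically.
_, otsu = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)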
Our algorithm, on the other hand, uses a number of different techniques to achieve consistent results:
Our algorithm’s results
As you can see, this works in a wide variety of cases on lots of different input images.
Our algorithm
Our algorithm uses six steps, which I’ll go through in some detail:
The six steps of our algorithm
The six steps are:
Grayscale: convert the image to grayscale
Denoise: remove background noise with non-local means denoising
Threshold: apply adaptive thresholding to convert the image to black-and-white
Open: remove stray marks
Blur: soften edges
Crop: identify the core image and remove the rest
Step one: grayscale
The grayscale step is pretty straightforward: we take each pixel, average the red, green, and blue components of that pixel, and then use that as the color of the pixel. This strips away all of the color information and makes every pixel a shade of gray.
This step simplifies everything that comes after: because we don’t care about the color information, discarding it gives us a much simpler input.
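In code, that per-pixel average is a one-line NumPy reduction (a sketch; OpenCV's cv2.cvtColor, which uses a weighted average instead, is a common alternative):

import cv2
import numpy as np

img = cv2.imread('signature.jpg')           # H x W x 3 array (blue, green, red)
gray = img.mean(axis=2).astype(np.uint8)    # average the three channels per pixel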
Step two: denoise
Now things start to get interesting. We want to remove background noise from the image — the random grey speckles that show up in digital images from cameras. This makes it easier for the rest of the algorithm to focus on the important parts of the image, rather than the random background noise.
Background noise is just a random variation in brightness and color that’s caused by the image sensor in a digital camera. It’s particularly noticeable in low-light photographs, which is pretty common when you’re taking a picture of your signature indoors — that’s part of the reason photographers use lots of lights when shooting professional photos. You can think of it as a random speckle pattern overlaid on top of the image:
If we have multiple copies of the source image, with different noise (for example, if we had multiple frames of a video — each frame showing the same thing, but with different noise patterns) we can take advantage of the fact that adding together noise gives us a consistent gray:
So if we add together multiple copies of the source image, each with different noise, the noise averages out and we get a de-noised image:
This works great! Except that in our case, we only have a single copy of the user’s signature. So we use a technique called non-local means denoising. With this technique, we find parts of the image that look similar, and then average those out to remove noise. So for example, the red and yellow boxes in this image:
And then by doing that over and over — finding lots of pieces of the image that look similar and averaging them out — we can de-noise the whole image:
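OpenCV ships an implementation of non-local means denoising; a hedged sketch (the strength h is a parameter you would tune):

import cv2

gray = cv2.imread('signature.jpg', cv2.IMREAD_GRAYSCALE)

# Larger h averages similar patches more aggressively: less noise, but also
# less fine detail preserved.
denoised = cv2.fastNlMeansDenoising(gray, h=10,
                                    templateWindowSize=7, searchWindowSize=21)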
Step 3: Adaptive thresholding
Adaptive thresholding is the meat of the algorithm: it’s what removes the background and converts the whole image to black-and-white. And it works much better than the simple thresholding and Otsu Binarization we talked about above, because of a key insight: we can’t apply the same threshold to the whole image.
Instead of trying to pick a threshold between black and white, and use that for the whole image, we instead pick a different threshold for each small part of the image, like this:
For each pixel, we look at an area around it to determine what qualifies as “light” or “dark” in that part of the image, and then use that local threshold to determine whether the pixel should be black or white in the output. This means that we’re not comparing the shadowy part on the right side of this image directly to the brighter part on the left side of the image.
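A sketch of how that looks with OpenCV's adaptiveThreshold (the window size and offset are illustrative, not the production values):

import cv2

denoised = cv2.imread('signature.jpg', cv2.IMREAD_GRAYSCALE)

# Each pixel is compared against a Gaussian-weighted mean of the neighborhood
# around it (blockSize x blockSize), minus a small constant C.
binary = cv2.adaptiveThreshold(denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 21, 10)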
Step 4: Opening
While this image has gone pretty smoothly so far — a nice, clean signature with no stray marks — sometimes a bit of dirt or a stray dot of ink still makes it through. So we use morphological transformations, a family of operations based on the shapes in an image. In particular we use a technique called opening, which is a combination of erosion and dilation.
Erosion takes an image and shrinks all the shapes in it:
And dilation does the opposite:
Doing one and then the other — eroding and then dilating — has a somewhat magical effect: most shapes are left the same (because the two operations cancel each other out), but small dots and marks are removed (because the erosion totally removes them, so there’s nothing left to dilate).
Left: original image. Right: eroded and dilated
This combination of erosion and dilation is called opening, and it’s what we do to the processed signature images. It doesn’t have a big impact on this signature that we’re working with now, but as you can see from above, it helps to remove small marks in many cases.
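In OpenCV, opening is a single call once the foreground is white (a sketch; we invert first because the thresholded image has black ink on white paper):

import cv2
import numpy as np

binary = cv2.imread('binary_signature.png', cv2.IMREAD_GRAYSCALE)
ink = 255 - binary                     # invert: ink becomes the white foreground

kernel = np.ones((3, 3), np.uint8)     # marks smaller than the kernel vanish
opened = cv2.morphologyEx(ink, cv2.MORPH_OPEN, kernel)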
Step 5: Blur
One more quick step before we do our final cropping: we apply a slight blur to the image. This makes the image look a bit more natural — our adaptive thresholding converted the image to pure black-and-white with sharp edges, but physical writing doesn’t have such sharp edges. The ink bleeds a bit into the paper, giving softer lines. By applying a subtle blur, we make the image look a bit more natural and more legible. We use a Gaussian blur, which means that for each pixel, we take a bit of the color of the pixels around it (weighting closer pixels more heavily), and use that as the new color of the pixel. So pixels right on the edge of black and white end up grey, because they take on some of the color of the pixels around them.
Here’s what that looks like zoomed in:
Left: pre-blurring. Right: after blurring
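The blur itself is one call (the kernel size is illustrative; a small kernel just softens the edges without smearing the signature):

import cv2

opened = cv2.imread('opened_signature.png', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(opened, (3, 3), 0)  # soften the hard black/white edges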
Step 6: Cropping
Our final step is to crop the image. This is actually really important: we’re going to be placing these signatures on PDFs, and those PDFs often don’t leave much space for the signature. So it’s important that we crop the image as tightly as possible, so we can use all of the available space for the signature, and not waste any space on empty background. We’ve asked the user to crop the image, but users mostly use that opportunity to remove other objects from the photo, rather than carefully cropping the image as tightly as possible.
To crop the image, we first segment it into connected shapes. For each of these shapes, we apply a minimum threshold to determine whether it’s big enough to care about. In this example, we have 4 shapes, of which 3 are big enough to care about:
We then calculate the tightest possible crop that includes all of the large shapes, and use that:
Note that we don’t actively remove the small shapes (the ones in red) — if they’re within the crop, they get to stay. This lets us remove stray marks around the edge of the image, without removing things like the dots on top of i’s or j’s.
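A sketch of that crop using connected-component statistics (the minimum-area threshold is a placeholder you would tune):

import cv2

ink = 255 - cv2.imread('binary_signature.png', cv2.IMREAD_GRAYSCALE)
num, labels, stats, _ = cv2.connectedComponentsWithStats(ink)

# Keep only shapes above a minimum area; label 0 is the background.
big = [i for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] > 100]

# Tightest box containing every large shape; small marks inside it survive.
x0 = min(stats[i, cv2.CC_STAT_LEFT] for i in big)
y0 = min(stats[i, cv2.CC_STAT_TOP] for i in big)
x1 = max(stats[i, cv2.CC_STAT_LEFT] + stats[i, cv2.CC_STAT_WIDTH] for i in big)
y1 = max(stats[i, cv2.CC_STAT_TOP] + stats[i, cv2.CC_STAT_HEIGHT] for i in big)
cropped = ink[y0:y1, x0:x1]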
Final result
And that’s it! A robust pipeline to extract signatures from a wide variety of user-captured images.
Wrap-Up And Learnings
This algorithm works well for a wide variety of signatures — and that part is really important to us. At VoteAmerica, we work to turn out underrepresented voters, and so we need to be really careful that our technology doesn’t unintentionally exclude certain kinds of voters.
Here’s an example of a signature we do pretty well at now, but some earlier versions of the algorithm didn’t do so well at.
Before we tuned it just right, the algorithm often removed the marks and dots that make this signature distinctive. We were overly reliant on the idea that a signature is one or two connected pieces, but that’s not true of a lot of people’s signatures.
In particular, a lot of younger people, and lower-income people who often don’t sign as many things, don’t have a smooth, simple signature. And so if our algorithm didn’t work well for those kinds of voters, we’d run the risk of a signature mismatch between the voter’s actual signature, and our electronic scan of that signature.
And so one of the key design constraints of our algorithm was that when it doesn’t work — like in this example, where it’s picked up the lines from the paper — it should be really clear to the voter why it didn’t work, so they can look at the result and try again.
We prototyped some versions that used more advanced machine learning systems to try and segment out what part of the image was a signature and what wasn’t, by training a model on a bunch of signatures. But the problem is that when that fails, it’s not obvious — it just picks the wrong part of the image. And that’s much harder for someone to understand and fix.
So even though we’ve built a system that’s much too complicated for most voters to understand, we’ve built a system where you can understand the output, and have an intuitive sense of what failed and how to fix it even if you don’t know how the underlying algorithm works.
Complex systems don’t always have to have complex results.
Check out the code
All of this code — for the algorithm itself, and the AWS Lambda deployment that we use to run it and handle things like receiving signature images via MMS — is available on our GitHub account at https://github.com/vote/signature_extractor. | https://medium.com/voteamerica/the-computer-vision-tech-that-helped-15-000-voters-request-a-ballot-online-3bd28961b907 | ['Ben Weissmann'] | 2020-12-08 23:50:57.965000+00:00 | ['Image Processing', 'Vote By Mail', 'Computer Vision', 'Absentee Voting', 'Voting Rights'] |
A Vessel of Love | Empty white ceramic bowl
plain and simple in design
crafted by an artist’s hands
vessel ready to pour love in
Little one has a cold
needs his mama’s magic
onions, ginger, garlic
chicken noodle soup
Trusty white ceramic bowl
I fill you to the brim
help me heal my little one
with love and nourishment
He slurped up every warm drop
I wash you clean and dry
return you to your spot to serve
to be a vessel of love once again | https://medium.com/one-table-one-world/a-vessel-of-love-794eee55fd66 | ['Jessica Archuleta'] | 2019-11-07 16:31:10.891000+00:00 | ['Pomprompt', 'Poetry', 'Love', 'Food', 'Culture'] |
Parenthood, and Brandi Carlile’s “The Mother” | Parenthood, and Brandi Carlile’s “The Mother”
Brandi Carlile is a fantastic singer and writer. Her music, especially on her latest album, By The Way, I Forgive You, is always pensive, moving, and unique. There are songs of hers that I could write about for hours on end, but the one that sticks with me the most is “The Mother.” As a parent, I’ve found that this one song summarizes so much of what I struggle to put into words myself.
The best way to describe parenthood, I’ve found, is that it completely changes you. I feel like a fundamentally different person than I was before my child was born, almost as if someone reached inside me and started rearranging my brain cells. And if I could somehow put everything I’ve experienced into a song, it would be something like “The Mother.”
The whole song is beautiful, but there are a few lines I want to highlight in this post.
“Welcome to the end of being alone inside your mind,” Carlile sings at the beginning of the song. “You’re tethered to another and you’re worried all the time.” The loss of time to think and the arrival of constant worrying are two very real parts of parenthood. My mornings and evenings are much busier than they used to be, and I know I spend much less time reading, writing, or just quietly thinking than I did before. And being a parent means I’m always at least a little worried (even if it’s at the back of my mind) about my baby’s safety.
A few lines later, Carlile sings, “The New York City streets are as busy as they always used to be, but I am the mother of Evangeline.” These words are equally powerful. Having a child usually means giving up a lot of yourself. It means sacrificing things you love for another person. And even though the lack of time alone is considerable, the loss of energy may be even more notable. But even though it’s a sacrifice, it’s one well worth making.
“The first things that she took from me were selfishness and sleep. She broke a thousand heirlooms I was never meant to keep. She filled my life with color, cancelled plans, and trashed my car. But none of that was ever who we are.” These lines are some of my favorite in the whole song. Like other verses, they sum up so much of parenthood in just a few syllables. I’ve lost many hours of sleep since my baby was born. Adjusting to this new reality, and constantly caring for another person, has forcefully cut a lot of selfishness out of my life.
The last few words in that line, “none of that was ever who we are,” reminds me that the things I’ve given up because of parenthood — a large chunk of my free time, or more sleep on weekends — never defined me. If they ever did, then I needed that slap in the face that parenthood brought. I don’t think I determined my happiness by how much free time I had in my life before the baby. At the very least, I know I didn’t consciously. But caring for a child has shown me how foolish and empty that mindset is. Life is so much more than what we do for ourselves.
“They’ve still got their morning paper and their coffee and their time. They still enjoy their evenings with the skeptics and their wine. But all the wonders I have seen I will see a second time. From inside of the ages through your eyes.” All the things that parenthood takes away pale in comparison to what it gives. Even simple journeys like taking the baby on a walk, to the park, or to meet new family members, give me so much joy. I can remember what it was like to live without my baby, but I don’t have any desire to go back to that time. And while I don’t want to rush the days when he’s small and truly a baby I can hold in my arms, I know it will also be an amazing experience to watch him grow up each year.
“So they can keep their treasure and their ties to the machine. Cause I am the mother of Evangeline.” God has blessed my wife and I with a beautiful, happy baby boy. Everything else in both of our lives seems so small stacked up next to that fact. Life is much more difficult than it used to be; life has much less sleep than it used to. But it’s also so much more.
I don’t meant to glamorize parenthood. It’s by far the most difficult thing I’ve ever had to do in my life. There are nights when I’ve felt stretched beyond my abilities, thrust into a role I had no idea how to play. And to be perfectly clear, I don’t think I’m anything close to an expert. A large part of parenthood is fumbling through everything as well as one can. That’s how I feel many days. I’m trying my best, but I know I’m very far from perfect.
I also don’t want to make this sound like I’m perfectly adjusted or completely selfless now. Sometimes, like anyone else, I need to remind myself how blessed I am. There are plenty of days or nights that I’ve been frustrated, or selfishly wished that I had more time to myself. Part of why I’m writing this post is to remind myself how much I have been given. That even in the moments when being a parent is most difficult, God has blessed me immeasurably.
Ultimately, I think what I mean to say is this — being a parent has also been the most wonderful experience of my life. It’s changed my perspective on so much and shaped me in ways I never could have anticipated. I love every second I spend with my son (even if there have been many during which my body cried out for sleep). And I think “The Mother” might be the best, most pure distillation of this experience. There are other great songs as well (“Dear Theodosia” from Hamilton, for example), but this one is my favorite.
Parenthood is messy, anxious, and exhausting. It’s also wonderful, life-changing, and joyful. I’m so thankful that God blessed me with this gift and this song is the perfect way to explain so much of it. | https://medium.com/the-coastline-is-quiet/parenthood-and-brandi-carliles-the-mother-23bc4a5c26f7 | ['Thomas Jenkins'] | 2020-02-09 14:52:06.938000+00:00 | ['Writing', 'Parenthood', 'Children', 'Music', 'Life'] |
Weight loss
How long the weight loss eBook has been on the market: Choose an eBook that has been on the market for a considerable period. A book that has been around for a while makes a better guide, since it will be easier to get information from other readers who have used it. Make sure the guide you are buying does not rely on outdated methods for losing extra weight and fat which have been proven ineffective.
Cost of the weight loss program: One of the factors that you should consider before buying any online book is the cost of the method described in the book. Choose a method that describes the most cost-effective program to help you lose weight.
Effectiveness of the weight loss program: Ensure the eBook you choose as your guide for losing weight has an effective program that can help you achieve your goals and targets. Before investing, try to research the programs these methods describe for losing extra weight and fat. One of the ways you can decide if the method you are buying is the best is by going through online reviews.
Check if the approach used is logical: Choose a weight loss book or program that provides a logical, step-by-step procedure to lose weight. Ensure the materials used in the guide are readily available in your area.
Reliability of the advice given by the weight loss eBook: Before investing in any method or book, it’s important to ensure that the dietary and nutritional advice being given is reliable.
If you are troubled by your obesity, then definitely use it once. Thank you | https://medium.com/@onlineearning0094/weight-lose-632b33c18e53 | ['Susheel Yadav'] | 2021-12-23 02:54:50.124000+00:00 | ['Weight', 'Weight Lose In A Week', 'Weightloss Recipe', 'Weightloss Foods', 'Weight Loss Tips'] |
Super Glitch | Super Glitch
Poetry
Image by PublicDomainPictures from Pixabay
Upon the mirror touch of light
and the strict breathing of the clock
properly balancing
the interactions of opposite energies
alien interruptions exceptionally crafted
shunning the path of the absent freedom
in cosmic touchstones for exploration
relentless dust strikes the journey
till no bonds are able to secure
the invisible through melody
and the rhythm devotes its tears
to the unnameable
that lost cohesiveness
into the oceanic determinism
where energy displacements
are absorbed into the affinity of space
Joanna Vang 2020 | https://medium.com/scrittura/super-glitch-1fa1fdf3e713 | ['Joanna Vang'] | 2020-12-01 14:44:49.586000+00:00 | ['Effect', 'Poetry', 'Perception', 'Scrittura', 'Glitch'] |
Looking For A Web Host? Four Important Factors To Consider | If you run any business, then you will know the importance of having a website. Your website is the storefront of your business. It’s where potential customers can find out more about what products or services you offer. In this article, you’ll see four of the most important factors to consider while looking for a web host.
First impressions are important, especially in business, and you’ll want to make sure that you not only have a great-looking website, but one that is fast to load, easy to use, and works on all types of devices, such as laptops, desktops, and mobile phones.
The importance of web hosting
Choosing the company that hosts your website will be one of the most important decisions you make, so you will want to make sure that you get it right.
There are several things that you need to take into consideration when searching for a web host for your site. In this article, we will look at the main factors and why they are important in making your choice.
Choice of hosting options
There are different types of hosting that will apply to different websites. So you will want to check that the web host you are planning to use has a range of options available.
There are many websites, such as 28msec, that provide excellent analysis and suggestions for web hosts.
The most popular option for most people is shared hosting, where a single server is shared among many users, sometimes more than a hundred, all sharing the same resources on the server. It is the most affordable option and ideal for those who are running a simple website.
Other options include VPS (virtual private server) hosting, where you have a virtual server all to yourself, allowing you more control of the hosting functions and options. This is a good choice for those who are running a larger website, such as an e-shop.
The most powerful option is a dedicated server, where you have a physical server all to yourself. This is a much more expensive option, and it requires a level of experience and knowledge about server setups, making it a choice for larger companies that often have their own IT departments.
If you are using the WordPress CMS, you might want to consider looking into getting dedicated WordPress hosting, which will be on servers optimized for that specific platform.
It’s also important to check with the web host about upgrading your hosting to a different type should the need arise. You might find that after a year or so, you’ve outgrown your shared hosting package and will need something a little more powerful.
Reasonable pricing
Another thing to pay attention to is the price. You’ll want to be sure that you are not overpaying for your hosting, especially if your hosting package includes lots of expensive add-ons that you aren’t even using.
Most shared hosting options usually come with three choices. They are usually for a single site, multiple sites, or unlimited sites. They will all provide the same functions and options. But with some web hosts, you will have a difference in the amount of disc space and bandwidth you are allowed to use.
The price will usually be around $7-$10 a month for single-site shared hosting accounts, multiple sites around $10-$15 a month, and unlimited plans from around $15 a month and up. You should also check whether the monthly prices shown on the host’s website are billed month to month or as an annual subscription. More often than not, the lowest price you see being advertised is the one you get when you sign up for a contract of one year or more. If you want a month-by-month payment plan, the prices will usually be a few dollars more each month.
VPS hosting can range between $30 and $100 a month, depending on the specifications. Dedicated servers usually start at around $120 a month, going upwards of hundreds more depending on the type of server you need.
Reliable performance factors to consider in a web host
Uptime is an important aspect of web hosting and relates to the amount of time each month that your site will be up and running. Problems with servers and data centers can happen out of the blue, and though in most cases the problems are fixed promptly, you might have a situation where your website is down for a prolonged period of time.
You’ll want to use a web host that has at least a 99.9% uptime guarantee. Anything less than that, and you could be asking for trouble.
Good support
One of the most important things to look at is the type of support that the host offers. You never know when you might run into a problem. Regardless of whether it’s something simple, such as an email not working properly, or a more complicated issue with a database, you will want to be sure that there is someone on hand who can help.
Many web hosts offer round-the-clock support, while others may only offer support during working hours. If your host has limited support hours, make sure that they are similar to your own work hours; if your host is on the other side of the world, you might discover that their working hours fall during the night for you. Going with a local host would be the better option in this scenario.
It’s also worth spending a little time looking for reviews from clients using that particular web host. They will be in the best position to share their experience and tell you if the quality of service and support from the host is good enough.
The last thing you’ll want is to have a serious problem with your website and not be able to get any response from customer support.
Some final thoughts about factors to consider in a web host
When it comes to web hosting, you’ll find that many companies are vying for your business. Take your time and look closer at some of the potential hosts you are thinking of using.
Check out the reviews for their hosting services online. Some websites are dedicated purely to reviewing different web hosts. These can give you a much clearer picture of what type of service a specific web host offers.
If you are moving from one host to another, try to find one that offers site migration, where the new host will take care of everything: moving your websites from the old host over to their servers. If you don’t know what you are doing, this can be a tricky process, so it’s a great option to let the new host take care of it for you.
The Impacts of Different Leadership Styles (continued) | Our last post on this topic discussed the impact of 3 of the 6 leadership styles outlined in ‘Primal Leadership’ by Daniel Goleman, Richard Boyatzis and Annie Mckee. These included the visionary, coaching and affiliative leadership styles.
In this post we’re going to look at the 3 remaining leadership styles the authors refer to, including the democratic, pacesetting and commanding leadership styles.
Democratic leadership Style
The democratic leadership style is also referred to as participative or shared leadership. Democratic leaders focus on equality and encourage participation from their team members. All contributions to the decision-making process are welcomed. A democratic leader offers guidance where necessary yet still retains control over who gets to contribute to decisions.
This leadership style has a positive impact on the climate of a company and tends to affect employee’s behaviour and morale in the following ways.
It values peoples input and gets commitment through participation.
Team members are encouraged to share their ideas and opinions and thus feel more engaged, committed and involved.
The higher level of involvement results in team members caring more about results.
The democratic leadership style is most appropriate when building buy-in or consensus, or when getting valuable input from employees. It is known to be one of the most effective leadership styles and results in higher productivity and increased morale. On the downside, when time is tight it can lead to communication failures and incomplete projects.
Pacesetting leadership Style
A pacesetting leader is focused primarily on performance and achieving goals. They thrive on being high achievers themselves, able to meet challenging and exciting goals and expect excellence from their team members. They do not always allow free flow of ideas and contributions and will step in when they feel is necessary to ensure that targets are hit. Pacesetting leaders are not empathetic towards poor performers and expect the same high standards from everyone.
The pacesetting leadership style can be effective for meeting targets, however it can have a negative impact on the climate of a company and tends to affect employees behaviour and morale in the following ways:
It leads to stressed, exhausted and burnt out employees.
It can result in a high staff turnover.
It’s often poorly executed and often highly negative.
The pacesetting leadership style is most appropriate when getting high-quality results from a motivated and competent team, quickly. Since it focuses on high performance, it’s useful for leaders to learn how to train team members to improve the quality of their work. Engaging in high-performance coaching can help them to become more effective. To get the best results, leaders should practice their motivation skills.
Commanding leadership Style
Commanding leaders are what many perceive to be ‘traditional leaders’. They possess a take-charge attitude and are results and goals driven. Their approach to leadership depends heavily on giving out orders, on occasion disciplinary threats, and tight control.
The issue with this style is that it is often misused, and can negatively impact employee behaviour and morale in the following ways:
It can result in stressed, fearful employees and decreased performance levels.
Employees feel like their contributions aren’t of any real value.
Employees feel as though they cannot be trusted.
The commanding leadership style is most appropriate for soothing fears by giving clear direction in a crisis, to kick-start a turnaround, or with problem employees. Negative views of this style are due to the leader’s tendency to be critical and withholding of praise.
As a reflection point: which of the 6 leadership styles are you mostly using in your everyday leadership? How is this impacting on your team? What results do you achieve? How motivated are your people?
Tania Watson is the founder of Creative Coaching and an executive coach, organisational consultant and leadership specialist. Creative Coaching is a successful company dedicated to the development of senior leaders in organisations through one to one coaching, intact team development and group facilitation. If you or someone from your organisation would like to have a no obligation conversation about how Creative Coaching may be able to help, please email Tania directly at [email protected]. | https://medium.com/@ccoachtania/the-impacts-of-different-leadership-styles-continued-334f1272019d | ['Tania Watson'] | 2020-12-11 16:36:25.616000+00:00 | ['Leadership Styles', 'Leadership', 'Leadershipdevelopment', 'CEO', 'Leaders'] |
The Eighth Grader Whose Discovery Could Lead To a Covid-19 Cure
Could Anika Chebrolu’s findings be the end of this pandemic?
Photo of Anika Chebrolu
As researchers and scientists around the world race to find a treatment for the coronavirus, a 14-year-old girl appears to outshine them all.
The Indian-American teen from Frisco in Texas won the top prize of $25k at the 3M Young Scientist Challenge for her discovery that could lead to a potential cure for the coronavirus.
Her invention uses in-silico methodology for drug discovery to find a molecule that can selectively bind to the Spike protein of the SARS-CoV-2 virus. She began by identifying where and how the molecule would bind to the virus, which eventually led to her finding her potential drug.
She began her research last year after battling a severe influenza infection following which she made the commitment to find a cure for influenza. She was especially inspired knowing how many people still die annually despite vaccinations and anti-influenza drugs on the market. However, her plans took a turn when the coronavirus pandemic hit and she decided to shift her focus on finding a treatment for the COVID-19.
“Science is the basis of life and the entire universe and we have a long way to go understand it fully,” said Chebrolu
She worked closely with 3M Corporate Scientist Dr Mahfuza Ali, who mentored her and guided her to transform her concept into reality. She finally presented her project to a judging panel of scientists and leaders.
Her next goal is to work alongside scientists and virologists to develop her findings into an actual cure for the virus and hopefully control the mortality of the coronavirus disease.
“While my effort to find a lead compound to bind to the spike protein of the SARS-CoV-2 virus this summer may appear to be a drop in the ocean, but still adds to all these efforts” Anika Chebrolu said to CNN
As the fight against the deadly coronavirus continues to rage on, we can take comfort in the fact that Anika’s discovery, along with many others, contributes immensely to finding an effective treatment or even a possible cure for the virus. But for the time being, we still have to rely on social distancing, self-isolation, and other measures until we are presented with better alternatives.
Moon Over a Pandemic | sister moon
we search for you
in the night sky
you look down on us
even as clouds
scatter across your face
you shine, a reflection
of your brother, sun
and it heartens us
consumed and devoured
by this pandemic
we seek your solace
prisoners of a virus
we lift our gaze
your radiance falls
around us
all of humanity watches
as you circle the earth
through your phases
you remain steadfast
sister moon
as you hang in the sky
suspended
by invisible threads
you are faithful
and unwavering | https://medium.com/the-pom/moon-over-a-pandemic-dce08ce37f2a | ['Sydney Duke Richey'] | 2020-10-22 00:10:18.021000+00:00 | ['Unfolding', 'Pandemic', 'Poetry', 'Pomprompt', 'Moon'] |
My Opinion On School Uniforms | I don’t like having uniforms, because you have to wash them, and most times they are uncomfortable.
First, let me say why picking out your clothes in the morning is better than already knowing what to wear every day. When you wear clothes that you want to wear, you are comfortable. You won’t be annoyed for seven hours by what you’re wearing. It’s not a good idea.
Another reason I think it is better to choose your own clothes is that it shows discipline and responsibility. You’re waking up, picking your own clothes, and eating breakfast.
So now let me tell you why it’s better to wear uniforms. When students wear uniforms, they are in the school dress code, and so when they have to pick their clothes, they would be out of the school dress code. When you have a school uniform, you can notice someone in the school that isn’t wearing a uniform, and so might not be supposed to be there.
If Easton adopted a uniform policy, I would be able to deal with it but I wouldn’t like it. I wouldn't be comfortable in school, and I wouldn’t get to be me and choose the clothes that I wear.
So as my final answer, I think not having uniforms is better than having uniforms because of keeping yourself comfortable. | https://medium.com/sukhroop-the-storyteller/my-opinion-on-school-uniforms-9db830c1db1 | ['Hargun Singh'] | 2019-09-18 01:46:04.231000+00:00 | ['Education', 'Uniforms', 'Opinion', 'Life'] |
Data Beyond Perception | Image by statistics authority of UK
The quote “Data is everywhere” can’t be more true.We tend to see only what is presented to us, but we often overlook what information might be hidden inside each data.This article provides information about how we can find links and correlation between various features of a given data. Sometimes very important information which is vital for our analysis is hidden as a combination of individual features. Let’s jump in to the task at hand and find out the power of statistics and python.
Explore the Data
To illustrate my point, I have used the advertising data from the textbook “An Introduction to Statistical Learning with Applications in R” (Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani). The data can be found using the author’s website or the link: http://faculty.marshall.usc.edu/gareth-james/ISL/data.html
# Imports used throughout this analysis
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

url = 'http://faculty.marshall.usc.edu/gareth-james/ISL/Advertising.csv'
ds = pd.read_csv(url, index_col=0)  # the first column holds the row index
Now, to visualize the relations between the different features of our data set, we can use pairplot(), which gives us the scatter plot between each pair of features along with the histograms.
sns.pairplot(ds);
Pair wise plot for different features
Isn’t python amazing? Seaborn is an amazing data visualization library and you can use its various functions to utilize it to its full content.
Understand the data:
The pair wise plot above can be overwhelming if you are new to this, but it is just a plot of two features taken at a time, hence the name pair plot. In our data, the plot towards the bottom right is a scatter plot of sales against the television advertising budget. Now, to quickly get a glimpse of how our data is related, you can use the following command:
g = sns.pairplot(ds, kind="reg")
This will plot a linear regression line fitting all the pair wise plots.
Linear Regression fit to data
Another great function of the seaborn library is that it can show us the correlation of the different features with each other, using just a couple of lines:
corr = ds.corr()
sns.heatmap(corr)
Heat map showing correlation
The heat map shows the extent to which these features are correlated with each other, with the last row of particular interest since it gives the relation between sales and each advertising budget. It is quite clear that sales is strongly positively correlated with the TV advertising budget, the correlation with the radio budget is weaker, and it decreases even further for the newspaper budget.
Fitting the Linear Regression Model:
Now, to get into the machine learning part, we have to fit a regression model. We split the data into a 70:30 train/test split, then use a separate linear regression model to fit sales against each feature.
train, test = train_test_split(ds, test_size=0.3)
reg_tv = LinearRegression().fit(train[['TV']], train['sales'])
reg_radio = LinearRegression().fit(train[['radio']], train['sales'])
reg_news = LinearRegression().fit(train[['newspaper']], train['sales'])
y_pred_tv = reg_tv.predict(test[['TV']])
y_pred_radio = reg_radio.predict(test[['radio']])
y_pred_news = reg_news.predict(test[['newspaper']])
We can observe how the linear regression model fits our training and test sets using the following plots. The blue points are the scatter plot for the training set, while the green points are for the test data set.
Linear model for different features
By observing the above plots, anyone can say that the linear models are not good enough. We can definitely use other classes of models to see how they fit our data, but we often resort to higher-level models without exploring what the data has to offer. Even from this linear model we can roughly say that it fits sales against the TV budget best. This can mean that on increasing the budget for advertising on TV we can increase sales.
The Hidden Pieces:
There can be various combinations of features which might give us a huge amount of information about how they are interconnected. For this purpose I have used combinations such as multiplying the budgets of TV and radio, radio and newspaper, newspaper and TV, and the combination of all three. You can do this using:
ds['news_radio'] = ds['newspaper']*ds['radio']
ds['tv_radio'] = ds['TV']*ds['radio']
ds['news_tv'] = ds['newspaper']*ds['TV']
ds['news_radio_tv'] = ds['newspaper']*ds['radio']*ds['TV']
ds
The above code will create new columns in your already existing data frame. All you have to do now is fit the linear model for the new set of features.
train, test = train_test_split(ds, test_size=0.3)
reg_news_radio = LinearRegression().fit(train[['news_radio']], train['sales'])
reg_tv_radio = LinearRegression().fit(train[['tv_radio']], train['sales'])
reg_news_tv = LinearRegression().fit(train[['news_tv']], train['sales'])
reg_news_radio_tv = LinearRegression().fit(train[['news_radio_tv']], train['sales'])
y_pred_news_radio = reg_news_radio.predict(test[['news_radio']])
y_pred_tv_radio = reg_tv_radio.predict(test[['tv_radio']])
y_pred_news_tv = reg_news_tv.predict(test[['news_tv']])
y_pred_news_radio_tv = reg_news_radio_tv.predict(test[['news_radio_tv']])
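To back up the visual comparison with a number, one option (not part of the original analysis) is to score each model on the held-out test set with scikit-learn's r2_score:

from sklearn.metrics import r2_score

for name, y_pred in [('TV*radio', y_pred_tv_radio),
                     ('news*radio', y_pred_news_radio),
                     ('news*TV', y_pred_news_tv),
                     ('news*radio*TV', y_pred_news_radio_tv)]:
    print(name, round(r2_score(test['sales'], y_pred), 3))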
Visualize the data:
I used the subplots function to generate the different plots, and for the sake of comparison between the individual features (the TV, radio, and newspaper budgets) and the synthesized features, I included the individual plots as well.
Linear model on training data
Linear regression fit on test data
It is quite evident that the combination of TV and radio together, and the combination of newspaper, radio, and TV, follow a linear trend. This means the company might increase sales if it spends more on TV and radio advertising rather than increasing the budget for newspaper.
So the data as originally given will not always describe the model best; sometimes the information is hidden, and we can amend our data to find correlations between the various parameters of a data set.
Philanthropy and Leadership in the Time of Crisis | For much of 2020, we have been in the season of suffering, centered around the crisis of the coronavirus and its multitude of effects nationally and globally. We are all grappling with our own version of what Michelle Obama called “some form of low-grade depression.” The Covid19 pandemic has exposed the crisis around effective leadership, a critical need now more than ever, for leaders who are intimately involved with the systemic change to step forward. Unless such leaders are in positions of power, philanthropy will continue to perpetuate inequality.
Darren Walker, President of the Ford Foundation, argues that the philanthropic sector has fallen short of leading on moral leadership. “We need leaders who are motivated by values and incentives and outcomes that transcend those offered by the systems which, by design or neglect, have widened inequality to an untenable degree.” But this failure has its root causes on old questions long ignored which now have an urgency for a response or solution: is it possible to balance philanthropic freedom with the desire to ensure charitable activity is rationally organized? what impact does our perception of the nature of an epidemic have on philanthropic responses? do crises help to redress power imbalances or exacerbate them?
The role philanthropy ultimately plays in crises and how that relates to issues of power and inequality is often complex, and it can be both a tool for maintaining and protecting existing social dynamics and hierarchies, including a means to break down societal divisions and establish new norms.
According to Rhodri Davies of Charities Aid Foundation (CAF) in an article written for HistPhil titled “Philanthropy In a Time of Crisis: Lessons From European History”, pandemics, like the one we are all dealing with, often play an important role in determining an appropriate philanthropic response. Davies states that in the 16th and 17th centuries, the plague was not thought to be a medical problem but a religious one. It was seen, according to Paul Slack, as “a divine scourge, a retribution for the sins of mankind: sometimes for sins in general, more often for the specific misdeeds of the time or place of an epidemic.” Davies further states that “religious rituals to appease God directly, as well as charitable efforts to demonstrate that the afflicted populace had seen the error of their ways and were willing to atone for their sins, were of far greater importance to most at the time than the provision of medical assistance to the sick.”
The political earthquake of the Trump administration has left a lasting impression on the philanthropic sector, especially in how it has lost the battle against inequality. The sector has reacted to the Trump administration by being more defensive on social issues, ramping up funding through donor-advised-funds and intermediaries that leave no fingerprints, doubling down on the politics of intersectionality by forging alliances between groups that may not have partnered before, and showing that the sector can be nimble by utilizing rapid response grantmaking for social movements.
How then can the sector adequately respond, especially in times of crisis?
1. There is an urgency now to shift towards putting equity at the base of all philanthropic work. This relies on identifying the lessons to be learned from this pandemic in order to become a key actor in prevention strategies, including participating in the transformation of our current economic paradigm. Past philanthropic responses have often been to embrace overly diffuse, often localized strategies that yield few larger system gains.
2. The sector should incorporate the spirit of solidarity, erasing a them vs. us mentality that has deepened divisions within the sector. The UN Secretary-General, António Guterres, in a September 2020 statement spoke out against the “million” deaths caused by a lack of solidarity, stating that countries’ individual responses will not be enough to solve the current global pandemic.
3. We need to embrace the new term of preventative philanthropy in addressing some of these divides while going beyond the charitable and reactive dimensions — when visionary and committed political decision-makers step forward to bring about necessary transformation, shifts in the current economic paradigm can begin to take place. As Darren Walker of the Ford Foundation notes “We must constantly renew our steadfast commitment. We must see the task of justice through to completion, knowing that for our most vexing problems, the reward of our work may come for the next generation or the one after that.”
4. The biggest lesson from this current pandemic is that the sector needs to begin to immediately plan for the next one. Some pandemics are predictable, and the sad reality is that this pandemic was completely predictable and easily preventable. We now know that there are gaps in how we collectively respond to disasters and that though pandemics affect everyone, their effects are felt differently depending on the social group in question. Ensuring and working towards more inclusive, sustainable, and resilient societies is one way to prepare for the next pandemic. | https://medium.com/@emerhi/philanthropy-and-leadership-in-the-time-of-crisis-1430e2e8bde2 | [] | 2020-12-15 23:21:49.251000+00:00 | ['Philanthropy', 'Development', 'Trump', 'Charity', 'Justice'] |
Check that React lifecycle methods with hooks tutorial before building your next app | Intro to React lifecycle methods
Welcome to the React lifecycle tutorial, where I will explain everything related to the React component lifecycle methods.
You'll learn about the four main React component lifecycle stages, when they fire, and what happens inside each of them.
Next, we will talk about the methods used during these stages — which ones we need, and which ones we should avoid.
We won't forget about the deprecated methods either, so you will know which ones you should definitely refactor out of your code and stop using.
Finally, in the last step, we will talk about React lifecycle hooks, and I'll tell you how we can implement them to replace all of the lifecycle methods when we build React functional components.
Let’s start!
And if you prefer video, here is the youtube version where I explain all of that:
react component lifecycle cheatsheet
React lifecycle diagram
In React, we have four main lifecycle phases.
Actually, I’d say there are three lifecycle phases and one phase for Error Handling, which I will tell you about a bit later.
Now, let’s focus on these three most popular and mostly used stages.
Anyway, even if not every developer cares about that stage, I still think that should change — error handling should be implemented in every project because it can save us a lot of time in the future.
So in the image, you can see the diagram containing the three lifecycle stages: Mounting, Update, and Unmounting.
The first two — the Mount and Update stages — are split into three phases: render, pre-commit, and commit.
The first phase of Mount and Update, called Render, covers the pure rendering. The DOM is not touched yet, and React can pause, abort, or restart this phase before it finishes.
The second one, named Pre-Commit, is when React can read the DOM for the first time.
And the last one, named Commit, is when we get full interaction with the DOM: we can manipulate the Document Object Model, create side effects, work with things outside our component instance — for example, data fetching — and schedule updates.
React Mount
Mount phase is the initial stage of the React component lifecycle and the moment when React creates our components and inserts them into the DOM.
Let’s see the component mount methods.
You will learn about when they’re fired, their responsibility, and when it is good to avoid using them.
React constructor
It’s a component lifecycle method that is fired before the react component is mounted.
The constructor is useful when we need to initialize the component's state, or bind functions and event handlers in our component.
We always need to remember to call super(props) to avoid situations where our component's props are undefined.
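As a minimal sketch, here is a constructor that does both — initializing state and binding a handler. The Counter component is just an illustration:

import React from "react";

class Counter extends React.Component {
  constructor(props) {
    // Call super(props) first so this.props is defined inside the constructor
    super(props);
    // The constructor is the only place where we assign this.state directly
    this.state = { count: 0 };
    // Bind the handler so `this` points at the component instance
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick() {
    this.setState((prevState) => ({ count: prevState.count + 1 }));
  }

  render() {
    return <button onClick={this.handleClick}>Clicked {this.state.count} times</button>;
  }
}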
React static getDerivedStateFromProps
This method is used in Mount and Update lifecycle.
This component lifecycle method is called just before the render method in both cases, mounting and updating.
It's handy when we want to update our component's internal state in response to a change in props.
Anyway, it's not an easy method to use correctly. In most cases it's easier to use componentDidUpdate or memoization (recomputing a value during render only when its inputs change), but not in every case.
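For illustration, here is the common "reset state when a prop changes" pattern — the EmailInput component and its userId/defaultEmail props are hypothetical:

import React from "react";

class EmailInput extends React.Component {
  state = {
    email: this.props.defaultEmail,
    prevUserId: this.props.userId,
  };

  static getDerivedStateFromProps(props, state) {
    // When we start editing a different user, reset the locally-edited email
    if (props.userId !== state.prevUserId) {
      return { email: props.defaultEmail, prevUserId: props.userId };
    }
    // Returning null means "no state update needed"
    return null;
  }

  render() {
    return (
      <input
        value={this.state.email}
        onChange={(e) => this.setState({ email: e.target.value })}
      />
    );
  }
}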
React render
This method is used in both the Mount and Update lifecycles, and it's the only required method in a React component.
In the React component lifecycle order, it's called after getDerivedStateFromProps during the Mount cycle, and after the shouldComponentUpdate and forceUpdate methods in the Update cycle; new props or new component state will fire the render method as well.
Render shouldn't modify state; it should just return elements — React elements, arrays and fragments, portals, strings or numbers, booleans or null. In short, it's used for creating and returning elements.
The returned value can be defined or can depend on component props or component state.
Interestingly, we can stop a component from rendering by returning false from shouldComponentUpdate, and conversely, we can force a component to rerender by calling forceUpdate.
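A minimal render sketch — a pure function of props that can also return null to render nothing:

import React from "react";

class Greeting extends React.Component {
  render() {
    // No setState calls and no side effects here — render stays pure
    if (!this.props.name) {
      return null; // returning null renders nothing
    }
    return <h1>Hello, {this.props.name}!</h1>;
  }
}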
React componentDidMount
This component lifecycle method is called after a component is mounted, i.e., inserted into the DOM tree.
It's used mostly for DOM operations, subscriptions, or data fetching, so it's an excellent place to put data-fetching logic and the setState calls that store the results.
Anyway, you need to know that the component will rerender if you change state here, although this extra render happens before the browser updates the screen.
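Here is a hedged sketch of that pattern — the /api/users endpoint is a placeholder, not a real API:

import React from "react";

class UserProfile extends React.Component {
  state = { user: null };

  componentDidMount() {
    // The component is in the DOM now, so it's safe to fetch and store data
    fetch(`/api/users/${this.props.userId}`)
      .then((response) => response.json())
      .then((user) => this.setState({ user }));
  }

  render() {
    return this.state.user ? <h2>{this.state.user.name}</h2> : <p>Loading…</p>;
  }
}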
React componentWillMount
React componentWillMount is deprecated (it survives only as UNSAFE_componentWillMount), which means you should refactor it out of your code to avoid future problems.
React update lifecycle methods
The update stage is the second phase of the React component lifecycle.
It happens when the props passed into a component or its internal state change.
In the case of update, React calls methods in order as below:
static getDerivedStateFromProps
shouldComponentUpdate
render
getSnapshotBeforeUpdate
componentDidUpdate
Now, let’s see what each of them does (getDerivedStateFromProps and render methods were described in the Mount section).
React shouldComponentUpdate
This method is called whenever state or props change.
It fires before the render method, and — what is more interesting — it doesn't fire when we use forceUpdate.
shouldComponentUpdate works like this: if it returns true (the default), the component will rerender; if it returns false, the component will not render again.
Used mostly for improving performance, to check if rerender is definitely necessary, and stop it when it’s not.
One more thing that you should know about this method is that you can use PureComponent instead of messing with this function, because React PureComponent has a built-in shallow comparison and will not rerender when it's not necessary.
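A tiny sketch with a hypothetical Price component that rerenders only when the displayed amount changes:

import React from "react";

class Price extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    // Skip the rerender when the value we display hasn't changed
    return nextProps.amount !== this.props.amount;
  }

  render() {
    return <span>{this.props.amount} USD</span>;
  }
}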
React getSnapshotBeforeUpdate
React getSnapshotBeforeUpdate is fired after the render method, just before React applies the rendered result to the DOM. Whatever it returns is passed as the third argument to componentDidUpdate.
It's not commonly used — only in exceptional cases, for example, preserving the scroll position in news feeds with infinite scroll.
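Sketching the classic use case with a hypothetical MessageList component: capture the scroll offset before the DOM update, then restore it in componentDidUpdate, which receives the returned value as its third argument:

import React from "react";

class MessageList extends React.Component {
  listRef = React.createRef();

  getSnapshotBeforeUpdate(prevProps, prevState) {
    // Capture the scroll offset before React updates the DOM
    if (prevProps.messages.length < this.props.messages.length) {
      const list = this.listRef.current;
      return list.scrollHeight - list.scrollTop;
    }
    return null;
  }

  componentDidUpdate(prevProps, prevState, snapshot) {
    // The value returned above arrives here as the third argument
    if (snapshot !== null) {
      const list = this.listRef.current;
      list.scrollTop = list.scrollHeight - snapshot;
    }
  }

  render() {
    return (
      <div ref={this.listRef} style={{ overflow: "auto", height: 300 }}>
        {this.props.messages.map((m) => (
          <p key={m.id}>{m.text}</p>
        ))}
      </div>
    );
  }
}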
React componentDidUpdate
This method is very similar to componentDidMount: it's fired after the render method finishes, but in the Update stage of the React component lifecycle.
It is not, however, called after the initial render — that one belongs to the Mount stage of the React component lifecycle.
As well as the componentDidMount, it’s used mostly for DOM operations or data fetching.
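Extending the hypothetical UserProfile sketch from the componentDidMount section — the key detail is comparing against prevProps first, otherwise the fetch + setState would run on every update and loop forever:

import React from "react";

class UserProfile extends React.Component {
  state = { user: null };

  componentDidUpdate(prevProps) {
    // Guard with a comparison, or setState here would trigger an infinite loop
    if (prevProps.userId !== this.props.userId) {
      fetch(`/api/users/${this.props.userId}`)
        .then((response) => response.json())
        .then((user) => this.setState({ user }));
    }
  }

  render() {
    return this.state.user ? <h2>{this.state.user.name}</h2> : <p>Loading…</p>;
  }
}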
React componentWillUpdate (deprecated)
React componentWillUpdate is deprecated (it survives only as UNSAFE_componentWillUpdate), which means you should refactor it out of your code to avoid future problems.
React componentWillReceiveProps (deprecated)
React componentWillReceiveProps is deprecated (it survives only as UNSAFE_componentWillReceiveProps), which means you should refactor it out of your code to avoid future problems.
React Unmount
It’s the last stage of the React component lifecycle.
Unmounting is when a component is being removed from the Document Object Model (DOM), and only one method is called during this stage: componentWillUnmount.
React componentWillUnmount
It's the only method called during the Unmount stage, and it's called right before a component is unmounted and destroyed.
Unlike componentDidMount, it isn't for setting things up — we use this method to clean up: removing listeners, cancelling subscriptions and timers, and undoing any changes we created in the componentDidMount or componentDidUpdate methods.
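A small sketch of that mount/unmount pairing — whatever componentDidMount subscribes to, componentWillUnmount removes:

import React from "react";

class WindowSize extends React.Component {
  state = { width: window.innerWidth };

  handleResize = () => this.setState({ width: window.innerWidth });

  componentDidMount() {
    window.addEventListener("resize", this.handleResize);
  }

  componentWillUnmount() {
    // Remove exactly what componentDidMount set up, so the destroyed
    // component no longer receives events or calls setState
    window.removeEventListener("resize", this.handleResize);
  }

  render() {
    return <p>Window width: {this.state.width}px</p>;
  }
}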
React Component Error Handling
Error handling is the stage I wouldn't really call a lifecycle — it's more like an exception handler — but it's still one of the most crucial parts of the component lifecycle.
This part is responsible for properly handling errors thrown during rendering, in the constructor, or in any other lifecycle method.
Proper error handling is very important for every React (and every other) application and always should be done carefully because it will save us tons of time and maybe even more good mood in the future.
We have two methods that can fire in this phase: static getDerivedStateFromError and componentDidCatch.
React static getDerivedStateFromError
React static getDerivedStateFromError is the method that is fired when a component throws an error.
We should use it to take the error as a param and update the component's state.
That way the UI can render a fallback showing that something went wrong.
React componentDidCatch
This method works a bit differently than getDerivedStateFromError because it takes two params: one is the error, and the second one is info, which shows which part of the application produced the error.
It's useful when we keep detailed logs: we can use this method to pass information to the logs that we, as developers or admins, monitor.
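Putting both methods together gives the classic error boundary. Note that logErrorToService is a hypothetical placeholder for whatever logging you use:

import React from "react";

class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError(error) {
    // Switch the UI into its fallback state
    return { hasError: true };
  }

  componentDidCatch(error, info) {
    // info.componentStack shows where in the tree the error happened;
    // logErrorToService is a hypothetical logging helper
    logErrorToService(error, info.componentStack);
  }

  render() {
    if (this.state.hasError) {
      return <h1>Something went wrong.</h1>;
    }
    return this.props.children;
  }
}

You would then wrap any subtree you want to protect: <ErrorBoundary><App /></ErrorBoundary>.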
React lifecycle hooks
All of the React component lifecycle stages help us handle specific moments precisely and perform the actions we need at each of them — but we don't always need that granularity.
In many applications, we only need to catch the moments when a component mounts, updates, and unmounts, without handling every small phase.
In this case, we can replace most of that with a useEffect hook.
useEffect hook explanation
The React useEffect hook can be used as a successful replacement for componentDidMount, componentDidUpdate, and componentWillUnmount.
It's named useEffect after side effects — all the actions we perform outside of the component instance.
We can fire useEffect like componentDidMount and componentDidUpdate:
useEffect(() => {
  // some logic here
});
In this case, the useEffect hook covers both: it fires when the component mounts and again after every update.
We can fire useEffect like componentDidMount only:
useEffect(() => {
  // some logic here
}, []);
In this case, the useEffect hook will start only the first time, like in the Mount phase.
We can fire useEffect like componentDidUpdate related to the one part of the instance only:
useEffect(() => {
  // some logic here
}, [name]);
In this case, the useEffect hook will run only when the name value changes.
We can fire useEffect like componentWillUnmount:
useEffect(() => {
  listen();
  // The returned function is the cleanup React will call later
  return function cleanup() {
    stopListening();
  };
});
In this case, the effect runs on init and starts the listen function, and when the component is about to unmount, React calls the returned cleanup function (it also runs before the effect fires again).
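Putting it all together, here is a hedged sketch of a functional component that covers mount, guarded update, and cleanup in a single effect (the /api/users URL is again a placeholder):

import React, { useState, useEffect } from "react";

function UserProfile({ userId }) {
  const [user, setUser] = useState(null);

  useEffect(() => {
    let cancelled = false;

    // Runs on mount and whenever userId changes — the hook equivalent of
    // componentDidMount plus a prevProps-guarded componentDidUpdate
    fetch(`/api/users/${userId}`)
      .then((response) => response.json())
      .then((data) => {
        if (!cancelled) setUser(data);
      });

    // The cleanup replaces componentWillUnmount and also runs
    // before the effect fires again for a new userId
    return () => {
      cancelled = true;
    };
  }, [userId]);

  return user ? <h2>{user.name}</h2> : <p>Loading…</p>;
}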
Summary of react lifecycle methods
Congratulations, now you know all about the react component lifecycle!
You’ve learned about how to use hooks to simplify your work with lifecycle as well.
It's a long lesson, full of knowledge, but I'm sure your code will now look better, and you'll have a real in-depth understanding of how React components work during their lifecycle.
It’s definitely not something that every beginner should know, but experienced developers must have this knowledge to avoid future mistakes.
Let us know in the comments what topic is the most interesting for you and what you would like to learn now!
Thanks for reading,
Radek from Duomly | https://medium.com/duomly-blockchain-online-courses/check-that-react-lifecycle-methods-with-hooks-tutorial-before-building-your-next-app-56dd0a868270 | ['Radoslaw Fabisiak'] | 2020-12-17 09:18:08.218000+00:00 | ['React Component Lifecycle', 'Coding', 'Programming', 'Frontend', 'React'] |
Role Of Energy Efficient Pumps in making Sustainable Economy | Role Of Energy Efficient Pumps in making Sustainable Economy
Water pumps are responsible for heating and cooling in buildings, treating and distributing water, and generating power — yet they are often overlooked in conversations around smart or sustainable cities.
The network of water supply pipelines is one of the critical infrastructure needs for any city. Taking water to every corner of the city is one of the most important responsibilities of any municipal authority. Unfortunately, cities are losing up to 60% of pumped water due to poorly managed water networks, and a total of 80% of all wastewater returns to nature without being properly treated. This has further led to modern challenges in the form of huge capital requirements, providing adequate inter-state transmission system (ISTS) and distribution infrastructure, and developing suitable land resources. But if we begin to use the same water multiple times, with appropriate treatment processes and the implementation of intelligent water solutions, we can transform and evolve at a faster pace, managing resources efficiently.
Water and Energy Are Correlated Resources
Water and energy are fundamental components for all human needs, and they are correlated. Just as producing energy consumes water, treating and distributing water also requires energy. Water is used in all phases of energy production and electricity generation, for various purposes at different stages. Energy is required to extract, convey, and deliver water for diverse human uses, and then again to treat wastewater before it is returned to the environment. Several current trends further increase the urgency of addressing the water-energy nexus in an integrated and proactive way. First, climate change has already begun to affect precipitation and temperature patterns across the world. Second, the world's population is rapidly increasing, and so is urbanization, which creates differentiated and heavy loads at different usage points. Third, the introduction of new technologies in the energy and water domains could shift water and energy demands. Finally, developments in policies addressing water rights and the water impacts of energy production are introducing additional incentives and challenges for decision making.
Integral part of Pumps in Water-Energy Nexus
Water pumps are responsible for heating and cooling in buildings, treating and distributing water, and generating power, yet they are often overlooked in conversations around smart or sustainable cities. Intelligent water solutions can help reduce losses. Governments, development agencies, and the private sector are exploring and implementing solutions to improve access to, and the sustainability of, water. While other renewable energy options are being adopted, the private sector is showing a keen interest in developing efficient solar pumping solutions.
About 10% of the world's electricity is consumed by pumps for various applications. It is vital to save energy with intelligent and smart water solutions and bring that share of electricity consumption down to 5%. The agricultural sector alone consumes approximately 80 percent of the total supply. Decentralized solar water networks could be the answer to both water and energy issues in rural communities.
Pumps running on Renewable-energy
As the world envisions to move towards a greener economy, it is critical that we all start working towards a sustainable plan that would jumpstart the future. To address the growing energy needs and to reduce carbon emission, the Indian government has reimagined the national energy supply and demand management with a higher focus on solar. This was evident in the recent Union budget; several incentives and schemes were introduced for farmers. The government has set a goal to reduce its current dependency on fossil fuel by 40%, which highlights the commitments made in solar energy as a source for the long-term.
With a rejuvenated outlook on renewable energy from the government and private stakeholders in terms of investment and initiatives, the solar pump industry is projected to grow at a CAGR of over 27% during the period 2018–2024. Technological innovation and product democratization will determine the future of this industry. The Solar pump schemes via the flagship Government program PM KUSUM in the Agri sector and the Jal Jeevan mission for the piped water supply in rural communities is bound to drive the demand for solar pumps over the next four years.
Considering India's tropical climate, solar pumps can be employed to provide a reliable water supply for irrigation, drinking water, and ancillary activities in remote locations. Dependence on grid-based electricity can be brought down with the intervention of solar pumps, ultimately giving farmers and rural communities across the country a sustainable option. This shift increases sustainability while improving cost and operational efficiency. Technology-driven solar solutions ensure that farmers can operate these pumps without frequent downtime or maintenance. With the right backup systems, the farmer can use the energy for other purposes — lighting, cold storage, etc. — when the field doesn't require water and the pumps need not run.
Digital transformation to drive smart pump industry
Innovative technologies have made positive strides in the water industry — augmenting the existing water infrastructure as well as contributing to improving the water accessibility and recycling while simultaneously reducing consumption and wastage.
Increasing urbanization is contributing to significant amounts in terms of energy consumed — the global cooling demand alone has doubled in less than 20 years. This growth in energy consumption is a major driver of climate change. There are plenty of intelligent solutions available; with more energy efficient pump solutions, we can reduce CO2 emissions while optimizing energy consumption of the pumps used in the process of cooling in the buildings.
By deploying smart pumps, it becomes easier to understand the specific water requirement and thus save this critical resource. In the agricultural sector, an IoT-enabled pump can give farmers full control to operate the entire system from a mobile phone — setting the quantity of water to be pumped and the watering schedule, and getting real-time reports on the critical health parameters of the pump system.
For example, KSB Pumps with intelligent and smart technology has acted as a change agent. The solution enables real-time monitoring, remote control, system optimization, fault predictive and preventive maintenance achieve the highest level of performance and savings both in cost and energy. These smart pumps automate, understand the specific water requirement, optimize the entire system and thus save this critical resource.
Creating a circular economy
In order to utilize water efficiently, there needs to be a shift from a linear 'use and dispose' approach to a circular 'use, treat and reuse' approach. Water can be reused several times before it reaches its destination. For example, water used in showers and kitchens can be reused to water plants or wash cars, thereby reducing freshwater usage.
Adoption of intelligent water solutions is leading us to use water efficiently, with a greater focus on avoiding the overuse of both water and energy.
How to Make it Through Summer Holidays Without Losing Your Mind | Photo by Amy Patterson on Pexels.com
Our families spend the whole school year immersed in our individual worlds, each kid has their own class, their own friends, their own issues and we only spend evenings and weekends together as families. But summer vacation throws everyone together and things can get out of hand. Sure, the first few weeks of summer holidays are always great because the kids are jacked to be out of school and find ways to keep themselves busy. But as July comes to a close, your summer soundtrack rapidly becomes filled with, “Mommmmm, Taylor took my [insert coveted item here],” or “Daaaddd, I’m bored,” or every parent’s absolute favourite, screaming, shouting, and slamming doors.
Photo from Snapwire on Pexels.com
So, how we do survive the summer without ripping our hair out or screaming at our kids? We’ve got our top three tips to help you get through it.
1. Let them be bored.
Okay, so they’ll drive you bonkers for a bit, but letting kids be bored is good for them. In our world of constant engagement and endless stimulation, our kids are continually looking for more, they want to be entertained perpetually (and it’s exhausting, right?). But taking away screen time and devices and letting your kids figure out what to do with that boredom gives them time to hear themselves, to know themselves, and to be themselves. In doing so, they’ll power their own imaginations, work on critical thinking, and build some independence. Psychology Today says that boredom fosters the “the ability to self-play, create, generate, self-engage [which] is still a profoundly important skill in the development of a healthy human being.” The bottom line: let your kids be kids, you don’t have to entertain them and they’ll be better for it.
Photo from Pexels.com
2. Take advantage of free activities and community resources.
Get moving — your kids’ natural impulse is to move, so keeping their bodies busy will be fun and will tire them out too! From parks programs to festivals, bike paths to public spaces, free activities are abundant in summer and give you a chance to get out of the house with (or without) your children. Today, all it takes is a quick Google search (“Free things to with kids in [your city] this summer”) to find free programming and fun places to go. You can also use Google maps to find bike trails, spray parks, and playgrounds in your city and set up a bike tour and picnic with your kids. On rainy days, check out your public library’s programming, have family game day, build indoor forts, or have a reading day.
3. Don’t overschedule.
When summer is so brief, it’s tempting to fill up your days with activities, camps, day trips, and family vacations, but when we do this, we quickly get burnt out (not just parents, kids too). Back to the boredom tip, free play and free time to relax and hang out around the house are crucial to the whole family’s wellbeing. During the school year, kids are constantly busy, free time is scarce, so summer isn’t just about freedom, it’s also about recuperation. Schedule some activities, of course, but don’t go overboard because unstructured play is important for emotional and intellectual development. Our kids need time to relax, explore, and take risks and they can’t do that if they’re constantly shuttled from one place or activity to the next.
Photo by Dazzle Jam on Pexels.com
If all else fails, just ditch the guilt and stick to your guns. And remember, your kids just want to feel free and loved and you can 100% give them that. | https://medium.com/golife/how-to-make-it-through-summer-holidays-without-losing-your-mind-9523302a97f7 | [] | 2019-07-24 17:02:43.269000+00:00 | ['Parents', 'Education', 'Summer', 'Parenting', 'Boredom'] |
Andaman photoshoot
Capturing memories with your soulmate as you begin a new journey in life is what every couple desires. But if you want to make this a unique, once-in-a-lifetime experience for both of you, go for a pre-wedding shoot in the Andaman Islands. Can anything beat the natural beauty of these islands, with their white-sand and blue-water backdrop? Plan your pre-wedding shoot on the stunning beaches of the Andamans for a fun-filled, memorable time.
Aside from producing an extraordinary photo album, a pre-wedding shoot also serves as a delightful opportunity to spend time together before your big day. It makes a tempting romantic getaway for a couple of special reasons: one is recording an extraordinary diary in pictures and videos; the other is that you get to spend quality time with your partner and strengthen your bond even more before formally tying the knot. You might even call it a 'pre-wedding vacation'. Doesn't that sound exciting? The Andaman photoshoot also makes you both familiar and comfortable in front of the camera, capturing those attractive poses and expressions for your photos.
The pristine surroundings of Havelock Island make it an ideal location for a pre-wedding shoot: turquoise ocean waves; a vast blue sky with puffy white clouds; dense tropical greenery; water at your feet; beautiful sunsets and equally lovely sunrises; thousands of stars above you at night. All in all, it gives you countless opportunities to pose creatively for each shot. You can click a few photos in a kayak and then shift to a cruise; dive under the sea for scuba diving or fly through the air while parasailing; ride a jet ski, swim together, build a sandcastle, or feel the soothing breeze that brings joy to your face and your partner's. There are no limits to those wow moments on Havelock Island — they all lead to the ultimate pre-wedding album.
4 Amazing Tips For Successful Lead Generation With LinkedIn Automation Tools | 4 Amazing Tips For Successful Lead Generation With LinkedIn Automation Tools Steve J Apr 7·4 min read
If you’re a B2B marketer, LinkedIn should be the most important part of your marketing strategy — it has become the largest B2B platform where you find and engage with niche-specific leads, makes meaningful connections, and grow your brand.
If you pay attention, using LinkedIn for lead generation would make a lot of sense to you. Unlike other social platforms such a Facebook and Instagram, prospects are much more eager to have professional conversations on LinkedIn.
It’s a professional network where you don’t just scroll down to watch videos — you’re there to talk about business opportunities.
Advanced LinkedIn automation tools have been a blessing for B2B marketers. These tools point them in the right direction, helping them find quality leads and even turn those leads into customers with personalized messages.
Now you can probably see where I am going with this: Here are some useful tips on how you can use the best LinkedIn automation tools to generate leads.
#1: Optimize Your Profile 👏🏼
First things first — before you even start using LinkedIn automation tools to connect with people, make sure your profile is good enough to trust.
The worst thing you can do to yourself is running an impressive campaign while having a bad profile. You know what will happen? Prospects won’t accept your request or respond to your messages even though your targeting was highly precise and you were using the best LinkedIn automation tools the right way.
Here are a few things that you can do to leave a good first impression:
Add a professional profile picture and background
Add a professional profile picture (not something with a Snapchat filter, and not a stock photo of a business person). It's a good idea to add a picture that shows something you love: fashion, books, art, etc.
Headline & Summary
When adding a headline, make sure you add the most relevant keywords. This also goes for your summary.
Just like you are using advanced LinkedIn automation tools to find prospects, they are also using LinkedIn automation tools to find reliable vendors. So give them something in your profile that earns their trust.
#2: Add Connections (but within limits)
Now that you have a great profile, you can confidently use the best LinkedIn automation tools to run outreach campaigns to add connections.
However, this can be a bit challenging — sending too many connections in a day can get your account blocked.
If you are using the latest LinkedIn automation tools to send connect requests but you haven’t set any limits, it can get you in trouble.
This is because sending too many connection requests is considered spam. You can avoid this by using the best LinkedIn automation tools that offer a 'warm-up' feature.
Here is how you can do that:
Use LinkedIn automation tools to set daily limits for:
Profile Engagement
Connection Requests
Messages
And so on…
Some of the top LinkedIn automation tools such as LinkedCamp have an inbuilt safety limit feature to keep a check on the number. Thus, you don’t have to do it manually as these tools can do it for you.
#3: Follow Up
There is a misconception that LinkedIn lead generation is all about numbers — Gathering as many leads as you can and hoping that some of them will work is like fooling yourself.
If you’re sending messages to leads once and leaving them behind if they don’t respond — you’re doing it wrong.
You're missing an important step: the follow-up. As a B2B marketer, you must have an idea of how important it is to follow up. When you follow up with a lead faster, your chances of converting them into a sale are 9 times higher.
But many salespersons don’t recognize the importance of follow-up and leave hundreds of great opportunities at the table.
It might be awkward for you to keep pressing someone once they have shown no interest. Don’t worry! There are some great LinkedIn automation tools that can do it for you.
You can do this easily with LinkedCamp, sending personalized messages and follow-ups at regular intervals. This advanced LinkedIn automation tool is easy to use: it automates messages and follow-ups, sending them automatically after a set time period.
#4: Generate Leads From Other Sources
Using LinkedIn automation tools LinkedIn doesn’t mean there is no other source or channel for lead generation.
Your leads could be anywhere on the internet, on any platform and you can use different means to find them.
However, the place where you can certainly find leads without hassle is LinkedIn. Analysts and strategists have suggested that LinkedIn could easily change the game for marketers if they know how to use the best LinkedIn automation tools properly and what strategies to apply while using them.
Good luck! | https://medium.com/@stevejohnsonstories/4-amazing-tips-for-successful-lead-generation-with-linkedin-automation-tools-61aee0bd0adc | ['Steve J'] | 2021-04-07 12:27:11.841000+00:00 | ['LinkedIn', 'Technology', 'Automation', 'Lead Generation', 'Tips And Tricks'] |