title | text | url | authors | timestamp | tags |
---|---|---|---|---|---|
As You Will | How gently you rock the cradle O Sire,
Such deft rhythm like a sonorous hum
Lulling my cries into a pleasant dream,
All promptly done after casting me in fire!
How sweet Thy gestures of hope to heart,
Arriving like cool rain in drenching waves
That our parched tongue in time saves,
After unerringly turning our feet to desert!
How lofty Thy muse’s subtly grand flights
Outwinging imagination’s free ranges
To pour some heaven-ink on our pages,
Only ’cause Thou didst rob all our sights!
How many limbs are Thine to embrace
Us upon the forlorn roads of Night
Where no fond familiar face is in sight,
After Thou decreed us to loneliness!
How many splendours are Thine really?
The archangels gossip to our eager ears
Of rose and nectar and heavenly apsaras,
But these accounts I surmise are not of Thee!
How many odes to Thy idylls sung
The marbled halls with ruby and amethyst,
Where souls aspire to dwell in content,
All because by Thee their hearts were wrung!
These fistful of words I add to Thy horde,
Do as you will O most sought Beloved. | https://medium.com/inevitable-word/as-you-will-30b9651ccae1 | ['Mahesh Cr'] | 2020-12-26 15:35:49.470000+00:00 | ['Yoga', 'Poetry Writing', 'Poetry On Medium', 'Poetry', 'Sri Aurobindo'] |
New: Deeper insights into each Transaction | Partners can now see details into their customers’ journey as they make purchases through Safepay
Transaction Events
Background
Earlier this year, in May, we launched Safepay as a way for merchants in Pakistan to start accepting digital payments for their products and services from customers all over the world. In order for merchants to stay informed of all of their payments flowing through Safepay, we also gave merchants access to a dashboard that shows each transaction as a line item. Each transaction is also paired with its current state, which allows merchants to detect if their customers are having issues while trying to pay. Every transaction flows through the following state machine:
TRANSACTION_STARTED: triggered as soon as a customer clicks on the Safepay Checkout Button and a transaction record is initialized in the system.
TRANSACTION_ENROLLED: triggered as soon as a customer submits their credit card number to our system and the payment network detects that the card is enrolled in 3D authentication. Customers are then redirected to their bank to confirm the payment through 2FA.
TRANSACTION_AUTHORIZED: triggered as soon as a customer's card is successfully authorized by their bank.
TRANSACTION_ENDED: triggered as soon as a customer's authorization is captured.
TRANSACTION_CANCELLED: triggered if the customer cancels the payment or is unable to complete it.
TRANSACTION_REFUNDED: triggered if a merchant performs a refund for their customer.
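To make the flow easier to picture, here is a minimal TypeScript sketch of this state machine. The state names come straight from the list above; the transition map and the helper function are illustrative assumptions, not Safepay's actual implementation.

// Illustrative sketch only: states are taken from the list above, transitions are assumed.
type TransactionState =
  | "TRANSACTION_STARTED"
  | "TRANSACTION_ENROLLED"
  | "TRANSACTION_AUTHORIZED"
  | "TRANSACTION_ENDED"
  | "TRANSACTION_CANCELLED"
  | "TRANSACTION_REFUNDED";

// Which states a transaction may move to from each state (assumed from the descriptions).
const TRANSITIONS: Record<TransactionState, TransactionState[]> = {
  TRANSACTION_STARTED: ["TRANSACTION_ENROLLED", "TRANSACTION_CANCELLED"],
  TRANSACTION_ENROLLED: ["TRANSACTION_AUTHORIZED", "TRANSACTION_CANCELLED"],
  TRANSACTION_AUTHORIZED: ["TRANSACTION_ENDED", "TRANSACTION_CANCELLED"],
  TRANSACTION_ENDED: ["TRANSACTION_REFUNDED"],
  TRANSACTION_CANCELLED: [],
  TRANSACTION_REFUNDED: [],
};

// Guard a state change before recording it for a transaction.
function canTransition(from: TransactionState, to: TransactionState): boolean {
  return TRANSITIONS[from].includes(to);
}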
What we’re releasing
Today, we’re excited to release a new state for transactions and a bunch of improvements and features to our dashboard that give merchants more knowledge and power to control their payments.
Before today, merchants weren't sure where their customers were dropping out. They had no insight into the problems their customers were facing with declined transactions and resorted to contacting Safepay Customer Support for more knowledge on these issues. Furthermore, at times customers would have their card authorized but would not be able to complete the payment due to settlement issues with their banks. This would result in "authorization holds" on customers' cards, causing them to panic, thinking that money had left their accounts even though they were unable to complete the purchase.
Today, merchants can log in to their Safepay dashboard and see a few new things.
On the Payments page, merchants can now see an event feed for every state a transaction goes through, and for each event Safepay now shows you whether the event was successful or not.
Transactions that were Authorized but never Settled can now be "Reversed", either through the Payments page or the Payment Details Page. Reversed transactions now have the state TRANSACTION_REVERSED.
The Payment Details Page now shows a Transaction History with appropriate Reason Codes and Descriptions mapping each Reason Code to a human-readable message. These reasons can be communicated to customers if they are having any issues while processing payments.
Each event now comes with its own Request ID that can be communicated to Safepay in case of any issues with the particular transaction.
Why we’re releasing this
Our goal is to empower merchants with the right tools and software to enable them to scale their businesses and sell online through a simple and secure interface. Empowerment comes with providing the right knowledge and the right tools to execute on that knowledge.
Starting today, we are providing merchants with information related to both transactions that were successfully processed and those that were unsuccessful. With double the knowledge, our hope is that merchants can help their customers convert unsuccessful transactions into successful ones and increase their revenue.
Conclusion
Stay tuned for Part II of this series that provides details on how to make use of these new features. If you have any questions or concerns, or come across something that doesn’t look right, please reach out to us at [email protected] or post a message to our community for everyone else to see.
Part II of this series can now be found by clicking here. Read on to understand how to use transaction events for your business. | https://medium.com/safepay/new-deeper-insights-into-each-transaction-4e451a5c6158 | ['Ziyad Parekh'] | 2019-09-02 19:52:31.791000+00:00 | ['Payments', 'Pakistan', 'Startup', 'Dashboard'] |
Buenas Prácticas Flutter: De una idea a un MVP 💡📲 - Parte I (Colores) [Flutter Best Practices: From an Idea to an MVP - Part I (Colors)] | | https://medium.com/@christian-llansola/buenas-practicas-flutter-parte-i-8bd046511991 | ['Christian Llansola'] | 2020-12-06 20:34:16.759000+00:00 | ['Web Development', 'Mobile Development', 'Flutter', 'Mobile Apps'] |
Hey, are you looking for a resort in Saharanpur? | End your search here: Pench-O The Resort provides a truly memorable experience. One of the best resorts near Dehradun and Saharanpur, and within 200 km of Delhi, it is all about surrendering oneself to comfort, luxury and enthralling activities. It's all about entertainment, entertainment and just entertainment! The resort is strategically designed and built by JDA Infra Limited. Pench-O The Resort is a renowned wellness property that extends a distinctive flavor of hospitality, offering top-notch extravagance while remaining profoundly embedded in its local heritage. An eco-friendly resort set amid beautiful nature, it is blessed with pleasant weather throughout the year. Located on the Delhi-Dehradun Highway, the resort suits any type of group, be it family, corporate, friends, a wedding or a honeymoon getaway, and is an ideal, serene location to spend time with your loved ones. It also serves as a luxury retreat for corporates. Away from the honking of vehicles, this resort is a perfect pick for overwhelmed tourists. The serene location, ample open space, lush gardens, modern youthful vibe, tree house, open MP3 screen and an expansive, vibrant pool make it an exciting destination for people of all ages. | https://medium.com/@isolssaurabh/hey-are-you-looking-for-resort-in-saharnpur-7d47353958f6 | [] | 2021-12-17 11:59:04.202000+00:00 | ['Events', 'Resorts', 'Photography', 'Hotels', 'Marriage'] |
How to cook Baingan ka Bharta in Punjabi Style | Ingredients
Brinjal (large) — 500g (2)
Onion (finely chopped) — 2
Tomato (finely chopped) — 400g (4)
Boiled peas — 100g
Green chillies (finely chopped) — 4
Ginger garlic paste/finely chopped — 1tbsp
Mustard oil
Salt
Red chilly powder
Garam masala
Turmeric (optional)
Coriander leaves
Procedure
1. Roast and clean the brinjal (15 mins)
Optional: Brush some oil on the brinjal before roasting
Roast the brinjal on the gas stove until it is like the image above. After removing it from the stove, put it in water and remove the skin.
Not punjabi style: Peel, cut in small pieces and boil with 1/2 cup water instead of roasting (won’t taste like roasted at all)
2. Cooking (30 mins)
Heat the mustard oil to the smoking point and turn off the gas so that the onions don’t suddenly burn.
Put the onion in the pan and turn on the gas. Saute the onions until they are translucent. Do not let them go brown.
Add ginger garlic paste and saute some more till it is no longer raw.
Add tomato and let it cook until some oil starts separating. This may take 15–20 mins.
Add the dry spices.
Add the brinjal and mash it, add boiled peas and roast it on low flame until some oil starts separating. Might take around 10 minutes.
3. Garnish | https://medium.com/@punjabimom/how-to-cook-baingan-ka-bharta-in-punjabi-style-297d43dc426d | ['Punjabi Mom'] | 2020-05-16 12:04:41.545000+00:00 | ['Cooking', 'Food', 'Punjabi', 'Moms'] |
AZTEC vs PGC or the quest for confidential transactions | Confidential transactions are transactions where the transferred amounts are “hidden”, however the validity of the transaction still remains publicly verifiable (i.e. no money is created out of thin air). Although they do not provide anonymity per se, unlike Monero and ZCash, that provides confidentiality AND anonymity, confidential transactions are an important first step in the right direction and in some use cases it is not even desired to hide the identity of the parties, rather just transacted amounts to retain them as business secrets.
Without doubt, the flagship of these proposals is the AZTEC protocol (Anonymous Zero-Knowledge Transactions with Efficient Communication), created and invented by Zachary Williamson. First, Zac proposes a novel perfectly hiding and computationally binding commitment scheme, which is tailored to admit efficient range proofs. Afterwards he defines the Joinsplit protocol and a NIZK (non-interactive zero-knowledge) proof system to prove the correctness of Joinsplit transactions.
What are Joinsplit transactions? AZTEC represents money as notes, one might think of them as UTXOs (unspent transaction outputs). In a Joinsplit transaction input notes are destroyed and output notes are created in equal amount, in a way that the values of these notes remain encrypted. More precisely, both input and output notes are commitments to the value these notes represent. A novel NIZK proof system is introduced in AZTEC to prove that the commitments in a Joinsplit transaction are well formed (the transaction originator can open the commitments) and the sum of input notes equal to the sum of output notes. Zac shows in the AZTEC paper that the proposed proof system achieves perfect completeness, special soundness and special honest verifier zero-knowledge in the random oracle model under the q-Strong Diffie-Hellman (SDH) assumption.
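As a generic illustration of what such a balance proof establishes (a simplification; AZTEC's actual commitment construction and proof system are more involved), the verifier ends up convinced that

\sum_{i=1}^{m} v^{\mathrm{in}}_i \;=\; \sum_{j=1}^{n} v^{\mathrm{out}}_j

holds for the hidden note values, and that the prover can open every input and output commitment, without any individual value being revealed. Combined with range proofs showing that every value is non-negative and bounded, this is what prevents money from being created out of thin air.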
AZTEC's efficiency comes at a price. It relies on a trusted setup, whose protocol details are not public yet, although I encourage anyone to participate in the trusted setup once the MPC date and protocol details are known. Basically, in the trusted setup, participants need to create Boneh-Boyen signatures on all the numbers in the interval [0..2³²-1] in a way that the secret signing key is additively shared among participants and no participant knows the secret key in its entirety. The number 2³²-1 is defined in the protocol as the highest representable AZTEC note denomination.
When It's Guaranteed To "Go Viral", Run | Let us first address the elephant in the room: it is important to keep in mind that not every campaign will go viral. Marketing agencies that tell you it is guaranteed to go viral are simply overselling themselves and you should not fall for it. It is, however, not impossible to increase our chances of having a viral and successful campaign by coming up with creative ideas and strategies that speak to the personalities of our audiences.
While publishers are gatekeepers of the content we produce and of how it is distributed to our audiences, we are also trying to connect with real people, and once the content is produced and released you cannot really control the outcome of the campaign or people's response to it. In this article, we will look at two campaigns that started out on a good note and ended terribly after some backlash.
1. UK Government #EatOutToHelpOut Campaign
The UK Government came up with a campaign designed to help the failing food and drink sectors during consecutive and extensive lockdowns, and coming from the government, a body in a position of power, it was striking to see contradictory campaigns from one source. The UK population described it as "misleading and contradictory". The campaign pledged to give diners a 50% discount on food and non-alcoholic drinks on Mondays, Tuesdays and Wednesdays throughout August in various restaurants and cafes, including fast food chains like Ed's Diner, KFC, Burger King and Pizza Hut.
Eat Out To Help Out — UK Campaign (stock images)
Inasmuch as this initially was a noble cause, there had been a previous campaign by the government, launched after news that being overweight makes Covid-19 symptoms worse. To the British public this translated as "the public should lose weight AND eat fast food to revive the country". This implication did not go unnoticed, and the public responded, scandalized, on different social platforms.
Brighton Journal tweet screen grab
UK Prime Minister Boris Johnson acknowledged that the Government was wrong and in a statement said “It was very important to keep those jobs going. Now, if it, in so far as that scheme may have helped to spread the virus, then obviously we need to counteract that and we need to counteract that with the discipline and the measures that we’re proposing.” In the next lockdown, the pubs and restaurants were closed and this campaign was put to a halt.
2. Four Seasons Total Landscaping & President Trump
On November 8th, President Donald Trump tweeted that his lawyers would be addressing reporters from the Four Seasons Philadelphia at 11 a.m.; however, the hotel management quickly tweeted back that it was Four Seasons Total Landscaping, which has no relation to the hotel. Unfortunately, the tweet came down within minutes, which in PR is considered a big mistake.
Four Seasons Philadelphia Hotel response tweet.
It turns out the event was to be held in the parking lot of a landscaping company, which responded in an amusing way that PR specialists called "pure gold".
Four Seasons Total Landscaping response to Donald Trump
Handling a Bad PR Disaster
a. Don’t delete any social media activity
Coward! A coward is what you will be called for erasing any evidence. On social platforms such as Twitter, which function as a kind of mob justice, it is highly likely that someone has taken a screenshot before you even delete the evidence, and chances are it will come back to bite you. The best thing to do as an individual is to own up to the mistake, or keep quiet and monitor the activity until you are ready to address it.
b. Appoint a response team
If you are a firm, company, business, political party or anything collective, it is good to have someone assigned to respond but also a compliance team to agree on the communication strategy should there be need for a statement to be released. If the communication is centralized, there are less chances that an individual could respond in a heat of the moment or out of emotions which might cause even more damage.
c. Laugh at yourself
Yup! You read that right. Laughing at yourself makes people sympathize and see that you have practically thought about all the trolls they could throw at you. If you look back at the Emily Crisp campaign referred to in our previous article, you will realize that their campaign went viral for reasons they certainly had not fathomed or planned for while coming up with their strategy.
d. Avoid defensiveness
Whatever the case, do not be rude, defensive or aggressive in responding to a campaign gone wrong. Be diplomatic, considerate, responsive and humble. On top of a campaign gone wrong, you can’t afford to get into a heated argument with your audience.
e. Move on
Social media moves on very fast, and so should you. Don't shy away from engaging your audience with more positive posts, deals if you're a business, and consistent, upbeat conversation with your online audience. Don't dwell on the disaster. Move on!
In conclusion, there is no such thing as a 100% guaranteed successful campaign. People have different feelings, emotions and interpretations of information. While some might engage positively, some might have mixed feelings or completely opposing ones, and we must always be open to all kinds of interactions and be ready to address them accordingly and as professionally as possible.
Hash.Pro Miner Mac Version is now out for your MacBook | Dear Miners,
Today we are excited to announce the launch of our Mac Version for the Hash.Pro Miner, now you can mine bitcoin on your Mac.
This new version offers better compatibility with your Mac.
With the Hash.Pro Miner account, you can track your device's status, use the mining calculator to estimate your earnings, withdraw your earned bitcoins, as well as buy our cloud mining contract. The essential feature of Hash.Pro Miner is its dynamic algorithm switching: the system will automatically switch algorithms to mine the most profitable coins to maximize miners' profits. All you need is a Mac and a stable internet connection; you can download the app for Mac HERE.
The daily mining profits depend on your computer's performance; we estimate that one computer with a 1080 Ti graphics card will mine about 0.7 USD in daily profits.
You can contact us via our Telegram channel or leave a message to [email protected].
Download Hash.Pro Miner now and start mining your first cryptocurrency!
Happy Mining! | https://medium.com/hash-pro-cloud-mining-official-blog/hash-pro-miner-mac-version-is-now-out-for-your-macbook-314cf57d0e2f | ['Hash.Pro Cloud Mining'] | 2018-12-21 08:16:43.613000+00:00 | ['Bitcoin', 'Cloud Mining Free'] |
Not All Here | | https://marcialiss17.medium.com/not-all-here-ccacc27a707e | [] | 2020-08-17 13:16:18.082000+00:00 | ['Mood', 'Humor', 'Comics', 'Life', 'Cartoon'] |
Coding Upgradable Smart Contracts | What is upgrading Smart Contracts meant for?
As developers, we always want to upgrade our code to introduce new features and fix bugs. Talking about blockchain, whatever is written on the blockchain is immutable, even smart contracts: once deployed on a blockchain network, we can't alter or modify them. In the past few years we have seen many attacks, like the DAO hack, re-entrancy attacks, and the Parity wallet hacks, cause the loss of millions of ether only because of small bugs in smart contracts. So testing a smart contract many times before deploying is essential to avoid these vulnerabilities. But what if we want to introduce new features to our deployed smart contract? Since deployed code is immutable, that's not possible, right? Somehow, there is still a way to upgrade our contracts. How? "A way of upgrading without actually upgrading." Seems funny? Give me a minute. You might be aware of the call diversion feature on your smartphone: once call diversion is activated for your mobile number with your preferred new number, every call that comes to your old number is diverted to the new one. An upgradable contract works the same way: your old contract handles the data and redirects every function call that comes to the old contract address to the newly deployed contract, i.e. the upgraded one.
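To picture the pattern, here is a rough TypeScript analogy of that call diversion idea (an analogy only: it shows plain call forwarding, not how an EVM proxy and its storage actually work, and the interface and class names are made up for illustration).

// Analogy only: a proxy keeps a pointer to the current implementation and forwards calls to it.
interface Greeter {
  greet(message: string): void;
  getGreeting(): string;
}

class GreeterProxy implements Greeter {
  constructor(private impl: Greeter) {}

  // Comparable in spirit to the `web3 contract upgrade --to <new address>` step later in this post.
  upgrade(newImpl: Greeter): void {
    this.impl = newImpl;
  }

  greet(message: string): void {
    this.impl.greet(message); // every call is forwarded to whichever implementation is current
  }

  getGreeting(): string {
    return this.impl.getGreeting();
  }
}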
For good reason, GoChain provides a CLI tool for upgrading contracts. So what are you waiting for? Let's go.
In this demonstration, I'm going to upgrade a simple greeter contract from v1 to v2. In version v1, there is no getter method to return the greeting. In v2, a getter method is added as a bug fix or new feature, whatever you want to call it. Code available here.
Environment
Ubuntu Desktop With gochain web3 library.
Pre-Requisites
1.gochain Web3 library
curl -LSs https://raw.githubusercontent.com/gochain-io/web3/master/install.sh | sh
2. Gochain wallet and some GO-tokens.
Go to https://explorer.gochain.io/wallet/create and store your private key somewhere safe.
For doing any operations on the network, you need tokens. In order to get testnet tokens, join their official Telegram channel https://t.me/gochain_testnet and ask them to provide some testnet tokens by pasting the wallet address you created earlier.
Steps
1. Environment Setup
a). Export private key to your Env. variables (require root user privileges )
sudo -s
export WEB3_PRIVATE_KEY=0x...
Check if your private key is correct by running the command $ web3 myaddress. If it returns the wallet address that you created in the pre-requisites, then you are good to go.
b). Use the Go-Chain testnet
$ export WEB3_NETWORK=testnet
2. Deploying an Upgradeable Contract
a). Compiling Contract
copy greeter.sol file to your directory, then
web3 contract build greeter.sol
b). Deploying contract
web3 contract deploy --upgradeable greeter.bin
Once deployed, you will get a contract address; add this address to your env. variables:
export WEB3_ADDRESS=0x...
Deploying an upgradable contract actually deploys two contracts: the original greeter contract and a proxy contract which will redirect calls to the upgraded contract. Check where our proxy contract is redirecting calls and storage with this command:
web3 contract target
c). Writing Transaction (Greeting)
web3 contract call --abi greeter.abi --function greet "Hi Guys, This is from Old Contract"
Once the transaction went through, we successfully added a greeting on the blockchain. But there is still no way to return the added greeting; that's why we are trying to upgrade the contract with the new code.
3. Upgrading Contract
Copy greeterv2.sol to your directory and then,
web3 contract build greeterv2.sol
web3 contract deploy greeter.bin
Once deployed, it will return the newly deployed contract address. Using this address, we can upgrade our old contract greeter to greeterv2. Run the command below to finish the last step:
web3 contract upgrade --to 0x... # add your newly deployed contract address here
Now you can call the new contract at the same address. In the old contract, there was no getter function for the greeting variable; in the new contract, we added the getter function getGreeting . You can call it using this command:
web3 contract call --abi greeter.abi --function getGreeting >Hi Guys, This is from Old Contract
4. Pausing and Resuming Contracts
Upgradeable contracts also include the ability to pause and resume execution. If you find any bug in the contract, you can simply pause (and resume later) the contract with these commands:
web3 contract pause
web3 contract resume
Once paused, if you call the function you will get an error unmarshalling the empty output string until you resume the contract.
BOOM… | https://medium.com/coinmonks/writing-upgradable-smartcontracts-883b3a3d29a3 | ['Salman Dabbakuti'] | 2020-08-26 13:34:26.330000+00:00 | ['Upgradable Contracts', 'Dapps', 'Smart Contracts', 'Ethereum', 'Blockchain'] |
Why you should start using short URLs in your posts | Photo by Luke Chesser on Unsplash
When you're looking at social media posts, people are sharing many URLs, blog posts, etc… But I feel that people forget to analyse these clicks. When you're using the non-premium LinkedIn, you get insights into how many people have seen your posts, but no indication of any clicks. When you're using Twitter, for example, you get a much more in-depth analysis of the number of clicks, likes, interactions, and so on.
You might be thinking: "We have Google Analytics on our webpages for that". And that's fine, but what if you have no insight into those analytics? Let's say you want to get to know your audience by posting content on different topics. You might receive many likes from friends or colleagues, but you don't know if anyone is actually opening the content. That's where short URLs come in!
I’m sure that everyone knows what a short URL is, but here is a short explanation. You take a long URL, like www.this-is-a-very-long-URL-and-not-so-interesting-to-post-on-social-media-especially-when-your-tweet-can-only-be-280-characters.com. This long URL gets converted into an URL that looks like https://cutt.ly/XXXXXX, much shorter right? That’s the only thing it does. Or so you might think.
When you’re using platforms like Cutt.ly, you get much more than a better-looking URL. You get a dashboard for all the data that your links collect.
In the graph below, you can specifically see in what timeframe people are clicking your links, from which devices they are engaging with your content, and what their language is. You even get their estimated location.
Screenshot by Author
Screenshot by Author
Now, why is this so important?
In this world, where every application and every social media post fights for the audience’s attention, you have to be able to send your audience the content they want. This is not that important when you’re writing as a hobby, or not trying to build an audience. But when you are looking to engage with an audience and grow it, you have to give them what they want.
In my example, I can see that most of the people that clicked the link speak English. That confirms my choice to write my content in English. It's important to know what your audience likes. This is true in business, marketing, product development, and writing. It's probably true for every kind of product. You might have the craziest idea that solves all your problems, but if no one has the problems you solved with your idea, no one will use your solution. And certainly not pay for it.
Of course, it’s important to analyse this kind of data, but knowing who opened the link is not enough. You have to combine all your analytics, but this is the first step to some amazing insights without much effort. | https://medium.com/@joachimzeelmaekers/why-you-should-start-using-short-urls-in-your-posts-b6acb9155d9d | ['Joachim Zeelmaekers'] | 2020-12-26 17:24:25.995000+00:00 | ['Analytics', 'Data', 'Short Story', 'Programming', 'Content Strategy'] |
Getting started with Visualizations in Python | Visualizing anything gives us a better understanding, a holistic picture of things, be it the monthly turnover of your company, the increasing temperature, or even tense situations where you could have outrun everyone else; believe me, the latter one works.
The aim of art is to represent, not the outward appearance of things, but their inward significance.
— Aristotle
Visualization is an art; it does not just represent the data better, it actually pictures the trends, the patterns and other signs that are hidden deep within the data and can't be figured out by just staring at it.
It's one of the crucial arts that need to be mastered by anyone who wants to pursue Data Science. Without it you'll be incomplete; your assignments and your tasks will be incomplete, especially when you turn up to someone to explain: why did this happen?
Let’s have a look at some of the most common plots;
Hold your Horses First.
Shouldn't we first look into what it will take to plot?
There are so many open source libraries available at your disposal; what it actually takes is the will to do it.
Don’t kid me now!
Obviously, you need a laptop and Jupyter Notebook, and maybe some Python basics too.
Let’s get started now. | https://towardsdatascience.com/getting-started-with-visualizations-in-python-945d92b85823 | ['Pranjal Gupta'] | 2019-05-06 13:34:54.562000+00:00 | ['Data Visualization', 'Python3', 'Data Science', 'Plotly'] |
Silly Conversations | | https://medium.com/chalkboard/silly-conversations-a64ce638d8cd | ['Priyanka Srivastava'] | 2020-11-06 19:32:02.346000+00:00 | ['One Line', 'Relationships', 'Writing', 'Sisters', 'This Happened To Me'] |
Youku: Supply Chain of Long Video Content | Catch the replay of the Apsara Conference 2020 at this link!
By AliENT
At the sub-forum on Smart Entertainment Industry Practices at the 2020 Apsara Conference, Qian Kai, Senior Technical Expert of the Youku Technology Center Content Platform, talked about Youku’s digital practice in the supply chain of long video content.
The following content is a summary of his presentation.
The core process of Youku's long video business is to obtain a large amount of high-quality purchased and self-made content and distribute it to users through platform products. As shown in the following figure, Youku's content platform consists of three modules: content supply, content management, and content delivery. Among them, content management and content delivery are relatively mature because they are more digital and intelligent and have been integrated into platform products. Comparatively speaking, the digitization of content supply lags behind due to the features of the content production industry. There are many difficulties to be solved, and today I will focus on these problems.
The Production Process and Industry Characteristics of Traditional Films and TV Shows
The production process of a TV show includes script creation, preparation, filming, post-production, and content release. Each step may also contain several sub-steps. For example, the preparation period includes team building, determination of major creators, and scene selection. The post-production stage includes editing, voice dubbing, and special effect production. Therefore, an excellent TV series can have a production cycle of more than one year, with over 20 processes and costs of RMB 5 million for each episode. Therefore, the characteristics of a typical content production process are very clear: non-standard products, long production cycles, and large investments for a single project.
The Challenges of Content Delivery from Youku’s Perspective
The core challenge is still in content supply, which frequently causes delays and results in poor-quality content. According to statistics, 66% of content delivery is delayed, and 34% of content delivery is not up to standard. After reviewing these problematic projects in detail, we found the core issues.
1. Content evaluation relies heavily on human experience and lacks a structured and systematic evaluation system.
2. The production information is not online, and the project data is not collected in a unified manner, making it difficult to ensure project progress.
3. The platform lacks a unified content strategy that can be used consistently.
The Digital Supply Chain System of Content
Based on the preceding problems, our solution is to build a digital supply chain system of content. After researching most digitalization solutions of traditional industries, we found that the general digital transformation of enterprises can be divided into three stages. The first stage is to structure the basic elements of the industry. The second stage is to move the business process online. The third stage is to digitize the operation strategy.
Based on this idea, we also split the digitalization solution for the supply chain of long video content into three stages.
1. Establish a structured evaluation system to realize accurate evaluation of content production over the whole cycle.
2. Establish an information-based production management mechanism to strictly control the quality and progress of content output.
3. Build a unified content strategy mechanism to achieve supply and demand matching.
The Construction Process of the Content Supply Chain System
First of all, let’s look at the construction idea of the structured content evaluation system. The figure on the left shows the architecture of the entire evaluation system, which is divided into three layers from the bottom to top, including the data layer, capability layer, and scenario layer. The data layer is used to produce and store the structured data of production factors. The capability layer is mainly to construct single-factor evaluation capability and multi-factor combination evaluation capability. The scenario layer is mainly about the application scenario of content evaluation. It includes project approval evaluation, evaluation for major creators, and product acceptance evaluation.
The figure on the right shows an example of a project approval evaluation. First of all, evaluate a single factor, such as an actor, a screenwriter, and a director. Here, I take the evaluation of an actor as an example. The evaluation of actors includes their attraction to an audience and fan value. With the evaluation of these elements, the overall evaluation of the project can be given. The comparative data between each element and the corresponding benchmark element will also be given. It can be used as a reference for leaders to make decisions.
Next, I will introduce the technical solution for the production of structured data. As shown in the figure on the left, the system first obtains data from a platform, processes data through ETL, knowledge standardization, knowledge integration, and knowledge reasoning, to produce structured data. The produced data includes three parts: basic factor data, content understanding data, and quantitative index data. Among them, the most important data is still the basic factor data, including the core factor data, such as directors, screenwriters, leading roles, suppliers, and IP.
The figure on the right shows an example of data production. By following the example, we can produce data related to an actor. First, capture data from various sources, clean and merge the data through ODPS, then import the data to a database after manual review. By doing so, online services can be provided through the OLAP engine. With this data production system, more than 50 billion pieces of data can be processed daily, and 30 million entities and entity relationships can be accumulated. Among them, the accuracy of entity fusion can be more than 92%.
After obtaining the structured data, we need the evaluation capability. The core is the intermediate evaluation engine, which mainly includes the following capabilities:
1. Score a single element or multiple elements by using the rule engine.
2. Understand the script and video through AI technologies, such as CV, NLP, and multi-modal fusion.
Based on this evaluation system, our evaluation accuracy can be over 80%.
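As a rough illustration of how a rule engine might combine single-factor scores into one project-level score, here is a short TypeScript sketch; the factor names, weights and 0-100 scale are assumptions for illustration, not Youku's actual system.

// Illustrative sketch only: factor names, weights and the scoring scale are assumed.
interface FactorScore {
  factor: "director" | "screenwriter" | "leadActor" | "supplier" | "ip";
  score: number;  // single-factor score produced from the structured data layer
  weight: number; // relative importance of this factor in the evaluation scenario
}

// Combine single-factor scores into one project score (weighted average).
function evaluateProject(scores: FactorScore[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.weight, 0);
  const weighted = scores.reduce((sum, s) => sum + s.score * s.weight, 0);
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}

// Example: score a candidate project so it can be compared against a benchmark project.
const candidateScore = evaluateProject([
  { factor: "director", score: 82, weight: 0.3 },
  { factor: "leadActor", score: 75, weight: 0.4 },
  { factor: "screenwriter", score: 68, weight: 0.3 },
]);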
The second step is to build an information-based production management system. The core is to make information and processes online. Therefore, we sort out the core nodes in production management, including sourcing, preliminary screening, project approval, signing, preparation, filming, post-production, review, delivery, and broadcasting. All core nodes will be put online, and a standard mechanism of information production management will be established to ensure the quality and progress of content output. Most importantly, a complete notification and warning mechanism is required to ensure that project risks can be identified and handled promptly. When the entire production management is completed online, more than 20 teams will collaborate efficiently online and more than 100 risk points will be fully monitored.
The third step is to implement a unified content policy to prevent users from paying too much for the purchased content. The core is to solve the problem of matching the supply to the demand. The solution is to establish a strategic linkage mechanism based on user needs, which can be implemented throughout all teams. Then, each team can have the same goals and direction. The supply side can be matched with the demand side and the content ROI can be improved. From end-to-end, there are four policies for users, namely the distribution policy, broadcasting policy, reserving policy, and purchasing policy.
Let me introduce the reserving policy and broadcasting policy in detail:
1. By calculating the reserve for long-term content, the reserving policy can predict the content supply and demand for the next 1 to 2 years.
2. Through internal broadcasting and external broadcasting, the broadcasting policy can give a warning of a lack of video content and predict the supply and demand in a short period.
With these two policies, the supply and demand of the entire platform can be obtained. Short-term demand is met through short-term strategies, such as exchanging videos with other platforms and purchasing new videos. Long-term demand is met through long-term content production, such as producing self-made videos and purchasing new videos.
Summary and Prospects
I have demonstrated Youku’s practice in digitizing the content supply chain, including structured content evaluation, information production management, and digital content strategy. Our supply chain system has been operating for half a year, covering all TV series projects. Through the system, we have stopped more than ten risky projects and directly recovered hundreds of millions of economic losses. The system has achieved good phased results.
Looking into the next decade, the digital transformation of enterprises will become a major trend and bring a wave of technical benefits. At the same time, we believe that digitalization will bring changes to the upstream of content production.
Original Source: | https://medium.com/@alibaba-cloud/youkus-digital-practice-in-the-supply-chain-of-long-video-content-e2d114083143 | ['Alibaba Cloud'] | 2020-12-15 06:53:02.620000+00:00 | ['Alibabacloud', 'Multimedia', 'Supply Chain', 'Digital Transformation', 'Trends'] |
Gerry O'Neill: The High Frontier | The High Frontier is a documentary film subtitled The Untold Story of Gerard K. O'Neill. The film documents the life of Gerry O'Neill and his influence on modern space exploration, told through the eyes of his family, peers and members of the younger generation whose lives and careers were inspired by him (the so-called "Gerry's Kids"). Notably, Jeff Bezos has discussed Gerry's influence on his space ambitions and referred to Gerry's work in his high school valedictorian speech.
The documentary pays tribute to the life and work of Gerry O’Neill, telling the little known stories of the enormous impact he made on the modern world and the space industry.
Jeff Bezos pictured with Tasha O’Neill accepting the Gerry O’Neill Award in 2019
The Settlement of Space
Gerry O’Neill became interested in the possibility of humans living and surviving in outer space during his time teaching physics at Princeton University. O’Neill researched the possibilities and presented his findings on the futuristic idea of humans settling space in paper written in 1970.
O’Neill struggled for several years to get this paper published. Despite submitting it to several different magazines and scientific journals, including Scientific American, the paper was rejected many times over the period of four years by reviewers. In September of 1974, it was finally published in Physics Today.
During the time spent waiting for publication, O’Neill regularly lectured on the idea of space settlement at Princeton and at other universities in the United States. Many attendees of these lectures, including staff as well as students, became enthused by the ideas espoused by O’Neill. O’Neill believed that it would be possible to build self-sufficient and pleasant dwelling places for humans in space within two decades of writing his first paper, which he stated would solve many of the problems being experienced on Earth.
Gerry O”neill in 1981
NASA Studies
In 1974, O’Neill led a two-day conference attended by representatives of NASA and held at Princeton. Following the eventual publication of his paper The Colonization of Space, O’Neill held a much larger conference in May 1975 at Princeton on the topic of space manufacturing.
In June of the same year he led a study for NASA, which took place over 10 weeks and explored permanent space habitats. In subsequent years he would lead further studies at NASA Ames on space manufacturing. Throughout the late seventies, NASA supported the work of O’Neill with grants reaching as much as $500,000 per year. Despite this support, O’Neill became frustrated with the restrictions of government-funded research, caused by politics and bureaucracy.
In 1977, together with his wife Tasha, O’Neill established the Space Studies Institute at Princeton as a non-profit, private organization to fund and support research into space exploration, with a focus on developing the technologies required for human settlement and manufacturing in space. In the first year, this organization received almost $100,000 in funding from private donors.
Gerry O’Neill in the lab at Princeton University
Particle Physics Research
In addition to his interest in space exploration, Gerry O'Neill was also a respected physics professor at Princeton with a focus on high-energy particle physics research. In 1956, he theorized in a published paper that particles could be stored in a storage ring for a few seconds after being produced by a particle accelerator. In 1965, in collaboration with Burton Richter, O'Neill conducted the first experiment in colliding beam physics. Many believe that had O'Neill not passed away prematurely from leukemia, he would have been a leading candidate for the Nobel Prize once the efficacy of particle accelerators became well known in the 1980s and 90s. Regardless, O'Neill's legacy in space is unmatched and he has inspired two generations of space advocates.
The legendary Gerry O’Neill
About The Production Team
Space investor and philanthropist Dylan Taylor, who serves as Chairman & CEO of Voyager Space Holdings and was formerly a Fortune 1000 executive and Director at Jones Lang LaSalle, Colliers International, UMB Financial and other multi-national firms, is the executive producer of the film and has a strong professional and personal interest in space exploration and settlement. In 2020 he was awarded the industry’s top award for Business and Finance by the Commercial Spaceflight Federation.
Dylan Taylor, Executive Producer
Will Henry has served as associate producer of the film and works for the Denver-based production firm Multiverse Media in collaboration with Morgan Brook Films and Morgan Brook Capital. Will's film career has spanned nearly ten years and he is an award-winning filmmaker.
Post-production has been handled by Subtractive, a world leader in film and post-production for the space industry. The film features many notable interviews, including Peter Diamandis, Rick Tumlinson, Laetitia Garriott, Virgin Galactic astronaut Loretta Whitesides, the late Freeman Dyson, author Frank White and many others.
Using styled-components and Props With Typescript React | Styling and building a sleek user interface or UI is a fundamental process in front-end development. One of the fast, high-performance and easy-to-use styling libraries is the styled-components.
TypeScript is an awesome programming language built on JavaScript; although you might write more code for a similar task than in JavaScript, the advantages are worth it.
Type Checking
TypeScript is a strictly typed language, and just like Java, C++, C#, etc., it ensures you set a type for every variable, function, and function argument.
let num: number = 12;
const myFunc = (name: string):string => name + 'john';
This is a basic rule of Typescript and you can read more about it from her docs
Create React App
We will begin by creating a TypeScript-enabled React app; if you already have one, you can skip this step.
Open your terminal and run the below command
npx create-react-app my-app --template typescript # or yarn create react-app my-app --template typescript
This will create a basic template like the one below
typescript react
Next, You install styled-components and the type definitions by running the below commands at your root directory
npm i styled-components @types/styled-components #or yarn add styled-components @types/styled-components
Developing A Button Element
Open your App.tsx or App.ts file and import the styled-components package
import styled from 'styled-components'
Above the App function, create a styled button element and render it inside a HTML section tag
const BUTTON = styled.button`
  border: 1px dotted yellow;
  background: white;
  color: black;
  font-weight: 500;
`

function App() {
  return (
    <section>
      <BUTTON> First Btn </BUTTON>
      <BUTTON> Second Btn </BUTTON>
    </section>
  )
}
Passing Props To Component
Properties, or props for short, are used to make UI components more dynamic. Suppose you need two buttons with different border colors but otherwise identical CSS; you can pass props to each button, with conditions so that if a prop value is present it changes the corresponding CSS property.
let’s do this!
... <BUTTON submit={true}> first Btn </BUTTON>
<BUTTON pay="shopping"> Second Btn </BUTTON> ...
In the code snippet above, I gave the first button a prop of submit whose type is a boolean, and the second button a prop of pay whose type is a string.
Now, I define an interface at the top of the styled button with both properties (marked optional, since each button only receives one of them)
...
interface IBtn {
  submit?: boolean;
  pay?: string;
}

const BUTTON = styled.button`
border: 1px dotted yellow;
background: white;
color: black;
font-weight: 500;
`
...
Now, you simply pass this interface to the styled button; this helps TypeScript check the types of the props received by the component
...
const BUTTON = styled.button<IBtn>`
...
simple! right?
And then we can define the CSS properties based on the props the component receives, without TypeScript throwing any error
...
const BUTTON = styled.button<IBtn>`
border: 1px ${props => props.submit ? 'solid' : 'dotted'} yellow;
background: white;
color: black;
font-weight: ${props => props.pay === "shopping" ? 'bold' : 500};
`
...
congratulations!
You made it this far in safely passing props to styled-components without TypeScript yelling at you. I know you might be wondering why I used an interface instead of a type; you can read more about that in this article.
Thanks for reading this article, you can follow me on Twitter, Github, and Linkedin, have a lovely day. | https://blog.devgenius.io/using-styled-components-and-props-with-typescript-react-a3c32a496f47 | ['Ndukwe Armstrong'] | 2021-04-23 07:26:52.871000+00:00 | ['Javascript Development', 'CSS', 'Frontend Development', 'Typescript', 'Styled Components'] |
Frida Product Reviews | If you’re a new parent and find yourself thinking “if only I had something to solve baby’s/my XYZ…”, Frida has probably made it. Their products for both mum and baby are good, and the messaging/packaging is nothing to sniff at either. I’ve tried a few of their products, and here are my thoughts on them.
1. SnotSucker — Definitely the MVP of our Frida purchases! Zac was screaming bloody murder even after we fed and changed him and eventually we figured out his nose was blocked. Now, about once every fortnight (Seattle winter) we use this little pipe to suck out the boogers when Zac's nose is blocked. Don't worry, there's a sizable disposable filter to make sure nothing ends up in your mouth! It's also easy to clean by just running it under the tap.
2. Peri Bottle — A must-have for my postpartum kit! I used this after each poo and pee for 3 weeks because the thought of wiping all that sensitive skin with toilet paper made me break out in sweat. The hospital gave me a far less ergonomic bottle for free, which got the job done but required more maneuvering. The Frida bottle was angled so I didn't have to bend over and stick my hand all the way into the toilet bowl to get clean.
3. FlakeFixer — Surprisingly helpful after Zac developed mild cradle cap. Our doctor said it's basically harmless baby dandruff, but on newborns it looks yellow, crusty, and kind of nasty. After watching Susan Yara's video, I quickly bought a cradle cap brush before Zac's condition worsened. There are three steps — a sponge to lather shampoo, a rubber brush to gently remove flakes, and then a comb to remove the stuff. After using it three times during baths, Zac looked fresh as a daisy!
4. Cooling Pad Liners — To ease postpartum swelling, the nurses taught me to make a pad sandwich (future blog post!) that involved layering three Tuck's witch hazel pads along the length of my underwear. Tuck's pads are circular and it was annoying to position them perfectly only to pull up my underwear and have them slide out of place. Bending over postpartum was painful! So I bought the Frida liners, which are shaped to fit a maxi pad (genius!). Sadly I reverted back to Tuck's because they stayed cool and soothing for longer. While the Frida liners are thoughtfully shaped, they dried out more quickly and stuck to my skin.
5. Instant Ice Maxi Pads — I thought these pads would be a good replacement for the hospital ice packs. Unlike the hospital pads, the Frida pads can be stored at room temperature — just snap one like a glow stick and it cools down immediately. Ultimately, I found ice packs in general too bulky and not worth the effort. After about an hour of actual cold therapy, I'd find myself walking around with an uncomfortably heavy pad full of melted liquid. That said, for those who like ice packs, these pads may be useful.
6. Donut Cushion — After sitting down for my first meal at home on our trendy but very hard dining chairs, I immediately bought this cushion. My healing episiotomy stitches were extremely painful and I needed help! The Frida cushion was thoughtfully designed, but I don't think it was well executed. I was primarily looking for cushioning, which was almost non-existent because the Frida cushion isn't made out of foam like traditional cushions. Instead, it is thin and inflatable to make it compact and portable. There is also a rubber-type cold pack that you can freeze and slip into the cushion for additional relief, but with the pad sandwich I wore postpartum, any benefit was unnoticeable. I ended up loving this other orthopedic donut cushion and used it non-stop for the first six weeks! | https://medium.com/@sw2020/frida-product-reviews-305f15434736 | [] | 2021-02-26 22:55:12.065000+00:00 | ['Frida', 'Baby Products', 'Baby', 'Newborn', 'Baby Care'] |
Yes, Vue 3 is out but you probably don’t need it 🤷♂️ | I know I know, this article will probably cause some heat 🔥
Why would someone say that “you don’t need Vue 3”?
You may even be thinking: Yet another article about Vue 3.
Yes it is ! Same same but different.
Don’t worry, you will see clearly after reading this.
In this article, I’m going to compare side by side how to acheive Vue 3 stuff with the Vue 2 API, and basic JavaScript design patterns.
Why? Because sometimes, you can’t migrate your project to Vue 3 directly, but still want your code to be better.
If you want the “long-story-short” — jump to the next bold title
On September 18th 2020, Vue 3.0.0 One Piece came out 🎉
This version brings a lot of improvements under the hood and also in it’s API.
Evan You himself said Vue 3 would be: faster, smaller, more maintainable, more native-friendly and easier to use.
The biggest noticable change is without a doubt: the composition API.
During Vue 3's final development between 2019 & 2020, a lot of articles, videos and tutorials were already praising the composition API and, for me, something felt wrong.
Most of the arguments in favor of the composition API could easily be done with Vue 2 and some JavaScript design patterns.
Hear me out! I'm not saying Vue 3 is useless; I'm saying that wanting to switch just because it's new, without a meaningful reason, is useless.
If you read any of my other stories, you would know I stand by 2 simple rules:
Focus on concepts and methods, not tech
Use the right tool for the right task
Let’s see when and when not to use Vue 3 in my humble opinion.
When to use Vue 3 and when not to
1. If you need IE11 support: don't use it, support is not there yet.
2. If you are on a large existing project: don't use it, depending on your code, the migration time and performance benefits may not be worth it.
3. If you have performance issues even after some optimizations: use it.
4. If you need a better TypeScript support: use it, it's way better than before!
5. If your dependencies support Vue 3: use it (captain obvious speaking 👨✈️).
In the end, I encourage any new project that doesn’t need support for IE11 to use Vue 3.
Now, let’s see features where you think you might need Vue 3, but actually don’t necessarily.
How does the composition API work in a nutshell?
So basically the current options API in Vue 2 has a big issue: it splits data, methods, computed and watch across the component.
It mixes concerns and makes it harder to reason about.
The Composition API makes it easy to group things because the component isn’t the one in charge of the reactivity anymore.
Here’s an image to illustrate.
Options API vs Composition API (source: www.infoq.com)
But the thing is… There are ways to do this in Vue 2 too (little pun there).
Use Vue 2 hooks in a smarter way
So basically in Vue 2, hooks are options, so they are also split across the component, but there's a way to overcome that!
Let's say you have an autosave method that you execute every 10 seconds. You want the interval to start when the component is created, and be cleared when the component is destroyed.
Normally you would do this:
Options API with regular lifecycle hooks
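(The embedded example is along these lines; a sketch of the usual approach, with the interval id kept on the component so it can be cleared later.)

import Vue from 'vue';

export default Vue.extend({
  data() {
    return { autosaveInterval: 0 };
  },
  methods: {
    autosave() { /* persist the draft somewhere */ },
  },
  created() {
    // start autosaving every 10 seconds
    this.autosaveInterval = window.setInterval(this.autosave, 10000);
  },
  beforeDestroy() {
    // the cleanup lives far away from the setup that created the interval
    window.clearInterval(this.autosaveInterval);
  },
});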
A better way to centralize this is to use the $on(‘hook:event’, func)
Options API with lifecycle hooks events
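(Sketched below: the setup registers its own cleanup by listening to the component's internal hook:beforeDestroy event, so creation and cleanup sit in one place.)

import Vue from 'vue';

export default Vue.extend({
  methods: {
    autosave() { /* persist the draft somewhere */ },
  },
  created() {
    const autosaveInterval = window.setInterval(this.autosave, 10000);
    // setup and cleanup now live right next to each other
    this.$on('hook:beforeDestroy', () => {
      window.clearInterval(autosaveInterval);
    });
  },
});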
This gives you the same ability as the Composition API would.
Composition API
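(For comparison, a sketch of the same thing with the Vue 3 Composition API.)

import { defineComponent, onBeforeUnmount } from 'vue';

export default defineComponent({
  setup() {
    const autosave = () => { /* persist the draft somewhere */ };
    const autosaveInterval = window.setInterval(autosave, 10000);
    // the cleanup is declared right where the interval is created
    onBeforeUnmount(() => window.clearInterval(autosaveInterval));
  },
});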
Use the factory design pattern to extract behaviour
Another great feature of the new Composition API is the ability to extract logic.
Components are good for extracting logic together with a template, but there was no "good" way to extract plain logic in Vue 2… Well, maybe there was!
Mixins and Renderless components to the rescue!
Let's say I'm building an admin panel. In this admin panel I have a basic CRUD in many places, but with a different UI. How would I handle this?
With a mixin! But mixins merge stuff, so they can create name collisions.
How can we solve that? With a factory! | https://itnext.io/yes-vue-3-is-out-but-you-probably-dont-need-it-%EF%B8%8F-3e60634991b4 | [] | 2021-05-09 01:36:14.788000+00:00 | ['Vue 3', 'Vuejs', 'Vue', 'Vuejs 2', 'Design Patterns'] |
SIMPLE PREDICTION FOR THE NEXT FOUR YEARS IN AMERICA UNDER PRESIDENT BIDEN | January 2021 to November 2024 in social media
Here is my prediction for the next four years of public discourse in the United States, from the January inauguration of President Joe Biden to the Presidential election of 2024.
I warn you, this isn’t going to be an optimistic prediction.
And what’s more, I very much hope to be wrong. Take that as a given.
Here we go then.
Trump rallies will start immediately after Joe Biden’s inauguration. Perhaps the first rally will be scheduled to coincide with the very day of the inauguration — for maximum media coverage and maximum disruptive momentum.
The insatiable need for clickbait will ensure news media continues to fixate on Trump hysteria.
Media’s predisposition to sensationalize will find in Trump’s neo-revivalist stadium rallies an unending source of material — especially as it’ll be in marked contrast to the studied rhetoric of sleepy Joe Biden in the White House.
He’ll be painted alternately as a modern-day Hitler (archenemy of the liberal left) or a 21st-century Messiah (savior of the conservative right) depending on the news outlet.
What makes this enemy-savior dynamic a problem is the way it’s become a dichotomy that serves the needs of our moribund but pragmatic political duopoly. This unholy convergence of interests will guarantee us four more years of public discourse dominated by a news cycle fed entirely from the bullshit spigot.
POLARIZED OPPOSING PARTIES
Both parties must be delighted. They get to sell “join us — be loyal — don’t question — march in line — we must Save America” to their supporters, who’ll inevitably be pushed deeper into hyper-polarized territory of partisan truth.
The GOP can use Trump and his rallies to weaponize the ‘unfair’ election loss, legitimizing their grand conspiracy theory of having been victimized by the liberal establishment’s collusion. Trump and his fans can enjoy the spotlight, and meanwhile, the base is galvanized around an unapologetically fundamentalist Republican worldview.
While the Trump bandwagon rolls around America, the GOP can focus party energy on pro-life pro-gun pro-wealth governance and taking advantage of the conservative-majority Supreme Court.
The Democrats get to use Trump as their bogeyman. It’s known territory, but rallies will make an energizing spectacle. Enablers in the press will make sure coverage of orange excess is continuous and damning, playing neatly into the DNC-neoliberal centrists’ need to whip loyalty into the annoying progressives.
The urgency of the Trump threat will stay front and center. It’ll be used both to obfuscate Biden’s disinterest in pushing genuine left-wing legislation and to soften up the Democratic Socialists’ calls for action. The left-wing of the Democratic Party will be maneuvered into accepting public displays of woke-culture (renaming buildings, removing statues) and non-binding promises of future concessions (that’ll never happen) in lieu of meaningful legislation.
Regardless of party affiliation, the average supporters will continue to believe themselves to be right-minded loyalists battling the very embodiments of anti-American corruption. The irony of the same narrative fitting two supposedly diametrically opposed ideologies will be lost on the participants.
And so, for the next four years, social media, late-night talk shows, papers, magazines, and all the outrage-hungry news channels can be filled with variations of the familiar well-practiced informative-disinformation pantomime.
It’ll somehow always be urgent, always polarizing, always parsed in terms of good guys — with the correct opinions — versus bad guys — with incomprehensible wrong ones.
Echo chambers have already formed around the protagonist demographics and these will, if anything, grow more exclusive, more hostile to outside nuance, each audience cherry-picking “facts” to fit the orthodoxy of its particular confirmation bias.
Media, GOP, and Democratic Party won’t challenge each other’s role in subverting public discourse, because each has a vested interest in the subversion. Why rock the boat?
Instead, whether it’s tacitly agreed behind closed doors or an opportunist understanding of the economic and political path of least resistance, the atomized voter public will remain hypnotized by a stream of mostly irrelevant 24/7 pseudo-news from the mainstream media’s bullshit spigot.
The specifics of a day’s news won’t matter. Stories will be deployed in a constant process, interpreting the zeitgeist through the prism of party orthodoxy but above all in service of whatever’s necessary to run long-term cover, so the plutocrats stay unaccountable while they carry on the business of maximum profit and upward wealth transfer.
The ruling class and those it represents will get richer. The average American will be kept busy by the duopoly’s aggressively polarized team game, arguing at the margins, convinced of the righteousness of their politics, captivated by the self-important certainty of their own integral role in the story.
It’s going to be a long four years. | https://medium.com/@mertonfuton/simple-prediction-for-the-next-four-years-in-america-under-president-biden-bbb6431fb71b | [] | 2020-12-02 21:20:48.115000+00:00 | ['Clickbait', 'Inauguration', 'Rally', 'President', 'Inauguration Day'] |
A Boy who Loved Boys: Our Life | A couple years ago, I was digging around in old boxes and came across some of my school things from 2nd to 3rd grade. Journals, projects, and worksheets, with sloppy handwriting and crude drawings.
I'd heard somewhere that every time you recall a memory, it's altered. Each remembering adds more context, loses details, shifts things slightly. In my case, most of my memories of childhood are either gone or well-touched, covered in a patina of 20+ years, and feel almost like adult memories. My problems — an overbearing father, struggling to focus in school, my extremely early start to puberty — are seen through my adult eyes. "Wow, that was rough, wasn't it! I made it through okay, didn't I?" is the impression I would get whenever I thought about it.
When looking at my old things, however, the sensation of the memories was very different. Untouched, something like a moth trapped in amber. Smaller and younger perhaps? I’m not entirely sure, but seeing physical evidence of my age and the surreal sensation these objects brought up gave me a much better understanding of who those memories belonged to: a little, gentle boy, who liked butterflies and whose favorite color was pink.
Our Life’s writing, especially in the first two ages, does an excellent job of getting you into this headspace of being a child. Choices are often oriented around how your character might be feeling, inviting you to roleplay instead of merely following along. There is a focus on things children care about and find important. Are you too shy to ask for ice cream directly, having to whisper your answer to your mom instead? What sort of a backstory do you imagine for your new dolphin balloon? You worry about whether your Cove will care about the snail you found, worry about building sandcastles that are too similar, worry if your friends consider you a friend, too.
It touched me very deeply, and I found myself on the verge of tears often. Cove, a shy boy, reminded me of myself at that age. Instead of roleplaying an introverted, quiet child, the one I remember being, I found myself making choices that would comfort and encourage Cove. I wanted him to be happy, and to be happy with him. | https://medium.com/@rainythursday/a-boy-who-loved-boys-our-life-4d24e6fd2b05 | ['Thursday Rain'] | 2020-12-12 22:30:21.521000+00:00 | ['Childhood Memor', 'Queer'] |
It’s Time to Get Intimate with Your Ideal Customer | It’s Time to Get Intimate with Your Ideal Customer
If you don’t have a crystal clear persona etched into your brain, you’re probably missing out on customers.
Knowing your customer really well means also knowing something about their family, work, education, hobbies, frustrations, and passions. [Image: user:3643825 on Pixabay]
Whether you prefer to call it an avatar, a customer persona or your ideal customer, getting to know your target customer really well lies at the heart of your marketing strategy.
Digging deep into exactly who your ideal customer is, where his pain points are and how your company can solve your avatar’s biggest problems impacts every area of your business — from R&D (what new products or services will your customer need down the road) to the type of content you produce across different social media platforms to where you’ll get the highest ROI for your advertising dollars.
What Exactly is a Customer Avatar, Anyway?
A customer avatar is a representation of your perfect customer, the one who will stick with you through thick and thin. They’re the one that buys the most, most often, and raves about your product or service. He’s the one who posts five-star reviews when he is happy and takes the time to let you know when something needs improvement. When you release a new product, accessory or premium subscription plan, your perfect person will be first in line to sign up.
In other words, they are your DREAM CUSTOMER.
When we talk about creating an avatar or ideal customer profile, we are talking about getting very specific, going well beyond 'a guy in his 40s', or 'moms'. By homing in on that perfect buyer, your entire team will be able to focus on what will keep that customer coming back year after year. In fact, completing a super-detailed customer persona will change the entire way you do business. There's a reason why customer LTV (Lifetime Value) is such a useful metric. Courting devoted customers who will return time and again is far more useful in the long run than selling to someone who may purchase only once.
Is a B2B Target Customer the Same as a B2C Target Customer?
While the general principles are the same whether you are a B2B (business-to-business) or a B2C (business-to-consumer) business, there are a few differences that are important to understand.
If you are in the B2B space, at one level, your ideal customer profile is another business that needs the product or service you offer. However, within that business, you will actually be dealing with a living, breathing human being and that individual is the person who, ultimately will either control or influence the company’s buying decision.
[Image: Design_Miss_C of Pixabay]
In a B2C scenario, you may need to go beyond the obvious answer to get at the real customer. If, for example, you produce high-quality wooden toys for children, the ideal customer is not a five-year-old girl living in Washington, DC. The end user may be a child, but the purchaser is more likely to be her mother, grandmother or perhaps a favorite uncle. If you are a manufacturer (we'll stick with the toy example for now), you may have two divisions — one that sells directly to consumers through Amazon and another that provides wholesale distribution to toy shops and tourist boutiques (for your line of miniature wooden tourist monuments). Each of the buyers represents a different avatar.
In this post, though, let’s focus on how you would develop one ideal target customer profile in a B2C scenario.
Note: The ideal customer target is so important, it’s the first thing you address when completing The Marketing Canvas® (our secret weapon when creating high-performing growth strategies). The beauty of using The Marketing Canvas is that you can repeat this process for various avatars and then plug them into a successful marketing strategy designed to help you maximize your ROI and, ultimately, grow your business.
What does an Ideal Customer Look Like?
Let’s have a look at what you need to know about your target customer and how to build a comprehensive (and useful) ideal customer profile.
[Image: LubosHouska of Pixabay]
Consider this scenario:
Bookstore Owner: What book would you sell to the next customer who walks in?
New Employee: Um, Harry Potter. I loved that book.
Bookstore Owner: Who cares?
New Employee: So… what would you recommend?
Bookstore Owner: Whatever book the customer is going to love.
Too often we get so excited about our products and services we fall all over ourselves telling the world how great our offer is. Effective marketing is all about putting the right product in front of the right customer at the right time. There’s no point in telling the next customer who walks into the bookstore all about Harry Potter when what she is looking for is a book about managing diabetes.
While the bookstore example may seem like an obvious one, too often businesses waste a whole lot of time and effort designing fancy logos and gorgeous-looking ads before they have nailed down who their ideal customer is, what problem they need to solve, and why your company can provide the perfect solution.
When you are thinking about your ideal customer, ask yourself who is likely to stick around, even if there’s a dip in the economy. Who is most likely to keep buying products and services? Who is going to ascend your value ladder when you roll out new products and services in the future? Obviously, a loyal, die-hard client is much more valuable than someone who buys once and is never heard from again.
Let’s Create Your Ideal Customer Profile
When the time comes to brainstorm with your team and create your avatar, keep these basics in mind.
[Image: Geralt on Pixabay]
Get Specific
Be as detailed as you can be as you go through this process. Give your dream customer a name, address, and a job. Is she married? Does he have children? A pet?
What’s your avatar’s annual income? What car does she drive?
What does your avatar want out of life? And, what’s stopping him from achieving that goal?
What are your avatar’s biggest challenges and problems, particularly relating to the products and services your company sells?
What influencers does your target customer listen to? Who does she trust? What books does he read? What shows does she watch on Netflix? What social platforms does he use (when, how, and how often?). Does she read blogs? Listen to podcasts? Which ones? How often?
Why does your ideal customer love your company and your products? What keeps her coming back? What makes her mad? What is something you might do that would send him running for the door with no intention of ever coming back?
Find a photo of your avatar and print it out. Stick the photo at the top of the whiteboard when you and your team get together to brainstorm about your next ad campaign.
Ultimately, the goal of going through the exercise of creating a target persona is to remember that you are always selling to a real person, not some nebulous ‘customer’ and not an entire business. The most effective avatars are those we could imagine taking home for Thanksgiving dinner. They are as real as a well-developed character in an epic story. In fact, they are your company’s brand hero. Think Luke Skywalker, Frodo, Neo, and Harry Potter.
Don’t Guess
Sometimes we think we know who our customers are and what they want, but make sure you aren’t basing your marketing decisions on wishful thinking. Get the data. Ask your sales team. Analyze the metrics.
Hard Data
Data collection has never been easier. With tools like audience insights in Facebook (and the equivalent on other social media platforms), Google analytics, the data you collect from your CRM — we know more today about buyer behavior at every step along the customer journey than has ever before been possible.
Knowing the data is out there and knowing how to interpret it are two different things. It’s beyond the scope of this post to dig deep into the data parsing tools that are out there but don’t ignore the hard facts when you go through the avatar-creation exercise.
Soft Data
There’s nothing like going directly to the source when you want to learn the truth. Tempting though it may be to rely entirely on the data you collect through various automated tools, never forget that ultimately, the source of that data is a real person.
Your customers and potential customers can provide answers to all kinds of questions if you take the time to ask them. Surveys (conducted at different points in the customer journey), the content of comments on a social media post (not just the quantity), focus groups and listening to your front line sales team members when they report back on what your customers are saying are all invaluable sources of information. Take what you learn and adjust your customer avatar accordingly.
Rinse and Repeat
Remember, too, that an avatar — just like the real person she represents — is not static. The time of year, her state of health, what else is going on in her world, from passing a milestone birthday to taking on the care of an ailing parent — these will all impact her needs, challenges, and how she will interact with your brand. Even for the same avatar, you may need to make adjustments based on considerations like how far away Christmas is or when the kids need to be thinking about heading back to school (or off to college).
What Will Your Target Customer Tell You?
Gaining a deep understanding of your avatar’s goals, values, challenges, frustrations, and fears enables you to develop a deep empathy for her and to adjust what you are doing along every step of the customer journey. You’ll know where to find your avatar, what hooks, stories, and messages are likely to grab her attention, and what is most likely to get her to engage with your content.
You’ll know what she can afford and when she will be ready to buy. You should know your avatar so well that you’ll be able to predict what she’s going to need even before she knows she needs it and you can make sure your company will have the next product ready when she’s ready to purchase. When you have truly nailed your avatar, you’ll know because that is the same customer who will share your posts, tell friends and colleagues how great your company is, and will, in fact, become a de facto member of your sales team.
It all begins with knowing who that ideal buyer is. Again, that’s why “Target Customer” is the first step in The Marketing Canvas™ framework. Taking the time to go through this exercise to create an avatar for each of your ideal customers pays off because no business gets far without knowing all about their number one ally — their hero — in the marketplace. | https://medium.com/digital-marketing-from-a-z/its-time-to-get-intimate-with-your-ideal-customer-6e8680fd12f3 | ['Ginger Zumaeta'] | 2019-06-28 17:02:07.181000+00:00 | ['Business', 'Marketing', 'Evergreen', 'Targeting', 'Branding'] |
6 Basic Traits You Will Need to Succeed as a Professional Writer | 2. Organization
When you work as a professional writer, you will typically have several projects on the go, simultaneously. Currently, I have my writing job, book, ghostwriting clients, Medium, website, and writing for a blog.
In order to juggle all of these projects, I have a schedule I abide by. This is a flexible schedule because I have to account for waiting on project approval and other normal parts of the job.
Every week looks different, so every Sunday, I sit down and figure out my week. I account for any appointments or non-work commitments I have on the go for that week and work around my husband’s work schedule with the military. Writing down my schedule helps me stay on track.
I also have two different lists, one for writing pitches for my job, and another for Medium article ideas. Having a list ensures that I will never forget any of my ideas.
My lists and schedule are on my Notes application on my iPhone. My Notes app automatically syncs to my MacBook, so whenever I create a list in my Notes, it shows up on both my phone and computer. I believe OneNote for Android will do this for you if you don’t use Apple products.
Google Calendar is also a great application to stay organized and keep track of deadlines and appointments. Stephen Dalton, another professional writer on this platform, uses a large paper desktop calendar where he writes everything down.
It’s up to you to choose what medium you would like to use to keep yourself organized, but always use something to stay on track or you will quickly become overwhelmed. | https://medium.com/better-marketing/6-basic-traits-you-will-need-to-succeed-as-a-professional-writer-e27605b733af | ['Amy The Maritimer'] | 2020-12-11 10:05:37.814000+00:00 | ['Writers On Writing', 'Writing', 'Careers', 'Professional Development', 'Writing Tips'] |
How Integrity Is Intertwined in Various Aspects of Our Life | We are not unfamiliar with the word “integrity”. The most interesting thing is that even a small baby responds if integrity breaks. If there are two kids in front of you and if you do not behave with them equally, they will protest. In general sense, integrity means equality. Integrity is so important that it is not only an important human virtue but also is true for all other arenas.I will discuss that in detail in subsequent paragraphs.
Let’s proceed our discussion by explanations with examples. In an organization, there are many departments or sections. Integrity is a very important factor which needs to be balanced everywhere properly. Otherwise, lack of integrity in one place threatens the integrity in all other places. For example, in an organization if employees are not treated well, they will not treat their stake holders up to the mark. From the perspective of society and country, people also show sheer level of integrity that they get from their ancestors, cultures or surroundings.
One good example of the type of integrity is verbal integrity which is ubiquitous and seen easily. In another word, it can be said that it is a kind of balance between people’s speech and action. If speech and action are not aligned, trust erodes. Folks do not believe those persons who do not keep their words. That is why doing nothing rather than breaking promises is better in certain circumstances.
Integrity among speech, emotions, feelings are crucial. For example, a police officer is interrogating a thief regarding a burglary. At that moment, the thief is narrating a sad story with joy or his feeling reflects the reverse. Somebody is pleading for help but the emotions or feelings do not support the true nature of the crisis. There are lots of example of hiding emotions and feelings while being corrupt. Lack of integrity signals problem. There is linear pattern among speech, emotions and feelings.
Integrity among goals/objectives, time span and execution is important. In politics, sometimes, many hefty promises are made without being materialized. So, a strategy must be formulated, aligning the goals with the duration, to ensure materialization.
Lack of systematic integrity may pose a risk of collapse of a whole system. There are many nuts and bolts in systems. Many people in various hierarchies run a system. A tiny part's breakdown halts the operation of the whole system.
Integrity among knowledge, beliefs, ideology, thoughts, culture and an individual's behavior is easily measured. For example, an individual who thinks one way and behaves another reflects a lack of integrity.
Lack of integrity and fraud are correlated. In other words, a lack of integrity to a heightened extent means corruption.
An integrated system is a standard system. Any deviation from the standard means corruption. If deviations become too many and widespread, a huge lack of integrity occurs, which may lead to an ultimate collapse. When we start to sense an ever-increasing, uncontrolled lack of integrity, it is a red signal of indiscipline, lawlessness, chaos, a heightened level of conflicts, lack of trust, disrespect, frequent use of unsympathetic words in language, lack of empathy, painful conflicting emotions, an increased level of sickness and so on.
Integrity is an essential human virtue which needs to be nursed for a long time to develop. For integrity to be learned, a paradigm based on a high level of integrity is necessary, and it takes a long time to fully acquire it. Integrity cannot be built overnight; rather, it is weaved over decades. If you do not show integrity as a prime human virtue, you can never expect to get it back from others. Every single human being reflects a unique paradigm. In a group of people, paradigmatic integrity is needed. For example, in an integrated circuit, all tiny parts are in proper balance while operating. Lack of integrity creates conflict, and when a solution is sought, integrity is much searched for as well.
Integrity is remembered more during a life-threatening crisis like COVID-19, when there is a serious need to execute policies in no time. Integrity among different departments is involved. One department's agility does not work when other departments do not respond at the same pace. In the meantime, the crisis deepens at a geometric rate and everything goes beyond control. This sort of environment reflects a huge level of blaming, accusations, excuses, denying of responsibility, stealing and corruption on a bigger scale. Under these circumstances, correcting individuals is a wrong choice, because there is an urgency to execute everything very quickly and correctly. To change human behaviour or to get the crowd to behave correctly for that very period of time, an extreme level of enforcement is necessary. That is why it is said that a life-threatening crisis quickly changes paradigms in ways that may not have been possible in ages or generations.
Nature is the greatest example of an integrated system. Everything is in proper balance by the Almighty. Whoever or whatever ever tries to dismantle that balance, nature automatically protests and upholds or rescues the system again. Nature itself possesses its own paradigm. Integrity with the nature’s paradigm is utmost important. Questions like who, when, where, how,what,why may not be answered in short time but if there is integrity, everything is automatically determined with the course of time with perfect balance. On the other hand, If all of the above questions are answered or known but still lack of integrity persists then nothing changes. Integrity is a word related to many things and is a good scale to build or measure a network or group of people. Also, fixing lack of integrity expedites solving many chronic problems which have been remaining unsolved for ages. | https://medium.com/@moshiurrahman-1989/integrity-of-paradigms-or-paradigmatic-integrity-539586136ae8 | ['Moshiur Rahman'] | 2021-06-18 10:43:34.865000+00:00 | ['Integrity', 'System', 'Integrated Circuits', 'Nature', 'Corruption'] |
Hierophany: How The Sacred Manifests In The Profane | The concept of “hierophany “was dear to Mircea Eliade, the scholar of religious experience. Hierophany is the manifestation of the Sacred. There is a paradox to the Sacred: While it transcends the Ordinary, its manifestation is in the Ordinary, where it hides in plain sight.
Now, this may seem a bit abstract. So I will use an example, something that happened a few years ago, the sordid story of Donald Trump and Stormy Daniels.
Trump’s sexual adventures and his lying and bullying are solidly entrenched in the underbelly of the realm of the Ordinary. But he is not just an ordinary man. He made a career of embodying a certain kind of power that attempts to mask the fear of being weak.
And Stormy Daniels is not just an ordinary woman, either. She has figured out how to draw power from appearing to be an object of fantasy. She knows how to speak power to power.
So, as the sordid tale unfolds, we can also see in it the unfolding of one of the most sacred mysteries of the human condition: the dance of Sex, Power & Shadow. The larger-than-life cast of characters in this story makes it easier for us to see that archetypal dance.
Hopefully, seeing it reminds us that archetypes, and the Sacred, do not exist in a separate plane. The Sacred is not any truer, or less true, than the Ordinary, and vice versa. They are different experiences.
Limiting ourselves to seeing the Ordinary is an impoverishment. Seeing only the Sacred is an insult to common sense. In stating this, I am not claiming to represent what Mircea Eliade thought. I am articulating my own perspective.
My perspective is that of a therapist. In therapy, as we slow down the process in a state of heightened awareness, we become more in touch with what could be called the Sacred.
But the term is misleading if it implies that we see the Sacred as something that needs to be dealt with at a metaphysical level. In a down-to-earth manner, we pay moment-by-moment attention to experience, including the physicality of it.
This is the paradox of therapy: We deal with the Sacred in a down-to-earth way. Of course, it is only a paradox if we see the Sacred as being on a different plane. | https://medium.com/indian-thoughts/hierophany-how-the-sacred-manifests-in-the-profane-d104d370bf09 | ['Serge Prengel'] | 2020-12-12 19:34:50.565000+00:00 | ['Society', 'Sacred', 'Trump', 'Sprituality', 'Sex Scandals'] |
Best L Shape Height Adjustable Desk | Description
The new trend in the office is the ergonomic L Shape Height Adjustable Desk, which works as both a sitting and a standing desk. People and work are more flexible these days, and the culture of work is changing. Ergonomics and employee convenience have become management's main aim in ensuring optimal effectiveness for every employee. A height adjustable desk is the greatest choice for long hours at the desk. It makes the job easier, mentally and physically.
If you want to get a modern height adjustable table for your office or home office, you don't have to think twice. The greatest place is Salam UAE. Just call us or visit us at our Ajman showroom, Office Furniture Dubai.
L Shape Height Adjustable Desk
We Provide Different Kinds of Height Adjustable Desk like L Shape Height Adjustable Desk, office furniture dubai, office furniture, height adjustable desk, height adjustable desk Dubai, height-adjustable desk ikea, electric height adjustable desk, show electric height adjustable desk, best electric height adjustable desk, height adjustable desk amazon, height-adjustable desk legs, height adjustable desk frame Office Furniture Sharjah.
Ikea height adjustable desk, electric height adjustable desk, height adjustable table Ikea, height-adjustable table Dubai, standing table UAE, IKEA standing desk use, sit-stand table Dubai, electric height adjustable desk, uplift desk uae, best height adjustable desk, manual height adjustable desk.
small height adjustable desk, flex spot height adjustable desk, Steelcase height adjustable desk, flex spot electric height adjustable desk, DIY height adjustable desk.
Modern Design Height Desk
Electric dual-motor, ergonomic-design, height adjustable modern desk. A fantastic home, office, and gaming desk. The top-quality Adjustable Desk in Dubai is provided by Salam UAE. Office Desks in Dubai that are unique and awesome. Buy office desks and chairs in Dubai in a cost-effective way.
Visit Our Facebook page Salam UAE | https://medium.com/@salamuae008/best-l-shape-height-adjustable-desk-had-15-salam-uae-f5cd12c78de3 | [] | 2021-12-15 12:14:22.453000+00:00 | ['Furnace Repair', 'Office Culture', 'Furniture Design', 'Dubai', 'Office Furniture'] |
This 120-Year-Old Lightbulb in Livermore Is Still Burning Strong | To see one of the most remarkable objects in the Bay Area, drop by Fire Station #6 in Livermore — and knock on the front door. (Actually, these are Covid-19 times, so probably best to call ahead). A weary fireman will answer, and if things are quiet and the hour is reasonable, he’ll let you in.
Aside from the normal giant red trucks and array of neatly-coiled hoses you’d find in any firehouse, what makes Fire Station #6 special? A lightbulb. Specifically, the Centennial Lightbulb, which has been burning since 1901, making it the longest burning bulb in history.
This strange, small bulb is in the Guinness Book of World Records, has been recognized by Ripley’s Believe it or Not, and has received declarations from the President of the United States, Congress, various senators, the California state assembly, and more. Former President George W Bush said that the bulb is “an enduring symbol of the American spirit of invention” and the California Senate called the bulb’s firehouse home a “historical shrine.”
The Centennial Bulb was originally installed in 1901 at a firehouse on L street. It was moved several times over the course of the city’s history, often accompanied by a full parade and police escort. The bulb runs on standard 110-volt power but is connected to a special power supply to keep it running even if there’s a power outage. It’s gone off a few times — most recently in 2013 — but save for these short outages, the bulb has burned continuously for 120 years.
I visited the Centennial Bulb in May of 2019. The bulb itself hangs from the high ceilings of the firehouse, giving off a gentle glow. It originally burned at 60 watts — about the same as a modern lightbulb — but faded to around 4 watts over time. It looks a bit like the hanging globe lights which have become popular during the pandemic, or the exposed filament lights you might find in a hipster coffee shop. The bulb is accompanied by a small historical exhibit on its installation, life and local history.
Why has the bulb burned for so long?
The bulb was made by the Shelby Electric Company. It has a carbon filament and a hand-blown glass enclosure. The main theory as to why it has burned so long is that the bulb was extremely high quality when it was initially made. That sets it in contrast to the incandescent bulbs of today, which usually burn out in a few years.
The bulb has burned through two World Wars, the entirety of the 1918 flu pandemic, the Great Depression, the Summer of Love, the birth of the Internet, and much else. During a year that has felt tumultuous and unpredictable, it’s comforting to know that the Centennial Bulb has been there with us the whole time, quietly burning.
// Firehouse Six is located at 4550 East Ave., Livermore, California. You can try to drop by, or call (925) 454–2361 to ask about a visit. If you can’t make it to Livermore or the firehouse is too busy to accommodate your visit, you can always see the Centennial Bulb online, via its dedicated bulb cam. | https://thebolditalic.com/this-120-year-old-lightbulb-in-livermore-is-still-burning-strong-9e3ae60109b0 | ['Thomas Smith'] | 2021-05-25 19:54:37.342000+00:00 | ['Bay Area', 'Lightbulbs', 'San Francisco', 'History'] |
How to Manifest a Baby- 5 Easy Tips — Abundance Mindset Mama | Have you ever wondered how to manifest a baby? You probably won’t find many “how to manifest a baby” guides, but manifesting a baby is just like any other manifestation practices- its just typically more challenging because of the strong emotions it can bring up!
In fact, I already just posted 5 steps to manifesting pregnancy and childbirth, but it’s such an important topic for women that I wanted to add even more tips.
I never struggled with infertility myself, however, I waited until I was 35 to have my son because financially I was never in a good place. So I know how painful it is year after year to see your friends start their families, to see other people getting pregnant, and feeling like your fertile years are passing you by!
You too might be dreaming of conceiving a happy, healthy child, yet the fertility struggles along the way might be dragging your morale down.
Life seems cruel when month after month you are so badly wishing for something to happen, yet with each passing month it seems like a far-fetched dream.
But as they say, nothing is impossible if there is a strong will behind it to make it happen. So, beating infertility and manifesting a baby is not impossible either!
All it requires is a little faith and a bit of patience. Countless people have made it happen, and so can you!
There are thousands of success stories all over the internet where women have manifested getting pregnant just through the power of their mind.
Here is just one success story to inspire you:
If they can do it, you can do it too!
Law of attraction is all about using the power of your subconscious mind to attract what you want in your life. But a strong faith in oneself and the universe are prerequisites for it.
So, if you are ready to believe in your own ability to make magic happen then you can also get over your struggles and get pregnant by using the law of attraction.
Here, I will be sharing with you some powerful, time-tested law of attraction and manifesting tricks and techniques that will help you in your infertility journey.
All the methods I am sharing should of course be used along with whatever medical support you are receiving. Of course, there are times when our manifestations do not come to pass exactly the way we want them to. That doesn't mean it didn't work, just that there is something better for you.
See, we think we know what we need best, but God, the Universe, etc. knows what we need better than we do!
So whenever you are manifesting something, you must try your best to detach from the outcome. That can be challenging to do when its something very specific and emotional like a baby- but it is still necessary! So keep reading for my best tips on how to manifest a baby and get the manifesting a baby mindset!
Let’s get started then…
Women dealing with fertility issues struggle with staying positive.
It is so obvious…
Month after month, when you keep on waiting for something, yet it never comes to pass; you are bound to feel hopeless and depressed.
But let me tell you that these negative emotions are one of the biggest obstacles in your fertility journey. You must control your emotions- or they will control you!
Our bodies are super intelligent and super sensitive to our emotional health. They already know how to manifest a baby; we just have to listen!
When we are in a positive, happy space — our bodies tend to work at their optimal.
But when negative emotions over power our minds, the body is pulled out of the homeostasis. All the optimal functioning goes out of the window. Your body senses that there is a danger lurking around and thus it goes into survival mode.
Survival mode is not a mode for creation.
Getting pregnant is essentially creating a new life and it requires bulk loads of energy. But if you are letting your negative emotions suck all the energy out of you then guess what — you would not have enough energy to create a new life.
So, it goes without saying that to get pregnant you need to take control of your emotions!
A regular meditation practice will slowly wipe out all the negative energy stored in your body and will give your body the chance to reclaim its optimal health.
If you have never meditated before, start with just 5 minutes a day and slowly build it longer over time.
Just after a few days of meditation, you will notice a difference in your emotions. You will feel calmer, happier and more resilient in face of any adversity.
Meditation is a magic tool for living a more fulfilled life. It is transformational. It’s a shame that only so many people actually take out time to meditate daily.
The same can be said for visualizing. Meditating and visualizing daily are easy and effective ways to change your mindset. You can do both for only a few minutes when you wake up, before you fall asleep- and any other time during the day you start to feel negative.
Both of these tools help prepare your mind for the new reality you want to manifest.
Here are some previous articles I’ve posted to help you get started with both:
Getting Started with Meditation for Manifesting How to Visualize Better
If you are a woman struggling with fertility issues, chances are that you might be suffering from a poor sense of self-worth too. And as I've stated many times on this blog, self-love is really what I consider to be the biggest factor for manifesting. Manifesting a baby is no different!
When you love yourself, all the other major manifesting and abundance blocks are easier to overcome- you can trust the universe has your back, you lose any doubts, and it’s easier to let go and detach!
If you struggle to manifest what you desire, you likely struggle with self-love.
It’s also a vicious cycle! When we try so desperately for something and it doesn’t seem to happen, we start questioning our worth and our abilities.
Thus, if you are dealing with any kind of infertility issues, it’s highly likely that your self-confidence and self-worth might be shattered or at least damaged.
You might be worried that your dream of getting pregnant may never be realized. You might also be scared of what the future holds for you.
All these are valid concerns, but these beliefs will do you no good.
So, right now is a good time to drop those self-sabotaging notions and build an empowering self-image.
It’s time to believe in yourself and the universe.
Forget about your age, your weight, your genes or any other reason as to why you cannot conceive.
Seriously.
Just for a few weeks (at least), pretend like everything is rigged in your favor.
See yourself as 100% fertile.
Forget all the complications you had in the past related to your fertility and act as if getting pregnant is the easiest and the most natural thing your body knows.
For a while, stop paying attention to any information that tells you otherwise. Even if it’s coming from your doctor.
Sometimes the doctors would give us negative news about our health but it’s our choice whether to completely believe every word they say or to believe in the natural ability of your own body to heal and conceive.
People literally cure themselves of terminal illnesses just by believing in themselves and not the diagnosis. There are thousands of miracle stories all over the web. Read them to get some inspiration.
So, if one can even reverse terminal illness through the power of thought then why can’t you fix your fertility problems?
It’s all about building a strong faith.
See yourself as whole.
Healthy and fertile.
Every time a contradictory thought comes to your mind, lovingly replace it with a positive one.
After a few weeks of deliberately watching your thoughts, this new self-image will be impressed on your subconscious mind.
You will start seeing yourself worthy of a healthy pregnancy and this new awareness is crucial for manifesting your desire.
“Living in the end” is a technique by one of the most famous reality creation teachers — Neville Goddard.
Neville says that whenever you want to manifest a particular desire; you should start imagining the end result and live in that state.
Sounds confusing?
Let me explain.
Ask yourself that if you are using the law of attraction for fertility, what is your end goal?
It is to conceive a healthy child, right?
So, the end result you intend is to be pregnant.
Now using the “living in the end” technique, you must start acting like you are already pregnant with your child.
Feel the excitement you would feel of a baby growing inside you. Think of a name for your baby. When you go shopping, spend some time in the section for newborns; feel excited that you will soon need to buy all this stuff for your own child.
Basically, do everything that brings you closer to the feeling of being pregnant.
This technique works like charm because it cleverly harnesses the power of our feelings — which essentially is one of the main ingredients for any law of attraction manifestation.
Know that the more deeply you immerse yourself in the feelings of being pregnant, the more quickly the manifestation will follow in reality.
So start becoming conscious of your feelings and deliberately shift them to what you want to feel!
Gratitude is the starting point of all manifestations. It also has so many amazing benefits!
When you indulge in gratitude, your vibration rises and through the law of attraction you are likely to attract more circumstances that will make you feel this way.
So, if you start feeling grateful for what you currently have you will manifest what you want easily and effortlessly. It’s a law.
“Be thankful for what you have; you’ll end up having more. If you concentrate on what you don’t have, you will never, ever have enough.” -OPRAH WINFREY
Gratitude is like a muscle.
You gotta practice it everyday to make it stronger.
My favorite method of practicing gratitude is to maintain a daily gratitude journal.
Every night before going to sleep, take a journal and write at least 10 things you are grateful for. Be it your health, your spouse or even your favorite pair of shoes — write it all down.
Make this activity interesting by using your creative imagination. Try coming up with 10 new things every day.
Just this simple practice of writing down things you are grateful for every day will transform your mindset.
You will shift from a lack mindset to an abundance mindset.
You will start focusing on things that are going good in your life rather than focusing on that one thing that is not going your way.
When you become aware of your focus, you become aware of your life.
It’s like taking back all your energy that you were previously wasting on feeling bad about what you do not have.
Also, when you are in a grateful state of being, you eliminate the stress disturbing the natural mechanisms of your body and thus your body moves back to creation mode.
All these law of attraction methods work really well when used consistently. So remember! Patience is your friend on this journey.
When you are patient and trust the timing of the universe — the universe will surprise you with miracles. How to manifest a baby? Believe you can and be consistent in your manifesting practices!
Try it for yourself. You will be manifesting a healthy baby (or other blessings) in no time!
Once you are done manifesting a baby, read this article on how to have a healthy childbirth:
Did you find these 5 steps for how to manifest a baby helpful? Manifesting a baby may seem impossible at first, but it’s really not! Share in the comments below which how to manifest a baby steps you took or will try! | https://medium.com/@abundancemindsetmama/how-to-manifest-a-baby-5-easy-tips-abundance-mindset-mama-c45f7428ba24 | ['Abundance Mindset'] | 2021-08-23 21:48:48.783000+00:00 | ['Childbirth', 'Fertility', 'Law Of Attraction', 'Miscarriage', 'Manifestation'] |
Is It Weird to Regift Toys My Kids Don’t Want to My Ex’s Girlfriend’s Children? | Is It Weird to Regift Toys My Kids Don’t Want to My Ex’s Girlfriend’s Children?
Although it’s December, I’ve been doing some spring cleaning. In cleaning out my bedroom and my kids’ room, I’ve found a lot of toys that my boys don’t want. These toys are still in their packaging. Mostly these are gifts that my sons received for their birthdays from their friends. They opened the gift wrapping, took one look at the toys, and said nah.
Legos, Legos, and more Legos. That’s what the majority of these toys are. My sons’ friends’ moms probably stopped by Target on the way to the parties — just like I do — snagging a box of “Legos for boys” as a gift, not realizing that my sons have never been into Legos.
I’m not putting down the plastic bricks. They’re great for children’s imaginations. Legos are also quite expensive. However, my sons just don’t like them. Because of their low opinions of Legos, these unopened boxes have been sitting around my house for years, moldering in my closet and under my sons’ beds.
So in discovering all these Legos still in their original packaging during my recent cleaning spree, I got an idea: I could regift them to my ex-husband’s girlfriend’s children.
Her kids are younger than our kids, and I already regifted them toys my sons didn’t want a few months back. I had my ex-husband give his girlfriend’s children some Roblox-themed toys that he bought for our sons, which they didn’t like.
Again, my kids didn’t even open the boxes. Please don’t think my kids are so spoiled. They like videogames, but Roblox has also fallen into bad favor.
My ex regifted the toys to his girlfriend’s children — and guess what? They loved them!
So why not do it again? But then I started wondering: is this weird?
Is regifting tacky — especially when you do it to your ex-husband’s girlfriend’s kids?
Let’s face it — regifting is a little gauche. However, I also think about how much stuff we waste. Should I throw out the Legos and add more plastic to the landfill? Or should I just see if some other kids want my children’s unopened “throwaways”?
The kids I’ve chosen to enjoy these gifts just happen to be my ex’s girlfriend’s children. Is there anything wrong with regifting them toys my sons don’t want?
No.
And if I can show I’m past my divorce and able to maintain not only good relations with my sons’ father but with his new girlfriend too — then why not do everything in my power to do that? Not only has she had a good effect on my ex, making him happier — which is better for our children — but I feel like because he’s happier, I get along with him better as well.
I want to stay in good relations with his girlfriend. I want my ex and his lady to stay together! And her kids are just kids, like my own children, also growing up in a home where their parents don’t live together and now have new partners. At the very least, I can make life a little better for her kids, too.
I often belabor the point of just how important it is to continue to get along with your ex-spouse if you have children together. You can’t not maintain good relations with the other parent of your children. One way I’m trying to do this is by giving toys to my ex’s girlfriend’s kids.
These toys just happen to be regifted. There’s nothing wrong with that. | https://medium.com/eros-is-everywhere/is-it-weird-to-regift-toys-my-kids-dont-want-to-my-ex-s-girlfriend-s-children-12174da1a1bb | ['Elle Silver'] | 2020-12-15 02:04:00.214000+00:00 | ['Parenting', 'Recycling', 'Relationships', 'Divorce', 'Family'] |
Listen… Real Love is Messy, Annoying, and Awesome, Okay? | For the first few years of my marriage, my wife found my jokes endearing and funny. I could get a room going by re-telling mishaps and adventures in our dating life, marriage, stories from my time in the military, or some amusing event that happened in the week. Now when I do the same, her response to the people in the room is “don’t encourage him!“ I first realized something was shifting in our relationship when we went to New Year’s Eve Party at a friend’s house. At the party, they had one of those photo booths where you can take fun pictures. Since the beginning of our relationship, we’d been attending events where we’d end up in these booths and take goofy photos together. That evening, I unbuttoned my shirt for the photo, put the party cap over my face, and started rubbing my nipples. This gesture and subsequent photo didn’t go over well, as evidenced by the featured photo in this article that succinctly captured my wife’s expression. Once again, other people found it amusing, but what she once found funny was now annoying. What I’ve since realized is that as time has continued forward, we’ve both discovered new and similar versions of the other that we sometimes dislike.
I guess that’s the hard part in dating and marriage in the 21st Century. The minute something is no longer endearing, we chalk it up as a red flag, and go on the hunt again. Instead of seeing the messy parts that grate against us or the different stages that emerge, we run right back to the Burger King mentality — I want it my way because I didn’t sign up for version 3.0 that now annoys me. That’s also why I feel bad for those who don’t stick it out to see what type of growth and love emerge from that refining fire. It’s true that the fire melts everything down to its core and you’re laid bare and unbelievably exposed in front of someone else. The prospect of putting your heart in the hand of a person who wields that much power is terrifying, but that’s also what love is — emotional risk and exposure — which is why it makes it so damned messy. And sticking it out when you don’t feel love toward your significant other is the messiest love of all.
When single people ask me how I’ve maintained a strong marriage, they almost romanticize my wife and I’s relationship. It’s like they get a glimpse of Baby Yoda, but never have to stick around for the firefights and insanity that the Mandalorian endures just to keep the little, green Jedi alive. Instead, there’s a lot of messy days where we fall in love all over again and have to keep learning about the other. We have to fight battles, love stretch marks and wrinkles, endure poorly timed jokes, give up a dream so one person can have theirs, and raise some crib midgets who sometimes make you want to drink heavily. It’s an absolute and utter mess, but that’s also the distinct beauty in it.
Often when love gets hard, it's like someone took a hammer to a bunch of colored glass and started smashing pieces, only to rearrange them. Up close, it's just shattered glass and a giant mess on the floor. But if you take a moment to step back and move away from the mess, a stained glass picture appears. All the broken little pieces that have been rearranged in your relationship are suddenly — and strikingly — beautiful. The mess is no longer pronounced, but something far more magnificent takes its place:
True love. | https://psiloveyou.xyz/listen-real-love-is-messy-annoying-and-awesome-okay-8a496b8e64bb | ['Benjamin Sledge'] | 2020-02-11 19:02:25.099000+00:00 | ['Love', 'Dating', 'Relationships', 'Marriage', 'Life Lessons'] |
6 Techniques Which Help Me Study Machine Learning Five Days Per Week | I quit Apple. Started a web startup, it failed. My heart wasn’t in it.
I wanted to learn machine learning. It got me excited. I was going to learn it all. I wouldn’t need to program all the rules, the machine would learn it for me. But I had no job.
Excitement doesn’t pay for things.
I started driving Uber on the weekends to pay for my studies.
I loved meeting new people but I hated driving a car all the time. Traffic, stop, start, fuel do I have enough fuel I think I do, the air, the aircon, changing gears, you shouldn’t go that way you should go this way, all of it.
I studied machine learning. All day, five days a week. And it was hard. It’s still hard.
Uber on the weekends. Machine learning during the week. That was my routine. I had to learn. I must learn this, I can’t keep driving, I don’t know what my goal is yet but I know it’s not driving. One Saturday night I earned $280 and got a $290 fine. -$10 for the night.
9-months into my self-created AI Masters Degree, I got a job. It was the best job I’ve ever had.
How’d I study every day?
Like this.
1. Reduce the search space
Machine learning is broad. There’s code, there’s math, there’s probability, there’s statistics, there’s data, there’s algorithms.
And there’s no shortage of learning resources. Having too many options is the same as having no options.
If you’re serious about learning, set yourself up with a curriculum. Rather than spend weeks questioning over whether you should learn Python or R, take a course on Coursera or edX, start with math or code, spend one week planning out a rough plan, then follow it.
For me, this was creating my own AI Masters Degree. I decided I was learning code first and Python would be my language. I searched far and wide for different courses and books and put the ones which interested me most together. Was the path I made the best for everyone? Probably not. But it was mine, that’s why it worked.
Once I had a curriculum, I had a path I could follow, there was no more wasting time trying to decide what was the best way to go. I could get up, sit down and learn what I needed (wanted) to learn.
It wasn’t strict either. If something came up which caught my interest, I followed it and learned what I needed as I went.
If you’re learning online and not through university, you should make your own path.
2. Fix your environment
Your grandfather’s first orange farm failed.
The soil was good. The seeds were there. All the equipment too.
What happened?
It was too cold. Oranges need warm temperatures to grow. Your grandfather had the skills to grow oranges but there was no chance they were growing in a cold climate.
When he moved to a warmer city, he started another orange farm.
12-months later, your grandfather was serving the best orange juice in town.
Studying is like growing oranges.
You could have a laptop, an internet connection, the best books and still not be motivated to study.
Why?
Because your environment is off.
Your room is filled with distractions.
You try to study with friends but they aren’t as dedicated as you.
Whatsapp goes off every 7-minutes.
What can you do?
I turned my room into a studying haven. Cleaned it. Put my phone in a drawer in another room, turned off notifications everywhere.
I told my friend. My phone is going off until 4 pm, I’ll talk to you then. He said okay.
Friends are great when it comes to friend time but study time is study time. Can’t do a whole day without your phone? Try an hour. Any drawer you can’t see will work. Do not disturb should be a default.
Fix your environment and let the knowledge juices flow.
3. Set the system up so you always win
Problem 13 has me stumped. I’m stuck.
I wanted to get it done yesterday but couldn’t.
Now it’s time to study but I know how hard I worked yesterday and got nowhere.
I’m putting it off. I know I should be studying. But I’m putting it off.
It’s a cycle.
Aghhhhhhh. I’ve seen this cycle before. I know it. But it’s still there.
The pile of books stares at me. Problem 13. I set a timer. 25-minutes. I know I might not solve the problem but I can sit down for 25-minutes and try.
4-minutes in, it’s hell. Burning hell. I keep going. 24-minutes in and I don’t want to stop.
The timer goes off and I set another. And then another. After 3 sessions, I solve the problem. I tell myself, I’m the best engineer in the world. It’s a lie but it doesn’t matter. Even a small milestone is a milestone.
You can’t always control whether you make progress with study. But you can control how much time you spend on something.
Can control: four 25-minute sessions per day. Can’t control: finishing every task you start each day.
Set the system up so you always win.
4. Sometimes do nothing
I came to the conclusion. Learning is the ultimate skill. If I can learn how to learn better, I can do anything better. I can learn machine learning, I can become a better programmer, I can learn to write better. I must improve my learning, I thought. I began at once.
I did the Coursera Learning How to Learn course. One of the main topics was focused versus diffused thinking.
Focused thinking happens when you’re doing a single task.
Diffused thinking happens when you’re not thinking about anything.
The best learning happens at the crossover of these two. It’s why you have some of your best thoughts in the shower. Because there’s nothing else happening.
When you let diffused thinking take over, it gives your brain space to tie together all of the things it absorbed during focused thinking.
The catch is, for it to work properly, you need time in both.
If you’ve set the system up so you do four 25-minute sessions of focused work, go for a walk after. Have a nap. Sit and think about what you’ve learned.
Once you start doing nothing more often, you’ll see many things are valuable because of the empty space. A room is four walls around space, a tire is filled with nothing but air, a ship floats because of the empty space.
Your study routine could do with more of nothing.
5. Embrace the suck
Studying sucks.
You learn one thing and forget it the next day.
Then another and forget it.
Another.
Forgotten.
You spend the whole weekend studying, go to work on Monday and no one knows.
Someone asked me how I remember things from books so deeply. I said I don’t. If I’m lucky I remember 1% of a book I read. The magic happens when that 1% crosses over with another 1% of something else. It makes me feel like an expert dot connector.
After studying something for a year you realise how much more there is to still learn.
When will it end?
It doesn’t. It’s always day one.
Embrace the suck.
6. The 3-year-old principle
I was at the park the other day.
There was a young boy running around having the time of his life. Up the slide, down the slide, in the tree, out of the tree, in the dirt, out of the dirt, up the hill, down the hill.
He was laughing and jumping then laughing again.
His mum came over to pick him up.
“Come on, Charlie, we’ve got to go.”
He kept laughing as she carried him away, waving his blue plastic shovel.
What is it that fascinated him?
He was playing. He was having fun. The whole world was new. Our culture has a strict divide between work and play. Study is seen as work.
You’re supposed to study to get more work. You’re supposed to work to earn money. The money buys you leisure time. Once you’ve bought leisure time, then and only then can you be like Charlie and run around laughing.
If you have it in your head study is work, it will be hell. Because there’s always more to learn. You know how it goes, all work and no play.
But suppose you have the idea that studying is the process of going through one topic and then on to the next.
Connecting different things like a game.
You start to have the same feeling about it as you might have if you were Charlie going down the slide.
You learn one thing, you use it to learn something else, you get stuck, you get over it, you learn another thing. And you make a dance out of it.
I learned if you have structured data like tables, columns or data frames, ensemble algorithms like CatBoost, XGBoost and LightGBM work best. And for unstructured data like images, videos, natural language and audio, deep learning and/or transfer learning should be your model of choice.
I connected the dots. I tell myself I’m an expert dot connector. Dancing from dot to the next.
Do this and you’ll finish a study session with more energy than you started.
This is the 3-year-old principle. Seeing everything as play.
That’s enough for now.
It’s bedtime for me.
Oh, and there’s a bonus.
7. Sleep
Poor sleep means poor studying.
You’re probably not getting enough.
I wasn’t. The best money for driving Uber was Friday and Saturday nights. People go out to dinner, to parties, to night clubs, I didn’t, I was driving. I’d go ’til 2 am, 3 am, come home and sleep until the sun woke me up at 7–8. I was a train wreck for two days. Monday would come and I’d be in a different timezone. Tuesday got better and by Wednesday I was back where I needed to be. Then the cycle would repeat on Friday.
This broken sleep schedule was unacceptable. My goal was to learn better. Sleep cleanses the brain and allows new connections in the brain to happen. I cut myself off from driving at 10 pm, 11 pm, got home and got the 7–9 hours. Less money, more learning.
Don’t trade sleep for more study time. Do the opposite.
Machine learning is broad.
To study it well, to study anything well, you need to remind yourself to:
Reduce your search space
Fix your environment
Set the system up so you always win
Embrace the suck
Sometimes do nothing
Treat study as play and
Sleep your way to better knowledge
Goodnight. | https://towardsdatascience.com/6-techniques-which-help-me-study-machine-learning-five-days-per-week-fb3e889fad80 | ['Daniel Bourke'] | 2019-09-30 04:13:27.681000+00:00 | ['Machine Learning', 'Productivity', 'Data Science', 'Learning', 'Tds Narrated'] |
Fatherhood: Joseph, Jesus’ Father | Image credit to Shutterstock
It is strange that in all this time Joseph, Jesus’ earthly father, was never promoted as the role model for men, or a champion of masculinity — although he seemed to have it all.
What do I mean by having it all? Kindness, love, patience, wisdom, emotional intelligence, the gift of intuition, integrity, a great kid, a great wife, both of whom he protected, in a highly patriarchal society and time.
Think about it: he is one of the few men in the Bible who don’t make you cringe; rather, he is a refreshing departure from the stereotype set by all those cruel and confused men in it. Joseph inspires awe (“Wow, good men who walk their talk exist and existed even then!”) and effortless respect. It was no coincidence that he was entrusted with the task of raising Jesus.
So for 2000 years, the culture has not paid attention to the admirable, cool guy, the man with the open heart. While the violent, shady characters were deemed important.
What does that tell you about the culture?
Holy people go unnoticed. We don’t read about them. We feel them.
24 December 2014
…
Shared before on my blog: | https://medium.com/@nandajurela/fatherhood-joseph-jesus-father-ed65d64d2a44 | ['Nanda Jurela'] | 2020-12-24 21:54:17.534000+00:00 | ['Fatherhood', 'Parenting', 'Jesus', 'Joseph', 'Masculinity'] |
Rethinking Wellness in 2021 | How Dyson Pure Humidify+Cool makes at-home health a priority
Ah, January. That time of the year where we make steadfast resolutions to improve ourselves, many of which are health and wellness related. This year, however, these goals look somewhat different than previous ones; we can’t go to the gym, try out a nearby yoga studio, or join a meditation class. Many of our solutions for taking care of ourselves just don’t work during a global pandemic, where social distancing has become the norm.
Photo by Burst from StockSnap
Instead of looking outwards this year to improve our wellness, many of us are taking the opportunity to make our homes as conducive to healthy activities as possible. Some have decided to invest in an at-home gym, be it with an increasingly popular Peloton, or, in my case, something a whole lot simpler (a yoga mat counts!). No matter the type of health-related journey you are hoping to embark on in 2021, transforming your physical home to be a space that inspires wellness is an important part of the process. As our days are spent increasingly in a single space, having a place that feels healthy is of the utmost importance.
Part of making one’s home feel this way has to do with more than what fitness equipment you have or what app you’ve recently subscribed to. Rather, a big part of at-home health has to do with how comfortable you feel within your own space. Feeling at ease and content at home greatly contributes to a sense of overall wellness. As a result, smart home products — as much as the next workout app — can help us to create a healthy environment at home. That is, smart home products help us to shape our spaces to our preferences, whether that be in relation to temperature or humidity, lighting, or other types of household automation. Decisions that can be made through these products are reflective of what we need to feel ‘well’ in our homes. This is especially significant given the challenging times that we live in. The health crisis has shown how much smart home products can aid in our overall quality of life. Naturally, these products can also create unwanted stress (from difficulties installing a new product, to figuring out how to use it!), but the overall promise of these products — to help us to rethink what wellness means in 2021 — is exciting.
The Dyson Pure Humidify+Cool
There are a plethora of smart home products, from smart wall plugs to robotic vacuums. Some of these devices truly deliver on improving upon the comfortable home experience. I recently had the opportunity to experience the Dyson Pure Humidify+Cool — which is one of the products that makes at-home health a priority. The Dyson Pure Humidify+Cool serves multiple purposes, from purifying the air of an entire room to adjusting humidity and temperature. The Dyson Pure Humidify+Cool uses Air Multiplier technology to circulate clean air and push dirty air back to the device for purification. When purifying the dirty area, the Humidify+Cool uses a fully-sealed filter system with HEPA media and activated carbon, meaning that 99.97% of pollutants, particles, gases, and odors — even those as small as 0.3 microns — can be captured. Beyond its essential functionalities, the Humidify+Cool can also be controlled via remote control (which is fairly inaccessible due to its lack of tactility) and voice (with both Alexa and Google). In a world where we are increasingly suspicious of what surfaces we might touch, being able to operate a device in a touch-free way has become valuable.
While my home has its fair share of smart home products, I was delighted to find the capabilities of the Humidify+Cool — from air purification to humidity control and cleaning — to be incredibly impactful in creating a sense of ease and contentment in my home, from both a psychological and a physical standpoint.
Indoor Air Quality (IAQ) refers to the air quality that people experience in and around buildings. Indoor air pollutants can cause a variety of health implications, with both immediate and long-term consequences. As Melanie Carver, Chief Mission Officer for the Asthma and Allergy Foundation of America notes, “poor indoor air quality can trigger asthma symptoms including chest tightness, coughing, wheezing, and shortness of breath. Poor indoor air quality can also negatively impact quality of life. Better air quality inside of your home, place of work, school setting, and even the car you drive can reduce allergy and asthma triggers.” In my research, I was surprised to find that indoor air pollution can come from a variety of sources — including those that are less suspect, such as newly-installed flooring or carpeting, and even excess moisture in the air. With the Dyson Pure Humidify+Cool, you can watch your IAQ ebbs and flows in real-time either via the Dyson Link App, or the LCD screen on the device itself.
The above gif showcases the metrics (and other info!) of the IAQ in my home. You can see that there is average air quality that you can view — but this is also broken down into different units, such as PM 2.5 which looks at the finest airborne particles, to PM10, which refers to pollen in the air. While the breakdown does leave me with some unanswered questions — for example, I thought that tapping through the “i” icon would tell me more about what each unit means, and how it might affect me — the opportunity to see the IAQ in a visually-appealing graph is quite useful.
You can also see that there are significant changes in the overall air quality over time. According to Dr. Mckeon, CEO and Founder of Allergy Standards Ltd. these changes can be a result of a number of factors, including lack of ventilation (causing pollutants to accumulate), dust mites, poor cleaning (or poor maintenance of filters), presence of pets, and even cooking (and other types of combustion). I personally noticed changes when cooking dinner — or when my new ‘pandemic puppy’ (yes, I got a dog during quarantine!) is shedding more heavily. The knowledge that the Humidify+Cool is tackling these pollutants gives me peace of mind, particularly during the current pandemic, when we are all at risk of contracting an airborne disease. Knowing that the IAQ of one’s home is safe creates a psychological sense of comfort — especially when one compares their IAQ with the air quality outside (as you can do with the Humidify+Cool). As Kenneth Mendez, the Chief Executive Officer of the Asthma and Allergy Foundation of America points out “we already spend about 90% of our time indoors. Due to the global pandemic, most of us are spending even more time inside of our homes. Indoor air can be two to five times more polluted than outdoor air, so during this time it’s important to be conscious of the air quality of your living space.”
Dr. Mckeon, the CEO and Founder of Allergy Standards Ltd. speaks to the importance of maintaining a Relative Humidity (RH) of 30–60% for optimal comfort (as is recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers). It is important to note that RH of above 50% provides conditions that will support the growth of dust mites, but greater than 60% will support the growth of mold. Mold isn’t the only concern when the RH is above 60%. If the RH is too high, as Dr. Mckeon notes, it’s harder for “sweat to evaporate, the body works harder to keep itself cool and this can result in the body overheating and also loss of water, salt, and chemicals the body needs.” On the flip side, you can also have RH that is too low, causing dry skin and airway irritation. Either way, as Melanie Carver points out, “humidity levels that are too high or too low can irritate the airways and trigger asthma symptoms. When the air is too humid, it can feel heavier and harder to breathe. High humidity can increase ground-ozone levels, dust mites, and mold — all of which can be asthma triggers.” Naturally, we can see that the proper humidity within a home has an enormous impact on one’s sense of wellness.
Having grown up with humidifiers, I can attest to the dance-like game that must be played in order to get things ‘just right’ — not unlike a sort of Goldilocks situation. Like the experts have outlined, high humidity creates uncomfortable environments, whereas low humidity creates dry — and itchy! — skin. With the Humidify+Cool, finding that sweet spot has been natural, creating a sense of physical comfort in my home. The typical discomfort that follows the winter season — from allergens to stocking up on soothing skincare remedies — has been replaced by an effortless solution that supports my personal humidity needs.
Another differentiator with the Humidify+Cool is how ridiculously easy it is to clean. A humidifier works by allowing water dispersion in the air to increase Relative Humidity, so they need a source of water in order to function. Unfortunately, this also means that they can become a ground where bacteria and mold thrive, impacting the air you breathe. As Melanie Carver suggests, “if you use a humidifier or dehumidifier, it is important to clean its fluid reservoir at least twice a week to prevent mold growth”. With the Humidify+Cool, not only do you get a reminder to do the cleaning — via the app and the LCD screen’s clear orange and then red warning lights — but the cleaning is, essentially, hands-off. It’s as simple as removing parts of the device, dissolving citric acid powder in water in the base, setting the machine, and rinsing everything once you are finished. For me, the immediacy of knowing that the Humidify+Cool was back in action — the warning lights disappear — was a comfort. In between cleaning, the Humidify+Cool uses UV-based disinfecting technology, which as Dr. Mckeon points out, helps to “sanitize the water in the receptacle and to prevent the dispersion of those microbes.” Dr. Mckeon also recommends that when looking for a humidifier, try to select one that has been independently tested and demonstrated its impact on micro-organisms. When you bring a device into your home to help you to create a sense of wellness, the last thing you want to think about is the fact that the product might actually be a cause of illness in and of itself. With the Humidify+Cool, not only is there peace of mind in this regard, but also a sense of relief in how easy it is to know, and do the right thing.
From air purification to humidity control and cleaning capabilities, the Humidify+Cool is certainly paving the way for me to create a healthier 2021 at home. Kenneth Mendez has a few tips for those looking to improve their IAQ: “Small steps can make a big difference. These include making sure you have an efficient HVAC filter. You should also be sure to look for the asthma & allergy friendly® certification mark when choosing an air purifier or other products.” These devices have been designed to reduce indoor air pollution — and can create a sense of wellness in your home. Products like the Dyson Pure Humidify+Cool can help you make at-home health a priority — and create some much-needed peace of mind in the new year. | https://medium.com/@lbinhammer/rethinking-wellness-in-2021-ad8ecce1cf34 | ['Lisanne Binhammer'] | 2021-01-27 16:34:57.182000+00:00 | ['Smart Home', 'Humidifier', 'Air Quality', 'Dyson', 'Wellness'] |
Mock GraphQL web app | Documenting RESP APIs is quite easy with Swagger, but what about GraphQL interfaces?
A simple mocking server with the ability to upload GraphQL files is published in the repository https://github.com/ABB-RARO/VisualQL
As we at ABB recognise the value of open source software and community, this is our team’s small (but, I hope, useful and efficient) contribution.
How easy is it?
Well, with Node.js, Apollo Server and the Express library it is very easy. The full code can be reviewed in the repository; let me pinpoint some snippets with comments.
Development details can be found in the Apollo docs at https://www.apollographql.com/docs/apollo-server/testing/mocking/, which are in fact recommended reading.
Let’s start with what we need:
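A minimal sketch, assuming the apollo-server-express, graphql-import and graphql-tools packages, might look like this:

```js
// Illustrative imports; package names follow the libraries named in this post.
const express = require('express');
const fs = require('fs');
const path = require('path');
const multer = require('multer');
const { ApolloServer } = require('apollo-server-express');
const { importSchema } = require('graphql-import');
const { makeExecutableSchema } = require('graphql-tools');
const { GraphQLSchema } = require('graphql'); // handy for type checks when composing schemas
```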
Those are our essential imports. The core pieces are importSchema, GraphQLSchema, ApolloServer and makeExecutableSchema.
Straight away we define a function to read the schema dynamically from a file:
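A sketch of such a function (the name readSchema is illustrative, and depending on the graphql-import version, importSchema may return a promise instead of a string):

```js
// SPEC_DIR is the folder where the uploaded .graphql files are kept.
const SPEC_DIR = path.resolve(__dirname, 'specs');

// Reads a schema file on demand, so freshly uploaded specs are picked up.
// importSchema also resolves any "# import" statements inside the file.
function readSchema(fileName) {
  return importSchema(path.join(SPEC_DIR, fileName));
}
```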
Here SPEC_DIR is our global variable holding the folder where the GraphQL files are stored.
Next in the row of important code pieces is how to read the schema and create the middleware, which serves the GraphiQL user interface:
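One possible shape for this step, sketched with apollo-server-express v2 options (createMiddleware and the sample mock values are assumptions, not the repository’s exact code):

```js
// Example mock values keyed by GraphQL type name.
const mocks = {
  Int: () => Math.floor(Math.random() * 100),
  String: () => 'Lorem ipsum',
};

// Builds an ApolloServer for one schema file and mounts it on the Express app.
function createMiddleware(app, fileName, endpoint) {
  const schema = makeExecutableSchema({ typeDefs: readSchema(fileName) });
  const server = new ApolloServer({
    schema,
    mocks,            // or simply `mocks: true` for fully generated data
    playground: true, // serves the interactive GraphQL UI in the browser
    introspection: true,
  });
  server.applyMiddleware({ app, path: endpoint });
  return server;
}
```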
Here mocks is an object holding the mock values; one can also use a mock list.
The last two things we need to do are:
a) serve the middlewares for the GraphQL files
b) serve the application as a coherent web server
Our objective a) looks like this:
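For illustration, assuming one endpoint per .graphql file found in SPEC_DIR:

```js
const app = express();

// One GraphQL endpoint per schema file, e.g. specs/orders.graphql -> /orders.
fs.readdirSync(SPEC_DIR)
  .filter((file) => file.endsWith('.graphql'))
  .forEach((file) => {
    createMiddleware(app, file, '/' + path.basename(file, '.graphql'));
  });
```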
Objective b) is classic express server:
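Something along these lines, with the port number as an assumption:

```js
const PORT = process.env.PORT || 4000;

app.listen(PORT, () => {
  console.log(`Mock GraphQL app listening on http://localhost:${PORT}`);
});
```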
Easy huh? The last thing is how we change this dynamically. In the project I used a simple upload function, which accepts just a PIN to provide “authentication”, stores the file and restarts the middleware:
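A sketch of that endpoint; the field names, the PIN check against an environment variable and the delay value are illustrative assumptions rather than the project’s exact code:

```js
const upload = multer({ dest: SPEC_DIR });

app.post('/upload', upload.single('schema'), (req, res) => {
  if (req.body.pin !== process.env.UPLOAD_PIN) {
    return res.status(403).send('Wrong PIN');
  }
  // Delay the (re)creation of the endpoint instead of doing it immediately.
  setTimeout(() => {
    createMiddleware(app, req.file.filename, '/' + req.body.endpoint);
  }, 5000);
  res.send('Schema stored, its endpoint will be refreshed shortly');
});
```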
Why setTimeout — well, this is stupid, but somewhat effective DDoS prevention :) As long as we do not have proper authentication, let’s have at least this.
For upload persistence we use multer, but this is really not that important.
Conclusion
I hope this will help to build, document and spread GraphQL interfaces as, IMHO, GraphQL is really the future way of creating portable, extensible and flexible service interfaces. | https://medium.com/@spilka-ondrej/mock-graphql-web-app-b86d0d489c7 | ['Ondřej Spilka'] | 2020-12-17 09:58:56.281000+00:00 | ['Mock', 'Nodejs', 'GraphQL', 'Web Applications'] |
In-Focus with Pete Weston | The people and the stories behind Focus Solutions.
Pete Weston, Senior Developer at Focus Solutions, talks about the day-to-day role and occasional challenges he faces, his love of the outdoors and his new hobby, paddle-boarding.
How long have you worked for Focus?
Been here since December 2007, so just the 11 and a half years.
What did you do before Focus?
I started off working the cash desk at HFC in Wolverhampton then moved to the Head Office and performed several roles around the business, including developing and building documents and reports using ISIS. The English legal agreements were fine, but the Polish and Hungarian documents were a little more time consuming! Eventually I found myself in the testing department and obtained some qualifications. I got a call out of the blue about a testing role at Focus and decided to come for an interview and I was in, hoping to move into development at some point.
What made you follow this particular career path?
Honestly, I fell into it. After school I undertook an apprenticeship course in business studies then I had to choose between working at the local newspaper or for an American banking corporation… well I thought there would be more money in banking, so chose that. How wrong I was! Sometimes I imagine that I could have been a journalist, reporting on football or all kind of sports, travelling around the world…
What does your day-to-day job entail?
Depends on what project I’m working on, but could be a simple list of caption changes or changing field properties, maybe writing some simple or complex database migration scripts, maybe having “fun” merging a Product release into an Implementation and upgrading the client, or even solutionizing a client’s new idea and trying to see how best it can fit into focus:360°, occasionally there’s some complex calculations that need to be written within a new or bespoke service.
There’s a wide variety across the focus:360° application that I get involved in, mainly for an Implementation of some kind. I’d say there’s never a dull moment, but then I would be lying, every job has some degree of monotony. Also difficult bugs that take time in hunting down the cause, seem to keep finding their way to my workload.
Favourite thing about working for Focus?
Rarely having to work weekends, everyone needs some downtime, too much work is bad for you. We only get one go on this earth, do what makes you smile. Oh and the people, yeah, the people here are great, some even good friends.
Outside of work, how do you relax and like to spend your free time?
Running, well occasionally a run is a relaxing thing, honest.
Golf, now that’s not actually relaxing — stupid game.
Paddle-boarding, a new thing I have taken up and enjoy, love being on the water.
Cycling (MTB) well a duathlon\triathlon usually means some time in the saddle.
Walking, love getting out in the countryside, especially if there’s a pub or two on route (there’s usually at least one!).
Camping, well now I have a VW camper, be wrong to not use it for a weekend getaway or two.
Holidays, can’t beat getting on a plane to go snowboarding or soak up some rays.
Pub, well, everyone loves a bit of pub time, surely?
See, told you weekends were important.
Interesting fact about you?
Interesting? Me? Erm…I ran a marathon once without training for it. Nope, that’s not interesting. I’m planning to take part in the UK Paddle-boarding polo championships later this year, something different right? No, hmmm, I have a metal plate attached to my jawbone.
You’re stranded on a desert island; what three things would you want with you?
Hunting knife, lighter, sun cream | https://medium.com/@Focus_Solutions/in-focus-with-pete-weston-c5d7fff6745 | ['Focus Solutions'] | 2019-04-12 13:18:33.163000+00:00 | ['Fintech', 'Team', 'Focus', 'People', 'Teamwork'] |
How NOT to Write Medical Scenes in Fiction | By Lawrence Martin, M.D.
[email protected]
As a retired physician (pulmonary and critical care medicine), I pay special attention in my critique group to “medical scenes.” Almost every week I hear read aloud at least one such scene — involving doctors, nurses or paramedics. Character A is shot and taken to the ER; Character B visits a specialist for her rare disease; Character C overdoses on medication; Character D is visited by family in the intensive care unit. There is a wide variety, but they almost all have one thing in common: something is not right with the portrayal.
I always try to point out just what isn’t right, why the scene doesn’t work or make sense, and why other readers with any knowledge of the medical field will find the scene problematic. I do this in a friendly way, and usually the writer thanks me, and agrees to make needed changes (but not always!).
I can’t overemphasize that if the writer gets it wrong, the wrongness will likely have an adverse effect on the story. Here are just two common mistakes from novels in progress, shown by example.
***
BACKSTORY: Dirk McGirt tried to kill his business rival Sam Simpson in New York City. Despite being shot with three bullets, Simpson lived. After life-saving surgery, Simpson is transferred to the hospital’s ICU, in “critical condition.” This information is in the next day’s newspaper. McGirt reads the paper and agonizes over how to finish the job; he worries about being implicated if his victim survives to talk to the cops. McGirt decides to call the hospital, to check on Simpson’s status. His call is transferred to the ICU, where a nurse takes the call.
SCENE
“Hello, can I help you?”
“Yes, this is George Simpson, Sam Simpson’s brother. I’m in California and just learned my brother is in the hospital. Can you tell me something about his status? Is he going to make it?”
“So far, your brother is stable. He’s on life support, but the doctors think he’ll pull through.”
“Life support?”
“Yes, a mechanical ventilator. It’s doing the work of breathing for him. We’re hoping he can get off the machine in another two to three days. Will you be traveling to see him?”
“I’m leaving on a plane this afternoon, will be there tomorrow. When are visiting hours?”
What’s wrong with this scene? It is simply not realistic, given the HIPAA (Health Insurance Portability and Accountability Act) regulations. Medical information is not divulged to strangers, and certainly not over the phone. It’s an unintended mistake that weakens the ensuing story.
***
BACKSTORY: Jon Jones, age 47, is seeing a female psychiatrist, for depression and anxiety. He is a writer, currently suffering writer’s block, which he attributes to a failed marriage. He and his wife, now living separately, are in the midst of a contentious divorce. His psychiatrist is an attractive woman, in her late 30s, Dr. Melissa Throckmorton. Jon has been seeing her once a week, for the past two months. In this scene in her office, Jon is talking about his wife. It takes place toward the end of his session, where the patient is talking with Dr. Throckmorton about his wife.
SCENE
“Melissa, she is really busting my chops. Now she wants my car. First it was the house, then all the jewelry, then the time share. But why does she need my Ferrari? It’s just a legal game, to get at my retirement money.”
“And it keeps you from writing?”
“How can I? Every time I sit down to write, instead I just think about how I can hurt her, get her off my back.”
“Honey, as I’ve told you before, don’t think those thoughts. You need to concentrate on the positive. Get through this divorce, and go back to doing what you know best.”
[After further conversation]
“Well, Melissa, I feel a little better. Just hope it lasts ’til next week.”
“It will.”
Doctor and patient stand, and as he prepares to leave, she gives him a hug. “Hang in there, Jon. Things will get better. You’ll see.”
What’s wrong with this scene? You should not show professionals in an unprofessional light, unless that is to be part of their character. In this example, Jon calls his psychiatrist by her first name, and then in the end she is shown acting in an unprofessional manner, hugging and calling him “honey.” This would never happen in a true psychiatrist-patient relationship, and when I pointed this out, the author replied he wanted to show the doctor had empathy for the patient. Well, no! That’s not how psychiatrists show empathy. More importantly, the reader is going to assume there is some hanky-panky going on, and that it will all come out later. Except there isn’t and it doesn’t; the author meant to portray this scene as an ordinary therapist-patient relationship, leaving the knowledgeable reader confused. If you are going to show professionals acting unprofessionally, there should be a reason.
***
There are other examples of “medical mistakes” I’ve encountered in fiction scenes. Here are a few more:
· Telling us what is happening without showing. “The doctor assessed the wounds and decided surgery was needed” [What were the wounds?]
· Factual mis-statements, e.g., administering penicillin to a patient in 1935, when the drug was not available until the mid-1940s.
· Using clichés like “my doctors told me…” when only one doctor is involved
· Giving the wrong skill set to a specialist (e.g., having a gastroenterologist do abdominal surgery)
· Making a patient’s status “Do Not Resuscitate” without discussion with the patient or the family
· Having doctors perform an autopsy without obtaining permission from the family
· Mis-stating medical treatment for specific conditions, e.g., surgery for a collapsed lung (ordinarily treated with a chest tube)
I am not alone in pointing out factual errors in our club members’ fiction. A retired marine points out mistakes in military scenes. A retired police chief does the same regarding crime scenes. And our technology guru frequently points out mis-statements about items like computer viruses, cell phones, and space travel. (If you’re going to put astronauts on Mars and have them call home, don’t show a scene with zero delay in the conversation; each transmission between the planets will take several minutes.)
In fiction, of course, the story is made up. But if you want to engage readers, information that you intend to be factual or realistic should be so. It slows the reader down, or stops them completely, when you craft some aspects of your work out of ignorance. The mistake may be minor, but sometimes minor things bother people the most.
So if you want to write medical scenes, you have two choices. Make them realistic, so they don’t stop the reader. Otherwise, major deviations (e.g., an unprofessional psychiatrist, a stupid nurse, a doctor practicing beyond his training), should be integral to how you plan to develop your story. Unrealism is okay if you can show the reader you know what you’re doing, and the deviation makes sense. Otherwise, not recommended. | https://medium.com/@drlarry437/how-not-to-write-medical-scenes-in-fiction-ced87e7dea0f | ['Lawrence Martin'] | 2020-12-22 19:37:32.512000+00:00 | ['Medical Fiction', 'Medical', 'Fiction Writing', 'Writing Tips'] |
“I’ll Always Be a Tool & Die Maker” | By Max Larkin
In advance of our radio show on the future of work in the United States, we (Conor Gillies of WBUR’s Stylus and I) visited Peter Forg Manufacturing, an unusual factory (est. 1881) between Park Street and Rev. Nazareno Properzi Way in Somerville, Mass., about half a mile from Harvard Square. It is, in the words of its receptionist, “an anachronism.”
We went to record a short audio segment to play during our show with the technologist Ray Kurzweil and the economist Andrew McAfee. Here that is:
For three years in and out of college I lived right across a lot from the Forg Mfg. building. I knew it as a metal-stamping factory that opened unpredictably on weekdays, but always early. I quickly forgot how apocalyptically loud it is during hours of operation.
Many of the neighborhood’s old warehouses are now being repurposed as condominiums and businesses that serve the city’s whimsical youth, places where you study “circus arts” and do high-priced indoor rock-climbing. The receptionist (17 years employed there) conceded that the plant could benefit from free advertising on public radio. Everyone who lives on Properzi Way in Somerville knows Peter Forg is there, still rattling china cabinets 130 years after its opening. Set back from the street, it looks semi-abandoned, though, awaiting its second life. (Plus, when you see ‘Mfg.’ don’t you think, gone?)
The firm is now run by Dave Forg, who lives in Lexington with his wife and children, and who represents the fifth generation of Forgs to direct the stamping of metal in Somerville. He started there in his early teens, and, but for a brief interlude in business school, he’s received his paychecks from the firm ever since.
Dave began our tour by scrounging around a little to find something — a yellow rake designed to rip down shingles — that comes out of Forg as a recognizable consumer product. (They only make the teeth of the rake.) He employs a little more than a dozen people — mostly men over 55 — and since most of their output is contracted and sold to other manufacturers, they are part of a makers’ economy in America that is largely invisible to the people who buy things. We will not necessarily know if it closes. | https://medium.com/o-s/ill-always-be-a-tool-die-maker-8bebadea6024 | ['Radio Open Source'] | 2015-10-07 17:56:27.939000+00:00 | ['Work', 'Manufacturing', 'Massachusetts'] |
Type Inspiration 02 — August. Featuring Helvetica and Love by VJ Type… | This experiment combines one of my favourite fonts, Helvetica, with a font called Love from https://vj-type.com/. It’s a quirky mashup of old and new, but I have used this exercise to push my design style. | https://medium.com/the-innovation/type-inspiration-02-august-95e16d6d82e9 | ['Debbie Turner'] | 2020-08-06 17:27:52.738000+00:00 | ['Design Process', 'Inspiration', 'Type Inspiration', 'Design', 'Typography'] |
SwiftUI renderer for React Native | Last year at Software Mansion we announced our full-time commitment to Open Source, and since then, we’ve managed to grow our React Native open source team which has greatly impacted the way our open source libraries like react-native-gesture-handler, react-native-reanimated or react-native-screens are maintained. Our responsiveness on issues and pull requests is now much better, developers get new releases on a regular basis, and we also have the opportunity to work on new investments in the area of React Native Open Source. A prominent example of such an investment is the work on the next iteration of the Reanimated library (read more in the announcement post). But we constantly seek out for new ways to contribute our skills to the development of React Native platform and its ecosystem, and it feels like the best way to do so in our position is to undertake new projects and experiment with some ideas that we have. In this post, I want to share the story of one such experiment we decided to try out recently and the lesson we learned from it — the experiment was about building a SwiftUI-backed renderer for React Native.
Motivation
Ever since React became mainstream we have observed a trend of new UI systems being designed around very similar, declarative principles. This trend goes beyond JavaScript frameworks, and recently reached the native mobile ecosystem with SwiftUI (by Apple) and Jetpack Compose (for Android) being the new contenders for becoming the default UI systems for their corresponding platforms.
We believe that React Native will need to adapt to those new ways of interacting with the platform-specific native UI APIs — the problem here is that there are many different ways of approaching such an integration, and none of them seemed trivial to us. React has been designed to work with tree-based UI systems (e.g., DOM, Android’s view system, UIKit, AppKit) by performing direct tree manipulations like adding new nodes or changing node attributes. With SwiftUI we no longer interface with the UI tree structure directly, and in fact the underlying tree does not even need to exist in some cases. This means that in React Native we can no longer hold references to native view instances and further use these references to make updates to the UI.
With a number of different ideas in mind, we decided to build a prototype implementation of the SwiftUI renderer for React Native — a drop-in replacement for the core’s UIKit renderer which may allow us to remove the dependency on UIKit altogether. It is important to note that embedding custom SwiftUI components in React Native is already possible (this can be done using a “hosting” container where components written with SwiftUI can be placed as a part of the UIKit view structure — we recommend this great post by Alexey Kureev which covers this approach). However, building a React Native renderer is a completely different thing that opens up a new set of possibilities and not only brings full control over how all view primitives are rendered, but also allows to freely build nested hierarchies of such views.
SwiftUI
SwiftUI is a declarative UI framework introduced by Apple in 2019. The keyword “declarative” makes it sound similar to React, doesn’t it? Behind the scenes, SwiftUI follows the same principles as React and React Native, especially the reconciliation concept that makes the rendering engine only re-render those views whose state has changed.
We are already seeing Apple invest heavily in unifying the software running on all of its platforms (and even hardware by switching Macs to ARM architecture). This also applies to the way they recommend developers build apps for these platforms. I wouldn’t be surprised if some new UI features come only to SwiftUI, just like how new features or frameworks are made available from Swift but not from Objective-C. Additionally, SwiftUI has some other advantages over UIKit:
First-class support for all Apple platforms: iOS, iPadOS, macOS, watchOS, tvOS and extensions such as iOS widgets or watchOS complications.
It provides many great perks right out of the box like handy built-in SVG-like shapes and visual effects such as shadows, masks, blurred views or even changing a view’s saturation.
A more composable approach. Instead of having a base class ( UIView ) that is heavy and complex as it has to know how to handle a lot of different properties, SwiftUI uses lightweight primitive views responsible for applying only specific kinds of effects. Every view in SwiftUI is just a struct that implements the View protocol, thus we’re not dealing with any base class.
) that is heavy and complex as it has to know how to handle a lot of different properties, SwiftUI uses lightweight primitive views responsible for applying only specific kinds of effects. Every view in SwiftUI is just a struct that implements the protocol, thus we’re not dealing with any base class. A much easier and cleaner way to create custom native components. It’s as easy as writing a few lines of code.
The scope of the research
During this research I tried to cover some core, currently existing functionality of React Native. The goal was to build a functioning React Native app demo without relying on UIKit for rendering any views. Below I listed the steps that I have gone through as part of this project:
Investigate how to integrate with Fabric and how to map shadow nodes to SwiftUI views instead of UIViews.
Use Yoga layout metrics to set the proper view size and offset.
Listen for shadow tree updates and update the UI accordingly.
Read component props passed from the JavaScript side and be able to use them in Swift.
Implement adding some basic styles like: background, color, border, and opacity.
Deal with text and its frame size that we must calculate and pass on to Yoga. Native text view takes only as much space as it needs and so Yoga itself cannot measure it properly in a cross-platform manner.
Send native events to JavaScript using event emitters.
Implement some of the most commonly used components like Button, ScrollView, Image and TextInput to check how much of the existing functionality we’re able to cover.
Try out some SwiftUI features such as masking a view with another, adding a shadow to the view or adding a blur.
I managed to put all the bricks together and came up with a foundation that allows us to build and run a simple application with proper layout, some styling and interactions. As it would be too much for a single blog post to discuss the implementation details, in this article we only focus on the results and limitations that we discovered; we will share some more insights about the internals in the follow-up blog post.
iOS Widgets
iOS 14.0 introduced a new kind of app extensions — widgets. It turns out that they don’t support the traditional way of making apps based on UIKit. Surprisingly, UIKit is available there, however, it’s impossible to use it for rendering, making them the first feature of iOS that is designed to only support SwiftUI as the UI framework. The lack of UIKit support means we cannot use React Native for building widgets, and even if we know how we can embed SwiftUI components within the React Native view tree that still wouldn’t help here. However, since widgets can’t be too complex visually (they occupy a small portion of the screen and aren’t really interactive), we would only have to expose a small number of SwiftUI components to React Native. This sounds easy, but it’s actually a long way off. The SwiftUI-based React Native rendering layer would work here, even if the main app were to use the standard React Native renderer.
macOS
Recently, Apple announced the Mac Catalyst project (also known as UIKitForMac) which enables developers to make iPad apps available for macOS users too. That is amazing and good enough for many projects that don’t have to care too much about this platform and don’t use any of its platform-specific features. However it was never meant to be a fully-fledged solution to developing macOS apps. Some people got it running in React Native with a few modifications, but it isn’t going to be officially supported any time soon.
I must applaud Microsoft for putting a lot of effort into creating and maintaining react-native-macos which is a react-native fork with a lot of patches that makes React Native work on macOS based on the OS native UI framework — AppKit. As it turned out, React Native uses UIKit framework in many more places than just the UI-related codebase, so I had to use some of those patches to make my SwiftUI renderer compile on this platform.
Limitations
Life is real, nothing is ideal. With SwiftUI we gain a lot but lose something else. To put it bluntly—SwiftUI is still not as powerful as UIKit or AppKit and probably won’t be for the next few years.
For instance, I found it quite painful to work with text inputs (TextField in SwiftUI’s nomenclature). The only things that can be configured are the value and the placeholder. It’s not possible to change the tappable area, get the cursor position or read the text selection. To support multiline input we have to use TextEditor, which doesn’t even let us know when something changes. Moreover, triggering on-demand events such as focus or blur is not possible and most methods that we call on components’ refs have no counterparts in SwiftUI yet. Lastly, there is no way to scroll to a specific point in scroll views like you can in UIKit. Instead, we can only scroll to a subview by its identifier (although this may actually be a good thing because it’s what we usually want to do).
Conclusion
A SwiftUI renderer may be a good solution to bring React Native to more devices, with just one UI codebase used across all Apple platforms. Of course, it’s hard to predict the direction of SwiftUI development and how fast we’ll get to the point where using SwiftUI is a reasonable choice, but it’s always better to be prepared.
Here is a sneak peek of a React Native app and a widget that I’ve been using as a playground, entirely rendered by SwiftUI. It presents all of the currently implemented functionality.
In the next blogpost I’ll describe the technical side, show you some examples, and finally make the repository with our proof of concept public and open it for contributions.
… to be continued soon. | https://blog.swmansion.com/swiftui-renderer-for-react-native-2b62fda38c9b | ['Tomasz Sapeta'] | 2020-12-17 17:50:26.017000+00:00 | ['User Interface', 'Swiftui', 'Mobile Development', 'React', 'React Native'] |
What sets us apart from AI | What sets us apart from AI
Photo by Andy Kelly on Unsplash
Today’s job market in certain countries seems saturated with competent professionals flooding vacancies because of a global job shortage of “regular jobs”. The idea some have is that AI is progressively replacing humans, ridding them of jobs…and indeed it is, but not completely.
One of the big questions workers and professionals are trying to answer in this age of incredible technological progress, is whether our work will be substituted by Artificial Intelligence. The short answer is no and, well, the long answer is in the following lines.
In history, humans have been at the center of their jobs and many of these have been manually executed for centuries. We are now watching, as automation changes not only our job opportunities but also the way we work. The IT sector has grown exponentially and we are failing to keep up with its changes and to observe where opportunities are awaiting.
Artificial intelligence is an IT product created by humans to help us solve problems more quickly. Thanks to it, we can now calculate faster than ever before and tackle complex problems with a brain-power exceeding that of every human who has ever lived. AI can make machines move like humans and perform many mechanical tasks humans do every day.
So what makes humans unique and why won’t Artificial Intelligence take our place soon?
Humans can communicate with emotions and emotions cannot be learned by a computer. They can be recognized, someone might attempt to replicate them, but emotions are a free flow of feelings that humans have embedded into their brains and their hearts.
This makes us different. No calculation will take the place of emotions and no machine will be able to substitute us in roles where communicating is fundamental. Nowadays it’s vital to form great relationships with people, even in an era in which social media is so widespread. Our challenge lies, in fact, in utilizing technology to become better communicators and exploit all the possibilities IT opens up for us.
Is it really true that there are fewer jobs?
Not really. Jobs come and go.
The job market is evolving quickly and we should try to evolve with it. Non-formal learning is progressively taking the place of the traditional learning experience, teaching us that expensive degrees aren’t more valuable than self-learning. “Old” job roles are progressively disappearing but many new job roles are making their way into view. We don’t have fewer jobs, we simply have different jobs.
Gain Access to Expert View — Subscribe to DDI Intel | https://medium.datadriveninvestor.com/what-sets-us-apart-from-ai-9d5171f086b9 | ['Karen Ercoli'] | 2020-06-19 04:09:49.831000+00:00 | ['Jobs', 'Communication', 'AI', 'Artificial Intelligence', 'It'] |
The Danger of Call-Out Culture in the Sex-Positive Community | The Danger of Call-Out Culture in the Sex-Positive Community
Are we taking things too far?
Photo by Bruna Rodriguez Torralba on Scopio
“I’m so sick of women posting photos of their dildos and thinking they’re all sex-positive and woke. People just want to do the fun stuff when it comes to being in this industry. No one wants to do the work.”
I’ve heard this endless amounts of times in my social media circles, lately. Sometimes with even more critical and downright insulting language.
I find myself questioning if some of it is going too far — especially when it comes to people talking about sex, writing about sex, and doing healing work around sexuality.
Are we failing to be sex-positive with the widespread vocalization of our judgments of other people’s inner work around sexuality?
What’s “the work?”
This term, “the work,” is making the rounds these days. I think the general consensus is that it means “deep self-explorations and emotional audits.” And taking responsibility for oneself and the biases that our culture and experiences taught us. And moving on with a broader, more compassionate, more inclusive lens on life.
“The work,” though is taking on new meaning as it barrels through different social landscapes, different industries, different groups and subgroups.
When it comes to people whose work is part of the sex-positive movement or sex wellness, what does “the work” mean? No, really. What exactly does it entail?
I’m sure there are a lot of things that many of us would touch on here: healing our histories with sexual trauma, for instance, or identifying our biases and toxic mindsets around sexuality, or actively unpacking our sexual shame.
But also…there’s a lot of emotional, social, and spiritual “stuff” around human sexuality within a puritanical, patriarchal, white supremacist culture and an endless amount of ways that “stuff” manifests within each of us.
So why are we acting like there’s a recipe here that we’re all supposed to follow to become sexually woke? Why are we acting like there’s approved course material around sex-related speaking, educating, writing, or healing that is universally recognized?
What does anyone know about someone else’s journey?
When people talk about this, especially on social media, sending their message out into the universe, I always want to ask: Who exactly are you talking to? These criticisms made toward an unidentified group of people don’t feel very helpful to me. In fact, I fear those who are just starting their journey might internalize this in those insecure first steps toward their own liberation and pause or slow down because of that.
So here we are, using the considerable influence of a large social media platform to make criticisms of unidentified people’s journey to sexual liberation. And that helps how?
I’m not sure that’s doing anything but creating more judgment around people’s sexuality and sexual journeys. Because, and let’s be very clear about this: another person’s sexual journey is none of anyone’s business. And where they are in that journey might not be (probably will not be) apparent to an outsider even if they share a lot about their sexuality in very public ways.
I really think that we need to explore this tendency to judge, monitor, or criticize people’s journey of sexual liberation within the sex-positive community. Because it doesn’t seem very sex-positive to me.
Why are we so focused on other people’s journey?
I know there are good educators and healers and writers out there who feel this way because they are worried about the impact made by someone entering the field who is only in it to make a buck (Ha! Good luck with that!) or who still has a lot of toxic biases around sexuality.
But you know what? It’ll be okay.
We can’t try to hold up a perimeter around this space out of fear that people will come in and make all kinds of messes. First of all, no one owns this space and secondly, people are going to come in and make a mess everywhere they go because we’re human and that’s what we do.
But here’s the good news: It doesn’t matter.
If someone is in this for money or social clout, their followers will eventually see through the mask. Anyone who is not the “real deal” will not be able to trick people for long.
Tricksters, posers, and phonies aren’t going to permanently damage the entire sex-positive community or the intentions of this movement. They’re not ultimately going to negatively affect the bottom line of anyone who is in this for the long haul. And if they screw up and make mistakes…well, guess what? We all have. They deserve the same opportunity to mess up, misstep, flounder, and fall down.
Yes, we need people who are strong, committed, and rooted in the desire for everyone’s sexual liberation to participate in this community. But the ones who are not will not stick around for long.
Where’s the “positive” in sex-positive?
Ultimately, I think we can do a little better in the sex-positive community. Yes, we can — and should — hold ourselves to high standards. We can ask others — without expectation — to hold themselves to high standards. And, just as important, we can make room for those who haven’t hit that bar yet.
I’m not sure who it helps to criticize vague groups of unidentified people whose progress in their journey to sexual liberation we profess to know.
Ultimately, we have our own work to do, don’t we? Maybe we can leave other people to their journeys and get back to work. People’s actions will speak for them and everything will sort itself out, accordingly.
On that note, I should probably stop criticizing vague groups of unidentified people and get back to work because what other people are doing really isn’t my business…
© Yael Wolfe 2020 | https://medium.com/sexography/the-danger-of-call-out-culture-in-the-sex-positive-community-316956e611db | ['Yael Wolfe'] | 2020-08-19 04:14:10.947000+00:00 | ['Culture', 'Sexuality', 'Self', 'Relationships', 'Growth'] |
Fintech Predictions for 2021 — Why We’re On The Precipice of Significant Change | By Formatorginal
When Sean and I started Anthemis more than a decade ago, we set out to transform the financial system for a digital age. We wanted to deploy human and financial capital in a diverse and inclusive way and, in the process, create a more transparent, resilient and fair market system for all. You know, just the simple stuff.
Fast forward to this most challenging year, and our early thesis has never been more relevant. The role and promise of fintech to help people and businesses navigate the financial hurdles of 2020 have been enormous. There’s a growing industry-wide focus on issues of diversity, inclusivity and democratization. And, we are seeing a record number of deals in the sector, high-end valuations, increased M&A activity and potential IPOs.
The takeaway is that the pandemic proved to be a catalyst for some much-needed change. Collectively, the steps we take next will define a new era for the financial system and, if we do it thoughtfully, we can create a system that’s not only more resilient but also more accessible and diverse. As we wrap up an unforgettable year, here’s a look at the trends that will drive additional changes in 2021 and beyond.
The focus on diversity in founders and teams is more than a trend. Over the years, I’ve witnessed much discussion about — but very little action around — funding startups founded by women and people of color. But it feels very different this time. The industry recognizes the opportunities for generating alpha when you’ve got a diverse team looking for deals, not to mention diverse leaders creating products and building and running companies. But, too many times, excuses about how hard it is to find and back these founders result in dismal change in the numbers. At Anthemis, 26 percent of our founders and CEOs are people of color and 21 percent are women. Our team has always been at least half women, 40 percent of our team identity as BIPOC and 11 percent of our employees are LGBTQ. And, we are at the center of an ecosystem that is exclusively financial services — one of the least diverse industries in the world. So, forgive us if we have a hard time believing the excuses.
I’m thrilled to see the discussion among investors and allocators — the LPs and institutions that back the private equity market — turn to this topic in earnest as I believe that, this time, it will yield more than just talk. We need to hold one another accountable for change, which means getting real about the numbers. We should demand that the industry reports on its diversity statistics and create a standard of excellence for what is meaningful and fair. We need to bring a cross-section of viewpoints and experiences front and center in everything we do — from building our pipeline to finding new ways to engage the market. And, we need to do the work with less diverse founders, to help them build consciously and purposefully the same level of resiliency into their companies. It’s a tangible shift, which we are starting to see across the industry, and one that will benefit companies and customers alike.
Business models serving underrepresented communities and creating democratized access to capital will lead the economic recovery. The pandemic shone a spotlight on the growing inequality that’s plagued our economy for centuries. It was the workers with little or no financial safety nets who found themselves suddenly unemployed. It was small business owners, especially in communities of color, who struggled to access relief funds. But, these problems also opened the door for new startups and models to serve markets that have been ignored. For example, there are still lots of small and mid-sized companies that remain underbanked. And there are millions of consumers who are ready to get out of debt responsibly, access new products and services as they enter a new part of their life cycle and who want to — and frankly need to — prioritize their financial wellbeing. As we move toward recovery, they’re going to need new financial services and new ways of interacting with banks, insurers and more. The startups that rise to this occasion will pave the way for a new era of more inclusive financial services.
Financial services are starting to adapt to new models of work and life. The financial services industry was created to serve Fortune 1000 companies at one end and traditionally structured small businesses — those with full-time employees and a physical location — at the other. It’s a barbell shape that doesn’t offer many resources for those in the middle. Meanwhile, the way we live and build businesses is changing dramatically. Think about the explosion of gig workers and the full-time employee who also has a lucrative side-hustle. As we head into the new year, expect to see more products and services designed for non-traditional business owners. We’re also starting to see fintech designed to support financial wellness for unbanked or underbanked communities on the consumer side. Peer-to-peer banking systems will give people access to products that haven’t been available to them before.
Techfin gains traction bringing new players to the game and demanding a new look at and understanding of regulation for the digital age. While the culture and origins of fintech can feel very different from traditional financial services, fintechs still operate in the same space. They’re managing assets or taking consumer deposits and, as a result, are subject to the same regulations as their traditional counterparts. But, the componentization of financial services — the idea that all companies have finance embedded within them — means that there is an increased number of players who will become an important part of the larger financial services ecosystem. Therefore, a number of opportunities will emerge to serve financial customers outside of the traditional regulatory structure. This transition to “techfin” acknowledges the ability to do just that. We’re going to hear more discussion that moves away from issues of what should — and shouldn’t — be regulated and, instead, addresses how we can create solutions that provide the best of both worlds.
For investors, Silicon Valley matters less and less. We made an early bet on distributed teams — we’ve been distributed since Anthemis started. We’ve long thought it was important that our people understand the nuances of different geographies and believe that they could get into the global deal flow. But, those deals come from everywhere. We don’t rely on Silicon Valley exclusively to generate investment opportunities. Looking ahead, more VC firms will rethink their reliance not just on California but North America in general. Instead, they’re going to focus on accessing transactions that make sense for their mission and doing this in a way that balances their portfolio return. That said, having a deep understanding of local regulatory, institutional and cultural frameworks is still essential and having people on the ground around the world will matter. The building blocks of finance may be similar across regions, but being local will not only help you access the best deals, it will give you a bird’s eye view of how the industry is evolving.
New asset management products will emerge to capitalize on the shift toward investing for a double bottom line. In the same way diversity, equity and inclusion (DEI) has been spotlighted this year, we have also seen the focus shift toward environment, social and governance (ESG) investing. If this renewed and expanded perspective is going to stick, it will require products to emerge that prioritize and mandate virtuous cycle outcomes. The financial services industry has long been vulnerable to predatory behavior by an unscrupulous few whose efforts can create a ripple effect of negative outcomes for many. While we have witnessed the impact of this in waves and spurts throughout the last century, the stakes for change have never felt higher. While we are optimistic about the new funds and structures emerging to back companies with ESG as a priority, we are even more excited about the idea that the asset management industry itself can evolve in its economics to support this democratization even further. There are several new models emerging in which funds and products require — and are built for and by the participants — where owners, customers and employees all share the wealth, and we are hopeful that 2021 is the year these products take off.
2020 was a painful year for so many. But, the challenges have spurred people to think deeply about problems in our society in ways they haven’t before. It’s hard work, but it’s also the start of much hope. And, I think that, when we reflect back on 2021, we’ll have made that much more progress toward a financial system that’s truly built for everyone. | https://medium.com/anthemis-insights/fintech-predictions-for-2021-why-were-on-the-precipice-of-significant-change-4daf52302ec1 | ['Amy Nauiokas'] | 2020-12-23 15:15:19.400000+00:00 | ['2021 Prediction', 'Fintech', 'Techfin', 'Fintech Startups', 'Diversity In Tech'] |
Madi Rifkin and Mount — Cohort 13 Founder Spotlight | By Tiger Tam, Program Associate at Blue Startups
This week, Blue Startups sat down with Cohort 13 founder, Madi Rifkin, CEO and founder of Mount, a platform that generates ancillary revenue streams at AirBnB properties through the management of electric scooters and bikes. Madi touched on the vision behind the business, micro mobility, and the future of Mount.
CEO and Founder of Mount, Madi Rifkin
What is the story behind Mount? How did the idea come to life?
When I was twelve-years-old, I created a bike lock (out of necessity, because I forgot it everywhere). I entered it into a Shark Tank-style competition and won. By the time I was fifteen, I had a fully-funded patent.
In college, I was studying entrepreneurship and decided to start a bike lock company. I learned all I could about bikes and this eventually evolved into scooters. I took the lock into the scooter market, where I learned about micro mobility and met Mount’s team. We ran with the scooter lock idea for a while until we came out with an IoT device and a management platform for assets. Then COVID happened. Micro mobility went on the downturn and we had to figure out a path forward. Micro mobility is too chaotic on its own, so I asked myself, what’s an industry where micro mobility would do really well? We had the idea of giving it structure via hospitality.
How did you meet the team that founded Mount?
Rishab Naya, CTO and Co-Founder at Mount
Rishab, our CTO and co-founder went to BU and I went to Northeastern. We worked on various projects together, and we have very different skill sets so it works well. Our third co-founder, Matt, comes from the micro mobility industry. He was working at JUMP and running a fleet of 500 bicycles in Denver. I showed up at his warehouse and said, here’s what I’m doing, wanna come on board? So he did. At this point he’s more of a founder advisor but he definitely brings that expertise to the table.
Can you explain micro mobility for people who don’t know?
Micro = small, mobility = movement of things. Micro mobility means any trip taken on an electric vehicle that’s under five miles. 60% of all car trips in the US are under six miles (2017). It seems really inefficient to have a car if you’re only going five miles. That’s the whole premise behind the term micro mobility. Make cities more bike-able and walkable which would in turn help the environment. Unfortunately scooter companies say they’re reducing your carbon footprint, but what they don’t tell you is that a scooter lasts three months on the street and goes to a landfill. Hopefully they can do better.
Did your travel experiences impact the company and the vision in any way?
Totally. Mount’s team is made up of people that are a part of micro mobility and also travel enthusiasts. We’re all remote at this point because of COVID but we enjoy our roles because we can travel, visit our AirBnB hosts, and visit our customers.
I was traveling through Amsterdam with a friend and we showed up to an AirBnB property. At times traveling abroad I feel awkward because everyone is staring and I don’t want to be that dumb American tourist — it’s a stigma that won’t leave you. But our AirBnB host gave us two bikes, and we hopped on and went exploring. Immediately, the mindset shifted. We felt more like a local, because everyone is on bikes in the city. That’s when I thought this was such an easy amenity to add on. It also allowed us to explore in ways we hadn’t thought of. That’s what we wanted to bring to Mount… It’s not just a scooter at an AirBnB property, it’s a whole experience.
Can you share a major milestone in building Mount thus far?
I think there’s two. One would be after our initial pilot in Sept 2020. We took four scooters to a boutique hotel in Avon, CO and asked them to test for us. We weren’t even in hospitality at that point, it was just to test the platform. And we stumbled upon a unique circumstance in that they were only operating at 20% capacity because of COVID, but every single guest that came — and even the staff — was on the scooters. One person took it 22 miles. It wasn’t what we intended and it was a use case that opened our eyes to the hospitality world.
The second cool milestone is when we weren’t sure if the AirBnB market was right, so we posted in a Facebook group announcing a scooter trial: sign up if interested. We got a crazy number, like 130–140 individuals that were AirBnB hosts, signed up in a week for scooters. I went and talked to all of those people, interviewed them, and got them on a waitlist. We have a pretty substantial waitlist for scooters right now.
Mount is active in Florida, Oklahoma, Denver, Arizona, California, and Canada
Is Mount planning on expanding to hotels?
That was another interesting fact-finding mission. We were talking to some large resorts and told them what we could do with bikes and scooters. They asked if we could put this on all of their moving assets to give them a picture of what’s happening. They were the ones who gave us the idea for maid carts, bellhop carts, golf carts, construction equipment, and anything that moves. We thought: Wow, that’s way bigger than we thought it could be. Resorts are really interested in knowing where everything is, and that’s exactly what we do. You can be alerted when your equipment needs maintenance, how many miles it has, etc. It’s useful for that information to be streamlined, because the majority of staff uses a walkie talkie and a clipboard. It’s very inefficient. When someone leaves the company (which happens a lot in those type of jobs), the info goes with them. There’s no paper or tech trail.
What cities are you hoping to expand to?
We just hit nine cities as of this week. Our target areas are coastal vacation areas like Florida, Hawaii, and California coast. In reality, we’re not geographically constrained because depending on your area there’s definitely an amenity or asset that you’re going to want to track or monetize. That’s why we opened 9 new markets in a few months. All the way from Florida, Oklahoma, Denver, Arizona, California, and Canada.
What is your favorite piece of professional advice that you’ve received?
One of my favorites comes from a founder whose company was acquired. He got his group of friends together from college and he said they didn’t set out to build a multi-billion dollar company. They started it because they all wanted to work together with their friends. They asked themselves, what do we want to get out of this? And answered, we want to be a YC company and we want to have fun while doing it. His piece of advice was, you don’t start wanting to be a billion dollar company. You should have some goals and objectives and what you want to get out of it. Because we had those, it led to something a lot bigger that we didn’t anticipate. After I heard that, I asked our whole team, what do you want to get out of this? Travel was one. Having fun and expanding was one. Not taking it too seriously but also hopefully having a pretty good outcome. It took some stress off the table and refocused everyone about the culture of the company. Phil Knight, the founder of Nike, said that as well: Our goal was never to make money, we just wanted to have fun making track running shoes.
To learn more about Mount, visit: http://www.mountlocks.com/ | https://medium.com/@BlueStartups/madi-rifkin-and-mount-cohort-13-founder-spotlight-f664f72a424c | ['Blue Startups'] | 2021-09-01 22:52:17.943000+00:00 | ['Entrepreneurship', 'Hawaii', 'Tech', 'Female Founders', 'Startup'] |
[2020-March-Newest] Fortinet NSE 6 NSE6_FWB-6.0 Dumps | KillerDumps offers you a Fortinet exam NSE6_FWB-6.0 dumps free demo from the NSE 6 NSE6_FWB-6.0 exam preparation material bundle. In case you are interested to get the full version from the Network Security Specialist NSE6_FWB-6.0 exam dumps bundle, it will be available for immediate download upon successful purchase.
Download Fortinet NSE6_FWB-6.0 Dumps Bundle Pack [PDF + Practice Exam Software]:
https://www.killerdumps.com/fortinet-nse6-fwb-6.0-braindumps
[2020] Top-Notch Fortinet (NSE6_FWB-6.0) Dumps Questions For Your Achievement — KillerDumps
Confused by untrusted Fortinet NSE6_FWB-6.0 dumps that hold so much wrong and misleading information? Don’t worry: KillerDumps will get you the best, verified NSE6_FWB-6.0 exam dumps to help you clear the Network Security Specialist NSE6_FWB-6.0 certification exam from Fortinet in the shortest possible time, error-free, on your first NSE6_FWB-6.0 exam attempt.
Fortinet NSE 6 NSE6_FWB-6.0 Pretest Questions Are Designed by Fortinet Experts:
Our Fortinet NSE 6 — FortiWeb 6.0 preparation material for the NSE6_FWB-6.0 exam from Fortinet was developed and designed based on the feedback of more than 90,000 of the best Fortinet technology experts in their field. KillerDumps has built the Fortinet exam NSE6_FWB-6.0 braindumps to satisfy all your needs to make your NSE6_FWB-6.0 study material easier.
Fortinet Network Security Specialist (NSE6_FWB-6.0) Certification Exam — Is It Good for IT Career:
To sum up, the NSE6_FWB-6.0 dumps will be your best companion during your preparation journey for the NSE6_FWB-6.0 exam from Fortinet. Additionally, the NSE 6 NSE6_FWB-6.0 practice test is considered the best-selling set of NSE6_FWB-6.0 exam questions for the Network Security Specialist NSE6_FWB-6.0 exam from Fortinet.
Get Latest Precious NSE 6 NSE6_FWB-6.0 Practice Questions Formats:
KillerDumps provides you with two easy to use NSE6_FWB-6.0 Dumps Questions formats for the NSE6_FWB-6.0 exam from Fortinet which are:
Fortinet NSE6_FWB-6.0 PDF:
KillerDumps provides you with relevant Fortinet Network Security Specialist (NSE6_FWB-6.0) PDF dumps that are regularly updated to cover all the real scenarios found in the NSE6_FWB-6.0 exam from Fortinet. One of the best features of the NSE 6 NSE6_FWB-6.0 PDF questions is that they are portable and printable. Consequently, you will be able to practice anywhere you go in your free time. Moreover, the (NSE6_FWB-6.0) dumps PDF fits smoothly on your laptop, PC, tablet, mobile and any other smart device you own. Even more, using the NSE6_FWB-6.0 questions PDF does not require any installation; you can use it directly.
Fortinet NSE6_FWB-6.0 Practice Exam Software:
The Fortinet NSE 6 — FortiWeb 6.0 (NSE6_FWB-6.0) practice software contains a great number of NSE6_FWB-6.0 mock exams with hundreds of actual NSE6_FWB-6.0 questions and answers, also available as PDF. Indeed, KillerDumps constantly updates the NSE6_FWB-6.0 practice test software with verified NSE6_FWB-6.0 questions that are similar to the real Fortinet NSE6_FWB-6.0 exam questions. Moreover, the NSE 6 NSE6_FWB-6.0 practice exam software can be customized by time and question types. One more advantage of the NSE6_FWB-6.0 test simulator software is that it keeps all your Network Security Specialist NSE6_FWB-6.0 practice attempts and displays the changes and improvements in your results at the end of each NSE6_FWB-6.0 exam attempt. Typically, the Fortinet (NSE6_FWB-6.0) practice software is supported on all Windows-based platforms.
Guaranteed Success in NSE6_FWB-6.0 Exam with Latest NSE6_FWB-6.0 PDF Dumps {2020}:
Our NSE6_FWB-6.0 PDF for the NSE6_FWB-6.0 exam from Fortinet is trusted and verified, so it will be all you need to clear the Network Security Specialist NSE6_FWB-6.0 certification exam from Fortinet easily, with 100% success guaranteed. Moreover, KillerDumps is aware of the high registration fees for the NSE 6 NSE6_FWB-6.0 exam from Fortinet, so it provides you with the NSE6_FWB-6.0 exam preparation material at an affordable price.
100% Money Back guarantee — In Case of NSE6_FWB-6.0 Exam Failure:
KillerDumps provides you with a 100% money-back guarantee. If for any reason you were not able to install the NSE 6 NSE6_FWB-6.0 exam test simulator, then KillerDumps will refund all your money back. Hurry up and get your copy to guarantee your success in the NSE6_FWB-6.0 exam from Fortinet. (Terms and conditions apply, please check KillerDumps guarantee page for more information.)
Our Fortinet NSE6_FWB-6.0 Braindumps Features: | https://medium.com/@williamsonjohn013/2020-march-newest-fortinet-nse-6-nse6-fwb-6-0-dumps-8223c1eca170 | ['John Williamson'] | 2020-03-11 12:58:57.135000+00:00 | ['Questions', 'Certification', 'Dumps', 'Pdf', 'Exam'] |
Introduction to Data Science | After reading this article, you will be able to
- Explain the steps in data science
- Apply these steps to predict the EPL winner
- Explain the importance of data quality
- Define data collection methods
Data Science Life Cycle
Step 1: Define Problem Statement
Creating a well-defined problem statement is a first and critical step in data science. It is a brief description of the problem that you are going to solve.
But why do we need a well-defined problem statement?
A problem well defined is a problem half-solved. — Charles Kettering
Also, all the effort and work you do after defining the problem statement goes into solving it. The problem statement is shared by your client. Your client can be your boss or a colleague, or it can be your own personal project. They would tell you the problems they are facing. Some examples are shown below.
- I want to increase the revenues
- I want to predict the loan default for my credit department
- I want to recommend the job to my clients
Most of the time, this initial set of problems shared with you is vague and ambiguous. For example, the problem statement “I want to increase the revenue” doesn’t tell you how much to increase the revenue (20%? 30%?), for which products to increase it, or over what time frame. You have to make the problem statement clear, goal-oriented and measurable. This can be achieved by asking the right set of questions.
“Getting the right question is the key to getting the right answer.” – Jeff Bezos
How can you ask better, more useful questions to create a well-defined problem statement? You should ask open-ended rather than closed-ended questions. Open-ended questions help to uncover unknown unknowns: the things which you don’t know you don’t know.
We will work on a problem statement “Which club will win the EPL?” | https://towardsdatascience.com/intro-to-data-science-531079c38b22 | ['Ishan Shah'] | 2020-01-13 09:39:41.921000+00:00 | ['Data Analysis', 'Epl', 'Premier League', 'Data Science', 'Data Visualization'] |
Agreed but with the caveat that they need to (A) proactively recruit BIPOC who are qualified (not… | Agreed but with the caveat that they need to (A) proactively recruit BIPOC who are qualified (not just anyone who happens to check the Black box) and (B) actually intend to hire the best person for the job even if they happen to be Black or otherwise not a “culture fit" (because no qualified person wants to dance like a monkey for the sole purpose of the White Man’s entertainment).
Applying and interviewing for no reason always makes me feel like a minstrel character in Bamboozled, a Spike Lee joint (2000). Like you, Rebecca, I’m over it. I’d rather spend my time working on my memoirs. While I might be more likely to win the lottery than to get any job I’m trained and qualified to do, I think the odds of a bestselling book are at least as good as (or greater than) equitably compensated employment in the next 3-5 years (or ever in my lifetime).
At this point, I’d rather be poor and true to myself than rich and costumed for the colonizers’ pleasure. | https://medium.com/@aprilhamm/agreed-but-with-the-caveat-that-they-need-to-a-proactively-recruit-bipoc-who-are-qualified-not-c5c38462b2da | ['April M. Hamm', 'She They'] | 2020-12-26 20:49:00.759000+00:00 | ['White Privilege', 'Life', 'Bamboozled', 'White Supremacy', 'BlackLivesMatter'] |
How to Start the Keto Diet — and Ways to Stick With It
You’ve decided the keto diet is the plan for you, and you’re eager to get started. But it can be tough to jump straight into a low-carb, high-fat way of eating. There’s no doubt about it — this is a restrictive meal plan. Plus, of course, there’s the fact that the diet of most Americans is high in carbs and processed foods, which are definitely on the keto naughty list. (As well as nixing all refined carbs and junk, you have to avoid starchy veggies, grains, sauces, and juice, and limit fruits.)
So, take time to get prepared before you go full-on keto. “Go through your kitchen and pantry and get rid of foods that you don’t need for keto, or consolidate them if other people in your house will be eating them,” Naomi Whittel, Gainesville, Florida-based author of High Fiber Keto, tells Health.
Next, choose several keto recipes for the week and think about what you can grab quickly if you get hungry between meals. Make a list and go shopping. The more keto foods you have in the house, the easier it will be to stick to the plan. Whittel’s staples include avocados; olives; nuts and seeds (she recommends almonds, macadamia nuts, and pumpkin seeds); coconut and olive oils; eggs; canned salmon; and collagen protein (which you can buy in powdered form and easily mix into a hot or cold drink).
RELATED: Your Ultimate Keto Diet Grocery List
Whittel also advises stocking up on pre-washed, pre-cut veggies like mixed greens, broccoli, and zucchini, all of which are good low-carb, high-fiber options. And if you’re really short on time, you can buy these prepared from the store. Having coconut milk, frozen berries, and spinach on hand means you have everything to make a tasty keto smoothie.
Keto isn’t a quick fix
While you’re preparing your food, prepare your mindset. “It’s hard to make a big change, so really think about what’s going to keep you on track,” Whittel says. “For me, it is really helpful to connect my daily food choices to the bigger ‘why.’ Everyone has their own personal reasons behind their health choices and other individual goals that drive their interest in keto. Get clear about your ‘why’ and then remind yourself of it often.” It might help to think of keto as a lifestyle choice instead of a quick-fix diet, she adds.
Initially, Whittel recommends sticking to keto as close to 100% as possible. “This will support the body’s metabolic transition into ketosis,” she says. (Ketosis is a metabolic state where the body burns stored fat for energy instead of blood sugar.) “Keep in mind that down the road, there will likely be room to enjoy a glass of wine, a keto version of your favorite dessert, or even a higher level of carbohydrates more regularly.” So, by accommodating these “treats,” you can avoid the trap of all-or-nothing thinking, which Whittel says makes it easier to get back into keto if you do stray.
RELATED: 7 High Fiber Keto Foods
Dealing with friends and family when you go keto
Sometimes the toughest part of making a big change to the way you eat is dealing with other people’s reactions. At best, they might question your choice; at worst, they might turn up at your door with a basket laden with refined carbohydrates.
“If someone is trying to ‘sabotage’ your best efforts, practice politely declining with a phrase of your choice,” Whittel says. She suggests saying something like, “I’m satisfied with what I’m eating,” or “I’m figuring out what works best for my body right now,” or “Thanks for your concern, but I’ve got this.”
Another challenge can be if you’re eating keto but the rest of your family isn’t. In this situation, Whittel suggests cooking keto as the main component of a meal, then offering carbohydrate sides for the non-keto eaters. For instance, cook steak and roasted asparagus with butter for everyone, then have some sweet potatoes or rice as a side option.
RELATED: The 7 Dangers of Keto
Keto has certain benefits, but it isn’t for everyone
Burning stored body fat for fuel has its benefits. The keto diet has been shown to reduce seizures in people with epilepsy, and animal studies suggest that it may also have anti-aging, anti-inflammatory, and cancer-fighting benefits. But if you’re planning to go keto to lose weight, you need to know that it poses certain side effects and complications. For one thing, you could regain the weight you lost when you go back on carbohydrates. And if not done right, the diet can boost your risk of heart disease and diabetes. In fact, if you have type 1 or type 2 diabetes, experts say you shouldn’t go on a keto diet without your doctor’s advice and supervision.
Finally, be aware that keto can make you feel a little worse before you feel better. So-called “keto flu” can include both physical and emotional symptoms, including cramps, constipation, nausea, low energy, fatigue, and mood swings.
“Your body is adjusting to new foods, and your metabolism is adjusting to using fat as its primary fuel,” Whittel explains. During this time, it’s really important to stay hydrated, eat enough salt to keep your electrolytes balanced, because your body releases less insulin on keto, causing the kidneys to excrete more sodium, and rest when needed. Whittel also suggests supplements like MCT oil, digestive enzymes, and electrolytes to help your body adjust. | https://medium.com/@dl03k1to/how-to-start-the-keto-diet-and-ways-to-stick-with-it-17c006c0e418 | [] | 2021-12-23 06:53:11.209000+00:00 | ['Weight Loss', 'Keto', 'Business Strategy', 'Internet Marketing', 'Make Money Online'] |
Two thousand Two Zero Party Over oops, Out of time… | Linkedin, I appreciate the suggestion to @ people or whatever to expand my reach, engage in conversation, believe me, and understand the advertising algorithm world of “reach” very well, which is a big part of the problem …📌 But I’m not that thirsty right now- kind of catching my breath after having a front-row seat at the Census Bureau for the past year, and watching the relentless attempt to dismantle the worlds premier statistical agency from within by feckless political appointees- and witnessing the heroic, Yes, valiant effort by the career data scientists, officials, and front line employees, and community partners who stopped it. Data integrity means something to me, integrity means something. It used to in a pre-Facebook world, anyway. So… the very idea that this sad sloppy reckless mess of a company inhibits the same world as the analytics and data professionals who actually give a shit about decency- data & otherwise… it breaks my heart.
Changes will be made, or we will look back at 2020 with fondness because the real fascist shit show waits in the same opaque shadows as their algorithms that have emboldened white supremacy, hate & destroyed democracy while creating an entire segment of the population untethered from the truth. Hashtag, wake the fuck up ALL. Polite water cooler conversation is so pre-pandemic. Let’s get real with this conversation and action to restore civic virtue, which the #facebook advertising business model has eviscerated. | https://medium.com/@billmacconnel-83565/two-thousand-two-zero-party-over-oops-out-of-time-8ff119c3dba6 | ['Bill Macconnel'] | 2020-12-11 14:31:22.887000+00:00 | ['Facebook Marketing', 'Algorithms', 'Democracy', 'Advertising And Marketing', 'Diversity In Tech'] |
What will liberate the next billion? | Swami Vivekananda said, “ Education is the manifestation of perfection already in human beings”.
In the centre of the development narrative for the world, is the issue of poverty and resulting inequality — an issue that goes back aeons.
It’s widely established that mass education is one of the surest ways to pull people out of poverty. So why does poverty, inequality and global tensions stemming from unequal progress still persist after so many decades of global efforts?
After thinking deeply about this for a really long time, I have come to the conclusion that the model of education that we widely envisage and apply has a fundamental flaw. Maybe, after all, it’s not so much a skilling problem as it is a manifestation problem.
Let me explain …
The current education system deeply assumes that people know nothing. So it focuses on grades and certificates, curriculum and tactical technicalities. It weaves not just a primary and higher education system built on these low-ROI beliefs, but also a continuous learning system, now spanning edtechs and adult learning, that constantly reminds people of the next thing they don’t know and monetizes that insecurity. The current process does very little to build strong will, minds that can weather uncertainties, risk-taking, self-confidence, fearlessness, etc.
Quoting Vivekananda again “All the wealth of the world cannot help one little Indian village if the people are not taught to help themselves”
A critical step to survive the future and reboot humanity will be educating our education system to shift its focus from building herds in the name of education to building the mind.
“No more weeping, but stand on your feet and be men. It is a man (woman)-making religion that we want. It is man(woman)-making theories that we want. It is man(woman)-making education all round that we want “ Swami Vivekananda
Your thoughts?
Prayers for the planet.
#edtech #impacttech #vivekananda #futurism | https://medium.com/@subhadeepbhattacharyya/what-will-liberate-the-next-billion-9416006016da | ['Subhadeep Bhattacharyya'] | 2020-12-13 06:44:34.699000+00:00 | ['Sdgs', 'Edtech', 'Change', 'Education', 'Startup'] |
Build an Impressive Github Profile in 3 Steps | Build an Impressive Github Profile in 3 Steps
Impress Recruiters with Your Skills and Cool Stats Graph when Viewing your Github Profile
Motivation
If you are looking for a new opportunity in the data science or programming field, you should probably try to optimize your Github profile.
Don’t expect the recruiters to know your skills by investigating each of your repositories. If you have something to show off, put everything about you on the front page so the recruiters do not need to spend a lot of energy to learn about you.
This includes links to your website and social media accounts, your activity on Github, and what you are interested in.
This article will show 3 simple steps to build an impressive Github profile. After this article, you will learn how to create an impressive Github README like below
Picture from my Github Profile
You found a secret!
To add a README.md to your Github profile, hover your mouse over the plus icon in the top left corner and click “New repository”.
Type your Github username in the “Repository name” box. You will see some text like the example below. Congratulations! You discovered a secret.
Check the box “Add a README file”
then click “Create repository”.
Now you should see a README on top of your profile page. Let’s edit this README to make it look better.
Add Description and Emojis
Click “Edit this file” to edit the README.md file and you will see something like this
Right now, you cannot see the sentences below “Hi there” because they are commented out. To uncomment, delete <!-- and --> . Now you should see the text with emoji in your README
If you are not familiar with the syntax of Github flavored markdown, here is a good resource for that.
Refer to this cheatsheet to add other emojis that you like to your README. Here is the markdown of my README with emojis.
- :zap: I love math, programming, data science, and books
- 🌱 I’m addicted to learning and growing every day
- :earth_africa: I am currently sharing a little bit of my knowledge to the world through my blogs
- 📫 How to find me:
- :bulb: [Medium articles](https://medium.com/@khuyentran1476)
- :pencil2: [Daily Tips](https://mathdatasimplified.com/)
- :office: [LinkedIn](https://www.linkedin.com/in/khuyen-tran-1ab926151/)
- :speaker: [Podcast](https://medium.com/@theartistsofdatascience/why-we-should-be-more-like-winnie-the-pooh-khuyen-tran-on-the-artists-of-data-science-c610c91d4c14)
Here is the output
Add Github Stats
To add Github Stats, copy and paste this into your README
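The snippet is a single line of markdown from the github-readme-stats project. A minimal version looks like this (the alt text and the optional show_icons flag here are just illustrative):
[![Khuyen's GitHub stats](https://github-readme-stats.vercel.app/api?username=khuyentran1401&show_icons=true)](https://github.com/anuraghazra/github-readme-stats)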
Change ?username=khuyentran1401 to ?username=your-username and change Khuyen to your name. You should see something like this
You can also show top languages you use like this
using
[](https://github.com/anuraghazra/github-readme-stats)
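Written out in full, the image inside that link is the top-languages card from the same github-readme-stats project. A sketch, assuming the public endpoint, looks like:
[![Top Langs](https://github-readme-stats.vercel.app/api/top-langs/?username=anuraghazra)](https://github.com/anuraghazra/github-readme-stats)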
Change ?username=anuraghazra to ?username=your-username .
To customize how it looks, find the instructions here.
Show your Articles
If you write articles on Medium and want to show your articles on your README, you can include the links to them. But better yet, include the images of your articles so others are more likely to click on the link.
To add your latest article to your Github README, simply copy and paste this
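One service that produces these article cards is github-readme-medium-recent-article (treat the exact host below as an assumption); it takes an @username plus a 0-based article index at the end, so the line looks roughly like:
[![Latest Medium article](https://github-readme-medium-recent-article.vercel.app/medium/@khuyentran1476/0)](https://medium.com/@khuyentran1476)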
Change @khuyentran1476 with your username. That’s it! Now you should be able to see your latest article like below
To add your second latest article, swap 0 with 1
Do the same with other articles to get the n-th latest article. Here is how my README looks like after adding five of those
Pretty cool, isn’t it? You can find the source code for this here.
Conclusion
Congratulations! You have just learned how to create a cool Github README to show off your skills and stats in 3 easy steps. Start creating your Github README now to impress recruiters and other programmers!
I like to write about basic data science concepts and play with different algorithms and data science tools. You could connect with me on LinkedIn and Twitter.
Star this repo if you want to check out the codes for all of the articles I have written. Follow me on Medium to stay informed with my latest data science articles like these: | https://towardsdatascience.com/build-an-impressive-github-profile-in-3-steps-f1938957d480 | ['Khuyen Tran'] | 2020-12-04 20:30:42.238000+00:00 | ['Code', 'Data Science', 'Github', 'Programming', 'Markdown'] |
From Photographer to Video Game Creator — Day 17 | Spent most of today on a plane traveling from Hawaii to North Carolina, but lucky for me I had plenty of coding to do!
I started another weapon powerup for my Player today. I was pretty excited for this since it was really the first objective that was totally up to us. I decided to add a weapon that would act as a reflector for Enemies and Enemy lasers. The idea is to have the Player launch out a capsule that will fly upwards and spawn a barrier from its location of destruction. This barrier will then hover back and forth on the screen for a set amount of time while deflecting Enemy lasers and destroying Enemies that come in contact with it.
First up was to create a new object with a new script. This script will be for defining the capsule that will be launched from the Player object. This capsule is set to a slower speed and coded to destroy after 1.5 seconds. Here is a look at the beginning stages of the script:
I also made sure that the barrier powerup was able to be collected by the Player. This sets “_canBarrierFire” to true which allows the Player to fire the barrier. I included a quick debug to double check that the scripts were communicating correctly.
At this point I made a simple object image for my barrier just so I could visualize the Player instantiating something other than the laser. It works!
Now it’s time to make it look cool. Using Procreate, I drew up a barrier that would take the place of the previous image. This barrier has some lights and bright bars, and it will also have a thruster animation playing underneath it!
The final task for the day was to create the capsule that would be carrying up the barrier from the Player. I haven’t finished the textures on this object yet, but I’ve got the outline and lights added.
I’m really enjoying creating and adding my own images into Unity. It really opens up the possibilities of what I can create!
Tomorrow is Christmas Eve so I may be MIA for the next couple of days, but I’ll be sure give some updates this weekend!
Cheers!
Myles M. | https://medium.com/@mylesmarion/from-photographer-to-video-game-creator-day-17-d5687c4f7553 | ['Myles Marion'] | 2020-12-23 19:47:05.351000+00:00 | ['Software Development', 'Unity', 'Videogames', 'Coding', 'Games'] |
HTTP 500.19 Error in IIS | We had an issue where we were hitting a 500.19 error in IIS. We didn’t see the details of that error until we went onto the server itself and “browsed” from the IIS console. Clients were getting a simple 500 error, which led us to believe something bad was happening in the code. Event Viewer showed nothing.
The “.19” part was important because Microsoft helpfully provides information specific to that error. As is often the case, Microsoft’s information was not directly relevant to the problem, but long story short, we figured out that we needed to install the .NET Core 2.2 Hosting Bundle onto the server. That solved the issue.
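If you hit something similar, one quick sanity check (assuming the dotnet CLI is installed on the server) is to list the runtimes that are actually present before and after installing the bundle:
dotnet --list-runtimes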
Hopefully this will help someone else in the future.
Happy coding!
</end> | https://medium.com/@pagalvin/http-500-19-error-in-iis-2885113cddef | ['Paul Galvin'] | 2020-09-18 13:43:06.077000+00:00 | ['500', 'Iis', 'Dotnet Core'] |
A Conversation with Butterfly Defect | A Conversation with Butterfly Defect
Meet Andrew Stier, co-founder and director of the improv troupe Butterfly Defect.
Butterfly Defect is a longform comedy improv troupe in Austin, TX with a unique format: they perform an improvised choose-your-own-adventure story, where the audience votes on the decisions the characters make, and the actors make up the (often zany) consequences of the decisions in real-time. The B. Iden Payne Awards Council, which has been recognizing excellence in Austin for 45 years, recently awarded Butterfly Defect the Ethel Hinkley Award for Outstanding New Improvisational Troupe.
Below, Stier shares a bit about how the troupe formed, what makes for a successful improv scene, and how you can improvise with Butterfly Defect!
How did Butterfly Defect get together?
Brandt Taylor, Nicholas Marino, and I had all been part of UT’s short-form improv troupe, Gigglepants. We originally started this group to have an outlet to continue playing around with the friends we had improvised with in college. We’re also all gamers, so the format was largely inspired by choice-based videogames like “Life Is Strange,” which the three of us had played through together when we were roommates.
We recruited fellow former troupe-mates Ricci Valice, Kelsey Matteson, and Brigid Ludwig, who ended up coming up with the name when I pitched the idea to her over beers at a neon house party (we were both wearing glow sticks at the time): Butterfly Defect.
I was taking improv classes at Hideout Theatre at the time, and my classmate Jennifer McMurtry announced that she was producing a show at the Institution Theatre. Right place, right time, she needed acts and we needed a stage — she took a chance on us and suddenly we had booked our first set of shows!
“Afterwards we popped champagne, ate chocolate cake, and celebrated what suddenly felt like a truly real improv troupe.”
I was part of Austin’s hip-hop dance crew EPEK at the time, and a huge group of them surprised me by coming out to support us at our first show. Along with friends, family, and former Gigglepants troupe-mates, they packed the house! I still get anxious giggles when I remember the nervous energy that was coursing through us backstage. But once the show started, with all those friends cheering us on and laughing at our jokes, it ended up feeling impossible to fail, and we had a really great, really FUN show. Afterwards we popped champagne, ate chocolate cake, and celebrated what suddenly felt like a truly real improv troupe. I attribute much of the success of that night to those people that came to support us, and I think it really gave us momentum to keep doing this thing.
After that show run, we started applying to be booked in shows at improv theatres around Austin. We’re lucky to live in a city that has so many. We have since had a lot of guest improvisers perform with us, and have added Nathan Berkowitz and Retha Cioppa as permanent members of the group.
What makes Butterfly Defect different from other improv groups?
We’re the only group I’ve seen do this choose-your-own-adventure format (I’ve since learned of past shows that were similar, but even those were not the exact same as ours). In general, longform narrative improv is a format where you make up an entire play from beginning to end based off of audience suggestions, like in Netflix’s Middleditch and Schwartz. Our twist, though, is that at points throughout the show, the characters will come up to audience and ask them to make a choice on the character’s behalf. We present two options, and the audience votes by applause.
The result is really fun. The audience feels constantly involved, and the choices affect the improv in unexpected ways. We end up elevating our own ideas because we present choice A, the obvious choice, and then we have to come up with a choice B, that usually ends up being whatever dumb idea we can come up with on the spot — which sometimes ends up being way sillier or more creative. The story can go in a lot of interesting directions. The audience is helping us write the story, and whether they realize it or not, they’re pretty good at it! Sometimes it feels like cheating when I don’t know the most interesting decision to make — I just have them make it for me! It ends being a really cool, collaborative experience with the audience.
“It ends being a really cool, collaborative experience with the audience.”
How has the group adapted to performing on the digital stage?
We’ve actually done a surprising amount of virtual improv, and found a surprising amount of joy with it. We’ve mostly just been practicing for fun (we don’t call it rehearsal since there are no lines to memorize) as a way to entertain ourselves and connect with each other during quarantine. But we have performed online a couple times through Hideout Theatre, who has set up a really great infrastructure for online performances streamed through Twitch.
It’s different, and we can’t pretend it’s not different. Personally, we started having the most fun when we let go of trying to make Zoom improv feel seamless and embraced it for what it was — a bunch of goofballs making up scenes with each other over a video call. Improv already requires so much imagination, so it’s not much of a stretch to imagine we’re on a stage together.
There are some fun upsides. Using physical props is usually discouraged on the improv stage, but they work a lot better on Zoom when you have random things lying just off screen — in our first online performance, Nathan used a colorful hacky sack held close to his camera to represent a time portal.
Another advantage is we can invite friends and family who live outside the city to watch our shows.
The coolest thing though, is we’ve been able to improvise with a lot of our friends who now live in different cities and states. During one practice, we had people from five different states, three different time zones, and we even had an old troupe-mate pop in from Australia!
“Personally, we started having the most fun when we let go of trying to make Zoom improv feel seamless and embraced it for what it was — a bunch of goofballs making up scenes with each other over a video call.”
What makes for a successful improvised scene?
Everyone has to be on the same page. The audience, your scene partner, everyone needs to know exactly what’s going on in the scene. That’s why they always say “Yes, and” is the most important rule of improv. It means saying “yes,” the reality you just established with your line of dialogue is true, and then saying “and,” and adding more information that builds on that reality.
The next level to that I think is “deep” yes and, where you don’t just accept the idea your scene partner presented, but you reinforce it, give it more details or even just repeat it. It lets the detail marinate in everyone’s mind so everyone remembers it, and it’s like telling your scene partner “Yes! I like your idea and I’m gonna take it and I’m gonna run with it! We’re on the same page here, let’s do this!”
An important caveat to this is no one can “yes and” any ideas if they stay in your head, so if you have thoughts about where you are or what your relationship to the character is, you cannot assume that other people have those same ideas so you need to make sure you voice them. Again, everyone has to be on the same page.
Can you tell us a bit about your latest award, and what it means to you guys?
It’s really an honor. The B. Iden Payne awards are widely recognized not just in the Austin improv community, but also the entire Austin theatre community as a whole. I attended the award ceremony last year, and I remember thinking how great it would be to receive an award.
We were up against some awesome improv troupes with some members that we really respect, so I had actually messaged the group beforehand telling them it was very unlikely we would win. We were very surprised when we won! We know some of our friends and family really like us, but this gives us a new type of encouragement.
The B. Iden Payne committee put on a very nice online awards announcement ceremony, complete with prerecorded acceptance speeches and an ASL interpreter. They also hand delivered a trophy I have proudly displayed on my dresser.
I’m glad they have a category specifically for new improv troupes, because in the beginning phases you’re still trying to secure a fan base, and no one really knows who you are. This award is a really gratifying recognition, and hopefully serves to get our name out to more people.
What’s next for Butterfly Defect?
Even with the vaccines rolling out, we are not sure when the theatres here will open again. In the meantime, we’ve been discussing the possibility of a socially-distanced live performance on an outdoor stage in a park, with audience members sitting six feet apart from each other. Classic Theatre in San Antonio did a socially-distanced show that was very safe, and I’ve been inspired since. Our troupe members are part of the same quarantine pod, which puts us in the unique position to be able to do something like this without endangering our cast. The other unique advantage is that we are based in Texas, where the warmer weather could permit this. No official plans on this yet, but you can stay tuned on our Instagram @ButterflyDefectImprov for updates.
Whatever happens, I’m hoping Butterfly Defect is able to continue putting on fun shows, giving performance opportunities to new guest improvisers, and to ultimately keep growing, alike in our comedic talent, fan base, and bond as a group.
Is there anything I should have asked, or anything you would like people to know?
We have improvisational guests all the time! If you’re interested in guesting with us, you can message us on Instagram or Facebook! | https://medium.com/players-performers-portrayers/a-conversation-with-butterfly-defect-6d003f8191c3 | ['Julia Stier'] | 2020-12-22 16:50:13.048000+00:00 | ['Creativity', 'Improv', 'Interview', 'Performance', 'Theatre'] |
“Backchannel”: How Conversations Flow and People Bond | “Backchannel”: How Conversations Flow and People Bond
This little-known term will help you understand the patterns of how you connect
Photo by Bewakoof.com Official on Unsplash
I’m going to tell you about an incredibly useful linguistic concept — one of those things where you’re like “oh there’s a word for that, that’s so useful! Now I have a way to talk about it.”
You know how when you say something, sometimes you want the other person to literally hear the content of your message (“pass the salt”) and sometimes it’s a social action that happens to use words — but the words themselves don’t matter (“you’re welcome”). And a whole lot of awkwardness comes about when people can’t gauge whether they’re supposed to listen to the content of the words, or treat it as a social action (“How’s it going?”)
And of course, most things you say operate on both levels. If you message your partner to tell them about a funny thing someone said at work, you’re saying both the funny thing, and “I think about you when you’re not around.”
So backchannel is one of the kinds of speech that’s all action, no content. It’s the mm-hmms, yeahs, right of courses that you say when you’re listening, and it means “I’m still paying attention.”
It’s super important! And different cultures have different ideas about the amount of backchannel that’s appropriate. East Asian people tend to do a lot more of it than Westerners, for example. Too much feels like the listener is bored and trying to hurry you along; too little feels like the listener has drifted off, so cultural mismatches cause problems. There’s also confusion around “Yes” (I heard you, I get what you’re saying) and “Yes” (I agree with you, I’m going to do that).
I often use “that makes sense” as backchannel, and occasionally people respond with annoyance — “I know it makes sense!” like I’ve said “I’m judging you, and you’ve passed” (which is just as insulting as “I’m judging you, and you failed”).
But now I can say “no it’s just backchannel!” and… well, then I have to explain what backchannel is and then I’ve totally derailed the conversation, which is even ruder. I should probably just try to wean myself off “that makes sense.”
And sometimes I go blank because I can’t think of the right backchannel to give. A Facebook Like is kind of like backchannel, and you know, before reacts, when someone posted some bad news, you didn’t know whether you should Like it or not? Does it mean “I Like that” or “I read it and hear you”?
But now that more people know the word, if I can’t think of the right backchannel, I can just say “backchannel” instead of “um, I’m listening, I just don’t really have anything to add.” | https://mckinleyvalentine.medium.com/backchannel-an-incredibly-useful-word-to-know-87b522ff63ec | ['Mckinley Valentine'] | 2020-12-24 01:00:39.009000+00:00 | ['Self Improvement', 'Life Lessons', 'Psychology', 'Life', 'Relationships'] |
The Win that is Beneficial for Abbas but not for Palestinians | The Win that is Beneficial for Abbas but not for Palestinians Krish Mathew ·Nov 25, 2020
Here’s something interesting about what I read today. It’s that Abbas and his advisors have to learn to adjust to the current strategic conditions in the face of new opportunities. Yes, Biden’s victory will restore some much needed equilibrium to the region’s US strategies, but the two-state option is unlikely to become a reality.
As I keep digging into the situation, the slow consumption of Palestinian land will continue for a while, and the possibility of opposing US pressure could be overrated. Looking at it, what the Palestinians need is true solidarity: an aim that remains elusive, and whose absence continues to make a mockery of their leaders. | https://medium.com/@krishmathew910/the-win-that-is-beneficial-for-abbas-but-not-for-palestinians-a5c43fcdc583 | ['Krish Mathew'] | 2020-11-25 13:39:59.868000+00:00 | ['Oppurtunity', 'Condition', 'Abba', 'Joe Biden', 'Palestine'] |
Modifying an existing Sequelize migration | If you’ve just started using sequelize and sequelize CLI in development, you’ve definitely had frequent additions and deletions of columns. And yeah, all those times you directly modified the migration file, dropped all migrations and re-ran them, just so that your changes would reflect.
But I’m glad you’re here because you realize that’s wrong, or if you’re just finding out for the first time, happy for you. In production, you can’t just drop all database tables because you updated a migration. That’ll be one big mess up, even though there are backups.
Anyways in this article, I’ll show you how to update the column, add a new column or columns, and delete an existing column from/in an existing migration.
Here’s something you should know and understand, it’ll help when you have issues with sequelize modifications. But if you just want a quick solution you can skip this and move to the more meaty part.
QueryInterface: A migration is simply a set of instructions using the queryInterface method. It’s a middleman between you and your database: there is you, and there is the database. You create and customize your table yourself; now you want to load that table into the database, so you tell queryInterface to take the table you created and put it into the database, with the createTable command.
Now we don’t want to create a new table, we want to modify the existing table. queryInterface is still your man, but it doesn’t understand all commands, it has its own command it understands, so to communicate with it you have to use its own command, you can tell it to:
addColumn if you want to add a new column or columns. This is a sequelize method queryInterface uses to add a new column to a table. It takes the column and the name of the table from you, goes into the database, searches for the table name, and adds it.
changeColumn if you want to change a column, probably the datatype of a column. This is also a sequelize method queryInterface uses to alter a column. It takes the new adjustments you gave to it and the name of the table, goes into the database, looks for the table, and makes the adjustment.
removeColumn if you want to remove a column, probably if you find it unuseful. This is a sequelize method queryInterface uses to delete a column. It looks for the table and column and performs the action for you; you still have to provide the name of the table and the name of the column.
createTable if you are creating a whole new table entirely. I believe you may already be familiar with this. Without this one, I guess you can’t add, change or remove a column.
In essence, you have to give *queryInterface* something to add, here’s how you can do it. This is where we get down on it, we’ll use a project to test.
So if you don’t have a project set up and you just want to learn this, I made a starter on Quicksi you can easily use; everything is set up for you, so check it out.
If you’re using the starter, kindly follow the How to get started section to get it, then create a .env file and add your database URL. Then create a new Sequelize migration that you can make modifications on.
1. Updating a column
Now let’s update a column in the migration you have or created from the starter.
So let’s say we have a User table with firstname, lastname, and april columns. And we want to update the april datatype from INTEGER to FLOAT, with an allowNull property set to false.
User table before:
user table before
queryInterface is waiting for you to give it the command, including what it should update and in what table it should be done.
To give queryInterface something, you need to create a new migration:
Run this command:
sequelize migration:create --name name_of_your_migration
A new migration has been generated for you. Next, edit the newly created migration.
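Here’s a minimal sketch of what that migration could look like, assuming the table was created as Users (Sequelize pluralizes model names by default); adjust the table name if yours is different.

'use strict';

module.exports = {
  up: (queryInterface, Sequelize) => {
    // Change the "april" column from INTEGER to FLOAT and disallow nulls
    return queryInterface.changeColumn('Users', 'april', {
      type: Sequelize.FLOAT,
      allowNull: false
    });
  },
  down: (queryInterface, Sequelize) => {
    // Roll back to the original INTEGER definition
    return queryInterface.changeColumn('Users', 'april', {
      type: Sequelize.INTEGER
    });
  }
};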
Then run the migration:
sequelize db:migrate
And that’s it! Notice we passed in the name of the table and the column in that table we want to update; queryInterface takes this and does its job on the database.
2. Deleting a column
Let’s delete the column you updated on the User table above.
You need to create another migration file.
sequelize migration:create --name name_of_your_migration
Now edit the migration file and give it to queryInterface with the removeColumn method. Update with:
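Something along these lines should do it, again assuming the table is named Users and we are dropping the april column from the example above.

'use strict';

module.exports = {
  up: (queryInterface, Sequelize) => {
    // Drop the "april" column from the Users table
    return queryInterface.removeColumn('Users', 'april');
  },
  down: (queryInterface, Sequelize) => {
    // Recreate the column so the migration can be rolled back
    return queryInterface.addColumn('Users', 'april', {
      type: Sequelize.FLOAT,
      allowNull: false
    });
  }
};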
Run migration:
sequelize db:migrate
3. Adding a new column
Now let’s add a new column to the User table. Let’s say we want to add another column called `June`. Create another migration file.
sequelize migration:create --name name_of_your_migration
Now edit the migration file and give it to queryInterface with the addColumn method.
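A rough sketch of that migration, assuming an INTEGER column named june on the Users table (pick whatever datatype your use case actually needs).

'use strict';

module.exports = {
  up: (queryInterface, Sequelize) => {
    // Add a new "june" column to the Users table
    return queryInterface.addColumn('Users', 'june', {
      type: Sequelize.INTEGER,
      allowNull: true
    });
  },
  down: (queryInterface, Sequelize) => {
    // Remove the column again on rollback
    return queryInterface.removeColumn('Users', 'june');
  }
};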
Now run:
sequelize db:migrate
Don’t forget to update your models with your new changes.
I do hope this explains sequelize modification methods clearer.
Follow me on twitter: @AnayoOleru
Have a great day and stay safe. | https://medium.com/@anayooleru/modifying-an-existing-sequelize-migration-7ec206ce2b92 | ['Anayo Oleru'] | 2020-05-09 13:03:25.932000+00:00 | ['Modifications', 'Postgresql', 'Migrations', 'Quicksi', 'Sequelize'] |
Making an animated donut chart with d3.js | Making a donut chart with d3.js is not as difficult as it may seem. We’ll start by making a simple donut chart, then add buttons to switch between data sets with a smooth, animated transition. See an example of my donut chart in action here.
First things first, this tutorial uses d3.js version 4.6.0. I’m using the CDN found on Cloudflare.
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/4.6.0/d3.min.js"></script>
I put this script tag at the end of my html before the script tags for my graphs.
Add a div with the ID #donut to your html where you’d like the donut chart to go.
<div id="donut"></div>
After adding those to the html file, create a new javascript file, where your code for your donut chart will be (don’t forget to link to your new js file in the html, after the d3.js CDN).
Getting started with the javascript
Let’s put some data in, even if it’s just dummy data, so you have something to work with. The data I’m using for this donut chart is about ice cream flavor preferences and is provided by YouGov. I’m putting the data directly into my javascript file.
My data:
//both female and male data
var totals = [{
title: "Soft-serve",
value: 286,
all: 1098
},
{
title: "Scooped",
value: 472,
all: 1098
},
{
title: "No Preference",
value: 318,
all: 1098
},
{
title: "Not Sure",
value: 22,
all: 1098
}
];

//female
var femaleData = [{
title: "Soft-serve",
value: 25,
all: 100
},
{
title: "Scooped",
value: 44,
all: 100
},
{
title: "No Preference",
value: 28,
all: 100
},
{
title: "Not Sure",
value: 3,
all: 100
}
];

//male
var maleData = [{
title: "Soft-serve",
value: 27,
all: 100
},
{
title: "Scooped",
value: 42,
all: 100
},
{
title: "No Preference",
value: 30,
all: 100
},
{
title: "Not Sure",
value: 2,
all: 100
}
];
Next, let’s set up some variables we’ll use for sizing our donut chart. Feel free to adjust these to your needs.
var width = 360;
var height = 360;
var radius = Math.min(width, height) / 2;
var donutWidth = 75; //This controls the thickness of the ring (and therefore the size of the hole in the middle)
We also need to set up a color variable. If you aren’t sure what colors you’d like, you can use a d3 scheme. You can also customize your colors. I chose really bright colors for my chart, because ice cream = summer and summer = bright colors. Replace my colors with ones of your choosing.
//Only choose one! This one for a d3 color scheme:
var color = d3.scaleOrdinal(d3.schemeCategory20c);

//Or this one for a customized color scheme:
var color = d3.scaleOrdinal()
.range(["#5A39AC", "#DD98D6", "#E7C820", "#08B2B2"]);
Onto making the chart! If you’ve made a pie chart with d3 before, this will look very familiar. We are basically doing the same thing here: creating a pie chart, but making use of innerRadius.
var svg = d3.select('#donut')
.append('svg')
.attr('width', width)
.attr('height', height)
.append('g')
.attr('transform', 'translate(' + (width / 2) + ',' + (height / 2) + ')');

var arc = d3.arc()
.innerRadius(radius - donutWidth)
.outerRadius(radius);

var pie = d3.pie()
.value(function (d) {
return d.value;
})
.sort(null);

var path = svg.selectAll('path')
.data(pie(totals))
.enter()
.append('path')
.attr('d', arc)
.attr('fill', function (d, i) {
return color(d.data.title);
})
.attr('transform', 'translate(0, 0)')
Optional: adding a legend. If you don’t like this style or placement, there are many other options. I wanted the legend to be in the middle of the donut, and to have circular keys.
var legendRectSize = 13;
var legendSpacing = 7;

var legend = svg.selectAll('.legend') //the legend and placement
.data(color.domain())
.enter()
.append('g')
.attr('class', 'circle-legend')
.attr('transform', function (d, i) {
var height = legendRectSize + legendSpacing;
var offset = height * color.domain().length / 2;
var horz = -2 * legendRectSize - 13;
var vert = i * height - offset;
return 'translate(' + horz + ',' + vert + ')';
});

legend.append('circle') //keys
.style('fill', color)
.style('stroke', color)
.attr('cx', 0)
.attr('cy', 0)
.attr('r', '.5rem');

legend.append('text') //labels
.attr('x', legendRectSize + legendSpacing)
.attr('y', legendRectSize - legendSpacing)
.text(function (d) {
return d;
});
Transitioning between data sets
Let’s add buttons to the html first. When these buttons are clicked, the data will move between data sets. I put the buttons below the #donut div, not inside it. Add ID’s here that you can reference later in the javascript.
<div>
<button id="everyone">Everyone surveyed</button>
<button id="women">Only women surveyed</button>
<button id="men">Only men surveyed</button>
</div>
Back to the javascript. When these buttons are clicked, we’re going to call a function that changes the dataset (which I’ve named change() ) and redraws the graph. Before we get to the change function, let’s write the onclick functions:
d3.select("button#everyone")
.on("click", function () {
change(totals);
})
d3.select("button#women")
.on("click", function () {
change(femaleData);
})
d3.select("button#men")
.on("click", function () {
change(maleData)
})
The change function redraws the graph, so many parts of this function look familiar. We’re just feeding in new data.
function change(data) {
var pie = d3.pie()
.value(function (d) {
return d.value;
}).sort(null)(data);

var width = 360;
var height = 360;
var radius = Math.min(width, height) / 2;
var donutWidth = 75;

path = d3.select("#donut")
.selectAll("path")
.data(pie); // Compute the new angles
var arc = d3.arc()
.innerRadius(radius - donutWidth)
.outerRadius(radius);

path.transition().duration(500).attr("d", arc); // redrawing the path with a smooth transition
}
The key part of this function is the last line. That’s where the transition happens! You can play around with it to get your desired effect.
Another key part is the .sort(null)(data). If you don’t include the .sort(null), it redraws the graph from lowest to highest segments. I found it confusing when my segments were not consistently in the same order, but I did find watching them reorder more fun. ¯\_(ツ)_/¯
Note: if you’d like to experiment with this, only take out the .sort(null) . Leave (data) where it is.
You did it! You made a donut chart that transitions between data sets!
If you have any issues and want to see my full code, check it out on github. | https://medium.com/@kj_schmidt/making-an-animated-donut-chart-with-d3-js-17751fde4679 | ['Kj Schmidt'] | 2019-02-05 23:52:49.033000+00:00 | ['Data Visualization', 'JavaScript', 'Data', 'Donut Chart', 'D3js'] |
Fixated on Taking Selfies? You Might Have these Psychological Issues | Fixated on Taking Selfies? You Might Have these Psychological Issues
#2 needs to be fixed right now
Photo by Cristina Zaragoza on Unsplash
Is that the picture of a glass of milk? No, that’s a glass of black tea taken with my uber-cool new selfie camera! Apparently, the camera doesn’t like the ‘dark’ tone.
I know that joke is a bit of a stretch, but selfie cameras and the Artificial Intelligence powering them, have come close to creating a near-perfect version of anything it captures. They remove the spots, patch the scars, pull the beauty filters, and do a million things you can’t imagine.
Take a stroll down the mobile shop, see how the industry has raced to near saturation in selfie technology. The punch hole camera, the twin optics, the lens that magically appears from nowhere, all sorts of innovation within a tiny space at the top of your 6-inch device. We happily pay a premium for them. And we revel in posting selfies.
But does taking selfies mean you have some psychological condition?
After all, it’s a harmless means of fun that doesn’t poke into someone’s territory. Your face. Your camera. Your rules.
But let me tell you that taking excessive selfies can be an indication of an underlying psychological problem. But here is the million-dollar question: What’s ‘excessive’? What is the limit one has to cross to be bothered about it?
How much is too much?
The American Psychiatric Association (APA), in a study, confirms three levels of disorders linked to an excessive affinity for selfies.
The Borderline: Taking (at least) three selfies a day, but not posting them on social media.
The Acute: Taking (at least) three selfies a day and posting them all on social media.
Chronic: An uncontrollable urge to take selfies all day and post them (at least) six times a day.
The author labels people with such an excessive affinity for selfies as suffering from SELFITIS.
So based on the above measure, are you a selfitis? Even if you are not one, I will encourage you to read further. I will tell you four reasons why people show a craving for taking selfies. No offence.
#1 You Could be a narcissist!
“The narcissist enjoys being looked at and not looking back.” — Mason Cooley
A narcissist is a person who pursues gratification from vanity or egotistic admiration of one’s perfect self-image.
From blowing his own trumpet to an outright exhibition of arrogance, a narcissist would seek to build his image, and no wonder selfies occur to him as a convenient tool.
To maintain his meticulous and perfect image, a narcissist has to post flawless pictures of himself. While letting someone else handle the camera can go either way, the ability to control his picture, with the perfect angle and the right looks, gives him the control he needs to get the image exactly as he wants it.
Besides, narcissists are also people who want to be considered special. A selfie lets a narcissist be the lone subject of a photograph and feeds his craving to be the star. It allows him to set the exact public image he yearns for.
A study published in The Open Psychology Journal, done by researchers from Swansea University and Milan University, confirms the link between selfie craving and narcissism. Out of the 74 test subjects in the study, those who used social media through excessive visual posting showed a 25% increase in narcissistic traits.
#2 Do You have problems presenting your body?
Low self esteem involves imagining the worst that other people can think about you. — Roger Ebert
Many people are not impressed by the way they look, and much like a narcissist, they want to be in control of their images, albeit for a different reason.
From being fat to having a slightly long nose, there is a myriad of details about themselves they would like to either hide or project. In my part of the world, fair skin is another illusion that’s coveted. And hence the cams that turn black coffee to milk!
Selfie looms in the air as a rescue for all such people nursing a low self-image, for you can have the ‘perfect’ shot to your design.
They adjust the angle, tweak the settings, put the filters, and use every big and small feature to ensure that what they don’t want is not captured and what they want is standing out and staring at the beholder.
In a study published on Sciencedirect.com, conducted with the participation of 259 young women, it was found that selfie activities on social networking sites are linked with body-related and eating concerns.
However, such negative impressions of oneself can work the other way round too.
For instance, if you have abysmal self-esteem, so much that you think your ‘poor looks’ cannot be fixed with any advanced effect of your smartphone, you might entirely withdraw from the act of taking selfies.
#3 How is your social life going?
A healthy social life is found only, when in the mirror of each soul the whole community finds its reflection, and when in the whole community the virtue of each one is living. — Rudolf Steiner
If you take a lot of selfies and are alone in most of them, you better ask this question yourself.
Ideally, a picture should be taken to cherish a special moment. But if most of your selfies are not encompassing such moments and instead show you in different backgrounds, it could be a strong indication that you are not entirely content with your social life. Perhaps you yearn to mix more but just cannot.
Peerayuth Charoensukmongkol, who researched the connection between personal characteristics and selfie addiction, says that selfies are meant to improve self-disclosure and social communication on social media sites, and therefore, essential for those who experience loneliness.
Posting pictures and getting feedback from social media websites can enhance their social communication. However, the approach can backfire as relationships erected through social media communications can often be shallow. This could further worsen your loneliness.
Your loneliness can have several reasons behind it. It is possible that you recently broke up with your partner or that you have social anxiety issues. It makes sense for you to build a healthy social life or address the core issue, rather than seeking temporary relief in selfies.
#4 Look at me!
Seek respect, not attention. It lasts longer.
Some people can’t get enough attention and approval. Taking your pictures and engaging them on social media is one of the easiest methods to draw eyeballs towards you. Besides, with all those filters and smiley faces, you can build a ‘my-life-is-cooler-than-yours’ aura around you.
For such people, their mind perceives the ‘likes’ and ‘reactions’ they receive as a token of acceptance. Needless to say, if your self worth is invested in someone else’s opinion about you, you will end up living a life to impress. You will seldom listen to your inner calls.
Besides, such attention-seeking behavior, in the long run, may adversely affect your self-confidence.
For instance, if one of your pictures fails to draw the same successful results as your previous one did, suddenly, you may start doubting yourself. “Have people stopped loving me” “Don’t I look good in that photo” a million questions erupt.
In an article published in psychologytoday.com, the author, Martin Graff Ph.D., says that attention-seeking may be one of the main reasons people take selfies and use social media. He adds that people would do so to feel more popular.
Letting others’ opinion dictate your life is not the way to live. If you are posting more selfies to gain attention, you need to rethink your priorities.
Final thoughts
If you go through the image gallery of your smartphone, rest assured, you are going to find quite a few selfies. But that wouldn’t make you a selfitist or an addict. You have already read the parameters of becoming an addict. If you don’t fit that profile, you have nothing to worry about.
However, if you are overly drawn to your front camera, with an irresistible temptation to post them on social media, there are things to ponder. You have to start introspecting now! | https://medium.com/indian-thoughts/fixated-on-taking-selfies-you-might-have-psychological-issues-a751927b4ed3 | ['Aravind Balakrishnan'] | 2020-12-11 14:59:27.530000+00:00 | ['Technology', 'Mental Health', 'Psychology', 'Culture', 'Social Media'] |
Running better Marketing Campaigns w/ Advanced Customer Segmentations | Marketing does not work in silos.
When the marketing team’s work begins, it evolves together with the product, customer support, sales, and operations teams and becomes the core of the business.
The reason marketing is still vital is that it impacts every stage of the funnel: increasing sales, attracting new customers, generating potential customers, raising brand awareness and improving overall customer satisfaction.
Marketing campaigns always start from a business goal: increase brand awareness, raise new leads, more downloads for an app or reach out to more audience in order to boost your e-commerce sales.
How do you currently achieve these goals?
In the following steps, I’m going to explain a process for engaging more customers, and engaging them better, in order to uplift your e-commerce revenue.
1. Starting export your Google Analytics Data
Starting from the data we already have available is very important if we want to reach the audience most in line with our business. In this guide I’m going to work with the data generated through Google Analytics tracking.
Once you read this guide you can collect all the tracked data needed to create an RFM analysis and understand how your customer base behaves under the hood. Thanks to this algorithm you can create a simple customer segmentation based on purchasing behaviour, using quantile calculations. That article allows you to easily generate the clusterization thanks to the sheet attached at the bottom of the page, where you’ll find all the calculations ready to use.
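If you prefer code over spreadsheet formulas, here is a rough JavaScript sketch of the idea behind quantile-based RFM scoring; the orders array and its customer, date and revenue fields are placeholders for whatever your export actually contains, not the exact columns of the sheet.

// Toy RFM scoring: "orders" is an array of objects like { customer, date, revenue }
function rfmScores(orders, today = new Date()) {
  const byCustomer = {};
  for (const o of orders) {
    const c = byCustomer[o.customer] || (byCustomer[o.customer] = { lastOrder: 0, frequency: 0, monetary: 0 });
    c.lastOrder = Math.max(c.lastOrder, new Date(o.date).getTime());
    c.frequency += 1;
    c.monetary += o.revenue;
  }

  const rows = Object.entries(byCustomer).map(([customer, c]) => ({
    customer,
    recency: Math.round((today.getTime() - c.lastOrder) / 86400000), // days since the last order
    frequency: c.frequency,
    monetary: c.monetary
  }));

  // Quartile-based score from 1 to 4; lower recency is better, so its score is inverted
  const score = (values, v, invert) => {
    const sorted = [...values].sort((a, b) => a - b);
    const rank = sorted.findIndex(x => v <= x) / sorted.length;
    const q = Math.min(4, Math.floor(rank * 4) + 1);
    return invert ? 5 - q : q;
  };

  return rows.map(r => ({
    ...r,
    R: score(rows.map(x => x.recency), r.recency, true),
    F: score(rows.map(x => x.frequency), r.frequency, false),
    M: score(rows.map(x => x.monetary), r.monetary, false)
  }));
}

Segment labels such as Champions or Loyals are then derived from combinations of the R, F and M scores.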
You can enrich your data by creating a unique UserID for each customer session and associating the customer’s email (ClientID) when provided, in order to extract all the transaction and customer data needed to create the customer base segments and map the customer journey.
Pay attention that your privacy policy is compliant with the current international GDPR laws.
For instance, the data below is generated when you create these custom dimensions (UserID and ClientID) and associate them with a custom goal.
By exporting data to BigQuery or simply to your Google Sheet, it’s possible to integrate it with other Google Analytics insights, using these custom fields as the primary key.
The link below is used to query data on Google Analytics, using Google Tools, in order to evaluate which combination of dimensions and metrics could be more interesting.
Proceeding to the next step, which concerns the RFM analysis, it’s necessary to export all the fields required by the algorithm:
Google Tools — Transaction info
https://ga-dev-tools.appspot.com/query-explorer/?start-date=2020-09-01&end-date=yesterday&metrics=ga%3Ausers%2Cga%3Atransactions%2Cga%3AtransactionRevenue&dimensions=ga%3AtransactionId%2Cga%3Adate&sort=ga%3Adate
If you correctly created the ClientID dimension, you can also export the customer email column, getting all the information in one go.
Enhance data in Google Analytics
Once you have exported these data from your CRM, e-commerce platform or elsewhere, you can import the Transaction ID together with the purchased products and your customers’ demographic information, in order to enhance and enrich your customer profiles and order insights.
Follow this link to the official Google guide on this topic to proceed with importing this information.
1.1 Exporting One Time your E-Commerce Data
You can export your e-commerce data using SQL queries or an ETL tool, cleaning and normalizing all values and putting them inside a Google Sheet.
If you have a Prestashop store, you can use the built-in SQL query tool on the order tables.
If you have a Shopify or Magento 2 store, you can use their native APIs, connecting them to Integromat to export all order data.
1.2 Exporting Google Analytics data in Big Query with GA4
Upgrade to the new GA4 in order to integrate and export your data to BigQuery easily, so you can manipulate and normalize it as if you were using the 360 version.
This way is a little bit more complicated, but you can dig deep into your analytics data and extract whatever you need like a pro.
For instance, for each transaction you can understand which products are purchased most and which products are most often purchased together, either within a specific cluster or for a single customer.
2. Creating Customer Segmentation using RFM
By reporting your data in a Google Sheet, you should be able to easily obtain a table similar to the image below, where you can see who your best customers are, which ones you should pay more attention to, and the rest.
Yes, I know, it’s not an easy task to read customer insights this way… which is why in the next point I’m going to show you how to represent them in a Data Studio dashboard.
3. Data Studio Dashboard
Thanks to this product it will be possible to read all the insights that we have previously exported and apply filters to explore the segments in depth.
Open Data Studio and click on:
“New / Report” and from the top menu select “Add Data”, which will open a modal with the possibility of integrating many other sources.
Select Google Sheet and proceed by selecting the sheet where you worked on the step before and the tab where your RFM is.
To make your job easier, below the link where you will find a ready-to-use dashboard for Google Studio.
Thanks to the data filter properties present in almost all available charts, it’s possible to inspect individual customers or clusters to understand useful and important information about:
AOV for each cluster and customer
LTV for each cluster and customer
How the customer base is made up
Which are the main clusters where I’m losing money
If your data includes information about the products bought in each transaction, you can extract some more useful insights for your campaigns, understanding customer purchasing behaviour in order to create ad hoc carousels on Facebook, for instance (see the sketch after this list). Some of the information you can extract:
Most purchased products from each segment
Most common products purchased together with other products
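As a rough illustration of the second point, a simple co-occurrence count like the toy JavaScript sketch below is enough to see which product pairs show up together most often; the transaction shape used here ({ transactionId, products: [...] }) is just an assumption about how your export might look.

// Count how often each pair of products appears in the same transaction
function productPairs(transactions) {
  const counts = {};
  for (const t of transactions) {
    const products = [...new Set(t.products)].sort();
    for (let i = 0; i < products.length; i++) {
      for (let j = i + 1; j < products.length; j++) {
        const key = products[i] + ' + ' + products[j];
        counts[key] = (counts[key] || 0) + 1;
      }
    }
  }
  // Most frequently co-purchased pairs first
  return Object.entries(counts).sort((a, b) => b[1] - a[1]);
}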
Each segment can be exported for use in marketing campaigns or in marketing automation flows, using the filter feature as shown below in the next section.
4. Using data in Marketing Ads Campaigns
I will only address two channels, which I think are the ones most used by professionals and business owners.
4.1 Facebook Ads
By exporting the Champions or Loyals segment, you can create a lookalike (LAL) audience to acquire new audiences that share these characteristics and could potentially be interested in your merchandise.
By selecting the clusters that interest us most, we can subsequently export them:
Once the data has been exported to CSV, you must clean it and prepare it for importing into Facebook, merging it with their default CSV template; that template includes several fields unused in this example, which should be deleted, keeping only those present in our data.
4.2 Google Ads
Similarly, you can import a customer list, as shown below, sharing this asset within the Google Client Center (if available) or simply within your Google Ads account.
Here, we have two ways:
Create a LAL audience in Google Ads;
create a custom audience inside Google Analytics.
The first option is simpler and more accurate, since the second depends mainly on the amount of information collected and the sampling set: if you left the default one you will have a low sampling.
You can change it using Tag Manager, inside the Google Analytics tag, modified as follows by adding siteSpeedSampleRate with a value of 100.
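For reference, outside Tag Manager the equivalent analytics.js configuration would look something like the snippet below (the tracking ID is a placeholder and the standard analytics.js loader is assumed to already be on the page); in Tag Manager you achieve the same thing by adding siteSpeedSampleRate as a field on the Google Analytics tag.

// Raise the site speed sample rate from the default 1% to 100%
ga('create', 'UA-XXXXXXXX-1', {
  siteSpeedSampleRate: 100
});
ga('send', 'pageview');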
4.2.1 Create a LAL audience in Google Ads
Logging into your Ads account, clicking on:
Tools and settings from the top menu, then in “Shared Library” click on “Manage Audience segments”;
You will access the audience segments section, with the + button on the left, select “Customer list”;
Below you can see the screen you should view for loading the exported CSV from Data Studio.
4.2.2 Create a custom audience inside Google Analytics
Log in to your Google Analytics account, enter the "Audience" section and, at the top of this view, access the section dedicated to audiences.
There, you can create a new cluster based on demographic data, selecting specific information about your customer base or data acquired in Google Analytics.
PRO TIP — You could export all demographic data from Google Analytics at point 2 of this article, adding this information and representing it inside Data Studio when you create the RFM analysis.
Based on your customers’ behaviour and purchasing data, you must make your selection in the "Enhanced E-Commerce" tab as follows:
NOTE — The subset is sampled; pay attention to the total number of users in this bucket, as depending on the filters you selected it may not be large enough, or suitable enough, to create a similar audience for Google Ads campaigns.
Once you’ve saved the segment, you’ll need to make it available by clicking on this:
A page will open where you can enter the days for the search and the duration of the membership: these parameters indicate how relevant the data sample must be — it’s your choice.
Once you have selected these settings, choose a name for this segment and select the Ads account to share it with.
By following the first two steps of the previous point, you will see in the table list the newly created audience segment that Google is generating and that will be available shortly for your marketing campaigns with increased ROAS :) | https://medium.datadriveninvestor.com/running-better-marketing-campaigns-w-advanced-customer-segmentations-5c55a9f90ba | ['Alex Genovese'] | 2021-01-27 20:38:29.274000+00:00 | ['Digital Marketing', 'Growth', 'Marketing', 'Ecommerce', 'Growth Hacking'] |
The future of Black Girl Fest | A Letter from Nicole Crentsil, co-founder & CEO
2020 was a weird year for all of us. I spent a large part of it feeling super anxious but trying to remain positive and hopeful.
I remember feeling despondent and generally low amidst the global lockdown, BLM and #ENDSARS protests. However, one thing I appreciated was the space to think about Black Girl Fest. Although we had announced a break at the start of the year (which in hindsight was so lucky) I still had no idea if we could ever run large scale events anytime soon. I felt a sense of urgency to build a new covid-proof business model that would allow us to survive, whatever the future holds.
The Black Girl Fest story
Black Girl Fest was created out of a desire to see more Black women in arts programming. I couldn’t have imagined that this idea, born from a spreadsheet, would become an important and fruitful space for our community.
We really created something out of nothing and scrambled the whole thing together from a tiny studio flat in North London. From our crowdfunding campaign to filling the Business Design Centre with 3000 people, it’s all been such an incredible journey.
I’ll never forget the queue to get into the first festival. We packed out a small venue in Shoreditch that was quickly overcapacity. We had no idea that the thousands of people who booked tickets would actually show up — and boy did you show up.
Black Girl Fest 2017, Photography by Bernice Mulenga
A memory I’ll hold dear is seeing a little Black girl buying a Black doll that looked exactly like her and watching her reaction when she saw herself reflected. I even remember crying when a group of aunties held our hands and thanked us for what we had created. That moment truly felt like our ancestors were speaking to us.
Although we spent the next few months working full time and fitting Black Girl Fest in and around lunch breaks and weekends, you honestly couldn’t have noticed because we did the absolute most. From our TimeOut takeover issue to working with Penguin to create the Becoming Festival, anyone looking in would not have believed this output really came from a side hustle.
But we were determined to take up space, celebrate ourselves and ensure that everyone would be paid for their time. We were bold, confident, and brimming with creative ideas. We had all the spark and enthusiasm but needed to figure out how to create a sustainable business model that didn’t rely on crowdfunding campaigns each year.
BGF Becoming Festival 2019, Photograph by Krystal Neuvill
We didn’t prioritise self-care in those days and I remember moments where I would cry in the work toilets, exhausted and stressed out. I don’t think people really knew what it took to run such a large scale event. The meticulous planning and detailing but also the creativity and innovation to create something new and exciting each year. Going freelance was a big relief, it meant that I could get better at the juggling act we had created for ourselves.
Black Girl Festival 2018, Photograph by Krystal Neuvill
But we were still working on a project by project basis and the end goal was to get everything paid for. For us, it wasn’t enough to do unpaid projects because we had to support the communities we are committed to serving and that included paying everyone, ourselves included.
In 2018, I quit my job at Fearless Futures and my old boss was kind enough to introduce me to an investor. I didn’t know much about the VC world and wasn’t sure if this was the right route for Black Girl Fest, but we needed money and we needed to be self-sufficient. I remember having a phone call with a founder friend three days before Christmas in 2018. They strongly advised that we build a commercial arm and reinvest the profits back into the business.
So I spent my Christmas eve building a growth strategy to do exactly that. I wanted to ensure we were intentional whilst navigating this new space. Our goal was to create meaningful partnerships that would spark new conversations and create new opportunities for our community. This led to working with clients such as Lush, Give Blood NHS and Today at Apple.
And now, we’re here.
“We Move” are the only words that truly define my mind-set for 2021. Who would have thought I’d be relaunching Black Girl Fest sat on my balcony in Ghana — truly taking working from home to a whole new level. After three years, I’m so excited to announce that Black Girl Fest 2.0 is finally here — a new and improved platform dedicated to Black women, girls and non-binary people. From what started as a side project, today we are relaunching as a fully-fledged creative company led by a fantastic new team of three with actual salaries.
We rest and now we move
As the world continues to be halted by the uncertainty of the pandemic and we start to hear new conversations about vaccines and global recessions, all I could think about was the future of Black Girl Fest. In a world that still relied on the strength and tenacity of Black women, what kind of platform could Black Girl Fest be? What purpose could it serve in this new normal and how would I pivot the business to survive whatever the future had in store?
Over the last few months, I’ve been in research mode: reading, listening to and taking part in as many conversations about and with our community. In that time, I clocked that everything Black Girl Fest had created so far was really the perfect mix of having a great product and service for an underserved market. We had no infrastructure and just worked on whatever came our way. We needed to be stabilised by a strong vision and focused on a clear mission that allowed us to be covid-proof for the future.
Black Girl Fest is a success story and we need more stories.
So this letter is sort of an accountability post, mainly to document this process but also to be more transparent in the hopes that this could guide and inspire the next entrepreneur.
2020 showed me that Black Girl Fest could not simply exist as a yearly cultural event. We needed to be more — we needed to do more. Innovation looks like putting everything on the table and asking “What aren’t we doing? Why aren’t we doing it? What do we need to do? How do we do it?” And that’s exactly what I did to roadmap a future for the business.
What aren’t we doing? Right now, we’re not running physical events.
Why aren’t we doing it? The pandemic makes it hard for us to safely connect and engage with each other.
What do we need to do? Establish Black Girl Fest as a global platform for all Black women, girls and non-binary people
How do we do it? Explore new ways to produce engaging content, build an arm to the business that focuses on digital events and support a community to thrive in this space.
Why I’m pivoting Black Girl Fest
Change is scary but change is necessary.
In “The Hard Things About Hard Things” Ben Horowitz says “Innovation requires a combination of knowledge, skill and courage” so I’m taking a leap of faith and pivoting the organisation. Change is scary but change is necessary. I can only hope that our community continues to support Black Girl Fest as we embark on this new and exciting journey. My love for my community runs deep, the work I do is bigger than me, so right now is the time to do it.
This past year, I became an angel investor to make investments in businesses owned by Black women. I’ve been working with institutions to create more opportunities for young Black girls and been consulting with brands and businesses to ensure our community is well represented across the board. All of this was intentional to explore what kind of platform I could build for Black Girl Fest. What new challenges did our community face and how could we create a space to solve this?
The Black community, and especially Black women, are failed across economic, social and educational spaces. Recent research shows that 0.02% of total venture capital invested over the last 10 years went to Black women entrepreneurs. Also, Black women make up just 0.1% of active professors in the UK compared to 68% who are white men.
What we do know is that Black women drive culture. We are at the centre of every major cultural moment but never directly benefiting from it.
This is why we see Black Girl Fest as an ecosystem because we cater to a community that is often ignored. According to Google in a recent survey for #YoutubeBlack, 75% of Black millennials are more likely to consider a brand that positively reflects black culture.
Black Girl Fest is that brand — we just needed to do the work to reflect this.
For me, seeing my community thriving is my jam. I’ve spent the last 6 years building a career that creates spaces for Black women — there isn’t an opportunity that I’m not sharing, creating or supporting. From Unmasked Women in 2016 to Black Girl Fest and now BIG SIS, I’ve been on a mission to see Black women thriving. I’m passionate about the work I do and feel energised to do more, so it makes sense to put this energy back into Black Girl Fest.
Unmasked Women exhibition 2016, Photograph by Holly Cato
Onwards and upwards
Through all the challenges, Black Girl Fest saw some success. Due to our commitment to the Academy programme, we were awarded a large grant which allowed us to extend the content, offer kick-starter grants to all participants on the programme and develop new ideas for the future. That burst of good news gave me the spark to realise that even within the uncertainty of a global lockdown, Black Girl Fest was still very much an important vessel for the community.
So with that said, I introduce Black Girl Fest 2.0!
Our new vision is a world where our community is thriving. We’re on a mission to create access, learning and development for Black women, girls and non-binary people. We aim to inspire and empower our community, equipping them with the tools to flourish.
We’ve spent the past year redesigning our brand to give Black Girl Fest a fresh new look across all our digital content. We want to mirror where we’re at now and where we want to be through our overall design and style. For us, it’s important to ensure we’re learning and evolving with our growing community.
We’re building a platform and it’s dedicated to all of you. Expect new digital programmes, engaging editorial content, and access to new industries such as tech & business, and hair & beauty.
My dream is to one day own our very first space. Imagine a physical space that is Black-owned with creative studios for co-working, pop up shops and marketplaces with a HUGE events venue to run the biggest and baddest events this industry has ever seen. I know, super ambitious, but the belief I have in my community and business is so high, no one can tell me that Black women couldn’t make it happen.
I want Black Girl Fest to be the leading platform for all Black girl content, culture and conversation globally and for the first time in a long time I know exactly how I’m going to do it. I want more physical spaces and more safe digital spaces. I want to see more Black women winning in every field we incubate because we DESERVE it!
So as I look forward into the future of Black Girl Fest whilst reflecting on the past three years, I do not doubt that with the support and energy from our community, anything is possible and that possibility starts today.
I really hope you all enjoy what we have built so far and continue to join us as we build for the future. As a community, we’ve been through so much and my hopes are for Black Girl Fest to be that positive vessel for enjoyment, creativity and hope. Hope for a better world that puts Black women at the core.
To celebrate our relaunch we’re thrilled to announce The BGF Creative Fund, a new grant to support Black women and non-binary UK-based creatives affected by Covid-19. You can apply for it here.
Stay tuned for lots more exciting updates and announcements coming real soon.
P.s. Special acknowledgement to my friends, family, mentors and community for picking up my fallen pieces, providing the glue to put me back together and allowing Black Girl Fest to shine. I can’t thank you enough for your words, advice and love.
With love,
Nicole x | https://medium.com/blackgirlfest/the-future-of-black-girl-fest-a9e9a69848a3 | ['Black Girl Fest'] | 2021-01-11 09:39:30.521000+00:00 | ['Vision', 'Black Women', 'Blackgirlfest', 'Pivot', 'Business'] |
In the dark | Without the presence of light you’re unable to distinguish colour, texture, depth and that’s how we understand some aspects of loneliness and battles with seasonal depression.
We are unable to understand the meaning of a helping hand and one that may be detrimental because it is not presented the same way we see it in the light. So we wonder, we wallow and attempt to find that glimmer that is encapsulated by a freshly lit candle. We know what it is but we’re unsure, as we question it’s validity in the situation we find ourselves in.
In some moments we distance ourselves in the hopes of waking up with more light in our lives and sometimes we do but in tough seasons we don’t. So you continue to maneuver in the dark, holding onto routine in the hopes of building momentum and in some of those moments we find the familiarity that comes with the light.
We step outside for the first time, breathe in new air and the will to make a change. Even if it is minimal, but it’s a step in the right direction. The night time presents many possible dangers and some are mostly due to not being able to see beyond our current circumstances. So as you wonder in dark. I offer you a lantern of hope, one that guides you to healing, forgiveness and understanding.
From the darkness you rise to your former self, one that walks with the authority of knowing who you are and what you’re able to overcome. Someone who understands themselves a little better in the dark. | https://medium.com/@bnkonde1/in-the-dark-9d6109f50bf | [] | 2021-06-08 16:30:47.755000+00:00 | ['Hope', 'Seasonal Depression', 'Light', 'Darkness', 'Strength'] |
Testing Spark Structured Streaming Applications: | Let’s now walk through the test case above. The pattern for loading a scenario comes from trial by fire while trying to figure out the easiest way to generate repeatable tests based off of real fake data.
My team at Twilio (Voice Insights) came up with some neat internal generators of fake telephony data — modeled off of real live traffic. Through the generator we can create true-to-life scenarios instead of just testing randomly generated data or some bogus nonsense! This sometimes means we generate thousands of scenarios that will be loaded and tested. This process is replicated in many Spark applications.
Our TestHelper has a lot of nice utilities, including the ability to load a scenario (basically read JSON and parse it to our protobuf), which is shown below.
def loadScenario[T<: Message : ClassTag](file: String): Seq[T] = {
val fileString = Source.fromFile(file).mkString
val parsed = mapper.readValue(fileString, classOf[Scenario])
parsed.input.map { data =>
val json = mapper.writeValueAsString(data)
convert[T](json)
}
}
example of scenario loading method above ^^
We then create our MemoryStream and take our scenario iterator and generate a pile of Kafka data in the form of our MockKafkaDataFrame.
case class MockKafkaDataFrame(key: Array[Byte], value: Array[Byte])
This allows us to stream data through our unit tests in the same format that we will be reading from Kafka in the real world. This saves us a ton of time over running locally embedded Kafka.
We then create a streaming query that writes to a MemorySink. This allows us to test an end to end streaming query, without the need to Mock out the source and sink in our structured streaming application. This means you can plug in the tried and true DataSources from Spark and focus instead on ensuring that your application code is running hiccup free.
What the hell. That was easy right. Yep. | https://medium.com/@newfrontcreative/testing-spark-structured-streaming-applications-like-a-boss-95a1b261cd35 | ['Scott Haines'] | 2019-06-08 16:18:40.137000+00:00 | ['Testing', 'Big Data', 'Structured Streaming', 'Apache Kafka', 'Apache Spark'] |
Reasons to Pick Commercial Roof Company | Why Choose Commercial Roof Company?
If you have a business you must make sure that you take good care of the business property you own. This is essential because you want to ensure that you provide your employees with a better work environment. This is exactly why you have to look for professionals that can repair and maintain your office roof. Unlike other parts of your office property, the roof is always exposed to external factors and harsh weather. This can make your roof vulnerable to cracks and damages over a period of time. Hiring a commercial roofing company can make things convenient for you because you want to get the best deals.
Commercial Roofing for Better Safety
Safety is the most important thing you must provide to your employees. Hence, you need to look for firms and contractors that can provide you with excellent roofing services. There are many entrepreneurs and business owners that own business office spaces and properties. If you need to focus on how you can manage and repair the roof in time you need to look for a commercial roof company that can make things convenient and simple. Hence, you have to evaluate different firms that can offer you the best services at the best price.
Variety of Commercial Roofing
You also have to focus on the types of commercial roofing before you make a final decision. This is essential because you must have clarity on what services you want and how it can allow you to reduce the cost and provide a better work environment for your employees. There are many firms that can provide you with a variety of services and therefore you need to evaluate and compare different options you can find in the market. When you have someone that can provide you with more options you can always save time and effort in the future.
Budget for Commercial Roofing
It is also essential that you focus on the overall cost of the commercial roofing services that you get. Different companies have a different budget and therefore you need to ensure that you know how much you want to spend on it. Even before you decide what services you want you to need to focus on how much you want to spend on it. This will allow you to make the best use of the data and insights you have.
Conclusion
Roof maintenance and repair can cost you more than you can imagine especially if you don’t resolve your roof issues in time. You need to ensure that you look for roof specialists that can offer better safety and a variety of services under the right price. | https://medium.com/@sandiegocountyroofing/reasons-to-pick-commercial-roof-company-a866c3ee9fc4 | ['San Diego County Roofing'] | 2019-09-14 07:26:12.430000+00:00 | ['Commercial Roofing', 'Startup', 'Commercial Roofers', 'Commercial Roof Repair'] |
Subverting Cult Tropes in ‘The Other Lamb’ | The narrative around cults both in fiction and the real world is that the paternal figure gains power only to drag his followers down, and that arc certainly has real-world precedence that would be impossible to ignore. Cult leaders, from Charles Manson and Jim Jones to any number of lesser-known figures, tend to revolve around a central, often male, leader who gains power, becomes overwhelmed by his own paranoid rants, and ultimately causes the deaths of several followers.
While this is all true to life, it isn’t all there is to the sociological phenomenon of cults. Often, women are utilized as weapons against other women, drawing each other into an increasingly horrifying lifestyle via assurances of sisterhood despite overwhelming patriarchal overtones. Yet still, there are the children that are simply born into cults, who are raised within a world of indoctrination and cannot imagine a world outside of that. It is here that filmmaker Małgorzata Szumowska finds her inspiration for her film, The Other Lamb (2019). | https://manorvellum.medium.com/subverting-cult-tropes-in-the-other-lamb-e4794dab67e0 | ['Manor Vellum'] | 2020-10-08 19:06:21.829000+00:00 | ['Film', 'Cult', 'Independent Film', 'Feminism', 'Horror'] |
How do Secure Messengers like Wire and Signal make money? | Photo by Andrej Lišakov on Unsplash
In these uptight times, privacy and confidentiality are amongst the most sensitive subjects of modern times. Every day you get the news about some security violation or breach due to application flaws. And then you get the news that yet another platform wholly neglects user privacy for the sake of maximizing profit (hey Zoom).
Messages are being leaked daily and accounts hijacked due to improper database maintenance like storing user passwords in plaintext almost in plain sight. It is disturbing and not particularly endearing.
This kind of environment has caused the growing demand for secure private messengers like the Signal chat app and the like, which treat users and their data with respect.
In this article, we will talk about secure messengers, why they matter and also discuss the ways they generate revenue.
What are secure messengers apps like Signal and Wire?
Secure private messenger is a messaging application that emphasizes privacy and confidentiality of users using encryption and service transparency.
While every modern messenger system uses various security practices (most prominently SSL/HTTPS), the difference between secure and classic messengers lies in the scope of implementation and the approach to user data.
Secure messengers evolved into a distinct category due to the growing awareness that communication over the internet is accessible by third parties and reasonable concerns that the messages can be used against the users.
People are using messaging to share personal information, photos, and other files. Why should this information be accessible to anyone else?
In addition to that, messengers are also used to report on sensitive political and social issues, especially in the countries where the government monitors the internet. Another contributing issue is lack of regulation over what is going on the private-own platforms like Facebook or Google. It is a well-known fact that big tech is using user data, including personal messages, to adjust advertisement targeting.
How does secure messengers like Signal work?
The core principle behind secure messaging is end-to-end encryption. For example, how does Signal work? The Signal app uses its own encryption method, the Signal Protocol, developed by Open Whisper Systems. It is also used in WhatsApp, which makes it better than Facebook Messenger.
This type of encryption uses a multi-layered approach that makes it nearly impossible to brute force your way into the data. This feature makes the Signal app secure.
Here’s how end-to-end encryption works:
Two users start a conversation. This event creates two sets of keys.
The private key that remains on the user’s device.
The public key that is stored on the service provider’s server.
When user A writes to user B — the public key is retrieved and used to encrypt the message so that it would be available only through the private key. The message is then sent to the user via server and decrypted with the private key.
As you can see — it is simple.
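To make the idea concrete, here is a toy illustration of the public/private key step using Node’s built-in crypto module; real messengers (Signal’s protocol included) are far more sophisticated than this, so treat it purely as a sketch of the concept.

const crypto = require('crypto');

// User B generates a key pair: the public key can sit on the server,
// the private key never leaves B's device.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048
});

// User A encrypts a message with B's public key...
const encrypted = crypto.publicEncrypt(publicKey, Buffer.from('meet at noon'));

// ...the server only ever stores unreadable bytes...
console.log(encrypted.toString('base64'));

// ...and only B's private key can turn them back into text.
const decrypted = crypto.privateDecrypt(privateKey, encrypted);
console.log(decrypted.toString()); // "meet at noon"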
The reason why end-to-end encryption works is its sheer impenetrability.
The data stored on the server is no use in its encrypted form — it is just some letters and numbers beyond comprehension (unless you’re into conceptual writing). No one can read it without the private key. It is so complex, brute force methods of deciphering are no good for it — too hard to match a key with long strings of various randomly generated characters (there are way too many possible combinations).
Unless the perpetrator can find a way of retrieving a private key from the user’s device (it is possible, sim swap and losing a phone helps) — the intercepted data is as good as the cat’s full body typing documents.
What is secure messenger’s value proposition?
Now let’s look at some of the major secure messaging apps and their business revenue models.
First, let’s nail down the basics.
The value proposition of the encrypted messaging app is built upon four basic principles:
End-to-end Encryption (aka E2EE);
Message deletion;
Limited use of Metadata
Transparency of service and product.
Let’s look at them one by one.
Encryption is the most prominent selling point of the encrypted messaging app. It is used to guarantee user privacy and data security by scrambling messages and making them unreadable without the encryption keys.
Message deletion is another critical piece of the puzzle. While message deletion is one of the basic features of every messenger, secure or not, there is a caveat. It is usually just a prop. Users can delete messages on their device, but not from the other user’s conversation or server log. These messages may remain on the service provider’s servers. Facebook is doing it all the time; Google does it, Yahoo probably remembers what you did in your past life. So actually giving users the power to erase their messages completely is an excellent step towards building a trustworthy relationship with the users. For example, Signal has a timeout option that automatically deletes messages after a certain period upon the message is read.
Metadata is one of the most problematic elements of user privacy. It is used to identify users and their credentials. Most messaging apps store message metadata by default. It includes such data as time, sender, receiver, contact list, device ID, and so on. This information can be used by hackers to identify the user and apply social engineering skills to retrieve the decryption key. The most prominent example of storing metadata on servers is WhatsApp. While it applies E2EE to messages, it also saves some aspects of message metadata, which is a reason to be concerned. On the other hand, Signal stores only the last connection data for log-in and nothing else. The rule of thumb is — the less metadata secure messenger uses, the more secure your data is.
Transparency for secure messengers is twofold. First, you have terms of service which directly states the intention of providing a safe and confidential communication platform. But talk is cheap, and if you talk the talk, you need to walk the walk. That’s where the second aspect of transparency kicks in. The real sign of clarity is when your secure messaging app has open source code. So that anyone can look inside and check whether it is safe enough with no backdoors, that is the real indication that there is nothing to hide. On the other hand, it is also the right way of improving the quality of an application by the crowd effort as any expert can contribute to the polish of the product.
These are the foundation blocks of the product upon which the monetization model is built. Now let’s look at how different secure messaging platforms operate and generate revenue.
How do secure messengers make money?
In this section, we will look at the business revenue models of several popular secure messaging apps.
WhatsApp — the most popular secure messaging app. Curiously enough, WhatsApp uses the Signal encryption protocol. WhatsApp is unique in that it enjoys the luxury of being part of the Facebook corporation and benefits from its massive infrastructure. This aspect also explains its business model.
Originally, WhatsApp had a subscription fee, but it was scrapped after the Facebook purchase, and now the app is free to download. Given that Facebook has a big advertising platform and that WhatsApp stores user metadata, it is fair to say that this data figures into the ad targeting mechanisms and is subsequently applied on Facebook. In addition to that, WhatsApp is currently testing in-app advertisements and monetization that will also use the aforementioned targeting mechanisms.
Signal is the trailblazer of secure messaging applications, and it is another hard case, but for a different reason. The thing is, there is no real answer to "how does Signal make money?" because it doesn't generate revenue. Signal's developer, Open Whisper Systems, doesn't operate as a business and stays afloat on government grants instead. While this stance is noble and deserves respect, there is no business angle to speak of.
Although, you might argue that this kind of project is very beneficial for reputation and can open up many different partnering opportunities.
With Wire things get interesting. Wire’s pitch is basically “Slack and Skype but with End-to-end encryption and no fuss.” The application is built around a subscription revenue model with different features. Let’s look at how Wire makes money:
There is a streamlined free personal app that offers an excellent showcase of the platform’s possibilities.
Then there is the Pro version (6€ monthly) with more features for a reasonable price; it can be used for groups to discuss projects and maintain confidential communication all the way through.
Wire Pro is further expansion of the Enterprise version that adds third-party integration and on-premise deployment to the mix.
Finally, there is a crisis communication edition designated Red with the extreme levels of security designed for emergency events like malware attacks and breaches.
Wickr is another secure messaging app whose business revenue model rides the Enterprise wagon. It is specifically designed to handle company-wide confidential communication. The majority of Wickr's features are tailor-made for handling sensitive information. For example, in addition to the traditional timer, you get a shredder feature that deletes all traces of a message or file that ever existed.
How does Wickr make money? The revenue model revolves around multi-level subscription tiers with a gradual expansion of features that culminates in the ultimate enterprise package.
Threema is probably the most interesting of the bunch. It is very similar to WhatsApp in terms of positioning — a secure, strictly confidential messenger. Their main selling point is the so-called "no strings attached" policy, as the application requires no telephone number tie. This feature makes it even more anonymous than the other apps on the list. Like Wickr and Wire, Threema uses a multi-tier subscription revenue model with different types designed for different communication approaches — it is easy to understand how they manage to generate a steady stream of revenue. Here's how Threema makes money:
There is a standard Work model designed for team communication.
Then there is a top-down Broadcast model. It is designed for more formal communication like general announcements.
The most interesting is Threema Gateway, which is not a product but a service. With that, users have their own communication tools and use Threema’s server as a secure transmitter for a reasonable fee.
Why is secure communication essential for business?
In the context of business operation, communication is a vital element of maintaining an efficient and dynamic working process. It lets you keep everything up to date and on the same page.
And since many things are going on at the same time — tools like messengers are one of the many helpers that make the working day a little more manageable.
So why bother with secure messengers?
Here's why. The information that goes through messaging applications requires a level of confidentiality that most messengers can't provide. It is one thing when a user is compromised, and an entirely different thing when the service provider itself is compromised. You can't gamble your confidential information like that.
Some of the information, like employee and customer data, proprietary information, data directly linked to business performance or future projections, may be strictly under a non-disclosure agreement. Without proper encryption, it remains vulnerable to exposure. The chances are slim, but the possibility remains.
And there are people interested in acquiring that sensitive information and they like to play dirty because getting a competitive advantage is a decent motivation to go beyond the law. And when private conversations leak, especially the business-related ones — the impact is comparable with Titanic hitting an iceberg.
Encrypted communication prevents this from happening. So what's the problem?
I want to give you one example from personal experience. A couple of years ago, I worked for an NGO dealing with the legal system. The team preferred to work within a Facebook Messenger private chat (great choice, I know), and the conversations ranged from unflattering to scathing, especially regarding the grant supervisors whose reporting requirements were strict. Guess what happened next.
What’s the problem with privacy in messaging services?
You may say, "but isn't every messenger supposed to be private by design?" The answer is yes, but not really, because the internet doesn't work that way.
Here’s why.
Information transmission on the internet is always monitored in one way or another. After all, data is transmitted through servers from point A to point B and so on. Communication via a messenger application always happens through a third party — the service provider.
While the terms of use suggest that all user data is private and seemingly untouchable — it is available to the service provider and can be used elsewhere because users accept the terms of service agreement upon registering on the platform. In other words, the protection is flimsy.
Now let’s get back to encryption.
Encryption is an inherent part of modern messenger services. In one way or another, every platform brags about encrypting user messages. But not every type of encryption means real privacy and confidentiality.
The standard encryption method used in popular messaging apps like Facebook Messenger and Skype is SSL (Secure Sockets Layer). This type of encryption prevents third parties from intercepting messages in transit (such interception is called a "man-in-the-middle" attack). It is crucial, but there are also other ways of breaching privacy.
The thing is — at the same time, service providers have access to user information in its full scope — starting from messages and attached files and going as deep as habits and preferences using sentiment analysis.
For example, Facebook monitors user conversations on Messenger and figures this data into its advertisement targeting (which, by the way, is not very ethical).
Now imagine a situation in which the service provider is compromised. Things look pretty grim in this scenario, right? It happens all the time. WhatsApp recently experienced a massive breach with lots of data exposed. The messages themselves were left unscathed, but contact data leaked.
In addition to that, service providers are obliged (under specific circumstances) to share user data with law enforcement. That seems reasonable, but law enforcement can also abuse this right (look at China or Russia). But when you have end-to-end encryption, the information on the service provider's servers is useless to any third party.
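To make the difference concrete, here is a minimal Node.js sketch of the end-to-end idea. It is purely illustrative: it uses a single static key, whereas real messengers such as Signal derive fresh keys per message with the double-ratchet protocol. Still, it shows why ciphertext sitting on a provider's server is worthless without the endpoint keys.
const crypto = require("crypto");
// End-to-end: the shared key exists only on the sender's and recipient's devices.
function e2eEncrypt(message, sharedKey) {
  const iv = crypto.randomBytes(12); // fresh nonce per message
  const cipher = crypto.createCipheriv("aes-256-gcm", sharedKey, iv);
  const ciphertext = Buffer.concat([cipher.update(message, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}
const sharedKey = crypto.randomBytes(32); // never uploaded to the server
const packet = e2eEncrypt("quarterly numbers attached", sharedKey);
// This is all the provider (or anyone who breaches it) can store or read:
console.log(packet.ciphertext.toString("hex"));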
In conclusion
The secure messenger market is one of the fastest growing in the software industry. Due to increasing privacy concerns and a never-ending stream of data breaches, more and more companies are starting to take their communication practices very seriously, and that makes them turn to secure messengers.
As you can see, getting into the secure messenger game is not quantum physics. You need to have a solid value proposition, open-source transparency, and reasonable prices in exchange for services. Everything else is a matter of positioning.
Tech Diaries: Apple’s Reinvention, Starlink launch & Microsoft’s Pivot towards Blockchain | Tech Diaries: Apple’s Reinvention, Starlink launch & Microsoft’s Pivot towards Blockchain
Apple enters Digital Payments, Global Streaming Internet by SpaceX & Microsoft’s Blockchain Projects
Welcome to the first edition of Tech Diaries — my new blogging series about important issues in the tech world. For those of you who have come across my other blogs, I have a similar series called Crypto Diaries, which pertains mainly to cryptocurrencies & blockchain. So without further delay, let's dig into the tech world.
Apple Card
I still remember how excited I was when I got the first edition of the iPhone years ago. Apple brought in a revolution with its smartphone devices. Over the years, Apple has become one of the biggest & most valuable brand names in the world. It is one of the elite members of the tech group dubbed GAFA (Google, Amazon, Facebook, Apple).
Last year, Apple was the first of the big four to hit the $1 trillion valuation mark, followed by Amazon & most recently Microsoft. A recent market correction has brought down these companies' valuations from their highs, but all of them are still within touching distance of that coveted prize.
Coming back to Apple, the strong growth over the years has been spearheaded by its flagship product — the iPhone. While Korea's Samsung & China's Huawei have squeezed Apple's smartphone market share, a loyal customer base has remained Apple's biggest prize.
For years the iPhone printed money for Apple, but with increased competition & shrinking gross margins, Apple's growth began to stall around 2016. This was about the time when they realized that stagnating innovation & higher-priced phones were not going to spur the growth.
They needed to diversify into something which can outlast the shine on the brand new iPhone. Services like Apple Music, App Store purchases, iCloud subscriptions, as well as new products such as the Apple Watch, AirPods and the HomePod have given a new lease of life to the dwindling revenue of the company. The most important announcement yet in this tilt towards subscription services came in March when Apple announced the launch of the Apple card.
The latest innovation from Apple is a digital payment system with the conveniences customers desire — No late fees, No wait to qualify, Cash backs of up to 3% & No long strings of numbers! While its tech counterpart Facebook mulls over developing its own digital payment system for its social media empire, Apple has already taken the first step towards the future of money.
And the one thing that Apple has an abundance of that Facebook doesn’t is the trust factor of its client base. Apple is basically enhancing the user experience by providing interactive tools & charts in the Daily Cash App to track your balances, cash-backs, payments, expenses, etc. This will strengthen its ever-growing ecosystem of Apps, services & hardware even further.
Apple has also promised not to eavesdrop on your spending habits — a believable claim considering there haven’t been any data breach scandals, unlike its contemporaries FB & Google which are much more reliant on third-party advertisements for their revenue generation, not to forget their data breaches!
Endeavors like this are shaping what the future of money looks like & this summer we will find out how the Apple Card is received by consumers. There is a small catch though — are you willing to buy a $1,000 smartphone to access all this?
Global Internet
I respect Elon Musk as a true visionary of our times, despite his tweeting rants and his occasional pot-smoking podcasts. Companies like Tesla, SolarCity, SpaceX, Hyperloop, OpenAI and Neuralink are the brainchildren of this genius. His financial troubles, especially at Tesla, may be a topic for another day.
But for now, SpaceX is launching the first batch of 60 satellites today for the company’s Starlink project — an endeavor to provide broadband internet service to the global population by beaming it from the satellites positioned in the low-Earth orbit (LEO).
These satellites can beam the internet to practically anywhere on the globe. To achieve this kind of global connectivity, a constellation of about 12,000 satellites would eventually be needed. The satellites would communicate with each other via lasers and deliver internet roughly 40 times faster than current satellite services by employing Ku/Ka-band broadband.
The Falcon 9 will be launching the first batch of satellites later today. Amazon CEO Jeff Bezos's Blue Origin is planning a similar project called 'Project Kuiper', in which 3,236 satellites would provide a global broadband internet service, to be launched on its New Glenn rocket in 2021. Either way, these are significant steps towards bridging the global digital divide. Bravo!
The critics, however, argue that this kind of massive satellite deployment is going to aggravate the burgeoning space junk problem we are dealing with in Earth's orbit. Scientific American states that a worst-case scenario could trigger the "Kessler syndrome" — a cascade of space debris that would trap humans on Earth.
SpaceX defended the plan by saying that the satellites will fly in a much lower orbit, keeping them out of range of the current space debris in higher orbits & making them easier to retire once they have outlived their usefulness — which, according to NASA, will happen after about 5 years. Let's look at the bright side… shall we?
Microsoft's affinity for Blockchain
Microsoft has evolved tremendously from being largely a software producer whose Windows operating system ran the majority of computer systems around the world in the '90s. The gradual shift towards services under the dynamic leadership of its current CEO Satya Nadella has given it a new innovative spirit backed by strong growth, as the company's rock-solid performance shows.
The important thing to note is that of the $110.36 billion in revenue in 2018, $27.6 billion or almost a quarter of it came from its Microsoft Azure Cloud service. Talking about its Azure Cloud service — the platform has started to take a liking towards blockchain over the past year with the releases of its blockchain development kit and the Azure Blockchain Workbench.
Two weeks ago, the company released Azure Blockchain Services — a fully managed BaaS (Blockchain as a Service) platform that allows the formation, management & governance of consortium blockchain networks. The release also supports other advanced tools for its developers, including AI, mixed reality and IoT. This came on the heels of Amazon launching its own BaaS platform — Amazon Managed Blockchain (AMB).
More recently, the Seattle-based tech giant also announced that it is building a decentralized identity (DID) network on top of the Bitcoin blockchain. The open standards infrastructure, known as the Identity Overlay Network (ION) will give control of data to the users backed by decentralized networks while enabling privacy & security at the same time.
The mainnet for the platform will be launched in the coming months. These moves by tech giants signal a shift towards broader blockchain adoption & bode well for the future of this technology.
Before wrapping up, just a quick recap of what's happening with Huawei — the Chinese tech giant. With the trade war escalating, the U.S. has just announced broad-based curbs on the company. The move would effectively stop American companies from using Huawei's equipment or providing it with any essential supplies. The company, which is already facing severe challenges with the global roll-out of its 5G networks in the face of American opposition, could be headed for more trouble.
Home Deep Cleaning Services in Moshi, Pune | S3 Care Services — All types of Cleaning Solutions for your Home and Office
To stay healthy, a hygienic environment in your home is essential. With the help of cleaning professionals, you can deep clean your house correctly and create a completely sterile environment. If you are in Pune and looking for top-class home cleaning services, there are a lot of professional cleaning services available. S3 Care Services is a renowned cleaning services company and has been rated among the top cleaning companies in Pune for many years. Nowadays, many service providers put in their best efforts, backed by prior experience, to deliver the best cleaning service.
Services Covered by us in Home Cleaning.
Floor Cleaning: We use disinfecting chemicals and machinery to remove deep layers of dust from the floor surface.
Toilet and Bathroom Cleaning: We deep-scrub bathrooms and disinfect & sanitize them properly.
All Surfaces: We clean every surface, including the steel, wooden, and glass parts of furniture and floors.
Walls & Ceiling: We remove dust from all the walls and ceilings throughout the entire house.
Kitchen Cleaning: We clean all parts of the kitchen, including all furniture and electrical fixtures.
Window Cleaning: We clean all the glass doors and windows in the house with proper care.
Other Cleaning: Cleaning of electrical appliances, ceiling fans, switchboards, etc.
Other Cleaning Services offered by S3 Care Services (beyond home deep cleaning)
Carpet Cleaning Services
Carpet is one of the essential parts of the home's surface, and it enhances the home. Daily walking across carpets and the spilling of food and drinks by children and pets make them dirty. But don't worry about carpet cleaning; S3 Care Services is there for you to solve all your household and office cleaning issues.
Office Deep Cleaning Services
The office is a place that represents your business and professionalism. Cleaning the office is essential, and regular cleaning gives it a presentable look, but for long-term maintenance of assets and office furniture, deep office cleaning serves best. In Pune, S3 Care Services is the best choice for deep office cleaning; it helps businesses maintain their office buildings and provides a clean and hygienic environment.
Housekeeping Services
Hiring a housekeeping service is a pretty tricky task nowadays because of the sheer number of housekeeping services available. Choosing the best one is quite challenging; we offer you a trusted housekeeping service with staff who have worked with us for years. You will get all the essential benefits of hiring a housekeeper with us. Moreover, it is recommended that you hire a housekeeper through a company, as companies employ trusted people who will work perfectly for you.
Facade Cleaning Services
Cleaning the interior of a building is easy, but cleaning the exterior is quite tricky. For ordinary people, it is challenging to clean the exterior of a building, including facades, glass doors, and other design elements. So, if you own an office or building in a location such as a main road, where there is plenty of pollution and dust, your building's exterior fades. You need a professional facade cleaning service to maintain the exterior of the building, and S3 Care Services is there for you in Pune.
Sofa Home Deep Cleaning Services
Whether in the office or at home, the sofa is the most-used piece of furniture for sitting. Being one of the most-used pieces of furniture for sitting and relaxing, it naturally gets dirtier than any other furniture. A sofa cannot be cleaned properly through regular cleaning, as it requires more attention and special tools for deep cleaning. In Pune, S3 Care Services is one of the prominent sofa cleaning service providers. You can search the internet for sofa cleaning services in Pune and check our service reviews.
Chair Cleaning Services
The chair is another widely used piece of furniture for sitting. It is available everywhere, including at home, in schools, offices, hospitals, airports, and beyond. In public and commercial places, chairs are cleaned by the concerned authorities. But in the office, these chairs need to be cleaned properly, as they represent cleanliness. Ordinary cleaning is not efficient at cleaning chairs properly, so a professional cleaning service is best for deep cleaning of chairs.
Kitchen Cleaning Services
The kitchen is one of the dirtiest and oiliest parts of the home. You may do regular cleaning, but the daily preparation of sweets and meals makes it a mess. Moreover, electrical appliances like the fridge and oven also get dirty. For kitchen cleaning, a professional service is best suited to clean the kitchen, including the refrigerator, microwave oven, dishes, walls, and other items.
https://www.s3deepcleaningpune.com/home-and-office-deep-cleaning-services-in-moshi.php | https://medium.com/@jayeshos00/home-deep-cleaning-services-in-moshi-pune-d0270d7cd811 | [] | 2021-12-25 05:42:18.056000+00:00 | ['Cleaning', 'Cleaning Services'] |
The world is going through a historic yet unpleasant time. We are prepared and pledged to fight this pandemic with social distancing, which I would rather call 'physical distancing', and quarantined life.
Efforts go on with prayers and hope to tackle the issue and its impact on the economy, ecology, and livelihood.
Today the economy and livelihoods are suffering because of a microscopic virus. It presents a real case for out-of-the-box thinking, something all of us must accept as we cruise towards the times to come. The virus has changed the way we live, globally. We could create a better quality of life for every creature if we are willing to learn from this pandemic.
On the 50th anniversary of Earth Day, on April 22, the WHO chief said, "COVID-19 is reminding us of a simple but vital truth: WE ARE ONE SPECIES, SHARING ONE PLANET." Nature has given us this warning. It is putting us in our time-out room right now, and we should use this time to think about how we are going to treat this planet and nature when we come out of this. It has become all the more necessary to allow time for humility and for the earth to heal. If the life of materialism has come to a standstill, it is wonderful that mother nature emerges once more at the forefront of our awareness.
Social (physical) distancing provides an ideal environment for self-reflection and for enhancing hidden creative skills. A movement towards inner peace and enlightenment could be one outcome of COVID-19.
Fear of this virus provokes emotions that remind us we all share in one another's suffering and joy, and that we are all tied together by a cosmic rope. We need to become more inclusive and empathetic towards fellow creatures and the environment.
For us all, glimmers of hope have emerged from this COVID-19 pandemic. It is a great time to rediscover our compassion as one people and step up to alleviate the suffering of all those who are vulnerable among us. As the Bhagavad Gita says, "karma yoga is the surest path to personal truth and enlightenment."
Every drop is an ocean in itself. So think about doing small acts, you can do to brighten someone else’s day.
IMPACT OF THE PANDEMIC-
It is time to stay focused on the spiritual path and prayers, and to watch for ways to grow and learn. We can truly evolve during these times of crisis.
The anxiety caused by this situation can be channeled into spiritual growth. However, the impact of this pandemic is not the same for everyone. We can say that we are in the same storm, but we are sailing different boats. For some it is a time of reflection; for some it is a crisis; a few of us are facing loneliness; others are looking for ways to survive; some are in great depression because they have seen the deaths of their near and dear ones; and for some, life is smooth and it is not a big deal. So the perceptions and needs of every person are different. We are in no position to judge anyone's situation; rather, we should try to navigate the route with compassion.
HOW DOES A PANDEMIC END?-
Fear and ignorance nourish a pandemic. Either we find medical treatment or we fight our fears, so that the pandemic weakens and is slowly eradicated from society. A bigger threat beyond the virus itself is the recession facing the economy, which may either fall into a depression or push us to choose another way of living. Whatever happens, together we need to fight and win the war against this pandemic.
Japanese author Haruki Murakami writes in his novel 'Kafka on the Shore': "And once the storm is over, you won't remember how you made it through, how you managed to survive. You won't even be sure, in fact, whether the storm is over. But one thing is certain: when you come out of the storm, you won't be the same person who walked in."
So maybe the world is not going to change, just as nothing changed in the affected areas after the tsunami, but its people certainly will. Whenever a challenge comes up, a lot of theorists assume how things will turn out, whether negatively or positively, though such predictions are not well rooted in reality. But in my opinion, we should use the present COVID-19 situation as a great upheaval to change certain aspects of life.
Human Brain — A Bayesian model. In the previous post we discussed how… | In the previous post we discussed how the human brain makes predictions and the model underneath it to forecast the surrounding world’s behaviour. Now, we’ll dive into the details of our brain’s Bayesian nature.
The fundamental principle of Bayesian probability is that in the light of new data, you change your assumptions. For example, if you see a dog-like entity running towards you, you may be 34 percent confident that it will assault you; if you recognize that it is an unmuzzled pit bull, your belief that you will be assaulted rises to 78 percent; if it continues to bark, to 92 percent; and if another barking dog passes you from behind, the chance that your attack-belief is valid drops again. This is a vague and incomplete example, but it shows how obtaining new facts changes your certainty in a belief. So: Prior Belief + New Evidence = Revised Belief.
Bayes' law is the mathematical representation of this idea: P(B|E) = P(E|B) * P(B) / P(E). It calculates the conditional probability that your belief B is valid, given evidence E. During predictive processing, this is what the brain approximates. In fact, in line with Bayes' theorem, the brain updates its mathematical model of the universe by incorporating prediction errors, which leads to the idea that the human brain is Bayesian in nature.
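As a quick illustration, here is a toy JavaScript sketch of that update rule applied to the dog example above. The likelihood values are made-up illustrative assumptions; only the update formula itself comes from Bayes' law.
// posterior = P(E|B) * P(B) / P(E), with P(E) expanded over B and not-B
function bayesUpdate(prior, likelihoodIfTrue, likelihoodIfFalse) {
  const evidence = likelihoodIfTrue * prior + likelihoodIfFalse * (1 - prior); // P(E)
  return (likelihoodIfTrue * prior) / evidence; // P(B|E)
}
let attackBelief = 0.34; // a dog-like entity is running towards you
attackBelief = bayesUpdate(attackBelief, 0.9, 0.12); // it turns out to be an unmuzzled pit bull
console.log(attackBelief.toFixed(2)); // ~0.79
attackBelief = bayesUpdate(attackBelief, 0.8, 0.3); // it keeps barking
console.log(attackBelief.toFixed(2)); // ~0.91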
Your brain knows the likelihood P(E|B) of the evidence E given hypothesis B, the model's estimate or prior probability P(B), and the probability of the lower-level data E within a wider hypothesis space, P(E). Applying Bayes' law yields a posterior probability P(B|E), from which the prediction error is computed. The error signal, modulated by attention (an estimate of precision), then propagates back upwards to the next higher stage where the construct is modified, your brain corrects its probabilistic map of reality, and the hypothesis testing begins again. Imagine, for example, that you want to cross a street:
Your brain measures a hierarchical cascade of projections P(B) on the basis of a large collection of assumptions B about the present case, like
priors about how shapes, colours, and noises change (for low-level perceptual inference),
priors about how cars move and traffic signals change (for high-level perceptual inference), and
a prior about you standing on the other side of the street (for high-level active inference).
2. Your brain collects sensory data E on what is currently happening on the street and in your body, like
proprioceptive information representing the movements of the eye, head, and leg,
conceptual information representing vehicles and traffic signs, as well as
agentic data representing your relation to the goal state.
3. Your model contextualizes the information to obtain P(E) and, considering your predictions, calculates the likelihood of the data, P(E|B).
4. In order to measure the posterior probabilities P(B|E), the brain indirectly applies Bayes’ theorem and calculates prediction errors at the appropriate processing layers by:
matching posteriors and priors regarding shapes, colors, and noises,
matching posteriors and priors regarding your eye, head, and leg movements,
matching posteriors and priors regarding vehicles and traffic lights, and
matching the posterior and the prior regarding your goal state.
5. Weighted by expected precisions, the prediction errors propagate up the hierarchical cascade, level by level, and adjust the resulting collection of hypotheses, thereby updating the model to minimize future prediction errors; or they are actively reduced via motor commands sent to the muscles that move your eyes, head and legs (more about that second option in a second).
You will find yourself on the other side of the street after ample cycles of hypothesis checking and paradigm updating during continuous sensorimotor contact with the universe (if all goes well). It is possible to characterise this dynamic predictive mechanism as the brain constantly executing Bayes’ theorem.
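For intuition, the whole loop can be caricatured in a few lines of JavaScript. This is a deliberately crude sketch (a single scalar estimate standing in for a whole hierarchical model, with made-up numbers and learning rate), but it shows the predict, compare, precision-weight, update cycle described above.
// Toy perceptual inference: nudge the estimate toward the input,
// with the prediction error weighted by attention (expected precision).
function perceive(estimate, sensorySamples, precision = 1.0, learningRate = 0.2) {
  for (const sample of sensorySamples) {
    const predictionError = sample - estimate; // bottom-up surprise
    estimate += learningRate * precision * predictionError; // revise the prior
  }
  return estimate;
}
// Prior guess about some feature of the street vs. noisy sensory evidence.
console.log(perceive(0.3, [0.8, 0.75, 0.82, 0.78]).toFixed(2)); // 0.59: pulled from the 0.3 prior toward the ~0.8 evidence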
So, what does this explain?
The ambitions for the Bayesian brain theory are sweeping. It is supposed to be a unifying framework for neural, emotional, and psychological phenomena alike. If true, predictive modelling describes everything about the brain and mind at a computational level, for reasons we'll see shortly. But let's first take a look at how the predictive processing paradigm relates to particular cognitive phenomena:
Perception is the prediction of sensory signals and the inferential mechanism of adjusting the internal predictive model to minimize predictive error. This is perceptual inference: by modifying the model so that sensory responses fit previous expectations, reducing prediction error. You make the model more analogous to the environment by interpretation.
Imagination can be a side-function of the predictive machinery of the brain based on high-level forecasts that do not move all the way down the hierarchical cascade and may not fit actual sensory results, such as dramatically decreasing the weighting of accuracy on low-level error signals.
Delusions and hallucinations can arise from improvements in the ability to combine incoming information with perceptual predictions.
2. Action is proprioceptive inference and the inferential mechanism of modifying sensory signals to reduce prediction error. This is active inference (also called predictive control): reducing prediction error by moving the body so that sensory signals meet prior expectations. Through motor action you make the world conform to your model. Although the predicted proprioceptive states are not yet actual, action changes the world to make them so (which means that all your actions are essentially self-fulfilling prophecies).
At the lowest layer of the active inferential phase, reflex arcs run and work to fulfil proprioceptive predictions before the expected sensory feedback is received. The model parameters are not modified, but held constant, as with perceptual inference.
Deliberate action is a high-level forecast of an arbitrary potential target state. When a concrete chance to act occurs, multilevel cascades of lower-level predictions are involved in transforming the environment in a manner that makes the high-level prediction come true (i.e., reaching the target state), thus minimizing long-term prediction errors. The attainment of conscious goals is a clear example of predictive modeling working at large spatiotemporal scales, since accomplishing a goal may take minutes, months or years, and it may require one, say, to fly overseas.
3. Emotion is interoceptive prediction and the active inferential mechanism of inducing physiological changes to minimise prediction error.
Drive is the active-inferential mechanism by which high-level behaviour minimises interoceptive prediction error. For example, if the brain senses low blood sugar levels by interoceptive inference and the resulting prediction errors are not corrected through autonomic regulation (e.g., metabolizing body fat), you would feel driven to mitigate the error signals through voluntary action (e.g., eating sugary food).
Mental disorder may be the inability to reduce interoceptive prediction errors.
4. Attention is the optimisation of expected precision; it modulates the weight of prediction errors. The effect of shifting focus is that the same sensorimotor hypotheses (lower-level priors) cannot be held for very long, and the world is an ever-changing place that will destroy you if you don't change (which you can't if you never shift your mind, i.e., if you keep assigning the same weight to the errors of the same predictions).
5. Learning is the updating of your internal model based on prediction errors, so that your forecasts steadily improve. The better your predictions of the world's causal, probabilistic structure, the more efficiently you can interact with it. This is why adolescents, having practiced the relevant motor skills, move more fluently through space than children, why you become less clumsy after practicing a sport or a musical instrument, and why it is important to seek truth — enhanced effectiveness due to better predictions.
6. Memory consists of your internal model's learned parameters, while evolution has genetically built the model's non-acquired parameters into your nervous system as innate knowledge. Both components determine the brain's predictions.
7. Self-awareness is the inferential approach of minimizing the error of inference by modifying the internal self-model, i.e. the model that makes assumptions on what’s most likely to be “you”.
Agency arises from a sufficiently strong match between the self-model's predictions and the exteroceptive input. If the prediction error is too high, you feel as though you are not in control of your decisions.
Ownership results from a strong enough match between the predictions and proprioceptive feedback of the self-model. You can experience that one of your limbs does not belong to you if the estimation error is too high, or you may have an out-of-body experience.
8. Belief is a hyperprior; a structural prior at a high degree of abstraction; a high-level prediction that involves general knowledge of the world. Some examples:
Physical beliefs. You expect situations to shift over time, you expect large items to collapse easily, and you expect that you will not be able to switch right and left at the same time.
Physiological beliefs. When you open your eyes, you expect to see something and you expect fire to damage and burn your flesh.
Psychological beliefs. To make you feel proud, you expect a fantastic success and you expect to feel regret when you shy away from a challenge.
Social beliefs. You want happy people to smile, provocative words to spark an answer, and the ability to corrupt you.
Cultural beliefs. When they approach stop signals, you want vehicles to slow down, exclusive sales to be displayed in shops, and handshakes to not last an hour.
What's more interesting is that this Bayesian nature of our brain tends to shape some of our deepest ethics and morals:
We value truth because concrete beliefs about our surroundings empower us to make good predictions.
We respect integrity and sincerity because we know what to expect from truthful, sincere individuals, which improves our prediction-making capabilities in social situations involving them.
We respect simplicity because straightforward beliefs allow us to easily produce high-level predictions.
We value wisdom because reflected-upon life experience equips us with reasonably stable hyperpriors that reduce long-term prediction error.
At the same time, the predictive nature of human cognition illustrates why we appear to become enslaved by patterns, feel addicted to comfort zones, and shy away from uncertainty. Outside routines, comforts, and certainty, our priors and hyperpriors are less accurate, which causes us to make poorer decisions and incur higher prediction errors. Yet high prediction errors are exactly what the brain tries so hard to avoid. The effort of dramatically redesigning our models to fit novel, unfamiliar situations can be so great that the brain actually decides to induce feelings of terror, anxiety, or irritation to make us stick to what we can predict more accurately, i.e. our normal patterns of action in comfort zones of calming regularity.
If we want to abandon or extend our comfort zones, we have to persuade our minds that at a wider time scale, the instantly resulting high prediction errors are worth it. We may thus conceptualise willpower as a hyperprior, namely as the abstract structural prediction that in some cases, short-term higher prediction errors can lead to long-term lower prediction errors. Thus, self-control can minimise overall errors in prediction.
But then again, how does the average error in estimation over time really matter to a brain that at any given moment continuously has to contend with immediate errors? We’ll get to this part in our next post… | https://medium.com/@millennial-talks/human-brain-a-bayesian-model-7d0766eaee0a | ['Millennial Talks'] | 2020-12-19 23:39:51.479000+00:00 | ['Cognition', 'Bayesian Statistics', 'Predictions', 'Brain', 'Reverse Engineering'] |
Vue 3 — Teleport. How to render items in the location of… | Photo by Axel Ahoi on Unsplash
Vue 3 is in beta and it’s subject to change.
Vue 3 is the up and coming version of Vue front end framework.
It builds on the popularity and ease of use of Vue 2.
In this article, we’ll look at how to use the teleport component to render elements and components in a different location in the DOM.
Teleport
We can use the teleport component to render parts of our Vue template in a location that's different from its usual location in the DOM.
This is handy for creating things like modals and overlays.
The DOM element that we want to render our items in must already exist.
Otherwise, we’ll get an error.
For example, we can write:
<!DOCTYPE html>
<html lang="en">
<head>
<title>App</title>
<script src="https://unpkg.com/vue@next"></script>
</head>
<body>
<div id="app">
<teleport to="#footer">
<p>footer</p>
</teleport>
</div>
<div id="footer"></div>
<script>
const app = Vue.createApp({});
app.mount("#app");
</script>
</body>
</html>
We added the teleport component to our template with the to prop set to the selector to mount the content inside it.
Therefore, the div with ID footer will hold the p element that’s inside the teleport component.
Using with Vue components
If teleport contains a Vue component, that component remains a logical child of the teleport's parent component, even though it's rendered elsewhere in the DOM.
For example, we can write:
<!DOCTYPE html>
<html lang="en">
<head>
<title>App</title>
<script src="https://unpkg.com/vue@next"></script>
</head>
<body>
<div id="app">
<parent-component></parent-component>
</div>
<div id="footer"></div>
<script>
const app = Vue.createApp({});
app.component("parent-component", {
template: `
<h2>parent</h2>
<teleport to="#footer">
<child-component name="james" />
</teleport>
`
});
app.component("child-component", {
props: ["name"],
template: `
<div>{{ name }}</div>
`
});
app.mount("#app");
</script>
</body>
</html>
We put the parent-component in the root template of our app.
It has a teleport with its to prop set to #footer.
So it’ll be rendered in the div with the ID footer .
It has the child-component inside it.
And that'll be rendered inside the div with ID footer as well.
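For reference, and assuming the markup above, the rendered output should end up looking roughly like this, with the h2 staying under #app and the teleported child landing in #footer:
<div id="app">
<h2>parent</h2>
</div>
<div id="footer">
<div>james</div>
</div>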
Using Multiple Teleports on the Same Target
We can have multiple teleports with the same selector set as their to prop value.
They'll be rendered in the order that they're defined.
For instance, if we have:
<!DOCTYPE html>
<html lang="en">
<head>
<title>App</title>
<script src="https://unpkg.com/vue@next"></script>
</head>
<body>
<div id="app">
<teleport to="#footer">
<div>foo</div>
</teleport>
<teleport to="#footer">
<div>bar</div>
</teleport>
</div>
<div id="footer"></div>
<script>
const app = Vue.createApp({});
app.mount("#app");
</script>
</body>
</html>
Then the rendered result would be:
<div id="footer">
<div>foo</div>
<div>bar</div>
</div>
Photo by chuttersnap on Unsplash
Conclusion
The teleport component lets us render parts of a Vue template in a location that’s different from the usual location in the DOM.
Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel! | https://medium.com/javascript-in-plain-english/vue-3-teleport-73d573648b38 | ['John Au-Yeung'] | 2020-11-16 20:15:11.291000+00:00 | ['Technology', 'Programming', 'Software Development', 'Web Development', 'JavaScript'] |
5 Elements to Include On Your HubSpot Blog | 5 Elements to Include On Your HubSpot Blog
The average blog will have a high bounce rate as people consume your content and subsequently leave the site with the information they need.
But what if there was a way to keep them engaged? A great blog will use different elements to keep each website user interested whether this is driving traffic to other website pages or generating email subscriptions.
If you’ve got a HubSpot website — check out the benefits it could have for your business’ lead generation strategy — then you’ve got plenty of tools at your disposal to make your blog even better!
Blog Authors
If several members of your team are contributing to your blog, then you might want to consider adding authors to each piece of content. By composing a bio for each author, containing information on their current position, industry background and an interesting fact or tidbit, you can create a connection with your readers.
Your business may assign specific blogs on a certain topic, product or service to a single author which a web user may find valuable. They can filter through all of your blogs using authors, finding content that will be more relevant for them.
Read Time
Sometimes a blog can be short and sweet and other times it can be bordering onto a dissertation word count. The best way to set expectations for your readers — saving their fingers from endless scrolling — is to display read times.
A quick 2-minute blog — great, they can read when they get a short break between tasks! However, something informative and extensive that may take 10+ minutes to read may deserve scheduling some time out of the calendar to properly get stuck in.
Email Subscription
Post blogs on a regular basis? You should send out a weekly blog newsletter to drive traffic to your website. Increase subscriptions by including an email subscription CTA on your blog — choose a prominent place so that users can easily sign up.
If you opt for a pop-up, check your site on both desktop and mobile to ensure the user experience isn’t affected. An annoying pop-up can easily lead to higher bounce rates.
Social Sharing
Have you ever read or seen something incredible online that you just needed to share with someone? Chances are you’ve used dark social methods to send your friends, family or colleagues to a site. This has no effect on you, but for businesses, it can be frustrating — where has this traffic come from?!
To ensure the attribution data in your analytics is as clear as possible, you can use social sharing buttons displayed alongside your blog to encourage readers to publicly share your content. Not only will this amplify your post across social media, but you can also compare channels like Facebook or LinkedIn — this will be helpful for seeing which platforms are most valuable for your business’ social strategy.
Recent Posts
So someone has finished reading your blog, what next? Direct your web visitors to other recent or popular posts that they might also find interesting. A good blog shouldn’t just end with your conclusion, you should be actively thinking of ways to drive this visitor into a lead which can eventually be converted into a customer.
Now that you’ve had a look at what elements you should include on your HubSpot blog, you may have noticed that we practice what we preach! We use each of the above elements on our blog to signpost you to other pieces of content which may be of value to you. | https://medium.com/thrive-marketing/5-elements-to-include-on-your-hubspot-blog-68add817b749 | ['Thrive Marketing'] | 2020-07-01 07:01:01.398000+00:00 | ['Blogging Tips', 'Hubspot Marketing Blog', 'Inbound Marketing', 'Blogging', 'Hubspot'] |
Climate Action With Style: B Corps in the Ethical Fashion Industry Minimize Environmental Impact… | How has your B Corp aligned/shifted its business practices to advance progress on the SDGs? How has this affected your supply chain companies? Any adjustments to employee practices?
Michelle Sheldon: Back in 2015, a year before we became B Corp Certified, Eco Promotional Products Inc. began creating products for the UN promoting the SDGs. Since our inception in 2008, we have based our supply chain research around the UN Global Compact. With a turbulent political climate in recent years and B Corp bringing more awareness to the SDGs, we have since updated our vetting process.
We strive to investigate if and how our supply chain is helping to move the needle toward a more just future based around the SDGs and are creating our own score card for our supply chain. We’re partnering with suppliers with more transparency and those actively involved in vetting their process through the Fair Labor Association and other third-party verifiers.
Shamini Dhana: Dhana is empowering customers to value people (social) and planet (environment) every step of the way through the creation of their own fashion. The SDGs align with Dhana’s vision and ability to report and manage how we use business to “unite humanity through fashion.” In 2019 Dhana published the WearOurValues report, which looked at brand-customer value alignment in the fashion industry and found that 97% of the over 5,000 respondents demanded greater transparency in the fashion industry. That has prompted us to advocate for consumers and empower them to drive impact through the medium of fashion. Additionally, we showcase our supply chain partners and teams to honor their work and provide a connection to the hearts and hands behind each piece of garment created. Our employees are proud and, more than ever, empowered to be part of the solutions to climate change and social justice.
The Good Tee’s transparent practices include sharing information about its supply chain partners.
Adila Cokar: The Good Tee is only a year old, so we are still new and learning from partners to make improvements. As a B Corp we are grateful to learn so much about inclusivity, which also includes age. As a fashion brand we recognize that our elders do not have fair representation in the media and felt we could do better. We've started a #goodgen series on our blog where we interview inspirational seniors to help mitigate ageism. We are also adding more seniors to model our products. We are working toward increasing our size ranges as well, and as the company grows we can allocate budget to add larger sizes.
Why is it important for companies in the apparel industry to take the lead on sustainability and environmental protections?
Sheldon: This question highlights a lot of scary topics — unfair labor practices, worker safety, fair pay, discrimination, human trafficking, and so much more. Too many garment workers are forced to work unreasonable hours at ridiculously low wages, in workplaces that use child labor and are considered sweatshops. Cheap apparel comes at a huge human rights cost. Consume smarter; pay a bit more if you know it does not sacrifice the well-being of others, to protect our people, planet, and profit.
Shamini Dhana, Founder of Dhana Inc., a California-based fashion technology company that currently focuses on eight SDGs.
Dhana: Every second the equivalent of a truck filled with clothing ends up in the garbage — over 70% of clothing in the almost $3 trillion fashion industry ends up in the landfill. This linear business model needs to move toward a circular and regenerative business model. Embracing circularity means valuing people and planet every step of the way. When customers co-create through circular fashion, we amplify how we as people have a choice to make an impact on social and environmental issues that we value.
People in Generation Z are demanding solutions to the current state of waste and pollution created by fashion, and they are driving the resale clothing industry (a $24 billion market today), so fashion companies and brands need to heed the call of this growing customer base if they plan to be in business in the near future. This means demonstrating not only support for sustainability but also transparency and accountability for the treatment of employees and workers in the supply chains they rely on. The brand's identity is an extension of the social and environmental footprint it creates on a day-to-day basis. This is the real deal.
Cokar: The apparel industry is the second-largest polluter in the world next to oil and is extremely complex for industry newcomers to navigate. In addition, the 2013 manufacturing building collapse in Bangladesh put a huge stigma on offshore manufacturing. However, overseas manufacturing provides access to a wider range of resources to get the job done. These families and individuals in this industry have been doing this for generations and have a very special skill. Their jobs are not that easily replaced and people are counting on us to bring them work. Also, contrary to what people in North America hear, clothing can be made both sustainably and ethically. | https://bthechange.com/climate-action-with-style-b-corps-in-the-ethical-fashion-industry-minimize-environmental-impact-d1a913743c16 | ['B The Change'] | 2021-04-26 15:19:16.751000+00:00 | ['Fashion', 'Ethical Fashion', 'Sdgs', 'Climate Change', 'B Corp'] |
Battling Resentment in 3 Steps | Our stories are different yet we all walk through times of inevitable hurt and resentment. You feel alone, crushed and even consumed. You consume yourself in anger and negativity until those feelings become the way you carry yourself. They guide you through your day, your coffee tastes a bit more bitter and the sky is a bit less blue. Feelings of resentment lead you to deeper waters, you are no longer scared of the dark because feeling alone becomes the norm. The unfairness you felt when someone hurt you, captivates you into a world of grief and hate. Your days turn into question marks and you live your days searching for the answers. Resentment lives on both of your shoulders, you feel heavy and tired almost everyday. At night, as you dream of dancing under red skies, betrayal takes your hand and all you feel is hurt. Resentment is powerful and when she arrives, she brings friends. Her friends bring, “Anger” and “Hate” to the party and with all these guests, you feel more alone than ever. Until you’re ready, you allow yourself to drown in these emotions because you’re allowed to feel hurt but the amount of time you let it control you, is up to you.
Apologize
I know what you're thinking: "I am the one who deserves an apology! Who am I apologizing to?" Apologize to yourself. Apologize for the sleepless nights and confusing thoughts that clouded your mind. Apologize to yourself because, as a result of someone treating you poorly, you walk with a little less conviction and confidence with each step. Apologize for the hate you carry like a 55LB backpack on your back. While staring back at your reflection, apologize to yourself for all the hurt someone else caused you. Apologize for the unknown, the closure you may never receive, and the reasoning you may never know. Do this as many times as it takes for you to start remembering to believe in yourself.
2. Permission
Allow yourself to ditch the backpack. Today is the day you realize you are carrying 55LBS too many. Grant yourself the freedom of accepting an apology that came from you. Acknowledge your opportunity for growth and, if you have to, convince yourself that you are worthy of that growth. Forgiveness within yourself is the most expensive form. Take the time to realize that with forgiveness you are granting yourself permission to learn. If forgiveness feels unobtainable, try again. It is difficult and you are not going to heal on anyone's schedule but your own. Sometimes an explanation of why something happened may not excuse the hurt and bitterness you feel. There isn't a way to reverse time, we can only continue to analyze and gain understanding from it. Oftentimes, your pain is silent to others. Only you can hear it and it's deafening. You may feel it is difficult to concentrate and no one can hear you. There are no magic words that will take away any hurt someone has caused you, but as long as you never turn a deaf ear to yourself, you are progressing. You may feel like you are screaming out into an empty void but just know, if you close your eyes and listen to your own screams, hear them echo in the distance, it means you're alive. You are allowed to feel hurt and you are allowed to scream, but within these screams, you must listen to yourself. Dissect every word you are screaming and understand why you are screaming. This is how you form your apologies to yourself.
REPEAT IF NECESSARY.
3. Give thanks
This is your time to thank yourself for growing through the rain and blossoming. Thank yourself for your own patience and devotion. You are alive and you are stronger than at any other point in your life. You are a warrior and you have learned healing may not come easy, but you set yourself to accomplish it. Say "thank you" for each apology you gave yourself and for the weeks, months, or years it took you to forgive. Take a deep breath and close your eyes. You are a sponge, you consume every moment of each day. Whether it be good or bad, you are walking into each day ready to consume it. Be mindful of the energy you are using and the feelings you are feeling. Thank yourself for the emotions, the opportunity for growth, the sadness, the anger, the hatred, but most importantly, the love. Thank yourself for the moment you decided to love and heal yourself. There is no validation for this, no round of applause or standing ovation, but within, you know you saved yourself from the hate and hurt that clouded your mind. You are the one who turned on the light switch and you decide how to move forward in your life. No more dark clouds and bitter coffee. You have picked up the wounded parts of yourself and created art. You are art. Continue to walk with conviction because you deserve it.
This is a conversation you may not be ready to have with yourself. There is no race; start slow and go at your own pace. Remember not to forget who is in charge. You grant yourself the permission to move forward. You may have to dance alone, but the guests you once danced with are no longer welcome. Your heart has a door, and you must remember that those who wish to enter must knock first. This will not be easy; you may have to repeat "Step 1" a few times before you can move forward, and that is okay. Nothing is set in stone; repeat any step until you feel comfortable enough to move forward.
CARES Relief and the $2,000 Stimulus | Yesterday, President Trump made it clear that Congress limp attempt at personal stimulus for individuals ($600) is simply not enough. He wants to eliminate all of the money going to foreign countries for things like sex-change surgeries in Pakistan and put it in the pocket of citizens here. What a concept! Finally, a bit of common sense is being heard from concerning a new stimulus package. President Trump is demanding that Congress modify the bill that was just passed to provide for $2,000 stimulus per qualifying person.
The funny thing is that President Trump has been telling both the Democratic and Republican politicians on Capitol Hill for months that the amounts they’ve been arguing over were not sufficient. No one chose to listen because it didn’t fit the immediate establishment goal of removing him from office. At least some are listening now. Thank goodness. Hopefully, the light that President Trump has shined down upon this mess will encourage enough of the Senators and Representatives to provide more help to the people of this country than the $600.
Why stop at $2,000 per qualifying person? Why not set a minimum floor of $2,000 for all persons with a sliding scale that increases the amounts similar to the income tax rates only in reverse? Those at the top brackets for purposes of income tax would receive the $2,000. Everyone else would receive more based upon the bracket they fall in with a maximum of $5,000 per person. In parts of the country which have not seen significant lockdowns that may seem high. But, in parts of the country which have seen significant lockdowns even that is probably not sufficient. After all, if you look at it from a six month standpoint, then even at the highest amount it is less than $1,000 per month per person. That’s not a lot when people are forced into lockdowns, businesses are lost, and jobs are destroyed. Unemployment and subsistence benefits just don’t cover it.
I encourage the members of Congress and our President to consider this option instead and truly serve the people of this country. Despite what some might say, it is both possible and prudent. But, those arguments are for another day! | https://medium.com/@dbclaw/cares-relief-and-the-2-000-stimulus-20a20e72dc6e | ['David B. Christian'] | 2020-12-23 14:46:22.439000+00:00 | ['Trump', 'Congress', 'Stimulus', '2000', 'Cares Act'] |
Listen. Watch. Defend. | Photo by Gayatri Malhotra on Unsplash
A white friend of mine asked me what he should do in response to all of the rioting. I withheld publishing this blog at the time of this event. It occurred around May of 2020, and I didn't want to draw any attention toward myself at that time.
He wanted to know how could he contribute to bettering things in response to the brutal murder of George Floyd and the recurring civil injustices along with the widespread racism towards black people; which seems to be new to some people. Hmm…
I didn't know how to respond. After all of the work and protests to have our voices heard by members of society, how does one effectively respond to an individual who wants to know how to change their actions for the betterment of black people?
I knew this ugly, cancerous monster of racism and injustice wasn't all on his shoulders to change, but I knew that this is where the solution starts: with the individual. Just like fighting cancer is a battle that begins on the inside of the body, racism can't be beaten at the skin level, only at a deeper, cellular level.
I told him the first thing that I could come up with at the time. That the best he could do was to listen and take witness of the gravity and power of racist acts and injustices that happen against people of color. Especially, what people in authority do to us.
The greatest hope he could have to make a positive civil affect was to understand that black people are deeply affected, traumatized and brutalized by the harmful ways that someone with his skin color has said, behaved and done against people of my skin color for generations.
To know that we are human beings just like him and that our lives matter too. The melanin in our skin color doesn’t mean we are any less important than he, or anyone else is.
It wasn’t until the following day that my thoughts settled on three powerful words that summed up everything I was trying to say. Three words I believe can help people like him who want to combat racist thoughts, acts, and poisonous beliefs toward people of color.
Listen. Watch. Defend.
I knew this was a direct link resulting from my relationship with God. First came the words. Then came the definitions.
Listen = to how you say things.
Watch = how you behave.
Defend = what you believe.
It all makes sense to me. Allow me to explain why.
Listen.
Everyone contributes to the societies that we live in. You and I make these societies what they are. We do this by what we add to the social plot each and everyday. Comments on social media, words said to our neighbors or the people walking down the sidewalk. Communication being had with our friends, family, co-workers and numerous bystanders.
It is our thoughts and emotions that fuel our responses and reactions to external stimuli, including other people. Our thoughts are governed by our mental filtering process before we ever speak a word. That's what makes what we say "a Choice," just like deciding when "I want" to release this blog.
The effects of external circumstances cause us to have emotions. Emotions fuel and color our thoughts. Our thoughts propel us to commit an action. That action can be something we do with our hands or with our mouths.
My father told me recently to get into the habit of waiting 30 seconds before making any emotional decision. This way, you truly allow yourself to think your decisions through, instead of being emotionally immature and impulsive.
Our words are powerful forms of communication that we must be wary of. They can be used to inspire, motivate, seduce, tear down, insult, convince, empower, attack, or build up, among many other things. They are a form of power that can affect us to our core, and we must be mindful of how we use our words if we wish to change the script on how we treat other people.
Your words have energy and cast spells. That’s why it’s called “Spelling.” So be careful of what you say to others and to yourself.
If you wish to make things better for others, you must pay attention to how you say things. We can all make the extra time between thinking our thoughts and voicing them to consider how what we say could affect others. It's more tedious to take those 10–30 seconds to proofread our thoughts, but speech is the one action we can never take back.
Once we speak it out of our mouths, that’s it.
It’s your choice to put in the extra effort and work of thinking your speech through, regarding who you’re talking to and what you really want to say, so choose wisely.
Watch.
Our thoughts fuel our actions, so we must watch and be aware of our thoughts prior to committing actions. That is our responsibility as peace-seeking human beings.
It’s so easy to believe our own junk without double checking it. It’s more difficult and time consuming to put in the work of verifying our thoughts, before speaking, but it’s necessary and it’s worth it.
Don't assume that you know what you saw, heard, believed, or thought about anyone before questioning what you experienced a little more deeply.
Is what you thought about someone true, or is it some form of gossip, discrimination or stereotype?
There’s a simple way to mitigate your thoughts from betraying your intentions. Just ask the person directly in an open and friendly manner, but be mindful of the questions you ask.
Is your question coming from a friendly disposition or a pre-loaded emotional dagger dipped in poisonous intentions? Ensure that your questions aren't attached to a harmful bag of emotional shrapnel ready to explode like a dirty bomb in that person's mind.
Asking someone a direct question can be welcomed, but only when it comes from kindness, compassion and honest consideration. If it doesn't feel right to ask something, follow the golden rule:
If you don't have anything nice to say, don't say anything at all.
Defend.
Our thoughts are fueled and secured by our beliefs. It is up to us to decide what we want to believe and why.
Many beliefs are instilled in us through the influences made in our childhood, but we can change what we were taught if we truly believe that those beliefs are wrong. That’s a personal decision only you can discover and act on.
Consider this: all things in life are made of inexhaustible energy, so you cannot erase something that you once believed without replacing it with something of equal weight and value. If you replace fear and hate with love, you will have to fight continuously to alter those old values, but it is possible.
If you're battling dark beliefs that you've been instilled with, take it slow. Any change made in an instant is bound to waver. You're human, which means change comes slowly. Take it one day at a time.
Continue to practice your new-found beliefs as often as you can by showing love to all of those around you. This may not be an easy task, as our previous beliefs have been established deep in the roots of our psyche, which is why you will have to make the decision to defend this new belief with vigor.
Defend it from external influences, common mental triggers, and all other internal threats of reversion. Stay focused on the end result.
A more just society for all.
Listen. Watch. Defend.
This is just my two-cents on how we can fight the cancer of racism in the world. The truth is, the only answer to hate from within and from without is Love.
Let’s learn to live with love in our hearts for each other and for ourselves. That’s the only way to better the world we live in.
For within us, we are all beautiful, valuable individuals worthy of love.
Share the love!!
Here’s to a new world understanding! | https://medium.com/@jordanrobinson88/listen-watch-defend-a13713b41d2d | ['What The Http'] | 2021-07-19 16:43:44.079000+00:00 | ['Listening', 'BlackLivesMatter', 'Civil Rights', 'George Floyd', 'Equality'] |
[賽德計畫][firebase] 投票狗 (Voting Dog): a small project that lets multiple people vote and watch the vote count in real time | Abstract function return type in typescript
The problem is where you put the type parameter. If the type parameter is on the method, then that method must be… | https://medium.com/@collin6308/%E8%B3%BD%E5%BE%B7%E8%A8%88%E7%95%AB-firebase-%E5%A4%A7%E5%AE%89%E7%8B%97-47f5d3411c3a | [] | 2020-12-30 07:20:22.255000+00:00 | ['Firebase', 'Angular'] |
New Generation of Bug Fixing | Bug fixing is expensive, which motivates developers and researchers to study how to resolve bugs effectively. Therefore, it has become a hot research topic in software engineering. During the bug fixing process, developers leverage various software artefacts (e.g., bug reports, commits, log files, and source files) and explore multisource heterogeneous information (Q&A websites, web resources, and software communities) to reproduce bugs, localize bugs, identify candidate fixing solutions, apply fixes and validate fixes. This rich data provides important information about bug fixing, which may guide developers in resolving bugs. For instance, a bug report not only shows the details of the reported bug but also suggests a potential method of fixing it. Therefore, how to analyze and utilize such data is a crucial step in bug fixing.
The special issue will focus on the new generation of bug fixing. Generally, the bug fixing process includes bug understanding (i.e., bug reproduction, severity/priority verification, bug summarization, bug classification, and bug knowledge extraction), bug localization, bug fixing, and bug validation. By using data processing, information retrieval, machine learning, natural language processing, AI technologies, visualization technologies, human-computer interaction technologies and code analysis technologies, a series of new automated algorithms can be proposed to enhance the performance of bug fixing.
We invite submissions of high-quality papers describing original and significant work in all areas of the new generation of bug fixing, including (but not limited to): 1) providing a summary of research that advances intelligent bug fixing using multiple data analysis and processing techniques, and 2) serving as a comprehensive collection of some of the current state-of-the-art technologies within this context. We especially encourage the authors of the best papers accepted by the 2nd IEEE International Workshop on Intelligent Bug Fixing 2020 to submit | https://medium.com/@shohandinar/new-generation-of-bug-fixing-b98ce58b7e4 | ['Shohan Dinar'] | 2020-12-22 16:23:23.253000+00:00 | ['Wordpress Web Development', 'Wordpress Plugins', 'Bug Fixing', 'Wordpress Themes', 'WordPress']
Cardio is making you FAT! | What if I told you that cardio is making you fat?
In our quest for fitness, somewhere along the line the idea was introduced that you can burn fat by running. While cardio is an effective supplement to an overall balanced workout regimen, too much of it can actually work against you.
Before I go much further, let me be clear: I AM NOT AGAINST CARDIO!
In fact, I believe cardiovascular training is a necessary component to fitness. But there is a method to using cardio for fat loss.
Let’s start by making sure we’re talking about the same thing here. “Cardio” is technically anything that raises your heart rate. So in reality, anything you do that exerts effort can be considered cardio, including weight training. More often than not, “cardio” is synonymous with steady pace (10 min or more) activities such as running, cycling, elliptical machines, etc.
For our intents and purposes, we’re talking about steady pace cardio here.
Now don't get me wrong, steady pace cardio has its place. Steady pace cardio is great for endurance training. Think about it like this: the human body is designed to adapt. As you begin running you are placing stress on the body, causing your heart rate to rise as blood pumps to your muscles and organs to handle it. As you continue, your body is desperately looking for the most efficient way to handle this stress, so the longer you go, the less energy is needed to maintain it. Over time, your body will get used to this stress, allowing you to perform the same activity with less exerted effort. Your heart is actually gaining strength.
This is an amazing benefit for your overall well being!!
Not so much for fat loss.
See, that spike in your heart rate requires energy. From a top down view, calories = energy. So as long as you eat, you’ll have sufficient energy for long, steady pace cardio, right?
Sort of.
The body typically uses carbohydrates as it’s main source of energy. When carbs are present and activity is initiated, the body has the fuel it needs to produce energy and sustain the effort. When carbs are depleted or you are at a carb deficit, the body defers to fat stores as the next best source of energy. While this sounds good, fat is a far less efficient source of energy that may help maintain the effort for a short period of time but will fizzle out rather quickly.
So what happens when the effort needs more energy than carbs and fat can provide?
As I mentioned before, the body is designed to adapt. When intense effort is consistently placed on the body over long periods of time, it bypasses carbs AND fat as the go-to source of energy and starts to burn lean muscle for energy.
As you can imagine, this becomes HIGHLY problematic if fat loss is your goal.
Lean muscle is actually the greatest ally in burning fat.
Lean muscle is dense, and its development requires more and more blood to maintain. As mentioned before, an increase in your heart rate needs fuel, and your body will tap into its energy stores (carbs and body fat) to match the need. So in essence, lean muscle burns fat (don't worry, ladies: lean muscle grows inward before it expands. You don't have to worry about getting "big").
Knowing this, it doesn’t make sense to burn lean muscle to produce the energy needed for an activity that is supposed to burn fat.
This is how you become “skinny fat”.
THE REMEDY
It's great to have all this information, but how do we apply it practically? Here are 4 tweaks you can make to your current routine that will produce instant results.
Less Steady Pace Cardio
Unless you are training for an event that specifically requires you to run for endurance, it's best to limit your steady pace cardio to 2–3 times per week. This includes idle time on the elliptical and that spin class you love so much. Remember, just because you are sweating doesn't mean you are burning fat.
Lift More, Lift Heavier
If your goal is to lose more body fat (not scale weight. We don't go there), make sure your main focus is on consistent and progressive resistance training. Believe it or not, you sweat just as much on the squat rack as you do on the treadmill. Except on the squat rack you are developing lean muscle in HUGE muscle groups (quads, hamstrings, glutes) that will require more blood to maintain way after your workout is complete. How's THAT for cardio!
Do Complementary Cardio
In addition to lifting more to develop lean muscle, you can actually use your cardio to aid in maintenance and recovery. Try doing your cardio AFTER an intense lifting session. While those muscles are at their thirstiest, try 10–15 min of moderately intense cardio to keep your gains and help with cool down.
Modify Your Intensity
Cardio is not the bad guy; it's just how you use it. Instead of running at the same pace for 10 min, try intervals. Sprint for 1 minute, then walk for 1 minute, repeat. That fluctuation in intensity will cause your heart rate to rise and fall, which will burn more in 10 minutes than you would with an hour of steady pace cardio. Explosive movements like jumping or burpees are a great way to spike your heart rate in a short period of time.
At the end of the day, it’s about finding your balance. Too much of anything isn’t great for you. Cardio is no exception.
If you found this information valuable, please comment and share.
For your 7-Day Meal Plan and Workout Plan, visit 7DayStimulus.com
#dubblup | https://medium.com/@mrdubblup/cardio-is-making-you-fat-83382504cb45 | [] | 2020-04-23 14:49:02.420000+00:00 | ['Fitness Tips', 'Fat Loss', 'Fitness', 'Running', 'Health'] |
Characterizing Pandemic Information on the Social Media | After careful analysis, we have observed that the keywords of the topics obtained in our LDA analysis were more closely related to our current research; hence, we decided to go forward with the LDA model for our future analysis. Since we intend to understand the nature of scientific communication over time, we have plotted the frequency of tweets for each topic according to the corresponding month.
Fig. 2: Time series analysis of the expert topics using LDA
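For readers who want to reproduce this step, a minimal sketch of the LDA pipeline is shown below, using the gensim library. This is not the project's actual code: the file name, the column names ("clean_text", "created_at") and the preprocessing are assumptions; only the choice of six topics follows the report.

```python
import pandas as pd
from gensim import corpora
from gensim.models import LdaModel

expert_df = pd.read_csv("expert_tweets.csv")             # assumed file name
docs = [t.split() for t in expert_df["clean_text"]]      # tweets assumed already cleaned/tokenized

dictionary = corpora.Dictionary(docs)                    # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]       # bag-of-words representation

# Train a 6-topic LDA model, matching the six topics discussed in the report
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=6, passes=10, random_state=42)
print(lda.print_topics(num_words=3))                     # top keywords per topic

# Assign each tweet its dominant topic, then count tweets per topic per month (cf. Fig. 2)
def dominant_topic(bow):
    return max(lda.get_document_topics(bow), key=lambda x: x[1])[0]

expert_df["topic"] = [dominant_topic(bow) for bow in corpus]
expert_df["month"] = pd.to_datetime(expert_df["created_at"]).dt.to_period("M")
print(expert_df.groupby(["month", "topic"]).size().unstack(fill_value=0))
```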
Now, in order to understand the non-expert tweets, we have classified them using the six previously identified topics. This step was also suggested by our peers during the intermediate report review.
Fig. 3: Topic wise frequency distribution of the tweets by the non-expert users
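The classification of the non-expert tweets can be sketched in the same way. The snippet below continues from the previous sketch (it reuses the trained lda model, the dictionary and the dominant_topic helper); the non-expert file and column names are again assumptions.

```python
import pandas as pd

# Score the non-expert tweets against the six topics learned from the expert tweets
nonexpert_df = pd.read_csv("nonexpert_tweets.csv")        # assumed file name
bows = [dictionary.doc2bow(t.split()) for t in nonexpert_df["clean_text"]]
nonexpert_df["topic"] = [dominant_topic(b) for b in bows]

# Topic-wise frequency distribution of the non-expert tweets (cf. Fig. 3)
print(nonexpert_df["topic"].value_counts().sort_index())
```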
Subjectivity Analysis:
As discussed earlier, subjectivity can be defined as the quality of being influenced by personal feelings or opinions. For characterizing the degree of subjectivity of the tweets in both the datasets, we used a Python package named TextBlob.
The TextBlob package is quite a convenient package for performing a lot of Natural Language Processing (NLP) tasks like Sentiment Analysis, Subjectivity Analysis, Noun Phrase Extraction, Part-of-Speech (PoS) Tagging, Spelling Correction, etc. It not only handles negation, but it also takes into account modifier words like "very" and "almost" in order to predict the polarity and subjectivity of a text. The subjectivity range lies from 0.0 to 1.0, where 0.0 signifies that the text is objective while 1.0 signifies that the text is highly subjective.
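As an illustration, this is roughly how the subjectivity scoring and the per-user averaging described in this section could be done with TextBlob. It is only a sketch, not the project's code; the file and column names ("clean_text", "username") are assumptions.

```python
import pandas as pd
from textblob import TextBlob

df = pd.read_csv("expert_tweets.csv")                     # assumed file name
df["subjectivity"] = df["clean_text"].apply(
    lambda t: TextBlob(t).sentiment.subjectivity)         # 0.0 = objective, 1.0 = highly subjective

# Share of highly subjective tweets (subjectivity score equal to 1)
print(f"{(df['subjectivity'] == 1.0).mean() * 100:.2f}% of tweets are highly subjective")

# Average subjectivity per user, as used for the per-user comparison below
per_user = df.groupby("username")["subjectivity"].mean()
print(per_user.sort_values(ascending=False).head())
```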
We have performed subjectivity analysis on both the expert and non-expert users’ tweets
Subjectivity Analysis for the expert users: This is particularly important as it helps us analyze our ground truth effectively. We aggregated the tweets for each user based on the subjectivity score returned by TextBlob and generated an average subjectivity for every user. We have observed that the subjectiveness does not exceed much beyond 40%. In other words, the expert users are objective at least 60% of the time.
Fig. 4: Overall subjectivity of each of the expert users
We have also counted the number of tweets vis-a-vis the subjectivity scores. It has been observed that only 5.68% of the total expert users’ tweets were found to be highly subjective(i.e., subjectivity score equal to 1). This implies that most of the expert users’ tweets were objective and thus expert users’ tweets contain more fact than fiction.
Fig. 5: Histogram comparing the count of tweets across different subjectivity scores
We have also analyzed that out of those highly subjective tweets,
21% belonged to Topic 5 (work, read, great),
20% belonged to Topic 1 (year, thread, health), while
Topic 6 (join, news, tweet) had the least number of subjective tweets, i.e., 11% of the highly subjective tweets.
But no clear pattern emerged from the combination of the results from the topic modeling and subjectivity analysis.
Subjectivity Analysis for the non-expert users: We also calculated the subjectivity scores of the non-expert users' tweets to understand how non-experts' tweets are different from those of experts. We have observed that for the non-expert users' tweets, only 4% of the total tweets were highly subjective (subjectivity score equal to 1). Thus, the tweets by non-expert users in our dataset are also highly objective.
Fig. 6: Histogram comparing the count of tweets across different subjectivity scores
Out of those 4% tweets,
Topic 4 (mask, time, school) had the highest number of subjective tweets at 28%, followed by
Topic 1 (year, thread, health) at 22%, while
Topic 2 (discuss, trump, live) had the least subjective tweets at 8%.
Again, no clear pattern emerged from the combination of the results from the topic modeling and subjectivity analysis of the non-expert users’ tweets.
Sentiment Analysis:
Sentiment analysis is used to understand and classify emotions in data. It is extremely useful in social media monitoring as it allows us to gain an overview of the wider public opinion behind certain topics. In order to understand the sentiments related to the predicted topics and to advance scientific communication, we performed sentiment analysis on our datasets using two well-known methods:
BERT- Bidirectional Encoder Representations from Transformers is a paper published by researchers at Google. BERT caused excitement in the Machine Learning community by introducing state-of-the-art results in a wide variety of NLP tasks.
Flair- It is a simple natural language processing (NLP) library developed and open-sourced by Zalando Research. Flair's framework builds directly on PyTorch, one of the best deep learning frameworks out there and comprises popular and state-of-the-art word embeddings.
We have implemented these methods on the expert users' tweets in order to compare them and find out which model best suits our needs. We found out that BERT predicted more neutral values than positive or negative ones, while Flair had very few neutral values. We randomly selected a few tweets and made sanity checks in order to better understand the model predictions and choose the best model for our further analysis.
Fig. 7: Comparison of the sentiment classification by the Flair model over time
Fig. 8: Comparison of the sentiment classification by the BERT model over time
We have also observed that Flair classified our tweets more accurately. For example, the tweet “another child died” was classified as Neutral by BERT, while Flair classified it more accurately as Negative. Another example: the tweet “coronavirus could put hospital weeks even kill…” was classified as Neutral by BERT, however, again, Flair classified it more accurately as Negative. Hence, we selected Flair as our classifier of choice.
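A sketch of the Flair side of this comparison is shown below. The "en-sentiment" model is Flair's off-the-shelf sentiment classifier, which returns only POSITIVE or NEGATIVE together with a confidence score; the 0.6 threshold used here to mark low-confidence predictions as neutral is an assumption, not the project's actual setting.

```python
from flair.models import TextClassifier
from flair.data import Sentence

flair_model = TextClassifier.load("en-sentiment")    # pre-trained English sentiment model

def flair_sentiment(text, neutral_threshold=0.6):
    sentence = Sentence(text)
    flair_model.predict(sentence)
    label = sentence.labels[0]                       # POSITIVE or NEGATIVE, with a confidence score
    if label.score < neutral_threshold:
        return "NEUTRAL"                             # treat low-confidence predictions as neutral
    return label.value

# Sanity checks on the example tweets mentioned above
print(flair_sentiment("another child died"))
print(flair_sentiment("coronavirus could put hospital weeks even kill"))
```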
For the expert users, we found out that there were more positive tweets than negative; however, the percentages of positive and negative tweets are very close, at 51% and 48% respectively. But in the case of the non-expert users, negative tweets dominated with 68% of the total tweets, while positive tweets were only 32%.
Fig. 9: Sentiment distribution of expert tweets
Fig. 10: Sentiment distribution of non-expert tweets
Limitations and Ethical Issues:
Bias in expert selection: We have collected a list of 50 experts in this field. But we understand that there may be inherent bias in this selection since it is difficult to find a set of users who are absolutely neutral to the surrounding politics and overall negativity around the issue of COVID-19.
Issues with topic modeling: We have extracted six topics related to COVID-19, initially from the analysis of the expert users' tweets. However, all the collected tweets may not fit perfectly under these topics, which could obstruct a wholesome understanding of the scenario.
Problems with Elasticsearch: At first, we planned to upload all the tweets that we collected as a part of the TweetsCOV19 dataset to Elasticsearch so that we can utilize the Kibana dashboard linked to it and analyze the tweets. But due to the low computing power available as a part of the AWS Free Tier plan, our analysis was quite slow and we did not pursue it further.
Challenges in collecting response tweets: Along with the topic modeling, we aimed to collect the tweets which were responses to the tweets of the experts. This would give us more context about the response to the experts and their influence. However, we faced a challenge that we were able to only collect response tweets from the last 7 days, which did not help us in our initial objective of checking if the tweets are backed up by evidence.
Conclusion:
It has been widely acknowledged that the ongoing pandemic is accompanied by various kinds of misinformation about the nature, cure, long-term impact, and social implications of COVID-19.
While social media makes it easy for people to access information, separating facts from misinformation is an arduous task for a naive internet user. Our attempt in this project is a step forward towards solving this problem.
With the popularity of social media platforms, getting to know various facts and information has become significantly easier for the general population. Nevertheless, this sort of convenience also enables the spread of fiction at the same time.
In summary, we have implemented the following methods on both expert and non-expert users’ tweets:
Topic Modeling
Subjectivity Analysis
Sentiment Analysis
After careful analysis of the results, we have observed that most of the tweets in our dataset, for both the expert and non-expert users, are quite objective. In order to understand the sentiment related to these tweets, we have observed that the expert users have an almost equal combination of positive and negative tweets while the non-expert users have more negative tweets. We have also identified the popular topics from these tweets but no correlation was found between the subjective tweets and the related keywords of the topics.
In conclusion, we would like to point out that though the results say it loud and clear that the tweets related to COVID-19 are fairly objective, these results may change if the size of the datasets change significantly.
Thank you for reading! | https://medium.com/social-media-theories-ethics-and-analytics/characterizing-pandemic-information-on-the-social-media-504014209576 | ['Soumyadeep Basu'] | 2020-12-23 23:37:11.256000+00:00 | ['Final Report', 'Project', 'Covid 19', 'Social Media', 'Topic Modeling'] |
KGR Public Sale Notice: The first and only public sale finished in 3 minutes | KGR Public Sale is Officially Completed!
All funds were received within 3 minutes and transferred to the team's operational wallet to drive the project to the next stage.
Announcements on exchange listings will follow tomorrow.
We thank our community for their committed participation, and look forward to progressing KGR onwards and upwards!
About HimegamiProtocol
The Himegami Protocol is a DeFi (Decentralized Finance) project that aims to break away from centralized finance and fix problems with existing stable coins.
The Kagura Token (KGR) is the first legitimate stable token from Japan and is pegged to the YEN. The YEN is one of the most stable currencies in the world.
Omoikane Token is the governance token for investor participation in the financial management of KGR.
The Himegami Protocol will be the fairest stable token in the world by operating with smart contracts.
The addressable market for staked assets today is less than 5,000,000 USD. KGR has a vision of taking over a 10% share of all stable tokens, which means the value of KGR will be a billion USD or more in "Total Value Unlocked (TVU)" for users by the end of 2021. A bright future awaits KGR.
If you are interested in the Himegami Protocol, you can get in touch with project team members through the following official channels:
For partnerships, media, or other collaboration opportunities, please email
[email protected]. | https://medium.com/@himegamiprotocol/kgr-public-sale-notice-the-first-nad-only-public-sale-are-finished-in-3minutes-d118b79cb231 | ['Himegami Protocol'] | 2020-12-26 16:27:09.653000+00:00 | ['Cryptocurrency', 'Btc', 'Defi', 'Kgr', 'Crypto'] |
Why This Trans Woman Is No Longer Fully Comfortable With the “Trans Women are Women, Period” Trope | Why This Trans Woman Is No Longer Fully Comfortable With the “Trans Women are Women, Period” Trope
Gender identity is in your brain at birth, and it’s not hard-binary for everyone
Photo from the March 2019 St. Petersburg, FL. Trans Day of Visibility Festival. By www.stpetepride.com
A young Facebook friend of mine, who lives in Pennsylvania, posted a meme this morning, poking fun at transphobes and being supportive of transgender people like me. I slightly modified her meme, filed the serial numbers off of it, and added a line; this is my version of it (BTW, she is cis, not trans, but she is 100% totally committed to LGBTQ acceptance):
Image by Author
This was my response to her:
Gender identity is in your brain at birth. It’s just there. I take estrogen hormone therapy, and I got an orchiectomy (surgical removal of my testicles), to find some kind of peace with my gender issues, and to feel better about myself. I didn’t do it as a “lifestyle change,” or to try to “convert myself into a woman” — I already was one. Four years on estrogen now, and every day, I become more sure that Transition was exactly what I needed to do. When they wake up in the morning, go through their daily routines, and finally prepare to go to bed at night, cisgender people know exactly who they are. They never need to question their own identity.
The only times in my life when I knew who I was, and felt at peace and happy, were the 29 years I spent with my wife Lynn, and these last 4 years post-Transition. This photo — that’s me on the left — is one of only a few photos of myself in my former gender that I don’t cringe at, and it’s because I was with Lynn that day. In fact, it was Lynn that shot the photo.
The author, left, and her wife Lynn. This is a composite of two photos shot on a daysail we did in June 1987, near Brannan Island State Park on the San Joaquin River Delta in central California. The sailboat is a MacGregor 25 that I owned for a few years in the late ‘80’s and early ‘90’s. This was about 11 months after our wedding.
For nearly half my life — this being about nine years longer than the sum total of the years that my young friend has been alive — I hated myself. I hated my name. I hated looking in mirrors. Cross-dressing was something I was compelled to do by some internal force that I didn’t understand, and while it comforted me somehow, it also terrified me, not just because I was worried about “getting caught,” but because 𝙄 𝙙𝙞𝙙𝙣’𝙩 𝙠𝙣𝙤𝙬 𝙬𝙝𝙤 𝙄 𝙬𝙖𝙨.
I used to echo that stock phrase that’s popular with the trans community, “Trans women are women, period,” but I don’t “shout it out” myself so much, not anymore. Each of us is a unique individual, and I have come to know many trans-feminine people, and not all of them are hard-binary trans women. Some of them are bi-gender and seem to live happy, dual-gendered lives: male sometimes, female other times.
Photo by Zackary Drucker, via the Gender Spectrum Collection. from www.vice.com
These are real friends, not just casual acquaintances, and I have had some deep conversations with them about who they are inside. Some, but not all of them, are married. Most, but not all, are attracted to women. They are thus “straight” when living in their male mode, and “trans-lesbian” when in female mode.
It must seem utterly bizarre to most cis/het people, especially men, but I will submit to you that these friends are “just people.” They are IT professionals, Civil Engineers, Doctors and Nurses, Architects, Accountants — almost every job classification you could possibly think of. Some of them are parents, mostly with kids that are grown up and have lives of their own now.
And in many ways, I am a lot like them. I lived very happily as Lynn’s boyfriend and husband for almost 29 years, and 𝙄 𝙙𝙞𝙙𝙣’𝙩 𝙝𝙖𝙩𝙚 𝙢𝙮𝙨𝙚𝙡𝙛 𝙣𝙚𝙖𝙧𝙡𝙮 𝙖𝙨 𝙢𝙪𝙘𝙝 𝙙𝙪𝙧𝙞𝙣𝙜 𝙩𝙝𝙤𝙨𝙚 𝙮𝙚𝙖𝙧𝙨 𝙖𝙨 𝙄 𝙙𝙞𝙙 𝙗𝙚𝙛𝙤𝙧𝙚 𝙄 𝙢𝙚𝙩 𝙇𝙮𝙣𝙣, or during the two years immediately following her death. This seems to me to mean something profound, but it’s something that I don’t have the language to describe, even to myself. But in order to integrate my past life as Lynn’s husband, into my life today as Laura-Ann, I have decided that I have to loosen my grip on the “Trans women are women, period” trope.
Today, it comforts me that I seem to be usually perceived as a woman, and I get treated as one by customer service employees at businesses and restaurants. I asked for, and was granted, the right to change my legal gender from male to female, and my name from “Laurence George” to “Laura-Ann Marie”. And I hate to be misgendered. But I have come to realize that I have always been simply “this person” — the consciousness in this brain, and the soul under the skin and living in this heart, from which my inner light pours forth to illuminate the darkness. Just because my parents gave me the name I had, and that Lynn knew me by that name, shouldn’t mean that it was only that name that made me feel bad about myself.
Where the rubber meets the road, I felt bad about myself because the soul of a woman is what lives in me, and that fact could not be openly admitted until the time was right. And at the same time, I have to acknowledge that, when I was trying to be a guy, for the sake of my family, I did a reasonably good job. I earned an honest living, and I kept a roof over our heads. Lynn thought I was a pretty good husband, and, although I can’t ever be anyone’s husband again, Pauline seems to like me, too. Lynn and Pauline both were able, apparently, to see that inner light, even when I often can’t see it myself in the mirror.
Left image: My wife Lynn and I, September 2013, in Sequoia/Kings Canyon National Park. This was the last road trip I got to do with Lynn; she died about 8 weeks after I shot this photo of us. Right image, Pauline and I at El Tapatio Mexican restaurant in Citrus Heights, California, October 2017. This is one of our favorite restaurants. I just hope it can survive the COVID pandemic. It’s been shut down for all but three of the last 16 weeks.
I’ve been very lucky, and had a very good life, despite having to put up with decades of gender dysphoria. Way fewer than half of the people I know had full and happy marriages that didn’t end in bitter divorces. My own father only stayed with my Mom out of a sense of duty, and the hope that, despite how awful my Mom was — alcoholism would eventually kill her at age 52 with terminal liver and kidney failure — that I would turn out okay, which I think I did.
July 5, 1990, Reno, Nevada. Family Reunion at my Dad’s 73rd birthday. From right to left: the author and her father Walter (below), author’s daughter Shanna (above), and next to her, author’s wife (Shanna’s Mom) Lynn. Far left, author’s aunt Helen, and her husband, my Uncle George. My assigned-at-birth middle name, George, is in honor of this man. George is my godfather; he was 5 years younger than my Dad, and of the five siblings in that family, my Dad and George had the closest relationship. Of the six people in this photo, only Shanna and I are still living.
After 59 years of looking in the mirror, and at photos of myself, and more-or-less hating what I saw there, I now finally see someone worthy of being loved, even by myself. And that’s all the justification any trans person should ever have to give anyone for “why” they undergo gender transition.
***************************************************************
I have one other thought about gender identity and sexuality that I am going to add here. This came to me a couple of days ago, while I was talking to my friend Alicia about our experience of being transwomen. As I stated above, some of my friends in the MtF support group I belong to, the River City Gems, are not absolutely transwomen. They’re bi-gender, or gender-fluid, and I thought of a metaphor to illustrate how one might interpret this concept of non-binary gender identity or sexual orientation. Imagine if you will, that somewhere deep in the human brain, are a pair of dials, like the tuning dials of an old fashioned radio or TV set. One of these dials is labelled “Gender Identity”, and the other is labelled “Sexual Orientation”. The Gender Identity dial has two positions labelled on it : “100% Male”, all the way to the left, and “100% Female”, 359 degrees of rotation to the right. There are an infinite number of intermediate settings in between the two end stops on this dial, but none of them are labelled, and the dial itself, of course, is invisible. You can’t see it, because it’s way deep inside your brain, and it’s no different in density from any other brain tissue, so you can’t image it on an X-ray, CT scan, or MRI scan. You can only feel the effects on your self-perception of gender, that this dial controls.
Multiband Ham Radio tuner, image from Wikipedia.
The other dial, for Sexual Orientation, is pretty much the same in structure, except that the two end points are labelled “100% Hetero Sexual” on the left, and “100% Gay” on the right. Again, you can’t see this dial to verify just where the dial is pointing. My thought is that a lot of people are living with one or both of these dials set somewhere that is not on the extreme end points of their respective arcs of travel. And to muddy the water even more, I now think that either, or both, of these dials can and do move, slowly, throughout a person’s life, as that person gains life experience, and is subject to decades of crises and traumas, loves and losses, and the beginnings, growth, decline, and ending, of all the thousands of relationships and friendships that most people will experience in their lifetimes. | https://medium.com/gender-from-the-trenches/why-this-trans-woman-is-no-longer-fully-comfortable-with-the-trans-women-are-women-period-trope-8197a4cd2cbf | ['Laura-Ann Marie Charlot'] | 2020-07-30 10:20:30.793000+00:00 | ['Personal Growth', 'Self', 'Nonbinary', 'Culture', 'Transgender'] |
From The Top Down: Why Organization In Leadership Matters | It may be too simplistic to say that business leaders have to be organized to maximize success. As leaders juggle a number of balls and wear the many hats of the entrepreneur, it’s easy to get sidetracked or miss something. Many leaders are quick on their feet and highly creative. It might seem like a waste of time to stop and organize their processes, or to ensure that their organizational example is being followed. However, these facets can be crucial elements to harnessing overall success in any industry.
The “Why” Of Organizational Leadership
When leaders are organized, they are much more productive and able to react to anything that comes up. Being organized doesn’t mean you need to lose agility or flexibility. In fact, being truly organized makes it easier to be agile without dropping balls-if something comes up, you know what shifting your schedule really means across the board. Rather than missing something by mistake, being organized allows you to make the most informed decisions, prioritizing your schedule as needed.
A leader should have certain qualities that set the tone for the business. Some of these qualities and practices include:
Arriving early for everything
Responding to emails and phone calls in a timely manner
Flexibility in focus
Awareness of daily schedule
Remaining involved with finances
You need to know where you stand. The only way to truly be an effective leader is if you are plugged in. People are looking to you for the answers and trusting you to be thoughtful in your methods. Some entrepreneurs find the ideation part of business thrilling, while they find sorting the details mundane. However, the details are where you will really set yourself apart as a business leader.
If you fail to pursue organizational skills as a leader, you can really hurt your professional success. Your partners and stakeholders will view your lack of organization as a threat to your company. All it takes is one mistake due to a careless approach or lack of organization, and your partners or investors will lose confidence. Mistakes and missteps happen, but they should never happen because you refused to prepare.
Choose One Focus
Multi-tasking is a myth for many leaders. To be effective, you need to focus on one thing at a time. A huge benefit of being organized is that you can see which priorities are most important and time-sensitive. You can fully focus on one task at a time, giving it your full attention until it is completed.
Pick one focus per active consideration. If you accomplish that one goal, then you can start another one. By only choosing one focus, you completely put that priority above all others. It doesn’t mean you won’t get anything else done, but it does mean you can mitigate potential mistakes resulting from distraction that will require additional attention in the long-run.
Practice Self-Reflection and Mindfulness
Sometimes, we brush aside important moments of quiet because we feel too rushed and pressured to be productive. This can be a monumental mistake. The quiet times of reflection can spark our best ideas. They can also be times when we find ourselves refueled on a mental level. Leaders, busy entrepreneurs, and stressed employees are highly encouraged to pause for an hour every day to think in a quiet space.
If possible, turn off your phone, emails, internet, social media, TV, news and music. Complete silence. It can be a little uncomfortable at first, but it’s worth it. You don’t need to follow a guided meditation. Just focus on your successes, failures and professional goals. An hour can be a very long time at first, so you might start with 20 minutes each day and work your way up to an hour within the month.
Choose a Helpful Organizational Software
There are many tools to help with staying organized in your business and personal life. Asana is a particularly useful organizational tool that can help you stay connected with your team, track tasks, and accomplish items in real-time. Organizational apps, in general, make everything more streamlined and help to manage time more efficiently. Choose a software tool to help you delegate, so you aren’t trying to manage small tasks that could be done by someone else. Leveraging technology to remain organized is a great way to relieve some of the stress associated with attempting to organize your day, analog-style.
Take Notes and Revisit Ideas
Whether you hear a good idea in a meeting or think of something over a lunch break, jot it down. Many successful leaders keep track of ideas without getting sidetracked by utilizing tech-based applications, or even pen and paper. By writing down your ideas, you have the opportunity to revisit them later (maybe during your hour of self-reflection). It is so important to capture your thoughts without losing focus. Being organized in this way allows you to be fully in the moment and hold on to those important moments of inspiration. | https://medium.com/@vitalepeter/from-the-top-down-why-organization-in-leadership-matters-547896d29d99 | ['Peter Vitale'] | 2020-12-15 17:30:11.724000+00:00 | ['Time Management', 'Leadership', 'Business Development', 'Leadership Skills', 'Organization'] |
2020's bad TV: The Undoing — not even the best actors can save a stale story | With the recent six-episode miniseries 'The Undoing', a prestige crime drama starring Nicole Kidman and Hugh Grant, HBO attempts to create a painful critique of the modern day's shallow upper class. However, HBO seems to forget they already did that in recent years with 'Big Little Lies', starring the same Nicole Kidman and created and written by the same David E. Kelley, but that time they really succeeded. It had an almost air-tight story and nuance, and it handled difficult subjects with grace. Unlike 'Big Little Lies', 'The Undoing' is not what it wanted to be. It benefits from beautiful cinematography and some memorable acting, but a lazy screenplay and general inconsistency in execution ultimately doom even what you could call Hugh Grant's best work in many years.
Hugh Grant and Nicole Kidman have good chemistry in ‘The Undoing’
Kidman and Grant are Grace and Jonathan Fraser, a rich and successful New York couple of doctors with a huge, lavishing apartment and a smart kid in middle school, Henry (Noah Jupe, whom you may know from 2019’s great film ‘Honey Boy’). She is a psychiatrist, he is a children’s oncologist and both are ‘elite’ in their jobs. Her father, Franklin (Donald Sutherland) is a millionaire from a long line of millionaires.
The Frasers complete each other perfectly. Grace is the quiet one in the couple: self-contained, cautious, speaking more through her eyes than her words, yet passionate about what she loves. Jonathan is the loud one: he’s outgoing and larger-than-life, his empathy is equalled only by his dry sense of humour.
Their seemingly perfect lives are disrupted by the sudden appearance, and then disappearance, of a woman. Elena Alves (Matilda de Angelis), a lower to middle class latina artist, starts sending her son to the same pretentious Manhattan school for the filthy rich that the Frasers send Henry to, slowly entering their high-class, exclusively white community. Of course, Elena’s son Miguel benefits from a scholarship, not from the hundreds of thousands of dollars that the Frasers’ son Henry does.
Grace and her father
Right from the get go, to Grace, Elena’s beautiful presence is mysterious, mesmerising even. There’s something strange about her that Grace can’t quite grasp. Her intuition that there’s something off about Elena pays off when she’s found bludgeoned to death in her workshop, her skull brutally bashed in. Not long after this disconcerting news, Grace finds out that Elena was her husband’s lover, for years, and they started their relationship after Jonathan treated her son Miguel’s cancer.
As Jonathan becomes the prime suspect in Elena's murder, some new characters enter the scene: Fernando Alves, the angry widower with two children, Miguel and an infant daughter; Joe Mendoza, a no-nonsense cop who wants to get to the bottom of things; Sylvia, Grace's best friend, just as privileged, not as nuanced; and Haley Fitzgerald, the best and most ruthless defence attorney money can buy. | https://medium.com/@valentin-antonescu/2020s-bad-tv-the-undoing-not-even-the-best-actors-can-save-a-stale-story-1f3dfcbf93cf | ['Valentin Antonescu'] | 2020-12-25 09:51:51.748000+00:00 | ['Hugh Grant', 'Nicole Kidman', 'The Undoing', 'HBO', 'TV Series']
How to Optimize Your Instagram Profile to Get More Leads | Instagram is the fastest growing Social Media Network. So today, in this article, we will see How to Optimize Your Instagram Profile to get more leads and sales.
Instagram has recently undertaken a series of modifications. This shift makes it more difficult for businesses to develop their following, emphasizing the need for optimization. Perhaps you post on Instagram whenever you want, and you may be losing out on many views and followers. If you’re trying to advertise your business on Instagram, the greatest approach to ensuring success is increasing your audience first.
The more users you can reach, the more likely you are to generate sales. As a result, Instagram optimization is essential. Optimizing Instagram is the most challenging aspect, since Instagram SEO differs from standard Google SEO and will take some time to get used to. Below are some of the top features, along with how to optimize them.
Tricks to Optimize Instagram Profile
1. Optimize with Searches
Instagram claims that search results are influenced by several criteria, such as the profiles you follow and the accounts linked to yours. In addition, the photos you like on Instagram influence which posts, Reels, IGTV videos and other content get surfaced to you.
So, if you can optimize your Instagram profile with keywords, you'll be more likely to appear in relevant searches. Username and Name: It's a good idea to include target keywords in your name and username to increase your chances of being found in relevant searches. This might not always be possible with usernames, as you typically have to stick to your official brand name.
It is best to use your main keyword and its keyword family in your Instagram bio. This area can be used to describe your company and include relevant secondary keywords. If you want to rank for the search term "Cake," you may include supplementary keywords such as "whipping," "cream," or "muffins" in your bio.
Read More: How to Do Social Media Marketing
2. Optimize with Keyword
Instagram offers customized content suggestions based on each user’s activities and connections. For example, if you frequently “like” food-related articles, your explore page would most likely feature information that matches your preferences. That’s where your captions for your posts come in.
Instagram also determines how closely various accounts are connected based on bios, usernames, names, comments and descriptions. That's why writing informative captions and descriptions with specific keywords will increase your chances of appearing on relevant people's Explore pages. Instagram will identify which subjects of interest fit your account based on the keywords in your post captions.
3. Optimize with Hashtags
Hashtags on Instagram work similarly to keywords in traditional search engines, helping users find related material. When you search for a hashtag, Instagram displays all of the posts that contain that hashtag. Tagging your post with a hashtag, in turn, means that it will appear whenever anyone searches for that hashtag.
As a result, it's vital to include hashtags related to the content of your posts to attract the right audience. However, avoid overcrowding the caption box with hashtags, as this might be distracting. Since you can use up to 30 hashtags in every post, make sure to choose a sensible number of relevant ones.
Instagram will rank your post in hashtag search results based on other variables such as engagement and account relevance. The more time and effort you invest, the more followers, views and likes you will get. If you want to capture your audience's attention, you should only post the highest-quality material.
Read More: Importance of Social Media Marketing
4. Optimize with Alt Text
Instagram provides a function where you may input alternative text to describe your photo in more depth. A screen reader can then read the description aloud, letting visually impaired users hear what the photo contains. The alt text feature was created to help visually impaired users understand Instagram's visual content.
Beyond accessibility, you can also use this function for Instagram SEO. The goal is to use keywords in your alt text to help the Instagram algorithm grasp what the image contains and whom it is relevant to.
Conclusion
Your Instagram profile might be the difference between being a successful brand and simply another company. So, use Instagram SEO to increase your exposure on the platform and raise brand recognition among a suitable demographic.
Use the Instagram optimization strategies mentioned earlier to increase your visibility on the platform. Also, don't forget to take advantage of Keyhole's free trial and use the automatic suggestions to improve your Instagram posts and increase interaction. | https://medium.com/@guestpostmagzine/how-to-optimize-your-instagram-profile-to-get-more-leads-ef8a401d795a | ['Sofiya Hayat'] | 2021-12-19 18:11:27.589000+00:00 | ['Instagram', 'Instagrammarketing', 'Instagramprofile', 'Howtogrowoninstagram']
Hand-Woven vs Machine-Made Rugs: Which Option is Better? | Rugs have always been popular but recently they have made their mark in modern-day home decor. Why? Maybe because they are a perfect example of how globalization works in spreading a popular idea in a specific region across so many different countries. However, we are not here to talk about why rugs have become such a sensation. Instead, we will discuss the two commonly available variants of area rugs: handwoven rugs and machine-made rugs.
If you're new to the world of rugs, it is difficult to tell the difference between the two variants. Don't worry, we are here to help you make an informed buying decision when you're shopping for rugs online. It's already clear from their names how these two types of rugs differ from one another. However, to an untrained eye, they all appear to be the same at first impression.
The only way an amateur rug buyer could possibly tell if the rugs that they’re planning to buy is a hand-woven type or a machine-made one is by looking at the price tag. Again, it’s pretty clear that the former type is going to be more expensive than the latter. But pricing isn’t always a reliable factor to judge the authenticity of, well, anything!
In this article, we’ll be discussing each of these two types of rugs in detail so that you can understand your rugs better. Ready? Let’s begin!
Hand-Woven Rugs
We know everybody is fully aware of what ‘hand-woven’ means, but still, we feel the need to elaborate on this. In the simplest of words, a hand-woven rug is a rug that is made by hand on a specialized loom.
While the process of hand-knotting may not seem too difficult, it requires an immense amount of patience and skilled craftsmanship to be able to carry it out. In contemporary hand-knotted area rugs, the knots are inserted into the foundation of the rug and then tied by hand.
Hand-knotted rugs are made out of wool, cotton, silk, jute and other natural materials. In general, a fine-quality handmade rug takes about a year (or more) to make. Unreal! Yes, a single rug takes that long, hence its over-the-top price.
Machine-Made Rugs
Machine-made rugs are mass-produced by giant machines, which are known as power looms. Being low-effort rugs, they are obviously much cheaper than their hand-woven counterparts.
But the manufacturing process isn’t the only key difference here. Machine-made rugs are made with a cheaper and a lower quality synthetic material as the natural fibers can’t withstand all the force from heavy machinery.
Which one to go forward with?
Even if there’s a gun to our head, we would still say, ‘It depends’. There’s no clear winner here, and depending on what you’re looking to get out of your rug, your ideal choice is most likely going to vary. If you want an extremely durable rug that will last generations, lying around your space, the hand-woven rugs should be your go-to option. On the contrary, a machine-made rug will usually last only about 20 years, tops. However, they are cheaper and easily replaced. So, for those of you who have flexible choices and don’t want to spend too much on rugs, your ideal answer is machine-made rugs.
No matter which one you choose, rugs could truly be an amazing addition to your home decor. If you manage to select the right one for your place, it will certainly make your guests’ heads turn! | https://medium.com/@shaunmatthews/hand-woven-vs-machine-made-rugs-which-option-is-better-453c98100e6a | ['Shaun Matthews'] | 2020-04-23 08:52:03.241000+00:00 | ['Rugs Online', 'Smart Home', 'Area Rugs', 'Designer Rugs', 'Home Decor'] |
Fomenting Producer Evangelism | When setting out to build a product dubbed as a "Platform" or a "Marketplace", there is always the problem of the chicken and the egg. The market forces that attract both sides of a market (consumers and producers) are so compelling that traditional businesses are throwing vast amounts of resources at establishing themselves as an intermediary between both sides and extracting value this way.
Where to start?
In this post, I'll focus on the producer side. A producer, in this case, is someone that is generating value. It can be anything from offering a physical product like an apple to providing a service that is completely ethereal. At ShareHouse, where I work as a product manager, we are building a marketplace where producers, a.k.a. Warehouse Owners, provide a service — storing goods in their warehouse. Many people are constantly looking for space to store their stuff, so a place online where you can easily find one right away is our product's vision.
So how do we get producers to spread the word?
Three words: positive feedback loops.
Producers have to feel good enough about the product to encourage their own customers and other Warehouse Owners to join them on this new platform, and so on. This is what Sangeet Paul Choudary and his colleagues call Producer Evangelism.
The “producer evangelism strategy” involves designing a platform to attract producers who can then persuade their customers to become users of the platform. Crowdfunding platforms such as Indiegogo and Kickstarter thrive in this regard by targeting creators who need funding with the infrastructure to host content about their idea and manage the fundraising campaign.
Read more about other strategies
In our case, we are just getting started and a lot of hypotheses have to be tested before dishing out any recommendations but here are some ways we are fomenting this strategy in the warehousing and logistics industry and forging positive feedback loops:
Tailored profiles — for producers that want to display their best qualities, i.e. certifications, the necessary equipment to carry out any heavy lifting, or details about their buildings, this is where they start.
Open negotiation — it wouldn't be a proper market if you were not able to negotiate a sweeter deal on the spot. We created an order ping-pong area before the order is completed where both sides can talk directly and add, subtract or modify any value-added service or a discount based on volume and/or contract duration.
Broader diffusion — usually getting noticed when your business is smaller or remotely located is a futile marketing exercise. Even though we are working within a niche (warehousing & value-added services), our growing network makes it virtually free for producers to gain exposure and offer their business to interested parties.
New revenue stream — a platform is a great way to expand horizons, and having clients that come straight to you after cutting through the noise of the world wide web is tapping into a new source of income.
Last thought
Going back to the first terms “fomenting” and “evangelism” — both might be a little over-the-top or carry a negative connotation as according to the dictionary one usually foments a rebellion or discord, and evangelism is to preach the gospel with a missionary zeal.
So as much as anyone would want to encourage a whole side of a market to advocate your product with the same conviction as Hernan Cortes advocated Christianity to the Aztec empire, well… it could get out of hand quickly.
Nevertheless, that is what we are striving to. | https://medium.com/inteligencia-log%C3%ADstica/fomenting-producer-evangelism-90d11ede565a | ['Ivan Flores Hurtado'] | 2019-02-12 14:59:14.448000+00:00 | ['Sharing Economy', 'Startup', 'Product Management', 'Marketplaces', 'Logistics'] |
Changemaker Melina Masnatta, Co-founder and Executive Director of “Chicas en Tecnología”, Argentina | “Together with her team, Changemaker Melina Masnatta, Co-founder and Executive Director of Chicas en Tecnología, promotes the empowerment of adolescent girls and therefore contributes to social innovation and equality in the digital age. We are happy to support her important work as part of the Covid-19 Aid Program,” says Christiane Hölscher, Global CSR Manager at Beiersdorf AG.
Melina Masnatta on stage for “Chicas en Tecnología”
Closing the gender gap and offering new opportunities
There is a worldwide gender gap in science- and technology-related disciplines. At a regional level, current challenges include internet access and computer literacy. Latin American women are at a disadvantage due to having greater social and cultural barriers. Gender stereotypes are constructed at an early age, while the interest in developing oneself in these areas becomes defined in adolescence. According to research in Argentina: STEM (Science, Technology, Engineering, and Mathematics) degree enrolments are made up of only 33% women, and in degrees linked to programming this number drops to 16%.
Creating technological solutions with social impact
Chicas en Tecnología (CET) is an Argentinean non-profit civil society organization that works to reduce the gender gap in technology and encourages young women throughout the region to be creators of technological solutions with a social impact. Since 2015, more than 7,000 girls have participated in the organization’s programs and initiatives. They are trained in skills such as leadership, communication, teamwork, programming, computer thinking and design, among others.
Co-founder and Executive Director Melina Masnatta has been dedicated to creating educational solutions and opportunities with technology in a comprehensive and systemic way for many years. Alongside her bachelor’s degree in Education Sciences and master’s degree in Educational Technology, she has developed a profound experience in design, development, research, as well as the evaluation of innovation programs in education and technology.
Through her work at “Chicas en Tecnología”, young women learn to transform realities through technology. In the long-term, increased opportunities for women in these fields translate into greater economic success and equality at all levels.
Girls and young women in class of ‘Chicas en Tecnología’
Offering new opportunities during the pandemic
Throughout the COVID-19 crisis, digitalization processes have accelerated at all levels. At the same time, women have been the most vulnerable group during the pandemic, as they have had to juggle multiple roles: paid work, adapting to new circumstances, and the unpaid work of household tasks. In order to encourage more people to join in the creation of technology with a social impact, CET has launched new opportunities as well as free and open proposals, to put the spotlight on young women and their communities.
CET has launched 25 new online, free and open interactive workshops and webinars during the period of social isolation. More than 2,090 educators, leaders in technology, families and young women from 18 countries interested in the approach and topics related to the technological ecosystem have participated so far. CET digitalized their programs, strengthened their educational platform, and reached more than 1,700 adolescent women in Argentina and the whole region.
Melina Masnatta, Co-founder and Executive Director of Chicas en Tecnología
Melina Masnatta: “With the support of Beiersdorf we will create new solutions that will help girls from Latin America to become creators of technology with social impact and shape their own future, even in uncertain times”.
Beiersdorf supports Ashoka to fight COVID-19 consequences
COVID-19 keeps hitting the whole world hard — and its ultimate socioeconomic consequences cannot be foreseen today. With this in mind, Beiersdorf launched its global “Care Beyond Skin” aid program with a volume of 50 million euros at the very beginning of the crisis. The program comprises two pillars: immediate aid, and long-term support to fight the pandemic’s consequences. Among the long-term initiatives, Beiersdorf has set up partnerships with international NGOs, Ashoka being one of them. With the Beiersdorf funding, the world’s largest network of social entrepreneurs is able to expand its “Changemakers United” initiative beyond Europe to Africa, Latin America and South Asia. The program promotes and supports social innovators whose ideas and ventures aim specifically at reducing the socio-economic consequences of COVID-19. Additional funding is provided to five social entrepreneurs within the Ashoka network, who contribute with their distinct and innovative approach to Beiersdorf’s mission and social focus area “Girls’ Empowerment.”
Find out more about Beiersdorf.
https://chicasentecnologia.org | https://medium.com/@beiersdorf/changemaker-melina-masnatta-co-founder-and-executive-director-of-chicas-en-tecnolog%C3%ADa-argentina-1ca2f9e6d2d7 | ['Beiersdorf Ag'] | 2021-01-29 13:53:05.390000+00:00 | ['Pandemic', 'Empowerment', 'Changemaker', 'Ashoka'] |
Why Women Call Each Other Gorgeous (Too Much) | “I attribute my youthful looks to processed foods and preservatives.” Joan Rivers
Why is it every time a woman posts a picture of herself, other women feel the need to tell her she's gorgeous? If the woman's in her twenties then, sure, it's possible — even likely. If she's in her late fifties, rounding sixty — and not Christie Brinkley — how gorgeous can she be? She's a bystanding, bench-warming senior (I say this as someone who's past the bench-warming stage; I'm practically part of the bench).
I talked to an anthropologist friend of mine, a “bone scrubber,” as I call him, and asked — anthropologically speaking, of course — if human degeneration allows for gorgeousness in older women. “Of course it does,” he said. “My wife’s gorgeous. Wrinkled as a prune, but she’s seventy-five.”
So it’s really a compliment within the context of age. Nobody’s saying you’re a knock-out. They’re saying, given your many years of processed foods and preservatives, you don’t look half bad. You’re well-preserved.
What I don’t understand is, why not say that? Again, my anthropologist friend summed it up. Women rounding sixty don’t want to be called well-preserved.“They’re not brontosaurus bones,” he said, getting a bit hysterical over the phone. “You’re not studying parent atoms, for crying out loud.”
I don’t even know what a “parent atom” is, but obviously you don’t want to go around telling a woman she’s got good “parent atoms.” Next thing you know, she’s asking “I’ve got good what?” and then you’ve got a bunch of “gorgeous claimers” wondering what the hell’s wrong with you. “Can’t you just give a normal compliment?” they’ll say, which is exactly what my anthropologist friend said before his wife told him to stop talking to me.
“My wife thinks you’re weird,” he said, before hanging up.
It still confuses me how one word, even in the context of age, can be tossed around so indiscriminately. To see it used, over and over again, on every profile picture, isn’t that excessive? Aren’t they flattering earbashers?
“Not at all,” a woman told me, asking to remain anonymous. Her inner circle constantly calls each other gorgeous, awesome and amazing. “Why not if it makes the person feel good?” she said.
It’s sort of like saying, “Happy Holidays.” I say “Happy Holidays” even when it’s September. I’m ramping up.
So, whether a person’s gorgeous or not, it doesn’t matter. The woman feels good and nobody’s the wiser. It’s sort of like saying, “Happy Holidays.” I say “Happy Holidays” even when it’s September. I’m ramping up.
Yet isn’t saying “gorgeous” all the time more reflexive than genuine sincerity? And what determines the level of sincerity if it’s tossed around the same way I say “Happy Holidays”?
Rather than call my anthropologist friend again — and, no doubt, have his wife tell me I’m weird — I decided to go where all women go when they want the honest, unblemished truth: Reddit’s Ask Women.
Here I discovered that “gorgeous” serves many purposes, from ingratiating oneself to — you guessed it — being a snippy bitch. As one woman explained, “It’s sarcasm intended to drive the target to an eating disorder.”
That’s a little harsh, and not the general sentiment expressed by less snippy bitches in their responses.
We’ll go on Facebook and remark on some profile picture saying, “Nice truck.” If we’re being really personal, we’ll say “Nice teeth.”
Broken down, women generally feel a sense of duty to compliment each other, unlike men. We’ll go on Facebook and remark on some profile picture saying, “Nice truck.” If we’re being really personal, we’ll say “Nice teeth.”
Women, on the other hand, take this duty seriously. One woman admitted she compliments women on Facebook three or four times a day. “I just do it,” she said. According to her, it’s an important affirmation, especially for someone cresting the big 60 (like she is, I think, but I was too scared to ask)
Digging deeper, I decided to ask these women, “Do you ever mix it up a bit? Maybe throw another word in there, like ‘fabulicious,’ or ‘heart-stopping’?”
This was met with dead silence until a woman asked if I was a troll. “You’re probably a Russian hacker,” she said. “I mean, who else but a Russian would call themselves Fred here?” (Okay, I did call myself Fred, but I wasn’t wearing a fur hat)
I was soon met with many pejoratives by numerous women, confirming what I was trying to understand in the first place. Women don’t have a limited vocabulary. Based on what they called me, I’d say they have many words they use on guys—especially ones named Fred (by the way, what’s a “dilhole”?)
It was also apparent that even “gorgeous” women don’t mind getting down and dirty, calling a suspected troll a “dilhole,” which I’m not even sure has a Russian translation. Rather than look it up in the Urban Thesaurus, I found myself falling back on my old standard and wishing them all “Happy Holidays.”
Since we’re relatively close to Christmas, that didn’t seem to bother these women at all. They responded with, “You’re sweet,” and even gave helpful guidance when I asked, “Have you ever thought of telling women, when they post a picture, something simple like ‘Nice shot’?”
The comments were many, starting with one woman who asked how I’d feel if I sent out nude selfies and someone responded with: “Nice teeth.” I told her I’d feel a bit let down. Surely there’s more to me than teeth. “Exactly,” she said. “You need to know you’re the whole package—like Pop Tarts.”
So it’s really about being specific without being specific, meaning an all encompassing word like “gorgeous.” On one hand, you endear yourself, on the other, you assure her she’s a Pop Tart. Ultimately, you’ve done the job that “Nice truck” definitely can’t. That’s if you’re a woman
Men are still thrilled to read “Nice truck.” Then again, we’re simple beasts, and not nearly as cerebral as women. It seems they define themselves by compliments. Even an offhanded comment by a sales clerk, for instance, who says, “That dress looks good on you,” will be repeated more often than men repeat box scores.
To prove my point, I looked up some profile pictures on Facebook, commenting on each one. Instead of “gorgeous,” I put “Really gorgeous,” even going so far as telling one woman “You’re more ravishing with each passing day.”
The response was immediate. “You dear, sweet man,” she said, figuring I must be her friend Fred. I told her I wasn’t that Fred—or any Fred, to be honest. I was a distant admirer, a lover of beauty, given to words of poetic resonance.
“Stop, I’m blushing,” she said.
Too much, obviously. She started asking what part of her, in particular, I found most ravishing. “My eyes, my lips?” she asked. Needless to say, I was out of my depth. All I could do was search for sentiments I’d used in the past: a Valentine, an emotional shopping list. Nothing seemed to fit. Then I glanced at her profile again. Behind her was a Ford F-150, tricked out with overhead fogs and a Weslin winch mount grille guard.
“Nice truck,” I said.
“What!” she responded.
“Okay, nice teeth.”
“Jerk. Your name isn’t even Fred, is it?”
“No, it’s Boris. And you’re gorgeous.”
“That’s better.”
I think I’ve got this “gorgeous” thing figured out now.
Robert Cormack is a satirist, novelist, and former advertising copywriter. His first novel “You Can Lead a Horse to Water (But You Can’t Make It Scuba Dive)” is available online and at most major bookstores. Check out Skyhorse Press or Simon and Schuster for more details. | https://medium.com/datadriveninvestor/why-women-call-each-other-gorgeous-too-much-d44e28e79500 | ['Robert Cormack'] | 2020-12-12 12:21:23.634000+00:00 | ['Humor', 'Satire', 'Life', 'Social Media', 'Women'] |
Sitting In The Corner Of My Memory | from the confines of my room, to the utter peace that I felt
Nudging forward, younger me whining
My family spoke highly of you
I barely knew anything, nothing
Other than Arthur the aardvark’s fascination with you
My fingers
Ran along your surface
Gently
In the midst of my unfamiliarity
I remembered my teacher
Smacking my fingers with a ruler
Whenever I forgot to cut my nails
A sin — as though they were screeching against rails
Aside from the techniques and examinations
I grew to love
The sound of your notes
The comfort that surrounded me
The avenue I was able to express what I felt
You were able to transport me
From the confines of my room
To the utter peace that I felt
The calm
Reminded me of the pier
I used to kick my feet
Over the waters by the pier
Soaking in the sunshine
Kicking my thoughts into gear
Of everything and anything
That I held dear
As I grew older
Your keys collected dust
As my attention shifted
To excel in my education, a must
Sitting in the corner of my memory
Patiently, like a resounding melody
As I went to explore the unknown
And other first loves, sprouting from seeds I sown
As I got my heart broken
My fair share of failures
Maneuvering between words spoken
And people favoured
Years later I returned
Rosy-cheeked from abroad
Reminiscing, I turned
Playing as if I never left, awed
Hello, we meet again
As my fingers ran across your keys
Not in the eyes of men
Just me
You
And the calm of the vast seas | https://medium.com/moonrise-literary/sitting-in-the-corner-of-my-memory-e5513933a5e | ['Lily Low'] | 2020-11-16 14:10:11.199000+00:00 | ['Storytelling', 'Poetry', 'Prose', 'Literature', 'Moonrise'] |
HOW EXACTLY TO Lose Weight In four weeks | This article features proven tips that are based on scientific research and experience to assist you in reaching your weight loss goals quickly and efficiently. Indeed, studies show that despite suggestions that women gain a minimal amount of weight during the first trimester, in actuality, this is the trimester when the biggest amount of excessive fat gain (pounds gained above the recommended levels) occurs.
Even when I’m not really exercising for stretches at the same time, my pounds holds steady at 185, and it has been that way for 4.5 years now. If you drop your calorie consumption without maintaining your nutrient intake, your body takes steps to make sure its nutritional requirements are met as well as your weight loss efforts are nearly certain to fail.
The intervention focused on eliminating junk food, eating more vegetables and reducing the entire number of calories the ladies were consuming eventually, she told Live Science. And over the next couple of weeks, a woman may also expect to lose the pounds of the extra liquid in her body that built up during pregnancy. In fact, this is what the Atkins Diet is banking on. Eating plenty of lean protein means that you can cut back on calories and shed weight without feeling hungry. In short, avoid your body from catching on to your weight loss efforts (see intensive training ), otherwise weight reduction shall slow or stop.
Jill Kanaley and her co-workers recruited 75 obese women and men who had been identified as having type 2 diabetes. It keeps your fat in check as well as away from all the fat associated is in fact the fastest method to lose excess weight without having to diet. This awesome food supplement can help with weight loss and has a bevy other other beneficial health effects. They tell me their advice remains the same for any post menopausal weight gain: grab your activity and lessen calories. My results are, I believe, pretty typical among those who are just losing a pound or less every week. I tried to lose excess weight the slow way by eating just a little less or doing slightly more exercise for several years.
If you don’t have so very much weigh to lose that your doctor prescribes a very-low-calorie diet, you can safely lose excess weight on your own by reducing your daily calorie intake by 500 to at least one 1,000, which will make you lose about 1 to 2 2 pounds per week.
Research shows that drinking 5 cups of green tea each day increases metabolism by 20%, assisting you burn lose and fat weight, while 2 cups of coffee will also boost metabolism. A motley complex of emotional and behavioral issues possess a powerful impact on the way women and men approach weight loss. However, 12 months seems to be the upper limit for how long it will take for women to lose all of their pregnancy weight. The key to slimming down is eating fewer calories than you consume via food and drinks. With my clients I teach them to put into practice measurements to obtain a good indicator of progress instead of scale weight. This article is part of a Live Science Special Report on the Science of Weight Loss. You should think of weight loss in conditions of permanently changing your eating habits.
For example, when I think that my husband’s ideal weight is almost 100 pounds more than mine I do not get as frustrated by the fact that he can eat more, because it’s just physics. But if I eat 1500–1900 calories each day, and keep the carbs in the Primal Sweet place of 50–100 carbs per day, and workout at least three times per week, I continue to lose weight. Try not to eat straight from a huge package of food — it’s simple to lose track that way. If you are overweight, we suggest that you combine all three methods listed, and obvious changes in both your bodyweight and overall health should be visible in less than two weeks. In fact, with consistency and discipline, you can even lose 2o pounds in 2 weeks.
In a report published in the January 2009 problem of the Proceedings of the National Academy of Sciences , even though women said they weren’t hungry when asked to smell, taste, and observe treats such as pizza, cinnamon buns and chocolate cake, brain scans showed activity in the regions that control the drive to consume (false for men).
Originally published on Tumblr
Read More earnonline11 on jigsy.com | https://medium.com/@xavierpzir/how-exactly-to-lose-weight-in-four-weeks-3682315873ea | [] | 2016-10-21 20:49:21.611000+00:00 | ['Health Foods', 'Weightloss Foods', 'Fat Burning Foods', 'Health Recipe', 'Weightloss Recipe'] |
Why Creative Blocks Happen | In The Dynamics of Creation, psychoanalyst and psychologist Anthony Storr attempts to understand what motivates people to dedicate themselves to creative work. Considering that creativity is a difficult endeavour and often carries few rewards, it has to be about more than fame or money.
Storr believed that for some people, creativity could substitute for unmet needs in the real world. Individuals who fail to find satisfaction in relationships and the like turn more to their internal worlds.
Writing, music, painting, and other forms of art can be a way of communicating with others on your own terms. It can feel safer than doing so through conversation because the whole situation is under your control. You don’t need to fear other people tricking you into revealing more than you wish.
For those who struggle to maintain a solid sense of self, creativity can be a means of self-assertion.
It’s a way of expressing your thoughts and emotions with minimal pressure to confirm. Some creative people gain self-esteem from their work, as opposed to their concrete selves. Equally, making stuff is a form of escapism. It takes you out of yourself for a while.
Storr goes on to write that some people put more of themselves into their creative efforts than they do into their social lives. Those who feel unable to reveal their true selves to others in relationships can end up feeling their art is the sole real expression of their identity. Storr writes:
‘Great artists are seldom great talkers; and when they are, like Oscar Wilde, one has the impression that they would have produced more work of lasting value had they talked less. This needful secrecy, however, means that creative people very often reveal less of themselves in company than ordinary people, and thus may not experience the reinforcement of the sense of identity which comes from a true exchange of one’s ‘real self’ with the real selves of others.’
Taken too far, you end up hyper-sensitive about your art because any criticism of it is an attack on your fundamental self. Taken even further, making anything becomes impossible. Finishing a project and risking exposure is too dangerous.
Caring too much about your art can prevent you from actually making anything.
Conversely, creating in order to attain necessary doses of self-esteem can make you too eager to release your work as often as possible, to make it visible to as many people as possible, to get as much of what’s in your mind out as you can. Needing that reassurance is a barrier to spending a long time working on a bigger, more complex, or more developed thing. | https://medium.com/swlh/why-creative-blocks-happen-580315369a8 | ['Rosie L'] | 2020-08-01 16:20:46.361000+00:00 | ['Creativity', 'Self Improvement', 'Productivity', 'Life Lessons', 'Writing'] |
Bokeh 0.12.6 Released | We are pleased to announce the release of Bokeh 0.12.6!
This update includes the following highlights:
Headless, programmatic export of SVG and PNG images
New annotations Whisker and Band for displaying error estimates
Fine-grained sub-element patching for images and other "multi" glyphs
Hover hit-testing extended to segments and all markers
Fixes for sorting and selecting from DataTables
Large cleanup and refactor of the layout system
Improved formatting options for hover tool fields and axis tick labels
Additionally, this release contains many other small bugfixes and docs additions. For full details, see the CHANGELOG and Release Notes.
As a reminder, we now package and upload examples for each Bokeh release to our CDN. Click here to download.
This release can most easily be installed from the Bokeh channel on Anaconda.org by executing the command conda install -c bokeh bokeh if you are using Anaconda, or pip install bokeh otherwise.
SVG and PNG Export
Not surprisingly, #538-Headless static (svg, png) image generation was one of the oldest open issues on the Bokeh issue tracker. Despite enormous interest in this capability (from users and the core devs alike), technical hurdles and trade-offs prevented realizing this feature until now.
I am very happy to report that this issue has been completed! It is now possible to export static SVGs and PNGs directly from Python code thanks to the great work of Luke Canavan.
Exporting to PNG is as simple as calling one function:
from bokeh.io import export_png
export_png(plot, filename="plot.png")
This function generates a PNG image that is essentially a full screenshot of what would be rendered in the browser. Here is a PNG of the unemployment example:
It’s also possible to generate SVG output, which is often requested by people wanting to use Bokeh plots in scientific publications:
from bokeh.io import export_svgs
plot.output_backend = "svg"
export_svgs(plot, filename="plot.svg")
Do note the plural in the function name. When exporting grid plots or layouts with multiple plots, export_svgs will generate a separate SVG for every plot: plot.svg, plot_1.svg, plot_2.svg, etc. This is different from the "full screenshot" style of the PNG export, but should be useful for those embedding plots into publications who want finer control of what-goes-where. Here is an SVG of the same unemployment example above:
https://blog.bokeh.org/images/release-0-12-6/unemployment.svg
To use this new capability, some additional optional dependencies (such as Selenium) must first be installed. See the new User’s Guide section Exporting Plots for information about how to get everything up and running. Like all new features, there will almost certainly be some kinks to straighten out. Please report any issues (with full details) on the issue tracker.
Layout System Improvements
Bokeh 0.12 introduced a new layout system for entire documents, affording the ability to display sophisticated data applications with multiple controls and plots. However, the layout system is complex, and more than a few problems have turned up since it was added. Thanks to the tireless work of Mateusz Paprocki, this release saw a major cleanup of the entire layout system. In general, the responsiveness of layout has been improved, but most importantly many bugs were fixed:
#4764-Issue with interactions between widgets and plots using bokeh server
#4810-Trouble Swapping out layout contents when using server
#5131-Unexpected initial layout with DataTable and layout()
#5518-Add new child to existing column
#5879-Make “bokeh finished rendering heuristic” work with non-plot examples
#6213-Appending layout regression
Additionally, a lot of behind the scenes refactoring puts the Bokeh layout capability on solid footing for the future.
Bands and Whiskers
Another long-requested feature is annotations for reporting errors and uncertainty. The plot below shows off the new Whisker and Band annotations:
Future planned improvements include an errorbar function to help make using these annotations even easier.
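As a rough sketch of how these annotations are wired up (the data and column names below are invented for illustration, not taken from the example plot), both are attached to a figure with add_layout and point at lower/upper columns of a ColumnDataSource:
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, Band, Whisker

# hypothetical x positions, values, and lower/upper error bounds
source = ColumnDataSource(data=dict(
    x=[1, 2, 3, 4, 5],
    y=[2.1, 3.3, 2.9, 4.2, 3.7],
    lower=[1.8, 2.9, 2.5, 3.8, 3.2],
    upper=[2.4, 3.7, 3.3, 4.6, 4.2],
))

p = figure(plot_width=400, plot_height=300)
p.circle(x="x", y="y", source=source)

# Whisker draws an error bar at each x; Band shades the region between lower and upper
p.add_layout(Whisker(source=source, base="x", lower="lower", upper="upper"))
p.add_layout(Band(source=source, base="x", lower="lower", upper="upper", fill_alpha=0.2))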
For good measure, this plot also shows off some new syntax that makes overriding tick formatting a snap. The x-coordinates in the plot number successive trading days. We want to label every fifth value (i.e. every Monday) as a new week, which can now be done easily like this:
p.xaxis.ticker = mondays # [0, 5, 10, ...]
p.xaxis.major_label_overrides = { d: "week %d" % i for i, d in enumerate(mondays)}
Efficient Streaming for Higher Dimensional Data
Bokeh has had patch and stream methods on ColumnDataSource for some time now. These allow for efficient or incremental updates to individual items in CDS columns. For most cases, where the columns are one-dimensional arrays, this is a great benefit. But for some glyphs, such as MultiLine and Image, the items in the columns are themselves arrays. In these cases, patch was not nearly as useful as it could be. Now it is possible to use patch to partially update individual locations or slices of CDS column items that are arrays. The demo below shows patch being used to update subsets of circles, multi-lines, and images all at once:
You can find more information in the Reference Guide or check out and run the source code for the above example at examples/howto/patch_app.py .
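For a feel of the addressing scheme, here is a minimal sketch (the column names and values are invented): patches map column names to lists of (index, new_values) pairs, and for "multi" columns the index can be a tuple that selects an item and a slice inside it:
from bokeh.models import ColumnDataSource

# a plain 1d column: patch single positions or slices directly
pts = ColumnDataSource(data=dict(x=[1, 2, 3, 4], y=[10, 20, 30, 40]))
pts.patch({"y": [(0, 15), (slice(2, 4), [33, 44])]})

# a "multi" column (as used by multi_line): the index is an (item, slice) tuple
lines = ColumnDataSource(data=dict(xs=[[0, 1, 2]], ys=[[5, 5, 5]]))
lines.patch({"ys": [((0, slice(1, 3)), [7, 9])]})

# 2d image columns can be patched the same way with an (item, row-slice, col-slice) index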
High-level Charts
Looking towards a 1.0 release, we are in the process of paring down Bokeh for long term stability and maintenance. We want to make it a rock solid platform for other high level and domain specific tools to build on. As part of this, we have decided to cleave bokeh.charts from the core library.
bokeh.charts is now a separate package.
The new project is called bkcharts. Current usage of bokeh.charts will work (with a deprecation warning) until Bokeh 1.0 is released. At that time, it will be necessary to separately install and import from the bkcharts package to continue using this API. However, due to resource constraints, support and maintenance for bkcharts by the current team will be extremely minimal from this point on.
Accordingly, I would instead encourage everyone looking for very high level charting with Bokeh to look in a new (and much better) direction: HoloViews is a separately maintained package that provides a concise declarative interface for building Bokeh plots. It is actively maintained by Jean-Luc Stevens, Philipp Rudiger, and Jim Bednar. HoloViews is particularly focused on interactive use in a Jupyter notebook, allowing quick prototyping of figures for data analysis. Click on any of the thumbnails below to see concrete usage of HoloViews’ very high level, data-oriented API:
In addition to close integration with Bokeh, Holoviews can also coordinate the Datashader library for dealing with large (billion+ point) data sets. Together, these three projects bring together the vision for “high-level data applications for large and streaming data in the browser” that Peter Wang and I set out to create four years ago. Expect much more information, demonstration, and documentation about using these tools seamlessly together in the coming weeks. In particular, there will be a tutorial at SciPy 2017 in July (available afterward on YouTube).
Next Steps
As stated above, there aren’t a lot of big-ticket features left to put in to Bokeh before we are ready to cut a 1.0 release. Here is a rough outline (subject to change) of the work we plan to do over the next few months:
0.12.7
Network/Graph data source and renderers
Improved binary transport for array data
Scriptable animations and transitions
Improvements to Categoricals (e.g. nested axes)
New events for document init, busy/done, etc.
0.12.8
Complete migration to TypeScript
Publish BokehJS reference documentation
Support “Patches with holes” for GIS usage
Integrate with VegaLite/Altair
1.0
Remove all existing deprecations
Cleanup: bugfixes, docs, automation, polish
As we’re getting closer to a Bokeh 1.0 release, I’d like to thank the 231 total contributors who have helped make Bokeh such an amazing project. As always, for technical questions or assistance, or if you’re interested in helping out, please post to the Bokeh mailing list or join the chat on Gitter.
Thanks,
Bryan Van de Ven | https://medium.com/bokeh/bokeh-0-12-6-released-85a12cff2b68 | [] | 2020-07-06 21:30:45.085000+00:00 | ['Data Visualization', 'Bokeh', 'Data Science', 'Python', 'Open Source'] |
Minimize Rework, Improve Performance, and Manage Costs of your cloud migrations with Snowflake and Talend
As today’s cloud architecture capabilities continue to mature, many more organizations are moving away from typical on-premise architecture and adopting a cloud-centric strategy. Along with increasing volumes of data the need for accurate and timely processing of data has increased. Oftentimes data processing challenges directly impact the availability of data and insights to support timely business decisions, leading to missed opportunities. One way our clients gain efficiencies with their data processing is to migrate their workloads to modern cloud solutions.
At Cervello, we have assisted numerous clients in successfully navigating this pathway. One such migration I recently completed was from an existing on-prem Data Warehouse solution to a modern cloud-based architecture. The architecture included Snowflake, a modern built-for-the-cloud data warehouse platform and Talend, a modular development framework with pre-packaged data orchestration capabilities. I’ll use this project as an example to help highlight some best-in-class practices to adhere to if you were to undertake a similar initiative.
During the course of the project, Cervello worked alongside the client team in architecture advisory and lead development roles. Our goal was to train the existing client development team on the new technologies being introduced during the development effort which provided the team with the knowledge and know-how to maintain, develop, and enhance the solution further once our migration was completed.
In this blog post, I’ll highlight important areas that will enable you to minimize rework, improve performance, and manage costs while increasing flexibility with Snowflake and Talend.
So let’s get to it!
Minimize Rework with Talend Templates
While examining our client’s existing on-prem processes we noticed that we could categorize the data ingestion process into three distinct flow types. Utilizing Talend, templates were created for each of these flows and the templates were used to put in place the base framework for the load. This process resulted in only specific changes being required for any new source introduced to the loading process.
Flat File to Snowflake: changes required (source file name, target table name, file schema)
Full Table Data Source to Snowflake: changes required (source table, target table, table schema mapping)
Incremental Data Source to Snowflake: changes required (source table, target table, table schema mapping)
An example of the template created for a Full Table Data Source to Snowflake Talend Job is shown below. This template was used to create all data ingestion jobs required for a full table migration and only required modification for the SQL_Server_Source_Table and Snowflake_Target_Table objects.
Talend Template: Minimize development time per data source
Utilizing templates made possible in Talend, the development timeframe for ingesting data into the 450 tables within the Snowflake DataWarehouse was dramatically reduced. Moving forward, these templates will reduce development efforts and facilitate the addition of new data sources (flat-file, tables) easily.
Improving Performance with Snowflake's Bulk Loading
Once the processes to ingest data from our required source systems into Snowflake were complete, it was time to run an end-to-end test and see how the process compared to the existing on-prem solution. In order to take full advantage of Snowflake's architecture, the following techniques were implemented, resulting in dramatic performance gains.
Limiting file sizes during data ingestion
During the data ingestion process, best practice dictates that data loaded into Snowflake be broken up into files sized between 10 and 100 MB.
This allows Snowflake to load data in parallel utilizing multiple threads within each node of the Virtual Data Warehouse. Each thread within a virtual warehouse can process 1 data file in parallel, therefore processing 8 small files provides greater performance than processing 1 large file.
Snowflake: Utilizing Virtual Warehouse Threads for parallel processing
This can be done programmatically within your process; however, by using Talend to handle the data ingestion, the file splitting is taken care of automatically. As data is pulled from the source query, small files are pushed to Snowflake and loaded in smaller chunks, resulting in drastic performance improvements.
By utilizing Talend’s architecture to parallelize the loading process, and Snowflake’s performance boost for loading / ingesting data, the data ingestion process was reduced from 16 hours to 4 ½ hours using only an x-small virtual warehouse. Tests were run increasing the warehouse size in attempts to improve performance, however it was quickly determined that the query response time from the source systems was a limiting factor.
Stop using ETL constructs, and focus on adopting an ELT mindset
Another best practice followed is to evolve from using an ETL (Extract-Transform-Load) approach and adopt a process that focuses on ELT (Extract-Load-Transform). By adopting an ELT mindset, data is consumed into Snowflake first, and all Transformations happen within the Snowflake architecture.
The ELT Approach allows us to utilize Snowflakes flexible compute architecture to scale up when needed, and only when needed. In the ETL world, any processing for Transformations would directly impact users / other processes working within the system. By assigning separate virtual warehouses to each process, there is no contention for compute resources, and each process can be scaled accordingly to only use the amount of compute necessary.
The ELT approach also allows us to capitalize on Snowflake's Bulk-Loading capabilities. Within the Transformation process, utilize the following statements:
CREATE TABLE ... AS SELECT (CTAS)
INSERT INTO ... SELECT
MERGE INTO
The goal is to minimize single-row insert statements and to consolidate Insert, Delete, and Update statements as much as possible.
As an added benefit, COPY INTO and INSERT INTO are non-blocking statements.
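As an illustration of that set-based style (the table and column names here are invented), a typical dimension load collapses what would otherwise be row-by-row updates and inserts into a single MERGE executed inside Snowflake:
# cur is a snowflake.connector cursor, created as in the ingestion sketch above
cur.execute("""
    MERGE INTO DIM_CUSTOMER tgt
    USING STAGING.CUSTOMER_DELTA src
        ON tgt.CUSTOMER_ID = src.CUSTOMER_ID
    WHEN MATCHED THEN UPDATE SET
        CUSTOMER_NAME = src.CUSTOMER_NAME,
        SEGMENT       = src.SEGMENT,
        UPDATED_AT    = CURRENT_TIMESTAMP()
    WHEN NOT MATCHED THEN INSERT (CUSTOMER_ID, CUSTOMER_NAME, SEGMENT, UPDATED_AT)
        VALUES (src.CUSTOMER_ID, src.CUSTOMER_NAME, src.SEGMENT, CURRENT_TIMESTAMP())
""")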
By utilizing Snowflake’s Bulk-Loading capabilities and Talend’s orchestration processes the existing on-prem Transformation process to populate 200+ Dimension and Fact tables was migrated reducing the overall time to completion from 12 hours to 3 ½ hours.
Manage Costs with Snowflake’s Virtual Warehouses
In typical on-prem solutions the architecture usually revolves around the highest peak usage and requires a delicate balance to manage costs while also meeting SLAs set forth by the business. A high watermark for processing power is set and the system is usually at 100% capacity during nightly batch processes, however minimally used throughout daily activities.
With Snowflake, Storage and Compute resources are not tied to each other; this provides flexibility in sizing required compute resources based on usage types (data ingestion vs. reporting), as well as only using larger compute resources when necessary and turning them on and off when not in use.
Separate compute resources for ingestion & reporting
Throughout the day, various activities happen within a Data Warehouse environment that require different sets of resources to be successful. Large batch processing can require massive amounts of CPU with little to no cache required, whereas daily reporting requirements may require less CPU yet benefit greatly from cached result sets.
By separating our virtual warehouses into task based assignments, users requesting data for daily analysis are not impeded by any large scale batch processing that may be consuming large amounts of resources.
Depending on the reports/queries being generated, and the business criticality of the timely return of a query's results, different warehouse sizes can be allocated to the Analysts / Data Scientists to control costs while also meeting the business's needs.
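One way to realize this split (the warehouse names, sizes and roles below are illustrative, not the project's actual configuration) is simply to create a warehouse per workload and grant each to the appropriate roles:
# cur is a snowflake.connector cursor, created as in the ingestion sketch above
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS LOAD_WH
        WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS REPORTING_WH
        WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE
""")
# analysts query through REPORTING_WH while batch jobs use LOAD_WH, so neither blocks the other
cur.execute("GRANT USAGE ON WAREHOUSE REPORTING_WH TO ROLE ANALYST_ROLE")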
Suspending & resuming virtual warehouses
The ability to use what you need, when you need it is a huge benefit when it comes to cloud computing, and Snowflake makes this available inside your Data Warehouse environment. When configuring Virtual Data Warehouses for use, it is possible to take advantage of Snowflakes per-second billing to manage costs by suspending / resuming a virtual warehouse when not in use.
Suspending and resuming warehouses can be done programmatically within your code (best used for scheduled batch processing) or by configuring the Virtual Warehouse to allow Snowflake to automatically resume the warehouse when needed and suspend it once a threshold of inactivity has been reached.
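For the scheduled-batch case, the explicit version looks something like the sketch below (the warehouse and table names are placeholders); credits only accrue between the RESUME and the final SUSPEND:
# cur is a snowflake.connector cursor, created as in the ingestion sketch above
cur.execute("ALTER WAREHOUSE TRANSFORM_WH RESUME IF SUSPENDED")
try:
    cur.execute("USE WAREHOUSE TRANSFORM_WH")
    # example ELT step; in practice this would be the full batch of transformation statements
    cur.execute("INSERT INTO FACT_SALES SELECT * FROM STAGING.SALES_DELTA")
finally:
    cur.execute("ALTER WAREHOUSE TRANSFORM_WH SUSPEND")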
Throughout our migration process we took advantage of Snowflake's 'pay for what you use' model by using various warehouse sizes only for the durations when we required increased server capacity.
Viewing the total duration (start to finish) of data being brought into snowflake from our various sources, to the availability to the end-consumer within our Dimension / Fact tables gives us a view of our credit usage:
X-Small Warehouse — 6h 40m (6.4 credits/day)
Large Warehouse — 1h 20m (10.4 credits/day)
Total Usage = 8h (16.8 credits/day)
We were able to switch warehouses for each individual process by issuing a 'USE WAREHOUSE <WAREHOUSE_NAME>;' prior to each execution. This allowed us to bypass the need to scale our warehouse size to our largest process requirement. If we were constrained to a one-size-fits-all warehouse, our credit usage would have looked like the following:
Large Warehouse — 8h (64 credits / day)
That’s a savings of 47.2 credits/day or 73% of our costs each day.
In Conclusion
Although this was only a migration of our client's existing Data Warehouse platform to a cloud-based solution, in doing so, we provided a stable foundation for their evolution to a full end-to-end cloud architecture. The end result provided faster processing capabilities, enabling next-day reporting as opposed to day-2 reporting, and reduced processing costs. In addition, the Development team now has more flexibility to introduce changes on a shortened development and test cycle. Lastly, our client now has greater confidence in their ability to scale the environment, allowing for new data sets to be brought into the process, further enabling their business strategy through enhanced insights. That last one, that is the one we are proudest of.
I hope these takeaways are helpful to you on your migration journey.
About Cervello, a Kearney company
Cervello, is a data and analytics consulting firm and part of Kearney, a leading global management consulting firm. We help our leading clients win by offering unique expertise in data and analytics, and in the challenges associated with connecting data. We focus on performance management, customer and supplier relationships, and data monetization and products, serving functions from sales to finance. We are a Solution Partner of Snowflake due to its unique architecture. Find out more at Cervello.com.
About Snowflake
Thousands of customers deploy Snowflake’s cloud data platform to derive insights from their data by all their business users. Snowflake equips organizations with a single, integrated platform that offers the data warehouse built for the cloud; instant, secure, and governed access to their network of data; and a core architecture to enable many other types of data workloads, such as developing modern data applications. Find out more at Snowflake.com. | https://medium.com/cervello-an-a-t-kearney-company/minimize-rework-improve-performance-and-manage-costs-of-your-cloud-migrations-with-snowflake-and-993a1504713b | ["John O'Brien"] | 2020-07-30 13:42:52.795000+00:00 | ['Snowflake', 'Cloud Migration', 'Data Warehouse'] |
Lessons from 2020 to inspire Smart Cities in 2021 | Smart cities are also resilient. As May came, the buzz around building resiliency in cities was abounding. But what does resilience really mean? The Oxford English Dictionary defines resilience as:
1) the capacity to recover quickly from difficulties, and 2) the ability of a substance or object to spring back into shape, to be elastic.
The modernization and innovation of our electrical system is one way in which cities can be more elastic in their approach to energy. The Greencity in Zürich Süd, Switzerland, made it to our Use Case of the Month for May because of its resiliency due to its capacity to generate 100% of its energy from locally generated renewable energy sources.
What can we learn from this Use Case?
The topic of Big Data became a fundamental issue during this project. Balancing the critical infrastructure of the energy infrastructure with Big Data while also protecting personal data should be at the forefront of any city’s pre-planning efforts.
Cities must tackle governance issues in relation to emerging technologies and data, and this will continue to play a progressively more significant role in subsequent years.
‘It is better to be a Young June-Bug than an Old Bird of Paradise’
The European Commission has made some big leaps this year towards a sustainable future, and in late June we celebrated the EU (European Union) Sustainable Energy Week, focused on clean energy. So many creative and innovative solutions have been developed in 2020 and The Aberdeen City Council in Scotland inspired us with its Use Case of their hydrogen bus project and refuelling stations. The project, H2 Aberdeen, is also an example of how to combine local, national and European level funding to pursue hydrogen technology innovation.
How can others use this trial of hydrogen solutions for further implementations?
The use of hydrogen presents a unique opportunity for businesses in the future. The private sector should consider these opportunities for future pursuits. This pilot Use Case shows the capabilities and explores further steps to test modern technologies with hydrogen as well as incorporate other solutions, such as the deployment of hydrogen cargo bikes and other vehicles, into the blueprint for cities in the future. The Aberdeen City Council also invites others to contact them for more lessons learned.
July is a Blind Date with Summer
Stratumseind Street in Eindhoven, Netherlands. Photo by Omar Ram on Unsplash
This year, city life was certainly lacking the proverbial sounds of typical summer festivities. Nonetheless, sound is a fundamental component of an urban environment and can be used as a tool for safety measures. Dreaming of future post-pandemic days where crowds of people can safely celebrate in communal areas, Eindhoven’s innovative and clever usage of sound sensors and cameras made it to the spotlight of our July issue. In non-pandemic years, Stratumseind in Eindhoven is the largest pub street in the Netherlands. With the bar owners and resident participation, the sound cameras and sensors can detect sounds of aggression and loud noise disturbances to increase safety and security for all in the area.
What can we learn from this project?
Existing infrastructure can help during this implementation. In addition, the close cooperation with the bar owners is a necessary component of this project. Some complications can occur with the tuning of the sensors. It can also be considered that video surveillance systems can add additional information to the case. Cities looking to implement similar technologies should consider citizen perspectives, due to some scepticism of such surveillance techniques. In addition, consideration should be given to the deployment of such technologies, as to not discriminate against certain groups.
August is like the Sunday of Summer
Reviving our own energy in the summer is often done by refuelling on some renewable sun sourced energy. Likewise, our old buildings often need revitalization and integral renovation to meet net-zero energy goals. The refurbishment of two old textile factories into energy-efficient buildings in Barcelona, Spain, can be used as a source of inspiration for those looking to preserve historical buildings while decreasing energy consumption.
How can we use this experience for future actions?
Usage of local energy generation helps to protect the heritage status of buildings. However, it is crucial that staff receives training in the operations and maintenance of the building. This project was a prime example of a public-private partnership. Working together with the urban planning department of the municipality from the design phase of the project is strongly recommended in order to select the most appropriate innovative technologies that respect the historical value of the buildings.
Ah, September!
Use of electric vehicles (EVs) has increased this year and previous years and these changes consequently require a shift in infrastructural needs. Increasing the number of EV charge points (EVCP) to support these shifts represent a challenge for cities but one that can be tackled with strategies such as the geospatial analysis undertaken by Energeo for East Lothian Council in Scotland. This use case delivers data, innovation, and actionable strategies — which is why we chose it in September as our use case of the month. We can take plenty of these lessons into 2021 as environmental and sustainability objectives drive electric vehicle policy support at all governance levels.
How can the insights and lessons learned guide infrastructure plans in other cities?
It is absolutely crucial that the data is up-to-date, high quality and high resolution to achieve optimal results. This means that a data audit should be taken pre-project to determine feasibility. Cooperation with stakeholders also plays a vital role in building trust between the supplier and customer and to bring a wider technical understanding to the team.
‘October is a Symphony of Performance and Change’
Photo by Julia Solonina on Unsplash
It is with the topic of citizen engagement and empowerment in mind that we selected the citizen platform for urban air quality from Breeze Technologies as our Use Case of the month for October. Through this platform, residents of Rothenburgort in Hamburg can access real-time data about local air quality, which considers the special needs of sensitive population groups. At BABLE, we are strong advocates for a citizen-first approach to all smart city solution implementation, so platforms designed with and for the citizens make us want to share these products with the world.
How can we use this story to shape a better future?
The most important lesson to draw from this use case is to move beyond citizen engagement to citizen empowerment. A bottom-up approach is a solution where everyone benefits and should be considered from the start-line to the finish line and beyond. There should be clear goals and metrics of the project, strong and consistent communication with the citizens, open discussions and a user-friendly interface.
Listening and learning from the people most impacted by a problem is critical to supplying a solution that solves the core of the problem.
Fallen Leaves in November
As the cold set in across (most of) Europe and the second wave of COVID-19 pushed citizens inside and to stay at home, we chose the Smart Home Solution Pilot in Tartu, Estonia, as an exemplary case for how we can help make residents comfortable, both physically and mentally, at home. For example, solutions like this pilot create a richer environment for improved indoor air quality benefits. Therefore, this project promoted not only energy usage efficiency and sustainability but also increased quality of life — which should always be the end goal.
What are the lessons learned and next steps?
Implementors should expect, anticipate and listen to resident concerns regarding the safety and health risks that they associate with the technology. In particular, care should be given to older residents who tend to struggle more with changes and technological adaptations, especially when they are expected to interact with technology on an individual level. These main lessons can be funnelled into active citizen engagement in the design phase. As additional cable works make up a significant part of the costs of setting up the smart home solution, the availability of these connections should be taken into consideration as a proactive measure when planning and implementing retrofitting activities, even if initially there is no intention of installing and integrating smart home devices in the apartments.
‘It is December, and nobody asked if I was ready’
The holiday season this year is certainly lacking the same lustre without the bright lights of Christmas markets and the smell of Glühwein in the crisp air, but practical mobility measures like the bicycle tram in Konya, Turkey, bring us out of the dreariness and into the holiday spirit. The environmental movement loves to re-use, and this Use Case is an ideal example of how to take old items and refurbish them to fit new needs. Encouraging active and sustainable modes of transport is also a major plus of an exclusive bicycle tram, refitted from an old tram that no longer fit into the city's current mobility plan.
How can we ‘re-use’ this Use Case in the forthcoming years?
Innovation often can come from the repurposing of the old. Cities can look to this Use Case as an example of how to fit old items into new mobility plans for the future. Mobility measures should always consider the variant needs of older and less mobile populations, while also encouraging sustainable and active modes of transport for the health of the citizens and the health of the planet. Looking at patterns in mobility (i.e. an increase in cycling behaviours) can give us the tools we need to provide an improved standard of life for commuters and travellers.
So, What can we leave behind in 2020 and what can we bring with us into 2021?
Photo by Intricate Explorer on Unsplash
While many of us would like to sweep 2020 under the rug and leave it there, we can also choose to embrace the lessons it has stubbornly gifted us. The future is looking hopeful and we remain optimistic that transformations to improve urban life will continue accelerating more rapidly than ever.
The European Union is pledging a united front against the threat of climate change and towards regional development and cohesion. The EU has also made major plans to transform the transport sector to provide green, smart and affordable mobility. Funding opportunities for green initiatives are becoming increasingly easier to capture with the proper guidance, and the private and public sectors are joining forces and working more closely together than ever before.
The geographic dispersity of these Use Cases shows us how inspiration can come from various regions and how partnerships from all countries, regions and cities are needed to bring about the greatest chances for innovation in a post-COVID-19 world.
So 2021, together, we are ready for you.
Are you ready, too? | https://medium.com/bable-smart-cities/lessons-from-2020-to-inspire-smart-cities-in-2021-f75d11f671aa | ['Bable Community'] | 2020-12-22 13:33:36.539000+00:00 | ['Startup', 'Cities', 'Technology', 'Smart Cities', 'Life'] |
10,000 INFLUENCES. Being able to influence well over… | In ourselves, we can be both very influential and very influenced. According to sociologists, even the most isolated individual will influence 10,000 other people during his or her lifetime!* We are in turn significantly influenced, on average, by at least 25 colleagues, 14 family members and 150 friends and associates over a lifetime.** That is a lot of influence going around! Put another way, we are all rather influential and impressionable.
Being able to influence well over 10,000 others, to me, is an awesome opportunity to make a positive difference in the lives of others. The example we set by what we do (and don’t do) impacts greatly on others. In the smallest of ways, both privately and publicly, we should try to set a good example by living uplifting and constructive lives. Younger folks are watching and noting accordingly. What values do you find most central and essential to who you are? Do you actively promote and reinforce these qualities within your circle of influence? Would others know that these attributes are central to your identity?
Being significantly influenced by about 200 others seems reasonable to me. I would suggest that about 20 people influenced most of who and what I became; without any one of them I would be a different person. These 20 had faith in me, were excellent role models, were kind and patient, mentored and monitored me along the way and opened doors. The next 180 kept me on my path, encouraged me and taught me the finer skills of life. A limited few were examples of what not to do, which was especially useful in its own way.
I recently sent a sincere thank you to one of my key enablers, something I would truly recommend.
Being influenced by so many should make us mindful of the company and influencers we associate with or follow. Role models matter. On the influential side, perhaps there are some younger folk you could potentially mentor.
Overall, consider the influences on your own life, and see if you can make these relationships more constructive and effective.
Physically distance (when required or helpful), never socially distance.
Reflection Source: www.Smallercup.org
Please freely share and widely, there are no copyright concerns.
*: Developing the Leader within you 2.0 by John C. Maxwell
**: https://blog.adioma.com/counting-the-people-you-impact-infographic/ | https://medium.com/@smallercup/in-ourselves-we-can-be-both-very-influential-and-very-influenced-1ad689559b4d | [] | 2020-12-21 14:23:06.280000+00:00 | ['Career Advice', 'Followers', 'Influence', 'Influencers', 'Opportunity'] |
Turkey says that despite US sanctions, it will not reverse the purchase of the Russian S-400 missile | Foreign Minister Mevlut Cavusoglu declared on Thursday that Turkey would not reverse its purchase of the Russian S-400 missile defence system and will take reciprocal action after reviewing the US sanctions imposed on the purchase of Russian weapons.
On Monday, the United States placed sanctions on NATO member Turkey’s Directorate of Defense Industry (SSB), its head, Ismail Demir, and three other officials over its acquisition of the S-400 missile.
On Wednesday, President Recep Erdogan said the sanctions were a hostile attack on Turkey’s defence industry, predicted they would inevitably fail, and censured them as a grave mistake.
The 2017 Countering America’s Adversaries Through Sanctions Act (CAATSA) is intended to deter nations from buying military hardware from NATO adversary Russia.
Reuters cited Cavusoglu as saying, “This isn’t following the rules of international law and it is a strategically and legitimately wrong decision. If there were a way to reverse the decision, it would have happened by now.” If it cooperated with Turkey and NATO, the United States could resolve the matter with good judgement, he added.
Turkey, for its part, says its procurement of the S-400s was not a choice but a necessity, as it could not secure a defence system from any NATO partner on agreeable terms.
The U.S. believes the S-400s pose a threat to its F-35 fighter aircraft and to the broader NATO defence system. Turkey denies the allegations and says the S-400s will not be integrated into NATO. | https://medium.com/@benjaminrichards707/turkey-says-that-despite-us-sanctions-it-will-not-reverse-the-purchase-of-the-russian-s-400-95adc81582c1 | ['Benjamin Richards'] | 2020-12-19 06:45:47.233000+00:00 | ['Turkey', 'Russia', 'Nato', 'United States'] |
Against Outsourcing the Mind to a Soul | image: tw
Brains are not just for eating
Until 9/11 I thought that the days were over when people could be immolated for having the wrong ideas about the soul. People have long been controlled by a fear that their immortal essence could suffer for eternity if they did not obey certain spiritual authorities.
“One of these mornings, the chain is gonna break.” — Aretha Franklin, Chain of Fools
Brains, Not Chains
While those chains have now crumbled for a lot of us, belief in a soul is still pretty common. It might be less so if more of us were aware of its stunning inconsistency with everyday reality: particularly the fact that we have brains.
There is overwhelming evidence that brains are what make our bodies into people; people who speak, think, adapt, and cope with life on the planet. Brains gather and integrate sensory information derived from energy fluctuations in the environment. The more precisely we can measure brain activity the more detailed we find to be their correlations with human conscious experiences.
And it’s not just humans. When we look at nervous systems throughout the animal kingdom, we see that brains operate bodies to enable their survival as well.
Nevertheless, the ineffable, private nature of human consciousness is often part of an argument that something — call it a soul — that is not physical causes consciousness. If that’s true, then why does the brain seem so obviously involved?
Can’t You Hear Me Knocking?
To answer this question many have used the analogy of the brain being like a radio receiver. As one fairly typical source puts it:
The brain does not produce consciousness, but acts as a kind of receiver which “picks up” the fundamental consciousness that is all around us, and “transmits” it into our own being.
Some proponents think deeply enough to realize this comes with an issue. It’s whether a soul is somehow an individual or else is a sort of world soul. Maybe, they say, the world soul has knots in it that correspond to individual people. Or, assuming panpsychism, also knots for individual monkeys, mice, molds, and even molecules. If it’s all just knots, which are presumably temporary, then maybe we are at least off the hook for eternal damnation. I certainly hope so.
By the modern account of soul-ism, all those messy neuronal networks roiling away in brains only appear to be responding to the outside world. Instead, some outside spirit runs the show. This is a dualistic position, subject to … more issues. For starters, what is left for the brain to do? If it has no purpose, why does it expend more energy than the leg muscles of a marathon runner?
Many of you are aware that there is massive and detailed evidence that life here, including animals with brains, evolved by the selection of genetic changes that contributed to survival and reproduction. Consistent with that view, brains evolved as a way of reacting to the physical world, the better for the organism to deal with it. Why then would the brain also function as a receiver of information from sources external to the physical world? That’s like having a bicycle that’s also a TV, but where the same parts serve both functions. A dual-function brain seems at least awkward and possibly self-contradictory.
We can dig a bit deeper. Consider what, given the soul, is the purpose of the brain and the organism in which it resides? Why would such a puny, space- and time-limited thing as a brain or an organism need to exist when the important, eternal stuff of consciousness is really elsewhere?
We know that our physical person lives in this world. Sane people know that to survive we have to perceive this world, think about it, respond to it, etc. But if all our consciousness — including what happens when we drive the car, eat the food, go to the work, hide from the boss — if all that we experience comes from that soul, how and why does the soul know exactly what is happening in the mundane world? The soul could know this if it was part of that mundane world, and thus subject to the world’s laws, but then the soul could not be transcendent, let alone eternal.
The Watcher-Watcher
So that can’t be what’s happening. The alternative would be something like a perfect watcher. And only Dr. Seuss could contrive anything quite so — well, contrived. The soul, perched on a lookout in some other dimension, uses an inter-dimensional portal to watch our body moving about in our mundane world. Instant by every instant the soul decides and deduces everything that needs to happen in our consciousness.
For example, based on a previous thought (that it already sent to us) about wanting to look at something, the soul pokes our brain, making it in turn move our eyeballs. After the first few hundredths of a degree of movement, the soul looks through the portal at our physical environment, deduces what our visual field should now look like, and sends to our brain that new picture, so that we consciously see it. And repeats this cycle of controlling every tiny mental change for billions of instants the rest of our life and then on to the subsequent incarnations. All that electrochemical activity in the retina and up the neuro-visual circuits, all the spinal motor nerve activity — all that has no purpose at all. Seems like a waste.
So, why would something as cumbersome as a perfect watcher ever happen in any universe? Why use a brain at all? The soul should just tell the meat puppet what it perceives and what to do.
So the soul gets rid of the need for a brain, really. But what about the body, and, for that matter, the whole world it lives in? Maybe that is superfluous, too.
Why would we have a remote controller pulling our strings and dictating our reality from a realm outside of our realm, the one where we actually have to live? If such a controller did exist, it’s more like a god than an individual agent. After all, it has perfect control over our experience of “reality.” So a godlike soul does not need a body. In fact, it doesn’t need any reality other than its own. Maybe it’s just playing a virtual reality game in which we, the bodies with brains, are the pieces on the board?
It’s a Team Player, too
Now we have to ask, why does the game being played by my soul seem consistent with the game played by your soul, in the sense that they seem to be in one world, with consistent laws of physics, configurations of biology, timelines, and so forth? Like, who’s coordinating the souls to create this one-world illusion? How many levels of coordinators would there be? Here, I admit, is where a world soul would have an easier time of it, keeping things consistent.
Going back to real life for a moment: there’s a strong case that each of us (using our brains) models and lives in a kind of personal virtual reality, called an Ego Tunnel by philosopher Thomas Metzinger. What makes this work is that we are all doing our individual take on a base reality that contains all of us.
But if souls are actually in charge, doing the models and everything else, then uh-oh, there’s that coordination issue again. Souls that did not coordinate well, who had differences in their recipes for reality soufflé, would lead some of us to experiences that others would never see. People who believe in and experience unusual (“supernatural”) things should thus find the concept of souls sensible, and I think that they often do.
Just Give Brains a Chance
The brain receiver idea pops up often, as if it solves some conceptual problem of connecting the physical world with consciousness, which seems askew from the physical. But the concept has problems, as I hope I have expressed. It seems so much more sensibly self-consistent that consciousness is just a refined way of using nervous tissue to do a better job of adapting to the physical, measurable, and shared non-supernatural world that we perceive as our conscious reality.
We can be open to the possibility that all experience, even the weird stuff, comes from one universe, and that brains participate actively/creatively in all of it. For example, there are certainly transcendent experiences. Perhaps these happen when higher parts of the brain/mind’s conscious model “get out of the way.” Something like this was proposed by Metzinger to describe the experience of “pure consciousness” — the “void” or sometimes, “clear light”— during deep meditation.
To summarize: one can argue that reducing the brain to a sort of side effect doesn’t make much sense. There are also arguments that the brain is involved in rarer conscious states (like near-death or out-of-body experiences) that some think to require a soul instead. Those are too involved to address here. Maybe some readers will remind us of them. | https://medium.com/the-philosophers-stone/against-outsourcing-the-mind-to-a-soul-22ce1e1243c8 | ['Ted Wade'] | 2020-10-19 01:10:43.460000+00:00 | ['Philosophy', 'Brain', 'Consciousness', 'Psychology', 'Science'] |
How we implement Singletons in Scala at dataxu | At dataxu, we have been using Scala for a while now, and in our journey we have perfected our best practices. Among these best practices, we have refined our implementation of the Singleton design pattern in Scala. In implementing this pattern there are some aspects that require significant attention to detail to increase its ease of use, especially when testing.
The Singleton design pattern, according to Wikipedia, is a software design pattern that restricts the instantiation of a class to one object. This means that this pattern promises to be handy when dealing with a class that we want to instantiate once, and only once, across a system (or, more generally, a subsystem), and have that same instance shared by every potential user of the class. For example, think about a DatabaseConnection class that handles the connection between the system and its database, it is reasonable to have only one instance of the class and make any database client use that same connection.
But the question is, how do we implement this Singleton design pattern in Scala?
Implementing the Singleton design pattern in Scala
If you have already given Scala a try, you are probably familiar with Scala’s Singleton Objects. These are Scala’s approach to the static keyword in Java. Any function or variable defined in a Scala Object can be used statically, as there is one and only one instance of an object, which is not instantiated by any consumer, but just exists.
Our first hunch when implementing the Singleton design pattern in Scala was to make use of the Singleton Objects.
Let’s say we actually want to implement the DatabaseConnection class we mentioned above following the Singleton design pattern. This will allow different consumers to share the same DatabaseConnection instance for running commands on our application’s database. Using Scala’s Singleton Objects we can achieve this by doing:
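A minimal sketch of that first approach (the runCommand method and its signature are illustrative, not the actual dataxu code):

object DatabaseConnection {
  // Illustrative stub: open a connection and execute the given SQL command
  def runCommand(sql: String): Seq[Map[String, Any]] = {
    // ... connect to the application's database and run the statement ...
    Seq.empty
  }
}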
And that’s all. We can now support different consumers running commands on a shared DatabaseConnection instance implementing the Singleton design pattern. For example, we could have a function somewhere else that retrieves all the Users in the database, as follows:
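A sketch of such a consumer, reusing the illustrative runCommand stub from above (the UserRepository wrapper is an assumption):

object UserRepository {
  // No instantiation and no reference-keeping: the object is reached directly
  def allUsers(): Seq[Map[String, Any]] =
    DatabaseConnection.runCommand("SELECT * FROM users")
}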
As you can see, there’s no need to instantiate a new DatabaseConnection nor to somehow keep a reference to it — we have direct access to the one and only DatabaseConnection instance just as if it was a static call in Java.
As we started using this approach in our code, we found it had some limitations that were important to note:
There’s no way one can easily swap the implementation being used. The fact that the user does not control the creation of this singleton instance — as it exists naturally in the program — makes it impossible to easily swap the implementation being used. It is common for us at dataxu to need a different implementation of our DatabaseConnection class when running our unit tests so that we don’t connect to the actual database but instead use an in-memory implementation of the class which is faster and cheaper. Using the Singleton Object does not allow us to swap the implementation of DatabaseConnection we want to instantiate as the singleton, but instead we would need to modify each and every consumer of the class to point to a different Object.
The fact that the user does not control the creation of this singleton instance — as it exists naturally in the program — makes it impossible to easily swap the implementation being used. It is common for us at dataxu to need a different implementation of our DatabaseConnection class when running our unit tests so that we don’t connect to the actual database but instead use an in-memory implementation of the class which is faster and cheaper. Using the Singleton Object does not allow us to swap the implementation of DatabaseConnection we want to instantiate as the singleton, but instead we would need to modify each and every consumer of the class to point to a different Object. Can’t be instantiated lazily. It might be the case that we need our singleton instance to be created lazily — only instantiate it once it has been requested for the first time by a consumer, not before. Again, as we don’t control the creation of this singleton instance, there’s no way we can do so lazily — it will be instantiated upon program execution.
These limitations are a consequence of the fact that these Scala Singleton Objects are not meant to be used this way. These Objects are not a built-in implementation of the Singleton design pattern, but instead are Scala’s approach to Java’s static. Hence, we should not include in these Objects code that is intended to be used as non-static code accessed through a singleton instance, but instead we should only include static code.
Perfecting our implementation of the Singleton design pattern
Given the limitations of our first approach, we came up with a different strategy that could effectively implement the Singleton design pattern without sacrificing the ease to test and use, while still being clean and elegant.
We knew that we shouldn’t use a Scala Object, as it proved to not fit our needs. Instead, we should use a regular Scala Class and have some infrastructure for holding a single instance of it and making sure we only allow that one instance to be used.
By the way, this is similar to how we implement the Singleton design pattern in Java — we define a class and its functionality, and add static infrastructure for handling the singleton instance. In this case, Scala helps to make our code cleaner by more explicitly encapsulating the static code in a Scala Object.
Our code now looks like this:
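A sketch of that structure, assuming a private constructor is what keeps consumers from creating extra instances (the actual dataxu class will differ):

class DatabaseConnection private () {
  // Illustrative stub: run the given SQL command through this connection
  def runCommand(sql: String): Seq[Map[String, Any]] = Seq.empty
}

object DatabaseConnection {
  // The static infrastructure: it holds the single shared instance
  val instance: DatabaseConnection = new DatabaseConnection()
}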
And we can use the singleton instance this way:
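Usage then goes through the companion object's instance value, for example (again with the illustrative runCommand):

val users = DatabaseConnection.instance.runCommand("SELECT * FROM users")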
And voilà — we can now easily change the implementation of DatabaseConnection we want to instantiate as the singleton instance by just modifying the instantiation of the val instance in our object DatabaseConnection, as well as instantiating the singleton instance lazily just by adding the lazy keyword before the val for the instance.
This infrastructure can easily (and we think it should) be extended to allow programmatic swapping of the singleton instance being held. This means adding support for overriding the implementation being used, using a single line of code, which comes in very handy for solving the case mentioned above where we want to use a different implementation of the DatabaseConnection class when testing.
We decided to add this infrastructure to new functions to make our lives easier when testing:
A force(…) function that receives an instance of DatabaseConnection and forces the singleton instance to be the one received.
function that receives an instance of DatabaseConnection and forces the singleton instance to be the one received. A reset() function that puts the singleton instance back to its default (not-forced) one.
Generalizing our solution
Once we were happy with our solution, and proved that it was an effective implementation of the Singleton design pattern in Scala, we tried to generalize our solution to make it as simple as possible for other classes using this infrastructure. We came up with a simple utility that any developer who needs to implement a Singleton class in Scala can simply extend and start using.
And can be simply used as follows:
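A hedged sketch of both the generic utility and its usage; the trait name is borrowed from the "Singleton Instance Providers" mention below, while its internals (including the force/reset support described above) are assumptions:

trait SingletonInstanceProvider[T] {
  // Extending objects say how the default instance is built
  protected def buildInstance: T

  private lazy val defaultInstance: T = buildInstance
  private var forcedInstance: Option[T] = None

  // The forced instance wins when one is set (handy in tests), otherwise the default is used
  def instance: T = forcedInstance.getOrElse(defaultInstance)
  def force(newInstance: T): Unit = forcedInstance = Some(newInstance)
  def reset(): Unit = forcedInstance = None
}

class DatabaseConnection private () {
  def runCommand(sql: String): Seq[Map[String, Any]] = Seq.empty
}

object DatabaseConnection extends SingletonInstanceProvider[DatabaseConnection] {
  protected def buildInstance: DatabaseConnection = new DatabaseConnection()
}

Consumers keep calling DatabaseConnection.instance exactly as before, and tests can wrap each example with force and reset.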
We also created a test base for our solution so that those who are using it can very easily have their Singleton Instance Providers thoroughly tested, just by extending this trait in their test classes. | https://medium.com/dataxutech/how-we-implement-singletons-in-scala-at-dataxu-8ccba9e6dd59 | ['Javier Buquet'] | 2018-01-15 14:04:17.010000+00:00 | ['Software Design Patterns', 'Scala', 'Software Design', 'Singleton'] |
The Ultimate Intermediate Ruby on Rails Tutorial: Let’s Create an Entire App! | There are plenty of tutorials online which show how to create your first app. This tutorial will go a step further and explain line-by-line how to create a more complex Ruby On Rails application.
Throughout the whole tutorial, I will gradually introduce new techniques and concepts. The idea is that with every new section you should learn something new.
The following topics will be discussed throughout this guide:
Ruby On Rails basics
Refactoring: helpers, partials, concerns, design patterns
Testing: TDD/BDD (RSpec & Capybara), Factories (Factory Girl)
Action Cable
Active Job
CSS, Bootstrap, JavaScript, jQuery
So what the app is going to be about?
It’s going to be a platform where you could search and meet like-minded people.
Main functionalities which the app will have:
Authentication (with Devise)
Ability to publish posts, and search and categorize them
Instant messaging (popup windows and a separate messenger), with the ability to create private and group conversations.
Ability to add users to contacts
Real time notifications
You can see how the complete application is going to look.
And you can find the complete project’s source code on GitHub.
Table of Contents
Prerequisites
I will try to explain every line of code and how I came up with the solutions. I think it is entirely possible for a total beginner to complete this guide. But keep in mind that this tutorial covers some topics which are beyond the basics.
So if you are a total beginner, it’s going to be harder, because your learning curve is going to be pretty steep. I will provide links to resources where you could get some extra information about every new concept we touch.
Ideally, it’s best if you are aware of the fundamentals of:
Setup
I assume that you have already set up your basic Ruby On Rails development environment. If not, check RailsInstaller.
I had been developing on Windows 10 for a while. At first it was okay, but after some time I got tired of overcoming mystical obstacles caused by Windows. I had to keep figuring out hacky ways to make my applications work. I realized that it wasn’t worth my time. Overcoming those obstacles didn’t give me any valuable skills or knowledge. I was just spending my time duct-taping my Windows 10 setup.
So I switched to a virtual machine instead. I chose to use Vagrant to create a development environment and PuTTY to connect to a virtual machine. If you want to use Vagrant too, this is the tutorial which I found useful.
Create a new app
We are going to use PostgreSQL as our database. It is a popular choice among Ruby On Rails community. If you haven’t created any Rails apps with PostgreSQL yet, you may want to check this tutorial.
Once you are familiar with PostgreSQL, navigate to a directory where you keep your projects and open a command line prompt.
To generate a new app run this line:
rails new collabfield --database=postgresql
Collabfield, that’s what our application is going to be called. By default Rails uses SQLite3, but since we want to use PostgreSQL as our database, we need to specify it by adding:
--database=postgresql
Now we should’ve successfully generated a new application.
Navigate to a newly created directory by running the command:
cd collabfield
And now we can run our app by entering:
rails s
We just started our app. Now we should be able to see what we got so far. Open a browser and go to http://localhost:3000. If everything went well, you should see the Rails signature welcome page.
Layout
Time to code. Where should we start? Well, we can start wherever we want to. When I build a new website, I like to start by creating some kind of basic visual structure and then build everything else around that. Let’s do just that.
Home page
When we go to http://localhost:3000, we see the Rails welcome page. We’re going to switch this default page with our own home page. In order to do that, generate a new controller called Pages . If you are not familiar with Rails controllers, you should skim through the Action Controller to get an idea what the Rails controller is. Run this line in your command prompt to generate a new controller.
rails g controller pages
This rails generator should have created some files for us. The output in the command prompt should look something like this:
We are going to use this PagesController to manage our special and static pages. Now open the Collabfield project in a text editor. I use Sublime Text, but you can use whatever you want to.
Open a file pages_controller.rb
app/controllers/pages_controller.rb
We’ll define our home page here. Of course we could define home page in a different controller and in different ways. But usually I like to define the home page inside the PagesController .
When we open pages_controller.rb , we see this:
controllers/pages_controller.rb
It’s an empty class, named PagesController , which inherits from the ApplicationController class. You can find this class’s source code in app/controllers/application_controller.rb .
All our controllers, which we will create, are going to inherit from ApplicationController class. Which means that all methods defined inside this class are going to be available across all our controllers.
We’ll define a public method named index , so it can be callable as an action:
controllers/pages_controller.rb
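After adding the action, the file is simply the class from before with an empty index method (a sketch of the obvious contents, since the snippet itself is not shown here):

class PagesController < ApplicationController
  def index
  end
end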
As you may have read in the Action Controller, routing determines which controller and its public method (action) to call. Let’s define a route, so when we open our root page of the website, Rails knows which controller and its action to call. Open a routes.rb file in app/config/routes.rb .
If you don’t know what Rails routes is, it is a perfect time to get familiar by reading the Rails Routing.
Insert this line:
root to: 'pages#index'
Your routes.rb file should look like this:
config/routes.rb
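A sketch of the resulting file, with the generated comments omitted:

Rails.application.routes.draw do
  root to: 'pages#index'
end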
Hash symbol # in Ruby represents a method. As you remember an action is just a public method, so pages#index says “call the PagesController and its public method (action) index .”
If we went to our root path http://localhost:3000, the index action would be called. But we don’t have any templates to render yet. So let’s create a new template for our index action. Go to app/views/pages and create an index.html.erb file inside this directory. Inside this file we can write our regular HTML+ Embedded Ruby code. Just write something inside the file, so we could see the rendered template in the browser.
<h1>Home page</h1>
Now when we go to http://localhost:3000, we should see something like this instead of the default Rails information page.
Now we have a very basic starting point. We can start introducing new things to our website. I think it’s time to create our first commit.
In your command prompt run:
git status
And you should see something like this:
If you don’t already know, when we generate a new application, a new local git repository is initialized.
Add all current changes by running:
git add -A
Then commit all changes by running:
git commit -m "Generate PagesController. Initialize Home page"
If we ran this:
git status
We would see that there’s nothing to commit, because we just successfully committed all changes.
Bootstrap
For the navigation bar and the responsive grid system we’re going to use the Bootstrap library. In order to use this library we have to install the bootstrap-sass gem. Open the Gemfile in your editor.
collabfield/Gemfile
Add a bootstrap-sass gem to the Gemfile. As the documentation says, you have to ensure that sass-rails gem is present too.
...
gem 'bootstrap-sass', '~> 3.3.6'
gem 'sass-rails', '>= 3.2'
...
Save the file and run this to install newly added gems:
bundle install
If you are still running the application, restart the Rails server to make sure that the new gems are available. To restart the server, simply shut it down by pressing Ctrl + C and run the rails s command again to boot it.
Go to assets to open the application.css file:
app/assets/stylesheets/application.css
Below all the commented text add this:
...
@import "bootstrap-sprockets";
@import "bootstrap";
Now change the application.css name to application.scss . This is necessary in order to use Bootstrap library in Rails, also it allows us to use Sass features.
We want to control the order in which all .scss files are rendered, because in the future we might want to create some Sass variables. We want to make sure that our variables are going to be defined before we use them.
To accomplish it, remove those two lines from the application.scss file:
*= require_self
*= require_tree .
We’re almost able to use Bootstrap library. There’s a one more thing which we have to do. As the bootstrap-sass docs says, Bootstrap JavaScript is dependent on jQuery library. To use jQuery with Rails, you have to add jquery-rails gem.
gem 'jquery-rails'
Run…
bundle install
…again, and restart the server.
Last step is to require Bootstrap and jQuery in the application’s JavaScript file. Go to application.js
app/assets/javascripts/application.js
Then add the following lines in the file:
//= require jquery
//= require bootstrap-sprockets
Commit the changes:
git add -A
git commit -m "Add and configure bootstrap gem"
Navigation bar
For the navigation bar we’ll use Bootstrap’s navbar component as the starting point and then quite modify it. We will store our navigation bar inside a partial template.
We’re doing this because it’s better to keep every component of the app in separate files. It allows to test and manage app’s code much easier. Also we can reuse those components in other parts of the app, without duplicating the code.
Navigate to:
views/layouts
Create a new file:
_navigation.html.erb
For partials we use underscore prefix, so the Rails framework can distinguish it as a partial. Now copy and paste navbar component from Bootstrap docs and save the file. To see the partial on the website, we have to render it somewhere. Navigate to views/layouts/application.html.erb . This is the default file where everything gets rendered.
Inside the file we see the following method:
<%= yield %>
It renders the requested template. To use ruby syntax inside the HTML file, we have to wrap it around with <% %> (embedded ruby allows us to do that). To quickly learn the differences between ERB syntax, checkout this StackOverflow answer.
In Home page section we set the route to recognize the root URL. So whenever we send a GET request to go to a root page, PagesController‘sindex action gets called. And that respective action (in this case the index action) responds with a template which gets rendered with the yield method. As you remember, our template for a home page is located at app/views/pages/index.html.erb .
Since we want to have a navigation bar across all pages, we’ll render our navigation bar inside the default application.html.erb file. To render a partial file , simply use the render method and pass partial’s path as an argument. Do it just above the yield method like this:
...
<%= render 'layouts/navigation' %>
<%= yield %>
...
Now go to http://localhost:3000 and you should be able to see the navigation bar.
As mentioned above, we’re going to modify this navigation bar. First let’s remove all <li> and <form> elements. In the future we’ll create our own elements here. The _navigation.html.erb file should look like this now.
layouts/_navigation.html.erb
We have a basic responsive navigation bar now. It’s a good time to create a new commit. In command prompt run the following commands:
git add -A
git commit -m "Add a basic navigation bar"
We should change the navigation bar’s name from Brand to collabfield. Since Brand is a link element, we should use the link_to method to generate links. Why? Because this method allows us to easily generate URI paths. Open a command prompt and navigate to the project’s directory. Run the following command:
rails routes
This command outputs our available routes, which are generated by the routes.rb file. As we see:
Currently, we have only one route, the one which we’ve defined before. If you look at the given routes, you can see a Prefix column. We can use those prefixes to generate a path to a wanted page. All we have to do is use a prefix name and add _path to it. If we wrote the root_path , that would generate a path to the root page. So let’s use the power of link_to method and routes.
Replace this line:
<a class="navbar-brand" href="#">Brand</a>
With this line:
<%= link_to 'collabfield', root_path, class: 'navbar-brand' %>
Remember that whenever you don’t quite understand how a particular method works, just Google it and you will probably find its documentation with an explanation. Sometimes documentations are poorly written, so you might want to Google a little bit more and you might find a blog or a StackOverflow answer, which would help.
In this case we pass a string as our first argument to add the <a> element’s value, the second argument is needed for a path, this is where routes helps us to generate it. The third argument is optional, which is accumulated inside the options hash. In this case we needed to add navbar-brand class to keep our Bootstrap powered navigation bar to function.
Let’s do another commit for this small change. In the upcoming section we’ll start changing our app’s design, starting from the navigation bar.
git add -A
git commit -m "Change navigation bar's brand name from Brand to collabfield"
Style sheets
Let me introduce you how I structure my style sheet files. From what I know there aren’t any strong conventions on how to structure your style sheets in Rails. Everyone is doing it slightly differently.
This is how I usually structure my files.
Base directory — This is where I keep Sass variables and styles which are used throughout the whole app. For instance default font sizes and default elements’ styles.
— This is where I keep Sass variables and styles which are used throughout the whole app. For instance default font sizes and default elements’ styles. Partials — Most of my styles go there. I keep all styles for separate components and pages in this directory.
— Most of my styles go there. I keep all styles for separate components and pages in this directory. Responsive — Here I define different style rules for different screen sizes. For example, styles for a desktop screen, tablet screen, phone screen, etc.
First, let’s create a new repository branch by running this:
git checkout -b "styles"
We’ve just created a new git branch and automatically switched to it. From now on this is how we’re going to implement new changes to the code.
The reason for doing this is that we can isolate our currently functional version (master branch) and write a new code inside a project’s copy, without being afraid to damage anything.
Once we are complete with the implementation, we can just merge changes to the master branch.
Start by creating few directories:
app/assets/stylesheets/partials/layout
Inside the layout directory create a file navigation.scss and inside the file add:
app/assets/stylesheets/partials/layout/navigation.scss
With these lines of code we change navbar’s background and links color. As you may have noticed, a selector is nested inside another declaration block. Sass allows us to use this functionality. !important is used to strictly override default Bootstraps styles. The last thing which you may have noticed is that instead of a color name, we use a Sass variable. The reason for this is that we are going to use this color multiple times across the app. Let’s define this variable.
First create a new folder:
app/assets/stylesheets/base
Inside the base directory create a new file variables.scss . Inside the file define a variable:
$navbarColor: #323738;
If you tried to go to http://localhost:3000, you wouldn’t notice any style changes. The reason for that is that in the Bootstrap section we removed these lines:
*= require_self
*= require_tree .
from application.scss , to not automatically import all style files.
This means that now we have to import our newly created files to the main application.scss file. The file should look like this now:
app/assets/stylesheets/application.scss
The reason for importing variables.scss file at the top is to make sure that the variables are defined before we use them.
Add some more CSS at the top of the navigation.scss file:
app/assets/stylesheets/partials/layout/navigation.scss
Of course you can put this code at the bottom of the file if you want to. Personally, I order and group CSS code based on CSS selectors’ specificity. Again, everyone is doing it slightly differently. I put less specific selectors above and more specific selectors below. So for instance Type selectors go above Class selectors and Class selectors go above ID selectors.
Let’s commit changes:
git add -A
git commit -m "Add CSS to the navigation bar"
We want to make sure that the navigation bar is always visible, even when we scroll down. Right now we don’t have enough content to scroll down, but we will in the future. Why don’t we give this feature to the navigation bar right now?
To do that use Bootstrap class navbar-fixed-top . Add this class to the nav element, so it looks like this:
<nav class="navbar navbar-default navbar-fixed-top">
Also, we want the collabfield brand to line up with the Bootstrap grid system’s left-side boundary. Right now it sits at the viewport’s left edge, because our class is currently container-fluid. To change that, change the class to container.
It should look like this:
<div class="container">
Commit the changes:
git add -A
git commit -m "
- in _navigation.html.erb add navbar-fixed-top class to nav.
- Replace container-fluid class with container"
If you go to http://localhost:3000, you see that the Home page text is hidden under the navigation bar. That’s because of the navbar-fixed-top class. To solve this issue, push the body down by adding the following to navigation.scss :
body {
margin-top: 50px;
}
At this stage the app should look like this:
Commit the change:
git add -A
git commit -m "Add margin-top 50px to the body"
As you remember, we’ve created a new branch before and switched to it. It’s time to go back to the master branch.
Run the command:
git branch
You can see the list of our branches. Currently we’re in the styles branch.
To switch back to the master branch, run:
git checkout master
To merge our all changes, which we did in the styles branch, simply run:
git merge styles
The command merged those two branches and now we can see the summary of changes we made.
We don’t need styles branch anymore, so we can delete it:
git branch -D styles
Posts
It’s almost the right time to start implementing the posts functionality. Since our app’s goal is to let users meet like-minded people, we have to make sure that posts’ authors can be identified. To achieve that, an authentication system is required.
Authentication
For an authentication system we are going to use the devise gem. We could create our own authentication system, but that would require a lot of effort. We’ll choose an easier route. Also it’s a popular choice among Rails community.
Start by creating a new branch:
git checkout -b authentication
Just like with any other gem, to set it up we’ll follow its documentation. Fortunately, it’s very easy to set up.
Add to your Gemfile
gem 'devise'
Then run commands:
bundle install
rails generate devise:install
You probably see some instructions in the command prompt. We won’t use mailers in this tutorial, so no further configuration is needed.
At this point, if you don’t know anything about Rails models, you should get familiar with them by skimming through Active Record and Active Modeldocumentations.
Now let’s use a devise generator to create a User model.
rails generate devise User
Initialize a database for the app by running:
rails db:create
Then run this command to create new tables in your database:
rails db:migrate
That’s it. Technically our authentication system is set up. Now we can use Devise given methods and create new users. Commit the change:
git add -A
git commit -m "Add and configure the Devise gem"
By installing Devise gem, we not only get the back-end functionality, but also default views. If you list your routes by running:
rails routes
You can see that you now have a bunch of new routes. Remember, we only had one root route until now. If something seems confusing, you can always open the Devise docs and get your answers. Also don’t forget that the same questions come to a lot of other people’s minds. There’s a high chance that you’ll find the answer by Googling too.
Try some of those routes. Go to localhost:3000/users/sign_in and you should see a sign in page.
If you went to localhost:3000/users/sign_up, you would see a sign up page too. God Damn! as Noob Noob says. If you look at the views directory, you see that there isn’t any Devise directory, which we could modify. As Devise docs says, in order to modify Devise views, we’ve to generate it with a devise generator. Run
rails generate devise:views
If you check the views directory, you’ll see a generated devise directory inside. Here we can modify how sign up and login pages are going to look like. Let’s start with the login page, because in our case this is going to be a more straightforward implementation. With the registration page, due to our wanted feature, an extra effort will be required.
Login page
Navigate to and open app/views/devise/sessions/new.html.erb .
This is where the login page views are stored. There’s just a login form inside the file. As you may have noticed, the form_for method is used to generate this form. This is a handy Rails method to generate forms. We’re going to modify this form’s style with bootstrap. Replace all file’s content with:
views/devise/sessions/new.html.erb
Nothing fancy is going here. We just modified this form to be a bootstrap form by changing the method’s name to bootstrap_form_for and adding form-control classes to the fields.
Take a look how arguments inside the methods are styled. Every argument starts in a new line. The reason why I did this is to avoid having long code lines. Usually code lines shouldn’t be longer than 80 characters, it improves readability. We’re going to style the code like that for the rest of the guide.
If we visit localhost:3000/users/sign_in, we’ll see that it gives us an error:
undefined method 'bootstrap_form_for'
In order to use bootstrap forms in Rails we’ve to add a bootstrap_form gem. Add this to the Gemfile
gem 'bootstrap_form'
Then run:
bundle install
At this moment the login page should look like this:
Commit changes:
git add -A
git commit -m "Generate devise views, modify sign in form
and add the bootstrap_form gem."
To give the bootstrap’s grid system to the page, wrap login form with the bootstrap container.
views/devise/sessions/new.html.erb
The width of the login form is 6 columns out of 12. And the offset is 3 columns. On smaller devices the form will take full screen’s width. That’s how the bootstrap gridworks.
Let’s do another commit. Quite a minor change, huh? But that’s how I usually do commits. I implement a definite change in one area and then commit it. I think doing it this way helps to track changes and understand how the code has evolved.
git add -A
git commit -m "wrap login form in the login page with a boostrap container"
It would be better if we could just reach the login page by going to /login instead of /users/sign_in . We have to change the route. To do that we need to know where the action, which gets called when we go to login page, is located. Devise controllers are located inside the gem itself. By reading Devise docs we can see that all controllers are located inside the devise directory. Not really surprised by the discovery, to be honest U_U. By using the devise_scope method we can simply change the route. Go to routes.rb file and add
devise_scope :user do
get 'login', to: 'devise/sessions#new'
end
Commit the change:
git add -A
git commit -m "change route from /users/sign_in to /login"
For now, leave the login page as it is.
Sign up page
If we navigated to localhost:3000/users/sign_up, we would see the default Devise sign up page. But as mentioned above, the sign up page will require some extra effort. Why? Because we want to add a new :name column to the users table, so a User object could have the :name attribute.
We’re about to do some changes to the schema.rb file. At this moment, if you aren’t quite familiar with schema changes and migrations, I recommend you to read through Active Record Migrations docs.
First, we have to add an extra column to the users table. We could create a new migration file and use a change_table method to add an extra column. But we are just at the development stage, our app isn’t deployed yet. We can just define a new column straight inside the devise_create_users migration file and then recreate the database. Navigate to db/migrate and open the *CREATION_DATE*_devise_create_users.rb file and add t.string :name, null: false, default: "" inside the create_table method.
Now run the commands to drop and create the database, and run migrations.
rails db:drop
rails db:create
rails db:migrate
We added a new column to the users table and altered the schema.rb file.
To be able to send an extra attribute, so the Devise controller would accept it, we’ve to do some changes at the controller level. We can do changes to Devise controllers in few different ways. We can use devise generator and generate controllers. Or we can create a new file, specify the controller and the methods that we want to modify. Both ways are good. We are going to use the latter one.
Navigate to app/controllers and create a new file registrations_controller.rb . Add the following code to the file:
controllers/registrations_controller.rb
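Based on the description below, the file contains something along these lines (the full list of permitted keys besides :name is assumed to be the standard Devise set):

class RegistrationsController < Devise::RegistrationsController

  private

  # Permit :name in addition to Devise's default sign-up parameters
  def sign_up_params
    params.require(:user).permit(:name, :email, :password, :password_confirmation)
  end

  # Permit :name when a user updates his or her account as well
  def account_update_params
    params.require(:user).permit(:name, :email, :password,
                                 :password_confirmation, :current_password)
  end
end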
This code overwrites the sign_up_params and account_update_params methods to accept the :name attribute. As you see, those methods are in the Devise RegistrationsController , so we specified it and altered its methods. Now inside our routes we have to specify this controller, so these methods could be overwritten. Inside routes.rb change
devise_for :users
to
devise_for :users, :controllers => {:registrations => "registrations"}
Commit the changes.
git add -A
git commit -m "
- Add the name column to the users table.
- Include name attribute to sign_up_params and account_update_params
methods inside the RegistrationsController"
Open the new.html.erb file:
app/views/devise/registrations/new.html.erb
Again, remove everything except the form. Convert the form into a bootstrap form. This time we add an additional name field.
views/devise/registrations/new.html.erb
Commit the change.
git add -A
git commit -m "
Delete everything from the signup page, except the form.
Convert form into a bootstrap form. Add an additional name field"
Wrap the form with a bootstrap container and add some text.
views/devise/registrations/new.html.erb
Commit the change.
git add -A
git commit -m "
Wrap the sign up form with a bootstrap container.
Add informational text inside the container"
Just like with the login page, it would be better if we could just open a sign up page by going to /signup instead of users/sign_up . Inside the routes.rb file add the following code:
devise_scope :user do
get 'signup', to: 'devise/registrations#new'
end
Commit the change.
git add -A
git commit -m "Change sign up page's route from /users/sign_up to /signup"
Let’s apply a few style changes before we move on. Navigate to app/assets/sytlesheets/partials and create a new signup.scss file. Inside the file add the following CSS:
assets/stylesheets/partials/signup.scss
Also we haven’t imported files from the partials directory inside the application.scss file. Let’s do it right now. Navigate to the application.scss and just above the @import partials/layout/* , import all files from the partials directory. Application.scss should look like this
assets/stylesheets/application.scss
Commit the changes.
git add -A
git commit -m "
- Create a signup.scss and add CSS to the sign up page
- Import all files from partials directory to the application.scss"
Add few other style changes to the overall website look. Navigate to app/assets/stylesheets/base and create a new default.scss file. Inside the file add the following CSS code:
assets/stylesheets/base/default.scss
Here we apply some general style changes for the whole website. font-size is set to 62.5% , so 1 rem unit could represent 10px . If you don’t know what the rem unit is, you may want to read this tutorial. We don’t want to see a label text on bootstrap forms, that’s why we set this:
.control-label {
display: none;
}
You may have noticed that the $backgroundColor variable is used. But this variable isn’t set yet. So let’s do it by opening variables.scss file and adding this:
$backgroundColor: #f0f0f0;
The default.scss file isn’t imported inside the application.scss . Import it below variables, the application.scss file should look like this:
assets/stylesheets/application.scss
Commit the changes.
git add -A
git commit -m "
Add CSS and import CSS files inside the main file - Create a default.scss file and add CSS
- Define $backgroundColor variable
- Import default.scss file inside the application.scss"
Navigation bar update
Right now we have three different pages: home, login and signup. It is a good idea to connect them all together, so users could navigate through the website effortlessly. We’ll put links to signup and login pages on the navigation bar. Navigate to and open the _navigation.html.erb file.
app/views/layouts/_navigation.html.erb
We’re going to add some extra code here. In the future we will add even more code here. This will lead to a file with lots of code, which is hard to manage and test. In order to handle a long code easier, we’re going to start splitting larger code into smaller chunks. To achieve that, we’ll use partials. Before adding extra code, let’s split the current _navigation.html.erb code into partials already.
Let me quickly introduce you how our navigation bar is going to work. We’ll have two major parts. On one part elements will be shown all the time, no matter what the screen size is. On the other part of the navigation bar, elements will be shown only on bigger screens and collapsed on the smaller ones.
This is how the structure inside the .container element will look like:
layouts/_navigation_html.erb
Inside the layouts directory:
app/views/layouts
Create a new navigation directory. Inside this directory create a new partial _header.html.erb file.
app/views/layouts/navigation/_header.html.erb
From the _navigation.html.erb file cut the whole .navbar-header section and paste it inside the _header.html.erb file. Inside the navigation directory, create another partial file named _collapsible_elements.html.erb .
app/views/layouts/navigation/_collapsible_elements.html.erb
From the _navigation.html.erb file cut the whole .navbar-collapse section and paste it inside the _collapsible_elements.html.erb . Now let’s render those two partials inside the _navigation.html.erb file. The file should look like this now.
layouts/_navigation_html.erb
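A sketch of roughly what the file boils down to at this point, keeping the navbar classes introduced earlier and simply rendering the two new partials:

<nav class="navbar navbar-default navbar-fixed-top">
  <div class="container">
    <%= render 'layouts/navigation/header' %>
    <%= render 'layouts/navigation/collapsible_elements' %>
  </div>
</nav>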
If you went to http://localhost:3000 now, you wouldn’t notice any difference. We just cleaned our code a little bit and prepared it for a further development.
We are ready to add some links to the navigation bar. Navigate to and open the _collapsible_elements.html.erb file again:
app/views/layouts/_collapsible_elements.html.erb
Let’s fill this file with links, replace the file’s content with:
layouts/navigation/_collapsible_elements.html.erb
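A hedged sketch of roughly what this file contains at this point; the exact link labels are assumptions, and only the pc-menu variant of each link is shown (the mobile-menu markup is omitted here):

<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="navbar-collapsible-content">
  <ul class="nav navbar-nav navbar-right">
    <% if user_signed_in? %>
      <li class="pc-menu"><%= link_to 'Edit profile', edit_user_registration_path %></li>
      <li class="pc-menu"><%= link_to 'Log out', destroy_user_session_path, method: :delete %></li>
    <% else %>
      <li class="pc-menu"><%= link_to 'Log in', login_path %></li>
      <li class="pc-menu"><%= link_to 'Sign up', signup_path %></li>
    <% end %>
  </ul>
</div>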
Let me briefly explain to you what is going on here. First, at the second line I changed the element’s id to navbar-collapsible-content . This is required in order to make this content collapsible. It’s a Bootstrap feature. The default id was bs-example-navbar-collapse-1 . To trigger this function there’s the button with the data-target attribute inside the _header.html.erb file. Open views/layouts/navigation/_header.html.erb and change the data-target attribute to data-target="#navbar-collapsible-content" . Now the button will trigger the collapsible content.
Next, inside the _collapsible_elements.html.erb file you see some if else logic with the user_signed_in? Devise method. This will show different links based on if a user is signed in, or not. Leaving logic, such as if else statements inside views isn’t a good practice. Views should be pretty “dumb” and just spit the information out, without “thinking” at all. We will refactor this logic later with Helpers.
The last thing to note inside the file is pc-menu and mobile-menu CSS classes. The purpose of these classes is to control how links are displayed on different screen sizes. Let’s add some CSS for these classes. Navigate to app/assets/stylesheets and create a new directory responsive . Inside the directory create two files, desktop.scss and mobile.scss . The purpose of those files is to have different configurations for different screen sizes. Inside the desktop.scss file add:
assets/styleshseets/responsive/desktop.scss
Inside the mobile.scss file add:
assets/styleshseets/responsive/mobile.scss
If you aren’t familiar with CSS media queries, read this. Import files from the responsive directory inside the application.scss file. Import it at the bottom of the file, so the application.scss should look like this:
app/assets/stylesheets/application.scss
Navigate to and open navigation.scss file
app/assets/stylesheets/partials/layout/navigation.scss
and do some stylistic tweaks to the navigation bar by adding the following inside the nav element’s selector:
assets/styleshseets/partials/layout/navigation.scss
And outside the nav element, add the following CSS code:
assets/styleshseets/partials/layout/navigation.scss
At this moment, our application should look like this when a user is not logged in:
Like this when a user is logged in:
And like this when the screen size is smaller:
Commit the changes.
git add -A
git commit -m "
Update the navigation bar - Add login, signup, logout and edit profile links on the navigation bar
- Split _navigation.scss code into partials
- Create responsive directory inside the stylesheets directory and add CSS.
- Add CSS to tweak navigation bar style"
Now we have a basic authentication functionality. It satisfies our needs. So let’s merge authentication branch with the master branch.
git checkout master
git merge authentication
We can see the summary of changes again. Authentication branch is not needed anymore, so delete it.
git branch -D authentication
Helpers
When we were working on the _collapsible_elements.html.erb file, I mentioned that Rails views are not the right place for logic. If you look inside the app directory of the project, you’ll see there’s a directory called helpers. We’ll extract logic from Rails views and put it inside the helpers directory.
Let’s create our first helpers. Firstly, create a new branch and switch to it.
git checkout -B helpers
Navigate to the helpers directory and create a new navigation_helper.rb file
app/helpers/navigation_helper.rb
Inside helper files, helpers are defined as modules. Inside the navigation_helper.rb define the module.
app/helpers/navigation_helper.rb
By default Rails loads all helper files to all views. Personally I do not like this, because methods’ names from different helper files might clash. To override this default behavior open the application.rb file
config/application.rb
Inside the Application class add this configuration
config.action_controller.include_all_helpers = false
Now helpers are available for corresponding controller’s views only. So if we have the PagesController , all helpers inside the pages_helper.rb file will be available to all view files inside the pages directory.
We don’t have the NavigationController , so helper methods defined inside the NavigationHelper module won’t be available anywhere. The navigation bar is available across the whole website. We can include the NavigationHelper module inside the ApplicationHelper . If you aren’t familiar with loading and including files, read through this article to get an idea what is going to happen.
Inside the application_helper.rb file, require the navigation_helper.rb file. Now we have access to the navigation_helper.rb file’s content. So let’s inject the NavigationHelper module inside the ApplicationHelper module by using the include method. The application_helper.rb should look like this:
helpers/application_helper.rb
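A sketch of how the file might look after this change (require_relative is used here for the sake of a self-contained example; the original may load the file differently):

require_relative 'navigation_helper'

module ApplicationHelper
  # Make every NavigationHelper method available across the whole app
  include NavigationHelper
end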
Now NavigationHelper helper methods are available across the whole app.
Navigate to and open the _collapsible_elements.html.erb file
app/views/layouts/navigation/ _collapsible_elements.html.erb
We’re going to split the content inside the if else statements into partials. Create a new collapsible_elements directory inside the navigation directory.
app/views/layouts/navigation/ collapsible_elements
Inside the directory create two files: _signed_in_links.html.erb and _non_signed_in_links.html.erb . Now cut the content from _collapsible_elements.html.erb file’s if else statements and paste it to the corresponding partials. The partials should look like this:
layouts/navigation/collapsible_elements/_signed_in_links.html.erb
layouts/navigation/collapsible_elements/_non_signed_in_links.html.erb
Now inside the _collapsible_elements.html.erb file, instead of if else statements, add the render method with the collapsible_links_partial_path helper method as an argument. The file should look like this
layouts/navigation/_collapsible_elements.html.erb
collapsible_links_partial_path is the method we are going to define inside the NavigationHelper . Open navigation_helper.rb
app/helpers/navigation_helper.rb
and define the method inside the module. The navigation_helper.rb file should look like this:
app/helpers/navigation_helper.rb
The defined method is pretty straightforward. If a user is signed in, return a corresponding partial’s path. If a user is not signed in, return another partial’s path.
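As a reference, here is a sketch of what such a helper could look like, assuming the partial paths match the files created above and Devise’s user_signed_in? helper:

# app/helpers/navigation_helper.rb
module NavigationHelper
  def collapsible_links_partial_path
    if user_signed_in?
      'layouts/navigation/collapsible_elements/signed_in_links'
    else
      'layouts/navigation/collapsible_elements/non_signed_in_links'
    end
  end
end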
We’ve created our first helper method and extracted logic from a view into a helper method. We’re going to do this for the rest of the guide, whenever we encounter logic inside a view file. By doing this we’re doing ourselves a favor: testing and managing the app becomes much easier.
The app should look and function the same.
Commit the changes.
git add -A
git commit -m "Configure and create helpers - Change include_all_helpers config to false
- Split the _collapsible_elements.html.erb file's content into
partials and extract logic from the file into partials"
Merge the helpers branch with the master
git checkout master
git merge helpers
Testing
At this point the application has some functionality. Even though there aren’t many features yet, we already have to spend some time manually testing the app if we want to make sure that everything works. Imagine if the application had 20 times more features than it has now. What a frustration it would be to check that everything works fine every time we change the code. To avoid this frustration and hours of manual testing, we’ll implement automated tests.
Before diving into writing tests, allow me to introduce how and what I test. You can also read through A Guide to Testing Rails Applications to get familiar with the default Rails testing techniques.
What I use for testing
Framework: RSpec
When I started testing my Rails apps, I used the default Minitest framework. Now I use RSpec. I don’t think there’s a good or a bad choice here; both frameworks are great, and which one to use comes down to personal preference. I’ve heard that RSpec is a popular choice in the Rails community, so I decided to give it a shot, and now I use it most of the time.
Sample data: factory_girl
Again, at first I tried the default Rails way, fixtures, to add sample data. I’ve found that this is a different case than with testing frameworks. Which testing framework to choose is largely personal preference; in my opinion that isn’t the case with sample data. At first fixtures were fine, but I noticed that as apps grow larger, controlling sample data with fixtures becomes tough. Maybe I used it in a wrong way. But with factories everything was nice and peaceful right away. No matter if an app is smaller or bigger, the effort to set up sample data is the same.
Acceptance tests: Capybara
By default Capybara uses the rack_test driver. Unfortunately, this driver doesn’t support JavaScript. Instead of Capybara’s default driver, I chose to use poltergeist. It supports JavaScript and in my case it was the easiest driver to set up.
What I test
I test all logic which is written by me. It could be:
Helpers
Models
Jobs
Design Patterns
Any other logic written by me
Besides logic, I wrap my app with acceptance tests using Capybara, to make sure that all of the app’s features work properly by simulating a user’s interaction. To complement these simulation tests, I also use request tests to make sure that all requests return correct responses.
That’s what I test in my personal apps, because it fully satisfies my needs. Obviously, testing standards could be different from person to person and from company to company.
Why weren’t controllers, views and gems mentioned? As many Rails developers say, controllers and views shouldn’t contain any logic, and I agree with them, so there isn’t much to test there. In my opinion, user simulation tests are enough and efficient for views and controllers. And gems are already tested by their creators, so I think that simulation tests are enough to make sure that gems work properly too.
How I test
Of course I try to use the TDD approach whenever possible: write a test first and then implement the code. This makes the development flow much smoother. But sometimes you aren’t sure how the completed feature is going to look or what kind of output to expect. You might be experimenting with the code or just trying different implementation solutions. In those cases, the test-first, implementation-later approach doesn’t really work.
Before (sometimes after, as discussed above) every piece of logic I write, I write an isolated test for it a.k.a. unit test. To make sure that every feature of an app works, I write acceptance (user simulation) tests with Capybara.
Set up a test environment
Before we write our first tests, we have to configure the testing environment.
Open the Gemfile and add those gems to the test group
gem 'rspec-rails', '~> 3.6'
gem 'factory_girl_rails'
gem 'rails-controller-testing'
gem 'headless'
gem 'capybara'
gem 'poltergeist'
gem 'database_cleaner'
As discussed above, rspec gem is a testing framework, factory_girl is for adding sample data, capybara is for simulating a user’s interaction with the app and poltergeist driver gives the JavaScript support for your tests.
You can use another driver which supports JavaScript if it’s easier for you to set up. If you decide to use poltergeist gem, you will need PhantomJS installed. To install PhantomJS read poltergeist docs.
The headless gem is required to support headless drivers; poltergeist is a headless driver, which is why we need this gem. The rails-controller-testing gem is going to be required when we test requests and responses with request specs. More on that later.
database_cleaner is required to clean the test database after tests that execute JavaScript. Normally the test database cleans itself after each test, but at the moment of writing this tutorial, that doesn’t happen automatically for tests which run JavaScript (this might change in the future). That’s why we have to manually configure our test environment to clean the test database after each JavaScript test too. We’ll configure when to run the database_cleaner gem in just a moment.
Now when the purpose of these gems is covered, let’s install them by running:
bundle install
To initialize the spec directory for the RSpec framework run the following:
rails generate rspec:install
Generally speaking, spec means a single test in RSpec framework. When we run our specs, it means that we run our tests.
If you look inside the project directory, you will notice a new directory called spec . This is where we’re going to write tests. Also you may have noticed a directory called test . This is where tests are stored when you use the default testing configuration. We won’t use this directory at all. You can simply remove it from the project c(x_X)b.
As mentioned above, we have to set up the database_cleaner for the tests which include JavaScript. Open the rails_helper.rb file
spec/rails_helper.rb
Change this line
config.use_transactional_fixtures = true
to
config.use_transactional_fixtures = false
and below it add the following code:
spec/rails_helper.rb
I took this code snippet from this tutorial.
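The embedded snippet isn’t shown here, but a typical database_cleaner setup for JavaScript-driven specs looks roughly like this (an assumption based on the gem’s docs, not necessarily the exact snippet from that tutorial):

# spec/rails_helper.rb
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  # Fast transactional cleaning for ordinary specs
  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  # Truncation for specs that run JavaScript in a separate process
  config.before(:each, js: true) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end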
The last thing we have to do is add some configuration. Inside the rails_helper.rb file’s configurations, add the following lines
spec/rails_helper.rb
Let’s breakdown the code a little bit.
With the require methods we load files from the newly added gems, so we can use their methods below.
config.include Devise::Test::IntegrationHelpers, type: :feature
This configuration allows us to use devise methods inside capybara tests. How did I come up with this line? It was provided inside the Devise docs.
config.include FactoryGirl::Syntax::Methods
This configuration allows to use factory_girl gem’s methods. Again, I found this configuration inside the gem’s documentation.
Capybara.javascript_driver = :poltergeist
Capybara.server = :puma
Those two configurations are required in order to be able to test JavaScript with capybara . Always read the documentation first when you want to implement something you don’t know how to do.
The reason I introduced most of the testing gems and configurations at once, instead of gradually as we meet particular problems, is to give you a clear picture of what I use for testing. Now you can always come back to this section and check the majority of the configuration in one place, rather than jumping from one place to another and putting gems and configurations together like puzzle pieces.
Let’s commit the changes and finally get our hands dirty with tests.
git add -A
git commit -m "
Set up the testing environment - Remove test directory
- Add and configure rspec-rails, factory_girl_rails,
rails-controller-testing, headless, capybara, poltergeist,
database_cleaner gems"
Helper specs
You can find general information about each type of spec (test) by reading the RSpec docs and the rspec-rails gem docs. Both are pretty similar, but there are some differences between them.
Create and switch to a new branch:
git checkout -b specs
So far we’ve created only one helper method. Let’s test it.
Navigate to the spec directory and create a new directory called helpers .
spec/helpers
Inside the directory, create a new file navigation_helper_spec.rb
spec/helpers/navigation_helper_spec.rb
Inside the file, write the following code:
spec/helpers/navigation_helper_spec.rb
require ‘rails_helper' gives us access to all testing configurations and methods. :type => :helper treats our tests as helper specs and provides us with specific methods.
This is how the navigation_helper_spec.rb file should look when the collapsible_links_partial_path method is tested.
spec/helpers/navigation_helper_spec.rb
To learn more about context and it , read the basic structure docs. Here we test two cases: when a user is logged in and when a user is not logged in. Each context, signed in user and non-signed in user , has a before hook. Inside the corresponding context, those hooks (methods) run before each of our tests. In our case, before each test we stub the user_signed_in? method, so it returns whatever value we tell it to return.
And finally, with the expect method we check that when we call collapsible_links_partial_path method, we get an expected return value.
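For reference, a sketch of how such a helper spec could be structured; the stubbing uses the allow/receive syntax and the return values assume the partial paths sketched earlier:

# spec/helpers/navigation_helper_spec.rb
require 'rails_helper'

RSpec.describe NavigationHelper, :type => :helper do
  describe '#collapsible_links_partial_path' do
    context 'signed in user' do
      before { allow(helper).to receive(:user_signed_in?).and_return(true) }

      it 'returns the signed in links partial path' do
        expect(helper.collapsible_links_partial_path)
          .to eq 'layouts/navigation/collapsible_elements/signed_in_links'
      end
    end

    context 'non-signed in user' do
      before { allow(helper).to receive(:user_signed_in?).and_return(false) }

      it 'returns the non-signed in links partial path' do
        expect(helper.collapsible_links_partial_path)
          .to eq 'layouts/navigation/collapsible_elements/non_signed_in_links'
      end
    end
  end
end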
To run all tests, simply run:
rspec spec
To run specifically the navigation_helper_spec.rb file, run:
rspec spec/helpers/navigation_helper_spec.rb
If the tests passed, the output should look similar to this:
Commit the changes.
git add -A
git commit -m "Add specs to NavigationHelper's collapsible_links_partial_path method"
Factories
Next, we’ll need some sample data to perform our tests. The factory_girl gem gives us the ability to add sample data very easily, whenever we need it. It also provides good-quality docs, which makes the overall experience pretty pleasant. The only object we can create with our app so far is the User . To define the user factory, create a factories directory inside the spec directory.
spec/factories
Inside the factories directory create a new file users.rb and add the following code:
spec/factories/users.rb
Now within our specs, we can easily create new users inside the test database whenever we need them, using the factory_girl gem’s methods. For a comprehensive guide on how to define and use factories, check out the factory_girl gem’s docs.
Our user factory is pretty straightforward. We defined the values that user objects will have. We also used the sequence method. By reading the docs, you can see that with every additional User record the n value gets incremented by one. I.e. the first created user‘s name is going to be test0 , the second one’s test1 , etc.
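A sketch of what such a factory could look like; the exact attributes (name, email, password) are assumptions based on the Devise User model we generated earlier:

# spec/factories/users.rb
FactoryGirl.define do
  factory :user do
    sequence(:name, 0)  { |n| "test#{n}" }
    sequence(:email, 0) { |n| "test#{n}@example.com" }
    password 'password'
  end
end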
Commit the changes.
git add -A
git commit -m "add a users factory"
Feature specs
In the feature specs we write code which simulates a user’s interaction with an app. Feature specs are powered by the capybara gem.
The good news is that we have everything set up and are ready to write our first feature specs. We’re going to test the login, logout and signup functionalities.
Inside the spec directory, create a new directory called features .
spec/features
Inside the features directory, create another directory called user .
spec/features/user
Inside the user directory, create a new file called login_spec.rb
spec/features/user/login_spec.rb
That’s how the login test looks like:
spec/features/user/login_spec.rb
With this code we simulate a visit to the login page, starting from the home page. Then we fill the form and submit it. Finally, we check if we have the #user-settings element on the navigation bar, which is available only for signed in users.
feature and scenario are part of Capybara’s syntax: feature is the same as context / describe and scenario is the same as it . You can find more info in Capybara’s docs, Using Capybara With RSpec.
The let method allows us to define memoized methods which we can use across all specs within the context in which the method was defined.
Here we also use our created users factory and the create method, which comes with the factory_girl gem.
The js: true argument allows us to test functionality which involves JavaScript.
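A sketch of what such a feature spec could look like; the link text and the field ids assume Devise’s default login form, so adjust them to your markup:

# spec/features/user/login_spec.rb
require 'rails_helper'

feature 'User login', js: true do
  let(:user) { create(:user) }

  scenario 'a registered user logs in from the home page' do
    visit root_path
    click_link 'Login'

    fill_in 'user_email',    with: user.email
    fill_in 'user_password', with: user.password
    click_button 'Log in'

    # The #user-settings element is only rendered for signed in users
    expect(page).to have_css '#user-settings'
  end
end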
As always, to see if a test passes, run a specific file. In this case it is the login_spec.rb file:
rspec spec/features/user/login_spec.rb
Commit the changes.
git add -A
git commit -m "add login feature specs"
Now we can test the logout functionality. Inside the user directory, create a new file named logout_spec.rb
spec/features/user/logout_spec.rb
The implemented test should look like this:
spec/features/user/logout_spec.rb
The code simulates a user clicking the logout button and then expects to see non-logged in user’s links on the navigation bar.
sign_in method is one of the Devise helper methods. We have included those helper methods inside the rails_helper.rb file previously.
Run the file to see if the test passes.
Commit the changes.
git add -A
git commit -m "add logout feature specs"
The last functionality we have is the ability to sign up for a new account. Let’s test it. Inside the user directory create a new file named sign_up_spec.rb . This is how the file with the test inside should look:
spec/features/user/sign_up_spec.rb
We simulate a user navigating to the signup page, filling the form, submitting the form and finally, we expect to see the #user-settings element which is available only for logged in users.
Here we use factory_girl’s build method instead of create . This way we build a new object without saving it to the database.
We can run the whole test suite and see if all tests pass successfully.
rspec spec
Commit the changes.
git add -A
git commit -m "add sign up features specs"
We’re done with our first tests. So let’s merge the specs branch with the master .
git checkout master
git merge specs
Specs branch isn’t needed anymore. Delete it q__o.
git branch -D specs
Main feed
On the home page we’re going to create a posts feed. This feed is going to display all types of posts in a card format.
Start by creating a new branch:
git checkout -b main_feed
Generate a new model called Post .
rails g model post
Then we’ll need a Category model to categorize the posts:
rails g model category
Now let’s create some associations between User , Category and Post models.
Every post is going to belong to a category and its author (user). Open the models’ files and add the associations.
class Post < ApplicationRecord
belongs_to :user
belongs_to :category
end class User < ApplicationRecord
...
has_many :posts, dependent: :destroy
end class Category < ApplicationRecord
has_many :posts
end
The dependent: :destroy option means that when a user gets deleted, all posts that the user has created will be deleted too.
Now we have to define the data columns and associations inside the migration files.
db/migrate/CREATION_DATE_create_posts.rb
db/migrate/CREATION_DATE_create_categories.rb
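The embedded migrations aren’t reproduced here, but they typically look like the sketch below; the content, name and branch columns are assumptions that mirror how the models are used later, while the references follow the associations defined above (the migration version tag should match whatever your generator produced):

# db/migrate/CREATION_DATE_create_posts.rb
class CreatePosts < ActiveRecord::Migration[5.1]
  def change
    create_table :posts do |t|
      t.text :content
      t.references :user, foreign_key: true
      t.references :category, foreign_key: true

      t.timestamps
    end
  end
end

# db/migrate/CREATION_DATE_create_categories.rb
class CreateCategories < ActiveRecord::Migration[5.1]
  def change
    create_table :categories do |t|
      t.string :name
      t.string :branch

      t.timestamps
    end
  end
end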
Now run the migration files:
rails db:migrate
Commit the changes:
git add -A
git commit -m "
- Generate Post and Category models.
- Create associations between User, Post and Category models.
- Create categories and posts database tables."
Specs
We can test the newly created models. Later we’ll need sample data for the tests. Since a post belongs to a category, we also need sample data for categories to set up the associations.
Create a category factory inside the factories directory.
spec/factories/categories.rb
spec/factories/categories.rb
Create a post factory inside the factories directory
spec/factories/posts.rb
spec/factories/posts.rb
As you can see, it’s very easy to set up associations for factories. All we had to do to set up the user and category associations for the post factory was to write the factories’ names inside the post factory.
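A sketch of what these two factories could look like; the content, name and branch attributes are assumptions that simply mirror the columns used elsewhere in the tutorial:

# spec/factories/categories.rb
FactoryGirl.define do
  factory :category do
    sequence(:name) { |n| "category#{n}" }
    branch 'hobby'
  end
end

# spec/factories/posts.rb
FactoryGirl.define do
  factory :post do
    sequence(:content) { |n| "Post content #{n}" }
    user      # builds the associated user via the :user factory
    category  # builds the associated category via the :category factory
  end
end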
Commit the changes.
git add -A
git commit -m "Add post and category factories"
For now we’ll only test the associations, because that’s the only thing we’ve written inside the models so far.
Open the post_spec.rb
spec/models/post_spec.rb
Add specs for the associations, so the file should look like this:
spec/models/post_spec.rb
We use the described_class method to get the current context’s class, which is basically the same as writing Post in this case. Then we use the reflect_on_association method to check that it returns the correct association.
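As a reference, such an association spec could be sketched like this:

# spec/models/post_spec.rb
require 'rails_helper'

RSpec.describe Post, type: :model do
  context 'Associations' do
    it 'belongs to a user' do
      expect(described_class.reflect_on_association(:user).macro).to eq :belongs_to
    end

    it 'belongs to a category' do
      expect(described_class.reflect_on_association(:category).macro).to eq :belongs_to
    end
  end
end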
Do the same for other models.
spec/models/category_spec.rb
spec/models/user_spec.rb
Commit the changes.
git add -A
git commit -m "Add specs for User, Category, Post models' associations"
Home page layout
Currently the home page has nothing inside, only the dummy text “Home page”. It’s time to create its layout with bootstrap. Open the home page’s view file views/pages/index.html.erb and replace the file’s content with the following code to create the page’s layout:
views/pages/index.html.erb
Now add some CSS to define elements’ style and responsive behavior.
Inside the stylesheets/partials directory create a new file home_page.scss
assets/stylesheets/partials/home_page.scss
In the file add the following CSS:
assets/stylesheets/partials/home_page.scss
Inside the mobile.scss file’s max-width: 767px media query add:
assets/stylesheets/responsive/mobile.scss
Now the home page should look like this on bigger screens
and like this on the smaller screens
Commit the changes.
git add -A
git commit -m "
- Add the bootstrap layout to the home page
- Add CSS to make home page layout's stylistic and responsive design changes"
Seeds
To display posts on the home page, at first we need to have them inside the database. Creating data manually is boring and time consuming. To automate this process, we’ll use seeds. Open the seeds.rb file.
db/seeds.rb
Add the following code:
db/seeds.rb
As you see, we create seed_users , seed_categories and seed_posts methods to create User , Category and Post records inside the development database. Also the faker gem is used to generate dummy text. Add faker gem to your Gemfile
gem 'faker'
and
bundle install
To seed data, using the seeds.rb file, run a command
rails db:seed
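The embedded seeds file isn’t shown here, but the approach described above could be sketched roughly like this; the counts, branch names and Faker calls are assumptions:

# db/seeds.rb
def seed_users(count)
  count.times do |n|
    User.create!(name: "user#{n}", email: "user#{n}@example.com", password: 'password')
  end
end

def seed_categories
  %w(hobby study team).each do |branch|
    3.times { Category.create!(name: Faker::Lorem.word, branch: branch) }
  end
end

def seed_posts(count)
  count.times do
    Post.create!(content: Faker::Lorem.paragraph,
                 user: User.all.sample,
                 category: Category.all.sample)
  end
end

seed_users(10)
seed_categories
seed_posts(100)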
Commit the changes.
git add -A
git commit -m "
- Add faker gem
- Inside the seeds.rb file create methods to generate
User, Category and Post records inside the development database"
Rendering the posts
To render the posts, we’ll need a posts directory inside the views.
Generate a new controller called Posts , so it will automatically create a posts directory inside the views too.
rails g controller posts
Since in our app the PagesController is responsible for the homepage, we’ll need to query data inside the pages_controller.rb file’s index action. Inside the index action retrieve some records from the posts table. Assign the retrieved records to an instance variable, so the retrieved objects are going to be available inside the home page’s views.
If you aren’t familiar with ruby variables, read this guide.
If you aren’t familiar with retrieving records from the database in Rails, read the Active Record Query Interface guide.
The index action should look something like this right now:
controllers/pages_controller.rb
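The embedded action isn’t reproduced here; a minimal version could look like this, where the number of retrieved posts is an arbitrary assumption:

# app/controllers/pages_controller.rb (other code omitted)
class PagesController < ApplicationController
  def index
    # Make a handful of posts available to the home page views
    @posts = Post.limit(30)
  end
end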
Navigate to the home page’s template
views/pages/index.html.erb
and inside the .main-content element add
<%= render @posts %>
This will render all posts, which were retrieved inside the index action. Because post objects belong to the Post class, Rails automatically tries to render the _post.html.erb partial template which is located
views/posts/_post.html.erb
We haven’t created this partial file yet, so create it and add the following code inside:
views/posts/_post.html.erb
I’ve used a bootstrap card component here to achieve the desired style. Then I just stored post’s content and its path inside the element. Also I added a link which will lead to the full post.
So far we didn’t define any routes for posts. We need them right now, so let’s declare them. Open the routes.rb file and add the following code inside the routes:
routes.rb
Here I’ve used the resources method to declare routes for the index , show , new , edit , create , update and destroy actions. Then I’ve declared some custom collection routes to access pages with multiple Post instances. These pages are going to be dedicated to separate branches; we’ll create them later.
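A sketch of what those route declarations could look like; the rest of the routes file (root, devise_for, etc.) is omitted:

# config/routes.rb (other routes omitted)
Rails.application.routes.draw do
  resources :posts do
    collection do
      get 'hobby'   # generates hobby_posts_path
      get 'study'   # generates study_posts_path
      get 'team'    # generates team_posts_path
    end
  end
end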
Restart the server and go to http://localhost:3000. You should see rendered posts on the screen. The application should look similar to this:
Commit the changes.
git add -A
git commit -m "Display posts on the home page - Generate Posts controller and create an index action.
Inside the index action retrieve Post records
- Declare routes for posts
- Create a _post.html.erb partial inside posts directory
- Render posts inside the home page's main content"
To start styling posts, create a new scss file inside the partials directory:
assets/stylesheets/partials/posts.scss
and inside the file add the following CSS:
assets/stylesheets/partials/posts.scss
The home page should look similar to this:
Commit the change.
git add -A
git commit -m "Create a posts.scss file and add CSS to it"
Styling with JavaScript
Currently the site’s design is pretty dull. To create contrast, we’re going to color the posts. But instead of just coloring it with CSS, let’s color them with different color patterns every time a user refreshes the website. To do that we’ll use JavaScript. It’s probably a silly idea, but it’s fun c(o_u)?
Navigate to the javascripts directory inside your assets and create a new directory called posts . Inside the directory create a new file called style.js . Also, if you want, you can delete the .coffee files generated by default inside the javascripts directory. We won’t use CoffeeScript in this tutorial.
assets/javascripts/posts/style.js
Inside the style.js file add the following code.
assets/javascripts/posts/style.js
With this piece of code we randomly set one of two style modes when a browser gets refreshed, by adding attributes to posts. One style has colored borders only, another style has solid color posts. With every page change and browser refresh we also recolor posts randomly too. Inside the randomColorSet() function you can see predefined color schemes.
The mouseenter and mouseleave event handlers are going to be needed in the future for posts on specific pages, where the posts’ style is going to be different from the posts on the home page. When you hover over a post, it will slightly change its bottom border’s color. You’ll see this later.
Commit the changes.
git add -A
git commit -m "Create a style.js file and add js to create posts' style"
To complement the styling, add some CSS. Open the posts.scss file
assets/stylesheets/partials/posts.scss
and add the following CSS:
assets/stylesheets/partials/posts.scss
Also inside the mobile.scss add the following code to fix too large text issues on smaller screens:
assets/stylesheets/responsive/mobile.scss
The home page should look similar to this right now:
Commit the changes
git add -A
git commit -m "Add CSS to posts on the home page - add CSS to the posts.scss file
- add CSS to the mobile.scss to fix too large text issues on smaller screens"
Modal window
I want to be able to click on a post and see its full content, without going to another page. To achieve this functionality I’ll use a bootstrap’s modal component.
Inside the posts directory, create a new partial file _modal.html.erb
views/posts/_modal.html.erb
and add the following code:
views/posts/_modal.html.erb
This is just a slightly modified bootstrap’s component to accomplish this particular task.
Render this partial at the top of the home page’s template.
views/pages/index.html.erb
To make this modal window functional, we have to add some JavaScript. Inside the posts directory, create a new file modal.js
assets/javascripts/posts/modal.js
Inside the file, add the following code:
assets/javascripts/posts/modal.js
With this js code we simply store the selected post’s data in variables and fill the modal window’s elements with this data. Finally, with the last line of code we make the modal window visible.
To enhance the modal window’s looks, add some CSS. But before adding CSS, let’s do a quick management task inside the stylesheets directory.
Inside the partials directory create a new directory called posts
assets/stylesheets/partials/posts
Inside the posts directory create a new file home_page.scss . Cut all code from the posts.scss file and paste it inside the home_page.scss file. Delete the posts.scss file. We’re doing this for better CSS code management. It’s clearer when we have a few smaller CSS files, each with a distinguishable purpose, rather than one big file where everything is mashed together.
Also inside the posts directory, create a new file modal.scss and add the following CSS:
assets/stylesheets/partials/posts/modal.scss
Now when we click on the post, the application should look like this:
Commit the changes.
git add -A
git commit -m "Add a popup window to show a full post's content - Add bootstrap's modal component to show full post's content
- Render the modal inside the home page's template
- Add js to fill the modal with post's content and show it
- Add CSS to style the modal"
Also merge the main_feed branch with the master
git checkout master
git merge main_feed
Get rid of the main_feed branch
git branch -D main_feed
Single post
Switch to a new branch
git checkout -b single_post
Show a single post
If you try to click on the I'm interested button, you will get an error. We haven’t created a show.html.erb template, nor have we created a corresponding controller action. By clicking the button, I want to be redirected to the selected post’s page.
Inside the PostsController , create a show action and then query and store a specific post object inside an instance variable:
controllers/posts_controller.rb
The I'm interested button redirects to the selected post. It has an href attribute with a path to a post. When we send a GET request for a post, Rails calls the show action. Inside the show action we have access to the id param, because by sending a GET request for a specific post we provided its id . I.e. by going to the /posts/1 path, we would send a request to get the post whose id is 1 .
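A minimal sketch of that action:

# app/controllers/posts_controller.rb (other code omitted)
def show
  @post = Post.find(params[:id])
end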
Create a show.html.erb template inside the posts directory
views/posts/show.html.erb
Inside the file add the following code:
views/posts/show.html.erb
Create a show.scss file inside the posts directory and add CSS to style the page’s look:
assets/stylesheets/partials/posts/show.scss
Here I defined the page’s height to be 100vh-50px , so the page’s content takes up the full viewport height. It allows the container to be colored white across the full browser height, no matter whether there is enough content inside the element or not. The vh unit means viewport height, so a 100vh value means the element is stretched to 100% of the viewport’s height. Subtracting the 50px is required to account for the navigation bar’s height; otherwise the container would be stretched 50px too far.
If you click on the I'm interested button now, you will be redirected to a page which looks similar to this:
We’ll add extra features to the show.html.erb template later. Now commit the changes.
git add -A
git commit -m "Create a show template for posts - Add a show action and query a post to an instance variable
- Create a show.scss file and add CSS"
Specs
Instead of manually checking that this functionality (the modal window appearing and the redirection to a selected post) works, let’s wrap it all with specs. We’re going to use capybara to simulate a user’s interaction with the app.
Inside the features directory, create a new directory called posts
spec/features/posts
Inside the new directory, create a new file visit_single_post_spec.rb
spec/features/posts/visit_single_post_spec.rb
And add a feature spec inside. The file looks like this:
spec/features/posts/visit_single_post_spec.rb
Here I defined all steps which I would perform manually. I start by going to the home page, click on the post, expect to see the popped up modal window, click on the I'm interested button, and finally, expect to be redirected to the post’s page and see its content.
By default, RSpec matchers like have_selector , have_css , etc. return true only if an element is actually visible to a user. So after a post is clicked, the testing framework expects to see a visible modal window. If you don’t care whether a user sees an element and you just care about the element’s presence in the DOM, pass an additional visible: false argument.
Try to run the test
rspec spec/features/posts/visit_single_post_spec.rb
Commit the changes.
git add -A
git commit -m "Add a feature spec to test if a user can go to a
single post from the home page"
Merge the single_post branch with the master .
git checkout master
git merge single_post
git branch -D single_post
Specific branches
Every post belongs to a particular branch. Let’s create specific pages for different branches.
Switch to a new branch
git checkout -b specific_branches
Home page’s side menu
Start by updating the home page’s side menu. Add links to specific branches. Open the index.html.erb file:
views/pages/index.html.erb
We are going to put some links inside the #side-menu element. Split file’s content into partials, otherwise it will get noisy very quickly.
Cut #side-menu and #main-content elements, and paste them into separate partial files. Inside the pages directory create an index directory, and inside the directory create corresponding partial files to the elements. The files should look like this:
views/pages/index/_side_menu.html.erb
views/pages/index/_main_content.html.erb
Render those partial files inside the home page’s template. The file should look like this:
views/pages/index.html.erb
Commit the changes.
git add -A
git commit -m "Split home page template's content into partials"
Inside the _side_menu.html.erb partial add a list of links, so the file should look like this:
views/pages/index/_side_menu.html.erb
An unordered list was added. Inside the list we render another partial with links. Those links are going to be available for all users, no matter if they are signed in or not. Create this partial file and add the links.
Inside the index directory create a side_menu directory:
views/pages/index/side_menu
Inside the directory create a _no_login_required_links.html.erb partial with the following code:
views/pages/index/side_menu/_no_login_required_links.html.erb
Here we simply added links to specific branches of posts. If you are wondering how we have paths such as hobby_posts_path , etc., look at the routes.rb file. Previously we added nested collection routes inside the resources :posts declaration.
If you pay attention to i elements’ attributes, you will notice fa classes. With those classes we declare Font Awesome icons. We haven’t set up this library yet. Fortunately, it’s very easy to set up. Inside the main application.html.erb file’s head element, add the following line
The side menu should be present now.
Commit the changes.
git add -A
git commit -m "Add links to the home page's side menu"
On smaller screens, where the width is between 767px and 1000px , bootstrap’s container looks unpleasant and overly compressed, so stretch it across those widths. Inside the mobile.scss file, add the following code:
assets/stylesheets/responsive/mobile.scss
Commit the change.
git add -A
git commit -m "set .container width to 100%
when viewport's width is between 767px and 1000px"
Branch page
If you try to click on one of those side menu links, you will get an error. We haven’t set up actions inside the PostsController , nor have we created any templates for them.
Inside the PostsController , define hobby , study , and team actions.
controllers/posts_controller.rb
Inside every action, posts_for_branch method is called. This method will return data for the specific page, depending on the action’s name. Define the method inside the private scope.
contorllers/posts_controller.rb
In the @categories instance variable we retrieve all categories for a specific branch. I.e. if you go to the hobby branch page, all categories which belong to the hobby branch will be retrieved.
To get and store posts inside the @posts instance variable, the get_posts method is used and then chained with the paginate method. The paginate method comes from the will_paginate gem. Let’s start by defining the get_posts method. Inside the PostsController ’s private scope add:
controllers/posts_controller.rb
Right now the get_posts method just retrieves any 30 posts, not specific to anything, so we can move on and focus on further development. We’ll come back to this method in the near future.
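Putting the pieces described above together, the controller could be sketched roughly like this; the branch column lookup and the temporary Post.limit(30) query are assumptions that match the description, not necessarily the exact code:

# app/controllers/posts_controller.rb (other code omitted)
def hobby
  posts_for_branch
end

def study
  posts_for_branch
end

def team
  posts_for_branch
end

private

def posts_for_branch
  # action_name is "hobby", "study" or "team" depending on the request
  @categories = Category.where(branch: action_name)
  @posts = get_posts.paginate(page: params[:page])
end

def get_posts
  # Temporary placeholder query; refined later in the tutorial
  Post.limit(30)
end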
Add the will_paginate gem to be able to use pagination.
gem 'will_paginate', '~> 3.1.0'
run
bundle install
All we’re missing now is templates. They are going to be similar for all branches, so instead of repeating the code inside each of them, create a partial with the general structure for a branch. Inside the posts directory create a _branch.html.erb file.
posts/_branch.html.erb
First you see a page_title variable being printed on the page. We’ll pass this variable as an argument when we render the _branch.html.erb partial. Next, a _create_new_post partial is rendered to display a link which will lead to a page where a user can create a new post. Create this partial file inside a new branch directory:
posts/branch/_create_new_post.html.erb
Here we’ll use a create_new_post_partial_path helper method to determine which partial file to render. Inside the posts_helper.rb file, implement the method:
helpers/posts_helper.rb
Also create those two corresponding partials inside a new create_new_post directory:
posts/branch/create_new_post/_signed_in.html.erb
posts/branch/create_new_post/_not_signed_in.html.erb
Next, inside the _branch.html.erb file we render a list of categories. Create a _categories.html.erb partial file:
posts/branch/_categories.html.erb
Inside the file, we have a all_categories_button_partial_path helper method which determines which partial file to render. Define this method inside the posts_helper.rb file:
helpers/posts_helper.rb
All categories are going to be selected by default. If params[:category] is empty, it means that no category was selected by the user, which means the default value all is currently selected. Create the corresponding partial files:
posts/branch/categories/_all_selected.html.erb
posts/branch/categories/_all_not_selected.html.erb
The send method is used here to call a method by name from a string, which allows us to be flexible and call methods dynamically. In our case we generate different paths depending on the current controller action.
Next, inside the _branch.html.erb file we render posts and call the no_posts_partial_path helper method. If posts are not found, the method will display a message.
Inside the posts_helper.rb add the helper method:
helpers/posts_helper.rb
Here I use a ternary operator, so the code looks a little bit cleaner. If there are any posts, I don’t want to show any message. Since you cannot pass an empty string to the render method, I pass a path to an empty partial instead in cases where I don’t want to render anything.
Create a shared directory inside the views and then create an empty partial:
views/shared/_empty_partial.html.erb
Now create a _no_posts.html.erb partial for the message inside the branch directory.
posts/branch/_no_posts.html.erb
Finally, we use the will_paginate method from the gem to split posts into multiple pages if there are a lot of posts.
Create templates for hobby , study and team actions. Inside them we’ll render the _branch.html.erb partial file and pass specific local variables.
posts/hobby.html.erb
posts/study.html.erb
posts/team.html.erb
If you go to any of those branch pages, you will see something like this
Also if you scroll down, you will see that now we have a pagination
We’ve done quite a lot of work to create these branch pages. Commit the changes
git add -A
git commit -m "Create branch pages for specific posts - Inside the PostsController define hobby, study and team actions.
Define a posts_for_branch method and call it inside these actions
- Add will_paginate gem
- Create a _branch.html.erb partial file
- Create a _create_new_post.html.erb partial file
- Define a create_new_post_partial_path helper method
- Create a _signed_in.html.erb partial file
- Create a _not_signed_in.html.erb partial file
- Create a _categories.html.erb partial file
- Define a all_categories_button_partial_path helper method
- Create a _all_selected.html.erb partial file
- Create a _all_not_selected.html.erb partial file
- Define a no_posts_partial_path helper method
- Create a _no_posts.html.erb partial file
- Create a hobby.html.erb template file
- Create a study.html.erb template file
- Create a team.html.erb template file"
Specs
Cover helper methods with specs. The posts_helper_spec.rb file should look like this:
spec/helpers/posts_helper_spec.rb
Again, the specs are pretty simple here. I used the stub method to define the methods’ return values. To define params, I selected the controller and simply set them like this: controller.params[:param_name] . And finally, I assigned instance variables by using the assign method.
Commit the changes
git add -A
git commit -m "Add specs for PostsHelper methods"
Design changes
On these branch pages we want a different post design. On the home page we have the card design. On branch pages let’s create a list design, so a user can see more posts and browse through them more efficiently.
Inside the posts directory, create a post directory with a _home_page.html.erb partial inside.
posts/post/_home_page.html.erb
Cut the _post.html.erb partial’s content and paste it inside the _home_page.html.erb partial file. Inside the _post.html.erb partial file add the following line of code:
posts/_post.html.erb
Here we call the post_format_partial_path helper method to decide which post design to render, depending on the current path. If a user is on the home page, render the post design for the home page. If a user is on the branch page, render the post design for the branch page. That’s why we cut _post.html.erb file’s content into _home_page.html.erb file.
Inside the post directory, create a new _branch_page.html.erb file and paste this code to define the posts design for the branch page.
posts/post/_branch_page.html.erb
To decide which partial file to render, define the post_format_partial_path helper method inside the posts_helper.rb
helpers/posts_helper.rb
The post_format_partial_path helper method won’t be available on the home page, because we render posts inside the home page’s template, which belongs to a different controller. To have access to this method inside the home page’s template, include PostsHelper inside the ApplicationHelper
include PostsHelper
Specs
Add specs for the post_format_partial_path helper method:
helpers/posts_helper_spec.rb
Commit the changes
git add -A
git commit -m "Add specs for the post_format_partial_path helper method"
CSS
Describe the posts style in branch pages with CSS. Inside the posts directory, create a new branch_page.scss style sheet file:
stylesheets/partials/posts/branch_page.scss
Inside the base/default.scss add:
assets/stylesheets/base/default.scss
To fix style issues on smaller devices, inside the responsive/mobile.scss add:
assets/stylesheets/responsive/mobile.scss
Now the branch pages should look like this:
Commit the changes.
git add -A
git commit -m "Describe the posts style in branch pages - Create a branch_page.scss file and add CSS
- Add CSS to the default.scss file
- Add CSS to the mobile.scss file"
Search bar
We want to be able not only to browse through posts, but also to search for specific ones. Inside the _branch.html.erb partial file, above the categories row, add:
posts/_branch.html.erb
Create a _search_form.html.erb partial file inside the branch directory and add the following code inside:
posts/branch/_search_form.html.erb
Here, with the send method, we dynamically generate a path to a specific PostsController action, depending on the current branch. We also send an extra data field for the category if a specific category is selected. If a user has selected a specific category, only search results from that category will be returned.
Define the category_field_partial_path helper method inside the posts_helper.rb
helpers/posts_helper.rb
Create a _category_field.html.erb partial file and add the code:
posts/branch/search_form/_category_field.html.erb
To give the search form some style, add CSS to the branch_page.scss file:
assets/stylesheets/partials/posts/branch_page.scss
The search form on branch pages should now look like this
Commit the changes
git add -A
git commit -m "Add a search form in branch pages - Render a search form inside the _branch.html.erb
- Create a _search_form.html.erb partial file
- Define a category_field_partial_path helper method in PostsHelper
- Create a _category_field.html.erb partial file
- Add CSS for the the search form in branch_page.scss"
Currently our form isn’t really functional. We could use some gems to achieve search functionality, but our data isn’t complicated, so we can create our own simple search engine. We’ll use scopes inside the Post model to make queries chainable and some conditional logic inside the controller (we will extract it into a service object in the next section to make the code cleaner).
Start by defining scopes inside the Post model. To warm up, define the default_scope inside the post.rb file. This orders posts in descending order by creation date, so the newest posts are at the top.
models/post.rb
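The embedded snippet isn’t shown here; assuming ordering by created_at, the scope could look like this:

# app/models/post.rb (associations omitted)
class Post < ApplicationRecord
  # Newest posts first
  default_scope -> { order(created_at: :desc) }
end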
Commit the change
git add -A
git commit -m "Define a default_scope for posts"
Make sure that the default_scope works correctly by wrapping it with a spec. Inside the post_spec.rb file, add:
spec/models/post_spec.rb
Commit the change:
git add -A
git commit -m "Add a spec for the Post model's default_scope"
Now let’s make the search bar functional. Inside the posts_controller.rb replace the get_posts method’s content with:
controllers/posts_controller.rb
As I’ve mentioned a little bit earlier, controllers, just like views, aren’t a good place for logic. We want to keep them clean, so we’ll extract the logic out of this method in the upcoming section.
As you see, there is some conditional logic going on. Depending on a user request, data gets queried differently using scopes.
Inside the Post model, define those scopes:
models/post.rb
The joins method is used to query records from the associated tables. Also the basic SQL syntax is used to find records, based on provided strings.
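A sketch of what those scopes could look like; the column names (categories.branch, categories.name, posts.content) are assumptions based on how the models are used in this tutorial:

# app/models/post.rb (other code omitted)
scope :by_branch, ->(branch) {
  joins(:category).where(categories: { branch: branch })
}

scope :by_category, ->(category_name) {
  joins(:category).where(categories: { name: category_name })
}

scope :search, ->(term) {
  # Case-insensitive match against the post's content
  where('lower(posts.content) LIKE ?', "%#{term.to_s.downcase}%")
}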
Now if you restart the server and go back to any of those branch pages, the search bar should work! You can also filter posts by clicking on the category buttons. And when you select a particular category, only posts from that category are queried when you use the search form.
Commit the changes
git add -A
git commit -m "Make search bar and category filters
in branch pages functional - Add by_category, by_branch and search scopes in the Post model
- Modify the get_posts method in PostsController"
Cover these scopes with specs. Inside the post_spec.rb file’s Scopes context add:
spec/models/post_spec.rb
Commit the changes
git add -A
git commit -m "Add specs for Post model's
by_branch, by_category and search scopes"
Infinite scroll
When you go to any of these branch pages, at the bottom of the page you see the pagination
When you click on the next link, it redirects you to another page with older posts. Instead of redirecting to another page, we can build infinite scrolling functionality, similar to the Facebook and Twitter feeds. You just scroll down and, without any redirection or page reload, older posts are appended to the bottom of the list. Surprisingly, it is very easy to achieve. All we have to do is write some JavaScript. Whenever a user reaches the bottom of the page, an AJAX request is sent to get data from the next page and that data gets appended to the bottom of the list.
Start by configuring the AJAX request and its conditions. When a user passes a certain threshold by scrolling down, AJAX request gets fired. Inside the javascripts/posts directory, create a new infinite_scroll.js file and add the code:
assets/javascripts/posts/infinite_scroll.js
The isLoading variable makes sure that only one request is sent at a time. If there is currently a request in progress, other requests won’t be initiated.
First we check if pagination is present, i.e. if there are any more posts to render. Next, we get a link to the next page; this is where the data will be retrieved from. Then we set a threshold for when to fire the AJAX request, in this case 60px from the bottom of the window. Finally, if all conditions pass, we load data from the next page using the getScript() function.
Because the getScript() function loads the JavaScript file, we have to specify which file to render inside the PostsController . Inside the posts_for_branch method specify respond_to formats and which files to render.
controllers/posts_controller.rb
When the controller tries to respond with the .js file, the posts_pagination_page template gets rendered. This partial file appends newly retrieved posts to the list. Create this file to append new posts and update the pagination element.
posts/_posts_pagination_page.js.erb
Create an update_pagination_partial_path helper method inside the posts_helper.rb
helpers/posts_helper.rb
Here the next_page method from the will_paginate gem is used, to determine if there are any more posts to load in the future or not.
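A sketch of what such a helper could look like, assuming the partial paths match the files created below:

# app/helpers/posts_helper.rb (other methods omitted)
def update_pagination_partial_path
  if @posts.next_page
    'posts/posts_pagination_page/update_pagination'
  else
    'posts/posts_pagination_page/remove_pagination'
  end
end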
Create the corresponding partial files:
posts/posts_pagination_page/_update_pagination.js.erb
posts/posts_pagination_page/_remove_pagination.js.erb
If you go to any of the branch pages and scroll down, older posts should be automatically appended to the list.
Also we no longer need to see the pagination menu, so hide it with CSS. Inside the branch_page.scss file add:
stylesheets/partials/posts/branch_page.scss
Commit the changes
git add -A
git commit -m "Transform posts pagination into infinite scroll - Create an infinite_scroll.js file
- Inside PostController's posts_for_branch method add respond_to format
- Define an update_pagination_partial_path
- Create _update_pagination.js.erb and _remove_pagination.js.erb partials
- hide the .infinite-scroll element with CSS"
Specs
Cover the update_pagination_partial_path helper method with specs:
spec/helpers/post_helper_spec.rb
Here I’ve used a test double to simulate the posts instance variable and its chained method next_page . You can learn more about the RSpec Mocks here.
Commit the changes:
git add -A
git commit -m "Add specs for the update_pagination_partial_path
helper method"
We can also write feature specs to make sure that posts are successfully appended, after you scroll down. Create an infinite_scroll_spec.rb file:
spec/features/posts/infinite_scroll_spec.rb
In the spec file all branch pages are covered. We make sure that this functionality works on all three pages. per_page is a will_paginate gem method. Here it is called on the Post model to set the default number of posts per page.
The check_posts_count method is defined to reduce the amount of code the file has. Instead of repeating the same code over and over again in different specs, we extracted it into a single method. Once the page is visited, it is expected to see 15 posts. Then the execute_script method is used to run JavaScript, which scrolls the scrollbar to the browser’s bottom. Finally, after the scroll, it is expected to see an additional 15 posts. Now in total there should be 30 posts on the page.
Commit the changes:
git add -A
git commit -m "Add feature specs for posts' infinite scroll functionality"
Home page update
Currently on the home page we can only see a few random posts. Let’s modify the home page so we can see a few posts from every branch.
Replace the _main_content.html.erb file’s content with:
pages/index/_main_content.html.erb
We created sections with posts for every branch.
Define instance variables inside the PagesController ’s index action. The action should look like this:
controllers/pages_controller.rb
We have the no_posts_partial_path helper method from before, but we should modify it a little bit and make it more reusable. Currently it works only for branch pages. Add a posts parameter to the method, so it should look like this now:
helpers/posts_helper.rb
Here the posts parameter was added, the instance variable was changed to a local variable and the partial’s path was changed too. So move the _no_posts.html.erb partial file from
posts/branch/_no_posts.html.erb
to
posts/shared/_no_posts.html.erb
Also inside the _branch.html.erb file pass the @posts instance variable to the no_posts_partial_path method as an argument.
Add some style changes. Inside the default.scss file add:
assets/stylesheets/base/default.scss
And inside the home_page.scss add:
assets/stylesheets/partials/home_page.scss
The home page should look similar to this right now
Commit the changes
git add -A
git commit -m "Add posts from all branches in the home page - Modify the _main_content.html.erb file
- Define instance variables inside the PagesController ’s index action
- Modify the no_posts_partial_path helper method to be more reusable
- Add CSS to style the home page"
Service objects
As I’ve mentioned before, if you put logic inside controllers, they become complicated very quickly and a huge pain to test. That’s why it’s a good idea to extract logic from them somewhere else. To do that I use design patterns, service objects (services) to be more specific.
Right now inside the PostsController , we have this method:
controllers/posts_controller.rb
It has a lot of conditional logic which I want to remove by using services. The service object (service) design pattern is just a plain Ruby class. It’s very simple: we pass in the data we want to process and call a defined method to get the desired return value.
In Ruby we pass data to a class’s initialize method; in other languages it’s known as the constructor . Then inside the class, we create a method which handles all of the defined logic. Let’s create that and see how it looks in code.
Inside the app directory, create a new services directory:
app/services
Inside the directory, create a new posts_for_branch_service.rb file:
services/posts_for_branch_service.rb
Here, as described above, it is just a plain ruby class with an initialize method to accept parameters and a call method to handle the logic. We took this logic from the get_posts method.
Now simply create a new object of this class and call the call method inside the get_posts method. The method should look like this right now:
controllers/posts_controller.rb
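The embedded files aren’t reproduced here, but the idea could be sketched like this; the parameter names and the way params are read are assumptions that mirror the earlier conditional logic:

# app/services/posts_for_branch_service.rb
class PostsForBranchService
  def initialize(branch:, category: nil, search: nil)
    @branch = branch
    @category = category
    @search = search
  end

  def call
    posts = Post.by_branch(@branch)
    posts = posts.by_category(@category) if @category.present?
    posts = posts.search(@search) if @search.present?
    posts
  end
end

# app/controllers/posts_controller.rb (other code omitted)
def get_posts
  PostsForBranchService.new(branch: action_name,
                            category: params[:category],
                            search: params[:search]).call
end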
Commit the changes:
git add -A
git commit -m "Create a service object to extract logic
from the get_posts method"
Specs
A fortunate thing about design patterns like services is that it’s easy to write unit tests for them. We can simply write specs for the call method and test each of its conditions.
Inside the spec directory create a new services directory:
spec/services
Inside the directory create a new file posts_for_branch_service_spec.rb
spec/services/posts_for_branch_service_spec.rb
At the top of the file, the posts_for_branch_service.rb file is loaded and then each of the call method’s conditions are tested.
Commit the changes
git add -A
git commit -m "Add specs for the PostsForBranchService"
Create a new post
Until now posts were created artificially, by using seeds. Let’s add a user interface for it, so a user can create posts.
Inside the posts_controller.rb file add new and create actions.
controllers/posts_controller.rb
Inside the new action, we define some instance variables for the form that creates new posts. Inside the @categories instance variable, categories for a specific branch are stored. The @post instance variable stores a new post object; this is needed for the Rails form.
Inside the create action’s @post instance variable, we create a new Post object and fill it with data, using the post_params method. Define this method within the private scope:
controllers/posts_controller.rb
The permit method is used to whitelist attributes of the object, so only the specified attributes are allowed to be passed.
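A sketch of how these pieces could fit together; the permitted attributes, the way the branch is passed to new and the redirect behavior are assumptions:

# app/controllers/posts_controller.rb (other code omitted)
def new
  @post = Post.new
  @categories = Category.where(branch: params[:branch])
end

def create
  @post = Post.new(post_params)
  @post.user = current_user

  if @post.save
    redirect_to @post
  else
    render :new
  end
end

private

def post_params
  params.require(:post).permit(:content, :category_id)
end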
Also at the top of the PostsController , add the following line:
controllers/posts_controller.rb
The before_action is one of the Rails filters. We don’t want non-signed-in users to have access to the page where they can create new posts, so before calling the new action, the redirect_if_not_signed_in method is called. We’ll need this method in other controllers too, so define it inside the application_controller.rb file. A method to redirect signed-in users will also be useful in the future, so define them both.
controllers/application_controller.rb
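A sketch of those two methods and the filter; the redirect targets (Devise’s login page and the root path) and the exact action list in the filter are assumptions:

# app/controllers/posts_controller.rb (at the top of the class)
before_action :redirect_if_not_signed_in, only: [:new, :create]

# app/controllers/application_controller.rb (other code omitted)
def redirect_if_not_signed_in
  redirect_to new_user_session_path unless user_signed_in?
end

def redirect_if_signed_in
  redirect_to root_path if user_signed_in?
end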
Now the new template is required, so a user could create new posts. Inside the posts directory, create a new.html.erb file:
posts/new.html.erb
Create a new directory and a _post_form.html.erb partial file inside:
posts/new/_post_form.html.erb
The form is pretty straightforward. Attributes of the fields are defined and the collection_select method is used to allow selecting one of the available categories.
Commit the changes
git add -A
git commit -m "Create a UI to create new posts - Inside the PostsController:
define new and create actions
define a post_params method
define a before_action filter
- Inside the ApplicationController:
define a redirect_if_not_signed_in method
define a redirect_if_signed_in method
- Create a new template for posts"
We can test whether the form works by writing specs. Start with request specs to make sure we get the correct responses to particular requests. Inside the spec directory, create a couple of directories:
spec/requests/posts
And a new_spec.rb file inside:
spec/requests/posts/new_spec.rb
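A sketch of the spec; the new_post_path route, the branch parameter, and the user factory are assumptions.
require 'rails_helper'

RSpec.describe 'Posts new page', type: :request do
  include Warden::Test::Helpers

  let(:user) { create(:user) }

  it 'responds successfully when the user is signed in' do
    login_as(user)
    get new_post_path, params: { branch: 'hobby' }
    expect(response).to have_http_status(:success)
  end

  it 'redirects visitors who are not signed in' do
    get new_post_path
    expect(response).to have_http_status(:redirect)
  end
end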
As mentioned in the RSpec documentation, request specs provide a thin wrapper around Rails' integration tests, so we simply check that we get the correct responses when we send certain requests. The include Warden::Test::Helpers line is required in order to use the login_as method, which logs a user in.
Commit the change.
git add -A
git commit -m "Add request specs for a new post template"
We can even add some more request specs for the pages which we created previously.
Inside the same directory create a branches_spec.rb file:
spec/requests/posts/branches_spec.rb
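A sketch using shared_examples; the branch paths listed here are assumptions, so substitute your app's actual branch routes.
require 'rails_helper'

RSpec.describe 'Posts branch pages', type: :request do
  shared_examples 'a branch page' do |path|
    it "renders #{path} successfully" do
      get path
      expect(response).to have_http_status(:success)
    end
  end

  it_behaves_like 'a branch page', '/hobby'
  it_behaves_like 'a branch page', '/study'
  it_behaves_like 'a branch page', '/team'
end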
This way we check that all of the branch pages' templates render successfully. shared_examples is also used to reduce repetitive code.
Commit the change.
git add -A
git commit -m "Add request specs for Posts branch pages' templates"
We can also make sure that the show template renders successfully. Inside the same directory, create a show_spec.rb file:
spec/requests/posts/show_spec.rb
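A sketch of the spec, again assuming a post factory and resourceful post routes.
require 'rails_helper'

RSpec.describe 'Posts show page', type: :request do
  let(:existing_post) { create(:post) }

  it 'renders the show template successfully' do
    get post_path(existing_post)
    expect(response).to have_http_status(:success)
  end
end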
Commit the changes.
git add -A
git commit -m "Add request specs for the Posts show template"
To make sure that a user is able to create a new post, write feature specs that exercise the form. Inside the spec/features/posts directory, create a new create_new_post_spec.rb file:
spec/features/posts/create_new_post_spec.rb
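A sketch of the feature spec using Capybara. The route, the factories, and the field ids (the defaults Rails generates for a form_for @post form) are assumptions.
require 'rails_helper'

RSpec.describe 'Create a new post', type: :feature do
  include Warden::Test::Helpers

  let(:user) { create(:user) }
  let!(:category) { create(:category, name: 'Drawing', branch: 'hobby') }

  it 'lets a signed in user publish a post' do
    login_as(user)
    visit new_post_path(branch: 'hobby')

    fill_in 'post_title', with: 'My first post'
    fill_in 'post_content', with: 'Some interesting content'
    select 'Drawing', from: 'post_category_id'
    click_button 'Create'

    expect(page).to have_content 'My first post'
  end
end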
Commit the changes.
git add -A
git commit -m "Create a create_new_post_spec.rb file with feature specs"
Apply some design to the new template.
Within the following directory:
assets/stylesheets/partials/posts
Create a new.scss file:
assets/stylesheets/partials/posts/new.scss
If you open the template in a browser now, you should see a basic form.
Commit the changes
git add -A
git commit -m "Add CSS to the Posts new.html.erb template"
Finally, we want to make sure that all fields are filled correctly. Inside the Post model we’re going to define some validations. Add the following code to the Post model:
models/post.rb
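A sketch of the validations to add; the attribute names and the length limit are assumptions, so validate whatever your form actually submits.
# models/post.rb
validates :title, presence: true, length: { maximum: 100 }
validates :content, presence: true
validates :category_id, presence: true
validates :branch, presence: true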
Commit the changes.
git add -A
git commit -m "Add validations to the Post model"
Cover these validations with specs. Go to the Post model’s spec file:
spec/models/post_spec.rb
Then add:
spec/models/post_spec.rb
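For example, examples that check that each presence validation from the sketch above adds an error; add them inside the existing RSpec.describe Post block.
describe 'validations' do
  let(:post) { Post.new }

  before { post.valid? }

  it 'requires a title' do
    expect(post.errors[:title]).to include("can't be blank")
  end

  it 'requires content' do
    expect(post.errors[:content]).to include("can't be blank")
  end

  it 'requires a category' do
    expect(post.errors[:category_id]).to include("can't be blank")
  end
end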
Commit the changes.
git add -A
git commit -m "Add specs for the Post model's validations"
Merge the specific_branches branch into master:
git checkout master
git merge specific_branches
git branch -D specific_branches
Instant Messaging
Users are able to publish posts and read other users' posts, but they have no way to communicate with each other. We could build a simple mailbox system, which would be much easier and faster to develop, but that is a very old way to communicate. Real-time communication is much more exciting to develop and more comfortable to use.
Fortunately, Rails has Action Cable, which makes implementing real-time features relatively easy. The core idea behind Action Cable is that it uses the WebSocket protocol instead of plain HTTP. A WebSocket establishes a client-server connection and keeps it open, which means no page reloads are required to send and receive additional data.
Private conversation
The goal of this section is to create a working feature that allows two users to have a private conversation.
Switch to a new branch
git checkout -B private_conversation
Namespacing models
Start by defining the necessary models. We'll need two models for now: one for private conversations and another for private messages. We could name them PrivateConversation and PrivateMessage, but that quickly leads to a little problem. While everything would work fine, imagine how the models directory would look after we create more and more models with similar name prefixes. The directory would become hard to manage in no time.
To avoid a chaotic directory structure, we can use a namespacing technique.
Let’s see how it would look like. An ordinary model for private conversation would be called PrivateConversation and its file would be called private_conversation.rb , and stored inside the models directory
models/private_conversation.rb
Meanwhile, the namespaced version would be called Private::Conversation. The file would be called conversation.rb and live inside the private directory:
models/private/conversation.rb
Can you see how this is useful? All files with the Private prefix are stored inside the private directory instead of accumulating inside the main models directory and making it hard to read.
As usual, Rails makes the development process enjoyable. We can create namespaced models by specifying the directory we want to put a model in.
To create the namespaced Private::Conversation model run the following command:
rails g model private/conversation
Also generate the Private::Message model:
rails g model private/message
If you look at the models directory, you will see a private.rb file. This is required to add prefix to database tables’ names, so models could be rec | https://medium.com/free-code-camp/lets-create-an-intermediate-level-ruby-on-rails-application-d7c6e997c63f | ['Domantas G'] | 2017-12-19 15:52:00.251000+00:00 | ['Ruby on Rails', 'Software Development', 'Programming', 'Startup', 'Web Development'] |
Acrylonitrile Butadiene Styrene (ABS) Market Analysis, Growth By Top Companies, Trends By Types And Forecast To 2021–2026 | Acrylonitrile Butadiene Styrene (ABS) Market 2021–2026
Straits Research has recently added a new report to its vast depository titled Global Acrylonitrile Butadiene Styrene (ABS) Market. The report studies vital factors about the Global Acrylonitrile Butadiene Styrene (ABS) Market that existing as well as new market players need to understand. The report highlights essential elements such as market share, profitability, production, sales, manufacturing, advertising, technological advancements, key market players, regional segmentation, and many more crucial aspects related to the Global Acrylonitrile Butadiene Styrene (ABS) Market.
Acrylonitrile Butadiene Styrene (ABS) Market
The Major Players Covered in this Report:
Some of the key players profiled in the global acrylonitrile butadiene styrene market are INEOS Styrolution Group GmbH, Trinseo, Kumho Petrochemical, Formosa Chemicals & Fibre Corp., LG Chem, SABIC, BASF SE, Toray Industries Inc, Mitsui Chemicals Inc, Ravago Americas, Emco Industrial Plastics, and Elix Polymers.
Get a Sample PDF Report: https://straitsresearch.com/report/acrylonitrile-butadiene-styrene-market/request-sample
Segmentation is as follows:
By Raw Material: Acrylonitrile, Polybutadiene, Styrene
By Type: High gloss, Low gloss, High impact, General purpose, High flow, Plate-able
By End-User: Automotive, Building and construction, Electronics, Consumer goods, Others
The report specifically highlights the Acrylonitrile Butadiene Styrene (ABS) market share, company profiles, regional outlook, product portfolio, a record of the recent developments, strategic analysis, key players in the market, sales, distribution chain, manufacturing, production, new market entrants as well as existing market players, advertising, brand value, popular products, demand and supply, and other important factors related to the Acrylonitrile Butadiene Styrene (ABS) market to help the new entrants understand the market scenario better.
Important factors like strategic developments, government regulations, Acrylonitrile Butadiene Styrene (ABS) market analysis, end-users, target audience, distribution network, branding, product portfolio, market share, threats and barriers, growth drivers, latest trends in the industry are also mentioned.
Regional Analysis For Acrylonitrile Butadiene Styrene (ABS) Market:
North America (United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)
In this study, the years considered to estimate the market size of the Acrylonitrile Butadiene Styrene (ABS) are as follows:
• History Year: 2014–2019
• Base Year: 2019
• Estimated Year: 2020
• Forecast Year: 2021 to 2026
The main steps in the investigation process are:
1) Obtain raw market information from industry experts and research analysts using primary and secondary sources.
2) Extract valuable insights from this raw data and analyze them for research purposes.
3) Classify the findings into qualitative and quantitative data and organize them to draw the final conclusions.
Key Questions Answered in the Report:
• What is the current scenario of the Global Acrylonitrile Butadiene Styrene (ABS) Market? How is the market going to prosper throughout the next 6 years?
• What is the impact of COVID-19 on the market? What are the major steps undertaken by the leading players to mitigate the damage caused by COVID-19?
• What are the emerging technologies that are going to profit the market?
• What are the historical and the current sizes of the Global Acrylonitrile Butadiene Styrene (ABS) Market?
• Which segments are the fastest growing and the largest in the market? What is their market potential?
• What are the driving factors contributing to the market growth during the short, medium, and long term? What are the major challenges and shortcomings that the market is likely to face? How can the market solve the challenges?
• What are the lucrative opportunities for the key players in the Acrylonitrile Butadiene Styrene (ABS) market?
• Which are the key geographies from the investment perspective?
• What are the major strategies adopted by the leading players to expand their market shares?
• Who are the distributors, traders, and dealers of the Global Acrylonitrile Butadiene Styrene (ABS) market?
• What are the sales, revenue, and price analyses by type and application in the market?
For More Details On this Report: https://straitsresearch.com/report/acrylonitrile-butadiene-styrene-market/
About Us:
Whether you're looking at business sectors in the next town or across continents, we understand the significance of knowing what customers purchase. We solve our customers' problems by identifying and interpreting the right target group, while simultaneously generating leads with the highest precision. We seek to collaborate with our customers to deliver a broad spectrum of results through a blend of market and business research approaches. This approach of using various research and analysis strategies enables us to uncover greater insights while reducing research costs. Moreover, we are continually developing, not only in terms of where and whom we measure, but in how our insights can enable you to drive cost-effective growth.
Contact Us:
Company Name: Straits Research
Email: [email protected]
Phone:
+1 646 480 7505 (U.S.)
+91 8087085354 (India)
+44 208 068 9665 (U.K.) | https://medium.com/@shubhamk.straitsresearch/acrylonitrile-butadiene-styrene-abs-market-analysis-growth-by-top-companies-trends-by-types-and-a4ca23e50910 | ['Shubham K'] | 2021-12-23 03:32:05.847000+00:00 | ['Market Research Reports', 'Market Research', 'Abs', 'Butadiene Rubber', 'Acrylonitrile Butadiene'] |
Your Proactive Caregiver Advocate: Dr Cynthia Speaks! | Caregiving and Finances: Preparation Prevents Challenges
Have you ever considered how costly the caregiving role can be? The financial portion of caregiving is assumed by those not directly involved in the role's day-to-day duties. Understanding the cost of caregiving can seem like a 'no brainer,' but appearances can be deceiving when it comes to finances. Money is a significant component of caregiving. It matters, and it is needed to provide the supplies, services, and equipment a loved one may need. Money helps in making choices, and if it is not discussed, it can create friction between family members.
Discussing the financial resources for a loved one ahead of time is a proactive step in preparing for what may come. In my book, From The Lens of Daughter, Nurse, and Caregiver: A Journey of Duty and Honor, I share my story of quitting my job to care for my mother. Placement in a nursing home was never an option, so my decision to leave a high-paying job took some short-term soul searching. I had to consider what it would look like going from a two-income family to a one-income family. Yes, I must admit, I loved my job. Nursing is my sacred profession. I was at the top of my game, as they say in corporate America. As a case manager of a cardiovascular unit, I had earned the respect of physicians in the Texas Medical Center and Texas Heart Institute. Many days I would have physicians at my office door asking my opinion. I was good at helping colleagues, patients, and their families navigate the challenges of heart disease and home life.
My decision to bring my nursing skills home required financial planning. Please do not delay discussing the financial management of care. It ought to be discussed with the care recipient (if possible) and with those who may be involved in any aspect of caring for a loved one. Consider the financial aspects of your care village and allow others to align their resources with the care needs of your loved one. The role of caregiving is stressful enough. Proactive planning will provide you with a sense of security, and it will also allow you to plan for the resources that will help you attain holistic care for your loved one.